From funnel to flywheel

The post From funnel to flywheel appeared first on Marketing Land.

If you’re like most marketers, you could name the basic parts of the sales funnel in your sleep: Awareness, Interest, Evaluation, Decision, and Purchase.

Of course, businesses have tweaked the model over the years, adding extra steps and so forth, but the basic premise has remained the same. But there is one problem with the model: it’s the opposite of customer-centric. In fact, in the traditional sales funnel, leads are treated a bit like uniform widgets moving along a conveyor belt, with various things happening to them along the way.

The problem is that if you’re not centered on the customer, your marketing efforts might be going to waste. If we had a nickel for every brilliant content strategy that seemed to explode with engagement while yielding little (if any) measurable return on investment, we’d have more than a piggy-bank full of change.

Centering the customer in your sales model changes that, though, because the customer now drives all content and all marketing efforts, instead of the other way around. In this piece, we’ll explain a new sales model. Maybe by the end you’ll be like us: falling ever-so-slightly out of love with the funnel — and in love with the flywheel.

A what wheel?

Like its predecessor the funnel, a flywheel is not just a metaphor, but also a real-life tool that powers many modern-day inventions. Popularized by James Watt of steam engine fame, the flywheel is a disc or wheel mounted on an axle. It has assorted industrial applications and can be found in car engines, ships, and a lot of other places where energy needs to be generated, amplified, stored, and stabilized.

The flywheel effect, which Jim Collins describes in his book Good to Great, involves a massive, 5,000-pound metal disc mounted horizontally on an axle. Collins asks the reader to imagine pushing it so that it turns around that axle. At first, getting it to move at all is extremely difficult. But with each push it gets fractionally easier, and the flywheel begins to pick up speed. Collins writes:

Then, at some point—breakthrough! The momentum of the thing kicks in in your favor, hurling the flywheel forward, turn after turn … whoosh! … its own heavy weight working for you. You’re pushing no harder than during the first rotation, but the flywheel goes faster and faster. Each turn of the flywheel builds upon work done earlier, compounding your investment of effort. A thousand times faster, then ten thousand, then a hundred thousand. The huge heavy disk flies forward, with almost unstoppable momentum. 

It’s a great metaphor for marketing, because that momentum isn’t the product of any single push. Instead, the energy is cumulative, generated by a lot of little pushes, with the whole greater than the sum of its parts.

Ideally, marketing and sales should work the same way. The energy, leads, and revenue created by marketing efforts are not due to any single channel, piece of content, or campaign; the effect is cumulative. And once it really gets going, a good marketing campaign keeps spinning. It generates energy.

Putting the customer at the center

Instead of a funnel into which prospective customers are unceremoniously dumped, the flywheel puts the customer at the center of the wheel: the axle.

HubSpot CEO Brian Halligan, for example, sees the customer as the linchpin, with the flywheel itself divided into three equal segments, each representing a stage along the customer journey: attract, engage, and delight. Each area creates energy and passes it along to the next, with the delight phase feeding back into attract.

Other flywheel devotees divide the disc into Marketing, Sales, and Service — again putting the customer in the center position. Each effort feeds into the next, cycling around and around, but always circling the customer.

This may be the most important aspect of the flywheel model — that it centers the customer. The funnel, on the other hand, doesn’t consider how those customers can feed back into the funnel (or the flywheel) to help create additional growth and engagement.

The funnel can’t conceive of customers buying from you more than once, so the momentum you build acquiring customers via the funnel just falls away. With every quarter, every customer, every conversion, you’re starting all over again.

Learning to fly

The momentum of a flywheel is determined by three primary pieces:

  1. The weight of the wheel

With a physical flywheel, the greater the mass of the flywheel, the greater its momentum and the harder it is to stop. In the customer-focused model, the “weight” looks like an exceptional customer service experience that builds your reputation and brand in ways that create retention, build ambassadors, and deliver value into your marketing and sales segments. The way that you deliver that customer experience will be unique to your business model.

  2. How fast you spin it

The speed in the flywheel model is really about the number of “pushes” you give the wheel. How much content is your marketing team delivering? Which channels are you using to reach prospects? How many leads are coming from the content?

  3. The friction

Reducing flywheel friction is about ensuring customers remain satisfied and keeping your efforts aligned. If poor sales performance is slowing the momentum from marketing — or if poor service is hurting retention of hard-won sales — your flywheel will slow down, and your business will suffer. On the other hand, when everything is aligned, your efforts will feed into each other and keep your flywheel humming along.

Finding alignment and purpose

It’s one thing to draw up a model and another to align cross-organizational efforts in real life. Part of finding alignment is cultural, getting leadership to buy in and coordinating communication among departments. But a huge part of the lift has to be operational — and will be dependent on having technology that enables marketing, sales, and service to coordinate.

At CallTrackingMetrics (CTM), we’ve been thinking this way for some time now — though we only recently discovered the flywheel model. Our call intelligence and management platform brings together all three segments of the flywheel: marketing, sales, and service.

It tracks call sources, lets agents tag and score calls, helps businesses respond immediately to inquiries, and provides a data-rich environment that can inform stakeholders across organizations about marketing, sales, and service performance. It also helps create reporting to determine returns on investment for content and campaigns, customer feedback, and more. In short, it makes it easier to understand and engage with customers in a meaningful, helpful way.

That engagement matters. A lot. Because, at the end of the day, marketing and sales are all about creating better experiences along your customers’ journeys. And the funnel model has never recognized the important part customer service teams play in generating customer retention, brand building, and developing stronger relationships and alignment between your business and your customers — as well as within the disparate teams in your organization.

In the end, the flywheel ensures that everyone in your business shares the same purpose: keeping the flywheel spinning, in order to create better relationships with and experiences for your customers. However hard it might seem to get it spinning at first, once the flywheel gains momentum and sales start churning, it’s well worth the effort.

Is it time to graduate to Google Analytics 360?

The post Is it time to graduate to Google Analytics 360? appeared first on Marketing Land.

It’s a common concern for marketers and analysts worldwide: As your business has grown, so have your needs for complete and accurate data that can be integrated with other platforms. It might be time to upgrade to Google Analytics 360.

This eBook from InfoTrust will provide you with everything you need to know when considering and purchasing a Google Analytics 360 license for your organization. It covers:

  • The differences between Google Analytics and Google Analytics 360
  • Identifying if there is a business case for migrating to Google Analytics 360
  • Negotiating the right deal with a Google Analytics 360 Reseller

Visit Digital Marketing Depot to download “Google Analytics 360: A Purchasing Guide.”

IRI to help Google measure offline sales impact of YouTube ads

Company will provide anonymous loyalty card and point-of-sale data to Google.

The post IRI to help Google measure offline sales impact of YouTube ads appeared first on Marketing Land.

Earlier this week YouTube rolled out new extensions and measurement capabilities for advertisers. The ad extensions will enable a variety of new actions (e.g., app downloads, booking, movie showtimes). There will also be new brand lift metrics, including offline sales lift.

Brand lift and offline sales data. IRI is one of the Google Measurement Partners that’s supporting the company’s new brand lift metrics. Brand lift studies are typically survey based. In this case they will show marketers the following types of information:

  • Positive response rate
  • Absolute brand lift
  • Headroom lift
  • Number of lifted users
  • Cost-per-lifted user
  • Control positive response rate
  • Relative brand lift

Brand lift studies offer valuable information. Arguably more compelling than self-reported attitudinal data is the sales impact data that IRI will provide for YouTube ads. IRI will supply e-commerce and in-store sales data to Google to show if YouTube ads are having an actual impact on sales (avoiding a last click attribution problem is a separate discussion).

The IRI measurement capabilities come from IRI’s loyalty card-based “vast point-of-sale, frequent shopper, causal and media exposure data.”

Why it matters. Last year Google announced that YouTube ads could be tied to store visitation and sales. That was being done chiefly through Google partnerships with credit card companies, which provided anonymous, aggregated sales data to Google. So the IRI lift measurement capabilities are not entirely new.

Regardless, it’s another important source of offline data that will help marketers better understand the real-world impact of their ads and which ones are truly performing.

Bring order to chaos: Wrangling data for actionable insights

How to bring an overwhelming amount of data under control and use the insights gained throughout your business.

The post Bring order to chaos: Wrangling data for actionable insights appeared first on Marketing Land.

Producing actionable insights is one of the most challenging issues that brands face today. Urgency is ever-present, pushing marketers and analysts to rush decisions. But urgency is only half of the problem. Making the situation more chaotic is the fact that we are simultaneously awash in waves of data from too many sources. Between the urgency to produce results and the massive sea of data, we are inundated every time we wade in and then simply washed back to shore.

So where do we start? Transactional, engagement, or demographic data? Prospecting or retention? The inundation keeps pushing us back.

There are strategies to navigate the churn and turbidity, and remedy those issues. Sometimes we simply need to take a step back, narrow our focus, and even get a little ruthless.

Insights begin with goal-setting

First, we need leadership teams to get ruthless about what really matters. Analytics can’t chase shiny objects or pin hopes on some utopian commerce breakthrough, as if attribution were waiting in some rabbit-hole metric.

Think bigger. Get brutal with company and divisional goals.

Great goals have a couple of key characteristics in common. First, they’re specific — they have clear expectations and a path forward to measure and show success. Great goals also unify teams instead of dispersing them in different directions where everyone has a separate idea of how they can accomplish them.

To reach goals, every single person needs to be pulling the boat in the same direction. Great goals produce unity, which in turn helps to focus analytical firepower where it matters most. Remove the rest.

Where to start

The lowest-hanging fruit is almost always customer retention. It’s the easiest behavior to shift, it has the most room to grow, and it’s the most profitable. One way to understand the importance of retention is to ask: what does acquiring a new customer matter if the brand can’t keep that customer engaged? Prospecting without first nailing down the current customer makes teams spin their wheels and waste energy.

Align your performance indicators

So, we have our goals narrowed down and all teams are working towards a common purpose.  The next step is to flawlessly align our performance indicators to those stringently selected goals. Again, narrow your focus and be strict with the fidelity of indicators to goals.  They should have either a clear cause-and-effect relationship or a very strong correlation to prove success.

Once we’ve identified those core components, we can simply let the rest of the data wash away. It takes work up front, but that work will be rewarded with a strong path forward and will help us avoid data paralysis down the road. By deriving indicators naturally from a core set of goals, we organically narrow the data set and can focus on producing insights that drive change.

It’s easy to see how many brands can get stuck in the mud during this phase. There are so many temptations, so many paths to take that could work if only for one added piece that we don’t have in the model. But this is a faulty mindset and the effort will be wasted with little to show for all that added work. Put the blinders on and be strict.

Where to start

The answer is almost always transactional data, especially if we’ve felt the impact of overwhelming data paralysis. Stick to transactional indicators early. They’re reliable and strongly aligned to behavior. What shows customer sentiment better: a Facebook Like or purchasing items?

Measure, rinse, repeat

Lastly, all of that work is useless if we don’t have a measurement plan in place to prove success. If we can’t measure, it doesn’t matter.

The best approach is a rigorous test-and-learn strategy. Not only does it prove success, it also provides actionable insights that help build individual wins into larger groups of changes across channels and teams.

Analytics teams can definitely get backed up, especially with A/B testing. Sometimes the waitlist is daunting. But there are two good options if that happens. First, consider an outside agency dedicated to helping us learn about the customer. An outside source can provide focus when things get too tight for internal teams to produce results.

The other option is to test historically. I can hear the gasps and guffaws of analytics teams, but we need to read the tea leaves however we can to produce results. That means pushing changes to market and measuring year-over-year data instead of one-off direct causation. This option is better suited to areas where we already know best practices or have data points that suggest the right decision with a high degree of confidence.

Another reason it’s a viable option — and why analysts should love it — is that it frees up the testing schedule dramatically. So many tests don’t really need to be run in an A/B format; sometimes we have years of historical data or mountains of best practices to inform our decision. In those instances, measurement is less of a read and more of a confirmation.
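
As a rough sketch of what historical measurement can look like (the function name and sample numbers here are illustrative, not from the article), comparing the same period year over year keeps seasonality out of the read:

```python
def yoy_lift(current_period: float, prior_period: float) -> float:
    """Year-over-year lift: relative change versus the same period last year."""
    if prior_period == 0:
        raise ValueError("prior_period must be non-zero")
    return (current_period - prior_period) / prior_period

# Hypothetical example: October conversions after a site change vs. last October.
conversions_last_year = 1200
conversions_this_year = 1380

lift = yoy_lift(conversions_this_year, conversions_last_year)
print(f"Year-over-year lift: {lift:.1%}")  # 15.0%
```

Because the comparison is same-period, a read like this confirms direction rather than proving causation, which is exactly the trade-off described above.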

Bring order to chaos

These ideas may sound simple and, to a large degree, they are simple. They’re foundational. But without a foundation, how can we achieve our brand aspirations?

So many brands run before they can walk and they fall flat. To bring order to chaos, we need to start with the lowest common denominators to build on our learnings. Start small, grow big. Incrementally and soon, teams from every channel will have the learnings they need to act and provide the best experiences possible for both the brands and the customers.

Google Data Studio comes out of beta

The company continues to develop new features for the reporting product.

The post Google Data Studio comes out of beta appeared first on Marketing Land.

Google’s two-year-old free reporting and data visualization platform, Google Data Studio, is now generally available.

Integrations and data connectors. Data Studio, now part of Google Marketing Platform, has native integrations with Google Analytics, Google Ads, Display & Video 360, Search Ads 360, YouTube Analytics, Google Sheets and Google BigQuery. Marketers can also connect to hundreds of other non-Google data sources.

A crop of data connectors from companies like Supermetrics, a digital analytics and reporting tool provider, has also sprouted to help marketers pull data from multiple sources into Data Studio.

More new features. Over the past two years, Google has steadily added features and capabilities to Data Studio. As of this month, Google Marketing Platform (GMP) users can access Data Studio reports in other GMP products, including Analytics, Optimize and Tag Manager. Among other updates, in August Google added data blending, the ability to combine data from multiple sources in a single chart or table.

Google says millions now use the product.

How to supercharge the Salesforce lead source field

Strategic management of the lead source field within Salesforce setup will unlock the magic of campaign tracking and measure the efforts of your paid media and content efforts. Here’s how.

The post How to supercharge the Salesforce lead source field appeared first on Marketing Land.

Salesforce lead source has long been the data point that has ruled measurement of marketing initiatives. This field tracks channel attribution and is used to measure return on marketing investments.

However, in today’s marketplace, the field is very limited out of the box. Absent multitouch attribution flexibility, you really only get one lead source on a record within the database. But what about second, third and fourth touches?

There are a few options utilizing just lead source on its own, but they all have limitations:

  1. We can ignore all touches after the first and stick to the original source only. This is the method we most often see companies use, and it leads to inaccurate data.
  2. The second option is to override the existing lead source with each new touch. This leaves you with the most recent lead source only. Unfortunately, this destroys info and creates inaccurate data, leading to really scary decision-making.
  3. The last option is to create a new lead record for each touch. This approach is the most disruptive, leading to mass confusion and degraded data quality.

So, how do we measure the big picture of a combination of channel influence and maintain the integrity of the database? The answer is to use campaign tracking alongside a customized lead source architecture.

In this post, we’ll focus on how to get your lead source field customized with a level of granularity that serves the business while maintaining the integrity of the data.

Lead source field review

Out of the box, the default lead source list in Salesforce is not granular enough. This list predates most of the channels we utilize in marketing today — rolling all digital channel activity into one bucket labeled “web.”

Neglecting to customize this list during a Salesforce implementation leads to pandemonium and frustration once data begins to populate the reports and marketing can’t see the results of its efforts clearly. That, in turn, leads to shooting in the dark and misalignment between sales and marketing.

We highly recommend you perform an audit of your lead source field options and customize them for your organization. Get rid of lead sources that don’t serve you and add lead sources you would like to track. This can easily be done with a simple spreadsheet to allow everyone to come together and agree on what level of granularity you would like to see.

Customize Salesforce lead source

Once you have gone through your existing lead source list and pruned the lead sources that don’t belong, you can draft the new lead sources that should be added. Now, it’s time to go through and look for missing granularity and opportunity to consolidate.

Effective marketers and data folks always want things tracked as granularly as possible. This often leads to a new lead source for every single activity, event and campaign, which creates a bloated database and makes for difficult reporting.

You’ll likely find that you have several lead sources that can be combined or eliminated. Look for opportunities to standardize your lead sources similar to Google Analytics parameters. If you need additional details, add more fields.
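As a hedged sketch of what that standardization might look like (the mapping, function name, and bucket labels below are hypothetical examples, not Salesforce defaults), a small lookup table can collapse raw Google Analytics-style source/medium pairs into a short, agreed-upon list of lead sources:

```python
# Hypothetical mapping from (utm_source, utm_medium) pairs to a small,
# agreed-upon set of standardized lead source values.
LEAD_SOURCE_MAP = {
    ("google", "cpc"): "Paid Search",
    ("bing", "cpc"): "Paid Search",
    ("google", "organic"): "Organic Search",
    ("facebook", "social"): "Paid Social",
    ("newsletter", "email"): "Email",
}

def normalize_lead_source(utm_source: str, utm_medium: str) -> str:
    """Collapse raw campaign parameters into a standardized lead source bucket."""
    key = (utm_source.strip().lower(), utm_medium.strip().lower())
    return LEAD_SOURCE_MAP.get(key, "Other")

print(normalize_lead_source("Google", "CPC"))      # Paid Search
print(normalize_lead_source("twitter", "social"))  # Other
```

The point of the sketch is the shape of the solution: a deliberately short, centrally maintained list, with extra granularity (campaign, content) pushed into separate fields rather than bloating the lead source picklist.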

Standardize data entry

Once things are cleaned up, we highly recommend you standardize and automate your data entry. The lead source field should be locked down and set automatically by the system, not by human hands. You can do this using a web-to-lead form (first touch email acquisition) or by passing data from your marketing automation tool (first touch visit).

Once the data is set, don’t allow it to change or allow it to map to other objects.

Strategic management of the lead source field within your Salesforce setup will allow you to unlock the magic of campaign tracking and really measure the efforts of your paid media campaigns and content efforts.

How to capitalize on the competitive advantage of real-time data analysis

Contributor Stela Yordanova explains how to capitalize on the competitive advantage provided by real-time data analysis.

The post How to capitalize on the competitive advantage of real-time data analysis appeared first on Marketing Land.

The Real-Time report in Google Analytics allows you to monitor website activity as it actually occurs on your website or app. The report is continuously updated, and website activity is reported just a few seconds after it happens. This immediacy of real-time data provides digital marketers with unique and valuable insights.

There are many ways you can use real-time reporting, such as gauging the effectiveness of your mobile app through event tracking or monitoring one-day promotions on your site. Today I want to focus on three specific ways marketers should use Google’s Real-Time report:

  1. To quickly monitor results for short-term campaigns or promotional efforts.
  2. To track immediate interaction with newly published content.
  3. To test and verify Google Analytics and Google Tag Manager implementation.

Real-Time Overview

The Real-Time report contains an Overview plus five specific reports:

  • Location report.
  • Traffic Sources report.
  • Content report.
  • Events report.
  • Conversion report.

Each report is described below with suggestions on how marketers should use them to analyze real-time website data and improve marketing results.

[Read the full article on Search Engine Land.]

Planning and creating the perfect landing page

The post Planning and creating the perfect landing page appeared first on Marketing Land.

Landing pages can make or break your digital marketing. This guide from SharpSpring is written for any marketer looking to initiate or improve their landing page strategy. It will guide you through everything you need to know to allow you to create and optimize landing pages for your website.

Download your copy to find out:

  • What a landing page is and is NOT.
  • Planning & creating the perfect landing page.
  • Testing & optimizing: Why your landing pages are never “done.”

Visit Digital Marketing Depot to download “Creating Landing Pages That Convert.”

12 pieces of conversion optimization advice you should ignore

Whenever you hear a marketing practice referred to as “easy,” it’s usually not. Contributor Ayat Shukairy looks at some common CRO misconceptions and their uncommon realities.

The post 12 pieces of conversion optimization advice you should ignore appeared first on Marketing Land.

A lot of content on conversion rate optimization (CRO) is published every day. Most of it is spot-on, but some articles make me cringe a little.

A lot of the advice being shared gives people false hope that if they conduct CRO correctly, they’ll see the millions roll in. It’s not that easy. The process is rigorous and requires a lot of time and effort — much more than the advice being shared would lead you to believe.

Whenever you hear a marketing practice referred to as “easy,” it’s usually not.  Let’s look at some common CRO misconceptions and their uncommon realities.

Misconception 1: Anyone can do it

Hardly! To do well in CRO, you need good people on your team. A conversion rate optimization team usually includes:

  • Two or three conversion optimization specialists.
  • A UX designer.
  • A front-end developer.
  • A customer research specialist (can be part-time).
  • An analytics specialist (can be part-time).
  • A data analyst (can be part-time).
  • A product or program manager, depending on your business.

With all the different job types and responsibilities, how can one person do it all? Unless they’re Wonder Woman, they can’t.

Now that we have an idea who we will need on our team, let’s look at common statements you’ll hear about CRO that aren’t always accurate.

Misconception 2: There are CRO best practices

Everyone wants best practices, but in CRO they simply don’t exist. We wish they did, but what works on one website may not work on another.

For example, CaffeineInformer and Booking.com both tested the same navigational menus and found the most commonly recommended menu worked for one but not the other.

CaffeineInformer tested the hamburger menu (an icon made up of three bars) against the traditional word MENU, shown both with and without a border, and published the results online. The boxed MENU was clicked more often than the MENU without a border, and the hamburger menu was barely used at all.

When Booking.com ran its test, which a designer wrote about on the company’s blog, it found no difference in the number of clicks between its MENU options:

Representatives from Booking.com said:

With our very large user base, we are able to state with a very high confidence that, specifically for Booking.com users, the hamburger icon performs just as well as the more descriptive version.

So, although your competitors may inspire you, most of the time you’ll find what they introduce on their site may not work on yours. In the case above, it’s a small change, but we have seen companies make a bet on a change that costs hundreds of thousands of dollars and produces a negative impact on their site.

My advice is to know what is out there and get inspiration from other sites, but validate through research, prototyping and usability testing before rolling out a change on your site (especially if it’s major). If it’s something minor like a hamburger menu, go ahead and test, but ask yourself, what are you really trying to achieve with the change? Consider the validity of the concept to begin with and see if it fits within the overall roadmap you have for your site.

Misconception 3: More testing yields positive results

Statistically speaking, more variations mean a greater chance of false positives and inaccurate results.

My staff experienced this when we were first starting out as CRO practitioners. We would start testing by running a control versus variant 1, variant 2 and variant 3.

Once we found a statistical winner, we would launch just the control versus the winner. For example, if variant 2 reached statistical significance with a meaningful lift, we would relaunch the control versus variant 2.

Of course, variant 2 completely tanked. What happened? Statistically, each variant brings a chance of a false positive, so more variants mean more chances of false positives.
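
To illustrate the arithmetic (a standard multiple-comparisons calculation, not a figure from the original piece): if each variant is tested at a 5 percent significance level, the chance of at least one false positive grows quickly with the number of variants:

```python
def family_wise_error_rate(num_variants: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive across independent tests,
    each run at significance level alpha."""
    return 1 - (1 - alpha) ** num_variants

for k in (1, 2, 3, 5):
    rate = family_wise_error_rate(k)
    print(f"{k} variant(s): {rate:.1%} chance of at least one false positive")
```

With three variants, the chance of at least one spurious "winner" is already over 14 percent, which is why a retest of the apparent winner against the control is so valuable.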

According to Sharon Hurley Hall’s blog post on OptinMonster.com:

Most experienced conversion optimizers recommend that you don’t run more than four split tests at a time. One reason is that the more variations you run, the bigger the A/B testing sample size you need. That’s because you have to send more traffic to each version to get reliable results. This is known as A/B testing statistical significance (or, in everyday terms, making sure the numbers are large enough to actually have meaning).

If you have low conversions (even in the presence of a high volume of traffic), you definitely shouldn’t test beyond one variation.

Anyone with a sufficient number of conversions should still be cautious: test, then retest the winning variation against the control to make sure the result sticks.
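
As a back-of-the-envelope illustration of why more variations demand more traffic (a standard two-proportion power calculation; the baseline and lift numbers are made up for the example), here is the approximate sample size needed per variation:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per variant to detect a change in
    conversion rate from p1 to p2 with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical: detect a lift from a 5% to a 6% conversion rate.
print(sample_size_per_variant(0.05, 0.06))
```

Each additional variation needs roughly this much traffic on its own, which is exactly why low-conversion sites should stick to a single variant.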

Misconception 4: CRO is A/B testing

A/B testing is a part of the conversion rate optimization process, but they are not one and the same.

Our methodology for conversion rate optimization is combined into the acronym SHIP:

Scrutinize, Hypothesize, Implement and Propagate

Over 70 percent of the time we spend doing CRO goes to the scrutinize (planning) phase of the process. An unplanned test that is not backed by data does not usually do well.

When we talk about conversion optimization, the mind should go to design thinking, innovation and creativity. Ultimately, you are optimizing an experience and bringing it to a new level for the site visitor. You’re putting a spin on solutions to complex problems to ensure the visitor not only converts but has a memorable, enjoyable experience they’ll buzz about.

That is no easy feat!

Misconception 5: A simple change will impact your bottom line

Sometimes a simple change can have an impact. But let’s be real: that’s the exception, not the rule.

Expecting a color change on your site to increase conversion by 40 to 50 percent is really a stretch. When someone says it will, I immediately wonder, “How long did the test run?” and “Was the test adequately powered?” I think Allen Burt from BlueStout.com said it best in an expert roundup on Shane Barker’s blog:

I love talking about how we can increase conversion rate and how we can optimize it, because most sites, especially ecommerce merchants, get this wrong. They think it’s all about A/B testing and trying different button colours, etc. In reality, for 90% of small to medium-sized businesses, the #1 change you can make to your site to increase conversion rate is your MESSAGING.

Don’t try to take the easy route. Usability issues need to be addressed, and testing colors on critical calls to action like a “Proceed to Checkout” button is a viable test. But expecting a “significant impact” on your bottom line from simple changes is asking too much.

One of the key components of a successful CRO program is the creativity behind it. Test and push limits, try new things, and excite the visitor who has been accustomed to the plain and mundane.

Misconception 6: A/B test everything

In the past, there was a strong emphasis on A/B testing everything, from the smallest button to the hero image. But now, the mood has changed, and we see A/B testing differently.

Some things just need to be fixed on a site. It doesn’t take an A/B test to figure out a usability issue or to understand that conversions increase when common problems are fixed. A simple investigation may be all that is required to determine whether or not an A/B test should be done.

When evaluating a site, we find issues and classify the fixes for those issues in “buckets,” which helps determine further action. Here are the four basic buckets:

  • Some areas and issues are candidates for testing. When we find them, we place them in the research opportunities bucket.
  • Some areas don’t require testing because they are broken or suffer from an inconsistency and just need to be fixed. We place these issues in the fix right away bucket.
  • Other areas may require us to explore and understand more about the problem before placing it in one of the two former buckets, so we add it to the investigate further bucket.
  • During any site evaluation, you may find a tag or event is missing and not providing sufficient details about a specific page or element. Those issues go into the instrument bucket.

Misconception 7: Statistical significance is the most important metric 

We hear it all the time: The test reached 95 percent statistical confidence, so we should stop it. However, when you look back at the test, between the control and the variation, only 50 conversions were collected (about 25 for each), and the test ran for only two days.

That is not enough data.

The first step when launching an A/B test is to calculate the sample size. The sample size is based on your traffic, your baseline conversion rate and the uplift you expect to detect, and it tells you how many visitors you need to reach before concluding the test.
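As a concrete sketch of that first step, here is a standard two-proportion sample-size approximation using only the Python standard library (the 5 percent baseline and 20 percent expected uplift are illustrative numbers, not figures from the article):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, uplift, alpha=0.05, power=0.80):
    """Approximate visitors needed per arm for a two-proportion z-test.
    baseline: current conversion rate; uplift: relative lift to detect."""
    p1 = baseline
    p2 = baseline * (1 + uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)          # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a 20% relative lift on a 5% baseline needs roughly 8,155 visitors per arm.
print(sample_size_per_variant(0.05, 0.20))
```

Note how quickly the requirement grows: halving the detectable uplift roughly quadruples the visitors needed, which is why “50 conversions in two days” is nowhere near enough data.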

In a blog entry on Hubspot.com, WPEngine’s Carl Hargreaves advised:

Keep in mind that you’ll need to pick a realistic number for your page. While we would all love to have millions of users to test on, most of us don’t have that luxury. I suggest making a rough estimate of how long you’ll need to run your test before hitting your target sample size.

Second, consider statistical power. According to Minitab.com, “[S]tatistical power is the probability that a test will detect a difference (or effect) that actually exists.”

The likelihood that an A/B test will detect a change in conversion rates between variations depends on the impact of the new design. If the impact is large (such as a 90 percent increase in conversions), it will be easy to detect in the A/B test.

If the impact is small (such as a 1 percent increase in conversions), it will be difficult to detect.

Unfortunately, we do not know the actual magnitude of impact! One of the purposes of the A/B test is to estimate it. The choice of the effect size is always somewhat arbitrary, and considerations of feasibility are often paramount.
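That relationship between effect size and detectability can be made concrete. The sketch below approximates the power of a two-proportion z-test; the 2,000-visitor arm size and the conversion rates are hypothetical values chosen to mirror the large-versus-small examples above:

```python
from statistics import NormalDist

def ab_test_power(n_per_variant, p1, p2, alpha=0.05):
    """Approximate probability that a two-proportion z-test
    detects the true difference p2 - p1 at significance alpha."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_variant) ** 0.5
    return nd.cdf(abs(p2 - p1) / se - z_alpha)

# With 2,000 visitors per arm on a 5% baseline:
print(ab_test_power(2000, 0.05, 0.095))   # 90% relative lift: nearly certain to be detected
print(ab_test_power(2000, 0.05, 0.0505))  # 1% relative lift: almost never detected
```

Since the true effect is unknown before the test, this calculation is run with the smallest effect you would care about, and the sample size is chosen to make that effect detectable.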

It’s also important to keep your business cycles in mind. In the past, we’ve seen sites where conversions spike on the 15th and 30th of every month. To run a test that accounts for the entirety of that 15-day business cycle, we would need to test for a minimum of 2 1/2 weeks (including one of the spikes in each testing period).

Another example is SaaS companies, where subscribing to a service is a business decision that often takes two months to close. Measuring conversions over a shorter period would skew the data tremendously.

Misconception 8: Business owners understand their customer base and visitors

A client of ours insisted they knew their customer base. They are a billion-dollar company that has been around since 1932, with 1,000 stores and a lot of customer data. But they have only been online for about 10 years.

Based on our experience, we told this brand that its online customers would behave differently from customers in its brick-and-mortar stores and might even vary in terms of overall demographics.

However, our client insisted he knew better. After doing research, we suggested running some experiments. One particular experiment dealt with the behavior and actions of visitors on the cart page. Was the cart used to store products until they came back later? Or was it just not effective in persuading visitors to move forward? Our theory was the latter. We shared that from what we observed, there was hesitation to move beyond the cart page.

This suggestion was met with a lot of resistance from the brand’s director of marketing, who claimed we didn’t understand their customers as they did. To compromise, I suggested we test a percentage of traffic and slowly grow the percentage as the test gained momentum. If the customer follow-through did not grow, we would end the test.

The test was launched and reached sample size within days because of the amount of traffic and conversions they have, and it revealed a 20.4 percent improvement.

The brand was stumped and realized there was another way to think about how their customers were using their shopping cart.

According to William Harris from Elumynt.com (also published in Shane Barker’s roundup):

It’s easy to get stuck in the “A/B testing world,” looking at data and numbers, etc. But one of the best sources of learning is still having real conversations with your customers and ideal contacts. It also increases the conversion rate.

The point of my story is this: You think you know, but until you do the research and conduct testing on theories you’ve built, you can’t be sure. Additionally, the landscape is ever-changing, and visitors are impatient. All of that plays into your ability to persuade and excite visitors.

Misconception 9: Only change one thing at a time

The next two points are related. Some people feel you should move slowly and make one change at a time in order to understand the effects of the change. But when you’re testing, you create a hypothesis, and that hypothesis may involve one or more elements.

It isn’t template tweaking (e.g., just changing the location and design of elements); it’s testing an entire hypothesis that is backed by data, resulting in data-driven changes that visitors can see and feel.

Misconception 10: Make multiple changes each time

This is the counterpoint to misconception 9 above. Sometimes we find a hypothesis becomes muddled because unrelated changes are bundled into a single test. That makes it difficult to trust the results and to tell which element actually impacted the test.

Always stick to the hypothesis, and make sure your hypothesis matches the changes you’ve made on the site.

Misconception 11: Unpopular elements should be avoided

We had an account that simply did not believe in carousels. I’m not a fan, personally, but because the account sold a specific product, we felt carousels were necessary and recommended they be used.

But the account resisted until customers started complaining. Only then did the account realize that carousels would help visitors find what they need and convey the breadth of the product range being sold.

Elements that have been deemed unpopular aren’t always unpopular with your customer base or ill-suited to your specific needs. If the research shows an element can provide a solution for you, test it before you completely discount it.

Misconception 12: Your site is too small for CRO

Conversion rate optimization is not only about testing. CRO is about understanding your visitors and giving them a more engaging experience. Every digital marketer and webmaster, whatever the size of their site, should be practicing CRO.

If you have the traffic to justify your theories, test! Otherwise, continuously update your site and measure your changes through observation of key metrics through your analytics or through usability testing.

The post 12 pieces of conversion optimization advice you should ignore appeared first on Marketing Land.

The worlds of brand and trade marketing need to unite

Collaboration between brand and trade marketing teams is critical for long-term success, says contributor Andrew Waber. Here’s how to make this tactical and strategic alignment a reality.

The post The worlds of brand and trade marketing need to unite appeared first on Marketing Land.

There seems to be a massive shift in the way successful brands allocate dollars and other resources to their online marketing efforts.

For example, in 2017, coworkers and I analyzed advertising activity from P&G showing that hundreds of millions of dollars of its online ad budget had moved to trusted e-commerce channels rather than to the sites and approaches typically used for brand marketing.

According to P&G Chief Brand Officer Marc Pritchard and The Wall Street Journal:

The ad dollars were pulled back from a long list of digital channels but also included reducing spending with “several big digital players” by 20% to 50% last year (2017).

These are significant changes. Driving purchases through online media is increasingly reliant on retailer sites.

This transition in the overall market landscape necessitates a change in how companies fundamentally organize their marketing. Doing well on Amazon and other online retailers today requires brand and trade teams to work closely together in order to drive long-term success.

Misalignments

At a high level, brands simply can’t afford misalignment between the information on the product page and the brand promotion (run on sites such as Facebook) that leads customers to that page.

Ten years of Google conversion optimization proves that words in ads must match words in titles as closely as possible, or the ads may suffer high bounce rates. Consumers will notice the shift in vocabulary and abandon the landing page, driving down conversion rates.

Amazon Marketing Service (AMS) placements need to be associated with popular terms and be relevant to consumers. With consumers increasingly using sites like Amazon for research purposes, on-site promotions impact other sales channels, as well.

Market mix models have shown that AMS spend — which is often allocated to trade teams to handle — drove in-store sales in non-Amazon locations like CVS. If you’re a brand marketer, this means you should consider reallocating dollars from TV ads and treat budgets for promotions like AMS as brand dollars in today’s environment.

We’ve seen some larger companies already utilizing this fluid idea of what constitutes brand and trade dollars in relation to AMS and similar ad products.

There also needs to be alignment between the trade and brand marketing teams when it comes to promotions outside of Amazon’s universe. For example, if you launch an ad campaign on Facebook that drives traffic to an Amazon product detail page, but that product happens to be out of stock while the campaign is running, then your product is punished by Amazon’s A9 search algorithm, which takes “page views when out of stock” into account in its ranking criteria.

If you get traffic when you’re out of stock, then your Amazon search rankings could suffer for months. In short, you are spending money on a campaign to drive traffic to an Amazon product detail page, and actively doing your brand harm in the process!

In traditional brand marketing, local in-stock rates typically don’t directly impact the larger strategy. The trade team might have to worry about this when campaigns are run in-store, but the brand side of the house never has to. On Amazon, and increasingly on more retail websites, you really have to care. The two work in concert.

Trade teams are in the business of identifying which sets of products are worth promoting or offering at one store versus another based on customer profile (on Amazon and other online retailers). These decisions are executed primarily via the product page.

Algorithms are powerful

The algorithm, which bases decision-making on factors like relevancy and product page robustness, holds all the power here and isn’t like a chain store buyer you can “wine and dine” to improve shelf placement. Instead, brands need to address customer segments via the product title, imagery, keywords and so on.

Additionally, the fluid nature of these online retail sites necessitates continual adjustments to meet consumer needs on a near-daily basis, rather than monthly or quarterly. This can be done by direct data connections or measuring each channel with third-party analytics. Trade teams are best served by helping guide the brand marketing teams when and where these changes need to be made.

Speed to market is both hard to execute and increasingly important if you want to outflank competitors in today’s marketplace. Collaboration between brand and trade marketing teams is more critical than ever; they need to make this tactical and strategic alignment a reality in order to maintain success over the long term.
