A Well Balanced Content Personalization Diet: 3 New Years Resolutions to Increase Goal Conversions



Happy New Year, travel marketers! The beginning of January always brings its own kind of magic with resolutions and the opportunity to both reflect on the past year and look towards the next. It’s also a time that, if I can be honest, is a little overwhelming with the pressure of setting life-changing goals. And it’s not only personal goals! Working within the digital marketing space, I feel that every other content piece is focused on “new year, new marketing strategy” resolutions that couldn’t be easier to implement – or so the articles read…

At Bound, we’re big believers in starting where you’re at, especially when it comes to personalization and your marketing strategy. That’s why one of our resolutions this year is to focus on something that we know has an impact: optimizing our goal conversions.

When it comes to our monthly content reports, few things give our Customer Success Managers more joy than seeing an increase in click-through rates on goal-related content pieces. But as fun as these increases are to see, we are even more thrilled by increases in the goal conversions themselves. As we’ve become increasingly aware of the important relationship between clicks and conversions – and the very different stories each can tell when they don’t align – we’re excited to share our new Goal Dashboard and highlight three resolutions for increasing your conversions in 2020:

Read More (into your A/B tests):

When in doubt about your content, run an A/B test! While click-through rates can certainly highlight your audience’s preferences for the imagery, copy or CTA, how do you account for the content’s impact on the actual conversion? Within the new Goal Dashboard, you can now compare conversion rates across your campaigns, segments and pieces of content, allowing for a deeper level of insight. We recently took a closer look at an eNewsletter-related A/B test we have been running with a DMO. Month over month, we found that one content piece had consistently fewer clicks than the other. However, in comparing the conversion rates between the two pieces, we saw that the content piece with the lower CTR had a considerably higher conversion rate. This comparison helped us see the value of a content piece we might have otherwise removed and will help inform future A/B tests.

Exercise (your understanding of the differences between your Mobile and Desktop visitors):

As we’ve written about before, there are many things to take into consideration when creating content for your Desktop and Mobile visitors. Goal conversions are no different, especially given that our Mobile visitors are often less likely to convert. Within the new Goal Dashboard, we can now dive into the conversion rates for our different segments across campaigns, allowing us to compare, for example, fly-ins served to desktop visitors and banners served to mobile audiences. Layering in this insight can help us develop content best suited for each of our unique visitor groups.

Spend Less (time guessing how your content is performing):

Over the past few years, we’ve increasingly become fans of thoughtful “abandonment” content and the way these direct CTAs can increase conversions for visitors who have initiated, but not completed, a conversion goal.  While we often see this content with high CTRs, it can be challenging to determine how exactly this content contributes to the overall goal. Thankfully, our new Goal Dashboard takes the guesswork out of content creation and helps us see exactly which Abandonment content is best contributing to the goal. 

Our hope for your 2020 is that your conversion-related content is directly increasing your goal conversions (leaving you with more time to increase engagement for your ad visitors!). Knowing that goal conversions are a vital piece of understanding your visitors’ intent to travel, we’re excited that our new Goal Dashboard will bring new awareness and insight this year. Cheers to you and your increased conversions!

Want to learn more about the Goal Dashboard or personalizing to increase your conversions?  We’d love to chat with you and hear all about your 2020 marketing resolutions!

The post A Well Balanced Content Personalization Diet: 3 New Years Resolutions to Increase Goal Conversions appeared first on Bound.

What Can You Learn From Running an A/B Test for 2 Years?


We just concluded an A/B test on Analytics-Toolkit.com that has been left to run for just over 2 years. And it failed, as in failing to demonstrate a statistically significant effect based on the significance threshold it was designed for. Has it been a waste of time, though, or can we actually learn something from […]

The 5 Worst eCommerce A/B Testing Mistakes to Watch out for 

Make sure your eCommerce A/B Testing results are 100% accurate with our checklist!

Do you want to use A/B testing to increase the conversion rate on your eCommerce site?

Before you start running one of those tests, it’s a good idea to make sure that you set it up correctly.

A rigorously-run test can increase conversions, but if the parameters are flawed then the decisions you make based on the test could be too.

In this post, we’ll outline the worst eCommerce A/B testing errors that we see online stores make. Use this as a reference to make sure that any A/B testing you do returns accurate results. That way you can make better data-driven decisions that actually increase conversions.

Note: Want our CRO experts to do A/B testing to increase the conversion rate for your eCommerce business? Learn more about our services and get in touch.

What Are the Worst eCommerce A/B Testing Errors?

We’ve written in the past about the most common reasons A/B tests don’t perform well.

Here, we’re outlining the worst technical errors that can invalidate your results. These are:

  • Technical issues with the actual test
  • Making site changes during the test
  • Traffic source changes
  • Conflicting tests
  • Not accounting for user and site-specific factors

If you are going to put time into doing A/B testing to improve your site, it’s important to take the specific things about your site into account.

These are the 5 eCommerce A/B testing mistakes to avoid (beyond just setting up your analytics goals) that we commonly see while helping our clients with their conversion optimization.

#1. Technical A/B Testing Issues

Here are the things to consider when setting up an A/B test (whether you’re using Optimizely, VWO, or another platform) to make sure it yields valid and actionable results.

Didn’t Exclude Return Visitors

If the same people experience two different versions of your eCommerce website, the test results will be invalid. We often see this happen with returning visitors after a site changes its design, layout, or other elements.

A ‘Negative Response’ is the possible outcome when return visitors see the new treatment and are now lost (i.e. a navigation change) or confused (i.e. a page layout test). The user now has expectations about the site that we might not be meeting with the new treatment, even if it is better.

For instance: Below is a familiar pattern we see when the test variation (purple) overtakes the control (green). This often happens when a site’s returning visitors are allowed into the test. These returning visitors prefer the control because it has a “continuity of experience” (it’s the version they’re used to).

This makes the variation appear to perform poorly until the initial group of returning visitors exits the test. In the test below, it took roughly 12 days for this returning visitor bias to abate.

[Chart: test variation (purple) overtaking the control (green) once the returning-visitor bias abates, roughly 12 days into the test]

Many tests do not run long enough (see below). If returning visitors are not excluded, the true result—a big win—may never be seen. Even worse, you may end up implementing a losing variant and harming sales.

Note: For businesses with loyal client bases who need to know how a treatment will impact their existing users, you can remove returning visitors from the data after the fact.
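One way to put that exclusion into practice on the client side is sketched below, assuming a hypothetical first-party cookie named "_seen_before" is how the site marks prior visits; most testing platforms can express the same rule through their built-in audience conditions, so treat this as an illustration rather than a required setup.

```typescript
// Sketch: enroll only new visitors in a test. Assumes a hypothetical
// first-party cookie ("_seen_before") marks prior visits; most testing
// tools offer equivalent audience conditions out of the box.

function hasCookie(name: string): boolean {
  return document.cookie
    .split(";")
    .some((part) => part.trim().startsWith(`${name}=`));
}

function shouldEnrollInTest(): boolean {
  const isReturningVisitor = hasCookie("_seen_before");

  // Mark this visitor so any future session is treated as returning.
  if (!isReturningVisitor) {
    document.cookie = "_seen_before=1; max-age=31536000; path=/";
  }

  // Only brand-new visitors enter the test, so "continuity of experience"
  // bias from people used to the old design cannot drag down the variation.
  return !isReturningVisitor;
}
```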

Didn’t Run a Test Long Enough

One of the most common ways we see eCommerce businesses run tests improperly is not giving them a chance to run for long enough. The reasons for this vary from not knowing better to succumbing to the pressure to get results quickly.

Whatever the reason, not running a test long enough will rob you of the truth, so you might as well have not run the test in the first place.

We’ll revisit how long to run these tests in a moment.

Run a Test with Enough Participants and Goal Completions

The more variations a test has, the more participants it will need. Your sample size needs to be large enough to demonstrate user behavior.

Consider 100 conversions per variation to be a minimum, and only after the test has run long enough (see above) and other criteria have been met (see below).
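If you want a rough feel for how many visitors that takes, a standard two-proportion sample-size estimate (a common statistical convention, not a figure from this article) can be sketched like this; the 3 percent baseline and 10 percent lift are illustrative:

```typescript
// Sketch: approximate visitors needed per variation using the usual
// normal-approximation formula for a two-proportion test. The baseline
// conversion rate and detectable lift below are illustrative only.

function sampleSizePerVariation(
  baselineRate: number,  // e.g. 0.03 = 3% conversion rate
  relativeLift: number,  // e.g. 0.10 = detect a 10% relative improvement
  zAlpha = 1.96,         // ~95% confidence, two-sided
  zBeta = 0.84           // ~80% power
): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const varianceSum = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * varianceSum) / (p1 - p2) ** 2);
}

// A 3% baseline and a 10% relative lift work out to roughly 53,000 visitors
// per variation, far more than the 100-conversion floor alone implies.
console.log(sampleSizePerVariation(0.03, 0.1));
```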

Don’t Turn Off Variations While Testing

Turning off a variation can skew results, making them untrustworthy. 

For instance: Let’s say you have four total variations (three treatments and the control). Each variation receives 25 percent of traffic. After 10 days, one treatment is turned off and from that point on, each variation receives 33 percent of traffic. Then you again turn off another variation, leaving each remaining variation with 50 percent of the traffic. 

The example below shows how when the green variation was turned off, the control improved, just like it did at the start of the test when it benefitted from some people buying right away (the control was for a free shipping offer):

[Chart: control conversion rate lifting each time a variation is turned off during the free shipping test]

The control lifts again when the pink variation is turned off, improving for a third time as the mix of new and returning visitors shifts to include more new visitors. The favored variation (this time the control) disproportionately benefits since it does well at converting the first-time buyers who saw the free shipping offer.

This graph would look very different if the control never saw those three bumps in conversion rate. Because the control is the baseline variation, the result for the winner (the blue variation) would be clearer and more confident, and the test would not have had to run as long.

The above scenario is extremely common and, at a surface level, seems benign. In reality, however, the differences between the variations themselves may corrupt the test when the weight of the remaining variations changes.

Often a variation is favored by a particular buyer mindset, such as a spontaneous shopper vs. an analytical one. If one variation is preferred over another, changing the weight of the remaining variations will cause the variation favored by spontaneous people to suddenly improve. The other variations, favored by more analytical people, will not see as much improvement until their buying cycle has concluded, perhaps days or weeks later.

Since we never know who likes a variation for what reason (Was it the hero image? The testimonials? Product page? Overall user experience?) the safest thing to do is not to eliminate any variations. 

Custom Code vs. Test Design Editors

After trying to set up a few tests via any test design editor, you may find that the test treatments do not render or behave as you expected across all browser/device combinations. 

While design editors hold great promise for “anyone” to be able to set up a test, the reality is that only a narrow range of test types (i.e. text-only changes) can be done through a test design editor alone. Custom code is required to make a test work well and consistently across browser types and versions.

The best practice is to write custom code because most modern websites have dynamic elements that visual editors can’t identify properly. Here are some specific cases that visual editors can’t detect: 

  • Web page elements that are inserted or modified after the page has been loaded, such as some shopping cart buttons, button color, Facebook like buttons, Facebook fan boxes and security seals like McAfee
  • Page elements that change with user interaction, such as shopping cart row changes when users add or remove elements, reviews, carousels, and page comments. 
  • Responsive websites that have duplicated elements, such as sites with multiple headers (desktop header is hidden for mobile devices and mobile header is hidden for desktop computers). 

In the beginning, you may be able to avoid running complex tests that require custom code. Eventually, you will graduate to a level of testing that demands it. Know that this requires a front-end developer to set up the tests that you will want to run. 
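To make that concrete, here is a hypothetical sketch of the kind of custom treatment code a front-end developer might write, using a MutationObserver so the change still lands on an add-to-cart button that is injected after the initial page load; the selector and button copy are placeholders, not anything from a specific platform:

```typescript
// Sketch: apply a test treatment to an element that is injected after the
// page loads (the kind of dynamic element a visual editor tends to miss).
// The ".js-add-to-cart" selector and the button copy are placeholders.

function applyTreatment(button: HTMLElement): void {
  button.textContent = "Add to Cart - Free Shipping";
  button.classList.add("ab-variant-b");
}

function whenElementReady(selector: string, apply: (el: HTMLElement) => void): void {
  const existing = document.querySelector<HTMLElement>(selector);
  if (existing) {
    apply(existing);
    return;
  }

  // Watch the DOM until the element appears, then stop observing.
  const observer = new MutationObserver(() => {
    const el = document.querySelector<HTMLElement>(selector);
    if (el) {
      observer.disconnect();
      apply(el);
    }
  });
  observer.observe(document.documentElement, { childList: true, subtree: true });
}

whenElementReady(".js-add-to-cart", applyTreatment);
```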

Run the Test at 100% Traffic

Today, many purchases online involve more than one device or one browser (i.e. researching on a smartphone, then purchasing on a laptop).

However, test tools are limited to tracking a user on a single browser/device combination. This means that someone who sees one variation of a test on mobile may come back to purchase on desktop and be provided another variation.

Showing one variation more often than another will give it the advantage, since it is more likely to be seen with continuity by users who switch devices or browsers. This is called “continuity bias.”

To avoid continuity bias during the testing process, we recommend you run tests at 100 percent of traffic and split that traffic evenly between variations.

When less than 100 percent of traffic is sampled for a test, the result (in today’s cross-device world) is that the control will be served more often than the variations, thus giving it the advantage.

For instance: A user visits your site from work and is not included in a test because you are only allowing 50 percent of people to participate. Then, that same user goes home later in the buying cycle and gets included in the test on their home computer. The user is then more likely to favor the control due to continuity bias.

Side note: This may be a good time to look at your past results of tests that were run with less than 100 percent of user participation and see if the Control won more than its fair (or expected) share of tests. 
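A simple way to run that check, under the assumption that each past test had equally weighted variations (so the control's "fair share" of wins is one divided by the number of variations), is sketched below with made-up history:

```typescript
// Sketch: compare how often the control actually won against how often it
// "should" win by chance. Assumes equally weighted variations, so the
// control's fair share of wins is 1 / (number of variations). The history
// below is made up for illustration.

interface PastTest {
  variations: number;   // total variations, including the control
  controlWon: boolean;
}

function controlWinSummary(tests: PastTest[]): { actual: number; expected: number } {
  const actual = tests.filter((t) => t.controlWon).length;
  const expected = tests.reduce((sum, t) => sum + 1 / t.variations, 0);
  return { actual, expected };
}

const history: PastTest[] = [
  { variations: 2, controlWon: true },
  { variations: 4, controlWon: true },
  { variations: 2, controlWon: false },
  { variations: 3, controlWon: true },
];

// If actual wins sit well above the expected count, continuity bias from
// partial-traffic tests is a plausible culprit worth investigating.
console.log(controlWinSummary(history)); // { actual: 3, expected: ~1.58 }
```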

Equal Weighting of Variations

For the same reasons as mentioned above with running a test at 100 percent, you also want to ensure that all variations are equally weighted (i.e. testing four variations including the control should see 25 percent of traffic go to each). If the weights are not equal there is a bias—as outlined above.

Test Targeting

Test results are easily diluted (the test will have to run longer) or contaminated if the test is not targeted to the right audience. The most common issues of inappropriate test targeting are:

  1. Geo (i.e. including international visitors in a test that is USA specific)
  2. Device (i.e. including tablets in a mobile phone test)
  3. Cross-Category Creep (i.e. test for Flip Flops spreads into all Sandal pages)
  4. Acquisition vs. Retention (i.e. including repeat customers in a test for first-time customers)

If the right audience and pages are not targeted, then it will take much longer to see any significant results with confidence due to the noise of users who don’t care either way. That test result will indicate that the change is not significant, leaving you to stop the test and not gain the additional sales.
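As an illustration, those four targeting rules could be expressed as a single activation check like the hypothetical sketch below; in practice, most testing tools let you build the same conditions in their audience or page-targeting settings, and the field names here are assumptions about what your data layer exposes:

```typescript
// Sketch: should this visitor/page combination be let into the test?
// Field names and example values are hypothetical; real setups usually read
// them from a data layer or configure them in the testing tool directly.

interface VisitorContext {
  country: string;                              // e.g. "US"
  deviceType: "desktop" | "mobile" | "tablet";
  pageCategory: string;                         // e.g. "flip-flops"
  isReturningCustomer: boolean;
}

function qualifiesForTest(ctx: VisitorContext): boolean {
  const inTargetGeo = ctx.country === "US";                    // geo targeting
  const isMobilePhone = ctx.deviceType === "mobile";           // exclude tablets
  const onTargetCategory = ctx.pageCategory === "flip-flops";  // no category creep
  const isFirstTimeCustomer = !ctx.isReturningCustomer;        // acquisition test

  return inTargetGeo && isMobilePhone && onTargetCategory && isFirstTimeCustomer;
}
```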

These are the most common technical reasons we see invalid tests. There are some other good habits we recommend though.

#2. NO eCommerce Site Changes During Testing

As a general rule: Avoid making other site changes during the test period.

Otherwise, you may see a statistical result from the test without knowing what to attribute it to.

We often see eCommerce stores that launch a website redesign while they are in an active test period which causes issues.

For instance: If you are testing a trust element like McAfee’s trust seal, avoid changes that may impact the trust of the site, including:

  • Site style changes
  • Other trust seals
  • Header elements (like contact or shipping information)
  • Or any other site-wide “assurance” elements (i.e. chat)

The same line of thinking applies to other changes. If you are testing a pop-up window, don’t make a change to the site style, other opt-in boxes, etc.

When split testing, it’s usually best to test one element at a time. Making site changes during a test makes that ideal a lot less feasible.

#3. Traffic Sources Change and Muddy Conversion Data

In our experience, the different sources of traffic to your website will behave differently.

When there is a change in the distribution of traffic sources (i.e. paid search increases), test results will be unreliable until the test participants brought in have had a chance to go through their entire buying cycle. 

For instance: Paid search visitors may be less trusting and less sophisticated when it comes to the web. This traffic source often responds well to trust factors like trust seals.

Increasing paid traffic during the test may result in a sharp increase in conversions. But that increase is not sustained as the test continues for a longer period and the number of non-spontaneous visitors gets factored into the results.

If one version performs better for one traffic source, but another traffic source starts getting mixed into the test, you are seeing a mixed-traffic response because of the change. This can make it hard to attribute the conversion increase you see to one source of traffic or the other.

#4. Conflicting A/B Tests Spoil Attribution

It’s easy to run more than one test at a time; however, tests may conflict in how they impact users. It is common to see the results of one test change when another test is started or stopped. This is typically the result of the tests sharing the same purchase funnel, or impacting the same concept.

Avoid running conflicting tests. If you are running more than one test, do a bit of analytics work to see how many people will be affected by both tests (i.e. users common to the two pages involved in the separate tests).

If it’s more than 10 percent, then you will want to strongly consider how the two tests impact the user’s single experience. By using common sense and good judgment, you and your team will be able to estimate which tests can be run at the same time.
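That "bit of analytics work" can be as simple as exporting the (anonymous) visitor IDs who hit each test's pages and measuring how much the two audiences overlap; here is a sketch with illustrative IDs, assuming you can pull such an export from your analytics tool:

```typescript
// Sketch: estimate what share of a test's audience would also see another
// test. Assumes you can export anonymous visitor IDs for the pages involved;
// the IDs below are illustrative.

function overlapShare(testA: Set<string>, testB: Set<string>): number {
  let shared = 0;
  for (const id of testA) {
    if (testB.has(id)) shared++;
  }
  // Share of the smaller audience that is exposed to both tests.
  return shared / Math.min(testA.size, testB.size);
}

const productPageVisitors = new Set(["v1", "v2", "v3", "v4", "v5"]);
const checkoutVisitors = new Set(["v3", "v5", "v9"]);

// 2 of the 3 checkout visitors also hit the product-page test (~67%),
// well past the 10 percent threshold, so stagger these two tests.
console.log(overlapShare(productPageVisitors, checkoutVisitors));
```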

Here, we typically recommend breaking eCommerce sites into 3 “funnels” or sections:

  • Top-of-funnel is finding a product
  • Mid-funnel is when the user is on the ‘Product Detail Page’ and ‘Adds to Cart’
  • Bottom-of-funnel is Cart through a checkout page conversion

[Diagram: eCommerce purchase funnel split into top (finding a product), mid (product detail page and add to cart), and bottom (cart through checkout)]

You shouldn’t be running more than one test at a time in any one of these funnels, and KPIs should reflect the goal of each “funnel,” especially when there is another test running in one of the other funnels.

#5. Not Taking User and Site Specifics into Account When A/B Testing

Every eCommerce site is unique. So when you do A/B testing for eCommerce, take those particular factors into account.

Clients who work with us get individualized recommendations for their websites. Here are some important general guidelines:

1. Segment Traffic

When judging whether you have enough goal completions, don’t forget to consider segmentation at the user persona level.

For example, an eCommerce store selling school supplies will have a big split between classroom teachers and parents who shop on the site. If the treatment only targets one group, or if it might impact each group differently, it’s important to take that into account.

2. Test Against the Buying Cycle

When looking at test results, a test must be run and analyzed against its buying cycle. This means testing a person from their very first visit and all subsequent visits until they purchase.

If you know that 95 percent of purchases happen within three days of the user’s visit, then you have a three-day buying cycle.

Your test cycle will be the number of weeks you test (you want to test in full-week periods) plus the full length of your buying cycle, so the last participant let into the test has an adequate chance to complete their purchase.
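As a quick worked example of that arithmetic (the three-day buying cycle comes from the paragraph above; the two full weeks of testing are an assumption for illustration):

```typescript
// Sketch: how long a test needs to stay open. Full weeks of enrollment plus
// the site's buying cycle, so the last visitor admitted can still purchase.

function totalTestDays(fullTestWeeks: number, buyingCycleDays: number): number {
  return fullTestWeeks * 7 + buyingCycleDays;
}

// Two full weeks of enrollment plus a three-day buying cycle = 17 days.
console.log(totalTestDays(2, 3));
```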

3. Count Every Conversion (or at Least Most of Them)

If a participant has entered a test, their actions should be counted. This may sound obvious, but correct attribution is seldom done well, and this results in inaccurate testing.

In order to do this right, it’s important that all visitors are given the chance to purchase after entering a test. If a test is just “turned off,” participants in that test who have yet to purchase have been left out. 

Since it is common to see one particular variation do well with returning visitors, leaving out these later conversions will skew the test toward the variations that favor less methodical shoppers.

4. Determine Your Site’s Test Cycle (How Long to Run a Test)

You likely have been involved in discussions about how long a test should run. The biggest factor in how long to run a test is your site’s test cycle. 

To find your site’s test cycle in Google Analytics, simply start with a segment like the one below where you define that you want to view only users who had their first session during a one-week period. Then set a condition where transactions are greater than zero.

[Screenshot: Google Analytics segment limiting the view to users whose first session fell in a one-week period and who had at least one transaction]

This type of segment will tell you when people whose first visit was that week eventually purchased on your site.

You can start by looking at a range such as two months, then work backwards to figure out when 95 percent of the purchases in that two months were. In the example below, the site has a three-week test cycle because 95 percent of purchases for the two months occurred in the first three weeks from the beginning of the period you started tracking purchases:

[Screenshot: purchase report showing 95 percent of the two months’ purchases occurring within the first three weeks]

You may be wondering, “Why 95 percent?” This is a simple rule of thumb: from experience, we have rarely seen the final 5 percent of purchases change a test’s conclusions; however, we have seen the last 10 percent do so.
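If you would rather compute the cutoff than eyeball the report, the same 95 percent rule can be applied to exported days-between-first-visit-and-purchase values; the lag data in this sketch is made up, but it reproduces the three-week test cycle from the example above:

```typescript
// Sketch: the site's test cycle as the 95th percentile of days between a
// user's first visit and their purchase. The lag values are made up.

function testCycleDays(purchaseLagsInDays: number[], percentile = 0.95): number {
  const sorted = [...purchaseLagsInDays].sort((a, b) => a - b);
  const index = Math.ceil(percentile * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

const lags = [0, 0, 1, 1, 2, 2, 3, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 19, 21, 40];

// 19 of these 20 purchases (95 percent) happened within 21 days, so this
// site has a three-week test cycle.
console.log(testCycleDays(lags)); // 21
```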

5. Use 7-Day Cycles

When testing, you most likely have to test against a full week cycle. This is because people often behave differently during different days of the week.

For instance: If your site sells toys for small children, your site’s reality might be that a lot of research traffic occurs on the weekend when the children are available for questioning (i.e. “Hey Ty, what’s the coolest toy in the world these days?”). 

Another reality for a toy site might be that often the “Add to Cart” button does not need to get hit until Tuesday evening, given that a lot of toys are not needed until the weekend when birthday parties are typically held. A test run from Wednesday through Sunday (five full days with lots of data) is still not enough.

The reality is, almost every eCommerce site (from the more than 100 eCommerce analytics we’ve done test analysis on) has a seven-day cycle. You may have to figure out which days to start and stop, but it’s there because of how user behavior varies throughout the week.

Therefore, if you don’t use a seven-day cycle in your testing, your results are going to be weighted higher for one part of the week than another.

6. Use the “Test Window”

The Test Window is essentially the set of steps we recommend to avoid skewing a test.

Step 1: Only let new visitors into the test. This way returning visitors later in their purchase cycle will not skew results and potentially set the test off to a false start. Get as many people into the test as possible.

Step 2: Don’t look at the test for a full seven days. If you don’t have a statistical winner at this point (most test tools will tell you when the test has reached 95 percent confidence), let the test run for another seven-day cycle and don’t peek.

Step 3: Turn off the test to new visitors once you have a statistical winner (at seven-day intervals). Turning off the test to new visitors will allow the participants already in the test to complete their buying cycle. Leave the test running for a full buying cycle after you’ve closed the window.

Step 4: Report out on the test. To report out on the test’s overall results, you will simply look at your A/B testing tool’s report. Now, because you used the Test Window, you will be able to believe the results because:

  • Everyone in the test had a consistent site experience, staying in the same test variation throughout (no one saw the control on a previous visit only to later experience a treatment).
  • A full seven-day cycle was used so weekend days and weekdays were weighted realistically. 
  • Every user (or 95 percent of them at least) was allowed to complete their buying cycle.

Conclusion

If you are going to put time into testing to improve your eCommerce site, everything that you do will be invalidated if you aren’t paying attention to these vital A/B testing factors.

In our effort to be the best eCommerce agency, we study and rank the best eCommerce sites’ conversion rate optimization strategies in our Best in Class eCommerce CRO Report. We use our findings and apply them directly to our clients’ sites so that their stores are as optimized as the best of the best (and we have case studies to demonstrate it).

That said, every significant site change needs to be tested. And for that, the step-by-step guidelines we’ve shown you here will help ensure you get accurate results.

We know that conversion testing is time-consuming and often overwhelming. If you would like to increase your eCommerce site’s conversion rate, our CRO experts can help with set up and make A/B testing recommendations. Learn more here and get in touch.

Does Your A/B Test Pass the Sample Ratio Mismatch Check?


Most, if not all, successful online businesses nowadays rely on one or more systems for conducting A/B tests in order to inform business decisions ranging from simple website or advertising campaign interventions to complex product and business model changes. While testing might have become a prerequisite for releasing the tiniest of changes, one type of […]

5 Reasons You Shouldn’t Be Scared to Try A/B Testing


Editor’s Note: We want to focus on giving you actionable advice for what you can pay attention to, evaluate, and implement quickly to make sure you get the most out of Q4 and the remainder of 2019. Building on the video from our post on our session from the A/B Testing Summit, we wanted to […]

The post 5 Reasons You Shouldn’t Be Scared to Try A/B Testing appeared first on The Daily Egg.

AGILE A/B Testing Update: Custom API & Support for Non-Binomial Data


I’m happy to announce the release of two long-awaited features for our AGILE A/B Testing Calculator: support for non-binomial metrics like average revenue per user, and a new custom API for sending experiment data to the calculator. Below is an explanation of each of these new features in some detail. Support for Non-Binomial Data: While our […]

Statistical Methods in Online A/B Testing – the book


The long wait is finally over! “Statistical Methods in Online A/B Testing” can now be found as a paperback and an e-book on your preferred Amazon store. Note that the Kindle edition is available for $2.99 or equivalent if you’ve already purchased the paperback (through Kindle Matchbook). The book is a comprehensive guide to statistics […]

Going from 10 to 100 experiments per year: Building the frame


Note from the Editor: This article is part I in a series dedicated to helping you increase your online experiment...

The post Going from 10 to 100 experiments per year: Building the frame appeared first on WiderFunnel Conversion Optimization.

A/B Testing with a Small Sample Size


The question “How to test if my website has a small number of users” comes up frequently when I chat to people about statistics in A/B testing, online and offline alike. There are different opinions on the topic ranging from altering the significance threshold, statistical power or the minimum effect of interest all the way […]

Free Guide: How to Strategize & Execute Profitable Personalization Campaigns



When I speak with our clients, it often strikes me how many of them feel overwhelmed by the very idea of personalization.

Our imagination, often fueled by the marketing teams of various software companies, creates a perfect world where personalization enables every interaction to be completely custom for every individual. In this dreamland, artificial intelligence and machine learning solve all our problems. All you have to do is buy a new piece of software, turn it on, and…BOOM: 1:1 personalization.

As a data scientist, I’ll let you in on a little secret: that software only provides the technological capability for personalization. Even further, the algorithms found within these tools simply assign a probability to each potential experience that maximizes the desired outcome, given the data they have access to. Suffice to say, they’re not as intelligent as you are led to believe.

If you caught our first post in this series, you already know that we define personalization a bit more broadly, as any differentiated experience that is delivered to a user based on known data about that user. This means personalization exists on a spectrum: it can be one-to-many, one-to-few, or one-to-one.

And while there are many tools that enable you to do personalization from a technical standpoint, they don’t solve for one of the main sources of anxiety around personalization: strategy.

Most personalization campaigns fail for lack of a strategy that defines who, where, and how to personalize. So I’ve put together a free downloadable guide to help you do just that. This seven-page guide is packed full of guidelines, templates and best practices to strategize and launch a successful personalization campaign, including:

  • Major considerations and things to keep in mind when developing your personalization strategy.
  • More than 30 data-driven questions about your customers to identify campaign opportunities.
  • A template for organizing and planning your personalization campaigns.
  • Guidelines for determining whether to deliver your campaigns via rule-based targeting or algorithmic targeting.

Free Download: Plan & Launch Profitable Personalization Campaigns.

The post Free Guide: How to Strategize & Execute Profitable Personalization Campaigns appeared first on Brooks Bell.