How to Prepare for a Business Failure

Each business has its own level of difficulty. Do you know what to do when things are no longer working? How can you prepare yourself when the time comes to close up shop? Be ready for your next business idea.

The post How to Prepare for a Business Failure appeared first on CXL.


How to Make Gains and Wins Consistently… Play the Long Game

How do you consistently make gains and wins in business? You play the long game. Don’t get wrapped up in short-term goals. Think about your long-term plan and be consistent in your process.

The post How to Make Gains and Wins Consistently… Play the Long Game appeared first on CXL.


Dedicate Time For Learning & Get Better at Anything


Do you want to get better at your work? Are you even trying? Putting in the work and meeting with mentors will get you far, but dedicated time for learning WEEKLY will increase your chances significantly.

You have the time to learn! CXL Institute All-Access has everything from digital psychology to Google Analytics courses. All-Access is great for teams and individual learners.

The post Dedicate Time For Learning & Get Better at Anything appeared first on CXL.

How to Make Starting and Running a Business Easier

Starting and running a business can be really hard. What’s the secret? There are many ways to run a business. Want to know how to make starting a new business easier? Watch this.

The post How to Make Starting and Running a Business Easier appeared first on CXL.


How to Gain Business Experience Fast

You need business experience? Where should you work to get the best experience? Work for an agency! Working for an agency will allow you to view many business situations from the backend. Use your experience and CXL Institute’s course to Build a Brand for your Small Business.

The post How to Gain Business Experience Fast appeared first on CXL.


6 key insights from the “State of Experimentation Maturity 2018” research report

While Web Development and QA skills are a priority for every organization surveyed, organizations at the “Scaling” maturity level are… Read the blog post: 6 key insights from the “State of Experimentation Maturity 2018” research report.

The post 6 key insights from the “State of Experimentation Maturity 2018” research report appeared first on WiderFunnel Conversion Optimization.

Predicting Winning A/B Tests Using Repeatable Patterns


If you’ve ever run a highly trustworthy and positive a/b test, chances are you’ll remember it with an inclination to try it again in the future – rightfully so. Testing is hard work, with many experiments failing or ending up insignificant. It would only seem optimal to try to exploit any existing knowledge for more successes and fewer failures. In our own practice, we started doing just that.

In 2017 we started systematically categorising similar test results as patterns to help us better predict winning tests. That year we ran 51 a/b tests that were purely pattern driven, and 71% of these tests were positive at the time of stopping. It is now becoming clearer to us that patterns are a powerful tool for predicting test results, and I want to share our approach with you. Here is the process we follow to identify and use patterns for a higher win rate.

The Goal: Better Than 50/50 Randomness

First of all, if we’re aiming to improve our a/b test prediction rate, then we need to set up a measuring stick for what a successful prediction really means. The simplest answer to this is a binary one – that is, whether a test that we predicted to be positive or negative actually ended up positive or negative as predicted.

In other words we are simply trying to do better than randomness. Assuming we ran completely random experiments we might expect that approximately half of our results would end up positive and the other half would be negative. From this perspective our starting goal is humble: to show predictive strength, our patterns need to help us beat randomness and achieve a better win/loss rate than 50/50.

The Pattern: Core Elements For Prediction

I define conversion patterns as easily repeatable UI changes that allow us to predict and repeat effects quickly. Given a pattern, we can tactically spot an opportunity (a weak headline, too many form fields, an inauthentic photo, poor visibility of choices hidden away in a pulldown, etc.) and take rapid action to exploit its probable effect. The predictive strength of such patterns comes from one simple assumption: the more often a given change has produced a similar effect, the more likely it is to produce a similar effect again in the future. Hence, patterns ultimately obtain their predictive strength from multiple test results – the more the better. The elements of a pattern that make these predictions possible include:

  • The Change(s) – a set of properties (one or many) that define the pattern and are abstract enough to make them repeatable. Typically the changes can involve removing, replacing, or adding something new to the UI. Often the change is portrayed with the help of two screenshots: A (before, or the control) and B (after, or the variation).
  • Test Result(s) – each pattern gains its predictive strength from test results (the more the better). Tests in turn provide us with two key metrics: repeatability & median effects.
  • Degree Of Repeatability – this is a measure of how often a pattern has been tested with winning (positive) results, minus the number of any negative test results. The higher this score (either positive or negative), the more likely the pattern will repeatedly win or lose in future experiments. For patterns that don’t have any test data their repeatability score is a neutral 0.
  • Median Effect – the median effect tells us what effect we might expect from a similar change in a future test. It is calculated from the deepest effects of each test related to a pattern (e.g., the most meaningful measures, such as signups, leads, or sales). The more tests we have for a given pattern, the more accurate the median effect should become.

Here is a sample No Coupon Fields pattern and how we tie all of these elements together:

TECHNICAL NOTE: To compensate for test results with different degrees of confidence, we attribute a full repeatability point (1) for a highly significant test result (p-value < 0.03), 0.5 points for any suggestive result (p-value < 0.25), and 0.25 points for any insignificant result (p-value > 0.25) or test result without complete sample size data.
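To make the bookkeeping concrete, here is a minimal TypeScript sketch of how repeatability and median effect could be computed from a pattern’s test results. The types and names are purely illustrative – they aren’t part of any particular tool – and the point weights follow the technical note above.

```typescript
// Illustrative sketch only: shapes and names are made up for this post.
interface TestResult {
  positive: boolean;   // did the variation beat the control?
  effect: number;      // relative lift on the deepest metric, e.g. 0.08 for +8%
  pValue?: number;     // left undefined when sample size data is incomplete
}

interface Pattern {
  name: string;
  results: TestResult[];
}

// Weight a single result by its confidence, per the technical note above.
function confidencePoints(r: TestResult): number {
  if (r.pValue === undefined) return 0.25; // incomplete sample size data
  if (r.pValue < 0.03) return 1;           // highly significant
  if (r.pValue < 0.25) return 0.5;         // suggestive
  return 0.25;                             // insignificant
}

// Degree of repeatability: weighted wins minus weighted losses.
function repeatability(p: Pattern): number {
  return p.results.reduce(
    (sum, r) => sum + (r.positive ? confidencePoints(r) : -confidencePoints(r)),
    0
  );
}

// Median of the deepest effects across all tests related to the pattern.
function medianEffect(p: Pattern): number {
  if (p.results.length === 0) return 0;
  const sorted = p.results.map((r) => r.effect).sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}
```

For example, a pattern with two highly significant wins and one insignificant loss would score 1 + 1 − 0.25 = 1.75.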

Good, Bad And Better Patterns

As patterns accumulate test results they quickly drift away from innocent neutrality. Patterns that perform more positively than not gain in their degree of repeatability (with a higher likelihood to win again). Other patterns might win and lose in similar measure, staying close to 0 and hinting at a lower probability of success. Finally, patterns that tend to lose more often than not will have a negative repeatability, suggesting that they aren’t such good ideas after all.

The better patterns therefore can be defined by two criteria: they contain a high degree of repeatability and a high median effect.

Generating New Patterns

Pattern ideas can come from anywhere. The source of a pattern idea, however, isn’t that important, as all new patterns are created equal (with a neutral repeatability of 0). Instead, it’s the test results that provide patterns with their predictive power, charging them negatively or positively. Considering the above, here are some of the ways we generate new patterns:

  • Imagination, Pen & Paper – your experience and your creative process can be a valuable source of interesting pattern ideas. Sometimes we simply take a piece of paper and sketch our ideas. We don’t worry too much about forcing ourselves to come up with perfect ideas. We know that however exploratory the patterns are, their potential will eventually come through as they become tested.
  • Your Own A/B Tests – when you finish running an a/b test, it is a perfect opportunity to capture the change (or set of changes) as a pattern. Regardless of the effect or the degree of confidence, each a/b test contains valuable data that has the potential to repeat in the future. In this case you can design a new pattern and already have initial data for or against it – giving your pattern an advantage.
  • Other People’s A/B Tests – there are a/b tests that companies share publicly, and these can be another valuable starting point for a pattern. Granted, it’s more difficult to trust other people’s results given the presence of publication bias (the tendency to report positives more than negatives). Published results also often lack detailed sample sizes or conversion data, which makes it harder to assess their quality. In this case we attribute a lower repeatability score (only 0.25) to compensate for the lack of complete data.
  • Customer Research – any qualitative research where real customers or users express their needs can be a valuable source of inspiration for new pattern ideas. This includes any methods such as: surveys, usability studies, interviews, screen recordings, etc.
  • Copying Sites That Optimize – finally, it’s always worthwhile to pay attention to websites that you know are running experiments and actively optimizing. Chances are that whatever changes they have implemented have gone through some kind of experimentation and therefore may have slightly higher chances of succeeding in the future.

Using Patterns To Optimize A Web Site

STEP 1: Finding Opportunities

When we set out to optimize a set of screens using patterns, our focus is on identifying as many optimization opportunities as possible. We do this by defining a set of screens and metrics to improve. At the same time, we remind ourselves of all existing patterns to inspire ourselves with a wide set of common changes. We might also look up more specific pattern types by page type (e.g., checkout patterns) or by metric (e.g., lead-gen patterns). It doesn’t really matter in which order you begin the process. What does matter is to have the screens, goal metrics, and patterns visible in front of you so that you see and capture the opportunities – the more, the better.

Practically we might use Adobe Illustrator (any screen annotation software is fine) to list out the relevant screenshots and annotate them with ideas on the sides like this:

If we have an idea for which we don’t yet have a pattern, we still capture it (without any data references of course).

STEP 2: Prioritizing With Repeatability & Median Effects

Once we list out enough ideas (usually 10 to 100), we weigh them to see which have the highest probability of success and the highest impact. To do this, for each idea that is based on a pattern we look up its repeatability and median effect and write them down beside the idea. By doing this we officially make a prediction using real data, and our prioritized ideas may start to look like the following:

Optionally, we might also add a subjective confidence score for each idea. If we do choose to do this, we limit our confidence to a range between -3 (highest confidence that the idea will be negative) and +3 (highest confidence that the idea will be positive). And if we have multiple team members expressing their subjective confidence, we average these values to tap into crowd intelligence.
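As an illustration of this step, the sketch below (building on the types and functions from the earlier sketch) orders ideas by repeatability first, median effect second, and averaged subjective confidence last. The exact ordering rule is our own convention for this example rather than a formula from the process above.

```typescript
interface Idea {
  description: string;
  pattern?: Pattern;          // undefined when no pattern exists for the idea yet
  teamConfidence?: number[];  // each team member's vote in the range -3..+3
}

// Average subjective confidence, clamped to the -3..+3 range.
function avgConfidence(votes: number[] = []): number {
  if (votes.length === 0) return 0;
  const mean = votes.reduce((a, b) => a + b, 0) / votes.length;
  return Math.max(-3, Math.min(3, mean));
}

// Order ideas: higher repeatability first, then higher median effect,
// then higher averaged subjective confidence as a tie-breaker.
function prioritize(ideas: Idea[]): Idea[] {
  const score = (i: Idea) => ({
    rep: i.pattern ? repeatability(i.pattern) : 0,
    eff: i.pattern ? medianEffect(i.pattern) : 0,
    conf: avgConfidence(i.teamConfidence),
  });
  return [...ideas].sort((a, b) => {
    const sa = score(a), sb = score(b);
    return sb.rep - sa.rep || sb.eff - sa.eff || sb.conf - sa.conf;
  });
}
```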

STEP 3: Designing Tests & Exploring Variations

Once our ideas are prioritized and we start seeing what has the most potential, we continue with more detailed concepting. Looking at a pattern and seeing an A and a B, we don’t want to be misled into thinking this is the only way of applying a given pattern. For every A and B, there is a C, D, E and F just around the corner waiting to be discovered. Here is where we get creative and generate visual alternatives. We might even look through past tests to check for more specific examples where a pattern worked and where it failed.

Not all sketched-out ideas are selected for a test, but we definitely like to have more to choose from. Our final visual is a solid test concept containing a series of variations (one or many) with the exact changes (one or many) that will go into testing, looking similar to this:

NOTE: You always have the option to either test an idea or implement it directly at this stage. Given enough confidence (subjective or from enough positive tests) we respect the decision to skip testing and roll out changes straight to production (implementation). Exploiting knowledge in this way, although it carries risk, is a valid optimization move (depending on the business context: statistical sensitivity, site traffic, predicted impact, degree of confidence, etc.).

STEP 4: Feedback Loops & Correcting The Data

When we decide to run a test based on a pattern there is one last element that is critical to completing the process – we update the pattern with the new result. More specifically, the repeatability score is updated (either positively or negatively), and the median effect changes for better or for worse.

This happens for any test result, independent of the effect and independent of the degree of significance. It’s important to remember and learn from every result, no matter the outcome (unless there was a technical test setup issue that invalidates the experiment). This feedback mechanism is what makes future predictions more and more accurate with each new test result, further separating the better patterns from the weaker ones.
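In code terms, and continuing the earlier illustrative sketch, closing the loop is simply a matter of appending the new result so that the next repeatability and median-effect lookups reflect it:

```typescript
// Record a finished test against its pattern, regardless of effect or significance,
// unless a technical setup issue invalidated the experiment.
function recordResult(pattern: Pattern, result: TestResult, valid = true): void {
  if (!valid) return; // invalidated experiments are excluded from the pattern's data
  pattern.results.push(result);
  // repeatability(pattern) and medianEffect(pattern) now include the new result,
  // so the next prediction made from this pattern is automatically updated.
}
```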

Do Patterns Work? Promising Results From 51 A/B Test Predictions

But how does such an approach perform on real projects? In order to assess if patterns have any predictive power, in 2017 we began tracking our own predictions across numerous optimization projects. We ran and tracked 51 such a/b tests that were strictly pattern driven and here are the exciting results we found.

Out of 51 experiments, all of which were predicted to be positive (that is, they had positive repeatability scores), 36 ended up being positive at the time of stopping. This placed the prediction rate of these patterns at 71%. Using patterns in this way clearly helped us move away from the 50/50 success rate suggested by randomness. Hence we have a very positive outlook on patterns (based on experiments) as an effective way of running more winning a/b tests while minimizing effort. This is in line with the business expectations of clients who aim for the highest-impact results with the lowest possible effort (maximum ROI).

 

Do More Repeatable Tests Lead To A Higher Prediction Rate?

There is one more key question that we can ask ourselves in order to check if repeatability is a reliable predictor of test outcomes: does a higher repeatability score lead to more positive test predictions? In other words, if a pattern performs positively more frequently, does that mean that it has better chances of performing positively again in future experiments? To answer this question we organized our predictions by three sets of repeatability scores (reminder: the higher the score, the more positive evidence we have in favor of a pattern). Here is what we found:

We are seeing a clear indication that the more evidence we have in favour of a pattern, the greater our prediction rate. Our prediction rate is increasing linearly with the degree of repeatability. This is our most promising finding from 2017 in favour of continuing to identify and measure conversion patterns in this way. It’s also highly reassuring as it suggests that conversion patterns can be generalizable (perform across different websites).

From these findings we became more comfortable adding a layer of meaning to our repeatability score. We can always adjust it in the future as we collect more data and assess the accuracy of our predictions, but for now here is what we are starting with (encoded in a short sketch after the list):

  • Repeatability of 0 = May Win or Lose
  • Repeatability above 0 and up to 0.99 = Maybe Will Win
  • Repeatability of 1 to 2.99 = Likely Will Win
  • Repeatability of 3 to 4.99 = Very Likely Will Win
  • Repeatability of 5 or more = Almost Certain To Win
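Encoded as a small lookup, with the thresholds exactly as listed above (negative scores aren’t covered by the list, so here they simply fall through to the neutral label):

```typescript
function repeatabilityLabel(score: number): string {
  if (score >= 5) return "Almost Certain To Win";
  if (score >= 3) return "Very Likely Will Win";
  if (score >= 1) return "Likely Will Win";
  if (score > 0) return "Maybe Will Win";
  return "May Win or Lose"; // 0 (or below) - untested or mixed patterns
}
```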

Conclusion

Analysing individual a/b tests in isolation is limiting. We now see the value of looking at multiple experiments instead. Using patterns in the way we have outlined above is one of many approaches that make this leap possible. Patterns for us have become a powerful way of remembering multiple past experiments – critical for making more accurate predictions. The 71% success rate from last year is just the beginning. As we run and remember more experiments, our work should become easier. And pulling probabilities from past experiments will ultimately enable us to run more winning and higher-impact tests.

What works for one site may not always work for another, as some skeptics like to say. But what works for one site, another, and another, will more likely than not work again somewhere else – remember and exploit this.

The post Predicting Winning A/B Tests Using Repeatable Patterns appeared first on CXL.

Improve Conversions with Website Reviews


Do you want to make your website better? There are many ways to optimize a landing page for higher conversions. One method is to perform an expert review to understand what you should change. Watch my step-by-step framework.

On this episode of The Pe:p Show, conversion optimization champion Peep Laja breaks down how you can improve landing page conversions with expert reviews and heuristic analysis.

Want to improve your conversion optimization skills?

Take our Conversion Optimization Minidegree, it includes full courses on Heuristic Analysis.

The post Improve Conversions with Website Reviews appeared first on CXL.

Mastering Mobile Popups


The m-commerce share of total e-commerce spending continues to grow at a steady pace. In the US, as many as 71% of total digital minutes are spent on a smartphone or tablet. In Indonesia, 91%.

All of which confirms that launching dedicated mobile lead generation campaigns is no longer an option but a necessity.

This post will discuss how one online retailer used mobile popups to start targeting and converting their smartphone visitors, what strategies they’ve used and which ones delivered the strongest results.

What’s the Size of the Current Mobile Opportunity?

Fact: We live in a primarily mobile world.

Over 2.6 billion of us already own a smartphone (this figure does not include China and India). And in the next two years, the number of global smartphone users is expected to reach 3.6 billion.

 

Mobile user growth worldwide

(image source)

And we’re not shy about using those devices a lot. Statista predicts that global mobile traffic will grow at an astonishing rate.

Mobile traffic growth

(image source)

Similarly, traffic to web pages served to mobile devices is already high and growing fast.

Share of mobile traffic evolution

(image source)

In fact, as Nick DiSabato pointed out here on CXL a couple of months ago:

“[…] all internet traffic growth came from mobile last year.”

Even Google signaled the importance of the mobile market in one of its announcements, in which the search engine openly referred to its users as primarily mobile.

All of which signifies that mobile visitors offer an incredible business opportunity.

  • They are already accustomed to their devices,
  • They also feel comfortable with initiating (and often completing) a purchase on a smartphone,
  • And are willing to engage with brands on mobile devices too.

However, there is a problem.

From conversations with customers, support emails we receive, and general feedback, we can see that converting smartphone visitors into leads remains a continuous challenge.

Why is mobile so challenging for marketers?

I’m sure you’ll agree with me on this:

Strategies for capturing emails on the desktop are quite mature at this stage (not to mention oversaturated and significantly less effective, compared to only a few short years ago.)

It doesn’t matter if it’s a popup, slide-in, landing page, or some lead magnet strategy – we already know what works and even how to boost those results further.

And that’s regardless of the market, product, and countless other factors that differentiate one business from another.

Mobile traffic, however, presents an entirely new set of challenges.

Screen size is one.

Smartphone screens offer incomparably scarcer real estate than desktops or even tablets.

For example, the most common screen size is still 720 x 1280 pixels.

 

most used smartphone screen resolutions

(image source)

As a result, the amount of space you have available for your lead capture campaign is tiny.

In fact, considering Google’s recommendation that mobile popups take up no more than 25% of the screen, on an average screen they should be no more than 320px high.

That’s the entire space you can devote to:

  • The headline,
  • Additional message,
  • Any visuals you might want to include, and
  • The opt-in form.

Lack of a comfortable input device is another.

Smartphone keyboards are hardly an input device customers prefer.

That’s the conclusion of research by Amanda L. Smith and Barbara S. Chaparro, published in Human Factors: The Journal of the Human Factors and Ergonomics Society, which analyzed the performance and perceived usability of five smartphone text input methods – physical Qwerty, on-screen Qwerty, tracing, handwriting, and voice.

The researchers concluded:

“Both younger and older adults preferred voice and physical Qwerty input to the remaining methods.” (source)

Nick DiSabato points to a similar factor. He writes:

“There are numerous barriers [to conversion]: shipping, payment, and filling out forms are all harder to parse and act on.”

Network speed plays a huge role in lower conversions.

Connection speed affects user experience too.

(And let’s face it, for many of us, mobile bandwidth still lags behind fixed broadband.)

Average desktop download speed in the US

Average mobile download speed in the US

Finally, the user behavior on mobile is different too.

Smartphone users tend to multitask, and their attention span is much shorter.

But as it turns out, often, it’s the device itself that causes interruptions.

As researchers Henry H. Wilmer, Lauren E. Sherman, and Jason M. Chein wrote in their paper, “Smartphones and Cognition: A Review of Research Exploring the Links between Mobile Technology Habits and Cognitive Functioning,” published in Frontiers in Psychology last year:

“Exogenous interruptions occur when some environmental cue captures the user’s attention. This often involves an alert coming directly from the smartphone itself, but can also involve some other external event that triggers subsequent smartphone use, such as noticing someone else interacting with his or her phone, or being reminded during a live conversation (either explicitly or implicitly) about an activity that can be accomplished on one’s smartphone (email, information search, etc.). Importantly, smartphones are capable of interfering with focused attention even when the user attempts to ignore them.”

All of the above means that any task you want a person to take on their smartphone must be:

  • Relatively easy,
  • Based on a simple workflow,
  • Quick to complete.

Otherwise, you might lose their attention.

But in the context of our conversation, all this begs the question:

Why even consider mobile popups then?

Why not skip lead generation altogether, and focus on improving cart conversions instead?

For one, as we’ve already discussed, because the business opportunity is too great to miss.

But also, because mobile popups match a new user behavior.

As many research projects have proved, we often begin the buying process on a smartphone. And this could happen at home, at work, on the bus, or even in the privy.

Here’s one finding showing how we use our devices (research by Google):

 

Think with Google - Mobile research

(image source)

Note that over a quarter of buyers would use only their smartphone, nearly twice as many as those who use only their computers.

Moreover, smartphones have become our primary device when searching for information:

Think with Google - Mobile usage

(image source)

Finally, a third of us use smartphones for shopping, both away and at home.

 

Think with Google - App usage

(image source)

All this suggests that smartphones have become the main device for initiating a purchase.

As Michael Mace pointed out here on CXL last year:

People will usually come to a mobile commerce site for one of two reasons:

  • They want to shop recreationally, by browsing through products; or
  • They want to go directly to a particular product they’re considering.

And here’s a great example Michael shared in the same article:

“For example, a user might have received a smartphone message from a friend mentioning a pair of shoes. During the day the user might check them out on a notebook computer. Then in the evening, over a glass of wine, the shoes are ordered via tablet.”

Note how a message from a friend led to product awareness, and potentially, a quick visit to the store.

And so, their first exposure to your brand would have happened on the smartphone.

Naturally, there could be countless other scenarios for that person to visit your store.

However, a considerable portion of shoppers who are only just entering the buying process – the time when they’re most susceptible to your lead generation efforts – would visit your site on smartphones.

If you don’t convert them right at that point, you might be pretty much handing them over to the competition. Which brings us to the story of how one company used mobile popups to boost their signups and revenue.

Mobile Popups Case Study – Skechers

Skechers.com.au is the well-known performance shoe brand’s online retail outlet in Australia.

And it’s big in Oz.

A quick check on Google Trends reveals that the store’s online popularity beats their major competitors significantly.

Google Trends - Skechers

 

And a quick look at the Ahrefs data reveals that the company’s web traffic has been growing continuously too.

ahrefs - Skechers

 

Now, let’s compare that with the Australian ecommerce opportunity.

The Asia-Pacific region is the most significant online market globally, and Australia is one of the region’s most highly developed ecommerce markets.

In fact, according to data from eMarketer, online retail sales should have reached AU$32.56 billion by the end of last year.

Growth in ecommerce sales - Asia-Pacific

(image source)

What’s more, Australians are a highly connected audience. As it turns out, the average person owns three devices, and many own as many as five. See the most current data from Google Consumer Barometer.

Google Consumer Barometer - Australia

 

Also:

  • 14.9 million Australians use their smartphones every day. (source)
  • Smartphone ownership grows by 2% each year. (source)
  • And 45% of consumers admit to having purchased a product via a mobile device. (source)

Importantly, all of this is reflected in Skechers’ data.

For example, over 55% of the company’s traffic comes from mobile devices.

Share of Traffic by device - Skechers

 

As Catherine Aliotta, Skechers’ ecommerce manager, points out:

“Our Online Skechers customers are predominately aged between 25-34 yrs, so it is no wonder that more than half our traffic comes from mobile devices, which is why a seamless and user friendly mobile experience is so important for us!”

Taking all of the above into consideration, it’s almost unbelievable that Skechers had absolutely no lead capture campaigns targeting smartphone users specifically.

So, here’s what they did to turn this around.

Concerns for the Campaign

One of the main considerations Skechers had before launching a mobile popup strategy was its impact on SEO.

In short, the company didn’t want the campaign to affect their rankings negatively. And given Google’s stance on what it calls “intrusive interstitials,” the threat of a search engine penalty was real.

So, from the outset, they designed their popups to pass Google’s criteria for mobile popups (a rough sketch of the height rule follows the list):

  • They would take up no more than 25%–30% of the screen, and
  • They would be easily dismissible, so the user could return to viewing the content.
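As a rough illustration of the first criterion, a popup’s height can be capped against the viewport before it is shown. This is a generic TypeScript sketch, not Skechers’ actual implementation, and the function names are made up:

```typescript
// Cap a popup at a fraction of the viewport so it stays within Google's
// "no more than 25%-30% of the screen" guideline.
function compliantMaxHeight(viewportHeight: number, fraction = 0.25): number {
  return Math.floor(viewportHeight * fraction); // e.g. a 1280px-tall screen allows a 320px popup
}

function applyHeightCap(popup: HTMLElement): void {
  popup.style.maxHeight = `${compliantMaxHeight(window.innerHeight)}px`;
  popup.style.overflowY = "auto"; // keep the content reachable if it overflows
}
```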

Campaign Scenarios

To comply with the search engine’s requirements while still using best UX practices for popups, Skechers decided to test three distinct mobile popups.

Note, this wasn’t an A/B test. Each version ran independently, and they differed by a number of criteria:

  • The timing of the display,
  • Popup size,
  • Position on the screen,
  • Display scenario.

Another important factor: there was no difference in what users saw – only in when and how a popup appeared on the screen. All three popups featured the same headline, copy, form, and call to action.

Option #1: A popup displaying immediately after a person landed on the site

Mobile-popup-displayed-on-landing

The first popup appeared at the very moment a person entered the site, regardless of the page on which they’ve landed.

It had no background overlay and displayed at the bottom of the screen, obstructing only a small portion of the page’s content.

A visitor could easily dismiss the popup by clicking on a clearly visible “x” button or simply anywhere outside of the popup and return to viewing their content.

Option #2: A full-size popup displaying on the second page visited

Mobile popup displayed after one page

The second version targeted engaged visitors who continued viewing the site beyond their landing page.

Skechers set up this popup to display on the second page a person has visited.

Screen capture of the popup settings - After one page

Moreover, they used a design that displayed the popup at the center of the screen and covered the rest of it with a semi-transparent background overlay. This prevented a visitor from viewing the content without dismissing the popup first. (Note: in the first version, a person could still scroll the content without closing the popup.)

Option #3: Hidden popup triggered by users through a call to action.

Mobile popup - Tab

With the final version, however, the company decided to try a unique approach. In this option, the website didn’t trigger the popup at all.

A visitor did.

Unlike the other two versions, where a popup would display automatically either on landing or after a visitor had taken a specific action (i.e., visiting more than one page), this option hid the popup until a person triggered it themselves.

For this, the company used an option to add a “Tab” –  a call to action prompting the visitor to launch the popup.

The call to action appeared on every page of the site, except for the checkout, giving a person the opportunity to act on it at a time that felt right for them.

You can spot the Tab at the bottom right corner of the image above – it’s the little corner button that says “Join.”

Once a person clicked on the CTA, the popup would act similarly to the second scenario – appearing in the middle of the screen, covering the rest with semi-transparent overlay.

Mobile popup displayed after one page
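For readers who want to picture how the three scenarios differ mechanically, here is a minimal vanilla TypeScript sketch of the trigger logic. It is our illustration only – not the popup tool Skechers actually used – and every identifier in it (showPopup, the pageview counter, the #join-tab selector) is made up:

```typescript
type Trigger = "on-landing" | "second-pageview" | "tab-click";

// Placeholder: render the email-capture popup however your tool does it.
function showPopup(options: { overlay: boolean; position: "bottom" | "center" }): void {
  console.log("show popup", options);
}

function initPopup(trigger: Trigger): void {
  switch (trigger) {
    case "on-landing":
      // Option #1: show immediately at the bottom of the screen, no overlay.
      showPopup({ overlay: false, position: "bottom" });
      break;

    case "second-pageview": {
      // Option #2: count pageviews in sessionStorage and show a centered,
      // overlaid popup once the visitor reaches their second page.
      const views = Number(sessionStorage.getItem("pageviews") ?? "0") + 1;
      sessionStorage.setItem("pageviews", String(views));
      if (views >= 2) showPopup({ overlay: true, position: "center" });
      break;
    }

    case "tab-click":
      // Option #3: stay hidden until the visitor clicks the "Join" tab themselves.
      document.querySelector("#join-tab")?.addEventListener("click", () =>
        showPopup({ overlay: true, position: "center" })
      );
      break;
  }
}
```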

Campaign Results

The third option, using a call to action to let visitors trigger the popup themselves, collected a staggering 48% more emails than the second-best version. And that’s in spite of the fact that its popup was displayed significantly fewer times than the other two.

Here’s the full breakdown of the results:

Mobile popups - ABC test results

 

Things to note:

All versions displayed to roughly the same number of visitors.

The number of displays varied between each version. But this was largely due to different display scenarios Skechers used for them.

For example, a popup triggered on landing would naturally display significantly more often than the second option. And the third option relied on a visitor to actually trigger the popup.

And yet, it’s the option with the call to action that generated the highest conversion rate (34% vs. 3.9%).

As Catherine Aliotta concludes:

“Mobile customer acquisition has been difficult to acquire for Skechers as there were so many factors to consider and we were blown away by the results from this study, proving that our customers prefer simplicity and are more likely to sign up to our database when they are not being disrupted during their browsing journey. It was amazing to see the difference in results just by changing small factors such as pop up timing and display whilst keeping with the same layout and copy. Key takeaway for us: Prominent and bold pop-ups and call to actions that work for us on desktop aren’t always the answer when it comes to mobile lead generation!”

Conclusions from the Case Study

Although each market, product, and audience are unique, there are certain conclusions we can draw from the Skechers example:

#1. Focusing on providing a great user experience trumps all

Skechers’ test clearly confirms that for smartphone users, UX beats everything else.

In spite of the first two popups being bigger and more prominent, it’s the inconspicuous call to action that delivered the results.

It would suggest that mobile visitors seem aware of the limitations of their devices and respond better to lead generation strategies that do not disrupt their browsing experience.

In fact, UX expert Nick Babich notes that on mobile there’s an increased need for simplicity:

“Cluttering your interface overloads your user with too much information: every added button, image, and line of text make the screen more complicated.”

#2. Timing of the display affects conversions

True, it’s tempting to just set up a popup to appear right when a person lands on your site and leave it at that.

However, if you trigger the offer too soon, you might just be disrupting visitors. They’ve only just landed on your site and will most likely become irritated at the interruption.

At the same time, if you delay the popup for too long, you might end up missing a lot of potential leads – users who leave the site before the popup even shows.

Here’s another example from one of our customers that illustrates this well. Note how, for this company, delaying a popup to allow a person to familiarize themselves with their content first gave a boost to conversions.

Popup timing - ABC test results

Naturally, not all popups should display after a time delay. For one, there are other ways to define when to trigger a popup – user activity on the page being one of them. For that reason, you should always test and find the best scenario for the particular segment of the audience you’re targeting.
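A delay- or activity-based trigger can be sketched just as simply. Again, this is illustrative only and reuses the showPopup placeholder from the earlier sketch:

```typescript
// Show the popup after a fixed delay, or earlier if the visitor scrolls past
// half of the page - two of the many trigger scenarios worth testing.
function initDelayedPopup(delayMs = 15000): void {
  let shown = false;
  const trigger = () => {
    if (shown) return;
    shown = true;
    showPopup({ overlay: true, position: "center" });
  };
  setTimeout(trigger, delayMs);
  window.addEventListener("scroll", () => {
    const scrollable = document.body.scrollHeight - window.innerHeight;
    if (scrollable > 0 && window.scrollY / scrollable > 0.5) trigger();
  });
}
```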

#3. Testing is the key to your campaign’s success

The Skechers example clearly demonstrates that to figure out what engages your audience, you need to test everything from different popup versions, sizes, and copy to display scenarios, timing, and a lot in between.

And that’s still just the tip of the iceberg.

In other words, testing is your key to unlocking your audience’s preferences and identifying what would spring them into action.

Conclusion

Given the size of the mobile traffic opportunity, it’s clear that unless you’re actively targeting it with lead generation strategies, you’re pretty much handing over the business to the competition.

Doing what Skechers did – testing different popup approaches to discover what engages mobile visitors – is the most effective way to overcome this and start converting that traffic into leads or sales.

 

The post Mastering Mobile Popups appeared first on CXL.