Part 2: Our Top Takeaways from Click Summit 2018



Last week, we shared the first of many takeaways from Click Summit 2018, our annual conference for professionals in digital experimentation and personalization. This week, we’re back with more insights from each impactful conversation, inspired by this year’s edition of Clickaways.

1. Manage the three P’s of scaling your testing program: people, process, prioritization.

Many companies have found it more effective to establish a dedicated optimization team rather than having these duties dispersed across the organization. However, if that’s not possible for you, let your Center of Excellence take the lead on defining key processes, training and developing a maturity model to determine when each team is ready to start testing.

Develop a formal process for submitting, presenting, prioritizing and executing new testing ideas. Using various automation technologies can further simplify these steps.
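To make the “prioritizing” step concrete, here’s a minimal sketch of one common scoring approach, an ICE-style model (impact, confidence, ease). The fields, weights and example ideas below are purely illustrative, not a prescribed framework:

    # Illustrative ICE-style scoring for a test-idea backlog.
    # All fields, scales and ideas here are hypothetical; adapt to your program.
    from dataclasses import dataclass

    @dataclass
    class TestIdea:
        name: str
        impact: int      # expected business impact, 1-10
        confidence: int  # strength of supporting evidence, 1-10
        ease: int        # ease of implementation, 1-10 (10 = easiest)

        @property
        def ice_score(self) -> float:
            return (self.impact + self.confidence + self.ease) / 3

    backlog = [
        TestIdea("Simplify checkout form", impact=8, confidence=6, ease=4),
        TestIdea("New homepage hero image", impact=5, confidence=7, ease=9),
    ]

    # Execute the highest-scoring ideas first.
    for idea in sorted(backlog, key=lambda i: i.ice_score, reverse=True):
        print(f"{idea.name}: {idea.ice_score:.1f}")

A shared score like this keeps prioritization debates grounded in the same criteria, whatever tool or spreadsheet you use to track it.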

Additionally, agree to one source of truth for your test results across multiple platforms. Companies that have various groups looking at different data sources struggle to establish the necessary credibility to scale their programs. This is one area where a knowledge platform that houses testing results, insights and ideas (like Brooks Bell’s Illuminate platform, or Optimizely’s Program Management) can help.

Finally, growing your experimentation program comes with the expectation of more tests, executed faster. When determining your velocity goals, be sure to consider quality over quantity: always prioritize running a few high-quality tests over many low-impact ones.

2. Personalization and optimization teams should remain separate functions with connected but distinct goals.

Personalization is a worthwhile investment for any online industry, but it has to be adopted as a company-wide strategy in order to ensure you’re delivering a consistent customer experience.

To get the most out of your investment, establish a separate personalization team to run your program rather than looking to your existing experimentation team. Here are a few reasons for this: First, personalization is a longer-term strategy and “wins” occur at a much slower rate. Additionally, while there are similarities between A/B testing and personalization technologies, the questions you ask and the answers you get are very different.

Finally, running split tests is inherently easier and faster than implementing personalization. So long as your team is overseeing both functions, they’re likely to focus more on testing than personalization.

3. Focus on organizational outputs and customer insights, not just test outcomes.

Oftentimes, experimentation professionals find themselves nearest to the customer. Sure, you may not speak with them directly, but your work can have a direct effect on your customers’ experience and brand perception. That’s a lot of power, but also a lot of opportunity.

So here’s the challenge: Go beyond simple tests like button color or checkout features and consider the bigger picture. Use testing to seek out insights that would be useful for other departments within your organization.

Here at Brooks Bell, we have our own framework for doing this (and we’d be happy to tell you about it). Even without our services, we’d encourage you to take a step back from individual test outcomes, spot trends and use them to develop testable customer theories.

Developing a customer theory requires you to conduct a deeper interpretation of your results–so don’t do it alone. Look to your working team to brainstorm customer theories and additional tests to validate or invalidate them. Bring in additional data sources like NPS, VOC or qualitative research to paint a more detailed picture of your customers.

Doing this can have huge implications for your customers, your experimentation program and your brand overall.

4. Build a program that strikes the perfect balance of innovation and ROI.

In order for creativity to flourish within your experimentation program, you have to establish clear goals. These are used as a framework within which your team can look for opportunities to innovate.

Develop a process for brainstorming test ideas that encourages participation and creative thinking, like using Post-It notes.

Finally, demonstrate a willingness to take calculated risks in order to make room for creativity in your optimization strategy. There is always something to be learned from negative or flat results.

Like the information in this post? Download this year’s Clickaways to access more tips, tricks and ideas from Click Summit 2018.


Part 1: Our Top Takeaways from Click Summit 2018



Another year, another epically productive Click Summit. In the weeks since Click Summit 2018, we’ve spent some time reflecting on the event, and our heads are still reeling from the depth and quality of each conversation.

This event isn’t your run-of-the-mill marketing conference. We strive to create an intimate and super-productive experience in our small group conversations. Of course, the true credit goes to our attendees and moderators for their candid participation. It takes a certain level of vulnerability to look to others for feedback and direction. Those types of conversations are where the true insights come to light.

Had to sit out Click Summit this year? You’re in luck. We’ve compiled the key takeaways from each of the 22 thought-provoking conversations into an easy-to-read, downloadable resource.

Here’s our summary of some of the insights you’ll find in this year’s Clickaways:

1. Relationships are key to creating buy-in for experimentation. Get to the right meetings and make the right connections. Target influential leaders to gain traction and credibility for your program. Build working partnerships with other teams, taking time to understand their goals. Work with them to make testing and personalization part of the solution.

Finally, know that proving people wrong doesn’t create buy-in. Rather, invite other departments to participate in your program and frame your tests as an opportunity to learn together. Hold monthly or bi-weekly meetings with direct and indirect stakeholders to review test wins, brainstorm new tests and discuss any resulting customer insights.

2. Instill testing in your company culture through establishing a credible team and program. Trust is easily lost, so you really need to take steps to ensure your team is positioned as a source of truth for the business, rather than one that’s encroaching on other departments. Your team should not only be experts in optimization and behavioral economics, but also experts in your customers–know their behaviors online, what motivates them and what truly makes them tick.

Hold training sessions on best practices for testing, personalization and customer insights. Regularly communicate test results and any subsequent insights to the entire company. And when sharing results, consider your audience: it may be worth creating different reporting formats for different stakeholders.

3. If you want to build an army of optimization evangelists, you’ve gotta get everyone on the same page first. So long as end-to-end optimization requires working across multiple teams, it’s important that you establish clear processes and governance. Develop a common language for testing terminology; abandon jargon in favor of words that are easy to understand and don’t carry multiple meanings.

Set clear rules of engagement and expectations between all teams involved in optimization. This includes engineering, IT, analytics, marketing, creative and others. Make sure communication and reporting processes are defined and any associated technologies are being used consistently.

Finally, take into account how success is measured for all these other stakeholders. Not all teams are incentivized with revenue targets or conversion goals. Connect your test strategy to their objectives to ensure a unified vision.

Like the information in this post? Stay tuned for part two next week. Until then, download this year’s Clickaways to access more tips, tricks and ideas from Click Summit 2018.


Confidence Intervals & P-values for Percent Change / Relative Difference


In many controlled experiments, including online controlled experiments (a.k.a. A/B tests), the result of interest, and hence the inference made, is about the relative difference between the control and treatment group. In A/B testing as part of conversion rate optimization, and in marketing experiments in general, we use the term “percent lift” (“percentage lift”) while in […]
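For readers who want the gist before clicking through: percent lift is the relative difference (treatment rate minus control rate, divided by the control rate), and an approximate confidence interval for it can be built with the delta method. The sketch below is a generic illustration of that idea with made-up numbers, not the article’s exact derivation:

    # Generic sketch: percent lift with a delta-method confidence
    # interval (illustrative; not the article's exact derivation).
    from math import sqrt

    def lift_ci(conv_c, n_c, conv_t, n_t, z=1.6449):  # z for a 90% two-sided CI
        p_c, p_t = conv_c / n_c, conv_t / n_t
        lift = p_t / p_c - 1                 # percent lift (relative difference)
        rel_var_c = (1 - p_c) / (n_c * p_c)  # relative variance of each rate
        rel_var_t = (1 - p_t) / (n_t * p_t)
        se = (p_t / p_c) * sqrt(rel_var_c + rel_var_t)  # delta-method SE of the ratio
        return lift, lift - z * se, lift + z * se

    lift, lo, hi = lift_ci(conv_c=500, n_c=10_000, conv_t=560, n_t=10_000)
    print(f"lift = {lift:.1%}, 90% CI ({lo:.1%}, {hi:.1%})")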

Testing Your App Listing in the Google Play Store



Recently, while attending a native app session at our annual conference, Click Summit, it was brought to my attention that not many people know about the ability to run A/B tests on their Google Play Store app listing.  

Testing your store listing can be an untapped area for gaining key insights about your customers and increasing app installs. Additionally, the insights you gain by testing your Google Play Store listings could be transferable to your Apple App Store listing as well.

Within the Google Play Console, there’s a little-known tool called “store listing experiments” that allows you to A/B test your app listing. You’ll find it under the “store presence” menu item.

There’s no need for technical resources or technical knowledge, as the capability is built right into the Google Play Console. The Store Listing Experiments feature allows you to A/B test six different attributes of the store listing: hi-res icon, feature graphic, screenshots, promo video, short description and long description. Tests can cover these attributes individually or in combination. You can run tests globally (graphics only) or localized (text and graphics). Note that you are limited to three variations in a test.
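Those constraints are easy to summarize as a pre-flight check before you set up a test. The helper below is hypothetical, just a way of encoding the rules described above; it is not part of any Google API:

    # Hypothetical pre-flight check mirroring the Store Listing
    # Experiments constraints described above (not a Google API).
    TESTABLE_ATTRIBUTES = {
        "hi_res_icon", "feature_graphic", "screenshots",
        "promo_video", "short_description", "long_description",
    }
    GRAPHICS_ONLY = {"hi_res_icon", "feature_graphic", "screenshots", "promo_video"}
    MAX_VARIANTS = 3

    def validate_experiment(attributes: set, variants: int, localized: bool) -> None:
        unknown = attributes - TESTABLE_ATTRIBUTES
        if unknown:
            raise ValueError(f"untestable attributes: {unknown}")
        if variants > MAX_VARIANTS:
            raise ValueError(f"limited to {MAX_VARIANTS} variations per test")
        # Global (default-listing) experiments may only vary graphics;
        # localized experiments may vary text and graphics.
        if not localized and attributes - GRAPHICS_ONLY:
            raise ValueError("global experiments can test graphics only")

    validate_experiment({"hi_res_icon", "screenshots"}, variants=2, localized=False)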

The analytics and reporting are all housed within the Google Play Console and, unfortunately, cannot be exported. Three metrics are automatically tracked: installs on active devices, installs by user and uninstalls by user. Results are reported with 90% confidence intervals.

For more details, check out Google’s step-by-step documentation.

When it comes to experimentation, Brooks Bell is happy to lend our expertise to help your optimization program expand its reach, capabilities and impact. This can include testing everything from store listings to landing pages to checkout experiences and more. If you’re interested in learning more about Brooks Bell and how we can help optimize your web experiences, contact us today.


How to Use Personalization to Enhance Your Existing Optimization Program



We know an A/B test can lead to powerful insights. However, the information gained from traditional A/B tests tends to be focused on what’s best for the majority of users – not every individual user. That’s where personalization comes in.

Personalization enables you to leverage the specific wants and needs of each individual user on your site. This can lead to even more substantial results–higher conversion rates, deeper engagement with your site, and increased revenue.

Many of the traditional testing tools–Adobe Target, Optimizely and Maxymiser–have personalization capabilities available. There has also been an emergence of companies like Dynamic Yield and Evergage, which offer personalization technology as their core focus.

As technology in this space improves, personalization has become a major focus for many Brooks Bell clients. However, the question we’re often asked is not whether to implement personalization alongside existing optimization efforts – rather, it’s how to do it.

Luckily, there are many ways to do just that. For the purposes of this blog post, we’ve outlined two relatively simple strategies for implementing personalization alongside your existing optimization program.

Strategy 1: Rule-based targeting

Rule-based targeting is a personalization technique that’s available on most A/B testing platforms. Instead of targeting all users, you target an experience to a specific segment of users: new or returning users; mobile or desktop users; or users in a specific location.

Because these different types of users are interacting with your site differently, you’ll likely see higher returns by personalizing your content to each group.
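Conceptually, rule-based targeting is just a decision over user attributes. Here’s a minimal sketch with hypothetical rules and experience names (in practice you’d configure this in your testing platform’s UI rather than in code):

    # Minimal sketch of rule-based targeting. The rules and experience
    # names are hypothetical; real platforms configure this in their UI.
    def choose_experience(user: dict) -> str:
        if user.get("is_returning"):
            return "returning_visitor_homepage"
        if user.get("device") == "mobile":
            return "mobile_first_homepage"
        if user.get("region") == "EU":
            return "eu_localized_homepage"
        return "default_homepage"  # everyone else sees the default

    print(choose_experience({"is_returning": False, "device": "mobile"}))
    # -> mobile_first_homepage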

You can also apply rule-based targeting after running a traditional A/B test, by breaking down your results by those specific user segments. In doing so, you may find that a “winning” homepage experience performed very well among new users, but was flat for returning users.

Though pushing the winning variation live to all users would increase revenue, you might see a bigger increase if you were to push it live to new users only. This gives way to additional opportunities to test different strategies for returning visitors.
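Here’s an illustration of that kind of segment breakdown, using synthetic numbers; the point is that an overall “winner” can hide very different behavior per segment:

    # Illustrative post-test segment breakdown (synthetic numbers).
    results = {
        # segment: {arm: (visitors, conversions)}
        "new":       {"control": (8_000, 320), "variant": (8_000, 400)},
        "returning": {"control": (6_000, 420), "variant": (6_000, 422)},
    }

    for segment, arms in results.items():
        rates = {arm: conv / n for arm, (n, conv) in arms.items()}
        lift = rates["variant"] / rates["control"] - 1
        print(f"{segment}: control {rates['control']:.1%}, "
              f"variant {rates['variant']:.1%}, lift {lift:+.1%}")
    # New visitors show a clear lift; returning visitors are flat.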

Strategy 2: Predictive personalization

Many testing platforms now offer predictive personalization, which works in real time to learn which experiences are ideal for certain types of users.

A predictive personalization “test” runs indefinitely – and adjusts as users’ preferences change over time, showing the optimal experience to each user.
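Under the hood, tools like these typically rely on multi-armed bandit algorithms. The epsilon-greedy sketch below is deliberately simplified and is not any vendor’s actual implementation, but it shows the core loop: exploit what’s working while still exploring occasionally so the model can adapt as preferences drift.

    # Deliberately simplified epsilon-greedy bandit sketch; not any
    # vendor's actual implementation.
    import random

    class EpsilonGreedy:
        def __init__(self, experiences, epsilon=0.1):
            self.epsilon = epsilon
            self.shows = {e: 0 for e in experiences}
            self.wins = {e: 0 for e in experiences}

        def conversion_rate(self, e: str) -> float:
            # Optimistic prior so unseen experiences get tried first.
            return self.wins[e] / self.shows[e] if self.shows[e] else 1.0

        def choose(self) -> str:
            # Explore a small fraction of the time so the model keeps
            # adapting as user preferences change.
            if random.random() < self.epsilon:
                return random.choice(list(self.shows))
            return max(self.shows, key=self.conversion_rate)

        def record(self, experience: str, converted: bool) -> None:
            self.shows[experience] += 1
            self.wins[experience] += int(converted)

    bandit = EpsilonGreedy(["hero_a", "hero_b", "hero_c"])
    shown = bandit.choose()
    bandit.record(shown, converted=True)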

Predictive targeting technology is exciting for many reasons. It accounts for the fact that a winner from a year ago might not be the best option for your users now.  

The technology also makes it easier to figure out the best option for short term website changes, like a holiday promotion–for which A/B testing is not a viable option due to time constraints.

Additionally, having the ability to step back and leave the analysis to the computer – instead of spending the time analyzing data yourself – is a huge benefit to experimentation professionals and the companies they work for.  

There are, of course, potential pitfalls to this form of personalization.

When you run a traditional A/B test with a clear winner across all users, it’s easy to make the decision to build the winning code into your site. However, with predictive personalization, you may have many different versions of a page for different segments of users, and continue relying on the testing tool to deliver the code, never building it into your site.

This can be risky for a few reasons: it can increase load time; and if, over time, other updates are made to your site, those updates could break the experience.

Additionally, you’ll want to make sure you trust that the machine learning algorithms are actually making the best decisions for your users. To that end, many platforms offer a control experience which segments users randomly. You can then compare metrics from the control against the personalized segments to ensure the algorithm is working optimally.
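That holdout check is simple arithmetic once the platform reports the numbers. With synthetic figures (a 20% random control and hypothetical conversion counts):

    # Sketch of the holdout check described above (synthetic numbers):
    # compare a randomly assigned control against the personalized group.
    control = {"visitors": 20_000, "conversions": 900}         # random experiences
    personalized = {"visitors": 80_000, "conversions": 4_240}  # algorithm-chosen

    rate_c = control["conversions"] / control["visitors"]
    rate_p = personalized["conversions"] / personalized["visitors"]
    print(f"control {rate_c:.2%} vs personalized {rate_p:.2%}, "
          f"lift {rate_p / rate_c - 1:+.1%}")
    # A flat or negative lift means the algorithm isn't earning its keep.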

Personalization offers the opportunity to gain new insights about your users and deliver the most valuable content for each individual. Incorporating personalization into your testing program is certainly worth the investment, with the potential for huge rewards.

At Brooks Bell, our Personalization Jumpstart program enables enterprise optimization teams to incorporate and scale personalization strategies into their existing optimization programs. To learn more about our services, contact us today.


Affordable A/B Tests: Google Optimize & AGILE A/B Testing


The problem most often faced by website owners who want to take a scientific approach to improving their sites through A/B testing is that their revenue may be relatively small. Thus, when the ROI calculation for the A/B test is done, it might turn out that testing is economically unfeasible. In some cases, […]

The Google Optimize Statistical Engine and Approach


Google Optimize is the latest attempt from Google to deliver an A/B testing product. Previously we had “Google Website Optimizer”, then we had “Content Experiments” within Google Analytics, and now we have the latest iteration: Google Optimize. While working on the integration of our A/B Testing Calculator with Google Optimize I was curious to see […]

Bayesian vs Frequentist A/B Testing – What’s the Difference?

Bayesian versus Frequentist Statisticians: the war is real

Imagine that you wake up one morning and you don’t remember anything from your previous life. You’ve erased all memories from…

20-80% Faster A/B Tests? Is it real?


I got a question today about our AGILE A/B testing calculator and the statistics behind it, and realized that I have yet to write a dedicated post explaining the efficiency gains from using the method in more detail. This despite the fact that these speed gains are clearly communicated and verified through simulation results presented in our AGILE […]

Case Study: Getting consecutive +15% winning tests for software vendor, Frontline Solvers
