Part 2: Our Top Takeaways from Click Summit 2018


Last week, we shared the first of many takeaways from Click Summit 2018, our annual conference for professionals in digital experimentation and personalization. This week, we’re back with more insights from each impactful conversation, inspired by this year’s edition of Clickaways.

1. Manage the three P’s of scaling your testing program: people, process, prioritization.

Many companies have found it more effective to establish a dedicated optimization team rather than dispersing these duties across the organization. However, if that’s not possible for you, let your Center of Excellence take the lead on defining key processes, providing training and developing a maturity model to determine when each team is ready to start testing.

Develop a formal process for submitting, presenting, prioritizing and executing new testing ideas. Using various automation technologies can further simplify these steps.
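To make the prioritization step concrete, here’s a minimal sketch of one common approach, an ICE-style score (impact, confidence, ease). This is illustrative only, not a Brooks Bell framework; the fields, scales and weights are hypothetical.

```python
# Illustrative only: scoring submitted test ideas with an ICE-style model
# (Impact, Confidence, Ease). Fields, scales and weights are hypothetical.
from dataclasses import dataclass


@dataclass
class TestIdea:
    name: str
    impact: int      # expected business impact, 1-10
    confidence: int  # confidence in the hypothesis, 1-10
    ease: int        # ease of implementation, 1-10

    @property
    def score(self) -> float:
        # Simple average; a real program might weight these differently.
        return (self.impact + self.confidence + self.ease) / 3


ideas = [
    TestIdea("Simplify checkout form", impact=8, confidence=6, ease=4),
    TestIdea("New homepage hero copy", impact=5, confidence=7, ease=9),
]

# Highest-scoring ideas float to the top of the backlog.
for idea in sorted(ideas, key=lambda i: i.score, reverse=True):
    print(f"{idea.name}: {idea.score:.1f}")
```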

Additionally, agree on one source of truth for your test results across multiple platforms. Companies where different groups look at different data sources struggle to establish the credibility needed to scale their programs. This is one area where a knowledge platform that houses testing results, insights and ideas (like Brooks Bell’s Illuminate platform, or Optimizely’s Program Management) can help.
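As a rough illustration of what a single source-of-truth record might capture, a centralized entry for each test could look something like the sketch below. This is a hypothetical data model, not the schema of Illuminate, Optimizely Program Management or any other product.

```python
# Hypothetical record for a centralized testing knowledge base.
# Not the actual data model of Illuminate, Optimizely or any other platform.
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class TestRecord:
    test_id: str
    hypothesis: str
    primary_metric: str        # e.g. "checkout conversion rate"
    data_source: str           # the agreed-upon source of truth
    start_date: date
    end_date: date
    result: str                # "win", "loss" or "flat"
    lift_pct: float            # observed lift on the primary metric
    insights: List[str] = field(default_factory=list)


record = TestRecord(
    test_id="T-042",
    hypothesis="A shorter checkout form increases completion",
    primary_metric="checkout conversion rate",
    data_source="analytics_warehouse",  # every team reports from the same source
    start_date=date(2018, 5, 1),
    end_date=date(2018, 5, 21),
    result="win",
    lift_pct=3.2,
    insights=["Mobile users are most sensitive to form length"],
)
```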

Finally, growing your experimentation program comes with the expectation of more tests, executed faster. When setting your velocity goals, be sure to weigh quality against quantity: always prioritize running a few high-quality tests over many low-impact ones.
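One concrete reason to favor fewer, higher-quality tests: each test needs enough traffic to detect the effect it’s expected to produce. The quick estimate below uses the standard two-proportion sample-size approximation (not a Brooks Bell calculation), and the baseline rate and lifts are made-up numbers.

```python
# Rough sample-size estimate for a two-variant conversion test,
# using the standard two-proportion z-test approximation.
# The baseline rate and lifts below are illustrative only.
from statistics import NormalDist


def visitors_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect the expected lift."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2


# A minor tweak expected to lift a 3% conversion rate by 2% (relative)
# needs roughly 1.3 million visitors per variant...
print(round(visitors_per_variant(0.03, 0.02)))
# ...while a bolder change expected to lift it by 10% needs roughly 53,000.
print(round(visitors_per_variant(0.03, 0.10)))
```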

2. Personalization and optimization teams should remain separate functions with connected but distinct goals.

Personalization is a worthwhile investment for any online industry, but it has to be adopted as a company-wide strategy in order to ensure you’re delivering a consistent customer experience.

To get the most out of your investment, establish a separate personalization team to run your program rather than looking to your existing experimentation team. Here are a few reasons for this: First, personalization is a longer-term strategy and “wins” occur at a much slower rate. Additionally, while there are similarities between A/B testing and personalization technologies, the questions you ask and the answers you get are very different.

Finally, running split tests is inherently easier and faster than implementing personalization. So long as your team is overseeing both functions, they’re likely to focus more on testing than personalization.

3. Focus on organizational outputs and customer insights, not just test outcomes.



Oftentimes, experimentation professionals find themselves nearest to the customer. Sure, you may not speak with them directly, but your work can have a direct effect on your customers’ experience and brand perception. That’s a lot of power, but also a lot of opportunity.

So here’s the challenge: Go beyond simple tests like button color or checkout features and consider the bigger picture. Use testing to seek out insights that would be useful for other departments within your organization.

Here at Brooks Bell, we have our own framework for doing this (and we’d be happy to tell you about it). Even if you’re not working with us, we’d encourage you to take a step back from individual test outcomes, spot trends and use them to develop testable customer theories.

Developing a customer theory requires a deeper interpretation of your results, so don’t do it alone. Look to your working team to brainstorm customer theories and additional tests to validate or invalidate them. Bring in additional data sources like NPS, VOC or qualitative research to paint a more detailed picture of your customers.

Doing this can have huge implications for your customers, your experimentation program and your brand overall.

4. Build a program that strikes the perfect balance of innovation and ROI.

In order for creativity to flourish within your experimentation program, you have to establish clear goals. These goals act as a framework within which your team can look for opportunities to innovate.

Develop a process for brainstorming test ideas that encourages participation and creative thinking, like using Post-It notes.



Finally, demonstrate a willingness to take calculated risks in order to make room for creativity in your optimization strategy. There is always something to be learned from negative or flat results.

Like the information in this post? Download this year’s Clickaways to access more tips, tricks and ideas from Click Summit 2018.


Part 1: Our Top Takeaways from Click Summit 2018


Another year, another epically productive Click Summit. In the weeks since Click Summit 2018, we’ve spent some time reflecting on the event, and our heads are still reeling from the depth and quality of each conversation.

This event isn’t your run-of-the-mill marketing conference. We strive to create an intimate and super-productive experience in our small group conversations. Of course, the true credit goes to our attendees and moderators for their candid participation. It takes a certain level of vulnerability to look to others for feedback and direction. Those types of conversations are where the true insights come to light.

Had to sit out Click Summit this year? You’re in luck. We’ve compiled the key takeaways from each of the 22 thought-provoking conversations into an easy-to-read, downloadable resource.

Here’s our summary of some of the insights you’ll find in this year’s Clickaways:

1. Relationships are key to creating buy-in for experimentation. Get to the right meetings and make the right connections. Target influential leaders to gain traction and credibility for your program. Build working partnerships with other teams, taking time to understand their goals. Work with them to make testing and personalization part of the solution.



Finally, know that proving people wrong doesn’t create buy-in. Rather, invite other departments to participate in your program and frame your tests as an opportunity to learn together. Hold monthly or bi-weekly meetings with direct and indirect stakeholders to review test wins, brainstorm new tests and discuss any resulting customer insights.

2. Instill testing in your company culture by establishing a credible team and program. Trust is easily lost, so you need to take steps to ensure your team is positioned as a source of truth for the business, rather than one that’s encroaching on other departments. Your team should not only be experts in optimization and behavioral economics, but also experts in your customers: know their behaviors online, what motivates them and what truly makes them tick.

Hold training sessions on best practices for testing, personalization and customer insights. Regularly communicate test results and any subsequent insights to the entire company. And when sharing results, consider your audience: it may be worth creating different reporting formats for different stakeholders.

3. If you want to build an army of optimization evangelists, you’ve gotta get everyone on the same page first. Because end-to-end optimization requires working across multiple teams, it’s important to establish clear processes and governance. Develop a common language for testing terminology; abandon jargon in favor of words that are easy to understand and don’t carry multiple meanings.

Set clear rules of engagement and expectations between all teams involved in optimization. This includes engineering, IT, analytics, marketing, creative and others. Make sure communication and reporting processes are defined and any associated technologies are being used consistently.

Finally, take into account how success is measured for all these other stakeholders. Not all teams are incentivized with revenue targets or conversion goals. Connect your test strategy to their objectives to ensure a unified vision.

Like the information in this post? Stay tuned for part two next week. Until then, download this year’s Clickaways to access more tips, tricks and ideas from Click Summit 2018.


The Persuasive Power of Privileged Moments


Check out this video: Ok, now check out a similar technique: Or how about the same general technique used for another cause: And just so you can really see the pattern, how about this: In every case, the ad’s effectiveness comes from what Cialdini calls Pre-Suasion and the creation of “Privileged Moments.” Cialdini’s overall thesis is that the […]

Demo vs. Dramatization


What’s the difference between a demo and a dramatization? You might have a different answer, but for me, it boils down to:

Character vs. Salesperson
Scenario vs. Situation
Focus on Stakes vs. Focus on Feature

A demo, in essence, is a salesperson putting a product in a situation wherein it can showcase the operation of […]