“Alexa, how do I A/B test my voice-enabled customer experience?”

The post “Alexa, how do I A/B test my voice-enabled customer experience?” appeared first on Brooks Bell.

Another Amazon Prime Day has come and gone, and we’re betting many in the e-commerce space are now breathing a little easier.

But looking beyond the challenges Amazon poses for other online retailers, the company’s foray into AI, smart speakers and similar devices presents a new opportunity for online brands to reach consumers: a channel that—in our humble opinion—is begging to be tested.

Alexa, Siri, Google Assistant, Jarvis, Watson…the list goes on

Various studies over the last year have shown that 20% of online searches are conducted using voice-based technologies, and this is expected to increase to 50% by 2020. This means the time to get on board with voice-based experiences is now—whether you’re looking to build your own AI assistant, make your site more friendly for voice searches, or simply identify where voice-based technologies can enhance your customer experience.

As part of a continuous effort to help our clients capitalize on new technologies and strategies to deliver a better customer experience, our development team recently challenged themselves to build an Alexa Skill with an integrated A/B test function (if you’re unfamiliar with the terminology, think of an Alexa Skill as similar to an app).

To explain how we went about this, it helps to have a general understanding of how Alexa works and where testing fits in:

Step 1: The Alexa-enabled device hears a “wake word” spoken by the user, and listens to the user’s request.

Step 2: The audio of that request is streamed from the device to the Alexa Server.

Step 3: The Alexa Server converts the audio to text and uses this to process the user’s intent.

Step 4: After processing, the Alexa Server sends the intent to a custom Alexa Skill, which is usually housed on a separate server.

Step 5: The Alexa Skill server processes that user request and determines how best to respond.

Step 6: This is where testing comes into play. As the Alexa Skill server is determining the best way to respond to the user’s request, your built-in testing tool triggers a control or challenger response. The Alexa Skill then responds to the request, sending the corresponding text response or visual media back to the Alexa Server.

Step 7: The Alexa Server then converts that text response to speech or renders whatever visual media was returned from the Skill.

Step 8: The Alexa Server sends that content to the device, which then plays it back to the user.
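The steps above can be sketched as a minimal server-side handler. To be clear, this is an illustrative sketch, not the team’s actual implementation: the event shape is simplified from the Alexa Skills Kit request JSON, the intent handler name and coupon codes are hypothetical, and a hash-based bucketing function stands in for a server-side testing tool like Optimizely.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "challenger")) -> str:
    """Deterministically bucket a user so repeat requests get the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def handle_find_coupons_intent(event: dict) -> dict:
    """Steps 5-6: decide how to respond to the user's intent, split by variant."""
    user_id = event["session"]["user"]["userId"]
    variant = assign_variant(user_id, "coupon_readout")
    if variant == "control":
        # Control: read every available coupon code at once.
        text = "Here are all current codes: SAVE10, SHIP99 and BOOK5."
    else:
        # Challenger: one code at a time, with the option to hear more.
        text = "Your first code is SAVE15. Say 'next' to hear another."
    # Steps 6-7: return Alexa-style JSON; the Alexa Server converts it to speech.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": False,
        },
    }
```

Because bucketing hashes the user ID, the same user hears the same experience every time they invoke the Skill—important when a voice interaction spans several turns.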

For the purposes of this challenge, our developers built an Alexa Skill for a fictitious online book retailer, Happy Reads. Although testing can be integrated using any server-side testing tool, the team chose to build Optimizely into our custom Alexa Skill as it’s a popular tool among our clients.

So, what does this mean for the customer experience?  

Here’s how our scenario would play out, as designed by our development team:

You’re making a purchase at your favorite online book retailer, Happy Reads, and you want to make sure you’re getting a good deal. So, as you browse the Happy Reads website, you open the Happy Reads skill and ask Alexa to find coupon codes.

In this A/B test, the control results in Alexa reading off multiple coupon codes at once. The challenger delivers only one coupon code at a time, with the option to search for more.

In this scenario, the testing team would identify the winning experience by assigning distinct coupon codes to the control and the challenger and tracking the number of purchases made using each code (note: to keep the variables consistent, it’s important that each code’s promotional value is the same).
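To make that evaluation concrete, here is a minimal sketch of how purchases attributed to each variant’s coupon code might be compared. The counts are hypothetical, and a simple two-proportion z-test stands in for whatever statistics your testing tool reports:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two redemption rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: purchases attributed via each variant's distinct code.
z = two_proportion_z(conv_a=120, n_a=4000,   # control: 3.0% redeemed
                     conv_b=160, n_b=4000)   # challenger: 4.0% redeemed
significant = abs(z) > 1.96  # |z| above ~1.96 corresponds to ~95% confidence
```

With these made-up numbers the challenger clears the 95% threshold; with real traffic you’d also want to fix the sample size in advance rather than peek at the results as they come in.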

Of course, this is just one means of implementing A/B testing in a voice-enabled environment. There are many other opportunities within customer service, onboarding and in-app search experiences, among others.

And this doesn’t only apply to e-commerce. Banking and financial services, insurance, healthcare and media are just a few examples of industries looking to voice technologies to enhance their customer experience.

So long as humans can speak faster than they can type, voice-enabled experiences present a powerful opportunity for brands to respond in real time to customer requests, to offer suggestions, and to position themselves as more of a service.

If you’re looking to implement experimentation within your voice-enabled experiences and other marketing channels, but don’t know where or how to start, contact us today.

“Oh, BEhave” Series: Our Behavioral Economics Journey Recap

The post “Oh, BEhave” Series: Our Behavioral Economics Journey Recap appeared first on Brooks Bell.

Beginning last October, Brooks Bell and Irrational Labs teamed up to tackle the topic of Behavioral Economics through a six-part blog series exploring the basics of behavioral economics and providing tips to help you apply it in your experimentation program. In this final blog, we’re going to revisit some of the big takeaways from each post.

Post 1: Creating Experiments with Behavioral Economics

Read more. Test more. The first step to understanding social science is to appreciate the complexities of humans. There is no ‘one size fits all’ solution to any problem.

In this post, we introduced you to Kristen Berman, co-founder of Irrational Labs. Irrational Labs uses the power of behavioral economics for good: helping companies improve the world by saving people money and encouraging healthy living, among other things. They often publish their findings in academic journals.

Here at Brooks Bell, we encourage the use of behavioral economics to understand customers, applying these principles to guide digital experimentation strategies.

Despite these two distinct applications of behavioral economics, our processes for implementing them are incredibly similar.

Post 2: Even Experimentation Experts Get Surprised

Expert intuition is not always correct. It’s important to use data and behavioral economics to better understand your users and their motivations – and experiment from there.

Although Kristen and I have run thousands of experiments between us, even we get caught off guard by test results from time to time. In this post, we shared examples of experiments that kept us guessing—highlighting one of Irrational Labs’ Google AdWords promotions and a test we ran on behalf of one of our clients, the Jimmy V Foundation.

Behavioral economics enables you to better understand the intricacies of human decision making–even when people’s actions surprise you. Applying this to testing gives you the best chance at creating a successful experience for your customers.

And while there’s no universal template for designing a customer experience, there’s a huge opportunity to test into the experience that works best for your customers. These examples highlighted the importance of having an experimental mindset rather than relying on your intuition.  

Post 3: Create a Powerhouse Methodology Using Quantitative and Qualitative Data Alongside Behavioral Economics

The only way to know if something affects another thing is to do a controlled trial.

This post detailed two different approaches to bringing quantitative data, qualitative data and behavioral economics together to create a powerhouse ideation methodology.  

Brooks Bell and Irrational Labs’ approaches to ideation are very similar, differing only in structure and terminology.

At Brooks Bell, we use an iterative, five-part structure that begins with pre-strategy data; dives into user needs, problem/opportunity identification and experiment brainstorming; and ends with a prioritization process. Behavioral economics is ingrained throughout the process but takes center stage during problem/opportunity identification and the experiment brainstorm.

Irrational Labs’ process begins with a literature review, followed by a quantitative data study and qualitative feedback. Then they run their controlled trial (a.k.a. an A/B test).

Of course, how you apply these processes within your organization depends on your resources, timeline and many other factors.

If you’re unsure where to start, let’s talk! Brooks Bell has years of experience building world-class testing programs and can help you build and implement an ideation methodology that’s specific to your team and your business goals.

Post 4: Ethical Experimentation: Using Behavioral Economics for Good

When we understand the power that companies have over our decisions, it becomes unethical when they do NOT experiment on their users.

In this post, we examined the topic of ethical experimentation: is it always right to experiment on your users? And how do you ensure you’re testing in the customer’s best interests?

Central to this debate is the imbalance of power between a company and its customers. Companies need to drive profit, and while driving profit may drive customer value, there are some situations where it could drive high prices and/or negative customer value. The most famous example of this? The tobacco industry.

But not experimenting is not an option. If a company doesn’t test a feature, it means they think the engineer who first designed the feature got it 100% right. This means the power to influence our decisions lies with the engineer who designed the feature and did so without any data on how the feature is influencing the end user.

So, how do you design a system to ensure noble intent with your experiments? First, focus on building tests with short-term and long-term value. Second, be transparent about your experiments: the online dating service OkCupid, for example, openly uses its customer data for research and publishes summaries of its insights.

Finally, give customers a means of public recourse if they feel they’ve been wronged. This not only empowers the user but also shows that your company prioritizes customer relationships over reputation or profit.

Post 5: Behavioral Principles to Know – And How to Use Them

If you get familiar with [behavioral] principles…your digital experiments are going to be more inspired, and better-informed than ever.

There are decades of behavioral science experiments at your fingertips that can be leveraged to better inform your digital experiments.  This post highlighted a few of them, including social proof, choice overload, goal gradient hypothesis and the sunk cost fallacy (among others).

We also provided a few tips and tricks to help you and your team become more familiar with these principles, with additional links to various resources (including one of my personal favorites, Dan Ariely’s Irrational Game).  

We had a lot of fun creating this series and hope you found it valuable. If you have additional thoughts or questions about behavioral economics and how to apply the principles to experimentation, let us know!  We’d love to help!

__

Suzi Tripp, Sr. Director of Experimentation Strategy
At Brooks Bell, Suzi sets the course of action for impact-driving programs while working to ensure the team is utilizing and advancing our test ideation methodology to incorporate quantitative data, qualitative data and behavioral economics principles. She has over 14 years of experience in the industry, and her areas of expertise include strategy, digital, communications and client service. Suzi has a BS in Business Management with a concentration in marketing from North Carolina State.

 

Kristen Berman, Co-founder of Irrational Labs, Author, Advisor & Public Speaker
Kristen helps companies and nonprofits understand and leverage behavioral economics to increase the health, wealth and happiness of their users. She also led the behavioral economics group at Google—a group that touches over 26 teams across the company—and hosts one of the top behavioral change conferences globally, StartupOnomics. With Dan Ariely, she co-authored a series of workbooks called Hacking Human Nature for Good: A practical guide to changing behavior. These workbooks are being used at companies like Google, Intuit, Netflix, Fidelity and Lending Club for business strategy and design work. Before designing, testing and scaling products that use behavioral economics, Kristen was a senior product manager at Intuit and at the camera startup Lytro. Kristen is an advisor for Loop Commerce, the Code for America Accelerator and the Genr8tor Incubator, and has spoken at Google, Facebook, Fidelity, Equifax, Stanford, the Bay Area Computer Human Interaction seminar and more.

Moving the needle: Strategic metric setting for your experimentation program

Once you have your metrics and KPIs set, you’ll want to devise a system for tracking and sharing your results.

The post Moving the needle: Strategic metric setting for your experimentation program appeared first on WiderFunnel Conversion Optimization.

Eight Mistakes Every Rookie Experimentation Team Makes (& How to Avoid Them)

The post Eight Mistakes Every Rookie Experimentation Team Makes (& How to Avoid Them) appeared first on Brooks Bell.

If testing were easy, everyone would do it. But that’s simply not the case.

Rather, experimentation requires extensive learning, hard work, lots of experience and a unique set of skills to consistently produce results that have a meaningful impact on a business.

At Brooks Bell, we’ve spent the last fifteen years helping global brands build and scale their experimentation programs. Drawing on that experience, we’ve identified eight common mistakes that rookie experimentation teams often make in the first few months of testing. We’ve also detailed useful and actionable strategies to help you navigate these challenges as you work to establish your optimization program. 

Mistake #1: Testing Without a Hypothesis

While the importance of a hypothesis may seem obvious, it’s easy to get swept up in the excitement and politics of launching the first test without realizing you haven’t determined a clear expectation for the outcome.

Without a defined hypothesis, it’s difficult (if not impossible) to make sense of your test results. Additionally, a well-articulated hypothesis can shape the entire test creation, evaluation, analysis and reporting process. It also makes communication and coordination across teams—another common challenge for new programs—much easier.
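One lightweight way to enforce this is to make the hypothesis a required artifact of the test configuration itself, so a test simply can’t launch without one. The fields and the example below are illustrative, not a prescribed Brooks Bell template:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A test shouldn't launch until every field here is filled in."""
    observation: str       # what the data or research showed
    change: str            # what the variation does differently
    expected_outcome: str  # the behavior you expect to change, and why
    success_metric: str    # the single KPI that decides the winner

# Hypothetical example for an e-commerce checkout test.
checkout_test = Hypothesis(
    observation="Analytics show heavy drop-off at the shipping-cost step.",
    change="Surface the free-shipping threshold on the product page.",
    expected_outcome="Fewer users abandon once shipping costs appear.",
    success_metric="checkout completion rate",
)
```

Writing the hypothesis down in this shape also gives analysis and reporting a fixed reference point: when results come in, you compare them to the expected outcome you committed to before launch.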

Learn strategies to avoid this misstep, along with seven other common rookie mistakes for newly established experimentation teams. Get the white paper.

How to Get More Google Reviews using Neuroscience

Reviews can make or break your business. Here’s how to get more on Google.
The post How to Get More Google Reviews using Neuroscience appeared first on Neuromarketing.
