“Alexa, how do I A/B test my voice-enabled customer experience?”

The post “Alexa, how do I A/B test my voice-enabled customer experience?” appeared first on Brooks Bell.

Another Amazon Prime Day has come and gone, and we’re betting many in the e-commerce space are now breathing a little easier.

But looking beyond the challenges Amazon poses for other online retailers, the company’s foray into AI, smart speakers and similar devices presents a new opportunity for online brands to reach consumers; a channel that—in our humble opinion—is begging to be tested.  

Alexa, Siri, Google Assistant, Jarvis, Watson…the list goes on

Various studies over the last year have shown that 20% of online searches are conducted using voice-based technologies, and this is expected to increase to 50% by 2020. This means the time to get on board with voice-based experiences is now—whether you’re looking to build your own AI assistant, make your site friendlier for voice searches, or simply identify where voice-based technologies can enhance your customer experience.

As part of a continuous effort to help our clients capitalize on new technologies and strategies to deliver a better customer experience, our development team recently challenged themselves to build an Alexa Skill with an integrated A/B test function (if you’re unfamiliar with the terminology, think of an Alexa Skill as similar to an app).

To explain how we went about this, it helps to have a general understanding of how Alexa works and where testing fits in:

Step 1: The Alexa-enabled device hears a “wake word” spoken by the user, and listens to the user’s request.

Step 2: The audio of that request is streamed from the device to the Alexa Server.

Step 3: The Alexa Server converts the audio to text and uses this to process the user’s intent.

Step 4: After processing, the Alexa Server sends the intent to a custom Alexa Skill, which is usually housed on a separate server.

Step 5: The Alexa Skill server processes that user request and determines how best to respond.

Step 6: This is where testing comes into play. As the Alexa Skill server is determining the best way to respond to the user’s request, your built-in testing tool triggers a control or challenger response. The Alexa Skill then responds to the request, sending the corresponding text response or visual media back to the Alexa Server.

Step 7: The Alexa Server then converts that text response to speech or renders whatever visual media was returned from the Skill.

Step 8: The Alexa Server sends that content to the device, which plays it back to the user.
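To make Steps 5 and 6 concrete, here’s a minimal sketch of how a skill’s server-side handler might split users between a control and a challenger response. Everything here is illustrative—the intent name, the response copy, and the coupon codes are assumptions, not the actual Happy Reads implementation—and a hosted tool like Optimizely would handle the bucketing (plus targeting and metrics tracking) for you. The sketch uses only the Python standard library:

```python
import hashlib

# Hypothetical response copy for a coupon-codes intent.
# The codes and wording are made up for illustration.
RESPONSES = {
    "control": "Here are your coupon codes: SAVE10, FREESHIP, and BOGO50.",
    "challenger": "Your first coupon code is SAVE10. Want to hear another?",
}


def assign_variant(user_id: str, experiment: str = "coupon_readout") -> str:
    """Deterministically bucket a user into control or challenger.

    Hashing (experiment + user_id) means the same user always hears
    the same experience, with no assignment state to store.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # 0-99
    return "control" if bucket < 50 else "challenger"


def handle_coupon_intent(user_id: str) -> str:
    """Steps 5-6: pick the response text for the coupon intent.

    In a real skill, this text would be wrapped in an Alexa response
    object and returned to the Alexa Server for text-to-speech.
    """
    return RESPONSES[assign_variant(user_id)]
```

Because assignment is a pure function of the user ID, a returning user is never flip-flopped between experiences mid-test—one of the basic requirements for a clean A/B test.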

For the purposes of this challenge, our developers built an Alexa Skill for a fictitious online book retailer, Happy Reads. Although testing can be integrated using any server-side testing tool, the team chose to build Optimizely into our custom Alexa Skill as it’s a popular tool among our clients.

So, what does this mean for the customer experience?  

Here’s how our scenario would play out, as designed by our development team:

You’re making a purchase at your favorite online book retailer, Happy Reads, and you want to make sure you’re getting a good deal. As you’re browsing the Happy Reads website, you open the Happy Reads skill and ask Alexa to find coupon codes.

In this A/B test, the control results in Alexa reading off multiple coupon codes at once. The challenger delivers only one coupon code at a time, with the option to search for more.

In this scenario, the testing team would identify the winning experience by assigning distinct coupon codes to the control and the challenger and tracking the number of purchases made using each code (note: to keep the variables consistent, it’s important that each code’s promotional value is the same).
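One way to formalize that comparison: once purchases have been attributed to each code, run a two-proportion z-test on the conversion rates to check whether the challenger’s lift is statistically significant. The numbers and code names below (HAPPY10A, HAPPY10B) are made up for illustration—this is a sketch of the statistics, not the team’s actual analysis:

```python
from math import sqrt


def two_proportion_z(control_conv: int, control_n: int,
                     challenger_conv: int, challenger_n: int) -> float:
    """Two-proportion z-statistic: challenger vs. control conversion rate.

    A positive value means the challenger converted better;
    |z| > 1.96 corresponds to p < 0.05 (two-sided).
    """
    p1 = control_conv / control_n
    p2 = challenger_conv / challenger_n
    p_pool = (control_conv + challenger_conv) / (control_n + challenger_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / challenger_n))
    return (p2 - p1) / se


# Illustrative numbers: purchases attributed to each hypothetical code.
control_purchases, control_sessions = 120, 4000      # code HAPPY10A
challenger_purchases, challenger_sessions = 165, 4000  # code HAPPY10B

z = two_proportion_z(control_purchases, control_sessions,
                     challenger_purchases, challenger_sessions)
```

With these example numbers the challenger converts at 4.1% versus the control’s 3.0%, yielding z ≈ 2.7—comfortably past the 1.96 threshold, so the one-code-at-a-time experience would be declared the winner.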

Of course, this is just one means of implementing A/B testing in a voice-enabled environment. There are many other opportunities, including customer service, onboarding, and in-app search experiences.

And this doesn’t only apply to e-commerce. Banking and financial services, insurance, healthcare and media are just a few examples of industries looking to voice technologies to enhance their customer experience.

So long as humans can speak faster than they can type, voice-enabled experiences present a powerful opportunity for brands to respond in real time to customer requests and offer suggestions, as well as to position themselves as more of a service.

If you’re looking to implement experimentation within your voice-enabled experiences and other marketing channels, but don’t know where or how to start, contact us today.

Eight Mistakes Every Rookie Experimentation Team Makes (& How to Avoid Them)

The post Eight Mistakes Every Rookie Experimentation Team Makes (& How to Avoid Them) appeared first on Brooks Bell.

If testing were easy, everyone would do it. But that’s simply not the case.

Rather, experimentation requires extensive learning, hard work, lots of experience and a unique set of skills to consistently produce results that have a meaningful impact on a business.

At Brooks Bell, we’ve spent the last fifteen years helping global brands build and scale their experimentation programs. Drawing on that experience, we’ve identified eight common mistakes that rookie experimentation teams often make in the first few months of testing. We’ve also detailed useful and actionable strategies to help you navigate these challenges as you work to establish your optimization program. 

Mistake #1: Testing Without a Hypothesis

While the importance of a hypothesis may seem obvious, it’s easy to get swept up in the excitement and politics of launching the first test without realizing you haven’t determined a clear expectation for the outcome.

Without a defined hypothesis, it’s difficult (if not impossible) to make sense of your test results. Additionally, a well-articulated hypothesis can shape the entire test creation, evaluation, analysis and reporting process. It also makes communication and coordination across teams—another common challenge for new programs—much easier.

Learn strategies to avoid this misstep, along with seven other common rookie mistakes for newly established experimentation teams. Get the white paper.
