Let’s Meet Up!

In this era of video conferencing and email, there is something to be said for face-to-face meetings. So, where are we gonna be? Check it out below, and drop by or drop us a note if you want to meet up!

Network & Chill: San Francisco Meetup Group
Wednesday, August 29, 2018 
LOCATION: Press Club – 20 Yerba Buena Ln, San Francisco, CA 94103

Details: We’re hosting a casual meet up of optimization and personalization experts in the Bay Area. Register here for the opportunity to connect and exchange war stories with your peers over drinks. 

Opticon
September 11-13, 2018
The Cosmopolitan, Las Vegas, NV

Details: 
Don’t miss our CEO Brooks Bell speaking at Opticon about up-leveling your experimentation strategy. It’ll take place on Thursday, September 13th, from 11-11:40 am. We’ll also have a booth in the exhibit hall, so make sure you stop by to meet the crew and get some fun new swag.

Need a ticket? Register today and use code SPEAKER300 for a $300 ticket (regular price is $850). Deadline to register is September 3rd!

Evergage Personalization Summit
September 11-13, 2018
Boston Park Plaza, Boston, MA

Details:

Come meet Brooks Bell data scientist Aaron Baker and user experience expert AJ Bikowski. 

Click here to register and use our discount code for 50% off: BROOKSBELL50

None of these work? Fear not, there will be more opportunities for you to put a face to a name. Sign up for our newsletter for monthly updates on our whereabouts!

Unlocking the True Power of Testing & Other Takeaways from Brooks Bell’s Interview With Ambition Data

Recently, our Founder and CEO, Brooks Bell, sat down with Allison Hartsoe, host of the Customer Equity Accelerator—a podcast produced by Ambition Data. Listen to the full podcast or read on for a few highlights from their conversation:

On what inspired her to build an experimentation consultancy…

Originally, Brooks founded Brooks Bell Inc. in 2003 as a website development agency. After working with a few local clients, a chance introduction led to her first major experimentation client, AOL.

Today, you might think of AOL as one of the [now-extinct] internet dinosaurs, but even back in the early 2000s, the media giant was facing its fair share of challenges. According to one story by Time Magazine, despite having 34 million members in 2002, AOL was battling slowing subscriber growth, falling ad revenue and exorbitant operational costs. 

So, the company turned to experimentation. “AOL had the right environment to build a testing culture,” said Brooks. “They had a closed technology environment, their own analytics platform, and their data was clean and connected.”

Back then, AOL relied on pop-ups to drive new subscriptions. Working with Brooks, the company issued a challenge: design a new subscription pop-up that would beat the control experience. And so, drawing from her background in design and psychology, she did—and then she did it again, and again, and again.

But that was just the start. As other large companies began to rely more on the digital space to drive their business, Brooks saw an opportunity to help them tap into the power of experimentation.

“We realized that no one was testing!” said Brooks. “No other large companies had the data, culture and processes in place to test. So we set out to help them build the data fidelity and really recreate what we saw at AOL in those early years.”

On the difference between optimization and experimentation…

It’s one of the more common questions we get: “Brooks Bell is an experimentation consultancy. What’s that? What’s the difference between experimentation and optimization?” As Brooks explains it, it all comes down to science.

By definition, experimentation is the application of the scientific method to determine something. And while optimization is one potential outcome of an experiment, true experimentation requires running tests without a prescriptive outcome or application.

To put it simply – you’re testing to learn. And as long as your results are statistically significant, there is always something to be learned from experiments—even those with flat or negative results.
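For readers newer to testing, here is a minimal sketch of what a significance check looks like in practice: a simple two-proportion z-test comparing a control against a challenger. The visitor and conversion counts below are made up purely for illustration.

```python
import math

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return the z statistic and two-sided p-value for a difference in conversion rates."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Made-up numbers: the control converts 500 of 10,000 visitors; the challenger converts 565 of 10,000
z, p = two_proportion_z_test(500, 10_000, 565, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 is the conventional bar for significance
```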

On how to unlock the real power of experimentation…

Today, in the age of Amazon, a customer-centric experience is critical. But for some established companies, this requires a bigger paradigm shift in culture and processes.  

“Customer-centricity requires rethinking metrics, the type of data you collect, how teams are organized, how teams are incentivized, how you communicate and also your core values,” said Brooks.

The true power of experimentation lies in its ability to align your customer needs with your company’s strategic goals and your program’s agenda. Furthermore, you can use experimentation to learn new things about your customers in a scientific way.

“Having statistically-sound customer insights can totally change how you organize your store, how you train your team, and how you structure your website,” said Brooks. “This is where testing programs can really drive change.”

To that end, we recently celebrated the launch of Illuminate, our customer insights software for testing teams and executives. Illuminate not only provides a place to store, share and learn from your experiments, but also a means to develop impactful customer insights.

“We launched Illuminate to provide a repository of great test examples, to learn from each other, and to build a library of great test case studies,” said Brooks. Outside of the testing program, key learnings from an experiment can easily get lost in the data. Illuminate solves this by encouraging deeper thinking about customers, their needs, preferences, and behaviors.

Learn more about Brooks Bell’s experimentation consulting services. 

9 Things You Can Do Now to Ensure Your Experimentation Program Survives a Reorganization

It often begins with rumors and murmurs. Then, perhaps, a shift in executive leadership. At first, changes seem minor or isolated. But eventually, the inevitable becomes reality—a reorg is underway.

Because testing programs typically work with and across several divisions in a business and don’t fulfill a traditional business function, they are particularly vulnerable during the organizational upheaval.

However, to paraphrase Benjamin Franklin, with an ounce of prevention, it’s possible to avoid a pound of problems. Here are nine practices for surviving a reorg that you can start doing now—before you ever have to deal with one.

1. Calculate Impact

The first thing any new leadership will want to see is the value the testing program provides to the business. Of course, “value” is a multidimensional concept. It includes the contribution to business goals and priorities, insight into customer preferences and behavior, development of new innovations, and minimizing certain opportunity costs.

While it’s true all of these things will help communicate the importance and value of testing, nothing will be as compelling as a big annualized impact number. This number can be complex and time-consuming to calculate, but even if it isn’t an important metric for your program today, it’s worth having an analyst crunch, record and update just in case.
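There is no single standard formula for that annualized number, but one rough and common approach is to project each winning test’s observed lift over a year of baseline revenue, with a haircut for rollout coverage and decay. The sketch below shows the arithmetic; every figure in it is hypothetical.

```python
def annualized_impact(monthly_baseline_revenue, observed_lift, rollout_coverage=1.0, haircut=0.5):
    """Rough annualized revenue impact of a single winning test.

    monthly_baseline_revenue: revenue flowing through the tested experience each month
    observed_lift: relative lift measured in the test (0.04 means +4%)
    rollout_coverage: share of traffic the winning experience is actually rolled out to
    haircut: discount for decay and novelty effects (a judgment call for your analyst)
    """
    return monthly_baseline_revenue * 12 * observed_lift * rollout_coverage * haircut

# A hypothetical program with three winners over the year
wins = [
    annualized_impact(2_000_000, 0.04),                      # +4% lift on a $2M/month flow
    annualized_impact(500_000, 0.10, rollout_coverage=0.8),  # +10%, rolled out to 80% of traffic
    annualized_impact(1_200_000, 0.02, haircut=0.6),         # +2%, lighter haircut
]
print(f"Estimated annualized program impact: ${sum(wins):,.0f}")
```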

2. Write an Elevator Pitch

If you had five minutes or less with the CEO of your company, could you clearly communicate the mission, focus, and value of the testing program in a memorable way? If the answer to that question is “no,” or even “maybe,” it’s worth spending some time crafting an elevator pitch for your testing program.

Open your pitch with a short anecdote about a problem that was solved with testing. Next, add a sentence or two about the essential mission of the program. Follow this with a sentence about the methods used. Close with a statement about the contribution the testing program has made to the business as a whole.

Once you’ve written this pitch, practice delivering it to your friends, family, team and other stakeholders. After a lot of practice—and incorporating the feedback you will inevitably receive—you’ll be ready to introduce testing to any new leader or team you may encounter.

3. Archive Results

Surviving a reorg isn’t all about making a case for testing. It’s also about maintaining a consistent pipeline after teams have been shuffled around.

The first step to protecting the testing pipeline is to create a complete, detailed, navigable archive of past test results. This is critical for developing new ideas, training new team members, orienting new teams and stakeholders, and simply making the case for what works and what hasn’t.

Recently, we launched Illuminate, our new software for enterprise-level testing teams. Illuminate offers an executive-friendly repository of your tests and any insights you’ve learned about your customers along the way. Its direct integration with Optimizely, easy-to-use reporting tools, and custom case study generator significantly simplify the task of archiving and reporting your test results.

4. Outline the Process

Testing is complex, and when teams get rearranged, processes that once flowed smoothly can become intractably clogged. To prevent this, document the process as it exists and identify areas of parallelization, possible redundancies, problematic bottlenecks and opportunities for redirection.

If you have access to a project manager, ask her to run a few what-if analyses to estimate potential problems if your processes were to be disrupted. Then, work together to develop possible solutions or workarounds to the most likely scenarios.

5. Clearly Define Roles and Responsibilities

Having a fast and nimble, all-hands-on-deck approach to managing the testing process is great until a critical person leaves the company or is assigned to a different team.

To avoid this, define the roles and responsibilities of each team member at each stage of the process. Doing this is critical for mapping the resource requirements of the testing process and quickly identifying gaps if the team is restructured.

It’s also important, however, to track ongoing responsibilities and duties in a more specific way. Having a project management platform or system that identifies what stage of the process each test is in, and which team member is responsible for that task is essential for avoiding disruptions.

6. Develop Training Programs

Looking on the bright side, a reorg could mean your testing team is greatly expanded. It might also mean fewer people are doing more, including jobs they have little experience with. In either case, having a developed and ready-to-execute training program is helpful.

Like all of these tips, the best time to develop a training plan is not the first day your new team member walks into the office. Instead, start training and cross-training your existing team right away. This gives you an opportunity to develop extensive content, deliver it, get a sense of what works, and make adjustments before a reorg renders training critical to the continuation—and not just the improvement—of the program.

7. Centralize Documentation

Having lots of documentation is useless if no one can find it. Moreover, it doesn’t help if it isn’t standardized in some way.

The archive of results, process documents, test plans, training materials, and everything else should be stored in a public or shareable archive, in a format that is easily accessible and navigable.

Using filename conventions, consistent directory structures, and standard documentation practices across the team may be mundane, but it’s just as important to the robustness of the testing program as tracking each person’s ongoing responsibilities.

8. Get Essential Access

One often overlooked consideration is whether the testing team has access to the essential technologies on which it relies.

Even if most development is done by an outside group, it’s important to have access to tools that enable you to upload and modify your code. Additionally, if reports are pulled by a separate analytics team, it’s equally important for someone from the testing program to have the access and ability to do so in a pinch.


Some solutions—like tag management systems—address this challenge. Training, cross-training, and collaboration are other helpful ways to build the necessary competencies to get access to and make basic use of all your testing tools.

9. Keep It All Up to Date

Building the previous eight resources can take a lot of time and effort. Many teams will make any one of them a goal for the quarter, work hard to get it done, drop it in an archive, then forget it.

Months, maybe years, pass without giving the resource a second thought. Then, a sweeping reorg happens and the five-year-old process document is unearthed, dusted off, and found to be frustratingly obsolete. That’s why you must take care to not only produce these resources but maintain them as well.

A reorg can be a scary thing for a lot of reasons. However, by following these nine tips today, even the biggest organizational shakeup doesn’t have to disrupt the flow and productivity of the testing program.

How to Convince Your Boss to Invest in Experimentation: 5 Hard Truths

Would you rather have a root canal, or try to build something completely new at a huge, enterprise-level company that seems to be held together with nothing but red tape and internal politics?

That may seem a bit dramatic, but obtaining buy-in for experimentation is a difficult challenge faced by many of our clients. Over the last 15 years, we’ve identified some hard truths about getting others to invest in testing. And in the spirit of “being real” (one of our core values), we’ve decided to share them with you.

1. You have to bait the hook to suit the fish.

“The world is full of people who are grabbing and self-seeking. So the rare individual who unselfishly tries to serve others has an enormous advantage. He has little competition.” – Dale Carnegie, How to Win Friends & Influence People

We’ll take good ole Dale Carnegie’s advice on this one: if you want to convince someone to do something, you have to frame it in terms of what motivates them. In order to do that effectively, you have to be able to see things from their point of view.

“My go-to strategy for gaining executive buy-in for testing programs is to focus my discussions with them on the business impact of a well-run and successful testing program,” says Kenny Smithnanic, Director of E-Commerce at Ultra Mobile. “Most executives I’ve worked with want to see better company performance, even if they aren’t directly evaluated on this. So, I tend to discuss the exact revenue, orders, leads, and/or average order value increase that our business could realize from the accumulated gains of a successful testing program.”

When you’re pitching your experimentation program, position it as a program that works in service to other departments. Consider also your company’s big picture objectives and pinpoint where experimentation can work in direct support of those.

2. You’re gonna lose if you make it all about winning. 

Managing expectations is critical. Optimization is as much about learning about your customers as it is about landing some quick wins. And learning sometimes requires failure. Or, in this case, a few flat or losing tests. In fact, the industry win rate is actually 25%. If the expectation is that every test has double-digit lifts, it’s going to be a rough road ahead. 

All that said, here are a few strategies to manage expectations for your program.

Include executives in the ideation process. Let them see how your team is using data, generating lots of great testing ideas and working collaboratively. Including them in this process also gives them a sense of ownership and a personal investment in seeing how a strategy performs.

Develop a consistent (in frequency, timing and format) reporting structure. As you’re reporting the results of each experiment, begin by framing it in the context of your company’s larger goals. Don’t make the mistake of only focusing on the winning experience. Rather, walk your stakeholders through each variation individually and the resulting insights based on their performance.

State what’s next on your priority list, based on these results. This builds anticipation for your next round of tests, setting you up for future success, regardless of individual results.

3. Proving people wrong doesn’t convince them you’re right.

Avoiding pushback on your ideas requires three things: knowing what you’re up against, anticipating challenges, and being open to others’ solutions.

Because executing tests often requires cross-functional support, you may have to take a test-and-learn approach to your experimentation program. You might also have to make a few compromises or adopt short-term solutions along the way.

Ultimately, if you’re seeking to make experimentation a core competency at your organization, you have to bring others along in the process. This means that as problems arise, it’s okay to recommend a solution, but you also have to be open to trying others’ solutions as well.

Only by solving problems together will you be able to build a high-functioning experimentation program and team.

4. You’re not going to blow anyone’s mind by using big words or snazzy acronyms.

In fact, when you’re trying to get people on board with testing, using super technical jargon can actually work against you.

Consider, for instance, the difference between “experimentation,” “testing,” and “optimization.” To you, these terms might mean the same thing, but to your boss, they may evoke different meanings.

So when you look to pitch your experimentation program, use terms that are familiar, accessible and aren’t at risk of being misinterpreted.

The same goes when introducing testing terminology to the broader organization. There are tons of acronyms associated with experimentation—RPV, AOV, UPT—the list goes on. Be sure to spell them out until your team is more familiar with them: Revenue per Visit, Average Order Value, and Units per Transaction communicate far more meaning than any acronym.

Consistency and intentionality are key here. Develop a common language for experimentation through trainings, lunch-and-learns and/or a company-wide roadshow. Then be sure that these terms are reflected in your internal processes: within meetings, in status reports, playbooks and in other means of communication.

5. Getting buy-in is not a one-time event.

You have to lay the groundwork. You have to build trust and credibility. You have to follow through. And through it all, you have to be a generally good person to work with.

“Experimentation is amazing because it is supported through quantitative data and statistically significant results – making it a very quantifiable and persuasive case,” says Suzi Tripp, Senior Director of Experimentation Strategy at Brooks Bell. “When you can share an experiment and tie it directly to incremental gains, it’s a powerful statement.”

But even if you get executive support for your program, you simply can’t expect everyone else to get on board immediately. In fact, having that expectation can lead to conflict and tension between teams, and cause a lot more internal problems.

To address this, we suggest working gradually: start by working with one group, generate a success story, share it through your internal communication channels, and keep going from there. Only by sharing these stories will other departments become interested in harnessing that success as well.

Have ideas for other strategies to obtain buy-in for testing at your organization? We’d love to hear what’s worked for you. Share your story in the comments.

How to hire your testing unicorn (without using magic)

When I was running my own testing program, I was in desperate need of an associate to help me manage my small (but mighty!) team. My single associate and I were launching tests left and right and we were unable to do anything other than focus on the day-to-day of the program.

A job description had been posted and the company’s recruiters were doing everything they could to find the right hire.

I remember reaching out to an old friend of mine to see if she knew anyone who might fit the role. I told her that I was looking (simply) for a data-driven individual with stellar communication skills and the ability to manage several complicated web projects at one time.

“Oh,” she said. “So you’re looking for a unicorn.”

“No, Susan… I’m looking for a Testing Specialist.”

Now, I don’t want to be too dramatic here, but this unicorn revelation did rock my world a bit. (It also made me want a bowl of rainbow sorbet with sprinkles… but I digress.)

When I finally overcame this existential testing crisis, I realized that I believed, deep down, that testing unicorns did exist. But I also knew that due to magic (obviously), I might never find one.

There were three main things I was looking for in my unicorn:

  1. Strong analytics skills and the ability to develop advanced data-driven recommendations
  2. Amazing communication skills – for helping stakeholders understand and act on that data
  3. Organized and efficient project management skills for planning and managing the execution of test strategies

First, I had to assess which skills I already had on my team.

I took a look at my own skills and the skills of the team I had in place. To be honest, I’m much better at talking about analytics than I am at sitting behind a desk and doing a deep dive into the numbers.

My personal strength is in the communication realm of testing and my associate was an awesome project manager. So, it became pretty clear to me that there was a need for a strong analyst on our team.

Then, I had to decide what was teachable.

This is where things get controversial, because what’s teachable really depends on the skills of the trainee, the trainee’s willingness to learn, and the skills of the trainer.

I did a quick poll here at Brooks Bell to see which skills my colleagues believe are the toughest to teach.

Many of my colleagues believe that good communication skills are the hardest to coach. And during my search for a Testing Specialist, I felt the same way.

I was pretty confident that I would be able to help my next hire become a better analyst or project manager, but I wasn’t so sure I could teach someone to communicate well in a stakeholder-facing role.

Finally, I had to decide if I could tweak my program structure.

Depending on my next hire’s strengths, there were a few scenarios that I had to consider in order to structure my program without a unicorn. Here are a few examples:

If I decided to hire a strong analyst with weak communication skills

In this scenario, I would consider making this Testing Specialist role a non-stakeholder facing role. Because this person would not be project managing or communicating directly with stakeholders, they would be solely dedicated to analytics and free up the rest of the team’s time to focus on project management and stakeholder communication.

If I decided to hire a strong project manager with weak analytics skills

Because I believed that analytics skills were teachable, this associate could focus on project management in the beginning and slowly take on analytics work when they were ready.

If I decided to hire a strong communicator with weak project management skills

In this scenario, I would start by putting this associate in a stakeholder-facing role focused on analytics. After some time, I would begin training him or her on project management.

The magical lesson I learned

When I first approached this seemingly impossible task of hiring my next Testing Specialist, I was discouraged by the reality that I wanted so many specific skills in one individual.

But the truth is, experimentation and optimization is still a very niche industry, so finding a single person with so many abilities is going to continue to be tough for a while. That’s why I recommend first looking at the structure of your team, and then deciding which skills you feel comfortable teaching.

And always remember this: Testing unicorns do exist, sometimes we just have to help them find their wings.

Are you a testing unicorn looking for your next big challenge? Check out our monthly “who’s hiring” post for open positions in testing and personalization at top companies.


About the Author:

Sam Baker has eight years of experience running experimentation and digital analytics programs for major e-commerce brands. As a consultant at Brooks Bell, she helps global brands build and grow their testing programs.

In addition to her role at Brooks Bell, Sam is also an accomplished career coach, providing guidance to ambitious women looking to land their dream careers. Originally from Indiana, Sam now lives in Raleigh, North Carolina with her husband and her dog.

“Alexa, how do I A/B test my voice-enabled customer experience?”

Another Amazon Prime Day has come and gone, and we’re betting many in the e-commerce space are now breathing a little easier.

But looking beyond the challenges Amazon poses for other online retailers, the company’s foray into AI, smart speakers and similar devices presents a new opportunity for online brands to reach consumers: a channel that—in our humble opinion—is begging to be tested.

Alexa, Siri, Google Assistant, Jarvis, Watson…the list goes on

Various studies over the last year have shown that 20% of online searches are conducted using voice-based technologies, a share that is expected to increase to 50% by 2020. This means the time to get on board with voice-based experiences is now—whether you’re looking to build your own AI assistant, make your site friendlier for voice searches, or simply identify where voice-based technologies can enhance your customer experience.

As part of a continuous effort to help our clients capitalize on new technologies and strategies to deliver a better customer experience, our development team recently challenged themselves to build an Alexa Skill with an integrated A/B test function (if you’re unfamiliar with the terminology, think of an Alexa Skill as similar to an app).

To explain how we went about this, it helps to have a general understanding of how Alexa works and where testing fits in:

Step 1: The Alexa-enabled device hears a “wake word” spoken by the user, and listens to the user’s request.

Step 2: The audio of that request is streamed from the device to the Alexa Server.

Step 3: The Alexa Server converts the audio to text and uses this to process the user’s intent.

Step 4: After processing, the Alexa Server sends the intent to a custom Alexa Skill, which is usually housed on a separate server.

Step 5: The Alexa Skill server processes that user request and determines how best to respond.

Step 6: This is where testing comes into play. As the Alexa Skill server is determining the best way to respond to the user’s request, your built-in testing tool triggers a control or challenger response. The Alexa Skill then responds to the request, sending the corresponding text response or visual media back to the Alexa Server.

Step 7: The Alexa Server then converts that text response to speech or renders whatever visual media was returned from the Skill.

Step 8: The Alexa Server sends that content to the device, which is then broadcasted back to the user.

For the purposes of this challenge, our developers built an Alexa Skill for a fictitious online book retailer, Happy Reads. Although testing can be integrated using any server-side testing tool, the team chose to build Optimizely into our custom Alexa Skill as it’s a popular tool among our clients.
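We’re not publishing the full Happy Reads skill here, but the sketch below shows roughly what step 6 could look like inside the skill’s request handler, assuming the Optimizely Full Stack Python SDK. The intent handler, experiment key, coupon codes and response copy are all hypothetical placeholders for the scenario described in the next section.

```python
from optimizely import optimizely  # Optimizely Full Stack (server-side) SDK

# In production the datafile would be fetched from Optimizely's CDN;
# here we read a local copy purely for illustration.
with open("optimizely_datafile.json") as f:
    optimizely_client = optimizely.Optimizely(f.read())

# Hypothetical copy for each experience; each variation hands out its own distinct codes.
RESPONSES = {
    "control": "Here are today's codes: HAPPY10, FREESHIP, and BUNDLE15.",
    "challenger": "Your code is HAPPY10A. Want me to look for another one?",
}

def handle_find_coupon_intent(event):
    """Handle a hypothetical FindCouponIntent request sent to the Happy Reads skill."""
    user_id = event["session"]["user"]["userId"]

    # Step 6: bucket this user into the control or challenger experience.
    # activate() returns the variation key, or None if the user isn't in the experiment.
    variation = optimizely_client.activate("coupon_readout_test", user_id)
    speech = RESPONSES.get(variation, RESPONSES["control"])

    # Standard Alexa Skills Kit JSON response envelope
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False,
        },
    }
```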

So, what does this mean for the customer experience?  

Here’s how our scenario would play out, as designed by our development team:

You’re making a purchase at your favorite online book retailer, Happy Reads. You want to make sure you’re getting a good deal. So, as you’re browsing the Happy Reads website, you open the Happy Reads skill and ask Alexa to find coupon codes.

In this A/B test, the control results in Alexa reading off multiple coupon codes at once. The challenger delivers only one coupon code at a time, with the option to search for more.

In this scenario, the testing team would identify the winning experience by having distinct coupon codes for both the control and the challenger and tracking the number of purchases using each coupon code (note: to keep the variables consistent, it’s important that each code’s promotional value is the same).
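Because each variation hands out its own distinct codes, identifying the winner can be as simple as grouping completed orders by the code that was redeemed and comparing conversion rates, as in the rough sketch below. The order data and exposure counts are, of course, made up.

```python
from collections import Counter

# Map each distinct coupon code back to the experience that issued it (hypothetical codes)
CODE_TO_VARIATION = {
    "HAPPY10": "control", "FREESHIP": "control", "BUNDLE15": "control",
    "HAPPY10A": "challenger",
}

# Pretend export of completed orders: (order_id, coupon_code_redeemed)
orders = [("1001", "HAPPY10"), ("1002", "HAPPY10A"), ("1003", "HAPPY10A"), ("1004", "FREESHIP")]

purchases = Counter(CODE_TO_VARIATION[code] for _, code in orders if code in CODE_TO_VARIATION)
exposures = {"control": 5_000, "challenger": 5_000}  # users bucketed into each experience

for variation, visitors in exposures.items():
    rate = purchases[variation] / visitors
    print(f"{variation}: {purchases[variation]} purchases, conversion rate {rate:.2%}")
```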

Of course, this is just one means of implementing A/B testing in a voice-enabled environment. But there are many other opportunities within customer service, on-boarding, and in-app search experiences, as well as others.

And this doesn’t only apply to e-commerce. Banking and financial services, insurance, healthcare and media are just a few examples of industries looking to voice technologies to enhance their customer experience.

So long as humans can speak faster than they can type, voice-enabled experiences present a powerful opportunity for brands to respond in real-time to customer requests and to offer suggestions, as well as an opportunity to position themselves as more of a service.

If you’re looking to implement experimentation within your voice-enabled experiences and other marketing channels, but don’t know where or how to start, contact us today.

“Oh, BEhave” Series: Our Behavioral Economics Journey Recap

Beginning last October, Brooks Bell and Irrational Labs teamed up to tackle the topic of Behavioral Economics through a six-part blog series exploring the basics of behavioral economics and providing tips to help you apply it in your experimentation program. In this final blog, we’re going to revisit some of the big takeaways from each post.

Post 1: Creating Experiments with Behavioral Economics

Read more. Test more. The first step to understanding social science is to appreciate the complexities of humans. There is no ‘one size fits all’ solution to any problem.

In this post, we introduced you to Kristen Berman, co-founder of Irrational Labs. Irrational Labs uses the power of behavioral economics for good: helping companies improve the world by saving people money and encouraging healthy living, among other things. They often publish their findings in academic journals.

Here at Brooks Bell, we encourage the use of behavioral economics to understand customers, applying these principles to guide digital experimentation strategies.

Despite these two distinct applications of behavioral economics, our processes for implementing them are incredibly similar.

Post 2: Even Experimentation Experts Get Surprised

Expert intuition is not always correct. It’s important to use data and behavioral economics to better understand your users and their motivations – and experiment from there.

Although both Kristen and I have run thousands of experiments between us, even we get caught off guard by test results from time to time. In this post, we shared examples of experiments that kept us guessing–highlighting one of Irrational Labs’ Google Adwords promotions and a test we ran on behalf of one of our clients, the Jimmy V Foundation.

Behavioral economics enables you to better understand the intricacies of human decision making–even when people’s actions surprise you. Applying this to testing gives you the best chance at creating a successful experience for your customers.

And while there’s no universal template for designing a customer experience, there’s a huge opportunity to test into the experience that works best for your customers. These examples highlighted the importance of having an experimental mindset rather than relying on your intuition.  

Post 3: Create a Powerhouse Methodology Using Quantitative and Qualitative Data Alongside Behavioral Economics

The only way to know if something affects another thing is to do a controlled trial.

This post detailed two different approaches to bringing quantitative data, qualitative data and behavioral economics together to create a powerhouse ideation methodology.  

Brooks Bell and Irrational Labs’ approaches to ideation are very similar, differing only in structure and terminology.

At Brooks Bell, we use an iterative, five-part structure that begins with pre-strategy data, dives into user needs, problem/opportunity identification and experiment brainstorming, and finally ends with a prioritization process. Behavioral economics is ingrained throughout the process but takes center stage during problem/opportunity identification as well as the experiment brainstorm.

Irrational Labs’ process begins with a literature review, followed by a quantitative data study and qualitative feedback. Then they run their controlled trial (a.k.a. an A/B test).

Of course, how you apply these processes within your organization depends on your resources, timeline and many other factors.

If you’re unsure where to start, let’s talk! Brooks Bell has years of experience building world-class testing programs and can help you build and implement an ideation methodology that’s specific to your team and your business goals.

Post 4: Ethical Experimentation: Using Behavioral Economics for Good

When we understand the power that companies have over our decisions, it becomes unethical when they do NOT experiment on their users.

In this post, we examined the topic of ethical experimentation: is it always right to experiment on your users? And how do you ensure you’re testing in the customer’s best interests?

Central to this debate is the imbalance of power between a company and its customers. Companies need to drive profit, and while driving profit may drive customer value, there are some situations where it could drive high prices and/or negative customer value. The most famous example of this? The tobacco industry.

But not experimenting is not an option. If a company doesn’t test a feature, it means they think the engineer who first designed the feature got it 100% right. This means the power to influence our decisions lies with the engineer who designed the feature and did so without any data on how the feature is influencing the end user.

So, how do you design a system to ensure noble intent with your experiments? First, focus on building tests with short-term and long-term value. Be transparent about your experiments. The online dating service OK Cupid, for example, openly uses its customer data for research and publishes summaries of the insights.

Finally, give customers a means of public recourse if they feel they’ve been wronged. This not only empowers the user but also shows that your company prioritizes customer relationships over reputation or profit.

Post 5: Behavioral Principles to Know – And How to Use Them

If you get familiar with [behavioral] principles…your digital experiments are going to be more inspired, and better-informed than ever.

There are decades of behavioral science research at your fingertips that can be leveraged to better inform your digital experiments. This post highlighted a few key principles, including social proof, choice overload, the goal gradient hypothesis and the sunk cost fallacy (among others).

We also provided a few tips and tricks to help you and your team become more familiar with these principles, with additional links to various resources (including one of my personal favorites, Dan Ariely’s Irrational Game).  

We had a lot of fun creating this series and hope you found it valuable. If you have additional thoughts or questions about behavioral economics and how to apply the principles to experimentation, let us know!  We’d love to help!

__

Suzi Tripp, Sr. Director of Experimentation Strategy
At Brooks Bell, Suzi sets the course of action for impact-driving programs while working to ensure the team is utilizing and advancing our test ideation methodology to incorporate quantitative data, qualitative data, and behavioral economics principles. She has over 14 years of experience in the industry, and her areas of expertise include strategy, digital, communications, and client service. Suzi has a BS in Business Management with a concentration in marketing from North Carolina State.

 

Kristen Berman, Co-founder of Irrational Labs, Author, Advisor & Public Speaker
Kristen helps companies and nonprofits understand and leverage behavioral economics to increase the health, wealth and happiness of their users. She also led the behavioral economics group at Google, a group that touches over 26 teams across Google, and hosts one of the top behavioral change conferences globally, StartupOnomics. She co-authored a series of workbooks, Hacking Human Nature for Good: A practical guide to changing behavior, with Dan Ariely. These workbooks are being used at companies like Google, Intuit, Netflix, Fidelity and Lending Club for business strategy and design work. Before designing, testing and scaling products that use behavioral economics, Kristen was a Sr. product manager at Intuit and the camera startup Lytro. Kristen is an advisor for Loop Commerce, the Code For America Accelerator and the Genr8tor Incubator, and has spoken at Google, Facebook, Fidelity, Equifax, Stanford, the Bay Area Computer Human Interaction seminar and more.

Eight Mistakes Every Rookie Experimentation Team Makes (& How to Avoid Them)

If testing were easy, everyone would do it. But that’s simply not the case.

Rather, experimentation requires extensive learning, hard work, lots of experience and a unique set of skills to consistently produce results that have a meaningful impact on a business.

At Brooks Bell, we’ve spent the last fifteen years helping global brands build and scale their experimentation programs. Drawing on that experience, we’ve identified eight common mistakes that rookie experimentation teams often make in the first few months of testing. We’ve also detailed useful and actionable strategies to help you navigate these challenges as you work to establish your optimization program. 

Mistake #1: Testing Without a Hypothesis

While the importance of a hypothesis may seem obvious, it’s easy to get swept up in the excitement and politics of launching the first test without realizing you haven’t determined a clear expectation for the outcome.

Without a defined hypothesis, it’s difficult (if not impossible) to make sense of your test results. Additionally, a well-articulated hypothesis can shape the entire test creation, evaluation, analysis and reporting process. It also makes communication and coordination across teams—another common challenge for new programs—much easier.

Learn strategies to avoid this misstep, along with seven other common rookie mistakes for newly established experimentation teams. Get the white paper.
