What Happens When Data Meets Creative (and How to Make it a Reality at Your Company)

There are quite a few people out there that just don’t *get* creative. They don’t understand the way in which we work or make decisions. And, indeed, creative teams are known to be cost centers rather than revenue generators. To certain execs, creatives are simply the sneaker-wearing hipsters who are brought in to make things look pretty or sound good.

While this is a far cry from reality, it’s also not that hard to understand why. As creatives, we understand the value of good creative work. Proving that value, however, can be difficult. So here are a few tips for proving the ROI of your creative team and incorporating data within your creative process.

Tip #1: Know and Speak the Language of Business

Smart creative work requires an objective-based approach. Objective-based creative is driven by data—often in the form of user feedback, website analytics, and strategic business goals. As a designer or copywriter, your job is to gather and digest this data and apply it to your work.  

When you pitch concepts to your stakeholders, most of them aren’t going to accept work that just “looks” better. It’s important that you are able to articulate the business problem, your target audience and the objective-based reasoning behind your decisions. This ensures that your work is grounded in hard data and research, rather than just design preferences.

On the flip side, it’s important to train your stakeholders in the art of objective-based feedback: that is, feedback on whether or not your work is effective in addressing the objective at hand. Doing this takes time, practice and a lot of patience, but the payoff is huge. Your executives will feel more confident after seeing that your creative team is aligned and hyper-focused on providing measurable value.

Tip #2: Use Testing to Eradicate B.S. in the Creative Process

Brooks Bell was founded on the idea that you can eliminate creative guesswork by applying the scientific method. But at many companies, creative and UX teams rarely engage with testing teams. While this might make sense from the perspective of your org chart, few realize just how much collaboration between these functions could positively impact a business.

A few years back, my team and I were brought in to work with one of our retail clients. Looking at their website data, our analysts realized that a large majority of people were abandoning the express checkout form for the full checkout form. This seemed counterintuitive to us: less friction is always better, right? Why would anyone prefer to fill out the long form!?

In order to develop a strategy to test, we needed more data—so we turned to user research. We polled a select group of users about their purchasing experience and uncovered some potential reasons for their behavior.

We discovered that many users preferred to use alternative or saved payment methods, yet the account login and gift card payment options were only available in the full checkout experience. We ran a test adding these options to the express checkout flow, which resulted in a 5% lift. When implemented, this test translated to a $5M increase in revenue.

The impact of this was significant—and not just from a revenue perspective. Through this process, we were able to identify other areas where users could be experiencing anxiety. It also prevented us from over-designing in the future. For this company’s customers, a simple and clear message and a less cluttered experience were enough to quell their anxiety.

For data-starved creatives, these types of insights can be extremely valuable and can greatly influence the company’s overall design aesthetic.

Tip #3: Be Sure You Recruit Relevant User Groups for Discovery Research

This tip is for you if—upon presenting the results of your user research—you’ve ever been asked “why did you talk to [audience group]?” or the alternative: “why didn’t you talk to [audience group]?”

Sure, conducting guerrilla research on random mall-goers or your coworkers at lunchtime will get you basic usability feedback. But if you want actionable insights, you need to research not only the group that’s generating the most business for your company, but also the group that’s most impacted by the problem you’re trying to solve.

If return users drive the majority of your revenue, don’t research new users. Similarly, don’t ask someone to look at your mobile design if they don’t fit the demographics of the segment you’re trying to reach.

Here at Brooks Bell, we believe it’s important for our clients to be closely involved in the process of selecting user segments for research. This not only manages the scope of the project and ensures maximum impact, but it also helps to avoid the frustrating line of questioning I mentioned above.

Tip #4: Embrace Survey-Based Research

If you’re well-versed in usability testing, you know that elaborate usability tests are a waste of resources and that you really can get the best results from testing no more than five users. But to an executive, the number five can seem awfully small. And no matter how many times you reference or point them to this blog post, they still might not buy it.
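For what it’s worth, the math behind the five-user claim is simple enough to show an executive. Here’s a quick sketch of the commonly cited Nielsen/Landauer model (the ~31% per-user discovery rate is their published average, not our data):

```python
# The classic argument for small usability tests (Nielsen/Landauer)
# models problem discovery as 1 - (1 - p)^n, where p is the chance a
# single user surfaces a given problem (~0.31 in their studies).

def problems_found(n_users: int, p: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n_users testers."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 15):
    print(f"{n:>2} users -> {problems_found(n):.1%} of problems found")
```

Five users finding roughly 85% of problems is the usual takeaway; past that point, each additional user mostly re-finds issues you already know about.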

This is where survey-based research comes in. We’ve had tremendous success in conducting survey-based research for our clients, and find it is often better received by executives.

Executives respond well to survey research for a few reasons: You can survey a larger population of people. It’s fast—most of the time we get responses back within a day or two. And finally, depending on the types of questions you ask, it’s largely quantifiable.

While surveys are different from usability tests, oftentimes, you can use survey results to back up your usability test results.

Finally, it’s important that you also become the master of your research domain and empower yourself to dig in on your own. For this, pivot tables are a great tool. Pivot tables unlock the magic of Excel by allowing us to take all of our survey results and slice and dice them any way we want: filtering answers by segment, averaging, counting, and creating data visualizations, all without ever having to talk to an analyst.
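And if you ever outgrow Excel, the same slicing and dicing takes only a few lines of Python with pandas (the survey columns and answers below are made up for illustration):

```python
import pandas as pd

# Hypothetical survey export: one row per respondent.
responses = pd.DataFrame({
    "segment": ["new", "new", "returning", "returning", "returning"],
    "device": ["mobile", "desktop", "mobile", "mobile", "desktop"],
    "satisfaction": [3, 4, 5, 4, 5],  # answers to a 1-5 rating question
})

# The Excel pivot-table move: average the rating for every
# segment x device bucket, no analyst required.
pivot = pd.pivot_table(
    responses,
    values="satisfaction",
    index="segment",
    columns="device",
    aggfunc="mean",
)
print(pivot)
```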

How many of you thought you’d leave this post adding Excel to your list of preferred programs? 😉

Tip #5: Don’t Hoard Your Ideas – Bring Others Into the Creative Process

It’s every designer’s tale of despair: you spend tons of time on a project—putting in extra hours to make sure every pixel has been pushed into the perfect position, every line kerned and leaded—only to have your work completely shat on upon unveiling it.

Trust me on this one: hoarding your ideas and excluding others from your design process really only sets you up for disappointment, depression and frustration.

So stop with the big reveal and instead invite others into the design process. Voice your ideas in a collaborative way. Position yourself as a guide within a creative process in which the objective is to build something collaboratively. Without a doubt, you’ll find you’ll get things approved faster and more frequently.

Interested in learning how Brooks Bell can help empower your creative and UX teams with data? Learn more about our services or contact us today.

The post What Happens When Data Meets Creative (and How to Make it a Reality at Your Company) appeared first on Brooks Bell.

Use predictive personalization to drive increased conversion rates

Every day, every month, every quarter, marketers are tasked with a conundrum: create web sites and messages that resonate with target audiences. It’s not a rare request. In fact, it’s a fundamental principle of marketing. Why is it a conundrum? Because you’re being asked to make one size fit all of your visitors. Think about your site. Who are the different segments of visitors? What are their different needs and motivations when visiting your site?

Predictive personalization systems use machine learning to automatically choose and deliver the experiences most likely to drive each site visitor to convert.

This white paper from Intellimize covers:

  • How predictive personalization works.
  • The advantages of predictive personalization vs. A/B testing.
  • How predictive personalization works in practice.

Visit Digital Marketing Depot to download “An Introduction to Predictive Personalization.”

The post Use predictive personalization to drive increased conversion rates appeared first on Marketing Land.

How To Build A Culture Of Experimentation

It’s one thing to run an A/B test correctly and get a meaningful uplift. It’s another thing entirely to transform your organization into one that cares about and respects experimentation.

This is the goal, though. You can only shake loose so much additional revenue if you’re the only rogue CRO at your company. When everyone is involved in the game, that’s when you stride past the competition.

It’s not just about the tools you use, or even the skills, but also about the people involved. But organizational matters tend to be a bit complex, as anything that involves humans is. How do you build a culture of experimentation?

This article will outline 9 tips for doing so.

1. Get Stakeholders to Buy into CRO, and Establish Program Principles

First things first: we need to get everybody on the same page.

It used to be more difficult to convince people of the value of conversion optimization. Now, it seems that it is more mainstream, and most people buy into the benefits.

We know from conducting our State of the Industry Report that CRO is being more widely adopted, and those adopting it are increasingly establishing systems and guidelines for their programs. All of this is good.

If you’re just getting started on your CRO journey, though, don’t fret. There are some simple and tactical ways you can start establishing a vision.

First, if you don’t have full buy-in from stakeholders, make sure you have at least one influential executive sponsor who is on your side. If you don’t have this, you won’t go far. (Programs tend to have a substantial ramp-up period before you see a good return.)

Second, write down your program principles and guidelines up front. I like to create a “principles” document for any team I’m on (and a personal one as well), just so that we know what our operating principles are and how to make decisions when things are ambiguous.

Here’s an example of a principles document from my team at HubSpot (just a small section of it, but you’ll get the point).

Of course, we have tons of documentation on everything from how we run experiments to our goals, and more.

Andrew Anderson gave a great example of his CRO program principles in a CXL blog post:

  • All test ideas are fungible.
  • More tests does not equal more money.
  • It is always about efficiency.
  • Discovery is a part of efficiency.
  • Type 1 errors are the worst possible outcome.
  • Don’t target just for the sake of it.
  • The least efficient part of optimization is the people (with you also included).

Yours could look completely different, but make sure you script the critical plays up front and don’t leave any questions hanging in the air. This will help stakeholders understand what you’re up to and will also help onboard new employees on your team when they get started.

2. Embrace the Power of “I Don’t Know”

With most marketing efforts, we expect a linear model. We expect that for X effort or money we put into something, we should receive Y as the output (where Y > X).

Experimentation is somewhat different. It may be more valuable to think of experimentation as building a portfolio of investments, as opposed to a machine with a predictable output (like how you’d view SEO or PPC).

According to almost every reputable source, many tests are going to fail. You’re not going to be right. Your idea is not going to outperform the control.

This is okay.

If, for every 5 tests that fail, you get 1 true winner, you’re probably already ahead. That’s because, on the 5 tests that failed to improve conversion rates, you are only “losing” money during the test period. You didn’t set them live for good, so you mitigated the risk of a suboptimal decision. (This alone is a great benefit!)

Outside of that, the one test that did win should add some compounding value over time. A 5% lift here and a 2% lift there add up; and eventually, you’ve got the rolling equivalent of a portfolio with compounding returns.
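The compounding claim is easy to sanity-check. A sketch with purely illustrative numbers:

```python
# Purely illustrative: each winning test multiplies the baseline
# conversion rate, so modest lifts compound like returns in an
# investment portfolio.
baseline = 0.020                    # hypothetical 2% conversion rate
winning_lifts = [0.05, 0.02, 0.03]  # relative lifts from winning tests

rate = baseline
for lift in winning_lifts:
    rate *= 1 + lift

cumulative = rate / baseline - 1
print(f"Cumulative lift: {cumulative:.1%}")  # slightly more than 5+2+3 = 10%
```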


A side point to the whole “embrace ‘I don’t know’” thing: you shouldn’t test only to validate what you already think is right. The best possible case is that something wins that you didn’t think would win.

That’s how Andrew Anderson frequently frames conversion optimization, saying in this post that “the truth is, in optimization, the more often we prove our own perceptions wrong, the better the results we are getting.”

Ronny Kohavi, too, makes the point that a valuable experiment is when the “absolute value of delta between expected outcome and actual outcome is large.” In other words, if you thought it would win and it wins, you haven’t learned much.


3. Make It a Game

Humans like competition; competition and other elements of gamification can help increase engagement and true interest in experimentation.

How can you gamify your experimentation process? Some tools, such as GrowthHackers’ NorthStar, embed this competition right into the product with features like a leaderboard.


You can create leaderboards for ideas submitted, experiments run, or even the win rate of experiments. Though, as with any choice in metric, be careful of unintended incentives.

For example, if you create a leaderboard for the win rate, it might be possible that people are disincentivized from trying out crazy, creative ideas. It’s not a certainty, but keep an eye on your operational metrics and what behaviors they encourage.

4. Adopt the Vernacular

Sometimes, a culture can be shifted by subtle uses of language.

How does your company explain strategic decision making? How do you talk about ideas? How do you propose new tactics? What words do you use?

If you’re like many companies, you talk about what is “right” or “wrong,” what you have done in the past, or what you think will work. All of this, of course, is nourishing for the hungry, hungry HiPPO (who loves talking about expert opinion).

What if, instead, you talked in terms of upside, risk mitigation, experimentation, and cost versus opportunity?

The world sort of opens up for those interested in experimentation. Obviously, you still have to be grounded in reality. You can’t throw insane test ideas at the wall and hope that everyone jumps on board.

But if you can propose your ideas in the context of a “what if,” something you can test out with an A/B test rather quickly, you can probably get people on board.

“We see here that 40% of our users are dropping off at this stage of the funnel. We’ve done a small amount of user research and found that our web form is probably too long. It would take us very little time to code up X, Y, and Z variants, and we’d have a definitive answer in four weeks. The upside is big. The risk is low. Shall we run the experiment?”

It’s much harder to argue against something like this.

Most of persuasion is framing. If the person you are trying to convince feels attacked or threatened (“you think a scrappy A/B test is better than my 25 years of experience?!”), you’re not going to get far.

If you pull people into the ideation process and propose ideas as experiments with lots of upside, it’s easier to get people involved in the process. Or maybe just start throwing the words “hypothesis,” “experiment,” “statistically significant,” “risk mitigation,” and “uncertainty reduction” into all of your conversations, and hope that people follow along.

It doesn’t need to be limited to experimentation, either. You can make it normal to talk about pulling the data, cohort analyses, user research, and others. These should be normal processes for decision making that replace gut feel and opinions.

5. Evangelize Your Wins

It’s important to stop and take the time to smell the roses. When you win, celebrate! And make sure that others know about it.

It’s through this process of evangelization that you both cement in others’ minds the impact and results you’re creating, and recruit others to become interested in running their own experiments.

How do you evangelize your wins? Many ways:

  • Have a company Wiki? Write your experiments there!
  • Send a weekly email including a roundup of the experiments.
  • Schedule a weekly experiment readout that anyone can attend.
  • If possible, write external case studies on your blog. This isn’t always possible, but can be a great way to recruit interesting candidates to your program.

I’m sure there are many other interesting and creative ways to celebrate and evangelize wins as well. Let me know in the comments how your company does it.

6. Define Your Experiment Workflow/Protocol

If you want everyone to get involved with experimentation, make sure that everyone understands the rules. How does someone set up a test? Do they need to work with a centralized specialist team or can they just run it themselves? Do they need to pull development resources? If so, from where?

These are all questions that can cause hesitation, especially for new employees, and this hesitation can really hinder the pace of experimentation throughput.

This is why it’s so beneficial to have someone, or a team, owning the experimentation process.

Even if you don’t have someone in charge of the program, though, you can still build out the documentation and protocol. At the very least, you can create an “experimentation checklist” or FAQ that can answer the most common questions.

In Switch, Chip and Dan Heath wrote:

“To spark movement in a new direction, you need to provide crystal-clear guidance. That’s why scripting is important – you’ve got to think about the specific behavior that you’d want to see in a tough moment, whether the tough moment takes place in a Brazilian railroad system or late at night in your own snack-packed pantry.”

“Clarity dissolves resistance,” they say.

Luckily, there are tons of examples of testing guidelines, frameworks, and rules out there to borrow from.

7. Invest in Ongoing Education and Growth Opportunities

This is anecdotal, but I’ve found that the best organizations, those that run very mature experimentation programs, tend to invest heavily in employee development.

That means granting generous education stipends for conferences, books, courses, and internal trainings.

Different companies can have different protocols as well. Airbnb, for example, sends everyone through data school when they start at the company. HubSpot gives you a generous education allowance.

There are tons of great CRO-specific programs out there nowadays, specifically through CXL Institute, that I think everyone should run through.

8. Embed Subtle Triggers in Your Organization

I’ve found one of the most powerful forces in an organization is inertia. It’s exponentially harder to get people to use a new system or program than it is to incorporate new elements into the current system.

So what systems can you use to inject triggers that inspire experimentation?

For one, if you use Slack, this is certainly easy. Most products integrate with Slack—Airtable, Trello, GrowthHackers Northstar, and others—so you can easily set up notifications to appear when someone creates a test idea or launches a test.

Just seeing these messages can nudge others to contribute more often. It makes the program salient overall.
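As a sketch of how simple such a trigger can be, here’s a minimal notifier using Slack’s incoming webhooks (the webhook URL, test name, and message wording below are placeholders, not a real integration):

```python
import json
import urllib.request

def launch_message(test_name: str, owner: str) -> dict:
    """Build the payload Slack renders in-channel for a test launch."""
    return {"text": f":rocket: Experiment launched: *{test_name}* (owner: {owner})"}

def notify_slack(webhook_url: str, message: dict) -> int:
    """POST a message to a Slack incoming webhook; return the HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Hypothetical usage -- the webhook URL comes from your Slack admin:
# notify_slack("https://hooks.slack.com/services/T000/B000/XXXX",
#              launch_message("Express checkout payment options", "alice"))
```

Wire the same call into whatever creates test ideas (Airtable, Trello, your own tracker), and the channel does the nudging for you.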

Whatever triggers you can embed in your current ecosystem—even better if they’re automated—can be used to help nudge people toward contributing more test ideas and experiment throughput.

9. Remove Roadblocks

According to the Fogg Behavior Model, there are three components that factor into someone taking an action:

  • Motivation
  • Ability
  • Prompts/Triggers


I think the ability, or the ease with which someone can accomplish something, is a lever that we tend to forget about.

Sure, you can wow stakeholders with potential uplifts and revenue projections. You can embed triggers in your organization through Slack notifications and weekly meetings so that people don’t forget about the program. But what about making it easier for everyone who wants to run a test?

That’s the approach Booking.com seems to have taken, at least according to this paper they wrote on democratizing experimentation.

Some of their tips include:

  • Establish safeguards.
  • Make sure data is trustworthy.
  • Keep a knowledge base of test results.

To summarize, do everything you can to onboard new experimenters and mitigate their potential to mess up experiments. Of course, everyone has to go through the beginner phase of A/B testing, where they’re expected to mess things up more often than not. The trick, however, is to make things less intimidating while also making it less likely that the newbie may drastically mess up the site.

If you can do that, you’ll soon have an excited crowd eagerly waiting to run their own experiments.

Conclusion

An organization with a mature testing program knows that almost all of it is dependent on a nourishing experimentation culture. One cannot operate, at scale and truly efficiently, with only one or a handful of rogue experimenters.

The program needs to be propped up by influential executive stakeholders; and everyone in the company needs to buy into the basic process of making evidence-based decisions by using research and experiments.

This article outlines some ideas I’ve seen to be effective in establishing a culture of experimentation, though it’s clearly context dependent and not limited to the items on this list.

Got any cool ideas for implementing a culture of experimentation? Make sure you let me know!

The post How To Build A Culture Of Experimentation appeared first on Blog.

What’s the ROI of SEO for eCommerce Websites?

How we calculate the ROI of SEO services for eCommerce sites: a detailed analysis of how increases in organic traffic will likely impact revenue.

It is easier than you think to estimate the ROI of SEO improvements for eCommerce websites.

In this article, we’re going to show you how we do it.

We Recently Improved Our Method

We recently improved the way we calculate the expected ROI of new SEO improvements for our eCommerce clients.

In the past, when talking to a new client, we used our years of experience working with hundreds of eCommerce clients to give our best guess about the ROI they could expect in three to six months.

We called this the “eyeball method,” and it’s still a part of the method we use to calculate ROI (as you’ll see below).

Essentially, we’d say, “We believe your organic traffic will grow by 10%, 20%, and so on.”

That was fine, but it didn’t demonstrate for clients the revenue impact SEO improvements would have on their business.

Now, with the addition of data, we see much better buy-in from prospects considering our services.

If you need to sell someone on the value of improving organic traffic to your eCommerce site, this is an ideal way to present the opportunity in a way that better demonstrates its direct impact on revenue.

Note: We’ve worked with dozens of eCommerce companies to increase conversions based on organic traffic. We can create a custom SEO strategy for your business. Contact us here.

Calculating the Opportunity

The first thing to calculate is the opportunity available if you improve rankings across a set of keywords you’re targeting.

When we’re investigating a new client’s account, we start by using a tool like SEMRush to determine how a site is already ranking.


Using SEMRush, we’d filter the list down to the keywords they’re ranking for at the bottom of page one (positions 7-10), as well as on pages two and three. These represent their top opportunities for SEO improvements and new conversions.

We then add up the average monthly search volume for those keywords, which represents the majority of the traffic and revenue potential from improving those rankings.
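As a sketch of this step (the keyword rows below are made up; a real export would come from your SEO tool):

```python
# Hypothetical keyword export (e.g., from SEMRush): keyword,
# current position, average monthly search volume.
keywords = [
    ("running shoes sale", 8, 12_000),
    ("trail shoes womens", 14, 6_500),
    ("waterproof sneakers", 23, 3_100),
    ("running shoes", 3, 40_000),  # already ranking well, so excluded
]

# Striking distance: bottom of page one (positions 7-10) plus
# pages two and three (positions 11-30).
opportunity = [(kw, pos, vol) for kw, pos, vol in keywords if 7 <= pos <= 30]
total_volume = sum(vol for _, _, vol in opportunity)

print(f"{len(opportunity)} keywords with {total_volume:,} monthly searches in play")
```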

The Eyeball Method

Now that you know the opportunity that’s in front of you, the next step is to estimate how much of that opportunity you can capture.

For this step, it will help tremendously to be working with an SEO expert with experience working with other eCommerce sites.

When a new eCommerce company is interested in working with us, the first thing we do is review their site’s current and past performance, including:

  • Rankings for target keywords
  • Organic traffic
  • Revenue from organic traffic

This gives us a good idea of how the site is doing overall and what impact we believe we can make on the site.

We give each prospective client a detailed breakdown of the strengths and weaknesses we find in their search ranking performance. When assessing a site, we label different opportunities with red, yellow or green lights based on some of the factors listed below. This light system helps us set proper expectations around time to results and is one of the elements used in eyeballing and communicating the final ROI expectations.


From our experience, we estimate the impact we expect our SEO improvements to have on rankings, traffic—and ultimately, the most important factor—their revenue.

We’d say something like: “We expect to increase your organic search traffic by 30% over the next three to six months.”

Because our team has been doing this work for so long, we tend to get extremely close with our estimates.

What we’ve changed is what we do with this assessment.

It’s good to tell a new client that we expect to increase their organic search traffic by 20% or 30% over the next few months.

But something was missing.

Stats.

Adding the Numbers to Demonstrate the Revenue Opportunity

When a company is going to invest in something like SEO services, they want to know how it’s going to pay off for them. They want to see the numbers. And our “eyeball method” didn’t cut it.

So we began developing a new method that more strongly tied potential new revenue to an increase in organic search traffic.

Because seeing the numbers can help you grasp the realistic potential behind your SEO work.

When we look at the SEO of an eCommerce website, two of the most important metrics are:

  • Annual revenue from organic traffic
  • Average revenue per session

For example, a typical eCommerce client might have numbers that look like this:

ROI of SEO: Sample numbers of a typical eCommerce client.

Using those numbers, we can estimate what monthly revenue would be if we increased the number of monthly sessions that occur from organic traffic.
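The arithmetic behind that estimate is straightforward. Here’s a sketch with hypothetical figures, not an actual client’s numbers:

```python
# Hypothetical figures -- substitute your own analytics numbers.
monthly_organic_sessions = 500_000
revenue_per_session = 1.67  # annual organic revenue / annual organic sessions

for uplift in (0.10, 0.20, 0.30):
    extra_sessions = monthly_organic_sessions * uplift
    extra_revenue = extra_sessions * revenue_per_session
    print(f"+{uplift:.0%} sessions -> ${extra_revenue:,.0f} extra per month")
```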


Impressive numbers, right?

We still tell prospective clients that we expect to increase their average number of sessions by 30% in six months, etc. (By the way, when we pick a projected increase and timeframe, we make our estimate conservative.)

The difference is that now they can see what it means in terms of increased revenue.

If we increased sessions by 30% for a client with numbers similar to those shown above, we’d increase their monthly revenue by $249,999.

Subtract our fee, and that gives them an ROI they can take to their boss or whoever needs to sign off on committing resources to SEO.

It’s Not an Exact Science, But It Can Get You Approval to Move Forward

When you’re working with your boss or an executive—hoping to get approval for SEO improvements to your site—don’t be vague.

Find those keywords where you rank in positions 7-10, as well as on pages 2 and 3 of the search results, then work with an experienced SEO specialist to calculate the numbers as we describe above.

That way you’re taking a number to your boss that’s directly tied to revenue, instead of just a traffic number that may or may not be meaningful to them.

None of this is an exact science of course. Some of those keywords might not convert; others might not be as relevant as you thought they were.

But in our experience, this method is better than any other for demonstrating the eCommerce opportunity to decision makers without having to spend countless hours doing detailed keyword research and complex ROI calculations.

Note: Want a custom in-depth assessment of your search ranking performance? Contact us to get started.

Video Series: Conquer Your Biggest Testing Challenges

Here at Brooks Bell, we work with clients that are at varying stages of maturity when it comes to experimentation. Despite the differences in these partnerships, you might be surprised to learn that regardless of whether we’re working with a new or established testing program, they all face common enemies: pressure to deliver results; inefficient processes; a lack of understanding and support for testing; and difficulty iterating on and applying learnings from test results.

In this four-part video series, you’ll hear from Suzi Tripp, our Sr. Director of Innovative Solutions, Jonathan Hildebrand, Sr. Director of Design & UX, and Claire Schmitt, VP of Strategic Consulting and Solutions at Brooks Bell. They’ll discuss tips and tricks for addressing these challenges. You’ll also get insight into best practices for organizing your testing program, developing smarter tests, showcasing your results and obtaining insights about your customers.

Check out the first video below, or watch the full series by filling out the form at the bottom of this post.

Part 1: Storing and Learning from Past Tests

Fill out the form below to view the other three videos, covering:

  • Collaborative Ideation / Strategizing Better Tests
  • Communicating Testing Insights Up The Ladder
  • Retaining and Growing Testing Budget


The Truth About Kids and Technology: Jean Twenge (iGen) and Nir Eyal (Hooked) Discuss Tech’s Effect on Children’s Mental Health


Recently, I was invited to discuss how technology might impact children’s mental health at the Johnson Depression Center at the University of Colorado. I shared the stage with Dr. Jean Twenge, author of the book iGen and an article in The Atlantic that got a lot of attention titled, “Have Smartphones Destroyed a Generation?” Dr. Twenge and I […]

The post The Truth About Kids and Technology: Jean Twenge (iGen) and Nir Eyal (Hooked) Discuss Tech’s Effect on Children’s Mental Health appeared first on Nir and Far.

Measure Your Success


The post Measure Your Success appeared first on Bound.

Business management consultant Peter Drucker is often credited with the saying "you can't manage what you can't measure." By this he meant that you don't know whether you're succeeding unless your goal is defined and tracked.

When it comes to DMO websites, there are six goals we see tracked more often than others. They are:

  • eNewsletter Sign-up
  • Visitor Guide Download
  • Aggregate Bounce Rate
  • Aggregate Time On Site
  • Aggregate Goal Conversion Rate
  • Aggregate Pages Per Visit
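To make the aggregate goals concrete, here's a minimal sketch of how the last four metrics might be computed from raw session records. The data structure and field names (`pages`, `seconds`, `converted`) are illustrative assumptions, not any particular analytics platform's schema.

```python
def aggregate_metrics(sessions):
    """Compute aggregate web metrics from a list of session records.

    Each session is a dict like: {"pages": 3, "seconds": 142, "converted": False}
    """
    n = len(sessions)
    bounces = sum(1 for s in sessions if s["pages"] <= 1)  # one-page visits
    return {
        "bounce_rate": bounces / n,
        "avg_time_on_site": sum(s["seconds"] for s in sessions) / n,
        "goal_conversion_rate": sum(s["converted"] for s in sessions) / n,
        "pages_per_visit": sum(s["pages"] for s in sessions) / n,
    }

# A tiny made-up sample of four sessions:
sessions = [
    {"pages": 1, "seconds": 10, "converted": False},
    {"pages": 4, "seconds": 180, "converted": True},
    {"pages": 3, "seconds": 95, "converted": False},
    {"pages": 1, "seconds": 5, "converted": False},
]
print(aggregate_metrics(sessions))
```

However your analytics tool labels them, these four numbers all reduce to simple averages over sessions, which is what makes them easy to benchmark against peers.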

Because it is the most commonly tracked, we covered eNewsletter Sign-up in more detail in this previous post. In this post, we’ll pull from our report State of Personalization for Destination Marketers, so you can see how you measure up to your peers.

In the charts below, the Non-Targeted numbers represent website visitors who were not served personalized content. If you are not serving personalized content, you should compare your own performance against this group.

If you are serving personalized content, you will be in the higher performing group and should compare your performance to that of the website visitors tracked under Targeted.

How does your website compare to your peers on these key metrics? Does this bring up questions about what you're measuring and managing? A simple but well-organized measurement strategy is critical to managing a successful website. If you have any questions about best practices, please feel free to contact the Bound team here, and we'll be happy to chat.

If you would like to download the Free Guide: State of Personalization 2018 Report from which we pulled these metrics, click here. In the report, you will learn how destination marketers like you are leveraging:

  • Website personalization benchmark statistics
  • Strategies for implementing personalization
  • 2018 trends in content and personalization
  • Real case studies from successful destinations



Why You Shouldn’t Split Test During an eCommerce Redesign

Why we don’t recommend split testing and other best practices for eCommerce website redesigns.

If you’re seeing mediocre performance and customer complaints on your eCommerce site (or you’re simply envying a competitor’s flashy new site), you might be tempted to start over.  

Trash the old site and come up with something brand new.

Because new has to be better, right?

Not necessarily.

In this post, we’ll explain the risks and costs of massive redesigns, along with our recommendation for how to proceed. We’ll also talk about the specific situations in which starting over is a good idea and the best practices for running a successful redesign.

The Three Main Risks of Redesigns

  1. Budget Creep: Redesigns are time- and resource-intensive. They almost always take twice as long as you expect and cost twice what you've budgeted. When a prospective client tells us their redesign will be complete in two months, we plan on it taking four.
  2. Opportunity Cost: Let’s say a complete frontend facelift takes six months to complete. During those six months, you’re maintaining your existing site (but not improving it). You’ve lost six months’ worth of opportunities to make valuable improvements that could have been increasing your revenue all along!
  3. Too Many Variables: When you launch a new site, it’s difficult to know what is and isn’t working.

When you're continuously improving an existing site, you make a few changes at a time, and you can usually isolate the cause of changes to traffic or conversions, especially if you are properly testing changes as part of the process.

With a brand new site, everything changes at once. So if numbers tank, in many cases, you’ll have no specific idea why.

These three reasons are why most of the time, we don’t recommend starting over. Instead, our philosophy is to make research-backed changes to continuously improve an existing site, typically with testing along the way.

The Alternative: Continuous Optimization

Continuous optimization is an ongoing process in which small and large changes are rolled out via A/B testing or limited and measurable updates.

Want to change your color scheme? Design a new banner? Move to a one-page checkout process?

Instead of making all of these changes at once, as you would with a redesign, test them individually. By doing so, you’ll have solid proof of their impact on your conversion rate and revenue.

This also addresses the 'design by committee' process that eventually consumes most redesigns. Instead of creating a hodgepodge page full of compromises among stakeholders, a continuous optimization process frees managers to test a number of ideas and lets the users decide, with the conversion-rate benefits that follow.

Bottom line: you can make incremental improvements in a much shorter time period without the expense or opportunity cost of an overhaul.
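As for what "solid proof of impact" can look like: a standard way to judge an individual change is a two-proportion z-test on the control and variant conversion rates. The sketch below is generic statistics with made-up numbers, not a specific tool or method from this post; most testing platforms run an equivalent check for you.

```python
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Return (relative lift, two-sided p-value) for variant B vs. control A.

    conv_* are conversion counts, n_* are visitor counts.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF: Phi(x) = 0.5*(1+erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return (p_b - p_a) / p_a, p_value

# Hypothetical test: 2.0% vs. 2.6% conversion on 10,000 visitors each.
lift, p = z_test(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"lift: {lift:.0%}, p-value: {p:.4f}")
```

A small p-value (conventionally below 0.05) is the kind of evidence that lets you attribute a revenue change to one specific update rather than to chance.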

3 Situations When It’s Still Better to Start Over

Even though we’re firm believers in continuous optimization, there are three situations in which it’s still a good idea to start from scratch.

  1. When your site is near impossible to maintain.
  2. When you’re going through a massive rebranding.
  3. When the site is so ridiculously ugly and unusable that a designer’s best guess would be better than what you have now.

In any other situation, we suggest continuous optimization.

eCommerce redesign best practices: a site redesign is a great option during a rebrand.

In 2011, CSN Stores rebranded itself as Wayfair. This was a good opportunity for a redesign.

Note: Unsure whether you should start over or work to optimize your existing site? We can help you figure out the best way to move forward. Contact us to get started.

3 Best Practices to Follow If You DO Need to Redesign

1. Make Research-Backed Design Choices

Your unique customers have specific insights and expectations that outweigh a designer’s opinion:

Every.

Time.

You should be listening to your customers (through analytics, surveys, heatmaps, user testing and any other method that leads to user insights) before, during, and after the redesign process.

Follow this to-do list before starting a redesign project:

    • Study the analytics from your existing site for insights
    • Run user testing and focus groups on the existing site to see what users like and don't like
    • Complete a competitive analysis of major sites in your market
    • Set internal goals so that everyone is on the same page when you start

Then run the redesign through an iterative agile process. Complete a piece of the project, test, then move forward.

2. Don’t Run Split-Tests on the Current vs. New Site

When our clients are undergoing a redesign, they often ask us to run a split test comparing the conversion rate of their existing site with their new site.

They want to know that the new site is going to perform well.

While it’s essential to exhaustively test a redesign before launch, comparing the old and new site is not typically the best way to do this.

The main problem is that the sites are generally not set up the same way, so it’s impossible to run a fair A/B test.

To run a successful split test:

  • The backends need to match exactly: the same architecture and the same content, with the user interface as the only difference. If you've made major architecture changes as part of your redesign, testing accurately will be difficult at best. Successful A/B testing requires very precise variables in a controlled environment.

If the sites are majorly different, it’s difficult to know what causes the results you’re seeing and, ultimately, the test won’t help you.

  • There can be no major situational differences between your testing groups. For instance, if you sell bathing suits and you show the old site to folks in Maine and the new site to folks in Florida, you'd need to run the test during a month when it's bathing suit season in both states. Controlling all of these variables across two different sites is difficult.
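One common way to avoid situational skew like this is to assign each visitor to a group at random rather than by geography or time. As a hypothetical sketch, deterministic hash-based bucketing keeps each visitor in the same group across visits; the salt and 50/50 split here are illustrative choices, not any particular testing tool's behavior.

```python
import hashlib

def assign_variant(visitor_id, salt="redesign-test", split=0.5):
    """Stably map a visitor ID to 'control' or 'variant'.

    Hashing (salt + visitor_id) spreads visitors uniformly and avoids
    grouping them by region, device, or time of day.
    """
    digest = hashlib.sha256(f"{salt}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "variant" if bucket < split else "control"

# The same visitor always lands in the same group on every visit:
assert assign_variant("user-42") == assign_variant("user-42")
```

Because assignment depends only on the visitor ID, both groups see the same mix of seasons, regions, and traffic sources, which is exactly the controlled environment a fair split test needs.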

Plus, there’s another major problem: what do you do if the results from the new site aren’t any better? Do you abandon the new site?

If you've committed to a redesign, then testing the new site against the old one is probably a waste of time and resources. Comparing their performance isn't as useful as other exercises you can run during the redesign process.

3. Instead, Focus on Improving Your Redesign

While there are plenty of DIY user testing sites out there, we recommend steering clear and hiring an experienced testing team. The secret to user testing is knowing how to write a good test, and CRO specialists can help you find the best user experience to increase conversions.

You can also let customers opt into a beta site to collect powerful user feedback before officially launching.

By the way, split testing can be a part of this user testing. Instead of comparing the old and new site, we strongly recommend comparing design elements within the redesign itself.

For instance, we ran split tests on a client's homepage, focusing on specific elements above the fold. The cumulative result was similar to a redesign.

eCommerce redesign: Old Homepage

eCommerce redesign: New Homepage with Updated Design Elements

Tests like these have lost in the past; much like a redesign by committee, a new design isn't guaranteed to outperform the old one. In this case, we tested the area and confirmed that the new version was better instead of guessing.

Changes to the homepage resulted in a 107% lift in conversions.

We were also able to use some of the winning elements in other areas of the site since they had already proven to improve conversion.

The Bottom Line: No Matter How You Update Your Site, Test as You Go

Whether you’re giving your site a frontend facelift, completely overhauling the architecture, or using a continuous optimization process to make incremental changes, for us, the golden rule is to use research to guide your decisions.

By listening to your customers, we believe you can transform your site into a well-performing, beautiful creation that will make your competitors jealous.

Note: We’ve performed hundreds of user tests for our clients. We can help you write and run tests that bring quality insights about your eCommerce site. Contact us to get started.

Easy personalization; how to get product–market fit; The Queen; and some great resources


Here are some great resources we have recently shared with each other.

Personalization features are now available in Google Optimize

Google Optimize now allows you to create personalization. If you want to offer, say, free shipping to all customers in San Francisco, then just…

  • Click the "Create Experience" button
  • Select "Personalization" as the experience type
  • Using the […]

If it’s Creepy it isn’t Personalization


I have spent the better part of the last decade working in the marketing personalization space and I can say with confidence that personalization that can be described as intrusive, creepy or offensive is not personalization. As a rule...