How to Hire a CRO Agency: A Process to Get It Right

Historically, CRO has taken a backseat to SEO, PPC, and other forms of digital marketing. But it’s on the rise—60.8% of businesses are making CRO a priority.

As more companies think about starting or expanding CRO programs, the “agency or in-house?” question is also earning more attention. Hiring an agency can make sense for companies that don’t have the time or resources to build an in-house team.

But to find the right agency, you need to do your homework. CRO isn’t cheap: Picking an agency is a five- or six-figure decision. (Some 44% of companies spend more than $10,000 a year on A/B testing products alone—many spend far more.)

Before you start contacting agencies, weigh the pros and cons of doing the CRO work in-house versus outsourcing.

Should you outsource CRO to an agency?

Both in-house and agency CRO teams have advantages and drawbacks. The right choice depends on your company’s size, goals, and budget.

Even then, some limitations apply to any CRO program. For example, if you don’t have enough traffic to your site, you won’t have enough data to run tests. The potential ROI of CRO also varies based on revenue. Peep Laja, CXL Founder, explains:

If you increase sales by 1% for a company that makes $100 million, that’s $1 million of added revenue. CRO makes sense. But if you get that same 1% increase for a site that makes $1 million a year, the ROI—$10,000—isn’t there. 
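To put that math in a form you can reuse with your own numbers, here’s a minimal back-of-the-envelope sketch. The 1% lift mirrors the example above; the $120,000 annual program cost is a hypothetical placeholder, not a CXL figure.

```python
# Back-of-the-envelope CRO ROI check. The 1% lift mirrors the example above;
# the $120k/year program cost is a hypothetical placeholder -- use your own numbers.

def added_annual_revenue(annual_revenue: float, relative_lift: float) -> float:
    """Extra revenue per year from lifting conversion-driven sales by `relative_lift`."""
    return annual_revenue * relative_lift

program_cost = 120_000  # assumed yearly cost of agency fees, tools, and implementation

for revenue in (100_000_000, 1_000_000):
    gain = added_annual_revenue(revenue, 0.01)
    verdict = "worth exploring" if gain > program_cost else "ROI probably isn't there"
    print(f"${revenue:>11,} in sales -> ${gain:>9,.0f} from a 1% lift ({verdict})")
```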

If your company and site clear those initial hurdles, here are other factors to consider.

Building an in-house team:

  • Takes time. Finding seasoned CRO experts can take many months, and launching a CRO program can take even longer.
  • Is prohibitively expensive for many companies. You’ll need at least $500,000 to build a skeletal team.
  • Can be a bottleneck for testing velocity. An in-house team often can’t launch multiple tests at once because their plates are full.
  • Doesn’t make sense if you have only one funnel to optimize. Once your in-house team maxes out conversions for that funnel, what will they do next?

On the other hand, hiring a CRO agency isn’t always the best fit either. This is the case if you have:

  • No clear (or realistic) goals. You’re not aligned on what an experimentation program needs to achieve or can realistically achieve.
  • No one to own the relationship. You need a point person who can clear roadblocks and hold the agency accountable.
  • No one to implement changes. Test wins become revenue only after they’re deployed across the site.

The two options aren’t mutually exclusive: If you hire an agency, you can lean on them to help you build and run (or speed up) an in-house CRO program. They can also, periodically, serve as a second set of eyes on a program or come in to help resolve a particular challenge.

As Viljo Vabrit, Managing Director of CXL Agency, notes:

The best goal for every organization should be running a CRO program in-house. Hiring a CRO agency should create additional capacity when there’s a lack of resources.

Agencies are also used to teach processes to improve existing CRO programs. If a CRO program ends after an agency is done with their work, it’s not been a successful engagement.

Optimization has to be continuous.

If your business goals and challenges push you away from the in-house option, here are some additional reasons that companies settle on agency services.

4 common reasons companies go with CRO agencies

1. Conversion rates are low—or unknown.

Plenty of sites have succeeded at acquisition—getting people to the site—but struggle to persuade those same visitors to take action.  

Codecademy struggled with that exact issue. “We were just trying to get better at monetizing the business side of Codecademy, and CRO is a big part of that,” said Daniel Layfield, Product Manager of Growth at Codecademy. “Codecademy has always had strong traffic, but we just didn’t have a lot of expertise in conversion.”

Other companies have trouble nailing down their conversion rate due to analytics challenges. That was the obstacle Priscilla Leake, previously the Conversion Optimization Manager for startup JungleScout, faced. Her team needed to untangle a misconfigured analytics setup:

I was hired to work on the marketing team and own analytics and CRO for the website and realized that our revenue-tracking in Google Analytics was misconfigured by 60–80%, not directly tied to our credit card processor, and didn’t have events or tracking to account for churn and returns—which had large revenue implications for the business model. 

Before scaling testing, we wanted to get foundational tracking implemented on the marketing side to better understand testing results relative to the health of the business.

2. You can test CRO before going all-in with a team.

Only 7.8% of companies made CRO less of a priority in 2018, and most optimizers (56.4%) reported better results compared to the previous year. Strong CRO programs get results. But how do you go from “nothing” to “strong”?

[Chart: effectiveness of CRO in 2018 vs. 2017]
Most CRO analysts report that programs were more effective in 2018 compared to the previous year. (Image source)

Outsourcing CRO to an agency lets companies try optimization before building their own team. Unlike with SEO, you don’t have to wait six months to see whether changes are delivering results.

If you have a wealth of traffic and want to see if you can make it more efficient, a CRO agency is an option. The ROI from a short-term project can help gauge whether building an in-house team makes sense.

3. In-house teams are a big, long-term financial commitment.

“Running an effective CRO program is like running an Olympic cycling team,” Vabrit advises.

He continues: 

The coach (CRO agency) is not enough to get your champion to the podium; you need world-class equipment, physiotherapists, technicians, funding, etc. If some of that is missing, the whole program falls apart. A CRO program is never isolated from other marketing and development programs.

Experts are hard to source and expensive—an average salary of $76,000 in the United States, according to Payscale. And you don’t want to hire below-average talent.

As Laja cautions, “CRO is 100% skill work—it’s not a lottery or game of luck. A mediocre person can get you results if your site is terrible. If it’s somewhat or highly optimized, hiring a cheap person won’t work.”

While hiring one or two CRO generalists can help a company get set up, it takes a team to implement the ongoing testing that makes a major impact on your bottom line. That team can set you back around $500,000 annually.

Even if your team is able to run a few tests here and there, the reality is that CRO goes far beyond a single test. It’s a process that takes time, consistent effort, and ongoing resources.

“It is a process, not a solution,” cautions Ben Labay, Research Director of CXL Agency. “There aren’t any silver bullets in CRO—those days are gone.”

4. The rate of testing is too slow.

The slow rate of testing by in-house teams is one of the biggest reasons companies outsource CRO work to agencies.

Lars Lofgren, CEO of Quicksprout, said that, in his experience, a bare-bones growth program typically takes six months to establish, and several months more to “run at full speed.” Agencies can get up and running more quickly and, quite often, run faster.

“The development resources are a common bottleneck,” explains Laja. “Agencies can come in and offer additional capacity for building tests.”

If the agency route is the right choice, here’s how to hire the right agency.

How to set an agency up for success (and get the most out of your engagement)

Before you jump on a call with prospective agencies:

  • Figure out your budget and needs;
  • Compile your own testing history and customer research. 

Figure out your budget and needs

“CRO services” can mean many things: ongoing experimentation, user research, or analytics audits.

For help with ongoing experimentation, retainer services make sense. A longer engagement means an agency can run multiple, iterative experiments over time.

Many CRO agencies also offer stand-alone research projects that include user surveys, mouse-tracking analysis, and user testing. Research can help your internal team better understand what customers want, what problems they’re encountering, and how they’re using your site.

You can also engage an agency, as Leake did, for an analytics health check. You can’t run an effective experimentation program—or, for that matter, a marketing department—on bad data.

Once you have a sense of your needs, you should be able to answer the following questions:

  • How much can you realistically spend? What is your budget for CRO services plus the costs to run tests? One report shows that top-converting companies spend more than 5% of their budget on optimization alone. “It’s tough to find a great agency under $10,000 a month,” says Laja. “And $3,000 a month is a major red flag.”
  • What are your objectives? Get as specific as possible. You won’t know the percentage improvements that are possible; you may not even know what needs optimizing. But any details beyond “increase conversions” (e.g., you’re planning UX changes, leads are down, your internal team lacks experience for a particular project, etc.) can help identify the right agency partner.
  • How will you measure success? This could include KPIs like customer lifetime value, average conversion rate, cart abandonment rates, and others.
  • Which services fit your needs? Are you looking for ongoing experimentation, customer research, analytics implementation, or all of the above?
  • What tools are you currently using? Catalog your martech stack with all the log-in details. With this handy list, it’s easy for an agency to dive in and get started right away.
  • Who’s the point person in your company for the agency?

Compile your own testing history and customer research

Historical testing data can help an agency avoid repeating past work.

If your team or company has run experiments or done testing in the past, put together a comprehensive history. Include details like:

  • A description of each test/experiment;
  • The goal of each experiment;
  • How and when the test was executed;
  • The results and learnings from all testing.

The history offers much-needed context and avoids waste. “We can either learn from them and iterate on these ideas, or we will just know what has been already done. This will save us time,” says Gertrud Vahtra, an analyst for CXL Agency.

Customer data is equally vital. “Experimentation and testing is about being customer-centric,” Labay notes. “It’s about listening to customers, finding out about their problems, coming up with solutions and testing those. It is the opposite and doesn’t work very well with product-centric and siloed teams.” 

Collect all existing information about your ideal customers.

You can share existing documents or set up a call to walk through the information. If an agency doesn’t request historical testing data or customer info, consider it a giant red flag.

Once you’ve gathered all your research and data, it’s time to start chatting with a few CRO agencies. 

How to vet CRO agencies

Word-of-mouth referrals are a great first step. It’s also a good idea to ask agencies for references. Request a list of companies they’ve worked with in the past, particularly any in your industry—and actually contact them. 

“Asking about an agency’s experience is essential, especially to assess if an agency has experience working with companies in your league/size—startups, SMEs, enterprise,” Vabrit notes. “A solid CRO process should be universal to whatever industry; experience is needed to know how to apply it effectively for organizations in different growth stages.”

You can also ask agencies about their experience solving specific problems. In Leake’s case, JungleScout looked for digital agencies that worked on analytics issues. “Our use-case was pretty specific since it was more of an analytics/development challenge than a traditional CRO ask,” Leake explained. 

To figure out whether an agency can help you achieve your specific goals, contact at least three of their previous clients and ask key questions:

  • What problem did they hire the agency to solve?
  • Did they have a process when researching and launching experiments? If so, what did that process look like?
  • What were the tests and experiments that the agency launched?
  • What were the results of those experiments?
  • How was the agency’s response time and level of service? Did they always reply quickly and were they polite and helpful? Were they transparent throughout their engagement?
  • Is the company still working with the agency? If not, why?

If an agency is hesitant to provide a list of clients or claims that it’s “confidential,” Vahtra says, that’s a red flag. While an agency might not be able to divulge info about all their clients due to non-disclosure agreements, they should be able to offer a few client references.

Two major red flags from a prospective agency? Not asking about your testing history or customer research, or claiming that no references are available due to confidentiality agreements.

On top of referrals and references, you can also establish an agency’s credibility with case studies. Check their website (or ask your contact person) for case studies for customers within your industry, of a similar size, and/or with a similar problem. 

Case studies reveal agencies’ approach to CRO and give you an overview of the kinds of problems they’re great at solving. But be wary of case studies that tout big conversion gains without offering specifics on how they achieved those results. 

Almost every agency has case studies that showcase outsized wins. But how big were their sample sizes? Did the test results hold up when pushed live across the site? If it seems too good to be true, it probably is. 

Ask agencies about their process, costs, and structure

Once you’ve checked an agency’s references, it’s time to ask about their approach to CRO—the related elements of process, cost, and structure.

Process. “Ask about the process that’s being used to optimize websites,” Vabrit suggests. “This process should be data-driven. Clients should investigate what data will be gathered, how it will be analyzed and turned into a testing hypothesis, and how learning management is organized.”

Frameworks are particularly important because the CRO industry is rife with rogue consultants promising big lifts. In reality, those “experts” are just throwing ideas at the wall to see what sticks.

Asking about stopping rules for tests can be revealing. There are no magic numbers; an answer like “100 conversions per test” is a major red flag. Sample size calculators should determine how many conversions are necessary for statistical validity.
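To make that concrete, here’s a minimal sketch of the standard two-proportion sample-size calculation such a calculator runs. The 3% baseline conversion rate and 10% relative lift are illustrative assumptions, not figures from the article.

```python
# Minimal sample-size sketch for a two-variant A/B test (two-sided z-test).
# The 3% baseline and 10% relative lift are illustrative assumptions.
from scipy.stats import norm

def visitors_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for 95% confidence
    z_power = norm.ppf(power)           # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2

n = visitors_per_variant(baseline=0.03, relative_lift=0.10)
print(f"~{n:,.0f} visitors per variant")
# Roughly 53,000 visitors (about 1,600 conversions) per variant -- far more
# than a flat "100 conversions per test" rule would suggest.
```

The point isn’t the exact formula; it’s that the required sample size falls out of the baseline rate, the lift you care about, and the confidence and power you demand, not out of a fixed conversion count.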

“A good agency has already found the patterns of successful work,” said CRO expert Andre Morys, CEO and founder of konversionKRAFT. Morys continues:

And if you found these patterns, usually you build your own framework or methodology because success in CRO is not a result of trial and error. There are enough people who misuse A/B testing for trial and error or gambling. But what you want is a success that is based on a great methodology.

There are a variety of CRO systems and frameworks used throughout the industry. For some examples, check out this rundown of a few effective CRO systems, or read about CXL Agency’s ResearchXL framework and PXL prioritization model.

[Diagram: the ResearchXL framework for conversion research]
What framework does a CRO agency candidate use? Great agencies have repeatable processes.

Some other ways to separate the amateurs from the experts:

  • Big, upfront promises. If they predict big lifts in conversions without running any tests, those results are unlikely to materialize.
  • No process details. A good agency will be transparent about all the testing details before launching. For example, they’ll work with you to decide how long specific tests should run.
  • Too many clients per analyst. How many clients does each CRO analyst manage at an agency? It’s a revealing question. “They should have one analyst per 3–4 clients max,” suggests Laja. “Anything more and quality suffers.”

Cost. “Ask about all costs related to running a CRO program with a specific agency,” Vabrit says. “What’s included in the service, what’s not. Should they allocate additional budget to develop and QA tests? What are different tool costs needed for optimization?”

For example, one agency’s retainer may be half that of a competitor—but if that fee doesn’t include QA testing and development costs, you may end up paying more in the long run.

Ask whether an agency’s fee includes:

  • QA testing;
  • Multiple variations for each test (ask how many variations are included in a package);
  • Optimization/analytics tools;
  • Developer time;
  • Designer time.

If one agency comes in vastly lower than others, ask why. As Laja notes, some young agencies are just hungry—everyone starts somewhere—and they may be willing to take on clients for less money.

However, others may try to close the revenue gap with client volume, meaning that analysts are managing 10 or 12 clients at a time.

Structure. Does the agency team function like a factory (i.e. many clients per analyst), or would you receive a personalized, consultant-like service? Will you have a single contact, and, if so, who? 

Some aspects of team structure aren’t necessarily better than others, but knowing how the team is structured can help you establish expectations. “The main piece of advice that I would have is making sure that all of the parts of the company that an agency will touch are aligned,” Codecademy’s Layfield said. “Communicating everything is key.”

Also, ask about their human resources. Do they have developers, designers, customer researchers, and data specialists in-house, or do they outsource their projects to external experts? While outsourcing design or development isn’t inherently bad, this setup can affect timelines, so it’s important that you know what to expect.

“It’s one thing to have a customer research specialist and to have a web analytics or data specialist. A lot of agencies don’t have both,” Morys explained. “On a deeper look, if I have to drill down, I would also ask them: ‘How do you connect data and customer behavior?’ Because for me, this is the secret sauce.”

Additional checks before choosing a CRO agency

At this point, you’ve established a rapport with your top CRO agency candidates and asked some tough questions.

Here are a few final methods to help you narrow down your list further:

  • Customer testimonials. Read any testimonials on their website (or ask for them, if none are published).
  • Let your team ask questions. Leake had plenty of questions while vetting JungleScout’s CRO agency. She recommends letting relevant team members ask questions as well. This engages the entire team and helps them understand how the agency would affect their role.
  • Ensure the agency specializes in CRO. Vahtra cautions against hiring an agency that focuses on SEO, for example, but does CRO as an add-on. Outsourcing your CRO as an add-on service often leads to slower testing, inaccurate data interpretations, and poor processes.
  • Read their content. Is the agency publishing thought-leadership content that highlights new, innovative ideas? 
  • Educational opportunities. CRO agencies will often help “teach a company to fish.” Do they educate your team so that you can eventually run your own in-house CRO program? 

Conclusion

More and more companies are trying conversion optimization, but it doesn’t always make sense to build an in-house CRO team—at least, not right away.

Still, concedes Laja, “hiring any agency is a risk. Even the best agencies in the world fail. Every CRO agency has failed. If you hire an agency and think they’ll magically solve everything, odds are, it’s not going to happen.”

Here are ways to give yourself the best possible chance at success:

  • Know whether the potential ROI is there. Hiring a CRO agency won’t make sense for every company.
  • For some, leaning on experts can help you run tests efficiently and transition the agency to a consulting role as your team matures.
  • Put together a history of the tests you’ve run previously and any customer data to set your agency up for success and get the most out of your relationship.
  • When vetting CRO agencies, ask tough questions. Talk about costs, processes, testing frameworks, team structures, and client referrals.

I’ve Built Multiple Growth Teams. Here’s Why I Won’t Do It Again.

I love running growth teams.

It’s everything I could want from a job. It directly impacts the company, is fairly autonomous, works great with a few high-caliber folks, and involves a ton of A/B tests.

I’ve spent years running these teams—but I don’t know if I’ll ever build one again. I doubt that I’ll even have a growth team at any company I’m managing in the future.

In fact, I believe most companies should not have a growth team. In the rest of this post, I’m going to try to talk you out of building one.

How I define “growth”

The term “growth” gets used loosely these days. A lot of folks treat it as a synonym for online marketing.

I use a more restrictive definition: A growth team is a product tech team that’s focused on acquisition instead of core product features. In other words, it’s a team of designers and engineers.

And the most common type of work I’ve done with my growth teams is using A/B tests to optimize an existing funnel.

The growth teams that I’ve built

I’ve had the opportunity to build growth teams across multiple companies:

KISSmetrics

I joined as employee 14 and spent a few years at this analytics startup, founded by Hiten Shah and Neil Patel. I had the great fortune to learn growth from Hiten, one of the original growth hackers. He was literally in the room when Sean Ellis coined the term “growth hacker.”

After working as an individual contributor for a while, I had the opportunity to build my first growth team. We had a designer, a front-end engineer, and a data scientist.

We ran A/B tests around the clock on our free-trial funnel for nine months and got these wins:

  • Quadrupled monthly lead volume in one year;
  • Tripled the conversion rate from visitor-to-trial signups on our homepage.

I Will Teach You to Be Rich

I was hired to level up the marketing team. When I joined, it was a typical lead-gen team: all marketers focused on lead generation. I evolved it into a growth team by hiring a designer and two engineers.

Using the same playbook that I developed at KISSmetrics, we ran non-stop A/B tests on our email subscription funnel, driving 480,000 leads in 2016 and smashing our lead goals for the year.

We expanded to four growth teams, each assigned to different parts of the funnel. Two of the teams focused on the two main sources of revenue, one team on inbound leads, and the last team on site conversion rates. Every team included a mix of designers, engineers, marketers, and copywriters.

Things…did not go well.

My whole growth system fell apart, and I learned a lot of tough lessons about the limits of growth programs. Hopefully, the insights below will help you avoid the same mistakes that I made.

9 reasons why growth teams fail

After years of building and managing growth teams, I’ve come across nine difficulties.

1. Probability is very counter-intuitive.

What do you expect if I say there’s an 80% chance of Version A winning over Version B? Most people assume it’s practically a sure thing. I don’t. Version B still has a decent chance of winning.

Even a 95% chance leaves too much uncertainty for long-term A/B testing. Because we need to rely on our funnels over the long term, small amounts of volatility can erase all our previous gains. We earn gains one inch at a time, but we can lose them all with one bad test.

[Screenshot: an A/B testing platform]
Non–data scientists often overestimate the certainty of test results.

We all have a hard time intuitively understanding volatility, which increases the odds that we’ll make a bad call and erase our gains.

Take 95% certainty compared to 99%. Because 95 is pretty close to 99, it feels like the difference should be minimal. In reality, there’s a gulf between those two benchmarks:

  • At 95% certainty, you have 19 people saying “yes” and 1 person saying “no.”
  • At 99% certainty, you have 99 people saying “yes” and 1 person saying “no.”

It feels like a difference of four people when, in reality, it’s a difference of 80. That’s a much bigger difference than we expect.

Most folks never get a deep grasp of how this works. Even the ones who do need a good six months of mentoring and supervision. We all want to cut corners on testing because it feels like there’s less risk than there really is.
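A quick Monte Carlo sketch shows why cutting that corner is riskier than it feels. The inputs below (a 3% conversion rate, 20,000 visitors per variant, 20 tests a year) are illustrative assumptions, not numbers from my programs.

```python
# A/A simulation: both variants share the same true conversion rate, so every
# declared "winner" is a false positive. How many simulated years ship at least one?
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
p, n = 0.03, 20_000                  # assumed conversion rate and visitors per variant
tests_per_year, years = 20, 2_000    # assumed testing cadence and simulation runs

def p_value(conv_a, conv_b, n):
    """Two-sided z-test p-value for a difference in conversion rates."""
    pooled = (conv_a + conv_b) / (2 * n)
    se = np.sqrt(2 * pooled * (1 - pooled) / n)
    z = (conv_b / n - conv_a / n) / se
    return 2 * (1 - norm.cdf(abs(z)))

for alpha in (0.05, 0.01):           # "95% certainty" vs. "99% certainty"
    bad_years = sum(
        any(p_value(rng.binomial(n, p), rng.binomial(n, p), n) < alpha
            for _ in range(tests_per_year))
        for _ in range(years)
    )
    print(f"alpha={alpha}: ~{bad_years / years:.0%} of simulated years ship a false winner")
# Expect roughly 64% of years at 95% certainty vs. about 18% at 99%.
```

One corner cut per test compounds, over a year of testing, into the gulf described above.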

In my experience, only data scientists have an intuition for this stuff. I have yet to come across a designer, engineer, or marketer that intuitively understood probability on day one.

This makes it very difficult to scale up teams that do lots of A/B testing, one of the primary tasks for a growth team. Because of this, I would expect it to take me a good 3–5 years to build multiple growth teams in the future. That’s not an easily scalable strategy.

If you want to dig deeper, this whitepaper completely changed how I approach A/B testing. I still review it regularly.

2. Most department heads don’t want data.

Finding winners was never my biggest problem when running a growth program. It was trying to avoid having other executives kill my previous wins.

I wish I were joking.

I spent more time advocating to keep verified wins live in our funnels than I did looking for new wins. Take our winning homepage at KISSmetrics:

[Screenshot: the KISSmetrics homepage]
Testing determined this homepage was a winner. Too bad executives hated it.

This thing crushed sign-up conversions. After a year of testing, we had this page almost perfectly optimized. The headline, the URL box, the call-to-action (CTA) button copy, the secondary CTA toward the bottom, even the random stock-photo dude. This page converted triple the number of signups compared to more generic SaaS homepages.

The thing is, multiple executives hated this page. I had to defend it regularly to keep it live. After I left, the homepage immediately changed to something more generic.

Now that I have more experience, I get it. When things aren’t going well, folks reach for easy scapegoats. An aggressive homepage makes for an easy scapegoat.

Remember this: Anything can be rationalized away. If a test or strategy goes against the instincts of the core people at the company, it will get nixed sooner or later.

A methodical program built on data and testing won’t spare you from the internal conflicts that happen at every company. Getting buy-in, building political capital, and wielding power internally are still essential for any growth program. Data isn’t a shortcut.

3. Following the data gets the growth team out of sync.

I have a rule of thumb for picking A/B test winners: Whichever version doesn’t make any sense or seems like it would never work, bet on that. More often than I like to admit, the dumber or weirder version wins.

This is actually how I tell if a company is truly optimizing their funnel. From a branding or UX perspective, it should feel a little “off.” If it’s polished and everything makes sense, they haven’t pushed that hard on optimization.

[Photo: a complex machine rusting out]
Keeping a growth team in sync with the rest of the company is hard. It’s easy to lose that alignment—and break down.

I believe this happens because we stumble across an insight about our market that breaks our previous frameworks and understanding. So what appears to make no sense to us makes perfect sense to our users.

At first, this is a quirky realization. “Haha, that variant seems ridiculous, but it worked—let’s ship it.” After you’ve been managing teams and working with executives for a while, this becomes a huge liability.

Executives won’t care that you found a piece of data that potentially invalidates the entire positioning, company strategy, or brand. A single A/B test doesn’t carry enough weight internally to change the directions of other teams. As an executive, this makes sense. It’s just one data point, and most data is flawed. So why bet the company on it?

As a growth manager, that puts you in a bind. Your funnels will get out of whack with the direction the rest of the company is going. Headlines won’t align with positioning, branding will be slightly off, and UX won’t be consistent.

All these items leave the growth team vulnerable to criticism from other teams. Design, Product, Marketing, and Sales get pissed that you’re not in sync.

It took me a long time to realize this, but they’re right. The perfect plan executed poorly isn’t as good as a good plan executed well. There are many ways to win, but all require teams to work together. Being in sync is more important than optimizing my own area.

Take product positioning. I now believe that “good enough” positioning pushed consistently across the whole company will make more of an impact on company growth than “perfect” positioning used inconsistently across marketing assets. Markets absorb messages only if they’re delivered extremely consistently. 

While it is possible to run an optimization program that stays in sync, it takes a lot of judgment and experience to know when to follow the data and when to pull back.

4. Growth programs are really easy to screw up.

[Photo: garbage overflowing from a bin]
One bad test can force you to throw out months of wins.

Multiple times in my career, I’ve had to throw out 3–6 months’ worth of tests. Every winner, every data point, every insight had to be scrapped. One small bug in my testing environment threw off all my results.

One time, we ran a bunch of tests on our drip email campaigns over a five-month period. Our new email subscribers would receive email campaigns for different products. We were optimizing revenue from these campaigns by testing different versions.

Suddenly, our revenue from these campaigns dropped by 50%. It was a sharp drop in our conversion rates—like something changed in the funnel—not a gradual decline from a slowing trend.

I tore through our data and testing to find the problem. We re-ran tests. I personally looked for patterns in hundreds of email subscriber data profiles. We QA’d everything to death. We reversed changes.

We never got the conversions back. And I never found the answer for why the conversion rate dropped.

Maybe the drop in conversion was outside my control. Maybe not. I did find a number of serious bugs and infrastructure flaws in our testing during my audit. These discoveries gave me zero confidence that our program was airtight. So I threw out all our data, and we started over. 

Every time I’ve started testing with a new set of tools, I’ve gotten hit with a problem like this. At best, I have to throw out test data. At worst, I (may have) tanked the funnel. Even with QA and control vs. control testing, I wasn’t able to avoid errors like this.

5. Most companies don’t have the data.

The cold, hard truth: Most companies don’t have enough data for a full-time optimization team. 

In the companies where I’ve worked, we had hundreds of thousands or millions of visitors per month. That volume was just enough to optimize sign-up flows. We didn’t even have enough to A/B test against a revenue event. Total purchase volume was simply too low.

Yes, if you work at Facebook or Amazon, you’ll have plenty of data. Consumer tech companies tend to have enough data since consumer markets (and purchase volume) tend to be large. 

But for startups or enterprise tech companies, there’s just not enough volume for testing.

Even if you have enough volume to optimize your main funnel, you’ll run out of data once you hit a ceiling on your main funnel and need to optimize other flows.

This is exactly what happened to me. I had just enough data to optimize the main funnel, found a bunch of wins, then hit a hard wall and needed to rebuild the entire distribution strategy from scratch to keep growing.

My rules of thumb on testing volume:

  • At least 20,000 people per month seeing the asset you want to optimize.
  • Ideally, 1,000+ revenue conversions per month through that same funnel.

I know folks in this space say that you can test with 100 conversions per variant. When I look back on my optimization programs, we always had 20,000 people moving through the funnel per month on my most notable wins. When we tested lower-volume assets, we just couldn’t find enough wins fast enough to make a real difference.
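As a rough sanity check on that rule of thumb, here’s a sketch of the smallest lift one month of that traffic can verify. The 3% baseline, the 50/50 split, and the 95% confidence / 80% power settings are assumptions for illustration, not numbers from my programs.

```python
# Smallest relative lift detectable in one month at 20,000 visitors (split 50/50),
# assuming a 3% baseline, 95% confidence, and 80% power. All inputs are assumptions.
from scipy.stats import norm

def sample_size(p1, p2, alpha=0.05, power=0.80):
    """Visitors per variant needed to detect the difference between p1 and p2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2

baseline = 0.03
per_variant = 20_000 / 2             # one month of funnel traffic, two variants

lift = 0.01
while sample_size(baseline, baseline * (1 + lift)) > per_variant:
    lift += 0.005
print(f"Smallest detectable lift in one month: ~{lift:.0%} relative")
# Lands around a 24% relative lift; anything smaller takes more than a month to
# verify, which is why lower-volume assets surface wins so slowly.
```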

6. Growth teams are expensive.

Growth teams get expensive fast. Even a bare-bones team needs 4 people:

  • Growth manager;
  • Designer;
  • Two engineers.

I’ve found that one engineer isn’t quite enough bandwidth to keep up with a full-time designer and growth manager. Two engineers keep the whole team moving without downtime.

None of these roles is cheap. Even if you recruit in lower-cost areas and look for more junior folks, you’re looking at $150K each, fully loaded. That’s an optimistic estimate, too, so we’re at $600K in labor per year.

Now add tools and data infrastructure, easily another $20–50K per year. A/B testing tools used to be cheap—then vendors realized the only companies that can legitimately use them are large, so they raised their prices substantially. Decent data tools don’t come cheap, either.

Let’s call it $650K per year in total budget. It generally takes me six months to get a growth program started. This includes onboarding, approvals for everything, setting up tools, running a bunch of preliminary tests to verify that they’re working, and getting the team comfortable working together.

Then, another 12 months of non-stop testing to drive conversions in a funnel. After about 12 months, I can usually double conversion rates. If I’m lucky, maybe even triple them.

[Photo: a one hundred dollar bill]
Want a growth team? Make sure you have a six- or seven-figure budget.

That’s a year and a half of work (i.e. $950K) to double conversions in one funnel. That’s a lot of coin. For $950K, I could also build a blog—from scratch—to hundreds of thousands of visitors per month. Or cover every major event in my industry for several years. Or blitz Facebook with enough branding campaigns that everyone will know the company after a few months.

Looking back, I sure wish I had spent that money elsewhere. It would have served those companies much better. If a million dollars is a rounding error in your budget, go for it. Otherwise, consider using those marketing dollars to scale up your core distribution channel first.

7. Growth teams are not fungible.

After building a “growth” team, we can just assign that team another KPI to grow and they’ll figure it out, right? That has not been my experience.

Most people need a playbook of some kind. They’re great at incrementally improving a playbook but really struggle when trying to build something from scratch. I’ve found this to be true across all disciplines and departments.

Of course, there are exceptions. But the exceptions are rare and tend to be founders.

I trained my growth teams to optimize funnels and run A/B tests. When that wasn’t a priority anymore—and I needed top-of-funnel growth—those same teams were completely unprepared to solve a different problem. We went back to the bottom of the learning curve.

Whatever you assign your “growth” team, they’ll get good at that one thing. Switching gears means starting over and learning a whole new process from scratch. While the term “growth” may be flexible, the team won’t be.

8. Growth teams are only as good as their managers.

I love the concept of a fully functional team. Get every discipline into one team and cut them loose. These are the teams that I prefer to manage myself.

This is part of what makes growth teams so effective. There’s no need to wait on design or engineering to complete a project. The team already has all the design and engineering resources they need. Simply make a decision as a team on what needs to get done, then ship it.

When I scaled one of my programs to four growth teams, I discovered that fully functional teams have a major weakness: The team is only as good as the manager.

We all know that management is a tough job. This isn’t news. But being a manager of a fully functional team is even more difficult, which caught me by surprise.

Managing a high-performing team is hard. Even the right candidate may take a full year—with lots of mentoring—to settle into the role.

In hindsight, it seems obvious. Someone stepping into a growth team manager role needs to know how to work with designers and engineers. They need to know how to guide the team through technical hurdles. They need to own a KPI. They need to find alternative options when plans fail. They need to coordinate with other managers and teams. That’s on top of having a knack for improving conversions.

Most folks require many years of management experience to handle a team like this. Even for an ideal candidate, it takes me at least six months of coaching before they start to find their feet—then another six months of guidance before they can truly run the show.

If someone isn’t extremely coachable, loves learning multiple disciplines, empathizes with their teammates, and relentlessly pursues a goal, it’ll take much longer.

Whether you’re building a growth team from scratch or attempting to scale multiple teams, finding the right managers is a serious bottleneck.

9. Growth teams have limited revenue potential.

I’ve left the biggest problem for last. Most companies can get only so much revenue growth from an optimization program. After all, conversion rates can go only so high.

My rule is that I can double or triple the conversion rates in a funnel within 9–12 months. I’ve done this multiple times across multiple companies. For that year, the metrics look amazing. Conversions are up. Leads are up. Revenue is up.

But then what? Once you hit a wall on conversions, what do you do?

[Chart: a Google Analytics revenue chart]
A conversion rate can go only so high. If you have just one funnel, revenue growth—and the ROI for your growth team—will plateau.

There’s all this data infrastructure, a fully staffed team, and lots of experience with testing. (Remember: The team isn’t fungible; they can’t easily switch into another workflow.)

If you’re a massive company with a ton of funnels, you move on to the next user flow, and that’s great. Most of us are not in that position. We have one primary funnel. Once conversion increases hit a wall, there’s not much else to do.

Contrast this with a marketing team going after any particular channel—events, paid, SEO, community, PR, whatever. The long-term game completely changes:

  • At scale, these channels can 10X the size of most companies. There’s plenty of room to grow.
  • When you hit a ceiling on a given channel, there are plenty of wins around process efficiency to lower acquisition costs.
  • As long as the channel is profitable, it’ll continue to throw off profit year after year. Smart management and process design keep the channel running smoothly for years on end.
  • Once it’s mostly automated, that frees up your time to go build another channel. You never have to dissolve the team because you run out of opportunities.

A properly scaled marketing channel can 10X your business. Best case for a growth team that optimizes your funnel? 3X. I’d rather spend my time on a 10X strategy than a 3X one.

One major exception

Some businesses are driven primarily through user loops. More users bring in more users.

For example, take Facebook, the very company that popularized the concept of growth teams. 

Their growth came from users, and they had true virality. A new user comes in, activates, then invites other users.

With funnels like this, a growth team can drive core business growth. The user acquisition funnel is such an enormous lever that it’s well worth the cost of a growth team. 

Consumer tech companies like Facebook, Twitter, Skype, Snapchat, and Pinterest all have this option. Many marketplaces like Uber and Airbnb also have strong user loops worth optimizing. 

In these situations, I wholeheartedly recommend building a growth team.

Just remember that true virality is really rare. Most businesses don’t have a user loop that drives their core growth. They rely on standard distribution strategies to grow revenue.

What do we do instead?

So if it’s not worth investing a million dollars to double conversion rates in our funnel, should we worry about conversion rates at all? Yes.

There are still plenty of things we can do:

1. Use your funnel as a guide for a healthy business.

The overall conversion rates in your funnel give you a feel for how well your business model is working. If conversion rates suck at multiple points in your funnel, work on the foundational parts of your business, like product-market fit, positioning, and your offer.

If you’re in a senior marketing role, you should absolutely audit your positioning. There are almost always gaps, and it’s the single most important variable that applies to all your marketing assets.

Follow the recommendations from the book Obviously Awesome by April Dunford. It’s extremely practical and the best resource on positioning that I’ve found. I really wish she had written it 10 years ago. It would have saved me a lot of heartache.

2. To fix your funnel, eyeball iterations that go after big wins.

If you have one step in your funnel that’s broken, iterate on that step without A/B tests.

Collect qualitative feedback from users through user tests, heat maps, surveys, and interviews. Find the biggest objections and opportunities, then design a few new versions for that step of the funnel.

Focus on major changes—don’t test small stuff. Launch a new version, run it full-time for a month, and eyeball the impact on your conversions.

Even if your funnel is fairly healthy, run 5–10 user tests on the onboarding funnel to pick up any glaring problems that need to be addressed. Don’t worry about A/B testing here. Find points of friction and get rid of them. When you find a real winner, you’ll feel it.

It’s not advanced testing, but eyeballing major changes can get you a few wins. And basing your iterations on qualitative research dramatically increases the odds that you’ll find a winner.

[Photo: a coffee shop conversation]
Qualitative research is often the key to huge wins.

Wait, doesn’t this contradict my testing philosophy of 99% statistical significance on A/B tests? It may seem like they conflict, but in practice both philosophies excel in different circumstances.

The honest truth is that most massive businesses grew without ever running an A/B test. CEOs and founders don’t run A/B tests. Most teams don’t run A/B tests. And yet we all try different ideas, eyeball the results, then act accordingly. This is how most of us pursue progress.

The trick is knowing which philosophy you’re using. To see results, you have to chase wins that could have major impacts on your business. A 10% improvement doesn’t matter; you’re looking for 100% wins and above. 

A/B testing works beautifully when chasing 1–30% wins. These wins are too small to feel but can be detected with methodical testing. Our intuition is no longer a useful guide.

3. Test homepage headlines.

Even if your funnel is healthy from top to bottom, try out some different headlines on your homepage.

Headlines are the one variable on every marketing asset that always has a massive impact. Finding better headlines has routinely given me 30%+ conversion rate lifts on my funnels.

No need to A/B test anything here, either. Once you have a good handle on your positioning, sit down and write 3–5 really strong—but completely different—headlines. My favorite resource on headlines is the book Great Leads.

Once you have 3–5 strong headlines, launch one at a time on your homepage and run each for a month. When you find a winner, you’ll feel the impact in your monthly signups, leads, purchases, etc.

Could you quickly A/B test these with a free tool like Google Optimize? Yes, but I still consider it a waste of time. For me, it’s about knowing the game I’m playing. Eyeballing with intuition is one approach. A full program of A/B testing to statistical significance is another. I generally don’t like to mix the two.

When you find a really great headline, you’ll feel it. Multiple KPIs will be up, prospects will repeat it back to you, your team will start using it on their own. You’ll get multiple signals that it hit a nerve.

And if you don’t feel it, the headline wasn’t a big enough improvement to matter. Try something else instead.

4. Check off the conversion best practices.

I tend to hate best practices. I find them uninspired, and most don’t work as well as everyone claims. But I’ve come around over the years. They have their place.

As a business, you’ll truly innovate in only a few core areas. And that’s all you need to win. For everything else, pull the best practices off the shelf, implement, and move on.

Best practices can also make a difference when you install them quickly. As a single improvement, most are a waste of time. But in aggregate, they can make a big difference. I can spike conversions on most sites simply by running full-speed through my conversion checklist.

Here are the items that have made it on my conversion checklist:

  • Whatever headline or offer works best, use it across the entire site (homepage, pop-ups, sidebars, etc.). A great offer tends to work over and over again.
  • Throw up a chat box like Drift. If this doesn’t work, you probably have product/market fit problems.
  • Install a pop-up. I know we all hate them, but they still work beautifully.
  • Get your site speed up. Make sure you’re on a good web host, remove any unused marketing scripts on your site (the worst offenders), and optimize your images.
  • Make sure product and pricing pages communicate your positioning really well.
  • Build dedicated landing pages for every paid campaign that you run.
  • If anything needs to be set up during onboarding flows, find a way to automate it.

There are tons of lead-gen and conversion guides out there, like this one. Read a few, combine them with my checklist, then launch all the recommendations as quickly as you can. Half the suggestions won’t do anything, but that won’t matter if you launch everything within a few weeks. You’ll easily make an impact on your conversions.

Conclusion

The recommendations above are exactly how I run my business today. We’re moving faster than I ever have before—at a fraction of the budget.

That’s why I doubt I’ll ever build a growth team again:

  • Few folks understand probability, and most executives don’t care about the data—regardless of what it says.
  • Testing encourages growth teams to get out of sync with company strategy, and it’s easy to screw up the data, which forces you to throw away months of testing.
  • Even if you get past all this, you probably don’t have enough data to work with anyway.
  • A 1.5-year growth program will cost you just short of a million dollars.
  • Once you hit a wall on your conversions and need your growth team to do something else, they’ll have to start from scratch to learn another workflow.
  • And it all depends on finding an amazing growth manager to run the team.

Difficult? Yes. Impossible? No. But that’s an awful lot of work when the upside is limited to doubling or tripling the conversion rate in your funnel.

Yes, massive businesses and businesses with user-driven growth are the exceptions. It’s absolutely worth it to them. For the rest of us, I’d rather spend that money and time building a marketing team that can continue to grow my business for years to come.
