Transform Data’s Impact: Pick The Right Success KPI!


Your analysis provides clear data that the campaign was a (glorious) failure.

It could not be clearer.

The KPI you chose for your brand campaign was Trust, with a pre-set target of +5. The post-campaign analysis that compares performance across Test & Control cells shows that Trust did not move at all. (Suspiciously, there are indications that in a handful of Test DMAs it might have gone down!)

Every so often, the story is just as simple as that.

You do the best you can with a marketing campaign (creative, audience, targeting, channels, media plan elements like duration, reach, frequency, media delivery quality elements like AVOC, Viewability, etc.), and sometimes the dice do not roll your way when you measure impact.

You would be surprised to know just how frequently the cause of failure has nothing to do with the elements I mentioned above. In future Premium editions we’ll cover a bunch of these causes; today I want to cover one that is in your control yet is often a root cause of failure:

Judging a fish by its ability to climb a tree!

AKA: You picked the wrong KPI for the campaign.

[Note 1: I’m going to use the phrase Success KPI a lot. To ensure clear focus, clear postmortems and clear accountability, I recommend identifying one single solitary metric as the Success KPI for the initiative. You can measure seven additional metrics – say for diagnostic purposes -, but there has to be just one Success KPI. Close accountability escape hatches.]

[Note 2: Although the guidance in this article applies to companies/analytics teams of all sizes, it applies in particular to larger companies and large agencies. It is there that the largest potential for mischief exists. It is also there, with an army of brilliant Analysts, that the highest potential for good exists.]

[Note 3: This article, part 1 of 2, was originally published as an edition of my newsletter The Marketing < > Analytics Intersect. In part 1, below, we’ll sharpen our skills in recognizing the problem, and cover five of the twelve rules for success. If you are a TMAI Premium member, check your inbox for TMAI #313 for part 2 with the remaining rules and additional guidance. If you can’t find it, just email me. Merci.]

Be sure to save the summary visual at the end for implementing it in your company/agency.


The Subtle Art of Picking Bad KPIs.

Example 1.

Let’s say I work at Instagram, specifically in the Reels team. We want Reels to, say, crush TikTok. The team runs a $250 mil multi-platform campaign to increase Awareness of Reels. The campaign Success KPI was chosen to be: Incremental Reels Videos Created.

Good campaign. Bad Success KPI.

If you truly build Awareness creative, then judge success using the KPI Awareness. No?

Fish swim.

[Yes, long-term success of Reels will only come from Instagram users uploading Reels, but that was not the problem the creative was solving for. If the goal was Incremental Reels Videos Created, you would build an entirely different creative, you would target the campaign, potentially, to a different audience, you might create a different media plan, you would… run a different campaign.]

Creating a performance Success KPI for a brand campaign is a particularly common, and heartbreaking, mistake. Sophisticated brand measurement is hard. It feels simpler to pick what’s easy to measure, but you are going to make the fish feel bad when you judge its ability to climb trees AND you don't accomplish the desired outcome.

Example 2.

Let’s say I ran the campaign mentioned at the top of this email for my employer American Express.

If you look at Brand Trackers published by numerous industry sources, it becomes apparent in two minutes that American Express does not have a Trust problem. Americans trust American Express in massive quantities.

If you run a trust campaign for American Express, that campaign is going to fail. You are solving a problem that’s not a problem.

Bad Success KPI because of, technically speaking, high baselines.

Example 3.

Your new, Extremely Senior Leader is obsessed with doing Marketing that makes people fall in love with our brand. So, they conceive of a multi-million dollar Social campaign and demand that the Success KPI be: Brand Adoration.

[A KPI like that instinctively makes Analysts cringe, because what’s Brand Adoration anyway? What does that even mean? Do we just make something up? If we do, how would we ever know if we did something meaningful, how we are doing compared to competitors/industry, what kind of creative/media we would even use to build “Brand Adoration,” and what the core drivers of Brand Adoration are? And if you don’t know, what are you actually doing spending all this money? I am going to set all this aside for future TMAI Premium editions!]

You’ll measure that KPI using a question (or five) that will be presented in both the Test & Control cells. Will anyone who is not an employee of your company or in your Team's orbit even understand what the question is?

Let’s say, you ask Do you adore PayPal? Will the responding human know how to process this question?

Let’s say, you try an even more clever trick and ask PayPal is my preferred choice for financial transactions of a personal nature, and I would never use any other service, choose Yes or No.

Would the responding human understand that you are measuring brand adoration and give you a valid answer?

This is a bad Success KPI because no responding human can understand what you are asking, so the signal you accumulate to assess the campaign’s success or failure is a false signal.

And, it is the analytics person/team/agency's mistake.

Example 4.

A little grab bag for you…

When you are trying to drive long-term profit, picking Conversion Rate as a Success KPI for a campaign would be a mistake.

For your Display Advertising campaigns, picking any Success KPI close to buying (ex: Revenue) usually is a mistake. (Assisted Conversions – over a 30 or 90-day period, depending on your business – might be better.)

Anointing Conversion Rate (or dare I say even Revenue) as the Success KPI for your Email newsletter is a double mistake. It will cause your team to use newsletters in the spirit of pushy spam, and it will stop newsletters from truly becoming a strategically valuable owned asset, as Email is magnificent at See and Think, not so much Do.

I could keep going on. I have a hundred thousand more stories of judging a fish by its ability to climb a tree.


12 Rules for Picking the Right Success KPI.

While there is enough responsibility to spread around, I rest accountability for this common mistake on the Analyst/s. Marketers, CMOs, Finance peeps should know the implications of picking an imprecise Success KPI, but the Analyst is the expert and, hence, I expect them to take the lead.

To help you do that, here are 12 rules I codified for our team to use when we pick the Success KPI for a campaign. Each of these rules helps address a common error; collectively, they also help you/leaders think through the campaign strategy, consider whether you are solving the right problem, and so much more beyond just the KPI.

Ready to be A LOT MORE influential in your company?

Here are 12 rules brilliant companies use for picking the right Success KPI (and do Marketing that matters):

1. Is it an industry standard KPI?

It sounds like bad news that I’m saying you are not a special snowflake, that your campaign/tactic/magnificently brilliant idea is not so very incredibly unique that you need to make up a Metric to measure its success.

When you use an industry standard KPI, you have access to standards and benchmarks – providing you the super cool benefit of being able to assess your own performance in a much bigger context. This choice also comes with guidance on best practices for measuring this KPI – so that you don’t have to invent a methodology/technique that has no benefit of the industry’s collective wisdom.

Bonus: If you use an industry standard KPI, very often you’ll get access to research related to drivers of that KPI that your Creative, Media and Strategy teams will kill for. If they know the drivers, they can internalize at a deeper layer what it takes to drive success.

Try not to make up a KPI, try not to make up the formula/question/methodology for a Success KPI. On that note…

2. (If it is a made up metric:) Is the KPI definition clear and understandable by a non-employee (aka consumer)?

For brand marketing, you and I assess success using a question we ask consumers.

When we make up our own metrics, the questions come from our best expertise, but they might then get changed by a non-expert (Director of Marketing, CEO) who likes the sound of a particular word or phrase. Phrased like that… only your Director, and five people who say yes to everything the Director says, actually understand the question and answer choices. People taking the survey are super confused, or are putting their own interpretation on what you are asking. Now their answers are suspect and – regardless of whether the campaign results are indicated as a Big Success or Big Failure by the data – the measurement is imprecise.

Non-employees – aka normal people – need to be able to clearly and quickly understand what you are asking in your brand measurement surveys. Both the question AND the answer choices.

For performance marketing, you can see this confusion practiced when you create compound metrics. I bet your CMO dashboard has Social Engagement on it – only you understand what that metric actually is, and the convoluted formula ensures no one will ever know why Social Engagement went up or down. Not a good success KPI.

3. Is the Success KPI a business metric or a third-order driver metric?

You might have noticed above that I’m a fan of understanding the drivers of success (driver metrics) and not just the Big Thing we are trying to move (success KPI).

But, there is a special type of mistake I often see made: The driver metric is chosen as the Success KPI.

An example of this is choosing Conversion Rate – certainly a driver metric – as the Success KPI vs. Profit. Yes, perhaps Profit will go up if you have a higher Conversion Rate, but the team could just use coupons or target low-value customers to drive Conversion Rate, and Profit will never go up.

Another example of this is that we want to influence Trust in our company, and we end up picking Product Quality as the Success KPI. Yes, Product Quality will improve Trust over time, but the coefficient is probably petite.

To correctly identify the impact of your campaign, pick the business outcome you want as the Success KPI and not one of the many driver metrics.

4. Is the Success KPI the goal set in the creative brief?

The creative is the ad we see on TV or TikTok, it is the lines of text in your Bing ad, and it is the (hopefully not annoying) image, text, animation, call to action, in your Display ad currently running in the Sacramento Bee.

Creative teams love big challenges and are motivated by solving existential issues. Hence, when Marketers / Leaders write creative briefs, they end up briefing the team for Big Things.

Make the world believe we are as good as Apple in quality… We are trying to get customers to think we are an innovative company… Our goal is to have the world believe that we are a force for good when it comes to climate change… The campaign investment is meant to help shift the perception that we are committed to our customers in the long run!

These are all fantastic things to shoot for (if your reality matches these aspirations).

The challenge occurs when the Success KPI for all of the above campaigns is set as In-Store Sales. Or, Lifetime value. Or, Most Valuable Brand in the world.

When there is a conflict between the creative brief (what the ads are being built for) and the measured Success KPI, the latter is an extremely poor choice because it will invariably show failure.

Brief the creative team for an outcome that actually matters to the business, and then set that exact same outcome as the Success KPI. Clear alignment between input and output.

5. Does the KPI have headroom?

I love this one. Not only is it a great rule, it also forces Marketers to be clever.

What’s headroom?

Let’s consider this brand question: Is Apple an innovative company?

The answer: Yes (68%).

That is a very high baseline. If 68% of the people think anything positive of a company, there is likely no one else left in the world to persuade.

[In the case of Apple, there are a fair number of people who love to dislike Apple. That further means, purely from a measurement perspective, no headroom.]

You cannot move an unmovable metric.

No matter how much money you spend.

Even IF the campaign had great creative, it was well delivered, on the right channels, with optimal reach and frequency. The campaign will look like a failure. And, it was not the Marketing team’s fault.

Before you pick a Success KPI, do a bit of research to understand headroom. If you have less than six or eight points of headroom, don’t solve that problem (because the data is indicating that it is not a problem!).

Pick something else. Unaided Awareness of Apple Tags is just 12 points. Solve that problem. Lots of headroom!

[Note: The concept of headroom applies to performance marketing as well. You might be maxed out for the audience you can reach in a particular channel. You already have max possible Click Share on Google. There might not be any more new customers to entice across the East Coast of the US. Etc. Assess headroom available across your performance Success KPIs as well.]
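
If you want to operationalize this rule, here is a minimal sketch of a headroom screen. It assumes headroom is estimated as a realistic ceiling for the question (best-in-class or historical maximum) minus the current baseline; the ceiling values are illustrative placeholders, while the six-to-eight-point cutoff comes from the guidance above.

```python
# Minimal headroom screen for candidate Success KPIs (rule 5).
# Assumption: headroom = realistic ceiling for the survey question (best-in-class
# or historical max) minus the current baseline. Ceiling values are illustrative
# placeholders, not real brand-tracker data.

MIN_HEADROOM = 8  # per the rule above: skip KPIs with less than ~6-8 points of headroom

candidates = [
    # (candidate Success KPI, current baseline %, assumed realistic ceiling %)
    ("Is Apple an innovative company?", 68, 72),
    ("Unaided Awareness of Apple Tags", 12, 65),
]

for kpi, baseline, ceiling in candidates:
    headroom = ceiling - baseline
    verdict = "worth solving" if headroom >= MIN_HEADROOM else "skip: not actually a problem"
    print(f"{kpi}: baseline {baseline}%, headroom ~{headroom} pts -> {verdict}")
```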

[Note: Premium subscribers will recognize assessment of headroom as another clever manifestation of the win before you spend Minerva (Pre-Flight) Check outlined in TMAI #273.]


Scoring Success KPIs.

It would not surprise you to learn that smart teams codify their thinking (frameworks FTW!), and implement a process that ensures that thinking is 1. applied at scale, 2. applied at the right moment, and 3. understood by all.

That’s the real secret to winning influence with data. To make it easier for teams I've led to implement the rules for success KPIs framework, we use the following checklist (with part 1 rules)…

[Image: 12 rules for picking Success KPIs – part 1 checklist]

[Click image above for a higher resolution version. It is pretty easy to type it all up in Excel, but if you need an Excel version, just email me.]

A thoughtful assessment, upfront. Simple and clear to all the cross-functional teams involved (and not just the Analytics team).

Rules 1 through 8 are mandatory; all of them have to be met for a KPI to be anointed the Success KPI. Their scoring is in the light blue row above. Rules 9 through 12 are for Analysis Ninjas – those who want to go above and beyond, those who do not leave things to chance, those looking to come as close to guaranteeing success as possible. Their scoring is in the darker blue row.

The KPI candidate with the best score wins! :)
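
If you would rather codify the checklist in code than in Excel, here is a minimal sketch. It assumes the mandatory rules are pass/fail gates and each bonus rule adds a point, per the description above; the shorthand rule names map to part 1's five rules, and the exact weighting in the original checklist may differ.

```python
# A sketch of the Success KPI scoring checklist. Assumptions: mandatory rules are
# pass/fail gates (all must be met), bonus rules add a point each, and the
# candidate with the highest score wins. Rule names are shorthand; the real
# checklist has 12 rules.

from dataclasses import dataclass, field

@dataclass
class KPICandidate:
    name: str
    mandatory: dict = field(default_factory=dict)  # e.g. rules 1-8: True/False
    bonus: dict = field(default_factory=dict)      # e.g. rules 9-12: True/False

    def score(self):
        if not all(self.mandatory.values()):
            return None  # fails a mandatory rule: cannot be the Success KPI
        return len(self.mandatory) + sum(self.bonus.values())

candidates = [
    KPICandidate("Awareness", mandatory={
        "industry_standard": True, "clear_to_consumers": True,
        "business_outcome_not_driver": True, "matches_creative_brief": True,
        "has_headroom": True}),
    KPICandidate("Incremental Reels Videos Created", mandatory={
        "industry_standard": False, "clear_to_consumers": True,
        "business_outcome_not_driver": True, "matches_creative_brief": False,
        "has_headroom": True}),
]

for c in candidates:
    s = c.score()
    print(c.name, "->", "rejected (mandatory rule failed)" if s is None else f"score: {s}")
```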

In a future blog post, we can cover the process to put in place to ensure this happens at scale in your company/non-profit.


Bottom line.

Measuring the wrong thing should be the last reason to get a false signal of the impact of a campaign. False positive or false negative.

Measuring the right thing, and ensuring there is a process and framework in place to discuss it up front so that every good and bad dimension of the thinking can be put on the table, is a gift of immeasurable proportions to your employer/client.

Pick the right Success KPI.

It won’t guarantee campaign success, but it will ensure that when success occurs you’ll know it is real, and when failure occurs, there are clear lessons to learn for doing better in the future.

Pick the right Success KPI.

How good is your team, your agency, at ensuring that you are picking success KPIs that deliver in-depth insights, and optimal accountability? Please share via comments below. Merci.

[Quick reminder: If you are a TMAI Premium subscriber, part 2, with rules six through twelve and bonus content, is in your inbox. If you can’t find it, just email me.]


Robust Experimentation and Testing | Reasons for Failure!


Since you're reading a blog on advanced analytics, I'm going to assume that you have been exposed to the magical and amazing awesomeness of experimentation and testing.

It truly is the bee's knees.

You are likely aware that there are entire sub-cultures (and their attendant Substacks) dedicated to the most granular ideas around experimentation (usually of the landing page optimization variety).  There are fat books to teach you how to experiment (or die!). People have become Gurus hawking it.

The magnificent cherry on this delicious cake: It is super easy to get started. There are really easy-to-use free tools, and tools that are extremely affordable and also good.

ALL IT TAKES IS FIVE MINUTES!!!

And yet, chances are you really don’t know anyone directly who uses experimentation as a part of their regular business practice.

Wah wah wah waaah.

How is this possible?

It turns out experimentation, even of the simple landing page variety, is insanely difficult for reasons that have nothing to do with the capacity of tools, or the brilliance of the individual or the team sitting behind the tool (you!).

It is everything else:

Company. Processes. Ideas. Creatives. Speed. Insights worth testing. Public relations. HiPPOs. Business complexity. Execution. And more.

Today, from my blood, sweat and tears shed working on the front lines, a set of two reflections:

1. What does a robust experimentation program contain?

2. Why do so many experimentation programs end in disappointing failure?

My hope is that these reflections will inspire a stronger assessment of your company, culture, and people, which will, in turn, trigger corrective steps resulting in a regular, robust, remarkable testing program.

First, a little step back to imagine the bigger picture.


This blog post was originally published as an edition of my newsletter TMAI Premium. It is published 50x/year, and shares bleeding-edge thinking about Marketing, Analytics, and Leadership. You can sign up here – I donate all revenues to charity.


Robust & Remarkable Experimentation – so much more than LPO! 

The entire online experimentation canon is filled with landing page optimization type testing.

The last couple of pages of a 52,000-word book might have ideas for testing the cart and checkout pages that pay off big, or exploring the limits of comparison tables/charts/text or support options, or even how you write support FAQs. (Premium subscribers see TMAI #257).

Yes. You can test landing pages. But, there is so, so, so much when it comes to experimentation possibilities.

So. Much!

A. As an example, we have a test in-market right now to determine how much of the money we are spending on Bing is truly incremental.

(As outlined in TMAI Premium #223, experiments that answer the priceless marketing incrementality question are the single most effective thing you can do to optimize your marketing budgets. This experimentation eats landing page optimization for breakfast, lunch, dinner, and late-night snack.)

B. Another frequent online experimentation strategy is to use matched-market tests to figure out diminishing return curves for the online-offline media-mix in our marketing plans.

(Should we spend $5 mil on Facebook and YouTube, and $10 mil on TV, or vice versa or something else?)

C. We are routinely live testing our ads on YouTube with different creatives to check for effectiveness, or the same ad against different audiences/day parts/targeting strategies to find the performance maxima. So delicious.

(Bonus read: Four types of experiments you can run on Facebook: A/B, Holdout, Brand Survey, and Campaign Budget.)

D. Perhaps one of the most difficult ones is to figure out complex relationships over long periods of time: Does brand advertising (sometimes hideously also called “upper funnel”) actually drive performance outcomes in the long run? A question with answers worth many millions of dollars.

(Any modern analysis of consumer behavior clearly identifies that the “marketing funnel” does not exist. People are not lemmings you can shove down a path convenient to lazy Marketers. Evolve: See – Think – Do – Care: An intent-centric marketing framework.)

E. Oh, oh, oh, and our long-term favorite: Does online advertising on TikTok, Snap, and Hulu actually drive phone sales in AT&T, Verizon and T-Mobile stores in the real world? The answer will surprise you – and it is priceless!

Obviously, you should also be curious about the inverse.

(Print and Billboards are surprisingly effective at driving digital outcomes! Yes. Print. Yes. Billboards! Well-designed experiments often prove us wrong – with data.)

F. One of the coolest things we’ve done over the last year is to ascertain how to get true incremental complete campaign brand lift for our large campaigns.

You, like us, probably measure change in Unaided Awareness, Consideration, Intent across individual digital channels. This is helpful and not all that difficult to accomplish. But, if you spent $20 mil across five digital platforms, what was the overall de-duped brand lift from all that money?

The answer is not easy, you can get it only via experimentation, and it is invaluable to a CMO.

These are six initial ideas from our work to stretch your mind about what a truly robust experimentation program should solve for! (The space is so stuffed with possibilities, we’ll cover more in future Premium newsletters.)

Next time you hear the phrase testing and experimentation, it is ok to think of landing page optimization first. But. To stop there would be heartbreaking. Keep going for ideas A – F above. That is how you’ll build a robust AND remarkable experimentation practice that’ll deliver a deep and profound impact.

Let me go out on a limb and say something controversial: A robust experimentation program (A – F above) holds the power to massively overshadow the impact you will deliver via a robust digital analytics program.

Note: That sentiment comes from someone who has written two best-selling books on digital analytics!

With the above broad set of exciting possibilities as context (and a new to-do list for you!), I’ll sharpen the lens a bit to focus on patterns that I’ve identified as root causes for the failure of experimentation programs.

From your efforts to build a robust landing page optimization practice to complex matched-market tests for identifying diminishing return curves, these challenges cover them all.


9 Reasons Experimentation Programs Fail Gloriously! 

As I’d mentioned above, the barrier to experimentation is neither tools (choose from free or extremely affordable) nor Analysts (I’m assuming you have them).

Here is a collection of underappreciated reasons why neither you nor anyone you know directly has an experimentation program worth its salt:

1. Not planning for the first five tests to succeed.

I know how hard what I’m about to say is. I truly do.

Your first few tests simply can’t fail.

It takes so much to get going. Such high stakes given the people, process, permissions, politics, implementation issues involved.

You somehow pull all that off, and your first test delivers inconclusive results (or "weird" results), and there is a giant disappointment (no matter how trivial the test was). You and I promised a lot to get this program going; so many employees, leaders, egos, and teams were involved; it will be a big let-down.

Then your second and third test fail or are underwhelming, and suddenly no one is taking your calls or replying to your emails.

It is unlikely you’ll get to the fifth one. The experimentation program is dead.

I hear you asking: But how can you possibly guarantee the first few tests will succeed?!

Look, it will be hard. But, you can plan smart:

Test significant differences.

Don’t test on 5% of the site traffic.

For the sake of the all things holy, don’t test five shades of blue buttons.

Don’t test for a vanity metric.

Don’t pick a low traffic page/variation.

Have a hypothesis that is entirely based on an insight from your digital analytics data (for the first few) rather than opinions of your Executives (save those ideas for later).

And, all the ideas below.

Your first test delivers a winner, everyone breathes a sigh of relief.

Your fifth test delivers a winner, everyone realizes there's something magical here, and they leave you alone with enough money and support for the program for a year.  Now. You can start failing because you start bringing bold ideas – and a lot of them fail (as they should).

Bonus Tip: The more dramatic the differences you are testing, the smaller the sample size you’ll need, and the faster the results will arrive (because the expected effect is big). Additionally, you’ll reduce the chances of surrounding noise or unexpected variables impacting your experiment.
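
To put rough numbers on that tip, here is a quick sketch using statsmodels' power calculations; the 3% baseline conversion rate and the relative lifts are made-up inputs for illustration.

```python
# Bigger expected differences need far smaller samples. Baseline conversion
# rate and relative lifts below are made-up inputs for illustration.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_cr = 0.03  # assumed control conversion rate

for relative_lift in (0.10, 0.25, 0.50):
    effect = proportion_effectsize(baseline_cr * (1 + relative_lift), baseline_cr)
    n_per_cell = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided")
    print(f"{relative_lift:.0%} relative lift -> ~{round(n_per_cell):,} visitors per variation")
```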

[Image: splash page experiments]

2. Staring at the start of the conversion process, rather than close to the end.

On average, a human on an ecommerce site will go through 12–28 pages from landing page to order confirmation.

If you are testing buttons, copy, layout, offer, cross-sells / upsells, other things, on the landing page, chances are any positive influence of that excellent test variation will get washed out by the time the site visitor is on page 3 or 5 (if not sooner).

If you plan to declare success of a start-of-user-experience test based on an end-of-user-experience result… You are going to need a massive sample to detect (a likely weak) signal. Think needing 800,000 humans to go through an experimental design variation on the landing page to detect an impact on conversion rate.

That will take time. Even if you get all that data, there are so many other variables changing across pages/experience for test participants that it will be really hard to establish clear causality. Bottom line: It comes with a high chance of experiment failure.

So, why not start at the end of the conversion process?

Run your first A/B test on the page with the Submit Order button.

Run your MVT test on the page that shows up after someone clicks Start Checkout.

Pick the page for the highest selling product on your site and use it for your first A/B test.
Etc.

The same goes for your Television experiments or online-to-offline conversion validations you need. Pick elements close to the end (sale) vs. at the start (do people like this creative or love it?).

The closer you are to the end, the higher the chances your test will show clear success or failure, and do so faster. This is great for earning your experimentation program executive love AND impacting real business results at the same time.

In the context of website experimentation, I realize how oxymoronic it sounds to say: do your landing page optimization on the last page of the customer experience. But that, my friends, is what it takes.


3. Hyping the impact of your experiments.

This is the thing that irritates me immensely, because of just how far back it sets experimentation culturally.

This is how it happens: You test an image/button/text on the landing page. It goes well. It improves conversion rate by 1.2% (huge by the way) and that is, let’s say, $128,000 more in revenue (also wonderful).

You now take that number, annualize the impact, extend it out to all the landing pages, and go proclaim in a loud voice:

One experiment between blue and red buttons increased company revenue by $68 mil/year. ONE BUTTON!

You’ve seen bloggers, book authors and gurus claiming this in their tweets, articles and industry conference keynotes.

The Analysis Ninja that you are, you see the BS. The individual is ignoring varied influence across product portfolios, seasonality, diminishing returns, changes in the audience, and a million other things.

The $68 mil number is not helping.

Before making impossible-to-believe claims… standardize the change across 10 more pages, 15 more products, for five more months, in three more countries, etc., and then start making big, broad claims.

Another irritating approach taken by some to hype the impact of experiments: Adding up the impact of each experiment into a bigger total.

I’d mentioned above that you should try and make your first five tests succeed to build momentum. You do!

Test one: +$1 mil. Test two: +$2 mil. Test three, four, five: +$3, +$4, +$5 mil.

You immediately total that up and proudly start a PR campaign stating that the experimentation program added $15 mil in revenue.

Except… here’s the thing: if you now made all five changes permanent on your site and in your mobile app, the total impact would be less than $15 mil. Often a lot less.

Each change does not stack on top of the other. Diminishing returns. Incrementality. Analysis Ninjas know the drill.

So. Don’t claim $15 mil (or more) from making all the changes permanent.

Also, the knowledgeable amongst you would have already observed that the effect of any successful experiment will wash off over time. Not to zero. But less with every passing week. Account for this reality in your PR campaign.
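
As a purely hypothetical illustration of a more honest claim, the sketch below applies an overlap discount and a weekly decay to the per-test lifts from the example above. Both factors are invented placeholders; in practice you would estimate them from a holdout or a follow-up experiment.

```python
# Hypothetical illustration only: per-test lifts do not simply stack, and each
# effect washes off (but not to zero) over time. The overlap discount and the
# weekly decay rate are made-up placeholders, not measured values.

test_lifts = [1e6, 2e6, 3e6, 4e6, 5e6]  # the +$1M ... +$5M from tests one through five

OVERLAP_DISCOUNT = 0.70  # assumed: ~70% of the summed lift survives when all changes ship together
WEEKLY_DECAY = 0.97      # assumed: the combined effect shrinks ~3% per week

naive_total = sum(test_lifts)
deduped_total = naive_total * OVERLAP_DISCOUNT
still_holding_after_a_year = deduped_total * (WEEKLY_DECAY ** 52)

print(f"Naive PR claim:             ${naive_total / 1e6:.1f}M")
print(f"De-duped combined impact:   ${deduped_total / 1e6:.1f}M")
print(f"Still holding after a year: ${still_holding_after_a_year / 1e6:.1f}M")
```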

There is a tension between wanting to start something radical and being honest with the impact. You want to hype, I understand the pressure. Even if you aren't concerned about the company, there are books to sell, speeches to give at conferences. Resist the urge.

When you claim outlandish impact, no one believes you. And in this game, credibility is everything.

You don’t want to win the battle but lose the war.

[Image: experiment success metric]

4. Overestimating your company’s ability to have good ideas.

I’ve never started an experimentation program, worried that we won’t have enough worthy ideas to test. I’m always wrong.

Unless you are running a start-up or a 40-person mid-sized business, your experimentation program is going to rely on a large number of senior executives, the creative team, the agency, multiple marketing departments and product owners, the digital team, the legal team, the analysts, and a boatload of people to come up with ideas worth testing.

It is incredibly difficult to come up with good ideas. Among the reasons I’ve jotted down from my failures:

People lack imagination.

People fight each other for what’s worthy.

People have strong opinions about what is out of bounds for testing. (Sacré bleu, sacrosanct!)

People, more than you would like to admit, insult the User’s intelligence.

People will throw a wrench in the works more often than you like.

People will come up with two great ideas for new copy when you can test six – and then they can’t supply any more for five months.

Go in realizing that it will be extraordinarily difficult for you to fill your experimentation pipeline with worthy ideas. Plan for how to overcome that challenge.

Multivariate Testing sounds good, until you can’t come up with enough ideas to fill the cells, and so you wind up filling them with low-quality ideas. Tool capabilities currently outstrip our ability to use them effectively.

One of my strategies is to create an ideas democracy.

This insight is powerful: The C-Suite has not cornered the market on great ideas for what to test. The lay-folks – your peers in other departments, those not connected to the site/app – are almost perfect for ripping apart the existing user experience and filling your great-ideas-to-test bucket.

One of my solutions is to send an email out to Finance, Logistics, Customer Support – a wide diversity of employees – and invite them for an extended (free) lunch. We throw the site up on the conference room screen, point out some challenges we see in the data (the what), and ask for their hypotheses and ideas (the why). We always leave the room with 10 more ideas than we wanted!

My second source of fantastic ideas to test is the site surveys we ran. If you were unable to complete your task, what were you trying to do? (Or the variation: why not?) Your users will love to complain; your job is to create solutions for those complaints – and then test those solutions to see which ones reduce complaints!

My third source of solid ideas is online usability testing. Lots of tools out there; check out Try My UI or Userfeel. It is wonderful how seeing frustration live sparks imagination. :)

Plan for the problem that your company will have a hard time coming up with good ideas. Without a source – imagination – of material good ideas, the greatest experimentation intent will amount to diddly squat.

[Image: splash page variations]

5. Measuring final outcome, instead of what job the page is trying to do.

How do you know the Multivariate Test worked?

Revenue increased.

BZZZZT.

No.

Go back to the thought expressed in #2 above: An average experience is 12 to 28 pages to conversion.

If you are experimenting on page one, imagine how difficult it will be to know on page #18 if page #1 made a difference (when any of the pages between 1 and 18 could have sucked and killed the positive impact!).

For the Primary Success Criteria, I’m a fan of measuring the job the element / button / copy / configurator was trying to do.

On a landing page, it might be just trying to reduce the 98% bounce rate. Which variation of your experiment gets it down to 60%?

On a product page, it might be increasing product bundles added to the cart instead of the individual product.

On the checkout page, it might be increasing the use of promo codes, or reducing errors in typing phone numbers or addresses.

On a mobile app, success might mean easier recovery of the password to get into the app.

So on and so forth.

I’ll designate the Primary Success of the experiment based only on the job the page/element is trying to do (i.e., the hypothesis I’m testing). If it works, the test was a success (vs. the page test is a success from what happens 28 pages later!).

In each case above, for my Secondary Success Criteria I’ll track the Conversion Rate or AOV or, for multiple purchases in a short duration of time, Repeat Purchases. But, that’s simply to see second order effects. If they are there, I take it as a bonus. If they are not there, I know there are other dilutive elements in between what I’m trying to do and the Secondary Success. Good targets/ideas for future impactful tests.
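
Here is a minimal sketch of that designation logic: the verdict comes from the Primary Success Criterion alone (the job the page is doing), and Secondary metrics are logged only as bonus signals. The metric names and numbers are illustrative.

```python
# Judge the experiment only on the Primary Success Criterion; Secondary metrics
# are bonus signals, not the verdict. Numbers are illustrative.

def judge(primary, secondary):
    if primary["lower_is_better"]:
        primary_win = primary["variation"] < primary["control"]
    else:
        primary_win = primary["variation"] > primary["control"]
    verdict = "SUCCESS" if primary_win else "FAILURE"
    if secondary["variation"] != secondary["control"]:
        note = "secondary moved too: bonus!"
    else:
        note = "no secondary movement: a dilutive element to target in a future test"
    return verdict, note

primary = {"metric": "bounce_rate", "control": 0.98, "variation": 0.60, "lower_is_better": True}
secondary = {"metric": "conversion_rate", "control": 0.021, "variation": 0.021}

print(judge(primary, secondary))  # ('SUCCESS', 'no secondary movement: ...')
```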

If all your experiments are tagged to deliver higher revenue/late-stage success, they’ll take too long to show signal, you will have to fix a lot more than just the landing page, and, worse, you’ll be judging a fish by its ability to climb a tree.

Never judge a fish by its ability to climb a tree!


6. (Advanced) Believing there is one right answer for everyone.

If you randomly hit a post I’ve written, it likely has this exhortation: Segment, Segment, SEGMENT!

What can I say. I’m a fan of segmentation. Because: All data in aggregate is crap.

We simply have too many different types of people, from different regions, with different preferences, with different biases, with different intent, at different stages of engagement with us, with different contexts, engaging with us.

How can there be one right answer for everyone?

Segmentation helps us tease out these elements and create variations of consumer experiences to deliver delight (which, as a side effect, delivers cash to us).

The same principle applies for A/B & MV testing.

If we test version A of the page and a very different version B of the page, in the end we might identify that version B is better by a statistically significant amount.

Great.

Knowing humanity though, it turns out that for a segment of users version A did really well, and they did not like version B at all.

It would be a total bonus to be able to determine who these people are, and what it is about them that made them like A when most liked B.

Not being able to figure this out won’t doom your experimentation program. It might just limit how successful it is.

Hence, once the program has settled in, I try to instrument the experiment and its variations so that we can capture as much data about audience attributes as possible for later advanced analysis.

The ultimate digital user experience holy grail is that every person gets their own version of your website (it is so cheap to deliver!). It is rare that a site delivers on this promise/hope.

As you run your experiments, this is precisely what I’m requesting you to do.

This one strategy will set your company apart. If you can ascertain which cluster of the audience each version of the experiment works with, it holds the potential to be a competitive advantage for you!

Bonus Tip: If you follow my advice above, then you'll need an experience customization platform that allows you to deliver these unique digital experiences. And, you are in luck. Google Optimize allows you to create personalized experiences from the experiments I mention above (and the results you see immediately below).

[Image: different strokes for different folks]

7. (Advanced) Optimizing yourself to a local maxima.

A crucial variation of #5 and #6.

An illustrative story…

When I first started A/B testing at a former employer, we did many things above, and the team did an impressive job improving the Primary Success Metric and at times the Secondary Success Metric. Hurray! Then, because I love the what but also the why, we implemented satisfaction surveys and were able to segment them by test variations.

The very first analysis identified that while we had improved Conversion Rate and Revenue via our test versions, the customer satisfaction actually plummeted!

Digging into the data made me realize, we’d minimized (in some instances eliminated) the Support options. And, 40% of the people were coming to the (ecommerce!) site for tech support.

Ooops.

Many experimentation programs don’t survive because they end up optimizing for a local maxima. They are either oblivious to the global maxima (what is the good for the entire company and not just our team) or there are cultural incentives to ignore the global maxima (in our case likelihood to recommend our company – not just buy from us once).

Be very careful. Don’t be obligated only to your boss/team.

Think of the Senior Leader 5 (or 12) levels higher and consider what their success looks like.

To the extent possible, try to instrument your experiments to give you an additional signal (in a Secondary Success Criteria spirit) to help you discern the impact on the company’s global maxima.

In our case above, it was integrating surveys and clickstream data. You might have other excellent ideas (pass on primary keys into Salesforce!).

Global maxima, FTW. Always.


8. Building for scaled testing from day one.

Experimentation is like religion. You get it. It seems so obviously the right answer. You build for a lifetime commitment.

You start by buying the most expensive and robust tool humans have created. You bully your CMO into funding a 40 people org and a huge budget. You start building out global partnerships across the company and its complexity, and start working on scaled processes. And so much more.

Huuuuuuuuuuuuge mistake.

By the time you build out people, process, tools, across all the steps required to test at scale across all your business units (or even customer experiences), the stakes are so high (we’ve already invested 18 months, built out a 40 people organization, and spent $800k on a tool!!), every test needs to impress God herself or it feels like a failure.

Even if your first five tests succeed, unless you are shaking the very foundations of the business instantly (after 18 months of build out of course) and company profit is skyrocketing causally via your tests… Chances are, the experimentation program (and you) will still feel like a failure to the organization.

And, what are the chances that you are going to make many of the above mistakes before, during and after you are building out your Global Scaled Massive Integrated Testing Program?

Very high, my dear, very high. You are only human.

Building for scale from the start, not having earned any credibility that experimentation can actually deliver impact, is silly and an entirely avoidable mistake.

Try not to.

From my first Web Analytics book, this formula works wonderfully for a successful experimentation program:

1. Start small. Start free (if you can). Prove value. Scale a bit.
2. Prove value. Scale a bit more.
3. Prove value.
4. Rinse. Repeat. Forever.

Robust and remarkable experimentation platforms are built on many points of victory along a journey filled with learnings from wins and losses.

[Image: Obama landing page variations]

9. (Advanced) Not realizing experimentation is not the answer. 

Testing fanatics often see every problem as a nail that can be hammered with an experimentation hammer.

Only sign up to solve with experimentation what can be solved with experimentation.

A few years back I bumped into an odd case of a team filled with smart folks constructing an experiment to prove that SEO is worth investing in. There was a big presentation to go with the idea. Robust math was laid out.

I could not shake the feeling of WTH!

When you find yourself in a situation where you are using, or are being asked to use, experimentation to prove SEO’s value… Realize the problem is company culture. Don’t use experimentation to solve that problem. It is a lose-lose.

Not that long ago, I met a team that was working on some remarkable experiments for a large company. They were doing really cool work, they were doing analysis that was truly inspiring.

I was struck by the fact that at that time their company had been losing market share every single month for the last five years, to a point where they were approaching irrelevance.

Could the company not use a team of 30 people to do something else that could contribute to turning the company around? I mean, they were just doing A/B testing and even they realized that while mathematically inspiring and executionally cool, there is no way they were saving the business.

It is not lost on me that this recommendation goes against the very premise of this newsletter – to build a robust and remarkable experimentation program. But, the ability to see the big picture and demonstrate agility is a necessary skill to build for long-term success.


Bottom line.

I’ve only used or die in two of my slogans. Segment, or die. Experiment, or die.

I deeply, deeply believe in the power of A/B and Multivariate Testing. It is so worthy of every company’s love and affection. Still, over time, I’ve become a massive believer in the value of robust experimentation and testing to solve a multitude of complex business challenges. (Ideas A – F, in section one above.)

In a world filled with data but few solid answers, ideas A – F and site/app testing provide answers that can dramatically shift your business's trajectory.

Unlock your imagination. Unlock a competitive advantage.

May the force be with you.

Special Note: All the images in this blog post are from an old presentation I love by Dan Siroker. Among his many accomplishments, he is the founder of the wonderful testing platform Optimizely.

The Premium edition of my weekly newsletter shares solutions to your real-world Marketing & Analytics challenges, while unpacking future-looking opportunities. Subscribe here, and help raise money for charity.


The Most Important Business KPIs. (Spoiler: Not Conversion Rate!)


I was reading a paper by a respected industry body that started by flagging head fake KPIs. I love that moniker, head fake.

Likes. Sentiment/Comments. Shares. Yada, yada, yada.

This is great. We can all use the head fake moniker to call out useless activity metrics.

[I would add other head fake KPIs to the list: Impressions. Reach. CPM. Cost Per View. Others of the same ilk. None of them are KPIs, most barely qualify to be a metric because of the profoundly questionable measurement behind them.]

The respected industry body quickly pivoted to lamenting their findings that demonstrate eight of the top 12 KPIs being used to measure media effectiveness are exposure-counting KPIs.

A very good lament.

But then they quickly pivot to making the case that the Most Important KPIs for Media are ROAS, Exposed ROAS, “Direct Online Sales Conversions from Site Visit” (what?!), Conversion Rate, IVT Rate (invalid traffic rate), etc.

Wait a minute.

ROAS?

Most important KPI?

No siree, Bob! No way.

Take IVT as an example. It is such a niche obsession.

Consider that Display advertising is a tiny part of your budget. A tiny part of that tiny part is likely invalid. It is not a leap to suggest that anointing this barely-a-metric as a KPI is a big distraction from what's important. Oh, and if your display traffic was so stuffed with invalid traffic that it is a burning platform requiring executive attention… Any outcome KPI you are measuring (even something as basic as Conversion Rate) would have told you that already!

Conversion Rate obviously is a fine metric. Occasionally, I might call it a KPI, but I have never anointed it as the Most Important KPI.

In my experience, Most Important KPIs are those that are tied to money going into your bank account.

The paper from the respected body made me open PowerPoint and create a visual that would make the case for never identifying Conversion Rate or ROAS as the Most Important KPI in your company / practice of analytics.

We expect greatness from our work, let’s focus on great KPIs.

This blog post was originally published as an edition of my newsletter TMAI Premium. It is published 50x/year, and shares bleeding-edge thinking about Marketing, Analytics, and Leadership. You can sign up here. All revenues from your subscription are donated to charity.

 

The Money In-Out Continuum | Intro.

When I think  of importance, I have five elements in mind.

Let’s identify them first.

[Image: The Money Making Continuum]

To make money, you have to spend money. The law of God.

That’s the red box on your left.

Revenue is what the customer will pay for a product or a service. It is a range above because some products and services you sell for more, others for less.

Media Costs is the amount you have to spend on advertising (a category that also includes your Owned and Earned efforts – after all SEO, Email, Organic Social all cost money).

Hopefully, you spend less on acquiring the order than the revenue you earned. Hopefully. :)

Obviously, whatever you sell is not free to you.

Cost of Goods Sold (CoGS) is the amount it costs you to manufacture the product or the service.

As an example, revenue from selling an iPhone is approx. $1,099 and the CoGS is approx. $490. (Source: Investopedia.)

But. Wait. $609 is not all Profit. There’s more to account for.

Fully Loaded Costs (FLCo) contains the costs associated with salaries of human and robotic employees, agency fees, depreciation associated with buildings, free doughnuts on Fridays for all employees, credit card processing fees, discounts, and the long laundry list of things that go into producing the product/service that you sold to earn revenue.

I’ve represented FLCo (I’m pronouncing that as flock, what do you think?) as a smallish bar above, but I don’t need to stress just how big it can be. Hence, it is crucial to measure and account for.

$$$ – something close to Profit – is the money left over that will go into your bank account.

Money at last. Money at last. Thank God almighty, we have money at last!

:)

The Money In-Out Continuum | KPIs.

Now that we have a common understanding of the elements that form the money in-out continuum, we can layer in what it is that we understand when we measure everyday metrics — and the ones anointed Most Important KPIs by the respected industry group.

Let's lay out the depth of what each KPI measures on our continuum.

[Image: The Money Making Continuum | KPIs]

Conversion Rate is a fine metric. A junior Analyst – even a budding reporting-focused new hire – should be watching it.

But. As illustrated above:

1. It is very, very, very far from the green, and
2. It does not have any sense of what it cost you to get that conversion!

You can, literally, go bankrupt increasing your Conversion Rate.

(Hence, at the very minimum, pair up Conversion Rate with Average Order Value to get an initial whiff of doom.)

Conversion Rate is not a Most Important KPI.

Return on Ad Spend (ROAS) is an ok metric.

It is typically computed by dividing the Revenue from Advertising by the Cost of Advertising (a.k.a. Media Costs). You times that by 100, and you get a ROAS %.

ROAS only sucks less. It remains very, very, very far away from the green. Additionally, by aggregating products/services into lumpy groups, it can give a misleading sense of success.

[Disclosure: I profoundly dislike ROAS — even hate it — for, among other reasons, driving a disproportionate amount of obsession with ONLY Paid Media by CMOs when Paid Media typically delivers a minority of the incremental business revenue. Bonus Read: Attribution is not incrementality.]

Gross Profit is revenue minus Media Costs minus CoGS.

Now, you have yourself a KPI! Not yet the Most Important KPI, but a KPI nonetheless.

In the past, I’ve recommended using Custom Metrics in tools like Google Analytics to compute Gross Profit. You can do this using an aggregate % number that you can lop off for CoGS. At the very minimum, your Traffic Sources report does not have to stop at Revenue (misleading much?).

With Google’s Data Studio, you can actually bring item level CoGS in and easily compute Gross Profit for every single order you get.

It. Will. Change. Your. Life.

Net Profit then is revenue minus Media Costs minus CoGS minus FLCo.

Finally, you have something super cool.

You can work with your Finance team to get FLCo. You’ll get a different number for your Owned, Earned, and Paid media strategies. You’ll have a number that accounts for a sale that might have happened on your website vs. a retail store vs. placed on the website but picked up at a retail store, etc.

You can build this into Google’s Data Studio if you like. Or, the Business Intelligence tool of choice used by your company.

Net Profit totally qualifies for the Most Important KPI tag.

It helps identify how much money you created that is going into the bank, and what it is that you did exactly to create that money.

Yep. Understanding that will deliver a transformative impact on your business.

I’ll go out on a limb and say that it will also shock your CMO.
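
To tie the four measures together, here is a minimal sketch using the definitions above. Revenue and CoGS echo the iPhone example; the media cost and FLCo figures are made-up placeholders.

```python
# The money in-out continuum, per the definitions above. Revenue and CoGS echo
# the iPhone example (~$1,099 and ~$490); media costs and FLCo are assumed.

revenue     = 1099.0
media_costs = 120.0   # assumed spend to acquire this order
cogs        = 490.0
flco        = 250.0   # assumed fully loaded costs allocated to this order

roas         = revenue / media_costs * 100      # Revenue from ads / Media Costs x 100
gross_profit = revenue - media_costs - cogs
net_profit   = gross_profit - flco              # revenue - Media Costs - CoGS - FLCo

print(f"ROAS:         {roas:.0f}%")              # far from the green
print(f"Gross Profit: ${gross_profit:,.0f}")     # now you have a KPI
print(f"Net Profit:   ${net_profit:,.0f}")       # a Most Important KPI candidate
```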

The Money In-Out Continuum | The Problem.

I say with some confidence that none of your reports for digital, and barely any reports for the entire business, currently report on either of the above two Most Important KPIs.

Why?

Simple. You are using Adobe Analytics or Google Analytics or some such tool, and they have no built-in concept of 1. Media Costs 2. CoGS, and 3. FLCo.

Sure, if you connect Google Analytics to your Google Ads account, #1 becomes easy. You have Media Costs. But, in addition to Google, you are advertising on a ton of other channels and getting all those costs is a pain – even when possible.

Obviously, digital analytics tools have no concept of #2 (CoGS) or #3 (FLCo).

You are stuck making poor business decisions, in the best case scenario, at stage two of the Stages of Savvy.

This is not enough.

[Image: The Money Making Continuum | Stages of Savvy]

To address this gap in your analytics strategy, my recommendation is to break out of the limitations of your digital analytics tools and shift to your business intelligence tools (start by exhausting the features Data Studio provides you for the magnificent cost of zero dollars – lower FLCo!).

Recognize that Analysis Ninjas live at stage 3, and they truly come into their own when they get to stage 4.

Is this true for you? Does your analytical output include Net Profit?

By a staggering coincidence, Analysis Ninjas who live in stages 3 and 4 also have long, productive, well-compensated careers! Because getting there is hard, AND it requires building out a wide array of cross-functional relationships (always crucial when it comes to annual performance reviews!).

#liveinstage4

The Money In-Out Continuum | The Most Important KPI.

Obviously, the most important KPI is the one you are not measuring.

Customer Lifetime Value (CLV) is the sum of Net Profit earned from a customer over the duration they are your customer.

Say I buy the Pixel 1 phone from Google, and Google makes $50 Net Profit from that sale.

Then, I buy the Pixel 2, Pixel 3a, and Pixel 4a. Google makes $60, $60, and $60 Net Profit (they save on advertising costs to me, which translates into higher profit).

Then, for reasons related to innovativeness, I switch to Samsung and buy a Z Flip 3 (great phone!).

[Image: The Money Making Continuum | Customer Lifetime Value]

My CLV for Google is: 50+60+60+60 = $230.

I originally converted to buying a Pixel 1 after typing best android phone into Bing.

Analytics tools, configured right, with analysis done by Stage 4 Analysts, will show a Net Profit of $50 driven by Bing.

Except, it is $230.

Cool, right?
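
Here is the same arithmetic as a tiny sketch: sum the Net Profit from every purchase over the customer's lifetime, instead of crediting the acquisition channel with only the first order's profit.

```python
# CLV = sum of Net Profit across every purchase in the customer's lifetime,
# versus the $50 the analytics tool shows for Bing.

purchases = [
    ("Pixel 1",  50),   # the original Bing-driven conversion
    ("Pixel 2",  60),
    ("Pixel 3a", 60),
    ("Pixel 4a", 60),
]

first_order_net_profit = purchases[0][1]
clv = sum(net_profit for _, net_profit in purchases)

print(f"What the analytics tool shows for Bing: ${first_order_net_profit}")
print(f"Customer Lifetime Value (Net Profit):   ${clv}")  # $230
```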

So. Why don’t we all calculate CLV every day and every night, and then some more of it on the weekend?

Because it is hard.

Go all the way back up and reflect: why is it that we are satisfied with Conversion Rate or ROAS vs. Gross Profit?

Because it is easy.

It is so hard to get to Gross and Net Profit.

Then, to be able to keep track of that same person (me, in the above Pixel example). Then, wait for me to churn so that you get my CLV. Oh, and remember to have systems interconnected enough to keep track of every touchpoint with me to ensure you attribute accurately.

It is hard.

Of course, you don’t have to do the computation for every individual. You can do it by micro-segments (like-type people, same geo, age groups, products, etc.). You can do it in aggregate.

Sadly, none of these is easy.

Hence. You don’t do it.

No matter what CLV zealots will tell you.

If they make you feel bad. Don’t feel bad.

My advice is twofold:

1. Keep your primary quest to get to stage 4 (Net Profit) because the quality of your insights will improve by 10x.
2. (If you don’t have it already) Create a long-term plan to understand the lifetime value of a customer for your company.

Execute that advice in that order, and you'll get to the global maxima faster.

As you contemplate your strategy for #2 above, my dear friend David Hughes helped write one of my favorite posts on this blog: Excellent Analytics Tip #17: Calculate Customer Lifetime Value.

Read it. Internalize the recommendations. Download the detailed lifetime value model included in the post, and jumpstart your journey.

#CLVFTW!

Bottom Line.

It is unlikely that any of you reading this blog on advanced analytics is measuring a head fake metric. You realize the futility already.

I also believe that you and I can do more to move beyond stage 1 and stage 2 of the stages of savvy. And, I hope I’ve encouraged you to do that today. It is so worth it.

I believe almost all of us can do more to be on a CLV journey — but not at the cost of losing focus on stages 3 and 4.

Let’s get to it!

As always, it is your turn now.

Via comments, please share your critique, reflections, tips and your KPI lessons from the front lines of trying to drive material business impact. What do you disagree with above? What has been the hardest nut for you to crack in your career?

The Premium edition of my weekly newsletter shares solutions to your real-world challenges while unpacking future-looking opportunities. Subscribe here.

 

The post The Most Important Business KPIs. (Spoiler: Not Conversion Rate!) appeared first on Occam's Razor by Avinash Kaushik.

Increase Analytics Influence: Leverage Predictive Metrics!


Almost all metrics you currently use have one common thread: They are almost all backward-looking.

If you want to deepen the influence of data in your organization – and your personal influence – 30% of your analytics efforts should be centered around the use of forward-looking metrics.

Predictive metrics!

But first, let's take a small step back. What is a metric?

Here’s the definition of a metric from my first book:

A metric is a number.

Simple enough.

Conversion Rate. Number of Users. Bounce Rate. All metrics.

[Note: Bounce Rate has been banished from Google Analytics 4 and replaced with a compound metric called Engaged Sessions: the number of sessions that lasted 10 seconds or longer, had 1 or more conversion events, or had 2 or more page views.]
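For clarity, here's a minimal sketch of that compound definition applied to raw session data. The column names are hypothetical, and GA4 of course computes this for you; the point is only to show the logic:

```python
import pandas as pd

# Hypothetical session-level data; GA4 computes Engaged Sessions natively.
sessions = pd.DataFrame({
    "duration_seconds":  [4, 45, 8, 12],
    "conversion_events": [0, 0, 1, 0],
    "page_views":        [1, 3, 1, 2],
})

# Engaged = lasted 10+ seconds, OR 1+ conversion events, OR 2+ page views.
sessions["engaged"] = (
    (sessions["duration_seconds"] >= 10)
    | (sessions["conversion_events"] >= 1)
    | (sessions["page_views"] >= 2)
)

print(int(sessions["engaged"].sum()))  # 3 engaged sessions out of 4
```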

The three metrics above are backward-looking. They are telling us what happened in the past. You'll recognize now that that is true for almost everything you are reporting (if not everything).

But, who does not want to see the future?

Yes. I see your hand up.

The problem is that the future is hard to predict. What’s the quote… No one went broke predicting the past. :)

Why use Predictive Metrics?

As Analysts, we convert data into insights every day. Awesome. Only some of those insights get transformed into action – for any number of reasons (your influence, quality of insights, incomplete stories, etc. etc.). Sad face.

One of the most effective ways of ensuring your insights will be converted into high-impact business actions is to predict the future.

Consider this insight derived from data:

The Conversion Rate from our Email campaigns is 4.5%, 2x of Google Search.

Now consider this one:

The Conversion Rate from our Email campaigns is 4.5%, 2x of Google Search.

Our analysis suggests you can move from six email campaigns per year to nine email campaigns per year.

Finally consider this one:

The Conversion Rate from our Email campaigns is 4.5%, 2x of Google Search.

Our analysis suggests you can move from six email campaigns per year to nine email campaigns per year.

We predict it will lead to an additional $3 mil in incremental revenue.

The predicted metric is New Incremental Revenue. Not just that, you used sophisticated math to identify how much of the predicted Revenue will be incremental.

Which of these three scenarios ensures that your insight will be actioned?

Yep. The one with the Predictive Metric.

Because it is hard, really hard, to ignore your advice when you come bearing $3 mil in incremental revenue!
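If you are wondering how such a number might be assembled, here's a deliberately simple back-of-envelope sketch. Every input below is a hypothetical placeholder; a real prediction would model diminishing returns, seasonality, and list fatigue rather than multiply averages:

```python
# Hypothetical inputs: replace with your own modeled values.
current_campaigns = 6
proposed_campaigns = 9
avg_revenue_per_campaign = 1_250_000   # modeled average, hypothetical
expected_incrementality = 0.80         # share you would not have earned anyway, hypothetical

extra_campaigns = proposed_campaigns - current_campaigns
predicted_incremental_revenue = (
    extra_campaigns * avg_revenue_per_campaign * expected_incrementality
)
print(f"${predicted_incremental_revenue:,.0f}")  # $3,000,000
```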

Starting your Predictive Metrics journey: Easy Peasy Lemon Squeezy.

In a delightfully wonderful development, every analytics tool worth its salt is adding Predictive Metrics to its arsenal. Both as a way to differentiate themselves with their own take on this capability, and to bring something incredibly valuable to businesses of all types/sizes.

In Google Analytics, an early predicted metric was: Conversion Probability.

Simply put, Conversion Probability determines a User’s likelihood to convert during the next 30 days!

I was so excited when it first came out.

Google Analytics, in this instance, analyzes all your first-party data, identifies patterns of behavior that lead to conversions, then looks at everyone who did not convert and, on your behalf, assigns each a score from 0 (no chance of conversion) to 100 (very high chance of conversion).

Phew! That’s a lot of work. :)

What’s particularly exciting is that Conversion Probability is computed for individual Users.
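Google does not publish the internals of this model, but conceptually you can approximate the idea on your own first-party data with a simple propensity model. A hedged sketch, not GA's actual algorithm, using made-up features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-user behavioral features (e.g. sessions, pages/session,
# recency, all scaled 0-1) and a converted/not-converted label from the past.
rng = np.random.default_rng(42)
X = rng.random((500, 3))
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)  # stand-in for "converted"

model = LogisticRegression().fit(X, y)

# Score users who have not converted yet: 0-100 likelihood of converting.
new_users = rng.random((5, 3))
conversion_probability = model.predict_proba(new_users)[:, 1] * 100
print(conversion_probability.round(1))
```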

You can access the report easily in GA: Audience > Behavior > Conversion Probability.

[Image: Google Analytics Conversion Probability report]

An obvious use of this predicted behavior is to do a remarketing campaign focusing on people who might need a nudge to convert, 7,233 in the above case.

But, there are additional uses of this data in order to identify the effectiveness of your campaigns.

For example, here are traffic sources sorted by Average Conversion Probability…

[Image: Traffic sources sorted by Average Conversion Probability]

In addition to understanding Conversion Rate (last column) you can now also consider how many Users arrived via that channel who are likely to convert over the next 30 days.

Perhaps more delightfully you can use this for segmentation. Example: Create a segment for Conversion Probability > 50%, apply it to your fav reports like the content ones.

There is so much more you can explore.

[TMAI Premium subscribers, to ensure you are knocking it out of the park, be sure to review the A, B, O clusters of actionable recommendations in #238: The OG of Analytics – Segmentation! If you can’t find it, just email me.]

Bonus Tip: I cannot recommend enough that you get access to the Google Merchandise Store Google Analytics account. It is a fully working, well-implemented GA account with real data from an actual business. Access is free. So great for learning. The screenshot above is from that account.

Three Awesome New Predictive Metrics!

With everything turning over to the exciting world of Google Analytics 4, you get a bit more to add to your predictive metrics arsenal.

Conversion Probability is being EOLed with GA4, but worry not, as you get a like-type replacement: Purchase Probability.

The probability that a user who was active in the last 28 days will log a specific conversion event within the next 7 days.

Currently, purchase/ecommerce_purchase and  in_app_purchase events are supported.

You can do all of the same things as we discussed above for Conversion probability.
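If you want to internalize what that definition implies, here is a minimal sketch of how the prediction target would be constructed from an event log. The column names are hypothetical, and this is only the labeling step, not the model GA4 trains for you:

```python
import pandas as pd

# Hypothetical event log: one row per (user, event, date).
events = pd.DataFrame({
    "user_id":    ["a", "a", "b", "b", "c"],
    "event_name": ["page_view", "purchase", "page_view", "page_view", "purchase"],
    "event_date": pd.to_datetime(
        ["2024-05-20", "2024-06-03", "2024-05-28", "2024-05-30", "2024-04-01"]),
})

as_of = pd.Timestamp("2024-06-01")

# Eligible population: users active in the 28 days up to the cutoff.
active = (events["event_date"] > as_of - pd.Timedelta(days=28)) & (events["event_date"] <= as_of)
eligible_users = set(events.loc[active, "user_id"])

# Label: did an eligible user log a purchase in the 7 days after the cutoff?
future = (events["event_date"] > as_of) & (events["event_date"] <= as_of + pd.Timedelta(days=7))
purchasers = set(events.loc[future & (events["event_name"] == "purchase"), "user_id"])

labels = {user: int(user in purchasers) for user in eligible_users}
print(labels)  # e.g. {'a': 1, 'b': 0}
```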

To help you get closer to your Finance team – you really need to be BFFs with them! – you also get a predictive metric that they will love: Revenue Prediction

The revenue expected from all purchase conversions within the next 28 days from a user who was active in the last 28 days.

You can let your imagination roam wild as to what you can do with this power.

Might I suggest you start by looking at this prediction and then brainstorm with your Marketing team how you can overcome the shortfall in revenue! Not just using Paid strategies, but Earned and Owned as well.

Obviously in the rare case the Revenue Prediction is higher than target, you all can cash in your vacation days and visit Cancun. (Wait. Skip Cancun. That brand’s tainted. :)

There’s one more predicted metric that I’ve always been excited about: Churn Probability.

The probability that a user who was active on your app or site within the last 7 days will not be active within the next 7 days.

What’s that quote? It costs 5000x more to acquire a new User than to retain the one you already have? I might be exaggerating a tad bit.

For mobile app/game developers in particular (or content sites, or any entity for whom recency/frequency is a do-or-die proposition), churn is a constant obsession, and now you can proactively get churn probability. Make it a core part of your analytical strategy to understand the Behavior, Sources, and Users that are more/less likely to churn, and action the insights.
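As a sketch of the action loop I mean, here's a hypothetical user-level export with a churn probability already scored (by GA4, or by your own model) joined to acquisition source. None of the numbers are real:

```python
import pandas as pd

users = pd.DataFrame({
    "user_id":           [1, 2, 3, 4, 5, 6],
    "source":            ["organic", "paid_search", "email", "paid_search", "email", "organic"],
    "churn_probability": [0.82, 0.35, 0.15, 0.91, 0.22, 0.40],
})

# Which acquisition sources bring in users most likely to churn?
by_source = users.groupby("source")["churn_probability"].mean().sort_values(ascending=False)
print(by_source)

# The at-risk segment you might target with a retention campaign.
at_risk = users[users["churn_probability"] >= 0.7]
print(at_risk["user_id"].tolist())  # [1, 4]
```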

GA4 does not simply hand you these metrics willy-nilly. The algorithms require a certain number of Users, Conversions, etc., to ensure they are doing sound computations on your behalf.

These three predictive metrics illustrate the power that forward-looking computations hold for you. There are no limits to how far you can take these approaches to help your company not only look backwards (you’ll be stuck with this 70% of the time) but also take a peek into the future (aim to spend 30% of your time here).

And please consider segmenting Purchase Probability, Revenue Prediction, and Churn Probability!

Bonus Tip: If you would like to migrate to the free version of Google Analytics 4 to take advantage of the above delicious predictive metrics, here’s a helpful article.

Predictive Metrics Nirvana – An Example.

For a Marketing Analyst, few things come closer to nirvana, in terms of forward-looking predictions from sophisticated analysis, than helping set the entire budget for the year, including allocating that budget across channels based on diminishing-returns curves and future opportunity, and predicting Sales, Cost Per Sale, and Brand Lift.

Here’s how that looks from our team’s analytics practice…

[Image: Predicted budget and channel allocation with Sales, Cost Per Sale, and Brand Lift]

Obviously, all these cells have numbers in them. You’ll understand that sharing them with you would be a career-limiting move on my part. :)

I can say that there are thirteen different element sets that go into this analysis (product launches, competitor behavior, past analysis of effectiveness and efficiency, underlying marketing media plan, upcoming industry changes, and a lot, lot, lot, of data).

Supercool – aka superhard – elements include being able to tie Brand Marketing to short-, medium-, and long-term Sales.

Forward-looking allocations are based on simulations that take all of the above and produce low-, medium-, and high-risk plans – from which our senior leader gets to choose the one she believes aligns with her strategic vision.

[Note: Strictly speaking what we are doing above is closer to Predictive Modeling, even though we have a bunch of Predictive Metrics. Potato – Potahto.]
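To make the diminishing-returns piece a little less abstract, here's a heavily simplified sketch: hypothetical saturating response curves per channel, plus a greedy allocation that hands each small budget increment to whichever channel has the best marginal return. Our real models are far richer (thirteen element sets, simulations, risk scenarios); this only illustrates the core mechanic, and every parameter is made up:

```python
import numpy as np

# Hypothetical saturating response curves: predicted sales as a function of spend.
def response(spend, scale, saturation):
    return scale * (1 - np.exp(-spend / saturation))

channels = {
    "search": (900, 400),   # (scale, saturation), hypothetical
    "video":  (700, 600),
    "social": (500, 300),
}

budget, step = 1000.0, 10.0
alloc = {name: 0.0 for name in channels}

# Greedy allocation: each increment goes to the channel with the highest
# marginal sales lift, which naturally respects diminishing returns.
for _ in range(int(budget / step)):
    def marginal(name):
        scale, sat = channels[name]
        return response(alloc[name] + step, scale, sat) - response(alloc[name], scale, sat)
    best = max(channels, key=marginal)
    alloc[best] += step

predicted_sales = sum(response(alloc[n], *channels[n]) for n in channels)
print(alloc, round(predicted_sales, 1))
```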

I share our work as a way to invite your feedback on what we can do better, and in the hope that, if you are starting your Predictive Metrics practice, it might serve as a north star.

From experience, I can tell you that if you have ever felt that you as an Analyst don’t have influence, that your organization ignores data, then there is nothing like Predictive Metrics to deepen your influence and impact on the business.

When people use faith to decide future strategy, the one thing they are missing is any semblance of what impact their faith-based strategy will have. The last three rows above are how you stand out.

BOOM!

The Danger in Predicting the Future.

You are going to be wrong.

A lot, initially. Then less over time, as you get better and better at predicting the future.

(Machine Learning comes in handy there as it can ingest so much more complexity and spit out scenarios we simply can’t imagine.)

But, you will never be exactly right. The world is complicated.

This does not scare me for two reasons, I urge you to consider them:

1. Very few companies drove straight looking out of the rear view mirror. But, that is exactly what you spend time trying to do every single day.

2. Who is righter than you? The modern corporation mostly runs on faith. You are going to use data, usually a boat-load of it. It is usually far better than faith. And, when you are wrong, you can factually go back and update your models (faith usually is not open to being upgraded).

So. Don’t be scared.

Every time you are wrong, it is an opportunity to learn and be more right in the future – even if perfection will always be out of reach.

Bottom Line.

My hypothesis is that you are not spending a lot of time on predictive metrics and predictive modeling. Change this.

It is a great way to contribute materially to your company. It is a great way to invest in your personal learning and growth. It is a fantastic way to ensure your career is future-proof.

Live in the future – at least some of the time – as an Analyst/Marketer.

I’ll see you there. :)

As always, it is your turn now.

Please share your critique, reflections, tips and your lessons from projects that shift your company from only backward-looking metrics to forward-looking metrics that predict the future.

The post Increase Analytics Influence: Leverage Predictive Metrics! appeared first on Occam's Razor by Avinash Kaushik.

Marketing Analytics: Attribution Is Not Incrementality


One of the business side effects of the pandemic is that it has put a very sharp light on Marketing budgets. This is a very good thing under all circumstances, but particularly beneficial in times when most companies are not doing so well financially.

There is a sharper focus on Revenue/Profit.

From there, it is a hop, skip, and a jump to, hey, am I getting all the credit I should for the Conversions being driven by my marketing tactics? AKA: Attribution!

Right then and there, your VP of Finance steps in with a, hey, how many of these conversions that you are claiming are ones that we would not have gotten anyway? AKA Incrementality!

Two of the holiest of holy grails in Marketing: Attribution, Incrementality.

Analysts have died in their quests to get to those two answers. So much sand, so little water.

Hence, you can imagine how irritated I was when someone said:

Yes, we know the incrementality of Marketing. We are doing attribution analysis.

NO!

You did not just say that.

I’m not so much upset as I’m just disappointed.

Attribution and Incrementality are not the same thing. Chalk and cheese.

Incrementality identifies the Conversions that would not have occurred without various marketing tactics.

Attribution is simply the science (sometimes, wrongly, art) of distributing credit for Conversions.

None of those Conversions might have been incremental. Correction: It is almost always true that a very, very, large percentage of the Conversions driven by your Paid Media efforts are not incremental.

Attribution ≠ Incrementality.
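The cleanest way to see the difference is in the math each one does. Attribution splits the conversions you observed across touchpoints; incrementality compares a treated group against a holdout that saw no marketing. A minimal sketch of the latter, with made-up test/control numbers:

```python
# Hypothetical geo-holdout results (conversions per thousand users).
test_conversions_per_1k    = 12.0   # exposed to the campaign
control_conversions_per_1k = 10.0   # holdout, no campaign

# Incremental conversions are the ones you would NOT have gotten anyway.
incremental_per_1k = test_conversions_per_1k - control_conversions_per_1k
incrementality_rate = incremental_per_1k / test_conversions_per_1k

print(f"{incrementality_rate:.0%} of test-cell conversions were incremental")  # 17%
```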

In my newsletter, TMAI Premium, we’ve covered how to solve the immense challenge of identifying the true incrementality delivered by your Marketing budget. (Sign up, then email me for a link to that newsletter.)

Today, let me unpack the crucial differences between attribution and incrementality to empower you – and your Senior Leaders – to have intelligent discussions about the actual problems you need solved, and justify an investment in additional Analysis Ninjas.

Understanding this difference will also help you ace your job interviews – a lovely bonus. :)

An introduction to multi-channel digital attribution analysis.

When you open a report in any digital analytics tool, like Google Analytics, almost all the reports you look at attribute full credit for the Conversion to the referrer associated with the last session where the conversion occurred.*

(*For the Analytics super nerds, my sisters and brothers: Strictly speaking, Google Analytics reports are not last-click, they are last-non-direct-click.)

In English, this means if someone clicked on a Paid Search ad on Bing and converted…

[Image: Conversion journey with a single Paid Search touchpoint]

…Bing gets all the credit for that conversion in your Analytics reports.
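Here is a sketch of that default credit rule (last non-direct click) applied to a single user's touchpoint path; a simplified illustration, not Google Analytics' actual code:

```python
# One user's touchpoints, in order, ending in a conversion.
path = ["organic_search", "display", "direct", "paid_search_bing", "direct"]

def last_non_direct_click(touchpoints):
    """Give 100% of the conversion credit to the last non-direct touchpoint."""
    for channel in reversed(touchpoints):
        if channel != "direct":
            return channel
    return "direct"  # fall back if the entire path was direct

print(last_non_direct_click(path))  # paid_search_bing
```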

For example, a report like this one…

[Image: Google Analytics conversions report]

This report is not an entirely accurate representation of advertising’s performance because the last-click rarely represents the complete consumer journey.

The customer might have seen, or been forced to see :), other ads preceding that last touch-point, which happened to be Bing in our example.

The act of identifying all the touch-points (impressions and/or clicks) and their influence is what’s known as attribution analysis.

Many analytics tools, including Google Analytics, come built with functionality to help you understand the complete consumer journey and all the touch-points leading up to a conversion.

What Marketers do, in particular the ones whose job descriptions include only paid media, is say ok fine, let us look at all the touch-points that led to conversions.

In a nifty bit of sleight of hand, they create analysis that is reflected in this picture…

[Image: Conversion journey across paid media touchpoints]

This does suck less.

But. Notice a pattern. They are all paid ads.

Paid Media Marketers (sometimes referred to as the Direct Response Team or Performance Marketing) love doing attribution analysis across only paid media channels because it allows them to distribute credit only across their work.

This inflates the importance of paid advertising, at the expense of owned and earned tactics – which your company is also investing in.

That of course suits the agenda of your Paid Media Marketers just fine.

But. It is wrong.

Because the complete consumer journey to conversion, in this instance, actually looks like this…

[Image: Conversion journey across owned, earned, and paid media touchpoints]

The advertising played a role in driving the conversion. Yes. Absolutely!

So did owned media. Email, was a crucial second to last before conversion.

So did earned media. Organic Search, got the individual to an optimal page on your site.

As you do digital attribution analysis, watch out for Paid Marketers who say they do attribution analysis but only count paid media channels. Work to help them, and your Senior Leaders, understand that this inflates paid marketing’s value well beyond what’s deserved.

The paid media team might resist saying omg but that will require expensive tools and more data and take so much time and it is so painful and omg why do you hate us so much!

Worry not. Every decent web analytics tool now includes built-in attribution analysis across owned, earned, and paid across digital platforms.

It will, literally, take 30 seconds to get going (of which 25 seconds is you booting up your computer).

Is my attribution analysis awesome?

Here’s how you can measure how sophisticated your attribution approach is:

If you are using the full power of the attribution modeling across owned, earned, and paid, you are at an industry-average level of analytics sophistication.​

If you have hooked up the owned, earned, and paid multi-channel attribution analysis results directly to platforms you are buying ads on to ensure smarter bidding, you are at an industry-leading level of analytics sophistication. ​

(In English: Attribution analysis will give AdWords proper credit instead of an over- or under-inflated value. That analysis can be connected to AdWords, and AdWords will lower or raise your bids to account for the credit it deserves. Nice.)

Getting to an industry-leading level is not just about being able to do the analysis, it is about automating the actions that can be taken from the analysis.

Now you understand why attribution is important, how your Paid Marketing team is likely inflating its value, and how to check how sophisticated your approach is… Let’s take a small detour and understand two attribution analysis challenges I want you to be careful about. Then, we’ll get back to Marketing incrementality.

Attribution Challenge #1. Which attribution model rocks?

If last-click and last-non-direct-click are not the best options, which attribution model should you use?

Some people like to use first-click.

First-click attribution is akin to giving my first girlfriend 100% of the credit for me marrying my wife.

Not that smart, right?

You can read about all digital attribution models in this post on the good, bad, and ugly attribution models – it contains pretty pictures!

TL;DR: I recommend not using last-click, first-click, linear, time decay, or position-based models. If you are a genius, you can use custom attribution modeling. See the post above for all the delicious details, pros and cons.

The one I recommend is data-driven attribution modeling.

Don’t overthink it. (In this instance…) You and I are not as smart as machine learning algorithms that can analyze insane complexity across terabytes of information from millions of customer interactions.

Trust the machines. There are more productive uses of your time.
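To see why the rule-based models are blunt instruments, here's a sketch that distributes credit for one conversion across a path under three of them. Data-driven attribution, by contrast, learns these weights from your data instead of asserting them:

```python
# One hypothetical conversion path, oldest touchpoint first.
path = ["organic_search", "email", "paid_search"]

def first_click(path):
    """100% of the credit to the first touchpoint."""
    return {path[0]: 1.0}

def linear(path):
    """Equal credit to every touchpoint."""
    return {ch: 1 / len(path) for ch in path}

def position_based(path, ends=0.4):
    """40% each to the first and last touchpoints, 20% split across the middle."""
    credit = {ch: 0.0 for ch in path}
    credit[path[0]] += ends
    credit[path[-1]] += ends
    middle = path[1:-1]
    for ch in middle:
        credit[ch] += (1 - 2 * ends) / len(middle)
    return credit

for model in (first_click, linear, position_based):
    print(model.__name__, model(path))
```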

Attribution Challenge #2. Wait, what about all my offline media?

This is not the complete picture for so many companies…

[Image: Conversion journey across digital owned, earned, and paid media touchpoints]

The real world also exists, in addition to the digital one! When you create a full view of your total marketing budget, for most companies the reality of your marketing efforts looks like this…

[Image: Conversion journey across digital and offline media touchpoints]

So what do you do with your Google or Adobe web analytics tool?

Not much.

Sure, every company will tell you that you can stand on one foot, curl three left toes, raise your right hand, close only one eye, stand under a shower of arctic-cold water, and fast for 27 days, and then do some hard coding to jury-rig some sort of signal-gathering mechanism with JavaScript hacking, and maybe get your offline media into your web analytics tool, and maybe have complete attribution modeling ability.

I just want to observe that their recommended solution is difficult to pull off.

There are other options at your disposal. I refer to this full attribution quest as Marketing Portfolio Attribution Analysis.

Your primary solution will consist of advanced statistical modeling.

These are custom-built for each company; there is no off-the-shelf product worth its salt. Some consulting companies will sell you media-mix modeling solutions; from experience, I've come to view them with suspicion due to data access issues, math and data stretched beyond their breaking point, and much more.

If you spend so much on marketing that Portfolio Attribution Analysis is worth it for you, you need to hire a small number of brilliant people and empower them.  It is the only way to ensure digital is not being over-credited for the conversions that are rightly being driven by offline channels – or vice versa.
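For a flavor of what that statistical modeling looks like, here is a deliberately minimal sketch of a media-mix-style regression: weekly sales modeled against channel spend with an adstock (carryover) transform. Real Marketing Portfolio Attribution models handle saturation, seasonality, pricing, competitive activity, and much more; every number below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
weeks = 104

# Synthetic weekly spend for two channels (TV offline, paid search online).
tv_spend = rng.uniform(0, 100, weeks)
search_spend = rng.uniform(0, 50, weeks)

def adstock(spend, decay=0.5):
    """Carry a share of each week's effect into the following weeks."""
    out = np.zeros_like(spend)
    for t in range(len(spend)):
        out[t] = spend[t] + (decay * out[t - 1] if t else 0)
    return out

# Synthetic "true" sales: baseline + channel effects + noise.
sales = 200 + 0.8 * adstock(tv_spend) + 1.5 * adstock(search_spend) + rng.normal(0, 10, weeks)

# Fit a linear model on the transformed spends (ordinary least squares).
X = np.column_stack([np.ones(weeks), adstock(tv_spend), adstock(search_spend)])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(dict(zip(["baseline", "tv", "search"], coef.round(2))))
```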

Let’s get back on the, more exciting, incrementality train.

Attribution is not Incrementality.

Now that you are so much smarter with a new level of appreciation of the nuances involved…

Let’s say we got 10 conversions this month (each worth $14 million :)).

When we do attribution analysis, whatever kind you like, what we are essentially doing is taking the credit for those 10 conversions and distributing it across identified marketing activity…

[Image: Credit for 10 conversions distributed across online and offline marketing activity]

This is good.

Be proud of yourself and your company peers.

When done right, it helps you identify how to invest your marketing budget across owned, earned, and paid media optimally (the last wish of every CMO).

But the above is not the full picture of reality.

Marketing is not the only thing that drives conversions for your company.

This one is the full picture of reality…

[Image: Conversions driven by Marketing plus conversions driven by simply existing as a company]

By “existing as a company” what I mean is that there is a whole bunch of activity that could cause people to buy your company’s products – activity that has nothing to do with any kind of Marketing.

I’ve constantly recommended that you buy Patagonia products because it is a company for social good and I love them.

You might have seen me wearing my blue nano puff jacket and thought I looked snazzy in it hence right then and there you decided to buy it.

Perhaps you read a review of Patagonia products by my friend Daniel and you decided to buy it.

(True story) You might have landed in Frankfurt on a really cold day without a jacket – because California is warm! – and you bought the first jacket you bumped into, and it happened to be Patagonia.

Your mom noticed something in your hiking pictures and decided to gift you a gorgeous Mellow Melon Atom Sling.

I could keep going on about all the pathways to purchase that don’t have anything to do with the work of your Marketing team (though the team, and everyone in it, is incredibly awesome!).

The company will sell a whole bunch of products merely by existing.

This is true for your company as well.

Crediting the 10 conversions above only to Marketing is wrong.

Reality is more like this…

[Image: The 10 conversions, with credit split between Marketing and existing as a company]

Marketing’s Incrementality then is being able to identify how many of the 10 conversions would have happened anyway – even if you did no Marketing.

A whole bunch of Conversions you are attributing to Paid Search or Facebook would have happened anyway. Do you know how many?

I’ve done loads of incrementality analyses during my career across different types of companies.

Using the highest percentage incrementality that the data has demonstrated, the answer as to how many of the 10 conversions are incremental would look something like this…

It is measuring what's known as True Incrementality…

[Image: Incrementality analysis results]

Typically, if measured accurately, the true incrementality of your Marketing will be in the range of 8% to 22%.

To recap:

Your Paid Search campaign did not entirely drive 10 Conversions.

Expanding the view, your Paid Media did not entirely drive 10 Conversions.

Expanding the view further out, your digital Owned, Earned, and Paid Media did not entirely drive 10 Conversions.

Expanding the view even more, your Online AND Offline Marketing efforts did not entirely drive 10 Conversions.

One last expansion of the view: all your Marketing efforts drove 3 Conversions that you would not have gotten anyway.

Marketing’s Incrementality!

Oh, now go back and do attribution analysis to determine how to distribute the credit for those 3 Conversions across your Online and Offline Marketing using my recommendations in the first part of this post.

The very best companies on the planet know the number of outcomes incrementally driven by their Marketing (and other) initiatives.

Bottom line.

1. Whatever you do, never say attribution is incrementality. It’ll hurt my feelings.

2. Do you know what your Marketing’s true incrementality is?

Carpe diem.

As always, it is your turn now.

Please share your critique, reflections, and your lessons from the quest to measure Marketing’s incrementality via comments below. Thank you.

The post Marketing Analytics: Attribution Is Not Incrementality appeared first on Occam's Razor by Avinash Kaushik.