How do CRO professionals run experiments in 2019? We analyzed 28,304 experiments, picked randomly from our Convert.com customers.
This post shares some of our top observations and a few takeaways about:
When CROs choose to stop tests;
Which types of experiments are most popular;
How often personalization is part of the experimentation process;
How many goals CROs set for an experiment;
How costly “learning” from failed experiments can get.
1. One in five CRO experiments is significant, and agencies still get better results.
Only 20% of CRO experiments reach the 95% statistical significance mark. While there might not be anything magical about reaching 95% statistical significance, it’s still an important convention.
You could compare this finding with the one from Econsultancy’s 2018 optimization report in which more than two-thirds of respondents said that they saw a “clear and statistically significant winner” for 30% of their experiments. (Agency respondents, on the other hand, did better, finding clear winners in about 39% of their tests.)
Failing to reach statistical significance may result from two things—hypotheses that don’t pan out or, more troubling, stopping tests early. Almost half (47.2%) of respondents in the CXL 2018 State of Conversion Optimization report confessed to lacking a standard stopping point for A/B tests.
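For readers who want to see what the 95% convention means mechanically: it corresponds to a p-value below 0.05. Here is a rough illustrative sketch (not Convert.com's internal stats engine) of a two-proportion z-test in Python:

```python
import math

def significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns the two-sided p-value for the
    difference between variant B and control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (using math.erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical numbers: 500/10,000 control conversions vs. 590/10,000
# for the variant.
p = significance(500, 10_000, 590, 10_000)
print(p < 0.05)  # True at the conventional 95% significance mark
```

A test that hasn't crossed this threshold yet is exactly the situation where a pre-committed stopping rule matters: peeking and stopping on the first dip below 0.05 inflates false positives.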
For those experiments that did achieve statistical significance, only 1 in 7.5 showed a lift of more than 10% in the conversion rate.
In-house teams did slightly worse than average: 1 out of every 7.63 experiments (13.1%) achieved a statistically significant conversion rate lift of at least 10%. Back in 2014, when we published an earlier version of our research on CXL, this figure was slightly higher, about 14%.
Agencies did slightly better: 15.84% of their experiments were significant with a lift of at least 10%. This number was much higher (33%) in our earlier research, although the sample size was significantly smaller (just 700 tests). Still, in both studies, agencies did better than in-house CRO teams. This year, they outperformed in-house teams by 21%.
(We didn’t find any significant difference between agencies and in-house customers when comparing their monthly testing volumes.)
2. A/B tests continue to be the most popular experiment.
A/B testing (using DOM manipulation and split URL) is still the go-to test for most optimizers, with A/B tests totaling 97.5% of all experiments on our platform. The average number of variations per A/B test was 2.45.
This trend isn’t new. A/B tests have always dominated. CXL’s test-type analysis over the years also shows this. Back in 2017, CXL’s report found that 90% of tests were A/B tests. In 2018, this figure rose by another eight percentage points, reinforcing A/B testing as the near-universal experiment type.
Certainly, A/B tests are simpler to run; they also deliver results more quickly and work with smaller traffic volumes. Here’s a complete breakdown by test type:
North American optimizers ran 13.6 A/B experiments a month, while those from Western Europe averaged only 7.7. Using benchmarks from the 2018 CXL report, that puts our customers in the top 30% for testing volume.
There were other cross-Atlantic differences: Western Europe runs more A/B tests with DOM manipulation; the United States and Canada run twice as many split-URL experiments.
3. Optimizers are setting multiple goals.
On average, optimizers set at least four goals (e.g. clicking a certain link, visiting a certain page, submitting a form, etc.) for each experiment. This means they set up three secondary goals in addition to the primary conversion rate goal.
Additional “diagnostic” or secondary goals can increase learning from experiments, whether they’re winning or losing efforts. While the primary goal unmistakably declares the “wins,” the secondary metrics shine a light on how an experiment affected the target audience’s behavior. (Optimizely contends that successful experiments often track as many as eight goals to tell the full experiment story.)
We see this as a positive—customers are trying to gain deeper insights into how their changes impact user behavior across their websites.
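To make the idea concrete, here is a hypothetical sketch of what a four-goal experiment configuration might look like as data. The goal types, selectors, and URL are invented for the example, not taken from any particular platform:

```python
# Hypothetical goal configuration: one primary conversion goal plus
# three secondary "diagnostic" goals. Selectors and URLs are invented.
experiment_goals = {
    "primary": {"type": "form_submit", "selector": "#signup-form"},
    "secondary": [
        {"type": "click", "selector": "a.pricing-link"},
        {"type": "pageview", "url": "/pricing"},
        {"type": "click", "selector": "#cta-button"},
    ],
}

total_goals = 1 + len(experiment_goals["secondary"])
print(total_goals)  # 4
```

The primary goal decides the winner; the secondary goals exist to explain how the variation changed behavior along the way.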
The 2018 edition of Econsultancy’s optimization report, too, saw many CRO professionals setting multiple goals. In fact, about 90% of in-house respondents and 85% of agency respondents described secondary metrics as either “very important” or “important.” While sales and revenue were primary success metrics, common secondary metrics included things like bounce rate or “Contact Us” form completion rates.
The Econsultancy study also found that high performers (companies that secured an improvement of 6% or more in their primary success metric) were more likely to measure secondary metrics.
4. Personalization is used in less than 1% of experiments.
Personalization isn’t popular yet, despite its potential. Less than 1% of our research sample used personalization as a method for optimization, even though personalization is available at no added cost on all our plans.
Products like Intellimize, which recently closed $8 million in Series A funding, and Dynamic Yield, recently acquired by McDonald’s, are strong indicators of investors’ and corporate America’s big bet on personalization.
But as far as the CRO stack goes, personalization still accounts for a tiny share of activity. A quick look at data from BuiltWith—across 362,367 websites using A/B testing and personalization tools—reinforces our findings:
We did find that U.S.-based users are using personalization six times more often than those from Western Europe. (Additionally, some 70% of those on our waitlist for an account-based marketing tool are from the United States, despite the product being GDPR-compliant.)
Personalization in the European market and elsewhere may rise as more intelligent A.I. optimization improves auto-segmentation in privacy-friendly ways.
Back in 2017, when Econsultancy surveyed CRO professionals, it found personalization to be the CRO method “least used but most planned.” Some 81% of respondents found implementing website personalization to be “very” or “quite” difficult. Several reports identified data sourcing as the biggest barrier to implementing personalization.
Our findings on personalization diverged from a few other reports in the CRO space. Econsultancy’s survey of CRO executives (in-house and agency) reported that about 42% of in-house respondents used website personalization, as did 66% of agency respondents. Dynamic Yield’s 2019 Maturity Report reported that 44% of companies were using “basic” on-site personalization with “limited segmentation.”
When CXL surveyed CRO professionals for its 2017 report, 55% of respondents reported that they used some form of website personalization. In the latest CXL report, respondents scored personalization a 3.4 on a 1–5 scale regarding its “usefulness” as a CRO method.
5. Learnings from experiments without lifts aren’t free.
In our sample, “winning” experiments—defined as all statistically significant experiments that increased the conversion rate—produced an average conversion rate lift of 61%.
Experiments with no wins—just learnings—can negatively impact the conversion rate. Those experiments, on average, caused a 26% decrease in the conversion rate.
We all love to say that there’s no losing, only “learning,” but it’s important to acknowledge that even learnings from non-winning experiments come at a cost.
With roughly 2.45 variations per experiment, every experiment has around an 85% chance of decreasing the conversion rate during the testing period (by around 10% of the existing conversion rate).
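As a back-of-the-envelope sketch of that during-test cost (with hypothetical numbers and an even traffic split, not our actual dataset): if a control converts at 5% and its variations each underperform by the 26% average drop, the blended conversion rate dips for as long as the test runs.

```python
# Hypothetical sketch: blended conversion rate *during* a test period,
# assuming traffic is split evenly between the control and variations.
def expected_rate_during_test(base_rate, variant_effects):
    """base_rate: control conversion rate; variant_effects: relative lift
    per variation (e.g. -0.26 for a 26% drop)."""
    arms = [base_rate] + [base_rate * (1 + e) for e in variant_effects]
    return sum(arms) / len(arms)  # even traffic split across arms

# Two losing variations, each down 26% versus a 5% control:
rate = expected_rate_during_test(0.05, [-0.26, -0.26])
print(round(rate / 0.05 - 1, 3))  # -0.173
```

The exact dip depends on the traffic split and how many variations actually lose, but the point stands: learnings are paid for in conversions while the test is live.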
Businesses need to archive and learn from all their experiments. According to the CXL report, about 80% of companies archive their results, and 36.6% use specific archiving tools. These are strong indicators that CROs are refueling their experimentation programs with learnings from past efforts.
But while tracking results and documenting learnings can improve a testing program in the long run, there’s urgency to learn from failed experiments and implement successes quickly.
There’s also a need to research and plan test ideas well so that experiments have a higher likelihood of success. The ResearchXL model is a great way to come up with data-backed test ideas that are more likely to win.
While our research helped us establish some industry benchmarks, a few of our findings hardly surprised us (for example, the popularity of A/B tests).
But what did surprise us was that so few customers use personalization. We expected more businesses to be progressive on that front since the feature is available in all our plans and doesn’t require massive traffic volumes. As noted earlier, better data management may make personalization easier for companies to execute.
Other than that, we view the setup of multiple goals as a positive sign—testers want to dig deeper into how their experiments perform to maximize learnings and, of course, wins.
Email is one of the few marketing channels that spans the full funnel. You use email to raise awareness pre-conversion. To stay connected with content subscribers. To nurture leads to customers. To encourage repeat purchases or combat churn. To upsell existing customers.
Getting the right email to the right person at the right time throughout the funnel is a massive undertaking that requires a lot of optimization and testing. Yet, even some mature email marketing programs remain fixated on questions like, “How can we increase the open rate?” Moar opens! Moar clicks!
What about the massive bottom-line impact email testing can have at every stage of the funnel? How do you create an email testing strategy for that? It starts by understanding where email testing is today.
The current state of email testing
According to the DMA, 99% of consumers check their email every single day. (Shocking, I know.)
In 2014, there were roughly 4.1 billion active email accounts worldwide. That number is expected to increase to nearly 5.6 billion before 2020. In 2019, email advertising spending is forecasted to reach $350 million in the United States alone.
0.29% is the average email unsubscribe rate in the architecture and construction industry.
1.98% is the average email click rate in the computers and electronics industry.
0.07% is the average hard bounce rate in the daily deals and e-coupons industry.
20.7% is the average open rate for a company with 26–50 employees.
But why are these statistics the ones we collect? Why do blog posts and email marketing tools continue to prioritize surface-level testing, like subject lines (i.e. open rate) and button copy (i.e. click rate)?
Why email testing often falls flat
Those data points from AWeber and Mailchimp are perhaps interesting, but they have no real business value.
Knowing that the average click rate in the computers and electronics industry is 1.98% is not going to help you optimize your email marketing strategy, even if you’re in that industry.
Similarly, knowing that 434 is the average number of words in an email is not going to help you optimize your copy. That number is based on only 1,000 emails from 100 marketers. And, of course, there’s no causal link. Who’s to say length impacts the success of the emails studied?
For the sake of argument, though, let’s say reading that 60% of email marketers use sentence case in their subject lines inspired you to run a sentence case vs. title case subject-line test.
Congrats! Sentence case did in fact increase your open rate. But why? And what will you do with this information? And what does an open rate bump mean for your click rate, product milestone completion rates, on-site conversion rates, revenue, etc.?
A test is a test is a test. Regardless of whether it’s a landing page test, an in-product test, or an email test, it requires time and resources. Tests are expensive—literally and figuratively—to design, build, and run.
Focusing on top-of-funnel and engagement metrics (instead of performance metrics) is a costly mistake. Open rate to revenue is a mighty long causal chain.
If you’re struggling to connect email testing and optimization to performance marketing goals, it’s a sign that something is broken. Fortunately, there’s a step-by-step process you can follow to realign your email marketing with your conversion rate optimization goals.
The step-by-step process to testing email journeys
Whenever you’re auditing data, ask yourself two questions:
Am I collecting all of the data I need to make informed decisions?
Can I trust the data I’m seeing?
To answer the first question, have your optimization and email teams brainstorm a list of questions they have about email performance. After all, email testing should be a collaboration between those two teams, whether an experimentation team is enabling the email team or a conversion rate optimization team is fueling the test pipeline.
Can your data, in its current state, answer questions from both sides? (Don’t have a dedicated experimentation or conversion rate optimization team? Email marketers can learn how to run tests, too.)
With email specifically, it’s important to have post-click tracking. How do recipients behave on-site or in-product after engaging with each email? Post-click tracking methods vary based on your data structure, but there are five parameters you can add to the URLs in your emails to collect data in Google Analytics:
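Those five parameters are Google Analytics’ standard UTM campaign tags: utm_source, utm_medium, utm_campaign, utm_term, and utm_content. A minimal sketch of tagging an email link (the values and URL here are hypothetical):

```python
from urllib.parse import urlencode

# The five standard Google Analytics UTM parameters. Values are
# hypothetical examples for a single campaign email.
utm = {
    "utm_source": "newsletter",       # where the traffic comes from
    "utm_medium": "email",            # the channel
    "utm_campaign": "spring_launch",  # the specific campaign
    "utm_term": "cro",                # optional: keyword / audience
    "utm_content": "cta_button",      # optional: which link in the email
}

url = "https://example.com/pricing?" + urlencode(utm)
print(url)
```

With every link in every email tagged consistently, Google Analytics can attribute on-site behavior back to the specific email, campaign, and CTA that drove the click.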
The second issue—data integrity—is more complex and beyond the scope of this post. Thankfully, we have another post that dives deep into that topic.
Once you’re confident that you have the data you need and that the data is accurate, you can get started.
1. Mapping the current state
To move away from open rate and click rate as core metrics is to move toward journey-specific metrics, like:
Gross customer adds.
By focusing on the customer journey instead of an individual email, you can make more meaningful optimizations and run more impactful tests.
The goal at this stage is to document and visualize as much as you can about the current state of the email journey in question. Note any gaps in your data as well. What do you not know that you wish you did know?
It all starts with a deep understanding of the current state of the email journey in question. You can use a tool like Whimsical to map it visually.
Capture details like on-site destinations and their conversion rates (for email, specifically).
Really, capture anything that helps you achieve a deep understanding of who is receiving each email, what you’re asking them to do, and what they’re actually doing.
Take this email from Amazon Web Services (AWS), for example:
There are a ton of different asks within this email. Tutorials, a resource center, three different product options, training and certification, a partner network—the list goes on.
Your current state map should show how recipients engage with each of those CTAs, where each CTA leads, how recipients behave on-site or in-product, etc. Does the next email in the sequence change if a recipient chooses “Launch a Virtual Machine” instead of “Host a Static Website” or “Start a Development Project,” for example?
Your current state map will help answer questions like:
How does the email creative and copy differ between segments?
Who receives each email and how is that decision made?
Which actions are recipients being asked to take?
Which actions do they take most often?
Which actions yield the highest business value?
How frequently are they asked to take each action and how quickly do they take it on average?
What other emails are these recipients likely receiving?
What on-site and in-product destinations are email recipients being funneled to?
What gaps exist between email messaging and the on-site or in-product messaging?
Where are the on-site holes in the funnel?
Can post-email, on-site, or in-product behavior tell us anything about our email strategy?
2. Mapping the ideal state
Once you know what’s true now, it’s time to find optimization opportunities, whether that’s an obvious fix (e.g. an email isn’t displaying properly on the iPhone 6) or a test idea (e.g. Would reducing the number of CTAs in the AWS email improve product milestone completion rates?).
There are two methods to find those optimization opportunities:
Quantitatively. Where are recipients falling out of the funnel, and which conversion paths are resulting in the highest customer lifetime value (CLTV)?
Qualitatively. Who are the recipients? What motivates them? What are their pain points? How do they perceive the value you provide? What objections and hesitations do they present?
The first method is fairly straightforward. Your current state map should present you with all of the data you need to identify holes and high-value conversion paths.
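For instance, a simplified (and entirely hypothetical) funnel pulled from a current-state map can be inspected in a few lines:

```python
# Hypothetical funnel counts from a current-state map: where do email
# recipients fall out between the click and the final conversion?
funnel = [
    ("email_click", 10_000),
    ("landing_page_view", 9_200),
    ("signup_started", 2_100),
    ("signup_completed", 1_400),
]

# Step-to-step conversion rates expose the biggest hole in the funnel.
for (prev_step, n_prev), (step, n) in zip(funnel, funnel[1:]):
    print(f"{prev_step} -> {step}: {n / n_prev:.0%}")
```

In this invented example, the landing-page-to-signup step is the obvious hole: the biggest drop-off, and therefore the highest-leverage place to test.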
Combined, these two methods will give you a clear idea of your ideal state of the email journey. As best you can, map that out visually as well.
How does your current state map compare to your ideal state map? They should be very different. It’s up to you to identify and sort those differences:
What did you learn during this entire journey mapping process that other marketers and teams will find useful?
What needs to be fixed or implemented right away? This is a no-brainer that doesn’t require testing.
What needs to be tested before implementation? This could be in the form of a full hypothesis or simply a question.
What gaps exist in your measurement strategy? What’s not being tracked?
3. Designing, analyzing, and iterating
Now it’s time to design the tests, analyze the results, and iterate based on said results. Luckily, you’re reading this on the CXL blog, so there’s no shortage of in-depth resources to help you do just that.
Common pitfalls in email testing—and how to avoid them
1. Testing the email vs. the journey
It’s easier to test the email than the journey. There’s less research required. The test is easier to implement. The analysis is more straightforward—especially when you consider that there’s no universal customer journey.
Sure, there’s the nice, neat funnel you wax poetic about during stakeholder meetings: session to subscriber, subscriber to lead, lead to customer; session to add to cart, cart to checkout, checkout to repeat purchase. But we know that linear, one-size-fits-all funnels are a simplification of reality.
When presented with the choice of running a simple subject line A/B test in your email marketing tool or optimizing potentially thousands of personalized customer journeys, it’s unsurprising many marketers opt for the former.
But remember that email is just a channel. It’s easy to get sucked into optimizing for channel-level metrics and successes, to lose sight of what that channel’s role is in the overall customer journey.
Now, let’s say top-of-funnel engagement metrics are the only email metrics you can accurately measure (right now). You certainly wouldn’t be alone in that struggle. As marketing technology stacks expand, data becomes siloed, and it can be difficult to measure the end-to-end customer journey.
Is email testing still worth it, in that case?
It’s a question you have to ask yourself (and your data). Is there an inherent disadvantage to improving your open rate or click rate? No, of course not (unless you’re using dark patterns to game the metrics).
The question is: is the advantage big enough? Unless you have an excess of resources or are running out of conversion points to optimize (highly unlikely), your time will almost certainly be better spent elsewhere.
2. Optimizing for the wrong metrics
Optimization is only as useful as the metric you choose. Read that again.
All of the research and experimentation in the world won’t help you if you focus on the wrong metrics. That’s why it’s so important to go beyond boosting your open rate or click rate, for example.
It’s not that those metrics are worthless and won’t impact the bigger picture at all. It’s that they won’t impact the bigger picture enough to make the time and effort you invest worth it. (The exception being select large, mature programs.)
Val Geisler of Fix My Churn elaborates on how top-of-funnel email metrics are problematic:
Most people look at open rates, but those are notoriously inaccurate with image display settings and programs like Unroll.me affecting those numbers. So I always look at the goal of the individual email.
Is it to get them to watch a video? Great. Let’s make sure that video is hosted somewhere we can track views once the click happens. Is it to complete a task in the app? I want to set up action tracking in-app to see if that happens.
It’s one thing to get an email opened and even to see a click through, but the clicks only matter if the end goal was met.
You get the point. So, what’s a better way to approach email marketing metrics and optimization? By defining your overall evaluation criterion (OEC).
To start, ask yourself three questions:
What is the tangible business goal I’m trying to achieve with this email journey?
What is the most effective, accurate way to measure progress toward that goal?
What other metric will act as a “check and balance” for the metric from question two? (For example, a focus on gross customer adds without an understanding of net customer adds could lead to metric gaming and irresponsible optimization.)
The question is: what OEC should be used for these programs? The initial OEC, or “fitness function,” as it was called at Amazon, gave credit to a program based on the revenue it generated from users clicking through the e-mail.
There is a fundamental problem here: the metric is easy to game, as the metric is monotonically increasing: spam users more, and at least some will click through, so overall revenue will increase. This is likely true even if the revenue from the treatment of users who receive the e-mail is compared to a control group that doesn’t receive the e-mail.
Eventually, a focus on CLTV prevailed:
The key insight is that the click-through revenue OEC is optimizing for short-term revenue instead of customer lifetime value. Users that are annoyed will unsubscribe, and Amazon then loses the opportunity to target them in the future. A simple model was used to construct a lower bound on the lifetime opportunity loss when a user unsubscribes. The OEC was thus
OEC = ∑ᵢ Revᵢ − ∑ⱼ Revⱼ − s × unsubscribe_lifetime_loss
where i ranges over e-mail recipients in Treatment, j ranges over e-mail recipients in Control, and s is the number of incremental unsubscribes, i.e., unsubscribes in Treatment minus Control (one could debate whether it should have a floor of zero, or whether it’s possible that the Treatment actually reduced unsubscribes), and unsubscribe_lifetime_loss was the estimated loss of not being able to e-mail a person for “life.”
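In code, the same OEC can be sketched directly from that definition. All of the revenue and unsubscribe numbers below are hypothetical, chosen only to show how the lifetime-loss term can flip a revenue “win” negative:

```python
# Sketch of the Amazon-style email OEC described above.
def email_oec(treatment_revenue, control_revenue,
              treatment_unsubs, control_unsubs, lifetime_loss):
    """OEC = sum(treatment rev) - sum(control rev)
             - incremental unsubscribes * estimated lifetime loss."""
    s = treatment_unsubs - control_unsubs  # incremental unsubscribes
    return sum(treatment_revenue) - sum(control_revenue) - s * lifetime_loss

# Treatment earns more click-through revenue but annoys users into
# unsubscribing; the lifetime-loss term makes the program negative.
oec = email_oec([120.0, 95.0, 60.0], [100.0, 80.0, 50.0],
                treatment_unsubs=12, control_unsubs=4,
                lifetime_loss=10.0)
print(oec)  # 45.0 revenue gain - 8 * 10.0 lifetime loss = -35.0
```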
Using the new OEC, Ronny and his team discovered that more than 50% of their email marketing programs were negative. All of the open-rate and click-rate experiments in the world wouldn’t have addressed the root issue in this case.
Instead, they experimented with a new unsubscribe page, which defaulted to unsubscribing recipients from a specific email program vs. all email communication, drastically reducing the cost of an unsubscribe.
3. Skimping on rigor
Email marketing tools make it easy to think you’re running a proper test when you’re not.
Email tests require the same amount of rigor and scientific integrity as any other test, if not more. Why? Because there are many little-known nuances to email as a channel that don’t exist on-site, for example.
Too many people jump to make changes too soon. Email should be tested for a while (every case varies, of course), and no other changes should be made during that test period.
I have people tell me they changed their pricing model or took away the free trial or did some other huge change in the midst of testing email campaigns. Well that changes everything! Test email by itself to know if it works before changing anything else.
Designing a test for a single-send email is different than designing a test for an always-on drip campaign. Designing a test for a personalized campaign is different than designing a test for a generic campaign.
To demonstrate the complexity of email testing, let’s say you’re experimenting with frequency. You’re sending the control group three emails and the treatment group five emails. Halfway through the test, you realize the treatment group’s unsubscribe rate increased because of the increased frequency. Suddenly, you don’t have a large enough sample size to call the test either way.
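Estimating the required sample size up front is the guard against exactly this scenario. As a rough sketch (normal approximation, 80% power, 95% confidence; not a substitute for a proper calculator):

```python
import math

def sample_size_per_arm(base_rate, rel_lift, z_a=1.96, z_b=0.84):
    """Rough per-arm sample size for a two-proportion test using the
    normal approximation (z_a = 1.96 for alpha = 0.05 two-sided,
    z_b = 0.84 for 80% power)."""
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_a * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# Detecting a 10% relative lift on a 5% base conversion rate:
print(sample_size_per_arm(0.05, 0.10))  # roughly 31,000 per arm
```

If rising unsubscribes shrink the treatment group below that estimate mid-test, the experiment cannot be called either way, which is why frequency tests in particular need generous sample-size buffers.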
Also consider that the email journey you’re testing is (likely) one of many. Even at a mid-sized company, you have to start controlling for automation. How do the other emails the test participants receive impact the test? How do the emails they receive differ from the emails the other recipients in their assignment (control or treatment) receive?
And we haven’t even touched on segmentation. Let’s say you market a heatmap tool and have one generic onboarding journey but want to test a personalized onboarding journey. You know different segments will respond differently, and that their brands of personalization will differ.
So, you segment once: people who switched from a competitive tool, people who have started their first heatmap, and people who have not started their first heatmap. And again: solopreneurs, agencies, and enterprise companies. Before you know it, you’re trying to design and build nine separate tests.
The point is not to scare you away from the complexity of email testing and optimization. It’s to remind you to invest the time upfront to properly design, build, and run each test. What are the potential validity threats? What is your sample size, and have you accounted for fluctuations in your unsubscribe rate? How will the effects of personalization impact your test results? Are you segmenting pre-test or post-test?
There’s nothing inherently wrong with post-test segmentation, but the decision to segment results can’t be made post-test. As Chad Sanderson of Microsoft explains:
Like anything else in CRO, constructing a segmentation methodology is a process, not something to be done on a whim once a test finishes.
Segmentation is a wonderful way to uncover hidden insights, but it’s easy to discover false positives and run into sample-size limitations when segmenting post-test. The famous line, “If you torture the data long enough, it will confess to anything,” comes to mind.
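One lightweight guard, if you do slice results into multiple segments, is to tighten the significance threshold for the number of segments you inspect. A sketch using the (conservative) Bonferroni correction, with hypothetical p-values for the heatmap-tool segments mentioned earlier:

```python
# Hypothetical per-segment p-values from a post-test slice. A naive
# 0.05 threshold across many slices inflates the false-positive risk.
segment_p_values = {
    "switched_from_competitor": 0.012,
    "started_first_heatmap": 0.048,
    "no_heatmap_yet": 0.210,
}

alpha = 0.05
corrected_alpha = alpha / len(segment_p_values)  # Bonferroni: 0.05 / 3

significant = [seg for seg, p in segment_p_values.items()
               if p < corrected_alpha]
print(significant)  # only the strongest segment survives correction
```

Note that the segment at p = 0.048 would have looked like a “win” under the naive threshold; the correction is exactly what keeps torture-the-data findings out of your results deck.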
If you want to develop your email marketing program beyond “more opens” and “more clicks,” you have to:
Align on a strategic overall evaluation criterion (OEC) that goes beyond open rate and click rate.
Map out the current state of your email journey.
Map out the ideal state of your email journey. How do they compare?
Extract relevant insights to share with other teams and stakeholders.
Implement quick fixes you spot along the way.
Use your journey maps to generate a list of test ideas. Then prioritize them, run them, analyze them, and iterate.
Focusing on the customer journey will help you make smarter email testing decisions and invest your limited resources in the highest value optimization opportunities. It will also serve as a catalyst for improved segmentation and landing pages, for example.
Every nonprofit that accepts online donations has a donation page. But there’s a big difference between having a donation page and having an effective donation page.
Your donation page may follow purported “best practices,” but you could still be losing donors and revenue. In fact, our experience running over 1,500 online fundraising A/B tests has shown that traditional “best practices” are rarely the most effective way to increase donations.
In light of this, I want to share strategies—based on our research and experimentation, not just assumptions—that have proven to increase conversions, donations, and revenue. Often, these tactics go beyond or, in some cases, contradict popular “best practices.”
1. Choose the right type of donation page.
One of the most common mistakes that new online fundraisers make is assuming that a single donation page is sufficient. In reality, donors come to your donation page with a huge variety of motivations.
If you send all of your traffic to a single donation page, you’ll likely see poor results. But if you utilize three types of donation pages—general, campaign, and instant—you’ll be able to align your pages with the motivations of your donors. Let’s cover each type in more detail.
General donation page
The general donation page on your website is your primary donation page. Every organization that accepts donations online has one. But to optimize this page, you must understand that visitors to your general page will always have a wide variety of reasons for giving.
To make this page more effective, keep these ideas in mind:
Use copy to communicate why someone should give using broad reasons, rather than focusing on a specific project or fund designation.
Keep your message clear and concise, using bullets.
Offer a free gift for a specific giving level to drive up conversions.
A general donation page case study
In the experiment below, the organization began with a donation page (left) that was virtually devoid of copy. They had one small line of text in red that said: “Together, we’re writing the next chapter of Illinois’ comeback story.”
Many fundraisers assume that general donation page visitors are already motivated to give, and so they neglect to add much copy that explains why giving is important.
But in reality, even highly motivated donors have the potential to abandon your page. In fact, according to M+R’s 2018 benchmark report, 83% of all donation page visitors leave without giving.
The organization below tested a new version of their donation page that included a lot more copy. The updated page:
Explained what the organization did in broad terms.
Used bold text and headers to make it easily scannable.
Included a call-to-action headline with a specific donation ask.
The result? The new version of the donation page led to a 150% increase in donations.
Campaign donation page
It’s not enough to send all of your traffic (whether via email, advertising, etc.) to your general donation page. You need to create a dedicated campaign donation page for specific donation appeals.
Dedicated campaign pages work because your donation ask is made in a particular context. For example, if you’re raising money to build a new building and you send your potential donors an email about it, your campaign donation page copy needs to focus on that specific project.
If you focus on the broad reasons why your organization is great (as you would on your general donation page), donors won’t be confident that their money is going to the right place. They also won’t have a full understanding of the impact their gift can make.
In the example below, this organization converted their general donation page into a dedicated campaign donation page by making five distinct changes. This new page resulted in a 50% increase in revenue.
When creating a campaign page, keep these key ideas and page elements in mind:
Write copy that’s specific to your campaign, not broad generalizations about your organization as a whole.
Add a progress bar to show how close you are to reaching your campaign goal.
Add a countdown clock to visualize your campaign deadline and create urgency.
Even small changes make a big impact on campaign pages
Optimizing your campaign donation pages can often come down to small copy variations and nuanced language. What may seem like an insignificant change to you may significantly impact the impression that your page makes on your potential donor.
In the experiment below, an organization tested a new headline on their campaign donation page. The change wasn’t drastic, but the impact was.
The original headline read, “You Make Kelly’s Website Possible,” emphasizing the organization’s broader cause—providing websites to keep family and friends connected for people going through a health crisis.
A new version of the page used a slightly different headline: “This Website Helps Kelly Stay Connected to Family and Friends.” The shift was subtle, but significant. For the new version, the emphasis shifted to the impact the donation had on the individual goal (human connection) rather than the organizational goal (websites).
The result? The more specific headline led to a 21.1% increase in donations.
Instant donation pages
The instant donation page is the least common donation page. In fact, it flies in the face of traditional thinking about online donor acquisition.
Rather than trying to acquire a subscriber and then waiting months and months to cultivate them, the instant donation page focuses on converting new subscribers into donors right away.
Here’s how it works:
Use a free offer (ebook, course, petition, etc.) to acquire an email address.
Make a donation ask right away on your confirmation page.
Make your donation ask in the context of the free offer they’ve just received.
Include a donation form right on the confirmation page.
For example, one organization running Facebook Ads targeted likely supporters with a call-to-action of “Donate Now.” They conducted an A/B test with a version of the ad that offered a free online course. After enrolling in the course, the student was presented with an instant donation page.
The “Donate Now” ads saw an abysmal 0.46% click-through rate and brought in zero donations. On the other hand, the instant donation page model increased clicks by 209% and started converting new donors right away—at a 1.18% conversion rate with an average gift size of $58.33.
The conversion rate is low. But many organizations using this instant donation page model make back all of their advertising costs, plus some. While most organizations plan to spend money on acquisition, this model can help you recoup some of your costs, or even make money on acquisition.
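To see whether the instant donation page model can pay for itself, a quick back-of-the-envelope calculation helps. The conversion rate and average gift below come from the experiment above; the cost-per-click is a hypothetical assumption, since the study doesn't report ad spend:

```python
# Break-even sketch for the instant donation page model.
# conversion_rate and average_gift are from the experiment above;
# assumed_cpc is a hypothetical figure for illustration only.

conversion_rate = 0.0118   # donors per ad click (1.18%)
average_gift = 58.33       # dollars per donation
assumed_cpc = 0.50         # hypothetical ad cost per click

revenue_per_click = conversion_rate * average_gift
net_per_click = revenue_per_click - assumed_cpc

print(f"Revenue per click: ${revenue_per_click:.2f}")   # $0.69
print(f"Net per click:     ${net_per_click:.2f}")       # $0.19
```

At these numbers, each click is worth about $0.69 in immediate donations, so any cost-per-click below that recoups acquisition spend right away—before counting the long-term value of the new subscribers.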
2. Friction can be your biggest donation killer.
It’s impossible to remove every element of friction from your donation page. Friction may include:
Filling out form fields;
Errors on your page;
Confusing page layouts;
Unnecessary required fields.
Some elements of friction are always present. For instance, you can’t make an online donation without requiring payment info.
But there are some elements of friction that you can reduce to create a better giving experience and increase the likelihood of someone making an online donation.
Field number friction
Field number friction is one of the most common barriers—too many fields, asking for unnecessary information, etc.
In the example below, you can see how too many fields make a donation form feel overwhelming and can cause a potential donor to abandon the process altogether.
A few common fields that we see on many donation pages are unnecessary to complete a donation:
“Make this gift in memory of…”;
Titles (like Mr., Mrs., Ms., Dr., etc.).
Field number friction all comes down to perception. In many cases, you can keep the same number of fields but group them together in a logical fashion to make the page appear shorter.
A shorter form (usually) makes someone perceive donating as less work, even if it has the same fields.
Decision friction
Decision friction occurs when you ask a donor a question that they're not informed enough to answer. Or, in some cases, decision friction can be caused by simply giving someone too many options to choose from.
In the example below, you see one of the most common ways that decision friction shows up on the donation page: gift designation.
While there are many reasons why an organization may want each donor to designate how to spend their gift, most donors aren’t informed enough to know how to answer this question.
Easy solutions are to:
Not require a gift designation;
Default the gift designation field to “Where most needed”;
Remove the field on campaign donation pages.
Registration friction
Another common barrier is registration friction, which occurs when you ask a donor to create an account or log in just to make a donation.
Logging in might make things easier for the organization in terms of data tracking and gift processing, but it makes the donation experience much more difficult and frustrating for the donor—and can lead them to abandon their donation.
3. But making it “easy” to donate doesn’t guarantee you’ll get more donations.
A common refrain in non-profit meeting rooms is, “We just need to make it as easy as possible for someone to donate.”
While there’s an element of truth to that, removing all friction from the donation process can cause more harm than good. The common practice to “make things easier” can dilute the impact of the most important element on your donation page: your value proposition.
If your only goal is to get people to the donation form faster, you won’t ask the donor to read about your organization before entering their payment info. But if you remove elements of your page that strengthen the reasons why someone should give to you, you risk losing donors.
An experiment in copy length
Fundraisers and nonprofit marketers often ask, “How long should my donation page copy be?” After running hundreds of A/B tests, we’ve learned that the length of your copy isn’t nearly as important as how effectively your copy communicates your value proposition.
In the experiment below, this organization had a short amount of copy on their original donation page. One might think this makes it “easier” for the donor because there’s less to read.
They created a new version of the page that added a considerable amount of copy. But the primary change was that the length of copy gave them more opportunity to explain why someone should donate.
The result? Adding more value-focused copy led to a 134% increase in donations, despite making the page significantly longer.
An experiment with “Donate” buttons
The donation shortcut button usually sits in the header on your donation page, anchored to the donation form at the bottom of the page. Functionally, when you click the button, it jumps you past all the copy and right to the form.
The argument for these shortcut buttons seems sound: “If someone is ready to give, why slow them down by making them read a bunch of copy?”
In the experiment below, this organization added the button in hopes that it would lead to greater donations. The result? Allowing donors to bypass the copy by clicking the button led to a 28% decrease in donations.
Although the shortcut button made it “easier” to get right to the transaction, it made it harder for donors to understand the impact their donation would have.
Those results may not hold true for every site, but it’s a cautionary tale about blindly following “best practices” or focusing solely on making donating “easier.”
All of these tactics come down to a single skill, which is the one that all successful online fundraisers must develop: empathy.
If you can’t put yourself in the shoes of your donors and potential donors, you’re going to make decisions based on your personal preferences or your organization’s preferences. But in most cases, what fundraisers want and what donors want are very different.
Thankfully, testing and experimentation allow us to listen to donors and see exactly what works to inspire greater generosity—and lead to greater donations and revenue.
To increase conversions on your donation pages, consider:
Creating multiple donation pages to serve specific audiences;
Removing form fields that aren’t essential to complete a donation;
Expanding copy, if necessary, to communicate your value proposition more effectively.