Data Blending: What You Can (and Can’t) Do in Google Data Studio

The post Data Blending: What You Can (and Can’t) Do in Google Data Studio appeared first on CXL.

Over the last 18 months or so, Google Data Studio has evolved from an appealing but clunky application to a tool that we recommend to any digital marketer.

Data Studio allows you to communicate data simply and in a repeatable format, and their expanded integrations, customizations, and editability have made Data Studio dashboards extremely powerful.

A relatively new feature, data blending, came out last year. This underused function can do a lot of cool things; it also has some limitations. Once you’ve got your head around the basics, the possibilities are endless.

What is data blending?

Data blending in Google Data Studio lets you create charts based on multiple data sources. Separate data sources—not just those from the same application—can be combined as long as they’re comparable (i.e. share a “join key,” something discussed more below).

[Image: explanation of data blending in Google Data Studio]

Traditionally, if you wanted to create direct comparisons from different sources, you had to export data from each source and combine them in Excel. If you suddenly needed to study a longer timeframe, you had to download the data again and start over.

By default, each element in Data Studio pulls information from a single data source. You could hook up multiple data sources to a dashboard, but until the introduction of data blending, you couldn’t present those together in a single chart or table.

With a few clicks, data blending can reveal valuable relationships between data sets. Because everything happens within Data Studio, you save time on data manipulation and enjoy new opportunities to present findings.

Jon Meck, Senior Marketing Director at Bounteous, highlighted several benefits his team has identified:

Jon Meck:

With a familiar and intuitive interface, we continue to find use cases for data blending that showcase the connected Google ecosystem, facilitate real-time decision-making, and save hours of manual work.

We love using it to combine third-party data around advertising details or CRM data, and it’s allowed us to bring in personal data from external sources that are off-limits for other Google products.

What can it offer your analytics team? Here’s what you need to know.

Keys are…key…to data blending

To blend data, the data sources need to share a common dimension. This is known as a “join key.” It’s the common denominator to compare data. Your join key could be a page URL, product name, user ID, or many other things.

The simplest join key is “Date.” Measuring things over time is a common part of data analysis, so let’s use that as an example.

Selecting “Date” as the join key lets you spot correlations in data sets. Want to see how many leads came into your CRM compared to organic sessions last month? No problem:

[Image: sample use of data blending in Data Studio]

Choosing the right key depends on what you’re trying to illustrate. A good starting point is to come up with a hypothesis. For example, your hypothesis might be that “website users are more likely to pay via PayPal if they’re on a mobile device.”

In this instance, you’d combine data from your ecommerce platform with Google Analytics data. Transaction ID would be the join key. (You can find more examples in the table below.)
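
Outside Data Studio, the same blend can be sketched with a pandas merge. Everything below is invented for illustration (transaction IDs, column names, and values); it simply shows how a shared Transaction ID lets you test the PayPal-on-mobile hypothesis.

```python
import pandas as pd

# Hypothetical export from an ecommerce platform: payment method per transaction.
ecommerce = pd.DataFrame({
    "transaction_id": ["T001", "T002", "T003", "T004"],
    "payment_method": ["PayPal", "Card", "PayPal", "Card"],
})

# Hypothetical Google Analytics export: device category per transaction.
analytics = pd.DataFrame({
    "transaction_id": ["T001", "T002", "T003", "T004"],
    "device_category": ["mobile", "desktop", "mobile", "desktop"],
})

# Blend the two sources on the join key, as Data Studio would.
blended = ecommerce.merge(analytics, on="transaction_id")

# Share of PayPal transactions per device category.
paypal_share = (
    blended.assign(is_paypal=blended["payment_method"].eq("PayPal"))
    .groupby("device_category")["is_paypal"]
    .mean()
)
print(paypal_share)
```

In this toy data, every mobile purchase used PayPal and no desktop purchase did, which is the kind of relationship the blended chart would surface.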

How to blend data in Google Data Studio

When it comes to data blending, there’s an easy way and a not-so-easy way.

The easy way to blend data

  1. Create two charts that you want to compare.
  2. Select both graphs (CTRL + Click).
  3. Right-click and hit “Blend data.”

As long as the two charts share a common dimension, Data Studio automatically combines them into one. If you select two charts and right-clicking offers no option to blend the data, the charts don't share a common dimension and can't be blended.

The not-so-easy way to blend data

The second way to blend data is a little more involved but gives you more control.

  1. Click “Resource > Manage blended data.”
  2. Click “Add a data view.”
  3. In the panel displayed, select or search for the first data source you want to compare.
  4. Click the “Add another data source” button. This will be the second data source.
  5. Select the join key(s) available in both data sources.
  6. Select dimensions and metrics you want to compare.
  7. Adjust settings as usual and hit “Save.”
  8. Your new blended data source is now available when you select “Data source” for new charts.

Start with the easy option to get a feel for data blending before going with the not-so-easy way.

Importantly, data blending uses “left-outer join,” which means that charts and graphs contain all values from Data source A—whether or not there’s corresponding data in Data source B. Additionally, values in Data source B that do not exist in Data source A are ignored.
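
If you think of the two sources as tables, Data Studio's behavior matches a `how="left"` merge in pandas. A minimal sketch, with made-up dates and counts:

```python
import pandas as pd

# Data source A (left): website sessions by date.
source_a = pd.DataFrame({
    "date": ["2019-07-01", "2019-07-02", "2019-07-03"],
    "sessions": [120, 90, 150],
})

# Data source B (right): CRM leads by date.
source_b = pd.DataFrame({
    "date": ["2019-07-02", "2019-07-04"],
    "leads": [4, 7],
})

# Data Studio blends with a left-outer join on the join key ("date" here).
blended = source_a.merge(source_b, on="date", how="left")

# Every row of A survives: 2019-07-01 and 2019-07-03 get null leads,
# and B's 2019-07-04 row is dropped because it has no match in A.
print(blended)
```

That dropped right-hand row is exactly why the choice of which source sits on the left matters when you build a blend.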

The left-outer join setup in Google Data Studio affects how blended data appears in charts. (Image source)

If you want to learn more about how data blending works, Google offers some baseline documentation to get you started.

Practical applications for data blending in Google Data Studio

1. Overlaying line graphs from different data sources

Combining simple graphs shows relationships between data sets. For example, we wanted to demonstrate to a client that the keywords we tracked in Advanced Web Ranking (AWR) correlated with an increase in organic traffic:

[Image: blended data from two line charts]

Our blended data combined organic sessions from Google Analytics with AWR’s Visibility Score. Date was our join key. The correlation wasn’t a surprise, but it confirmed the success of our SEO campaign.

Whether you’re an agency or an in-house marketer, being able to demonstrate the value of what you’re doing is essential. Often, data visualization helps you present something you already know clearly and persuasively.

2. Combining Google Analytics data with imported data

Have you ever wanted to see data from all your tools at a glance? Well, you can. Data Studio allows you to blend data from other sources. Even if you’re not keen on forking out for Google’s Partner Connectors, any third-party data can be tapped into via a file upload or Google Sheets connection.

The example below shows data from Google Trends on interest in Wimbledon (the tennis tournament) compared to visits to a website that sells Wimbledon tickets.

[Image: blended data from Google Trends and website sessions]

We produced this graph for one of our clients. They invest a lot in inbound marketing and wanted to know why their traffic was so much lower on peak sales days. As their PPC agency, we had to justify this dip. We hadn’t taken our foot off the gas, so what was the cause?

We discovered that news and press activity had a huge impact on people searching for Wimbledon. On the low-traffic days in question, the sports press was divided, discussing the Women’s World Cup and cricket as much as tennis.

The drop in Google Trends interest for “Wimbledon” correlated with the dip in website sessions. Data blending allowed us to demonstrate these traffic blips to the client. The revelation also informed their marketing strategy for the next year.

Without native integration of a data source in Data Studio, your best option is Google Sheets. It allows you to tweak your data on the fly without having to upload it again. If your Google Sheet runs scripts that update the data in real-time, your Data Studio blends will update, too.

3. Combining Google Analytics with CRM and ecommerce platform data

Connecting website data with a CRM or ecommerce platform can reveal fascinating insights. While BigQuery and Google Analytics 360 allow you to do this without data blending in Google Data Studio, that convenience comes with a six-figure price tag.

A User ID, assigned at purchase or user login, can act as a join key to connect website data directly to individual site users. In the example below, we’ve connected pageview data from Google Analytics with supplier accounts in a CRM.

[Image: blended data from an ecommerce platform and Google Analytics]

The blended data helped the client’s sales team suggest products to cross-sell to key accounts. In this example, Data Studio’s “filter controls” let the viewer select the partner account and date range.

Similarly, as Morgan Jones details in a post for Practical Ecommerce, data blending can help calculate new metrics, like net profit by SKU. Jones’ example blends data from two sources:

  1. A table that includes products’ wholesale cost by SKU (“COGS”);
  2. Google Analytics sales data.

By combining those sources, Jones was able to create a table with the net profit of each product: 

Blending SKUs and Google Analytics data in Data Studio. (Image source)

More examples

The possibilities for data blending are nearly endless. Here are a few more:

| Data source 1 | Data source 2 | Metric comparison | Insight gained | Join key(s) |
| --- | --- | --- | --- | --- |
| Google Analytics | Google Search Console | Correlate organic search impressions to organic traffic | Determine whether increases in SERP visibility are impacting traffic to drive future SEO strategy | Page |
| Google Ads | Google Search Console | Compare paid search impressions to organic search impressions | Assess overall SERP visibility for paid and organic traffic sources | Search term / Query |
| Google Analytics | Ecommerce platform | Compare product stock levels to product sales via website | Spot-check inventory against product sales; manage stock for busy/slow periods | Product |
| Google Analytics | Google Sheets | Correlate any metric to blog length, title style, etc. | Compare blog post performance to editorial decisions | Page |
| Google Analytics (Site 1) | Google Analytics (Site 2) | Compare the performance of two websites | Visualize the performance trend of a portfolio of websites in a single chart or table | Any dimension |

Of course, no new feature is perfect, and several early adopters have felt the pain.

The limitations of data blending in Data Studio

Portent’s Michael Wiegand has written about data blending in Google Data Studio multiple times. His articles have showcased the potential promise—and limitations—of data blending.

When asked about some of the wrinkles he’s encountered, he listed three:  

Michael Wiegand:

1. When you create custom fields that combine two metrics from different data sources, you sometimes have to re-aggregate the metrics in your formulas to get them to work for date ranges longer than one day.

For example, instead of Pageviews from Data Source A/Sessions from Data Source B, you need to state Sum(Pageviews DS A)/Sum(Sessions DS B).

2. There are several threads about this in the Google Marketing Partners forum for Data Studio. Because data blending works on a left-join basis, if there's historical data in the left-most data source that isn't also present in the data sources to the right, previous-period or previous-year date-range comparisons will return no percentage-change data.

3. We've also run into a limit on data sources with blending: a maximum of five data sources is allowed in any one blend.
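
Wiegand's first point, the ratio-of-sums fix, is easier to see with numbers. A rough sketch with invented daily figures shows why `Sum(Pageviews) / Sum(Sessions)` differs from averaging the per-day ratios:

```python
# Daily pageviews (data source A) and sessions (data source B) for a 3-day range.
pageviews = [1000, 2000, 3000]
sessions = [100, 400, 300]

# Naive approach: compute a per-day ratio, then average across the range.
avg_of_ratios = sum(p / s for p, s in zip(pageviews, sessions)) / len(pageviews)

# The re-aggregated version Wiegand describes: SUM(pageviews) / SUM(sessions).
ratio_of_sums = sum(pageviews) / sum(sessions)

print(avg_of_ratios)  # about 8.33, inflated by the low-traffic day
print(ratio_of_sums)  # 7.5, the true pages-per-session for the range
```

The two figures only agree for a single day, which is why multi-day date ranges need the explicit re-aggregation in the custom field formula.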

Oeuyown Kim, an Analytics Strategist at Portent, shared additional details on the limitations identified by Wiegand.

Oeuyown Kim:

GDS does not replace “null” values with zeros when there’s no value available for a metric in the Outer Left Join Key.

For example, in the image below, instead of summing the values that do exist, GDS will show “null” if there’s no value for “Transactions” for a date—even if there are transaction values in the other columns, Transaction (EN) and Transaction (FR).

Similarly, Kim continued, 

When there's no comparison value for one row, Google Data Studio shows all comparison values as “null.” In the image below, no comparison values appear for any row in the highlighted columns—even though deltas exist for three of them.

When I raised this issue to the Google Data Studio team, their “solution” was to recommend that I not show any comparison metrics.

Not every limitation has a clear explanation, either:

In multiple configurations, I’ve attempted to join 3–5 data sources and found that values can be summed for only the first two. 

In the example below, I can add metrics together in the two left data sources, but the value goes “null” as soon as I attempt to add in a value from the third or fourth. I can swap around the data sources in any order, and GDS will still sum up only the values found in the two left spots.

There does not seem to be any identifiable consistency in what causes these errors. I have successfully blended up to three Google Analytics data sources and five Google Sheets data sources before. But even when blending three GA data sources, I was not able to apply a secondary join key (device type, campaign, or keyword) without it miscalculating totals.

A final limitation, one that we’ve encountered, is the inability to combine data blending with calculated metrics.

Conclusion

Data blending helps you create Google Data Studio dashboards that are dynamic, real-time visualizations of the metrics that define marketing success.

Over time, as Google connects its data visualization platform with more third-party data sources, data blending will make comparisons easier and more powerful.

Getting started with data blending now will better position you and your team to take advantage of those impending improvements. Here are the key takeaways:

  • Data blending lets you compare up to five data sources in a single table or graph.
  • A common dimension, called a join key, connects data from different sources.
  • Start with a hypothesis before deciding on which sources to blend.
  • Don’t overcomplicate matters. You can blend more than two data sources, but the goal is to present clean, clear data to stakeholders and clients.

The Essential SaaS Metrics for Growth

The post The Essential SaaS Metrics for Growth appeared first on CXL.

“If you cannot measure it,” declared Lord Kelvin, “you cannot improve it.” Perhaps SaaS companies have taken this advice too literally. 

SaaS sales and marketing teams can get overwhelmed by metrics. But without any metrics, it’s impossible to track growth. And without growth, a SaaS company is dead in the water. 

According to Statista, the SaaS market will reach $157 billion next year. And while that figure is promising, early-stage SaaS companies need a ton of growth to survive. In fact, SaaS companies with an annual growth rate of 20% or less have a 92% chance of failure, according to research by McKinsey. 

That same research found that “super growers” were eight times more likely than “stallers” to grow from $100 million to $1 billion, and three times more likely to do so than “growers.” 

If growth is the best way to get out alive, marketing metrics do little unless they correlate with sales. After talking with a bunch of SaaS experts, here’s what I learned about which SaaS metrics deserve focus—and which ones don’t. 

The nuts and bolts of measuring SaaS growth

Software and online-services companies can quickly become billion-dollar giants, but the recipe for sustained growth remains elusive.

McKinsey & Company

It’s common for companies to put a revenue figure on what it means to be successful in SaaS. But only 400 software companies have made it to the $500M revenue mark.

David Skok, author of forEntrepreneurs, identifies three keys to sustained SaaS growth:

  1. Acquiring customers;
  2. Retaining customers;
  3. Monetizing customers.

According to Gartner, three metrics form the foundation for those growth levers.

Gartner Managing VP Steve Crawford argues that improving those metrics can supercharge a SaaS company and fund future growth organically: 

Steve Crawford:

But moving too quickly into an aggressive growth mode without putting the proper focus into optimizing the right metric at the right time can have the opposite effect.

It can lead to a “vicious cycle” of increasingly negative cash flow, resulting in financial failure of the business.

There is a natural progression regarding when and where to focus on optimizing each of these metrics—that is, at which stage of a SaaS offering’s customer adoption life cycle.

So what should you focus on when? Here are seven insights on SaaS metrics from successful founders and consultants. 

1. Don’t focus on metrics like MRR too early on.

Are you trying to grow an early-stage startup? Chances are you’ve been told to focus on metrics like:

  • Monthly Recurring Revenue (MRR);
  • Lifetime Value (LTV);
  • Customer Acquisition Cost (CAC).

But if you don’t have enough data to return accurate, instructive measurements, it can be a waste of time. 
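
For reference, the three metrics in that list are conventionally computed along these lines. The figures below are invented, and the LTV formula shown is the simplest common approximation (ARPU divided by monthly churn):

```python
# Invented inputs for a small SaaS business.
paying_customers = 200
arpu = 50.0                       # average revenue per user, per month
monthly_churn = 0.04              # 4% of customers lost per month
sales_marketing_spend = 30000.0   # spend over the period
new_customers = 120               # customers acquired over the same period

mrr = paying_customers * arpu                # Monthly Recurring Revenue
ltv = arpu / monthly_churn                   # simple Lifetime Value approximation
cac = sales_marketing_spend / new_customers  # Customer Acquisition Cost

print(mrr, ltv, cac, ltv / cac)
```

Note how every formula leans on having enough history: without a stable churn rate or a meaningful acquisition count, LTV and CAC are noise, which is exactly Foong's point.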

ReferralCandy Growth Manager Darren Foong said the company found itself in a unique position when they launched their second SaaS startup, CandyBar, in 2017.

[Image: CandyBar's company launch party]

Foong said the company knew right away that they couldn’t rely on the same metrics for CandyBar that they had been using to measure ReferralCandy’s success, like LTV or CAC:

Darren Foong:

The product was at an early stage, so traditional marketing metrics were pointless.

It didn’t make sense to measure MRR (we had none), we couldn’t calculate LTV or CAC for lack of information…We didn’t have enough customers to map out their lifecycle.

Even measuring monthly traffic to the blog, Foong continued, was pointless—the content strategy prioritized long-term potential (i.e. foundational, evergreen articles) over short-term returns, and they were experimenting with different content types to see which would earn more shares and links.

Once we’d figured out the content, we started building up outreach and guest posting, and measuring the number of guest posts we’d secured.

Eventually, we tweaked the metric to include backlinks secured and moderated the domain authority of the sites involved. 

Still today, organic traffic isn’t our top priority…yet. Our focus is on building up domain authority—until the boss is ready to flip the switch.

As your company (and data collection) matures, LeadBoxer Co-Founder Wart Fransen recommends starting at the bottom of the funnel and working your way back up: 

Wart Fransen:

Once the metrics are in place, and you have collected some data, you start by having a good look at the bottom of your funnel (your closed deals/sales) and calculate for each step upwards from the conversion rates.

It’s only by starting at the bottom of the funnel, Fransen says, that companies can find out how many opportunities, leads, trials, Marketing Qualified Leads (MQLs), traffic, and campaigns they need for one deal/sale. 
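
Fransen's bottom-up arithmetic can be sketched in a few lines. The stage names and conversion rates here are hypothetical; the point is the method of dividing by each rate as you walk up the funnel:

```python
# Hypothetical stage-to-stage conversion rates, bottom of the funnel first.
rates = {
    "opportunity -> sale": 0.25,
    "trial -> opportunity": 0.40,
    "mql -> trial": 0.20,
    "visit -> mql": 0.05,
}

# Start from one closed deal and work upward: divide by each rate in turn.
needed = {"sales": 1.0}
count = 1.0
for stage, rate in rates.items():
    count /= rate
    needed[stage.split(" -> ")[0]] = count

print(needed)
```

With these rates, one sale requires 4 opportunities, 10 trials, 50 MQLs, and 1,000 visits; multiply by your sales target and you have concrete top-of-funnel goals.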

Focus on that—other metrics are often vanity metrics and should be ignored.

You should be able to say something like, “For each $1 we put into this specific marketing campaign, we get a result of $5 in terms of revenue.”

Monitoring SQLs or PQLs can help avoid a misplaced focus on vanity metrics.

2. Put more focus on SQLs (or PQLs).

Successful SaaS growth means marketing and sales teams work in harmony.

But an emphasis on MQLs may hand over too many underqualified leads to sales teams. In fact, as many as 90% of MQLs never turn into Sales Qualified Leads (SQLs) because they were tagged as MQLs too early in the buyer’s journey. 

Unless your marketing team checks whether MQLs reach the SQL stage, the sales team could waste time chasing unqualified leads. (Image source)

Nutshell Content Marketing Manager Ben Goldstein says that the marketing metrics worth keeping an eye on all relate to conversion in some way, and a big one to watch is SQL generation:

Ben Goldstein:

Is your marketing content compelling your site visitors and email subscribers to make the leap into a sales conversation or a free trial of your product?

While email acquisition generally measures the strength of your top-of-funnel content, this conversion metric is heavily influenced by your mid-funnel content (e.g. product comparison articles, new feature announcements, how-to guides, and customer success stories).

No matter what you do on a marketing team, your end goal should always be revenue growth.

Aptrinsic CEO Mickey Alon encourages SaaS companies to go a step further and look at product qualified leads (PQLs). He contends that the MQL/SQL model is highly subjective and rule-based, relying on basic activities like website visits, email opens, webinars, and gated content downloads:

Mickey Alon:

[The MQL/SQL model] is missing a critical element when it comes to SaaS companies, whereby potential customers expect to educate themselves by experiencing the product firsthand.

By overlooking this component of the buying experience, SaaS companies are effectively robbing themselves of the chance to gain greater visibility into buyer intent through product usage.

Alon further notes that the PQL approach centers the sales process on in-product engagements:

In the product-led approach, the customer lifecycle shifts more into the elevated axis area where product behavior becomes essential in guiding users and customers through the lifecycle. 

In fact, sales, marketing, product, and customer success can call upon product usage data to efficiently move prospects through the customer acquisition process.

Once you can identify a quality lead, it’s time to figure out which sources deliver the most of them.

3. Learn which sources generate the most high-quality leads.

According to stats from GetApp, lead quality is the biggest lead-generation problem for SaaS companies, which makes monitoring which sources generate “good” leads crucial. 

[Image: chart showing the biggest challenges for SaaS lead generation]

Sarah Bottorff, VP of Marketing at FastSpring, has seen companies ignore lead sources in favor of a more general focus on total leads.

Sarah Bottorff:

It’s easy to think that more leads or MQLs will equate to more sales, but if you aren’t monitoring conversion by lead source you might find yourself with a whole lot of nothing.

When you run an A/B test, it may be tempting to assume that the variant that drove the most leads is the winner; however, you need to take that extra step to measure how those leads performed throughout the buyer lifecycle.

Bottorff says it’s vital to know how much leads from various sources are worth in terms of sales and, when including acquisition costs, their ROI.

“A page driving record numbers of leads,” she notes, “does not necessarily translate into success for your business.”
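
A quick sketch of the kind of per-source report Bottorff describes. All lead, sale, and spend figures below are invented:

```python
# Leads, closed sales, and acquisition spend by source (invented numbers).
leads = {"organic": 500, "paid": 800, "referral": 120}
sales = {"organic": 25, "paid": 16, "referral": 18}
spend = {"organic": 2000.0, "paid": 12000.0, "referral": 500.0}

# Conversion rate and cost per sale, per source.
report = {
    src: {
        "conversion": sales[src] / leads[src],
        "cost_per_sale": spend[src] / sales[src],
    }
    for src in leads
}
print(report)
```

Here "paid" delivers the most leads but converts at 2%, while "referral" delivers the fewest leads and converts at 15%: exactly the "whole lot of nothing" trap that totals-only reporting hides.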

Instapage’s Head of Content, Brandon Weaver, agrees:

Conversions are great, but if those form submissions don’t eventually lead to SQL and increasing sales, was your campaign really that successful? 

Lead scoring can help, if used carefully.

4. Use lead scoring but don’t ignore “conversational values.”

Before handing over an SQL to their sales teams, many SaaS marketers use lead scoring to ensure the leads are qualified.

Lead scoring can be an essential tool to:

  • Avoid passing leads to your sales team before they’re ready to buy;
  • Highlight leads that need more nurturing in your sales funnel.

However, lead scoring comes with its own challenges. Sometimes, sales reps are given an SQL’s data instead of behavioral triggers—the “conversational values” they can put to work on sales calls.
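
A bare-bones version of explicit/implicit lead scoring might look like the sketch below. The criteria, weights, and qualification threshold are all invented for illustration:

```python
# Explicit criteria score "fit" (who the lead is); implicit criteria score
# behavior (what the lead does). Weights are arbitrary examples.
EXPLICIT = {"job_title_match": 20, "company_size_ok": 15, "target_industry": 10}
IMPLICIT = {"visited_pricing": 25, "started_trial": 30, "opened_email": 5}
SQL_THRESHOLD = 70  # an arbitrary cutoff for handing the lead to sales

def score_lead(attributes, behaviors):
    """Sum the weights for every criterion the lead satisfies."""
    score = sum(w for k, w in EXPLICIT.items() if k in attributes)
    score += sum(w for k, w in IMPLICIT.items() if k in behaviors)
    return score

lead_score = score_lead(
    {"job_title_match", "target_industry"},
    {"visited_pricing", "started_trial"},
)
print(lead_score, lead_score >= SQL_THRESHOLD)
```

Note that the final number (85 here) says nothing a rep can use on a call; the behavioral triggers behind it ("started a trial," "visited pricing") are the conversational values the next section argues should travel with the score.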

Examples of explicit and implicit lead-scoring sheets, which show the depth of data often handed over to sales teams after a lead is qualified. (Image source)

Paddle's Ed Fry wrote that marketing teams need to help sales teams connect the dots when handing over MQLs/SQLs, giving sales staff talking points for their prospects.

Sales reps need more than data to make a connection with an SQL. They need sales triggers and conversational information that can spark insight and wisdom when they reach out to a prospect. (Image source)

“(Sales) doesn’t want this intelligent thing that says ‘this lead is 66% more likely to buy’ because they can’t use that to communicate with the prospect,” Appcues Director of Sales John Sherer explains.

“But they can reach out to a prospect and say ‘Hey, you just installed. What are you looking to do next?'”

If a balance between MQLs, SQLs, and PQLs seems overwhelming, there are streamlined options for tracking SaaS metrics. 

5. Less is more—focusing on a single metric can be beneficial.

According to a report by Totango, SaaS companies track a bunch of different metrics.

But what if companies focused on a single metric? Ahrefs CMO Tim Soulo said they once used three analytics platforms to track conversions. Then they ditched conversion tracking altogether.

Tim Soulo:

Shortly after joining Ahrefs as a CMO, I wanted to do marketing “by the books” and set out to set up our conversion tracking for new leads.

Somehow, at that time, we were paying for three different analytics software. I think these were Kissmetrics, Mixpanel, and Woopra.

We used Segment for feeding exactly the same data to all three platforms, and I configured the same conversion funnel in all of them—from a visitor to our homepage and down to a successful payment for the first month of service.

Right off the bat, all three analytics platforms provided different conversion numbers at different steps of the funnel. But, Soulo continued, it got worse:

As we were rolling out some changes to our homepage and our onboarding flow, the discrepancies between those three analytics systems got even worse.

One might say that we could spend more time looking into it and finding the reason for these discrepancies, or just pick one platform and focus on improving the conversion numbers that it was reporting.

Instead, Ahrefs took a different approach and focused on a "North Star Metric": a single metric that a company uses to define success.

The qualification lifecycle of a North Star Metric. (Image source)

Ahrefs decided to track only monthly revenue growth for their product. “It’s been nearly three years since we stopped using conversion tracking software,” concluded Soulo, “and we’ve never felt any urge to try it again.”

Harver’s Marketing Lead, Mitchel de Bruin, agrees that North Star Metrics can be helpful. At Harver, they look at the number of candidates that flow through their systems. “If this number keeps on growing, it means we’re doing a good job across the board,” he said.

One number that should matter for every SaaS company? Retention rate.

6. Don’t ignore churn, even when your customer retention is on point.

Some 55% of SaaS companies rank customer retention as their key metric to measure. That puts a spotlight on churn.

For a SaaS company with a hundred customers, two customers churning isn’t going to move the needle. However, churn compounds. That 2% churn rate that wasn’t a problem at the start? If you have a half-million customers, that same churn rate translates into a monthly loss of 10,000 subscribers.

Replacing that many customers can be unsustainable. 
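
The arithmetic behind that warning is simple enough to verify:

```python
def monthly_churned(customers, churn_rate):
    """Customers lost in one month at a flat churn rate."""
    return int(customers * churn_rate)

print(monthly_churned(100, 0.02))      # 2 customers: barely noticeable
print(monthly_churned(500_000, 0.02))  # 10000 subscribers to replace, every month

# Churn also compounds: 2% monthly removes far more than 2% of the base in a year.
remaining = 1.0
for _ in range(12):
    remaining *= 1 - 0.02
print(round(1 - remaining, 3))  # 0.215: about 21.5% of the base gone after 12 months
```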

ProfitWell CEO Patrick Campbell said in a ChargeBee article that if companies focus solely on growth, they’ll likely hide massive retention problems that will reappear down the line.

This bad habit can start in the early days of a SaaS company, when it’s easier to replace churned customers with new ones:

Most companies don’t think about churn until deeper in their development, which results in massive problems down the road, because it’s not that simple to just change the DNA of your company when you’ve hundreds, if not thousands, of employees.

One solution? Focus on growing the loyalty of your early subscribers, not just their raw numbers.

That shift dovetails with the final piece of advice from experts.

7. Once you’re growing, focus on Net Dollar Retention (NDR).

Perhaps the least discussed metric of SaaS success is Net Dollar Retention (NDR): the percentage of recurring revenue retained from existing customers after accounting for churn, upgrades, and downgrades.

For high-growth private SaaS companies, the median NDR is 101%. That figure mirrors the average for SaaS companies that reach an IPO; NDR also hovers over the 100% mark for acquisitions.
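
A common way to compute NDR over a period looks like this; the revenue figures below are invented:

```python
# Revenue movements within an existing customer cohort over one year.
starting_arr = 1_000_000.0
expansion = 180_000.0    # upgrades from existing customers
contraction = 40_000.0   # downgrades
churned = 60_000.0       # revenue lost to cancellations

# NDR: what the cohort's revenue became, divided by what it started at.
ndr = (starting_arr + expansion - contraction - churned) / starting_arr
print(f"{ndr:.0%}")  # 108%, in line with the median SaaS IPO figure cited below
```

New-customer revenue is deliberately excluded: NDR isolates whether the customers you already have are worth more or less over time.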

Spark Capital’s Alex Clayton remarked on Medium that NDR can surface issues that might otherwise go unnoticed:

Alex Clayton:

A SaaS company could be growing ARR (annual recurring revenue) over 100% each year, but if their annualized net dollar retention is less than 75%, there is likely a problem with the underlying business.

Net dollar retention has a huge impact on the long-term success of a business; companies that go public usually have net dollar retention rates of well over 100%, and in some cases 150%+. 

Sammy Abdullah is the Co-founder of VC firm Blossom Street Ventures. The firm looked at 40 recent SaaS IPOs and found that the median NDR at the time of IPO was 108%. 

Abdullah offered additional details on the list:

Note that the top 5, which includes names like Box, Crowdstrike, and PagerDuty, were much stronger, showing an average net retention of 139%.

The top 10 were 131%, and the top 20 were 122%.

The numbers, Abdullah continued, prove the relevance of NDR as a SaaS metric:

If you’re at ~106%, you’re in line with the average. If you’re below 100%, do a little work to figure out what’s happening. And if you’re ~120%+, you’re in great company.

Conclusion

There are hundreds of metrics a SaaS company can use to track growth. But knowing which ones to use at which time can make all the difference. 

The overwhelming advice from industry leaders is to keep it simple. Focus on metrics that lead to conversions and revenue growth, even if it means reducing your analytics reporting to a single North Star Metric. 

Pinpointing the metrics that fuel growth early in your company history can save years of wasted focus—and lost growth opportunities—that many businesses sacrifice to vanity metrics.

Propensity Modeling: Using Data (and Expertise) to Predict Behavior

The post Propensity Modeling: Using Data (and Expertise) to Predict Behavior appeared first on CXL.

C.F. Kettering once said, “My interest is in the future because I am going to spend the rest of my life there.”

When we look at data and analytics, we're focused on the past. How did we do last quarter? What happened in H1 2019? And how does that compare to H1 2018? How well did landing pages X, Y, and Z convert last Monday at 1:03 p.m.? (I'm kidding, I'm kidding.)

Data becomes more valuable when we use it to predict the future instead of just analyzing the past. That’s where propensity modeling comes in.

What is propensity modeling?

Propensity modeling attempts to predict the likelihood that visitors, leads, and customers will perform certain actions. It’s a statistical approach that accounts for all the independent and confounding variables that affect said behavior.

So, for example, propensity modeling can help a marketing team predict the likelihood that a lead will convert to a customer. Or that a customer will churn. Or even that an email recipient will unsubscribe.

The propensity score, then, is the probability that the visitor, lead, or customer will perform a certain action. 

Why optimizers should care about propensity modeling

Even if you’re not currently using or considering propensity modeling, understanding the mathematics behind the process is important. For example, do you know the difference between linear and logistic regression models? 

The same way SEO experts need to understand a bit of content marketing, HTML, etc., to be competent, optimizers need a basic understanding of statistics and propensity models.

But why should optimizers care about propensity modeling when there’s testing and experimentation?

Tim Royston-Webb, Executive Vice President of Strategy at HG Insights, offers a few reasons:

tim royston-webb

Tim Royston-Webb:

The thing is that we can’t always rely on these statistical methods in the real world. There might be several scenarios where real experiments are not possible: 

  • Sometimes management may be unwilling to risk short-term revenue losses by assigning sales to random customers. 
  • A sales team earning commission-based bonuses may rebel against the randomization of leads. 
  • Real-world experiments may be impractical and costly when the same data or participants can be modeled through quasi-experimental procedures, or when historical data is enough to produce actionable insights. 
  • Real-world experiments may involve ethical or health issues, such as studying the effect of certain chemicals.

Still, propensity modeling and experimentation are not mutually exclusive. The two work best when combined—when a propensity model fuels an experimentation program and vice versa.

Even if you don’t face any of the experimentation challenges Royston-Webb mentions, propensity modeling can help you:

  • Fill your pipeline;
  • Save time on quantitative conversion research;
  • Explore smarter segmentation options.

How to build a propensity model

Not all propensity models are created equal.

As Mojan Hamed, Data Scientist at Shopify, explains, there’s no shortage of options to choose from, and none are inherently superior:

mojan hamed

Mojan Hamed:

The first step is to actually pick a model because you have a few options. For example, instead of measuring propensity to churn, you could choose a survival analysis.

Regression is a good option because it’s very interpretable for non-technical audiences, which means it can be communicated easily.

It’s also less of a black box, making the risk more manageable. If something goes wrong and accuracy is low or you get an unexpected result, it’s easy to drill down to the formula and figure out how to fix it.

For example, if you’re forecasting and notice some segments do well with the base models while others do not, you can dig deeper into those low-accuracy segments to identify the issue. With regression, the whole process won’t take more than a few minutes. With other models, that diagnosis is more time-consuming and complex.

Once you’ve selected the model that’s right for you (in this article, we’ll focus on regression), building it out has three steps:

  1. Selecting your features;
  2. Constructing your propensity model;
  3. Calculating your propensity scores.

Edwin Chen, Software Engineer at Google, summarizes the process in more detail:

edwin chen

Edwin Chen:

First, select which variables to use as features. (e.g., what foods people eat, when they sleep, where they live, etc.)

Next, build a probabilistic model (say, a logistic regression) based on these variables to predict whether a user will start drinking Soylent or not. For example, our training set might consist of a set of people, some of whom ordered Soylent in the first week of March 2014, and we would train the classifier to model which users became Soylent users.

The model’s probabilistic estimate that a user will start drinking Soylent is called a propensity score.

Form some number of buckets, say 10 buckets in total (one bucket covers users with a 0.0 – 0.1 propensity to take the drink, a second bucket covers users with a 0.1 – 0.2 propensity, and so on), and place people into each one.

Finally, compare the drinkers and non-drinkers within each bucket (say, by measuring their subsequent physical activity, weight, or whatever measure of health) to estimate Soylent’s causal effect.
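Chen's bucketing-and-comparison step can be sketched in a few lines of Python. Everything below—the propensity scores, the drinker flags, and the "weekly exercise hours" outcome—is invented purely for illustration; a real model would supply the scores.

```python
# A minimal sketch of the bucketing step Chen describes: given each user's
# propensity score (probability of becoming a Soylent drinker) and whether
# they actually did, stratify users into 10 equal-width buckets and compare
# outcomes within each bucket. All data here is made up for illustration.

def bucket_index(score, n_buckets=10):
    """Map a propensity score in [0, 1] to a bucket 0..n_buckets-1."""
    return min(int(score * n_buckets), n_buckets - 1)  # score == 1.0 -> last bucket

def stratified_comparison(users, n_buckets=10):
    """users: list of (propensity_score, is_drinker, outcome_metric) tuples.
    Returns {bucket: (mean outcome of drinkers, mean outcome of non-drinkers)}."""
    buckets = {}
    for score, is_drinker, outcome in users:
        b = bucket_index(score, n_buckets)
        buckets.setdefault(b, {True: [], False: []})[is_drinker].append(outcome)
    result = {}
    for b, groups in sorted(buckets.items()):
        result[b] = tuple(
            sum(groups[g]) / len(groups[g]) if groups[g] else None
            for g in (True, False)
        )
    return result

# Hypothetical users: (propensity score, became a drinker?, weekly exercise hours)
users = [
    (0.05, False, 2.0), (0.08, True, 2.5),
    (0.52, True, 4.0), (0.55, False, 3.0), (0.58, True, 5.0),
    (0.95, True, 6.0), (0.97, False, 5.5),
]
print(stratified_comparison(users))
```

Comparing drinkers and non-drinkers only *within* a bucket is the point: users in the same bucket were similarly likely to start drinking Soylent, so outcome differences are less confounded by who self-selects into the product.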

Let’s explore each step further.

1. Selecting the features for your propensity model

First, you need to choose the features for your propensity model. For example, you might consider:

  • Product milestones;
  • App and theme downloads;
  • Demographics;
  • Device usage;
  • Buying history;
  • Plan selection.

Your imagination is the only limit. 

Selecting features is easier when you're interested only in a prediction: you can simply add all the features you're aware of, and the less relevant a feature is, the closer its coefficient will be to 0. If you also want to understand the factors behind that prediction, feature selection becomes more difficult. 

As Hamed explains, there are a few checks and balances:

mojan hamed

Mojan Hamed:

Let’s say when you train the model, you train it on 50% of your historical data and test it on the remaining 50%. In other words, you hide the variable you’re trying to predict from the model in the test group and try to get the model to predict the value—so you can see how well you can predict something that you already have actuals for.

If you want to interpret coefficients, you have to ensure the error (the actual value minus what you predicted) has no correlation to the variable you're trying to predict. If it does, that means there's a trend in the data set that you're not capturing in your features. It's a good sign that you have a variable you should be including but aren't.

Also, make sure two features aren’t linearly correlated to each other. That’d be a good use case to remove a feature.
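The two checks Hamed describes can be sketched with a plain Pearson correlation. The predictions, residuals, and feature values below are made up for illustration; in practice you'd compute them on your held-out test set.

```python
# Sanity checks on a (toy) regression model:
# (1) residuals (actual - predicted) should not correlate with the predictions,
# (2) no two features should be strongly linearly correlated with each other.

import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Check 1: residuals vs. predictions on a held-out test set.
actual    = [10.0, 12.0, 9.0, 15.0, 11.0]
predicted = [ 9.5, 12.5, 9.0, 14.0, 11.5]
residuals = [a - p for a, p in zip(actual, predicted)]
print("residual/prediction correlation:", pearson(residuals, predicted))

# Check 2: pairwise feature correlation; |r| near 1 suggests dropping a feature.
sessions_per_month = [3, 7, 2, 9, 5]
pages_per_month    = [6, 14, 4, 18, 10]  # exactly 2x sessions -> redundant
print("feature correlation:", pearson(sessions_per_month, pages_per_month))
```

In the second check, `pages_per_month` is a perfect linear function of `sessions_per_month`, so the correlation comes out at 1.0: a clear case for removing one of the two features.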

Whether you’re interested in interpreting the coefficients or not, one thing is certain: You’ll need to gather insight from internal experts. Despite popular belief, propensity modeling does not diminish the need for business and marketing know-how.

So, gather a room full of domain experts: email marketers, conversion optimizers, data scientists, finance experts, CRM specialists⁠—anyone with relevant business acumen. 

There are numerous mathematical ways to decide which features to select, but they can’t replace human knowledge and experience.

2. Constructing your propensity model

Regression has been mentioned a few times already. But what exactly is regression analysis? It's a predictive modeling technique that examines the relationship between a dependent variable (e.g. lead-to-customer conversion) and independent variables (e.g. product milestones, app and theme downloads, etc.).

Jim Frost, Statistical Technical Communications Specialist at Minitab, explains:

jim frost

Jim Frost:

In regression analysis, the coefficients in the regression equation are estimates of the actual population parameters. We want these coefficient estimates to be the best possible estimates!

Suppose you request an estimate—say for the cost of a service that you are considering. How would you define a reasonable estimate?

  • The estimates should tend to be right on target. They should not be systematically too high or too low. In other words, they should be unbiased or correct on average.
  • Recognizing that estimates are almost never exactly correct, you want to minimize the discrepancy between the estimated value and actual value. Large differences are bad!

These two properties are exactly what we need for our coefficient estimates!

For the purposes of this article, you’ll want to be familiar with linear and logistic regression.

(Image source)

In linear regression, the outcome is continuous, meaning it can have an infinite number of potential values. It’s ideal for weight, number of hours, etc. In logistic regression, the outcome has a limited number of potential values. It’s ideal for yes/no, 1st/2nd/3rd, etc.

3. Calculating your propensity scores

After constructing your propensity model, train it using a data set before you calculate propensity scores. How you train the propensity model and calculate propensity scores depends on whether you chose linear or logistic regression.

Hamed explains:

mojan hamed

Mojan Hamed:

In a linear regression model, it literally multiplies the coefficients by the values and gives you a continuous number. So, if your formula is customer_value = 0.323 × (sessions per month), where 0.323 is the coefficient for sessions per month, it multiplies the number of sessions you had that month by 0.323.

For logistic regression, the predicted value is a log-odds, and the calculation converts it to a probability. This probability is what we interpret as the “score.”
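The two scoring paths Hamed describes can be sketched side by side. The 0.323 coefficient is her example; the intercept and session counts below are invented for illustration, not fitted to anything.

```python
# Linear vs. logistic scoring with the same feature (sessions per month).
# The coefficient 0.323 comes from Hamed's example; the intercept is invented.

import math

def linear_score(sessions_per_month, coef=0.323):
    # Linear regression: coefficient times feature value -> continuous number.
    return coef * sessions_per_month

def logistic_score(sessions_per_month, coef=0.323, intercept=-2.0):
    # Logistic regression: the same weighted sum is a log-odds value,
    # which the sigmoid converts to a probability (the propensity score).
    log_odds = intercept + coef * sessions_per_month
    return 1.0 / (1.0 + math.exp(-log_odds))

print(linear_score(10))    # an unbounded continuous value, e.g. customer value
print(logistic_score(10))  # a probability strictly between 0 and 1
```

The sigmoid is what keeps the logistic output interpretable as a score: however large the weighted sum gets, the result stays between 0 and 1.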

It’s important that the propensity model works with your real-world data. This is a perfect example of how propensity modeling and experimentation go hand-in-hand. Experimentation can validate the accuracy of propensity scores.

No matter how confident you are about the accuracy of a prediction, run an experiment. There could be factors you haven’t considered. Or, for example, the model may unexpectedly optimize for quantity (e.g. session-to-lead conversion rate) without considering the impact on quality (e.g. lead-to-customer conversion rate, retention, etc.)

The use of experimentation to validate propensity modeling is critical. It gives you peace of mind.

Again, propensity modeling is a tool at an optimizer’s disposal, not a replacement for a thorough understanding of experimentation and optimization. Take advantage of how open regression is⁠—look under the hood and ensure the data you’re seeing makes sense before running wild with it.

How to use your propensity model for smarter experimentation

I know, I know. You don’t need another lecture on how correlation is not causation. But, with propensity models, it’s easy to see causation where it doesn’t exist.

In a regression model, you can’t assume features have a causal relationship with the variable you’re attempting to predict.

It’s easy to look at the model and see, for example, that downloading X apps during a trial is a good indication that the lead will convert into customer. But there’s absolutely no proof that driving more app downloads during a trial will make anyone more likely to convert into customer.

Another important word of caution: Don’t substitute propensity scores for your (very valuable) optimization knowledge. 

Propensity modeling, like other tools, will not tell you how to optimize. When you open up Google Analytics or fire up an Adobe Analytics dashboard, the insights don’t fall off the screen and into your lap. You use your experience, knowledge, and intuition to dig for those insights.

For example, you might know from your propensity model that a customer is highly likely to churn. But is what you'd spend to prevent that churn more than the lifetime value of that customer? Your model can't answer that question—it's not a replacement for critical thinking.

Alright. As we gently step over all of that caution tape, let’s look at three valuable propensity models optimizers can leverage:

  1. Propensity to buy or convert. How likely are visitors, leads, and customers to make a purchase or convert to the next step of the funnel? Those who have a lower propensity score need more incentives than others (e.g. you might offer a higher discount if you’re an ecommerce store).
  2. Propensity to unsubscribe. How likely are recipients, leads, and customers to unsubscribe from your email lists? To those with a higher propensity score, you might try reducing the frequency of emails or sending a special offer to reinforce the value of remaining a subscriber.
  3. Propensity to churn. Who are your at-risk leads and customers? If they have a high propensity score, you might experiment with in-product win-back campaigns or assign account concierges to reconnect them with your core value proposition.

Propensity modeling is not prescriptive. Knowing that a group of leads has a higher propensity to convert alone is not particularly valuable. What’s valuable is combining that knowledge with optimization know-how to run smarter, more targeted experiments and extract transferable insights.

Conclusion

The future is not an exact science. (Arguably, exact science is not an exact science.) But with propensity modeling, you can predict the future with a reasonable degree of certainty. All you need is a rigorous process and a data scientist.

Here’s the step-by-step process:

  1. Select your features with a group of domain experts. Carefully consider whether you want to interpret the coefficients or not.
  2. After choosing linear or logistic regression, construct your model.
  3. Train your model using a data set and calculate your propensity scores.
  4. Use experimentation to verify the accuracy of your propensity scores.
  5. Combine propensity modeling with your optimization expertise to run smarter, more targeted experiments that lead to more valuable, more transferable insights.

You’ll be able to push your data beyond what has already happened and toward what is probably going to happen in the future.

The post Propensity Modeling: Using Data (and Expertise) to Predict Behavior appeared first on CXL.

How to Use Google Data Studio for Client Reporting

“Getting great results” and “creating great reports” are very different skill sets. If you’re like most marketers, you’d rather sharpen your subject-matter expertise than spend time in PowerPoint. The result is that reporting becomes an afterthought rather than an opportunity—a “necessary evil” with imperfect solutions: Manual reporting is too time-consuming, but it’s been the only […]

The post How to Use Google Data Studio for Client Reporting appeared first on CXL.

“Getting great results” and “creating great reports” are very different skill sets. If you’re like most marketers, you’d rather sharpen your subject-matter expertise than spend time in PowerPoint.

The result is that reporting becomes an afterthought rather than an opportunity—a “necessary evil” with imperfect solutions:

  • Manual reporting is too time-consuming, but it’s been the only way to report on the right platforms with the right analysis.
  • Automated dashboard reports save time but bring limited functionality and don’t help clients understand the story behind the scorecards.

Fortunately, Google Data Studio can automate the time-intensive tasks of data compilation and report building without sacrificing important context and insights.

While Data Studio gives you an ideal platform for report creation, there’s a final step to transform data into a story that drives your clients to decision and action (such as pivoting strategy, approving new resources, or simply choosing to retain your services). That step is not so easily automated.

So before you start building your Data Studio report, make sure you know what to include—and what to leave out—to create a compelling client report.

Clients need stories, not just data

In data-driven industries, it’s easy to imagine that we can “let the data decide,” but that’s actually not the function of data. It’s our job to help our clients interpret the data so they can approve recommendations and take action.

chart showing transformation of data into wisdom.
(Image source)

While dashboards and data snapshots bring value to marketers and analysts, they’re usually insufficient for clients. A Deloitte Canada study revealed that 82% of CMOs surveyed felt unqualified to interpret consumer analytics data.

As Google’s Digital Marketing Evangelist Avinash Kaushik explains:

People who are receiving the summarized snapshot top-lined have zero capacity to understand the complexity, will never actually do analysis and hence are in no position to know what to do with the summarized snapshot they see.

To build useful reports, we need to move beyond simply summarizing performance with quick charts. We need to help clients understand the story.

The benefits of data storytelling

If “storytelling with data” sounds both vague and intimidating, you’re not alone. Storytelling evokes ideas of creativity and even fiction, a sharp contrast to the left-brain data and analysis tools we’re accustomed to using.

Telling a story in a report doesn’t require a cast of characters, anecdotes, or plotlines. Essentially, you need to follow the same UX advice you’ve been giving your clients for decades: don’t make them think.

Your readers need more than features (facts and figures) to take action. Story provides context so that they understand where to focus their attention. Storytelling also heightens emotions, which is vital because decision-making is driven by emotions, not logic.

Make your data storytelling emotional

The words emotion and motivation are derived from similar Latin roots. The more your clients can feel something, the more motivated they’ll be to act.

Marketers may be tempted to highlight wins and gloss over losses in reports to nudge their clients to feel joy (or at least satisfaction). But this strategy can backfire. 

Your clients need to know about what's not going well—even more than they need to know about what's working. Due to what's known as attentional bias, we're wired to pay attention to perceived risk, and generally to ignore the status quo.

People also respond differently to winning and losing, and losses are twice as powerful compared to equivalent gains. When your clients can see and experience a loss, you place them in a highly motivated state to take necessary action and, if necessary, change course.

To illustrate, let’s say you were responsible for driving 4,000 net new email subscribers each month. You’re hitting the goal, but the steady increase in list size isn’t growing revenue—a fact that’s been overlooked and gone unreported.

By drawing attention to the discrepancy with a visualization, you can drive a discussion that wouldn’t be possible if you focused only on list growth. 

example of second trendline in report to highlight an issue.

With this new (alarming) information, you can revisit targets, value per subscriber, or changes needed for lead nurturing and sales.

With client work, there’s always a temptation to default to “everything’s sunny all the time” reporting. But those reports do a disservice to the client and the agency, even if they are more comfortable to deliver.

Transparency about actual market conditions, threats, and challenges is a catalyst for real improvement. If problems aren't examined, nothing changes.

3 key elements of data stories

Analytics evangelist Brent Dykes says that storytelling with data needs three elements to drive change: data, visuals, and narrative.

When all three elements work together in your report, you reduce the cognitive load placed on clients, helping them easily identify and process the story. Reports that showcase only raw data are insufficient but are still used surprisingly often.

Adding charts and graphs can help with comprehension, especially when they employ good design principles. Our brains process high-contrast images subconsciously (before we can make sense of the data). These visual properties, known as preattentive attributes, include:

  • Form;
  • Color;
  • Movement;
  • Spatial positioning.

When you apply preattentive attributes to chart creation, you help your reader find the story more clearly and quickly. Notice the impact of adjusting weight and color in these Data Studio line charts:

example of how to use data studio to make a trendline stand out.

Narrative is the final key element for story, but it’s often missing from reports—making it difficult for clients to understand and engage with the data. Narrative provides context for your readers; it’s the answer to the question, “What am I looking at?”

Journalists begin their stories with fast facts: who, what, where, when, and why. This style, known as the inverted pyramid structure, puts the most important information first and gives supporting details further in the story.

Readers are accustomed to this style, and they assume that the earliest information is the most important.

  • On a macro level, the report should begin with account performance before diving into supporting details.
  • On a micro level, each section or chart should lead with priority metrics, or KPIs, followed by secondary metrics. 

Many reporting tools lead with secondary metrics, which measure “what must not be broken” (instead of “what needs to be fixed”). That focus can encourage clients to overweight metrics that you shouldn’t optimize. Always start with the big picture.

What your reports should contain

Before we explore what specific information “clients” need from report deliverables, we have to address the fact that businesses, roles, and people aren’t all the same. A small business owner has different priorities than a CMO. Some clients want to see all the data, others just want the phone to ring.

Keep your specific client in mind as you build out your report, because details that satiate one person can overwhelm another. That said, there are three universal guidelines that will make all your reporting better, no matter the end user. 

Your report should include:

1. What happened

Your clients hired you to solve problems, so your report should address those problems, and the progress made in solving them. This starts with basic benchmarks:

  • What are the KPIs, and were targets met?
  • How are we performing compared to previous periods? 

As mentioned above, it’s not the job of a report to showcase only the wins. Be consistent with your key metrics (don’t cherry pick flattering stats), and make it as easy as possible for readers to interpret the data you’re sharing with them. 

2. Why it happened

Once your clients know what happened, they’ll want to know why. Sometimes, changes are due simply to natural variance, but you’ll want to document causal factors:

  • External changes. Your report can reveal changes to the competitive landscape, document the impact of seasonality and news cycles, and illustrate the effect of algorithm updates. 
  • Internal changes. Note if there were changes to marketing efforts (whether on- or offline), page content, site speed, availability of inventory, offers or promotions, or pricing. Also document if tracking changed or went down. 
  • Your team’s involvement. Show progress made on tasks, including implementation and production. Note how your team helped accomplish wins or mitigate losses.

Clients want to see the return on their investment in your team. And according to the labor illusion effect, they’re happier when they feel like you’re working hard for them—whether or not that work affects the outcome. 

3. What should happen next

Just as it can be hard for novices to tease out benefits and outcomes in product copy, it can be challenging to write recommendations in reporting (e.g. “Your tracking is broken. So as a next step…we recommend you fix it.”)

The purpose of next steps isn’t necessarily to introduce groundbreaking ideas or plans but to create a clear path forward. What may feel redundant or obvious to you can provide needed specificity to your client that increases the likelihood they’ll take action. If performance isn’t meeting expectations, it’s especially important to provide recommendations that address the shortcoming.

When writing next steps, use the active voice and assign responsibility wherever possible. “This discovery should be investigated further” does not help your clients know what to do, or who should do it. “Client to provide updated content roadmap by August 15” does.

Now that you know how to tell a story and what to include in your report, it’s time to create it in Google Data Studio.

Create your report in Google Data Studio

1. Start a new report > Choose a template

After logging in to Data Studio with a Google account, your first step will be to create a new report.

choosing a template in google data studio.

You can choose a blank report or avoid “blank page syndrome” by beginning with a Data Studio template. Templates are available within the platform or from the Report Gallery. (Many marketing teams have published their own.)  

As a reminder, don’t be fooled by the apparent convenience of templates; Even the best ones are still tools, not client deliverables. You’ll need to spend time strategically customizing whichever template you choose to transform it from a one-size-fits-all dashboard into a report that provides value for your client.

2. Connect your data sources

Data Studio makes it easy to connect directly to your data source(s). You can currently select from 18 Google connectors built and supported by Data Studio, such as Google Analytics, Google Ads, and YouTube Analytics. You can also upload your data via CSV or access it through Google Sheets or BigQuery.

Here’s a quick walk through of how to add a data source:

how to add a data source in google data studio.

If you run into limitations accessing data sets or fields through Google products, you can choose from 141 partner connectors and 22 open-source connectors, with more connections being regularly added. 

Because you can connect to multiple sources in a single report, you don't need to prepare or curate your data sources before connecting. Individual charts in Data Studio each use a single data source by default, but you can use shared values (join keys) to blend in data from up to four additional sources.
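Conceptually, a blend behaves like a left-outer join on the join key: every row of the primary source is kept, and matching rows from each additional source contribute their fields. A minimal sketch of that behavior, with hypothetical sources and field names:

```python
# A rough sketch of Data Studio's blending semantics: a left-outer join of a
# primary source with up to four others on a shared join key. The sources and
# field names below are invented for illustration.

def blend(primary, *others, join_key="page"):
    """primary/others: lists of row dicts that share `join_key`."""
    blended = []
    for row in primary:
        merged = dict(row)
        for source in others:
            match = next((r for r in source if r[join_key] == row[join_key]), None)
            if match:  # unmatched keys simply contribute no fields (nulls in DS)
                merged.update({k: v for k, v in match.items() if k != join_key})
        blended.append(merged)
    return blended

analytics = [{"page": "/home", "sessions": 120}, {"page": "/pricing", "sessions": 45}]
ads       = [{"page": "/pricing", "clicks": 30}]
print(blend(analytics, ads))
```

Note the left-join behavior: "/home" has no match in the ads source, so it keeps its analytics fields and simply gains nothing, while "/pricing" picks up the `clicks` metric.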

Once your data sources are connected, you can begin formatting the presentation of your report.

3. Create impactful visualizations

Visualizations can increase your reader’s understanding of the data on both conscious and subconscious levels. The more clarity your charts provide, the easier the story is to interpret.

Choose the best chart for your data

Adding a chart in Data Studio is an easy dropdown. Selecting the right chart visualization takes some thought.

adding a chart in google data studio.

Be sure that each chart adds meaning to your report; don’t compare metrics or create segments just because you can. Your clients will seek patterns and meaning even where they don’t exist, and it’s far easier to omit useless charts than to explain why a perceived trend is actually just noise.

That said, you can create multiple charts to increase comprehension. By grouping distinct charts (rather than relying on viewer-enabled date-range or data controls), you clarify relationships, composition, and trends without requiring clients to conduct discovery and draw their own conclusions.

Following the inverted pyramid guidelines, you can show high-level, aggregate performance with one chart, and break out performance with another. Or, use side-by-side time series charts to “zoom in” on recent performance and “zoom out” on trends over time, giving your client at-a-glance context. 

If you’re unsure of which visualization to use, chart selection tools (like the one below from chartlr) can help you choose the best chart types for your data and objectives. 

how to choose a chart type for reporting.

Enhance charts with preattentive attributes

When charts are busy or cluttered, add contrast to clarify the story. Edward Tufte’s Data-Ink ratio suggests minimizing the amount of non-essential “ink” used in data presentation.

Data Studio makes it easy to adjust color, weight, and scale (as well as grids and axes) to create contrast and emphasis. This is handled in Data Studio’s Style panel, where each metric series is individually controlled:

changing the color of trendlines in google data studio.

Data Studio also has some built-in visualizations to help with quick data interpretation, such as the red and green time comparison arrows found in scorecards and tables.

Be sure to review whether the colors correlate with positive or negative change for the metric. “Green is up” works great for site visits, but a CPC increase with a green arrow is confusing for readers. You can override the default settings in the Style panel.

choosing the change arrow color in data studio.

4. Provide narrative and context

Narrative creates a sense of setting, time, and place for your reader and connects concepts, ideas, and events.

Help your readers make sense of data and visualizations by clarifying relationships and including background information they may not immediately know or remember. 

Establish hierarchy and organization

As with any document, a consistent layout and hierarchy in your report orients your readers. Data Studio is not a word processor, and it doesn’t apply style sheets or standardized formatting the same way. 

You can control layout and theme properties to provide a consistent look and feel. As you create (or duplicate) pages on the fly, pay attention to heading areas, positioning, text, and font size. 

You can apply elements to all pages by selecting them and making them report-level. 

making report-level changes in google data studio.

Group thematically similar charts and data together. It helps to have one main idea per page (or slide) to reduce the amount of information your reader processes at one time.

Leverage headings and microcopy

Headings are good; better headings are better. Again, your goal with headings is more to orient your reader than to repeat what they’re about to read.

Microcopy gives your readers additional context about a page element, and it’s extremely easy to add to your Data Studio report.

You can use microcopy to spell out acronyms, provide definitions and annotations, cite targets and objectives, or otherwise reduce friction for your clients as they work to understand the data.  

In this Data Studio template screenshot, the heading, microcopy, and metric labels all repeat each other. This is fine for a template but would add little value for clients.

example of uninformative headings in a report.

With just a bit of customization, each text element serves a purpose. (Note that the metrics have also been re-ordered to lead with KPIs.)

example of better use of headings in report.

Add context with chart deep dives

Context and text are not synonyms; context doesn’t have to be lengthy sentences—and doesn’t even need to be words at all.

The “why” of what happened is rarely found in standard aggregate charts. When further explanation is needed, sometimes the best approach is “show don’t tell.”

Here’s a chart showing revenue and spend year to date. Both metrics are trending up. But why?

example of a report chart that doesn't tell the whole story.

We could just say “demand for Product A increased,” but a supplemental chart does a better job illustrating the spikes in search traffic:

second chart that provides an explanation of a change in metrics.

Within Data Studio, you can easily add new pages and charts to substantiate observations and conclusions. Add pages in-line or deep link to an appendix.

Keep it current

Performance data can and should be automated to save time and ensure accurate, consistent reporting. Data Studio recently added new date range options, giving you advanced features for automatically updating visualizations, such as:

  • Compare year-to-date against two years ago;
  • Compare last 30 days with previous, aligned on Monday;
  • Fixed start date through last month.

As you may have gathered, performance-specific narrative and analysis don't play well with "set it and forget it" automated report distribution. While sharing direct access makes sense for dashboards, curated reports are usually better scheduled and delivered as PDFs.

If your client prefers a “live” report to a PDF, you can create a new instance with a new link each month. You’ll just need to manually set date ranges rather than using “last month” to keep data accurate to the time period you’re reporting on.

Conclusion

Many of the principles of report optimization mirror the principles of conversion optimization.

There are key differences between guiding a prospect to action and guiding your client through interpreting complex data, but in both cases your audience’s needs should inform your choices about what information to include and emphasize.

With customized Data Studio reports, you can automate the compilation of the data your clients need to see, and add essential narrative and analysis to lead them on a virtuous cycle of smarter action and better results.

The post How to Use Google Data Studio for Client Reporting appeared first on CXL.