How to show Lighthouse Scores in Google Sheets with a custom function

Learn how to use machine learning to streamline your reporting workflows right within Google Sheets.

The post How to show Lighthouse Scores in Google Sheets with a custom function appeared first on Marketing Land.

Automation and machine learning have tremendous potential to help all of us in marketing. But at the moment a lot of these tools are inaccessible to people who can’t code, or who can code a bit but aren’t really comfortable with it.

What often happens is that one or two people in the office are comfortable writing and editing code, and these people produce scripts and notebooks that everyone else runs. The workflow looks a bit like this:

I will show you a simple way to streamline this workflow by removing the steps where people need to run a script and format the output. Instead, they can run the automation directly from within Google Sheets.

The example I will show you is for a Sheets custom function that returns the Lighthouse score for a URL like in this gif:

The method I will show you isn’t the only way of doing this, but it does illustrate a much more general technique that can be used for many things, including machine learning algorithms.

There are two parts:

  1. A Google Cloud Run application that will do the complicated stuff (in this case run a Lighthouse test) and that will respond to HTTP requests.
  2. An Apps Script custom function that will make requests to the API you created in step 1 and return the results into the Google Sheet.

Cloud Run applications

Cloud Run is a Google service that takes a Docker image that you provide and makes it available over HTTP. You only pay when an HTTP request is made, so for a service like this that isn’t being used 24/7 it is very cheap. The actual cost will depend on how much you use it, but I would estimate less than $1 per month to run thousands of tests.

The first thing we need to do is make a Docker image that will perform the Lighthouse analysis when we make an HTTP request to it. Luckily for us there is some documentation showing how to run a Lighthouse audit programmatically on GitHub. The linked code saves the analysis to a file rather than returning the response over HTTP, but this is easy to fix by wrapping the whole thing in an Express app like this:

const express = require('express');
const app = express();
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

app.get('/', async (req, res) => {
    // Check that the url query parameter exists
    if(req.query && req.query.url) {
        // decode the url
        const url = decodeURIComponent(req.query.url)
        const chrome = await chromeLauncher.launch({chromeFlags: ['--headless', '--no-sandbox','--disable-gpu']});
        const options = {logLevel: 'info', output: 'html', port: chrome.port};
        const runnerResult = await lighthouse(url, options);

        await chrome.kill();
        // Return the full Lighthouse result (lhr) as JSON
        res.json(runnerResult.lhr)
    } else {
        // Without this branch the request would hang when no url is supplied
        res.status(400).json({error: 'Missing url query parameter'})
    }
});

const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log(`Listening on port ${port}`);
});

Save this code as index.js.

Then you will also need a file called package.json, which describes how to install the above application, and a Dockerfile so we can wrap everything up in Docker. All the code files are available on GitHub.

package.json
{
    "name": "lighthouse-sheets",
    "description": "Backend API for putting Lighthouse scores in Google sheets",
    "version": "1.0.0",
    "author": "Richard Fergie",
    "license": "MIT",
    "main": "index.js",
    "scripts": {
        "start": "node index.js"
    },
    "dependencies": {
        "express": "^4.17.1",
        "lighthouse": "^6.3"
    },
    "devDependencies": {}
}
Dockerfile
# Use the official lightweight Node.js 12 image.
# https://hub.docker.com/_/node
FROM node:12-slim

# Our container needs to have chrome installed to
# run the lighthouse tests
RUN apt-get update && apt-get install -y \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg \
  --no-install-recommends \
  && curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
  && echo "deb https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list \
  && apt-get update && apt-get install -y \
  google-chrome-stable \
  fontconfig \
  fonts-ipafont-gothic \
  fonts-wqy-zenhei \
  fonts-thai-tlwg \
  fonts-kacst \
  fonts-symbola \
  fonts-noto \
  fonts-freefont-ttf \
  --no-install-recommends \
  && apt-get purge --auto-remove -y curl gnupg \
  && rm -rf /var/lib/apt/lists/*


# Create and change to the app directory.
WORKDIR /usr/src/app

# Copy application dependency manifests to the container image.
# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).
# Copying this first prevents re-running npm install on every code change.
COPY package*.json ./

# Install production dependencies.
# If you add a package-lock.json, speed your build by switching to 'npm ci'.
# RUN npm ci --only=production
RUN npm install --only=production

# Copy local code to the container image.
COPY . ./

# Run the web service on container startup.
CMD [ "node", "--unhandled-rejections=strict","index.js" ]

Build the Docker image and then you can test things locally on your own computer like this:
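The exact build command depends on how you tag the image; assuming the lighthouse-sheets tag used below, something like this should work from the directory containing the Dockerfile. If you plan to push the image to the Google Container Registry later, you can instead build it with the full gcr.io/MY_PROJECT_ID/lighthouse-sheets tag used further down.

docker build -t lighthouse-sheets .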

First start the image:

docker run -p 8080:8080 lighthouse-sheets

And then test to see if it works:

curl -v "localhost:8080?url=https%3A%2F%2Fwww.example.com"

Or visit localhost:8080?url=https%3A%2F%2Fwww.example.com in your browser. You should see a lot of JSON.
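If you have jq installed, you can pull a single field out of that JSON instead of scanning it by eye; for example, the overall performance score (the same field the custom function reads later):

curl -s "localhost:8080?url=https%3A%2F%2Fwww.example.com" | jq '.categories.performance.score'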

The next step is to push your image to the Google Container Registry. For me, this is a simple command:

docker push gcr.io/MY_PROJECT_ID/lighthouse-sheets

But you might have to set up Docker authentication first before you can do this. An alternative is to use Google Cloud Build to build the image; this might work better for you if you can’t get the authentication working.
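As a rough sketch, the authentication setup is usually one command, and Cloud Build can build and push the image in a single step (substitute your own project ID):

gcloud auth configure-docker
gcloud builds submit --tag gcr.io/MY_PROJECT_ID/lighthouse-sheets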

Next you need to create a Cloud Run service with this docker image.

Open Cloud Run and click “Create service”

Name and adjust settings. You must give your service a name and configure a few other settings:

It is best to pick a region that is close to where most of the audience for your sites lives. Checking the site speed for a UK site from Tokyo won’t give you the same results as what your audience gets.

In order for you to call this service from Google Sheets it must allow unauthenticated invocations. If you’re worried about locking down and securing the service to prevent other people from using it, you will have to do this yourself by (for example) checking for an API secret in the HTTP request.
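For example, here is a minimal sketch of that approach for the Express app above; the API_KEY environment variable and x-api-key header are assumptions rather than part of the original code, and you would also need to send the same header from the Apps Script side via the UrlFetchApp options.

// In index.js, before the route handler: reject requests without the shared secret
app.use((req, res, next) => {
  if (req.get('x-api-key') !== process.env.API_KEY) {
    return res.status(401).json({error: 'unauthorized'});
  }
  next();
});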

Next you must select the container you made earlier. You can type in the name if you remember it or click “Select” and choose it from the menu.

Then click “Show Advanced Settings” because there is further configuration to do.

You need to increase the memory allocation because Lighthouse tests need more than 256MB to run. I have chosen 1GiB here but you might need the maximum allowance of 2GiB for some sites.

I have found that reducing the concurrency to 1 improves the reliability of the service. This means Google will automatically start a new container for each HTTP request. The downside is that this costs slightly more money.

Click “Create” and your Cloud Run service will be ready shortly.
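If you prefer the command line, roughly the same configuration can be sketched as a single gcloud command; the region, memory and concurrency values below just mirror the choices described above and should be adjusted for your own project:

gcloud run deploy lighthouse-sheets \
  --image gcr.io/MY_PROJECT_ID/lighthouse-sheets \
  --platform managed \
  --region europe-west2 \
  --memory 1Gi \
  --concurrency 1 \
  --allow-unauthenticated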

You can give it a quick test using the URL. For example:

curl -v "https://lighthouse-sheets-public-v4e5t2rofa-nw.a.run.app?url=https%3A%2F%2Fwww.example.com"

Or visit https://lighthouse-sheets-public-v4e5t2rofa-nw.a.run.app?url=https%3A%2F%2Fwww.example.com in your browser.

The next step is to write some Apps Script so you can use your new API from within Google Sheets.

Open a new Google Sheet and then open the Apps Script editor.

This will open a new tab where you can code your Google Sheets custom function.

The key idea here is to use the Apps Script UrlFetchApp service to perform the HTTP request to your API. Some basic code to do this looks like this:

function LIGHTHOUSE(url) {
  const BASE_URL = "https://lighthouse-sheets-public-v4e5t2rofa-nw.a.run.app"
  var request_url = BASE_URL+"?url="+encodeURIComponent(url)
  var response = UrlFetchApp.fetch(request_url)
  var result = JSON.parse(response.getContentText())
  return(result.categories.performance.score * 100)
}

The last line returns the overall performance score into the sheet. You could edit it to return something else. For example, to get the SEO score use result.categories.seo.score instead.

Or you can return multiple columns of results by returning a list like this:

[result.categories.performance.score, result.categories.seo.score]
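For example, here is a sketch of a variant that fills two adjacent columns (performance and SEO) from a single call; LIGHTHOUSE_SCORES is just an illustrative name, and the nested array makes it explicit that both values belong to one row:

function LIGHTHOUSE_SCORES(url) {
  const BASE_URL = "https://lighthouse-sheets-public-v4e5t2rofa-nw.a.run.app"
  var request_url = BASE_URL+"?url="+encodeURIComponent(url)
  var response = UrlFetchApp.fetch(request_url)
  var result = JSON.parse(response.getContentText())
  // One row, two columns: overall performance and SEO scores out of 100
  return [[result.categories.performance.score * 100, result.categories.seo.score * 100]]
}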

Save the file and then you will have a custom function available in your Google Sheet called LIGHTHOUSE.

The easiest way to get started with this is to copy my example Google Sheet and then update the code yourself to point at your own API and to return the Lighthouse results you are most interested in.

Enhance your spreadsheet know-how

The great thing about this method is that it can work for anything that can be wrapped in a Docker container and return a result within 30 seconds. Unfortunately Google Sheets custom functions have a timeout so you won’t have long enough to train some massive deep learning algorithm, but that still leaves a lot that you can do.

I use a very similar process for my Google Sheets addon Forecast Forge, but instead of returning a Lighthouse score it returns a machine learning powered forecast for whatever numbers you put into it.

The possibilities for this kind of thing are really exciting because in Search Marketing we have a lot of people who are very good with spreadsheets. I want to see what they can do when they can use all their spreadsheet knowledge and enhance it with machine learning.

This story first appeared on Search Engine Land.

https://searchengineland.com/how-to-show-lighthouse-scores-in-google-sheets-with-a-custom-function-343464

How to make your data sing

Stop reporting absolute numbers and put your data into context with ratios to engage your stakeholders with the smaller, but important, data points.

The post How to make your data sing appeared first on Marketing Land.

It is amazing how horrible a job many digital marketers do when reporting their work to clients. This includes both internal and external clients. Just think about how many marketing presentations and reports you’ve seen that simply contain screenshots from Google Analytics, Adobe Analytics, AdWords, Google Search Console, or reports from a backend ecommerce system. This isn’t the way to influence people with your data.

The biggest issue is that most marketers are not analytics people. Many marketers do not know how to collect all of the necessary data, how to leverage that data or, to a lesser degree, how to present it in a meaningful way. Typically, this is the job of a data analyst. The same way purchasing a pound of nails, a hammer and a saw doesn’t make you a carpenter, gaining access to your analytics reporting tool does not make you a data analyst. This is why many reports contain those convoluted screenshots and present data out of context, contributing little to no meaning.

Data out of context

Many reports merely report the facts (the data) with a number and no context. Data out of context is just data. For example, simply making a statement that AdWords generated 5,000 sessions to a website last month is meaningless without context. The number 5,000 is neither a good nor a bad data point without a reference point or a cost factor. It’s not until you add in other factors (open the box) that you can demonstrate whether or not your efforts were a success. If the previous month’s AdWords campaign only drove 1,000 sessions, then yes, without other data, 5,000 sessions looks good. But what if the cost to drive those additional 4,000 sessions was 10-fold the previous month’s spend? What if the previous month, AdWords drove 5,000 sessions but at double the spend?

It is only by adding in the additional information in a meaningful way that marketers can turn their reporting from a subjective presentation into an objective one. In order to do this, stop reporting absolute numbers and put your data into context with ratios. For example, when assessing cost per session, toss in a third factor (goal conversion, revenue, etc.) and create something similar to “Cost per Session : Revenue”. This will put the data into context. For example, if one campaign’s ratio is $1 : $100 (cost per session : revenue) vs. another’s $2.25 : $100, the effectiveness of the marketing spend becomes self-evident. In this example, it is clear the first result is superior to the second. By normalizing the denominator (creating the same denominator) the success or failure of an effort is easily demonstrated.

Data is boring

Yes, presenting data is boring. Simply looking at a mega table of a collection of data will cause many to lose interest and tune out any message you might be trying to present. The best way to avoid this is to make your data sing!

Make your data sing

Just like in the marketing world, the easiest way to grab someone’s attention and make your message sing is with imagery. Take all that great data in your mega table, and turn it into an easy to understand graph, or when necessary, simplified data tables. Even better, (if you can) turn it into interactive graphs. During your presentation, don’t be afraid to interact with your data.  With some guidance, your audience can dive into the data they are most interested in.

Learn to use data visualization tools like Data Studio, Tableau, DOMO, Power BI and others. Leveraging these tools allows you to take boring data and not only give it meaning but to make the data sing, which will turn you into a data hero.

Interacting with your data

Back at the end of July 2019, my firm acquired an electric vehicle. We wanted to know if the expense was worth it. Did the cost savings of using electricity over gasoline justify the difference in the ownership cost of the vehicle (lease payments plus insurance and maintenance costs)?

Below is a typical data type report with all the boring detailed data. This is a mega table of data and only those truly interested in the details will find it interesting. If presented with this table most would likely only look at the right-hand column to see the total monthly savings. If presented with just this data, many will get bored, and will look up and start counting the holes in the ceiling tiles instead of paying attention.

The following graphs demonstrate many of the ways to make this data sing, by putting all of the data into context through interactive graphics.

The above graph (page 1 of the report) details the cost of operating the electric vehicle. The first question we were always curious about was how much it was costing us to operate per 100 km. By collecting data on how much electricity was used to charge the car, how many kilometers we drove in a given month and the cost for that electricity, we are able to calculate the operating cost. In the graph you can easily see the fluctuation in operating costs, with costs going up in winter months (cost of operating the heater in the car) and again in June & July (cost of running the AC). You can also see the impact of increases in electricity prices.

To truly evaluate the big question “Was acquiring an electric vehicle worth it?” we’d need to estimate how much gasoline would have been consumed by driving the same distance against the average cost for gas during the same months. On page 2 of the report the data is now starting to sing as the difference in the savings of electrical over gas becomes clear. The chart becomes interactive and allows the user to hover over any column to reveal the data details.

To make the data truly sing, we’d need to compare not just the operating costs, but the costs of ownership. Do the savings in the operating costs justify the price difference between the vehicles? We know that the difference in lease costs, insurance and annual maintenance is in the range of $85-$90/month.

The above graph (page 3 of the report) demonstrates the impact of plummeting gas prices and the reduced driving during April 2020 due to the COVID-19 shutdown. In April 2020 a monthly savings of only approximately $41 was achieved. Therefore, there were no savings in owning a more expensive electric vehicle over an equivalent gas-powered vehicle (the difference in lease costs, insurance, etc. is in the range of $85-$90/month). While the data might not have been singing, it definitely was screaming out when we saw it.

Check out the entire report for yourself. It is accessible here so you can view all the pages/charts. The report is interactive, allowing you to hover over given months to see data details or even change the reporting date range.

By embracing not only data visualization but the visualization of meaningful data, we as marketers can raise the bar and increase engagement with our audience. Think of the four pages of this report: which page speaks most to you? Which way of presenting the data makes it sing for you? Odds are it was not the first table with all the detailed data.

Here’s an alternative to cookies for user tracking

Instead of having your analytics toolset read a cookie, pass a unique identifier associated with the user ID. Learn how to do it and keep it privacy compliant.

The post Here’s an alternative to cookies for user tracking appeared first on Marketing Land.

For over 20 years, website analytics has leveraged the use of persistent cookies to track users. This benign piece of code was a massive improvement over using a user’s IP address or even the combination of IP and browser. Since it was first introduced, the use of these cookies has become the focus of privacy legislation and paranoia. So what alternative is there?

If your website or mobile application requires the creation of user accounts and logins, it’s time to plan to transition away from cookie-based tracking to user ID tracking. In simple terms, instead of having your analytics toolset read a cookie, you pass a unique identifier associated with the user ID and then track the user via this identifier. Typically the identifier is the login ID.

Preparing for advanced tracking

Step 1

Ensure that the user ID you’ve deployed doesn’t contain Personally Identifiable Information (PII). Too often, sites require users to use their personal email address as a login ID, or even their account number. These are PII. If this is the case with your organization, then the trick is to assign a random unique client identifier to all existing accounts as well as to any future accounts as they are created.

Step 2

Have your developers start to push the user ID to the data layer. This way, the variable will be there waiting for your analytics software to read it once you’re ready to implement the new tracking method. Check your analytics software’s documentation for the variable name of this element, as it varies from one package to another.
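As a rough sketch (not from the original article), with Google Tag Manager this often amounts to a single dataLayer push on pages rendered for a logged-in user; the userId field name and the example value are assumptions and should match whatever variable your analytics software expects:

window.dataLayer = window.dataLayer || [];
// Push the non-PII customer identifier so the tag manager / analytics tag can read it
window.dataLayer.push({
  'userId': 'CUST-204187'  // random internal identifier, not an email address
});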

Step 3

Create a new view/workspace within your analytics software and configure it to track users by their user ID. Most analytics packages will still set a temporary cookie to track user behavior prior to login and then connect the sessions. This way you can see what a user does on your site even before they log in, as well as what visitors who never log in do.

Benefits of tracking users by user ID

Improved accuracy

The use of cookies is flawed in many ways. If users jump between devices (from desktop to mobile to a tablet, or from an office computer to a home computer), you can’t tell that it was the same user. This generates inflated unique user counts.

What if a user clears their cookies (perhaps they’re utilizing antivirus software that purges all cookies every time the browser is closed)? Once again this leads to inflated user count data.

By tracking a user via their user ID, you’ll obtain a more accurate count of unique users on your site.

Cross Device Tracking

This is perhaps one of the greatest benefits of tracking users by their user ID. You can now see how users interact with your site and/or mobile app, and how many use a combination of devices. Is there a preference for one type of device simply to add items to a shopping cart, only to have the order processed on another device?

Greater Analytics Insight

Armed with enhanced analytics data, new and potentially powerful insights can be harvested. With this new knowledge, you can better direct internal resources to focus and enhance the user experience and optimize the user flow for greater profits.

Real life examples

The following examples demonstrate the power of tracking users by their user ID. 

Overview – Device Overlap

The following image shows what percentage of accounts use which type of device and the percentage that use a combination of devices. For example, while 66.6% use only a desktop, 15.8% use a combination of Mobile and Desktop.

User Behavior – Device Flow

Reviewing the device flow leading up to a transaction can provide some of the greatest insights from this enhanced analytics tracking methodology.

While it might not be surprising that the two most common device paths (by number of users) were Desktop only and Mobile only, what was surprising to me and to the client was number 3. While the device path of Desktop -> Mobile -> Desktop is experienced by only approximately 3% of users, it accounts for approximately 8% of all transactions and over 9% of all revenue generated.

The minimal overall use of tablets was also a bit surprising. Of course the mix of devices does vary from client to client.

Assisted conversions

By dropping the use of cookies, the quality of assisted conversion data is significantly increased. For example, how many people read an email on a mobile device (opens can easily be tracked and attributed to a user ID), click into the site, browse around and review the items being promoted (maybe adding them to their shopping cart), then think about it for a bit before logging in later via a desktop to complete the transaction?

For example, from the above report, one can objectively assign a more accurate value to SEO efforts by examining the role organic search traffic played in generating sales. While organic search traffic was the immediate source of only 1.3% of total revenue (in this case), as an assist in the sales cycle it played a role in over 10.4% of generated revenue.

Enhanced user insights

In this example, the client allows its customers to also have multiple logins for their account. Essentially a user ID represents a customer/client and not a single user. The client operates in the B2B world where multiple people within its clients’ organizations may require unique logins and rights (who can order, who can just view product details, who can view or add to the cart but not place an order, etc.). By leveraging both tracking by user ID and recording a unique login id within their analytics, these additional insights can be obtained.

The above report not only breaks down revenue by division, but demonstrates how users in different divisions use the site differently. In Division 1, there is almost a 1:1 relationship between user IDs and login IDs. Yet in Division 3, the ratio is over 4:1, meaning that for every customer there is an average of over 4 logins being used.

How can they leverage this data for more effective marketing? By understanding that there are differences between divisions, carefully crafted email marketing can be created to target customers with multiple logins differently from single account/login customers.

A further dive into the data could also distinguish login IDs that are only product recommenders (only view products) from those who make specific product requests (add to the shopping cart but never place the order), from those who only process orders, and from those who do it all. Each one needs to be marketed to differently with different messaging to optimize the effectiveness of the marketing effort. It’s through detailed analytics that this audience definition can be obtained.

Is tracking by user ID right for me?

Making the decision to change how you track your users is a difficult choice. First, does your site/mobile app require users to log in at a reasonably early part of their journey? This is ideal for e-commerce sites and sites where the vast majority of user interaction takes place after the user logs into the site/application.

If you’re running a general website with the goal to merely share information and generate “contact us” type leads, the answer to making this switch is no.

If you have a combination of a general information site plus a registered user section, then yes you might want to consider making this change and perhaps just for the registered user section.

If you do make this change, don’t stop running your other analytics views/workspaces that use cookies. Keep them running. By operating two different views, you’ll eventually be able to reconcile the differences between the two, plus it makes it easier to explain to those you report to why you’ll be reporting a dramatic drop in the number of users. Of course, when you first make the switch, all users will be first-time users, so expect a major increase in new visitor traffic.

If you decide to make this change, don’t forget to review the impact of the change with your legal department. They will tell you if you need to update your privacy policy.

The importance of valuing latent orders to successful Amazon Sponsored Products management

Advertisers must consider the lag time between ad click and conversion as well as historic performance around key days to estimate shift.

The post The importance of valuing latent orders to successful Amazon Sponsored Products management appeared first on Marketing Land.

Sponsored Products is the most widely adopted Amazon search ad format, and typically accounts for more than six times as much ad spend as Sponsored Brands ads for the average Tinuiti (my employer) advertiser. As such, it’s incredibly important for advertisers to understand the full value that these ads drive.

Part of this is understanding the click-to-order period between when a user clicks on an ad and when that user ends up converting. Given how Amazon attributes orders and sales, it’s crucial that advertisers have an idea of how quickly users convert in order to value traffic effectively in real time.

Amazon attributes conversions and sales to the date of the last ad click

When assessing performance reports for Sponsored Products, advertisers should know that the orders and sales attributed to a particular day are those that are tied to an ad click that happened on that day. This is to say, the orders and sales reported are not just those that occurred on a particular day.

Advertisers viewing Sponsored Products conversions and sales in the UI are limited to only seeing those orders and sales attributed to the seven days following an ad click. However, marketers pulling performance through the API have greater flexibility and can choose different conversion windows from one to thirty days, which is how the data included in this post was assembled.

In the case of Sponsored Display and Sponsored Brands campaigns, performance can only be viewed using a 14-day conversion window, regardless of whether it is being viewed through the UI or through an API connection.

For marketers who wish to use a thirty-day conversion window in measuring Sponsored Products sales and conversions attributed to advertising, this means that it would take thirty days after the day in question in order to get a full picture of all conversions. Taking a look across Tinuiti advertisers, the first 24 hours after an ad click accounted for 77% of conversions and 78% of sales of all those that occurred within 30 days of the ad click in Q2 2020.

Unsurprisingly, the share of same-SKU conversions that happen in the first 24 hours is even higher, as shoppers are more likely to consider other products the further removed they become from an ad click.

For the average Amazon advertiser, we find that more than 20% of the value that might be attributed to ads happens more than one day after the ad click, meaning advertisers must bake the expected value of latent orders and sales into evaluating the most recent campaign performance. The math of what that latent value looks like varies from advertiser to advertiser.

Factors like price impact the length of consideration cycles

The time it takes for consumers to consider a purchase is naturally tied to the type of product being considered, and price is a huge factor. Taking a look at the share of 30-day conversions that occur more than one day after the click by the average order value (AOV) of the advertiser, this share goes up as AOV goes up. Advertisers with AOV over $50 saw 25% of orders occur more than 24 hours after the ad click in Q2 2020, whereas advertisers with AOV less than $50 saw 22% of orders occur more than 24 hours after the ad click.

Put simply, consumers usually take longer to consider pricier products before purchasing than they do cheaper products. Other factors can also affect how long the average click-to-order cycle is for a particular advertiser.

In addition to latent order value varying by advertiser, there can also be meaningful swings in what latent order value looks like during seasonal shifts in consumer behavior, such as during the winter holiday season and around Prime Day.

Key shopping days speed up conversion process

The chart below depicts the daily share of all conversions attributed within seven days of an ad click that occurred during the first 24 hours. As you can see, one-day order share rose significantly on Black Friday and Cyber Monday as users launched into holiday shopping (and dropped in the days leading into Black Friday).

After these key days, one-day share returned to normal levels, then rose in the weeks leading up to Christmas Day, peaking on December 21 at a level surpassing even what was observed on Cyber Monday. December 21 was the last day many shoppers could feel confident placing an order in time to receive it for the Christmas holiday, and it showed in how quick the click-to-purchase path was for many advertisers.

Of course, Amazon created its own July version of Cyber Monday in the form of Prime Day, and we see a similar trend around one-day conversion share around the summer event as well.

This year’s Prime Day has been postponed, but reports indicate that the new event might take place in October.

As we head into Q4, advertisers should look at how the click-to-order window shifts throughout key times of the year in order to identify periods in which latent order value might meaningfully differ from the average.

Conclusion

As with any platform, advertisers are often interested in recent Amazon Ads performance to understand how profitable specific days are. This is certainly important in determining shifts and situations in which budgets should be rearranged or optimization efforts undertaken, and that’s even more true now given how quickly performance and life are changing for many advertisers as well as the population at large.

However, in order to do so effectively, advertisers must take into consideration the lag that often occurs between ad click and conversion. Even for a platform widely regarded as the final stop for shoppers such as Amazon, more than 20% of 30-day conversions occur after the first 24 hours of the click, and this share can be much higher for advertisers that sell products with longer consideration cycles.

Further, advertisers should look to historic performance around key days like Cyber Monday and Prime Day to understand how these estimates might shift. Depending on product category, other holidays like Valentine’s Day or Mother’s Day might also cause shifts in latent order value.

Not all advertisers necessarily want to value all orders attributed to an ad over a month-long (or even week-long) attribution window equally, and particularly for products with very quick purchase cycles, it might make sense to use a shorter window. That said, many advertisers do find incremental value from orders that occur days or weeks removed from ad clicks, and putting thought into how these sales should be valued will help ensure your Amazon program is being optimized using the most meaningful performance metrics.

Analytics Audit 101: Identify issues and correct them to ensure the integrity of your data

Marketing and analytics professionals need to work together to not only increase the accuracy of our data, but to educate people about how to leverage it.

The post Analytics Audit 101: Identify issues and correct them to ensure the integrity of your data appeared first on Marketing Land.

One thing online businesses need to use as a cornerstone of their decision-making process is their digital analytics data (analytics data from a variety of sources, e.g., web analytics, Search Console, paid search, paid social, social media, etc.). Yet, according to an MIT Sloan Management Review survey, only 15% of more than 2,400 business people surveyed trust their data. While there is no analytics method available that will guarantee 100% accuracy of your digital analytics data, by auditing your data you can ensure the data is as accurate as possible. This will provide you with the confidence not only to trust your data but to leverage the information provided to make objective business decisions instead of subjective ones. It is that lack of trust that explains why a mere 43% (according to the same survey) say “they frequently can leverage the data they need to make decisions.” This low confidence in one’s data equals failure.

As marketing and analytics professionals, we need to work together to not only increase the accuracy of our data, but to educate people about the data and how to leverage it. The first step in this process is auditing your analytics configurations, identifying issues and correcting them to ensure the integrity of the data.

The analytics audit process

Step 1: Acknowledge analytics data isn’t perfect

When you start your audit process, gather together all those who have a stake in the outcome and find out why they don’t trust the data. Most likely they have good reasons. Don’t claim that your goal is to make it 100% accurate, because that is impossible. Use the opportunity to explain at a high level that analytics data captures a sampling of user activity and, for various technical reasons, no system will be perfect; that’s why they are most likely seeing data differences between things like their AdWords account and their web analytics data.

Use the example of taking a poll. Pollsters take a sample of 1,000-2,000 people out of a total population of over 350 million in the USA and then state their data is accurate within a few percentage points 4 out of 5 times. In other words, they are way off 20% of the time. However, businesses, politicians and the general public respond to and trust this data. When it comes to web analytics data, even at the low end of accuracy your data is likely still capturing an 80% sample, which is far more accurate than the data presented by pollsters, yet it is less trusted. Let the stakeholders know that, as a result of the audit and implementing fixes, you could be improving data capture accuracy to 90% or even 95%, and that is data you can trust 100%.

Step 2: Identify what needs to be measured

One of the biggest issues when it comes to analytics data is that the analytics software isn’t configured to collect only the correct data. The software becomes a general catch-all. While on the surface it sounds perfect to just capture everything (all that you can), when you cast a huge net you also capture a lot of garbage. The best way to ensure the right data is being captured and reported on is to review the current marketing and measurement plans. Sadly, too few organizations have these, so during your meeting make sure to ask the stakeholders which items they most want measured.

Identify and gather all the Key Performance Indicators (KPIs) currently being reported on. You’ll need this before you start the audit. Verify all KPIs are still valuable to your organization and not just legacy bits of data that have been reported for years. Sadly, many organizations are still reporting on KPIs that hold little to no value to anyone within the organization.

Step 3: Review the current analytics configuration

Now is the time to roll up those sleeves and get dirty. You’ll need admin-level access (where possible) to everything, or at a minimum full view rights. Next you’ll need a spreadsheet listing the specific items you need to review and ensure are configured correctly and, if not, a place to note what is wrong and a column to set a priority for getting them fixed.

The spreadsheet I’ve developed over the years has over 100 standard individual items to review, grouped into specific aspects of a digital analytics implementation; depending on the specific client, additional items may be added. The following eight are some of the most critical items that need to be addressed.

  1. Overall Analytics Integrity: Is your analytics code (or tag manager code) correctly installed on all pages of your website? Check and make sure your analytics code is correctly deployed. Far too often the JavaScript (or the code snippet) is located in the wrong place on the web page, or it is missing some custom configuration. Simply placing the code in the wrong place can cause some data not to be captured.

    Verify the code is on all pages/screens. Too often, sections of a site are missed or the code doesn’t work the same on all pages, resulting in lost data or potentially double counting.

    If you run both a website and an app, are their analytics data properly synced for data integration, or is it best to run them independently?
  2. Security: Review who has access to the analytics configuration and determine when an individual’s access and rights were last reviewed. You’d be surprised how many times it has been discovered that former employees still have admin access. This should be reviewed regularly, and a system has to be in place to notify the analytics manager when an employee departs the organization so their access can be terminated. While you may think all is fine because HR will cancel the former employee’s email address, they may still have access. Many analytics systems do not operate within your corporate environment and are cloud-based. As long as that former employee remembers their email address and the password for that specific analytics account, they’ll have access.
  3. Analytics Data Views: This is an especially critical feature when it comes to web analytics (e.g., Google Analytics, Adobe Analytics). Is your analytics system configured to segregate your data into at least three different views? At a minimum, you need “All Data” (no filtering), “Test” (including only analytics test data or website testing) and “Production” (only customer-generated data). In many organizations, they also segment their data further into “Internal Traffic” (staff using the website) and “External Traffic” (primarily external users).

    If these don’t exist, then it is likely you are collecting and reporting on test traffic and internal users. How your employees use a website is completely different from how customers use it, and their traffic should at a minimum be excluded or segmented into its own data set.
  4. Review Filters: Filters are a common tool used in analytics to exclude or include specific types of activity. Most filters don’t need to be reviewed too often, but some do need to be reviewed more frequently. The ones that need to be reviewed most often are those that include or exclude data based on a user’s IP address. IP addresses have a nasty habit of changing over time; for example, a branch location switches ISPs and receives a new IP address. When it comes to IP-based filters it is recommended they be reviewed every six months or, if that’s not possible, at least once per year. As a tip, after a filter has been reviewed and verified, rename it by adding the date it was last reviewed.

    Don’t forget to ensure that exclude filters are in place to exclude search engine bots and any third-party utilities used to monitor a website. This machine-generated traffic has a nasty habit of getting picked up and reported on, which skews all the data.
  5. Personally Identifiable Information (PII): Too many developers have a nasty habit of passing PII via the data layer, especially on ecommerce sites. They are unaware that this data can end up in the company’s analytics database. If you store PII on a third-party system, you need to reference this in your privacy policy, and even then you might be in breach of various privacy laws. One of the most common errors is passing a user’s email address as a URI parameter. The easiest way to check for this is to run a pages report for any URI containing an “@” symbol. Over the years, I’ve seen customers’ names captured and much more.

    If this happens, ideally your developers should fix what is causing it, but at a minimum you’ll need a filter to strip this type of information from the URI before it is stored in your analytics database (a rough sketch of this kind of redaction is shown after this list).
  6. E-commerce Data: This is the most common issue we hear from organizations: “The sales figures reported in the analytics don’t match our e-commerce system!” As stated above, analytics isn’t perfect, nor should it be treated as a replacement for an e-commerce/accounting backend. However, if you are capturing 85-95% (and possibly higher) of the transactional data then you can effectively leverage this data to evaluate marketing efforts, sales programs, A/B tests, etc.

    From an e-commerce perspective, the easiest way to audit this is to compare the reported data in a given time period to what the backend system reports. If it is near 90%, then don’t worry about it. If it is below 80%, you have an issue. If it is somewhere in between, then it is a minor issue that should be looked into but is not a high priority.
  7. Is everything that needs to be tracked being tracked: What does your organization deem important? If your goal is to make the phones ring, then you need to be tracking clicks on embedded phone numbers. If your goal is form-driven submissions, are you tracking form submissions correctly? If you’re trying to direct people to local locations, are you capturing clicks on location listings, on embedded maps, etc.?

    What about all those social media icons scattered on your website to drive people to your corporate Twitter, LinkedIn, Facebook accounts? Are you tracking clicks on those?
  8. Campaigns: Is there a formal process in place to ensure links on digital campaigns are created in a consistent manner? As part of this are your marketing channels correctly configured within your analytics system?
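
To make item 5 concrete, here is a minimal sketch (not from the original article) of the kind of redaction it describes: a small JavaScript helper that masks anything that looks like an email address in a page path before it is sent to analytics. The function name and regular expression are illustrative only; most analytics tools can achieve the same thing with a built-in search-and-replace filter.

// Replace anything that looks like an email address in a URI with a placeholder
function redactEmailFromPath(path) {
  var emailPattern = /[^\/?&=@\s]+@[^\/?&=\s]+/g;
  return path.replace(emailPattern, 'REDACTED');
}

// Example: '/confirm?email=jane.doe@example.com' becomes '/confirm?email=REDACTED'
var cleanPath = redactEmailFromPath(window.location.pathname + window.location.search);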

You now have an outline for where to start your analytics audit. Think of your organization’s analytics data and reporting systems like a car. It always seems to be working fine until it stops working. You need to take your car in from time to time for a tune-up. This is what an analytics audit is. The audit will identify things that need to be fixed immediately (some small and some big), plus other items that can be fixed over time. If you don’t fix the items discovered during the audit, your analytics system won’t operate optimally and people won’t want to use it. How frequently should an analytics audit be conducted after everything has been fixed? Unlike a car, there is no recommended set amount of time between audits. However, every time your digital properties undergo major updates, or if there have been a series of minor updates that can easily be viewed together as a major update, it is time to repeat the audit process.

Soapbox: Is my data telling me the truth?

How email security programs affected my perception of data integrity.

The post Soapbox: Is my data telling me the truth? appeared first on Marketing Land.

As marketers, we face the overwhelming challenge of demonstrating proof that our tactics are effective. But how can we convince management if we are not convinced of our own data?

Here’s the reality, which I recently learned for myself: If you’re running email marketing, it’s very likely that your performance reports are not disclosing the full truth… inflated CTRs (click-through rates) and open rates being the main culprits. 

Email security programs – loved by recipients, hated by senders

Barracuda. SpamTitan. Mimecast. Email bots that serve a single purpose: to protect users from unsafe content. These programs scan inbound emails and attachments for possible threats, including viruses, malware, or spammy content, and click on links to test for unsafe destinations.

For email marketers, this creates several challenges:

  • Inflated CTRs and open rates due to artificial clicks and opens 
  • Disrupting the sales team’s lead follow-up process as a result of false signals
  • Losing confidence in data quality (quantity ≠ quality)

Real or artificial clicks?

In reviewing recent email marketing performance reports, I noticed an unusual pattern: Some leads were clicking on every link in the email…header, main body, footer, even the subscription preferences link — yet they were not unsubscribing. Not only that, but this suspicious click activity was happening almost immediately after the email was deployed. I speculated that these clicks were not “human”, but rather “artificial” clicks generated from email filters. 

Hidden pixels are your frenemy

To test my hypothesis, I implemented a hidden 1×1 pixel in the header, main body, and footer sections of the next email. The pixels were linked and tagged with UTM tracking — and only visible to bots.

Sure enough, several email addresses were flagged as clicking on the hidden pixels.

All that brings me back to the question of whether or not marketing data can be trusted. It’s critical to “trust, but verify” all data points before jumping to conclusions. Scrutinizing performance reports and flagging unusual activity or patterns helps. Don’t do an injustice to yourself and your company by sharing results that they want (or think they want) to hear. Troubleshoot artificial activity and decide on a plan of action:

  • Use common sense and always verify key data points
  • Within your email programs, identify and exclude bots from future mailings
  • Share results with management, sales, and other stakeholders

A word of caution… 

Tread carefully before you start implementing hidden pixels across your email templates. Hiding links might appear to email security programs as an attempt to conceal bad links. You could be flagged as a bad sender, so be sure to run your email through deliverability tools to check that your sender score isn’t affected.

As the saying goes, “There are three kinds of lies: lies, damned lies, and statistics.” Sigh.

With different solutions circulating within the email marketing community, this is likely the “best solution out of the bad ones”. It all depends on what works best with your scenario and business model. 

Godspeed, marketer! 

Soapbox: Marketing data overload? Answer 2 questions that will instantly help

Success means having the answer to why the data matters and what next steps are needed to turn it into action.

The post Soapbox: Marketing data overload? Answer 2 questions that will instantly help appeared first on Marketing Land.

Marketers are continuously dealing with a never-ending tsunami of data from multiple sources and wondering what to do with it all. It’s easy to come down with a bad case of “analysis paralysis” if you aren’t careful.

Whenever feeling overwhelmed by too much data, I’m reminded of Andy Pearson, CEO at PepsiCo and named by Fortune in 1980 as “one of the ten toughest bosses in America.” That distinction was well deserved because God help anyone in a meeting with Andy who wasn’t well prepared. If you just shared a bunch of facts and figures with him, he would scream out in the boardroom, “What? So what? Now what?!” Many PepsiCo careers were made or destroyed based on what the presenter did next.

I have a vivid image of Andy in my mind any time I’m in a meeting where too much data is being shared. I get the urge to (politely) ask Andy’s questions to the presenter: “What? So what? Now what?!” Marketers deal with unprecedented amounts of data today, so it’s critical to keep this admonition top-of-mind when analyzing and sharing key data with your team and management.

The “what” is in ample abundance – audience demographics, attitudes, behaviors, digital preferences, competitive data and includes all the operational and financial data at your disposal. The key to success needs to be on the “so what” and “now what” if all that wonderful data is going to be turned into consumer insights and meaningful action to build your business.

Soapbox is a special feature for marketers in our community to share their observations and opinions about our industry.

Don’t misinterpret the data: Evidence-based advertising needs experience-based context

Data will always be the foundation of our evidence, but we need to consider our experience to structure and interpret this raw information.

The post Don’t misinterpret the data: Evidence-based advertising needs experience-based context appeared first on Marketing Land.

“That sounds like a great idea, but what does the data tell us?” In recent years, the principle of evidence-based advertising has taken hold of the industry, bringing the tension between advertising as a science and an art to the foreground. For some, like Professor of Marketing Science Byron Sharp, the answer is clear: in the world of Big Data, evidence must take precedence over conventional wisdom. But what exactly is evidence, and how is it best used?

Broadly speaking, evidence is information that provides a foundation for some belief. As such, raw data can certainly be used as evidence, but so can intuition. After all, I don’t need to consult a spreadsheet to be confident that the sun will rise each morning; my belief is founded on years of experience stored in a mental “database.” Advertising is no different.

This is not to say that hard numbers are gratuitous, or that industry expertise holds all the answers. We must recognize, however, that there are different kinds of evidence, and each serves a distinct function in the analytic and decision-making process. To tease out these distinctions, we can turn to the academic world, where evidence is precisely classified.

In academic work, evidence is divided into three categories: primary, secondary, and tertiary. These are distinguished by 1) where they come from, and 2) how they are stored. Primary sources are typically original accounts or physical evidence, while secondary and tertiary evidence layers interpretation and analysis over primary information.

Primary evidence

Raw consumer data is primary evidence: it expresses documented consumer behavior or literal consumer responses to some prompt. The results of ad effectiveness research, like breakthrough, branding, and brand-impact are all primary evidence; assuming participants responded honestly, they reflect the ad’s impact on consumer perspectives.

Unlike most primary evidence, ad data is typically modeled to make interpretation clear: likeability goes up when more respondents like the ad, brand, or product. But knowing how an ad affects research metrics isn’t in itself an insight, much less a strategy. Consequently, primary evidence must always be subjected to analysis, whose product is secondary evidence.

Secondary evidence

Analysis synthesizes research results into insights – actionable data-driven learnings – through deductive or inductive logic. The resulting secondary evidence is an interpretation of primary data: it uses facts to support conclusions about why consumers responded as they did, and what that suggests about the ad’s performance.

Secondary sources represent what analysts think primary evidence (consumer responses) meant; they can guide our interpretation thereof, but their validity is contingent on whether the analysis was correct. A quarterly report, for example, may draw on primary evidence to more accurately represent consumer sentiment, but only by taking the risk of misinterpreting the data.

Tertiary evidence

This article is a primary source, as it expresses my perspective on evidence-based ads, but my article on the science of storytelling is a tertiary source: it brings together different analyses to make a broader point about the intersection of neuroscience and advertising. A year-end report that synthesizes quarterly analysis into an overarching narrative is also tertiary evidence.

Tertiary sources are an aggregation of primary and secondary data, forming a curated body of knowledge; they allow us to compare analyses, and to develop a more authoritative interpretation. Our experience in the ad industry can be thought of as tertiary evidence: all the ads and research we have been exposed to nuance our interpretation of new data.

Using primary evidence

Despite its name, primary evidence is almost never the right place to start an investigation. Secondary and tertiary evidence can help us define our questions and develop hypotheses before we begin collecting primary data. When we finally do sit down with a dataset, we should again look to secondary and tertiary sources for context.

In fact, this is a process that most of us undertake without thinking. We design research with the aid of past successes and failures, and when we analyze results, we look for patterns and flags that we’ve seen before. Unfortunately, in doing so implicitly we may draw on personal assumptions or misguided conventional wisdom instead of evidence-based learnings.

It is here that the balancing act between evidence and experience begins. Disregarding experience squanders years of information we have collected in our mental database; taking its veracity and logical coherence for granted, however, can lead us to baseless conclusions. Secondary documents draw on primary evidence to help substantiate and vet our intuition.

Using secondary evidence

Secondary evidence is both the product of analysis, and a vital tool in the analytical process itself. When we interpret primary data, we create secondary evidence. As primary evidence is rarely contextualized, and often does not lend itself immediately to interpretation, it is usually necessary to draw on existing secondary sources to guide or corroborate our analysis. 

When looking at consistent consumer behavior, for example, we understand it likely represents a trend. We’ve seen this type of pattern before: regular observations of some phenomenon have led us to draw a reliable conclusion. In this case, we can rely on evidence-based intuition to analyze the data, but more complex conclusions may require additional evidence.

As noted, our intuition itself is often implicitly secondary evidence. It is important to remember, though, that all secondary evidence must point back to primary sources. It may seem obvious, but this distinction is what separates analysis from assumption. Tertiary evidence is a good way to evaluate secondary sources and the analysis they rely on.

Using tertiary evidence

Tertiary evidence is often a review of secondary sources that evaluates their merit and posits a higher-order conclusion. A collection of case studies in some research methodology or advertising practice is a prime example. Consistency in their results may suggest the existence of an advertising principle, while discrepancies could indicate methodological errors.

The movement from primary to tertiary evidence is one of synthesis and abstraction. This can be extremely useful, but it also distances us from raw data: primary evidence. Tertiary sources are typically arranged to evidence a specific argument. Necessarily, this is to the exclusion of other insights that may be gleaned from the initial evidence.

In other words, tertiary sources result from the analysis of analysis. They can be useful both in evaluating completed research and identifying new research questions. Above all, they unite historical learnings to substantiate industry principles. By evidencing or challenging conventional wisdom, they advance our understanding of consumer behavior and advertising techniques.

Putting it all together

When we commission a new piece of research, we should always begin with extant secondary and tertiary evidence. Whether it is stored in slide decks or our memory, this historical research can inform what questions we ask, and how. That way, we won’t collect data that isn’t useful, and the data we do collect will be optimized to support the answers we need.

Through analysis, we then convert these primary sources into secondary evidence. Again, our interpretation of the raw data will be nuanced by our experiences with similar ads or research. The insights we develop will thus yield usable and reliable conclusions about the ad, brand, or product in question. To present them, we should group them into a piece of tertiary evidence.

Much like the Pyramid Principle, different kinds of evidence build on one another to add meaning and mitigate misinterpretation. Hard data will always be the foundation of our evidence, but without drawing on the secondary and tertiary evidence of experience we are left without a reliable way to structure and interpret this raw information.

The post Don’t misinterpret the data: Evidence-based advertising needs experience-based context appeared first on Marketing Land.

The data behind incrementality on Amazon

The key to driving incremental sales combines segmented bidding strategies, contextualizing ACoS metrics and proper campaign structure.

The post The data behind incrementality on Amazon appeared first on Marketing Land.

Every marketer worth their salt is concerned about incrementality. Companies have been, and should be, reevaluating their budgets, channels and service providers based on their ability to drive sales from advertising that they wouldn’t have captured otherwise.

When it comes to Amazon, this issue is particularly important, because current SERPs can naturally cannibalize an otherwise organic sale with an ad for the same product that shows up prior to the organic result. For marketers, the key to managing this issue and driving incremental sales is through a combination of segmented bidding strategies for brand, category, and competitor key terms, contextualizing Advertising Cost of Sale metrics, and proper campaign structure.

The non-incremental trap of branded keywords

Each of the larger term segments – branded, generic, and competitor – needs to be thought of in terms of the consumer’s place in the purchase funnel:

  • Brand keywords capture shoppers deepest in your purchase funnel
  • Competitor keywords capture shoppers that are deep in someone else’s purchase funnel
  • Category keywords capture shoppers at the top of your purchase funnel 

On Amazon, when a user searches for some variation on a brand name, the A9 algorithm generally does whatever it can to surface as many of that brand’s products as possible on the search page. That includes top sellers, which will get the top organic placements, as well as the long tail, which will occupy spots further down the page. Amazon takes the intent of the user – “I want to see products from this brand” – very seriously. This is borne out in the underlying data: it is much, much harder to rank organically on a category term than on a branded term.

This shows why it’s a real challenge for your brand’s keywords to be incremental. It’s almost guaranteed that your relevant products will show up organically on the SERP – with your top sellers showing up high in the results. Additionally, consumers are more likely to click on the top few results of a branded search, as compared to generic searches.

The biggest takeaway here is that advertising your top products on your own branded terms is a particularly bad practice. You’re capturing sales via paid placements you were likely to capture organically anyway. If brand defense is imperative, consider advertising new or longer-tail products on your branded terms instead. That way you’re defending your brand term, but you’re doing so by helping to sell products that aren’t yet ranking well organically, while still not cannibalizing sales of your top products.

Maximizing sales across category and competitor terms

In terms of incrementality, nothing is better than capturing a sale from your competitor. However, you’re likely to find that the ACoS of competitor keywords is significantly worse than that of generic or category keywords.

The best strategy here depends a lot on your competitive landscape. Conquesting your competitors’ terms means having a deep understanding of which terms you can reasonably bid against successfully. Terms relating to stronger competitors with deeper brand loyalty and recognition may call for a less aggressive strategy to control costs, while it may be well worth your while to bid forcefully on terms related to relatively weak competitors, where it’s easier to pick off customers with your top products.

Unlike branded terms, category keywords make up by far the largest universe of terms on Amazon, with new relevant terms developing over time. In this larger and more dynamic environment, it’s important to set bids based on the expected conversion rate of a given term.

The issue here, which I’ve written about in a previous column, is that over 90% of category keywords do not get more than one click per day at a constant bid. Additionally, with roughly 80 clicks needed to get a confident estimate of a keyword’s true conversion rate, running this test could take nearly two and a half months. Meanwhile, conversion rates change roughly every month – as one extreme example, think of the expected conversion rate for “easter candy” in April versus May or June.
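
To make the waiting-time math concrete, here is a minimal sketch, assuming the roughly 80-click threshold cited above and a hypothetical keyword that earns about one click per day:

// The 80-click figure comes from the paragraph above; clicksPerDay is a
// hypothetical value for a typical long-tail category keyword.
const clicksNeededForEstimate = 80; // clicks for a confident conversion-rate estimate
const clicksPerDay = 1;             // most category keywords see about one click per day

const daysRequired = clicksNeededForEstimate / clicksPerDay;
const monthsRequired = daysRequired / 30;

console.log(`~${daysRequired} days (~${monthsRequired.toFixed(1)} months) to estimate one keyword's conversion rate`);
// A multi-month wait, while the conversion rate itself shifts roughly every month.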

To succeed with category keywords you must have an exploration strategy that values the rate of data acquisition. At my current workplace, we use a probabilistic binary search model that adjusts bids from very high to low in order to more quickly determine the expected conversion rate.
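
The model itself is proprietary, so the sketch below is only a loose illustration of the general idea, not the actual implementation: start the bid high to buy click data quickly, then repeatedly halve the bid interval toward the bid implied by the observed conversion rate and a target ACoS. All names and numbers are hypothetical.

// Illustrative only. Assumes we can observe clicks and orders per keyword and
// want to converge on a bid that hits a target ACoS (ad spend / ad sales).
function nextBid({ lowBid, highBid, clicks, orders, price, targetAcos }) {
  // Not enough data yet: stay at the top of the range to accumulate clicks quickly.
  if (clicks < 20) {
    return { bid: highBid, lowBid, highBid };
  }

  const conversionRate = orders / clicks;
  // The cost per click that would exactly hit the target ACoS at this conversion rate.
  const breakEvenBid = conversionRate * price * targetAcos;
  const midpoint = (lowBid + highBid) / 2;

  // Keep the half of the interval that contains the break-even bid.
  if (breakEvenBid < midpoint) {
    return { bid: midpoint, lowBid, highBid: midpoint };
  }
  return { bid: midpoint, lowBid: midpoint, highBid };
}

// Example: $25 product, 30% target ACoS, 40 clicks and 4 orders observed so far.
console.log(nextBid({ lowBid: 0.2, highBid: 3.0, clicks: 40, orders: 4, price: 25, targetAcos: 0.3 }));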

Outside of this more refined statistical method, what marketers can do to better find and exploit meaningful keywords on Amazon is deploy a more granular campaign structure. Because keywords define audience segments, each audience segment needs a different set of considerations in terms of aggressiveness and expectations.

To spell this out: brand keyword campaigns should have high ROAS expectations, focus on emerging products, and be tested for incrementality. Competitor keyword campaigns should have the lowest ROAS expectations and focus on launching and dominant products. Finally, category keyword campaigns should be expected to deliver a break-even ROAS and must be handled with a strong exploration strategy. These overarching themes are important to keep in mind as you scale your marketing efforts on Amazon because they are critical to driving incremental sales growth.
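
One way to make these defaults concrete is to encode them per keyword segment. The sketch below is purely illustrative; the labels and notes simply restate the guidance above and are not benchmarks from the article.

// Purely illustrative defaults per keyword segment.
const segmentDefaults = {
  brand: {
    roasExpectation: 'high',
    productFocus: 'emerging and long-tail products',
    note: 'test for incrementality; avoid advertising top sellers here',
  },
  competitor: {
    roasExpectation: 'lowest',
    productFocus: 'launching and dominant products',
    note: 'bid harder against weaker competitors, more conservatively against strong brands',
  },
  category: {
    roasExpectation: 'break-even',
    productFocus: 'full catalog',
    note: 'requires an exploration strategy that values the rate of data acquisition',
  },
};

console.log(segmentDefaults.category.roasExpectation); // break-even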

The post The data behind incrementality on Amazon appeared first on Marketing Land.

Align your marketing plan with your analytics measurement plan

Audit your analytics on a regular basis to ensure the data you are collecting is as accurate as possible and that all data that should be collected is reported.

The post Align your marketing plan with your analytics measurement plan appeared first on Marketing Land.

All great marketing departments and teams should have a marketing plan and know it intimately. What is surprising, when I conduct an analytics audit and the first thing I ask for is a copy of the marketing plan, is how often I’m presented with a “deer in headlights” look, or at best given the response: “Oh, we have one, but we haven’t updated it in years. We just know it!”

Why is a current marketing plan, disseminated and known by your entire marketing team and other teams within your organization, so critical, and what does it have to do with your corporate analytics? For a simple reason: without one, how does the marketing team actually know what it should be doing? More importantly, how can success be measured?

The key to marketing success is to merge a marketing plan with a measurement plan into a unified plan.

Key measurement elements of a marketing plan

The first step in developing or validating a marketing plan is ensuring the marketing department’s mission statement aligns with the corporate mission statement. This is where many marketing teams make their first mistake. If the two don’t align properly, then how can the marketing department effectively obtain corporate buy-in and ensure their marketing efforts are effective in helping the organization meet its overall goals?

Once the marketing department has validated its mission statement, it needs to define specific objectives. Then, it is time to merge these elements with a measurement plan that defines specific marketing tactics that can be planned, budgeted for, approved, executed and measured.

Clearly define these tactics. Examples include:

  • More posts (paid and non-paid) on specific social apps (e.g., Facebook, Twitter, Reddit)
  • More engagement with the public on social media sites
  • Creation of branded ads 

Perhaps the most difficult task in this process is defining the appropriate Key Performance Indicators (KPIs) to gauge how well these tactics perform. Some KPIs in support of the above examples might be:

  • Increase in branded organic search traffic
  • Overall increase in organic search traffic
  • Increased activity/engagement on corporate social media accounts including click-throughs on posts
  • Increased click-through rates on branded ads
  • Increase in online sales by specific channels (organic search, branded campaigns, social accounts, etc.)

When defining your KPIs, keep in mind the following four factors that make a KPI useful:

  1. Must utilize obtainable data
  2. Must relate directly to the marketing objective
  3. Should not be an absolute value but a ratio or comparison. For example, a KPI for improved customer engagement might be an increase in average session duration from one period to the next, and a KPI for campaign effectiveness might be the average number of orders per 100 sessions by campaign (see the sketch after this list)
  4. Must be easily reportable and understandable by the target audience.
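
As a rough illustration of factor 3, the sketch below computes two ratio-style KPIs, orders per 100 sessions and the period-over-period change in average session duration, instead of absolute totals. The field names and figures are hypothetical.

// Hypothetical per-period rollups; pull the real numbers from your analytics tool.
const periodOne = { sessions: 12000, orders: 240, totalSessionSeconds: 1380000 };
const periodTwo = { sessions: 13500, orders: 297, totalSessionSeconds: 1620000 };

// KPI as a ratio: orders per 100 sessions, comparable across periods or campaigns of any size.
const ordersPer100Sessions = (p) => (p.orders / p.sessions) * 100;

// KPI as a comparison: change in average session duration, period over period.
const avgDuration = (p) => p.totalSessionSeconds / p.sessions;
const durationChangePct = ((avgDuration(periodTwo) - avgDuration(periodOne)) / avgDuration(periodOne)) * 100;

console.log(ordersPer100Sessions(periodOne).toFixed(1)); // 2.0 orders per 100 sessions
console.log(ordersPer100Sessions(periodTwo).toFixed(1)); // 2.2 orders per 100 sessions
console.log(`${durationChangePct.toFixed(1)}% change in average session duration`); // 4.3% change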

With the KPIs in place, ensure your analytics account is configured correctly and that you can accurately – without too much effort – report on the identified KPIs. Have all the required channels been defined? Don’t just rely on the default channels from your analytics tool. Also make sure the marketing activities that support the marketing department’s mission statement are realistic and approved.

Marketing measurement plans are typically laid out in a grid format. Do a quick search and you’ll discover many suggested layouts. My favorite is a simple one.

With a merged marketing measurement plan in place, the next task is to align your corporate analytics to capture the appropriate data. Making that data reportable then becomes an easier task, as does getting buy-in from other departments.

Imagine if your marketing plan didn’t include a KPI on sales by channel. Could you get your IT group to make the necessary coding changes to push transaction values to your analytics tool? Frequently they simply default to saying: “Sales information is available from our e-commerce tool.” True, but while you can extract sales data from a backend tool, in virtually all cases you can’t attribute those sales to specific marketing efforts. Only with an approved marketing plan in place can you apply leverage and get this data integrated with your analytics.

What about custom channels? Do you need to segregate paid social from non-paid social (your team’s participation on social sites on your own posts) and from public sharing of your content? Yes: these are three unique social channels that should be tracked and reported on, if your company is utilizing these channels as part of its marketing plan.
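
As a loose sketch of what those custom channel definitions might look like, the function below buckets a visit into one of the three social channels based on its source and medium. The medium values are assumptions used to illustrate the idea; adapt them to your own tagging scheme.

// Hypothetical classification rules; "paid_social" and "owned_social" are example
// utm_medium values, not a standard.
function socialChannel({ source, medium }) {
  const socialSources = ['facebook', 'twitter', 'reddit', 'linkedin', 'instagram'];
  if (!socialSources.includes(source.toLowerCase())) return 'Not social';

  if (medium === 'cpc' || medium === 'paid_social') return 'Paid Social';
  if (medium === 'owned_social') return 'Owned Social'; // your team posting on your own accounts
  return 'Earned Social';                               // the public sharing your content
}

console.log(socialChannel({ source: 'facebook', medium: 'paid_social' })); // Paid Social
console.log(socialChannel({ source: 'twitter', medium: 'owned_social' })); // Owned Social
console.log(socialChannel({ source: 'reddit', medium: 'referral' }));      // Earned Social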

You can create custom analytics reports that demonstrate how effective various marketing efforts are in supporting not only the marketing department’s mission statement but also the corporate mission statement. This allows you to evaluate and adjust objectively, and to demonstrate to the C-suite just how successful these efforts are.

Remember that a marketing mission statement is a living and breathing thing. The world of online marketing is constantly changing, as are the tools that help execute marketing plans and those that measure results. Plan on reviewing the mission statement at a minimum annually, and possibly quarterly or semi-annually if appropriate for the organization. Don’t forget to have your analytics audited by an independent auditor on a regular basis to ensure the data being collected is as accurate as possible, and that all data that should be collected is reported.

The post Align your marketing plan with your analytics measurement plan appeared first on Marketing Land.