How to show Lighthouse Scores in Google Sheets with a custom function

Learn how to use machine learning to streamline your reporting workflows right within Google Sheets.


Automation and machine learning have tremendous potential to help all of us in marketing. But at the moment a lot of these tools are inaccessible to people who can’t code or who can code a bit but aren’t really that comfortable with it.

What often happens is that one or two people in the office are comfortable writing and editing code, and these people produce scripts and notebooks that everyone else runs. The workflow looks a bit like this: someone requests an analysis, the person who can code runs the script, and then the output has to be formatted before it can go into a report.

I will show you a simple way to streamline this workflow to remove the steps where people need to run a script and format the output. Instead they can run the automation directly from within Google Sheets.

The example I will show you is a Sheets custom function that returns the Lighthouse score for a URL.

The method I will show you isn’t the only way of doing this, but it does illustrate a much more general technique that can be used for many things, including machine learning algorithms.

There are two parts:

  1. A Google Cloud Run application that will do the complicated stuff (in this case run a Lighthouse test) and that will respond to HTTP requests.
  2. An Apps Script custom function that will make requests to the API you created in step 1 and return the results to the Google Sheet.

Cloud Run applications

Cloud Run is a Google service that takes a Docker image you provide and makes it available over HTTP. You only pay when an HTTP request is made, so for a service like this that isn't in use 24/7 it is very cheap. The actual cost will depend on how much you use it, but I would estimate less than $1 per month to run thousands of tests.

The first thing we need to do is make a Docker image that will perform the Lighthouse analysis when we make an HTTP request to it. Luckily for us, there is documentation on GitHub showing how to run a Lighthouse audit programmatically. The linked code saves the analysis to a file rather than returning the response over HTTP, but this is easy to fix by wrapping the whole thing in an Express app like this:

const express = require('express');
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

const app = express();

app.get('/', async (req, res) => {
    // Check that the url query parameter exists
    if(req.query && req.query.url) {
        // Decode the url to be audited
        const url = decodeURIComponent(req.query.url);
        // Launch headless Chrome; --no-sandbox is needed inside the container
        const chrome = await chromeLauncher.launch({chromeFlags: ['--headless', '--no-sandbox', '--disable-gpu']});
        const options = {logLevel: 'info', output: 'html', port: chrome.port};
        const runnerResult = await lighthouse(url, options);

        await chrome.kill();
        // Return the full Lighthouse result (lhr) object as JSON
        res.json(runnerResult.lhr);
    } else {
        // Without this branch the request would hang forever when no url is supplied
        res.status(400).json({error: 'Missing url query parameter'});
    }
});

const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log(`Listening on port ${port}`);
});

Save this code as index.js.

Then you will also need a file called package.json, which describes how to install the above application, and a Dockerfile so we can wrap everything up in Docker. All the code files are available on GitHub. (Because index.js requires chrome-launcher directly, it is listed as an explicit dependency below rather than relying on the copy that ships inside Lighthouse.)

package.json
{
    "name": "lighthouse-sheets",
    "description": "Backend API for putting Lighthouse scores in Google sheets",
    "version": "1.0.0",
    "author": "Richard Fergie",
    "license": "MIT",
    "main": "index.js",
    "scripts": {
        "start": "node index.js"
    },
    "dependencies": {
        "express": "^4.17.1",
        "lighthouse": "^6.3"
    },
    "devDependencies": {}
}
Dockerfile
# Use the official lightweight Node.js 12 image.
# https://hub.docker.com/_/node
FROM node:12-slim

# Our container needs to have chrome installed to
# run the lighthouse tests
RUN apt-get update && apt-get install -y \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg \
  --no-install-recommends \
  && curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
  && echo "deb https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list \
  && apt-get update && apt-get install -y \
  google-chrome-stable \
  fontconfig \
  fonts-ipafont-gothic \
  fonts-wqy-zenhei \
  fonts-thai-tlwg \
  fonts-kacst \
  fonts-symbola \
  fonts-noto \
  fonts-freefont-ttf \
  --no-install-recommends \
  && apt-get purge --auto-remove -y curl gnupg \
  && rm -rf /var/lib/apt/lists/*


# Create and change to the app directory.
WORKDIR /usr/src/app

# Copy application dependency manifests to the container image.
# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).
# Copying this first prevents re-running npm install on every code change.
COPY package*.json ./

# Install production dependencies.
# If you add a package-lock.json, speed your build by switching to 'npm ci'.
# RUN npm ci --only=production
RUN npm install --only=production

# Copy local code to the container image.
COPY . ./

# Run the web service on container startup.
CMD [ "node", "--unhandled-rejections=strict","index.js" ]

Build the Docker image and then you can test things locally on your own computer like this:
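Assuming the Dockerfile and the code are all in the current directory, building and tagging the image is one command (lighthouse-sheets is just the tag used by the commands below):

docker build -t lighthouse-sheets .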

First start the image:

docker run -p 8080:8080 lighthouse-sheets

And then test to see if it works:

curl -v "localhost:8080?url=https%3A%2F%2Fwww.example.com"

Or visit localhost:8080?url=https%3A%2F%2Fwww.example.com in your browser. You should see a lot of JSON.
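If you have jq installed, you can also pull out just the overall performance score from that JSON; this is the same .categories.performance.score path that the Apps Script function below reads:

curl -s "localhost:8080?url=https%3A%2F%2Fwww.example.com" | jq '.categories.performance.score'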

The next step is to push your image to the Google Container Registry. For me, this is a simple command:

docker push gcr.io/MY_PROJECT_ID/lighthouse-sheets
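If you built the image with the local lighthouse-sheets tag as above, you will first need to tag it for the registry (MY_PROJECT_ID stands in for your own Google Cloud project ID):

docker tag lighthouse-sheets gcr.io/MY_PROJECT_ID/lighthouse-sheets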

But you might have to set up Docker authentication first before you can do this. An alternative is to use Google Cloud Build to make the image; this might work better for you if you can't get the authentication working.
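For most setups, the authentication is a single command, assuming you have the Google Cloud SDK installed:

gcloud auth configure-docker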

Next you need to create a Cloud Run service with this docker image.

Open Cloud Run and click “Create service”

Name and adjust settings: you must give your service a name and configure a few other settings.

It is best to pick a region close to where most of the audience for your sites lives. Checking the site speed of a UK site from Tokyo won't give you the same results your audience gets.

In order to call this service from Google Sheets, it must allow unauthenticated invocations. If you are worried about locking down and securing the service so other people can't use it, you will have to do that yourself, for example by checking for an API secret in the HTTP request.
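As a minimal sketch of that approach, the Express app above could check a shared secret before doing any work (the X-Api-Key header name and the API_SECRET environment variable are inventions for illustration, not part of the code earlier in this post):

app.use((req, res, next) => {
  // Reject any request that doesn't carry the shared secret
  if (req.get('X-Api-Key') !== process.env.API_SECRET) {
    return res.status(401).json({error: 'Unauthorized'});
  }
  next();
});

On the Sheets side, UrlFetchApp.fetch accepts an options object as a second argument where the matching header can be set.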

Next you must select the container you made earlier. You can type in the name if you remember it or click “Select” and choose it from the menu.

Then click “Show Advanced Settings” because there is further configuration to do.

You need to increase the memory allocation because Lighthouse tests need more than the default 256MiB to run. I have chosen 1GiB here, but you might need the maximum allowance of 2GiB for some sites.

I have found that reducing the concurrency to 1 improves the reliability of the service. This means Google will automatically start a new container for each HTTP request. The downside is that this costs slightly more money.

Click “Create” and your Cloud Run service will be ready shortly.
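If you prefer the command line, roughly the same deployment can be done with a single gcloud command; the service name, memory and concurrency here match the settings chosen above, and the region is just an example:

gcloud run deploy lighthouse-sheets --image gcr.io/MY_PROJECT_ID/lighthouse-sheets --platform managed --region europe-west2 --memory 1Gi --concurrency 1 --allow-unauthenticated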

You can give it a quick test using the URL. For example:

curl -v "https://lighthouse-sheets-public-v4e5t2rofa-nw.a.run.app?url=https%3A%2F%2Fwww.example.com"

Or visit https://lighthouse-sheets-public-v4e5t2rofa-nw.a.run.app?url=https%3A%2F%2Fwww.example.com in your browser.

The next step is to write some Apps Script so you can use your new API from within Google Sheets.

Open a new Google Sheet and then open the Apps Script editor.

This will open a new tab where you can code your Google Sheets custom function.

The key idea here is to use the Apps Script UrlFetchApp service to perform the HTTP request to your API. Some basic code to do this looks like this:

function LIGHTHOUSE(url) {
  // Replace this with the URL of your own Cloud Run service
  const BASE_URL = "https://lighthouse-sheets-public-v4e5t2rofa-nw.a.run.app"
  var request_url = BASE_URL + "?url=" + encodeURIComponent(url)
  var response = UrlFetchApp.fetch(request_url)
  var result = JSON.parse(response.getContentText())
  // Lighthouse scores are between 0 and 1, so scale to 0-100
  return result.categories.performance.score * 100
}

The last line returns the overall performance score to the sheet. You could edit it to return something else; for example, to get the SEO score use result.categories.seo.score instead.

Or you can return multiple columns of results by returning a list like this:

[result.categories.performance.score, result.categories.seo.score]
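Putting that together, a sketch of a two-column version might look like this (LIGHTHOUSE_SCORES is a made-up name; a two-dimensional array is how Apps Script custom functions spill values into adjacent cells, here one row with two columns):

function LIGHTHOUSE_SCORES(url) {
  const BASE_URL = "https://lighthouse-sheets-public-v4e5t2rofa-nw.a.run.app"
  const response = UrlFetchApp.fetch(BASE_URL + "?url=" + encodeURIComponent(url))
  const result = JSON.parse(response.getContentText())
  // One row, two columns: performance and SEO scores scaled to 0-100
  return [[result.categories.performance.score * 100,
           result.categories.seo.score * 100]]
}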

Save the file and then you will have a custom function available in your Google Sheet called LIGHTHOUSE.

The easiest way to get started with this is to copy my example Google Sheet and then update the code yourself to point at your own API and to return the Lighthouse results you are most interested in.

Enhance your spreadsheet know-how

The great thing about this method is that it can work for anything that can be wrapped in a Docker container and return a result within 30 seconds. Unfortunately, Google Sheets custom functions have a timeout, so you won't have long enough to train some massive deep learning algorithm, but that still leaves a lot that you can do.

I use a very similar process for my Google Sheets add-on Forecast Forge, but instead of returning a Lighthouse score it returns a machine learning-powered forecast for whatever numbers you put into it.

The possibilities for this kind of thing are really exciting because in Search Marketing we have a lot of people who are very good with spreadsheets. I want to see what they can do when they can use all their spreadsheet knowledge and enhance it with machine learning.

This story first appeared on Search Engine Land: https://searchengineland.com/how-to-show-lighthouse-scores-in-google-sheets-with-a-custom-function-343464


How to make your data sing

Stop reporting absolute numbers and put your data into context with ratios to engage your stakeholders with the smaller, but important, data points.


It is amazing how horrible a job many digital marketers do when reporting their work to clients, both internal and external. Just think about how many marketing presentations and reports you've seen that simply contain screenshots from Google Analytics, Adobe Analytics, AdWords, Google Search Console or a backend ecommerce system. This isn't the way to influence people with your data.

The biggest issue is that most marketers are not analytics people. Many marketers do not know how to collect all of the necessary data, how to leverage that data or, to a lesser degree, how to present it in a meaningful way. Typically, this is the job of a data analyst. The same way purchasing a pound of nails, a hammer and a saw doesn't make you a carpenter, gaining access to your analytics reporting tool does not make you a data analyst. This is why many reports contain those convoluted screenshots and present data out of context, contributing little to no meaning.

Data out of context

Many reports merely present the facts (the data) with a number and no context. Data out of context is just data. For example, simply stating that AdWords generated 5,000 sessions to a website last month is meaningless without context. The number 5,000 is neither a good nor a bad data point without a reference point or a cost factor. It's not until you add in other factors (open the box) that you can demonstrate whether or not your efforts were a success. If the previous month's AdWords campaign only drove 1,000 sessions, then yes, without other data, 5,000 sessions looks good. But what if the cost of driving those additional 4,000 sessions was tenfold the previous month's spend? What if the previous month AdWords drove 5,000 sessions but at double the spend?

It is only by adding the additional information in a meaningful way that marketers can turn their reporting from a subjective presentation into an objective one. To do this, stop reporting absolute numbers and put your data into context with ratios. For example, when assessing cost per session, toss in a third factor (goal conversions, revenue, etc.) and create something like "cost per session : revenue." This puts the data into context. If one campaign costs $1 per session for every $100 of revenue and another costs $2.25 per session for every $100 of revenue, the effectiveness of the marketing spend becomes self-evident: the first result is clearly superior to the second. By normalizing the denominator (creating the same denominator), the success or failure of an effort is easily demonstrated.

Data is boring

Yes, presenting data is boring. Simply looking at a mega table of data will cause many to lose interest and tune out whatever message you are trying to present. The best way to avoid this is to make your data sing!

Make your data sing

Just like in the marketing world, the easiest way to grab someone's attention and make your message sing is with imagery. Take all that great data in your mega table and turn it into an easy-to-understand graph or, when necessary, simplified data tables. Even better, turn it into interactive graphs if you can. During your presentation, don't be afraid to interact with your data. With some guidance, your audience can dive into the data they are most interested in.

Learn to use data visualization tools like Data Studio, Tableau, DOMO, Power BI and others. Leveraging these tools allows you to take boring data and not only give it meaning but to make the data sing, which will turn you into a data hero.

Interacting with your data

Back at the end of July 2019, my firm acquired an electric vehicle. We wanted to know if the expense was worth it: did the cost savings of using electricity over gasoline justify the difference in the cost of owning the vehicle (lease payments plus insurance and maintenance costs)?

Below is a typical data report with all the boring detailed data. This is a mega table, and only those truly interested in the details will find it interesting. If presented with this table, most would likely only look at the right-hand column to see the total monthly savings. Presented with just this data, many will get bored and start counting the holes in the ceiling tiles instead of paying attention.

The following graphs demonstrate many of the ways to make this data sing, by putting all of the data into context through interactive graphics.

The first graph (page 1 of the report) details the cost of operating the electric vehicle. The question we were most curious about was how much it was costing us to operate per 100 km. By collecting data on how much electricity was used to charge the car, how many kilometers we drove in a given month and the cost of that electricity, we were able to calculate the operating cost. In the graph you can easily see the fluctuation in operating costs, with costs going up in the winter months (the cost of running the car's heater) and again in June and July (the cost of running the AC). You can also see the impact of increases in electricity prices.

To truly evaluate the big question ("Was acquiring an electric vehicle worth it?") we needed to estimate how much gasoline would have been consumed driving the same distance, against the average cost of gas during the same months. On page 2 of the report the data starts to sing, as the difference in the savings of electricity over gas becomes clear. The chart is interactive and allows the user to hover over any column to reveal the data details.

To make the data truly sing, we needed to compare not just the operating costs but the costs of ownership. Do the savings in operating costs justify the price difference between the vehicles? We know that the difference in lease costs, insurance and annual maintenance is in the range of $85-$90/month.

The graph on page 3 of the report demonstrates the impact of plummeting gas prices and the reduced driving during April 2020 due to the COVID-19 shutdown. In April 2020, a mere monthly savings of approximately $41 was achieved. Therefore, there were no savings in owning the more expensive electric vehicle over an equivalent gas-powered vehicle (the difference in lease costs, insurance, etc. is in the range of $85-$90/month). The data might not have been singing, but it was definitely screaming out when we saw it.

Check out the entire report for yourself. It is accessible here so you can view all the pages/charts. The report is interactive, allowing you to hover over given months to see data details or even change the reporting date range.

By embracing not only data visualization but the visualization of meaningful data, we as marketers can raise the bar and increase engagement with our audience. Think of the four pages of this report: which page speaks most to you? Which way of presenting the data makes it sing? Odds are it was not the first table with all the detailed data.


Soapbox: Is my data telling me the truth?

How email security programs affected my perception of data integrity.


As marketers, we face the overwhelming challenge of demonstrating that our tactics are effective. But how can we convince management if we are not convinced by our own data?

Here's the reality, which I recently learned for myself: if you're running email marketing, it's very likely that your performance reports are not telling the full truth, with inflated CTRs (click-through rates) and open rates being the main culprits.

Email security programs – loved by recipients, hated by senders

Barracuda. SpamTitan. Mimecast. Email bots that serve a single purpose: to protect users from unsafe content. These programs scan inbound emails and attachments for possible threats such as viruses, malware and spammy content, including by clicking on links to test whether they are safe.

For email marketers, this creates several challenges:

  • Inflated CTRs and open rates due to artificial clicks and opens
  • Disrupted sales-team lead follow-up caused by false signals
  • Lost confidence in data quality (quantity ≠ quality)

Real or artificial clicks?

In reviewing recent email marketing performance reports, I noticed an unusual pattern: some leads were clicking on every link in the email (header, main body, footer, even the subscription preferences link) yet they were not unsubscribing. Not only that, but this suspicious click activity was happening almost immediately after the email was deployed. I speculated that these clicks were not "human" but rather "artificial" clicks generated by email filters.

Hidden pixels are your frenemy

To test my hypothesis, I implemented a hidden 1×1 pixel in the header, main body and footer sections of the next email. The pixels were linked and tagged with UTM tracking, and were visible only to bots, not to human readers.
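For what it's worth, each hidden pixel was a linked image along these lines; the URL and UTM values here are placeholders rather than the ones I actually used:

<a href="https://www.example.com/?utm_source=newsletter&utm_medium=email&utm_content=header-pixel">
  <img src="https://www.example.com/pixel.png" width="1" height="1" alt="" style="display:none;">
</a>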

Sure enough, several email addresses were flagged as clicking on the hidden pixels.

All of this brings me back to the question of whether or not marketing data can be trusted. It's critical to "trust, but verify" all data points before jumping to conclusions. Scrutinizing performance reports and flagging unusual activity or patterns helps. Don't do yourself and your company an injustice by sharing the results they want (or think they want) to hear. Troubleshoot artificial activity and decide on a plan of action:

  • Use common sense and always verify key data points
  • Within your email programs, identify and exclude bots from future mailings
  • Share results with management, sales, and other stakeholders

A word of caution… 

Tread carefully before you start implementing hidden pixels across your email templates. Hidden links might appear to email security programs as an attempt to conceal bad links. You could be flagged as a bad sender, so be sure to run your email through deliverability tools to check that your sender score isn't affected.

As the saying goes, “There are three kinds of lies: lies, damned lies, and statistics.” Sigh.

With different solutions circulating within the email marketing community, this is likely the "best solution out of the bad ones." It all depends on what works best for your scenario and business model.

Godspeed, marketer! 
