Here’s an alternative to cookies for user tracking

Instead of having your analytics toolset read a cookie, pass a unique identifier associated with the user ID. Learn how to do it and keep it privacy-compliant.

For over 20 years, website analytics has relied on persistent cookies to track users. This benign little identifier was a massive improvement over using a user’s IP address, or even the combination of IP address and browser. Since it was first introduced, though, the cookie has become the focus of privacy legislation and paranoia. So what alternative is there?

If your website or mobile application requires the creation of user accounts and logins, it’s time to plan a transition away from cookie-based tracking to user ID tracking. In simple terms, instead of having your analytics toolset read a cookie, you pass a unique identifier associated with the user ID and then track the user via this identifier. Typically, the identifier is the login ID.

Preparing for advanced tracking

Step 1

Ensure that the user ID you’ve deployed doesn’t contain Personally Identifiable Information (PII). Too often, sites require users to use their personal email address as a login ID, or even their account number. These are PII. If this is the case with your organization, the trick is to assign a random unique client identifier to all existing accounts, as well as to any future accounts as they are created.
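As a minimal sketch (in TypeScript, with illustrative field names — your account schema will differ), backfilling such an identifier can be as simple as:

```typescript
import { randomUUID } from "crypto"; // Node 14.17+; browsers expose crypto.randomUUID()

// Illustrative account shape -- field names are assumptions, not a real schema.
interface Account {
  email: string;        // PII: never expose this to your analytics tool
  trackingId?: string;  // random, non-PII identifier safe to use for tracking
}

// Assign a random unique identifier to every account that lacks one,
// covering existing accounts (backfill) and new accounts alike.
function assignTrackingIds(accounts: Account[]): void {
  for (const account of accounts) {
    if (!account.trackingId) {
      account.trackingId = randomUUID(); // e.g. "0b9e1c4e-..." -- carries no PII
    }
  }
}
```

Because the identifier is random, it can be shared with analytics tools without exposing who the person is; your own database remains the only place the mapping to the real account lives.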

Step 2

Have your developers start pushing the user ID to the data layer. This way, the variable will be there waiting for your analytics software to read it once you’re ready to implement the new tracking method. Check your analytics software’s documentation for this element’s variable name, as it varies from one package to another.
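As a sketch, assuming a Google Tag Manager-style dataLayer (the event and variable names here are placeholders — use whatever your vendor specifies):

```typescript
// Sketch: push the non-PII user ID into a GTM-style data layer on every page
// where the user is known. "userIdentified" and "userId" are placeholder names.
declare global {
  interface Window { dataLayer: Record<string, unknown>[]; }
}

window.dataLayer = window.dataLayer || [];

export function pushUserId(trackingId: string): void {
  window.dataLayer.push({
    event: "userIdentified", // placeholder event name for tag triggers
    userId: trackingId,      // the random, non-PII identifier from Step 1
  });
}

// Call as soon as the logged-in user's identifier is available:
pushUserId("0b9e1c4e-5a2f-4c3d-9e8b-7a6f5d4c3b2a");
```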

Step 3

Create a new view/workspace within your analytics software and configure it to track users by their user ID. Most analytics packages will still set a temporary cookie to track user behavior prior to login and then connect the sessions. This way, you can see what a user does on your site even before they log in, as well as what visitors who never log in do.
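With Google Analytics’ gtag.js, for example, this is done by setting the user_id field once the user logs in (the measurement ID below is a placeholder); other packages expose an equivalent setting:

```typescript
// Sketch using Google Analytics' gtag.js, which accepts a user_id field.
// Before login, GA tracks the visitor with its own temporary client-ID cookie;
// once user_id is set, pre- and post-login sessions can be stitched together.
declare function gtag(...args: unknown[]): void;

function onLogin(trackingId: string): void {
  gtag("config", "GA_MEASUREMENT_ID", { user_id: trackingId }); // placeholder ID
}
```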

Benefits of tracking users by user ID

Improved accuracy

The use of cookies is flawed in many ways. If users jump between devices (from desktop to mobile to tablet, or from an office computer to a home computer), you can’t tell that it was the same user. This inflates unique user counts.

What if a user clears their cookies (perhaps they’re using antivirus software that purges all cookies every time the browser is closed)? Once again, this leads to inflated user counts.

By tracking a user via their user ID, you’ll obtain a more accurate count of unique users on your site.

Cross Device Tracking

This is perhaps one of the greatest benefits of tracking users by their user ID. You can now see how users interact with your site and/or mobile app across devices: how many use a combination of devices, and whether a specific device type tends to be used just to add items to a shopping cart while the order is processed on another device.

Greater Analytics Insight

Armed with enhanced analytics data, you can harvest new and potentially powerful insights. With this new knowledge, you can better direct internal resources to enhance the user experience and optimize the user flow for greater profits.

Real life examples

The following examples demonstrate the power of tracking users by their user ID. 

Overview – Device Overlap

The following image shows what percentage of accounts use which type of device and the percentage that use a combination of devices. For example, while 66.6% use only a desktop, 15.8% use a combination of Mobile and Desktop.

User Behavior – Device Flow

Reviewing the device flow leading up to a transaction can provide some of the greatest insights from this enhanced analytics tracking methodology.

While it might not be surprising that the two most common device paths (by number of users) were Desktop only and Mobile only, what surprised both me and the client was number three. While the Desktop -> Mobile -> Desktop path is taken by only approximately 3% of users, it accounts for approximately 8% of all transactions and over 9% of all revenue generated.

The minimal overall use of tablets was also a bit surprising. Of course, the mix of devices varies from client to client.

Assisted conversions

By moving beyond cookies, the quality of assisted-conversion data increases significantly. For example, how many people read an email on a mobile device (opens can easily be tracked and attributed to a user ID), click through to the site, browse around and review the items being promoted (maybe adding them to their shopping cart), then think about it for a bit before logging in later via a desktop to complete the transaction?
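As an illustration of the email-open piece, here is a minimal sketch of a tracking-pixel endpoint (Node/TypeScript; the route, domain, and logging hook are hypothetical, not a specific product’s API):

```typescript
import http from "http";

// 1x1 transparent GIF, decoded once at startup.
const PIXEL = Buffer.from(
  "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
  "base64"
);

// Hypothetical hook: forward an "email open" event, keyed by the non-PII
// user ID, to whatever analytics backend you use.
function recordEmailOpen(userId: string): void {
  console.log(`email_open uid=${userId} at=${new Date().toISOString()}`);
}

// Emails embed <img src="https://example.com/open.gif?uid=..." width="1" height="1">;
// fetching the image records the open against that user ID.
http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  if (url.pathname === "/open.gif") {
    const uid = url.searchParams.get("uid");
    if (uid) recordEmailOpen(uid);
    res.writeHead(200, { "Content-Type": "image/gif", "Cache-Control": "no-store" });
    res.end(PIXEL);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
```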

For example, from the above report, one can objectively assign a more accurate value to SEO efforts by examining the role organic search traffic played in generating sales. While organic search was the immediate source of only 1.3% of total revenue in this case, as an assist in the sales cycle it played a role in over 10.4% of generated revenue.

Enhanced user insights

In this example, the client allows its customers to have multiple logins for their account; essentially, a user ID represents a customer/client rather than a single person. The client operates in the B2B world, where multiple people within its clients’ organizations may require unique logins and rights (who can order, who can just view product details, who can view or add to the cart but not place an order, etc.). By tracking by user ID and also recording a unique login ID within their analytics, these additional insights can be obtained.
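One way to record both identifiers, sketched with Universal Analytics-era gtag.js (the custom-dimension mapping and property ID are assumptions — configure a matching custom dimension in your analytics property first; GA4 would use event parameters or user properties instead):

```typescript
// Sketch: send the customer-level ID as GA's user_id and the individual
// login ID as a custom dimension. dimension1 is an illustrative slot.
declare function gtag(...args: unknown[]): void;

function identify(customerId: string, loginId: string): void {
  gtag("config", "UA-XXXXXXX-1", {
    user_id: customerId,                    // one per customer/client account
    custom_map: { dimension1: "login_id" }, // map dimension1 to a login_id param
  });
  gtag("event", "login", { login_id: loginId }); // one per individual person
}
```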

[Image: user-breakdown.jpg]

The above report not only breaks down revenue by division but also demonstrates how users in different divisions use the site differently. In Division 1, there is almost a 1:1 relationship between user IDs and login IDs. Yet in Division 3, the ratio is over 4:1, meaning that for every customer, an average of more than four logins is in use.

How can they leverage this data for more effective marketing? By understanding these differences between divisions, they can craft email marketing that targets multi-login customers differently from single-account/login customers.

A further dive into the data could also distinguish login IDs that are only product recommenders (view products only) from those who make specific product requests (add to the shopping cart but never place the order), from those who only process orders, and from those who do it all. Each group needs to be marketed to with different messaging to optimize the effectiveness of the marketing effort. It’s through detailed analytics that this audience definition can be obtained.

Is tracking by user ID right for me?

Making the decision to change how you track your users is a difficult one. First, does your site or mobile app require users to log in reasonably early in their journey? This approach is ideal for e-commerce sites and sites where the vast majority of user interaction takes place after the user logs into the site or application.

If you’re running a general website whose goal is merely to share information and generate “contact us” type leads, the answer is no.

If you have a combination of a general information site plus a registered-user section, then yes, you might want to consider making this change, perhaps just for the registered-user section.

If you do make this change, don’t stop running your other analytics views/workspaces that use cookies. Keep them running. By operating two different views, you’ll eventually be able to reconcile the differences between the two, and it will be easier to explain to those you report to why you’ll be reporting a dramatic drop in the number of users. Of course, when you first make the switch, all users will be first-time users, so expect a major increase in new-visitor traffic.

If you decide to make this change, don’t forget to review the impact of the change with your legal department. They will tell you if you need to update your privacy policy.

Case Study: How to Evaluate Google Automated Bidding vs Manual Bidding to Improve ROAS & Profitability

If there’s one thing you should know about our team here at Inflow it’s this: We love running tests and using data to back-up our insights. We hate navigating blindly and hoping for the best. That’s why for us it was exciting to have the opportunity to test manual PPC bidding vs automated bidding.

The current pay-per-click climate is all about platforms rolling out automation. Ad platforms are getting more and more data, and the machines are getting better and better at predicting who is going to convert from your ads. Some advertising account managers are quick to automate entire accounts; others, like me, are more skeptical.

Personally, I’m willing to test anything, but when you start taking away the data I need to do my job (search queries) and the levers I use to increase performance (placements, bids, negative keywords), I get concerned. Ultimately, my job is to improve performance for my client any way I can, and if that means putting pieces of the account on “smart,” so be it. I just want the best performance possible.

Here’s a quick snapshot of the results we saw in this case study: 

Manual bidding beat automated bidding in all major metrics measured across desktop, tablet and mobile. We saw:  

  • Higher Revenue
  • More Transactions
  • Higher Ecommerce conversion rate
  • Lower Cost
  • Higher ROAS

Now, let’s dig into this case study more to see how the way we evaluate metrics influences the outcome of tests.

Starting Work for a New PPC Client

This client was using a different service we offer, conversion rate optimization, and was starting to get busy managing his business. So he asked us to look at his Google Ads account for improvement opportunities.

Although the client was getting great results, we noticed some areas we could help him expand while keeping the account return on ad spend (ROAS) within acceptable ranges. So we struck a deal.
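For readers newer to PPC, ROAS is simply revenue divided by ad spend; a one-line sketch with illustrative numbers (not the client’s actual figures):

```typescript
// ROAS = revenue / ad spend. Numbers below are illustrative only.
const roas = (revenue: number, adSpend: number): number => revenue / adSpend;

console.log(roas(20_000, 1_000)); // 20 -> a "20x return" like the one mentioned later
```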

The client said he would allow us 3 months to implement and optimize our new strategies, but we had to increase profit enough by the 3rd month to pay for ourselves. So we started working on the account in July of 2019 and quickly implemented a tiered shopping structure:

  • Tier 1 (Campaign 1): Contains every product available on the site, but with many negative search phrases applied.
  • Tier 2: Where the average-to-medium-performing search terms live.
  • Tier 3: Where the best-converting search terms live.

We also reorganized search campaigns by product type and narrowed the account to push spending to the areas driving the most revenue. In month one, we were able to increase profit by $11k, and by month 3 (our deadline), we increased profit by $27k. So we hit our goal. The client was happy, and we were off to the races to continue optimizing the account.

Start of Our Manual vs Automated Google Ads Experiment

Around late September, an advertising representative from Google who used to work directly with the client contacted us and wanted to meet to discuss opportunities and strategies. I know how these meetings go (“You aren’t automating the account enough. Look at your optimization score and the opportunities in the account to automate.”), and I try to avoid them. However, we wanted to keep an open mind, so we scheduled a meeting for mid-October.

The rep suggested switching our tiered campaigns to smart shopping, but we were getting a 20x return on our shopping campaigns at this point, so I really did not want to mess with that, especially right before the holidays. I told him, “We can test some of your automated strategies as experiments on the search campaigns we are running.”

Fortunately, disputes in our industry can be quickly resolved with an experiment and data analysis. So we agreed to run an experiment against the recommendations of the Google Ads representative.

Parameters of the Experiment

  • A Target ROAS bidding test
  • A 50/50 traffic split with the control (manual bidding)
  • The attribution model in Google Ads switched to data-driven
  • 3 weeks to run the test

We wanted to make sure the algorithm had enough time to get “dialed in.” We also scheduled it to end before Black Friday and Cyber Monday, so the winning strategy could be at 100% for those 2 big days.

Now, let’s take a look at how the experiment progressed.

Week 1 Results

When looking at Google Ads, the Target ROAS campaign jumped out to a surprising lead. However, when looking at Google Analytics, the first week seemed to be a wash. This discrepancy made sense because we switched the attribution before starting the test, and the machine was optimizing toward the new attribution model. But would it translate into more revenue in Google Analytics? Would it continue to get smarter and better? Could our manual adjustments make up the ground?

Week 2 Results

We made some bidding adjustments, and our manual cost per click (CPC) campaign (the control) bounced back in Google Ads and slightly beat out the Target ROAS campaign in terms of conversion value and ROAS. Once again, the Google Analytics results were really close. We had a slight edge in revenue and spend, but the Target ROAS campaign had 5 more transactions. Got to love that average order value swing!

All right: heading into week 3, each bidding strategy had “won” a week in Google Ads, and they were nearly neck-and-neck in Google Analytics. It looked as if it were going to be a photo finish (which, for the record, I was already considering a win). The rep was certain his algorithms were going to crush our manual adjustments, but our campaigns were getting better performance.

Week 3 (Final Week)

We were coming down to the wire. It was Man and Woman (shout out to Rachel) vs. Machine…

It wasn’t even close.

In Google Ads, the manual campaign had more than double the conversion value. In Google Analytics, the manual campaign had nearly triple the revenue and triple the transactions. Coming around the last corner, our campaign took off, while the automated campaign sputtered. 

This was surprising to me. I would have thought that as the automated campaign got more data each week, it would also get more efficient. That wasn’t the case in this experiment, although I’m not saying it’s always that way.

Evaluating the Results in Google Analytics

Let’s first get the Google Analytics data out of the way so we can look closer at the Google Ads data that I showed you at the beginning. In Google Analytics, the control beat the test in:

  • Revenue
  • Transactions
  • eCommerce conversion rate
  • Cost
  • ROAS

Manual bidding was the clear winner here.

Evaluating the Results in Google AdWords

However, armed with data-driven attribution, Google Ads was spinning a different tale, which was why the rep was immediately declaring victory. 

Inside Google AdWords, the Target ROAS test had:

  • More conversions
  • A higher conversion rate
  • A higher ROAS

Based on this data, it makes sense the rep was claiming victory. Because we had agreed at the beginning that Google Ads data would determine the winner, I couldn’t just ignore it and send him the Analytics data. But now, after all this writing, we are getting to the point of this case study.

Don’t just take results at face value.

Why did we lose in Google Ads? What did the Target ROAS do better? Even if automated campaigns actually won and outperformed the manual bidding, wouldn’t you want to know why or how, so you can improve what you are doing?

You have to dig into those questions. So that’s what I did. I dug. And I didn’t have to dig very far to find some answers.

The manual bidding campaign outperformed the Target ROAS test in two areas:

  • Desktop
  • Tablet 

But mobile phones performed much worse, understandably so, because up to this point we had not really been targeting mobile in the search campaigns, based on historically low performance and client information.

The table shows the average CPC on mobile phones was about $0.16, much lower than computer and tablet bids. Thus, our strategy going in was to maximize desktop traffic within the budget and then expand to mobile. This test showed us we should move up our plans for the move to mobile.

After realizing our campaign had won on desktop and tablet (the two areas we were focusing on) and had only lost overall because we hadn’t been targeting mobile, I chalked up our win, and the rep conceded.

Our New Experiment: Segmenting Mobile Traffic

We ended up breaking out mobile traffic from the control campaign into its own campaign, where we could have more control over bids, budgets, and keywords, since these tend to behave differently depending on the device. (I am a big fan of this type of campaign segmentation over using the bid modifiers we had on before.)

That mobile breakout showed great results in December:

  • 46% increase in revenue over the previous month. 
  • Mobile transactions doubled month over month. 

It was also a brand-new campaign, so we continued to adjust bids and keywords.

Results of the Automated vs Manual Bidding After Segmenting Mobile

The control campaign (manual bidding), now focused on the higher-converting desktop traffic, delivered:

  • Revenue increased by over $6,000 with only $300 more in ad spend than the test campaign.
  • The eCommerce conversion rate increased by nearly 40%.

After months of adjustments and optimizations, the ROAS for the mobile campaign (per Google Analytics) was over 5 in June 2020, compared to the 2.21 ROAS captured in the screenshot above.

It would be well worth another test now to see how the mobile campaign does with a Target ROAS strategy, since it did well with that type of bidding before, but I will definitely be rooting for the manual bids and adjustments over the machines’.

Conclusion

Obviously Google wants you to use automated bidding because their only goal is to reduce the friction between you and the ad platform. They want to save you time in managing your ad account since there are so many levers to keep track of that users become overwhelmed. 

It’s important to understand there’s a time and place for both strategies. Manual bidding will save you money in the long run because you’ll limit your bids on specific campaigns that turn out to be unprofitable, while automated bidding is a great short-term strategy to jump-start a large ad campaign if your time is limited.

At the moment, humans are better at monitoring the nuances of campaign management. So if you need help running your own PPC campaigns, our team is here to help. You can contact us here.