The post This is How to be Less Distracted By Having Fun in Tedious Tasks appeared first on Nir and Far.
We’d love to know how you discovered this page so quickly. It just went live, and we haven’t notified our subscribers yet.
Maybe you use a news app.
Or maybe you visit our website every few minutes (that sounds less likely).
If you could let us know, using the box below, we’d be ever so grateful.
Let’s imagine you’re a personalization marketer and thanks to Bound you’ve really been flexing your marketing chops. You’ve successfully set up targeting for all your geographic markets. You’re speaking to your Fly Markets and Drive Markets. You’re even personalizing to that one city in Germany that keeps reading your blog posts (Hello, Frankfurt!). You know exactly who to speak to on your website and how to speak to them.
And that’s fantastic! Geographic targeting is a great way to personalize to your website visitors because it’s relatively easy to enable and can be highly effective. But, geographic targeting is also like hanging out in the shallow end of an Olympic-size pool. You’re going to have a good time in that shallow end, but there’s an entire pool of other opportunities to explore! And that next deeper level of segmentation is Behavioral Targeting.
Behavioral Targeting is essentially speaking to a visitor based on their interactions with your site. Instead of targeting broadly based on a visitor’s location in the world, you’re instead targeting based on what pages they are visiting or how many times they have visited the site. It’s an expansive way to categorize audiences so it may seem daunting at first. But, with the help of your trusty personalization expert, you can easily add behavioral targeting to your personalization toolbelt.
So, get your swim caps and floaties on, we’re diving into our favorite ways to target your on-site visitors based on behavior!
We’ll start with segmenting based on the page a visitor is on. Targeting based on a visitor’s current URL is a natural next step after personalizing based on geographic location. This type of segmentation involves targeting a visitor when they are on a specific URL (e.g., the homepage) or on a page within a set of URLs (e.g., any page that contains /blog). Often, this brand of behavioral segmentation is dismissed as too simplistic, but in practice it can be highly effective.
Imagine you have an especially tantalizing blog written about a new outdoor park in town. This would be a perfect piece of content to get in front of everyone interested in the Adventure or Outdoors area of your website. Ah-Ha! Let’s set up a fly-in to serve to every person currently on your site’s ‘Outdoors’ page to make all visitors interested in that subject aware of this wonderful resource in your city!
Similar to the above targeting strategy, you can also set up personalization based on pages a visitor has been to in the past. If a visitor returns repeatedly to a specific page or set of pages, that’s a pretty clear indication that they are interested in content of a specific nature. The most strategic move is to show them related content, or to offer a conversion point tied to those interest-based pages, once they have left them.
If a visitor has gone to the dining pages on your site 2+ times, they are either A) hungry or B) a ‘foodie’ (or both!). If you’d like them to digest (pun!) the food and drink content on site without interference, you may not want to target them on a food-focused page. However, if they leave the food-focused area of the site and you have more related content, like a restaurant deal or a special Dining Guide, it would be fantastic practice to target them on other pages with content you know they will find interesting. Bring on that creative cuisine content!
We’ve written a blog post or two on how to speak to your repeat visitors. That’s because speaking to repeat visitors is a super effective way to target people you know are interested in your destination. Repeat visitors have seen your site and virtually said, “I should visit this site again!” What a compliment: they like you, they really like you! The trick to getting those repeat visitors to come back for more is figuring out how to keep showing them new content that keeps them engaged.
Within the realm of targeting repeat visitors, there are a ton of strategic possibilities. One of my favorite ways to target repeat visitors is to set up a waterfall system based on which visit a person is on (first, second, third, fiftieth?). In practice, this could look as simple as targeting a first-time visitor with a Fly-In that promotes the Visitor Guide conversion. On a visitor’s second visit, you might serve a fly-in that promotes an eNewsletter conversion. On a third visit, you could serve a fly-in asking for a survey completion. This gives a repeat visitor something new to do every time they engage with your site and will keep those visitors coming back for more. Of course, this is not limited to conversion-centric fly-ins. You could similarly target a repeat visitor with new blog posts or send them straight to an events page. The strategy will depend on your visitors and your site.
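The waterfall above boils down to a simple rule table keyed to visit count. Here is a minimal sketch of that logic; the messages and thresholds are the hypothetical ones from the example, and a real implementation would read the visit count from your personalization platform (e.g., a first-party cookie) rather than a function argument:

```python
# Hypothetical waterfall of fly-in messages keyed to visit count.
# Messages and thresholds are illustrative only; in practice the
# visit count comes from your personalization platform.

WATERFALL = [
    (1, "Download our Visitor Guide"),
    (2, "Sign up for our eNewsletter"),
    (3, "Tell us about your trip: take our survey"),
]

def pick_fly_in(visit_count: int) -> str:
    """Return the fly-in for this visit, falling back to the
    last step for long-time repeat visitors."""
    for threshold, message in WATERFALL:
        if visit_count <= threshold:
            return message
    return WATERFALL[-1][1]

print(pick_fly_in(1))   # first visit -> Visitor Guide prompt
print(pick_fly_in(2))   # second visit -> eNewsletter prompt
print(pick_fly_in(50))  # beyond the waterfall -> last step repeats
```

The fallback on the final step is one design choice; you could equally cycle back to the start or suppress fly-ins entirely once the waterfall is exhausted.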
A visitor comes to your site and, after a few minutes of browsing, decides to download a Visitor Guide. Woo-hoo! Start the parade! Throw the confetti! But now what? Do you want that visitor to leave the site? Chances are you want to keep them around, and you may even have more conversions you’d like them to complete. This is where targeting based on Goal Completions enters your segmentation strategy: it allows you to lead a visitor down a predetermined nurture path, consistently giving them a new asset to download or a new form to fill out.
If a person has downloaded your Visitor Guide, you may segment them into a group of visitors that has already converted on that specific goal. With this information, you can assume that this visitor is highly engaged; after all, they just downloaded something from your site! In theory, that visitor would be a fantastic person to serve an eNewsletter prompt. Since they’ve already converted on the Visitor Guide, you want to push them further down your nurture path and personalize content that promotes the next step of their journey into your website.
The 4 Behavioral Targeting strategies listed above skim the surface of potential ways to speak to your online audiences but in this Olympic pool of personalization, there’s even more you can do! If you want to keep swimming deeper and deeper, reach out to a member of the Bound team or your designated swim instructor (CSM) to learn more!
We are excited to announce the release of our new Customer Engagement Report for Q1 2020! This report highlights the trends happening in the CRM world - from data and marketing technology to loyalty and customer experience.
In this quarter’s issue, we dig deeper into marketers’ data usage as reported last quarter, setting out to find the barriers to “great” personalization (the common denominator is identity). With forty-nine percent of marketers spending more than twenty percent of their martech budget on identity solutions, we expect to see continued improvement in organizations’ ability to identify customers in the future. We also discuss measurement and how it can help us better understand and optimize the customer experience. Below are a few highlights from our report that you should keep an eye on:
1. Only 54% of Marketers Have Clear KPIs
Marketers can also explore their measurement effectiveness by looking at how stakeholders are aligned on their program’s key performance indicators (KPIs). Just fifty-four percent of marketers have clear definitions for each metric, while sixty-three percent say their teams work together to align on a core set of KPIs. Here, we see room for refinement and growth.
2. Percent of Marketing Spend Allocated to Identity
Today, marketers are recognizing the value of identity and have dedicated a substantial amount of their budgets to identity solutions, with twenty percent of respondents allocating more than twenty-five percent of marketing budgets to identity. Understandably, many of the investments are related to the traditional processing of names and addresses, and to digital onboarding. While this is important, it only represents part of the picture; a large part of the identity strategy needs to be dedicated to aligning investments.
3. Measuring the Effectiveness of Customer Communications
When it comes to measuring the effectiveness of their marketing efforts, respondents report a strong reliance on engagement metrics. Seventy-one percent of respondents say engagement is key here, yet just forty percent indicate multi-touch attribution as a tactic employed.
Want to learn more? Check out Merkle’s Q1 2020 Customer Engagement Report here for even more insights.
As the marketing industry developed, researchers dove deeper into buying behavior and buyers’ minds. One early researcher was Edward Bernays—Sigmund Freud’s nephew—who coined the term “public relations.”
Bernays believed that people could be influenced via crowd psychology and psychoanalysis. His “Torches of Freedom” campaign in the 1920s promoted smoking among women as a symbol of liberation, opening a new market to cigarette companies.
Decades later, in 2002, Dutch marketing professor Ale Smidts coined the term “neuromarketing.” Neuromarketing maps neural activity to consumer behavior to help marketers craft more valuable, science-based campaigns.
It focuses on the why and how of our decision-making, much of it unconscious, and offers a more direct view of the consumer’s “black box.” Neuromarketing has been defined as the third dimension of marketing research, along with qualitative and quantitative research.
Because neuromarketing often targets unconscious processes, it raises a number of concerns:
Some of these ethical questions aren’t new. But neuromarketing’s potential power has made them increasingly relevant.
Neuroimaging techniques measure processes such as decision-making, reward processing, memory, attention, approach and withdrawal motivation, and emotional processing, all by means of specific brain-area activations.
That data isn’t available via traditional marketing research methods, whose efficacy relies on the accuracy of consumers’ stated reasons. That’s a major limitation.
“There is a very long history within psychology of people not being very good judges of what they will actually do in a future situation,” says Matthew Lieberman, a UCLA professor of psychology.
Lieberman studied the brain activity of people who watched public-service announcements about the importance of wearing sunscreen. The subjects were then asked how likely they were to use it, and the researchers even gave them sunscreen to ensure they had access to it.
The neuroscientists then compared subjects’ brain activity to their stated predictions. When actual behavior was measured a week later, only about half the subjects had accurately predicted what they would do, while the model based on brain activity was accurate 75% of the time.
Neuroscience can also detect subtle emotional impacts. For example, using fMRI scans, researchers concluded that attractive ads activate the ventromedial prefrontal cortex and the ventral striatum, which are responsible for emotions in the decision-making process and the cognition of rewards.
In another study, neuroscientists at UCLA scanned the brains of people watching Super Bowl commercials. A Doritos ad stimulated empathy and connection, while other commercials provoked fear or anxiety.
Nationwide Insurance’s ad, which featured Kevin Federline as a failed rap star stuck in a job in a fast-food restaurant, generated anxiety and feelings of insecurity—not the goal of the advertiser, even though the multi-million dollar spot was surely “focus grouped” before getting approval.
Some advance neuromarketing research probably would have saved Nationwide tons of money.
Until recently, researchers relied on buyers’ abilities to report how they felt about a particular marketing message (via surveys, focus groups, interviews, etc.). This assumed that people were able to describe and predict their own cognitive processes—a dangerous assumption illustrated by the sunscreen study.
Brain responses offer more objective insights into consumer behavior. Those insights can reduce risks for new product launches or major changes that could define a brand. At a tactical level, companies can improve their customer segmentation and personalize marketing and sales experiences.
For example, by using high-res EEG headsets and eye trackers on Polish and Dutch IKEA customers, researchers learned about consumer reactions to green business strategies, which helped identify which business models customers were likely to accept, never accept, or accept in a few years.
IKEA now has a home solar offering that enables customers to generate their own renewable energy; it also shifted to renewable plastics and offers sustainable, healthy food in its restaurants.
Neuromarketing has the potential to reveal much more.
The researchers created a virtual store with 2D and 3D shopping experiences that simulated reality. Test consumers within the virtual store could interact with store merchandise and make purchase decisions in a way that resembled real in-store behavior.
While test subjects were shown video clips and still pictures from a consultative sales process, their brain activation was monitored to measure engagement.
Humans have mirror neurons related to empathy and imitation that help us relate to people or behavior we see, even in a virtual environment. (Did you ever cry at a movie? Blame it on the mirror neurons.)
The analysis showed heightened activity in the dorsolateral prefrontal cortex, meaning that increased feelings of safety had a positive effect on individuals’ willingness to buy. The research can help companies build appealing shopping environments, plan sales processes, and develop marketing materials.
But while the potential to pair neuroscience, virtual reality, and marketing may be tantalizing, neuromarketing has limitations—and critics.
A big disadvantage of fMRI is that it doesn’t give you “live” images. Consumers might behave differently in the real world, unlike in a controlled environment. Researchers might end up with a response bias—subjects are aware that they’re being analyzed, which may affect their behavior.
Also, inside the lab, many variables are controlled; in the real world, we can’t control those same variables—or the resulting behavior. Lab-derived conclusions may not hold up elsewhere, especially when experiments rely on small sample sizes.
Finally, claims that tie specific brain regions to mental functions may be exaggerated. Our brains are complex, and why test subjects feel arousal, pain, fear, or other emotions is unknown.
Remember the case study with the Super Bowl commercials? It used 10 subjects. To create the virtual customer journey, the study recruited 16 subjects.
Those small sample sizes, researchers argue, translate to low statistical power and a reduced ability to detect a true effect. They may also overestimate effect size and lower the replicability of results.
Unreliable methods surface ethical concerns about the publication of suspect research.
stress[es] the need for more intense and precise training of the subjects taking part in neuromarketing experiments. Thus, companies can prevent the onset of anxiety, fear or cognitive inhibition among respondents.
Some studies might also reveal incidental findings, but researchers don’t know how to handle them—there are no reporting requirements. If an abnormality is detected, who should communicate that information? And to whom?
Those concerns spill over into privacy concerns, too:
The use of data obtained from brain imaging poses ethical dilemmas for marketers, as some marketers seek to limit our understanding of their true intentions and some activities lack transparency. Potential moral issues emerging from neuroscience applications include the awareness, consent, and understanding of individual consumers regarding what may be viewed as an invasion of their privacy rights.
In Mexico’s 2015 elections, citizens’ responses to the governing party ads were often recorded without their knowledge. (The party leader said they would stop hiring neuroscience consultants to register voters’ brain waves and read their facial expressions.)
Even if those issues are solved, ethical concerns remain.
Is neuromarketing capable of subverting free will and promoting compulsive buying behavior? Probably not. But it still has the potential to cross an ethical line.
James Garvey, author of The Persuaders: The Hidden Industry that Wants to Change Your Mind, argues that our ethics system hasn’t caught up with the implications of neuromarketing:
There is a question of human dignity here. Are we treating people like people with hopes and desires? Or are we treating them as things that we can manipulate based on our understanding of how brains work?
Back in 2010, Ariely and Berns cautioned that the profit motive may incentivize companies to misuse neuromarketing—encouraging them to try to cross from persuasion to manipulation.
Roger Dooley frames marketers’ responsibilities this way:
Other codes in the business world have similar principles: informed consent, confidentiality, privacy, etc. Even as ethics in neuromarketing continue to take shape, some boundaries are clear.
The protection of vulnerable populations—kids, teens, and people with high debt, compulsive buying behavior, or neurological or pathological disorders—is often raised with neuromarketing.
Of course, traditional marketing has exploited this for years—McDonald’s has long partnered with companies like Disney to serve beloved toys with nutritionally vapid Happy Meals.
Neuromarketing techniques could elevate the already problematic outcomes of marketing messages to children:
Researchers from Liverpool University’s appetite and obesity research group found that, in one half-hour episode of Hollyoaks (a soap opera targeting youth), more than 140,000 children were exposed to nine junk food advertisements. The very next day, during an episode of The Voice, 708,500 children watched 12 junk food ads.
The risks aren’t to young children alone. In most cases, teenagers have less control over their emotions and behavior than fully mature adults, making them vulnerable. Researchers also suggest that adolescence extends into the early 20s.
Adults aren’t immune to the subtle but powerful marketing cues neuroscience can identify either.
In a 2016 study, researchers observed that people with a high body mass index (BMI) prefer a slim-shaped bottle, even if the drink costs more. That implies soda manufacturers could profit from changes to packaging alone. (On the other hand, the same information could benefit producers of healthy drinks.)
A 2011 study suggested that the brains of obese people respond differently to nutrition labels. When given an identical milkshake, they showed more brain activity in reward areas if the label read “regular” compared to “low-fat.”
Would obese people also respond differently to color, image, smell, or touch? And if marketers had answers to these questions, how would they use the data?
In another study, researchers found that areas of fMRI scans correlated with compulsive buying. Being presented with a product and its price resulted in higher striatal activation in compulsive buyers compared to non-compulsive buyers.
The implication is clear: If sellers learn which marketing messages hyperstimulate those areas, they could override the better judgment of buyers.
Many of these concerns, though amplified, aren’t new.
Take a look at these burger ads—the promise versus the reality.
And, if we’re looking at who’s sponsoring national health organizations, we see Nestle, Coca-Cola—ethical conflicts abound.
Neuromarketing isn’t the only way to gain tacit influence over consumer behavior (or government policy). Plenty of marketing strategies (many would argue most) target our subconscious.
Neuromarketing just may be a whole lot better at doing it.
In 2017, neuromarketing pioneer Gemma Calvert claimed that “Online neuromarketing will be the industry standard for testing ad campaigns, prototypes, and packaging designs within five years.”
That seems unlikely, but neuromarketing isn’t going away. And the privacy debates surrounding online advertising suggest that technology will keep outpacing regulation. Ethical issues with neuromarketing will continue to surface; they may become more pressing.
Here’s what won’t change: Deceptive marketing tactics and promises not delivered won’t build, as Dooley rightly notes, a sustainable business. Plan accordingly.
Companies looking to remain competitive must now find ways to address consumers as unique individuals with highly specific, personal preferences.
The post See how agencies are putting data-driven marketing to work appeared first on Marketing Land.
By gathering rich, relevant data on consumer behavior and demographics, businesses can target their leads and customers on a far more personal level, optimizing their engagement rates while ensuring a positive brand experience.
But delivering on this data-driven expectation can present a number of challenges – particularly for digital agencies, whose clients are throwing unprecedented amounts of data in their direction.
In an effort to find out how agencies are overcoming some of these obstacles, SharpSpring partnered with Ascend2 to field the Data-Driven Marketing Trends Survey. This paper draws on those results to offer an in-depth view of the challenges involved in successful data-driven marketing as well as the many ways in which agencies are helping their clients stay ahead of the curve.
Visit Digital Marketing Depot to download “Data-Driven Marketing: Let Your Data Take the Wheel.”
The post Google Analytics vs. Google Analytics 360 (Based on a Decade of Implementations) appeared first on CXL.
For companies that build their analytics on Google products, purchasing Google Analytics 360 is a symbol of maturity.
As a business grows, it inevitably runs up against limitations of analytics tools. For example, while the data aggregation process in Google Analytics seems like a “normal” feature, it might be a hurdle if your business needs to process data at the hit level instead of by sessions or campaigns.
It’s one of many potential business needs that could affect your decision to upgrade to a Google Analytics 360 license. But is it worth the serious investment?
If you’ve spent hours calculating your expected ROI for Google Analytics 360 and still don’t know, this article is for you. We’ll run through a feature-by-feature comparison of Google Analytics and its paid counterpart, Google Analytics 360.
Then, we’ll break down other factors to help you figure out if you should fork over the cash.
What follows isn’t an exhaustive list of feature differences; it covers, however, the differences we’ve seen have the biggest influence on a buy-or-don’t-buy decision for Google Analytics 360.
We’ve broken them down into three categories:
When upgrading to Google Analytics 360, you don’t need to retag your pages. Your hits will continue to be collected via existing Google Analytics tags.
You will experience, however, plenty of other changes.
A main difference between the free and paid versions of Google Analytics is that the paid version, Google Analytics 360, has service-level agreement (SLA) obligations, guaranteeing 99.9% uptime, support, and data freshness.
The data freshness period for Google Analytics is 12 to 48 hours (depending on the intensity of your traffic), but it goes all the way down to 10 minutes to an hour for most reports in Google Analytics 360.
Hit limits in Google Analytics 360 are also higher than in the standard version: around a billion hits per month (and even more for an additional fee), compared to just 10 million for the standard version.
When you approach the 10-million-hit limit, Google Analytics starts to warn you and offers a few options to resolve the situation:
An unpleasant effect of exceeding the hit limit in Google Analytics is that “you may be prevented from accessing reports,” as the data limits support article warns. Google may also stop processing data beyond that limit:
An increased hit limit is one of the most common reasons for companies to consider Google Analytics 360. Over the past 10 years, about 90% of the companies that have contacted us were considering Google Analytics 360 primarily to avoid the hit limit.
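As a rough back-of-the-envelope check of where a site stands against those limits, you can multiply daily sessions by average hits per session. The traffic figures below are invented for illustration, and the average of 15 hits per session is an assumption; every pageview, event, and transaction counts as a hit:

```python
# Rough estimate of monthly hits vs. the Google Analytics limits.
# Traffic numbers are illustrative; hits_per_session is an assumed
# average (pageviews + events + transactions all count as hits).

FREE_HIT_LIMIT = 10_000_000       # standard Google Analytics
GA360_HIT_LIMIT = 1_000_000_000   # approximate Google Analytics 360 ceiling

def monthly_hits(sessions_per_day: int, hits_per_session: float) -> int:
    """Estimate monthly hit volume from average daily traffic."""
    return int(sessions_per_day * hits_per_session * 30)

hits = monthly_hits(sessions_per_day=25_000, hits_per_session=15)
print(hits)                      # 11250000 -> already over the free 10M limit
print(hits > FREE_HIT_LIMIT)     # True: time to consider an upgrade
print(hits > GA360_HIT_LIMIT)    # False: comfortably within the 360 ceiling
```

Even a mid-sized site with heavy event tracking can cross the free tier this way, which is consistent with the hit limit being the most common upgrade trigger.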
The limit of 50 Properties and 25 Views per Property in the free version of Google Analytics may also be too tight for a growing business. (There are some workaround solutions here.) By comparison, Google Analytics 360 generously offers more than 50 Properties along with 400+ views per Property.
More Properties and Views per Property give companies the freedom to be creative with cross-domain tracking, filtering, and dicing data from all their websites, apps, and other properties. Google Analytics 360 works well for international companies with tens of brands, big retailers, agencies, etc.
Thanks to the roll-up reporting feature in Google Analytics 360, you can grab the data you need from both web and app properties based on a certain parameter (e.g., English-only audience, those who saw your last ad campaign, etc.).
The feature tracks your customers’ Client IDs or User IDs through different sources.
There’s no single way to analyze the efficiency of traffic sources in Google Analytics 360. For analyzing advertising channels, you can use the attribution modeling feature, including the data-driven attribution model created exclusively for Google Analytics 360.
In the free version of Google Analytics, the standard position-based attribution models often under- or overestimate the value of channels. Even the most popular model, Last Non-Direct Click, won’t give you essential insights on how to redistribute your budget. (Those models are enough to get acquainted with the basics of attribution.)
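To make the limitation concrete, here is a minimal sketch of what the Last Non-Direct Click model actually does: all conversion credit goes to the most recent non-direct channel in the path. The channel path below is invented for illustration:

```python
# Minimal sketch of the Last Non-Direct Click attribution rule:
# 100% of the conversion credit goes to the most recent channel
# in the path that is not "direct". The path is an invented example.

def last_non_direct_click(path: list[str]) -> str:
    """Return the channel credited with the conversion."""
    for channel in reversed(path):
        if channel != "direct":
            return channel
    return "direct"  # the visitor only ever arrived directly

path = ["organic", "email", "direct", "direct"]
print(last_non_direct_click(path))  # "email" gets all the credit
```

The sketch makes the weakness obvious: "organic" started the journey yet receives nothing, which is why a rule-based model won’t tell you how to redistribute budget across channels.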
Unlike Google Analytics, Google Analytics 360 presents a multi-channel attribution model that learns from your data on advertising channels. Google Analytics 360 shows you how channels contribute to the growth of conversions and how they interact, but it doesn’t take into account your order performance, call center activity, or brick-and-mortar sales.
How precisely does this model work? Only Google knows. But that black-box functionality is still acceptable for Google Analytics 360 users who:
Google Analytics 360 allows you to download your whole attribution model as a CSV file to rebuild it or conduct further analysis. Remember that you have to feed this model with regular traffic and conversion data for the results to be representative.
Also keep in mind that your attribution model can be rebuilt only once a week and can take into account only data from the past 90 days.
Ultimately, while the Google Analytics 360 attribution options are great for working with data-driven attribution models, they won’t help companies that want a transparent mechanism and clear logic for attribution calculations.
Ad cost attribution can’t be calculated per user or per user segment in Google Analytics or Google Analytics 360, and you can’t get info on how much it costs to acquire a user who made a particular purchase.
The root of this problem is hidden in the logic of both tools. All data—even data stored in BigQuery—is already aggregated after collection via Google Analytics tags, so it’s impossible to go to the hit level to learn more about your customers.
Unsampled reports and custom tables in Google Analytics 360 are great for those who are tired of the yellow warning sign that indicates sampled data in reports built on 500,000+ sessions in Google Analytics (or even 100,000+ sessions depending on the number of additional parameters).
If you’ve seen this warning a thousand times, then you’ve outgrown Google Analytics. Google Analytics 360 has a 100-million-session ceiling that’s harder to reach.
On those rare occasions when you do encounter sampling in Google Analytics 360, you can download any report in an unsampled format (up to 3 million rows for Google Analytics 360 versus only 50,000 for Google Analytics).
Alternatively, you can build a custom table for ongoing reports for which you want to see unsampled data, or you can use the Google BigQuery integration to build the report you need with an SQL request.
You can have up to 100 custom tables in Google Analytics 360, with up to 1 million unique rows in each.
With Google Analytics 360, you can have 200 custom dimensions and 200 custom metrics. In Google Analytics, you have one-tenth the number—20 custom dimensions and 20 custom metrics.
Custom or calculated metrics are a must if you need to observe a range of marketing parameters (like specialized lifetime value or improved customer acquisition costs) to improve your performance.
You may build up to 50 calculated metrics across all Google Analytics 360 Properties.
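A calculated metric is, at bottom, an arithmetic formula over metrics Google Analytics already collects. As a hedged illustration of what such a formula computes (the field names here are invented, and in Google Analytics 360 you would define the formula in the Admin UI, not in code), a cost-per-acquisition metric might look like this:

```python
# Illustration of what a calculated metric computes: arithmetic over
# existing metrics. Names are invented for this sketch; in Google
# Analytics 360 the formula is configured in the Admin UI.

def cost_per_acquisition(ad_cost: float, goal_completions: int) -> float:
    """Ad spend divided by conversions, guarding against a
    divide-by-zero when there are no conversions yet."""
    if goal_completions == 0:
        return 0.0
    return round(ad_cost / goal_completions, 2)

print(cost_per_acquisition(ad_cost=1200.0, goal_completions=48))  # 25.0
print(cost_per_acquisition(ad_cost=500.0, goal_completions=0))    # 0.0
```

The 50-metric cap means you have to ration these formulas, so it pays to reserve them for the ratios you report on continuously.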
A custom funnel allows you to add or delete steps, personalizing the funnel to match (or, at least, better approximate) your real customer journey. With custom funnels in Google Analytics 360, you have a tool with five available steps and five different rules to modify them.
Each step can be analyzed for bottlenecks and obstacles, or can be tested to see how certain pages help or hinder customers on their way to making a purchase.
In the free version of Google Analytics, you can’t customize funnels.
Query time import is a beta feature in Google Analytics 360 for those who want to improve historical hit data with imported data (e.g., offline data) from other sources and immediately build reports on it. It makes your reporting datasets more comprehensive.
For example, if you want to build a report on your last big advertising campaign (for which offline points of sale identified new customers), you can use query time import to join the offline data with hit data for that period. By combining historical hit data with offline data, you can reveal the campaign’s true effectiveness.
Admittedly, this functionality is tiresome to use—no automatic import is available and you have to launch the query time import tool each time you need a report. This annoys even the biggest fans of Google Analytics 360.
Integrations are the second most important reason for upgrading to the paid version of Google Analytics. If you plan to use certain tools, then Google Analytics 360 is the best choice as a single source of truth for your analytics system.
Google Analytics 360 has native integrations with:
These integrations multiply the value of Google Analytics 360 by:
All of these features are available in Google Analytics, but only through third-party tools, not in the native, automated format of Google Analytics 360. (Search Ads 360 and Display & Video 360 can still be tracked without Google Analytics 360 thanks to Floodlight.)
The audience possibilities are extremely important here—typically, within Google Analytics, you can’t export keyword, social, or other data about the customer’s path to your website. But with Google Analytics 360, you can, using built-in tools and native integrations.
Post-view conversions—the golden dream of marketers who use media ads—are reported only at the campaign level with the help of Ads Data Hub. To overcome this challenge, you have to export DT files and run custom reports. Google Analytics 360 isn’t the best tool for that.
Features aside, the answers to a few questions can help you find out if your company would benefit from Google Analytics 360.
Based on our experience as a major Google Analytics 360 partner and reseller, companies that benefit from Google Analytics 360 have the following characteristics:
Companies should reconsider their need for Google Analytics 360 if:
Google Analytics 360 is not enough on its own in the following situations:
Other serious limitations of Google Analytics 360 that should be considered before purchasing:
When comparing Google Analytics and Google Analytics 360, consider the following:
The post Google Analytics vs. Google Analytics 360 (Based on a Decade of Implementations) appeared first on CXL.
Identity resolution and more holistic approaches to measurement are the way forward, according to 11 experts.
Now that third-party cookies are on death watch, many questions are arising about post-cookiepocalypse marketing. Among them: what happens to attribution, and what current or future methodologies will take its place?
To better understand the challenge of attribution going forward, we asked a range of marketing and martech executives to comment on replacement solutions and alternatives. Their reactions and responses cluster around three big themes: the importance of first-party data and customer engagement, identity resolution as a successor to cookies and developing a more sophisticated, holistic approach to measurement.
Considering the impending changes from Google, we believe it’s crucial for all brands to choose an approach to measurement that will allow them to gain the most accurate results when dealing with data loss. It’s vital to continuously experiment with, test and validate measurement strategies while incorporating an adaptive methodology. This is at the core of Mix Modeling and, when combined with continuous assessment of data quality, is key to ensuring robust results.
First-party data is going to continue to grow in importance and what is currently known as multi-touch attribution will morph into blended or more siloed solutions, used in a more limited way to better understand touchpoints. Analytic Partners has already been adapting and leveraging touchpoint analytics to glean tactical user-level insights.
In the short term, we will see marketers grasping at the basic and mostly ineffective practice of last-click attribution, and an uptick in federated login systems (already in play in the EU). Longer term, marketers will have to look to mapping audience segments on the open marketplace (an industry standard and buy-in will be prerequisites), contextual targeting and federated learning. In the absence of conversion tracking, marketers can and should look to a unified ID solution, which opens up new opportunities beyond digital and addresses user touchpoints across all channels.
Although cookies have started to crumble, they will not disappear completely for some time. In this new “mixed economy,” marketers will need to find new and creative ways to assess the impact of digital campaigns in a privacy-compliant way. As 2020 progresses, we may see some publishers using alternative measurement solutions based around deterministic IDs and panels, and we predict more direct integrations between publishers and measurement partners to enable the transfer of anonymized data. Other advertisers, publishers and agencies will turn to lab-based approaches to understand the effectiveness of digital media.
What is certain is that campaign measurement will become ever more complex. Marketers will need to future-proof their measurement frameworks and reduce their reliance on cookies for tracking. And many will turn to third-party data and analytics, which is the most trusted in the industry, to maintain accurate campaign measurement in the evolving media landscape.
Even before cookies were slated for extinction, attribution always had its limitations. For the most part, it was mostly about digital – so it left out many important parts of the marketing mix. Over time, this encouraged marketers to over-value (easy to measure) short-term activation at the expense of (harder to measure) long-term brand building. A lot of evidence shows this was short-sighted and led many brands to lose market share, differentiation and pricing power. And even within the realm of activation, it proved hard to assign credit properly in complex environments without at least some experimental design component.
The loss of cookies is likely to make it harder still to sustain credible systems for linking ad exposures to ad outcomes across the media landscape – at least outside of the walled gardens. In the immediate future, I would expect marketers to pursue attribution analytics increasingly within walled gardens rather than across them. I would expect increasing numbers of media companies to attempt to build their own walled gardens by encouraging or requiring unified sign-in (policies that are very congenial to subscription services and to dual revenue-stream business models). And though I also expect that a number of players will attempt to resurrect cookies through other types of IDs linking websites, devices, and platforms, these will continue to run against the headwinds of public and policy pressures for data privacy.
Despite removing third-party cookies, none of the major web browsers are trying to take away a website’s ability to track its own users. Marketers can expect continued click-through conversion to some extent indefinitely. However, they may have to rely more on their own website analytics for data instead of third parties.
Beyond that, marketers can apply the same measurement techniques used in the offline world to the measurement of their online campaigns. Marketers can use geo-based or time-based testing to determine the broader impacts of their campaigns beyond what can be measured directly. These approaches to measurement can also help estimate causal impact, as opposed to measuring only correlation, which is typical for online advertising measurement today.
The impact of data privacy regulations has consolidated power into the hands of platforms, which, to attain compliance, have nixed third-party data and measurement, instituting end-to-end reliance for marketers. Now cookies are crumbling, but the milk has already been spilled. To move forward, advertisers and marketers must shed a campaign-centric mentality and find ways to invite consumers into direct relationships, where the resulting data is their own. Expect to see more ads prompting consumers to install and share mobile wallet coupons, opt in to SMS shortcodes, or engage in meaningful ways on brand-owned properties. Necessity may herald a renaissance, where brand marketers shift focus from interruptive tactics ported to the mobile era to more authentic, contextual interactions that allow them to be there in consumers’ moments in helpful and handy ways.
Over time, we expect to see more brands look to mobile device IDs as a means to craft a more complete picture of their customers and measure the results of digital campaigns. Brands are already using location data-driven products to better understand their audiences, personalize the messages delivered to them based on their interests, and measure in-store visitation results, and we expect to see more marketers turn to location as part of a holistic strategy.
Major browsers are building, or have already built, anonymized ways for digital ad attribution to be captured via APIs. Building out a robust architecture to interact with each browser’s unique requirements will be a significant undertaking for marketers. Ultimately, all of this still points to a greater need [for] investment [in] identity resolution and building a better direct relationship with the customer to take advantage of first-party data and reporting.
The best way forward for marketers is to stop relying on the easy wins in digital. Marketers must create their own future by building relationships with customers so they are more willing to share their data and by investing in identifying customers across devices.
With user privacy now top of mind and the clock winding down on third-party cookies, attribution is going to become both more complicated and more expensive for marketers to measure. To reach the same levels of accuracy in attribution that we see today, without relying on third-party cookies, marketers will need the ability to stitch identity together across addressable channels using first-party data. On top of that, any new solutions will need to comply with standards for collecting and resolving first-party data in our emerging opt-in (not opt-out) consumer marketing economy.
This problem isn’t new, however. Our ability to assign precise value to marketing channels that address the same person or household — everything from direct mail to cookie-targeted display — has always been difficult. And, it’s been harder in places where addressability is nearly nonexistent, like CPG products being sold to customers of Walgreens, for instance. Now that cookies can no longer serve as a reliable identifier for marketers, our industry is finally being forced to create new, privacy-first ways of leveraging first-party data to plan, track and measure the performance of campaigns across channels.
Identity resolution – and, specifically, a provider’s approach to it – will determine the relative impact marketers will face in a world beyond the cookie. Leveraging offline identity (PII), which is rooted in more stable identifiers like name, address, and phone number — as well as direct integrations with platforms and publishers, inclusive of walled gardens — gives marketers a clear path forward to doing attribution in a post-cookie world. Effective and reliable attribution measurement has always required looking beyond the cookie to capture the whole customer journey. This is the only way to accurately quantify marketing’s incremental impact to power both tactical and strategic planning, and investment decisions.
Third-party cookies are an “easy button” for retargeting across popular networks like Facebook, but they don’t provide insight across platforms (Facebook vs. Amazon, for example) or granular data on behavior (who, what, when, where, and why). Without that important context, a third-party cookie can only really tell you that a “visitor” came back, and, even then, usually can’t tell you who came back unless that person converts by filling out a form, making a purchase, etc. So the cookie changes might affect some marketing vanity metrics (e.g., retargeting CTR) and make certain multi-touch attribution models less accurate, but I don’t see it having an impact on the most important metric: sales conversions.
At a high level, focus on creating great experiences and people will still trade their data. People will still opt in for valuable tools or resources, which is great news for everyone, since the quality of marketing goes up across the board. From a technical standpoint, consider taking control of your own data and embrace first-party cookies; there are several data platforms today that let you do this. This allows you to do your own retargeting through DSPs and provide personalized audiences to platforms like Facebook based on your own actual product or website activity. Even better, these technologies can let you resolve identity across different media “walled gardens” so you can better understand the “who, what, when, where” and maybe even “why” of user behavior, which is where real attribution comes in.
A mix of first and third party content is required to seal the deal.
The post B2B buyers consume an average of 13 content pieces before deciding on a vendor appeared first on Marketing Land.
The average B2B buyer’s journey involves consumption of 13 pieces of content. That’s the principal finding of a new survey from market research firm FocusVision. The company polled marketing executives at companies with at least 500 employees and $50 million in annual revenue who had purchased a martech solution in the past year.
A mix of 1st and 3rd party content. The 13 content pieces break down into an average of eight vendor-created pieces and five from third parties. This content ranges from video to blog posts, white papers and customer testimonials to software reviews and analyst reports.
According to the report, the B2B buying process takes on average two to six weeks and involves 3–4 internal decision-makers. The top source of content was the vendor’s website, followed by search and social media. Asked “how did you find content,” these survey respondents said:
FocusVision identified four buying stages (and the content reviewed at each stage in the process): 1) understanding the problem, 2) looking for vendors, 3) short-listing and 4) final decision.
Content reviewed at each stage of the B2B buyer’s journey
Websites and peer reviews. The consumption of content is not entirely linear. Vendor websites, for example, are visited throughout the buyer’s journey. Peer reviews were consulted at the top and bottom of the funnel as well.
The most useful types of content to aid purchase decision-making were those that addressed: product specifications and functionality (67%), product comparisons (65%), product success stories (60%), content to specifically show value to internal stakeholders (54%), product tutorials (49%) and guidance on my problem/how to solve it (48%).
Larger companies, with revenues above $250 million and $500 million, displayed some differences from the average, according to FocusVision. Larger companies tended to rely more heavily on third-party sources — third-party websites, analyst reports and third-party articles — probably because of their perceived independence.
Why we care. We know that content is incredibly important for ranking in search. It’s also critical for sales support. But this report makes clear there are a broad range of first and third party content types that are highly influential to B2B buyers. It also shows how critical the vendor website is in the buying process. Indeed, the report basically outlines a content strategy for the entire B2B buyer’s journey.
Here’s where search marketers find themselves in the current entanglement of data and privacy and where we can expect it to go from here.
January 2020 felt like a turning point. CCPA went into effect, Google Chrome became the latest browser to commit to a cookie-less future and, after months of analytics folks sounding the alarm, digital marketers sobered to a vision of the future that looks quite different than today.
This article is not a complete history of consumer privacy nor a technical thesis on web tracking, although I link to a few good ones in the following paragraphs.
Instead, this is the state of affairs in our industry, an assessment of where search marketers find themselves in the current entanglement of data and privacy and where we can expect it to go from here.
This is also a call to action. It’s far from hyperbole to suggest that the future of digital and search marketing will be greatly defined by the actions and inactions of this current calendar year.
Why is 2020 so important? Let’s assume with some confidence that your company or clients find the following elements valuable, and review how they could be affected as the associated trends unfold this year.
Despite the setbacks, it is not my intention to imply that improved regulation is a misstep for the consumers or companies we serve. Marketing is at its best when all of its stakeholders benefit and at its worst when an imbalance erodes mutual value and trust. But the inevitable path ahead, regardless of the destination, promises to be long and uncomfortable unless marketers are educated and contribute to the conversation.
That means the first step is understanding the basics.
Search marketers know more than most about web tracking. We know enough to set people straight at dinner parties — “No, your Wear OS watch is not spying on you” — and follow along at conferences like SMX when a speaker references the potentially morbid future of data management platforms. Yet most of us would not feel confident in front of a whiteboard explaining how cookies store data or advising our board of directors on CCPA compliance.
That’s okay. We’ve got other superpowers, nice shiny ones that have their own merit. Yet the events unfolding in 2020 will define our role as marketers and our value to consumers. We find ourselves in the middle of a privacy debate, and we should feel equipped to participate in it with a grasp of the key concepts.
A cookie stores information that is passed between browser and server to provide consistency as users navigate pages and sites. Consistency is an operative word. For example, that consistency can benefit consumers, like the common shopping cart example.
Online shoppers add a product to the cart and, as they navigate the site, the product stays in the shopping cart. They can even jump to a competitor site to price compare and, when they return, the product is still in the shopping cart. That consistency makes it easier for them to shop, navigate an authenticated portion of a site, and exist in a modern multi-browser, multi-device digital world.
Consistency can also benefit marketers. Can you imagine what would happen to conversion rates if users had to authenticate several times per visit? The pace of online shopping would grind to a crawl, Amazon would self-combust, and Blockbuster Video would rise like a phoenix.
But that consistency can violate trust.
Some cookies are removed when you close your browser. Others can accrue data over months or years, aggregating information across many sites, sessions, purchases and content consumption. The differences between cookie types can be subtle while the implications are substantial.
It is important for marketers to understand that first- and third-party cookies are written, read and stored in the same way. Simo Ahava does a superb job expanding on this concept in his open-source project that is absolutely recommended reading. Here’s a snippet.
It’s common in the parlance of the web to talk about first-party cookies and third-party cookies. This is a bit of a misnomer. Cookies are pieces of information that are stored on the user’s computer. There is no distinction between first-party and third-party in how these cookies are classified and stored on the computer. What matters is the context of the access.
The difference is the top-level domain that the cookie references. A first-party cookie references and interacts with one domain and its subdomains.
A third-party cookie references and interacts with multiple domains.
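The “context of the access” point can be made concrete: the same cookie is first-party or third-party depending on whether the domain that set it shares a registrable domain with the page the user is viewing. A simplified sketch follows; it naively treats the last two host labels as the registrable domain, whereas real browsers refine this with the Public Suffix List.

```python
def registrable_domain(host: str) -> str:
    """Naive eTLD+1: the last two labels of the hostname.

    Real browsers consult the Public Suffix List, so a host like
    'example.co.uk' would be handled incorrectly by this sketch.
    """
    return ".".join(host.lower().rstrip(".").split(".")[-2:])

def cookie_party(page_host: str, cookie_host: str) -> str:
    """Classify a cookie by access context, not by how it is stored."""
    same_site = registrable_domain(page_host) == registrable_domain(cookie_host)
    return "first-party" if same_site else "third-party"

print(cookie_party("www.example.com", "analytics.example.com"))  # first-party
print(cookie_party("www.example.com", "tracker.adnetwork.com"))  # third-party
```

Note that the cookie set by `tracker.adnetwork.com` is identical in storage either way; it becomes “third-party” only because the page being viewed belongs to a different registrable domain.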
Marketing Land has a helpful explainer, aptly called WTF is a cookie, anyway? If you’re more of a visual learner, here is a super-simplistic explanation of cookies from The Guardian. Both are from 2014, so they’re not current, but the basics are still the basics.
Persistent cookies and session cookies refer to duration. Session cookies expire at the end of the session when the browser closes. Persistent cookies do not. Data duration will prove to be an important concept in the regulation sections.
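The session/persistent distinction is visible in the Set-Cookie header itself: a cookie with no Expires or Max-Age attribute dies when the browser closes, while one with an expiry survives the session. A minimal sketch of both headers (the cookie names and values are placeholders):

```python
from typing import Optional

def set_cookie_header(name: str, value: str,
                      max_age_seconds: Optional[int] = None) -> str:
    """Build a Set-Cookie header value.

    Omitting Max-Age/Expires yields a session cookie; including Max-Age
    yields a persistent cookie that outlives the browser session.
    """
    header = f"{name}={value}; Path=/; Secure; HttpOnly"
    if max_age_seconds is not None:
        header += f"; Max-Age={max_age_seconds}"
    return header

print(set_cookie_header("sid", "abc123"))                      # session cookie
print(set_cookie_header("prefs", "dark", 60 * 60 * 24 * 365))  # persistent, ~1 year
```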
Cookies are not the only way to track consumers online. Fingerprinting, which uses the dozens of browser and device settings as unique identifiers, has gotten a lot of attention from platform providers, including a foreshadowed assault in Google’s Privacy Sandbox announcement.
Privacy Sandbox is Google’s attempt at setting a new standard for targeted advertising with an emphasis on user privacy. In other words, Google’s ad products and Chrome browser hope to maintain agreeable levels of privacy without the aggressive first-party cookie limitations displayed by other leading browsers like Safari and Firefox.
It would be convenient if we could start this story with one event, like a first domino to fall, that changed the course of modern data privacy and contributed to the world we see in 2020. For example, if you ask a historian about WWI, many would point to a day in Sarajevo. One minute Ol’ Archduke Ferdinand was enjoying some sun in his convertible, the next minute his day took a turn for the worse. It is hard to find that with tracking and data privacy.
Facebook’s path to monetization certainly played a part. In the face of market skepticism about the social media business model, Facebook found a path to payday by opening the data floodgates.
While unfair to give Facebook all the credit or blame, the company certainly supported the narrative that data became the new oil. An iconic Economist article drew several parallels to oil, including the consolidated, oligopolistic tendencies of former oil giants.
“The giants’ surveillance systems span the entire economy: Google can see what people search for, Facebook what they share, Amazon what they buy,” the Economist wrote. “They own app stores and operating systems, and rent out computing power…”
That consolidation of data contributed to an increase in the frequency and impact of data leaks and breaches. Like fish in a bucket, nefarious actors knew where to look to reap the biggest rewards on their hacking efforts.
It was a matter of time until corporate entities attempted to walk the blurring line of legality, introducing a new weaponization of data that occurred outside of the deepest, darkest bowels of the internet.
Enter Cambridge Analytica. Two words that changed the way every web analyst introduced themselves to strangers. “I do analytics but, you know, not in, like, a creepy way.”
Cambridge Analytica, the defunct data-mining firm entwined in political scandal, shed a frightening light on the granularity and unchecked accessibility of platform data. Investigative reporting revealed to citizens around the world that their information could not only be used by advertising campaigns to sell widgets, but also by political campaigns to sell elections. For the first time in many homes, the effects of modern data privacy became tangible and personal.
The state of data privacy in 2020 can perhaps best be understood by framing it in terms of drivers and destinations. Consumer drivers, like those mentioned in the previous section, created reactions from stakeholders. Some micro-level outcomes, like actions taken by individual consumers, were predictable.
For example, the #deletefacebook hashtag first trended after the Cambridge Analytica story broke and surveys found that three-quarters of Americans tightened their Facebook privacy settings or deleted the app on their phone.
The largest outcomes are arguably happening at macro levels, where one (re-)action affects millions or hundreds of millions of people. We have seen some of that from consumers with the adoption of ad blockers. For publishers and companies that live and die with the ad impression, losing a quarter of your ad inventory due to ad blockers was, and still is, devastating.
Only weeks after Cambridge Analytica found its infamy in the headlines, the European Union adopted GDPR to enhance and defend privacy standards for its citizens, forcing digital privacy discussions into both living rooms and board rooms around the world.
Let’s use the following Google Trends chart for “data privacy” in the United States to dive deeper into five key outcomes.
General Data Protection Regulation (GDPR) has handed out more than €114 million in fines to companies doing business in the EU since becoming enforceable in May 2018. It’s been called “Protection + Teeth” in that the law provides a variety of data protection and privacy rights to EU citizens while allowing fines of up to €20 million or 4 percent of annual global revenue, whichever hurts violators the most.
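The “whichever hurts violators the most” rule is simply the larger of the two caps: a flat €20 million or 4 percent of annual revenue. A one-line sketch:

```python
def gdpr_max_fine(annual_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine: the greater of a flat 20M EUR
    or 4 percent of annual revenue."""
    return max(20_000_000.0, 0.04 * annual_revenue_eur)

print(gdpr_max_fine(100_000_000))    # the 20M EUR floor applies
print(gdpr_max_fine(1_000_000_000))  # 4 percent (40M EUR) exceeds the floor
```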
Months later, the United States welcomed the California Consumer Privacy Act (CCPA), which went into effect in January 2020 — becoming enforceable in July. Similar to GDPR, a central theme is transparency, in that Californians have the right to understand which data is collected and how that data is shared or sold to third parties.
CCPA is interesting for a few reasons. California is material. The state represents a double-digit share of both the US population and gross domestic product. It is also not the first time that California’s novel digital privacy legislation has influenced a nationwide model. The state introduced the first data breach notification laws in 2003, and other states quickly followed.
California is not alone with CCPA, either. Two dozen US state governments have introduced bills around digital tracking and data privacy, with at least a dozen bills still pending. That includes Nevada’s SB220, which was enacted and became enforceable within a matter of months in 2019.
Corporate responses have come in many forms, from the ad blockers I mentioned to platform privacy updates to the dissolution of ad-tech providers. I will address some of these stories and trends in the following section, but, for now, let’s focus on the actions of one technology that promises to trigger exponential effects on search marketing: web browsers.
The Safari browser introduced Intelligent Tracking Prevention (ITP) in 2017 to algorithmically limit cross-site tracking. Let’s pause to dissect the last few words in that sentence.
ITP 1.0 was only the beginning. From there, the following iterations tightened cookie duration, storage, and the role of first-party cookies for web analytics. Abigail Matchett explains the implications for users of Google Analytics.
“All client-side cookies (including first-party trusted cookies such as Google Analytics) were capped to seven days of storage. This may seem like a brief window as many users do not visit a website each week. However, with ITP 2.2 and ITP 2.3… all client-side cookies are now capped to 24-hours of storage for Safari users… This means that if a user visits your site on Monday, and returns on Wednesday, they will be granted a new _ga cookie by default.”
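The practical effect of that cap can be sketched as a timestamp comparison: if the gap between visits exceeds the cap, the _ga cookie has been purged and the visitor gets a new client ID, splitting one person into two “users.” The cap values below mirror the 7-day and 24-hour windows described above; everything else is illustrative.

```python
from datetime import datetime, timedelta

def keeps_client_id(last_visit: datetime, return_visit: datetime,
                    cap: timedelta) -> bool:
    """True if the client-side cookie survives until the return visit."""
    return return_visit - last_visit <= cap

# Monday visit, Wednesday return: a two-day gap.
monday = datetime(2020, 3, 2, 9, 0)
wednesday = monday + timedelta(days=2)

print(keeps_client_id(monday, wednesday, cap=timedelta(days=7)))    # True: 7-day window
print(keeps_client_id(monday, wednesday, cap=timedelta(hours=24)))  # False: 24-hour window
```

Under the 24-hour cap the Wednesday visit is counted as a brand-new user, which is exactly the attribution distortion described above.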
You are beginning to see why this is a big deal. Whether intended or not, these actions reinforce the use of quantitative metrics rather than quality measures by obstructing attribution. There is far more that can be said about ITP, so if you are ready for a weekend read, I recommend this thorough technical assessment of the effects of ITP 2.1 on analytics.
If ITP got marketers’ attention, Google reinforced it by announcing that Chrome would stop supporting third-party cookies within two years, codifying for marketers that cookie loss was not a can to be kicked down the road.
“Cookies have always been unreliable,” Simo Ahava told me. “To be blind-sided by the recent changes in web browsers means you haven’t been looking at data critically before. We are entering a post-cookie world of web analytics.”
The state of tracking and data privacy can take several paths from here. I outline a few of the most plausible then ask others in the analytics and digital space to offer their insights and recommendations.
2020 Path A: Lack of clarity leads to little change from search marketers
This outcome seemed like a real possibility in the first week of January as California enacted CCPA while enforcement deadlines got delayed. It was not yet clear what enforcement would look like later in the year and it appeared, despite big promises, that tomorrow would look a lot like today.
This path looked less likely after the second week of January. That leads us to the next section.
2020 Path B: Compounding tracking limitations keep marketers on their heels
Already in 2020 we have seen CCPA take effect, Chrome put cookies on notice, stocks for companies that rely on third-party cookies tumble, and the sacrifice of data providers that threatened consumer trust.
And that’s just January.
2020 Path C: Correction as consumer fear eases in response to industry action
The backlash to tracking and privacy is a reaction to imbalance. Consumers are protecting their data, politicians are protecting their constituents, and platforms are protecting their profits. As difficult as it is to see from our vantage point today, it’s most likely that these imbalances will normalize as stakeholders feel safe. The question is how long it will take and how many counter adjustments are required in the wake of over or under correcting.
As digital marketers, we in some ways represent both the consumers with whom we identify and the platforms on which we depend, and we are in a unique position to expedite the correction and return to balance.