What’s the “right” way to measure the success of your advertising? Contributor Peter Minnium says the answer is “it depends,” and goes on to explain why.
The Super Bowl is the single most viewed televised event in the US. Every year, more than 100 million people tune in to see who will be crowned NFL champion. There is no bigger or brighter stage for impactful ad content, so it is no wonder that a 30-second commercial this year sold for an average of $5 million. With the stakes that high, brands fight tooth and nail for audience attention and, like most consumers, they want to make sure they get their money’s worth.
Though the need for strong evaluative metrics is clear, the means of selection is less evident. Ad Meter, Real Eyes, YouTube, Ipsos (my employer) and many others all created rankings of winners — but each used a different system and got different results. This year, Amazon’s Alexa ad topped USA Today’s Ad Meter rankings, while Tide, which did not even make the Ad Meter top 10, won out in the inaugural neuro study conducted by Ipsos.
When you’re right, you’re right
While your first instinct may be to assume that one of these studies got it wrong, the divergence in these ratings represents differences in testing approaches, each reflecting a distinct goal and research philosophy. Though methodologies vary, the most dramatic distinctions arise between studies that evaluate audiences’ conscious responses and those that measure nonconscious reactions.
Behavioral science research has revealed that people employ two kinds of information processing: System 1 and System 2. The former is automatic, rapid, efficient, and often operates below our conscious awareness. It can be thought of as our intuitive “gut” reactions and feelings. By contrast, System 2 is controlled, analytical, and deliberate. It is only active when we have the ability and the motivation to consciously process information.
System 1 and System 2 in the spin cycle
These different systems come into play at different points in consumers’ exposure to a brand’s narrative. In the early stage of a product’s lifecycle, the best communication strategy may be to inform and educate prospects as to the new product’s superiority to other existing solutions. Emotional appeals may not reach a skeptical consumer, but providing viewers with the information they need to make an informed judgment can help win them over.
After the early stages of a product’s lifecycle, when the shine of the new has fallen away and other products have caught up in terms of rational benefits, it’s the brand’s image, relative appeal, and closeness that drive consumer choice. Lists of product benefits and demonstrations of their relative strengths become burdensome to viewers, while emotionally stirring or visually exciting ads can captivate their attention and maintain their interest.
Picking the right tool for measuring effectiveness
Just as the stages of a product’s lifecycle should inform the choice between educational and emotive advertising, a brand’s needs dictate whether an understanding of an ad’s System 1 or System 2 impact is most relevant. When making this determination, it is important to consider the strengths and limitations of viewer self-reporting. While System 2 studies may yield more thoughtful responses, they can overlook nonconscious viewer engagement.
Like most research techniques, the Ad Meter evaluates participants’ articulated responses to individual ads. It relies entirely on viewers’ conscious assessment of the ads, with self-recruited panelists scoring each ad on a 1-to-10 scale. As marketers, we are very familiar with these types of survey questions: “Which ad did you like best?” “Which ads did you dislike?” “Which ads were most memorable?” These are all examples of System 2 processing.
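A System 2 study of this kind ultimately reduces to simple aggregation of conscious ratings. As a toy illustration — the ad names, panel size, and scores below are invented, and Ad Meter’s actual data handling may differ:

```python
# Hypothetical sketch of an Ad-Meter-style ranking: each panelist
# scores each ad from 1 to 10, and ads are ranked by mean score.
from statistics import mean

# Invented panel data: ad name -> list of panelist scores (1-10)
scores = {
    "Ad A": [9, 8, 10, 7],
    "Ad B": [6, 7, 5, 8],
    "Ad C": [8, 9, 9, 10],
}

# Rank ads from highest to lowest average score
ranking = sorted(scores, key=lambda ad: mean(scores[ad]), reverse=True)
print(ranking)  # → ['Ad C', 'Ad A', 'Ad B']
```

The limitation the article points to is visible even here: the only input is what viewers consciously chose to report, so anything nonconscious never enters the data.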
Measure reactions with the blink of an eye
Methodologies that measure System 1 processing of ad content may be less familiar, but they can be powerful. Neuroscience has armed researchers with an arsenal of techniques, including EEG (electroencephalogram), Facial Coding, Eye Tracking, and Biometrics. The merit of each approach depends on how important it is to recreate a natural viewing environment and on which level of analysis most directly answers a brand’s questions.
- EEGs measure fluctuations in the brain’s electrical activity using a headset. They provide great temporal resolution, matching voltage spikes and waves to the introduction of stimuli, and can be read quickly, giving almost immediate results. However, EEGs struggle to simulate a natural viewing environment, as respondents are required to wear a headset and must keep still while participating in the study.
- Facial coding reads emotions from facial expressions — happiness, surprise, sadness, disgust, fear, and confusion. It can track minute changes in mood, revealing if a scene is having the desired effect. Capable of distinguishing between a sincere and an insincere smile, it can provide great insight about the true emotional state of viewers when watching ads, but participants need to sit in front of a camera and remain focused on the content.
- Eye tracking measures the movement of a viewer’s gaze across the screen and allows practitioners to see through the audience’s eyes. It is one of the best ways to determine what attracts and captures a viewer’s attention, though it does little to explain what (if anything) the viewer is experiencing. Eye tracking also requires participants to sit in front of a specialized camera, which, while less intrusive than an EEG, is still short of a natural viewing environment.
- Biometrics can be measured in many ways, including Galvanic Skin Response (GSR), also referred to as skin conductance or electro-dermal activity. When the viewer becomes excited, sweat production increases, making it possible to directly monitor the impact of stimuli on the viewer. Measuring GSR is far less disruptive than other System 1 tests, as it neither requires cumbersome equipment nor severely limits participants’ movement.
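Of these, GSR lends itself to the simplest computational sketch: excitement shows up as a rise in skin conductance above a viewer’s resting baseline. The snippet below is a minimal illustration with invented numbers — the smoothing window, the 0.05 µS threshold, and the sample trace are assumptions, not any vendor’s actual signal processing:

```python
# Hypothetical sketch: flagging arousal "peaks" in a skin-conductance
# (GSR) trace measured in microsiemens. Window and threshold values
# are illustrative assumptions only.

def moving_average(signal, window=5):
    """Smooth the raw trace to suppress sensor noise."""
    if len(signal) < window:
        return list(signal)
    return [
        sum(signal[i:i + window]) / window
        for i in range(len(signal) - window + 1)
    ]

def find_arousal_peaks(signal, baseline, threshold=0.05):
    """Return indices where the smoothed conductance rises more than
    `threshold` microsiemens above the viewer's resting baseline."""
    smoothed = moving_average(signal)
    return [i for i, v in enumerate(smoothed) if v - baseline > threshold]

# Toy trace: flat baseline near 2.0 uS with a brief spike mid-way,
# e.g. the moment a memorable ad scene plays.
trace = [2.0, 2.01, 2.0, 2.02, 2.0, 2.3, 2.4, 2.35, 2.05, 2.0, 2.0]
peaks = find_arousal_peaks(trace, baseline=2.0)
print(peaks)  # → [1, 2, 3, 4, 5, 6]
```

Note the contrast with the survey example: here the signal is produced by the body automatically, whether or not the viewer could articulate what they felt.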
Tides of sweat make for winning ads
GSR was chosen for Ipsos’ study because it was deemed the best way to collect System 1 data while preserving the natural viewing environment so important for the Super Bowl. The human body has the highest number of sweat glands in the hands and feet, making them ideal sites for testing. In the interest of both maximizing accuracy and minimizing disruption, a wearable Shimmer GSR device — designed to be attached to participants’ fingers — was used.
Participants enjoyed the Super Bowl in the comfort of a small theater while the devices tracked their levels of excitement. When Alshon Jeffery reached for a one-handed catch, Eagles fans tensed in anticipation: their heart rates spiked and sweat production increased as part of their bodies’ natural response. Every time David Harbour delivered his soon-familiar line, “It’s a Tide ad,” their amusement was nonconsciously expressed the same way.
It all comes back to the brand
The ads in this year’s Super Bowl were tested and ranked by no fewer than six companies using various System 1 and System 2 methodologies. Each was “right” for a specific context, delivering a unique perspective on the ads’ impact on viewers. Smart marketers and their agencies know to choose a methodology based on their specific objectives, and they are paying increasing attention to whether their communications need to engage System 1 or System 2 processing.
Are you looking for ways to increase your organic visibility and rankings in local search results? Contributor Kristopher Jones shares how to shine in local search results using locally focused content.
The rise of mobile search has led to many changes in SEO, but none more dramatic than the area of local search.
By now, most of us are familiar with the Google update known as Pigeon. Launched in 2014, it allowed greater search visibility for local directories, which helped local search engine optimization (SEO) establish a foothold. With mobile usage surpassing desktop and Google reporting that more than one-third of mobile searches are locally related, it’s no wonder local SEO has become an important part of an SEO’s overall strategy.
Right after the Hawaii false nuclear alarm, I posted about how the user interface seemed to contribute to the error. At the time, sources were reporting it as a “dropdown” menu. Well, that wasn’t exactly true, but in the last few weeks it’s become clear that truth is stranger than fiction. Here is a run-down of the news on the story (spoiler, every step is a human factors-related issue):
- Hawaii nuclear attack alarms are sounded, also sending alerts to cell phones across the state
- Alarm is noted as false and the state struggles to get that message out to the panicked public
- Error is blamed on a confusing drop-down interface: “From a drop-down menu on a computer program, he saw two options: ‘Test missile alert’ and ‘Missile alert.’”
- The actual interface is found and shown – rather than a drop-down menu, it’s just closely clustered links on an interface that looks like a 1990s-era website, reading “DRILL-PACOM(CDW)-STATE ONLY” and “PACOM(CDW)-STATE ONLY”
- It comes to light that part of the reason the wrong alert stood for 38 minutes was that the Governor didn’t remember his Twitter login and password
- Latest news: the employee who sounded the alarm says it wasn’t an error, he heard this was “not a drill” and acted accordingly to trigger the real alarm
The now-fired employee has spoken up, saying he was sure of his actions and “did what I was trained to do.” When asked what he’d do differently, he said “nothing,” because everything he saw and heard at the time made him think this was not a drill. His firing is clearly an attempt by Hawaii to get rid of a ‘bad apple.’ Problem solved?
It seems like a good time for my favorite reminder from Sidney Dekker’s book, “The Field Guide to Human Error Investigations” (abridged):
To protect safe systems from the vagaries of human behavior, recommendations typically propose to:
• Tighten procedures and close regulatory gaps. This reduces the bandwidth in which people operate. It leaves less room for error.
• Introduce more technology to monitor or replace human work. If machines do the work, then humans can no longer make errors doing it. And if machines monitor human work, they can snuff out any erratic human behavior.
• Make sure that defective practitioners (the bad apples) do not contribute to system breakdown again. Put them on “administrative leave”; demote them to a lower status; educate or pressure them to behave better next time; instill some fear in them and their peers by taking them to court or reprimanding them.
In this view of human error, investigations can safely conclude with the label “human error”—by whatever name (for example: ignoring a warning light, violating a procedure). Such a conclusion and its implications supposedly get to the causes of system failure.
AN ILLUSION OF PROGRESS ON SAFETY
The shortcomings of the bad apple theory are severe and deep. Progress on safety based on this view is often a short-lived illusion. For example, focusing on individual failures does not take away the underlying problem. Removing “defective” practitioners (throwing out the bad apples) fails to remove the potential for the errors they made.
…[T]rying to change your people by setting examples, or changing the make-up of your operational workforce by removing bad apples, has little long-term effect if the basic conditions that people work under are left unamended.
A ‘bad apple’ is often just a scapegoat that makes people feel better by giving them a focus for blame. Real improvement in safety comes from fixing the system, not from getting rid of employees who were forced to work within a problematic one.