The “Foot-in-the-Door Technique,” or When Longer Forms May Work Better

The post The “Foot-in-the-Door Technique,” or When Longer Forms May Work Better appeared first on CXL.

“How can a person be induced to do something he would rather not do?”

Researchers Jonathan L. Freedman and Scott C. Fraser asked that question more than half a century ago. In their era, it usually meant convincing someone to endure a pitch from a door-to-door salesman.

In ours, it often means incentivizing web form fills—gently prodding others to part with personal information that, they fear, could spark a swift, relentless inbox invasion.

Yet digital marketers have only recently recognized that applying this decades-old research to online forms can generate more leads and sales—and upend the conversion optimization best practice of “shorter forms equal more conversions.”

Shorter forms increase conversions, right?

Peep said it:

short forms work better

Oli Gardner said it:

shorter forms work better

 

Plenty of studies like this one or this one—and, I’m betting, your own experience—back it up. After all, shortening forms is a best practice because it usually works.

But no best practice should be deployed without testing. (No one mentioned above advocates test-free implementation, and don’t expect this post to bail you out either.)
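When you do test, a quick significance check keeps a noisy result from masquerading as a winner. A minimal sketch of a two-proportion z-test for comparing two form variants (the conversion counts are made up for illustration):

```javascript
// Two-proportion z-test: is variant B's conversion rate significantly
// different from variant A's?
function twoProportionZ(convA, totalA, convB, totalB) {
  const pA = convA / totalA;
  const pB = convB / totalB;
  // Pooled proportion under the null hypothesis of no difference
  const pooled = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se;
}

// Hypothetical test: short form converts 100/1000, long form 150/1000.
const z = twoProportionZ(100, 1000, 150, 1000);
const significant = Math.abs(z) > 1.96; // 95% confidence, two-tailed
```

At conventional thresholds, |z| above 1.96 suggests the difference is unlikely to be chance; anything less means keep collecting data before crowning a winner.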

Unlocking the potential of longer forms hinges on recognizing that the number of form fields may not be the main source of form friction.

When shorter forms fail

As we wrote a few years ago, reducing the number of form fields can backfire, which is exactly what Unbounce’s Michael Aagaard experienced with a site that connected people with entertainers for events.

He told his story at CTAConf:

We’re asking for a lot of information. We’re asking for date, time, type of event, number of attendees, location, name, email address, phone number, a comment field where you have to describe the event. A lot of information! And all I really want to do is just contact an entertainer.

So, to me, the solution was obvious here. Based on experience, it’s way too much information. So, let’s shorten it down and ask for less information. I finally convinced the client to let me remove three form fields. I wanted to remove more, but I could only get away with removing three. But that’s still one-third of the form fields—a lot less friction.

The result? 14% drop in conversion.

I removed all the fields that people actually want to interact with and only left the crappy ones they don’t want to interact with. Kinda stupid.

 

Aagaard realized that shortening a form left only high-friction fields, reducing conversion rates.

Distilling a form down to its most intimidating fields is one reason to retain longer forms. There are other well-known justifications, too, albeit anecdotal ones:

  • Longer forms may reduce lead quantity but increase lead quality.
  • Longer forms may work equally well if the reward justifies the effort.

But neither considers the potential psychological impact of longer forms, especially multi-step forms that engage visitors with a simple ask before requesting a friction-inducing piece of information.

That strategy is called the foot-in-the-door technique.

What is the foot-in-the-door technique?

The foot-in-the-door (FITD) technique is not new. Even when proposed as a psychological concept in 1966 by Freedman and Fraser, the phrase “foot in the door” had been commonplace for decades.

According to the FITD technique, if you start with a modest request then follow up later with a larger request, you increase your chances of succeeding with the larger request. It’s the opposite of high-pressure sales that go straight for a signature on the dotted line.

Decades of research have upheld the effectiveness of the FITD technique.

In one of the studies from the authors’ original paper, they found that asking residents to put up a small “Drive Carefully” sign in their yard—the foot in the door—and returning later to ask them to put up a larger sign was vastly more successful (76% compliance) compared to asking residents to display a larger sign from the start (<20%).

A digital foot-in-the-door

Researchers later confirmed that a digital foot works as well as flesh and bone. In one study, they emailed half the participants for help with a file conversion issue, then followed up with an unrelated request to fill out a survey. The other half were sent the survey request directly.

Some 76% of the initial respondents completed the 40-question survey; only 44% of the others did. Their conclusion: “This ‘electronic foot-in-the door’ turns out as effective as in a situation where the interaction is synchronous (face-to-face or by phone).”

Why the foot-in-the-door technique may work

A key to understanding the FITD technique, Freedman and Fraser note, is that the two requests need not be related:

The basic idea is that the change in attitude need not be toward any particular issue or person or activity, but may be toward activity or compliance in general.

Why? Because the FITD technique, many psychologists believe, relies on self-perception theory. Freedman and Fraser describe the human mindset:

Once [someone] has agreed to a request, his attitude may change. He may become, in his own eyes, the kind of person who does this sort of thing, who agrees to requests made by strangers, who takes action on things he believes in, who cooperates with good causes.

The initial request, research reinforces, establishes a disposition of “helpfulness” that leads to increased compliance.

For example, agreeing to tie a shoelace for someone with back pain in a supermarket parking lot makes you more likely to agree to briefly monitor another stranger’s shopping cart when you approach the entrance moments later.

Agreeing to tie a stranger’s shoe in a store parking lot may alter our self-perception, making us more likely to say “yes” to subsequent requests.

Self-perception theory, however, isn’t the only reason that FITD may work. Critically, the reason that the FITD approach may work greatly influences how you should apply it to web forms.

Knowing why foot-in-the-door works guides testing

While academic research leans toward self-perception theory as the primary explanation of the FITD effect, it’s not the only theory. Two others—Cialdini’s “commitment and consistency” principle and the “mere-agreement effect”—also surface in academic studies.

In other words, the FITD effect may occur for different reasons in different situations. Each reason has significant implications for whether the FITD technique will work on your site—and how to set up forms for maximum effect.

1. Self-perception theory

  • When it works best: Surveys, review requests
  • Why: People want to perceive themselves as helpful

As part of their initial study, Freedman and Fraser ran a test with 156 housewives in Palo Alto. The “big request” was to allow a team two hours of access to their home to see which household products they used.

Prior to making that large request, they divided their subjects into four groups and called each to:

  1. Survey them over the phone about their usage of cleaning products.
  2. Merely ascertain whether they would be willing to participate in a phone survey.
  3. Provide information about the research firm and its intents.
  4. Ask directly for permission to conduct the in-home survey.

Women in the first group agreed to the “big ask” of an in-home survey 52.8% of the time, more than any other group (whose acceptance rates were 33.3%, 27.8%, and 22.2%, respectively).

“I need your help” is the underlying trigger for self-perception theory. And, stated or unstated, it’s equally potent when it comes to on-site surveys.

Google Maps does a great job with this. Rather than haranguing me to leave another review with generic “We value your opinion” statements (which boost my self-importance but do nothing to trigger a FITD effect), they highlight how helpful I’ve been in the past, then offer to show me further evidence of my altruism:

The subject line in this email from Google Maps was “Your review is making a difference.”

Encouraging me to see my reviews (a small request that reinforces my self-perception as “the kind of person who does this sort of thing”) is the perfect first ask to get me one giant click closer to leaving more reviews.

For your surveys, you may ultimately want an email address or phone number to follow up with respondents. And, for reviews on Facebook or Google, you’re asking someone to declare publicly their feelings about your business. Both elements are big requests.

If you require either (the digital equivalents of an in-home visit), start by gathering anonymous responses or reinforcing the helpfulness of user reviews to others.

2. Commitment and consistency principle

  • When it works best: Lead generation, user requests for rates/estimates
  • Why: People are compelled to finish what they start

One of Cialdini’s six principles of persuasion, “commitment and consistency” argues that once we start something, we want to finish it. Thus, a small request starts us down a path that a later, larger request completes.

(Cialdini’s principle relates to several others, such as the Ovsiankina effect—the tendency to restart an interrupted action—and the Zeigarnik effect—our higher recall for incomplete or interrupted tasks.)

It’s the primary reason that KlientBoost’s Johnathan Dane, who wrote about using a similar approach last year, has succeeded with FITD for his clients. Like Aagaard, Dane realized that the continual shortening of form fields reduced many to a single type of request—one that unmasked anonymous users.

Dane implemented a process that has become an almost universally successful test for clients: breaking up single-page forms into multi-step forms. Multiple steps create a sense of process that draws visitors through to completion.

The “foot in the door,” for Dane’s team, is an initial question (or three or five) that asks for non-identifiable information. At the completion of the process, visitors submit contact information to receive results (e.g. home loan rates).

The initial requests cover details users must disclose to get the answer they need, and until the end of the process, users are uncertain that they’ll even need to enter personally identifiable information. Only then—after having taken all but the last step—are users prompted for contact information.
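Dane’s pattern can be sketched as a small state machine: non-identifying, answer-essential questions come first, and contact fields appear only at the final step. A minimal illustration (the field names, such as `loanAmount`, are hypothetical, not taken from Dane’s actual forms):

```javascript
// Multi-step form sketch: anonymous questions first, personally
// identifiable fields only on the last step.
const steps = [
  { fields: ["loanAmount"], identifying: false },
  { fields: ["zipCode", "creditRange"], identifying: false },
  { fields: ["email", "phone"], identifying: true }, // the "big ask," saved for last
];

function createMultiStepForm(steps) {
  let index = 0;
  const answers = {};
  return {
    // Fields to render for the step the visitor is currently on
    currentFields: () => steps[index].fields,
    // Record this step's values and advance; returns true when complete
    submitStep(values) {
      for (const field of steps[index].fields) answers[field] = values[field];
      index += 1;
      return index >= steps.length;
    },
    answers: () => ({ ...answers }),
  };
}

// A visitor answers low-friction questions before any contact request.
const form = createMultiStepForm(steps);
form.submitStep({ loanAmount: 250000 });
form.submitStep({ zipCode: "90210", creditRange: "700-749" });
const done = form.submitStep({ email: "a@example.com", phone: "555-0100" });
```

The design choice that matters is the ordering: by the time the `identifying` step appears, the visitor has already invested in every prior step.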

For several clients, it has worked wonders. Dane shared a few examples of foot-in-the-door success:

Multi-step form examples from the education, auto, and software industries.

3. Mere-agreement effect

  • When it works best: Community building, strong value alignment
  • Why: Perceived similarity breeds cooperation

You and I are two peas in a pod. That’s the impact, according to researchers, of the mere-agreement effect.

An initial point of positive response creates rapport between two parties that can lead to ongoing, escalating agreement—all the way through the “big request.”

Success with the mere-agreement effect aligns closely with the idea of a “yes ladder”: developing camaraderie between you (or your company) and a user by asking questions whose answers reveal ideological similarities.

It’s an obsession with the word “yes” and a search for any question that gets someone else to agree with you, even if that agreement is inconsequential to the sale.

At its worst, it’s the painfully familiar tactic of telemarketers and infomercials. (“Would you like to lose 20 pounds without leaving the couch?”) But it’s also relevant for sites that benefit by building a sense of community based on shared values, a common goal in the non-profit space.

On the ASPCA homepage, who’s going to say “No”? That “mere agreement,” which leads into a donation form, can build rapport prior to the monetary request.

Many nonprofits have missions that, ideologically, enjoy almost universal support. Employing the mere-agreement effect takes advantage of this common ground—a nearly friction-free way to crack open the door.

Keys to applying foot-in-the-door principles

Dane and other FITD experimenters have identified several keys to making FITD work on web forms. Related academic studies add nuance to the approach.

FITD works best when:

  1. Visitors are in the middle or bottom of the funnel. Those users are already engaged with your company and anticipate sharing more information. In contrast, a single-field email request usually outperforms multi-step forms for top-of-funnel offers like PDFs—users have no expectation of deeper interaction and, therefore, there’s no justification to extend the process.
  2. Visitors know they need to supply information to answer their question. No one expects a loan quote without providing some financial info. No one expects a house painter’s estimate without divulging basic details on square footage. A multi-step form works well if visitors believe that answering the first few questions is essential to receive their desired answer.
  3. The initial request is simple yet not trivial. A trivial request may fail to trigger the FITD effect—insignificant compliance means that the doer won’t register a change to self-perception. Researchers have shown that an unusual (but still simple) first task engages the mind sufficiently to trigger a self-perception change.

When foot-in-the-door may not work

The foot-in-the-door technique has counterweights:

  • Door-in-the-face. Start with a large, unreasonable request to soften the perception of the subsequent request you actually want someone to accept.
  • Foot-in-the-face. To maximize compliance, follow an immediate rejection with a secondary request—but wait two to three days if the initial request is accepted.

Debate over how to balance agreement and rejection isn’t limited to these techniques.

In Getting to Yes, Roger Fisher, William Ury, and Bruce Patton laid a foundation for negotiation strategy—one focused on reaching early agreement to catalyze later accord. Their perspective aligns well with the FITD technique and yes ladders.

It’s the opposite of what Chris Voss, a former FBI negotiator, argues in his book Never Split the Difference. The problem with “yes,” Voss contends, is that it cedes control, and no one is eager to cede power in a negotiation.

By demanding a “yes,” the self-perception and mere-agreement models of the FITD technique make an immediate—and ever-expanding—power grab. It’s one reason why FITD can fail; even a small request still asks for a “yes.”

Notably, however, the commitment and consistency model for FITD demands only initial engagement, not initial agreement.

And, based on Voss’s experience, it offers another potential way to sneak a foot in the door—using a “no” can simultaneously empower users and start them on a conversion process they’re loath to abandon.

Examples of foot-in-the-door failure

At KlientBoost, Dane learned early on that the wrong initial ask could negate the positive effect.

In his experience, anything that strips the visitor of anonymity is a risk. That includes some of the obvious asks—email, phone, etc.—but also semi-identifiable information, like a company name.

Dane saw a 50% decrease in conversions when his team used a company name field to start a conversion process instead of more generic information on employees and location:

Dane believes the updated treatment (right) performed worse because users considered “Firm Name” too intrusive. (Image source)

MiroMind, a Toronto-based SEO agency, tested the FITD technique across several clients’ sites—in software, cloud services, SaaS, and health and beauty niches—and also saw form fills drop.

Their single-page versions generated 80 to 180% more leads, and as many as 70% of visitors closed the expanded forms as soon as they realized that they had multiple steps. Notably, the forms asked for an email address early in the sequence (to allow for follow-up with non-completers), which may have torpedoed efforts.

Conclusion

So how difficult or intrusive should your initial question be? How do you engage visitors without scaring them off? It’s an elusive balance that, like the FITD technique, requires testing.

Understanding the most relevant trigger for your site—self-perception theory, the commitment and consistency principle, or the mere-agreement effect—can help identify your initial question and guide the remainder of the sequence. Even Voss, despite advocating the power of “no,” is ultimately working toward a “yes.”

Take, for example, a question that might be a useful starting point for this post to increase engagement or begin a conversion sequence:

Do shorter forms always work better?

10 Recent Neuromarketing Research Studies (and Their Real-World Takeaways)

The post 10 Recent Neuromarketing Research Studies (and Their Real-World Takeaways) appeared first on CXL.

Neuromarketing assesses how our brain reacts to stimuli, not simply what we self-report in qualitative surveys. These are truths that our impulses write onto MRIs. Sometimes, as several studies below illustrate, those two systems—the conscious and subconscious—offer conflicting interpretations.

Importantly, scientific knowledge is almost always built incrementally. Don’t expect a single paper to define, for all time and every business, the ideal pixel width for product images or sample size for accurate sales forecasts. Every study is a step or building block or book page—pick your preferred metaphor—toward consensus.

These ten steps, one for each study, are some of the latest contributions from neuromarketing. All were published between 2016 and 2018. For practitioners, they reveal the potential of neuromarketing research and help guide heuristic analysis.

1. “Multiple ‘buy buttons’ in the brain: Forecasting chocolate sales at point-of-sale based on functional brain activation using fMRI”

Takeaways

  • Small-scale neuromarketing tests for product messaging may accurately forecast sales.
  • Qualitative research on consumer preference for messaging may be a poor predictor of sales.

Study details

Which is a better predictor of purchasing behavior—qualitative research or fMRI scans? Simone Kühn, Enrique Strelow, and Jürgen Gallinat began their study with 18 women between the ages of 23 and 56, all self-described weekly chocolate buyers.

The women were shown a product picture and six related communications, including a control (a toothbrush). The product picture appeared for 2 seconds, followed by a 3-second display of a marketing communication, then the product again for 2 seconds. Researchers used fMRI imaging of several brain areas during the test.

The fMRI imaging of participants during marketing communications (B) predicted sales (D) better than subjects’ stated preference (A).

Afterward, the participants were asked to order the communications according to their liking. The researchers created three sales forecasts: one based on stated preference, one based on brain activity during viewing of the communications, and one based on fMRI changes to product viewing before and after communications.

German supermarkets displayed each test treatment for one week, with researchers recording actual sales. The strongest correlation between forecasted and actual sales came from the fMRI signals during communications; the pre- and post-messaging fMRI data was second. The subjects’ stated preference finished last.

While the researchers drew only tentative conclusions about the ability of neuroimaging tests to predict sales and the poor correlation between stated preference and sales, they highlighted the potential power of small sample sizes in neuromarketing:

The present results demonstrate the feasibility to use neuroimaging methods in a relatively small sample of participants to forecast the influence of communications on the actual consumer behaviour at the point-of-sale.

Read the full study here (gated content).

2. “When Brain Beats Behavior: Neuroforecasting Crowdfunding Outcomes”

Takeaway

  • Troves of online market-level data make it possible to validate individual neuromarketing tests against collective, real-world consumer decisions.

Study details

Can neuromarketing studies of individuals predict mass behavior? Alexander Genevsky, Carolyn Yoon, and Brian Knutson used the crowdfunding site Kickstarter “to test whether neural activity could forecast market-level crowdfunding outcomes weeks later.”

For their study, they showed 36 crowdfunding requests to 30 subjects. Subjects decided whether they would fund each project, with real money taken from their study compensation for projects they supported. During the initial selection process, researchers recorded subjects’ brain activity.

Afterward, the subjects rated their opinion of each project (positive or negative), the strength of that opinion, and whether they thought the project would ultimately reach its crowdfunding goal.

Weeks later, researchers compared subjects’ ratings and brain activity to the crowdfunding projects’ success or failure. The results? Brain activity was the only successful predictor of crowdfunding outcomes:

  • “Neither average ratings of project likeability nor of perceived likelihood of success were associated with Internet funding outcomes.”
  • “Only [Nucleus accumbens, or NAcc,] activity generalized to forecast market funding outcomes weeks later on the Internet.”

As researchers noted, “These findings demonstrate that a subset of the neural predictors of individual choice can generalize to forecast market-level crowdfunding outcomes—even better than choice itself.”

Researchers replicated their findings in a second study.

Read the full study here.

3. “Measuring narrative engagement: The heart tells the story”

Takeaway

  • Audio content (e.g. podcasts) may have the potential to create stronger connections with consumers, even if their stated preference is for video content.

Study details

Does audio or video content generate more user engagement? What people claim and what their biometric data shows, the authors found, don’t align.

For their study, the researchers identified equivalent audiobook and film scenes from adaptations such as Game of Thrones and The Silence of the Lambs. They selected “emotionally charged scenes” in which the audio and video were nearly identical.

(The audio portions, inevitably, lasted longer than their video counterparts; the authors evaluated relative time charts that aligned scene content.)

While participants rated the video segments as, on average, 15% “more engaging,” the physiological measures suggested otherwise:

In terms of raw measures, their average heart rate was higher when they were listening to audiobooks by about two beats a minute; they had a greater range of heart rate by about 4 beats per minute; they were roughly 2 degrees warmer in their body temperature (1.66°C), and their skin conductance (EDA) was higher by 0.02 microsiemens.

Audio content may engage users more because it requires active participation to create a scene in the mind’s eye.

Why? The authors hypothesized that “listening to a story is a more active process of co-creation (i.e. via imagination) than watching a video.” Thus:

The act of listening to the narrative recreated the same basic pattern of brain activity as telling the story, suggesting that listening to the story is qualitatively and quantitatively similar to experiencing the speaker’s memory of the events. Moreover, activation was not limited to regions of the brain classically related to language, but also involved emotional, sensory and motor systems consistent with the notion that at some level, the listener actually experiences the story.

Read the full study here.

4. “Willingness to pay lip service? Applying a neuroscience-based method to WTP for green electricity”

Takeaways

  • “Neuropricing has been significantly better in predicting population behavior than reaction times, which in turn are significantly better than questionnaires.”
  • Neuropricing may better assess consumer valuation of non-core product benefits like ethical production, region-specific origins, use of organic materials, etc.

Study details

How does qualitative data on willingness to pay (WTP) compare to neuroscience data? Carsten Herbes and three co-authors tackled a narrow topic—consumer willingness to pay a premium for electricity from green sources—that has broader implications.

In the 40-participant study, the researchers first provided participants with a questionnaire to rate their willingness to pay between 90% and 130% of their current energy costs to source their electricity from partially or entirely green sources.

Then, researchers monitored participants’ brain activity when presented with a series of images that showed an electricity package, a price, and the word “expensive” or “cheap,” to which participants could respond “Yes” or “No.”

Neuropricing research showed that consumer willingness to pay consistently rose to the high end of qualitative estimates.

Researchers monitored brain activity and reaction time to each participant’s choice for 50 random combinations of packages, prices, and binary descriptors.

Based on the study results, neuropricing showed a tolerance for a price increase of up to 15%; in comparison, qualitative data ranged from 3 to 19%. The result, according to researchers, highlights the value of neuromarketing research over traditional methodologies:

Neuropricing delivers higher WTPs by the same respondents and thus apparently avoids the effects of strategic behavior. This yields a fundamental insight. Namely, a range of potential biases in and limitations of self-reported WTPs can be eliminated by our methodology.

Additionally, the researchers suggest two other potential benefits:

  1. The potential to “magnify the granularity of WTP research,” by “examining, for example, detailed product features such as proven regional origin or the effects of specific claims in marketing communications.”
  2. The ability to obtain valid results with a small number of test subjects.

Read the full study here (gated content).

5. “Intuition, risk, and the formation of online trust”

Takeaways

  • “‘Simple changes’ (such as page layouts and choices of fonts, images, and colors) may be far more critical to associative trust-formation processes than we previously understood.”
  • “What seem like merely aesthetic design choices may actually be the way your customers learn to trust you (or don’t).”

Study details

How does the level of risk affect consumer trust? Authors Mahdi Roghanizad and Derrick Neufeld identified two hypotheses:

  1. “When evaluating whether to trust a website while making low-risk decisions, consumers tend to rely on deliberative and explicitly logical reasoning processes.”
  2. “When faced with higher-risk decisions, online consumers are more likely to turn to associative (intuitive) reasoning processes.”

To test their hypotheses, they divided 245 research students into six groups. Each group saw a different version of a website for an actual bookstore—some saw the real version, others viewed iterations that lacked security seals or return-policy information.

Subjects were asked to make two decisions:

  1. Low risk: Determine whether, hypothetically, they would purchase a book from the site.
  2. High risk: Determine, in reality, whether they would provide their personal information (name, address, phone number) to the site in exchange for a $20 gift card.

The research confirmed the hypotheses: “When making decisions involving risk, such as an online purchase from a website, consumers tend to rely more on intuition than on deliberation.” (Deliberative actions were those that qualified as “rule-based, logical, rational.”)

In other words, the “look and feel” of the website mattered more than explicit trust guarantees when it came to high-risk decisions.

Read the full study here (gated content), or the related Harvard Business Review article here.

6. “A Neuropsychological Study on How Consumers Process Risky and Secure E-payments”

Takeaway

  • Offering familiar, trusted online payment options, such as PayPal, may reduce buying friction for reluctant customers.

Study details

Do different payment options make users more or less confident? Authors Luis-Alberto Casado-Aranda, Francisco Liébana-Cabanillas, and Juan Sánchez-Fernández tackle this topic—one they argue has been largely ignored.

Their work focuses on two primary means of payment: debit cards and PayPal. Using MRIs to identify the “neural effects,” they invited 30 participants to complete simple online purchases.

What did they find? “The analysis reveals that perceived risky e-payments activate brain areas linked to negative emotional processing, while areas involved with reward prediction are strongly triggered by secure e-payments.”

More specifically, the research reveals

not only a greater intention of use toward PayPal, but sees it as more secure, rewarding and affective [sic]. Debit card e-payments, by contrast, elicit brain activations associated with negative and risky events. Interestingly, the right cerebellum response (responsible for value encoding) covaried with more positive use intention toward Paypal.

Read the full study here (gated content).

7. “Graphical elements that can invoke trust in online web shops”

Takeaway

  • The most consistent design element associated with site trustworthiness—more than any single choice in color, font, or layout—was effort. Cheap design looks…cheap.

Study details

Which design elements make an online store trustworthy? Gustav Bergman and Felix Noren, in their study, focus on the design aspects that generate trust upon a first impression.

They created various combinations of colors, background patterns, trust badges, and address information (or lack thereof), then showed participants the website for seven seconds. Participants were given a binary “yes” or “no” option to answer, “Does this web shop seem trustworthy to you?”

The study’s page designs were tested both without and with a trust seal.

In addition to recording the answer, the researchers also recorded the amount of time it took subjects to respond to 31 randomized versions of the page. (They found no correlation between time-to-response and perceived trustworthiness.)

For the researchers, a primary challenge was managing the personal preferences of their participants—based on qualitative responses, high-saturation colors and fonts like Comic Sans reduced perceived trust.

However, there was a consistent, quantitative data point associated with trust: the amount of time it took the researchers to create the sample websites:

We can see a difference in how much time we took to make one image look nice. The more time, the more “yes” it got [. . .] So, there is a certain connection between the expression “professional” and the amount of time we laid down on the image in question.

Read the full study here.

8. “How consumers are affected by product descriptions in online shopping: Event-related potentials evidence of the attribute framing effect”

Takeaways

  • Negative framing for product pages may initially generate higher levels of engagement but fail to convert more visitors.
  • For products in a highly competitive market, positive framing increases the perception of value.

Study details

Does a potential loss or gain make a product seem more valuable? Authors Jia Jin, Wuke Zhang, and Mingliang Chen investigated how framing impacted consumer attention and decision-making on mock product pages featuring wool coats.

An important dividing line in their study was the early versus late cognitive stage. Using electroencephalography (EEG) to measure event-related potential (ERP) waveforms, the researchers found that negative framing generated the most activity in the early cognitive stage—but that same heightened level of interest also led to more difficult decision making.

In contrast, positive framing, while generating less early-stage interest, made decision-making easier. In the late cognitive stage, it also generated a perception of greater product value based on higher expectations for future performance.

Read the full study here.

9. “Failure to CAPTCHA Attention: Null Results from an Honesty Priming Experiment in Guatemala”

Takeaway

  • Any form may be an opportunity for influential messaging, but the cognitive distance between the message and the intended action may have the greatest impact.

Study details

Can you add a message to your CAPTCHA to influence user behavior? It’s an obscure opportunity, but in conversion optimization, you might as well turn over every stone.

Stewart Kettle and his four co-authors (in addition to earning “Academic Article Title ‘Pun of the Year’”) tried to see if CAPTCHA messaging could improve tax collection in Guatemala.

As their title suggests, the answer was no. The researchers compared the amount of tax declared among a randomized group of more than 627,000 taxpayers to the messages the taxpayers saw in the pre-form CAPTCHA.

One of many CAPTCHA variations. This one states, “Your taxes help pay for schools, hospitals, and the police.”

However, none of their six treatments (in addition to the message-less control CAPTCHA) had any impact on the total tax declared. Treatments ranged from messages highlighting the public use of tax funds to subtle threats of enforcement against tax evaders.

Researchers concluded that the cognitive gap between when the CAPTCHA appeared and the decision-making on the tax form likely weakened its impact:

The fact that all of the six treatments were found to be ineffective (rather than some) supports the hypothesis that the setting in which the information was conveyed may have been crucial here, rather than the content of the messages.
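The experiment's basic machinery, assigning a large population to treatment arms at random and comparing the declared amounts per arm, can be sketched as follows. The arm names, taxpayer IDs, and amounts are hypothetical; a real analysis would also need significance testing, which this sketch omits.

```python
import hashlib
from statistics import mean

# Illustrative arm names, not the study's actual treatments.
ARMS = ["control", "public_goods", "enforcement"]

def assign_arm(taxpayer_id: str) -> str:
    """Deterministically assign a taxpayer to a treatment arm by hashing
    their ID, so the same person always sees the same CAPTCHA message."""
    digest = hashlib.sha256(taxpayer_id.encode()).hexdigest()
    return ARMS[int(digest, 16) % len(ARMS)]

# Hypothetical declarations: taxpayer_id -> amount declared.
declared = {"TP-001": 120.0, "TP-002": 0.0, "TP-003": 340.5, "TP-004": 80.0}

# Group declared amounts by assigned arm and compare the means.
by_arm = {}
for tid, amount in declared.items():
    by_arm.setdefault(assign_arm(tid), []).append(amount)
mean_by_arm = {arm: mean(vals) for arm, vals in by_arm.items()}
```

Hashing the ID rather than drawing a random number keeps assignment stable across sessions, which matters when the same taxpayer hits the form repeatedly.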

Read the full study here.

10. “Measuring advertising effectiveness in Travel 2.0 websites through eye-tracking technology”

Takeaway

  • Recall for ads may be low across all platforms, but those with a simpler interface may earn more user attention.

Study details

Which platform—website, social media, or third-party review site—has the “stickiest” ads? Authors Francisco Muñoz-Leiva, Janet Hernández-Méndez, and Diego Gómez-Carmona sought to answer the question by measuring the visibility of ad banners on three sites: a hotel blog, its Facebook page, and TripAdvisor.

Using eye-tracking measurements and self-reported recall, the authors found that the ad banner on the social profile—more than the blog or TripAdvisor—earned the most attention and generated the highest (albeit limited) recall among participants.

Regarding Facebook:

We could say that participants not only focused on the banner sooner, but also more times and for longer, although it was located in the same position on all the websites. This may be due to the fact that the complexity of a website’s design (text size and format, position of images, etc.) can have an effect on the viewing patterns. [. . .] In our case, the Facebook profiles had less editorial content than the rest.

In all instances, however, overall banner visibility and recall were low. As the authors suggest, if users are seeking information, an advertisement may seem like an “obstacle,” not an offer.
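The metrics behind findings like these (time to first fixation, dwell time, and visit count on a banner's area of interest) can be sketched from raw gaze samples. The sampling rate, coordinates, and banner bounding box below are invented for illustration.

```python
# Hypothetical 60 Hz gaze samples: (timestamp_ms, x, y). The banner's
# bounding box and all coordinates are illustrative, not from the study.
SAMPLE_MS = 1000 / 60
banner = (700, 0, 1000, 120)  # (x_min, y_min, x_max, y_max)

samples = [
    (0, 320, 400), (17, 330, 410), (33, 740, 60),
    (50, 760, 70), (67, 300, 500), (83, 820, 90), (100, 830, 95),
]

def in_aoi(x, y, box):
    """Is the gaze point inside the area of interest (the banner)?"""
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

hits = [(t, in_aoi(x, y, banner)) for t, x, y in samples]

# Time to first fixation: timestamp of the first sample inside the AOI.
first_fixation_ms = next((t for t, hit in hits if hit), None)
# Dwell time: number of in-AOI samples times the sampling interval.
dwell_ms = sum(hit for _, hit in hits) * SAMPLE_MS
# Visits: how many separate times the gaze entered the banner area.
flags = [h for _, h in hits]
visits = sum(1 for prev, cur in zip([False] + flags, flags)
             if cur and not prev)
```

Comparing these three numbers for the same banner across different page layouts is essentially what the eye-tracking comparison in the study does.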

Read the full study here.

Conclusion

A single research study is rarely definitive. Instead, for experts, each study adds to a baseline of knowledge—sometimes aligning with, sometimes conflicting with past research.

For conversion optimization, heuristic analysis is the starting point. The quality of that initial, subjective analysis, which ultimately guides test selection and prioritizes implementation, depends on the depth of your expertise.

Build your knowledge with CXL Institute’s courses on neuromarketing.

The post 10 Recent Neuromarketing Research Studies (and Their Real-World Takeaways) appeared first on CXL.

Kids’ Video Game Obsession Isn’t Really About Video Games. It’s About Unmet Psychological Needs.

Many parents are concerned with their child’s seemingly obsessive video game play. Fortnite, the most recent gaming phenomenon, has taken the world by storm and has parents asking whether the shooter game is okay for kids. The short answer is yes, Fortnite is generally fine. Furthermore, parents can breathe easier knowing that research suggests gaming […]

The post Kids’ Video Game Obsession Isn’t Really About Video Games. It’s About Unmet Psychological Needs. appeared first on Nir and Far.

How to Use Personality Science to Drive Online Conversions

Nir’s Note: This guest post is by Vanessa Van Edwards, lead investigator at the Science of People — a human behavior research lab. This exclusive book excerpt is from Vanessa’s new book, Captivate: The Science of Succeeding with People, which was recently named as one of Apple’s Most Anticipated Books of 2017. We all want more conversions. More sign-ups, […]

The post How to Use Personality Science to Drive Online Conversions appeared first on Nir and Far.
