Gartner recently published their Predicts 2019 research report, outlining several converging trends that pose a threat to CMOs and marketing organizations. The report also makes several bold predictions including that “by 2023, 60 percent of CMOs will slash the size of their marketing analytics departments by 50 percent because of a failure to realize promised improvements.”
The number one success factor for CMOs today is the ability to effectively leverage customer data and analytics. And yet, according to Gartner’s report, companies today are clearly not demonstrating consistent return on that investment, a problem which often stems from a lack of marketing analytics leaders and the organizational structure necessary to effectively translate data and insights into action.
To discuss in more detail, we chatted with one of the authors of the Gartner report, Charles Golvin, to explore what CMOs and marketing leaders can do to buck the prediction and drive stronger results for their marketing analytics investment.
Our conversation, coupled with my own experience, solidified five ways CMOs can improve return on their marketing analytics investment, while also reinforcing why it matters:
1. Build organizational structure to apply better data
To fully leverage the power of analytics, companies need to develop the organizational structure and processes to identify, combine and manage multiple sources of data.
As Golvin puts it, “companies need to build a better pipeline of carrying data from its raw state to decision and action systems for data science leaders to apply insights and powerful analysis to determine the right action and right strategy.”
To build these pathways, companies need a strong methodology coupled with an approach for how data gets aggregated, digested and applied to their various marketing systems.
2. Develop analytics leaders who bridge data science and marketing strategy
Another key success factor for companies is developing and hiring the right leaders who can bridge data science and business strategy. Simply put, analytics leaders need to know enough about business to ask the right questions of data. Only then can they apply data and models to yield better decisions and drive sustainable growth.
This is our philosophy at Wharton – preparing well-rounded, analytically adept business leaders who don’t ask what data can do for them, but what data is needed to increase customer lifetime value (CLV) and how to apply data and customer insights to shape brand strategy.
“Gartner regularly conducts surveys about different challenges that CMOs and marketers face, and every year, the one that rises to the top is finding skilled data and analytics leaders to hire,” shares Golvin. “Companies also struggle to find those ‘unicorns,’ or people able to command both data science and business strategy.”
Golvin also pointed out that once a company does hire an analytics leader, it needs the right foundation in place to foster that leader’s success. “There’s no value to hiring a data scientist whose output leadership doesn’t understand or know how to implement.”
Too often, we see traditional marketing organizations that aren’t able to effectively apply analytics or don’t understand how to frame the questions for data scientists on their team. The reverse is also a common challenge: analytics leaders don’t grasp how to use data to shape the broader business and brand strategy.
3. Hire a Chief Analytics Officer, or up-level the importance of analytics
So how do companies up-level the importance of analytics and develop the data-driven culture, capabilities and leaders needed to successfully transform their organization? One trend we are seeing is the emergence of the Chief Analytics Officer or Chief Data Scientist across more organizations.
As Golvin notes, “we’re already starting to see the emergence of Chief Marketing Technology Officers, who are focused on deployment of the right technology, architecture and capabilities. The next trend may be marketing analytics leaders at the c-level, who are purely about analytics and understanding the data.”
When companies empower analytics leaders to lead strategy, it can transform the culture, providing a clear vision for which customer data will be used, and how, to achieve the desired business impact. When companies fail to make this investment, it leaves high-caliber professionals in a quandary.
“Too often data science leaders end up doing grunt work such as basic data processing and preparation, rather than using their analytics mindset and abilities to drive actionable marketing strategy, separate the signal from the noise and improve marketing outcomes,” notes Golvin.
4. Focus on better data, not big data
An ongoing challenge organizations face today is what we call “better data, not big data.” Too often we see companies that are collecting data for data’s sake, rather than taking a lean approach where they collect data only when it helps optimize the experience for their target customers or better predict future behaviors.
“As data becomes more integral to marketers, a ‘more is better’ attitude develops, without necessary consideration given to the downside risks,” notes Golvin. “Companies need to do a better job of being transparent about what data they use and how, as well as considering the pros/cons, and risks of incorporating that data into a profile of their customers. More data does not necessarily lead to greater business intelligence – and in many cases can expose the brand to issues that impact customer trust.”
Data collection is in no one’s interest when it’s not meaningfully tied to strategy.
5. Separate the signal from the noise to predict and optimize business outcomes
Improving ROI for marketing analytics requires constant learning and experimentation to separate the signal from noise. There’s no better way to learn about your customer than to see what works and what doesn’t.
While big data and machine learning are valuable for business intelligence, a well-controlled experiment can deliver far more value. Finding the most impactful experiments to run starts with asking the right questions and maintaining a test-and-learn mindset where you’re constantly evolving to improve the experience for customers. The iterative adaptation based on these experiments builds momentum.
Many marketers know the “Holy Grail” phrase “deliver the right product to the right person at the right time.” In the past, this was more difficult because we didn’t know where consumers were. Now, when marketers use better data, they know where the customer was and where they are most likely to be – providing the foundation for the ultimate in contextual 1:1 marketing.
Think you should be getting more from your digital marketing agencies? Find out how to work with, negotiate with and make your digital agency relationships more profitable. We’ve trained our agencies to work against us. The pitch meeting is the culprit. The pitch meeting is when an agency comes to their client — or their […]
Now AdStage, a cross-channel campaign analytics and optimization platform, is getting in the add-on game with a new data connector for Google Sheets.
What is it? AdStage for Google Sheets, which launched Thursday, is an add-on that lets users import their paid search campaigns, social campaigns and analytics data from AdStage into Google Sheets with one query. AdStage supports paid search and social networks, including Google, Bing, LinkedIn, Facebook, Twitter and Google Analytics.
AdStage for Google Sheets has been in beta for about six months. The product pricing is based on media spend and starts at $29 per month, undercutting Supermetrics – the dominant leader in this space. The license includes unlimited users and unlimited accounts, which again challenges Supermetrics’ comparable offering.
How does it work? AdStage for Google Sheets is available from the Add-ons menu in Google Sheets. Once installed, you’ll see a sidebar in your Google Sheet. The low price means the sidebar interface isn’t fancy, and it is designed for somewhat technical marketers who already know how to build queries, or are willing to learn. The query structure is straightforward, with several query templates already available to get you started. There’s also a video training series and support portal built out for it.
Why we should care. The key is blended data calls that pull in data from across multiple channels with just one query. You can then build reporting dashboards in Google Sheets, or bring the data into Google Data Studio. AdStage for Google Sheets uses the same API as the rest of the platform, so any data you can access in AdStage should be accessible in Google Sheets with a query.
“We are using AdStage for Google Sheets to combine cost and campaign performance data for the entire company to consume and work with. Without any integration work, we were able to aggregate all of our publisher accounts and blend complex cross-channel data into a single sheet,” said beta user Arndt Voges, head of growth at space rental company Peerspace, in a statement.
Digital agency 3Q Digital, another beta user, has already created a workbook in Google Sheets to help track and visualize cross-channel budget pacing, demographics performance and more.
This story first appeared on MarTech Today.
E-commerce is a pure service business that demands utmost precision, patience, and persistence to survive the industry’s ever-growing competition. The goal is not just to help shoppers find good products, but also to provide an exceptional customer experience that turns them into repeat buyers.
Customer actions also serve as a valuable source of data at every stage of the e-commerce funnel. What you do with these customer insights to improve the overall shopping experience and conversion rate is typically what makes or breaks your business.
Most marketers suggest using multiple qualitative and quantitative analytical tools to do so. But not all of these tools can give you the kind of information you need to study user experience.
This is where heatmaps come into play. Let’s see how heatmaps can improve your e-commerce business!
What are heatmaps?
Heatmaps serve as one of the best qualitative tools to collect relevant customer data, especially in terms of understanding their actual behavior across your web platform.
Think of heatmaps as X-Ray films. They show a detailed picture (in the form of graphical representation) of how users interact with your site or store. You can observe how far users scroll, where they click the most, and the products or pages they like or dislike.
Such data is precisely what you need to make your platform more user-friendly, drive more traffic and improve your conversion optimization strategies.
But, if you think that any heatmap would work wonders for your e-commerce site, you’re absolutely wrong!
E-commerce websites are highly dynamic in nature. They have more interactive elements and “behind the login” pages, such as orders and the cart page, than any other type of site. These pages generate visitor interaction data, which typically serves as the primary input for drawing page-performance conclusions, finding distracting elements, and improving the overall customer experience.
This is where Dynamic Heatmaps save the day!
What are dynamic heatmaps?
Unlike static heatmaps which can only be plotted on static web pages such as the Home Page, Product Pages, Landing Pages, Category Pages, etc., dynamic heatmaps give you the leverage of studying real-time customer behavior on pages which are beyond the scope of static heatmaps. In other words, dynamic heatmaps can be easily plotted on live websites with dynamic URLs such as My Profile, Orders, Cart, Account Settings, etc. to gather in-depth customer activity data.
In general, a typical dynamic heatmap offers four primary features, namely, click maps, scroll maps, click areas, and element list. Each of these allows you to look at a web page’s hot-spot areas in detail.
Let’s now understand the scope of dynamic heatmaps for e-commerce using some examples!
Studying Cart Page Insights Using Dynamic Heatmaps
The image below shows a dynamic heatmap plotted on an e-commerce site’s Cart page. It shows that, in general, after adding products to their cart, most people click on the product image, the cancel button, the postcode section, the ‘Go To Checkout’ icon and discount codes.
Such information helps you draw multiple conclusions, such as:
Customers might want to see the product images once more before proceeding to the final payment gateway.
They may not like their chosen product(s) and hence, remove the item(s) from the cart. Or, since they’re not able to view the product image (in zoom) on the cart page, they abandon their cart.
They’re most interested in claiming discounts, which is why that area is hotter than other page elements.
Furthermore, such information also helps in forming hypotheses about how to improve the performance of page elements with dynamic URLs, while finding the right ways to enhance customer experience and increase your conversions.
“VWO’s dynamic heatmaps allow you to access information of those web pages which are, in general, very difficult to access using regular heatmaps.”
Studying Order Page Insights Using Dynamic Heatmaps
The “My Orders” page of an e-commerce site is an important page that is less explored in terms of gathering customer behavior data.
The page allows users to look at their current and previous orders, check delivery status, seek assistance, and even browse through their history. As an e-commerce company, plotting heatmaps on such pages and mapping the number of clicks can significantly help you study which page elements are fetching maximum customer attention.
For example, the heatmap plotted above shows that most people click on the “Track” icon, followed by “Need Help” and product images. They hardly click on the “Rate & Review Product” icon, which can be an important form for collecting product reviews and other essential information. You can use such qualitative data to make amendments that encourage customers to fill in the form, like:
adding product review pop-ups or call-to-action buttons on the Order page
making the review section visible throughout the site
giving rewards for adding reviews
So now that you know what your static heatmap tools are lacking, it’s time to upgrade your e-commerce platform with VWO’s dynamic heatmaps and make the most of them.
What answer is a searcher looking for? For sustainable, valuable search traffic, you’d better provide it. Satisfying search intent is Google’s fundamental goal. But algorithms haven’t always kept pace. Proxies like backlinks and keywords have long been—and still are—stand-ins for the likelihood that a web page will satisfy user intent. Optimizing for intent is the […]
What answer is a searcher looking for? For sustainable, valuable search traffic, you’d better provide it.
Satisfying search intent is Google’s fundamental goal. But algorithms haven’t always kept pace. Proxies like backlinks and keywords have long been—and still are—stand-ins for the likelihood that a web page will satisfy user intent.
Optimizing for intent is the long play, for Google and your site. A page that’s well-matched for user intent can outperform those that optimize primarily for search engines—in search and after the click.
It’s an SEO strategy that focuses on making users happy rather than hitting a particular keyword density or winning exact-match anchor text.
Still, to translate the “make users happy” bromide into something executable, you need to know a few things:
How Google (and others) define search intent;
How to evaluate your target keywords for intent;
What to do with search intent data.
1. How Google (and others) define search intent
For Google, understanding search intent is the key to returning useful search results. (And, by extension, the key to maintaining and growing its search market share, thus capturing more eyeballs for ads.)
The classic division of search intent offers three variations of queries:
Informational. Learn something (e.g. how to train for a marathon).
Transactional. Buy something (e.g. running shoes order online).
Navigational. Go to a specific site (e.g. runners world training plans).
Past studies have estimated that as many as 80% of queries are informational, with the remainder split equally between the other two types.
Google’s Search Quality Evaluator Guidelines take a more granular view, defining four query types:
Know. “The intent of a Know query is to find information on a topic. Users want to Know more about something.”
Do. “The intent of a Do query is to accomplish a goal or engage in an activity on a phone. The goal or activity may be to download, to buy, to obtain, to be entertained by, or to interact with a website or app.”
Website. “The intent of a Website query is to locate a specific website or webpage that users have requested.”
Visit-in-person. “Some queries clearly ‘ask’ for nearby information or nearby results (e.g., businesses, organizations, other nearby places).”
The guidelines also identify two sub-types:
Know Simple. “Know Simple queries seek a very specific answer, like a fact, diagram, etc. This answer has to be correct and complete, and can be displayed in a relatively small amount of space: the size of a mobile phone screen. As a rule of thumb, if most people would agree on a correct answer, and it would fit in 1–2 sentences or a short list of items, the query can be called a Know Simple query.”
Device Action. “Device Action queries are a special kind of Do query. Users are asking their phone to do something for them. Users giving Device Action queries may be using phones in the hands-free mode, for example, while in a car [. . .] A Device Action query usually has a clear action word and intent.”
Many keywords fall clearly into one bucket or another. Some don’t.
What happens when search intent is ambiguous?
Over time, Google has gotten better at parsing search intent, particularly for ambiguous queries. (The 2013 Hummingbird update is often cited as a major improvement in Google’s understanding of search intent.)
If someone enters “new york pizza sunnyvale” (without the quotation marks) into a search box at Google or Yahoo or Bing, it’s not quite clear whether they are looking for: (1) pizza in New York, in a neighborhood or area referred to as Sunnyvale, (2) New York style pizza in a place called Sunnyvale, (3) a place called “New York Pizza,” in Sunnyvale, or (4) some other result.
Queries closer to a sale tend to be longer and less ambiguous, too. The initial consumer research that starts with “coffee grinder” may yield follow-up queries like “conical burr grinder reviews” as the searcher progresses toward a purchase.
Geography (i.e. IP address) can provide clues to search engines, as can search history, time of year, or time of day. For example, an ambiguous query like “flowers” may return different results on February 14 compared to July 14.
For site owners, ambiguity can be an advantage. For example, Justin Briggs suggests that forums and other sites full of user-generated content reveal “when Google is ‘reaching’ for a good result.” The imperative? If you can answer that query clearly, the traffic is up for grabs.
There are other methods of evaluating search intent, too, like active vs. passive intent.
Active vs. passive intent
Active intent, A.J. Kohn notes, is “explicitly described by the query syntax.” It’s not the only intent of the query, however. And, Kohn continues, to satisfy users, you need to meet passive intent, too.
Passive intent is implicit in the query. It’s best identified by asking yourself “what the user would search for next … over and over again.”
In an example shared by Kohn, the query “bike trails in walnut creek” asks explicitly (i.e. active intent) for a list of bike trails. It also implicitly asks (i.e. passive intent) for other information like maps, trail reviews, and photos.
Satisfying passive intent, Kohn argues, is essential for user engagement and conversion. If active intent brings in users at the top of the funnel, passive intent engages and converts them. It’s “the way you build your brand, convert users and wean yourself from being overly dependent on search engine traffic.”
One of the mistakes I see many making is addressing active and passive intent equally. Or simply not paying attention to query syntax and decoding intent properly. More than ever, your job as an SEO is to extract intents from query syntax.
So, how do you identify intent for the keywords you care about?
2. How to evaluate your target keywords for intent
For plenty of queries, the intent is obvious. For example, “portable phone charger reviews” is pretty straightforward.
Because bottom-of-funnel queries tend to offer more information (and less uncertainty), evaluating intent is more critical at earlier stages, with informational queries. Those informational queries are often the highest volume terms a site targets—key drivers of awareness and acquisition.
For smaller sites, intent evaluation is quick and easy. A manual process works. For larger sites, however, scaling that process is essential. Here’s how to do both.
How to evaluate search intent manually
Look at the search engine results page (SERP). What does it show? Do all results suggest a similar intent? Or do they satisfy a range of potential intents?
In SERPs, Google shows its hand. Top-ranking search results are ample evidence of what users want:
Which types of sites rank highly? Individual sites? Aggregators? Blogs? Government and university sites?
What type of content is on those pages? Long-form articles? Short explanations? Images? Videos?
What is the first question answered? What text is offset or included in headers? What subtopics are (or aren’t) covered?
The SERP for “best restaurants richmond va” tries to satisfy two different intents:
Local map listings with tons of five-star Google reviews. For searchers in Richmond who want to call or visit a local restaurant.
Blue links of aggregator sites with “Best Restaurants” lists. For searchers anywhere who want to browse options.
One takeaway: If you run a restaurant, thinking that you can “optimize” your site to get listed among the blue links would be a lost cause.
While this process is simple and intuitive, it doesn’t scale. So what can you do when you need to decode intent for thousands of pages?
How to scale intent evaluation
Several SEO tools—Ahrefs, Moz, SEMRush, and others—track SERP features for individual keywords. Those features are one way to map intent to queries at scale.
If you’re already tracking keywords in one of those tools, you can export the list and assign intent categories based on the type of search result. For example:
SERPs that return a featured snippet are more likely to be Know Simple queries.
SERPs with a high cost-per-click (data those tools also return) suggest a bottom-of-funnel or transactional query.
SERPs without any ads suggest top-of-funnel informational intent.
SERPs with map results suggest Visit-in-person intent, etc.
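The heuristics above translate directly into a small rule-based tagger. Here is a minimal sketch in Python; the feature names, the CPC threshold, and the intent labels are assumptions, so adapt them to whatever your rank-tracking tool actually exports.

```python
# Minimal sketch of rule-based intent tagging from SERP features.
# Feature names, the CPC threshold, and intent labels are assumptions --
# adjust them to match your own tool's export.

def assign_intent(features, cpc):
    """Map a keyword's SERP features and CPC to a rough intent label."""
    if "featured_snippet" in features:
        return "know_simple"          # quick-answer queries
    if "local_pack" in features:
        return "visit_in_person"      # map results imply local intent
    if cpc > 2.0:
        return "do"                   # high CPC suggests transactional
    if "ads" not in features:
        return "know"                 # no ads suggests informational
    return "unclassified"

keywords = [
    ("how to train for a marathon", set(), 0.0),
    ("best restaurants richmond va", {"local_pack"}, 0.5),
    ("running shoes order online", {"ads"}, 3.4),
]
for kw, features, cpc in keywords:
    print(f"{kw} -> {assign_intent(features, cpc)}")
```

The rules are deliberately ordered: a featured snippet or map pack is a stronger signal than CPC, so those checks come first.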
Depending on your industry, different features may hint at different intents. You can sample keywords with various SERP features and code the intent.
So, if you’re trying to assign intent to 10,000 keywords, manually review the intent for 50 keywords for each SERP feature, then programmatically assign intent to the remainder.
Another way to do it is to classify keyword modifiers by intent. (A lengthy list of modifiers is available here.) Research from STAT, now part of Moz, suggests where certain modifiers fall along the intent spectrum.
If you’re starting with a massive list of keywords, you can use an N-gram tool to identify common modifiers within your keyword data. The most common phrases can serve as a basis for classification (and help automate tagging in a spreadsheet).
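If you don’t have a dedicated N-gram tool handy, a few lines of Python can surface common modifiers from a keyword export. The sample keywords below are illustrative:

```python
# Count n-grams across a keyword list to surface common modifiers.
from collections import Counter

def ngrams(words, n):
    """All contiguous n-word sequences from a list of words."""
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def common_modifiers(keywords, n=1, top=5):
    """Most frequent n-grams across the keyword list."""
    counts = Counter()
    for kw in keywords:
        counts.update(ngrams(kw.lower().split(), n))
    return counts.most_common(top)

kws = [
    "best coffee grinder",
    "coffee grinder reviews",
    "best burr grinder",
    "how to clean a burr grinder",
]
print(common_modifiers(kws, n=1, top=3))
```

Running it with `n=2` or `n=3` surfaces multi-word modifiers like “how to,” which map more cleanly to intent than single words.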
Categorizing keyword modifiers is especially useful for sites that have hundreds or thousands of similar pages, like review sites with city-specific content or sites with hundreds of similar products.
Keyword modifiers are also an easy way to expand the set of keywords you track. After all, the goal of identifying intent is not just to see where you meet it but where you may need to expand content to do so (more on that later).
A Moz article offers an example of informational modifiers for products:
what is [product name]
how does [product name] work
how do I use [product name]
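Templates like these become a modifier-based classifier with little effort. The pattern-to-intent map below is a hypothetical starting point, not a complete taxonomy:

```python
import re

# Hypothetical modifier -> intent rules; order matters, extend as needed.
MODIFIER_INTENTS = [
    (r"^(what is|what are)\b", "informational"),
    (r"^(how does|how do|how to)\b", "informational"),
    (r"\b(best|reviews?|vs)\b", "commercial investigation"),
    (r"\b(buy|price|coupon|discount)\b", "transactional"),
]

def classify(keyword):
    """Return the first matching intent label, or 'unknown'."""
    kw = keyword.lower()
    for pattern, intent in MODIFIER_INTENTS:
        if re.search(pattern, kw):
            return intent
    return "unknown"

print(classify("how does a burr grinder work"))   # informational
```

Because rules fire in order, ambiguous queries like “how to buy a grinder” resolve to the earlier (informational) pattern; reorder the list if your funnel mapping disagrees.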
The end result—for manual or automated tagging—is a spreadsheet that classifies keywords by intent.
Whether you choose Google’s four-intent model or another is up to you. You could, for example, map keywords based on your user journey. It’s one of several high-value things you can do with search intent data.
3. What to do with search intent data
Search intent data can support initial research, improve keyword tracking, or add a sharper business focus to reporting. It can also guide on-page content choices, content strategy, or web design.
Using search intent data for research and evaluation
1. Map keyword intent to your marketing funnel.
People turn to their devices to get immediate answers. And every time they do, they are expressing intent and reshaping the traditional marketing funnel along the way.
Customers use search engines from initial consideration through the purchase—and past it. You can map that intent to your funnel. The result is a framework for evaluating search performance based on larger business goals.
For example, while all of your blog posts may qualify as “Informational” in intent, some may serve users in different stages of awareness.
A journey-based mapping of keyword intent pays dividends for competitor research, as well as keyword tracking and reporting.
2. Identify content gaps with competitor research.
Where in the user journey are you struggling? Which intent gaps are competitors filling? Tools like SEMRush and Ahrefs offer keyword-based domain comparisons.
You can enter your domain and several competitor domains. Then, filter for keyword modifiers that you’ve mapped to intent. For example, Ahrefs and Moz are outperforming SEMRush for several informational “how to” queries.
This analysis scales to compare performance at each stage of the funnel. At the same time, it provides a ready-made list of topics to try to close the gap.
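At its core, the gap analysis is a set difference over intent-tagged keyword exports. A toy sketch, where the keyword sets stand in for real exports from a tool like Ahrefs:

```python
# Sketch: informational keywords a competitor ranks for that you don't.
ours = {"burr grinder reviews", "buy coffee grinder"}
theirs = {"burr grinder reviews",
          "how to clean a burr grinder",
          "how does a burr grinder work"}

def informational_gap(ours, theirs):
    """Competitor-only keywords that look informational by modifier."""
    return sorted(kw for kw in theirs - ours
                  if kw.startswith(("how", "what", "why")))

print(informational_gap(ours, theirs))
```

The output is exactly the ready-made topic list described above: informational queries a competitor owns and you don’t.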
Competitor analysis identifies keywords for which your site might reasonably rank. A blue-sky approach to keyword research often yields queries that you’d like to rank for but for which Google perceives an alternative intent (e.g. displays aggregators when you’re an individual site, or vice versa).
3. Track rankings based on intent.
Instead of reporting on keywords by topic (e.g. “We rank well for Product X but not Product Y”), you can measure performance in the context of your marketing funnel.
For example, you may do well for bottom-of-funnel queries (across several products) but struggle to rank for top-of-funnel informational content.
Tracking based on intent is a smarter way to prioritize content expansion, new page creation, or page design tweaks.
Using search intent data for page design and development
4. Add content to answer active and passive intent.
What else could you answer for users? What questions will they have next?
Google’s Knowledge Cards, Kohn offers, are a perfect example of aggregating intent—answering a query and providing valuable context. A restaurant name query, for example, answers so many more questions:
What type of restaurant is it? Is it expensive? Where is it? How do I get there? What’s their phone number? Can I make a reservation? What’s on the menu? Is the food good? Is it open now? What alternatives are nearby?
You may need to expand content on an existing page. Or you may want to create new pages to address unfulfilled user intent. The “Expand vs. Create” decision often hinges on search volume. If the subtopic has search volume, create a new page; if it doesn’t, expand the current one.
One method we’ve used is to write a broad, robust article first, trying to cover several aspects of the topic. We wait for it to start ranking well, then mine Google Search Console for the keywords where we rank in positions 6 through 15. These are typically good candidates for long-tail, specific follow-up posts.
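That mining step is easy to script against a Search Console export. The row format here (a “query” and average “position” per row) is an assumption based on a typical CSV export of the Performance report:

```python
# Sketch: pull "striking distance" queries (avg. position 6-15)
# from Search Console performance rows.

def striking_distance(rows, lo=6, hi=15):
    """rows: dicts with 'query' and 'position' keys."""
    return [r["query"] for r in rows
            if lo <= float(r["position"]) <= hi]

sample = [
    {"query": "coffee grinder", "position": "3.2"},
    {"query": "conical burr grinder reviews", "position": "8.7"},
    {"query": "how to clean a grinder", "position": "14.1"},
    {"query": "grinder maintenance schedule", "position": "22.9"},
]
print(striking_distance(sample))
```

Queries already in the top five are left alone; anything past position 15 usually needs a new page rather than a tweak, which is why the 6–15 band makes good follow-up-post material.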
Larger sites may succeed by targeting high-volume, highly competitive terms first. Smaller sites, in contrast, benefit from targeting several long-tail queries, then attacking a top-level keyword after they’ve built topical authority.
5. Tailor content to win more clicks in the SERPs.
Google’s definition of a Know Simple query hints at several guidelines for featured snippets.
If the search intent is to get a quick answer and not click any link, optimizing for featured snippets may satisfy users (and Google) but ultimately erode organic traffic for all sites (a Prisoner’s Dilemma, according to Rand Fishkin).
It still makes sense to optimize for featured snippets. But the value may be limited to “URL awareness” as users, especially those on mobile, don’t click through.
Beyond featured snippets, there are other ways to try to improve click-through rates. Fishkin highlights an underused strategy: writing page titles and meta descriptions for intent, even at the expense of keyword targeting.
That strategy has risks, but it’s a potential pathway for “underdog” sites to compete against industry stalwarts. If you can get to the bottom of Page 1, a page title and meta description written for humans (rather than search engines) could help differentiate your site, earn more clicks, and (probably) send positive signals back to search engines.
6. Design pages to satisfy active intent first.
“It’s essential to understand the hierarchy of intent so you can deliver the right experience,” Kohn contends. “This is where content and design collide with ‘traditional’ search.”
For SEO, page design has two imperatives:
Answer active intent clearly and immediately;
Provide a logical hierarchy of information to satisfy passive intent.
For Know queries, is the answer clearly visible via header tags, larger font, or an offset block? Are follow-up questions answered with subheads? For Transactional queries, is the next click clear and easy to find?
These are basic principles of UX—but they also have an impact on search performance. Users who don’t find answers immediately are likely to bounce straight back to search results. The “UX is a ranking factor” argument has some truth—and controversy.
We all endure recipe sites that require a lengthy scroll to get to the recipe. That’s because the preceding text (usually a banal essay about the recipe’s origin) provides context for search engines. That context can help sites rank in a vertical like recipes, where search engines can’t differentiate a good chocolate chip cookie from a life-changing one.
Googlers like John Mueller continue to dissuade webmasters from producing content that serves search engines at the expense of the user experience. But the tension remains—those tactics still work.
The lesson, then, is to take the long view. Google would prefer not to value supporting text when it’s superfluous, though it may still reward sites for it now. Slowly, that need will decline. Periodically testing its removal—to see the impact on rankings and user behavior—is worthwhile.
“Target the keyword, optimize the intent.” Kohn’s maxim is the best summary of how search intent fits into an SEO effort. Keywords remain the starting point for a page. But intent should guide decision-making about what those pages should look like.
Creating a list of relevant keywords and categorizing them by intent—whether you target them now or not—can show you where in the user journey you enjoy visibility, and where you don’t.
That intent data can:
Prioritize content expansion on existing pages;
Identify the need for new pages;
Suggest a page design that quickly and clearly solves for active intent first.
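A keyword-by-intent inventory like this can start as a simple rule-based pass. The sketch below is illustrative only: the intent categories, trigger words, and keywords are invented for the example, and any real taxonomy needs manual review.

```python
# Minimal sketch: bucket keywords into intent categories using
# hypothetical trigger words. A real taxonomy needs manual review --
# substring rules will misfire (e.g., "how" inside "shower").
INTENT_RULES = {
    "transactional": ("buy", "price", "discount", "order"),
    "know": ("what", "how", "why", "guide"),
    "go": ("login", "near me", "official site"),
}

def classify_intent(keyword: str) -> str:
    kw = keyword.lower()
    for intent, triggers in INTENT_RULES.items():
        if any(t in kw for t in triggers):
            return intent
    return "unclassified"

# Group an (invented) keyword list by the intent each keyword matches.
keywords = ["buy running shoes", "how to tie running shoes", "nike login"]
by_intent: dict[str, list[str]] = {}
for kw in keywords:
    by_intent.setdefault(classify_intent(kw), []).append(kw)
```

Reviewing the "unclassified" bucket separately is a quick way to see which stages of the user journey your keyword list doesn't yet cover.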
For marketers with an interest in research, it’s a good time to start talking about AI-facilitated online surveys. What are those, exactly? They’re surveys that use machine learning to engage with respondents (think of a chatbot) which then manage a lot of the back-end data involved with implementing and reporting (think of pure drudgery). We have had a great experience with GroupSolver, and other examples include Acebot, Wizer, Attuned (specific to HR) and Worthix. There are others out there, and probably even more by the time you read this.
The good news is that a conversation about AI-facilitated online surveys is well underway. The bad news is that it’s rife with exaggerated claims. By distinguishing between hype and genuine promise, it’s possible to set some realistic expectations and tap into the technology’s benefits without overinvesting your time and research dollars in false promises (which seems to happen a lot with AI).
As Dr. Melanie Mitchell puts it in “Artificial Intelligence Hits the Barrier of Meaning,” AI is outstanding at doing what it is told, but not at uncovering human meaning. If that’s true, what possible use can AI have for online surveys? There are five themes that need to be addressed.
Reduced customer fatigue
One misconception is that AI surveys reduce fatigue because traditional surveys are too long. Not quite. Surveys are only too long if they are poorly crafted, but that has nothing to do with how the instrument is administered. Where AI does help is in creating an experience that is very comfortable for the respondent because it looks and feels like a chat session. The informality helps respondents feel more at ease and is well-suited to a mobile screen. The possible downside is that responses are less likely to be detailed because people may be typing with their thumbs.
There are three advantages to how AI treats open-ended questions. First, the platform we used takes that all-important first pass at coding a thematic analysis of the data. When you go through the findings, the machine will have already grouped responses by the themes it has parsed. If you are using grounded theory (i.e., looking for insights as you go), this can build real momentum toward developing your insights.
Secondly, the AI also facilitates the thematic analysis by getting each respondent to help with the coding process themselves, as part of the actual survey. After the respondent answers “XYZ,” the AI tells the respondent that other people had answered “ABC,” and then asks if that is also similar to what the respondent meant. This process continues until the respondents have not only given their answers but have weighed in on the answers of the other respondents (or with pre-seeded responses you want to test). The net result for the researcher is a pre-coded sentiment analysis that you can work with immediately, without having to take hours to code them from scratch.
The downside of this approach is that you will be combining both aided and unaided responses. This is useful if you need to get group consensus to generate insights, but it’s not going to work if you need completely independent feedback. Something like GroupSolver works best in cases where you otherwise might consider open-ended responses, interviews, focus groups, moderated message boards or similar instruments that lead to thematic or grounded theory analyses.
The third advantage of this approach over moderated qualitative methodologies is that the output can give you not only coded themes but also gauge their relative importance. This gives you a dimensional, psychographic view of the data, complete with levels of confidence, that can be helpful when you look for hidden insights and opportunities to drive communication or design interventions.
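At its core, the agree/disagree loop described above amounts to tallying endorsements per theme and ranking themes by endorsement rate. A minimal sketch of that ranking step, with themes and votes invented for illustration (platforms like GroupSolver do far more, including confidence levels):

```python
# Sketch: rank candidate themes by the share of respondents who
# endorsed them. Themes and votes here are invented for illustration.
from collections import Counter

# (theme shown to a respondent, whether they agreed it matched their view)
agreement_votes = [
    ("easy to use", True), ("easy to use", True),
    ("too expensive", True), ("too expensive", False),
    ("great support", True),
]

endorsed = Counter()
shown = Counter()
for theme, agreed in agreement_votes:
    shown[theme] += 1
    endorsed[theme] += int(agreed)

# Relative importance: endorsement rate per theme, highest first.
ranking = sorted(
    ((theme, endorsed[theme] / shown[theme]) for theme in shown),
    key=lambda pair: pair[1],
    reverse=True,
)
```

The output is the pre-coded view the researcher starts from: themes that everyone endorsed float to the top, contested ones sink.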
Surveys at the speed of change
There are claims that AI drives speed-to-insight and real-time integration with other data sources. That’s the ultimate goal, but it’s still a long way off. The obstacle isn’t connecting more data pipelines; it’s that surveys and behavioral data science do very different things. Data science tells us what is happening but not necessarily why, because it isn’t meant to uncover behavioral drivers. Unless we’re dealing with highly structured data (e.g., Net Promoter Score), we still need human intervention to make sure the two types of data speak the same language. That said, AI can create incredibly fast access to the quantitative and qualitative data that surveys often take time to uncover, which bodes very well for increased speed to insight.
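Net Promoter Score illustrates why structured metrics are the exception: the computation is fully mechanical, so no human interpretation is needed. A quick sketch of the standard formula (promoters score 9–10, detractors 0–6):

```python
# Standard NPS formula: percentage of promoters (9-10) minus
# percentage of detractors (0-6) on a 0-10 scale.
def net_promoter_score(scores: list[int]) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(net_promoter_score([10, 9, 8, 7, 6, 3]))  # 2 promoters, 2 detractors -> 0.0
```

An open-ended "why did you give that score?" has no equivalent one-liner, which is exactly where the human intervention comes in.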
Cross-platform and self-learning ability
There is an idea that AI surveys can access ever-greater sources of data for ever-broader richness of insight. Yes and no. Yes, we can get the AI to learn from large pools of respondent input. But, once again, without human input on two fronts (from respondents themselves and from the researcher), the results are not to be trusted, because they risk missing underlying meaning.
Creates real-time, instant surveys automatically
The final claim we need to address is that AI surveys can be created nearly instantaneously, or even automatically. Some tools generate survey questions on the fly, based on how the AI interprets responses. It’s a risky proposition. It’s one thing to let respondents engage with each other’s input, but quite another to let them drive the actual questions you ask. An inexperienced researcher may substitute respondent-driven input for researcher insight. That said, if AI can take some of the drudgery out of developing the instrument, as well as the back-end coding, so much the better. “Trust but verify” is the way to go.
So, this quote from Picasso may still hold true: “Computers are useless. They can only give you answers,” but now they can make finding the questions easier too.
The good news is that AI can do what it’s meant to do – reduce drudgery. And here’s some more good news (for researchers): There will always be a need for human intervention when it comes to surveys because AI can neither parse meaning from interactions nor substitute research strategy. AI approaches that succeed will be the ones that can most effectively facilitate that human intervention in the right way, at the right time.
Conversations with prospects or customers can improve practically every metric or user state model you’re aiming for. As Jen Havice noted here on the CXL blog, insights from customer interviews can:
Accelerate the customer journey;
Increase conversions;
Drive more leads, sales, and revenue.
To do that, you don’t need to reinvent the customer interviewing wheel. You can use proven frameworks, like the Jobs-to-Be-Done (JTBD) approach. The JTBD approach is a simple but powerful way to extract ultra-valuable Voice of Customer (VoC) data.
That data, in turn, can help optimize copy throughout your site—as long as you know what you’re looking for before you start your interviews.
With that in mind, this post has three parts:
How to define needs and goals for customer interviews;
What Voice of Customer data from interviews will and won’t accomplish;
How to run Jobs-to-Be-Done interviews to gather Voice of Customer data.
1. Before you start interviewing, define your goals
The goals for a customer interview help structure your questions. They also help you understand what to look for as you dig through interview transcripts.
Any of three methods can help you define those needs and goals before you start talking to prospects or customers.
2. What Voice of Customer data from interviews will and won’t accomplish
As Katz puts it:
A common amateur mistake, ironically, is to ask the customer, “What do you want?” [. . .] the customer assumes that they are supposed to describe the exact feature or solution they want. This approach confuses needs with solutions to those needs.
Contrary to popular belief, VoC research doesn’t aim to turn innovation over to the customer. Katz continues:
[A]ny researcher or product developer who has ever tried this approach understands its futility. Why? Because most customers aren’t very good at coming up with innovative solutions. And frankly, that’s not their job – it’s yours!
VoC research isn’t about asking your customers what they want but digging to find that answer yourself. Customers may not know what they want in a product, but they do know what they want in business or life.
By asking customers about their pain, motivations, objections, etc., you discover what they need in a product. Incidentally, these insights give you the exact copy and hierarchy to help other prospects decide to purchase.
Why does it work so well? It speaks the words that often go unspoken.
VoC copy taps into (unspoken) consumer emotions
When I wrote the evergreen funnel for the Copy Hackers 10x Freelance Copywriter course, I hesitated over the last email. The subject line? “You are a weirdo.”
Not exactly a safe subject line. But it came from the customer. It came from social anxiety. And it was a powerful driver. Here’s the quote from the customer interview:
So I do think it would be just really cool to meet other people who have that same like connotation of like the weirdos, but they’re not. Or if they are, it’s in a good way. It’s because there’s something right with us, not wrong with us.
There’s a reason we squirm when we imagine someone reading our mind. In our minds, we’re safe to be vulnerable, weak, doubtful. When you put all this onto the screen—in conjunction with your product—it feels risky.
But risky copy is usually a sign you’re doing something right. Taking a risk differentiates you in the customer’s mind—and shows that you have a unique way to solve their problem.
As Joanna Wiebe noted in her talk at CXL Live, a “breakthrough or bust” approach to copywriting encourages risk-taking. (The campaign with “You are a weirdo,” isn’t live yet, but it has that “breakthrough or bust” potential.)
Ultimately, when you use VoC data, you help the prospect feel understood. And when the prospect feels understood, you earn loyalty: validation motivates connection.
How do you get that VoC data from interviews? The Jobs-to-Be-Done framework is an effective approach.
3. How to run Jobs-to-Be-Done interviews to gather VoC data
Job theory is the belief that people “hire” products to fulfill a “job.” The classic Jobs-to-Be-Done case is the prospect who walks into Home Depot to find a power drill.
If you look at her search through the lens of a product as a job, you understand that she’s not looking for a drill. She’s looking for a way to make a hole in her wall. Which means she’s really looking to hang a picture. Which means she’s looking to make her house look nice.
Where you’d once focus on power tools, jobs theory instead focuses you on hanging solutions. That’s allowed for massive disruption and innovation. Because when you let go of the traditional theory of “the prospect wants to buy a drill,” you let go of “traditional competitors.”
Now, suddenly, you can introduce this prospect to other hanging solutions like self-adhesive heavy-load hooks, or even a kitschy spread of yarn and binder clips.
JTBD interviewers don’t start an interview with questions about the product but instead ask to hear about the journey behind the purchase. (If you want sample scripts for JTBD interviews, see this one or this one.)
This method allows for a more truthful—and thorough—account. It dives into the decision-making process instead of focusing solely on the customer’s satisfaction or dissatisfaction with the product.
The JTBD framework is essential to uncovering VoC data. But remember, the full method is all about the job. Not the motivations, not the backstory. When you run a JTBD interview, you uncover those things, but according to the traditional model, their only importance is getting you to the job.
So, when interviewing for VoC insights and copy optimization, use (most of) the framework, but shift the focus. Here are five elements of the framework to follow—and three to avoid.
5 essential elements of the JTBD interview framework
1. Be human. Use natural language—this is a conversation. You’re uncovering a story, not an academic thesis. You’re also looking to uncover sensitive information. Your interviewee needs to feel comfortable being vulnerable.
2. Be interested. Bring a question you genuinely want answered:
Students always ask me: “So when I do the interview, what should I ask?” And I say: “That’s the wrong question.” I always say: “You have to have a question that you are really interested in. Not make it up, but find out what you are really interested in.”
3. Be biased. When you agree with your customer, when you egg her on, you make her comfortable. You allow her to trust you and open up to you. Be careful not to ask leading questions, but don’t be a completely impartial, detached interviewer.
4. Use context reinstatement. Originally used by detectives to allow witnesses to better remember crime scenes, context reinstatement is a Cognitive Interviewing method that takes the interviewee back to a moment in time.
Questions surrounding the senses of the moment—what was the weather like, what you were wearing, who were you with—allow the interviewee to re-experience the moment and give a more accurate (and emotional) account.
5. Stay factual. JTBD interviews are centered on true accounts only. There is no room for speculation or hypotheticals when discussing decisions, features, or outcomes. Speculative “data” won’t lead to higher conversions.
3 elements of the JTBD framework to ignore (or use sparingly)
There are three elements in the JTBD framework that are less effective for uncovering copy insights:
1. Notetaking. JTBD interviewers take unique notes. They draw a customer journey map and note each statement in the corresponding slot of the customer’s journey. They do this during the interview to help direct their line of questioning.
The challenge with notetaking is that it can easily become a distraction. For marketers hoping to improve copy (rather than develop products), we should focus on the conversation and take only margin notes.
Remember: We’re digging for exact customer language. Yes, information about the journey can be useful, but the goal is the rich VoC language that will resonate in the prospect’s head.
There’s no way to note exact language and focus on the conversation at the same time. Instead, record the interview (with permission), get it transcribed, and then organize the notes after the call.
2. Two-on-one interviews. The JTBD framework also suggests interviewing in pairs, for two reasons:
Two sets of notes. But you’re not taking detailed notes, remember?
Faster pace. When one interviewer is processing what’s been shared, or developing the next line of questioning, the other can jump in with more questions. But silence is an incredible way to access vulnerable, deeper insights. As Michael Bungay Stanier suggests,
Silence is often a measure of success… it means [your interviewee is] thinking, searching for the answer. He’s creating new neural pathways, and in doing so literally increasing his potential and capacity.
3. Asking “Why?” JTBD interviewers love asking, “Why?” They need to dig, dig, dig to the bottom. But you don’t need to ask “Why?” to learn why. And in fact, according to neuroscience, you shouldn’t. Kay White explains:
Why [. . .] does two things very quickly, immediately in fact; two things you want to avoid: 1) it sends people straight to the word “because” which is justifying their actions or decisions; and 2) it closes down information-gathering in the request for “the reason.”
Asking “Why?” moves a customer from an emotional space to a cognitive space. Barriers and motivations are backed by emotion. We want to stay in the emotional space because we want our prospects to tap into those same emotions.
“Why?” puts us on the defensive. Again, we need the interviewee to feel as comfortable as possible, so turn “Why?” questions into “What?” questions:
Why did you do that? >> What was the thought process behind that decision?
Why did you feel that way? >> What was feeding into those emotions?
Why were you looking for an X product? >> What was going on in your life that led you to search for an X product?
Conversations with customers are a rich resource that reaches beyond product development. The JTBD framework can help customer interviews deliver the rich VoC data you need to improve low-converting areas of your copy.
To get started:
Pick a method to define what you hope to gain from your customer interviews.
Set up interviews with customers who will help you get that information.
Listen for their story—really listen—and record their insights (digitally) to review later.
Identify emotionally charged language that relates to the pain they’re experiencing—and the “job” they need to “hire” a product to do.
Apply those insights in their raw, exact language to your weakest copy sections.