Join us Thursday, October 29th at 12 p.m. PT for the Oracle Cloud CX Virtual Summit, exploring how leading organizations are re-engineering their marketing, sales, and service processes. Larry Ellison will lay out his vision for the future of customer experience, followed by leaders at Ricoh, Motorola and Hyster-Yale, who will share their strategies for harnessing data and delivering the right message to the right customer at the right time.
To make every customer interaction matter, organizations need to listen to their customers and rethink the way they market, sell and provide service across every touchpoint. If 2020 has taught customer experience leaders anything, it’s the need to be resilient in the face of constant change. Get executive viewpoints on the importance of unifying data across your business and empowering every employee to respond effectively to customer expectations.
Hear real-life success stories and lessons learned from Oracle CX Sales and Oracle CX Commerce customers.
Learn how to visualize your path to sales and commerce excellence.
Get the latest product news and a glimpse of what’s on the horizon.
It is amazing how poor a job many digital marketers do when reporting their work to clients, and that includes both internal and external clients. Just think about how many marketing presentations and reports you’ve seen that simply contain screenshots from Google Analytics, Adobe Analytics, AdWords, Google Search Console, or reports from a backend ecommerce system. This isn’t the way to influence people with your data.
The biggest issue is that most marketers are not analytics people. Many marketers do not know how to collect all of the necessary data, how to leverage that data or, to a lesser degree, how to present it in a meaningful way. Typically, this is the job of a data analyst. Just as purchasing a pound of nails, a hammer and a saw doesn’t make you a carpenter, gaining access to your analytics reporting tool does not make you a data analyst. This is why so many reports contain those convoluted screenshots and present data out of context, carrying little to no meaning.
Data out of context
Many reports merely present the facts (the data) as a number with no context, and data out of context is just data. For example, simply stating that AdWords generated 5,000 sessions to a website last month is meaningless. The number 5,000 is neither a good nor a bad data point without a reference point or a cost factor. It’s not until you add in other factors (open the box) that you can demonstrate whether or not your efforts were a success. If the previous month’s AdWords campaign drove only 1,000 sessions, then yes, without other data, 5,000 sessions looks good. But what if the cost of driving those additional 4,000 sessions was tenfold the previous month’s spend? What if the previous month AdWords drove 5,000 sessions, but at double the spend?
It is only by adding in the additional information in a meaningful way that marketers can turn their reporting from a subjective presentation into an objective one. To do this, stop reporting absolute numbers and put your data into context with ratios. For example, when assessing cost per session, add a third factor (goal conversions, revenue, etc.) and create something like “cost per session : revenue.” This puts the data into context: if one campaign costs $1 per $100 of revenue and another costs $2.25 per $100 of revenue, the effectiveness of the marketing spend becomes self-evident. In this example, it is clear the first result is superior to the second. By normalizing the denominator (creating the same denominator), the success or failure of an effort is easily demonstrated.
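As a rough sketch (the spend and revenue figures below are hypothetical, not from any real campaign), normalizing two months of spend to a common denominator might look like this:

```python
def cost_per_100_revenue(spend: float, revenue: float) -> float:
    """Normalize marketing spend to a common denominator:
    dollars of spend per $100 of revenue generated."""
    return spend / revenue * 100

# Hypothetical months with the same revenue but different spend
month_a = cost_per_100_revenue(spend=500, revenue=50_000)    # 1.0
month_b = cost_per_100_revenue(spend=1_125, revenue=50_000)  # 2.25

# With a normalized denominator, the comparison is self-evident
print(f"Month A: ${month_a:.2f} per $100 of revenue")
print(f"Month B: ${month_b:.2f} per $100 of revenue")
```

Because both numbers share the same $100-of-revenue denominator, the raw session counts no longer matter; the ratio alone tells the story.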
Data is boring
Yes, presenting data is boring. Simply staring at a mega table of data will cause many to lose interest and tune out whatever message you might be trying to present. The best way to avoid this is to make your data sing!
Make your data sing
Just like in the marketing world, the easiest way to grab someone’s attention and make your message sing is with imagery. Take all that great data in your mega table and turn it into an easy-to-understand graph or, when necessary, simplified data tables. Even better, if you can, turn it into interactive graphs. During your presentation, don’t be afraid to interact with your data. With some guidance, your audience can dive into the data they are most interested in.
Learn to use data visualization tools like Data Studio, Tableau, DOMO, Power BI and others. Leveraging these tools allows you to take boring data and not only give it meaning but to make the data sing, which will turn you into a data hero.
Interacting with your data
Back at the end of July 2019, my firm acquired an electric vehicle. We wanted to know if the expense was worth it: did the savings from using electricity instead of gasoline justify the difference in the cost of owning the vehicle (lease payments plus insurance and maintenance costs)?
Below is a typical report with all the boring detailed data. This is a mega table, and only those truly interested in the details will find it interesting. Most people presented with this table would likely look only at the right-hand column to see the total monthly savings. Many would get bored and start counting the holes in the ceiling tiles instead of paying attention.
The following graphs demonstrate many of the ways to make this data sing, by putting all of the data into context through interactive graphics.
The above graph (page 1 of the report) details the cost of operating the electric vehicle. The first question we were curious about was how much it was costing us to operate per 100 km. By collecting data on how much electricity was used to charge the car, how many kilometers we drove in a given month and the cost of that electricity, we were able to calculate the operating cost. In the graph you can easily see the fluctuation in operating costs, with costs going up in the winter months (running the car’s heater) and again in June and July (running the AC). You can also see the impact of increases in electricity prices.
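The per-100 km calculation itself is simple. Here is a minimal sketch (the kWh, price and distance figures are hypothetical stand-ins, not the report’s actual data):

```python
def ev_cost_per_100_km(kwh_charged: float, price_per_kwh: float,
                       km_driven: float) -> float:
    """Monthly EV operating cost, normalized to dollars per 100 km."""
    return kwh_charged * price_per_kwh / km_driven * 100

# Hypothetical month: 300 kWh charged at $0.13/kWh, 1,500 km driven
monthly_rate = ev_cost_per_100_km(300, 0.13, 1_500)
print(f"${monthly_rate:.2f} per 100 km")  # $2.60 per 100 km
```

Running the same formula for each month is what surfaces the seasonal swings (heater in winter, AC in summer) that the graph makes visible.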
To truly evaluate the big question “Was acquiring an electric vehicle worth it?” we’d need to estimate how much gasoline would have been consumed by driving the same distance against the average cost for gas during the same months. On page 2 of the report the data is now starting to sing as the difference in the savings of electrical over gas becomes clear. The chart becomes interactive and allows the user to hover over any column to reveal the data details.
To make the data truly sing, we’d need to compare not just the operating costs but the costs of ownership. Do the savings in operating costs justify the price difference between the vehicles? We know that the difference in lease costs, insurance and annual maintenance is in the range of $85-$90/month.
The above graph (page 3 of the report) demonstrates the impact of plummeting gas prices and the reduced driving during April 2020 due to the COVID-19 shutdown. In April 2020, a monthly saving of merely $41 or so was achieved. That meant there were no net savings in owning a more expensive electric vehicle over an equivalent gas-powered vehicle (again, the difference in lease costs, insurance, etc. is in the range of $85-$90/month). The data might not have been singing, but it was definitely screaming when we saw it.
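The underlying comparison can be sketched like this. All inputs below are hypothetical except the $85-$90/month ownership premium cited above; the point is that a low-driving, cheap-gas month like April 2020 pushes the result negative:

```python
def monthly_net_savings(km: float, gas_l_per_100km: float,
                        gas_price_per_l: float, ev_kwh_per_100km: float,
                        price_per_kwh: float, ownership_premium: float) -> float:
    """Fuel savings of the EV minus the extra monthly cost of owning it."""
    gas_cost = km / 100 * gas_l_per_100km * gas_price_per_l
    ev_cost = km / 100 * ev_kwh_per_100km * price_per_kwh
    return (gas_cost - ev_cost) - ownership_premium

# Hypothetical low-driving month with depressed gas prices
result = monthly_net_savings(km=600, gas_l_per_100km=8.0, gas_price_per_l=0.80,
                             ev_kwh_per_100km=20, price_per_kwh=0.13,
                             ownership_premium=87.50)
print(f"Net monthly savings: ${result:.2f}")  # negative: the EV cost more that month
```

When the fuel savings fall below the ownership premium, the “savings” column flips negative, which is exactly the April 2020 story the graph tells.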
Check out the entire report for yourself. It is accessible here, so you can view all the pages/charts. The report is interactive, allowing you to hover over given months to see data details or even change the reporting date range.
By embracing not only data visualization but the visualization of meaningful data, we as marketers can raise the bar and increase engagement with our audience. Think of the four pages of this report: which page speaks most to you? Which way of presenting the data makes it sing for you? Odds are it was not the first table with all the detailed data.
Content is the main currency for marketers, no matter what their industry, but the overabundance of content available to consumers has left many marketers with the question: How do I analyze the performance of my content?
The question is valid and pressing whether we’re talking about content aimed at producing an outcome, like a conversion or form completion, or content that is part of a customer service interaction, and whether the content is written or visual.
For Greg Bennett, Conversational Design Principal at Salesforce, content analysis starts with identifying channels that are receiving the most response.
“In the last six months the use of Salesforce’s messaging channel has grown more than 600%,” said Bennett. “Our use of chatbots has increased over 170%, so when you see dramatic changes like that you need to instantly start analyzing the content that is being produced to measure its efficiency.”
Creating content and conversation
During the onset of the pandemic, Salesforce experienced a 50% increase in service cases from its clients. The increase in requests generated a lot of content for Bennett and his team to review.
For Salesforce, increased content meant an increased use of Einstein AI for chatbot services, ensuring that all new content created fit the Salesforce corporate style guidelines, and reviewing processes and patterns of conversations to increase efficiency and problem resolution.
“In content analysis we focus on the follow-up with customer requests and patterns of usage and conversation because those are trends we can immediately correct if we see any inefficiency,” said Bennett, a linguist by training from Georgetown University. “It is not just the subject of content in a channel, but the average handling time of content in the same space or the action taken from content in a marketing space.”
The new attention being paid to content and conversational analytics and analysis has sped the industry ahead into a new level of scrutiny that is here to stay, according to Meggie Giancola, Head of CPG Sales and Strategy for Valassis, a marketing technology and consumer engagement firm.
“Now marketers are really measuring content on how much it correlates to brand value, and the past six months has brought us to a place in analytics that normally would have taken five years,” said Giancola. “How content is being used to motivate shoppers is much different than when this year started, so your analysis and measurement has to change.”
Content analysis and measurement needs to focus on the specific shift in consumer behavior and mindset that has occurred.
In particular, has the content caused an increase in purchasing for lower-priced products and services because of the economic crisis? Has it led to an increase in consumer feedback about current campaigns, or increased consumer engagement about COVID-related services or promotional pricing?
“Every analysis is different. So as long as the analysis is data-driven that is a quality start,” said Giancola. “When you are seeing a high volume of content or impressions with minimal purchasing you need to ask if it is a content issue, brand issue or pricing issue.”
For Salesforce, service requests currently arrive 42% by chat, 37% by phone and 21% by chatbot. The increased use of chatbots has led Bennett and his team to closely scrutinize and measure chatbot performance.
“Measuring marketing content to extend and prolong engagement is less of a race and more of a marathon,” said Bennett. “If the content is for a service request you want minimal content that leads to a shorter waiting time that increases consumer satisfaction.
“As a marketer you have to separate the quality of service from the quality of conversation when measuring content. You could have great content in a conversation, but not resolve the original issue.”
When possible, content about a specific product or service should be measured against itself, even if the last measurement happened pre-COVID. Disparities between the two measurements should be explainable by new variables that have entered the market since the COVID outbreak.
Video as a vehicle
As video content surges in demand throughout nearly every industry, the way it is analyzed and measured has to change, too. Aside from the obvious measurements like views and reposts, the popularity of short-form videos has created a new urgency in content analysis and measurement.
“Now that short-form videos are used as fundamental pieces of marketing campaigns, they need to be treated as such,” said Jon Niermann, co-founder and CEO of Loop Media, a streaming media company focused on video content. “The days of judging videos solely on whether they go viral are over.”
Niermann says the foundation of measuring video content should be making sure it triggers an emotional response, with a clear call to action or desired response. While short-form video is the most popular form of the content, there will always be a place for long-form video, and its analysis and measurement deserve equal scrutiny.
“Clearly distinguish between how you are going to measure short-form video content compared to long-form video content, even if it’s about the same subject,” said Niermann. “We recommend daily reports of where and when the videos are playing, who is playing them, and the actions they are creating. As a general rule, everyone should be reviewing their video performance more than they currently do.”
So when measuring content, it’s important that the analysis is comprehensive in order to identify inefficiencies and opportunities to enhance promotions, services or products.
“Assets and content of the conversation experience will see even more in-depth analysis moving forward, there will be no change in that,” said Bennett. “Be able to scale your content quickly if you have to, but never stop ongoing analysis for a more efficient process.”
For over 20 years, website analytics has leveraged persistent cookies to track users. This benign piece of code was a massive improvement over using a user’s IP address, or even the combination of IP address and browser. Since it was first introduced, however, the use of these cookies has become the focus of privacy legislation and paranoia. So what alternative is there?
If your website or mobile application requires the creation of user accounts and logins, it’s time to plan to transition away from cookie-based tracking to user ID tracking. In simple terms, instead of having your analytics toolset read a cookie, you pass a unique identifier associated with the user ID and then track the user via this identifier. Typically the identifier is the login ID.
Preparing for advanced tracking
Ensure that the user ID you’ve deployed doesn’t contain Personally Identifiable Information (PII). Too often, sites require users to use their personal email address, or even their account number, as a login ID. These are PII. If this is the case with your organization, the trick is to assign a random unique client identifier to all existing accounts, as well as to any future accounts as they are created.
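One straightforward approach (a sketch; the field names are assumptions, not any vendor’s schema) is to mint a random UUID per account and use that, never the login email, as the analytics identifier:

```python
import uuid

def assign_client_id(account: dict) -> dict:
    """Attach a random, non-PII client identifier to an account record.
    An existing ID is preserved so the identifier stays stable over time."""
    account.setdefault("client_id", uuid.uuid4().hex)
    return account

account = {"login": "jane.doe@example.com"}  # PII -- never send to analytics
tracked = assign_client_id(account)
print(tracked["client_id"])  # random 32-char hex string, safe to track
```

Because `setdefault` leaves an existing `client_id` untouched, the same routine can be run safely over both existing accounts and newly created ones.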
Have your developers start to push the User ID to the data layer. This way, the variable will be there waiting for your analytics software to read it once you’re ready to implement the new tracking method. Check with your analytics software on the variable name for this element as it varies from analytics software to software.
Create a new view/workspace within your analytics software and configure it to track users by their user ID. Most analytics packages will still set a temporary cookie to track user behavior prior to login and then connect the sessions. This way you can see what a user does on your site even before they log in, as well as what visitors who never log in do.
Benefits of tracking users by user ID
What if a user clears their cookies (perhaps they’re using antivirus software that purges all cookies every time the browser is closed)? Each cleared cookie makes the same person look like a new visitor, leading to inflated user count data.
By tracking a user via their user ID, you’ll obtain a more accurate count of unique users on your site.
Cross Device Tracking
This is perhaps one of the greatest benefits of tracking users by their user ID. You can now see how users interact with your site and/or mobile app: how many use a combination of devices, and whether there is a preference for one type of device to add items to a shopping cart, only to have the order processed on another device.
Greater Analytics Insight
Armed with enhanced analytics data, new and potentially powerful insights can be harvested. With this new knowledge, you can better direct internal resources to focus and enhance the user experience and optimize the user flow for greater profits.
Real life examples
The following examples demonstrate the power of tracking users by their user ID.
Overview – Device Overlap
The following image shows what percentage of accounts use which type of device and the percentage that use a combination of devices. For example, while 66.6% use only a desktop, 15.8% use a combination of Mobile and Desktop.
User Behavior – Device Flow
Reviewing the device flow leading up to a transaction can provide some of the greatest insights from this enhanced analytics tracking methodology.
While it might not be surprising that the two most common device paths (by number of users) were Desktop only and Mobile only, what surprised both me and the client was number 3. While the Desktop -> Mobile -> Desktop path is experienced by only approximately 3% of users, it accounts for approximately 8% of all transactions and over 9% of all revenue generated.
The minimal overall use of tablets was also a bit surprising. Of course the mix of devices does vary from client to client.
For example, from the above report one can objectively assign a more accurate value to SEO efforts by examining the role organic search traffic played in generating sales. While as the source of an immediate sale, organic search traffic represents (in this case) only 1.3% of total revenue, as an assist in the sales cycle it played a role in over 10.4% of generated revenue.
Enhanced user insights
In this example, the client allows its customers to also have multiple logins for their account. Essentially a user ID represents a customer/client and not a single user. The client operates in the B2B world where multiple people within its clients’ organizations may require unique logins and rights (who can order, who can just view product details, who can view or add to the cart but not place an order, etc.). By leveraging both tracking by user ID and recording a unique login id within their analytics, these additional insights can be obtained.
The above report not only breaks down revenue by division but demonstrates how users in different divisions use the site differently. In Division 1, there is almost a 1:1 relationship between user IDs and login IDs. Yet in Division 3, the ratio is over 4:1, meaning that for every customer an average of more than four logins is being utilized.
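Computing that ratio from a raw analytics export is straightforward; a sketch with made-up rows (the division names and IDs are illustrative only):

```python
from collections import defaultdict

def login_ratio_by_division(rows):
    """rows: iterable of (division, user_id, login_id) tuples.
    Returns unique logins per unique customer, by division."""
    users, logins = defaultdict(set), defaultdict(set)
    for division, user_id, login_id in rows:
        users[division].add(user_id)
        logins[division].add(login_id)
    return {d: len(logins[d]) / len(users[d]) for d in users}

# Hypothetical export: one single-login customer, one four-login customer
rows = [("Division 1", "cust_a", "login_1"),
        ("Division 3", "cust_b", "login_2"),
        ("Division 3", "cust_b", "login_3"),
        ("Division 3", "cust_b", "login_4"),
        ("Division 3", "cust_b", "login_5")]
print(login_ratio_by_division(rows))  # {'Division 1': 1.0, 'Division 3': 4.0}
```

A ratio near 1:1 suggests a single buyer per account; a ratio of 4:1 or more signals multiple roles inside the customer organization, which is exactly the segmentation opportunity discussed next.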
How can they leverage this data for more effective marketing? By understanding that within divisions there are differences, carefully crafted email marketing can be created to target customers differently with multiple logins vs. single account/login customers.
A further dive into the data could also reveal which login IDs are only product recommenders (only view products) from those who make specific product requests (add to the shopping cart and never place the order) from those who only process orders and from those who do it all. Each one needs to be marketed to differently with different messaging to optimize the effectiveness of the marketing effort. It’s through detailed analytics that this audience definition can be obtained.
Is tracking by user ID right for me?
Making the decision to change how you track your users is a difficult one. First, does your site/mobile app require users to log in reasonably early in their journey? This approach is ideal for e-commerce sites and sites where the vast majority of user interaction takes place after the user logs into the site/application.
If you’re running a general website with the goal to merely share information and generate “contact us” type leads, the answer to making this switch is no.
If you have a combination of a general information site plus a registered user section, then yes, you might want to consider making this change, perhaps just for the registered user section.
Sponsored Products is the most widely adopted Amazon search ad format, and typically accounts for more than six times as much ad spend as Sponsored Brands ads for the average Tinuiti (my employer) advertiser. As such, it’s incredibly important for advertisers to understand the full value that these ads drive.
Part of this is understanding the click-to-order period between when a user clicks on an ad and when that user ends up converting. Given how Amazon attributes orders and sales, it’s crucial that advertisers have an idea of how quickly users convert in order to value traffic effectively in real time.
Amazon attributes conversions and sales to the date of the last ad click
When assessing performance reports for Sponsored Products, advertisers should know that the orders and sales attributed to a particular day are those that are tied to an ad click that happened on that day. This is to say, the orders and sales reported are not just those that occurred on a particular day.
Advertisers viewing Sponsored Products conversions and sales in the UI are limited to only seeing those orders and sales attributed to the seven days following an ad click. However, marketers pulling performance through the API have greater flexibility and can choose different conversion windows from one to thirty days, which is how the data included in this post was assembled.
In the case of Sponsored Display and Sponsored Brands campaigns, performance can only be viewed using a 14-day conversion window, regardless of whether it is being viewed through the UI or through an API connection.
For marketers who wish to use a thirty-day conversion window in measuring Sponsored Products sales and conversions attributed to advertising, this means that it would take thirty days after the day in question in order to get a full picture of all conversions. Taking a look across Tinuiti advertisers, the first 24 hours after an ad click accounted for 77% of conversions and 78% of sales of all those that occurred within 30 days of the ad click in Q2 2020.
Unsurprisingly, the share of same-SKU conversions that happen in the first 24 hours is even higher, as shoppers are more likely to consider other products the further removed they become from an ad click.
For the average Amazon advertiser, we find that more than 20% of the value that might be attributed to ads happens more than one day after the ad click, meaning advertisers must bake the expected value of latent orders and sales into evaluating the most recent campaign performance. The math of what that latent value looks like varies from advertiser to advertiser.
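One simple way to bake that in (a sketch using the Q2 2020 day-one sales share quoted above; substitute your own historical share, which varies by advertiser):

```python
def expected_30_day_sales(day_one_sales: float,
                          day_one_share: float = 0.78) -> float:
    """Gross up sales attributed in the first 24 hours after the ad click
    to the total expected once the 30-day attribution window closes."""
    return day_one_sales / day_one_share

# If $7,800 in sales is attributed on day one, expect roughly $10,000
# once latent orders trickle in over the rest of the window
print(round(expected_30_day_sales(7_800)))  # 10000
```

The same gross-up logic applies to conversion counts (using the 77% figure), and the multiplier should be re-derived for seasonal periods, since, as discussed below, key shopping days compress the click-to-order window.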
Factors like price impact the length of consideration cycles
The time it takes for consumers to consider a purchase is naturally tied to the type of product being considered, and price is a huge factor. Taking a look at the share of 30-day conversions that occur more than one day after the click by the average order value (AOV) of the advertiser, this share goes up as AOV goes up. Advertisers with AOV over $50 saw 25% of orders occur more than 24 hours after the ad click in Q2 2020, whereas advertisers with AOV less than $50 saw 22% of orders occur more than 24 hours after the ad click.
Put simply, consumers generally take longer to consider pricier products before purchasing than they take to consider cheaper ones. Other factors can also affect how long the average click-to-order cycle is for a particular advertiser.
In addition to latent order value varying by advertiser, there can also be meaningful swings in what latent order value looks like during seasonal shifts in consumer behavior, such as during the winter holiday season and around Prime Day.
Key shopping days speed up conversion process
The chart below depicts the daily share of all conversions attributed within seven days of an ad click that occurred during the first 24 hours. As you can see, one-day order share rose significantly on Black Friday and Cyber Monday as users launched into holiday shopping (and dropped in the days leading into Black Friday).
After these key days, one-day share returned to normal levels before rising in the weeks leading up to Christmas, peaking on December 21 at a level surpassing even what was observed on Cyber Monday. December 21 was the last day many shoppers could feel confident placing an order in time to receive it for the Christmas holiday, and it showed in how quick the click-to-purchase path was for many advertisers.
Of course, Amazon created its own July version of Cyber Monday in the form of Prime Day, and we see a similar trend around one-day conversion share around the summer event as well.
This year’s Prime Day has been postponed, but reports indicate that the new event might take place in October.
As we head into Q4, advertisers should look at how the click-to-order window shifts throughout key times of the year in order to identify periods in which latent order value might meaningfully differ from the average.
As with any platform, advertisers are often interested in recent Amazon Ads performance to understand how profitable specific days are. This is certainly important in determining shifts and situations in which budgets should be rearranged or optimization efforts undertaken, and that’s even more true now given how quickly performance and life are changing for many advertisers as well as the population at large.
However, in order to do so effectively, advertisers must take into consideration the lag that often occurs between ad click and conversion. Even for a platform widely regarded as the final stop for shoppers such as Amazon, more than 20% of 30-day conversions occur after the first 24 hours of the click, and this share can be much higher for advertisers that sell products with longer consideration cycles.
Further, advertisers should look to historic performance around key days like Cyber Monday and Prime Day to understand how these estimates might shift. Depending on product category, other holidays like Valentine’s Day or Mother’s Day might also cause shifts in latent order value.
Not all advertisers necessarily want to value all orders attributed to an ad over a month-long (or even week-long) attribution window equally, and particularly for products with very quick purchase cycles, it might make sense to use a shorter window. That said, many advertisers do find incremental value from orders that occur days or weeks removed from ad clicks, and putting thought into how these sales should be valued will help ensure your Amazon program is being optimized using the most meaningful performance metrics.
The global market for customer data platforms is expected to rise dramatically over the next few years. The CDP Institute pegged industry revenue for 2019 at $1 billion and expects the sector to reach at least $1.3 billion in 2020. Meanwhile, ResearchandMarkets predicts the industry will grow from $2.4 billion in 2020 to $10.3 billion by 2025, expanding at an astounding compound annual growth rate (CAGR) of 34.0% during the forecast period.
This growth is being driven by the proliferation of devices and customer touchpoints, higher expectations for marketers to orchestrate real-time personalized experiences across channels and the need to navigate complex privacy regulations. Let’s explore each of these factors in greater detail.
More devices, fragmented interactions and high expectations
Gartner predicted that the average U.S. adult would own more than six smart devices by 2020, and Cisco forecasts that the number of devices connected to IP networks globally will expand to more than three times the global population by 2023. There will be 3.6 networked devices per capita (29.3 billion overall) by 2023, says Cisco, up from 2.4 networked devices per capita (18.4 billion overall) in 2018.
Customers and potential customers are using all of these devices — several in a day, often — to interact with the companies they do business with, and they expect these brands to recognize them no matter what device they’re using at any given time.
According to a Salesforce State of the Connected Customer survey conducted April 2019, 78% of respondents prefer to use different channels to communicate with brands depending on context, but 6% expect companies’ engagements with them to be tailored based on past interactions.
This challenge isn’t going away anytime soon. Segmenting Salesforce’s customer data by generation reveals that younger cohorts switch devices more than older ones, and they’re also more likely to be adding IoT-type connected devices to their repertoires.
Meanwhile, customer data security and governance have leapt to the forefront of marketer concerns, as the alphabet soup of data regulations — from HIPAA (Health Insurance Portability and Accountability) to HITECH (Health Information Technology for Economic and Clinical Health) to GDPR (General Data Protection Regulation), CCPA (California Consumer Privacy Act) and CASL (Canada Anti-Spam Legislation) — continues to grow.
Enter the Customer Data Platform, a system designed for non-IT use to streamline the flow of customer data throughout the martech stack and create a single view of the customer. High expectations, along with the proliferation of possible customer touchpoints, make cross-device IDs and identity resolution — the ability to consolidate and normalize disparate sets of data collected across multiple touchpoints into an individual profile that represents the customer or prospect — critical for helping marketers, sales and service professionals deliver the ideal total customer experience. CDPs offer this consolidation and normalization and also make the data profiles freely available to other systems.
Additionally, CDP vendors seek to help marketers address the privacy challenge by providing strong data governance protocols that are certified by third-party organizations to ensure compliance with these types of regulations, as well as other data security standards. For example, many CDP vendors are SOC (Service Organization Control), SSAE (Statement on Standards for Attestation Engagements) and/or ISO (International Standards Organization) certified. These audits confirm best practices around internal processes, data management, data privacy and security.
As marketers, we face the overwhelming challenge of demonstrating proof that our tactics are effective. But how can we convince management if we are not convinced of our own data?
Here’s the reality, which I recently learned for myself: if you’re running email marketing, it’s very likely that your performance reports are not telling the full truth, with inflated CTRs (click-through rates) and open rates being the main culprits.
Email security programs – loved by recipients, hated by senders
Barracuda. SpamTitan. Mimecast. Email security bots that serve a single purpose: to protect users from unsafe content. These programs scan inbound emails and attachments for possible threats, including viruses, malware and spam, and they click on links to test for unsafe content.
For email marketers, this creates several challenges:
Inflated CTRs and open rates due to artificial clicks and opens
Disrupted sales-team lead follow-up caused by false signals
Lost confidence in data quality (quantity ≠ quality)
Real or artificial clicks?
In reviewing recent email marketing performance reports, I noticed an unusual pattern: Some leads were clicking on every link in the email — header, main body, footer, even the subscription preferences link — yet they were not unsubscribing. Not only that, but this suspicious click activity was happening almost immediately after the email was deployed. I speculated that these clicks were not “human”, but rather “artificial” clicks generated by email filters.
Hidden pixels are your frenemy
To test my hypothesis, I implemented a hidden 1×1 pixel in the header, main body, and footer sections of the next email. The pixels were linked and tagged with UTM tracking — and visible only to bots.
Sure enough, several email addresses were flagged as clicking on the hidden pixels.
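To make this concrete, here is a minimal sketch of how you might flag those addresses from a click log. The UTM content tags (`hidden-pixel-header` and so on) and the event structure are hypothetical — substitute whatever naming your email platform exports:

```python
# Flag addresses that clicked any hidden 1x1 pixel link.
# The UTM content tags below are assumed/hypothetical examples.
HIDDEN_TAGS = {"hidden-pixel-header", "hidden-pixel-body", "hidden-pixel-footer"}

def flag_bot_clickers(click_events):
    """click_events: list of dicts with 'email' and 'utm_content' keys.

    Returns a sorted list of addresses that clicked a hidden pixel,
    i.e., addresses a human recipient could not have clicked.
    """
    return sorted({e["email"] for e in click_events
                   if e["utm_content"] in HIDDEN_TAGS})

clicks = [
    {"email": "lead@example.com", "utm_content": "cta-button"},
    {"email": "filter@corp.example", "utm_content": "hidden-pixel-header"},
    {"email": "filter@corp.example", "utm_content": "hidden-pixel-footer"},
]
print(flag_bot_clickers(clicks))  # ['filter@corp.example']
```

Since no human can see (let alone click) the hidden pixels, any address this returns is almost certainly a security filter, not a lead.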
All that brings me back to the question of whether or not marketing data can be trusted. It’s critical to “trust, but verify” all data points before jumping to conclusions. Scrutinizing performance reports and flagging unusual activity or patterns helps. Don’t do yourself and your company an injustice by sharing only the results stakeholders want (or think they want) to hear. Troubleshoot artificial activity and decide on a plan of action:
Use common sense and always verify key data points
Within your email programs, identify and exclude bots from future mailings
Share results with management, sales, and other stakeholders
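The second step — identifying and excluding bots — can also lean on the behavioral pattern described above: clicking every link almost immediately after deployment. A rough heuristic sketch, with assumed thresholds (`min_links`, `window`) that you would tune to your own send data:

```python
from datetime import datetime, timedelta

def likely_bots(click_events, send_time, min_links=4, window=timedelta(seconds=30)):
    """Flag recipients who clicked min_links+ distinct links within
    `window` of the send time -- behavior typical of security filters,
    not humans. Thresholds here are illustrative, not prescriptive.
    """
    per_email = {}
    for e in click_events:
        if e["clicked_at"] - send_time <= window:
            per_email.setdefault(e["email"], set()).add(e["link_id"])
    return sorted(email for email, links in per_email.items()
                  if len(links) >= min_links)

send = datetime(2020, 10, 1, 9, 0, 0)
events = [
    {"email": "filter@corp.example", "link_id": link,
     "clicked_at": send + timedelta(seconds=2)}
    for link in ("header", "body", "footer", "prefs")
] + [
    {"email": "lead@example.com", "link_id": "body",
     "clicked_at": send + timedelta(minutes=45)},
]
print(likely_bots(events, send))  # ['filter@corp.example']
```

The flagged addresses can then be added to a suppression list in your email platform so they stop polluting future CTR and open-rate reports.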
A word of caution…
Tread carefully before you start implementing hidden pixels across your email templates. Hiding links might appear to email security programs as an attempt to conceal bad links. You could be flagged as a bad sender, so be sure to run your email through deliverability tools to check that your sender score isn’t affected.
As the saying goes, “There are three kinds of lies: lies, damned lies, and statistics.” Sigh.
With different solutions circulating within the email marketing community, this is likely the “best of the bad options”. It all depends on your scenario and business model.
Here are our picks:
Website Optimization Specialist – In Atlanta, SunTrust is looking for a specialist to be responsible for “developing and executing business strategies, processes and policies to enhance the sales and service experiences intrinsic to SunTrust’s digital spaces.”
A/B Testing & Personalization Analyst – Join Barnes & Noble’s Optimization team in New York to “improve bn.com’s content, design, and usability for customers and to create unique experiences based on customers’ preferences and behaviors.”
Director-Digital Product Analytics & Testing – Join the Enterprise Digital and Analytics team at American Express in New York. They are looking for a leader to “provide value to the online card shopping experiences within the Global Consumer and Commercial businesses through customer data and measurement, insights through analytics techniques and experimentation.”
Marketing Manager, International Conversion – Ancestry is looking for a candidate to join their Conversion Marketing team in San Francisco. This person is “responsible for improving and optimizing the user experience at each step in the conversion funnel with the end goal of maximizing revenue from visitors in each of Ancestry’s key global markets.”
Director of B2B Marketing, Demand Generation – Join Vimeo’s B2B marketing team in New York to “scale qualified lead acquisition, build and continuously optimize digital marketing, account-based marketing (ABM), email automation, social, and event-based marketing channels.”
Digital Marketing Leader – Website Optimization – Join GE Healthcare in Wauwatosa, Wisconsin to “develop a rigorous testing and experimentation framework, and conceive, scope and implement experimentation initiatives to improve the website user experience and drive conversion rate optimization.”
Manager, Marketing Planning, Test & Analysis – Express is looking for an individual to lead the testing and optimization program in Columbus, Ohio, “starting with A/B & multivariate testing taking us into experience optimization and eventually personalization.”
Looking for a job or to fill a position? Give us a shout and we’ll help spread the word in our next careers blog post.
When I speak with our clients, it often strikes me how many of them feel overwhelmed by the very idea of personalization.
Our imagination, often fueled by the marketing teams of various software companies, creates a perfect world where personalization enables every interaction to be completely custom for every individual. In this dreamland, artificial intelligence and machine learning solve all our problems. All you have to do is buy a new piece of software, turn it on, and…BOOM: 1:1 personalization.
As a data scientist, I’ll let you in on a little secret: that software only provides the technological capability for personalization. What’s more, the algorithms found within these tools simply assign a probability to each potential experience and serve the one that maximizes the desired outcome, given the data they have access to. Suffice it to say, they’re not as intelligent as you are led to believe.
If you caught our first post in this series, you already know that we define personalization a bit more broadly, as any differentiated experience that is delivered to a user based on known data about that user. This means personalization exists on a spectrum: it can be one-to-many, one-to-few, or one-to-one.
And while there are many tools that enable you to do personalization from a technical standpoint, they don’t solve for one of the main sources of anxiety around personalization: strategy.
Most personalization campaigns fail because they lack a strategy that defines who, where and how to personalize. So I’ve put together a free downloadable guide to help you do just that. This seven-page guide is packed full of guidelines, templates and best practices to strategize and launch a successful personalization campaign, including:
Major considerations and things to keep in mind when developing your personalization strategy.
More than 30 data-driven questions about your customers to identify campaign opportunities.
A template for organizing and planning your personalization campaigns.
Guidelines for determining whether to deliver your campaigns via rule-based targeting or algorithmic targeting.
Free Download: Plan & Launch Profitable Personalization Campaigns.
It’s January 3, and if you’re like us, you’re already heads down at your desk and neck deep in emails. But we’d be remiss if we didn’t take a minute to reflect on the previous year.
In November of 2018, we quietly celebrated 15 years of being in business. When Brooks Bell was founded, experimentation was in its infancy. But despite all the changes we’ve experienced since then, one thing remains true: it is the opportunity to connect with so many interesting people who are solving big problems for their business that makes our work worthwhile. Thanks for walking with us.
In October, things got a little spooky around the office and it had everything to do with Scott, our Director of Sales, who decided to channel his inner Ellen DeGeneres for the day (much to our colleagues’ horror). Watch the video if you dare.
Making Bacon for our Clients
Back in 2014, we set a Big Hairy Audacious Goal to achieve $1 billion in projected revenue for our clients. By the end of 2017, we’d reached $500 million. And this past December, we hit $1 billion. (cue ::gong::)
But we’re not resting on our laurels. We’ve set some aggressive goals for 2019, with a focus on personalization, and we’re pumped to get to work.
Brooks Bell takes the Bay Area
In September, we officially opened the doors to our San Francisco office. This decision came after years of working with clients on the West Coast and our desire to work even more closely with them. And with the Bay Area’s rich history of innovation, we can’t think of a better place to help more companies push their boundaries through experimentation.