Data & Business Impact with Feras Alhlou

A few months ago I had the opportunity to chat with my friend and work partner Feras Alhlou, Co-Founder and Principal Consultant at E-Nor & Co-Author of Google Analytics Breakthrough. Feras and I have known each other for almost 10 years, and it is always great to hear more about the work that he and his first-class team are doing.

Here are the questions we discussed; check out the answers in the video below. I have also added some of my favorite highlights from the interview after the video.

  1. [01:05] What's the process that you use to make sense out of data?
  2. [02:41] During this process, what do you actually do when you start working with data?
  3. [04:07] When analyzing data, how can we make sure that we are looking at the context to understand what is happening around us?
  4. [07:24] How can Data Studio and better data visualizations help companies make more data-driven decisions?

We believe analytics is a business process. We start with an audit, both from the business side and the technical side - we want to engage the stakeholders to understand how to measure what matters most to the business. Once we have the data in place, we go to the reporting layer - how do we report on this data? Then, we can start analyzing the data and finding actionable insights. Last, we can move to testing and personalization - that's when you really can have an impact on the business. Read more about E-Nor's Optimization Framework.

There's a whole lot of data these days, right? Life used to be simple for marketers: one device, a few channels - now there's data everywhere, mobile, social, web, and of course backend data. I think one of the first things we need to do is to understand the context around that data, focusing on the following:

  • The integrity of the data: is it clean, was it collected properly, is it raw or aggregated? Understand the data collection - how the data was put together.
  • Having a set of metadata, information about the data: if you're looking at Google Analytics metrics, knowing more about the user. For example, if you have a subscription-based model: is it a premium user? Is it a standard user? Having that additional data gives a whole lot of context to the person who's consuming that data.

I would definitely advise having a data road map. Start with what you own: web and mobile analytics data. Then, start augmenting reports with basic social data; maybe you can get a little bit into the qualitative aspect with that. And last but not least, there is a great product recently introduced by Google: Surveys. You can run surveys on your own properties to understand the voice of the customer, but you can also use it to do market research - it used to be expensive and cumbersome, but now you can easily run a Google survey and do a lot of targeting.

And here is Feras and me having fun in the Google Analytics studio!

Daniel Waisberg and Feras Alhlou


Data That Matters: Maternal Mortality Trends

I have always appreciated the work of the Bill and Melinda Gates Foundation, it is really amazing to see people working so hard to make the world a better place. But I was left speechless when I opened their new report: GoalKeepers 2017. It tells the stories behind the data to help "accelerate progress in the fight against poverty by helping to diagnose urgent problems, identify promising solutions, measure and interpret key results, and spread best practices".

First and foremost, the goals themselves are superb - I can't think of more important issues to fight for. But I was also impressed by the information design, it is spotless. They used the right medium for each piece of information: text, images, videos, animations and charts. The report is engaging and, before you realize, you spent an hour going through it. So I was touched both as a person that cares about what is happening around me and as a professional appreciating good work.

Interestingly, a few months ago I was looking for some data to build a sample report, and I chose the maternal mortality dataset from UNICEF's data portal. I built the report and used it, but didn't take the time to publish it - ever heard of procrastination? :-)

In this article I will provide more context into GoalKeepers 2017 using publicly available UNICEF data on maternal mortality. I'll start with some words about the GoalKeepers 2017 report - then, I'll discuss some of the steps I used to create my report and the insights I learned from the data.

Stories behind the data: maternal mortality in Ethiopia

One of the highlights that I found particularly interesting in GoalKeepers 2017 was the maternal mortality case study, focusing on how Ethiopia is fighting this terrible issue. Here is how Bill and Melinda define it.

"If you were trying to invent the most efficient way to devastate communities and put children in danger, you would invent maternal mortality." Bill and Melinda Gates

Most people would agree that mothers are probably the most important pillar for a child (I'm a father, and I think fathers are important too, but as my mom always says: "you will never be a mother!"). So it is devastating to learn that in 2015, UNICEF registered 302,530 maternal deaths due to complications from pregnancy or childbirth - 168.7 deaths per 100,000 live births. And remember that a mother's death does not mean just one child left motherless - a woman may already have several other children when it happens.

However, as GoalKeepers 2017 shows, we've made some great progress, and the trends look good. In their case study, they show how Ethiopia is taking giant steps in its fight against maternal mortality, and the chart they used is simple and powerful: mortality went from 843 to 357 deaths per 100,000 live births between 1990 and 2015 - that's great!

maternal mortality ethiopia

But in order to understand our global status better, it is important to put more context into the mix: what's happening around the world? And how does Ethiopia compare to other places?

Maternal Mortality around the world

To have a better understanding of how both Ethiopia and the world in general are progressing, I took a deeper look at the maternal mortality dataset from UNICEF's statistics website. The data is publicly available, well organized, and it seems trustworthy. I downloaded the xlsx file and formatted it for Data Studio using this spreadsheet; then, I imported it to Data Studio (learn how).
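
For readers who want to reproduce this step, below is a minimal sketch of the kind of reshaping involved before the Data Studio import: the downloaded xlsx is typically wide (one column per year), while Data Studio prefers long, tidy data. The file name and column names are assumptions for illustration, not the actual UNICEF layout.

```python
# Hypothetical reshaping of the UNICEF maternal mortality xlsx for Data Studio import.
# File name and column names are assumptions, not the actual UNICEF file layout.
import pandas as pd

df = pd.read_excel("maternal_mortality.xlsx")  # wide format: one column per year (assumed)
long_df = df.melt(
    id_vars=["Country"],                       # assumed identifier column
    var_name="Year",
    value_name="Maternal deaths per 100,000 live births",
)
long_df.to_csv("maternal_mortality_long.csv", index=False)  # ready to import as a data source
```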

Below you'll find my data visualization embedded; scroll down to read some of my conclusions based on the data.

I know, the horizontal bar chart goes on forever! But I think it gives an interesting perspective.

Disclosure: I do not pretend to be a specialist in global health; my knowledge about the efforts in the area is minimal. The insights below are based on the data only - I'm assuming UNICEF publishes accurate and unbiased data. With that said, I hope it will help people better understand the status and trends of maternal mortality around the world.

Here are my insights on maternal mortality based on UNICEF's data.

  • Amazing progress - but not solved: out of 183 countries in the data, only 13 are worse off in 2015 compared to 1990. The trajectory is mostly good - globally, the maternal mortality rate decreased from 339 to 168, an average decrease of 44%. For context, Ethiopia's rate decreased by 71%, significantly better than the average. However, it is clear from the map that Africa is bleeding, with Sierra Leone losing 1,360 mothers per 100,000 live births - that's very bad.
  • United States and South Africa have alarming trends: both are among the top 10 countries in the 'getting worse' table (sorted by 1990-2015 % change) - South Africa had an absolute 1,500 deaths and the USA 550, and that's a lot of loss. Even though they don't have the highest rates, it is quite alarming to see the negative trends and absolute numbers. For more on the USA trend check this article, which discusses possible reasons and links to more in-depth analyses.
  • Cambodia and Turkey up-and-to-the-right, but still a lot of deaths: both countries have shown great progress, appearing in the top 10 'getting better' table - but they still need a big push, especially Cambodia.

I think those are interesting points to think about as we continue fighting this horrible issue - the more data (and analyses) we have, the more prepared we will be. If you are looking for a place to start, UNICEF has a lot of interesting datasets in their data portal. Let's help make the world a better place!

image 
Online Behavior

Embedding Google Data Studio Visualizations

Last year I wrote about the Marvel vs. DC war on the big screen. It was super fun to merge two of my passions (data visualization and comics) in one piece. It started with my curiosity to understand what all those movies are amounting to, and I think it helped me prove a point: Marvel is kinda winning :-)

One of the things that annoyed me was that I had to link to the interactive visualization, readers couldn't see the amazing charts in my article (!) - so I ended up including static screenshots with some insights explained through text. While some people clicked through to play with the data, I suspect many just read the piece and went away, which is suboptimal - when I publish a story, my goal is to allow readers to interact with it quickly and effectively.

I am extremely excited that now Google Data Studio allows users to embed reports in any online environment, which empowers us to create an improved experience for telling stories with data. This feature will be an essential tool for data journalists and analysts to effectively share insights with their audiences.

A year has passed since I did the Marvel vs. DC visualization, so I thought it was time to update it (5 new movies!) and share some insights on how to use Data Studio report embedding to create effective data stories.

Enable embedding

The first step to embed reports is a pretty important one: enable embedding! This is quite simple to do:

  1. Open the report and click on File (top left)
  2. Click on Embed report
  3. Check Enable embedding and choose the width and height of your iframe (screenshot below)

Google data studio enable embedding

Please note that the embedding will work only for people who have access to the report. If the report is supposed to be publicly available, make sure that you make it viewable to everyone. If the report should be seen only by people in a group, then make sure to update your sharing settings accordingly. Read more about sharing reports in this help center article.

But how do you make sure you are choosing the right sizes? Read on...

Choosing the right visualization sizes

Needless to say, people access websites on all possible device categories and platforms, and we have little control over that. But we do have control over how we display information on different screens. The first obvious recommendation (and hopefully all the Interweb agrees with me) - make your website responsive! I am assuming you have already done that.

On Online Behavior, the content area is 640px wide, so the choice is pretty obvious when Data Studio asks me for the width of my iframe - make sure you know the width of the content area where the iframe will be embedded. Also, since you want the visualizations to resize as the page responds to the screen size, set your Display mode to Fit to width (option available on Page settings).

Without further ado, here is the full Marvel vs. DC visualization v2!

I personally think the full dataviz looks pretty good when reading on a desktop, I kept it clean and short. However, as your screen size decreases, even though the report iframe will resize the image, it will eventually get too small to read. In addition, I often like to develop my stories intertwining charts and text to make it more digestible. So here is an alternative to embedding the whole thing...

Breaking down your dataviz into digestible insights

As I mentioned, sometimes you want to show one chart at a time. In this case, you might want to create separate versions of your visualization. Below I broke down the full dataviz into small chunks. Note that you will find three different pages in the iframe below, one per chart (see the navigation at the bottom of the report).

Right now, you can't embed only one page, which means that if you want to show a specific chart that lives on page 2 of a report you would need to create a new report, but that's a piece of cake :-)

I am looking forward to seeing all the great visualizations that will be created and embedded throughout the web - why not partner with our data to create insightful stories? Let's make our blogs and newspapers more interesting to read :-) Happy embedding!

BONUS: Data Studio is the referee in the Marvel vs. DC fight!

As I was working on my dataviz, I asked my 10yo son (also a comic enthusiast) to create something that I could use to represent it. He created the collage / drawing below, I think it is an amazing visual description of my work :-)

Data Studio referee


Statistical Design in Online A/B Testing

A/B testing is the field of digital marketing with the highest potential to apply scientific principles, as each A/B experiment is a randomized controlled trial, very similar to ones done in physics, medicine, biology, genetics, etc. However, common advice and part of the practice in A/B testing are lagging by about half a century when compared to modern statistical approaches to experimentation.

There are major issues with the common statistical approaches discussed in most A/B testing literature and applied daily by many practitioners. The three major ones are:

  1. Misuse of statistical significance tests
  2. Lack of consideration for statistical power
  3. Significant inefficiency of statistical methods

In this article I discuss each of these three issues in some detail, and propose a solution inspired by clinical randomized controlled trials, which I call the AGILE statistical approach to A/B testing.

1. Misuse of Statistical Significance Tests

In most A/B testing content, when statistical tests are mentioned they inevitably discuss statistical significance in some fashion. However, in many of them a major constraint of classical statistical significance tests, e.g. the Student’s T-test, is simply not mentioned. That constraint is the fact that you must fix the number of users you will need to observe in advance.

Before going deeper into the issue, let’s briefly discuss what a statistical significance test actually is. In most A/B tests it amounts to an estimation of the probability of observing a result equal to or more extreme than the one we observed, due to the natural variance in the data that would happen even if there is no true positive lift.

Below is an illustration of the natural variance, where 10,000 random samples are generated from a Bernoulli distribution with a true conversion rate at 0.50%.

Natural Variance
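
To make the idea of natural variance concrete, here is a minimal simulation sketch along the lines of the illustration above. The per-experiment sample size is my own assumption, since only the number of samples and the true conversion rate are stated.

```python
# Simulating natural variance: many experiments with the same true 0.50% conversion rate
# still produce a wide spread of observed rates. Sample size per experiment is assumed.
import numpy as np

rng = np.random.default_rng(42)
true_rate = 0.005              # 0.50% true conversion rate
users_per_experiment = 20_000  # assumption: not specified in the article
n_experiments = 10_000

conversions = rng.binomial(n=users_per_experiment, p=true_rate, size=n_experiments)
observed_rates = conversions / users_per_experiment

print(f"True rate: {true_rate:.2%}")
print(f"Observed rates range from {observed_rates.min():.2%} to {observed_rates.max():.2%}")
print(f"Experiments at or above 0.62%: {(observed_rates >= 0.0062).mean():.1%}")
```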

In an A/B test we randomly split users in two or more arms of the experiment, thus eliminating confounding variables, which allows us to establish a causal relationship between observed effect and the changes we introduced in the tested variants. If after observing a number of users we register a conversion rate of 0.62% for the tested variant versus a 0.50% for the control, that means that we either observed a rare (5% probability) event, or there is in fact some positive difference (lift) between the variant and control.

In general, the less likely we are to observe a particular result, the more likely it is that what we are observing is due to a genuine effect, but applying this logic requires knowledge that is external to the statistical design so I won’t go into details about that.

The above statistical model comes with some assumptions, one of which is that you observe the data and act on it at a single point in time. For statistical significance to work as expected we must adhere to a strict application of the method where you declare you will test, say, 20,000 users per arm, or 40,000 in total, and then do a single evaluation of statistical significance. If you do it this way, there are no issues. Approaches like “wait till you have 100 conversions per arm” or “wait till you observe XX% confidence” are not statistically rigorous and will probably get you in trouble.
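
As an illustration of that strict, single-look procedure, here is a minimal sketch using a two-proportion z-test: the sample size is declared up front and significance is evaluated exactly once. The specific test and numbers are my choices for illustration, not a prescription from the article.

```python
# Fixed-sample test: 20,000 users per arm are declared in advance and the data is
# evaluated for significance exactly once, after all users have been observed.
from statsmodels.stats.proportion import proportions_ztest

users_per_arm = 20_000
conversions = [124, 100]                      # variant, control (roughly 0.62% vs 0.50%)
observations = [users_per_arm, users_per_arm]

z_stat, p_value = proportions_ztest(conversions, observations, alternative="larger")
print(f"z = {z_stat:.3f}, one-sided p-value = {p_value:.4f}")
# Decide at this single pre-planned look; evaluating earlier or repeatedly invalidates the p-value.
```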

However, in practice, tests can take several weeks to complete, and multiple people look at the results weekly, if not daily. Naturally, when results look overly positive or overly negative they want to take quick action. If the tested variant is doing poorly, there is pressure to stop the test early to prevent losses and to redirect resources to more prospective variants. If the tested variant is doing great early on, there is pressure to suspend the test, call the winner and implement the change so the perceived lift can be converted to revenue quicker. I believe there is no A/B testing practitioner who will deny these realities.

These pressures lead to what is called data peeking or data-driven optional stopping. The classical significance test offers no error guarantees if it is misused in such a manner, resulting in illusory findings – both in terms of direction of result (false positives) and in the magnitude of the achieved lift. The reason is that peeking adds an additional dimension to the test sample space. Instead of estimating the probability of a single false detection of a winner at a single point in time, the test would actually need to estimate the probability of a single false detection at multiple points in time.

If the conversion rates were constant that would not be an issue. But since they vary without any interventions, the cumulative data varies as well, so adjustments to the classical test are required in order to calculate the error probability when multiple analyses are performed. Without those adjustments, the actual error rate will be inflated significantly compared to the nominal or reported error rate. To illustrate: peeking only 2 times results in more than twice the actual error vs the reported error. Peeking 5 times results in a 3.2 times larger actual error vs the nominal one. Peeking 10 times results in a 5 times larger actual error probability versus the nominal error probability. This has been known to statistical practitioners since as early as 1969 and has been verified time and again.
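
The inflation is easy to demonstrate with a simulation. The sketch below (my own illustration, not taken from the cited research) runs A/A tests in which both arms share the same conversion rate, peeks at several interim points, and counts how often a "winner" is declared at the nominal 5% level.

```python
# Peeking inflates the false positive rate: simulate A/A tests (no true difference),
# check significance at several interim looks, and stop at the first p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p_true = 0.005                                      # identical rate in both arms
looks = [8_000, 16_000, 24_000, 32_000, 40_000]     # users per arm at each peek (assumed)
n_sims = 2_000
false_positives = 0

for _ in range(n_sims):
    a = rng.random(looks[-1]) < p_true
    b = rng.random(looks[-1]) < p_true
    for n in looks:
        ca, cb = a[:n].sum(), b[:n].sum()
        p_pool = (ca + cb) / (2 * n)
        se = np.sqrt(2 * p_pool * (1 - p_pool) / n)
        if se > 0 and stats.norm.sf(abs(ca / n - cb / n) / se) * 2 < 0.05:
            false_positives += 1                    # a false "winner" was declared
            break

print(f"Actual false positive rate with {len(looks)} peeks: "
      f"{false_positives / n_sims:.1%} (nominal 5%)")
```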

If one fails to fix the sample size in advance or if one is performing multiple statistical significance tests as the data accrues, then we have a case of GIGO, or Garbage In, Garbage Out.

2. Lack of Consideration for Statistical Power

In a review of 7 influential books on A/B testing published between 2008 and 2014 we found only 1 book mentioning statistical power in a proper context, but even there the coverage was superficial. The remaining 6 books didn’t even mention the notion. From my observations, the situation is similar when it comes to most articles and blog posts on the topic.

But what is statistical power and why is it important for A/B experiments? Statistical power is defined as the probability to detect a true lift equal to or larger than a given minimum, with a specified statistical significance threshold. Hence the more powerful a test, the larger the probability that it will detect a true lift. I often use “test sensitivity” and “chance to detect effect” as synonyms, as I believe these terms are more accessible for non-statisticians while reflecting the true meaning of statistical power.

Running a test with inadequately low power means you won’t be giving your variant a real chance at proving itself, if it is in fact better. Thus, running an under-powered test means that you spend days, weeks and sometimes months planning and implementing a test, but then failing to have an adequate appraisal of its true potential, in effect wasting all the invested resources.

What’s worse, a false negative can be erroneously interpreted as a true negative, meaning you will think that a certain intervention doesn’t work while in fact it does, effectively barring further tests in a direction that would have yielded gains in conversion rate.

Power and Sample Size

Power and sample size are intimately tied: the larger the sample size, the more powerful (or sensitive) the test is, in general. Let’s say you want to run a proper statistical significance test, acting on the results only once the test is completed. To determine the sample size, you need to specify four things: historical baseline conversion rate (say 1%), statistical significance threshold, say 95%, power, say 90%, and the minimum effect size of interest.

Last time I checked, many of the free statistical calculators out there won't even allow you to set the power and in fact silently operate at 50% power, or a coin toss, which is abysmally low for most applications. If you use a proper sample size calculator for the first time, you will quickly discover that the required sample sizes are more prohibitive than you previously thought, and hence you need to compromise either on the level of certainty, on the minimum effect size of interest, or on the power of the test. You will find proper calculators in R packages, GPower and similar tools.
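
For illustration, here is a minimal sample size sketch using the example inputs above (1% baseline, 95% significance, 90% power); the 20% relative minimum effect of interest and the one-sided alternative are my own assumptions. Keep in mind that different calculators use slightly different approximations, so results will vary somewhat.

```python
# Fixed-sample size calculation from the four inputs: baseline rate, significance
# threshold, power, and minimum effect of interest (the last one is assumed here).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.01                    # historical baseline conversion rate (1%)
mde_relative = 0.20                # minimum detectable effect: 20% relative lift (assumption)
target = baseline * (1 + mde_relative)

effect = proportion_effectsize(target, baseline)   # Cohen's h for two proportions
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.90, alternative="larger"
)
print(f"Required users per arm: {n_per_arm:,.0f}")
```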

Making decisions about the 3 parameters you control – certainty, power and minimum effect size of interest – is not always easy. What makes it even harder is that you remain bound to that one look at the end of the test, so the choice of parameters is crucial to the inferences you will be able to make at the end. What if you chose too high a minimum effect, resulting in a quick test that was, however, unlikely to pick up on small improvements? Or too low an effect size, resulting in a test that dragged on for a long time, when the actual effect was much larger and could have been detected much quicker? The correct choice of those parameters becomes crucial to the efficiency of the test.

3. Inefficiency of Classical Statistical Tests in A/B Testing Scenarios

Classical statistics inefficiency

Classical tests are good in some areas of science like physics and agriculture, but are replaced with a newer generation of testing methods in areas like medical science and bio-statistics. The reason is two-fold. On one hand, since the hypotheses in those areas are generally less well defined, the parameters are not so easily set and misconfigurations can easily lead to over or under-powered experiments. On the other hand – ethical and financial incentives push for interim monitoring of data and for early stopping of trials when results are significantly better or significantly worse than expected.

Sounds a lot like what we deal with in A/B testing, right? Imagine planning a test for a 95% confidence threshold and 90% power to detect a 10% relative lift from a baseline of 2%. That would require 88,000 users per test variant. If, however, the actual lift is 15%, you could have run the test with only 40,000 users per variant, or with just 45% of the initially planned users. In this case, if you were monitoring the results you'd want to stop early for efficacy. However, the classical statistical test is compromised if you do that.
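
As a sanity check on that ballpark figure, the standard pooled two-proportion sample size formula with a one-sided 5% significance level and 90% power lands close to 88,000 users per arm; the one-sided assumption and the choice of formula are mine, so treat this as a rough reproduction rather than the exact calculation behind the article's number.

```python
# Rough reproduction of the ~88,000 users per arm figure, assuming a one-sided test
# and the standard pooled two-proportion sample size formula.
from scipy.stats import norm

p1 = 0.02                          # 2% baseline conversion rate
p2 = p1 * 1.10                     # 10% relative lift
alpha, power = 0.05, 0.90

z_alpha, z_beta = norm.ppf(1 - alpha), norm.ppf(power)
p_bar = (p1 + p2) / 2
n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
      + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2) / (p2 - p1) ** 2
print(f"Users per arm: {n:,.0f}")  # approximately 88,000
```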

On the other hand, if the true lift is in fact -10% - that is, whatever we did in the tested variant actually lowers the conversion rate - a person looking at the results would want to stop the test way before reaching the 88,000 users it was planned for, in order to cut the losses and maybe start working on the next test iteration.

What if the test looked like it would convert at -20% initially, prompting the end of the test, but that was just a hiccup early on and the tested variant was actually going to deliver a 10% lift long-term?

The AGILE Statistical Method for A/B Testing

AGILE Statistical Method for A/B Testing

Questions and issues like these prompted me to seek better statistical practices and led me to the medical testing field where I identified a subset of approaches that seem very relevant for A/B testing. That combination of statistical practices is what I call the AGILE statistical approach to A/B testing.

I’ve written an extensive white-paper on it called “Efficient A/B Testing in Conversion Rate Optimization: The AGILE Statistical Method”. In it I outline current issues in conversion rate optimization, describe the statistical foundations for the AGILE method and describe the design and execution of a test under AGILE as an easy step-by-step process. Finally, the whole framework is validated through simulations.

The AGILE statistical method addresses misuses of statistical significance testing by providing a way to perform interim analyses of the data while keeping false positive errors controlled. It does so through the application of so-called error-spending functions, which results in a lot of flexibility to examine data and make decisions without having to wait for the pre-determined end of the test.
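
To give a flavor of how an error-spending function schedules the error budget across interim looks, here is a minimal sketch using a Lan-DeMets O'Brien-Fleming-type spending function; the specific function, the one-sided 5% budget and the look schedule are my assumptions, and deriving the actual decision boundaries from the spent alpha requires further computation beyond this snippet.

```python
# Error-spending illustration: how much of the overall alpha budget may be "spent"
# by each planned interim look, using an O'Brien-Fleming-type spending function.
from scipy.stats import norm

alpha = 0.05                           # overall error budget (assumed one-sided)
looks = [0.25, 0.50, 0.75, 1.00]       # information fractions at each planned look (assumed)

def obrien_fleming_spent(t, alpha):
    """Cumulative alpha allowed to be spent by information fraction t."""
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / t ** 0.5))

spent = 0.0
for t in looks:
    cumulative = obrien_fleming_spent(t, alpha)
    print(f"Look at {t:.0%} of the data: cumulative alpha {cumulative:.4f}, "
          f"spent at this look {cumulative - spent:.4f}")
    spent = cumulative
```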

Statistical power is fundamental to the design of an AGILE A/B test, so there is no way around it: it must be taken into proper consideration.

AGILE also offers very significant efficiency gains, ranging from an average of 20% to 80%, depending on the magnitude of the true lift when compared to the minimum effect of interest for which the test is planned. This speed improvement is an effect of the ability to perform interim analyses. It comes at a cost, since some tests might end up requiring more users than the maximum that would be required in a classical fixed-sample test. Simulation results described in my white paper show that such cases are rare. The significant added flexibility in performing analyses on accruing data and the average efficiency gains are well worth it.

Another significant improvement is the addition of a futility stopping rule, as it allows one to fail fast while having a statistical guarantee for false negatives. A futility stopping rule means you can abandon tests that have little chance of being winners without the need to wait for the end of the study. It also means that claims about the lack of efficacy of a given treatment can be made to a level of certainty permitted by the test parameters.

Ultimately, I believe that with this approach the statistical methods can finally be aligned with the A/B testing practice and reality. Adopting it should contribute to a significant decrease in illusory results for those who were misusing statistical tests for one reason or another. The rest of you will appreciate the significant efficiency gains and the flexibility you can now enjoy without sacrifices in terms of error control.


Revamping Your App Analytics Workflows

Every mobile app professional today uses mobile app analytics to track their app. Yet there are some key elements in their analytics workflows that are naturally flawed. The solution is out there, and you might have missed it.

The flaw, and a fairly big one at that, is the fact that app analytics pros sometimes focus solely on quantitative analytics to optimize their apps. Don't take this the wrong way – quantitative analytics is a very important part of app optimization. It can tell you if people are leaving your app too soon, if they're not completing the signup process, how often users launch your app, and things like that. However, it won't give you the answer as to why people are doing it, or why certain unwanted things are happening in your app. And that's the general flaw.

The answer lies in expanding your arsenal – adding qualitative analytics to your workflow. Together with quantitative analytics, these tools can help you form a complete picture of your app and its users, identify the main pain points and sources of user experience friction, and help you optimize your app and deliver the ultimate product.

So today, you are going to learn how to totally revamp your analytics workflow using qualitative analytics, and why you should do it in the first place. You'll read about the fundamentals of qualitative analytics, and how it improves one's analysis accuracy, troubleshooting and overall workflows. And finally, you'll find two main ways to use qualitative analytics which can help you turn your app(s) into a mobile powerhouse.

Exploring the qualitative

Qualitative analytics can be split into two main features: heatmaps and user session recordings. Let's dig a little deeper to see what they do.

Touch heatmaps

Touch heatmaps

This tool gathers all of the gestures a user performs on every screen of the app, like tapping, double-tapping, or swiping. It then aggregates these interactions to create a visual touch heatmap. This allows app pros to quickly and easily see where the majority of users are actually interacting with the app, as well as which parts of an app are being left out.

Another important advantage of touch heatmaps is the ability to see where users are trying to interact, without the app responding. These are called unresponsive gestures, and they are extremely important because they're very annoying and could severely hurt the user experience.

Unresponsive gestures can be an indication of a bug or a flaw in the design of your user interface. Also, it could show you how your users think they should move through the app. As you might imagine, being bug-free and intuitive are two very important parts of a successful app, which is why tackling unresponsive gestures can make a huge difference in your app analytics workflow.

User session recordings

User session recordings

User session recordings are a fundamental feature of qualitative app analytics. They allow app pros to see just what their users are doing as they progress through the app. That means every interaction, every sequence of events, on every screen in the app, gets recorded. This gives app pros an unbiased, unaltered view of the user experience.

With such a tool, you'll be able to better understand why users sometimes abandon an app too soon, why they decide to use it once and never again, or even why the app crashes on a particular platform or device.

Through video recordings, it becomes much easier to get to the very core of any problem your app might be experiencing. A single recording can shine a light on a problem many users are struggling with. Obviously, the tool doesn't just mindlessly record everything – app pros can choose different screens, different demographics, mobile devices or their operating systems to record from. It is also important for this tool to work quietly in the background and not put a strain on the app's performance.

Standard workflows - totally revamped

App Analytics Workflows

Qualitative analytics is too big of a field to be covered in a single article. Those looking to learn more might as well take a free course via this link. For all others, it's time to discuss two main workflows where these tools can be used - 'Data-fueled Optimization' and 'Proactive Troubleshooting'.

Data-fueled optimization

Both qualitative analytics and quantitative analytics tools are 'attacking' the same problem from different angles. While both are tasked with helping app pros optimize their mobile products, they have different, even opposite approaches to the solution. That makes them an insanely powerful combo, when used together.

Employing inherently opposite systems to tackle the same problem at the same time helps app pros form a complete picture of their app and how it behaves 'in the wild'. While quantitative analytics can be used as an alarm system, alerting app pros to a condition or a problem, qualitative analytics can be used to analyze the problem more thoroughly.

For example, using quantitative analytics tools you are alerted to the fact that a third of your visitors abandon their shopping cart just before making a purchase. You identify it as a problem, but cannot answer the question as to why this is happening.

With tools like user session recordings, you can streamline your optimization workflow and learn exactly where the problem lies. You could try to fix a problem without insights from qualitative data, but you'll essentially be "blindly taking a stab".

By watching a few user session recordings, you realize that the required registration process prior to making a purchase is simply too long. Users come halfway through it and just quit. By shortening the registration process and making checkout faster, you can lower the abandonment rate. Alert, investigate, resolve. This flow can easily become your "lather, rinse, repeat."

Proactive Troubleshooting

Can you truly be proactive in your troubleshooting? Especially when using analytics? Well, if you rely solely on quantitative analytics, probably not. After all, you need a certain number of users to actually be using the app for some time before you can get any numbers out, like app abandonment rates or crash rates. Only then will you be able to do anything, and at that point – you're only reacting to a problem already present. With qualitative analytics, that's not the case.

By watching real user session recordings and keeping an eye on touch heatmaps, you can spot issues with your app's usability or user experience long before a bigger issue arises, therefore proactively troubleshooting any problems.

For example, by watching user session recordings you notice that people are trying to log into Twitter through your app and post a tweet. However, as soon as they try to log in, the app crashes. Some users decide to quit the app altogether. Spotting such an issue helps you fix your app before it witnesses a bigger fallout in new user retention.

Not being proactive about looking for bugs and crashes doesn't mean they won't happen – it means they might go longer unattended. By the time you spot them through quantitative analytics, they will have already hurt your user experience and probably pushed a few users your competitor's way.

Wrap-up

They say new ideas are nothing more than old ideas with a fresh twist, and if that's true, then qualitative analytics is the 'fresh twist' of mobile app analytics. Combining quantitative and qualitative analytics is a simple process that has incredible potency in terms of your workflows and app optimization. Plus, when you understand the reasons behind the numbers on your app, you are able to make crucial decisions with more confidence.


150 Years of Marriages and Divorces in the UK

Have you ever wondered how divorce and marriage rates have trended over the last 150 years? Or what reasons husbands and wives give when getting a divorce? Fortunately these, and other questions, can be answered with data. The UK Office for National Statistics makes available two extremely interesting and rich datasets on marriages and divorces, providing data for the last 150 years.

Following the discovery of these datasets, I decided to uncover trends and patterns in the numbers, working with my colleague Lizzie Silvey. Two important questions were in our minds when exploring the data:

  1. Who wants a divorce and why?
  2. How do wars and the law impact marriage and divorce rates in the UK?

We discuss our findings in this article, but you can also drill down into the data using this interactive visualization that we created using Google Data Studio.

Divorce petitioners and their reasons

The ratio of petitioners has been stable since around 1974 (70% women and 30% men), the time at which both genders started having the same rights and divorce could be attained more easily.

In the charts below we see the trends for 'Adultery' and 'Unreasonable behaviour', the two most common reasons provided (out of five possible) - each line shows the number of divorces granted to the husband or wife for a specific reason.

Divorce reasons UK

In order to use Adultery grounds the petitioner must prove that the partner had sexual intercourse with someone else, which might not be simple. We can see in the chart that Adultery follows the exact same pattern for husbands and wives, but analyzing the statistics further we see that, on average, 40% of the adultery divorces are granted to husbands - since only 30% of total divorces are petitioned by husbands, it seems adultery is a particularly strong reason for men to file for a divorce.

The second chart, showing 'Unreasonable behaviour', is more enigmatic. While husbands were granted divorces at an increasing pace for behavioural reasons, and while the lines seem to be converging, there is a strange hump in the wives' line. Why were wives granted a massive number of divorces based on unreasonable behaviour up to 1992? Could that be related to a "backlog" of cases of domestic violence (classified as a behavioural reason) that came to light after women could divorce based on those grounds more easily? Unfortunately we could not find data showing possible reasons for that.

The impact of laws & wars on marriage and divorces

When looking at the marriage and divorce trends since 1862, there were a few clear turning points.

UK Marriage Divorce rates

The wars seemed to affect marriages quite significantly. Around the beginning of World Wars I and II we see spikes in marriages, maybe as a result of young men wanting to vow their love before going to fight. Then, during the wars, marriages plunged as soldiers went away, and rose again when they came back home.

As for divorces influenced by the wars, we can only look at World War II, as women had a limited ability to divorce after World War I. It seems the Matrimonial Causes Act 1937, which made other grounds legal (e.g. drunkenness and insanity), coupled with premature weddings (discussed above) and possibly estrangement due to separation, led to a spike in divorces starting in 1946 - who would have the heart to divorce in wartime?

But what seems to be the strongest influence in divorces in the history of the UK is the Divorce Reform Act 1969 (link to PDF), which came into effect in 1971. This act states that divorce can be granted on the grounds that the marriage has irretrievably broken down, and it is not essential for either partner to prove an offense. While that explains the strong increase in divorce, we could not find a strong reason for the decline in marriages at the same time - we invite possible explanations in the comments section.

Closing Thoughts

While we couldn't provide answers as to why the trends are going in a certain direction, or predict upcoming changes, we believe that the data can shed new light on British society and family relations. Hopefully with new releases of data in the future we will also be able to dive deeper and answer more existential questions.

If you are interested in exploring the data further, check the interactive visualization created with Google Data Studio - you will find more context and charts showing trends and patterns in marriage and divorce in the UK.


Partnering with data to create insightful stories

[Cross-posted from The Next Web]

Whether you are a marketer trying to persuade people, a technologist building a startup, or an executive making business decisions, data is your partner. You can use it to make better decisions and create insightful data stories inside and outside your company.

The first step is to accept your data relationship: you are partners forever. Once you understand that, there is an important consideration that will define how to tell your data stories: the context of where they live, which also defines the audience that will interact with them. In this post I will go through some important lessons I learned when visualizing and communicating data in and outside Google.

Data is your partner, live with it!

Data is no longer "next year's big thing" - we have gone through that many times over, and almost everyone accepts data as a valuable team member. But not everyone can understand and make use of it optimally, which means lots of decisions are still made based on intuition. If you don't believe me, check PwC's Global Data and Analytics Survey 2016; it shows some interesting numbers on how often managers use data during the decision-making process. Data education is a crooked road and we have a long journey ahead of us.

One of the reasons for that is similar to the well-known phenomenon called mathematical anxiety, where people are afraid of maths as a result of past difficulties and traumas. Every one of us has interacted with data analyses (at work, in newspapers or in academic research) that were created by unskilful communicators, people who might be amazing statisticians but lack the ability to convey the stories behind the numbers. That creates anxiety and could prevent professionals from even trying to understand data.

I believe the reason the data community is not growing like weeds is that professionals are not confident enough with numbers and charts. I have written about how to overcome the fear of analytics (and help others); here is a quick summary.

  1. Never mock people for not understanding a chart
  2. Take baby-steps towards numeracy
  3. Make analytics more fun

When you create a visualization you may affect other people both positively and negatively. If you create a complex and unintuitive visualization, you might be creating a phobia in other people, and they will forever hate numbers and stats. However, if you create a powerful and beautiful visualization, you might be persuading another mind to join the data visualization tribe.

Below are some ideas that might help you craft better data stories, both for businesses and in general.

Stories tailored to businesses, the world, and beyond...

There are many ways to communicate data, but choosing the right format will depend on where the data will be published or presented, the context. Is it a daily performance report or a quarterly result presentation? Or a behavioral analysis using web data? Or an interactive visualization showing global trends?

I'd like to break down data stories into two main branches: business reporting or analysis, and visualizing the world. These groups can show very different characteristics, so let's look into each separately.

Business reporting or analysis

I recently had the opportunity to interview Avinash Kaushik, Digital Marketing Evangelist at Google. In our conversation we discussed techniques to create great data stories, focusing on businesses. Avinash talked about his business framework See, Think, Do, Care and the role of data visualization during the decision making process.

We also discussed data visualization (see minute 11:08), and Avinash explains how not to make silly mistakes when using data in a business context. He differentiates between three main types of visualizations:

  1. Elaborate stories presented with the intent to change people's views on a complex subject (what I call visualizing the world).
  2. Strategic analysis of business results presented to executives.
  3. Day-to-day reporting used to drive most small business decisions.

Avinash differentiates between analysis "packed together" with a storyteller, which allows for more complex visualizations, and day-to-day reporting, which is supposed to stand on its own and help people make decisions by themselves.

Considering the data delivery circumstances is a great start when designing your visualizations as they will inform the presentation style and level of complexity that can be used. While every visualization should strive for simplicity, a daily report (and business visualizations in general) must be extremely clean and self-explanatory, as the data storyteller won't be there to help the decision maker.

Below is a quote by Avinash summarizing his views on how to succeed with data.

"On a business context, a data visualization has to do one job really well, and it has to answer the question ‘so what?' If your data doesn't answer the 'so what' question, and if there isn't a punchy insight that drives action, all you have is a customized data puke, it looks really nice but it serves no purpose. If you want to drive change, you have to get to the simplest possible way to present the data, and once you get to it ask the so what question. After you answer it, ask if it quantifies the opportunity, if it does you are going to win."

Visualizing the world

Luckily for our society, visualizations are increasingly used in a broader context, where the goal is not to understand the business or track performance, but to educate the public and change people's minds. There are some great examples of visualizations that make a difference, but probably the most famous is Hans Rosling's motion charts, in which he debunks several myths about world development.

I've written about data stories in the past, discussing why they are important and providing some ideas on how to use data visualization to tell stories. Basically, here are two really important things you need in a good data story:

  • It stands on its own - if taken out of context, the reader should still be able to understand what a chart is saying because the visualization tells the story.
  • It is easy to understand - but while too much interaction can distract, the visualization should incorporate some layered data so the curious can explore.

Recently I worked on a data story with my colleague Lizzie Silvey, where we analyzed stats from the UK Office for National Statistics. We looked into Divorce and Marriage trends starting from 1862, and came up with an interactive visualization. Below is a screenshot with some of the insights on how changes in the law impacted marriage and divorce rates in the UK. Check the visualization to play with the data.

UK Marriage Divorce rates

Whether you are working on a monthly report or a world-changing visualization, if you take the time to uncover and communicate the stories behind the data, you will be contributing to better decisions in your company and in society in general.


Visualization Techniques to Communicate Data

So here's the deal: you've spent a ton of time with your data and you know it inside out. You've wrangled, sliced and diced it and are now the expert with this data for this problem. You've uncovered new, actionable insights that will lead to fantastic opportunities or improve your bottom line. Great! Time to show your colleagues or your boss or your clients these findings.

You open your data tool of choice, quickly create some charts and make it all look pretty with a flashy color scheme or fancy logos. More often than not, we fly through this final stage and don't give the data visualization step the due care it needs. This is insane!

Think about it. Your charts and dashboards are most likely the only piece of information your boss or client will interact with. The only information! And yet, here we are, creating default charts and missing the opportunity to really convey our message.

Effective charts are a compelling way to show your data. The human brain is simply better at retaining and recalling information that has been presented visually.

Sales chart year-over-year comparison

In this article I will discuss several techniques that will help you create more effective charts to communicate the underlying data. There's no big secret here. However, by applying deliberate thought and a handful of best practices, and by allocating sufficient time in projects for the data visualization step, you can make a big difference to the impact of your charts.

Plan your approach

Before firing up your favorite data visualization software, it pays to spend some time thinking about your output and your goals. Start by answering a few simple questions:

  • Who is the intended audience?
  • What medium will you use to show your charts? (e.g. slides / dashboard / report etc.)
  • What is the goal of this project?

For example, consider the audience who will view your chart. How long will they have to study it? How familiar are they with the data? Are they technically inclined? Do they want detailed charts, or quick summaries?

You want to optimize your message to resonate with your audience, so the more you know about them, the more likely you'll be able to achieve that.

Likewise, how you deliver your message will affect your decisions. Is it a chart in a slide deck? In an informal email? A formal report? An interactive dashboard?

Reports and dashboards are typically pored over for longer periods of time, so charts and findings can be more detailed, whereas presentations or client pitches are short and sweet, where the audience will only have a moment to understand and absorb the information.

Lastly, think about what your end goal is. What do you want your audience to do with the information you show them? For example, if you want your manager to make a cost-benefit decision for a new hire or expensive research tool, make sure your solution answers the question and facilitates making that decision.

Deliberately focus the viewer's attention

Remember, the point of your visualizations is to communicate information, and you can ensure they do that more effectively by giving prominence to the key message within your chart.

You can do this by using attributes, for example color, to highlight specific elements of your charts and focus your audience's attention there. These are known as pre-attentive attributes, and they dramatically help speed up the absorption of information.

Consider this chart showing the open rates for four newsletters that you manage. There's an important story in there, but it's difficult to see with the default colors:

Newsletter open rates chart

However, by carefully using colors, we can bring that story to the fore:

Newsletter open rates with color
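
As a minimal sketch of this idea (with made-up numbers, not the actual newsletter data), the snippet below mutes the context series in gray and highlights the one series the audience should notice.

```python
# Using color as a pre-attentive attribute: gray for context, one strong color
# for the series that carries the story. Numbers are made up for illustration.
import matplotlib.pyplot as plt

newsletters = ["Newsletter A", "Newsletter B", "Newsletter C", "Newsletter D"]
open_rates = [22, 31, 19, 24]
colors = ["lightgray", "#1a73e8", "lightgray", "lightgray"]   # highlight Newsletter B only

plt.bar(newsletters, open_rates, color=colors)
plt.title("Newsletter B's open rate pulled ahead of the others")
plt.ylabel("Open rate (%)")
plt.show()
```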

Add context to aid understanding

Consider the two charts above, showing email newsletter open rates. The second chart also has a heading that adds context to the story. The words complement the chart and reinforce the message.

Much like writing titles for your blog posts or newsletters, think about the title of your chart in the same way. It should tell the viewer what to expect in your chart and summarize the message.

Similarly, your data may have unexpected spikes or dips, so you might want to use annotations directly on the data points or as footnotes, to make sure the viewers have all the context they need.

Reduce clutter in charts

Renowned data visualization pioneer Edward Tufte coined the term data-ink ratio to convey the ratio of ink needed to tell the core message in your display, divided by the total ink in the display. The idea is to maximize this ratio, in other words, reduce the amount of non-essential ink.

Let's see that in practice. Compare the following two charts showing Amazon's revenue between 2007 and 2016:

Amazon revenue cluttered

After decluttering, the annual revenue figures jump out at the viewer and the information is much quicker to absorb:

Amazon revenue chart after decluttering

Avoid using overly complex charts for the sake of it

There are a lot of complex chart types out there: waterfall charts, radar charts, box and whisker plots, bubble graphs, streamgraphs, tree maps, pareto charts, and so on.

Sometimes these may be appropriate for specific cases (e.g. a Sankey chart to show web traffic flow) but it really comes back to the question of who your intended audience is and what medium you'll be showing your chart through.

Does this radar chart really communicate your message well? Would a simple bar chart, which is widely understood, be a better alternative?

Radar chart example

Whenever I teach a dataviz class, I always say that a good chart should be like a good joke: it should be understood without you having to explain it.

Is that pie chart really the best choice?

Pie charts are popular and ubiquitous, but somewhat maligned by the data visualization community. Why is that?

Consider this default pie chart in Data Studio, showing website Sessions broken out by Medium:

Bad pie chart example

This chart (and pie charts in general) has two main drawbacks: 1) it's hard for human beings to decipher the relative sizes of the slices (and their order and position affect this), and 2) the long tail is unreadable. Plus, the legend is ugly to look at.

A much better chart for data with many categories and a long-tail would be a standard bar chart. Nothing fancy here, but it's super quick and easy to read off the values, especially for the smaller categories (e.g. compare trying to understand email sessions in the pie chart vs. the bar chart).

Bar chart to replace pie chart

So if you're going to use them, restrict pie charts to a small number of categories (I'd advise three or fewer), and always ask yourself if a simple bar chart or table would suffice and be quicker to read.

Be careful with dual axes charts

Dual axes charts should be used with caution as they often cause confusion. It's tempting to use them when trying to chart data series with large size differences, as shown in the following image. Which series goes with which axis? Lines that overlap will also imply meaning that doesn't actually exist, because the series are on different scales.

Dual axis confusion

Some strategies you can use to mitigate confusion include matching the series and axes with different colors, labeling the axes clearly and even using different chart types for the different series (line with a bar).

However, I'd still advocate only using them sparingly. It's often better to show the two series in separate charts next to each other.

When to start the y-axis at 0

For bar charts, you should always start the y-axis at 0 since the height of the bar represents the count in that category. We look at the height of the bars and compare them. If one bar is twice the height of the other, then we're going to conclude that the value of that category is twice the value of the other category, even if the axis shows otherwise.

Consider this simple example. Both bar charts have been plotted from the same data but they tell very different stories:

Truncated y-axis

Vox Media created an excellent video about truncating the y-axis. With line charts we don't need to be so strict with truncated y-axes as the visual lines are used to compare trends, not actual values, as in the case of bars. Indeed you sometimes need to narrow the range with line charts to show the story.
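
Here is a minimal sketch (with made-up values) of the effect described above: the same two bars plotted with a truncated y-axis and with the y-axis starting at 0.

```python
# The same data, told two ways: a truncated y-axis exaggerates a small difference,
# while starting the y-axis at 0 shows the true proportions. Values are made up.
import matplotlib.pyplot as plt

categories = ["Product A", "Product B"]
values = [98, 102]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.bar(categories, values)
ax1.set_ylim(95, 103)              # truncated: B looks several times larger than A
ax1.set_title("Truncated y-axis")

ax2.bar(categories, values)
ax2.set_ylim(0, 110)               # full axis: the real ~4% difference is visible
ax2.set_title("y-axis starting at 0")

plt.tight_layout()
plt.show()
```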

Remember to consider the color blind

Approximately 10% of the male population and 1% of the female population are color-blind, and the most common type is red-green color-blindness. So it pays to keep this in mind when designing your charts.

Closing Thoughts

Once you've created your charts, or your dashboard, pause and ask yourself these few questions:

  • Is it effective at communicating your message?
  • Is it efficient at communicating your message?
  • Ultimately, does the audience benefit from seeing your visualization?

There is no single right answer with data visualizations, as it will depend on many of the factors discussed above. People will come up with different charts from the same dataset, all of which could be equally effective. However, by following some best practices and thinking critically about your charts, you can improve them dramatically.

I'll leave you with some parting words from a master in this field:

"Above all else show the data" Edward Tufte
