Would You Let a Robot Take Care of Your Mother? NYT.com


Author Maggie Jackson’s latest article in the New York Times is raising some questions about domestic robots for the elderly:

Robotic companions are being promoted as an antidote to the burden of longer, lonelier human lives. At stake is the future of what it means to be human.

I was briefly quoted about the ethical dilemma:

Many in the field see the tensions and dilemmas in robot care, yet believe the benefits can outweigh the risks. The technology is “intended to help older adults carry out their daily lives,” says Richard Pak, a Clemson University scientist who studies the intersection of human psychology and technology design, including robots. “If the cost is sort of tricking people in a sense, I think, without knowing what the future holds, that might be a worthy trade-off.” Still he wonders, “Is this the right thing to do?”

A/B Testing Tools: VWO Compared to Google Optimize and Convert

Using an A/B testing tool is one of the easiest ways to discover which variations of your website increase your sales and leads the most.

To help you understand the different tool options available to you, I have created a guide that compares three leading tool choices: VWO, Convert, and the newly launched Google Optimize.

All of them have their advantages and disadvantages, so I have created a grid comparing VWO (Visual Website Optimizer) with its major rivals, including pricing and functionality. Below the table I’ve also included an overview along with the pros and cons for each of the tools.

Let’s get started with the ratings:

Cost rating
  • Google Optimize: 10/10 – Free (with a premium 360 version also available).
  • VWO: 7/10 – Plans start at $199 per month for their Growth plan (but only for 10K visitors).
  • Convert: 6/10 – Plans start at $599 per month for their Core plan, but this includes 500K visitors.

Test creation options and ease of use
  • Google Optimize: 5/10 – Good visual editing options, but only via a Chrome plugin. Poor usability of the A/B test creation process.
  • VWO: 8/10 – Great A/B test creation process, with easy-to-use visual and code editing options.
  • Convert: 8/10 – Very good editor for coding and CSS, though the A/B test creation process is lacking.

Ease of adding test code
  • Google Optimize: 9/10 – Very easy; just one extra line of Google Analytics code needed.
  • VWO: 9/10 – Very simple; only one tag needed.
  • Convert: 9/10 – Very simple; only one tag needed.

Testing types available
  • Google Optimize: 9/10 – A/B tests, MVT, redirect and multi-page tests included in all plans.
  • VWO: 6/10 – A/B tests and redirect tests included in all plans, but MVT is not in the basic plan.
  • Convert: 9/10 – A/B tests, MVT, redirect and multi-page tests included in all plans.

Test targeting options
  • Google Optimize: 8/10 – Google Analytics segments are used for targeting, and are very simple to use.
  • VWO: 8/10 – Good behavioral targeting options at all levels, with a basic ability to build custom target segments.
  • Convert: 9/10 – The most advanced targeting options, included even in the lowest-level plan.

Conversion goals and success metrics
  • Google Optimize: 7/10 – Only Google Analytics goals are used; no customized goals.
  • VWO: 9/10 – Great – a good selection of goals and metrics, including revenue.
  • Convert: 9/10 – Great – a good selection of goals and metrics, including revenue.

A/B test reporting
  • Google Optimize: 6/10 – Standard reporting, but in-depth analysis can be done in Google Analytics.
  • VWO: 9/10 – Great reporting, and now includes an A/B testing repository for managing and recording insights.
  • Convert: 8/10 – Good results reporting and dashboard overview. Being updated soon.

Customer service
  • Google Optimize: 1/10 – No support offered; only included in the premium version.
  • VWO: 8/10 – Email support included in the basic plan, 24/7 phone support in the enterprise plan.
  • Convert: 9/10 – Support available by web chat in addition to phone and email.

Overall rating
  • Google Optimize: 7/10 – Good for beginners, but lacks advanced features and support options.
  • VWO: 9/10 – Highly recommended for all types of users, with reasonable costs.
  • Convert: 9/10 – Better features for A/B testing pros, although a more expensive initial cost.

Free trial?
  • Google Optimize: Try Google Optimize – free.
  • VWO: Get a 30-day free trial of VWO.
  • Convert: Get a 30-day free trial of Convert.

 

Now let’s get more detailed, with an overview of each of these tools, including their pros and cons.

Google Optimize overview

Google Optimize launched in 2017 and is a major improvement over their previous A/B testing tool which was built into Google Analytics (called Google Content Experiments). It has limitations but is improving. Here are the notable pros and cons of the tool:

Pros:

  • Very good tool for free, much better than their previous offering.
  • Excellent integration with Google Analytics makes A/B testing very easy.
  • Easy to analyze reports in greater depth using Google Analytics.
  • Can be upgraded to their premium 360 version to unlock more features.

Cons:

  • Only 5 tests can be run at the same time (unlimited when upgrading to 360 version).
  • Multi-page and mobile testing not included (only in 360 version).
  • No support included, only through forums or paid 3rd party consultants or agencies.

Rating on G2Crowd: 4.3/5 (as of August 2019)

VWO overview

There are now several versions of VWO. One is called VWO Testing; the others are VWO Insights (which includes visitor recordings, survey tools and form analysis) and VWO Full Stack (which includes mobile app and server-side testing). Here are the pros and cons of the VWO Testing plan:

Pros:

  • Intuitive user interface with great A/B test design wizard to help you get better results.
  • Includes A/B test duration estimator tool which helps you see if you have enough traffic.
  • Now includes their great ‘VWO Plan’ for managing and recording insights from A/B tests.
  • Great integration between all the tools on their platform, including visitor recordings and surveys.

Cons:

  • MVT testing is not available in their lower level package, only in their pro plan.
  • Sometimes it’s hard to create tests across pages with complicated URLs.
  • Pricing has increased recently as they move away from the low cost tool market.

Rating on G2Crowd: 4.2/5 (as of August 2019)

Convert overview

Convert has evolved from being a great low cost A/B testing tool to one of the most advanced and complete tools available.

Pros:

  • Great for pros, including a very good editor for CSS and JavaScript.
  • It is one of the only GDPR-compliant A/B testing tools.
  • The fastest-loading tool, with no page flicker experienced.
  • Offers the best support of all tools, including web chat support.

Cons:

  • Expensive monthly plan, although it includes many more visitors than VWO's lowest plan.

Rating on G2Crowd: 4.6/5 (as of August 2019)

I have managed to get their free trial extended from 14 days to 30 days – you won’t find this elsewhere.

Which is better? VWO, Google Optimize, or Convert? 

To help you make up your mind, here are my expert two cents on these tools:

  • Google Optimize is the best for beginners or those with no budget for A/B testing, but has limitations.
  • VWO is a great tool for those looking for more A/B testing features than Google Optimize.
  • Convert is best for advanced users, with excellent features and the best support options.

Ultimately I suggest you sign up for Google Optimize to see if that meets your needs, and also try a free trial of VWO to see the benefits of their more advanced A/B testing features. If your company can afford Convert and you want expert features, then use that tool instead.

Extra reading: If you want to get much better results from using these A/B testing tools, then don’t forget to read my essential user guide for A/B testing tool success.

Other A/B testing tools worth considering

I have reviewed 3 of the most popular A/B testing tools, but there are certainly other, newer tools. Here are some of the best ones to also consider, each with its own strengths:

  • Omniconvert – a multi-purpose tool that also offers personalization, surveys, and website overlays.
  • Optimizely – Used to be the most popular low-cost testing tool, but is now focused on the enterprise market only.
  • A/B Tasty – a good tool that also includes useful options for visitor recordings and heatmaps.

There are also other good tools purely for the enterprise market like Adobe Target, Qubit, and Monetate.

Wrapping up

That’s my expert two cents. Now over to you the readers – what is your favorite A/B testing tool, and why? Please comment below!

The Ultimate Guide to Conversion Rate Optimization (2019)

The ultimate guide to conversion rate optimization for websites, helping you generate many more website sales and customers.


It’s no longer just about driving more traffic to your website to increase sales. To really increase sales you need to optimize your website experience to convert more of your visitors. This new technique is called conversion rate optimization (CRO), and many websites already gain great results from it.

Are you ready to take advantage too?

This guide helps you quickly benefit from CRO – and it’s not a basic CRO guide.

While it starts with the basics, it reveals the essential aspects of CRO, including the four main elements of CRO, the importance of conversion research, and a CRO process flow to maximize your success.

I created it based on my 10 years of experience with CRO, and it will help you significantly increase your website sales or leads, without needing more traffic. It’s a long guide, so here is a table of contents:

What is conversion rate optimization (CRO)?

Conversion rate optimization is the art of converting more visitors on your website into your goals (e.g. sales or leads). By increasing your conversion rate, you increase your website sales or leads without actually needing more traffic.

What are the benefits of doing CRO for your website?

The biggest benefit is that it helps you generate much more revenue from your website. Here is how:

  • Generates more leads or sales on your website, with the same traffic you already have, which means you don’t have to spend more money on traffic.
  • Helps maximize the return on investment from your marketing spend, and reduces the cost per sale or acquisition.
  • Improves your website so that it engages more visitors, and increases the chances of them returning and converting in the future.

What kind of results can I expect from doing CRO?

This depends on how much effort and budget you put in, and your level of expertise. With modest effort you can increase conversion rates by 5-10%. This may not sound like much, but it often has a big impact on your online sales. If you maximize your efforts with CRO you can get amazing results like these:

  • GetResponse.com increased sign-ups by 153% with CRO
  • CrazyEgg.com grew revenue 363% by doing CRO
  • TheGuardian.co.uk increased subscriptions by 46% by doing CRO
  • Moz.com generated an extra $1 million in sales doing CRO

How is conversion rate calculated?

Your conversion rate is the proportion of your website visitors that convert for your main website goal, which is quite often a purchase or a signup. This is how it is calculated for an ecommerce website:

Ecommerce website conversion rate
(Number of orders / number of website visitors) x 100

This conversion rate is often set up by default in tools like Google Analytics. They also track ‘goal conversion rate’ for specific goals like signups or leads.
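
To make the arithmetic concrete, here is a minimal sketch in Python (the numbers are made up purely for illustration):

```python
# Hypothetical figures, purely for illustration
orders = 300        # completed purchases in the period
visitors = 10_000   # website visitors in the same period

# (Number of orders / number of website visitors) x 100
conversion_rate = (orders / visitors) * 100
print(f"Ecommerce conversion rate: {conversion_rate:.1f}%")  # 3.0%
```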

What are the main elements of CRO?

CRO is made up of four overlapping main elements – conversion research, user experience (UX), website persuasion, and A/B testing and personalization. Making strong use of these will increase your chances of improving your conversion rates, and therefore your sales or leads.

CRO main elements

  • Conversion Research: Gather insights and improvement ideas from conversion research. This comes from web analytics, heat maps, visitor recordings, surveys, user testing and expert CRO reviews. This is the most essential piece of CRO – the other elements cannot be done effectively without it.
  • Website Persuasion: Don’t just hope your website converts your visitors. To engage and convert many more of them, use copywriting best practices and influence techniques, including the usage of social proof, scarcity, urgency and reciprocity.
  • User Experience (UX): Improve your website user experience so visitors can browse and convert more easily, including using best practices for improving your website navigation, forms and user flow. Without it, it doesn’t matter how good your website looks or how persuasive it is.
  • A/B Testing & Personalization: A/B tests and personalization techniques are used to discover and show the highest converting experience for your website. This is very useful, but not essential, particularly because so many websites don’t have enough traffic or conversions for this.

All of these elements overlap and feed into each other to gain better results from CRO, particularly conversion research. For example insights from conversion research feed into better ideas for A/B testing and personalization.

Why is conversion research so important?

Don’t just guess at what to improve on your website, or only listen to what your boss wants to improve, as this often fails to get good results on your conversion rates and sales.

Conversion research is essential for determining what needs improving and why, and is gathered from visitors and analytics tools.

There are 6 elements of conversion research that you need to use to gain the best results:

  1. Web analytics. Tools like Google Analytics are not just for reporting on traffic and KPIs. Doing in-depth analysis forms the quantitative part of conversion research, and reveals pages and traffic sources with the highest potential to improve.
  2. Visitor recordings. Use these to watch EXACTLY what visitors do on your website. They are great for discovering visitor issues, like page elements or form fields they find hard to use. Always gain insights from these recordings for pages you want to improve.
  3. Heat maps. These are a good complement to visitor recordings. Don’t just presume you know what visitors click on or how far they scroll – check these for your key pages. Great for revealing CTAs, images and content that should be clicked on more.
  4. Surveys and polls. The voice of your visitors is THE most important thing in CRO. It is essential to find out what they like and don’t like using surveys and polls. Create single question polls for specific feedback, and send customer surveys.
  5. User testing. Gain feedback from your target audience while they try to complete tasks on your website and ask them questions. Great for discovering what people think of your website, their issues with it, and what needs improving.
  6. Expert reviews. This is done by an experienced CRO expert (often called heuristic analysis), and is a fast, effective way of getting CRO insights and recommendations. These are offered by CRO experts including myself, CXL and WiderFunnel.

Insights from these elements of conversion research then feed into better ideas for the other elements of CRO, including A/B testing.

Conversion research is often neglected or not well understood, apart from web analytics, so you have huge potential to take advantage of this element of CRO in particular.

How is website persuasion used in CRO?

You need to persuade your website visitors to purchase or sign up – don’t just hope they will. Therefore you need to use this newer technique of website persuasion, which is one of the 4 main elements of CRO.

Compelling copywriting plays a huge part in persuasion, particularly headlines, bullet points and CTAs. Mention how your website solves pain points and benefits. My copywriting guide gives many best practices and techniques to use.

Social proof, urgency, scarcity and reciprocity are essential influence techniques to use, as made famous by Robert Cialdini’s ‘Influence’ book.

Social proof is particularly important to show prominently, including reviews and ratings, testimonials, ‘as featured in’ and logos of well known customers. Doing this will increase the chances of visitors thinking your website is liked by others, and also using it.

Showing urgency and scarcity messages can also work well. People don’t like to feel they may miss out, so this messaging can help convert visitors. UseFomo.com is a great tool for doing this (but don’t go to travel website extremes!)

Why is user experience (UX) essential for CRO?

It doesn’t matter how good your website looks or how persuasive it is, if visitors find it hard to use they will not convert very often.

Therefore you need to ensure you adopt website usability best practices to improve your website user experience. Navigation, forms and user flow are very important elements of UX to improve, and here are some good examples:

  • Use tool tips for fields or pieces of content that require explaining.
  • Improve error handling on form fields to ensure greater completion rate.
  • Make buttons and links fat finger friendly on mobile devices.

Best practice UX improvements should be just launched and don’t need A/B testing first. UX trends should be A/B tested first though.

Can you do CRO if you don’t have enough traffic for A/B testing?

A/B testing is certainly a part of CRO, along with personalization, but it is not essential. While it is very useful for discovering which versions of your website ideas convert better, many websites don’t even have enough traffic to do A/B testing (you need at least 5,000 unique visitors per week to the page that you want to run an A/B test on, and at least 200 website conversions per week).
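
As a rough sketch of that rule of thumb (the thresholds come from the paragraph above; the helper function and example numbers are my own):

```python
def enough_traffic_for_ab_test(weekly_visitors: int, weekly_conversions: int) -> bool:
    """Rule of thumb from this guide: the page under test needs at least
    5,000 unique visitors and 200 conversions per week to be worth A/B testing."""
    return weekly_visitors >= 5_000 and weekly_conversions >= 200

# Hypothetical pages
print(enough_traffic_for_ab_test(3_200, 90))     # False – just launch the change and monitor it
print(enough_traffic_for_ab_test(12_000, 350))   # True – worth A/B testing
```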

If you don’t have enough traffic you should just launch your website improvement ideas and then monitor their impact on your website conversion rate. Here is a great guide that explains how to do CRO if you have a low traffic website.

Do I need to A/B test all CRO improvements or just launch them?

You don’t have to A/B test every CRO improvement you want to make to your website. This would require a lot of traffic, time and effort. Most importantly though, there are improvements you can just launch, even if you have enough traffic to A/B test them. These are considered best practice and will improve any website, so should just be launched without needing A/B testing first. This frees up time to A/B test other elements with higher impact.

Launch it – website improvements to launch instead of A/B testing:

  • Usability fixes and improvements (improving confusing or difficult navigation)
  • Prominent unique value proposition elements on key entry pages
  • Purchase risk reducers like guarantees, free shipping and free returns
  • Social proof like reviews and ratings, ‘as featured in’ and third party ratings

You can certainly do follow up A/B testing to fine tune these or iterate on the exact location or style of them, but the key thing is to just launch them first because they are so important to have.

A/B test it – website elements always worth A/B testing:

Any time it is unclear which version of an improvement will perform better – particularly for elements involving psychology and influence – it is definitely worth A/B testing to find the one with the highest conversion rate. Here are some examples:

  • Headlines (these have a huge impact on visitor engagement)
  • Website copy on key pages like the homepage and service/product pages
  • Call-to-action wording on buttons
  • Influence and persuasion elements mentioning scarcity or urgency

How to make best use of personalization for CRO

Don’t just do A/B testing; move beyond it by also doing personalization to improve your conversion rates. Instead of a one-size-fits-all approach, you need to personalize your website to engage and convert more visitors. Headlines and hero images on key entry pages are particularly good candidates for personalization.

One of the best ways to use personalization is to target visitor segments with more relevant content:

  • Returning visitors with content relating to what they saw previously
  • Frequent purchasers with loyalty content like rewards or discounts

This personalization can be done with any A/B testing tool, like VWO or Google Optimize.

However, it doesn’t matter how well personalized your website is if it doesn’t have a good user experience or doesn’t persuade visitors to convert. Therefore, to see the best results from personalization you need to ensure your website has first been improved with the other elements of CRO.

What tools do you need for CRO?

For doing conversion research and A/B testing, you need three key types of website tools:

  • A web analytics tool. This tool is essential because it helps you monitor your current website conversion rate and success metric performance. It also helps you to gain great visitor insights and find poorly converting pages to improve. A simple web analytics tool like Google Analytics needs to be set up and used for this.
  • Visitor feedback tools. Getting great feedback from your visitors is essential for really understanding their needs and for gaining high-impact ideas for improving your website and conversion rates. The most important tool to use for this is Hotjar.com. User testing tools like Userfeel.com and UsabilityHub.com are essential too.
  • An A/B testing tool. Ideally you need to test different versions of your content (like different call-to-action buttons or different page layout) to see which version increases your conversion rates the most. A low-cost A/B testing tool like VWO is a great place to start, and here is a review of common A/B testing tools.

Is there a process I can use to get better CRO results?

CRO shouldn’t be done randomly or only as a project – a continuous CRO process is needed for success. I created a CRO success flow that helps ensure you get the best results for improving your conversion rates and website sales. Here are the 5 steps of this process:

Step 1 is to do in-depth conversion research; this is essential and was discussed earlier in this guide.

Conversion research then feeds into step 2, CRO ideation, where ideas for improving your website are created, along with ideas from the website persuasion and UX elements (two of the other parts of CRO).

Prioritization of CRO ideas in step 3 is important to ensure you launch ideas with highest impact. Use my website prioritization tool in my CRO toolbox to help you do this.

Next in step 4 you launch the website improvement or A/B test it (if you have enough traffic).

The last and very important thing to do for step 5 is to review, learn and iterate from what you have launched or tested. This then feeds back into forming more conversion research, and the process continues again.

What website elements have the biggest impact on CRO?

Unfortunately there is no silver bullet that will work every time. Depending on your type of website, your unique value proposition and your type of visitors, there are hundreds of website elements that contribute to increased conversion rates. However, here are some things to improve that often have a big impact on increasing your conversion rates.

  • Call-to-action buttons. These important call-to-action (CTA) buttons that influence visitors to take an action on your website have a high impact on your conversion rates. To improve their effectiveness, improve the wording, style, color, size and even the location of them on your pages. Dual CTA buttons can be used effectively when there is more than one main goal, as can adding useful related text very close to the button. Here are some good examples for your inspiration:
  • Headlines and important text. If your text doesn’t grab the attention of your visitors and intrigue them to read the rest of your content, then there will be a greater chance of them exiting your website, lowering your conversion rates. Test improving your headlines by keeping them simple, with wording that solves visitors’ needs or explains benefits. You should also condense long blocks of text and use bullet points instead – these are far easier for visitors to scan and understand quickly. Here is an example:
  • Shopping cart and checkout pages or signup flow pages. These are key because if your visitors struggle with these pages (regardless of how good their prior experience has been on your website), then they will abandon your website, lowering conversions and potential revenue. In particular you need to make your forms simple to complete, remove non-mandatory fields, improve your error validation, and use risk-reducers like security seals, benefits of using your website, guarantees and shipping/returns offers.
  • Your home page and key entry pages. These are often referred to as your landing pages, and usually get the most traffic on your website, so often have the biggest impact on conversion rates. Making sure these are focused, uncluttered and solve for your visitors main needs will greatly improve your conversion rates. Using targeting for your tests on these pages to customize your visitors experience will meet their needs better and increase your conversion rates too.

For more details on these, and hundreds of other ideas to improve your conversion rates on many types of web pages, check out my CRO course, or check out my book.

What is a good conversion rate?

This last CRO question is a very common one, and, sorry to disappoint you, there is no simple answer. This is because conversion rates are hugely dependent on your website type, your unique value proposition, and your marketing efforts.

For a rough benchmark though, 2% is average for an ecommerce website and anything above 5% is considered very good. But to prove my point, it’s not unusual to have conversion rates above 50% for good, focused paid search lead generation landing pages.

Also, don’t compare your conversion rate to your competitors’ or to what you have read in a blog or a report – it’s risky because it may set you up for a fall or set incorrect expectations with your boss. It’s more important to increase your current conversion rate – never stop improving!

Resources for deep diving into CRO

To help you learn even more about the growing subject of conversion rate optimization, there are a number of very useful resources you should check out, from great training to courses.

Conversion rate optimization training and courses:

So there we have it. The Ultimate Guide to Conversion Rate Optimization. Hopefully you found this very useful – please share this with your colleagues, and feel free to comment below.

The current need for enforcement of safety regulations


An NPR article reports on safety violations in Kentucky:

In December 2016, Pius “Gene” Hobbs was raking gravel with the Meade County public works crew when a dump truck backed over him. The driver then accelerated forward, hitting him a second time. Hobbs was crushed to death.

The sole eyewitness to the incident said that the dump truck’s backup beeper wasn’t audible at the noisy worksite. The Kentucky State Police trooper on the scene concurred. Hobbs might not have been able to hear the truck coming.

But when Kentucky Occupational Safety and Health arrived, hours later, the inspector tested the beeper on a quiet street and said it wasn’t a problem.

“These shortcomings are very concerning,” says Jordan Barab, a workplace safety expert who served as Deputy Assistant Secretary of Labor for Occupational Safety and Health under President Barack Obama. “Identifying the causes of these incidents is … vitally important.” Otherwise, the employer doesn’t know how to avoid the next incident, he says.

Gene Hobbs’ case is not the exception. In fact, it’s the norm, according to a recent federal audit.

Kentucky is what’s known as a “state plan,” meaning the federal Occupational Safety and Health Administration has authorized it to run its own worker safety program.

Every year, federal OSHA conducts an audit of all 28 state plans to ensure they are “at least as effective” as the federal agency at identifying and preventing workplace hazards.

According to this year’s audit of Kentucky, which covered fiscal year 2017, KY OSH is not meeting that standard. In fact, federal OSHA identified more shortcomings in Kentucky’s program than any other state.

We know that we must have regulations, and enforcement of those regulations, to have safe environments. Left to our own choices, people tend to choose what appears to be the fastest and easiest option, not the safest one. For an interesting read on the history of safety regulation, see this article from the Department of Labor.

In 1898 the Wisconsin bureau reported that it was often difficult to find safety devices that did not reduce efficiency. Sanitary improvements and fire escapes were expensive, which led many employers to resist their adoption. Constant pressure and attention were needed to obtain compliance. Employers objected to the posting of laws in their establishments and some tore them down. The proprietor of a shoe factory with very poor fire escape routes showed “a disposition to defeat” an inspector’s request for more fire escapes, though he complied in the end. A cloak maker who was also found to have inadequate fire escapes went to the extreme of relocating his operation to avoid compliance. Such delays were not uncommon.

When an inspector found abominable conditions in the dipping rooms of a match factory — poorly ventilated rooms filled with poisonous fumes from the liquid phosphorus which made up the match heads — he tried to persuade the operators to make improvements. They objected because of the costs involved and the inspector “left without expecting to see the changes made.” When a machinery manufacturer equipped his ripsaws with guards after an inspection, a reinspection revealed that the employees had removed the guards.

Without regulation, we’ll be back to 1898 in short order.

Lion Air Crash from October 2018


From CNN:

The passengers on the Lion Air 610 flight were on board one of Boeing’s newest, most advanced planes. The pilot and co-pilot of the 737 MAX 8 were more than experienced, with around 11,000 flying hours between them. The weather conditions were not an issue and the flight was routine. So what caused that plane to crash into the Java Sea just 13 minutes after takeoff?

I’ve been waiting for updated information on the Lion Air crash before posting details. When I first read about the accident it struck me as a collection of human factors safety violations in design. I’ve pulled together some of the news reports on the crash, organized by the types of problems experienced on the airplane.

1. “a cacophony of warnings”
Fortune Magazine reported on the number of warnings and alarms that began to sound as soon as the plane took flight. These same alarms occurred on its previous flight and there is some blaming of the victims here when they ask “If a previous crew was able to handle it, why not this one?”

The alerts included a so-called stick shaker — a loud device that makes a thumping noise and vibrates the control column to warn pilots they’re in danger of losing lift on the wings — and instruments that registered different readings for the captain and copilot, according to data presented to a panel of lawmakers in Jakarta Thursday.

2. New automation features, no training
The plane included new “anti-stall” technology that the airlines say was not explained well nor included in Boeing training materials.

In the past week, Boeing has stepped up its response by pushing back on suggestions that the company could have better alerted its customers to the jet’s new anti-stall feature. The three largest U.S. pilot unions and Lion Air’s operations director, Zwingly Silalahi, have expressed concern over what they said was a lack of information.

As was previously revealed by investigators, the plane’s angle-of-attack sensor on the captain’s side was providing dramatically different readings than the same device feeding the copilot’s instruments.

Angle of attack registers whether the plane’s nose is pointed above or below the oncoming air flow. A reading showing the nose is too high could signal a dangerous stall and the captain’s sensor was indicating more than 20 degrees higher than its counterpart. The stick shaker was activated on the captain’s side of the plane, but not the copilot’s, according to the data.

And more from CNN:

“Generally speaking, when there is a new delivery of aircraft — even though they are the same family — airline operators are required to send their pilots for training,” Bijan Vasigh, professor of economics and finance at Embry-Riddle Aeronautical University, told CNN.

Those training sessions generally take only a few days, but they give the pilots time to familiarize themselves with any new features or changes to the system, Vasigh said.

One of the MAX 8’s new features is an anti-stalling device, the maneuvering characteristics augmentation system (MCAS). If the MCAS detects that the plane is flying too slowly or steeply, and at risk of stalling, it can automatically lower the airplane’s nose.

It’s meant to be a safety mechanism. But the problem, according to Lion Air and a growing chorus of international pilots, was that no one knew about that system. Zwingli Silalahi, Lion Air’s operational director, said that Boeing did not suggest additional training for pilots operating the 737 MAX 8. “We didn’t receive any information from Boeing or from regulator about that additional training for our pilots,” Zwingli told CNN Wednesday.

“We don’t have that in the manual of the Boeing 737 MAX 8. That’s why we don’t have the special training for that specific situation,” he said.

Embedding Google Data Studio Visualizations


Last year I wrote about the Marvel vs. DC war on the big screen. It was super fun to merge two of my passions (data visualization and comics) in one piece. It started with my curiosity to understand what all those movies are amounting to, and I think it helped me prove a point: Marvel is kinda winning :-)

One of the things that annoyed me was that I had to link to the interactive visualization, readers couldn't see the amazing charts in my article (!) - so I ended up including static screenshots with some insights explained through text. While some people clicked through to play with the data, I suspect many just read the piece and went away, which is suboptimal - when I publish a story, my goal is to allow readers to interact with it quickly and effectively.

I am extremely excited that now Google Data Studio allows users to embed reports in any online environment, which empowers us to create an improved experience for telling stories with data. This feature will be an essential tool for data journalists and analysts to effectively share insights with their audiences.

A year has passed since I did the Marvel vs. DC visualization, so I thought it was time to update it (5 new movies!) and share some insights on how to use Data Studio report embedding to create effective data stories.

Enable embedding

The first step to embed reports is a pretty important one: enable embedding! This is quite simple to do:

  1. Open the report and click on File (top left)
  2. Click on Embed report
  3. Check Enable embedding and choose the width and height of your iframe (screenshot below)

Google data studio enable embedding

Please note that embedding will work only for people who have access to the report. If the report is supposed to be publicly available, make sure you make it viewable to everyone. If the report should be seen only by people in a group, then make sure to update your sharing settings accordingly. Read more about sharing reports in this help center article.

But how do you make sure you are choosing the right sizes? Read on...

Choosing the right visualization sizes

Needless to say, people access websites from all possible device categories and platforms, and we have little control over that. But we do have control over how we display information on different screens. The first obvious recommendation (and hopefully the whole Interweb agrees with me): make your website responsive! I am assuming you have already done that.

On Online Behavior, the content area is 640px wide, so the choice is pretty obvious when Data Studio asks for the width of my iframe – make sure you know the width of the content area where the iframe will be embedded. Also, since you want the visualizations to resize as the page responds to the screen size, set your Display mode to Fit to width (an option available in Page settings).

Without further ado, here is the full Marvel vs. DC visualization v2!

I personally think the full dataviz looks pretty good when reading on a desktop; I kept it clean and short. However, as your screen size decreases, even though the report iframe will resize the image, it will eventually get too small to read. In addition, I often like to develop my stories by intertwining charts and text to make them more digestible. So here is an alternative to embedding the whole thing...

Breaking down your dataviz into digestible insights

As I mentioned, sometimes you want to show one chart at a time. In this case, you might want to create separate versions of your visualization. Below I broke down the full dataviz into small chunks. Note that you will find three different pages in the iframe below, one per chart (see the navigation at the bottom of the report).

Right now, you can't embed only one page, which means that if you want to show a specific chart that lives on page 2 of a report you would need to create a new report, but that's a piece of cake :-)

I am looking forward to seeing all the great visualizations that will be created and embedded throughout the web - why not partner with our data to create insightful stories? Let's make our blogs and newspapers more interesting to read :-) Happy embedding!

BONUS: Data Studio is the referee in the Marvel vs. DC fight!

As I was working on my dataviz, I asked my 10yo son (also a comic enthusiast) to create something that I could use to represent it. He created the collage/drawing below; I think it is an amazing visual description of my work :-)

Data Studio referee


Statistical Design in Online A/B Testing


A/B testing is the field of digital marketing with the highest potential to apply scientific principles, as each A/B experiment is a randomized controlled trial, very similar to ones done in physics, medicine, biology, genetics, etc. However, common advice and part of the practice in A/B testing are lagging by about half a century when compared to modern statistical approaches to experimentation.

There are major issues with the common statistical approaches discussed in most A/B testing literature and applied daily by many practitioners. The three major ones are:

  1. Misuse of statistical significance tests
  2. Lack of consideration for statistical power
  3. Significant inefficiency of statistical methods

In this article I discuss each of these three issues in some detail, and propose a solution inspired by clinical randomized controlled trials, which I call the AGILE statistical approach to A/B testing.

1. Misuse of Statistical Significance Tests

In most A/B testing content, when statistical tests are mentioned they inevitably discuss statistical significance in some fashion. However, in many of them a major constraint of classical statistical significance tests, e.g. the Student’s T-test, is simply not mentioned. That constraint is the fact that you must fix the number of users you will need to observe in advance.

Before going deeper into the issue, let’s briefly discuss what a statistical significance test actually is. In most A/B tests it amounts to an estimation of the probability of observing a result equal to or more extreme than the one we observed, due to the natural variance in the data that would happen even if there is no true positive lift.

Below is an illustration of the natural variance, where 10,000 random samples are generated from a Bernoulli distribution with a true conversion rate at 0.50%.

Natural Variance
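
If you want to reproduce an illustration like the one above, here is a minimal sketch assuming Python with NumPy and matplotlib (the per-sample size is my own assumption, since the article does not state it):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

true_rate = 0.005    # true conversion rate of 0.50%, as in the article
n_users = 20_000     # users per simulated sample (assumed; not stated in the article)
n_samples = 10_000   # number of random samples, as in the article

# Each sample counts conversions among n_users Bernoulli(true_rate) trials
conversions = rng.binomial(n=n_users, p=true_rate, size=n_samples)
observed_rates = conversions / n_users * 100  # observed conversion rate in %

plt.hist(observed_rates, bins=40)
plt.axvline(true_rate * 100, linestyle="--")
plt.xlabel("Observed conversion rate (%)")
plt.ylabel("Number of samples")
plt.title("Natural variance around a true 0.50% conversion rate")
plt.show()
```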

In an A/B test we randomly split users in two or more arms of the experiment, thus eliminating confounding variables, which allows us to establish a causal relationship between observed effect and the changes we introduced in the tested variants. If after observing a number of users we register a conversion rate of 0.62% for the tested variant versus a 0.50% for the control, that means that we either observed a rare (5% probability) event, or there is in fact some positive difference (lift) between the variant and control.

In general, the less likely we are to observe a particular result, the more likely it is that what we are observing is due to a genuine effect, but applying this logic requires knowledge that is external to the statistical design so I won’t go into details about that.

The above statistical model comes with some assumptions, one of which is that you observe the data and act on it at a single point in time. For statistical significance to work as expected we must adhere to a strict application of the method where you declare you will test, say, 20,000 users per arm, or 40,000 in total, and then do a single evaluation of statistical significance. If you do it this way, there are no issues. Approaches like “wait till you have 100 conversions per arm” or “wait till you observe XX% confidence” are not statistically rigorous and will probably get you in trouble.

However, in practice, tests can take several weeks to complete, and multiple people look at the results weekly, if not daily. Naturally, when results look overly positive or overly negative they want to take quick action. If the tested variant is doing poorly, there is pressure to stop the test early to prevent losses and to redirect resources to more prospective variants. If the tested variant is doing great early on, there is pressure to suspend the test, call the winner and implement the change so the perceived lift can be converted to revenue quicker. I believe there is no A/B testing practitioner who will deny these realities.

These pressures lead to what is called data peeking or data-driven optional stopping. The classical significance test offers no error guarantees if it is misused in such a manner, resulting in illusory findings – both in terms of the direction of the result (false positives) and in the magnitude of the achieved lift. The reason is that peeking adds an additional dimension to the test sample space. Instead of estimating the probability of a single false detection of a winner at a single point in time, the test would actually need to estimate the probability of a false detection across multiple points in time.

If the conversion rates were constant, that would not be an issue. But since they vary without any interventions, the cumulative data varies as well, so adjustments to the classical test are required in order to calculate the error probability when multiple analyses are performed. Without those adjustments, the actual error rate will be inflated significantly compared to the nominal or reported error rate. To illustrate: peeking only 2 times results in more than twice the actual error versus the reported error. Peeking 5 times results in 3.2 times larger actual error versus the nominal one. Peeking 10 times results in 5 times larger actual error probability versus the nominal error probability. This has been known to statistical practitioners since as early as 1969 and has been verified time and again.
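
The inflation is easy to demonstrate by simulation. Below is a minimal sketch under my own simplifying assumptions (an A/A test with no true difference, a two-sided z-test on proportions, equally spaced looks); the exact inflation factors depend on when and how often you peek:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def false_positive_rate(n_looks, n_sims=1_000, users_per_arm=10_000, cr=0.005, alpha=0.05):
    """Simulate A/A tests and call a 'winner' the first time any interim
    z-test on proportions is significant at the nominal alpha."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    look_points = [int(users_per_arm * (i + 1) / n_looks) for i in range(n_looks)]
    false_positives = 0
    for _ in range(n_sims):
        a = rng.binomial(1, cr, users_per_arm)  # control: 1 = converted
        b = rng.binomial(1, cr, users_per_arm)  # variant: same true rate, so any 'win' is false
        for n in look_points:
            pa, pb = a[:n].mean(), b[:n].mean()
            pooled = (a[:n].sum() + b[:n].sum()) / (2 * n)
            se = np.sqrt(2 * pooled * (1 - pooled) / n)
            if se > 0 and abs(pa - pb) / se > z_crit:
                false_positives += 1
                break
    return false_positives / n_sims

for looks in (1, 2, 5, 10):
    print(f"{looks} look(s): observed false positive rate ~ {false_positive_rate(looks):.3f}")
```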

If one fails to fix the sample size in advance or if one is performing multiple statistical significance tests as the data accrues, then we have a case of GIGO, or Garbage In, Garbage Out.

2. Lack of Consideration for Statistical Power

In a review of 7 influential books on A/B testing published between 2008 and 2014 we found only 1 book mentioning statistical power in a proper context, but even there the coverage was superficial. The remaining 6 books didn’t even mention the notion. From my observations, the situation is similar when it comes to most articles and blog posts on the topic.

But what is statistical power and why is it important for A/B experiments? Statistical power is defined as the probability to detect a true lift equal to or larger than a given minimum, with a specified statistical significance threshold. Hence the more powerful a test, the larger the probability that it will detect a true lift. I often use “test sensitivity” and “chance to detect effect” as synonyms, as I believe these terms are more accessible for non-statisticians while reflecting the true meaning of statistical power.

Running a test with inadequately low power means you won’t be giving your variant a real chance at proving itself, if it is in fact better. Thus, running an under-powered test means that you spend days, weeks and sometimes months planning and implementing a test, but then fail to get an adequate appraisal of its true potential, in effect wasting all the invested resources.

What’s worse, a false negative can be erroneously interpreted as a true negative, meaning you will think that a certain intervention doesn’t work while in fact it does, effectively barring further tests in a direction that would have yielded gains in conversion rate.

Power and Sample Size

Power and sample size are intimately tied: the larger the sample size, the more powerful (or sensitive) the test is, in general. Let’s say you want to run a proper statistical significance test, acting on the results only once the test is completed. To determine the sample size, you need to specify four things: the historical baseline conversion rate (say 1%), the statistical significance threshold (say 95%), the power (say 90%), and the minimum effect size of interest.

Last time I checked, many of the free statistical calculators out there won’t even allow you to set the power and in fact silently operate at 50% power – a coin toss – which is abysmally low for most applications. If you use a proper sample size calculator for the first time, you will quickly discover that the required sample sizes are more prohibitive than you previously thought, and hence you need to compromise either on the level of certainty, or on the minimum effect size of interest, or on the power of the test. Here are two you could start with, but you will find many more in R packages, G*Power, etc:

Making decisions about the 3 parameters you control – certainty, power and minimum effect size of interest – is not always easy. What makes it even harder is that you remain bound to that one look at the end of the test, so the choice of parameters is crucial to the inferences you will be able to make at the end. What if you chose too high a minimum effect, resulting in a quick test that was, however, unlikely to pick up on small improvements? Or too low an effect size, resulting in a test that dragged on for a long time, when the actual effect was much larger and could have been detected much quicker? The correct choice of those parameters becomes crucial to the efficiency of the test.
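
As a sketch of how these parameters translate into a required sample size, here is the standard normal-approximation formula for comparing two proportions (this is my own illustration, not the author's calculator; a one-sided test is assumed, which is what reproduces the roughly 88,000 users per variant quoted in the next section):

```python
from scipy.stats import norm

def sample_size_per_arm(baseline_cr, relative_lift, alpha=0.05, power=0.90, one_sided=True):
    """Normal-approximation sample size per arm for detecting a relative lift
    in conversion rate at the given significance level and power."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha) if one_sided else norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

# 2% baseline, 10% relative lift, 95% confidence, 90% power -> roughly 88,000 users per arm
print(round(sample_size_per_arm(baseline_cr=0.02, relative_lift=0.10)))
```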

3. Inefficiency of Classical Statistical Tests in A/B Testing Scenarios


Classical tests are good in some areas of science like physics and agriculture, but are replaced with a newer generation of testing methods in areas like medical science and bio-statistics. The reason is two-fold. On one hand, since the hypotheses in those areas are generally less well defined, the parameters are not so easily set and misconfigurations can easily lead to over or under-powered experiments. On the other hand – ethical and financial incentives push for interim monitoring of data and for early stopping of trials when results are significantly better or significantly worse than expected.

Sounds a lot like what we deal with in A/B testing, right? Imagine planning a test with a 95% confidence threshold and 90% power to detect a 10% relative lift from a baseline of 2%. That would require 88,000 users per test variant. If, however, the actual lift is 15%, you could have run the test with only 40,000 users per variant, or with just 45% of the initially planned users. In this case, if you were monitoring the results, you’d want to stop early for efficacy. However, the classical statistical test is compromised if you do that.

On the other hand, if the true lift is in fact -10%, that is whatever we did in the tested variant actually lowers conversion rate, a person looking at the results would want to stop the test way before reaching the 88,000 users it was planned for, in order to cut the losses and to maybe start working on the next test iteration.

What if the test looked like it would convert at -20% initially, prompting the end of the test, but that was just a hiccup early on and the tested variant was actually going to deliver a 10% lift long-term?

The AGILE Statistical Method for A/B Testing


Questions and issues like these prompted me to seek better statistical practices and led me to the medical testing field where I identified a subset of approaches that seem very relevant for A/B testing. That combination of statistical practices is what I call the AGILE statistical approach to A/B testing.

I’ve written an extensive white-paper on it called “Efficient A/B Testing in Conversion Rate Optimization: The AGILE Statistical Method”. In it I outline current issues in conversion rate optimization, describe the statistical foundations for the AGILE method and describe the design and execution of a test under AGILE as an easy step-by-step process. Finally, the whole framework is validated through simulations.

The AGILE statistical method addresses misuses of statistical significance testing by providing a way to perform interim analyses of the data while keeping false positive errors controlled. This happens through the application of so-called error-spending functions, which allow a lot of flexibility to examine the data and make decisions without having to wait for the pre-determined end of the test.
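
To give a flavor of what an error-spending function looks like, here is a small sketch (my own illustration, not code from the white paper) showing how much of a 5% error budget two common spending functions allocate by each of four evenly spaced looks. Turning these allocations into actual decision boundaries additionally requires the joint distribution of the interim test statistics, which is beyond this snippet:

```python
import numpy as np
from scipy.stats import norm

alpha = 0.05
looks = np.array([0.25, 0.50, 0.75, 1.00])  # information fraction at each interim analysis

# Lan-DeMets O'Brien-Fleming-type spending: spends very little alpha early
obf = 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(looks)))

# Pocock-type spending: spends alpha more evenly across the looks
pocock = alpha * np.log(1 + (np.e - 1) * looks)

for t, a_obf, a_poc in zip(looks, obf, pocock):
    print(f"information {t:.2f}: OBF-type alpha spent {a_obf:.4f}, Pocock-type {a_poc:.4f}")
# Both functions have spent exactly alpha = 0.05 by the final look (t = 1.0)
```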

Statistical power is fundamental to the design of an AGILE A/B test, so there is no way around it: it must be taken into proper consideration.

AGILE also offers very significant efficiency gains, ranging from an average of 20% to 80%, depending on the magnitude of the true lift compared to the minimum effect of interest for which the test is planned. This speed improvement is an effect of the ability to perform interim analyses. It comes at a cost, since some tests might end up requiring more users than the maximum that would be required in a classical fixed-sample test. Simulation results described in my white paper show that such cases are rare. The added flexibility in performing analyses on accruing data and the average efficiency gains are well worth it.

Another significant improvement is the addition of a futility stopping rule, as it allows one to fail fast while having a statistical guarantee for false negatives. A futility stopping rule means you can abandon tests that have little chance of being winners without the need to wait for the end of the study. It also means that claims about the lack of efficacy of a given treatment can be made to a level of certainty, permitted by the test parameters.

Ultimately, I believe that with this approach the statistical methods can finally be aligned with the A/B testing practice and reality. Adopting it should contribute to a significant decrease in illusory results for those who were misusing statistical tests for one reason or another. The rest of you will appreciate the significant efficiency gains and the flexibility you can now enjoy without sacrifices in terms of error control.


Revamping Your App Analytics Workflows


Every mobile app professional today uses mobile app analytics to track their app. Yet there are some key elements in their analytics workflows that are naturally flawed. The solution is out there, and you might have missed it.

The flaw, and a fairly big one at that, is that app analytics pros sometimes focus solely on quantitative analytics to optimize their apps. Don't take this the wrong way – quantitative analytics is a very important part of app optimization. It can tell you if people are leaving your app too soon, if they're not completing the signup process, how often users launch your app, and things like that. However, it won't give you the answer as to why people are doing these things, or why certain unwanted things are happening in your app. And that's the general flaw.

The answer lies in expanding your arsenal – adding qualitative analytics to your workflow. Together with quantitative analytics, these tools can help you form a complete picture of your app and its users, identify the main pain points and user experience friction, helping you optimize your app and deliver the ultimate product.

So today, you are going to learn how to totally revamp your analytics workflow using qualitative analytics, and why you should do it in the first place. You'll read about the fundamentals of qualitative analytics, and how it improves your analysis accuracy, troubleshooting and overall workflows. And finally, you'll find two main ways to use qualitative analytics that can help you turn your app(s) into a mobile powerhouse.

Exploring the qualitative

Qualitative analytics can be split into two main features: heatmaps and user session recordings. Let's dig a little deeper to see what they do.

Touch heatmaps


This tool gathers all of the gestures a user makes on every screen of the app, like tapping, double-tapping, or swiping. It then aggregates these interactions to create a visual touch heatmap. This allows app pros to quickly and easily see where the majority of users are actually interacting with the app, as well as which parts of an app are being left out.

Another important advantage of touch heatmaps is the ability to see where users are trying to interact, without the app responding. These are called unresponsive gestures, and they are extremely important because they're very annoying and could severely hurt the user experience.

Unresponsive gestures can be an indication of a bug or a flaw in the design of your user interface. Also, it could show you how your users think they should move through the app. As you might imagine, being bug-free and intuitive are two very important parts of a successful app, which is why tackling unresponsive gestures can make a huge difference in your app analytics workflow.

User session recordings

User session recordings are a fundamental feature of qualitative app analytics. They allow app pros to see exactly what their users are doing as they progress through the app. That means every interaction, every sequence of events, on every screen in the app, gets recorded. This gives app pros an unbiased, unaltered view of the user experience.

With such a tool, you'll be able to better understand why users sometimes abandon an app too soon, why they decide to use it once and never again, or even why the app crashes on a particular platform or device.

Through these recordings, it becomes much easier to get to the very core of any problem your app might be experiencing. A single recording can shed light on a problem many users are struggling with. Obviously, the tool doesn't just mindlessly record everything – app pros can choose specific screens, demographics, mobile devices or operating systems to record from. It is also important for this tool to work quietly in the background without putting a strain on the app's performance.
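
As a rough illustration of that targeting idea, the sketch below narrows recording to particular screens and platforms and samples only a fraction of matching sessions. The field names and sampling approach are assumptions for illustration, not the configuration of any specific qualitative analytics SDK.

```python
# A hypothetical recording filter: only capture a small sample of sessions
# on the screens and platform currently under investigation, to keep the
# performance overhead low.
import random

RECORDING_FILTER = {
    "screens": {"onboarding", "checkout"},  # flows we want to inspect
    "os": {"Android"},                      # a single platform of interest
    "sample_rate": 0.05,                    # record ~5% of matching sessions
}

def should_record(session):
    return (session["screen"] in RECORDING_FILTER["screens"]
            and session["os"] in RECORDING_FILTER["os"]
            and random.random() < RECORDING_FILTER["sample_rate"])

print(should_record({"screen": "checkout", "os": "Android"}))
```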

Standard workflows - totally revamped

App Analytics Workflows

Qualitative analytics is too big a field to be covered in a single article. Those looking to learn more can take a free course via this link. For everyone else, it's time to discuss the two main workflows where it can be used: 'Data-fueled Optimization' and 'Proactive Troubleshooting'.

Data-fueled optimization

Both qualitative and quantitative analytics tools 'attack' the same problem from different angles. While both are tasked with helping app pros optimize their mobile products, they take different, even opposite, approaches to the solution. That makes them an insanely powerful combo when used together.

Employing inherently opposite systems to tackle the same problem at the same time helps app pros form a complete picture of their app and how it behaves 'in the wild'. While quantitative analytics can act as an alarm system, alerting app pros to a condition or a problem, qualitative analytics can be used to analyze that problem more thoroughly.

For example, your quantitative analytics tool alerts you to the fact that a third of your users abandon their shopping cart just before making a purchase. You identify it as a problem, but cannot answer the question of why it is happening.

With tools like user session recordings, you can streamline your optimization workflow and learn exactly where the problem lies. You could try to fix a problem without insights from qualitative data, but you'll essentially be "blindly taking a stab".

By watching a few user session recordings, you realize that the required registration process prior to making a purchase is simply too long. Users get halfway through it and just quit. By shortening the registration process and making checkout faster, you can lower the abandonment rate. Alert, investigate, resolve. This flow can easily become your "lather, rinse, repeat."
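
The "alert" half of that flow is easy to automate from quantitative funnel counts. The sketch below uses made-up step names and numbers purely for illustration: any step losing more than a chosen share of users gets flagged for a closer qualitative look.

```python
# A minimal sketch of flagging large funnel drop-offs from quantitative
# event counts (step names and counts are illustrative only).
funnel = [("view_cart", 9000), ("start_registration", 8200),
          ("finish_registration", 4100), ("purchase", 3900)]

ALERT_THRESHOLD = 0.30  # flag any step losing more than 30% of users

for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
    drop = 1 - next_users / users
    if drop > ALERT_THRESHOLD:
        print(f"ALERT: {drop:.0%} drop from {step} to {next_step} "
              f"- review session recordings for this step")
```

In this toy example the registration step loses about half its users, which is exactly the kind of step whose session recordings you would queue up next.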

Proactive Troubleshooting

Can you truly be proactive in your troubleshooting? Especially when using analytics? Well, if you rely solely on quantitative analytics, probably not. After all, you need a certain number of users to actually use the app for some time before you can get any numbers out, like app abandonment rates or crash rates. Only then will you be able to do anything, and at that point you're only reacting to a problem that's already present. With qualitative analytics, that's not the case.

By watching real user session recordings and keeping an eye on touch heatmaps, you can spot issues with your app's usability or user experience long before a bigger issue arises, proactively troubleshooting any problems.

For example, by watching user session recordings you notice that people are trying to log into Twitter through your app to post a tweet. However, as soon as they try to log in, the app crashes. Some users decide to quit the app altogether. Spotting such an issue helps you fix your app before it causes a bigger drop in new user retention.
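
The same pattern can be spotted programmatically by scanning recorded session event streams for a crash immediately following a specific action. The event names below (such as "tap_twitter_login") are hypothetical labels for this example, not events emitted by any real SDK.

```python
# A minimal sketch: count sessions where a crash immediately follows a
# given action, surfacing the issue before aggregate crash rates move.
def crashes_after(sessions, action):
    hits = 0
    for events in sessions:
        for current, nxt in zip(events, events[1:]):
            if current == action and nxt == "crash":
                hits += 1
                break
    return hits

sessions = [
    ["open_app", "tap_twitter_login", "crash"],
    ["open_app", "browse", "tap_twitter_login", "post_tweet"],
    ["open_app", "tap_twitter_login", "crash"],
]
print(crashes_after(sessions, "tap_twitter_login"))  # 2
```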

Not being proactive about looking for bugs and crashes doesn't mean they won't happen – it means they may go unattended for longer. By the time you spot them through quantitative analytics, they will already have hurt your user experience and probably pushed a few users your competitors' way.

Wrap-up

They say new ideas are nothing more than old ideas with a fresh twist, and if that's true, then qualitative analytics is the 'fresh twist' of mobile app analytics. Combining quantitative and qualitative analytics is a simple step that can transform your workflows and app optimization. Plus, when you understand the reasons behind the numbers in your app, you are able to make crucial decisions with more confidence.


150 Years of Marriages and Divorces in the UK

Marriage and Divorce Trends

Have you ever wondered how divorce and marriage rates have trended over the last 150 years? Or what reasons husbands and wives give when getting a divorce? Fortunately these, and other questions, can be answered with data. The UK Office for National Statistics makes available two extremely interesting and rich datasets on marriages and divorces, providing data for the last 150 years.

Following the discovery of these datasets, I decided to uncover trends and patterns in the numbers, working with my colleague Lizzie Silvey. Two important questions were in our minds when exploring the data:

  1. Who wants a divorce and why?
  2. How do wars and the law impact marriage and divorce rates in the UK?

We discuss our findings in this article, but you can also drill down into the data using this interactive visualization that we created using Google Data Studio.

Divorce petitioners and their reasons

The ratio of petitioners has been stable since around 1974 (70% women and 30% men), the point at which both genders gained the same rights and divorce could be obtained more easily.

In the charts below we see the trends for 'Adultery' and 'Unreasonable behaviour', the two most common reasons provided (out of five possible) - each line shows the number of divorces granted to the husband or wife for a specific reason.

Divorce reasons UK

In order to use Adultery grounds, the petitioner must prove that the partner had sexual intercourse with someone else, which might not be simple. We can see in the chart that Adultery follows the exact same pattern for husbands and wives, but analyzing the statistics further we see that, on average, 40% of adultery divorces are granted to husbands - since only 30% of total divorces are petitioned by husbands, it seems adultery is a particularly strong reason for men to file for divorce.
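
As a quick back-of-the-envelope check of that point, the snippet below compares the two shares quoted above (these are the rounded figures from the text, not the raw ONS numbers):

```python
# Husbands petition ~30% of all divorces but receive ~40% of those
# granted on adultery grounds - roughly a 1.33x over-representation.
share_of_all_divorces = 0.30       # husbands as petitioners overall
share_of_adultery_divorces = 0.40  # husbands among adultery-based divorces

over_representation = share_of_adultery_divorces / share_of_all_divorces
print(f"Husbands are {over_representation:.2f}x over-represented "
      f"in adultery-based divorces")
```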

The second chart, showing 'Unreasonable behaviour', is more enigmatic. While husbands were granted divorces at an increasing pace for behavioural reasons, and while the lines seem to be converging, there is a strange hump in the wives' line. Why were wives granted such a massive number of divorces up to 1992 based on unreasonable behaviour? Could that be related to a “backlog” of cases of domestic violence (classified as a behavioural reason) that came to light once women could divorce on those grounds more easily? Unfortunately we could not find data pointing to a likely explanation.

The impact of laws & wars on marriage and divorces

When looking at the marriage and divorce trends since 1862, there were a few clear turning points.

UK Marriage Divorce rates

The wars seem to have affected marriages quite significantly. Around the beginning of both World War I and World War II we see spikes in marriages, perhaps the result of young men wanting to vow their love before going off to fight. Then, during the wars, marriages plunged as soldiers went away, and rose again when they came back home.

As for divorces influenced by the wars, we can only look at World War II, as women had a limited ability to divorce after World War I. It seems the Matrimonial Causes Act 1937, which made other grounds legal (e.g. drunkenness and insanity), coupled with the hasty weddings discussed above and possibly estrangement caused by wartime separation, led to a spike in divorces starting in 1946 - after all, who would have the heart to divorce in wartime?

But what seems to be the strongest influence on divorces in the history of the UK is the Divorce Reform Act 1969 (link to PDF), which came into effect in 1971. This act states that divorce can be granted on the grounds that the marriage has irretrievably broken down, without either partner having to prove an offense. While that explains the sharp increase in divorces, we could not find a strong reason for the decline in marriages at the same time - we invite possible explanations in the comments section.

Closing Thoughts

While we couldn't fully explain why the trends move in a certain direction or predict upcoming changes, we believe the data can shed new light on British society and family relations. Hopefully, with future releases of data, we will also be able to dive deeper and answer more fundamental questions.

If you are interested in exploring the data further, check the interactive visualization created with Google Data Studio, where you will find more context and charts showing trends and patterns in marriage and divorce in the UK.
