Hawaii False Alarm: The story that keeps on giving

Right after the Hawaii false nuclear alarm, I posted about how the user interface seemed to contribute to the error. At the time, sources were reporting it as a “dropdown” menu. Well, that wasn’t exactly true, but in the last few weeks it’s become clear that truth is stranger than fiction. Here is a run-down of the news on the story (spoiler, every step is a human factors-related issue):

  • Hawaii nuclear attack alarms are sounded, also sending alerts to cell phones across the state
  • Alarm is noted as false and the state struggles to get that message out to the panicked public
  • Error is blamed on a confusing drop-down interface: “From a drop-down menu on a computer program, he saw two options: ‘Test missile alert’ and ‘Missile alert.’”
  • The actual interface is found and shown – rather than a drop-down menu it’s just closely clustered links on a 1990s-era website-looking interface that say “DRILL-PACOM(CDW)-STATE ONLY” and “PACOM(CDW)-STATE ONLY”
  • It comes to light that part of the reason the wrong alert stood for 38 minutes was that the Governor didn’t remember his Twitter login and password
  • Latest news: the employee who sounded the alarm says it wasn’t an error, he heard this was “not a drill” and acted accordingly to trigger the real alarm

The now-fired employee has spoken up, saying he was sure of his actions and “did what I was trained to do.” When asked what he’d do differently, he said “nothing,” because everything he saw and heard at the time made him think this was not a drill. His firing is clearly an attempt by Hawaii to get rid of a ‘bad apple.’ Problem solved?

It seems like a good time for my favorite reminder from Sidney Dekker’s book, “The Field Guide to Human Error Investigations” (abridged):

To protect safe systems from the vagaries of human behavior, recommendations typically propose to:

    • Tighten procedures and close regulatory gaps. This reduces the bandwidth in which people operate. It leaves less room for error.
    • Introduce more technology to monitor or replace human work. If machines do the work, then humans can no longer make errors doing it. And if machines monitor human work, they can snuff out any erratic human behavior.
    • Make sure that defective practitioners (the bad apples) do not contribute to system breakdown again. Put them on “administrative leave”; demote them to a lower status; educate or pressure them to behave better next time; instill some fear in them and their peers by taking them to court or reprimanding them.

In this view of human error, investigations can safely conclude with the label “human error”—by whatever name (for example: ignoring a warning light, violating a procedure). Such a conclusion and its implications supposedly get to the causes of system failure.

AN ILLUSION OF PROGRESS ON SAFETY
The shortcomings of the bad apple theory are severe and deep. Progress on safety based on this view is often a short-lived illusion. For example, focusing on individual failures does not take away the underlying problem. Removing “defective” practitioners (throwing out the bad apples) fails to remove the potential for the errors they made.

…[T]rying to change your people by setting examples, or changing the make-up of your operational workforce by removing bad apples, has little long-term effect if the basic conditions that people work under are left unamended.

A ‘bad apple’ is often just a scapegoat that makes people feel better by giving them a focus for blame. Real improvement in safety comes from fixing the system, not from getting rid of employees who were forced to work within a problematic one.

‘Mom, are we going to die today? Why won’t you answer me?’ – False Nuclear Alarm in Hawaii Due to User Interface

Image from the New York Times

The morning of January 13th, people in Hawaii received a false alarm that the island was under nuclear attack. One of the messages people received was via cell phone, and it said: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” Today, the Washington Post reported that the alarm was due to an employee pushing the “wrong button” when trying to test the nuclear alarm system.

The quote in the title of this post is from another Washington Post article where people experiencing the alarm were interviewed.

To sum up the issue: the alarm is triggered by choosing an option from a drop-down menu, which had options for “Test missile alert” and “Missile alert.” The employee chose the wrong option and, once it was chosen, the system had no way to reverse the alarm.

A nuclear alarm system should be held to particularly high usability requirements, but this system didn’t even conform to Nielsen’s 10 heuristics. It violates:

  • User control and freedom: Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
  • Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
  • Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
  • Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
And those are just the violations I could identify from reading the Washington Post article! Perhaps a human factors analysis will become mandated for these systems, as it already is for FDA-regulated medical devices. A minimal sketch of a confirmation-and-undo flow that addresses the “user control and freedom” and “error prevention” heuristics follows below.
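
The sketch is purely illustrative and is not a description of the Hawaii system: the type names, the typed “LIVE” confirmation, and the 60-second cancellation window are all assumptions made for the example.

```python
# Illustrative sketch only: this is NOT the Hawaii system. The type names,
# functions, and 60-second grace period are assumptions for the example.
import time
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, Optional


class AlertType(Enum):
    DRILL = "DRILL - internal exercise, no public message is sent"
    LIVE = "LIVE - a real warning goes to every phone in the state"


@dataclass
class PendingAlert:
    alert_type: AlertType
    issued_at: float = field(default_factory=time.time)
    cancelled: bool = False


def request_alert(alert_type: AlertType,
                  confirm: Callable[[str], str]) -> Optional[PendingAlert]:
    """Error prevention: spell out the consequence and require a typed confirmation."""
    if alert_type is AlertType.LIVE:
        prompt = f"You are about to issue: {alert_type.value}. Type LIVE to proceed: "
        if confirm(prompt).strip() != "LIVE":
            return None  # mismatched confirmation aborts the action
    return PendingAlert(alert_type=alert_type)


def cancel_alert(pending: PendingAlert, grace_period_s: float = 60.0) -> bool:
    """User control and freedom: a clearly marked undo within a grace period."""
    if time.time() - pending.issued_at <= grace_period_s:
        pending.cancelled = True
    return pending.cancelled
```

Even this toy version makes the consequence of each option explicit at the moment of action and leaves a clearly marked way out, which is exactly what the reported interface lacked.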

    [humanautonomy.com] Dr. Mica Endsley: Current Challenges and Future Opportunities In Human-Autonomy Research

    We had a chance to interview Dr. Mica Endsley about her thoughts on autonomy.

    The social science research that we cover in this blog is carried out by a multitude of talented scientists across the world, each studying a different facet of the problem. In our second post in a new series, we interview one of the leaders in the study of the human factors of autonomy, Dr. Mica Endsley.

    Down on the farm: Human factors psychologist Margaux Ascherl optimizes technology to make farming more efficient

    Complementing the previous post about applied psychology, this new article dives into how one human factors PhD, Margaux Ascherl, is working to make farming more efficient with technology (she also happens to be my former student!):

    The world’s population of 7.3 billion is predicted to grow to 9.7 billion by 2050, according to the Global Harvest Initiative. To feed all those people, global agricultural productivity must increase by 1.75 percent annually.

    One person working to drive this increase is Margaux Ascherl, PhD, user experience leader at John Deere Intelligent Solutions Group in Urbandale, Iowa. John Deere recruited Ascherl in late 2012 while she was finishing her PhD in human factors psychology at Clemson University. Five years later, she now leads a team responsible for the design and testing of precision agriculture technology used in John Deere equipment.

    Ascherl spoke to the Monitor about what it’s like to apply psychology in an agricultural context and how her team is helping farmers embrace new technology to feed the world.

    Human-Robot/AI Relationships: Interview with Dr. Julie Carpenter

    Over at https://HumanAutonomy.com, we had a chance to interview Dr. Julie Carpenter about her research on human-robot/AI relationships.

    As the first post in a series, we interview one of the pioneers in the study of human-AI relationships, Dr. Julie Carpenter. She has over 15 years of experience in human-centered design and human-AI interaction research, teaching, and writing. Her principal research is about how culture influences human perception of AI and robotic systems, and the associated human factors such as user trust and decision-making in human-robot cooperative interactions in natural use-case environments.

    Throwback Thursday: A model for types and levels of automation [humanautonomy.com]

    This week’s Throwback Thursday post (next door, at humanautonomy.com) covers another seminal paper in the study of autonomy:

    This is our second post in our “throwback” series. In this post, I will take you through an article written by some of the best in the human factors and ergonomics field: the late Raja Parasuraman, Tom Sheridan, and Chris Wickens. Though several authors have introduced the concept of automation being implemented at various levels, for me this article nailed it.

    Throwback Thursday: The Ironies of Automation [humanautonomy.com]

    My third job (in addition to being a professor and curating this blog) is working on another blog with Arathi Sethumadhavan focused on the social science of autonomy and automation. You can find us over here.

    Occasionally, I will cross-post items that might be of interest to both readerships.  Over there, we’re starting a new series of posts called Throwback Thursdays where we go back in time to review some seminal papers in the history of human-automation interaction (HAI), but for a lay audience.

    The first post discusses Bainbridge’s 1983 paper discussing the “Ironies of Automation”:

    Don’t worry, our Throwback Thursday doesn’t involve embarrassing pictures of me or Arathi from 5 years ago.  Instead, it is more cerebral.  The social science behind automation and autonomy is long and rich, and despite being one of the earliest topics of study in engineering psychology, it has even more relevance today.

    In this aptly titled paper, Bainbridge discusses, back in 1983(!), the ironic things that can happen when humans interact with automation. The words of this paper ring especially true today, when the design strategy of some companies is to consider the human as an error term to be eliminated.

    Did a User Interface Kill 10 Navy Sailors?

    I chose a provocative title for this post after reading the report on what caused the wreck of the USS John McCain in August of 2017. In summary: the USS John McCain was in high-traffic waters when the crew believed they had lost steering control. Despite attempts to slow or maneuver, the ship was hit by another large vessel. The bodies of 10 sailors were eventually recovered, and five others were injured.

    Today the Navy released its final report on the accident. After reading it, it seems to me the report blames the crew. Here are some quotes from the official Naval report:

    • Loss of situational awareness in response to mistakes in the operation of the JOHN S MCCAIN’s steering and propulsion system, while in the presence of a high density of maritime traffic
    • Failure to follow the International Nautical Rules of the Road, a system of rules to govern the maneuvering of vessels when risk of collision is present
    • Watchstanders operating the JOHN S MCCAIN’s steering and propulsion systems had insufficient proficiency and knowledge of the systems

    And a rather devastating:

    In the Navy, the responsibility of the Commanding Officer for his or her ship is absolute. Many of the decisions made that led to this incident were the result of poor judgment and decision making of the Commanding Officer. That said, no single person bears full responsibility for this incident. The crew was unprepared for the situation in which they found themselves through a lack of preparation, ineffective command and control and deficiencies in training and preparations for navigation.

    Ouch.

    Ars Technica called my attention to an important reason for the accident that the report never specifically calls out: the poor feedback design of the control system. I think it is a problem that the report focused on “failures” of the people involved rather than the design of the machines and systems they used. After my reading, I would summarize the reason for the accident as follows: “The ship could be controlled from many locations. This control was transferred using a computer interface. That interface did not give sufficient information about its current state or feedback about which station controlled which functions of the ship. This made the crew think they had lost steering control when that control had actually just been moved to another location.” I based this on information from the report, including:

    Steering was never physically lost. Rather, it had been shifted to a different control station and watchstanders failed to recognize this configuration. Complicating this, the steering control transfer to the Lee Helm caused the rudder to go amidships (centerline). Since the Helmsman had been steering 1-4 degrees of right rudder to maintain course before the transfer, the amidships rudder deviated the ship’s course to the left.

    Even this section calls out the “failure to recognize this configuration.” If the system is designed well, one shouldn’t have to expend any cognitive or physical resources to know from where the ship is being controlled.

    Overall, I was surprised at the tone of this report regarding crew performance. Perhaps some of it is deserved, but without a hard look at the systems the crew used, I don’t have much faith that we can avoid future accidents. The human factors field began with Fitts and Jones in 1947, when they insisted that the design of the cockpit created accident-prone situations. This went against the belief of the time, which was that “pilot error” was the main factor. It ushered in a new era, one where we try to improve the systems people must use as well as their training and decision making. The picture below shows the helm of the USS John S McCain, commissioned in 1994. I would be very interested to see how it appears in action.

    US Navy (USN) Boatswain’s Mate Seaman (BMSN) Charles Holmes mans the helm aboard the USN Arleigh Burke Class Guided Missile Destroyer USS JOHN S. MCCAIN (DDG 56) as the ship gets underway for a Friends and Family Day cruise. The MCCAIN is getting underway for a Friends and Family Day cruise from its homeport at Commander Fleet Activities (CFA) Yokosuka Naval Base (NB), Japan (JPN). Source: Wikimedia Commons

    “Applied psychology is hot, and it’s only getting hotter”…and one more thing

    The American Psychological Association’s member magazine, the Monitor, recently highlighted 10 trends in 2018.  One of those trends is that Applied Psychology is hot!

    In this special APA Monitor report, “10 Trends to Watch in Psychology,” we explore how several far-reaching developments in psychology are transforming the field and society at large.

    Our own Anne McLaughlin was quoted in the article, along with other prominent academics and industry applied psychologists:

    As technology changes the way we work, play, travel and think, applied psychologists who understand technology are more sought after than ever, says Anne McLaughlin, PhD, a professor of human factors and applied cognition in the department of psychology at North Carolina State University and past president of APA’s Div. 21 (Applied Experimental and Engineering Psychology).

    Also quoted was Arathi Sethumadhavan:

    Human factors psychologist Arathi Sethumadhavan, PhD, has found almost limitless opportunities in the health-care field since finishing her graduate degree in 2009. Though her background was in aviation, she found her human factors skills transferred easily to the medical sector—and those skills have been in demand.

    One more thing…

    Arathi and I have recently started a new blog, Human-Autonomy Sciences, devoted to the psychology of human-autonomy interaction.  We hope you visit it and contribute to the discussion!

    Looking for a more reliable way of analyzing your email campaigns?

    Here are four facts about email marketing:

    • The vast majority of online sales companies still use transactional emails as part of their marketing efforts.
    • Email marketing campaigns have open rates of up to 40-50% and click rates of around 10-20%.
    • Email marketing campaigns are here to stay.
    • The more people get used to online marketing campaigns, the cleverer you have to be with the content and timing of the emails you send to clients.

    So… Just between us…

    Have you ever conducted an email marketing campaign that didn’t have the desired results and kept wondering what went wrong?

    And even worse, have you ever had to stop a campaign as it did more harm than good?

    Yes, these things actually do happen!

    That’s why it is essential to analyze in real-time which emails are successful and which are not.

    The Old Way

    Do these metrics look familiar?

    • Open rate
    • Click rate
    • Conversion rate

    They should – as most companies use these classical metrics in their analysis.

    In my experience, this is an oversimplified way of looking at things. It doesn’t give you the right answers, nor does it produce the desired results.

    Useful as it might have been some time ago, this method is simply not refined enough to provide you with all the data you need in order to make informed decisions regarding your email campaigns.

    The New Way

    After years of working in the field of analytics, I have come up with the following measurement framework:

    • Target audience
    • Audience that opened the email
      • How many of them converted
    • Audience that didn’t open the email
      • How many of them converted

    By following this approach, you will get more detailed information on which email campaigns work and which don’t.
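
    As a rough illustration of this framework, here is a minimal sketch of how the segmented numbers could be computed. The field names (`opened`, `converted`) and the export format are assumptions about how your email platform reports campaign data, not a reference to any particular tool.

```python
# Minimal sketch of the segmented framework above. The field names are
# assumptions about how a campaign export might look; adapt to your schema.
from typing import Iterable, TypedDict


class Recipient(TypedDict):
    email: str
    opened: bool
    converted: bool


def campaign_report(recipients: Iterable[Recipient]) -> dict:
    people = list(recipients)
    openers = [r for r in people if r["opened"]]
    non_openers = [r for r in people if not r["opened"]]

    def conversion_rate(group: list) -> float:
        # converted members divided by group size (0.0 for an empty group)
        return sum(r["converted"] for r in group) / len(group) if group else 0.0

    return {
        "target_audience": len(people),
        "opened": len(openers),
        "conversion_rate_openers": conversion_rate(openers),
        "not_opened": len(non_openers),
        "conversion_rate_non_openers": conversion_rate(non_openers),
    }
```

    Comparing the two conversion rates tells you whether the email itself is doing any work: if people who never opened it convert at roughly the same rate, the content or the timing (see below) needs rethinking.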

    Content and Timing

    The main factors that come into play when users decide whether or not to purchase your product are the content and the timing of the email you send.

    Content-wise, the recipe for success is no surprise: find an eye-catching title and content to match it, and you’re all set.

    Timing, however, is another issue…

    Software companies have it easier than e-commerce. Here is a typical scenario:

    After researching onboarding timing, a software company might discover that 80% of users usually take less than 40 minutes to onboard. That 40-minute mark would be the best time to send the first email.

    Send the email before the 40-minute benchmark, and your potential customers might be annoyed and abandon the purchase.

    Send the email after the 40-minute benchmark, and your user will already have forgotten about your product.
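
    To make the timing logic concrete, here is a minimal sketch under the assumptions above: estimate the benchmark (the 80th percentile of observed onboarding times, roughly 40 minutes in the example) and schedule the first email at that offset after signup. The function names are invented for illustration.

```python
# Hypothetical timing sketch: the function names and the 80th-percentile
# benchmark come from the example above, not from a standard recipe.
from datetime import datetime, timedelta
from statistics import quantiles


def onboarding_benchmark(durations_min: list[float], pct: int = 80) -> float:
    """Duration (in minutes) by which `pct` percent of users have onboarded."""
    return quantiles(durations_min, n=100)[pct - 1]


def first_email_time(signup_at: datetime, durations_min: list[float]) -> datetime:
    """Schedule the first follow-up email at the onboarding benchmark."""
    return signup_at + timedelta(minutes=onboarding_benchmark(durations_min))
```

    In practice the benchmark would be re-estimated as new onboarding data comes in, since onboarding times drift as the product changes.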

    Want to talk more about measuring email campaigns? Leave a comment or send me an email.