Understanding Incidents: Three Analytical Traps

Dr. Johan Bergström, who leads the MSc program in Human Factors and Systems Safety at Lund University (I am an alumnus), has a short ~7-minute video discussing three common analytical traps that incident analysts and accident investigators can get caught in. They are:

1. Counterfactual reasoning
2. Normative language
3. Mechanistic reasoning

Have you seen any of these traps in the wild?

Below the video, I’ve included a transcript.

Video transcript (emphasis mine 🙂)

When Asiana Airlines flight 214 crashed on the runway of San Francisco airport on July 6, 2013, it was up to the National Transportation Safety Board of the US to investigate the causes of the accident. In their report, they focused heavily on human performance on the part of the pilots.

They ended the report with 30 conclusions about why the crash had happened. Drawing on the Rasmussian school of safety thought, I will highlight some of these conclusions and present three common traps in accident investigation.

The first trap we’re going to have a look at is counterfactual reasoning.

Counterfactual reasoning is when the investigator is discussing a case that actually never happened, like a parallel universe, if you like.

Let’s have a look at the report:

“Crew members became aware of the low airspeed and low path condition, but the flight crew did not initiate a “go around” until the airplane was below 100 feet, at which point the airplane did not have the performance capability to accomplish a go-around.”

Quotes like “…did not initiate…” and “…did not have…” are clear indications that the investigators have fallen into the counterfactual trap. Another kind of counterfactual reasoning is when investigators hypothesize different scenarios, like in the following two quotes:

“If the pilot monitoring had supervised the trainee pilots in operational service during his instructor training, he would likely have been better prepared to promptly intervene when needed to ensure effective management of the airplane’s flight path.”

And the next quote:

“If Asiana Airlines had not allowed an informal practice of keeping the pilot monitoring’s flight director on during a visual approach, the pilot monitoring would likely have switched off both flight directors, which would have corrected the unintended deactivation of automatic airspeed control.”

The problem with counterfactual reasoning is that it prioritizes an analysis of what the system did not do, and as a consequence, it ignores an analysis of why it made sense for the system to act the way it did when it did…because it did make sense to act the way it did.

It must have, right? If the action had not made sense at the moment, it would not have happened. The analysis becomes one of an imagined system in a parallel universe, rather than an analysis of the system as it actually worked at the time of the accident.

The second trap is normative language. In accident analysis, normative language is when the investigator puts his or her values into the analysis of other people’s performance.

We can read in the report:

“The flight crew mismanaged the airplane’s vertical profile during the initial approach, which resulted in the airplane being well above the desired glide path when it reached the five nautical mile point, and this increased the difficulty of achieving a stabilized approach.”

Normative language measures people’s performance based on some norm or idea of what is appropriate behavior, as if the pilots chose to fly a mismanaged vertical profile. Normative language can also include speculations as to why the crew behaved in an inappropriate manner.

Like in the following quote:

“Insufficient flight crew monitoring of airspeed indications during their approach likely resulted from expectancy, increased workload, fatigue and automation reliance.”

When falling into the trap of normative language, the investigators sound more like a judge than like a curious investigator. And this goes for judging both poor and good behavior. Here is an example of the latter:

“The flight attendants acted appropriately when they initiated an emergency evacuation upon determining there was a fire outside door 2R.”

These norms, on which the normative language is built, are typically defined in hindsight, and the analysis only makes sense from the argument that the norm was not adhered to in the event.

Well, it cannot have been. If it had been, the accident would never have happened, right? Often this norm is very vaguely defined using highly subjective notions, in this case “mismanaged,” “insufficient monitoring,” and “appropriately,” to suggest a causal link between poor behavior and the accident.

The third trap is mechanistic reasoning.

Mechanistic reasoning suggests that accidents are caused by malfunctioning components in a basically well-functioning, reliable, and safe machine. As long as all components are in place and working as they should, the system will be safe.

In the report, we can read:

“The following were not factors in the accident: flight crew certification and qualification; flight crew behavioral or medical conditions or the use of alcohol or drugs; airplane certification and maintenance; pre-impact structural, engine, or system failures; or the air traffic controller’s handling of the flight.”

This passage is interesting because it essentially lists all the system components that were working reliably at the time of the event. From the view of mechanistic reasoning, this makes sense: if accidents are caused by malfunctioning system components, then identifying all the functioning system components and excluding them one by one as contributing factors will eventually lead to the malfunctioning component, or components.

Here is another conclusion, listing several broken system components related to human behavior.

“The delayed initiation of a go around by the pilot flying and the pilot monitoring after they became aware of the airplane’s low path and airspeed likely resulted from a combination of surprise, non-standard communication, and role confusion.”

And Eureka – the malfunctioning component has been identified!

To no great surprise, it was once again the human that was constructed as the malfunctioning component in a system of reliably functioning technological components. To contemporary safety science, this mechanistic reasoning does not make sense. Instead, we need to understand that accidents are the result of often well-functioning components interacting in unexpected ways.

What we then need to understand is why it made sense for the Asiana pilots to do exactly what they did, given their expectations, knowledge, and focus of attention. They did not intervene until the system was beyond the point at which it could be saved.

And that was because they experienced a system that was behaving as they expected. To understand why this was the case, we can look into the research describing automation surprises: situations where operators are surprised by the behavior of the automation, asking questions like, “What is it doing now? Why is it doing this? And what is it going to do next?”

Those are the three main traps of accident analysis, but I will leave you with a fourth, bonus trap. And this is cherry-picking: revealing only the data which fits the point that you are trying to make.

It’s a little bit like what I’ve just done, showing only 8 of the 30 conclusions on the National Transportation Safety Board’s list of causal factors.
