Human error is often mentioned as a causal factor in accidents in a variety of industries. But what do we mean by human error? Even though the notion of human error has been around since the beginning of the 1900s, when researchers talked about accident-prone people or the unsafe acts of workers, it was not until the Three Mile Island nuclear meltdown accident in 1979 that it really became a target of scientific study and intervention in the broader safety sciences. Soon after the Three Mile Island accident, several academic conferences gathered the scientific elite of safety science and error studies to specifically discuss the problem of human error.
And soon, two highly different schools of thought emerged. We can call them the cognitive psychological school and the joint cognitive systems school. In safety science, the cognitive psychological school of human error is perhaps best represented by Professor James Reason.
Back in the 1970s and 80s, Reason, who was a psychologist, studied the errors people made in their daily lives. By studying diary notes in which people described their errors, he developed a theory of absent-minded slips. The Three Mile Island accident got James Reason interested in the errors made by operators of high-risk processes, and he started analyzing these errors using much of the theory he had already developed for everyday slips.
Reason's cognitive psychological school sees error as a social fact of life, and in his book Human Error, published in 1990, he defined four subcategories, or types, of errors or unsafe acts. They are slips, which are failures of attention; lapses, which are failures of memory; mistakes, which according to Reason can be rule-based or knowledge-based; and finally violations of rules or procedures. Reason's view is that categorizing human error can help to explain accidents. Human error can be labeled a cause, and the slip, lapse, mistake, or violation are psychological-cognitive models used to explain behavior. While Reason developed his human error taxonomy for high-risk operations, a different school of thought formed at the Risø nuclear research center in Denmark.
Here, Professor Jens Rasmussen, together with others like Erik Hollnagel and David Woods, developed the Joint Cognitive Systems school of human error. The two schools focused on the same problem of managing increasingly complex high-risk processes, but they came to vastly different conclusions. The researchers at Risø did not use people's everyday errors or experiments in laboratory environments as their starting point. Instead, they developed a school interested in naturalistic, real-world, high-risk work. Rather than studying error as a psychological or cognitive construct, they studied error as a product of complex interactions between actors in space and time. They asked questions like: How do people and machines interact? What constrains human action? What principles should guide technological interface design so that it facilitates human understanding of system states?
It was by looking at human error from this point of view that they arrived at a radical conclusion. Human error is never the cause of an accident. Instead, human error should be seen as an attribution for other problems located deeper in or higher up in the system.
Human error might then be seen as a symptom of such problems, but never the cause. Hollnagel even went so far as to proclaim that the notion of human error is an analytical dead end. Ultimately, how to view human error in the wake of an accident is not only an academic exercise.
This analytical choice can have great implications. Depending on which story we tell about an error, let us say an unintended harm to a patient during a surgical procedure, the two schools will suggest different means for system improvement. The cognitive psychological school will target system interventions at the level of the brain and focus on motivation, selection, proceduralization of human work, and functional allocation.
That means deciding which tasks should be performed by humans and which tasks should be performed by technology. If we, on the other hand, follow the Joint Cognitive Systems school, we will rather zoom out and analyze the accident in terms of what contributed to what later looks like a human error, both in time and in hierarchy. The Joint Cognitive Systems school looks at how we configure humans and technology in their working environments to understand the constant goal conflicts they face, and the inevitable gap between work as we imagine it and work as it is actually done.
It's not until we understand such highly complex relationships in organizational time and space that we can suggest meaningful system improvements. There is no right or wrong view of human error, and maybe there never will be. We will need to make up our own minds about which stories of human error we believe and find credible in our efforts to improve our systems. Different stakeholders, such as accident investigation boards, safety managers, unions, and journalists, will take different views in the stories they tell.
So it becomes a question not only of how the story is told, but also of who tells it.