Our third guest blog for this year's Incident Management Month is by Robert Sams from dolphyn.com.au. In this article, Sams delves into incident investigation, discussing the limits of a mechanistic approach. The article was originally posted on SafetyRisk.net on the 5th of November 2016.
An incident occurs at work and the well-entrenched procedures quickly kick into play: first aid provided, forms filled in, reports to management, find the cause (sometimes causes), change operating procedures (they must have been wrong), issue a Safety Alert and conduct a Toolbox Talk, include the information in the monthly report, then file.
Our systems and standard processes, some argue even the legislation, often dictate that this is what we should do, so we just do it. Sound familiar?
The question is, if following this type of systematic approach is the only way we go about things following an incident, do we limit the opportunity we have to learn from the incident? Could it be that the more mechanistic our response, the less we ‘think’ and reflect humanly about what has happened?
Traditional approaches used within the safety profession following an incident typically use a mechanistic method of review. For example, Taproot, ICAM, Ishikawa (Fishbone) diagrams and the 5 Whys are all popular tools used by the profession to guide incident investigations.
These tools differ in a number of ways, but the one thing they have in common is that they demand a systematic and mechanistic approach to investigation. What is more, all of these approaches assume that social-psychological factors play no necessary part in human decision making. Each of these systems of investigation not only proposes a ‘method’ but carries its own underlying methodology, a philosophical bias about humans. The majority of incident investigation approaches understand the human as a rational being and have nothing in their mechanistic structure to capture the way social-psychological factors shape judgement. This has the effect of priming users to focus on ‘things’ and on the quest to find ‘root causes’, as if incidents were solely an engineering process. Mechanistic and systematic approaches can be useful for assessing engineering and forensic factors, but can they really understand and explain the nature of the human decision making associated with an incident?
While some of the tools listed above include elements of ‘human factors’, they typically focus on how humans made ‘errors’, ‘mistakes’, ‘lapses’ and ‘failures’. Much has been written about human factors, notably by Reason (1990), who devised the “Swiss Cheese Model”; yet attributing ‘errors’, ‘mistakes’, ‘lapses’ or ‘failures’ to the people involved in an incident primes those investigating its circumstances to think negatively about the incident and about the human. Worse, because the mechanistic approach assumes a rational, logical human, it cannot account for the non-rational factors that influence decision making. The rational assumption treats any mistake or error as having an element of intent or ‘fault’. That is, something must have gone ‘wrong’, someone didn’t do what was expected of them, so we must find fault. Once we have found fault, we can come up with recommendations and controls; job done, right? How satisfied would any organisation be if the investigation found no fault?
The effect of this is that incident investigations, or at least the results of them, become predictable. I wrote about heuristics in my last piece (https://safetyrisk.net/i-wish-i-had-thought-of-that/), in particular the “availability heuristic” and the impact it can have on risk assessment. The same effects can occur in incident investigation. Have you ever been investigating an incident when a familiar pattern became obvious to you, something like: the person didn’t follow the procedure, they hadn’t been trained, the machine wasn’t serviced; the list goes on. What impact did the availability heuristic have when you were conducting that investigation? Did it limit your thinking? Did it do what a heuristic is designed to do and take you on a ‘mental short-cut’ that led you to a conclusion, limited your thinking and, with that, limited your opportunity to learn?
Unfortunately, a mechanistic approach to investigating incidents constrains investigative thinking, because factors such as the availability heuristic push us to ‘just get to the bottom of it’ and find those root causes as quickly as possible.
Why is it then that we continue to adopt only mechanistic approaches when reviewing incidents? Perhaps one of the reasons that the person did not follow the procedure was that they were distracted – by a noise, because they were thinking about the footy, because they were bored in their job, stressed or running on autopilot. How will we discover these things if we only ask mechanistic (and often closed) questions about procedures, training, maintenance, and errors linked to process failure? Imagine the information we could learn if, in addition to looking at systemic processes, we had an open conversation with the person, or persons, involved in an incident. What if we didn’t focus just on asking questions such as five whys, six whens, four hows, or whatever process is in place, and instead talked less and listened more? Perhaps one outcome of such an approach would be that people open up: they would not feel like they are being interrogated, they would not become defensive, and they might tell us what was really going on rather than telling the investigator what they think the investigator wants to hear.
Of course, taking a non-mechanistic approach requires courage and imagination. The focus would be on the conversation, not just the system, and we may need to overcome some fears of our own. We may even realise how much of our own subjectivity we bring into our investigations, and so become much more aware of the types of conversations we have with people. What if we are talking with a young person following an incident and they tell us that they were thinking about the date they have planned for Saturday night, and that’s one of the reasons why they didn’t hit the right button? How do we respond to that? Under the rationalist assumption this is hard to explain, so we find fault: they ‘didn’t have their mind on the job’.
I have talked to many safety and HR professionals who say “just don’t go there”. I’ve had union officials say “you can’t ask about that; what happens outside of work has nothing to do with what happens here”. Concerns about the response we might get when we engage in open dialogue can limit our opportunity to learn. Imagine if the police were to adopt this approach. There aren’t too many crooks who wander into police stations and confess their sins. Good detectives engage in good dialogue, ask good open questions, and listen while people talk. This is how they discover so much of what they are looking for: they start with questions, gather information, and open up a conversation rather than demanding a confession.
Systems can play a role when we are trying to understand the factors that contributed to an incident, but we need to recognise their limitations. The standard approach of gathering facts and reviewing procedures, training records and maintenance documents will help us understand some of the contributing factors. However, if we continue to use only mechanistic methods when reviewing incidents, we will continue to limit our learning. Instead, we need to suspend our own agendas, be aware of our own biases, recognise the impact of heuristics, talk more with people, engage in conversation and, most importantly, listen to what people are saying. Let people tell us what really went on, without fear of being seen to have failed or to be “at fault”. If we follow this approach, perhaps the contributing factors will become obvious not only to us but, better still, to the person, or persons, involved in the incident, and they will learn too.
Sams' newest book, 'Social Sensemaking', seeks to open up the question of "why we do what we do". The book was inspired by the desire to better understand people and risk.