Thursday, January 14, 2016

How Would You Know If You Were Wrong?

Many safety professionals are familiar with the 2005 BP Texas City Refinery disaster, which killed 15 people and injured nearly 200. The event offers many lessons for other organizations, and we use it as a discussion point in many of the courses we teach. A common reaction is to wonder how BP as an organization, the managers at the plant, and the operators on the ground could have missed key warning signs of danger: the failing safety equipment, the track record of serious events and near misses. How could they have proceeded in spite of information telling them disaster was imminent?

An interesting and not well-known finding from the investigations that followed is that, although the process where the disaster occurred was subject to the Process Safety Management regulations in the United States, which require a Process Hazard Analysis (PHA) for hazardous processes, no analysis was conducted on overfilling the tower because it was assumed that overflowing it was impossible. This was the same tower that overflowed, leading to the loss of containment and the subsequent explosion.

Now, many will be quick to cast judgment on BP for not conducting the analysis, but before we head down that road, consider what effect skipping the analysis would have on the subsequent decisions of managers and operators. Their official risk assessment process had determined that overflowing the tower was not even risky enough to warrant formal assessment. So when plant managers didn’t allocate enough money to fix safety-critical equipment like overflow alarms, they did so with the understanding that completely overflowing the tower had been evaluated and found to be so unlikely as to be impossible. Why spend money to prevent something that was next to impossible? And when the operators violated procedures by filling the tower past the prescribed level, they did so with the understanding that it was safe to do so. After all, a complete overflow was next to impossible. It was safe.

Or so they thought.

This is part of a long stream of disasters where, in retrospect, we can look back and point to faulty assumptions: people thought something was safe, acted on that judgment, and were wrong. Sometimes the event is considered so unlikely that it is not even analyzed, as in the Aberfan disaster, where a landslide from a colliery spoil tip engulfed part of a town, killing 144 people, including 116 children. A slide of the spoil tip such as what happened wasn’t even considered possible, so no regulations or other safeguards were put in place against it. In other cases, a situation is formally evaluated through a risk assessment process but determined to be “safe” or an “acceptable risk”, as in both of the Space Shuttle disasters NASA suffered.

In retrospect it is easy for us to point to the signals that were missed. What did the people involved not see that they should have? What did they not pay enough attention to that they should have? But the phrase “should have” is a tricky one, since it is an opinion masquerading as a fact. Sure, we can say that if they wanted to prevent the disaster they “should have” done this or that, but let’s assume that no one involved in any accident wanted the accident to happen (otherwise it wouldn’t be an accident). A better question is: why didn’t they do what they “should have” done?

Unfortunately, many immediately point to flaws in the people involved as an explanation, either mental (they are stupid) or moral (they are greedy and evil). This is a form of distancing through differencing that we have discussed in previous posts and won’t discuss further here.

What if they weren’t stupid or evil, though? What if they were normal people like us, doing things they believed were safe and acceptable, and avoiding things they believed were unsafe and unacceptable? But they were wrong. Doesn’t that mean that we could be wrong too? Put another way, these people were operating in risky environments with an idea, a model, of how risky their tasks were, and they believed the tasks were safe. But their risk model was wrong.

How would you know that your risk model was wrong? Certainly we could wait for something bad to happen, but no one wants that. Yet the safety profession seems structured in a way that simply doesn’t address this question. So much of what we do is based on the findings of accident investigations, but with so many disasters pointing to people acting as if something was safe when it wasn’t, how many of us have implemented a means to identify these faulty assumptions in our organizations? A mechanism that doesn’t take past success as a guarantee of future safety, and that challenges old assumptions about what is acceptable. How many of us have that in our organizations?

Do not make the mistake of thinking this is as easy as asking a question, either. These assumptions and models of risk are embedded deep in the culture of an organization, so they are taken for granted and go unquestioned. The decisions made about risk are rational to the people involved. You can ask, but you will likely be given a very good, reasonable answer.

So what can we do then? There are a number of potential solutions available to organizations; we will share one in particular. Often, after a disaster, the investigation identifies people in the organization, usually low on the organizational hierarchy, who had concerns about the risks but whose concerns were not listened to (see the paragraph above). What if we listened to those people a bit more?

As an example, we leave you with the video below of a construction job in our area; at least watch the first couple of minutes. Luckily no one was seriously injured, but it easily could have been worse. Note as well that the company involved has a strong emphasis on safety and a very low accident rate; there was even a safety representative at the job site. But if you watch the video you will quickly see that employees were expressing frustration and fear about having to do such a risky job BEFORE the event. Why weren’t these concerns listened to in a company that prides itself on having an excellent “safety culture”? What assumptions about the roles of managers and employees may have made it hard to take these concerns seriously? Could something similar happen in our organizations? Do the same assumptions about roles and risks exist in your workplace? Could listening to such concerns be a way of challenging bad assumptions about the risks your workers face?
