Wednesday, January 27, 2016

Your Workers Make You Look Good

Recently an organization we know underwent what many organizations are going through and will continue to go through: a time of financial turmoil. They needed to cut costs, and to do it fast. The solution? Layoffs. So word came down, and at one of their local plants a large portion of the engineering staff was cut.

Unfortunately this was just a few short weeks before a major plant shutdown. The rest of the plant staff, dealing with the sting of seeing friends leave and more than a little worried about their own jobs, banded together to share the project and contractor management load left by the layoffs. They held meetings with employees at all levels to identify ways to accommodate the reduced staff without sacrificing the safety and reliability of the plant. Corporate had created a gap, and the workers had no choice but to fill it.

The irony of all of this is that, at least for this plant, the staff cutbacks will likely appear successful. The plant will operate at reduced costs without any significant reductions in safety and reliability. Those responsible will receive pats on the back, but almost no one will wonder why they were successful. Likely they would be shocked and perhaps offended to hear that the success they celebrate may, at least in part, be not because of their efforts, but in spite of them.

If you’re like the majority of the population, you probably think you’re above average at your job (and you probably are, it’s just those other idiots who are fooling themselves!). We like to think that we add value to our organizations and that, as safety professionals, we are doing our part to help our workers create safety day in and day out.

What if, though, we were like those corporate decision-makers? What if we are not as effective as we’d like to think we are? What if the success we see is simply because our workers are making us look good by finding a way to fill the gaps we created?

Before you protest that these sorts of things happen to others, not to you, ask yourself: how would you know? The significant disadvantage we have in the safety profession is that we are often terrible at measuring our own success. If we measure success by reductions in injuries, we are subject to all sorts of statistical rules that merely muddy the water. For example, one organization saw a spike in injuries after a period of decline. They responded quickly, implementing many measures designed to build a safety culture and have employees “work safely”. The result? A reduction in injuries! Huzzah! Everyone was pleased with the response and the subsequent result, blissfully unaware of the concept of regression toward the mean, which would predict that, had they done nothing, they likely would have gotten the same result.
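To make the point concrete, here is a minimal sketch in Python, using purely made-up numbers (a hypothetical plant averaging 20 recordable injuries a year), showing how often the year after a spike comes in lower even when nothing about the underlying system changes:

```python
import random

random.seed(42)

def poisson(mean):
    # Knuth's algorithm for drawing a Poisson-distributed count
    limit, k, p = pow(2.718281828459045, -mean), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= limit:
            return k - 1

# Hypothetical plant: injuries fluctuate around a constant mean of 20 per year.
# Nothing about the underlying "system" ever changes between years.
trials, eligible, dropped = 10_000, 0, 0
for _ in range(trials):
    years = [poisson(20) for _ in range(10)]
    spike = years.index(max(years))        # the year everyone panics about
    if spike < len(years) - 1:             # need a following year to compare
        eligible += 1
        if years[spike + 1] < years[spike]:
            dropped += 1

print(f"Injuries fell the year after the spike in {100 * dropped / eligible:.0f}% "
      "of simulated plants, with no intervention at all.")
```

The numbers and the Poisson assumption are illustrative only; the point is simply that an extreme year tends to be followed by a more ordinary one, whether or not anyone intervenes.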

Or take, for example, the client that mandated a permit-to-work system despite the adamant protests of some of the workers and managers. They reasoned that the complaints that it would make work more difficult were just complaints and that, over time, people would get used to the permits. Sure enough, the protests died down after a while, and the safety staff declared success. They were right that the workers got used to the system: they got used to it by highlighting the useful aspects of the permit and finding ways around the parts that were least useful. They made it work without the help of the safety staff, and the safety staff got the credit for a job well done.

Unfortunately this is a common story, especially in organizations where there is a persistent belief that safety is something we have to force workers to do. It is a little ironic that the first organization is just about to roll out a behavior-based safety program that focuses on getting individual workers to “work safely”. Many organizations, and most safety initiatives like this, are built around the idea that workers are a problem we must control.

Our workers, though, are living, breathing, thinking human beings. If we do something that affects them, they are not passive observers. Rather, they will find ways that they believe will help them achieve success and avoid failure. Put another way, they will find a way to get the job done safely enough. Frankly, they are very good at this. But we will never see it, because we are blinded by our belief that they can’t be trusted. We only see our problems go away, and we assume that this is because of our efforts. In fact, though, it is often despite our efforts. Our workers are experts at snatching victory from the jaws of the defeat and failure that we regularly hand them.

How can we move past this? The first step is to look inward. Do you see your workers as a problem to control or a solution to harness? What do your actions say you believe? Look at your accident investigation reports – what do the corrective actions you’ve identified say you think the problem is? We need to move past this idea that accidents happen because people are untrustworthy. Success happens because of your employees, not in spite of them. Trust them.

The next step is to look outward. Get out and learn from your employees. Go figure out how work works. What are the difficulties that your employees face? What challenges do they have to overcome to get the job done? What realities do they simply have to live with? These are sources of risk that get missed in the typical hazard-hunt inspection. It’s a bit of a paradigm shift, but if you start to focus on improving work outcomes, you will find that you improve safety outcomes at the same time.


Finally, be collaborative. If you have a problem you think needs solving, get your employees involved in the process. Not only will your employees help you identify better ways to solve the problem, they may even point out that the problem you were trying to solve isn’t really the problem at all. As Daniel Hummerdal points out, your employees have a capacity greater than their job description. That’s a resource in your organization that you are already paying for and not using. In a world of tight budgets and stretched resources, how can we afford not to harness our people’s unique talents to make our organizations better?

Thursday, January 14, 2016

How Would You Know If You Were Wrong?


Many safety professionals are familiar with the 2005 BP Texas City Refinery disaster, which killed 15 people and injured almost 200. There are many interesting aspects of the event that provide learning for other organizations, and we use it as a discussion point in many of the courses we teach. A common response to the event is to wonder how BP as an organization, the managers at the plant, and the operators on the ground could have missed key warning signs of danger. The failing safety equipment, the track record of serious events and near misses: how could they have proceeded in spite of information telling them disaster was imminent?

An interesting and not well-known finding from the investigations that took place after the event concerns the Process Safety Management regulations in the United States, which require a Process Hazard Analysis (PHA) to be performed on hazardous processes. Although the process where the disaster happened was subject to these regulations, no analysis was conducted on overfilling the tower, because it was assumed to be impossible to overflow it. That same tower is the one that overflowed, leading to the loss of containment and the subsequent explosion.

Now, many will be quick to cast judgment on BP for not conducting the analysis, but before we head down that road, consider what effect not conducting the analysis would have had on the subsequent decisions of managers and operators. Their official risk assessment process had determined that overflowing the tower was not even risky enough to warrant formal assessment. So when plant managers didn’t allocate enough money to fix safety-critical equipment like overflow alarms, they were doing so with the understanding that completely overflowing the tower had been judged so unlikely as to be impossible. Why spend money to prevent something that was next to impossible? And when the operators violated procedures by filling the tower past the prescribed level, they did so with the understanding that it was safe to do so. After all, a complete overflow was next to impossible. It was safe.

Or so they thought.

This is part of a long stream of disasters where, in retrospect, we can look back and point to faulty assumptions: people thought something was safe, acted on that judgment, and were wrong. Sometimes the event is judged so unlikely that it is not even considered, as in the case of the Aberfan disaster, where a landslide of the spoil tip from a nearby mine engulfed a school and part of the village, killing 144 people, including 116 children. A tip slide such as the one that happened wasn’t considered possible, so no regulations or other safeguards were put in place against it. In other cases, a situation is formally evaluated through a risk assessment process but determined to be “safe” or an “acceptable risk”, as in both of the shuttle disasters NASA suffered (here and here).

In retrospect it is easy for us to point to the signals that were missed. What did the people involved not see that they should have? What did they not pay enough attention to that they should have? But the phrase “should have” is a tricky one, since it is an opinion masquerading as a fact. Sure, we can say that if they had wanted to prevent the disaster they “should have” done this or that, but let’s assume that no one involved in any accident wanted the accident to happen (otherwise it wouldn’t be an accident). A better question is: why didn’t they do what they “should have” done?

Unfortunately, many immediately point to flaws in the people involved as an explanation, either mental (they are stupid) or moral (they are greedy and evil). This is a form of distancing through differencing that we have discussed in previous posts and won’t discuss more here.

What if they weren’t stupid or evil, though? What if they were normal people like us, doing things they thought were safe and acceptable, and not doing the things they thought were unsafe and unacceptable? But they were wrong. Doesn’t that mean that we could be wrong too? Put another way, these people were operating in risky environments with an idea, a model, of how risky their tasks were, and they believed the tasks were safe. But their risk model was wrong.

How would you know that your risk model was wrong? Certainly we can wait for something bad to happen, but no one wants that. Still, it seems that the safety profession is structured in such a way that we simply don’t address this. So much of what we do is based on the findings of accident investigations, but with so many disasters pointing to people acting as if something was safe when it wasn’t, how many of us have implemented a means to identify these faulty assumptions in our organizations? A means or mechanism that doesn’t take past success as a guarantee of future safety, that challenges old assumptions about what is acceptable. How many of us have that in our organizations?

Do not make the mistake of thinking this is as easy as asking a question, either. These assumptions and models of risk are embedded deep in the culture of an organization, so they are taken for granted and go unquestioned. The decisions made about risk are rational to the people involved. You can ask, but you will likely be given a very good, reasonable answer.

So what can we do, then? There are a number of potential solutions available to organizations; we will share one in particular. Often after a disaster the investigation identifies people in the organization, usually ones low on the organizational hierarchy, who had concerns about the risks but whose concerns were not listened to (see the above paragraph). What if we listened to those people a bit more?

As an example, we leave you with the below video of a construction job in our area. At least watch the first couple of minutes of it. Luckily no one was seriously injured, but it easily could have been worse. Note as well that the company involved has a strong emphasis on safety and a very low accident rate. There was even a safety representative on the job site. But if you watch the video you will quickly see that employees were expressing frustration and fear about having to do such a risky job BEFORE the event. Why weren’t these concerns listened to in a company that prides itself on having an excellent “safety culture”? What assumptions about the roles of managers and employees may have made it hard to take these concerns seriously? Could something similar happen in our organizations? Do the same assumptions about roles and risks exist in your workplace? Could this be a source of challenging bad assumptions about the risks your workers face?