Wednesday, September 3, 2014

When Safety Makes Us Unsafe

It’s one of those provocative questions – is there such a thing as too much safety? In a sense, this is the wrong question (primarily because “safety” is such a subjective term). A better way to put it: is there a point at which adding more safety interventions not only yields diminishing returns, but actually makes things worse?

If we thought hard enough, I’m sure we could all come up with examples where safety interventions led to unintended consequences in the form of new hazards. Anyone who has worn a hard hat for any length of time has likely noticed an increase in the number of times they hit their head on things. Those who wear extra chemical protective clothing are more exposed to heat stress as a result. It’s not that these protections are or are not necessary. But we can see that, in the wrong circumstances, using them will not make you any safer, and you could argue it would make you less safe (e.g. if you were wearing a full chemical moon suit, such as the one in the picture above, while reading this blog, would that make you more or less safe?).

This concept applies to organizational interventions as well. Take, for example, the many well-meaning organizations that, in an effort to motivate employees, offer incentives when employees go a certain period without an injury. On the surface, the idea makes sense as a reasonable safety intervention. However, even the US Occupational Safety and Health Administration has come out against such programs because of their unintended consequence: motivating employees not to report injuries (and even motivating some safety professionals to play the “numbers game” and hide injuries).

These are easy and tangible examples of a broader concept that can be even more dangerous in other circumstances. We say more dangerous because when the downsides are not obvious they are harder to spot, which can also give us a false sense of security. We start to believe that our interventions are making us safe, when they may be doing nothing, or, worse, actively increasing our risk.

Take, for example, a strategy that a lot of companies adopt to show that their top management cares about safety – the policy that any reported incident automatically gets reported up the corporate chain, everyone hears about it, and the CEO (or someone similar) gets involved. Sounds like a great way to show that the organization means business when it comes to safety, right? Recently we were working in an organization with exactly that policy. When we asked how many incidents get reported, the answer was “not many.” When we were with line employees, we asked whether there had been any incidents that should have been reported – yes, there had. So an intervention designed to increase accountability and visibility for safety actually made the organization dumber, because it was missing opportunities to learn.

The problem is not a lack of good intentions. The problem is that whenever we implement a change, even a safety change, we are implementing it into a complex system that is already in operation. Systems don’t have “pause” buttons. Any change we make will interact with other parts of the system, in predictable or unpredictable ways. That means there will almost always be unintended consequences when we implement a change, and those unintended consequences could actually make us less safe.

There are plenty of other examples of this as well, some of which are very complex:
  • The organization that decided any risk reduction set a precedent and could never be reversed, leading to reluctance to implement future risk reduction measures (which may have contributed to a fatal incident).
  • Defense-in-depth strategies, which can work well in some contexts but, according to Perrow, may make our systems more complicated or complex, leading to less predictability and more risk.
  • Risk compensation, or risk homeostasis theory, which holds that any intervention we implement may lead to increased risk taking, as people learn to rely on the intervention to protect them.

Now this doesn’t mean that all of these interventions are bad and we should do nothing. The real problem, as we said above, is a failure to understand complex systems and how any intervention, even a well-meaning one, can have unintended consequences. So our job in the safety profession, if we really want to do good, is to find ways to facilitate safety while minimizing or accounting for those consequences. Here are some recommendations to get you started:
  • Take the time to understand systems, particularly complex systems. We have a couple of blogs on the issue (here and here). You should also read these two books by Meadows and Dekker.
  • Research shows that involving diverse opinions in decision-making increases decision quality. So stop making decisions on your own. Get other people involved in the decision-making process, particularly people who have a different background and perspective than you.
  • Never make a decision about an operation that doesn’t include a healthy amount (read: a whole lot) of input from the people who will actually be doing the job. (In fact, our real recommendation is that you shouldn’t even be making the decision. You should be the one providing input, with the people doing the job making the decision.)
  • Always follow up on interventions to identify whether they are having their intended effect and whether they have any unintended effects. Too often we just implement something and make, at best, rudimentary efforts to see if it’s working. And it’s very important that you look for examples of the intervention not working, not just examples of it working. It’s too easy to see what you want to believe, unfortunately. (But, of course, if it weren’t, then we wouldn’t be talking about this subject to begin with.)


1 comment:

  1. Great article and very true, there will always be a by-product or a trade off - Risk Homeostasis!
