Wednesday, March 18, 2015

Enough With The “Safety Darwin Award” Pictures

We have all seen pictures like this one, showing workers doing things they probably shouldn't be doing. You might have chuckled at them, or shaken your head in disgust or sadness, or both. That's a normal reaction, and it's not really what we want to talk about.

The problem comes when some of us (and yes, we are embarrassed to admit that we have done this too) go to the next step. Many in the safety profession use these pictures regularly as a way to get laughs out of their audience. This could be in a training class, to show how dumb some workers are, or among safety professionals, such as on a social media site. You even occasionally see people post these as "Safety Darwin Awards," the implication being that the pictures are an example of natural selection weeding out the less intelligent and capable among us.

Sharing these pictures with the intention of making fun of workers or insulting their intelligence has to stop. We are better than this!

To illustrate what we're talking about, let's look at the implications of sharing these pictures in a way that ridicules workers. First of all, it turns us into bullies. We are trivializing the realities our workers face, and we are not only thinking badly about them, we are openly ridiculing them to others! Whether we intend to or not, we are saying that they are stupid. And if we go so far as to use a frame like the "Darwin Awards," we are implying that these people deserve to be hurt or killed because they are so stupid. Why would we be at all surprised that our workers don't trust us? Why are we so annoyed that they don't like it when we come around, when it's possible we might take a picture of them and publicly shame them for a mistake? Don't get us wrong – we are not saying that what we see in the picture is acceptable. We are saying that public shaming can have significant negative consequences that we do not intend, but that are real nonetheless.

Second, when we post pictures like this on social media or show them in a training class, we are making a statement about where we think the problem lies in our organizations – workers doing stupid things. The implication is that if it weren't for our workers being dumb we would be safe. Our organization is inherently safe; it's just a few bad apples spoiling things. Here are some examples. Don't be like them!

This line of thinking just doesn't jibe with reality. Think about it – how many of your organizations are perfect? Hm…none? If we had to guess, we'd say that many of you have persistent problems in your organization:
  • Aging or poorly designed equipment
  • Not enough resources (time, money, people, etc.)
  • Inadequate training
  • Outdated procedures
  • Conflicting goals

Any of these sound familiar? If not, then you obviously work in the unicorn factory and you can stop reading now. For the rest of us, this means your system is imperfect, and a lot of your risk stems from those imperfections. You not only have to deal with these imperfections, your employees do as well. However, most organizations we've worked with have no systematic way to help employees deal with this complexity. We leave it to the employees to figure out. So we have a gap between what we know to be true and how we act. We know our organization is imperfect, but when it comes to safety we act as if our system were perfect. So when we see a picture like the one above, we don't conclude that we failed to provide the worker with the tools (equipment, knowledge, etc.) needed to do the work safely, and that the worker was naturally adapting what they had to get the job done in the safest way they knew how. No – it's not our fault. It's that they are dumb (or, sometimes, immoral).


When we see these sorts of pictures, we need to step back and take a hard look at how we react and what that reaction says about our safety worldview – and about us as professionals. The time has come for our profession to grow up and stop ridiculing the people we're trying to help.

Tuesday, March 10, 2015

Accident Investigations Are Error Provocative Environments


Editors' note – To hear more about our thoughts on accident investigations, join us as one of our team members presents Beyond Root Cause – A New View of Accident Investigations at the Orange County ASSE Professional Development Conference on March 18, 2015. This week's blog is a preview of that presentation.

A staple of the safety profession is accident investigation. When things go wrong we marshal the resources to figure out what happened, in the hope of preventing it from happening again. Take, for example, a relatively recent injury at a client's site, where an employee using a 21" radial saw had a finger amputated. Obviously this is an outcome no one wants, so the organization launched an investigation led by its safety professionals. The safety professionals found that the saw had a sign indicating it was "out of order," but the employee used it anyway. While using it, the employee shut the power off on the saw and reached toward the blade to make an adjustment. Unfortunately, the blade was still moving, amputating one finger and lacerating another. The accident investigation report counseled the organization to provide more training to employees in the shop, and to ensure that no employees work in the shop alone. It also recommended replacing the saw with a "safer" alternative.

One of the pioneers of the human factors field, Alphonse Chapanis, was among the first to identify that people sometimes make mistakes not because of deficiencies in the person, but because of the environment they are working in. For example, one of his early achievements was reducing aircraft mishaps in World War II simply by changing the design of toggle switches so it was easier to tell what each switch was for. Adding a small rubber tire to the landing gear switch reduced incidents of accidentally raising the landing gear to almost nothing. Chapanis, and other human factors pioneers, identified early on that many errors can simply be chalked up to a poorly designed environment – i.e., an environment designed so that the human must overcome their humanity in order to succeed. These environments are sometimes called "error provocative," because they tend to provoke errors in the people operating in them, and simply changing the environment can go a long way toward reducing mistakes.

We submit for your consideration that accident investigations are error provocative environments. We think that asking someone to investigate an accident, particularly in the way accidents are commonly investigated, is putting them into a situation where they are highly likely to make a mistake. Consider what we are asking a person to be able to do in an accident investigation:

  • Be objective. The idea that people can be completely objective at nearly any point in their lives is simply unsupported by any empirical evidence we are aware of. We take all of our experiences, education, emotions, personality, culture, etc. into every situation we enter, and all of it colors our perception of the world. Even in a highly sterilized environment such as scientific research, Kuhn found evidence that our worldviews (he called them "paradigms," to represent patterns of thought in a field) drastically affect what gets researched, what questions get asked, and therefore what answers are deemed acceptable by a scientific community. Note that all of this happened without the conscious awareness of the researchers. If this is true in scientific research, where emotion should play no part, how can we expect objectivity in the highly emotional environment of an accident investigation? Add to this that we are often investigating environments where we had significant influence before the accident. As Manuele noted, this often puts the investigator in a situation where they have a vested interest in downplaying their own role. In some cases, Manuele points out, the investigator may effectively be writing their own performance appraisal with the investigation. How can we expect objectivity in such an environment?
  • Explain the complex in a simple way that facilitates action. In the minds of most safety professionals, the point of an accident investigation is to lead to action, so we can prevent the bad thing from happening again. The organization therefore expects a simple explanation of what happened, one that facilitates getting things fixed. The problem is that accidents are often quite complex. Many aspects of the normal environment came together in an unexpected way. Sure, we can point to many things and say "if only this hadn't happened, the accident wouldn't have happened," but where do we draw the line? Often we end up with arbitrary stop rules, where we simply decide at some point that the investigation is over. Unfortunately, this often leads investigators to solve the problems they wanted to solve before the accident. The investigation ends up being driven by the need to solve problems, not by the need to learn.
  • Be thorough and efficient. In all environments we have competing goals, but accident investigations are hyper-examples of this. We must be thorough and really dive into what happened, but we also have rules that require us to turn in investigation findings within a certain time frame, so we must be efficient. Unfortunately you can’t be both completely thorough and completely efficient. Something has to give.


If we go back to the employee amputation, we can see examples of these problems. On our analysis, the accident investigation missed a lot of really important lessons. The saw had been purchased years earlier and left in the shop only partially set up. The employees in the shop didn't want to use the saw because most were intimidated by its size. So the saw was not really out of order – it just wasn't completely set up, and all of the employees knew this. One employee felt comfortable using the saw. Guess which one…

Now, we do not blame anyone in the organization for any of this – not for the accident, and not for the investigation. The problem is that, in both cases, people were put into situations that invited mistakes. Unfortunately, the problems that led to the accident, although complex, are easier to solve than the problems that led to an investigation that missed some key learning points. As a profession, we need to rethink the way we investigate accidents. If we just "go with our gut" and investigate without really thinking through and challenging ourselves, we will often come up with a poor product. We need to develop environments that facilitate learning. Here are some quick ideas to make your investigations better:

  • Get others involved in the process whenever possible. Diversity in the process often leads to better outcomes. If possible, this includes bringing the employees who were involved in the accident into the investigation.
  • Use investigations as learning processes. Searching for "the" cause may actually make the end product of the investigation worse, because it can lead us to stop the learning process prematurely. Investigations should be about learning, and often after an accident the organization is willing to learn, so take advantage of those moments.
  • Start thinking in terms of systems. The old ways of understanding organizations just can't capture their complexity. There are a ton of great resources out there, from authors such as Meadows, Checkland, Leveson, and Senge. Find something that works for you and start the learning process today!

Wednesday, March 4, 2015

The Way We Sell Safety May Be The Problem

One of the big problems safety professionals face regularly is convincing people, particularly business leaders, to invest in safety in their organizations. Investment could come in the form of spending money on a new ventilation system, time spent on training, or even hiring more personnel to fill key safety roles. Obviously organizations do not have limitless resources, so they must be wise about where those resources go. As a result, safety professionals spend a lot of time and energy "selling safety." This is a frustrating problem for many, because, after all, why should we have to sell safety? Doesn't the mere fact that we have to convince someone to be safe imply that they don't care enough about safety?

One of the quirks of people is our tendency to fall into the fundamental attribution error. The fundamental attribution error is a fascinating concept from social psychology: when looking at the behavior of others, we tend to attribute the cause of that behavior to internal factors and to disregard external factors. As an example, think about the last time you were driving down the road and someone cut you off. What was your immediate reaction? That person is an idiot! Where did they learn to drive?

Sound familiar? That is looking at a behavior (the other person cutting you off) and attributing it to something inside of them (their intelligence). Whereas if you cut someone off, often it was an accident – you didn't see them, the road conditions were bad, etc. Essentially, you gave yourself the benefit of the doubt and attributed your behavior to external, contextual factors.

That’s the fundamental attribution error and it plays a role almost every time we look at people’s behavior. Consider the example of the organization not investing in safety. What was our reaction? We looked inward, at the ethics and beliefs of the individuals (they don’t care about safety), rather than looking for contextual factors.

What contextual factors could influence someone caring about safety? Well, another interesting finding from psychology relates to how people react to risky decisions (for a good summary, see Daniel Kahneman's book, Thinking, Fast and Slow). Essentially, research suggests that the way a question is framed changes the way it is answered. For example, when you frame a question so someone has to choose between a guaranteed small loss and a probability of a large loss, people tend to take the riskier option. If I offer you a guaranteed loss of $5 or a 10% chance of losing $50, and you're like most people, you will tend to choose the 10% chance of losing $50.
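Note, incidentally, that in pure expected-value terms the two options above are identical: a 10% chance of losing $50 works out to 0.10 × $50 = $5, exactly the same as the guaranteed loss. The arithmetic doesn't favor the gamble – the framing does, which is what makes the finding so striking.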

Think about the implications for selling safety for a second – could the way we frame the decision we give to our organizations change the decisions they make? The way safety decisions are often framed involves a small guaranteed investment of resources now to avoid the potential for a large loss later. Based on the research, we would expect people to take the riskier option, i.e., we would expect them to take the chance that an accident won't happen rather than accept the small guaranteed cost now.

Now, we're not saying that the decision is right or wrong. Rather, we're just making the point that there are contextual factors that can influence the decisions people make. Often we discount them. After all, people should care about safety, right? But whether that's true or not is irrelevant. We can't make anyone care about safety any more or less than they already do by simply telling them to care more. We don't have that kind of control. What we do have control over is how we frame the message we give them. We have complete control over that.

Will that help? Research suggests it can. If, instead of framing the investment as avoiding a loss, we focus on what the organization gains from the investment, then the frame becomes a guaranteed win. When people are given the choice between a guaranteed small win and the probability of a larger win, they tend to take the guaranteed small win. So simply changing the way we frame the proposition we give to organizations may make a significant difference in how often they adopt our suggestions. This idea of framing is also consistent with Safety-II, where we define safety as the ability to achieve success rather than merely the avoidance of failure.
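To put hypothetical numbers on it (simply mirroring the earlier example, not figures from any particular study): offered a guaranteed gain of $5 or a 10% chance of gaining $50 – again the same expected value, 0.10 × $50 = $5 – most people take the sure $5. Same dollars either way; the choice flips because the frame flips.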


In any case, the research should give us pause. Next time you find yourself frustrated because someone didn’t do what you wanted/expected them to do, ask yourself – is there any chance that there are contextual factors that may have influenced their decision? What if you are part of the problem?