Thursday, January 29, 2015

What Is The Safest?

Recently we were having a discussion amongst safety professionals (yes, we know – a scary thought) about the relative safety of different actions on the road. Is using a cell phone with a hands-free device safer than normal handheld use? Is using a cell phone in any form safer than having a conversation with a passenger? Which lane is the safest to drive in? The discussion revolved around various studies on road safety and was quite academic, until one of us suddenly chimed in – it really depends on your definition of “safety”. After a few eye rolls we simply moved on with our conversation as if nothing had happened.

However, as Alan Quilley and many others have noted, the definition of safety we use does matter, particularly when we are comparing tasks to determine which is the “safest”. After all, when you’re comparing two objects, you need objective criteria to compare them against. For example, if you ask which is the larger animal, a giraffe or an elephant, you need some standard of measurement to enable a reliable comparison. Which standard you use can change the answer. If “larger” means height, then the giraffe wins, whereas if “larger” means weight, then the elephant tends to win.

In the same way, when we talk about safety and compare tasks to determine if one is safer than another, it would be ideal to have some idea of what “safety” is. What is the standard we’re using to measure whether something is safe or not? What does “safe” look like, so that we can know which thing (task, object, person, etc.) looks more safe-like?

One answer that people commonly use (often much more than we’d like to admit) is that safety is the absence of accidents. Safety looks like people not getting hurt. This is obviously problematic to many people, but the short answer to why this definition is not good is that we’re defining safety by its absence. Safety is not like a scale, where if you have fewer accidents then you have more safety by default. Logically speaking, just because you have less of one thing does not mean that you by default have more of its opposite. Less sadness does not necessarily mean more happiness. In the same way, less harm does not automatically mean more safety. Why? Because the potential for harm may still be there. If we define safety by the absence of accidents then we are forced to say that the drilling platform Deepwater Horizon was safe 5 minutes before the explosion that killed 11 workers and caused one of the worst environmental disasters in US history. This is obviously unacceptable.

Perhaps then we can do as many have done and simply say that safety is an acceptable level of potential for harm, or risk. This is the definition used by many industry standards, such as ISO standards, which define safety as “freedom from unacceptable risk”. This seems like a better definition, one that allows us to make comparisons. We can look at two actions, compare the likelihood of harm in each, and ask whether that likelihood is acceptable or not. If it’s acceptable, then we can say the action is “safe”, and the one with the lower risk is obviously the safer one. Easy, right?

Not so fast. As one of us pointed out in the safety discussion above, albeit in a hidden way, it depends on YOUR definition of safety. Here’s the thing – if safety is freedom from unacceptable risk, then who gets to decide what risk is “acceptable”? A risk you accept may be intolerable to one person and trivial to another. An interesting ethical question is who gets to decide acceptable risk in most organizations: the people taking the risks, or those removed from the risks?
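
To make that question concrete, here is a minimal sketch in Python. Everything in it – the function, the tasks, the probabilities, the thresholds – is hypothetical, invented purely for illustration.

```python
# A minimal sketch of "safety as freedom from unacceptable risk".
# All names and numbers are hypothetical, for illustration only.

def is_safe(risk: float, acceptable_risk: float) -> bool:
    """Under this definition, "safe" just means the estimated risk
    falls at or below whatever threshold someone declared acceptable."""
    return risk <= acceptable_risk

# Two hypothetical tasks, with made-up per-trip probabilities of harm.
tasks = {"handheld phone": 1e-4, "hands-free device": 5e-5}

# The verdict depends entirely on who sets the threshold.
for threshold in (1e-4, 1e-5):
    for task, risk in tasks.items():
        print(f"threshold {threshold:g}: {task} -> {is_safe(risk, threshold)}")
# At a threshold of 1e-4 both tasks come out "safe"; at 1e-5 neither does.
```

Notice that nothing in the sketch tells us where acceptable_risk comes from. The threshold is a choice someone makes, not a measurement – which is exactly the point.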

There’s a further wrinkle that we must consider in this safety definition conundrum – even if we got everyone together in a room and nailed down an exact definition of “acceptable risk” we would still have problems. Why? Because our definition of acceptable risk changes depending on our perspective. What we accept varies depending on where we’re standing. Take the example of a trip hazard. There are literally hundreds of these things in most organizations that people walk over every single day without incident. Let’s say we took one and calculated that the risk of tripping and injuring oneself on the hazard is 1 in 100,000, or 0.001%. Is that acceptable? Obviously your answers will vary (see the paragraph above), but many would say yes. If you say yes, then you’re saying that, by definition, the situation is “safe”. Therefore you should not need to do anything to eliminate or control the hazard. Sure, you could perhaps make it “safer”, but you don’t have infinite resources, so perhaps your time and money are better spent elsewhere.
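
Part of why perspective shifts so easily is that a per-exposure probability and a per-year probability are very different numbers. Suppose, purely for illustration, that the 1-in-100,000 figure is per crossing (the foot-traffic figures below are equally hypothetical):

```python
# How a "1 in 100,000" per-crossing risk compounds with exposure.
# The traffic figures below are hypothetical, for illustration only.

p_per_crossing = 1e-5       # 1 in 100,000 chance of injury per crossing
crossings_per_day = 200     # hypothetical foot traffic over the hazard
days_per_year = 250         # hypothetical working days

exposures = crossings_per_day * days_per_year   # 50,000 crossings per year

# Probability of at least one injury over a year of exposure:
p_injury_this_year = 1 - (1 - p_per_crossing) ** exposures
print(f"{p_injury_this_year:.1%}")   # ~39.3% -- hardly a one-in-100,000 shot
```

Seen per crossing, the risk looks negligible; seen per year across a busy walkway, an injury is closer to a coin flip. Which framing you adopt quietly determines what you call “acceptable”.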

Now let’s assume that someone trips over that hazard and seriously injures him or herself. Someone tripped over your “safe” tripping hazard and is now in the hospital. Would your response to the hazard change? Probably. But what actually changed? Nothing except your perspective. You determined it was “safe”, but now someone got hurt. If we’re honest with ourselves we expected that (after all, 1 time in 100,000 is what we calculated), but when someone is in the hospital it seems perverse to call the hazard “safe”. So, naturally, you’ll devote resources now to fixing the problem.

Obviously the above example raises many interesting discussion points. For example, as Drew Rae and others have pointed out, sometimes calculating risk in the way we did above is nothing more than a fiction we use to tell ourselves that we have more control than we really do. Also, we should point out that we are not saying that someone getting seriously injured is ok, or that it’s a good thing to leave hazards in the workplace unfixed. Finally, we should say that some newer definitions of safety that might have more merit have not been considered in this post, but we have discussed them in other blogs (here and here).

However, the point we’re trying to make is that the definition of safety is a great example of why safety professionals need to become comfortable with uncertain, complex, and sometimes contradictory, a-rational beliefs. We need to move away from black-and-white thinking and learn to live with grey. This doesn’t mean we should accept people being hurt. However, complex problems do not have simple solutions (or else they would not be complex). Pretending that something is simple when it is complex may work in many situations (we do it all the time), but in safety, particularly when it comes to high-risk activities, this strategy tends to fail miserably.


Further, we must admit to ourselves that our definition of safety is fluid. We are human, just like everyone else. We may have a firm, abstract definition we hold onto (e.g. freedom from unacceptable risk), but our actionable definition (e.g. what is “acceptable”?) changes, particularly after an accident. But that’s a story for another blog.

Thursday, January 22, 2015

The Commonality of Common Sense

In a recent project planning meeting at a client’s site, the discussion centered around the need to coordinate with a contractor about a certain element of the job task. The communication seemed relatively basic to those of us in the room, so one person, reasonably, brought up that perhaps we didn’t need to have this coordination with the contractor. After all, they should already know this stuff, right? One of the employees, the most experienced engineer in the room, just a few months from retirement, replied, “What seems like common sense depends on the person. My common sense is based on my experience working at this plant and the projects I’ve worked on, whereas the contractor’s common sense is based on their experience and knowledge.”

In our lives, people give us nuggets of knowledge all the time. Many are ignored, and for most we only realize the wisdom of the words in retrospect. This was one of those rare times where you recognize the wisdom in the moment, and everyone in the room knew it. What a great piece of wisdom!

We talk about common sense as if everyone has it, and then we derisively talk about how uncommon it is. Why? Because we see people doing things that look stupid to us, things that someone with common sense wouldn’t do, and we conclude that the person does not have common sense. But think about this for a second. Holding people to some standard of “common sense” assumes that common sense is a stable body of knowledge and understanding of our environment that people can have, but some do not. Common sense becomes a standard, a line we can compare ourselves and others against. In a way, it exists outside of us, separate from us.

But is this right? We are reminded of the time we were doing a ride-along assessment with a gentleman who worked in the mountainous, rural part of the state, and he pointed out that his job required a certain level of unorthodox hazard assessment. For example, he mentioned that you need to know that a lone cow standing by itself in a field is probably not a cow but a bull, and therefore a situation to avoid. This was common sense to him, and after he said it, it sure seemed like common sense to us. But we would not have known it without someone telling us (and we assume he was not born with this knowledge either).

This raises the question – if the danger of the lone cow in the field is common sense, why did we not know it? Is it because we were or are unintelligent? Well, as much as some might disagree, we don’t think that’s the case. Rather, we think the problem is that our belief in a stable, “out there” common sense is misleading. We think the wise engineer would point out that common sense is relative. We didn’t know about the dangers of cows because we never had to know this information. Most of our work is in urban or suburban environments, where there aren’t many cows or open fields for us to trek through. Our common sense is based on the environments we work in. The common sense of the worker we talked to was based on the rural and mountain environments he worked in. The wise engineer’s common sense was based on his experience in the chemical plant industry.

We think it’s time that we reevaluated this standard of “common sense” that we arbitrarily apply to people. It is based on old ideas of human cognition and intelligence, and is often used to put the blame for a problem on the person, rather than engaging in real problem solving. We need to stop thinking of “common sense” as some static idea that people can attain, and begin to see common sense as relative to each individual’s experience. Instead of common sense existing “out there”, waiting for us to attain it, the wise engineer would remind us that common sense exists inside each of us, and that our common senses may only partially overlap.

By changing the definition of common sense we increase our ability to solve problems. After all, if the problem is because the person is just stupid, i.e. they don’t have common sense, then our options for dealing with that problem are limited – we can only get rid of the person. But if we start to see common sense as relative, the number of potential solutions goes up exponentially. We can look at training, coaching, partnering, design of the work environment, design of the work system, etc.

More importantly, seeing common sense as relative helps us take another step away from our knee-jerk blame response, where we see the problem as one with the individual, not with the system. This, of course, requires a bit of empathy on our part. It requires us to attempt to see the world through the eyes of others. And when we do, we will begin to see avenues for building collaboration and trust in the work environment. We start seeing people as having different pieces of the puzzle, rather than as having deficits that we must overcome. People become solutions to particular problems, rather than the problem themselves.

Tuesday, January 13, 2015

There’s Nothing To Fear Here…That’s What Scares Me

The title of this post comes from the movie Raiders of the Lost Ark, from one of the opening scenes where the hero, Indiana Jones, and his colleague reach the room where the treasure they seek is found. After trekking through the jungle and through a temple fraught with booby traps, the two finally reach the room and everything looks like it is finally safe. Indiana still acts with caution, which makes his colleague remark that there is nothing to be afraid of. Indiana Jones replies, “That’s what scares me.” Of course, Indiana Jones is right, and shortly after they grab the treasure mayhem ensues and only Indiana escapes with his life, barely and in very dramatic fashion (it was an action movie, after all).

Beyond its place in movie history, we think the scene has something to teach us about safety management. You could argue that one common theme exists in almost every accident – something that someone saw as benign turned out to be dangerous. Think about the operators running the BP Texas City refinery who overflowed a tower, leading to a release and explosion. Or the truck driver who got behind the wheel and ended up crashing into a limousine carrying comedian Tracy Morgan and others. In each of these cases, the people involved did things under the assumption that there was nothing to fear…but it turned out they were wrong.

And, even though with these two examples we focused on the individuals closest to the accident, you could say the same about others higher up in the organizations having influence over the accident. The plant management and corporate managers over the BP Texas City Refinery didn’t think that there was anything to fear when they made further cuts to maintenance, training and process safety spending…but they were wrong.

This is a common theme in safety management – thinking that everything is “safe” when there is much to fear. This is interesting given that we in the safety profession spend so much time thinking about what could go wrong in terms of hazard and risk. We have piles and piles of paper – regulations, procedures, hazard and risk assessments, and training materials – that should help us prevent these instances. Yet a common theme emerges as one studies major accidents: these things are often in place, yet the accident still happens.

What gives?

We think that the problem stems firstly from the Newtonian idea of cause and effect – that big causes lead to big effects, and therefore, to have a big effect, you need a big cause. This isn’t really true though. If you study major accidents, it’s often small things, sometimes called “weak signals”, that indicate something is amiss. Of course, the problem is that after the accident those signals no longer seem so weak, because we can draw an easy line between the weak signal and the accident. But in the moment the signals are weak indeed – many times so weak that only in retrospect, after the accident, do they have meaning. So, in the moment, we go along assuming that there’s nothing to fear, when mayhem is about to ensue.

That’s why we think there needs to be a paradigm shift in safety management. Most tools and interventions in the safety profession are designed to find obvious things – hazards, “unsafe acts”, etc. But there are few tools designed to help safety professionals understand and find the “weak signals” in organizations that lead to real disasters. Part of the problem, in our opinion, is that many of these weak signals are just normal work – i.e. things continuing on successfully. And, in the safety profession, we’re not equipped to learn from success. Our bread and butter is learning from failure (or at least it should be). But if all we do is learn from failure, we learn about only a small part of our organization’s operations, because most of the time things don’t fail. Most operations succeed, and we have no idea why. For example, what if the same things that we think lead to failure (e.g. “unsafe acts”) are also leading to success? Wouldn’t that suggest that “human failure” is, at best, an inadequate explanation for accidents?

The time has come for the safety profession to change focus. We need to stop learning from failure alone and start looking for those weak signals – those normal, everyday things that may be the triggers of the next major accident in your organization. Diane Vaughan called this the “banality of accidents”. Why? Because accidents are caused by everyday work.

So how can we start looking at and learning from normal, banal work in our organizations, so that we can both increase the chance of success and decrease the chance of failure? Here are some tools we recommend you consider:
  • Get out from behind your desk, get on the shop floor, and learn from your workers. Note, this is not the same thing as going out and looking for “unsafe” things. This is not a hazard hunt. Certainly we’re not asking you to ignore clearly dangerous situations, but this excursion onto your shop floor is designed to help you learn about what’s actually happening on a daily basis in your organization.
  • Consider holding debriefing sessions after a shift or a project with work crews and multiple levels of your organization (from top to bottom). This is commonly done in the military, but not as much in the civilian world. Ask your crews what worked, what didn’t, and what surprised them. Where did they have to bend the rules and change the plans to get the job done? Where were the rules and plans spot on? Where did they feel uncomfortable with the task? (Obviously this should be done in a safe, blame-free environment, so that workers feel comfortable sharing.)
  • Conduct a formal success investigation. Next time a project goes right (however you define that in your organization), do an investigation, just like you would if there had been a failure. Interview witnesses, look at evidence, etc. Find out why things are working and you just might find some clues as to how things could fail that you weren’t aware of.
  • Start learning about drift, weak signals, and human and organizational performance. There are plenty of resources out there, this blog being one of them, as well as other blogs (like this one and this one), podcasts, and authors (like this one and this one) that can get you started.


Wednesday, January 7, 2015

Beyond “Unsafe Behavior”

Pop quiz – are most accidents caused by unsafe behavior or unsafe conditions?

Have your answer? Good, keep it to yourself for a minute, because rather than talking about your answer, we should spend some time talking about the question. It’s an age-old question in the safety profession, perhaps first asked officially by Heinrich (who told us that 88% of accidents are caused by unsafe acts), and a question that is hotly debated to this day.

But here’s the thing – the question itself is problematic because it assumes so much! There are at least three assumptions made in the question that many in the safety profession take for granted:
  1. That we can know the actual cause of accidents (i.e. cause is something that exists in reality and can be found);
  2. That there is such a thing as “unsafe behavior” that is a distinctly identifiable category; and,
  3. That any cause of an accident can be clearly attributed to an “unsafe behavior” or an “unsafe condition”, and there is no significant overlap between the two categories (i.e. they are mutually exclusive).

Unfortunately, even though many in the safety profession accept all of the above without question, each of the above assumptions is hotly debated at best, and shown to be inaccurate at worst, in the safety science and social science literature. This alone speaks to problems within the safety profession (which we've talked about here and here), and many volumes have been written on these assumptions, which we just cannot do justice to in this post. However, we want to spend some time to at least challenge the thinking regarding “unsafe behavior” and present some new ways of understanding human behavior.

The first question that should spring to mind when we speak of “unsafe behavior” is – “unsafe” by whose measure? After all, let’s not forget that in the safety profession there is no clearly agreed upon definition of what “safety” is! And often when we talk about “unsafe behavior” we’re speaking in retrospect, usually after an accident (i.e. “because of his unsafe behavior the employee fell off of the ladder”). This often leads to circular arguments – the employee fell off of the ladder because of his unsafe behavior. How do we know his behavior was unsafe? Because he fell off of the ladder.

Even if we take one of the most widely agreed upon definitions of “safety”, that safety is “freedom from unacceptable risk”, we still run into problems. Why? Because we’re defining whether the risk was acceptable or not in a world that is objectively different from the world before the accident. We know the true cost of the behavior (assuming that we can say that the accident was caused by the behavior, which is another very problematic assumption to make – more on that later). The person did not know the true cost. Sure, they probably knew that the potential was there, but there’s a difference between a potential loss and an actual loss. What if the person decided that the potential for an accident was an acceptable risk? Must we then say that the behavior was “safe” because the person accepted it? It seems silly, but this is the sort of question we must ask if we accept that there is a clear distinction between “safe” and “unsafe” behavior.

Furthermore, and this is very important, the distinction between unsafe behavior and unsafe conditions assumes that you can draw a meaningful separation between behavior and conditions, or context. But here’s a challenge for you. Assume for a second that we are able to tell when a behavior is safe or unsafe. Now name one behavior that is always “unsafe” without also making reference to a context or a condition.

Go ahead, we’ll give you a minute…

Time’s up. Got one? We sure couldn’t. We’ve thought long and hard and we simply cannot come up with any individual behavior that, by itself, regardless of any condition or context, is always unsafe. Whenever we think of an “unsafe behavior” it always has a contextual element associated with it. Some common examples: working near an unprotected leading edge more than 30’ off the ground (the only behavior identified is the work; the rest is context). Working on energized equipment without appropriate controls in place (again, the behavior is the “work” and the rest is context). Even the quintessential unsafe act, running with scissors, could be considered “safe” in certain contexts (i.e. in a medical emergency with trauma scissors).

This means that the debate between “unsafe behavior” and “unsafe conditions” is a false dichotomy. It’s not an either/or scenario. We shouldn’t be focusing on one or the other, but rather on both and on how each affects the other. Systems theorists teach us that the interactions and relationships between parts are more important than the parts themselves. Why aren’t we focusing on that instead?

And this leads us to some alternative ways of understanding behavior. Erik Hollnagel presents an interesting and informative model for understanding people’s behavior as it relates to safety. Rather than thinking in bimodal terms (i.e. behavior is either “safe” or “unsafe”), people’s behavior is better described in terms of variability – i.e. people adjust their performance to match the conditions they are in, to help them achieve success in a complex, resource-constrained world with competing goals. It’s this performance variability that occasionally creates failure, which we then call “unsafe acts”. However, this same performance variability is also the reason that work is successful most of the time (i.e. accidents are rare and the job gets done the overwhelming majority of the time). So, rather than working to eliminate “unsafe behavior”, we should instead help people make better adjustments to their environment.
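
To make the idea concrete, here is a toy simulation in Python. The model, the numbers, and the threshold are all invented for illustration – this is a sketch of the performance-variability idea, not Hollnagel’s actual model.

```python
# A toy Monte Carlo sketch of performance variability.
# All numbers are made up; the point is qualitative, not quantitative.
import random

random.seed(1)

def do_task() -> bool:
    """One task attempt: the worker applies the SAME adjustment every
    time; only the (unmeasured) conditions vary from attempt to attempt."""
    conditions = random.gauss(0.0, 1.0)   # day-to-day variation in context
    adjustment = 0.5                       # the worker's usual tuning/shortcut
    return conditions + adjustment > -2.5  # fails only when conditions are extreme

outcomes = [do_task() for _ in range(100_000)]
print(f"success rate: {sum(outcomes) / len(outcomes):.3%}")
# Well over 99% of attempts succeed with exactly the same behavior that,
# on the rare bad day, gets relabeled in hindsight as an "unsafe act".
```

The behavior is identical on every attempt; only the conditions differ. Judged purely by outcome, the same adjustment gets called “efficient” thousands of times and “unsafe” once.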

This requires two shifts for the safety professional:
  1. We must understand how work works. Typically we create rules separate from the work environment, and then we get frustrated when people violate our rules and get into accidents. We think this must be a problem with the people involved, but often the problem is that we design work in a way that just doesn’t work. If people are constantly adjusting their performance to help them achieve success, then we need to get out there and understand what’s really happening on the ground and why. This means that your employees have just as much or more to teach you about how to do their jobs safely as you have to teach them.
  2. There’s just as much, or more, to learn from what goes right as there is to learn from what goes wrong. We spend a lot of time learning from failures, which makes sense, but if that’s all we do then we get a skewed version of the world. We start to think that what we’re finding is unique to failures (e.g. procedure violations). But if we start to learn why our organizations are succeeding then we might start to realize that success and failure often have the same causes. This presents all sorts of unique opportunities for us to learn about how our organization creates safety on a daily basis, and why that sometimes doesn’t work.


It’s time, though, that we stop hanging onto the crutch of “unsafe behavior”. The concept is misleading and doesn’t really lend itself to good interventions for the safety professional. Further, there are better concepts out there that more accurately explain behavior and suggest innovative interventions for the safety professional.

Friday, January 2, 2015

This New Year, Resolve To Do More of What You Do Best

It’s 2015. Our planet has made another trip around the sun, and all around the world people are getting ready to start a new year. For most of us, this process involves a lot of reflection. We look back on the previous year, the good and the bad, cherishing the good memories and mourning the bad, especially if we’ve lost anyone during the year.

One of the most common outcomes of this process is the infamous “New Year’s Resolution”. After reflecting on the previous year, many people identify things that they want to do better in the coming year. Some of the more common resolutions include getting healthy via some combination of eating right and exercising, or spending more time with family and friends. Some resolve to pursue some form of self-improvement involving school or work – maybe resolving to get that big promotion or that graduate degree you’ve been looking towards.

Whatever resolution one chooses, these resolutions have a common thread – they stem from a focus on learning from some version of failure or deficit. People who believe they are very healthy usually don’t make a New Year’s resolution to get healthier. Instead, it’s those who believe they didn’t do as well as they’d have liked who are looking to improve. This is fine, but if you only focus on improving what you don’t do well, you miss a large part of your life – i.e. those things you do well.

Think about it: there are probably a lot of things you’re really good at, and the primary evidence of this is that you’re successful (by some measure). Most people just take it for granted that they were successful and stop there. But what if instead, at this time of reflection, you looked back not only on the areas where you didn’t achieve your goals, but also on the areas where you were successful? Research suggests that those who take the time to learn not only from their failures, but also from their successes, are…well, more successful.

This line of thinking is consistent with a new approach to safety management, called Safety-II, where, instead of defining safety as the absence of negatives, we focus on the ability to achieve success. A key principle in Safety-II, and in many similar theories of safety management, such as Resilience Engineering, is that learning should not be a discrete event, like it is in most organizations. What we mean by this is that most organizations only learn when something bad happens (i.e. an accident). Instead, learning should be continuous. And, instead of only minimizing the bad, safety should look at why things go right and find ways to enhance the possibility that the organization will achieve success.

What this means for you this New Year is that perhaps you should take some time and think about what worked. Why did it work? What did you do that contributed to that success, and how can you do more of that? How can you expand it into other areas of your life? And which of last year’s successes, once you take some time to think about them, were maybe more a result of luck than you’d like to admit? What can you learn from those, so you’re not relying on luck as much? What resolutions can you develop to enhance your ability to achieve even more success in 2015?

And think about it – how often does your New Year’s resolution work? If you’re like most people, probably not that often. The reasons for this are complex, but the short story is that changing behavior – from something that led to failure before to something that leads to success – is not simple (regardless of how simple some would like us to believe it is). At a very basic level, these are things that at least a part of us doesn’t want to do. But people like doing things that they are good at, so if you change the focus from doing better at the things you failed at to doing more of what you’re good at, you’re much more likely to be even more successful next year.


Happy New Year from SCM Safety! Here’s to more success next year!