Thursday, August 28, 2014

Why Models Matter in Safety


Many times, blogs, articles, and discussions like ours are criticized because they don't always speak to specifics. This is a fair criticism. We do talk about specific aspects of safety (such as here, here, and here), but most of the time we're talking about how to approach or think about safety. So, it's easy to see how we can sometimes come off as a little abstract in our posts.

Criticisms notwithstanding, we feel it's important that we keep discussing the things we're discussing. Basically, what we're trying to do is get you all to subscribe to the same way of thinking as we do. Put another way, we want you to buy in to the model of safety that we have bought into.

Ok, there we go getting all abstract again, but let's take a step back and talk about why it's important to be abstract and talk about models every once in a while. What is the purpose of a mental model? From a social science perspective, a mental model helps you conserve mental resources. If, every time you thought about an environment, you had to figure out how each piece of that environment worked together, you would waste a lot of time. So, instead, your brain constructs a model and uses that model to understand the environment.

Let's use an example: you go into a restaurant and immediately your restaurant mental model (sometimes called a schema) activates. You don't have to waste time wondering who the people bringing food to the other tables are. You have a script in your mind about what is normal in a restaurant and what isn't. This script leads to certain expectations that would be completely arbitrary if it weren't for the mental model you have (e.g., that the waiter or waitress comes to you, that they provide a menu, and that you can only order food that is on the menu). In a way, the model you have constrains what is and isn't possible, at least in your mind.

One of the biggest problems with the safety profession today is that everyone has a mental model of safety and very few people take the time to figure out what it is. Think about it. Why do you do what you do as a safety professional? For example, you get hired into an organization – what are your first actions and why? Most safety professionals we’ve dealt with base the majority of their actions in the profession on (a) what everyone else has done or is doing (i.e. the popular thing), and/or (b) what they feel has “worked” in their own direct past experience.

Of course, this can work, but only to a point. The problem is that it means safety professionals are constantly looking backwards. We do what we do based on what we believe has worked in the past. If things never changed, or if the organizations we're "benchmarking" against were the same as ours, this approach would be just fine. But we work in unique organizations that are constantly changing, sometimes in subtle ways, sometimes in dramatic ways. This means the things that worked yesterday may not work today.

Additionally, relying purely on anecdotal evidence (i.e., "that's what's worked before") leaves people incredibly vulnerable to stubbornly holding on to discredited ideas and interventions. For example, some of our posts have criticized the use of rules or procedures to create safety (for example, here, here, and here). The response to these posts has been very favorable from most readers, but some challenge our ideas. After continued discussion, though, these people often fall back on evidence such as "procedures work because they have worked in the past." (And then we wonder why our employees counter our proposed changes with "but that's the way we've always done it.")

We don't intend to say that these people are dumb or that we're better than them (we all need critical evaluation of our ideas to get us closer to the truth). Instead, we believe that part of the reason people have trouble accepting new ideas is that they base their old ideas on their experience (which social science has shown to be consistently deceptive), not on a model that can be seen as separate from themselves and therefore analyzed in a non-threatening way.

If instead we all understood the mental models that provide the foundation for the way we view safety in our organizations, it would be easier to understand why we do what we do. For example, much of the way safety professionals approach procedures is based on a model where people cannot be trusted to figure out the best (or safest) way to do a job on their own; they are deficient in either their knowledge or their motivation. Further, the model assumes that we can, separate from the work environment, identify the one best (or, at least, most acceptable) way to do that job.

By identifying those assumptions within the mental model of what Sidney Dekker calls the "old view" of procedures, it is much easier to evaluate whether those assumptions are true and, therefore, whether the mental model should be kept or discarded.


So, next time you're thinking of recommending a change in your organization, or you're conducting an incident investigation, ask yourself: what is the mental model you're using, and what assumptions is that model based on? This could be one of the most important steps in the process.

Tuesday, August 19, 2014

Safety and Human Error – It’s no PICNIC

Talking to a Help Desk person one time about an issue we were having with our computers, we learned that the problem, according to the IT person, was not with the software or hardware; rather, we were not looking in the right place for the information we needed. So, basically, the problems we were having were our fault, not the computer's. Jokingly, the IT person told us that this was a PICNIC situation – Problem In Chair, Not In Computer.

Now, we have to admit, that's a pretty funny acronym. However, we think PICNIC is also a pretty good example of how people think about human performance and, unfortunately, of a common mistake people make when they are dealing with the problem of "human error." We often think of human performance in terms of an "either/or" – either the event was caused by an action or by the conditions the person was in (equipment, culture, etc.). In the case of PICNIC, either the problem is in the chair (the person) or in the computer (the conditions).

This makes investigation and analysis easy, right? All we have to do is look at the incident, place all the human failures on one side and all the equipment failures on the other, and whichever side is bigger is the one to blame and should be dealt with accordingly. This line of thinking is consistent with most models of safety management, most notably Heinrich's 88-10-2 theory, which holds that 88% of all accidents are caused by unsafe acts, 10% by unsafe conditions, and 2% are just "acts of god" (unpreventable, according to Heinrich).

The implication of this data is that our systems (organizations, environments, factories, jobsites, whatever you want to call them) are basically safe. It's all these unreliable, unsafe people who make things unsafe by not following rules and procedures. Therefore, we need interventions designed to change this disruptive, unsafe behavior. You see this played out in the safety world in many of the interventions we tend to focus on, such as compliance, training, disciplinary procedures, reward and incentive structures, culture initiatives, hearts-and-minds initiatives, and behavior-based safety. As an indicator of the dominance of the focus on behavior in safety, look at the difference in memberships between the LinkedIn groups for Safety in Design (624, at the time of this writing) and Behavior-Based Safety (12,634, at the time of this writing). Obviously this is not a scientific analysis, but it is an indicator of how prevalent the PICNIC idea is in the everyday practice of safety professionals.

But is the model that underlies PICNIC true? Well, to answer this question, let's think about what it requires. For it to be true, we must be able to differentiate and analyze human acts separately from the conditions in which those actions take place. Basically, the human act must not be influenced by the conditions, and vice versa. If the action is influenced by the conditions (or if the human influences their environment), then looking at them separately loses its value, because you're no longer looking at either as it actually existed in the environment in which the event occurred.

When we look at human performance closely, this is exactly what we find – you cannot meaningfully separate a person's behavior from the environment they are in. People naturally adapt their behavior to their environment, adopting a strategy that they believe will help them achieve their goals. This includes making adjustments both to their behavior and to the conditions they are in. This is the essence of sociotechnical systems. The interactions between people and their environment create behavior that you could not predict by looking at either separately. Further, these performance adjustments may be separated from the event in both space and time, meaning that a traditional, linear investigation will not easily identify them.

The implication here is that traditional interventions may not be as effective as we'd like them to be. We need to change the way we think about safety and human error. Rather than focusing on individual parts of a system, such as the people or the conditions, we need to understand how the people interact with the system and vice versa. Traditional methods of safety management are not well suited to this task because they tend to be based on the idea that we can easily predict the safest way to do every job, and that's not necessarily possible in complex systems (spoiler alert – almost any system that involves people is going to be a complex system). We need interventions based not on reductionist thinking, but on understanding relationships: both relationships amongst people and relationships between people and their environment. We need interventions that help us understand not just failure, but normal work, where these performance adjustments are taking place. Most importantly, we need interventions that see workers as a solution to harness, rather than a problem to control, because it is these very same performance adjustments that are leading to success in our organizations most of the time.

This change in thinking may require the rethinking of some interventions (e.g., training) and the retooling or even abandonment of others (e.g., behavior-based safety). But without change it is clear that safety management is quickly falling behind the curve in its ability to provide effective interventions in an increasingly complex and dynamic world.


Wednesday, August 13, 2014

Robin Williams and Mental Health – Reflections from a Safety Professional

We’ve talked about this topic before (here) but given the recent passing of Robin Williams in an apparent suicide, one of our authors, Ron Gantt, wanted to share some personal reflections on the topic.

It is with particular sadness that I read about the apparent suicide of Robin Williams. Not only was he a beloved actor and comedian who will be sorely missed, but this also means we lost another person in the struggle against suicide and mental illness. As a safety professional I really believe in what I do – I have a hand in potentially saving lives. So when a life is lost needlessly, that alone breaks my heart.

Keep that last sentence in mind. We’ll come back to it later.

Unfortunately, suicide is all too common in our world. In the US, more people kill themselves every day than die in car accidents. At work, more people are killing themselves than are dying from confined space accidents, or electrocutions, or fires and explosions. According to Thomas Insel in this TED talk, while other causes of loss of life have decreased, suicide rates have stayed flat (see the graph to the side, taken from Insel's talk) and are even rising according to some sources.

Despite all of this loss of life we as a society, and as a profession, continue to do little or nothing. Our misconceptions about mental health and suicide poison our ability to save lives. Just like we do with other aspects of safety, we put the focus back on the individual, telling them to buck up and do better. Just like with accidents, we play the blame game.

This isn't a condemnation though, because I was like that once. But then mental illness and suicidality touched me. My wife was diagnosed with major, treatment-resistant depression and borderline personality disorder almost a decade ago, although we can see signs of her illness throughout her whole life. Although she hasn't yet had an official suicide attempt, she has "self-harmed" on more than one occasion, and she has been incredibly close to attempting suicide many times. Many times I've had to work from home, miss work, or distance us from friends, simply because we couldn't leave her alone. When I do leave, we literally have to lock up all of her medications in a safe that I alone have the key to, and if I don't hear from her all day there's a real possibility that I'm going to come home and find that she's succumbed to her illness.

If my wife had cancer or something like that, no one would question her strength and bravery for lasting as long as she has. But because people don’t understand mental illness and suicidality few people really understand how amazingly resilient she is. As an example, a news commentator was vilified recently for insinuating that Robin Williams was a coward for committing suicide (which he later apologized for). However, that wasn’t the only mistake the commentator made. Earlier he was questioning how someone could make the decision to end their own life. This shows a fundamental misunderstanding of suicidality.

Most suicides are not a well-thought-out choice the person made. It's not as if, when my wife wants to kill herself, she does so after conducting a cost-benefit analysis. Something inside of her, some impulse, something separate from who she is, is telling her to end it all. She doesn't want to die. She's terrified of death, of leaving me, of leaving our family and friends, and our dogs (she's a dog mom). But something separate from all of that love and hope and amazing creativity and intelligence is being tainted by a broken part of her. There's a part of you and me that allows us to not even think about killing ourselves, and that part isn't working right in her. Telling her to just snap out of it, to just go for a walk, to use homeopathic medicines, to think about happy times, or whatever else well-meaning but ignorant people tell suicidal people to do, does not work.

As an analogy, imagine telling someone who has cancer to just smile more and that should make them better. Imagine telling them that the problem is that they just need to try harder to beat the cancer, and that's all. They don't need medicine, treatment, etc. They just need good ol' fashioned grit, or to just wait it out. That doesn't make sense, right? But for some reason, when the sickness is one that is also very deadly, more deadly than many illnesses we are afraid of (e.g., in Western countries, AIDS), that's what we do. We call it mental illness, but we don't treat it like any other illness. For every other illness it is normal to go to the doctor, get treatment, and take preventative measures to avoid getting the illness. But for mental illness, for some reason, we put the responsibility back on the sick person.

So what can we do, as a society, but particularly as a safety profession? Go back to that sentence I told you to remember – when a life is lost needlessly, it breaks my heart. Many times when we say that we mean that it was needless for the person to make the choice to end it all. As with other things in our society, we think the problem is the person. That’s a huge mistake that will only result in more lives lost. Suicide is needless because if we build a system of suicide prevention in our society, in our organizations, in our families, in our lives, we can prevent this needless form of death. There are literally millions of people out there like my wife suffering, struggling every day to simply keep living. If we really want to save lives we need to identify the signs, avoid the myths, reach out to people who are suffering, and build in prevention and response systems, just like we would with any other “risk.”


There are lots of resources out there that you can use to get started, some in the blog post linked at the top of this post. Start today. You just might save a life.

Wednesday, August 6, 2014

Confined Space Permits – Dealing with the Paradox

Confined spaces are one of those issues in occupational safety and health that seem to make many safety professionals take notice. Safety professionals often list confined spaces as one of the highest-risk activities in their organizations, causing them to spend a lot of time and resources dealing with these risks. This makes some sense, given that confined spaces, by their very nature, make the hazards one faces worse. OSHA defines a confined space as any space that is large enough to get into and do work, is not designed for continuous human occupancy, and has difficult ingress and egress (i.e., it is hard to get in and out of). If you're in an environment that limits your ability to escape, that takes away some of your flexibility to protect yourself, putting more emphasis on prevention instead of response.
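If it helps to see that definition laid out explicitly, here is a minimal sketch in Python of those three criteria expressed as a simple yes/no check. This is purely illustrative; the type and field names are our own invention, not anything from OSHA or any compliance tool.

```python
from dataclasses import dataclass

@dataclass
class Space:
    """Hypothetical description of a work space (illustrative fields only)."""
    large_enough_to_enter_and_work: bool     # big enough to get in and do the job
    designed_for_continuous_occupancy: bool  # meant for people to routinely occupy
    limited_entry_or_exit: bool              # hard to get in and out of

def is_confined_space(space: Space) -> bool:
    """Rough paraphrase of the three criteria described in the paragraph above."""
    return (
        space.large_enough_to_enter_and_work
        and not space.designed_for_continuous_occupancy
        and space.limited_entry_or_exit
    )

# Example: a storage tank someone can climb into to do a weld inspection.
tank = Space(
    large_enough_to_enter_and_work=True,
    designed_for_continuous_occupancy=False,
    limited_entry_or_exit=True,
)
print(is_confined_space(tank))  # True
```

All three criteria have to hold at once, which is the point of the paragraph above: it is the combination of being able to work inside a space and having a hard time getting back out of it that changes the risk picture.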

In response, we require our employees who enter confined spaces to fill out a permit. This permit is different from your typical building permit in that it is not something provided by the government. Instead, the confined space permit is something that the organization fills out to permit employees to enter a confined space (more specifically, a permit-required confined space). This permit contains information such as the space to be entered, the job to be done, the hazards of the space, the protection measures employees must follow to safely enter the space, and rescue measures in case things go bad.

Take a step back and think about that brief list for a second. Don’t employees already want to identify hazards, control measures, and rescue procedures? Essentially, don’t employees already want to be safe doing a confined space job?

Invariably, the answer to these questions is "yes." So why do we need a permit then?

Well, we can say that a permit is needed because it is required by law or because it is a liability-reducing tool. That may all be true, but in a more optimistic sense, a confined space permit is needed because we don't always act in our own best interest. Why not? Well, one reason is that we may forget about a hazard or not think to identify rescue measures, etc. Basically, we need a permit because we make mistakes sometimes. In this way, a permit is a checklist, a memory aid, something designed to help us remember things we really want to remember anyway.
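To make the "permit as checklist" idea concrete, here is a minimal sketch in Python of a permit record whose only job is to flag what still hasn't been thought about. The structure and field names are hypothetical, loosely following the list of permit contents above, and are not modeled on any particular regulatory form.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConfinedSpacePermit:
    """Hypothetical permit record; fields mirror the contents listed earlier."""
    space_description: str
    job_description: str
    hazards: List[str] = field(default_factory=list)
    controls: List[str] = field(default_factory=list)
    rescue_plan: str = ""

    def missing_items(self) -> List[str]:
        """Return the sections still blank -- the permit acting as a memory aid."""
        missing = []
        if not self.hazards:
            missing.append("hazards")
        if not self.controls:
            missing.append("controls")
        if not self.rescue_plan:
            missing.append("rescue plan")
        return missing

permit = ConfinedSpacePermit(
    space_description="storage tank",
    job_description="interior weld inspection",
)
print(permit.missing_items())  # ['hazards', 'controls', 'rescue plan']
```

Note that a check like this only tells you what is blank; it cannot tell you whether the entries that do get written down reflect any real thinking about the space, which is exactly the problem discussed below.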

Why do we make these mistakes? Well, there are a lot of reasons we can point to. Sometimes there's other stuff going on, so we're distracted or in a hurry, or perhaps we're tired or frustrated, or perhaps we think we already know all the hazards when we do not (most call that complacency, but we're not too crazy about that term).

A common issue with checklists, such as the confined space permit, is that people don't always fill them out as designed. The technical term for this is "pencil whipping" (ok, that's not really the technical term). But most people have pencil-whipped a checklist – i.e., they have filled out the checklist form without actually using it to check the items they were supposed to check. In the context of confined space entry, this means they filled out the permit, but they didn't really think about the hazards, controls, rescue, etc.

Why would someone pencil-whip a checklist, such as a permit? Well, right off the bat we have to say it's not that they don't care about their own or others' safety. That's too simplistic a view. Rather, in a manner of speaking, it's a mistake. Usually the person is distracted or in a hurry, or perhaps they are tired or frustrated, or perhaps they think they already know the hazards and don't need a permit to help them.

Wait a minute…that list looks familiar. Look at the above paragraph, and then look two paragraphs above it. Confined space permits, and all checklists, have an interesting paradox – you're most likely to make a mistake when you are least likely to use the thing that's supposed to help you not make a mistake. The time when you really don't want to take the time to fill out the permit is likely the time when you need the permit the most.

This is a bit disconcerting. How do we deal with this paradox? Well, in general, this is an example of how human performance is more complex than we'd like it to be. If we just give people a permit and expect them to always fill it out as required, we will be setting ourselves, and, worse, our employees, up for failure. Just telling them to comply and punishing them when they do not won't work. Instead, we need to help our employees deal with this complexity through design of the work environment. This includes traditional safety engineering, such as engineering out hazards, eliminating confined spaces, etc. It should also include design of the work system itself: helping employees deal with competing goals, building in sufficient capacity to adapt to changing work environments, and making systems as error-tolerant as possible.


Further, this permit paradox is an illustration that safety is not as simple as creating a rule and expecting people to follow it. As safety professionals we must always look closely at any interventions we propose to ensure, as much as we can, that they do not create new blind spots or new pathways to failure, or limit the ability of our employees to make informed performance adjustments to their work environment. Like so many other things in safety, we need to be on the lookout for these unintended consequences, or else we'll be like the emperor, believing we're clothed in the robe of safety, when in reality we are as naked as the day we were born.