Friday, November 4, 2016

Understanding 'Error'

Within decision psychology there’s a famous experiment, described in Daniel Kahneman’s book, Thinking, Fast and Slow, known as the Linda the Bank Teller experiment. The experiment goes like this – participants are given the following information:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Participants are then asked the following question (see which answer you would choose):

Which of the following is more probable?
    1. Linda is a bank teller.
    2. Linda is a bank teller and is active in the feminist movement.

If you chose #2, then you are like most people. Most participants in the study judged it more probable that Linda is a bank teller who is active in the feminist movement than that she is simply a bank teller.

However, most people (and you, if you chose #2) are wrong. The ‘correct’ answer is #1. Here’s why – the experiment asked you a math question. Most people don’t see it that way, but that’s what it is. It asked about the probability of an event occurring, and the probability of two events occurring together can never be greater than the probability of either one occurring on its own.

To put it another way, the question the experiment asked you is like asking you:

Which of the following is more probable?
    1. That you went to work today.
    2. That you went to work today and you had a cup of coffee.
At best, the two probabilities can be equal, but #2 can never be greater than #1.
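
For the analytically minded, the conjunction rule is easy to verify with a few lines of code. The sketch below is ours, not part of Kahneman’s experiment, and the probabilities are made up purely for illustration:

import random

random.seed(42)

# Made-up probabilities, purely for illustration.
P_WORK = 0.95    # probability you went to work today
P_COFFEE = 0.80  # probability you had a cup of coffee

trials = 1_000_000
work = 0
work_and_coffee = 0

for _ in range(trials):
    went_to_work = random.random() < P_WORK
    had_coffee = random.random() < P_COFFEE
    if went_to_work:
        work += 1
        if had_coffee:
            work_and_coffee += 1

print(f"P(work)            ~ {work / trials:.4f}")
print(f"P(work AND coffee) ~ {work_and_coffee / trials:.4f}")
# Every 'work and coffee' trial is also a 'work' trial, so the joint
# count, and therefore the joint probability, can never exceed the
# single-event count.

Whatever numbers you plug in, the joint event is counted as a subset of the single event, so #2 can never come out ahead.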

So if you, like most participants in the study, chose #2, you made an ‘error’. This ‘error’ is predictable and, according to Kahneman and others, is an example of the irrationality of people. The message often drawn from this is that people are unreliable and can’t be trusted.

‘Human error’ is a common issue in many workplaces, and it is a big problem that many safety professionals actively try to deal with. Some people believe that ‘human error’ or ‘unsafe acts’ (or whatever we want to call it) is responsible for most of the accidents we have in the workplace. So we spend a lot of time in safety trying to understand and deal with ‘human error’ in all its forms, and studies like Linda the Bank Teller seem to confirm our suspicions – people are a problem to control.

But wait a minute…if you chose #2, why did you choose it? If you’re like most people, it’s probably because of all the additional information you were given. You used a stereotype, or a mental shortcut. Your mind filled in the blanks of Linda’s life based upon the information you had. Why does your mind do that?

Well, as psychologist Gerd Gigerenzer points out, it’s because this mental shortcut is correct much of the time and allows you to communicate more effectively and efficiently with other people. Think about it: the words people use in normal conversations carry a lot of ambiguity. We often aren’t very specific about what we’re talking about. So our minds fill in the blanks, using what is said as well as other contextual factors. And this works most of the time. It allows you to carry on conversations with others without a lot of confusion. Effectively, this makes you more socially intelligent.

Now wait a minute, none of the above about social intelligence makes choosing #2 any less of an ‘error’. But when we try to understand the ‘error’, we find that it makes people more successful overall in the sorts of environments they operate in. To put it another way, the ‘cause’ of the error in this experiment is the ‘cause’ of success most of the time.

And this is a common theme we see when you start to dig deeper into human performance – success and failure have the same causes. People have to deal with a world that is imperfect and complex, all the while with scarce resources. To make this work people make what Erik Hollnagel calls performance adjustments. These performance adjustments are remarkably successful most of the time…until they aren’t.

What does this mean for the safety professional? There are a few lessons here:
  1. We don’t want to eliminate the causes of ‘human error’. If people fail and succeed for the same reason – the performance adjustments they make – then eliminating their ability to adjust their performance will not only eliminate the errors, it will eliminate your successes as well. And this is not simply production success, but also safety success. Often these performance adjustments are the reason you are having far fewer accidents in your organization than you otherwise would. As Stephen Covey said, “seek first to understand”. We want to understand ‘error’ before trying to eliminate it.
  2. ‘Human error’ is never simply an issue with the ‘human’. Because success and failure have the same cause, the error you see is always only part of the issue. If people are making performance adjustments and this is what is causing the ‘errors’ you see, then you have to ask what they are adjusting their performance to. Those who try to deal with ‘error’ by focusing on the individual are dealing with this problem with one hand tied behind their back while blindfolded. Focus on the context of the performance adjustment you see and you’ll begin to understand why it made sense for the person to do what they did.
  3. Focus on what makes people more successful, not merely on what makes them fail less. Because these performance adjustments are tied to people’s attempts to achieve success and avoid failure, telling them to try harder, pay more attention, or be more mindful is unlikely to work. Instead, look for ways to make successful performance adjustments easier. For example, if we framed the Linda the Bank Teller problem by telling you exactly how many bank tellers there are and how many of those bank tellers are active in the feminist movement, it is less likely that you would have made the same mistake (see the sketch after this list). By contrast, if we simply tell you to pay more attention next time, many will make the same mistake again. By creating the conditions where people can be more successful, you enable success in a sustainable way.
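
To make the framing point in #3 concrete, here is a rough sketch of the kind of ‘natural frequency’ presentation Gigerenzer advocates. The counts below are invented for illustration only; the actual study used no such numbers:

# Invented counts, purely for illustration.
women_fitting_description = 1000
bank_tellers = 50            # of those 1,000, how many are bank tellers
feminist_bank_tellers = 45   # of those 50, how many are also feminists

print(f"Bank tellers:          {bank_tellers} of {women_fitting_description}")
print(f"Feminist bank tellers: {feminist_bank_tellers} of {women_fitting_description}")

# Framed as counts, the subset relationship is hard to miss:
assert feminist_bank_tellers <= bank_tellers

When the question is framed as counts rather than as an abstract probability judgment, the subset relationship becomes visible, and most people get the answer right without being told to ‘pay more attention’.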

If you have questions about how you can create sustainable performance adjustments in your organization contact us today! 

Monday, October 3, 2016

Rules Are Not Risk Controls

Anyone who has had any sort of formal training in safety management or hazard mitigation is familiar with the hierarchy of controls. To make sure we’re all on the same page, the hierarchy of controls is meant as a decision-making hierarchy to assist people in choosing the most effective risk mitigation measure. There are various versions of the hierarchy of controls, but a typical one is shown below.

Now most of the types of controls work by acting directly on the object creating the risk. For example, ventilation (an engineering control) reduces the concentration of toxic contaminants in a given atmosphere, reducing the risk. Wearing fall protection PPE limits how far one can fall, reducing the likelihood that one will fall far enough and land hard enough to cause injury. Elimination of a given process through design or redesign eliminates the risks inherent in that process.

However, administrative controls are interesting because they are the one control that actually does not directly act on the object creating the risk. For example, a procedure that requires isolation of hazardous energy sources before work can begin actually has no direct effect on the hazardous energy. Instead, the procedure is designed to influence the worker who will work around the hazardous energy sources.

When you think of other examples of administrative controls you will probably reach the same conclusion – the control itself (rules, procedures, regulations, training, etc.) doesn’t really manage risk. Instead, following the logic, administrative controls are designed to control people. It’s the people who control the risk.

If we were to illustrate this graphically, the other types of control measures on the hierarchy work in an almost linear fashion on the risk they are meant to control: you implement the control and the risk is reduced.*

Administrative controls, by contrast, have an intervening variable, the person.

To illustrate this point, imagine if the person did not adjust their performance as a result of the rule, either because they didn’t know about the rule or because the rule was unfollowable. In that case there would be no risk reduction as a result of the rule.
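
One way to see what that intervening variable does is with a toy model. This is our own illustrative sketch with made-up numbers, not a real risk-quantification method: an engineering control delivers its nominal reduction directly, while an administrative control delivers it only to the extent that the person knows about the rule and can actually follow it.

def engineering_control(risk, reduction):
    """Direct control: implement it and the risk is reduced."""
    return risk * (1 - reduction)

def administrative_control(risk, reduction, p_known, p_followable):
    """Mediated control: the reduction happens only through the person.

    p_known:      chance the worker knows the rule exists
    p_followable: chance the rule can actually be followed as written
    """
    p_person_adjusts = p_known * p_followable
    return risk * (1 - reduction * p_person_adjusts)

baseline = 1.0  # arbitrary units of risk

# Same nominal 80% reduction in both cases:
print(engineering_control(baseline, reduction=0.8))            # ~0.20
print(administrative_control(baseline, reduction=0.8,
                             p_known=0.9, p_followable=0.5))   # ~0.64

# If the rule is unknown or unfollowable, there is no reduction at all:
print(administrative_control(baseline, reduction=0.8,
                             p_known=0.0, p_followable=1.0))   # 1.0

The numbers are beside the point; what matters is that the administrative control’s entire effect passes through the person, so anything that degrades the person’s knowledge of the rule, or their ability to follow it, degrades the control.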

Now, the fact that rules and other types of administrative controls do not directly control risk and that people do seems pretty obvious, and so many of you are likely saying “so what?”

Well, here’s the thing: that simple intervening variable of the person may seem like a trivial point, but, in reality, it changes everything. The change is so profound that if you treat administrative controls (rules, regulations, policies, procedures, training, etc.) like you treat any other type of control, you will likely run into problems. As the famous astrophysicist Neil deGrasse Tyson said recently on Twitter – “In science, when human behavior enters the equation, things go nonlinear. That’s why Physics is easy and Sociology is hard.”

To illustrate, here are a few implications for rules, based on the idea that administrative controls don’t control risk – people do:
  1. The perspective of the rule follower is the only one that matters. For a rule to be effective it needs to make sense to the people who are meant to follow the rule. Often we see a violation of a rule and the first assumption is that the person is the one at fault because we clearly see how the rule makes sense to us. But we forget that our perspective doesn’t matter. We aren’t the one who has to follow the rule. This leads to the second point.
  2. Given how much we rely on rules (and similar), we should devote more attention to understanding the perspective of our workers. Much of safety is designed around creating a standard and then ensuring everyone follows the standard. But, building on the first point, perhaps we should begin to understand more about the people who work for us. How do they see the world? What makes sense to them? What do they see as the challenges that inhibit their ability to do safe work, and what do they think adding a rule would do to that? The most important attribute of a safety professional is empathy, and in this case we practice it by asking good questions.
  3. And in doing so we see that rules are often not used in the way we think they are – they are more like guidelines than rules. It’s pretty common to hear people speak of someone who violated a rule and point to other people in the organization saying “they aren’t having trouble following the rule.” But often we have no evidence to back up this claim. All we really know most of the time is that we don’t have evidence that people are violating the rule. Others could be better at covering it up, or, more commonly, others may not be violating the rule, but they aren’t following it either. Think about it: as people get better at a task they often do the task without much thought. This means that the written rule is not really doing much to enable their performance anymore. Oftentimes it’s quite the opposite, as we put rules in place without helping people know how to follow them. In some cases the workers find a way to do the work according to the rule in spite of the rule, not because of it.
  4. And then we see that calling them “controls” at all is misleading. A rule doesn’t have the ability to control anything because it is really nothing more than a “good idea” at best. It’s probably better to think of them as “influencing factors” or “guidelines.” Anyone who thinks that people can be easily controlled is obviously not a student of history or the social sciences.

None of this is meant to imply that rules and other administrative controls are not important and that they have no place in how an organization manages itself. Rather, it is to say that we cannot ignore the central role that our people have in managing risk. This is not a bad thing. In fact, given how much is reliant on people we should marvel at how effectively people manage risk, given that most of the time no accidents happen and “the trains run on time.”

So what’s our role here as leaders in the organization? Here are a few points to consider and discuss:
  • Rules should be resources for action. This means they should enable performance, i.e., help people know what they need to do to achieve goals. Ask your workers which rules help them get their job done and which ones they merely have to overcome to get the work done. That will give you a clue as to where your rules are adding value and where they are holding you back.
  • Have rules for your rules. We wrote a blog post about this in the past, so you should check that out if you’re interested.
  • Try conducting an Appreciative Audit on a rule or procedure. This requires taking a different lens to the audit – focus on identifying and appreciating how work is actually happening in a given process, without judgment. Choose one rule or procedure in your organization and trace its movement through your organization. Start where the rule was developed (and why) and work your way down to how it is implemented, taking time to review how people have adopted the rule into their practice. Is it being used as intended? Why or why not? Keep in mind what David Woods says – systems work as designed, but rarely as intended. What does the way the rule/procedure was implemented tell you about how your system was designed and is functioning?

* A quick note regarding risk reduction. In this blog we were a bit flippant with the idea of risk reduction. Bear in mind that risk reduction is far more complicated than we are making it. Often risks we think have been eliminated or reduced simply have been transferred to other places. That's another topic for another post though. 

Have a particularly troublesome issue with rule violations in your organization, or just looking for someone to bounce ideas off of regarding your administrative controls? Contact us today!

Thursday, September 1, 2016

Where Does the Failure Come From?

This is a rather basic question, but rather profound if you think about it. Where does the failure come from? For the safety professional, identifying the sources of failure in organizations, the causes of accidents, is crucial. The answer to this question guides where we spend our time, what problems we look for and what solutions we choose.

To illustrate this, from our perspective, there are two basic viewpoints on the answer to the question of where failure comes from. For the sake of simplicity, we will call them the Broken Parts and the Functioning System perspectives. Each perspective has its own assumptions as to what causes accidents, what causes safety, where the problems in organizations are, and what the safety professional should do about it. We will discuss each perspective in turn.

The Failure Comes from Broken Parts

The first perspective, the one held by many in the safety profession, is that when an organization experiences failure, such as an accident, the failure came about because something broke or failed. Everything was working fine, but then something or someone screwed up and that caused the failure. So, for example, an employee is injured when she puts her arm into a piece of equipment and is partially pulled in. From the Broken Parts perspective, the failure comes from erratic employees. The employee should not have put her arm in the machine to begin with. The broken part in this case is the employee, usually in her decision-making processes. If it weren’t for this, the accident would not have happened.

What causes accidents and what causes safety?

From the Broken Parts perspective, accidents are caused when something in the organization deviates from the intended design. This could be from equipment that fails, but most likely is from a person deviating from the rules or procedures put in place.

Conversely then, safety is created when everything and everyone follows the plan. It is only when we deviate that accidents happen. The organization itself is basically safe. It is only through deviation that unsafety creeps in.

Where do we look?

Following from the above, when an accident happens, all we have to do is look for those parts of our organization that deviated from the intended design. This is the so-called “root cause” or “root causes”.

If we want to prevent accidents, all we need to do is prevent deviations and variability in performance. We want things to be standard and uniform. Only when things begin to vary in their performance do we have problems.

What do we fix?

Again, it follows from the above that our job is to find those parts that are broken or in the process of breaking and either fix them or replace them. Typically the issue is one of human behavior, because people are typically less reliable than machines. So we need to fix the individual’s behavior through typical behavioral interventions, up to and including termination. Often we don’t go that far, so we just implement more controls in the form of rules, policies, procedures, observations, audits, etc. If we decide to get more “enlightened”, then we look for opportunities to engineer the human out of the system entirely through automation.

The Failure Comes from the Functioning System

The second perspective, one not yet widely held in the safety profession, is that failure results from how the system normally functions. Organizations are always working in a fluid, imperfect, resource-constrained world, which forces them to balance competing goals. This process of balancing is remarkably successful…until it’s not. Essentially, failure and success have the same causes. Using the example from above, the employee put her arm in the machine not because of a disregard for safety rules, but because there was no other way to do the task available to her. The organization simply couldn’t shut the machine down to do the task without cutting production almost in half. Further, this task was routinely done, multiple times a day, around the clock, every day without incident. So the day her arm was pulled into the machine was a day like any other…except that day the things that normally happened came together in abnormal ways to create the accident.

What causes accidents and what causes safety?

In the Functioning System perspective, accidents are an unintended consequence of normal performance variability. Put another way, accidents are an outcome that was designed into your system (as David Woods says, systems work as designed, but not as intended). People have to make trade-offs in order to function in an imperfect, complex and resource constrained world. In such an environment, deviations and variability in performance are normal and often required in order to get the job done. This does not make them right, but on days where you have accidents and days where you have none, you will have both deviations and variability.

Safety is created in organizations not when we force them to meet an unrealistic standard, but when we help facilitate successful performance. By assisting people in making better trade-offs, smarter adaptations and designing systems that work with people rather than constrain them we create expertise and safety.

Where do we look?

Following from the above, when an accident happens it provides us an opportunity to see how our system produced an outcome that we didn’t expect or intend and change that system. Essentially we are looking for how the system normally functions and why that functioning led to this negative outcome. This will tell us where the opportunities for improvement are.

Before accidents happen, because success and failure have the same cause, it makes no sense to wait for an accident. For the Functioning System perspective, similar things happen on both the days you have accidents and the days you do not. So you can learn just as much on days you have no accidents as on days you do. Looking for those parts of your system where work becomes difficult, where people have to overcome things in order to get work done, will help you find where risk is creeping into your system and where you can make work easier to get done.

What do we fix?

Finally, in the Functioning System perspective, focusing on any individual to improve things in the future makes no sense. People don’t fail like machines do, so blame seems a bit nonsensical in this light. But we obviously want fewer accidents and better performance in the future. So we look for ways to make it easier for people to accomplish goals, make sure that the proper resources are readily available (from the perspective of the worker, not yours) and find ways to streamline workflows in a way that makes success possible in varying conditions (i.e., resilient). We are more interested in facilitating performance than constraining it, and in harnessing the ability of people to adapt to their circumstances to achieve success. What this looks like will obviously vary depending upon the context of the work.

What’s Your Perspective?

In safety we often do not reflect on our worldview or mental models and how those can guide us down a path to where certain problems and solutions seem obvious and others seem crazy. We think it’s probably a good idea to take a step back every now and again to identify what your perspective is and ask whether it’s leading you in the direction you’re happy with. Many times, these worldviews are constructed in such a way that it’s very hard to identify the flaws in them from the inside. It’s only when we step outside ourselves, often with the help of others, that we see them. But we think it’s well worth it.

Clearly from the above, we think that when it comes to the question of where failures come from the Functioning System perspective is better, although it currently is not the most popular one. What do you think? Where does the failure come from?

Wednesday, June 1, 2016

Digging Out Root Cause

There’s been a push in the area where we live to require certain organizations to do “root cause analysis” for major incidents. Most of this push is for high-hazard industries, such as refining and chemical manufacturing, which makes sense, since if something goes wrong in these areas it can have very serious consequences. So state and local regulators are proposing and enacting regulations to force organizations to conduct a root cause analysis for major events (and perhaps some other events). You can view an example of these regulations here and here.

The underlying assumptions for this push are basically that (a) the current processes used by these chemical plants are flawed, and (b) doing a root cause analysis is a superior method for doing investigations. Now, we haven’t worked with all of the plants covered by the regulations, so we can’t really speak to the first assumption (a).

However, is root cause analysis really a good way to investigate accidents? At first glance the answer seems to be ‘yes’. The name “root cause analysis” implies a process that goes deeper into the organizational processes. And that is something we all want – a process that goes deeper. This should help us better identify and correct problems, right?

The question is not how deep the investigation goes. Rather, the question is whether the investigation gives us a clear enough picture of our operations to help us learn and improve future performance. Does root cause analysis do this?

Unfortunately, the answer is no. Now, this isn’t to say that there aren’t people who use root cause analysis and are improving operations after an accident. It’s just that we aren’t convinced that the root cause analysis is the reason for the improvement. In fact, we would argue that if organizations moved beyond traditional root cause analysis methodologies they might experience even more improvement than they would have otherwise.

The biggest issue with the search for root cause(s) is that it is so deceptively subjective and arbitrary. To understand what we mean, let’s look at the basic idea behind root cause analysis – to identify the root causes of a fault or accident. Root causes are defined as those factors that, if removed, would have prevented the accident. On the surface this seems very logical and objective. Just identify those things that, if removed, would have prevented the event from occurring.

In reality though, this is so subjective and prone to bias that it’s scary. Let’s look deeper at the logic here. Every effect has a cause. Therefore, every accident (which is an effect) has a cause. But every cause is also an effect. So you just keep treating each cause as an effect and determining the cause of that cause until you get to the so-called “root cause”. But doesn’t this mean that so-called “root causes” are also effects that have causes? Why wouldn’t the cause of the root cause be the actual root cause? How do we pick one or the other? Wouldn’t, in reality, the root cause of all accidents be Creation or the Big Bang (depending on your worldview)? As Jens Rasmussen points out, what we end up calling “root cause(s)” are often merely the points where we decided to stop investigating. This means that a “root cause” is not an objective element that we find, but rather something we create when the investigation stops. This makes root cause analysis entirely subject to the whims and biases of the investigators.
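
Rasmussen’s point is easy to demonstrate. Here is a small sketch of our own (the causal chain and the stop rules are invented for illustration): whatever stopping rule the investigator brings along determines which link gets labeled the ‘root cause’.

# An invented causal chain; each item is 'caused' by the one after it.
CAUSAL_CHAIN = [
    "employee's arm pulled into running machine",
    "task routinely performed on a running machine",
    "no way to do the task with the machine shut down",
    "production targets preclude shutdowns",
    "competitive pressure in the market",
    "the Big Bang",  # where the regress actually bottoms out
]

def find_root_cause(chain, stop_rule):
    """Walk back through cause-of-the-cause until the stop rule fires.

    Whatever cause the stop rule fires on gets labeled the 'root cause';
    the label marks where the investigation quit, not an objective
    feature of the world.
    """
    for cause in chain:
        if stop_rule(cause):
            return cause
    return chain[-1]

# Two investigators, two stop rules, two different 'root causes':
print(find_root_cause(CAUSAL_CHAIN, lambda c: "employee" in c))
print(find_root_cause(CAUSAL_CHAIN, lambda c: "production" in c))
# -> employee's arm pulled into running machine
# -> production targets preclude shutdowns

Same event, same chain, two different ‘root causes’ – the only thing that changed was where each investigator decided to stop.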

As an example, let’s apply root cause analysis thinking to an instance of someone getting hurt at work. What are the factors that, if removed, would have prevented the accident?
  • Getting up that morning.
  • Driving to work.
  • Not getting into a serious accident on the way to work.
  • The business being profitable the previous year, allowing the business to remain open.
  • The employee was hired.
  • The employee being born.
  • The person who invented the technology that the employee was working on being born. 

Obviously we’re being a little absurd, but the main reason the above are not typically considered “root causes” in a root cause analysis is because the organization chooses to do nothing about them (in one or two cases there’s nothing the organization could even do). But doesn’t that mean that root cause analysis is muddying the waters between learning and corrective action? If we only can call something a root cause if it’s something that we have the means and desire to fix, doesn’t that introduce huge potential for bias into the process?

The bottom line is that root cause analysis is distortion and subjectivity masked as clarity and objectivity. The world does not unfold in the way that root cause analysis has us looking at it. Our world is made up of a constant stream of events that are highly interconnected. Breaking them into bits is an extremely arbitrary process that will inevitably lead to an incomplete understanding of what we are looking at.

We recommend that the safety profession move beyond root cause analysis and begin to look at other methods for investigating failures. What that looks like may vary from place to place, situation to situation, but some general ideas to get you started include:
  1. Learn, then improve. Don’t identify fixes in your investigation until after you’re satisfied you’ve learned what you can from the event. The goal is to figure out how your system is working and why that led to a failure this time. If you go in there trying to fix before you understand you’ll blind yourself to innovative opportunities for improvement.
  2. Start back in time and move forward in your investigation. As much as possible we want to see the world the way those involved saw it. Processes that go backwards in time do the opposite, which will increase the potential for hindsight bias. Instead, go back a day, a week, a month or a year in time (or more), start there and move forward.
  3. Tell the story of the event. Rather than breaking the event up into parts and classifying them, put them all together into a coherent story. This will help you see how things worked together, which will help you not only understand each element more, but also the behavior of the whole system you’re looking at. Go up and out to look at the big picture, rather than down and in.
  4. Look for how things succeed to understand why they sometimes fail. People don’t break like machines. We do things that we believe will help us be successful. Therefore, don’t just look at failure in an investigation. Try to figure out why people’s behavior made sense to them, why it helped them to be more successful. Then you’ll understand the behavior more, which will then allow you to identify better fixes overall, if necessary.
  5. Get different perspectives involved. The best way to reduce bias, paradoxically, is to get more bias and diverse perspectives involved. If you get enough diverse bias involved the biases will begin to cancel each other out, leading you closer to what’s really going on. This means get employees, supervisors, engineers, etc. involved. Note that we said “involved”, not “interviewed as part of the investigation”. They should be involved both in the investigation as investigators, but also in identifying solutions to problems found.

Now, there are some who can do the above while still using some methodology that they call “root cause analysis”. That’s fine. You can call it whatever you want to call it, as long as you recognize that reality is not reducible to “root causes” and the process is more about understanding and managing bias rather than eliminating it. Accident investigation is a social process requiring empathy, collaboration and communication first. Everything else is secondary.

Monday, May 2, 2016

The Unintended Cost of Legal Culpability

Ask many in the safety profession what we need more of and a fair number will say – accountability. What they really mean is culpability, i.e., punishment for the guilty. We need people to face real consequences for their actions. We see scandal after scandal, major accident after major accident. The body count rises and out of our mourning we begin to look for reason in the senselessness. Naturally (and we mean that in the most literal sense of the word), we look for who is responsible. We look to the government to punish the guilty. For instance, we applaud those lawmakers who propose to enhance legal culpability for organizations, such as the recent bill to make it easier to prosecute pipeline operators in California.

The logic here is pretty straightforward – if we make it easier to punish organizational leaders, that should deter them from putting people in harm’s way just to make a profit!

There are some underlying assumptions with this logic that are worth pointing out. First, the assumption is that people act almost entirely based on incentives and reinforcement. This is familiar to many in safety. In safety we largely ignore the social sciences, but when we decide to pay attention to this area the behaviorist school of psychology dominates. To oversimplify a little bit, behaviorism is about carrots and sticks. People move toward carrots and run away from sticks. So, behaviorism predicts, increase the size of the stick and you will get less of the behavior you don’t want to see.

The second underlying assumption with the logic above is that leaders make risk decisions based upon a coherent calculation. They look at a decision, count up the carrots and the sticks and then do whatever the calculation tells them. If there are more carrots (or the carrots are more valuable) than the sticks, then they do whatever it is, and vice versa.

With these two assumptions the actions to increase culpability and blame for those after an accident make perfect sense.

But what if they are wrong?

Well, perhaps “wrong” is not the right way to characterize this, but maybe “only part of the story.” In our experience, the focus on creating legal culpability for safety has four unintended consequences.

  1. It creates risk calculations where none existed. The people who are the targets of pushes for legal culpability are…well, people. They don’t intend to hurt or kill people (they aren’t murderers). Often these decisions that seem like criminal events are merely mistakes, i.e., they think what they are doing is safe for all involved, but they are wrong (which is only easy to see in retrospect). By adding personal liability risk to those who make these decisions you change the decision-making process, which may create the very devious risk calculus it is designed to discourage. An example of this is the classic study where a day care center attempted to reduce the number of parents who were late to pick up their children by adding a late-pickup fee. The result? An increase in the number of parents late to pick up their child. Adding the personal penalty created a calculus that allowed parents to more easily justify picking up their child late. Adding legal culpability encourages people to weigh the costs, putting a tangible value on something that many would argue is invaluable – a human life.
  2. It inhibits transparency. One of the things we’ve consistently run into in our business when we write up a report is concern about what would happen if the report gets out, particularly in high-profile or public organizations. As a result we often go through a process where wording is challenged, lawyers have to review our findings, and sometimes information is suppressed. Organizations are so afraid of culpability that it inhibits responsibility. They don’t want to admit what’s really going on. But isn’t every organization imperfect? The fear of culpability has cowed organizations so much that we stop talking about the realities they face – the fierce competition, the scarce resources, the uncertainties in decision-making. We unrealistically expect perfection from organizations to such an extent that it inhibits their ability to improve by inhibiting the flow of information. Wouldn’t it be better to create an environment where the organization sees value in being honest?
  3. It discourages safety innovation. In an environment where you can be held personally liable for bad decisions, what incentive do you have to innovate? Legal culpability encourages more of the same. There is safety in numbers, and the focus on legal culpability pushes organizations to do little more than what everyone else is doing. Thinking outside the box involves risk. What if you are wrong? You chose to take a risk in a situation where safety was involved. No wonder the best the safety industry has to offer is new, shinier versions of what we’ve been doing for decades, regardless of whether there is evidence to suggest they are effective.
  4. It encourages blame of workers. Legal culpability puts organizations on the defensive. We have incentivized them to protect themselves. But when a disaster occurs we need an explanation. Fortunately for us, there’s an easy explanation that almost everyone is ready to buy into – worker error. Take the case of the tiger trainer who was recently mauled and blamed for her own death. There’s no discussion of what makes being a tiger trainer difficult or how normal organizational processes can lead to drift. Legal culpability creates an either/or scenario – either the organization is at fault or the worker is at fault. In that scenario, the worker has almost no chance. Now, when there’s a major disaster, such as the NASA shuttle disasters, Fukushima or Deepwater Horizon, where intense scrutiny is put on the organization, the organization’s contribution often becomes clearer. But most of the time the media doesn’t care unless the organization is implicated. We incentivize organizations to blame the worker in those cases. Learning and improvement suffer, which largely guarantees history will repeat itself.

Now, we aren’t saying necessarily that we should do away with the legal system entirely, nor are we saying that there should not be consequences for law-breaking. The legal system serves more purposes than just ensuring safety. However, we are safety professionals, not lawyers. The idea that more legal culpability for organizations and leaders in organizations will give us more safety is, at best, not as clear-cut as it seems at first glance. At worst, it may be distracting us from other ways to influence organizations to make better decisions to reduce risk and enhance performance. After all, sure we want more accountability in organizations, but if we achieve accountability at the cost of communication, transparency, learning and innovation, what have we really gained?