Tuesday, May 16, 2017

What are you doing that for?

Take a moment and do a brief exercise with us. Grab a piece of paper and in 30 seconds write out all of the things you do to help manage safety in your organization. Go ahead, we’ll wait.

Got it?

Ok, take a look at your list. It likely includes things like:
  • Write safety procedures
  • Do audits or inspections
  • Facilitate training or safety meetings
  • Model “safe behaviors”
  • Investigate accidents and incidents 

Your list might have more or fewer items, or you might call these things something different. Take a moment to reflect on this question as you look at the list – what are you trying to achieve with each of these? We don’t mean the obvious things like “we train so that people know the safe way to do the job.” We mean what is the overall goal of all of these things we are trying to do? It’s to prevent accidents, reduce risk, control hazards, and eliminate “unsafe acts.”

That’s all well and good. After all, no one wants anyone to get hurt at work. But telling people what you don’t want (accidents, incidents, negative things) doesn’t tell them what you do want. And sometimes we can be so focused on what we do not want that we create processes that inhibit us from getting what we do want.

Let’s take a deeper look at this – the avoidance of accidents taps into the innate human instinct for survival. Certainly no one wants to get hurt at work, so the things on your list likely tap into that survival motivation.

But that’s not the only thing that motivates people.

People do not exist merely to survive. Instead, we have psychological motivators that we use to give meaning to our lives. For example, some powerful intrinsic motivators include autonomy, mastery, and purpose. These motivators are acted out every day, in small and big ways, in what we do at work. Take, for example, the construction worker who expresses himself by putting stickers on his hardhat (autonomy). Or the sense of pride an engineer has when she works out a difficult design problem (mastery). Or the water utility worker who cuts a procedural corner to get a homeowner’s water on faster so they can bathe their kids before school (purpose).

Take a look back at your list of safety management tasks. How many of them tap into these intrinsic motivators? How many of them are in conflict? Here are some examples for consideration:
  • When we spell out the one best way to do a job through a procedure and force workers to follow the procedure without question, how would that affect their sense of autonomy?
  • When we tell them that safety is the number one priority (or value if that’s the language you prefer), essentially saying that survival is the goal of the organization, what effect would that have on their sense of the organization’s purpose?
  • When we create processes (audits, observations, inspections) to constantly check to ensure that employees are doing it the right way, implying that they can’t do it without supervision, how does that develop a sense of mastery over the work processes?

It is important that we constantly evaluate what we are trying to achieve with our safety management processes. Safety management for the sake of safety management is not only confusing, it is counterproductive. Instead, look for ways to align your safety management processes with the psychological motivators (autonomy, mastery, and purpose) innate in all of us. You will not only improve the effectiveness of your safety processes (because you’re working with people instead of in spite of them), but you will also create welcome side effects such as increased trust, employee engagement, job satisfaction, and productivity.



Friday, November 4, 2016

Understanding 'Error'

Within decision psychology there’s a famous experiment, described in Daniel Kahneman’s book, Thinking, Fast and Slow, known as the Linda the Bank Teller experiment. The experiment goes like this – participants are given the following information:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Participants are then asked the following question (see which answer you would choose):

Which of the following is more probable?
    1. Linda is a bank teller.
    2. Linda is a bank teller and is active in the feminist movement.

If you chose #2, then you are like most people. Most of the participants in the study identified that it is more probable that Linda is a bank teller and is active in the feminist movement than her being simply a bank teller.

However, most people (and you, if you chose #2) are wrong. The ‘correct’ answer is #1. Here’s why – the experiment asked you a math question. Most people don’t necessarily see it that way, but that’s what it is. It asked about the probability of an event occurring. And the probability of two events occurring together can never be greater than the probability of one of those events occurring on its own.

To put it another way, the question the experiment asked you is like asking you:

Which of the following is more probable?
    1. That you went to work today?
    2. That you went to work today and you had a cup of coffee?
At best, the two probabilities can be equal, but #2 can never be greater than #1.
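The arithmetic can be sketched in a few lines of Python. The probabilities below are our own invented numbers, purely for illustration; the logic holds for any values:

```python
# Why P(A and B) can never exceed P(A) -- numbers invented for illustration.
p_work = 0.95                # P(you went to work today)
p_coffee_given_work = 0.80   # P(you had coffee, given you went to work)

# The joint probability is the single-event probability scaled down
# by a conditional probability that can be at most 1.0.
p_work_and_coffee = p_work * p_coffee_given_work

assert p_work_and_coffee <= p_work  # always holds, for any valid probabilities
```

Only when the conditional term equals 1.0 are the two probabilities equal – exactly the “at best” case above.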

So, if you, like most participants in the study, chose #2, you made an ‘error’. This ‘error’ is one that is predictable and, according to Kahneman and others, is an example of the irrationality of people. The message that is often told is that people are unreliable and can’t be trusted.

‘Human error’ is a common issue in many workplaces, and a big problem that many safety professionals actively try to deal with. Some people believe that ‘human error’ or ‘unsafe acts’ (or whatever we want to call it) is responsible for most of the accidents we have in the workplace. So we spend a lot of time in safety trying to understand and deal with ‘human error’ in all its forms, and studies like the Linda the Bank Teller study seem to confirm our suspicions – people are a problem to control.

But wait a minute…if you chose #2, why did you choose #2? If you’re like most it’s probably because of all the additional information that was given to you. You used a stereotype, or a mental shortcut. Your mind filled in the blanks of Linda’s life based upon the information you had. Why does your mind do that?

Well, as psychologist Gerd Gigerenzer points out, it’s because this mental shortcut is correct much of the time and allows you to communicate more effectively and efficiently with other people. Think about it – if you listen to the words people typically use in normal conversations, there’s a lot of ambiguity. We often aren’t very specific about what we’re talking about. So our mind fills in the blanks, using the information that is said, as well as other contextual factors. And this works most of the time. It allows you to carry on conversations with others without a lot of confusion. Effectively, this makes you more socially intelligent.

Now wait a minute, none of the above about social intelligence makes choosing #2 any less of an ‘error’. But when we try to understand the ‘error’ we find that this ‘error’ makes people more successful overall in the sorts of environments they operate in. To put it another way, the ‘cause’ of the error in this experiment is the ‘cause’ of success most of the time.

And this is a common theme we see when you start to dig deeper into human performance – success and failure have the same causes. People have to deal with a world that is imperfect and complex, all the while with scarce resources. To make this work people make what Erik Hollnagel calls performance adjustments. These performance adjustments are remarkably successful most of the time…until they aren’t.

What does this mean for the safety professional? There’s a few lessons learned here:
  1. We don’t want to eliminate the causes of ‘human error’. If people fail and succeed for the same reason – the performance adjustments they make – then eliminating their ability to adjust their performance will not only eliminate the errors, it will eliminate your successes as well. And this is not simply production success, but also safety success. Often these performance adjustments are the reason you are having far fewer accidents in your organization than you otherwise would. As Stephen Covey said, “seek first to understand”. We want to understand ‘error’ before trying to eliminate it.
  2. ‘Human error’ is never simply an issue with the ‘human’. Because success and failure have the same cause, the error you see is always only part of the issue. If people are making performance adjustments and this is what is causing the ‘errors’ you see, then you have to ask what they are adjusting their performance to. Those who try to deal with ‘error’ by focusing on the individual are dealing with this problem with one hand tied behind their back while blindfolded. Focus on the context of the performance adjustment you see and you’ll begin to understand why it made sense for the person to do what they did.
  3. Focus on what makes people more successful, not merely on what makes them fail less. Because these performance adjustments are tied to people’s attempts to achieve success and avoid failure, telling them to try harder, pay more attention, or be more mindful is unlikely to work. Instead, look for ways to make successful performance adjustments easier. For example, if we framed the Linda the Bank Teller problem by telling you exactly how many bank tellers there are and how many of those bank tellers are active in the feminist movement, it is less likely that you would have made the same mistake. By contrast, if we simply tell you to pay more attention next time, it’s likely many will make the same mistake again in the future. By creating the conditions for success, you enable people to be more successful in a sustainable way.
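To make that frequency framing concrete, here is a minimal sketch. The counts are entirely hypothetical – the original study gave no such numbers – but stated this way, the nesting of the two options is hard to miss:

```python
# Hypothetical frequency framing of the Linda problem; all counts are invented.
women_like_linda = 1000
bank_tellers = 50       # women in the group who are bank tellers
feminist_tellers = 20   # of those tellers, the ones also active in the movement

# Every feminist bank teller is already counted among the bank tellers,
# so option 2 can never describe more women than option 1.
assert feminist_tellers <= bank_tellers

p_option_1 = bank_tellers / women_like_linda      # P(teller)
p_option_2 = feminist_tellers / women_like_linda  # P(teller AND feminist)
assert p_option_2 <= p_option_1
```

Gigerenzer’s point is that people reason far more accurately with counts like these than with abstract probabilities – a change to the conditions of the task, not to the person.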

If you have questions about how you can create sustainable performance adjustments in your organization contact us today! 

Monday, October 3, 2016

Rules Are Not Risk Controls

Anyone who has had any sort of formal training in safety management or hazard mitigation is familiar with the hierarchy of controls. To make sure we’re all on the same page, the hierarchy of controls is meant as a decision-making hierarchy to assist people in choosing the most effective risk mitigation measure. There are various versions of the hierarchy of controls, but a typical one is shown below.


Now most of the types of controls work by acting directly on the object creating the risk. For example, ventilation (an engineering control) reduces the number of toxic contaminants in a given atmosphere, reducing the risk. Fall protection PPE arrests a fall before the worker can drop far enough and land hard enough to cause injury. Elimination of a given process through design or redesign eliminates the risks inherent in that process.

However, administrative controls are interesting because they are the one control that actually does not directly act on the object creating the risk. For example, a procedure that requires isolation of hazardous energy sources before work can begin actually has no direct effect on the hazardous energy. Instead, the procedure is designed to influence the worker who will work around the hazardous energy sources.

When you think of other examples of administrative controls you will probably reach the same conclusion – the control itself (rules, procedures, regulations, training, etc.) doesn’t really manage risk. Instead, administrative controls, following the logic, are designed to control people.  It’s the people who control the risk.

If we were to illustrate this graphically, the other types of control measures on the hierarchy work in an almost linear fashion on the risk they are meant to control. You implement the control and the risk is reduced.*



Administrative controls, by contrast, have an intervening variable, the person.


To illustrate this point, imagine if the person did not adjust their performance as a result of the rule, either because they didn’t know about the rule or because the rule was unfollowable. In that case there would be no risk reduction as a result of the rule.
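One way to picture this intervening variable is a toy model – entirely our own construction, with made-up effectiveness numbers – in which an administrative control’s effect is gated by whether the person actually adjusts their behavior:

```python
# Toy model contrasting a control that acts directly on the hazard with an
# administrative control that acts only through the person. All numbers and
# the model itself are illustrative assumptions, not measured values.

def engineering_control(base_risk: float, effectiveness: float) -> float:
    """Acts directly on the hazard; risk drops whenever the control is in place."""
    return base_risk * (1.0 - effectiveness)

def administrative_control(base_risk: float, effectiveness: float,
                           p_followed: float) -> float:
    """A rule reduces risk only to the extent that people actually follow it."""
    return base_risk * (1.0 - effectiveness * p_followed)

base = 1.0
residual_direct = engineering_control(base, 0.9)
residual_rule_followed = administrative_control(base, 0.9, 1.0)
residual_rule_ignored = administrative_control(base, 0.9, 0.0)

# An unknown or unfollowable rule leaves the risk exactly where it started.
assert residual_rule_ignored == base
# Only with full adoption does the rule match the direct control.
assert abs(residual_rule_followed - residual_direct) < 1e-9
```

As `p_followed` drops below 1.0, the residual risk climbs back toward the baseline – the point being that the person, not the paper, carries the entire effect.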

Now, the fact that rules and other types of administrative controls do not directly control risk and that people do seems pretty obvious, and so many of you are likely saying “so what?”

Well, here’s the thing: that simple intervening variable of the person may seem like a trivial point, but, in reality, it changes everything. The change is so profound that if you treat administrative controls (rules, regulations, policies, procedures, training, etc.) like you treat any other type of control, you will likely run into problems. As the famous astrophysicist Neil deGrasse Tyson said recently on Twitter – “In science, when human behavior enters the equation, things go nonlinear. That’s why Physics is easy and Sociology is hard.”

To illustrate, here’s a few implications for rules based on the idea that administrative controls don’t control risk, people do:
  1. The perspective of the rule follower is the only one that matters. For a rule to be effective it needs to make sense to the people who are meant to follow the rule. Often we see a violation of a rule and the first assumption is that the person is the one at fault because we clearly see how the rule makes sense to us. But we forget that our perspective doesn’t matter. We aren’t the one who has to follow the rule. This leads to the second point.
  2. Given how much we rely on rules (and similar), we should devote more attention to understanding the perspective of our workers. Much of safety is designed around creating a standard and then ensuring everyone follows the standard. But, building on the first point, perhaps we should begin to understand more about the people who work for us. How do they see the world? What makes sense to them? What do they see as the challenges that inhibit their ability to do safe work, and what do they think adding a rule would do to that? The most important attribute of a safety professional is empathy, and we need to practice it in this case by asking good questions.
  3. And in doing so we see that rules are often not used in the way we think they are – they are more like guidelines than rules. It’s pretty common to hear people speak of someone who violated a rule and point to other people in the organization, saying “they aren’t having trouble following the rule.” But often we have no evidence to back up this claim. All we really know most of the time is that we don’t have evidence that people are violating the rule. Others could be better at covering it up, or, more commonly, others may not be violating the rule, but neither are they actively following it. Think about it – as people get better at a task they often do the task without much thought. This means that the written rule is not really doing much to enable their performance anymore. Oftentimes it’s quite the opposite, as we just put rules in place without helping people know how to follow them. In some cases the workers find a way to do the work according to the rule in spite of the rule, not because of it.
  4. And then we see that calling them “controls” at all is misleading. A rule doesn’t have the ability to control anything because it is really nothing more than a “good idea” at best. It’s probably better to think of them as “influencing factors” or “guidelines.” Anyone who thinks that people can be easily controlled is obviously not a student of history or the social sciences.

None of this is meant to imply that rules and other administrative controls are not important and that they have no place in how an organization manages itself. Rather, it is to say that we cannot ignore the central role that our people have in managing risk. This is not a bad thing. In fact, given how much is reliant on people we should marvel at how effectively people manage risk, given that most of the time no accidents happen and “the trains run on time.”

So what’s our role here as leaders in the organization? Here are a few points to consider and discuss:
  • Rules should be resources for action. This means they should enable performance, i.e., help people know what they need to do to achieve goals. Ask your workers which rules help them get their job done and which ones they merely have to overcome to get the work done. That will give you a clue as to where your rules are adding value and where they are holding you back.
  • Have rules for your rules. We wrote a blog about this in the past, so you should check that out if you’re interested.
  • Try conducting an Appreciative Audit on a rule or procedure. This requires taking a different lens to the audit – focus on identifying and appreciating how work is actually happening in a given process, without judgment. Choose one rule or procedure in your organization and trace its movement through your organization. Start where the rule was developed (and why) and work your way down to how it is implemented, taking time to review how people have adopted the rule into their practice. Is it as was intended? Why or why not? Keep in mind what David Woods says – systems work as designed, but rarely as intended. What does how the rule/procedure was implemented tell you about how your system was designed and is functioning?


* A quick note regarding risk reduction. In this blog we were a bit flippant with the idea of risk reduction. Bear in mind that risk reduction is far more complicated than we are making it. Often risks we think have been eliminated or reduced simply have been transferred to other places. That's another topic for another post though. 



Have a particularly troublesome issue with rule violations in your organization, or just looking for someone to bounce ideas off of regarding your administrative controls? Contact us today!

Thursday, September 1, 2016

Where Does the Failure Come From?

This is a rather basic question, but rather profound if you think about it. Where does the failure come from? For the safety professional, identifying the sources of failure in organizations, the causes of accidents, is crucial. The answer to this question guides where we spend our time, what problems we look for and what solutions we choose.

To illustrate this, from our perspective, there are two basic viewpoints on the answer to the question of where the failure comes from. For the sake of simplicity, we will just call them the Broken Parts and the Functioning System perspectives. Each perspective has its own assumptions as to what causes accidents, what causes safety, where the problems in organizations are, and what the safety professional should do about it. We will discuss each perspective in turn.

The Failure Comes from Broken Parts

The first perspective, the perspective held by many in the safety profession, is that when an organization experiences failure, such as an accident, the failure came because something broke or failed. Everything was working fine, but then something or someone screwed up and that caused the failure. So, for example, an employee is injured when she put her arm into a piece of equipment and is partially pulled in. The failure from the perspective of the Broken Parts camp comes from erratic employees. The employee should not have put her arm in the machine to begin with. The broken part in this case is the employee, usually in their decision-making processes. If it weren’t for this then the accident would not have happened.

What causes accidents and what causes safety?

From the Broken Parts perspective, accidents are caused when something in the organization deviates from the intended design. This could be from equipment that fails, but most likely is from a person deviating from the rules or procedures put in place.

Conversely then, safety is created when everything and everyone follows the plan. It is only when we deviate that accidents happen. The organization itself is basically safe. It is only through deviation that unsafety creeps in.

Where do we look?

Following from the above, when an accident happens, all we have to do is look for those parts of our organization that deviated from the intended design. This is the so-called “root cause” or “root causes”.

If we want to prevent accidents, all we need to do is prevent deviations and variability in performance. We want things to be standard and uniform. Only when things begin to vary in their performance do we have problems.

What do we fix?

Again, it follows the above that our job is to find those parts that are broken or in the process of breaking and either fix them or replace them. Again, typically the issue is one of human behavior, because people are typically less reliable than machines. So we need to fix the individual’s behavior through typical behavior interventions, up to and including termination. Often we don’t go that far though, so we just implement more controls in the form of rules, policies, procedures, observations, audits, etc. If we decide to get more “enlightened” then we look for opportunities to engineer the human out of the system entirely through automation.

The Failure Comes from the Functioning System

The second perspective, one not yet held by many in the safety profession, is that failure results from how the system normally functions. Organizations are always working in a fluid, imperfect, resource constrained world, which forces them to balance competing goals. This process of balancing is remarkably successful…until it’s not. Essentially, failure and success have the same causes. Using the example from above, the employee put her arm in the machine not because of a disregard for safety rules, but because there was no other way to do the task available to her. The organization simply couldn’t shut the machine down to do the task the employee was doing without cutting its production almost in half. Further, this task was routinely done, multiple times a day, around the clock, every day without incident. So the day her arm was pulled into the machine was a day like any other…except that day the things that normally happened came together in abnormal ways to create the accident.

What causes accidents and what causes safety?

In the Functioning System perspective, accidents are an unintended consequence of normal performance variability. Put another way, accidents are an outcome that was designed into your system (as David Woods says, systems work as designed, but not as intended). People have to make trade-offs in order to function in an imperfect, complex and resource constrained world. In such an environment, deviations and variability in performance are normal and often required in order to get the job done. This does not make them right, but on days where you have accidents and days where you have none, you will have both deviations and variability.

Safety is created in organizations not when we force them to meet an unrealistic standard, but when we help facilitate successful performance. By assisting people in making better trade-offs, smarter adaptations and designing systems that work with people rather than constrain them we create expertise and safety.

Where do we look?

Following from the above, when an accident happens it provides us an opportunity to see how our system produced an outcome that we didn’t expect or intend and change that system. Essentially we are looking for how the system normally functions and why that functioning led to this negative outcome. This will tell us where the opportunities for improvement are.

Before accidents happen – because success and failure have the same cause – it makes no sense to wait for an accident to happen. For the Functioning System perspective, similar things happen on both the days you have accidents and the days you do not. So you can learn just as much on days you have no accidents as on days you do. Looking for those parts of your system where work becomes difficult, where people have to overcome things in order to get work done, will help you find where risk is creeping into your system and where you can make work easier to get done.

What do we fix?

Finally, in the Functioning System perspective, focusing on any individual to improve things in the future makes no sense. People don’t fail like machines do, so blame seems a bit nonsensical in this light. But we obviously want fewer accidents and better performance in the future. So we look for ways to make it easier for people to accomplish goals, make sure that the proper resources are readily available (from the perspective of the worker, not your perspective) and find ways to streamline workflows in a way that makes success possible in varying conditions (i.e., resilient). We are more interested in facilitating performance than constraining it, and in harnessing the ability of people to adapt to their circumstances to achieve success. What this looks like will obviously vary depending upon the context of the work.

What’s Your Perspective?

In safety we often do not reflect on our worldview or mental models and how those can guide us down a path to where certain problems and solutions seem obvious and others seem crazy. We think it’s probably a good idea to take a step back every now and again to identify what your perspective is and ask whether it’s leading you in the direction you’re happy with. Many times, these worldviews are constructed in such a way that it’s very hard to identify the flaws in them from the inside. It’s only when we step outside ourselves, often with the help of others, that we see them. But we think it’s well worth it.

Clearly from the above, we think that when it comes to the question of where failures come from the Functioning System perspective is better, although it currently is not the most popular one. What do you think? Where does the failure come from?