Tuesday, September 29, 2015

Sacrificing Decisions – The Upside of a Late Arrival

As we write this, a few of us are on our way to the National Safety Council’s Congress and Expo in Atlanta (although most likely the event will be past by the time you read this). Traveling from the west coast of the US to the other side of the country obviously involves air travel, an experience most people are familiar with. Air travel comes with a consistent range of experiences, from the pleasantly predictable (let’s face it, one thing we don’t want when traveling miles above the ground is surprise) to the mildly annoying (for example, arriving a few minutes late or getting stuck in the seat next to exactly the wrong person).

Sometimes, though, something really disruptive happens, and people are understandably frustrated. One such event happened to us. While traveling on Southwest Airlines, boarding a connecting flight in Houston on our way to Atlanta, we all got on the plane and settled into our seats, and then the captain came on to say there’d be a short delay while the maintenance crew looked at an electrical problem. About five minutes later, though, the captain came back on to say that we needed to change planes. What a hugely disruptive event! We all had to deplane and move to another gate; the people waiting at that gate for their own flight had to move to yet another gate; the luggage had to be moved (talk about a cascading event!). We took off almost an hour late, so we were obviously late arriving. Not a big deal for us, but for people with tight schedules (such as their own connecting flights), that could be a big deal! The flight attendants and crews were clearly flustered and very apologetic. Many of the passengers were clearly annoyed.

But we couldn’t have been happier.

During the event we couldn’t help but think of a quote from Sidney Dekker’s book Drift into Failure that is worth some reflection:

In making these trade-offs, however, there is a feedback imbalance. Information on whether a decision is cost-effective or efficient can be relatively easy to get. An early arrival time is measurable and has immediate, tangible benefits. How much is or was borrowed from safety in order to achieve that goal, however, is much more difficult to quantify and compare. If it was followed by a safe landing, apparently it must have been a safe decision.

Southwest made what we call in the safety business a sacrificing decision. In the constant trade-off and balance between safety and production, production has a number of inherent advantages. One of the primary advantages is what Dekker alludes to in the above quote – if you sacrifice production you always get immediate and consistent feedback. You know the consequences of your decision because the job doesn’t get done as planned (on time, on budget, etc.). And we all know that this is the case. We know that if we choose safety over production there will be (negative) consequences, and that if we borrow a little from safety to enhance production there will also be (positive) consequences. This knowledge creates an inherent pressure (sometimes called “faster, better, cheaper” pressure, after the famous NASA goal of the ’90s), a tension pushing us toward the boundaries of safe work.

The problem that many safety professionals don’t want to admit is that this is almost always a good thing: it leads to innovation, efficiency improvements, and many other benefits that we all enjoy. Who doesn’t want to arrive at their destination early when flying?

These faster, better, cheaper pressures are a good thing…until they aren’t. There is such a thing as too much of a good thing. Everyone knows that, but the tricky part is balancing the trade-offs so you get the most out of the “good thing” without the “too much”. How do you know where those boundaries are, especially since they change over time?

That’s the tough question we try to answer in safety, and it is worthy of a discussion of its own. But the point we’re trying to make is that sometimes in the safety world we make it seem like an obvious and trivial thing to make a sacrificing decision, sacrificing production for safety. Especially after an accident, we chastise those who didn’t stand up and do the “right thing”. But making sacrificing decisions is not easy, because the pressures are almost always pushing you away from choosing safety over production.

Do you want to know what makes it harder? Punishing those who make the sacrificing decision for safety. For example, if we had yelled at the crews when we changed planes, blaming Southwest for disrupting us and making us late, we would have made it harder for someone to choose safety over production next time. We would have been contributing to the pressure to keep quiet.

What about your organization? How can you make it easier for people to make sacrificing decisions, where they sacrifice production for safety? Here are some things to consider:
  1. Celebrate every time someone chooses safety over production. Yes, you can overdo it, but there is no pressure within your organization to put safety over production unless you create it, whereas there is always pressure to put production over safety. If you punish or even just ignore people who choose safety over production you are making things worse.
  2. Provide training and coaching on how to make sacrificing decisions. Saying that everyone has “stop work” authority is easy, but exercising it in real life is hard. Employees at all levels need to learn how to recognize situations where stopping work is warranted. What should they look for? What signs point to a situation where they should be wary? And what process should they use? Give them the tools they need to be successful.
  3. Identify a process for “breaking ties”. Again, saying that everyone is responsible for stopping unsafe work is easy, but sometimes there will be disagreement as to whether the job is unsafe or not. Who gets to mediate those disputes? The answer many default to is the supervisor or manager, but keep in mind the incentives and biases that may influence that person’s decisions (this isn’t to say they are a bad person, just that they are a normal human being like the rest of us). They may not be the best person to make that decision. So what process can you use? Who gets the final decision? Is there an appeals process? The devil is in the details, so think this through and get your employees involved.

So, in line with our first recommendation – thank you, Southwest Airlines! Note that we don’t necessarily endorse Southwest Airlines, nor can we vouch for their safety management systems in all cases (we just don’t have that knowledge). But in this specific case they chose the safety of their workers and their passengers (us) over production, and they should be recognized for it. (Plus the new plane we moved to had wifi, whereas the other one did not, so that was a side benefit.)

Wednesday, September 9, 2015

Who Holds the Rule Makers Accountable?

A discussion recently posted in a number of LinkedIn groups asked how to deal with a situation where a worker keeps doing the “wrong” thing. By “wrong” we assume they mean that the person violated a rule or some practice and hence did something “unsafe” (whatever that means). It’s a fair question, because there will be times when someone repeatedly violates some rule or procedure you put in place as a safety professional, and that can be supremely frustrating. After all, you put these rules in place to help keep them safe. So if someone violates your rule, it easily seems like they are acting outside of their own self-interest (i.e., in a way that could get them hurt).

The question generated a lot of responses in some of the groups, and the responses were pretty interesting. One of the most common revolved around the idea that we need to hold those workers accountable. Some took a positive spin on this, saying we need to find ways to motivate workers and reward them for safe behaviors. Some pointed to flaws in the management of the organization, saying that without management on board you won’t make any progress. Still others pointed to the need for discipline, whether that meant firing employees or, as some even advocated, docking their pay.

All of these solutions are interesting and worthy of discussion, but they all have one thing in common – they see the people as the problem that must be fixed. Think about your own answer to the question asked above. How would you solve it? If you’re like most you probably went right toward finding solutions for the behavior. Therefore the behavior (the individual) is the problem and the only question is what solution we can recommend to fix that behavior.

Take a step back from the problem for a second. If someone violates a rule repeatedly, is the behavior the only potential problem? Isn’t there at least one more option - what if the problem is that the rule is a bad rule?

We find it disturbing but not surprising that very few safety professionals respond to the question with a solution that first requires us to look at the rule. With very few exceptions, safety professionals respond to a problem like the one posed above with an implicit assumption that the problem is localized within the person. That’s such a powerful assumption that we don’t even take the time to think about other options. Like a reflex, we see the issue and we immediately and unconsciously know what the problem is. We don’t ask questions because we already understand what’s going on. It’s another troublemaker, another bad apple!

Ok, let’s assume it’s possible that you could merely have a bad apple who must be removed (we don’t necessarily buy that, though). After all, people are fallible. The question then becomes: could there ever be a bad apple who writes rules? Think about it - rules are written by people, just like work is done by people. The difference is that when we do work we (usually) have systems in place to check whether the work we’ve done is actually effective. Very few organizations have anything remotely close to this for the rules they write.

This is particularly true for “safety rules.” Once you put a safety rule in place, it is often sacrosanct. You can’t question it without being labeled as someone who doesn’t care about safety. There’s no need to check whether the rule is actually working, because, after all, it’s a safety rule. It would work if only people would follow it!

Why do we have rules though?

A rule is not just a standard. If the only reason you have rules is so that you have standards to hold people to, then a rule is merely a means for you to easily find the cause of accidents (blame), and it has little or no relationship with helping to create safety in your organization. That sounds harsh, but it’s true if you think about it. If you have a rule that you can only enforce after an accident happens, then that is a really bad rule and isn’t doing you much good at all.

Really though, a rule is supposed to be a means of transferring knowledge with the goal of influencing behavior. Not everyone knows the best ways to do certain things, nor the things that will lead to bad outcomes (e.g., accidents), so we create rules that transfer that knowledge in a way that makes it very hard for people not to use it (because we provide an artificial consequence). So the rule should be tied not only to artificial consequences but also to real outcomes in the environment where the rule is used. That means if someone is routinely violating a rule, it’s entirely possible that the rule is not really tied to real outcomes in the environment. To put it in plainer terms – maybe the rule you put in place to keep people safe isn’t working.

Does your organization have anything in place to identify this? Probably not. But if we buy into the idea that people are fallible, and, to take it further, if we buy into the idea that our world is fluid and complex, we would have to admit that our rules are sometimes (and perhaps often, if we’re honest) deeply flawed. So if someone violates a rule repeatedly then we have to admit that there is a distinct possibility that the rule is the problem, not necessarily the person.

But if your organization is like most organizations there are very few systems in place to identify this. There are probably detailed processes designed to hold workers accountable for violating rules. But there are probably no processes in place to hold those who write the rules accountable.

The question we ask then is this – what does this say about the assumptions of our profession, and what effect will it have on our practices?

Tuesday, September 1, 2015

So What Do We Measure Then? – Creating Leading Indicators

A few weeks ago we wrote a post about the problems with measuring safety only by its absence through the use of incident rates. Then we wrote a post about how to think about creating new indicators of safety. As we discussed, you really need to make sure your indicators are specific to how your organization creates safety. Now, we want to spend some time talking about actually creating those indicators, including how to avoid some of the traps organizations fall into.

First things first, we need to get rid of the idea that “if you can’t measure it then you can’t manage it”. That slogan sounds really good, but it’s simply not true. There are plenty of things that we manage every day without measuring them. Some of the most important things in life are immeasurable. Take, for instance, love or happiness or hatred or consciousness. All of these are abstract concepts that are vastly important to the human experience, but we have no way to measure them directly.

Now, don’t get us wrong, you can measure the effects of these things. For example, people who are happy often report being happy and display behaviors that we associate with happiness (e.g., smiling). We can measure those things, but those are not direct measures of happiness. People who are unhappy can do the exact same things convincingly enough while having low levels of happiness (at least we think so). So the things we can measure are merely indicators of what we want to measure (but can’t). Yet somehow we still manage our happiness and attempt to manage the happiness of others. Certainly things that you can measure may be easier to manage, but that doesn’t mean things you can’t measure are unmanageable.

In the same way, safety is immeasurable. You cannot measure how much safety an organization has or does because safety is an abstract concept. The systems folks would say that safety is an emergent property of a complex system. We can’t identify it or measure it directly, but we can feel its effects. Yet, we still have to manage it.

If that’s true, then how do we know we’ve achieved anything resembling success? We need to take the focus off of measuring safety and look for indicators, things that would indicate the presence of the capability to create safety.

This leads to our second point: indicators often create new incentives that drive behavior. So as you look for those things that indicate the ability to create safety, you need to understand that everything you are doing is going to change what you are measuring. This is one of the reasons incident rates are so bad – they create an incentive to reduce the number, which is achievable in two ways. You can either have fewer incidents, or you can report fewer incidents, which obviously is very bad for us.

But this feature of creating unintended consequences is not unique to incident rates, it is a feature of measurement in complex social systems. For example, one commonly used indicator is time to close safety corrective actions. The goal is to get people to close corrective actions in a timely manner. And you can achieve this in two ways – complete the corrective action in a timely manner, or report that you’ve completed the corrective action in a timely manner. Something similar happened at the BP Texas City Refinery where work orders to fix safety critical equipment were closed as completed, but the work was not performed.

One way to avoid this is to create a countering indicator. If you have an indicator that is quantitative (say % of required training completed), couple it with a more qualitative indicator (e.g., employee reported confidence in completing the task, employee reports of training quality, observation of employee skill levels). This counter indicator may provide a balance to avoid some unintended consequences.

This leads us to our third point, which was brought to us by a colleague of ours, David Bond. The indicators you choose are less important than the conversations they start about safety in your organization. You want indicators that don’t drive rote compliance or unthinking behavior. Things that drive out thinking are often nefarious enemies of safety. Instead, we want indicators that make people think and ask questions. We want indicators that drive reflection. For example, one organization allowed its managers to develop their own leading indicators for their business units, and one of the indicators they came up with was the ratio of reactive to preventive maintenance. This indicator drives people to think about why that ratio is an indicator of safety. Questions get asked about when reactive maintenance is bad and why preventive maintenance is better, but not in all cases. And if reactive maintenance is not ideal, how can the organization manage risk in those times when reactive maintenance is unavoidable? The indicator started people talking. Safety ensued.

Finally, our last recommendation is encompassed above – get employees involved. Ask your employees at all levels how they know they are safe. What are the things they do each day related to safety? And we don’t mean the stuff they do to be compliant, but the stuff they do to be safe. Sure, you might need to educate them a bit about how to do it, but if you give them the tools you will likely find that there is no one wiser in your organization. The people best equipped to tell us how safe we are on a moment-by-moment basis are often the people doing the work, not necessarily the safety people.

That’s it! Those are the basics. What do you think? What did we miss? This is a difficult concept, so let’s keep this conversation going!