Wednesday, August 19, 2015

What Do We Measure Then? - Seek First to Understand

Last week’s post discussed some of the pitfalls of one of the most widely used and flawed methods for tracking safety performance in the profession – the recordable incident/injury rate. The post received a lot of attention and discussion, which is exactly what we were hoping to generate (thanks for that!). One common response many of you had was: ok, what do we measure then?

The goal was to get people to think outside of the world we’ve created for ourselves with these incident rates, not necessarily to provide a roadmap to what a new world would look like (be wary of those that do provide such roadmaps). But so many people asked that we thought we could at least share our thoughts on how to approach thinking about and building new ways of measuring safety performance within an organization.

So, let’s begin. Whenever you measure something, the first step is to make sure you understand what it is that you’re measuring. (Note that we’re talking more about measuring for the sake of managing something, not necessarily measuring for the sake of figuring something out, e.g., scientific exploration.) As Stephen Covey would say, “seek first to understand...” The measurement of safety should begin with a clear understanding of what creates safety in your organizations. You can’t measure what you don’t understand. (Note - If we were being academic, we would say that measurement must begin with a clear model or theory about what creates safety in your organizations.)

There are two things we can say about this right off the bat. First, in our experience, this is something that very few safety professionals have explicitly done. Most just do safety; they don’t think about and reflect on safety. Second, we do have to admit that it is impossible to fully understand what it takes to create safety in your organization. You can know what you can know, but you don’t know what you don’t know. This does not mean that we’re helpless. More on that later.

To the first point though, we have to avoid the temptation to answer this question with “not having accidents” or “everyone going home at the end of the day”. Those are results or outcomes. That is not safety, because safety is something you do (a process), not something you have (an outcome). This is why we need new metrics for safety, what some call leading indicators, leading metrics, or process metrics.

So what do we think about then? Think about how safety is created in your organization. What does it take? What would you see if your organization were creating safety on a daily basis? A couple points jump out of these questions:

  • These metrics should be specific to your organization. Each organization has different cultures, different contexts. This makes cookie-cutter approaches to safety problematic, and hence measurement across organizations problematic. Even within one company, if you have different business units in different geographic locations they may have different needs that require different metrics.
  • These metrics should be specific to tasks and responsibilities. In the same way that your metrics should be diverse depending on the organization (horizontal diversity), metrics should also vary across the hierarchy of the organization (vertical diversity). What it takes to create safety within your engineering department will differ from what it takes for your line staff and will vary from what it takes for your upper management. If we truly believe that safety is everyone’s responsibility (LINK), then there should be metrics applicable to everyone.
  • Metrics should overlap across units and groups. Keep in mind that just because these metrics are specific to one group does not mean that the metrics only apply to that group’s safety. For example, the actions of upper management affect the safety of the whole organization, so metrics based on their actions may not directly apply to their safety alone. You may use something like top management participation in safety committees as a metric, which has little bearing on the safety of the manager, but can have significant effect on the safety of the rest of the organization.
  • Metrics should be dynamic. The words we used in the questions above may seem awkward at first (e.g., “create safety”) because we don’t usually talk like that in safety. But we intentionally chose dynamic words to show that safety is a dynamic process. Risk changes over time and so should our safety processes. What it takes to create safety today may be different tomorrow. That means our metrics should change with the changing realities of the work environment. (This assumes that you’re keeping your finger on the pulse to identify those changes! Again, more on that later.)

The questions above and the points that follow from them give us a foundation from which we can begin to think about what we would want to measure. In our next post, we’ll continue the discussion and talk about actually creating those metrics based on this thinking, some of the pitfalls many organizations encounter when doing so, and how you can avoid them. We’ll also begin to present a framework that may be useful in helping us identify emerging threats early in the process, so we can respond accordingly. Stay tuned!

Wednesday, August 12, 2015

What If We Stopped Using Incident Rates So Much?

Why is our profession so obsessed with the recordable injury rate? Think about it – almost every safety professional agrees that these sorts of measures are, at best, not great, and, at worst, awful. Yet we still all use these rates to measure safety success. We compare ourselves to others to see how we’re doing using these rates. If the rate goes down we get excited. If the rate goes up we get upset. Many of us even have our personal performance rated based on these types of injury rates.


No really, think about it for a second, why? Why do we consistently measure safety success using these rates?

If you’re like most, the reason that likely came to mind is something along the lines of: it’s an easy thing to measure, everyone else is doing it, and we need some way to measure performance, right? So essentially there’s no real thought put into this. We do it because (a) it’s easy, and (b) everyone else is doing it. Effectively, it’s a version of professional peer pressure that we’re giving in to.

Now, some would point out, rightly so, that we do need some sort of validation measure for the work we do. And, after all, if we’re doing things right then we should see fewer accidents happen, right? Here’s where it gets tricky. That’s sort of true, but not entirely. Yes, it’s true that if what we’re doing doesn’t prevent accidents, particularly the serious ones, then we need to question why we’re doing it. The problem is that the measures we’re using are not sensitive enough to give us an accurate picture. This is really an issue with the statistics of it, and we know how many of you hate math, so we won’t bore you with the numbers. But the short explanation is that we simply do not have enough accidents. We need a large enough number of accidents to be able to reliably and validly use the incident rate as a measure of the effectiveness of any given intervention. Most organizations simply don’t even get close to that number, particularly of the serious accidents.
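For those who do want a peek at the numbers, here is a rough back-of-the-envelope sketch. The counts and the normal approximation are purely illustrative assumptions on our part, not data from any real organization, but they show how wide the statistical uncertainty around a small incident count is: a “drop” from 4 recordables to 2 is statistically indistinguishable from noise.

```python
import math

def poisson_ci(count, z=1.96):
    """Approximate 95% confidence interval for a Poisson-distributed
    incident count (normal approximation; crude, but fine to illustrate
    how wide the uncertainty is for small counts)."""
    half_width = z * math.sqrt(count)
    return max(0.0, count - half_width), count + half_width

# Hypothetical small organization: 4 recordables last year, 2 this year.
last_year = poisson_ci(4)   # roughly (0.1, 7.9)
this_year = poisson_ci(2)   # roughly (0.0, 4.8)

print(f"Last year: 4 incidents, 95% CI ~ ({last_year[0]:.1f}, {last_year[1]:.1f})")
print(f"This year: 2 incidents, 95% CI ~ ({this_year[0]:.1f}, {this_year[1]:.1f})")
# The two intervals overlap almost entirely: the "improvement" from 4 to 2
# could easily be chance rather than evidence that an intervention worked.
```

With counts this small, you would need a sustained change over many years, or far more exposure hours, before the rate could tell you anything reliable about any single program.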

Of course math is scary, so we don’t want to think about that too much. That’s ok though, because even if we did not have the mathematical problems with the incident rate, we still would have at least two significant problems with using incident rates as measures of safety performance. Let’s look at each.

To understand the first problem, let’s look at a different kind of “incident”, the “near-miss” (or whatever you call it). There are lots of definitions of near-misses out there, but one we like is “an event that could have caused an injury, illness, or other kind of accident, but didn’t because of luck”. (Some people are uncomfortable with the word “luck”. If that’s you, go ahead and substitute “luck” with “stochastic events within and outside the system over which we have little or no control”.) If we accept that definition, then an accident is an event that happened because of bad luck. If we were lucky it wouldn’t have happened, but it did happen, so we were unlucky. And if this is true, that means that a certain portion of an incident rate is controlled not by us or the organization, but by luck. So your incident rate can go up or down for no reason other than luck. Furthermore, if you are managing your program using this rate, that means that you are attempting to manage luck. That doesn’t seem like a very effective business strategy to us.
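The role of luck can be shown with a toy simulation. The numbers here are entirely hypothetical (100 workers, each with a fixed 2% chance of a recordable per year): the underlying risk never changes, yet the yearly count, and therefore the rate, bounces around purely by chance.

```python
import random

def simulate_counts(n_years=10, n_workers=100, p_incident=0.02, seed=1):
    """Yearly recordable counts under a *constant* underlying risk.
    Each worker independently has the same chance of a recordable
    each year, so any year-to-year change is pure luck."""
    rng = random.Random(seed)
    return [sum(rng.random() < p_incident for _ in range(n_workers))
            for _ in range(n_years)]

counts = simulate_counts()
for year, c in enumerate(counts, start=1):
    # With 100 full-time workers (~200,000 hours), TRIR equals the raw count.
    print(f"Year {year}: {c} recordables (TRIR = {c:.1f})")
# Nothing about this simulated organization changed between years;
# the swings in the rate are noise, i.e., luck.
```

If a real organization celebrated the good years and launched investigations in the bad years of a series like this, it would be reacting to randomness, not managing safety.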

Even if we assume that the effect of luck is nonexistent or negligible on your incident rate, we still have at least one more reason why incident rates are bad measures of safety performance. Essentially, you don’t really know what you’re measuring with an incident rate.

To illustrate, let’s use an example. Let’s say there’s an organization that initiates a safety program that is just terrible. It’s poorly planned and poorly executed, but the only gauge the organization has in place is the incident rate. The employees at the shop floor still have to find a way to work without getting hurt, right? So they adapt to the new, bad program. However, the law of fluency predicts that this adaptation will naturally hide what is being adapted to. So employees will find a way to “make it work” and, unless the organization goes out and looks, it will be none the wiser. As a result the number of incidents may go down, causing the incident rate to go down. The organization will then believe that the program is a success, when in reality it’s the workers adapting to the poor work environment. The organization could have saved all the resources used in implementing the program and probably have gotten the same result.

This happens a lot in organizations. They implement a program and then they look at the incident rates to determine whether it was successful. If the rate goes down, they call it a success. But that drop may have nothing to do with the effort the organization put into the program. What this means is that if all you do is use incident rates to measure the effectiveness of your safety management system, it is entirely possible that you are wasting resources on ineffective programs.

Now we’re going to ask you to do something crazy – imagine what would happen if you just stopped paying attention to your incident rate. What if we just decided to stop using it to compare our performance year over year or as compared to others? Sure there will be reasons to still keep track of the rate, but if we stop making the rate so central to measuring safety performance would it really be so bad?

Saturday, August 1, 2015

Caring About Safety Isn’t Enough

If you peruse LinkedIn discussion groups, safety magazines, or any other place where people talk about the safety problems they are having, a common theme in many of the suggested solutions has to do with people’s level of “care” about safety. Got a problem with employees breaking rules? It’s probably because they don’t care enough about safety. Management not “walking the talk”? We need to get them to really care about safety! If management doesn’t care then nothing will work, right?

Now this seems to make sense, because one can draw an easy, intuitive connection between a person’s motivation and their behavior. So if we can get someone to care more about something then they will do it. For example, if you get someone to care a lot about a social cause, then we would predict that they will perform behaviors to support that social cause. So if we extrapolate this to the safety world, if someone cares a lot about safety, then they will behave safely.

The problem is that this isn’t necessarily true. We’ve dealt with many organizations and we have yet to find a single one where people explicitly told us that they don’t really care about their own or others’ safety. Now, keep in mind that we have had organizations tell us that they don’t really care about “safety”, but every time what they mean is they don’t care about regulatory standards or similar versions of “safety”. When it comes to actually caring about people, they unanimously do.

Sure, we can question whether what they are saying is true (perhaps they are lying to us), but we have no real reason to consider our clients liars, so we take them at their word. And even if we concede that some are lying and that there are indeed people who don’t care about safety, we must point out that we have plenty of clients who really are adamant that they care about safety a whole lot, and there is evidence to prove that they do.

The thing is though that even these people who really genuinely care about safety often still continue to have the problems we mentioned above – employees breaking rules and managers not “walking the talk”.

What gives?

Now the tendency for many will be to question (again) the motives of the people (they don’t really care). But let’s just consider the alternative for a minute – that perhaps it’s our belief that is flawed, not the people we’re referring to. Could it really be that caring a lot about something is not enough to dramatically influence behavior? Consider a couple examples.

Do you care a lot about your health?

Likely, with very few exceptions, you answered yes. Consider this though: when was the last time you did something that your doctor would describe as unhealthy (e.g., ate an unhealthy meal, smoked, etc.)? When was the last time you did not do something that your doctor recommended you do (e.g., exercise)? If you’re like most, it was very recent for both questions. Perhaps simply caring a lot about your health is not enough to get you to behave healthily.

Another example – do you care a lot about safety?

If you’re reading this blog you likely said yes, again. But when was the last time you did something that others would consider unsafe, or that you would say is unsafe if you saw someone else do it? Again, if you’re like others, it is probably not that long ago. Maybe it’s mowing the lawn without the proper PPE, maybe it’s using your phone while driving, maybe it’s forgetting to inspect your fire extinguisher and smoke detector in your home. Again though, the fact that you care a lot about safety does not seem to translate easily into your behavior.

So we are left with two options: either you’re a liar, or we have to admit that the belief that the problem in our organizations is that people don’t care enough about safety is probably bogus.

The fact is that while general personal motivation does predict behavior, it is not a very good predictor. Why? Because our world is simply not that simple. In the safety profession we often treat safety decisions like they are made in a vacuum. We seem to think that before doing something people are actively thinking “should I do the safe thing or should I do the productive thing?” The thing is though that when people do work there is no distinction. People have to find ways to do jobs that have competing goals and still somehow meet both goals. So employees and managers don’t make choices to do the safe or the productive thing; they find ways to do the safe-productive thing. They make do. They satisfice. They balance competing cares with amazing skill.

Does it always work? No. Does it sometimes make us want to cringe when we see it? Yes. But does it work most of the time? YES! And that’s why they do it.

Now, this doesn’t mean it’s acceptable. It just means that telling them that the problem is that they don’t care enough is not only wrong, it’s sort of offensive. A better approach is to find ways to understand the competing goals and cares the person has and help them better manage those goals. If someone isn’t doing something that they care about, it’s probably because there are barriers getting in the way. To understand what those barriers are you need to exercise a little bit of empathy and try to see the world from their perspective. Then you’ll be in a better place to help remove those barriers.

So, rather than seeing problems as a lack of care on the part of employees and managers, start seeing them as a problem of unharnessed and imprisoned care. The problem is then not the people, but the context, which gives us a lot more opportunities to actually fix the problem.