Tuesday, September 30, 2014

The White House Fence Hopper, the Silenced Alarm, and Our Knee Jerk Blame Response

One of the big issues this week in US news is reports of an individual named Omar Gonzalez who was able to climb over the protection fence surrounding the White House, run across the yard, get in the main door and far into the building before being taken down by Secret Service agents. (For more information on the story, click here).

As more details have emerged in the fallout from this story, one of the more interesting and controversial details was that the alarm system on the main door of the White House that was supposed to alert guards of an intruder was silenced. Reports after the incident suggest that the alarm was intentionally silenced because of complaints from the White House ushers’ office that the alarm was too loud. (More details can be found in the news report above.)

Obvious questions come to mind after hearing this. After all, the alarm system was designed to protect the President and the first family. Why would we silence it simply because it’s annoying to some? One congressman criticized the leadership of the Secret Service for allowing the alarm to be silenced, saying:
The agency needs a solution that goes deeper than more fences and more people. It must examine what message is being sent to the men and women who protect the president when their leader sacrifices security to appease superficial concerns of White House ushers. (Source in article linked above.)
On the surface this seems like a legitimate point. However, we must realize that this statement is made with the benefit of hindsight. All it does is explain what should have happened, not why things happened the way they did. This is an important distinction to make, because if we ignore the reasons why people behaved as they did, we doom ourselves to running into the same issues over and over. It’s easy for us to say what people should or should not do after an incident such as this, but if we stop there the obvious implication is that the fault lies with the people, and that if we replace those people (or discipline them into compliance) then the problem will be solved.

But what if it’s not a human problem? What if by trying to fix the people involved we are engaging (intentionally or not) in scapegoating?

Consider the following other potential causal factors and fixes:
  • The ushers’ office is reported to have requested silencing the alarms because of frequent malfunctions and false alarms. Shouldn’t we just fix the alarm system?
  • If the ushers had problems with the alarm systems, could we perhaps investigate moving the ushers’ office to another location, away from the alarms?
  • Is the alarm system the only defense the White House has?

The fact is that if all we do is blame individuals for this problem we miss the complexity behind this incident and we trick ourselves into a false sense of security as we create a solution where there may be no problem and generate no solution for the real problems inherent in the system. Yes, they should not have silenced the alarm and, yes, they should have fixed the alarms. But for those of you who work in facilities with alarm systems that malfunction – how often do those get fixed immediately? Most facilities don’t have the resources to do so. And in an environment with many false alarms, would we be surprised that people in the area take steps to reduce the volume so they can get their work done?

Unfortunately this is another example of our tendency to search for blame, rather than for solutions, following an incident. In the safety world we see this all the time, as well as in other environments (here’s another example we talked about in the past). Research and experience suggest that our tendency to search for someone to blame following an incident is corrosive to our ability to learn from incidents. And if that’s true, then we are faced with the reality that our rush to blame following incidents may make us feel good temporarily, but may actually be making us less safe. If we really want to make progress in reducing risk, increasing safety, driving creativity and innovation, and making the world a better place, then we must let go of the need to blame. Our priority should be to learn. If at the end of a fair and just learning process we find that blame is appropriate (it almost never is), then we can take whatever steps are appropriate. But these knee-jerk blame responses have to stop, or we can expect more of the same.

Tuesday, September 23, 2014

Stop Telling Your Employees That Safety Is Everyone’s Responsibility

There are a number of favorite sayings that we have in the safety world that we use like static cheerleaders to get ourselves and our employees (but mostly our employees) all pumped up about safety in our workplaces. These include “safety first” and “safety doesn’t happen by accident,” both of which are overused and oversimplified things to say. These often get put on posters, or we’ve even seen them put on floor mats, which is kind of funny if you think about it. These sayings get plastered all over our organizations, similar to advertisements, which sort of implies that we have to sell safety to our employees (and that is a whole other discussion and blog unto itself).

One saying that often gets put up there that we think has some merit, but often is misused in organizations is “safety is everyone’s responsibility.” Have you used that statement before in your organization? We have. It makes sense, right? Safety requires people working together, asking questions, identifying risk, and doing what is needed to reduce risk. Everyone has a role to play, a responsibility to themselves and others. Safety really does involve responsibilities for everyone.

So what’s with the title of the blog then?

Here’s the thing – whenever we hear organizations use that statement “safety is everyone’s responsibility” it is ALWAYS directed at workers. Although we’ve heard safety managers and supervisors tell employees that safety is everyone’s responsibility in many training classes and tailgate meetings, we’ve not once heard someone say that in a manager’s meeting. And even though many organizations plaster the walls with posters that highlight how safety is everyone’s job in employee locker rooms and break rooms, we’ve never seen such a poster in the corporate conference room or the safety manager’s office.

Communication between humans is an interesting thing. Even when we aren’t talking or doing other forms of active communication, we are still communicating. The lack of active communication says something about the values and beliefs of the communicator. So when we tell line employees that safety is everyone’s responsibility but we do not feel the need to tell anyone else in the organization that safety is everyone’s responsibility, what are we communicating about our values and beliefs?

Think about it – the only way that only telling line employees can make sense is if:
  • Somehow line employees have either a mental or motivational deficiency that others in the organization do not have, or
  • Line employees have a special, greater responsibility, and reminding them of it will somehow help them appreciate it and then take the appropriate actions.

To the first point, employees have the most to lose. If an accident happens they are the ones who suffer the most, because they are the ones who get hurt or killed. The stakes don’t really get much higher for them. So, we don’t think motivation is really the problem here. Perhaps it is a mental deficiency then. Perhaps our workers are especially dumb. We hope that no one really believes this, but if you do take a look at this blog.

Ok, to the second point – that our workers have a special responsibility that reminding them of will somehow improve things – we’re pretty sure that the jury is back on this one. Most safety professionals would agree that the part of your organization with the most influence on safety is your managers and supervisors, not your line workers. The further up the organization you go, the more influence over the safety of the organization you have. So, if reminding someone about their responsibility regarding safety actually makes a difference in behavior, wouldn’t it make more sense to start putting posters up in the boardroom?

So that’s why we say stop telling people that safety is everyone’s responsibility, because, if you’re like most organizations, you’re only telling the line workers and you’re not reaching the people who matter. Workers often see this as a way to merely pass the buck on safety responsibility. If you really want safety to be everyone’s responsibility in your organization then communicate it with actions, and not just in a way that makes sense to you. There’s a concept in social psychology called reciprocal altruism, which is the idea that we do things for people who do things for us. If this is true then perhaps a better way to communicate the need for employees to take responsibility is to show them that you’re taking responsibility for safety.

Make sure your actions to communicate the importance of safety are done in a way that everyone in the organization understands. Too often we fix the problems that some regulatory agency or some auditor identifies and then wonder why our employees aren’t appreciative. Your employees don’t really care that much about that stuff. Get out there and fix problems that employees have. And this isn’t just the problems that you think they have. Ask them what’s tough about their jobs and find ways to solve those problems. If you want employee engagement then get out there and engage with your employees.

And for your managers and supervisors – if anyone needs reminding about their responsibility for organizational safety, it’s them. What are you doing to remind them of their responsibility? Just like with your employees though, don’t just talk to managers and supervisors in a way that makes sense to you. Find out what’s in the way of them exercising their safety responsibility and do what you can to remove those roadblocks.

What you do will vary from person to person and organization to organization. However, the bottom line is that if you really believe that safety is everyone’s responsibility you shouldn’t have to tell anyone. People will know that it is true by your actions. So how are you showing people that safety is everyone’s responsibility?

Wednesday, September 17, 2014

A Better Way To Be Compliant

We are currently on our way back from the National Safety Council Congress & Expo, where we presented on the topic of Human Performance. The session went very well, with the room literally packed and some attendees having to be turned away. We were surprised at the interest in the topic, but very encouraged nonetheless. Still, during and after the presentation we got some of the typical (and understandable) responses from those who have not yet bought into the ideas we’re espousing, such as questions about personal responsibility.

One question that came up a couple times (in a couple different ways) is how the ideas related to human performance relate to compliance. One attendee said that at some point we have to deal with the fact that there are laws that must be complied with, so no matter what we’d like to do in terms of empowering employees and removing unnecessary constraints, eventually we will hit a wall of compliance that we cannot cross. Another attendee questioned whether these ideas would be best suited for those companies that have already achieved the stability of compliance but then want to move to higher levels of safety performance.

These are both very good questions, and so, in addition to answering them during the session, we thought we’d address them here so we could explain our perspective to a wider audience. First, we do have to say that, at some level, the questions hint at a slight misunderstanding of the point we’re trying to make. It is true that at some level we do advocate the stance that behavioral controls, such as regulations, are overused in safety practice. However, on a different level what we’re saying is that if there are problems you face in your organization (regulatory compliance just being one of many) that you should not think of your employees as part of the problem, but rather as part of the solution.

Let’s use an example to illustrate what we’re talking about. A common problem in any warehouse where forklifts are operated is how you get your employees to wear their seatbelts. Wearing a seatbelt while driving a forklift is a good idea, but even if it weren’t, it wouldn’t matter, because the regulations almost always require that if a seatbelt is present your employees wear it. That’s a hard and fast rule and there’s no leeway.

But just because such a rule exists doesn’t mean that we need to put all the focus on the individual employee to comply with that rule. We can take a systems approach and engage employees in a discussion about why they do not wear the seatbelts. We’ll often find answers such as they are too busy, they don’t see the point, the belts are uncomfortable, etc. If we then level with employees, tell them that we have to do it because it’s the law, and ask them for ways to help them comply, we may be surprised by the solutions we find. Sometimes it involves finding a new piece of equipment that makes the belts more comfortable. Sometimes it involves changing the work processes or accountability structures to make following the rules more consistent across the board. You might even find that the simple act of engaging with employees activates a team spirit, and your employees take ownership and hold each other accountable to comply with the rule.

And you can think of other examples where taking a systems-based, human performance approach can help achieve multiple competing goals, such as compliance, safety, and production. Additional examples of common compliance problems, with some potential systems-based questions you could ask are below:
  • Employees not using ladders appropriately. Are the proper ladders readily available in a way that doesn’t require a lot of effort? Are there other tools available that make the use of ladders unnecessary? How do the employees perceive their job load (heavy versus light) and how does that affect their ability to pre-plan jobs and make adjustments during a job?
  • Blocked equipment (fire extinguishers, electrical panels, etc.). Is there adequate, convenient storage in the area? Is the equipment stationed in a bad place? Is the equipment adequately marked for the purposes of identification?
  • Chemicals stored without labeling. Is there an easy way for employees to create compliant temporary labels on containers? Do employees have adequate training in labeling requirements and how to identify hazards that need to go on a label (from their perspective, not yours)? Is there a way to have labels pre-printed on containers that are commonly used for certain types of chemicals?

We’ve also talked about an approach to regulatory-required permits utilizing this mindset in a previous blog.

The bottom line is that by incorporating the idea that your organization is a complex system and that your people are a solution to harness, rather than a problem to control, you might find some innovative solutions to your compliance problems. These ideas are not just for those organizations that have already achieved 100% compliance and are looking to go to the “next level” (we’re not sure that any such organization really exists). These ideas are for any organization that has people in it and has competing pressures, such as production, scarce resources, complicated regulations they must comply with (that sometimes contradict each other), and the need to work safely. Does that sound like your organization?

Tuesday, September 9, 2014

Human Performance – Busting the Myth of Human Error

This blog is a brief introduction to a topic that two of our team members, Paul Gantt and Ron Gantt, are presenting at the National Safety Council Congress and Expo in San Diego on September 15th, session 14.

At a client’s site the other day, they were discussing an incident where a rail car loader overloaded a rail car past a visual cue that should have told him when to stop loading. Luckily, no spill resulted, but the overload was not caught until after the car left the facility and therefore had to be reported to the appropriate regulatory authority, per legal requirements. In the investigation the site determined that it was “human error, plain and simple.”

Pretty typical, right? Many safety “researchers,” such as Heinrich, warned us that somewhere between 80% and 90% of all accidents were caused by unsafe acts. Our systems would be perfect if it wasn’t for all these unreliable people! But still, you can’t fix stupid. Being a safety professional would be an easy job if we didn’t have to deal daily with the reality of having to “protect you, from you, in spite of you.”

This fact, the fact of the unreliable human as the problem we must control, informs most of the interventions that safety professionals use to this day. Think about our go-to interventions – training, safety rules and regulations, procedures. All designed to change behavior. And if we want to push deeper and go for real “world-class” safety status? Then we have behavior-based safety, incentive structures, strategies to go after hearts and minds. Once again, all designed to change behavior. The idea that we need to change people’s behavior because it is inherently deficient provides the foundation for most of safety practice.

But is it really true that humans are a problem to control, that they are so unreliable? Let’s look at the above example with the overflowed rail car again. Surely, it is easy to look at a visual cue and fill a container up to that cue – cooks and bartenders do it all the time. So how could the guy be so stupid? Let’s look closer by looking at ourselves for a second – do you ever take your mind off of what you’re doing when you’re doing something routine and mundane? Of course you do! How do we know? Because you’re a human!

So, imagine someone passively monitoring the filling of rail cars day in and day out. Would we be surprised if their attention ever wanders? Of course not. In fact, we would expect that person’s mind to wander a lot. And really, it was just a matter of time before the mind wandering led to overfilling of the container. With that being the case, the interesting question is no longer “why did the loader overfill the rail car?” The interesting question is “why aren’t we having many more instances of loaders overfilling rail cars?” The current work system requires the loader to be paying attention at the right time to ensure that the car isn’t overfilled. Put another way – the work system requires that the human operator ensure safety in an environment whose design invites a known human limitation (i.e., a wandering attention span). Yet, somehow, the loaders almost never overload rail cars. If we look closer at the overfilling incident, the story is not really one of “human error, plain and simple”; instead, it is one of human reliability. The overfilled rail car is evidence of the remarkable ability of our workers to adapt to poorly designed, unsafe work environments and pull success and safety from the jaws of failure.

Safety scientist Sidney Dekker presents his view of the state of the safety profession in regards to human error as two opposing models – the old view and the new view. The beliefs of each view, as identified by Dekker, can be seen in the table below.

We agree with Dekker in that we believe that the safety profession is at a crossroads. We can either continue to use methods of safety management that we’ve used before and are comfortable with, which focus on controlling unreliable people, or we can begin to make real progress on preventing “human error” by adopting the new view of human error, which sees people as normally reliable sources of success, particularly in environments and contexts that are properly designed to maximize human performance. Or, as we put it to our client regarding the rail car incident – you can either come to terms with the fact that it will happen again, or you can actually fix the problem.

Wednesday, September 3, 2014

When Safety Makes Us Unsafe

It’s sort of one of those provocative questions – is there such a thing as too much safety? In a sense, this is the wrong question (primarily because “safety” is such a subjective term). But basically: is there a point at which adding more safety interventions not only yields diminishing returns, but actually makes things worse?

If we all thought hard enough, I’m sure we could think of examples where safety interventions led to unintended consequences in the form of new hazards. For example, anyone who’s worn a hard hat for a length of time has likely experienced an increase in the number of times they hit their head on things. Or consider those who wear extra chemical protective clothing and are more exposed to heat stress as a result. It’s not that these protections are or are not necessarily required. However, we can see that, in the wrong circumstances, using these protections will not make you any safer, and you could argue it would make you less safe (e.g., if you were wearing a full chemical moon suit, such as in the picture above, while reading this blog, would that make you more or less safe?).

This concept can be applied to organizational interventions as well. Take, for example, the many well-meaning organizations that, in an effort to motivate employees, offer incentives when employees go a certain period without having an injury. The idea makes sense on the surface as a reasonable safety intervention. However, even the US Occupational Safety and Health Administration has come out against such programs because of the unintended consequence of motivating employees to not report injuries (and even motivating some safety professionals to play the “numbers game” and hide injuries).

These are easy and tangible examples of a broader concept that can be even more dangerous in other circumstances. We say more dangerous because when the downsides are not obvious, they are harder to spot, which can also give us a false sense of security. We start to believe that our interventions are making us safe, when they may be doing nothing or may even be making things worse.

Take, for example, a strategy that a lot of companies adopt to show that their top management cares about safety – the idea that if you report an incident it automatically gets reported up the corporate chain, everyone hears about it, and the CEO (or someone similar) gets involved. Sounds like a great way to show that the organization means business when it comes to safety, right? Recently we were working in an organization with exactly that policy. When asked how many incidents get reported, the answer was “not many.” When we were with line employees, we asked if there had been any incidents that should have been reported – the answer was yes. So an intervention designed to increase accountability and visibility for safety actually made the organization dumber, because they were missing opportunities to learn.

The problem is not that these people are not well-meaning. The problem is that we need to understand that whenever we implement a change, even a safety change, we are implementing it into a complex system that is already in operation. Systems don’t have “pause” buttons. Any change we make will interact with other parts of the system, in either predictable or unpredictable ways. That means there will almost always be unintended consequences when we implement a change and those unintended consequences could actually make us less safe.

There are plenty of other examples of this as well, some which are very complex:
  • The organization that decided any risk reduction set a precedent and could never be reversed, leading to reluctance in implementing future risk reduction measures (which may have contributed to a fatal incident).
  • Defense in depth strategies, which can work great in some contexts, but, according to Perrow, may work to make our systems more complicated or complex, leading to less predictability and more risk.
  • Risk compensation, or risk homeostasis theory, which says that any intervention we implement may lead to increases in risk taking behavior, as people learn to rely on the intervention to protect them. 

Now this doesn’t mean that all of these interventions are bad and we should do nothing. The real problem, as we said above, is a failure to understand complex systems and how any intervention, even a well-meaning one, can have unintended consequences. So, our job in the safety profession, if we really want to do good, is to find ways to facilitate safety while minimizing or accounting for those consequences. Here are some recommendations to get you started:
  • Take the time to understand systems, particularly complex systems. We have a couple blogs on the issue (here and here). Plus you should read these two books by Meadows and Dekker.
  • Research shows that getting diverse opinions involved in decision-making increases decision quality. So stop making decisions on your own. Get other people involved in the decision-making process, particularly people who have a different background and perspective than you.
  • Never make a decision about an operation that doesn’t include a healthy amount (read: a whole lot) of input from the people who will actually be doing the job. (In fact, our real recommendation is that you shouldn’t even be making the decision. You should be the one providing input, with the people doing the job making the decision.)
  • Always follow-up on interventions to identify if they are having their intended effect and any unintended effects. Too often we just implement something and make, at best, rudimentary efforts to see if it’s working. And it’s very important that you look for examples of the intervention not working, not just examples of it working. It’s too easy to see what you want to believe, unfortunately. (But, of course, if it wasn’t then we wouldn’t be talking about this subject to begin with.)