At a recent meeting of contractor safety representatives at a large construction site, each contractor reported on the incidents they had in the past month. (This practice is common but comes with its fair share of problems. If not handled delicately, it can quickly become a shaming exercise where the contractors with the most incidents are seen as the worst offenders, leading to underreporting.) One contractor told the story of an employee who made a mistake, resulting in an incident where a forklift tipped forward so it was balancing on its front wheels. Luckily no one was injured, and the forklift was safely brought back down onto all of its wheels and put back in service. The contractor then reported that the employee was sent home for a few days without pay and retrained, because the cause was deemed a “behavioral” one. Everyone agreed with the response (except us) and the meeting moved on without incident.
What’s the point of investigating incidents? Most would say that it’s to make sure the incident never happens again. So in this case the employee allegedly made a mistake. He didn’t intend to do what he did, and he certainly didn’t choose to tip the forklift. The incident was not intentional. The employee did not weigh all potential actions and their consequences before acting. He just did what he always did, except this time he screwed up.
So how does punishing him fix this? Now, some of you will quickly respond that it will give him something to think about next time. Next time he’ll remember to be more careful, right?
Not so fast. First, we have to remember that the employee wasn’t thinking actively about his decisions. He was working like we all do – using what we could essentially call muscle memory to get through a job he does day in and day out. Think about your own work – do you actively think through every step of a routine job you know you’re good at? Of course you don’t. And before you object that this was a high-risk job, we would challenge you to think about all the times you’ve done high-risk things without thinking (e.g., pretty much anything while using your mobile phone). Let the one who is without sin cast the first stone.
Of course you probably learned not to do those things, but that leads to the next point. Think about the last time you made a big mistake. Didn’t you feel bad and start thinking about all the things you could do differently next time to ensure it didn’t happen again? In effect, you punished yourself and looked for ways to learn from your mistake. Why would this employee be any different? If we buy into the idea that punishment affects behavior (which it does), why is our punishment necessary when the employee is already punishing himself?
The thing is, the punishment does little, if anything at all, to decrease the likelihood that this incident will happen again. In fact, it probably makes things less safe, because now employees are incentivized not to tell management about mistakes (since they get punished when they do). That loss of reporting might be worth it if we could prove that the punishment changed the behavior – but we have no real evidence that it does. But won’t the punishment send a message to other employees? Yes: the message that mistakes are punished, so employees should hide their mistakes. It won’t lead to fewer mistakes, though. So we have real evidence that punishment can make things worse and no real evidence that it makes things better.
Why do it then?
Because if the contractor who recounted the incident, when asked what they were doing in response, had said “nothing, the employee has learned his lesson already,” they would, at best, get some strange looks. At worst they would be chastised for not caring about “safety.” But the issue isn’t caring about safety; it’s caring about looking like we care about safety. If we appear to be doing nothing, then we look like we don’t care. That’s bad.
But the reality of the situation is that the reason we had an incident is that we had a system that relied on human reliability – i.e., on people not making mistakes. Now, keep in mind that any such system will, on average, be very reliable, because people don’t screw up that often. But they do screw up occasionally, so eventually you will have a failure. The same is true for any system in your organization that relies on human reliability. If you have people who climb ladders, drive cars, assemble widgets, drive forklifts, use tools, do work, etc., eventually you’ll have an incident where a person makes a mistake.
That’s not surprising to anyone. What should shock us is that we build these systems knowing, if we think about it, that they will eventually fail because of some version of “human error” (or what some have called “performance variability”) – and then we act shocked at how careless the person was to make that mistake and try to punish it out of them. We aren’t really making anything better, because we put people back into the same system so they can make a mistake again later (and then we can punish them again). But at least we can say we did something, right?
The time has come for the safety profession to break this vicious and unjust cycle. We need to stop putting people into situations where they are highly likely to fail eventually, and then punishing them when they do, simply because we don’t want to be seen as doing nothing. The fact is that there will be times when we need to rely on human reliability. When an incident happens in these systems, we need to let go of the urge to do something just for the sake of doing something. Rather than trying to fix the person, we should help the people we’re relying on learn from the event. Don’t focus on what they did wrong; focus on how you can help them do better next time. Punishing them doesn’t accomplish this. We trust them to reliably create safety day in and day out; we should start trusting them to learn from incidents as well.