Saturday, February 28, 2015

Safety Leadership – From Constraint to Facilitation

Editor’s Note – The title of this post and our presentation is adapted from a post on http://www.safetydifferently.com by Daniel Hummerdal.

On Wednesday, March 4, 2015, one of our team is giving a presentation of the same title as this post at the Bay Area Safety Symposium. We hope you can make it! Whether you can or not, here is a brief preview of the topic.

Leadership is one of those buzzwords you hear a lot in the business world. It’s something we all aspire to. When an organization succeeds, the success is often attributed to extraordinary leadership. When it fails, leadership gets the blame. It seems that a lot rides on leadership (in hindsight, at least).

But what is “leadership”? One good definition that we like is “a process of social influence in which a person can enlist the aid and support of others in the accomplishment of a common task”. Translated to safety, the common task is, essentially, safety, whatever you mean by that. This is important, because the usual picture of a “leader” is someone who stands out in front, someone who’s brave, leading the way. That picture makes it seem like “leadership” is a goal unto itself. But leadership is not a goal; it is a means to an end, a tool with which we can achieve our “common goal”.

In the safety world, our “common goal” is, well, safety. Leadership is a means to achieve that. And how? By “social influence”. As safety professionals, we get this (at least we think we do). If we want to influence people socially, then we must define acceptable job performance, create incentives (reinforcement and punishment), and hold people accountable. Essentially, this is the traditional way that safety professionals approach “people”. People are treated like machines: we define how the person/machine is to perform in the environment, and if the person/machine deviates, we take action, because any deviation is obviously bad. Things work when everything goes according to plan, and things don’t work because someone or something deviated from that plan. The implication, of course, is that our plans are perfect as they are.

Think about this model of leadership for a minute. It is based on the idea that we, separate from the work environment, from the context where the job will be performed, can identify the one best method for doing that job, and that those who do the actual job cannot (after all, a deviation from the plan is essentially their “vote” for a better way to do the job). How anti-social. It is not a model based on a relationship in any meaningful sense, because the relationship is essentially one-way – you dictate, they follow. We don’t want workers thinking for themselves, because that might cause them to question the plan, which is inherently perfect. We just want action, not thought. The brain is only an unfortunate side-effect of human capital. Even in situations where workers are involved in identifying the one best method to do the job and are then held accountable to it (which is an improvement), the implication is that we want their brains only for a short time. After that, if they could kindly shut them off and get back to work, that would be convenient.

No wonder our organizations lack trust.

The problem, though, is that at some basic level this model of “leadership” works, by some definition of that word. It gets results, in a manner of speaking. The job usually gets done and people usually don’t get hurt. But because we broke trust with our employees, we are not in a position to see why things are really working. We only see when they don’t work, when an accident happens and we can see that someone deviated from the plan. So we punish – name, blame, shame, and retrain. Rinse. Repeat.

But what if things are working because employees routinely have to deviate from the plan? We only see when bad things happen, so perhaps we have a skewed perspective. What if our model of leadership is blinding us to why our organization is so successful most of the time? What if our employees are successful and safe not because of our plan, but in spite of it?


The time has come for a new model. Not one based on constraint, command, and control, but one based on facilitating the safe performance of work. This will require a new way of thinking, though. Old assumptions about why things are working must be challenged. We must get beyond the idea that people are the weakest link in our organizations and begin to understand why they are successful so much of the time. When we understand that, we will be in a better position to become problem solvers (instead of problem finders) in our organizations. Our employees won’t see us as someone to avoid (because we only make things more difficult for them), but rather as someone they can go to when they need help. Then we become true leaders, because we are no longer using social influence to move employees toward our goals (usually compliance); we are facilitating employees in achieving our common goal – safety.

Wednesday, February 18, 2015

May God Have Mercy…Because We Cannot – Looking for Second Stories

On January 13, 2012, the giant cruise ship Costa Concordia was wrecked off the coast of Isola del Giglio in Italy. Of the 4,200 people on board the ship at the time of the disaster, 32 died. The subsequent investigation and media coverage focused on the actions of the ship’s captain, who navigated the ship too close to the coast, delayed reporting to the Italian Coast Guard for an hour that the ship had struck a rock, and abandoned the ship during the subsequent evacuation before all others had evacuated, in violation of maritime law (for more information on the incident you can look here and here). Four days after the shipwreck the captain was arrested, and his trial, which has been ongoing for the last 19 months, recently reached its conclusion, with Captain Schettino sentenced to 16 years in prison: 10 years for manslaughter, 5 years for negligence leading to the shipwreck, and one year for abandoning ship before others. The title of this blog post is adapted from a quote from the Italian prosecutor on the case – may God have mercy, because we cannot.

Justice has been served, right?

In a classic paper, Woods and Cook point out that if we really want to learn from cases involving “human error” (however you want to define that term), we need to move beyond first stories, i.e. the story about the unreliable, stupid, and/or evil human not doing what is right. Instead, Woods and Cook argue, we need to look for second stories, i.e. accounts of the incident that take into account the complexity of normal work and local rationality, and that avoid the trap of hindsight bias.

First stories of human error seek to place blame on individuals, whereas second stories seek to explain why it made sense for the people involved to do what they did at the time. The point of seeking second stories is that it enhances learning. If we only look for first stories, the implication is that the problem is located only in that individual – the system is safe “as is” once we get rid of that one bad apple. There’s not much to learn there. Second stories reject the bad apple theory.

Those looking for second stories understand that no one wants to be unsafe or to cause others pain (generally speaking). Therefore, any behavior we witness before the accident must be interpreted in light of the fact that the employee did not believe that the behavior would be unsafe. Instead, to oversimplify things a bit, the employee thought that the behavior would help them achieve their complex, often competing goals. If we can tap into that knowledge we can better understand why people do what they do in the organization and begin to make real progress in safety. The learning is deep and rich in second stories.

Is there a second story in the Costa Concordia case? It’s hard to say for sure. Why? Because in the rush to judgment that follows from accepting the first story, the captain was arrested a mere four days after the accident. That immediately changes this from an accident investigation (a learning event) to a criminal prosecution. We are no longer learning what happened, because the prosecutors already decided what happened for us – i.e. a crime was committed. Blame was decided before we even had a real opportunity to start learning. To use some of the terminology of Todd Conklin, does the emphasis on blaming over learning make us any smarter? Does it make us safer?

But what if we went beyond the first story and started looking for a second story in the Costa Concordia disaster? What if, for example, we asked why it made sense for the captain to steer so close to shore? We might get answers such as: it was a tradition for many of the captains, particularly for Captain Schettino’s mentor, who was from that particular island, and Schettino had done this many times before and felt comfortable doing it.

Does this information make it ok that the captain did what he did? Not necessarily. But now we know that the problem isn’t one of a bad apple, but rather an institutional one. Sending Schettino to jail is unlikely to fix this problem.

Another question – why did it make sense for Schettino not to report the issue for an hour (including apparently misleading authorities about the severity)? Anyone who has studied human responses to emergencies and disasters would recognize this as the normalcy bias. People have a tendency to interpret the present based on what has happened in the past. In the past things were generally normal, so we tend to believe that, despite evidence to the contrary, things will probably be ok (normal) now. Anyone who has seen someone clearly in need of medical help delay calling emergency medical services (for example, during a heart attack) has seen the normalcy bias in action. So Schettino not calling for help and downplaying the severity may not be a case of Schettino misleading authorities. In all likelihood he was also misleading himself.

Does this information make the behavior acceptable? No, but it does show that the problem is one of human nature, not of a bad apple. Sending Schettino to jail is unlikely to fix this problem.

You can even ask about the disorganized nature of the evacuation and how passengers were given conflicting information. Doesn’t this speak to poor leadership and preparation? Possibly. But if we look for the second story we notice two relevant facts. First, this was the first night of the cruise, so passengers had not yet participated in the mandatory evacuation drills (they were scheduled for the next morning). Second, the crew was multinational, drawn from somewhere between 20 and 40 countries, and most spoke no Italian. They were interacting with equally multinational passengers. This is a situation ripe for miscommunication and confusion.

Now, a post such as this is far too short to go into detail regarding the complexity of an accident on the scale of Costa Concordia. But hopefully you can begin to see the need to look for second stories in these types of accidents. The uphill battle we have, unfortunately, is that first stories are so easy to find, and even when they aren’t, the media does a good job of finding them for us (finding a “villain” makes the story more compelling). The further problem is that first stories are so satisfying to many people – if we convince ourselves that the bad apple is the problem, then finding and removing the bad apple should fix the problem. Everything is right with the world at that point…that is, until the next bad apple comes around. But at least then we’ll get to experience the satisfaction of excising the problem again and again, as more and more accidents happen and more and more people are hurt and killed.

It’s time to break the self-fulfilling prophecy of first stories. 

Editor’s note – Thanks to Rob Sinclaire for suggesting this topic for a blog post!

Wednesday, February 11, 2015

Safety By Substitution

In the safety world there’s a proverbial elephant in the room – compliance. In almost any country where safety professionals work, there are laws and regulations related to safety that must be complied with. In the United States, for occupational safety, we have OSHA (the Occupational Safety and Health Administration). And safety professionals throughout the United States spend a large part of their time ensuring their organizations maintain compliance with OSHA requirements (if you’re a safety professional, think about how your job would change if all regulations were abolished tomorrow). The need for regulations is certainly controversial, but most believe that they serve an important purpose for society.

Regulations are all well and good, but, just like everything else, regulations do not exist in a vacuum.

In complex systems theory, changes can cascade through a system and create unintended, often unpredictable behavior (called “emergent behavior”). This happens not because of any single component in the system, but because of the relationships between components. So, for example, heavy traffic in cities did not emerge because cars, cities, businesses, or roads were invented. Heavy traffic emerges from the relationships between all of these and other features, interacting in ways that are difficult to predict (although they seem simple in hindsight).
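
To make emergence a bit more concrete, here is a minimal sketch, written purely for illustration, of a classic toy traffic model in the spirit of Nagel and Schreckenberg. None of its three simple driving rules says anything about traffic jams, yet run it at moderate density and stop-and-go waves appear anyway. The parameter values and function names are our own illustrative choices.

```python
import random

def step(positions, velocities, road_length, v_max=5, p_slow=0.3):
    """One parallel update of a minimal Nagel-Schreckenberg-style traffic model
    on a single-lane ring road. Each driver follows three simple local rules."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_pos, new_vel = list(positions), list(velocities)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]        # next car around the ring
        gap = (positions[ahead] - positions[i] - 1) % road_length
        v = min(velocities[i] + 1, v_max)            # rule 1: speed up toward the limit
        v = min(v, gap)                              # rule 2: never run into the car ahead
        if v > 0 and random.random() < p_slow:       # rule 3: occasional random slowdown
            v -= 1
        new_vel[i] = v
        new_pos[i] = (positions[i] + v) % road_length
    return new_pos, new_vel

random.seed(1)
road_length, n_cars = 100, 35
positions = sorted(random.sample(range(road_length), n_cars))
velocities = [0] * n_cars
for _ in range(200):
    positions, velocities = step(positions, velocities, road_length)
print(sum(v == 0 for v in velocities), "of", n_cars, "cars are sitting in stop-and-go waves")
```

In typical runs a sizable fraction of cars ends up stopped even though there is no accident, bottleneck, or “bad driver” anywhere in the model – the jam is a property of the interactions, not of any individual component. That is the sense in which compliance-driven behavior can emerge from regulations that never asked for it.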

Regulations are not an end goal, but a means to an end. In the safety world, the intention of regulations is to facilitate the creation of safety. Safety, as we’ve discussed previously, is an abstract concept, so people often try to make it more concrete, and regulations are one way of doing that. So, as regulations were introduced into the complex system of organizational life, behavior emerged that was unintended and difficult to predict – people started defining safety by compliance.

Here’s an example. Ask a safety professional at what height a worker can work safely without some sort of external fall protection. You’ll get a variety of answers that usually revolve around “well, it depends on whether it’s construction or general industry” or “6 feet”. If you start to explore what research, thought process, calculation, etc. the professional used to come up with their answer, they are usually dumbfounded. They didn’t think much or do much research. Their mind answered your abstract question (“what makes a job safe?”) by substituting an easier question (“what does the regulation say we need to do?”). The most common answer, in our experience, is 6 feet, which is the OSHA standard for fall protection (for most construction activities). We’ve substituted “compliance” for “safety”.

This seems somewhat benign, assuming the regulations are well written, but this substitution creates a serious, hidden problem. When we substitute compliance for safety, we change the definition of both compliance and safety into something that neither was intended to be. In the case of compliance, we turn compliance into a goal, an end state. The idea is that we get to compliance and then we stop asking questions. Every regulator we’ve talked to, though, will tell you that this is not the intention of regulations. Regulations were meant to be the starting point, not the end point – i.e. we achieve compliance, then we ask if that’s good enough for us, and we keep building from there. If we routinely substitute compliance for safety, though, we lose sight of this and see regulations as our goal.

Second, and most importantly, by coupling regulatory compliance with safety we start to see safety as an end state unto itself – i.e. something we can achieve or get to. But this is an unrealistic view of safety. Safety is not something you have; safety is something you do. Safety emerges through the interactions of people, the organization, equipment, tools, etc. It is constantly changing as each of those features changes. Therefore you can’t ever really get to “safety” because, a bit like Heisenberg’s uncertainty principle, by the time you’ve identified it, things have changed. So, at best, we can only say we were safe.

So what does all of this mean? We need to decouple safety and regulations. Certainly there are regulations out there designed to help organizations be safe. But regulations are only one tool amongst many designed to help us be safe. They have an important role, given that they carry the force of law, so they should be given due consideration. But we cannot define safety as regulatory compliance. We need to change the conversation from “what do we have to do?” to “what do we want to do?”. This means that the next time you are asked the safe way to do a job, it’s ok to say what the regulations say, but don’t stop there. Safety should be about what’s possible, not about making work impossible. Safety should not be about holding people and organizations back, but about facilitating success.

Thursday, February 5, 2015

Numbers, Priorities, and the Problem With Looking Good In Safety

Recently we were involved in a project meeting between contractors and site employees during a maintenance shutdown (called a “turnaround”), where an open discussion of safety and production activities takes place. The purpose of the meeting is to facilitate knowledge sharing and problem solving – to be proactive and identify issues before they turn into disasters. During the meeting, one of the contractor supervisors volunteered that one of his employees had been bitten by what they thought was a mosquito, seemed to have an adverse reaction, and was sent to a clinic as a precautionary measure. The employee was looked at by a doctor and quickly cleared to go back to work. The contractor had spoken with his site contact about this at the time, but the meeting was the first that the rest of the group had heard of the incident.

A few folks from the plant, including a visiting corporate safety representative, quickly objected that the site needs to know about these sorts of incidents much sooner. The corporate safety representative explained that site staff need to examine injured employees before they are sent to receive any medical care, so they can evaluate treatment needs and make recommendations to the doctors.

For the record – no one at that site has any formal medical training beyond first aid. Yet somehow we feel qualified to make decisions regarding treatment. Why? Because there is one area that many “safety” professionals are well versed in – regulations. In the US, where this plant is, if an employee receives certain types of medical treatment for an injury, it becomes what we call a “recordable” injury, something that is recorded on a log that OSHA has created. Over time, an organization’s number of recordable injuries has become the standard way of measuring how “safe” an organization is. The logic is that the more injuries you have, the less safe you must be. Whole systems have emerged around this method of measurement. Contractors are awarded or denied contracts on the basis of their recordable injury rates. Organizations compare themselves to others based on this rate. It’s the way we keep score in safety.
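
For readers unfamiliar with how that score is kept: OSHA’s recordable incident rate normalizes the count of recordable injuries to 200,000 hours worked, roughly 100 full-time employees working for a year. A quick sketch, with made-up numbers purely for illustration:

```python
def recordable_incident_rate(recordable_injuries, hours_worked):
    """OSHA recordable incident rate: recordable injuries per 200,000 hours worked
    (roughly 100 full-time employees working a full year)."""
    return recordable_injuries * 200_000 / hours_worked

# Hypothetical numbers, not from this post:
print(recordable_incident_rate(3, 500_000))  # 1.2 for a large contractor
print(recordable_incident_rate(1, 80_000))   # 2.5 for a small contractor with a single case
```

Notice how a single case can move a smaller contractor’s rate dramatically, which is part of why so much ends up riding on whether any one injury becomes “recordable”.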

In the case of this plant, the corporate incentive was to reward employees with a bonus based on safety performance. And safety performance, in this case, was measured by, you guessed it, the number of recordable injuries (which includes recordable injuries suffered by both contractors and site employees). So, if the contractor employee who got bitten by a bug received a certain level of medical treatment, it would cost people real money and would make the site and the organization look bad. The corporate safety rep was simply trying to avoid this.

This is all reasonable until we take a step back and realize that, to paraphrase Newton, for every action there’s a reaction. We just told the contractors that every single injury is a big deal that will be evaluated by site employees who are not medical professionals, and who will subsequently guide treatment options. The question then becomes: although these actions are designed to help the site look good, how do they make us more safe?

Could it make us less safe?

We contend that it will, and here’s why – we just incentivized people to hide injuries, and we just made the injuries we do see more severe. Minor injuries will be driven underground, not because people don’t care about safety, but because they want to avoid having people make a big deal about them (the same reaction happens when someone avoids calling for emergency medical help while having heart attack symptoms). Further, employees who do report minor injuries are less likely to be sent for care as a precaution, which means the only medical care provided will be reactive, i.e. given once the condition has gotten so much worse that it is unbearable. So, by attempting to ensure that the organization looks good, we have created a situation where the organization will, on average, learn less about what’s really happening on the ground floor, and we have, on average, increased the overall suffering our employees face.


In the safety profession we have to understand that we often do not face a choice between a good option and a bad one. Often we have to choose between two options that benefit our organization in some way but that have negative consequences as well. In this case, we have to choose between increasing organizational learning and making the organization look good. (Of course, we can remove this choice by changing the way safety is measured, but that’s beside the point for now.) Both are laudable goals in a vacuum. But we don’t live in a vacuum. We live in a world with finite resources, which means we have to make tradeoffs, which means we have to have a clear understanding of what our priorities are in different situations. There are always side effects to our decisions. We have to identify those side effects and evaluate whether they make our intended decision “worth it” or not. Sometimes what seems like a very good decision actually makes things much, much worse. Our job is to identify that as early as possible. It’s not as easy as some would make it sound. Our world is not mechanistic and linear. It’s complex and impossible to fully understand. But that’s why they pay safety pros the big bucks, right?