Tuesday, June 24, 2014

Getting Ahead of the Game: Design For Safety

As safety professionals, it sometimes feels a bit like we’re chasing our tails. We come into an organization and are asked to help facilitate safety management, but typically only after things have been put in place by the organization. Work processes, equipment, management practices, etc. are all usually well established before we’re asked to get to work. Unfortunately, this means we’re inherently in reactive mode. We like to call it “safety whack-a-mole.” For those unfamiliar with the game: a “mole” pokes its head up out of a hole and the player’s job is to quickly knock it back down again. As the game progresses the moles come up faster and faster, making it harder and harder to keep up with all the moles popping up.

Look familiar?

Safety whack-a-mole happens when we are constantly playing catch-up, trying to fix problems after they already exist. Whether it’s hazards, risks, incidents, or general issues related to safety management, if all we do is wait for them to show up and then deal with them, we will often find ourselves never able to get ahead.

Enter the Prevention through Design movement, or, as we like to call it, Design for Safety (we like our version better because DFS is more than just prevention). Design for Safety (DFS) is about bringing safety considerations into your organization’s management, procurement, and engineering processes earlier. So, for example, rather than waiting for that loud new piece of equipment to be purchased and only then trying to protect employees from the noise, DFS would involve developing safety criteria for purchasing that equipment, including ensuring that the equipment is designed to protect employees from noise exposure (e.g. sound dampeners).
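To make that procurement example a bit more concrete, here’s a minimal sketch of what a DFS-style purchasing gate for noise might look like. This is our own toy illustration, not part of any standard; the equipment names are hypothetical, and the thresholds follow OSHA’s 85 dBA action level and 90 dBA permissible exposure limit (8-hour TWA) for occupational noise.

```python
# Toy sketch of a DFS-style procurement gate for noise (hypothetical,
# not from any standard). Thresholds follow OSHA's occupational noise
# limits: 85 dBA action level and 90 dBA PEL, both 8-hour TWA.

ACTION_LEVEL_DBA = 85.0  # OSHA action level, 8-hour TWA
PEL_DBA = 90.0           # OSHA permissible exposure limit, 8-hour TWA

def screen_equipment(name: str, vendor_sound_level_dba: float) -> str:
    """Classify a candidate purchase against the noise design criteria."""
    if vendor_sound_level_dba < ACTION_LEVEL_DBA:
        return f"{name}: acceptable as designed ({vendor_sound_level_dba} dBA)"
    if vendor_sound_level_dba < PEL_DBA:
        return (f"{name}: require engineering controls (e.g. sound dampening) "
                f"before purchase ({vendor_sound_level_dba} dBA)")
    return f"{name}: reject or redesign -- exceeds the PEL ({vendor_sound_level_dba} dBA)"

# Hypothetical candidate machines from three vendors
print(screen_equipment("Granulator A", 82.0))
print(screen_equipment("Granulator B", 88.0))
print(screen_equipment("Granulator C", 95.0))
```

The point isn’t the code itself; it’s that the acceptance criteria exist, in writing, before the purchase order goes out.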

Or, to use another example: as your organization develops a new bonus structure that rewards employees for hitting production targets, rather than waiting to see what the unintended consequences of that structure will be, DFS would have us analyze the incentive structure to identify any competing goals it may create, and then search for ways to either eliminate those trade-offs, when possible, or provide employees with coping skills for managing the trade-offs so that safety is not compromised.

Obviously the development of an effective DFS system is not something we can completely cover in a short blog such as this. Any effective DFS system must be aligned with the specific work processes and management practices within your organization to be successful. There is an ANSI standard on Prevention through Design (ANSI/ASSE Z590.3) that can provide you with a framework for implementing DFS in your organization. However, there are a few elements that either aren’t covered in the standard or are sufficiently important that they need additional emphasis. So here’s a brief list of additional considerations you can use to help in the implementation of an effective DFS system.
  • A common reaction to DFS among safety professionals is fear, often because they don’t have an engineering background. That’s OK! You don’t need to be an engineer. You bring to the table an understanding of hazards, risks, controls, and regulations that others do not have, and likely knowledge about human performance that most engineers don’t possess. You don’t have to be an expert in design to champion the implementation of DFS.
  • You MUST include line employees in the design process. A big problem in most organizations is the gap between work-as-imagined and work as it actually gets done. Bridge that gap by getting employees involved in the process. They can often provide innovative solutions to problems or, at the very least, help you avoid mistakes brought on by insufficient knowledge of the operating environment. Employee participation is at least as crucial as management commitment to the process.
  • When you’re analyzing a design, it often helps to have structured tools that make the process systematic. Typical ones include FMEA, Fault Tree Analysis, HAZOP, etc. In addition to these, we like the System-Theoretic Process Analysis (STPA) developed by Nancy Leveson. In head-to-head comparisons with other hazard analysis tools, STPA has been shown, both anecdotally and through research, to identify more hazards than traditionally used methods. For more information about STPA check out this book by Leveson. (For a toy illustration of the arithmetic behind one of the traditional tools, see the sketch after this list.)
  • Always document assumptions. If you implement a particular design feature, make sure it’s very obvious why that feature is there. Ask the question: what happens if everyone involved in the design process leaves the organization? Would people know why you did what you did, in a way that would allow them to work safely with your designs?
  • Understand that no planning process is perfect. As they say in the military, no plan survives contact with the enemy. To account for this, we need to implement our designs, plans, etc. with the understanding that they will need adjustment. Therefore, build in resilience so that those operating with the new design or equipment can adapt to the imperfect parts of the plan. To help with this, consider building feedback structures into the design that make it obvious when a change is needed.
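As promised above, here is a minimal, hypothetical fault-tree calculation to show mechanically what one of the traditional tools does. This is our own toy example, not STPA (STPA is qualitative and systems-theoretic rather than probabilistic): for independent basic events, an AND gate multiplies probabilities, while an OR gate takes the complement of every input failing to occur. The hazard scenario and the numbers below are invented purely for illustration.

```python
from math import prod

# Toy fault-tree arithmetic (a hypothetical illustration only).
# AND gate: the top event requires every input to occur.
# OR gate: the top event requires any one input to occur.

def and_gate(probs):
    """P(top) when all independent basic events must occur."""
    return prod(probs)

def or_gate(probs):
    """P(top) when any one of the independent basic events suffices."""
    return 1 - prod(1 - p for p in probs)

# Invented top event: "worker contacts unguarded blade"
# = (guard removed OR interlock fails) AND (worker reaches into machine)
p_guard_removed = 0.05
p_interlock_fails = 0.01
p_reach_in = 0.20

p_contact = and_gate([or_gate([p_guard_removed, p_interlock_fails]), p_reach_in])
print(f"P(contact) = {p_contact:.4f}")  # 0.0119
```

Of course, a calculation like this is only as good as the tree’s structure and inputs; part of the appeal of newer methods like STPA is that they push you to question that structure in the first place.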


Again, in a short blog we can’t list everything. So what other elements are needed to ensure the success of a DFS system?

Tuesday, June 17, 2014

Assimilation, Accommodation, and Thinking Safety Differently

Last week we were pleasantly surprised to see that Professor Sidney Dekker was one of the keynote speakers at the American Society of Safety Engineers Professional Development Conference in Orlando. For those who are not familiar with Professor Dekker, he is one of a crop of thought leaders in safety science who are challenging our profession to move beyond old models of safety management that just aren’t working anymore, toward a more effective and sustainable future. (For a summary of his presentation, click here).

Professor Dekker painted a striking contrast between old models of safety management and a new view of safety, one that he and others have taken to calling “safety, differently.” The table below presents the contrast between these two models.



Traditional View of Safety | Safety Differently
---------------------------|-------------------
People are a problem to control | People are the source of safety and success to harness
The best way to intervene in safety is at the behavioral level | The best way to intervene in safety is at the contextual level
Safety is best measured by its absence | Safety is best measured by its presence


Safety Differently is a great model for the future of safety leadership, and it’s one we’ve talked about a little before here and will continue to discuss in the future. So we were pleased to see a large part of our profession exposed to these ideas.

However, being exposed to these ideas is not enough. To make a change we have to do things differently. The problem, though, is that when people are exposed to a new way of thinking, they often don’t adopt it. Instead of letting the new ideas change them, they try to change the ideas to fit their existing world.

Social and developmental psychologists (going back to Jean Piaget) have a model that may help explain this process a bit better. Typically, when exposed to a new idea, people go through a process of assimilation and accommodation. The first step, assimilation, involves the person trying to make sense of the new idea using their current view of the environment. Basically, the first instinct tends to be to look for similarities between the new idea and what the person already knows and does.

So, for example, many safety professionals will look at Professor Dekker’s Safety Differently model and find things they are already thinking and doing that are in line with it. Many look at current models of safety management systems, behavior-based safety, or the vogue concepts of safety culture and leading metrics and say that the Safety Differently model isn’t all that different.

However, this is a mistake. Humans have a tendency to look for what they expect and hope to find, which makes them more likely to find it. This is called confirmation bias, and it’s something we all suffer from. If all we do when exposed to new information is find reasons why it’s not that new, then we will never change and progress. We will always find reasons why what we’re doing is enough and, as the saying goes, we’ll keep doing what we’re doing and getting what we’ve got.

The second step in the process is accommodation. Accommodation occurs when the person cannot completely fit the new idea into their existing mental models, and so they create new mental models and frameworks to accommodate the new information.

This is what we need to get into the habit of doing as a profession. When exposed to new ideas, such as those from Professor Dekker and others, we should not look for what we’re already doing. Rather, we need to look for how the model conflicts with our current beliefs and practices. What aren’t we doing that the model suggests we do? What are we doing that the model suggests we stop doing? In short, we need to look for all the ways the model would change us, rather than changing the model to fit us.


Now, this isn’t to say that we blindly accept any new idea or model that someone suggests to us. Rather, what we are advocating is taking a process that often happens unconsciously, and usually, as a result, involves significant bias, and making it conscious and deliberate. When someone brings a new idea to the table, our bias should not be to find reasons why we’re right and the new idea is wrong. Instead, let’s think critically about these new ideas. Challenge old assumptions. Ask hard questions. And if, at the end of this, our old models still stand, then we’re the better for it. However, if we ever want to move our profession forward and start thinking differently about safety, we need to quickly move past the assimilation phase and begin the process of accommodating new ideas into our current ways of thinking.

Wednesday, June 11, 2014

Stop Trying to Punish the Error Out of People!

We’ve talked about things like this before (here, here, here, and here), but we once again see examples of a really bad habit that organizations, governments, and people in general have – punishing people for human error. The most recent example is the crash involving comedian Tracy Morgan (who, as of this writing, is still in critical condition), in which four people were injured and one was killed. Prosecutors have charged the truck driver with vehicular manslaughter and assault by auto, citing the fact that he had been awake for more than 24 hours before the crash.

Let’s set aside the fact that the driver is being punished for an outcome he couldn’t control (after all, if he had been awake for 24 hours but had NOT gotten into an accident, simply because of luck, would he have been charged with anything?). Let’s just look at the idea of punishing the human error.

First, we have to assume one thing – the driver did not intend to crash into the limo that was carrying Morgan and company. If the driver did intend to cause harm then punishment may be deserved, but it’s pretty unlikely this is the case.

What happened then? The truck driver was tired and likely made a mistake. The remedy? In a scene akin to the witch-burning scene in Monty Python and the Holy Grail, we are looking for someone to punish, and the driver is an easy target.

After all, assuming it’s true, how could he be so stupid as to stay up that long before driving?

Indeed, that is the question. We have to remember that if anyone has “put their money where their mouth is,” or, to use a more appropriate colloquialism, has “skin in the game,” it’s the driver. In this case staying up too long led to injury and death for others, but in many cases it can lead to injury or death for the driver himself. Right off the bat, the driver has a vested interest in being safe. So the problem isn’t one of motivation.

Further, we should admit that if you asked any truck driver whether going without sleep for a long period would make them a less safe driver, the answer would not be news to them. Most know that being fatigued makes you an unsafe driver. So the problem isn’t knowledge.

So, if truck drivers want to be safe drivers and if they know that they should get enough rest before driving, why did this accident happen?

The convenient answer is to say that this particular truck driver was particularly bad and therefore deserves punishment.

…but what if the problem isn’t just with this driver? What if this truck driver isn’t really that much different than any other truck driver? What if the issue is unreasonable production schedules, faulty procedures and regulations, and incentive structures that reward drivers for driving while tired? What good is punishing them going to do?

Obviously the answer is – nothing. However, you could argue that punishing human error when it is really a system problem actually makes things worse. What ends up happening is that workers go into defensive mode. Rather than incidents being opportunities to learn, they become times to find fault for some and to duck for cover for others. The bottom line, as Todd Conklin says, is that you can either choose to find fault and blame, or you can learn. You can’t do both. If we choose fault and blame, as we always have, then we will keep getting what we’ve always gotten. And that means more accidents, more injuries, and more deaths.

Thursday, June 5, 2014

Safety by Accident: A Call for Action from the Safety Profession

Note: This blog is a brief summary of a presentation that Paul Gantt, our President and Founder, is giving at next week’s ASSE Professional Development Conference, Safety 2014, in Orlando on Monday, June 9th at 3:15pm EDT (session 539).

In speaking with a safety professional recently, we got to talking about how he got into the safety profession. Like so many other professionals in our field, he did not intend to become a safety professional. He was a floor supervisor for a company that got bought out, and rather than being laid off, he was offered a position in the new organization’s Environmental, Health, and Safety (EHS) department. He had no previous experience in EHS at all; he sort of just fell into it.

Now, don’t get us wrong, this particular person has done a pretty good job in the years since starting in the profession. However, it does highlight an interesting and perhaps disturbing trend – the overwhelming majority of safety professionals did not intend to get into the profession. That fact alone is not a problem. But like so many other things in our profession, we must understand the unintended consequences that result from it.

To illustrate these consequences, compare the safety profession to other professions, such as medicine or engineering. Typically, people enter those professions at an early age, and because they want to. They go to college, take internships, and get plugged into the fundamental institutions of the profession (e.g. professional associations, academic institutions). In the safety profession, given that the majority of practitioners fall into the profession rather than intentionally join it, we don’t see that. What we see instead is what you would expect of a profession made up of ad hoc professionals. Here are some examples:
  • The average professional who joins the profession will be less qualified in terms of knowledge, education, and training.
  • This will lead to an overall lower average level of knowledge, education, and training for the profession as a whole.
  • The average safety professional who joins the profession unintentionally will be less likely to be aware of institutions that exist to shore up the profession, such as professional associations and academic research.
  • This leads to a highly divergent and fractured knowledge base due to a lack of continuing education.
  • Further this leads to a lack of profession-wide forward thinking and innovation, as we’re always trying to play “catch-up.”

One can also draw a connection between this lack of entry standards for our profession and the compliance culture that permeates most safety professional thinking. After all, if you haven’t been given a theoretical and practical foundation for thinking about safety and you have been thrust into a safety position, needing to get up to speed quickly, the most efficient thing to do is to mentally outsource your safety thinking to compliance with regulations. This is why, if you ask the majority of safety professionals what the safe way to do something is, they will respond by parroting some form of regulatory requirement.

We do not blame these professionals, and we do not mean to disparage the well-intentioned and passionate individuals who joined the profession by accident (in fact, Paul, who’s giving the presentation mentioned above, is a safety professional by accident too). Rather, as Paul will discuss in Orlando, problems do not get solved until problem solvers identify the breadth and depth of the problem. We have many highly intelligent individuals within our profession, and at SCM we believe that if you get enough highly intelligent people in a room and give them the proper parameters for problem solving, some really amazing things can happen.

But Paul will have more to say on this topic next week, so we won’t steal his thunder.


What are your thoughts on how to improve the safety profession?