As we’ve discussed before (for example, here), what we think of as “human error” – or whatever other label you want to use – is often just a product of normal work, i.e. people doing what they feel is best, given the competing goals and tradeoffs in front of them.
As much as we’d like to take credit for this idea, we must admit that we’re standing on the shoulders of giants, forerunners in the field of safety science and human performance. One of the most important of those forerunners is Jens Rasmussen. Rasmussen is responsible for many ideas and concepts that are foundational to our understanding of safety science. One concept we find particularly interesting and useful is Rasmussen’s drift to danger model (you can view the entire article where the concept is described here), seen below in its original form.
As Box and Draper said, all models are wrong, but some are useful – and we think this model is particularly useful and worth a closer look. Obviously there’s a lot going on in the above picture (we encourage you to read the entire article in the link above for an explanation), so we have a more pared down version below, created by Johan Bergström (which can be seen in this presentation that is well worth your time).
Basically, what the model illustrates is that at any given time there are numerous pressures competing for attention within an organization. On the upper right side you have the pressure to not go out of business. Every organization has a line (although we may not know where that line is) where on one side the business can continue to function and on the other side the business isn’t financially stable enough to continue to exist and must shut down. From a “risk management” perspective, the organization must get as far away from that line as possible. So management is incentivized to push the organization toward greater efficiency and productivity.
On the bottom right side you have the pressure not to work too hard. Certainly an organization could be extremely productive if its workers could do an infinite amount of work without having to rest. But that’s not reality. Everyone has a limit. Furthermore, people are inherently motivated to do the most work for the least amount of effort (however they define those terms individually). You can call this laziness if you like, but it’s something we all do, even when it’s not required. So again, we have a line where on one side you have an acceptable amount of work and on the other side an unacceptable amount, and we constantly try to get away from the unacceptable amount of work.
So we have pressure from one side toward efficiency and pressure from another side toward least effort. Note, though, that while these two pressures push against each other to an extent, they also push in the same direction to a degree, reflecting the overlap between the goals of efficiency and least effort.
On the far left we have the “boundary of functionally acceptable behavior,” or you could say the safety boundary (although one could argue that this is not technically accurate). Note that, left unchecked, the pressures toward efficiency and least effort would blast right past the safety boundary. Fortunately, most people who come to work don’t want to die or get anyone killed, so people are motivated to not get too close to the safety boundary. The problem, though, is that (a) the safety boundary sometimes changes as our systems change over time, (b) the safety boundary is not always clearly defined, and (c) the pressures for efficiency and least effort can intensify unexpectedly (such as during economic downturns), pushing us closer to the safety boundary.
The important thing to remember is that these pushes toward the safety boundary are not “dumb,” “evil,” or any other adjective we typically attach to such things. Rather, they are a function of normal behavior. This is what Scott Snook called “practical drift.” And that’s what makes them so tricky to deal with, because when something is normal it is hard to identify it as a problem. As Michael Roberto pointed out, what at first gets excepted quickly becomes accepted, and soon after becomes expected.
So what can we do about this? The most important lesson from Rasmussen’s drift to danger model is that identifying how close one is to the safety boundary requires a new way of thinking about safety. We can’t define safety by the absence of negatives (i.e. incidents, risk, etc.), because you cannot identify how close you are to danger using those metrics. Instead, we have to come up with new metrics based on positive capacities to adapt to the changing environments that the organization and its people find themselves in. Further, traditional tools, such as hazard hunts, behavior observations, and risk assessments, will typically not detect such drift, because they often assume static environments (i.e. the job never changes) rather than the complex, dynamic work systems our employees actually operate in. Instead, we must understand how work works. Traditional safety practice must be expanded to include understanding elements of the organization not traditionally covered under the umbrella of safety management, because all of these things have an effect on safety.
Is your organization drifting to danger?