
Shit Happens

Especially in complex systems, like a nuclear power plant or the airline industry, says Richard Cook, an expert in "safety culture." And when it does, the fattest manual in the world won't save you.

by Harold Henderson


Richard Cook, director of the University of Chicago's Cognitive Technologies Laboratory, is an anesthesiologist by training. But he's best known for his work on spectacular accidents outside of health care--Three Mile Island, Chernobyl, the Challenger disaster--which concludes that such accidents are caused not by human error but by flaws in systems. His lab's mission is to study "safety culture" and devise training methods that treat human judgment as a resource rather than an obstacle.

Harold Henderson: How does one get from anesthesiology to space-shuttle disasters?

Richard Cook: Anesthesia is a very technical specialty to start with. It involves lots of complex devices and monitoring systems. So anesthesiologists have been interested for a long time in how people understand and handle problems with complex systems. In the past ten years the rest of health care has become interested in the issue of patient safety. Much of the current work on patient safety has used anesthesiology as a model.

HH: So this isn't all new?

RC: Most people didn't become fascinated with human error in complex systems until the 60s and 70s, when we started to create elaborate computer-controlled systems in manufacturing, airplane cockpits, and nuclear power plants. The watershed event in the field was the Three Mile Island meltdown in 1979. At first this appeared to be a simple matter of human error: the operators mismanaged the plant. But on closer examination over several years people discovered that the operators hadn't mismanaged the plant. They had been trapped by the complexity of the system and by its usual state of being partially broken. They were experienced operators with plenty of knowledge, but they had something like 100 alarms go off in the first five minutes of the event.

HH: Yet it was all perfectly clear in hindsight?

RC: Yes. The classic book in this field is Charles Perrow's Normal Accidents [1984]. We find that in complex systems the technology and organization often set up people to fail. After an accident it seems clear how it could have been prevented, but it was impossible to know ahead of time.

It's also the case that the kind of accident we see is changing. When you introduce a new technology you create new opportunities for failure--often in a form that's hard to imagine until it occurs. Have you ever used a windup alarm clock? They're not very accurate. They can easily run five or ten minutes slow or fast. A digital alarm clock is much better: it's precise to a fraction of a second, and it will work all the time without your having to wind it. But if the power goes out you may not get any alarm at all. What's more, you can easily make the mistake of setting your alarm to ring at 6 PM instead of 6 AM. Instead of a small error of five or ten minutes, you now have a really big one. So along with precision and accuracy comes a new kind of failure, which occurs only rarely, but when it does it's quite devastating. We see the same sort of thing in far more complex systems. It's a change from frequent low-consequence failures to relatively infrequent but very consequential failures, across domains as different as aviation and health care, from big systems to personal appliances.

HH: And you're saying that all these complex systems are fresh meat for Monday-morning quarterbacks. I was struck by an article you described from the Journal of the American Medical Association, in which anesthesiologists judged professional performance in several cases that contained the same facts but had randomly assigned outcomes, good and bad. When the outcome was bad the professionals said the patient care was substandard, but when the outcome was good they judged the exact same patient care as up to standard.

RC: Hindsight bias is a durable characteristic of human cognition. Among other things, it leads us to believe that an event was more foreseeable than it actually was once it has occurred. A lot of the reaction to 9/11 is hindsight bias: It's so obvious! We should have foreseen it! But the plain fact is, people didn't. The problem is that our understanding of the world has now been transformed by the event. It's what we call a fundamental surprise.

HH: Which is?

RC: Daniel Webster came home and found his wife in bed with another man. She said, "You surprised me." He said, "Madam, you astonish me." She knew that she might be caught; Webster had no idea that the situation was possible at all. That's fundamental surprise. In hindsight we tell the Three Mile Island operators, "Oh, you need to pay more attention. Be more careful." But the record shows that they couldn't have paid more attention.

HH: The standard solution is to make sure they're more careful by bulking up the procedure manual, then training everyone to follow it under every conceivable circumstance.

RC: When you look at manuals, though, they don't cover every conceivable circumstance. We can't envision all of the ways things can fail. Whether it's acknowledged or not, we depend on people to take up the slack when something unexpected happens. Our great successes come when people deal successfully with unforeseen and potentially catastrophic occurrences. Think about Apollo 13 [the 1970 moon shot in which astronauts Jim Lovell, Jack Swigert, and Fred Haise had to improvise their way back to earth after their oxygen tank exploded]. Most of the time people do keep those systems from failing. Instead of trying to get people to go by the book, which isn't possible, we need to train people to understand systems and possible failures. The truth is, people are error-correcting mechanisms.

HH: Another standard viewpoint is that safety should always be the number one priority.

RC: Well, safety is dynamic and not static. If you really want to live in a no-risk world you're going to live in a no-action world.

HH: The safest anesthetic is the one never administered?

RC: Exactly. We've got to stop pretending that these are simple failures and that human error is the culprit. That approach blinds us to the complexities, the vulnerabilities, and the resources. At the Cognitive Technologies Laboratory our approach is to try to empower people to operate these systems, anticipate failures, and deal with them when they start to occur.

HH: Are you claiming that there's no such thing as human error?

RC: I don't put it that way, because that can lead into almost religious arguments. It's just that making error the target of our efforts doesn't lead to fruitful results. The old military idea is to reinforce success, not failure.

HH: So study Apollo 13 more than the Challenger disaster?

RC: Yes--and recognize that hindsight poisons our assessments of what people should have known before the fact.

Art accompanying story in printed newspaper (not available in this archive): photo/Lloyd DeGrane.