C. Warren Axelrod

Mitigating the “Humin Errur” Risk

A retrospective report in The Wall Street Journal on May 18, 2011, by Yuka Hayashi and Phred Dvorak, titled “Fresh Tales of Chaos Emerge From Early in Nuclear Crisis,” describes the first few minutes following the earthquake that hit Japan on March 11, 2011, and how workers mistakenly shut down a backup system, contributing to the meltdown of one of the nuclear reactor cores. Here is a quote from the article:

“Soon after the quake, but before the tsunami struck, workers at one reactor actually shut down valves in a backup cooling system—one that, critically, didn’t rely on electrical power to keep functioning—thinking it wasn’t essential. That decision likely contributed to the rapid meltdown of nuclear fuel, experts say.”

Another report in InfoWorld magazine describes in detail the human error that caused a recent major outage of Amazon’s cloud computing services.

To paraphrase the cartoon character Pogo … we have seen the error and it’s human. [For those for whom Pogo was “before their time,” the actual quote is “We have met the enemy… and he is us.”]

So the solution seems simple: take humans out of the decision process. That would be fine except for one thing … humans are the ones who design and build the automated systems that are supposed to replace the human “actors.”

Granted, it is far better to have a group of engineers carefully consider the design and functioning of automated systems than to have operators decide on the spur of the moment, under fire and in the midst of a crisis, what to do. But such human-engineered systems are never perfect, so they almost always include an option for human intervention. And so we wind up with the potential for human error under the worst of circumstances.
