My September 12, 2011 BlogInfoSec column “Risk Management – Scoring vs. Monte Carlo vs. Scoring” was about the subjectivity of risk assessments, where “subjectivity” was defined as one’s personal view of particular risks. I received considerable push-back from the likes of Donn Parker and Alex Hutton and, to address their valid comments, I changed the descriptor to “personalization” in my December 19, 2011 BlogInfoSec column “The Personalization of Risk.” In both columns, I was trying to convey the idea that even risk professionals inject their personal attitudes into risk assessments and risk-related decision-making.
Support for this approach to risk recently came from an unlikely source: a DealBook article by Jesse Eisinger titled “Uncovering the Human Factor in Risk Management Models,” published on page B7 of the April 4, 2013 New York Times. The article traces the history and relates the thinking of Dr. John Breit, formerly the “top” risk manager at Merrill Lynch.
In Eisinger’s article, we discover that Dr. Breit believes that “[i]nstead of fixating on [risk] models, risk managers need to develop humint … [namely] human intelligence from flesh and blood sources.” To gather such humint, Dr. Breit would socialize with junior accountants and others on the basis that: “They see things first. Almost every trading debacle was sitting on some accountant’s desk.”
If we apply the same thinking to information security, it makes good sense for infosec professionals to gather information from the users and operators of systems, networks and devices to understand how they view the security risks of their systems and operations. Often they know much more than you realize about where the threats and vulnerabilities lie and what should be done about them.
Perhaps the following paragraph from Eisinger’s article is the most revealing:
“Take VaR [value at risk]. In Mr. [sic] Breit’s view, Wall Street firms, encouraged by regulators, are on a fool’s mission to enhance their models to more reliably detect risky trades. Mr. Breit finds VaR, a commonly used measure, useful only as a contrary indicator. If VaR isn’t flashing a warning signal for a profitable trade, that may well mean that there is a hidden bomb.”
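For readers unfamiliar with the measure being criticized: neither Eisinger nor Dr. Breit gives a formula, but a common way to compute VaR is historical simulation, in which the loss threshold is read off the empirical distribution of past returns. The sketch below is purely illustrative; the function name and the sample returns are hypothetical, not drawn from the article.

```python
# Illustrative sketch of one-day historical-simulation VaR.
# All names and data here are hypothetical examples.

def historical_var(returns, confidence=0.95):
    """Return the one-day loss that past daily returns exceeded
    roughly (1 - confidence) of the time, as a positive number."""
    ordered = sorted(returns)  # worst (most negative) returns first
    # Index of the cutoff return at the chosen confidence level
    cutoff = int(round((1 - confidence) * len(ordered)))
    # VaR is conventionally reported as a positive loss figure
    return -ordered[cutoff]

# Ten days of hypothetical daily returns
daily_returns = [0.01, -0.02, 0.003, -0.015, 0.007,
                 -0.04, 0.012, -0.005, 0.02, -0.01]
print(historical_var(daily_returns, confidence=0.90))  # → 0.02
```

The point of Dr. Breit’s critique survives the arithmetic: the number above is only as good as the history fed into it, and a quiet history can make a hidden bomb look safe.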
Such a view can be extended from trading models to information security. We see what our intrusion detection and network monitoring systems tell us: the common threats and exploits are monitored, mitigated and deflected. It is the exploits we don’t detect that commonly lead to major data breaches. This presumption is based on the many publicized occasions on which victims appear to have been unaware of attacks until another party, such as law enforcement or a payment card processor (such as VISA), detected them and pointed them out to the victim organization.
This takes us back to the personalization of risk. Risk models attempt to remove subjectivity and personal views, and to some extent they may do so. However, the highly subjective manner in which such models are developed and used often leads to their failing to represent reality. It would be far better to recognize that humans still have major contributions to make in recognizing risks and assessing their impact, and to use humint to supplement, and in some cases replace, ineffective risk models. Placing unwarranted faith in deficient risk models only exacerbates an already dangerous situation. On the other hand, if one accepts that personal bias is injected into the development and use of risk models, then the resulting models depend very much on whom you ask for input, as does the interpretation of the results.
It is important to include human factors when creating and running risk models. But it is also important to recognize the limitations resulting from “human frailty” and personal bias.