I finally got to read Douglas Hubbard’s book “The Failure of Risk Management: Why It’s Broken and How to Fix It” (Wiley, 2009). As I have written in other columns about Hubbard’s prior book “How to Measure Anything: Finding the Value of Intangibles in Business” (Wiley, 2007; second edition, 2010), Hubbard’s risk management book is a welcome addition to texts in this frequently misunderstood area. In many ways, the risk management book parallels my article “Accounting for Value and Uncertainty in Security Metrics” in Volume 6, 2008 of the ISACA Journal, available at http://www.isaca.org/Journal/Past-Issues/2008/Volume-6/Pages/Accounting-for-Value-and-Uncertainty-in-Security-Metrics1.aspx. However, the book obviously includes much more material, mostly debunking the various risk scoring approaches propounded by ISACA (in COBIT), NIST (Special Publication 800-30), and PMI (PMBoK). Quite a list!
However, like other critiques of inadequate approaches, such as Donn Parker’s dismissal of essentially all security risk assessment methodologies, Hubbard’s book tends to throw the baby out with the bathwater in order to make its point. While it is fair to discredit some risk scoring approaches as inconsistent and inaccurate, scoring does, in my opinion, have some merit as an expression of the respondent’s subjective view of the relative importance of various risk factors. In their book “Managing Information Security Risks: The OCTAVE Approach” (Addison-Wesley, 2002), Alberts and Dorofee do caution analysts about the interpretation of risk scores, even though OCTAVE, perhaps the preeminent risk assessment methodology, developed by Carnegie Mellon, is based entirely on scoring. I wondered why Hubbard didn’t mention OCTAVE, FAIR, FRAP, or Soo Hoo in his book. Even Donn Parker, in his diatribe against risk management approaches, mentions some of these well-known techniques in his chapter in the book “Enterprise Information Security and Privacy” (Artech House, 2009), which I co-edited.