Disclaimer: The opinions of the columnists are their own and not necessarily those of their employer.
C. Warren Axelrod

Risk Mismanagement – Scoring vs. Monte Carlo vs. Scoring

I finally got to read Douglas Hubbard’s book “The Failure of Risk Management: Why It’s Broken and How to Fix It” (Wiley, 2009). I have written in other columns about Hubbard’s prior book, “How to Measure Anything: Finding the Value of Intangibles in Business” (Wiley, 2007; second edition, 2010). Hubbard’s risk management book is a welcome addition to texts in this frequently misunderstood area. In many ways, it parallels my article “Accounting for Value and Uncertainty in Security Metrics” in Volume 6, 2008 of the ISACA Journal, available at http://www.isaca.org/Journal/Past-Issues/2008/Volume-6/Pages/Accounting-for-Value-and-Uncertainty-in-Security-Metrics1.aspx. However, the book obviously includes much more material, mostly debunking the various risk-scoring approaches propounded by ISACA (in COBIT), NIST (Special Publication 800-30), and PMI (in the PMBoK). Quite a list!

However, like other critiques of inadequate approaches, such as Donn Parker’s dismissal of essentially all security risk assessment methodologies, there is a tendency to throw the baby out with the bathwater in order to make one’s point. While it is fair to discredit some risk-scoring approaches as inconsistent and inaccurate, the method does, in my opinion, have some merit as an expression of the subjective relative importance of various risk factors according to the respondent. In their book “Managing Information Security Risks: The OCTAVE℠ Approach” (Addison-Wesley, 2002), Alberts and Dorofee actually do caution analysts about the interpretation of risk scores, even though OCTAVE℠, perhaps the preeminent risk assessment methodology, developed at Carnegie Mellon, is totally based on scoring. I was actually wondering why Hubbard didn’t mention OCTAVE℠, FAIR, FRAP, or Soo Hoo in his book. Even Donn Parker, in his diatribe against risk management approaches, mentions some of these well-known techniques in his chapter of the book “Enterprise Information Security and Privacy” (Artech House, 2009), which I co-edited.
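To make the scoring-versus-Monte-Carlo contrast in the title concrete, here is a minimal sketch (mine, not from Hubbard’s book or any of the methodologies named above) that puts an ordinal risk score next to a Monte Carlo estimate of annualized loss. All frequencies and loss figures are illustrative assumptions.

```python
import random
import statistics

def poisson(lam):
    """Draw a Poisson count by summing exponential inter-arrival times."""
    n, t = 0, random.expovariate(lam)
    while t < 1.0:
        n += 1
        t += random.expovariate(lam)
    return n

# Scoring approach: two 1-5 ordinal ratings multiplied into a "risk score".
likelihood_score, impact_score = 4, 3
risk_score = likelihood_score * impact_score   # 12 points -- ordinal, unitless

# Monte Carlo approach: model incident frequency and per-incident loss.
annual_frequency = 0.8                          # assumed incidents per year
loss_low, loss_high = 50_000.0, 400_000.0       # assumed loss range per incident

random.seed(1)
trials = 10_000
annual_losses = []
for _ in range(trials):
    total = sum(random.uniform(loss_low, loss_high)
                for _ in range(poisson(annual_frequency)))
    annual_losses.append(total)

expected_loss = statistics.mean(annual_losses)
p95 = sorted(annual_losses)[int(0.95 * trials)]
print(f"Scoring result:  {risk_score} points")
print(f"Monte Carlo:     expected ${expected_loss:,.0f}/yr, 95th pct ${p95:,.0f}")
```

The point of the contrast: the score is a relative ranking with no units, while the simulation yields dollar figures that can be compared directly against the cost of a control.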

4 Comments

  1. Donn Parker Sep 12, 2011 at 2:05 pm | Permalink

    Risk analysis is about the future, forecasting the effects of security solutions, and would be needed, if at all, only in extreme cases. I have found that identifying many future incidents, and making decisions about the selection and priority of possible security solutions to them, is clear cut, straightforward, and concluded without contention; it requires only diligence. In a few situations, say once every several years, management may question the frequency and impact of a kind of incident and its expensive solution, and more justification for a decision is needed. In many of those cases a few simple calculations and some benchmarking settle the issue to management’s satisfaction. Finally, a situation may arise, say once every ten years, where a disputed decision cannot be settled. In that case the decision could be made by management fiat, or management may resort to expert decision analysis and a specialist such as Doug Hubbard, since existing security staff certainly wouldn’t have the expertise and data necessary to engage in valid decision analysis.

  2. Doug Hubbard Sep 12, 2011 at 3:28 pm | Permalink

    Thanks for mentioning my books.

    You asked why I didn’t mention FAIR, FRAP, or OCTAVE. If I had written a book about risk management in infosec, or even IT risks in general, I would have mentioned them. (The exception is that I do mention NIST 800-30, but only as an example of a qualitative method from one of the many industries that have created their own recipes for risk analysis.)

    But I wrote a book about the very broad, cross-industry, cross-specialty topic of risk management. Even though those methods may be well known within infosec, they are not well known in fields outside it, such as actuarial science, financial portfolio management, or the probabilistic risk assessment used in nuclear power. Nor are they even remotely representative of the methods used in other fields. FAIR, for example, uses some probabilistic methods, but most of my work is outside of information security, and when I talk to probabilistic risk assessment experts in most industries, I doubt I will find that even the most advanced experts in Monte Carlo simulations have heard of FAIR. Depending on the profession a person comes from, they may assume QRM, PRA, FMEA, HAZOP, or even just plain actuarial science is the method that represents risk assessment.

    Poll three actuaries or financial portfolio analysts and I’ll bet you won’t find one who has heard of any of the methods you mentioned. Nor would most books on QRM, PRA, or FMEA for financial analysts or engineers mention FAIR or OCTAVE. This isn’t because actuaries or risk analysts in finance are less sophisticated – they are actually much more sophisticated than most other professions in risk management. It’s just that numerous industries have come up with something on their own and given it an acronym you have never heard of. And each has come to believe that its industry method is actually how risk analysis is done in general – and that its peculiar acronyms are the same ones all risk management professionals use.

    That’s part of the problem I discuss in the book. Most of these methods are not created by quantitative risk experts, but by industry subject matter experts with almost no input from well-developed methods used by the broader risk industry. As you can see, my book was about many areas of risk analysis without spending too much time in any single area. The problems I talk about regarding risk management are mostly industry-generic, and I only mention cases in select industries to illustrate particular points.

    Part of my objective with the book was to get all industries – including infosec – out of their “shell” so that they realize that what they do is not necessarily representative of all risk management. And, hopefully, to learn from other risk professionals applying methods they haven’t ever heard of.

    Thanks again for posting this thought on your blog.
    Doug Hubbard

  3. Doug Hubbard Sep 13, 2011 at 12:40 pm | Permalink

    Warren,

    I meant to address another point you made, as well. You said highly subjective risk measures are either neglected or merely alluded to in my book. I would say I spend quite a lot of time on that point. In fact, I discuss in detail the research behind measuring subjective assessments of risk and devote an entire chapter to the limits of expert knowledge (i.e., subjective estimates based on experience). I then go on to show how subjective assessments of uncertainty can and should be used in quantitative models – but only after “calibration” training of experts.

    Risk is also “subjective” by virtue of the fact that you have to decide *whose* risk you are modeling. But moral hazard itself doesn’t mean the risk to the victim of moral hazard (i.e., an insurer) is subjective. On the contrary, it is routinely measured. Even subjective behavior is objectively measurable. I describe in detail how subjective risk tolerances, for example, can be objectively quantified with the “investment boundary” or risk/return utility curve. I deliberately spend time in the third section of the book describing conflicts of interest and other behavioral aspects of managing risks. But this should not be confused with being subjective, since there is a lot of objective data about the behavior of humans in such situations.

    Thanks again for your post.
    Doug Hubbard
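The “calibration” training Hubbard mentions can be illustrated with a toy self-check of an expert’s 90% confidence intervals: a well-calibrated estimator should capture the true value about 90% of the time. The interval data below is entirely made up for illustration.

```python
# Hypothetical calibration check. Each tuple is (low, high, true_value):
# an expert's 90% confidence interval for a known quantity, plus the truth.
estimates = [
    (5, 20, 12),
    (100, 300, 450),    # a miss: the truth falls outside the interval
    (0.1, 0.9, 0.4),
    (10, 50, 33),
    (1000, 5000, 2200),
]

# Count how often the expert's interval actually contains the true value.
hits = sum(low <= truth <= high for low, high, truth in estimates)
hit_rate = hits / len(estimates)
print(f"Interval hit rate: {hit_rate:.0%} (target for 90% CIs is ~90%)")
```

An expert whose “90%” intervals capture the truth far less than 90% of the time is overconfident; calibration training narrows that gap before the estimates feed a quantitative model.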

  4. Kevin Fitzgerald Aug 13, 2012 at 8:49 pm | Permalink

    I use Risk Analysis in my projects as a means of creating a mind-set in my clients’ minds that they can become pro-active about information security. I use Jerry FitzGerald’s threat-asset matrix and approximation tables for likelihood and impact to calculate the Annual Risk Exposure for each cell in the matrix. I have been using this approach with great effect since 1980. My clients are aware that we are not always perfectly right, but they appreciate the scale and they feel “in charge”. BTW, Donn B. Parker was my original mentor in the late 1970s. Good to hear his voice again after so long.
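The per-cell calculation the comment above describes can be sketched in a few lines: Annual Risk Exposure (ARE) for each threat-asset cell is the approximate event frequency multiplied by the approximate loss per event. The threat names, asset names, and all figures below are hypothetical, not taken from FitzGerald’s tables.

```python
# Hypothetical threat-asset matrix. ARE per cell =
# likelihood (events/year) * impact ($ per event).
likelihood = {  # approximate events per year (illustrative)
    ("fire", "data center"): 0.05,
    ("fire", "customer DB"): 0.02,
    ("data theft", "data center"): 0.1,
    ("data theft", "customer DB"): 0.5,
}
impact = {  # approximate dollar loss per event (illustrative)
    ("fire", "data center"): 2_000_000,
    ("fire", "customer DB"): 500_000,
    ("data theft", "data center"): 250_000,
    ("data theft", "customer DB"): 1_000_000,
}

are = {cell: likelihood[cell] * impact[cell] for cell in likelihood}

# Rank cells by exposure, highest first, to set remediation priorities.
for (threat, asset), exposure in sorted(are.items(), key=lambda kv: -kv[1]):
    print(f"{threat:>10} / {asset:<12} ARE = ${exposure:>10,.0f}")
```

Ranking the cells by ARE gives the prioritized, dollar-scaled view the comment describes, even when the underlying likelihood and impact figures are only rough approximations.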
