Every now and then I will find a security practitioner presenting the following formula when discussing information security risk analysis (ISRA).

Risk = Threats x Vulnerabilities x Impact

In some versions of this formula, the word “Consequence” is substituted for “Impact,” but the concept appears to be the same.

I want to argue that this equation, when taken literally as a mathematical formula, is nonsense and should be discarded.

As I argued in my last post, risk analysis, including ISRA, has its roots in decision theory, especially expected value (or utility) theory. The expected value or utility of an action may be thought of as a weighted average. It can be calculated by defining a set of *mutually exclusive* and *jointly exhaustive* possible outcomes from a particular course of action, and then multiplying the probability of each possible outcome by its utility. The formula is very clear and mathematically rigorous. In contrast, the “Risk = Threats x Vulnerabilities x Impacts” formula is unclear at best and possibly mathematically incoherent at worst.
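The expected-value calculation described above can be sketched in a few lines of code. This is a minimal illustration; the two outcomes and their dollar values are hypothetical, chosen only to show the mechanics.

```python
# A minimal sketch of the expected-value calculation described above.
def expected_value(outcomes):
    """outcomes: (probability, utility) pairs for a set of mutually
    exclusive, jointly exhaustive outcomes of one course of action."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in outcomes)

# Hypothetical action with two possible outcomes:
# a 25% chance of a $100,000 loss, a 75% chance of a $20,000 gain.
ev = expected_value([(0.25, -100_000), (0.75, 20_000)])  # -10000.0
```

Note that both inputs have well-defined units (a probability between 0 and 1, and a utility in dollars), which is exactly what the Threats and Vulnerabilities variables lack.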

First, while the concepts of “threats” and “vulnerabilities” are clearly relevant to determining the probability of a possible outcome of an event, they are not equivalent to that probability. For example, I understand what it means to say that the threat is “unauthorized access to a company information system” and the vulnerability is “an unpatched vulnerability in an Internet-facing web server.” It is far from clear, however, how one could literally plug those concepts into a mathematical formula. What are the units of measurement for threats and vulnerabilities? What would it mean, mathematically, to plug a number in for the “Threats” variable? If I say that a threat is 0.8, what does that mean? What is the range of possible values for “Threats”? Likewise, what is the range of possible values for “Vulnerabilities”?

Second, the “Risk = Threats x Vulnerabilities x Impact” formula may actually violate the axioms of probability theory and the canons of inductive logic. In order to be inductively correct, a formal analysis of a risky action needs to take into account ALL of the potential outcomes of an action. The “Risk = Threats x Vulnerabilities x Impact” formula fails to do this by focusing solely on security threats. Indeed, the way the formula is presented, it appears to focus solely on a single security threat. In contrast, the logically correct expected value approach takes into account all of the possible outcomes of an action. For example, if the relevant action is “delay in patching a vulnerability in an Internet-facing web server,” one possible outcome is that the vulnerability is not exploited. The utility of that outcome would then be measured by whatever savings or efficiencies may be achieved by not patching, such as the value of the employee time that would have been spent patching the machine but wasn’t, or the value of advertising revenue generated by the web server that would have been lost due to downtime (for a server reboot) or wasn’t.
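The patch-delay example above can be made concrete as a two-outcome expected-value comparison. All probabilities and dollar figures below are invented purely for illustration; the point is that the no-exploit outcome and its savings enter the calculation alongside the breach outcome.

```python
# Hypothetical numbers for the patch-delay example above: two possible
# outcomes of delaying the patch, weighted by probability.
p_exploit = 0.05                # chance the unpatched flaw is exploited
loss_if_exploited = -200_000    # breach cost (dollars)
gain_if_not = 5_000             # saved staff time and avoided downtime

ev_delay = p_exploit * loss_if_exploited + (1 - p_exploit) * gain_if_not
ev_patch_now = 0.0              # baseline: patch immediately
```

With these made-up numbers the delay has negative expected value, but a lower exploit probability or higher savings could flip the comparison, which is precisely the trade-off the T x V x I formula cannot express.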

One reply to my argument is that the formula is not literally intended to be used as a mathematical formula; rather, the formula is just an informal way of stating that security risk is a *function* of threats, vulnerabilities, and potential impact. Fair enough, but why use a bogus formula? (I do believe risk can be modeled mathematically, but not using the “Risk = Threats x Vulnerabilities x Impacts” formula.) As an alternative, why not use “Risk = Function(Threats, Vulnerabilities, Impacts)” or something similar? I’m willing to bet that anyone who can understand the first formula can also understand the second.

## 9 Comments

Jeff,

Thank you for saying out loud what I have thought ever since I heard of this “formula”.

ALE = ARO * SLE, on the other hand, really is a mathematical formula – first defined, as I recently learned, in 1979 in a NIST publication – FIPS 65, I think it was.
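Unlike T x V x I, the ALE formula the commenter mentions has well-defined units on every term, so it can be computed directly. A sketch with hypothetical inputs:

```python
# ALE = ARO x SLE, with hypothetical inputs.
aro = 0.5        # Annualized Rate of Occurrence: one incident every two years
sle = 40_000.0   # Single Loss Expectancy in dollars per incident
ale = aro * sle  # Annualized Loss Expectancy in dollars per year
```

The units multiply out sensibly (incidents/year x dollars/incident = dollars/year), which is exactly the dimensional coherence the T x V x I formula lacks.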

Patrick Florer

Co-founder

Risk Centric Security, LLC

Dallas, Texas

Hi Jeff,

It’s a nice article about the fundamentals of security, and it’s really nice that you have given a new dimension and meaning to the existing risk formula. I agree with that, but if we look more deeply into the meanings of risk, threat, vulnerability, and impact in the security field, I can find one more formula: risk = impact x (vulnerability + threat). What do you say about this?

Jeff,

I completely agree that formula is flawed. I have always believed more in the very basic Risk = Probability * Impact. The vulnerabilities and threats from the original formula would factor into the probability number. Using the more general “Probability” allows for discretion on the part of the analyst who realizes that not all threats and/or vulnerabilities are equal.

Thank you.

Jeff,

I think your argument is deficient, as it does not seem to take into account the fact that the equation is meant to be used within a subjective and qualitative framework such as the OWASP RRM (http://www.owasp.org/index.php/OWASP_Risk_Rating_Methodology).

remmington,

As much as I love OWASP, the RRM also breaks the fundamental laws of mathematics as we know them by performing math other than addition or subtraction on ordinal scales.

In addition to what Jeff is saying above, there may be an even more fundamental problem with R= T x V x I.

If software, networks, threats, and businesses are complex adaptive systems (and while there is no proof available for this hypothesis, even if the subcomponents are not, the entanglement of those subcomponents probably is), then point probabilities cannot be made (this explains part of the all-outcomes argument Jeff makes). All we can do is make predictive analyses around past, current, and future states in terms of describing patterns. Or so complex systems theory currently posits.

So even if these risk statements weren’t the logical analog of multiplying “turkey baster” times “lavender” and pretending that the outcome is “swimming quickly,” we would still be incapable of creating any likelihood x impact statement with any accuracy, save the absurdly specific or the extremely abstract. Neither is of much use in defensive strategy development.

The age of the specific risk statement must come to an end, bringing with it the end of governance without metrics and comparative analytics.

While that is true, in most cases using the R = T x V x I formula is the only practical way of quantifying something that is inherently difficult to quantify. It’s a management tool to aid decision making. In fact, we use R = I x V.

For example,

What is the risk profile of people using a USB stick to store payroll data?

Impact = 3 (i.e., breach of data protection law, fines, etc.)

Vulnerability = 2 (medium; it is conceivable that people may lose the USB stick)

i.e., Risk = 6/9, which would be an amber risk profile.

Compare this to the risk profile of storing payroll data on a laptop:

Impact = 1 (because the HD is encrypted, so the impact of loss is negligible)

Vulnerability = 1 (people might lose the laptop, but it’s rare compared to losing USB sticks).

Thus risk = 1/9 (which would be a green).

This is how most risk profiling is done. You make an assessment on Vulnerability, Impact, and Threat to derive a risk value that you can act on.

You can’t do anything with Risk = Function(Threat, Vulnerability, Impact).

Just to be clear, though: this is a very useful management tool (like the 2×2 you learn in your MBA) in the absence of a more mathematically robust alternative.
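The scoring scheme this comment describes can be sketched as a small function. The 1–3 ordinal scores come from the comment's own USB and laptop examples, but the traffic-light band thresholds are my own hypothetical choice, not something the comment specifies.

```python
# Sketch of the ordinal risk-scoring scheme in the comment above.
# Band thresholds are hypothetical, chosen to match the 6/9 = amber
# and 1/9 = green examples given in the comment.
def risk_band(impact, vulnerability):
    """impact, vulnerability: ordinal scores from 1 (low) to 3 (high)."""
    score = impact * vulnerability  # out of a maximum of 9
    if score >= 7:
        return score, "red"
    if score >= 4:
        return score, "amber"
    return score, "green"

usb = risk_band(impact=3, vulnerability=2)     # payroll data on USB stick
laptop = risk_band(impact=1, vulnerability=1)  # payroll data on encrypted laptop
```

Of course, as an earlier commenter notes, multiplying ordinal scores is itself mathematically suspect; this sketch only shows the common practice, not a defense of it.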

Hear, hear, Jeff!

We fully agree, and this is another case in point for our view that many, if not most, risk assessment methods need to be tossed and/or fully re-engineered.

Best Regards to you, and thanks, everyone, for adding your own thoughts, too!

Phil

Henry has it right: it’s a mathematical model for determining risk in indeterminate environments, i.e., an approach you use if you don’t have concrete data. Lacking the number of incidents or the financial impact of incidents, for instance, a risk equation requiring this data, such as an ALE model, breaks down. An ALE model is also dependent on a single scale for the consequence, in this case financial impact. Again, the model breaks down if you’re interested in modeling something more abstract, like impact to public profile from an adverse incident.

All models are limited, and only as good as the input values and scales applied. R = CVT is a modeling approach that provides flexibility in modeling risk, nothing more. Another way to think of it: if C, consequence, is captured in terms of financial impact, V, vulnerability, is captured as the likelihood of an incident succeeding, and T, threat, is captured as a function of the frequency of incidents, you’ve basically devolved the equation into ALE. This is exactly what most organizations using the “risk equation” do once they have the data they need. Performing an assessment with qualitative data as due diligence at the start of an assessment program is better than just jumping in blindly without a concept of where your areas of concern are. It can be illuminating.
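The "devolves into ALE" point above can be shown with a short sketch. The unit choices follow the comment's description; the numbers themselves are hypothetical.

```python
# Hypothetical illustration: with these unit choices, C x V x T is ALE.
consequence = 40_000.0   # dollars per successful incident (an SLE)
vulnerability = 0.25     # probability an attempt succeeds
threat = 2.0             # attack attempts per year

risk = consequence * vulnerability * threat
# vulnerability * threat = 0.5 successful incidents per year (an ARO),
# so risk = ARO x SLE, i.e., an annualized loss expectancy in dollars/year.
```

The equation only becomes meaningful once each variable is given units like these; with bare ordinal scores, the same multiplication has no such interpretation.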

As to the “single security threat,” that’s what pairing matrices are for. A single threat can exploit multiple vulnerabilities. A single control implementation can reduce multiple vulnerabilities. Pairing effectiveness factors can be used to reflect subjective impacts (e.g., a parasol will work to reduce vulnerability to getting wet, but an umbrella will be far more effective).

[sigh…] It’s just a teaching model, useful for showing the relationships between the risk elements. Other methods of qualitative analysis are used when actually conducting an assessment, but this simple formula is a great introduction to the elements of risk.

## One Trackback

[…] 2010 Jay Jacobs […] Jeff Lowder wrote up a thought provoking post, “Why the ‘Risk = Threats x Vulnerabilities x Impact’ Formula is Mathematical Nonsense,” and I wanted to get my provoked thoughts into print (and hopefully out of my head). I’m […]