Every now and then I will find a security practitioner presenting the following formula when discussing information security risk analysis (ISRA).

Risks = Threats x Vulnerabilities x Impact

In some versions of this formula, “Consequence” is substituted for “Impact,” but the concept appears to be the same.

I want to argue that this equation, when taken literally as a mathematical formula, is nonsense and should be discarded.

As I argued in my last post, risk analysis, including ISRA, has its roots in decision theory, especially expected value (or utility) theory. The expected value or utility of an action may be thought of as a weighted average. It can be calculated by defining a set of *mutually exclusive* and *jointly exhaustive* possible outcomes from a particular course of action, and then multiplying the probability of each possible outcome by its utility. The formula is very clear and mathematically rigorous. In contrast, the “Risk = Threats x Vulnerabilities x Impacts” formula is unclear at best and possibly mathematically incoherent at worst.
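As a concrete illustration of the expected-value approach, here is a minimal sketch in Python. The outcomes and numbers are invented purely for illustration:

```python
# Expected utility as a probability-weighted average over a set of
# mutually exclusive, jointly exhaustive outcomes.
def expected_utility(outcomes):
    """outcomes: a list of (probability, utility) pairs covering every
    possible outcome, so the probabilities must sum to 1."""
    total = sum(p for p, _ in outcomes)
    assert abs(total - 1.0) < 1e-9, "outcomes must be jointly exhaustive"
    return sum(p * u for p, u in outcomes)

# Hypothetical action with two possible outcomes:
# a 90% chance of a $100 gain, a 10% chance of a $500 loss.
print(expected_utility([(0.9, 100.0), (0.1, -500.0)]))  # 0.9*100 - 0.1*500 = 40
```

Note that every input has clear units (a probability in [0, 1] and a utility in dollars), which is exactly what the T x V x I formula lacks.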

First, while the concepts of “threats” and “vulnerabilities” are clearly relevant to determining the probability of a possible outcome of an event, they are not equivalent to that probability. For example, I understand what it means to say that the threat is “unauthorized access to a company information system” and the vulnerability is “an unpatched vulnerability in an Internet-facing web server.” It is far from clear, however, how one could literally plug those concepts into a mathematical formula. What are the units of measurement for threats and vulnerabilities? What would it mean, mathematically, to plug a number in for the “Threats” variable? If I say that a threat is 0.8, what does that mean? What is the range of possible values for “Threats”? Likewise, what is the range of possible values for “Vulnerabilities”?

Second, the “Risk = Threats x Vulnerabilities x Impact” formula may actually violate the axioms of probability theory and the canons of inductive logic. In order to be inductively correct, a formal analysis of a risky action needs to take into account ALL of the potential outcomes of an action. The “Risk = Threats x Vulnerabilities x Impact” formula fails to do this by focusing solely on security threats. Indeed, the way the formula is presented, it appears to focus solely on a single security threat. In contrast, the logically correct expected value approach takes into account all of the possible outcomes of an action. For example, if the relevant action is “delay in patching a vulnerability in an Internet-facing web server,” one possible outcome is that the vulnerability is not exploited. The utility of that outcome would then be measured by whatever savings or efficiencies are achieved by not patching, such as the value of the employee time that would have been spent patching the machine, or the advertising revenue that would have been lost to downtime (for a server reboot) but wasn’t.
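To make that contrast concrete, the patching decision can be sketched as a two-outcome expected-value comparison. Every probability and dollar figure below is a made-up assumption for illustration, not data from the post:

```python
# Hypothetical expected-value comparison of "patch now" vs. "delay patching".
# All probabilities and dollar amounts are invented assumptions.

# Outcomes of delaying, as (probability, utility in dollars):
delay_outcomes = [
    (0.95,  2_000.0),   # vulnerability not exploited: saved staff time, no downtime
    (0.05, -80_000.0),  # vulnerability exploited: breach cleanup, lost revenue
]

# Outcomes of patching now (only one outcome in this simple sketch):
patch_outcomes = [
    (1.0, -1_500.0),    # staff time plus brief reboot downtime
]

def expected_value(outcomes):
    return sum(p * u for p, u in outcomes)

ev_delay = expected_value(delay_outcomes)  # 0.95*2000 + 0.05*(-80000) = -2100
ev_patch = expected_value(patch_outcomes)  # -1500
print("delay:", ev_delay, "patch:", ev_patch)
# Under these made-up numbers, patching now has the higher expected value.
```

The key point is that both branches enumerate all their outcomes, including the benign one, which the T x V x I formula cannot represent.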

One reply to my argument is that the formula is not literally intended to be used as a mathematical formula; rather, the formula is just an informal way of stating that security risk is a *function* of threats, vulnerabilities, and potential impact. Fair enough, but why use a bogus formula? (I do believe risk can be modeled mathematically, but not using the “Risk = Threats x Vulnerabilities x Impacts” formula.) As an alternative, why not use “Risk = Function(Threats, Vulnerabilities, Impacts)” or something similar? I’m willing to bet that anyone who can understand the first formula can also understand the second.

## 16 Comments

Jeff,

Thank you for saying out loud what I have thought ever since I heard of this “formula”.

ALE = ARO * SLE, on the other hand, really is a mathematical formula – first defined, as I recently learned, in 1979 in a federal publication – FIPS PUB 65, I think it was.

Patrick Florer

Co-founder

Risk Centric Security, LLC

Dallas, Texas
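For contrast with the T x V x I formula, the ALE formula mentioned in the comment above really is well-defined arithmetic: annualized loss expectancy is the single loss expectancy times the annualized rate of occurrence. A minimal sketch with invented numbers:

```python
# ALE = ARO * SLE (annualized loss expectancy).
# SLE: single loss expectancy in dollars; ARO: expected occurrences per year.
def annualized_loss_expectancy(sle_dollars, aro_per_year):
    return sle_dollars * aro_per_year

# Hypothetical: a $50,000 loss event expected once every four years.
print(annualized_loss_expectancy(50_000, 0.25))  # 12500.0
```

Both inputs have unambiguous units (dollars and events per year), so the product has a meaningful unit too: expected dollars lost per year.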

Hi Jeff,

It’s a nice article about the fundamentals of security, and it’s really nice that you have given a new dimension and meaning to the existing risk formula. I agree with that, but if we dig deeper into the meanings of risk, threat, vulnerability, and impact in the security field, I can find one more formula: risk = impact x (vulnerability + threat). What do you say about this?

Jeff,

I completely agree that formula is flawed. I have always believed more in the very basic Risk = Probability * Impact. The vulnerabilities and threats from the original formula would factor into the probability number. Using the more general “Probability” allows for discretion on the part of the analyst who realizes that not all threats and/or vulnerabilities are equal.

Thank you.

Jeff,

I think your argument is deficient, as it does not seem to take into account that the equation is meant to be used within a subjective, qualitative framework such as the OWASP RRM (http://www.owasp.org/index.php/OWASP_Risk_Rating_Methodology).

remmington,

As much as I love OWASP, the RRM also breaks the fundamental rules of measurement by performing arithmetic, such as multiplication, on ordinal scales.

In addition to what Jeff is saying above, there may be an even more fundamental problem with R= T x V x I.

If software, networks, threats, and businesses are complex adaptive systems (and while there is no proof available for this hypothesis, even if the subcomponents are not, the entanglement of those subcomponents probably is), then point probabilities cannot be assigned (this explains part of the all-outcomes argument Jeff makes). All we can do is make predictive analyses of past, current, and future states in terms of describing patterns – or so complex systems theory currently posits.

So even if these risk statements weren’t the logical analog of multiplying “turkey baster” times “lavender” and pretending that the outcome is “swimming quickly,” we would still be incapable of creating any likelihood x impact statement with any accuracy, save the absurdly specific or the extremely abstract. Neither is much use in defensive strategy development.

The age of the specific risk statement must come to an end, taking with it the end of governance without metrics and comparative analytics.

While that is true, in most cases the R = T x V x I formula is the only practical way of quantifying something that is inherently difficult to quantify. It’s a management tool to aid decision making. In fact, we use R = I x V.

For example,

What is the risk profile of people using USB sticks to store payroll data?

Impact = 3 (i.e., breach of data protection law, fines, etc.)

Vulnerability = 2 (medium; it’s conceivable that people may lose the USB stick)

i.e., Risk = 6/9, which would be an amber risk profile.

Compare this to the risk profile of storing payroll data on a laptop:

Impact = 1 (because the HD is encrypted, so the impact of loss is negligible)

Vulnerability = 1 (people might lose the laptop, but it’s rare compared to losing USB sticks)

Thus Risk = 1/9 (which would be green).

This is how most risk profiling is done. You make an assessment on Vulnerability, Impact, and Threat to derive a risk value that you can act on.

You can’t do anything with Risk = Function(Threat, Vulnerability, Impact).

Just so I’m clear, though: this is a very useful management tool (like the 2×2 you learn in your MBA) in the absence of a more mathematically robust alternative.
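The USB-versus-laptop scoring in the comment above can be sketched as a tiny qualitative model. The 1–3 scales and the green/amber/red thresholds are assumptions inferred from the example, not a standard:

```python
# Qualitative Risk = Impact x Vulnerability scoring on 1-3 scales,
# as in the USB/laptop example above. The color-band thresholds are
# assumptions inferred from that example.
def risk_score(impact, vulnerability):
    assert impact in (1, 2, 3) and vulnerability in (1, 2, 3)
    score = impact * vulnerability  # ranges from 1 to 9
    if score >= 6:
        band = "amber" if score < 9 else "red"
    else:
        band = "green"
    return score, band

print(risk_score(3, 2))  # USB payroll data  -> (6, 'amber')
print(risk_score(1, 1))  # encrypted laptop -> (1, 'green')
```

This shows what the method actually does: it multiplies ordinal labels, which is exactly the practice the earlier comment about ordinal scales objects to.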

Hear, hear, Jeff!

We fully agree, and this is another case in point that many, if not most, risk assessment methods need to be tossed and/or fully re-engineered.

Best Regards to you, and thanks, everyone, for adding your own thoughts, too!

Phil

Henry has it right: it’s a mathematical model for determining risk in indeterminate environments, i.e., an approach you use if you don’t have concrete data. Lacking a count of incidents or the financial impact of incidents, for instance, a risk equation requiring this data, such as an ALE model, breaks down. An ALE model is also dependent on a single scale for the consequence, in this case financial impact. Again, the model breaks down if you’re interested in modeling something more abstract, like impact to public profile from an adverse incident.

All models are limited, and only as good as the input values and scales applied. R=CVT is a modeling approach to provide flexibility in modeling risk, nothing more. Another way to think of it: If C, consequence, is captured in terms of financial impact, V, vulnerability is captured as the likelihood of an incident succeeding and T threat is captured as a function of frequency of the incidents, you’ve basically devolved the equation into ALE. This is exactly what most organizations using the “Risk equation” do once they have the data they need. Performing an assessment with qualitative data as due diligence at the start of an assessment program is better than just jumping in blindly without a concept of where your areas of concern are. It can be illuminating.
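That “devolves into ALE” observation can be made explicit: once C is expressed in dollars per successful incident, V as a probability of success, and T as an attempt frequency per year, the product R = C x V x T is just an annualized expected loss. All numbers below are invented for illustration:

```python
# When C, V, and T are given real units, R = C x V x T collapses into ALE.
# All values below are hypothetical.
consequence_dollars = 50_000   # C: financial impact per successful incident
vulnerability_prob  = 0.10     # V: probability that an attempt succeeds
threat_per_year     = 12       # T: attempted incidents per year

annual_risk = consequence_dollars * vulnerability_prob * threat_per_year
print(annual_risk)  # = 50_000 * 0.10 * 12 = 60,000 dollars/year expected loss
```

The equation only becomes meaningful at the moment each variable acquires a unit, which is the commenter’s point about qualitative scores being a placeholder until real data arrives.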

As to the “single security threat” objection, that’s what pairing matrices are for. A single threat can exploit multiple vulnerabilities. A single control implementation can reduce multiple vulnerabilities. Pairing effectiveness factors can be used to reflect subjective impacts (e.g., a parasol will work to reduce a vulnerability to getting wet, but an umbrella will be far more effective).

[sigh…] It’s just a teaching model, useful for showing the relationships between the risk elements. Other methods of qualitative analysis are used when actually conducting an assessment, but this simple formula is a great introduction to the elements of risk.

hi there,

how about this:

CR = i x (v+c)L/t

i = impact

v+c = vulnerability + control effectiveness

L = likelihood

t = time

CR = cyber risk

Jeff,

I stumbled on this post in support of my argument that the formula many depend on is complete nonsense! It amazes me how many rely on this formula when they don’t realize the values they are putting in are just theories and opinions, and that two teams assessing the same environment can often come up with differing results and opinions.

Very interesting discussion. My 2-cent contribution is this: in a number of standards, the notion of risk is associated with a combination of impact and likelihood; some names may change, but the concepts are these. This approach derives from the safety community, but the problem is that cybersecurity simply does not have the data available to safety, so the estimation of likelihood is over-subjective. Provocatively, can’t we get rid of that dimension and consider consequences only?

Thanks

I want to divide the concept into two individual parts:

Risks & Impact

and to define these things for an action plan, it should be formulated with this formula:

threats + vulnerabilities=Risk ≤ Impact

Risks- A source of danger; a possibility of incurring loss or misfortune

Threats- Something that is a source of danger

Vulnerabilities- The state of being vulnerable or exposed

Impact- A forceful consequence; a strong effect

Suppose we want to define the risk of an electrical fire in an apartment or other property.

Impact is what happens after the fire (e.g., what has burnt, how much damage to the property or to life, etc.).

We want to think about or measure the risk so that we can take precautions or action.

To know this, we need to know the vulnerabilities and threats, and these two things together tell you how much risk you are in. The impact can be greater than or equal to that risk.

For example, take a scale of 20 units and measure the risk.

If the apartment is fully equipped with fire protection:

threats + vulnerabilities = Risk ≤ Impact

3 + 3 = 6 ≤ 20, or Risk is 6 ≤ 20

The primary problem with any risk formula is the identification and quantification of likelihood. You would need direct access to the potential adversary, and some relatively controversial psychoanalytic models, to even begin that determination.

For instance, in the Philippines the government requires that all applicants for gun permits take a series of psychological tests, primarily the House-Tree-Person test. They use the analysis of the results as a dominant factor of whether to issue a gun permit. They are essentially quantifying the threat.

Similarly, criminal justice is starting to use a psychological model to assess the probability of recidivism and set harsher sentences for those determined as least likely to be quality citizens.

If you accept the science, then this is a great way to determine and mitigate insider risks. It doesn’t have any validity for an actor or adversary that does not submit to the tests. This is where the R=TxVxI model is still valuable in communicating relative risks of different factors.

Presenting or outlining a problem without a solution is workplace no-no 101. I don’t see you providing a solution, except possibly complaining about a flawed mathematical equation. I agree with Henry: it allows us to measure risk in some quantifiable way for a real-world problem.

High-horse math essays that pontificate have zero value for boots on the ground.

I like the formula because I’m not trying to build a safe 100-storey skyscraper. I’m trying to quantify something for execs that otherwise couldn’t be quantified, and saying “we can’t quantify the risk” would get you thrown out of a boardroom for yapping gibberish. Solutions are better than complaints.

hi,

I am a Master of Disaster Recovery Planning.

How can I measure the risks in cloud computing?

Please give me an answer at:

gigawaleedafifi@yahoo.com

## One Trackback

[…] 2010 Jay Jacobs Leave a comment Go to comments Jeff Lowder wrote up a thought provoking post, "Why the “Risk = Threats x Vulnerabilities x Impact” Formula is Mathematical Nonsense” and I wanted to get my provoked thoughts into print (and hopefully out of my head). I’m […]