Disclaimer: The opinions of the columnists are their own and not necessarily those of their employer.
C. Warren Axelrod

Cybersecurity Risk Metrics … Why Don’t They Get It?

The problem with cybersecurity is the metrics that are used to assess and manage security risks. In November 2008, I published an article, “Accounting for Value and Uncertainty in Security Metrics,” in the ISACA Journal; it subsequently won the 2009 Michael P. Cangemi Best Book/Best Article Award. My thesis was that commonly used security metrics, while relatively easy to obtain, do not give Infosec professionals an adequate representation of the state of security and do not indicate specifically what needs to be done to mitigate cybersecurity risks.

While my article continues to generate “reads” and “citations” on ResearchGate, its concepts have not been absorbed into the mainstream, if a recent article in the ISACA Journal, Volume 2, 2017, by Mukul Pareek, titled “Standardized Scoring for Security and Risk Metrics,” is any indication.

Pareek introduces three types of scoring calculation as follows:

  • Velocity and trend measure
  • Distance measure
  • Persistence measure

The velocity measure is meant to answer the question: What is the rate of change in the metric compared to the past?

The distance measure is aimed at the question: What would it take to cover the distance from the current state [as depicted by the security measure] to the desired state [threshold]?

The persistence measure addresses the question: How enduring are the unfavorable elements represented by the metric?

The first two measures are ratios, with the first showing the change in the metric’s value from one period to the next, and the second indicating by how much the threshold level is exceeded or not met. The third is an aging measure depicting the length of time each recorded control failure has been open.
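As a rough illustration of how these three calculations might look in practice, here is a minimal Python sketch; the function names, the monthly-period assumption, and the sample figures are mine, not Pareek’s.

    def velocity(current, previous):
        # Rate of change in the metric compared to the prior period.
        return (current - previous) / previous

    def distance(current, threshold):
        # How far the current state is from the desired state,
        # expressed as a fraction of the threshold.
        return (current - threshold) / threshold

    def persistence(ages_in_days, period_days=30):
        # Aging measure: how long each recorded control failure
        # has been open, expressed in periods (here, months).
        return [days / period_days for days in ages_in_days]

    # Example: 40 unpatched hosts this month vs. 32 last month,
    # against a threshold of 25; three failures open 10, 95,
    # and 400 days.
    print(velocity(40, 32))            # 0.25 -> worsened by 25%
    print(distance(40, 25))            # 0.60 -> 60% above threshold
    print(persistence([10, 95, 400]))  # failure ages in months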

While all three of these measures are useful, and I have employed them in the past, they continue to treat all elements making up a metric as equally valuable, which they are not, and equally precise or certain, which they are also not.

With respect to value to the enterprise, it is important to differentiate between mission-critical systems and other, less-important systems when applying software patches, for example. You need to know whether security budgets are being expended in the most effective manner. Often the top 10 to 20 percent of applications are as valuable in aggregate as the remaining 80 to 90 percent, so it makes the most sense to focus your remediation efforts on the most valuable applications first, after which less-critical applications can be addressed. Of course, you must ensure that the less-important systems are not gateways to critical systems, which can easily happen as more and more systems are interconnected.
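To make the weighting point concrete, here is a hedged sketch comparing an unweighted patch-coverage metric with one weighted by business value; the systems, values, and coverage figures are invented for illustration.

    # Illustrative only: weighting patch coverage by business value
    # instead of counting all systems equally.
    systems = [
        # (name, business_value, fraction_patched)
        ("payments", 100, 0.70),   # mission-critical
        ("trading",   90, 0.80),   # mission-critical
        ("intranet",  10, 0.95),
        ("dev-wiki",   5, 1.00),
    ]

    unweighted = sum(p for _, _, p in systems) / len(systems)
    total_value = sum(v for _, v, _ in systems)
    weighted = sum(v * p for _, v, p in systems) / total_value

    print(f"unweighted coverage:     {unweighted:.2f}")  # 0.86
    print(f"value-weighted coverage: {weighted:.2f}")    # 0.76

The unweighted figure flatters the program, because well-patched but low-value systems mask the gaps in the critical ones.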

Of course, one should ask how these values are to be measured, whose values they should be, and whether they reflect the priorities of those funding the security program. These are not easy questions to answer, but ignoring them is worse.

Furthermore, one must also account for uncertainty in depictions of risk and security metrics. As Douglas Hubbard and Richard Seiersen put it in their excellent new book “How to Measure Anything in Cybersecurity Risk”:

“Using ranges to represent your uncertainty instead of unrealistically precise point values clearly has advantages. When you allow yourself to use ranges and probabilities, you don’t really have to assume anything you don’t know for a fact. But precise values have the advantage of being simple to add, subtract, multiply and divide in a spreadsheet. If you knew each type of loss exactly it would be easy to compute the total loss. Since we only have ranges for each of these, we have to use probabilistic modeling methods to ‘do the math.’”
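A short Monte Carlo sketch illustrates what “doing the math” with ranges can look like; the 90 percent confidence intervals below are invented, and the lognormal distribution is my assumption (a common choice for loss magnitudes), not a prescription from the book.

    import math
    import random

    def lognormal_from_ci(low, high):
        # Build a sampler whose 5th/95th percentiles match the
        # given 90% confidence interval (1.645 = Normal z at 95%).
        mu = (math.log(low) + math.log(high)) / 2
        sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
        return lambda: random.lognormvariate(mu, sigma)

    # Invented 90% confidence ranges for two loss types.
    losses = [
        lognormal_from_ci(50_000, 500_000),    # e.g., breach response
        lognormal_from_ci(10_000, 2_000_000),  # e.g., business outage
    ]

    totals = sorted(sum(draw() for draw in losses)
                    for _ in range(10_000))
    print("median total loss:", round(totals[5_000]))
    print("95th percentile:  ", round(totals[9_500]))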

Pareek does present a probability-distribution-based metric, the “z-score,” which effectively normalizes or standardizes a metric and which is obtained using the formula:

z-score = (metric value – mean) / standard deviation of the metric value

The z-score provides a measure of how many standard deviations (presumably of a Normal distribution) a particular metric is from the mean of the population (not the mean of the sample, which is what Pareek uses, his sample being one year’s worth of monthly data). Pareek compares the z-score to the “value-at-risk” (VaR) metric, which represents a multiple of the standard deviation and is used to determine risk in financial markets. Unfortunately, because the assumptions underlying VaR models proved highly inaccurate, their users did not anticipate the true level of risk, which was a major contributor to the 2008-2009 collapse of global financial markets. The same potential for misuse brings into question the value of scoring models for managing cybersecurity risk. They are only as good as their underlying assumptions about the probability distributions used, and most do not consider distributions with “fat tails,” which increase the chances of an extreme event … either good or bad.
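To close with a worked example, here is a sketch of the z-score computed from twelve months of made-up data, along with a rough indication of how a fat-tailed assumption changes the tail probability; the comparison figures are approximate and my own addition, not Pareek’s.

    from math import erfc, sqrt
    from statistics import mean, stdev

    # Twelve months of made-up counts for some control failure.
    monthly = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13, 17, 30]

    # Sample standard deviation, as Pareek uses for his one-year sample.
    z = (monthly[-1] - mean(monthly)) / stdev(monthly)
    tail = 0.5 * erfc(z / sqrt(2))  # P(Z > z) if the Normal holds
    print(f"z-score: {z:.2f}, Normal tail probability: {tail:.4f}")

    # Under a fat-tailed alternative such as Student's t with 3
    # degrees of freedom, the chance of exceeding z = 3 is roughly
    # 2.9 percent, versus about 0.13 percent for the Normal --
    # around 20 times higher.

In other words, the same score can conceal wildly different tail risks, which is precisely the lesson the VaR experience should have taught us.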
