Metrics Revisited – Application Security Metrics

I have recently been giving some thought to, and doing some research into, application security metrics, and I have determined, quite simply, that there aren’t any good ones.

“How ridiculous!” you say, “We have two dozen application security metrics, which we report in real time, daily, weekly and fortnightly.” Yes, I understand. You have measures that you call application security metrics. But there’s only one problem – they don’t measure the security of the application. They might measure everything around the application, but they don’t tell you anything about the actual strength of the application itself.

“Even more ridiculous!” you assert. “We do code reviews, test for buffer overflows and cross-site scripting (you know, the OWASP top ten), and all those good things. How can you say we aren’t measuring app security?” That is surely good, and better than most. But Microsoft and Oracle and all those other prestigious software vendors are privy to the same good practices, and they still regularly send out reams of patches.

“OK. So even if what you are telling me has some slight truth to it, can you do any better?”

“Well, not exactly.”

“See, you’re full of it.”

“But, at least I’m aware of the problem.”

“That doesn’t help me.”

“Well it might. So let’s take a closer look…”

Teaching developers to write more secure code, examining and testing application code for weaknesses, and subjecting applications to rigorous change management reviews are all admirable, but these activities do not provide measures of the security strength of applications. Granted, they do serve to reduce the total number of security weaknesses in any given application (which is good), but they really don’t tell you how many weaknesses remain in the code, or how serious they are … and that is what counts. Unfortunately, it is next to impossible to determine the number and severity of residual vulnerabilities, but more on that later.

Meanwhile, consider this … One application has 100 inherent vulnerabilities, of which 10 are discovered and patched. Another application has 1000 inherent vulnerabilities, of which 900 are known and fixed. The former has 90 residual vulnerabilities, and there are 100 remaining in the latter application. Which application is more secure? Well, it really depends on how exploitable the remaining vulnerabilities are. But let’s assume that they are all equally easy (or difficult) to exploit and that the probability that a successful exploit will be developed is the same for each vulnerability. Then the first application is more secure since it only has 90 vulnerabilities left versus 100 for the second application.
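The arithmetic above can be sketched in a few lines. The inherent and fixed counts come straight from the hypothetical example in the text; the comparison assumes, as stated, that all residual vulnerabilities are equally exploitable.

```python
# Hypothetical illustration of the residual-vulnerability arithmetic above.
apps = {
    "Application A": {"inherent": 100, "fixed": 10},
    "Application B": {"inherent": 1000, "fixed": 900},
}

for name, counts in apps.items():
    residual = counts["inherent"] - counts["fixed"]
    print(f"{name}: {counts['fixed']} patched, {residual} residual vulnerabilities")

# Under the simplifying assumption that every residual vulnerability is
# equally exploitable, the application with fewer residuals (A, with 90)
# is the more secure one -- even though B shipped 90 times as many patches.
```

Note that a patch-count metric points in exactly the wrong direction here: B fixed 900 vulnerabilities to A’s 10, yet B is the weaker application.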

“But look at all the ones we fixed!”

If the number and timing of patches are being used as metrics, then the second application is 90 times more secure. Wrong! Yes, it’s good that Microsoft sends out notices of patches monthly, but the more they find (which is certainly good), the less confident I am in the overall security strength of their applications. To me, a good measure is how few patches are required. But of course, one can also argue the opposite, namely, that the reason they only find a few vulnerabilities is that they are not looking hard enough or do not have the capability to discover them.

Always remember that an attacker only has to find a single exploitable vulnerability and go after it, whereas the security professional’s charter is to get rid of all weaknesses or protect against those that cannot be eliminated. Since we see weaknesses announced practically every day – in the most widely used programs and, yes, in open source software also – it is clear that even the most successful software vendors are not doing a great job in this regard.

Currently, there is a limited number of options for application developer organizations or software vendors, as described above, and each has its place and does contribute to an application’s security strength. But the number and frequency of patches and fixes suggest that these methods are inadequate. In fact, there was a study, reported in the IBM Systems Journal, which showed that, for OS/360, developers reached a point where they were introducing more bugs than they fixed in each successive release of the operating system. Put it down to complexity and the inability of any one individual, or even a small group, to understand the entire system. Whatever the reason, the reality was that the system had become so unwieldy that it could no longer be maintained effectively.

“Well, Axelrod, you’ve described the problem. What’s the solution?”

Dare I say that there is no quick fix? But there are ways to develop application security measures that will have more meaning. They are not simple, and will be costly, but they will at least be meaningful. A clue to how to go about this comes from the earlier attempts to create programming techniques to ensure bug-free code and also to develop code generators that automatically produced such “nirvana” code. It is obvious from the state of software today that this effort did not bear much fruit, but I do believe it is worth revisiting from the security angle. I say this because the downside of security vulnerabilities is often, but not always, much greater than for regular bugs. Why else do you think that Microsoft could get away with charging big bucks to agree to extend support specifically for NT “security” bugs as they obsoleted the product?

We need to come up with a set, admittedly a large one, of application code principles to which programs with strong security adhere. Not general principles, such as conducting code reviews and engaging someone to do monthly penetration tests. Nor just a list of secure programming rules, which you can find all over the Internet, particularly at the OWASP site. Those you should be doing at a minimum anyway.

What I am talking about is coming up with a sophisticated and reasonably complete set of enforceable design and coding principles, which demonstrate awareness of potential weaknesses and provide practical approaches to avoiding baking in vulnerabilities. It will mean bringing together one or more groups of super-programmers and application security specialists and having them brainstorm what the characteristics of “ideal” secure programs should be. And they also need to develop ways of measuring the extent to which any particular application – be it homegrown, off-the-shelf, or open source – meets those principles. These measures will be your true application security metrics. So let’s get started …
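To make the proposal concrete, here is a minimal sketch of what such a measure might look like: a weighted checklist of design principles, with an application scored by the fraction of weight it satisfies. The principle names and weights here are entirely hypothetical placeholders, not the actual set the text calls for experts to develop.

```python
# Hypothetical principle-compliance score: the principles and weights
# below are illustrative only; the real set would be produced by the
# expert brainstorming process described in the text.
principles = {
    "validates all external input": 3.0,
    "fails closed on error paths": 2.0,
    "least-privilege component design": 2.0,
    "no secrets in source code": 1.0,
}

def security_score(assessment: dict) -> float:
    """Return the weighted fraction of principles the application meets."""
    total = sum(principles.values())
    met = sum(w for name, w in principles.items() if assessment.get(name, False))
    return met / total

# Example: a hypothetical assessment of one application
score = security_score({
    "validates all external input": True,
    "fails closed on error paths": True,
    "no secrets in source code": True,
})
print(f"compliance score: {score:.2f}")  # 6.0 of 8.0 weight met -> 0.75
```

The point of the sketch is that the metric is tied to properties of the application itself, not to activity around it (patch counts, review schedules), which is exactly the distinction the article draws.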
