Disclaimer: The opinions of the columnists are their own and not necessarily those of their employer.
Kenneth F. Belva

H1N1 Threat Overblown? Information Security Relevance? A Logic Proof

“H1N1 was totally overblown. Nothing really terrible happened. There was no pandemic, and the resulting deaths were fewer than those from the seasonal flu.” That’s a paraphrase of what some colleagues said to me. The sentiment is now echoed in the mainstream press as the WHO responds to criticism that the pandemic hype was manufactured by drug companies to sell flu shots. In short: it wasn’t a real pandemic because nothing happened. The same logic underlies many criticisms of information security, and it rests on a semantic fallacy rather than on a mistake in the underlying logic.

Logically, the argument runs like this:

If “x conditions exist” then something really bad should happen

Nothing really bad happened

Therefore “x conditions” did not exist

In its pure mathematical form (technically called modus tollens) it can be represented as follows:

p → q

¬q

∴ ¬p
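
For readers who like their logic machine-checked, here is a minimal sketch in Lean (my own illustration; the theorem name is arbitrary) confirming that the rule itself is perfectly valid:

    -- modus tollens: from (p → q) and ¬q, conclude ¬p
    theorem modus_tollens (p q : Prop) (h : p → q) (hq : ¬q) : ¬p :=
      fun hp => hq (h hp)  -- assume p, derive q via h, contradict ¬q

The rule is sound; as we will see, the trouble lies in what happens to p.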

To flesh this out a bit:

If the current conditions exist such that H1N1 should massively spread, then there should be a pandemic

We did not have a pandemic

Therefore the conditions did not exist such that H1N1 should massively spread

The conclusion drawn is that, since the conditions did not exist, something else (such as the drug companies) must have pushed the pandemic hype. The mistake in reasoning is to believe that the conditions in the “if” part of the statement cannot change. By distributing a vaccine, we altered the conditions of the “if.” The same fallacy applies to information security.
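
One way to see the fallacy is to make the intervention explicit as its own variable. In the sketch below (my own illustration; p, v, and q are hypothetical stand-ins) the premise is “if the conditions hold and no vaccine is distributed, a pandemic follows.” Enumerating the truth table shows that observing no pandemic does not force the conclusion that the conditions never held:

    from itertools import product

    def implies(a, b):
        # material implication: "a -> b" is false only when a is true and b is false
        return (not a) or b

    # p: pandemic-friendly conditions held, v: vaccine distributed, q: pandemic occurred
    # premise: (p and not v) -> q; observation: not q; alleged conclusion: not p
    counterexamples = [
        (p, v, q)
        for p, v, q in product([True, False], repeat=3)
        if implies(p and not v, q)  # the premise holds
        and not q                   # no pandemic was observed
        and p                       # ...and yet the conditions DID hold
    ]
    print(counterexamples)  # [(True, True, False)]

The single counterexample is exactly the H1N1 story: the conditions held, the environment was changed, and no pandemic occurred.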

The next time someone complains that “there is no way to tell if any of this information security really does anything,” the information security professional has a proper, logical, and mathematically sound reply: “We changed the environment so that it would be much less likely to happen.” Logically speaking, we changed the variable p to something else, so a different condition now exists; the absence of an incident therefore tells us nothing about the original p.

3 Comments

  1. Dave R Jan 27, 2010 at 8:07 am

    The Y2K effort (now a full ten years ago, can you believe it) is perhaps the most famous illustration of Ken’s point.

    Y2K was a “non-event” and a “wasted effort” and a “false alarm” in the minds of many people.

    Put aside entirely the absurd Y2K alarmists’ fears and predictions of world-wide technological collapse. No rational technologist accepted or believed these predictions anyway.

    The fact is that there were real, genuine date-related “bugs” in our systems. They were pervasive, potentially disruptive, and possibly ruinous to the conduct of business if ignored. We did not ignore them; we spent a couple of years finding and fixing them. Practitioners who know the extent of the remediation understand that it was necessary and appropriate, and that it finished just in time: we could not have waited until January 1, 2000 to begin addressing the problem.

    But in retrospect, because on that date there was no Y2K disaster, there are still people who believe to this day — with false logic — that the entire matter was a hoax.

  2. Dave Funk Sep 23, 2010 at 3:29 pm

    The reply “We changed the environment so that it would be much less likely to happen” is also logically flawed. Yes, the environment was changed, but what was the change in the probability? Unfortunately, in both of the cases discussed here we have no idea what either the before or the after probability was. It is entirely possible (I have no idea of the actual probability) that with H1N1 the probability of a pandemic with 2 million dead was 7%, and that after the work put in to mitigate the impact the probability of the same pandemic was reduced to 6.5%. If we knew these numbers we could rationally decide how much money it makes sense to spend. If we do not, we are left with other questions, like who is asking us to spend the money and what is in it for them. This is why the FUD argument is so dangerous for the security professional.
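
    To put my hypothetical numbers into arithmetic (a back-of-the-envelope sketch; the figures are made up, not estimates):

        # hypothetical figures only -- not real estimates
        p_before = 0.07            # probability of the pandemic before mitigation
        p_after = 0.065            # probability after mitigation
        deaths_if_pandemic = 2_000_000

        # expected deaths averted is the quantity to weigh against the spend
        averted = (p_before - p_after) * deaths_if_pandemic
        print(round(averted))      # ~10,000

    Without the before and after probabilities, that 10,000 could just as easily be 10 or 1,000,000, and we cannot say whether the spend was sensible.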

  3. Kenneth F. Belva Sep 23, 2010 at 4:03 pm

    Hi Dave,

    Thanks for commenting.

    While I agree that having more precise numbers would help in policy and decision making, I disagree that they must be present in every case (including this H1N1 case).

    There are plenty of times we assess a risk without being able to assign a specific quantified number. For example, build an old Windows 2000 server from the original distribution disk, leave it unpatched, and connect it directly to the internet. Can you truly give me a quantitative measurement of _exactly_ when it will be compromised? What are the exact chances of compromise in the first minute? At 5 minutes? 10? 30? An hour? We certainly can give a qualified description of what will happen: the longer we leave the box out there, the more likely it is to be hacked. And because the flaws are so well known, with many published exploits, we expect compromise to be all but guaranteed; the server is low-hanging fruit.
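
    As a rough illustration of that qualitative claim (the attack rates below are made up; only the shape of the curve matters):

        import math

        def p_compromised(minutes, attacks_per_hour):
            # simple exposure model: hostile probes arrive at a constant rate,
            # so P(compromised by time t) = 1 - exp(-rate * t)
            return 1 - math.exp(-attacks_per_hour * minutes / 60)

        for rate in (0.5, 2.0, 10.0):  # hypothetical probe rates per hour
            probs = [round(p_compromised(t, rate), 3) for t in (1, 5, 10, 30, 60)]
            print(rate, probs)  # rises with exposure time whatever the true rate

    Whatever the true rate is, the probability only climbs with exposure, which is exactly the qualitative statement.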

    The same goes for H1N1. Can we really construct quantitative metrics for the emergence of a new virus? Unlikely. But since scientists know the properties of the virus, they may be able to give us an accurate qualitative description of the risks. We can then take concrete measures to reduce the risks as they are described.

    In life we make most of our decisions without the aid of mathematical precision. We shouldn’t let this fact become a handicap when understanding risk related matters, including those in our field of information security.
