Jeff Lowder

Agility and Risk Compensation: Exploring the Connection

In my previous and inaugural column, I introduced the concept of a tradeoff between information security and agility, where agility was defined as “the capability to change with managed cost and speed.” Information security doesn’t necessarily have to be at odds with agility, but security professionals would be wise to consider the potential impact of their security proposals and programs on their organizations’ agility.

In this column, I want to explore the connection between agility and risk compensation. Individuals desire a certain level of risk, whether they are risk averse, risk seeking, or somewhere in between. According to risk compensation theory, individuals adjust their behavior based upon perceived changes in the level of risk. If the risk rises above their risk appetite, they will behave in ways that reduce it. If, on the other hand, the risk falls below their risk appetite, they will feel safer and behave less cautiously.

Several empirical studies on the efficacy of various safety mechanisms provide strong evidence for risk compensation theory. Consider the following examples (all taken from the Wikipedia article on risk compensation theory).

  • Anti-lock Brake System (ABS). Drivers of vehicles with ABS tend to drive faster, follow more closely, and brake later than drivers of vehicles without ABS.
  • Bicycle Helmets. Drivers of vehicles tend to drive faster and closer to bicyclists wearing helmets than to bicyclists not wearing them.
  • Skydiving Equipment. As skydiving gear becomes safer, skydivers perform more aggressive maneuvers and hence take more risks. The result is that skydiving fatality rates have remained roughly constant, despite the usage of safer equipment.

It is easy to imagine other examples outside of safety. For example, imagine a person taking anti-cholesterol medication who does not make the recommended changes to diet and exercise. Along the same lines, it is easy to imagine a person who exercises more, not so that they can lose weight, but so they can eat more.

The key point to notice here is that, in each example, users of a system modified their usage of the system based upon their perception of the risk(s) involved. And in the case of skydiving equipment, this, in turn, influenced the actual level of risk by negating the improvements offered by the safer equipment.

So what, then, is the connection between agility and risk compensation? Is there a connection at all? My hypothesis is as follows:

When users of a system are forced to endure a loss of agility in order to reduce the level of risk below their risk appetite, the users of that system will attempt to behave in ways that restore agility, level of risk, or both to their desired levels.

In other words, if users are asked to follow a process that both decreases their agility and constitutes what they perceive to be an “overkill” approach to risk reduction, users will attempt to behave in ways that restore agility, level of risk, or both to their desired levels.

The relevance of all this to information security should be obvious, but here is one example. Consider password best practices. According to conventional wisdom, users should use passwords that:

  • are complex,
  • change every X days,
  • are different from their previous Y passwords, and
  • are never written down.

Security administrators can use technology to force passwords to comply with the first three requirements, but not the last. The result? Users ignore or work around the policies, which in turn hampers the effectiveness of security policies as a risk reduction mechanism.
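To make the asymmetry concrete, here is a minimal sketch of what such a technical enforcement mechanism can and cannot check. The function name, thresholds, and character-class rules are illustrative assumptions, not any particular product's policy; note that the fourth requirement simply has no corresponding test.

```python
import hashlib
import re


def complies(password: str, previous_hashes: list[str], history_depth: int = 5) -> bool:
    """Check a candidate password against the technically enforceable rules:
    complexity and reuse of the last `history_depth` passwords.
    The 'never write it down' rule has no possible check here."""
    # Complexity: minimum length plus mixed character classes (illustrative thresholds).
    if len(password) < 8:
        return False
    if not (re.search(r"[a-z]", password)
            and re.search(r"[A-Z]", password)
            and re.search(r"\d", password)
            and re.search(r"[^A-Za-z0-9]", password)):
        return False
    # History: reject any of the last N passwords, compared by hash.
    # (A real system would store salted, slow hashes, not bare SHA-256.)
    digest = hashlib.sha256(password.encode()).hexdigest()
    return digest not in previous_hashes[-history_depth:]
```

Everything the code can verify lives on the server; whether the user then copes by taping the password to the monitor is exactly the behavior risk compensation predicts and technology cannot see.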

This example (and many others that could be offered) shows why traditional approaches to security awareness training fail. Traditional awareness programs focus merely on the what and the how, but not the why. In a nutshell, traditional security awareness programs are at odds with the body of empirical evidence supporting both risk compensation theory and the entire discipline of risk communication.

As soon as one begins to consider that evidence, other strategies become immediately obvious. One approach, the risk communication approach, would be to motivate users to comply with the policies (the why). Another approach, suggested by Adam Shostack and Andrew Stewart in their excellent book, The New School of Information Security, is to make the how irrelevant by simply not putting security decisions in the hands of the user (see pp. 98-99). And note that, depending upon how they are implemented, either approach could increase (or at least avoid decreasing) the organization’s agility.

4 Comments

  1. Patrick Florer Jun 9, 2008 at 11:06 am

    just a couple of comments about the password example:

    not convinced about the relationship of risk appetite to your subject matter – policy non-compliance may be much simpler than that.

    For example, I must have 75 different accounts – company, financial, software support – that require passwords.

    So why do I write them down?

    It’s not because I want to circumvent policy and it’s really got nothing to do with risk appetite – it’s
    because I cannot remember them all.

    Why do I resent having to change them every so often?

    Because it’s a hassle, because I cannot remember the last n passwords, and because my job is not to be a creator of strong passwords.

    Why don’t I comply with policy? Because the policies may not make sense for what I do.

    What can security do to improve my life?

    Design password policies that are relevant to different levels of access/different roles – for some things/roles, who cares? The data just doesn’t matter.

    For other things, maybe strict policies are the right way to go.

    As for taking control out of my hands – if you can do that in a way that doesn’t completely tie me down, then great.
    But if your approach is going to force me into a box, I will resist you and ultimately defeat your misguided efforts, thereby possibly creating a security problem that I don’t really wish to create, but have no reasonable way not to create.

    fyi – I am somewhat new to infosec, but have spent 27 years in IT on all sides of the fence – really like your ideas about agility and risk, but am just taking the user’s point of view in these comments.

  2. Jens Laundrup Jun 9, 2008 at 12:11 pm

    I believe that the key is not that the policy fails but that the tools and processes necessary to support the policy are not put in place.
    Patrick makes a great point about a policy that, though well intended, does not take into account the situation where 70+ passwords need to be maintained and remembered (even if it was more than 5 it would be pertinent). This raises the need for one of three efforts to resolve the situation:
    1. The policy needs to be changed so that he is in compliance (not very desirable).
    2. A single sign-on system needs to be set up for Patrick and those in his situation so that the passwords can be updated as required per policy but they are accessed through a separate system (a better system though it still relies on the weak username/password system).
    3. A different system, such as a combination of a Smart Card and Biometric scan with a PIN, needs to be put in place so that the Username/Password system can be eliminated. The most expensive solution but it would enable compliance with not just the policy but the intent of the policy.
    Thus, helping secure the environment through policy needs to be backed up with a reasonable manner/system/process for compliance. Too often this does not happen because of a well intended IT or security manager wanting to create a secure environment without thinking through the ramifications for the few (for the many, the password policy usually deals with one or two passwords only). In design engineering, this is often referred to as maintainability, in IT/security, this same scrutiny should be applied. We could call it “compliability” but what it really is: common sense.

    The concepts of agility and risk that Jeff writes of are greatly needed in modern IT environments. In addition, supplementing the what and how with why is very important to ensure that the users truly understand that it is not just to annoy. But as Patrick illustrated, policy needs to be tempered with “compliability”.

  3. Rob Jul 22, 2008 at 10:50 am

    I have hundreds of passwords, of varying strength, that I manage for myself, my wife, and my children. I use an application on a PC at home and my Palm PDA (eWallet) which encrypts everything using a strong password. Some passwords we remember because we use them frequently, of course, but often, I’m called upon to relay a saved password.

    While passwords are not the ideal security mechanism, they are common, so a scheme like I use would help Patrick with his 70+ passwords without the need to write them on paper.

  4. Charlie Salomon Nov 10, 2009 at 9:26 am

    Here’s an idea that my boss thought of: Why not develop a password enforcement mechanism that ties the length of the password to how often you have to change it?

    If you want a 4-digit password, okay. You have to change it every 24 hours. If you have a 16-character passphrase, you get to keep it for 3 years.
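    A toy sketch of what such a mechanism might compute, using the two anchor points from the comment (4 characters → 24 hours, 16 characters → 3 years). The exponential curve between them is an assumption of mine, chosen because each extra character multiplies the attacker’s search space; the function name is hypothetical.

    ```python
    from datetime import timedelta


    def password_lifetime(length: int) -> timedelta:
        """Map password length to how long the user may keep it.
        Anchors: 4 chars -> 24 hours, 16 chars -> ~3 years."""
        hours_4 = 24.0               # 4 characters: change daily
        hours_16 = 3 * 365 * 24.0    # 16 characters: keep for ~3 years
        # Fit hours(n) = hours_4 * growth**(n - 4) so that hours(16) = hours_16.
        growth = (hours_16 / hours_4) ** (1 / 12)
        return timedelta(hours=hours_4 * growth ** (length - 4))
    ```

    The appeal of the scheme, in risk compensation terms, is that it lets users trade one inconvenience (length) against another (rotation frequency) instead of imposing both at once.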
