In my previous and inaugural column, I introduced the concept of a tradeoff between information security and agility, where agility was defined as “the capability to change with managed cost and speed.” Information security doesn’t necessarily have to be at odds with agility, but security professionals would be wise to consider the potential impact of their security proposals and programs on their organizations’ agility.
In this column, I want to explore the connection between agility and risk compensation. Individuals desire a certain level of risk, whether they are risk averse, risk-seeking, or somewhere in the middle. According to risk compensation theory, individuals adjust their behavior based upon perceived changes in the level of risk. If the risk increases above their risk appetite, they will behave in ways that reduce the level of risk. If, on the other hand, the risk drops below their risk appetite, individuals will feel safer and behave less cautiously.
Several empirical studies on the efficacy of various safety mechanisms provide strong evidence for risk compensation theory. Consider the following examples (all taken from the Wikipedia article on risk compensation theory).
- Anti-lock Brake System (ABS). Drivers of vehicles with ABS tend to drive faster, follow more closely, and brake later than drivers of vehicles without ABS.
- Bicycle Helmets. Drivers of vehicles tend to drive faster and pass closer to bicyclists wearing helmets than to bicyclists not wearing helmets.
- Skydiving Equipment. As skydiving gear becomes safer, skydivers perform more aggressive maneuvers and hence take more risks. The result is that skydiving fatality rates have remained roughly constant, despite the use of safer equipment.
It is easy to imagine other examples outside of safety. For example, imagine a person taking anti-cholesterol medication who does not make the recommended changes to diet and exercise. Along the same lines, it is easy to imagine a person who exercises more, not so that they can lose weight, but so they can eat more.
The key point to notice here is that, in each example, users of a system modified their usage of the system based upon their perception of the risk(s) involved. And in the case of skydiving equipment, this, in turn, influenced the actual level of risk by negating the improvements offered by the safer equipment.
So what, then, is the connection between agility and risk compensation? Is there a connection at all? My hypothesis is as follows:
When users of a system are forced to endure a loss of agility in order to reduce the level of risk below their risk appetite, the users of that system will attempt to behave in ways that restore agility, level of risk, or both to their desired levels.
In other words, if users are asked to follow a process that both decreases their agility and constitutes what they perceive to be an “overkill” approach to risk reduction, users will attempt to behave in ways that restore agility, level of risk, or both to their desired levels.
The relevance of all this to information security should be obvious, but here is one example. Consider password best practices. According to conventional wisdom, users should use passwords that:
- are complex,
- change every X days,
- are different from their previous Y passwords, and
- are never written down.
Security administrators can use technology to force passwords to comply with the first three requirements, but not the last. The result? Users who ignore or work around the policies, which in turn hampers the effectiveness of security policies as a risk-reduction mechanism.
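The asymmetry above is easy to see in code: the first three rules are mechanically checkable, while the fourth is not. Here is a minimal sketch of such a checker; the function names, thresholds, and the plaintext history comparison are illustrative assumptions, not from any real system (production systems would compare salted hashes and enforce expiry via a separate scheduling mechanism).

```python
import re

# Illustrative thresholds -- the "X days" and "Y passwords" of the policy.
MIN_LENGTH = 12
HISTORY_DEPTH = 5  # "different from their previous Y passwords"

def is_complex(password: str) -> bool:
    """Rule 1 (complexity): minimum length plus mixed character classes."""
    return (len(password) >= MIN_LENGTH
            and bool(re.search(r"[a-z]", password))
            and bool(re.search(r"[A-Z]", password))
            and bool(re.search(r"\d", password)))

def violates_history(password: str, previous: list) -> bool:
    """Rule 3 (history): reject reuse of the last HISTORY_DEPTH passwords.
    Sketch only -- real systems compare hashes, never stored plaintext."""
    return password in previous[-HISTORY_DEPTH:]

def accept_password(password: str, previous: list) -> bool:
    """Rules 1 and 3 are enforceable here; rule 2 (expiry every X days)
    is enforced by a scheduler, and rule 4 ("don't write it down")
    has no technical enforcement at all -- which is the column's point."""
    return is_complex(password) and not violates_history(password, previous)
```

Note what the sketch cannot express: nothing in the code can stop a user from writing the accepted password on a sticky note, which is exactly the workaround behavior risk compensation theory predicts.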
This example (and many others that could be offered) shows why traditional approaches to security awareness training fail. Traditional awareness programs focus merely on the what and the how, but not the why. In a nutshell, traditional security awareness programs are at odds with the body of empirical evidence supporting both risk compensation theory and the entire discipline of risk communication.
As soon as one begins to consider that evidence, other strategies become immediately obvious. One approach, the risk communication approach, would be to motivate users to comply with the policies (the why). The other approach, suggested by Adam Shostack and Andrew Stewart in their excellent book, The New School of Information Security, is to make the how irrelevant by simply not putting security decisions in the hands of the user (see pp. 98-99). And note that, depending upon how they are implemented, either approach could increase (or at least avoid decreasing) the organization’s agility.