Sam Dekay

Risk Assessment Gone Awry: The Costly, and Unpleasant, Consequences of Good Intentions

We are all well aware that information security controls should be “risk-based.”  Innumerable email messages received from vendors stress this apparent truth, and conference speakers are forever reminding us that risk assessment must serve as the foundation of an effective—and practical—security program.  For many of us, the Information Security function is located within a broader division or department dubbed “Technology Risk Management,” or simply “Risk Management.”  Even the federal government, in its official formulation of guidance for safeguarding the privacy and integrity of our customers’ personally identifiable information, insists that an information security program must include a strong “risk assessment” dimension.

We are rarely pressed to question whether risk-based controls are good, desirable, or even feasible.  Indeed, our major problem generally consists of identifying risks, developing assessment instruments, and applying the results of assessment processes to the development of appropriate controls.  In general, we are acutely aware that many of our risk metrics—the substance of any assessment effort—are often simply matters of subjective judgment.  However, the broader goal, the identification and mitigation of risk, is itself rarely questioned.  Recently, however, I have encountered a theory that requires information security professionals to examine many of their assumptions concerning the value of risk-based controls.  This theory, which seems to have originated within the IT audit community, occasionally emerges at professional conferences.  Currently, the theory has no formal name.  Thus, to pursue this discussion, I will refer to it with a rather long-winded term:  “Risk-based Approach to Developing Information Security Standards and Controls.”  I’ll use the acronym RADISSC for the sake of convenience.

The Theory and its Rationale

RADISSC maintains that traditional information security standards and controls employ a “one size fits all” model.  That is, most of our policies, standards, and procedures insist on the following types of statements:

  • The minimum length of a password is X [insert a number] characters.
  • Passwords must contain a combination of letters, numbers, and special characters.
  • A user’s session will be timed out after X [insert a number] minutes of nonuse.
  • A user’s account will be suspended after X [insert a number] days of inactivity.
  • If a user enters an incorrect password X [insert a number] times, the system will prevent further signon attempts.
  • Passwords must be changed at least every X [insert a number] days.
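The statements above can be read as a single enterprise-wide policy applied uniformly to every application.  The following sketch makes that explicit; the names and threshold values are illustrative assumptions, not drawn from any particular standard:

```python
import string

# A single enterprise-wide policy, applied to every application.
# All thresholds below are hypothetical placeholders for the "X" values.
ENTERPRISE_POLICY = {
    "min_password_length": 8,
    "require_mixed_chars": True,   # letters, numbers, and special characters
    "session_timeout_minutes": 15,
    "suspend_after_inactive_days": 180,
    "max_failed_signons": 3,
    "password_max_age_days": 90,
}

def password_meets_policy(password: str, policy: dict = ENTERPRISE_POLICY) -> bool:
    """Check a candidate password against the single enterprise standard."""
    if len(password) < policy["min_password_length"]:
        return False
    if policy["require_mixed_chars"]:
        has_letter = any(c.isalpha() for c in password)
        has_digit = any(c.isdigit() for c in password)
        has_special = any(c in string.punctuation for c in password)
        if not (has_letter and has_digit and has_special):
            return False
    return True
```

The point of the sketch is simply that one table governs everything: an application either enforces these values or it must seek a waiver.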

The RADISSC theory holds that these kinds of rules, which apply to users of all systems and applications within an organization, are impractical.  More seriously, these types of controls are not truly risk-based.

The impracticality of standardized security practices, according to RADISSC, is due to the technical limitations of specific applications.  For example, not all business applications are capable of enforcing a password length of, say, eight characters.  Other systems may not be able to verify that a password contains the stipulated mix of alphabetic and numeric characters.  Because these applications and systems cannot meet the “one size fits all” mandate, the business owners or technicians responsible for the errant apps must request a waiver, or exception, to policy.  However, a waiver may only “paper over” the problem.  In other words, even if an exception is permitted by senior management, the underlying risk to the confidentiality or integrity of data is not really removed.

Thus, argue the proponents of RADISSC, not all systems or applications should be required to meet inflexible, standardized security controls.  If, for example, a business app is deemed to be of moderate or low risk, users may be permitted to change their passwords every 180 days, instead of a rigidly asserted 90 days.  If a system is considered a low risk, then its users may be permitted more than three failed signon attempts prior to lockout.  If an application is of moderate risk, users’ accounts may remain inactive for a full year, instead of an across-the-board standard, such as 180 days.  In short, RADISSC proposes that security controls should be customized to specific systems and applications, based upon the risk ranking of those systems and applications.

The great benefit of adopting a RADISSC approach, according to its advocates, is that fewer exceptions to policy will be required, because fewer applications must meet enterprise-mandated controls.  Fewer waivers will translate into fewer risks.
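In code terms, RADISSC replaces the single policy table with a lookup keyed by each application's risk ranking.  This is a hedged sketch, not a reference implementation; the tier names and numbers are hypothetical, loosely following the examples in the text (90- versus 180-day password changes, three failed signons, and so on):

```python
# Under RADISSC, controls are looked up from the application's risk
# ranking rather than from one enterprise standard.  All values are
# illustrative assumptions.
POLICY_BY_RISK = {
    "high":     {"password_max_age_days": 90,
                 "max_failed_signons": 3,
                 "suspend_after_inactive_days": 180},
    "moderate": {"password_max_age_days": 180,
                 "max_failed_signons": 5,
                 "suspend_after_inactive_days": 365},
    "low":      {"password_max_age_days": 180,
                 "max_failed_signons": 10,
                 "suspend_after_inactive_days": 365},
}

def controls_for(app_risk_ranking: str) -> dict:
    """Return the control set that applies to an application's risk tier."""
    return POLICY_BY_RISK[app_risk_ranking]
```

Note what this design quietly requires: every application must already carry an accurate, current risk ranking before any control can be resolved.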

Is RADISSC an Example of Risk Assessment Gone Awry?

From a theoretical perspective, the RADISSC approach has a certain elegance.  In fact, it seems to represent a logical extension of the notion that security controls must be calibrated to the degree of risk represented by a system or application.  However, examined more closely, the theory of RADISSC seems fraught with practical difficulties, internal contradictions, and erroneous assumptions.  Here are some of the problems:

  • Because RADISSC seeks to customize security standards and controls for each application, an organization must identify all its applications, establish a consistent application risk assessment methodology, and assign a risk ranking to each application.  Further, the rankings must be reviewed on a periodic basis, because the risk ranking assigned to a specific application may change over time.  For a small organization, these may be feasible tasks.  However, the effort will represent a considerable expenditure of staff and monetary resources for a large enterprise.
  • An organization that has implemented a single signon methodology cannot implement RADISSC (at least in relation to identification and authentication controls), because the signon controls are enforced at the point of entry, not when the user accesses specific applications.
  • RADISSC assumes that customized security controls will reduce risk because fewer exceptions to policy will be required.  Yet only moderate and low risk applications are eligible for less rigorous controls.  Since these applications are not deemed to present a high risk to the organization, they presumably do not pose serious threats even if their controls fail to meet corporate security standards.  Also, it is not necessarily true that granting a policy exception automatically represents a risk.  For most organizations, exceptions are permitted only if a system or application can demonstrate that mitigating controls compensate for a risk identified elsewhere in the technology.  Thus, granting a policy exception is not the same as exempting a system or application from security controls.
  • Because RADISSC permits varying standards, dependent upon the risk ranking of a business app, users may be confronted with a myriad of conflicting rules.  Some applications will require passwords containing a minimum of eight characters; others will require only three.  For some systems, passwords must be changed every 60 days; low risk systems, however, may permit changes only every 120 days.  The application of inconsistent rules may result in a greater frequency of forgotten passwords and other signon problems.
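The third point above turns on the distinction between a waiver and an exemption: a waiver is typically granted only when documented mitigating controls compensate for the gap.  A minimal sketch of an exception record capturing that distinction might look like the following; every field name here is illustrative, not taken from any named GRC product:

```python
from dataclasses import dataclass, field

@dataclass
class PolicyException:
    """A hypothetical waiver record for one application and one control."""
    application: str
    control_not_met: str
    mitigating_controls: list = field(default_factory=list)
    approved_by: str = ""

    def is_compensated(self) -> bool:
        # An approved waiver with documented mitigating controls is not
        # the same as exempting the application from security controls.
        return bool(self.mitigating_controls) and bool(self.approved_by)
```

A waiver that lacks either approval or compensating controls would fail this check, which is precisely the residual risk the article's argument is concerned with.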

Good Intentions May Have a Downside

The point exemplified by RADISSC is that a good intention—the establishment of a risk-based information security program—may go awry.  It may be possible to have too much of a good thing (assessing risk) if the real objectives, scope, and practical consequences of assessment are not the products of deliberate thought and planning.  We often assume that vulnerabilities and threats must be controlled in order to reduce risk.  But, as demonstrated by RADISSC, our understanding of risk itself must be controlled to avoid the costly, and unpleasant, consequences of good intentions.


  1. Alex May 21, 2008 at 10:05 am

    Your confusion on the subj. stems from this aspect of your approach:

    “an organization must identify all its applications, establish a consistent application risk assessment methodology, ***and assign a risk ranking to each application.***”

    It’s not that easy, but if you add just a small adjustment to your approach, the issues with the complexity you’re describing go away.

  2. Osama Salah May 21, 2008 at 11:04 pm

    Re your argument: two of the four problems listed concern the particular password issue, not general RADISSC implementation problems.

    Too much of anything isn’t good.
    Security implementations are about balance and compromises. I suppose not many will disagree with that.

  3. Jesse May 27, 2008 at 12:29 pm

    It strikes me that a balanced approach is best. Ask “what are you trying to protect” and “what is the risk associated,” then set the appropriate enterprise standards as singular enforceable rules. If a waiver is needed due to technical limitations then so be it, because the overall complexity, confusion, and enforcement burden will all be reduced. Note we are hiding our weaker apps behind an IDM single sign-on system, so for us this is easier said AND done.
