Bad Behavior – Thoughts on the Malicious Insider

Following every high-profile insider security breach, there is usually a slew of vendors who will triumphantly point out that, had they installed their product, the victim company would have avoided the whole painful problem. The adverse publicity, the implementation of new Draconian controls, the reprimanding and firing of “my best employees,” the souring of relationships with customers and business partners, and being subjected to continuous audits – all these horrors might never have happened “if only they had had the right products in place.”

But let’s be honest about it (even if only to ourselves). In the first place, nobody really knows the scope of the insider threat. Figures of around 70 to 80 percent of total incidents are commonly attributed to insiders. I happen to think that the number is much higher, probably of the order of 95+ percent – and here is why. If you look at the ways in which various incidents are reported, an interesting pattern emerges. Outsider attacks are more likely to be picked up, and stopped from turning into actual incidents, through the use of tools such as intrusion detection and prevention systems (IDSs and IPSs). Insider incidents, on the other hand, are more likely to be discovered by chance because the perpetrator got careless or greedy or both, and his or her activities were noticed by an alert employee or in the course of an audit review.

Now I ask you, what percentage of actual nefarious activities is identified for external versus internal transgressions? I would guess that 50 percent or more of external attacks, but only about 5 percent of internal misdeeds, are captured. So let’s assume we know of 70 internal incidents and 30 external incidents, this being the approximate breakdown one might expect. However, if you accept my guesses, the total populations work out to 1,400 internal incidents and 60 external incidents. This would mean that some 96 percent of incidents are internal, yet we only find out about one in twenty or so of them.
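The back-of-the-envelope arithmetic above can be sketched directly. The detection rates are, as stated, pure guesses, and the known-incident counts are the article’s assumed breakdown:

```python
# Back-of-the-envelope estimate of total incident populations,
# assuming (as the article guesses) that ~50% of external attacks
# and only ~5% of internal misdeeds are ever detected.
known_internal, known_external = 70, 30              # reported incidents
detect_internal, detect_external = 0.05, 0.50        # assumed detection rates

total_internal = known_internal / detect_internal    # 70 / 0.05 = 1400
total_external = known_external / detect_external    # 30 / 0.50 = 60

internal_share = total_internal / (total_internal + total_external)
print(f"Estimated internal share of all incidents: {internal_share:.0%}")
```

Dividing each known count by its assumed detection rate scales it up to an estimated total population; the 96 percent figure falls straight out of those two guessed rates, which is exactly why commenters below question it.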

OK, so we have tools that are really good at calling out known anomalies. But if we believe that the ratio of known to total internal incidents is very small, say 5 percent, then the real problem is our inability to capture the far larger number of suspicious internal activities that currently go unnoticed.

But fear not, there are many vendors appearing over the horizon, each with a particular method for resolving a particular problem. Is this bad? No, not really. In fact, it is good to see so many creative approaches. We have to start somewhere, and a number of these products are pretty good and have enormous potential. The noteworthy point is that there is no single silver bullet here. We need a variety of tools using pattern recognition and artificial intelligence (I prefer the term “adaptive systems”) in order to tease out patterns of irregular behavior from the morass of noisy data. I have seen one product that applies the brilliant technology used in the human genome project to determine the whereabouts of a few drops of sensitive data in an ocean of corporate information. Other innovative approaches learn what is considered to be normal behavior and give the alarm when that behavior changes significantly. Some products capture data in motion, others data at rest. Some track sensitive data leaving an organization, others within the organization’s boundaries.
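The “learn what is normal, raise the alarm when behavior changes significantly” approach that several of these products take can be illustrated with a deliberately simplified sketch. The baseline figures and the z-score threshold below are illustrative assumptions, not any particular vendor’s actual method:

```python
# Minimal sketch of behavioral anomaly detection: model a user's
# baseline activity (here, daily records accessed) as a mean and
# standard deviation, then flag observations that deviate too far.
# All numbers and the threshold are illustrative assumptions.
import statistics

baseline = [102, 95, 110, 99, 105, 97, 103]   # typical daily record accesses
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed, threshold=3.0):
    """Flag an observation whose z-score exceeds the threshold."""
    return abs(observed - mean) / stdev > threshold

print(is_anomalous(104))    # an ordinary day
print(is_anomalous(5000))   # a sudden bulk access
```

Real products build far richer baselines (per user, per role, per time of day), but the trade-off is the same one discussed below: the model must first be taught the difference between right and wrong, and a loose threshold floods analysts with alerts while a tight one misses careful insiders.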

Despite these differences, the tools have many things in common. The key issues with many of these products are the following:

  • It can take considerable effort to set up a product and to teach it the difference between right and wrong.
  • Watertight policies and procedures for monitoring and reporting incidents need to be established.
  • Even when the results are filtered, they can be extensive and overwhelming.
  • The enforcement task can be daunting once irregular behavior has been identified.

As we enter a new era of more difficult-to-detect exploits, we need monitoring tools, defenses and preventative methods that are up to the escalating threats. It is no longer enough to identify and act upon known exploits. Increasingly we are seeking out technologies that can second-guess criminals, even when the bad guys are “trusted” employees, contractors, business partners or even customers. Such products need to understand the nuances of normal behavior in order to minimize false (or unprovable) accusations and ensure that practically all provably nefarious activities are identified and resolved.

Old-line signature-based methods are becoming less and less effective against increasingly successful exploits that operate under the normal radar. While they still have a role, traditional antivirus products alone cannot do the task at hand. Hence the proliferation of more sophisticated behavioral products. However, with the latter often being more difficult to deploy, there is much work to be done before they become as plug-and-play as the marketplace is demanding.

So what do we do in the meantime?

First, it would be a good idea if we all shared amongst ourselves the anomalies that we find and monitor. That way we don’t all keep on reinventing the wheel, as it were. After all, the bad guys share all the time, to the extent that the (good?) hackers have even set up their own social network “House of Hackers.”

Second, we should encourage those systems that prevent people from getting into trouble rather than those that catch the perpetrators after they have done something bad. The great benefit is that you avoid all the unpleasantness of investigating and punishing someone, and punishing yourself by having to fire and replace otherwise excellent workers.

And third, you should invest in today’s products even if they are not quite ready for prime time, since you are likely to achieve some unanticipated advantages. If too few of us encourage this approach it will take forever to get tools to the level at which we really need them, by which time we’ll be even further behind the crooks. So buy them, try them, and perhaps you will realize some short-term benefits while waiting for the systems to mature.


  1. Michael May 30, 2008 at 12:48 pm | Permalink

    I understand the reasoning. But I’ve also seen the numbers for insider attacks vary from 40% (CERT) to 80% (various) with many numbers in between. That makes me question the veracity of these statistics. Where do they come from? What constitutes an attack/incident? Do they differ by industry?

    I’ve been an infosec professional for over a decade and while I am sure I have not seen all the insider (or outsider) incidents that occurred on my watch, the number of occurrences and ratio of outsider to insider attacks suggests numbers in the 70% and above range are wildly overstated.

    To me a figure like 96% suggests that a large proportion of a company’s workforce is in the midst of a massive pillaging of its assets, somehow unbeknownst to the hapless infosec pro or the other four honest people left in the company 🙂

    The last few years, outsider attacks have become more prevalent. It is easier to anonymously email malware than it is to infiltrate an organization or bribe someone, I should think.

    Whatever the case, to my mind, the key point for the infosec pro is to get a good handle on their company’s specific threats and risks because I guarantee that it varies from industry to industry, company to company.

  2. Fred Herman May 30, 2008 at 9:51 pm | Permalink

    This article resonates with the typical American response to problem-solving: throw money at it and it will get solved. The real solution to insider security threats involves a paradigm shift from employee as consumable supply to employee as valued asset. When the company has no loyalty to the employees, don’t expect the employee to have much loyalty to the organization. The last IT manager I spoke to said “I don’t involve myself in that HR stuff….” Well, the nexus of human relationships is a system much like a computer network. The more emotionally detached from the system, the easier it is for the individual to become a rogue access point. The solution to the problem doesn’t require the right software package, it requires more interpersonal bonding in the corporate culture. American society focuses more on the success of the individual than concern for the welfare of the society. Call me a socialist if you will, but the truth is still, “garbage in garbage out” if you know what I mean….

  3. Gary Jun 3, 2008 at 4:01 am | Permalink

    Was there any factual basis whatsoever for those numbers or were they simply plucked out of thin air, I wonder? Multiplying two pure guesses together merely amplifies the guesswork.

    Such numbers are really not helpful. Even if it were supported by some evidence, the proportion of insider vs outsider attacks is also not very helpful. This is apples vs pears. Insiders, in the main, hold trusted positions with wide and deep access to systems, knowledge of values etc. They can observe and explore many systems, processes, network packets etc. without fear of being caught or sanctioned. They can commit their frauds and other attacks over an extended period, choosing their opportunities carefully. Outsiders have less knowledge of the specific internal configs, systems, apps, processes, data sets etc., greater technical barriers to exploring and compromising them, fewer opportunities to penetrate/exploit and (arguably) more chance of being caught at least at the perimeter. Some of them may have greater motivation and skills than insiders, but not all.

    For the organization, internal incidents such as staff frauds may be very costly but are less likely to be disclosed publicly, limiting the reputational damage. Outsider frauds etc. are probably more likely to end in disclosure and prosecution, hence perhaps reputational damage.


  4. Alex Jun 3, 2008 at 9:15 am | Permalink

    To pile on….

    In addition to the “not helpfulness” that Gary pointed out – those numbers, like any global/national/industry level statistic have meaning only in context. A specific entity (business, gov’t, etc…) will actually find the past 10 years of history a more meaningful metric, defined by what an insider incident *is* in their context.

    It’s kind of like trying to tie your budget to an industry norm. Who gives a rip what the rest of the industry is spending? Your spending has to be in the context of management’s tolerance for risk.
