Snowden’s Unknown Cache

While it was not entirely unexpected, it did come as a shock to read that the National Security Agency (NSA) may never know the full extent of the information that Booz Allen contractor Edward Snowden stole. In a front-page article in The New York Times of December 15, 2013, with the title “Officials Say U.S. May Never Know Extent of Leaks – What Snowden Stole – Failure to Quantify Loss Illustrates Major Lag in Security,” Mark Mazzetti and Michael S. Schmidt report that “American intelligence and law enforcement investigators have concluded that they may never know the entirety of what the former National Security Agency contractor Edward J. Snowden extracted from classified government computers before leaving the United States, according to senior government officials.”

In my July 1, 2013, BlogInfoSec column “NSA: IAM … What IAM?” I wrote the following:

“Even if an organization has a good handle on registration, identification, authorization and access control, it is not sufficient. You need to be able to monitor user behavior over time and pick up on suspicious use patterns and anomalous behavior. Ideally you will have an established list of many such inappropriate activities that are suspicious even if they appear to have been authorized at some level. This type of analysis is what the NSA is so good at, so why did they not appear to be able to pick up on Snowden’s activities in real-time or near real-time? Maybe it is the old story about the cobbler’s children not having good shoes. Or perhaps it is a result of the old saw ‘trust, but verify.’ It might sound perverse, but I think a more effective approach is ‘trust nobody initially and let them build up their trustworthiness over time.’ There are known techniques for those with privileged access, including system administrators, to be given limited access to sensitive information and yet still perform their duties. We see this with password resets, where Help Desk personnel cannot view past, current or future passwords, and yet can still perform the job.”
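The password-reset pattern mentioned at the end of that quote can be made concrete. The sketch below is a minimal, hypothetical illustration (the class and method names are my own, not from any particular product): because only salted hashes are stored, a Help Desk operator can reset a credential without ever being able to view a past, current, or future password.

```python
import hashlib
import os
import secrets

class CredentialStore:
    """Stores only salted password hashes, so no role -- including
    Help Desk -- can recover a past or current password."""

    def __init__(self):
        self._records = {}  # user -> (salt, derived key)

    def set_password(self, user, password):
        salt = os.urandom(16)
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._records[user] = (salt, key)

    def verify(self, user, password):
        salt, key = self._records[user]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return secrets.compare_digest(candidate, key)

    def helpdesk_reset(self, user):
        """Help Desk action: issue a random one-time password that the
        user must change at next login. The old hash is overwritten, so
        the operator never learns any long-lived secret."""
        temporary = secrets.token_urlsafe(12)
        self.set_password(user, temporary)
        return temporary
```

The point of the design is separation of duty: the reset capability is granted without granting read access to the secret itself, which is exactly the kind of limited privileged access the quote describes.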

Reading the Mazzetti-Schmidt article further, one learns the shocking truth that the NSA had not implemented monitoring software on its computer and network systems in Hawaii. The lack of instrumentation in applications, of system monitoring, and of reporting on suspicious behavior is rampant, but it is particularly distressing to think that the NSA, which is in the monitoring business, is deficient when it comes to its own systems. Whether one thinks that Snowden is a hero or a traitor, the bottom line is that no one should have been able to abscond with “company data” and, if someone could, there should certainly have been sufficient other data on such activities to determine exactly what happened through computer forensics methods.
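Even rudimentary monitoring of access patterns would have surfaced bulk extraction. As a toy illustration (not a description of any NSA tool, and the function name and threshold are my own assumptions), one can flag users whose document-access counts sit far above the population baseline:

```python
from collections import Counter
from statistics import mean, stdev

def flag_bulk_access(events, sigma=3.0):
    """Flag users whose access count exceeds the population mean by
    more than `sigma` standard deviations. `events` is a list of
    (user, doc_id) tuples -- a stand-in for an access log."""
    counts = Counter(user for user, _ in events)
    if len(counts) < 2:
        return []  # not enough users to establish a baseline
    mu = mean(counts.values())
    sd = stdev(counts.values())
    if sd == 0:
        return []  # every user behaves identically; nothing stands out
    return [user for user, c in counts.items() if c > mu + sigma * sd]
```

A real deployment would baseline per-role and per-time-window behavior rather than a single population statistic, but even this crude test separates a mass downloader from ordinary users.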

I make this very point in my article “Creating Data from Applications for Detecting Stealth Attacks,” in the September/October 2011 issue of CrossTalk magazine. You have to build security intelligence data creation into applications. Infosec professionals must make their requirements known at the earliest stages of the SSDLC (software system development lifecycle) and continue oversight throughout each phase of the lifecycle to ensure that those requirements have been implemented. There also has to be high-level commitment and support of these efforts and the courage to delay implementation, close down and/or reduce access to systems that do not meet security, integrity, availability and performance requirements. Are these Draconian measures? Perhaps they are, but this is a very serious business. Security cannot be neglected in favor of convenience and functionality.
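What “building security intelligence data creation into applications” looks like in practice is that sensitive operations emit structured audit events as part of their normal execution, rather than relying on bolted-on network sensors. The sketch below is a hypothetical, minimal example of the idea (the decorator name and in-memory log are illustrative assumptions, not from the CrossTalk article):

```python
import json
import time
from functools import wraps

AUDIT_LOG = []  # stand-in for a write-once, tamper-evident audit channel

def audited(action):
    """Decorator that records a structured security event every time a
    sensitive operation runs, so forensic data exists by construction."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            event = {
                "ts": time.time(),
                "user": user,
                "action": action,
                "args": repr(args),
            }
            AUDIT_LOG.append(json.dumps(event))
            return fn(user, *args, **kwargs)
        return wrapper
    return decorate

@audited("read_document")
def read_document(user, doc_id):
    # Application logic; the audit record is created before it runs.
    return f"contents of {doc_id}"
```

Because the event is generated inside the application, it captures who did what at the business-logic level, which is precisely the data that network-layer monitoring cannot reconstruct after the fact. This is the sort of requirement infosec professionals must inject at the earliest stages of the SSDLC.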

The NYT article faults the NSA for the lag in implementing computer security in the light of President Obama’s demands for tighter security standards following the WikiLeaks incident in 2010. It is clear from this that neither the NYT reporters nor the White House have a good grasp of what is needed to prevent data leakage and, if a deflected or successful attempt is made to access and leak sensitive data, how that needs to be detected, reported and handled. The common perception is that you just bolt on the necessary security tools and the job is done. It is such thinking that results in pervasive “Swiss-cheese” systems that are so easy to attack successfully without leaving any trace of such activities. No, security has to be built in from the beginning, and this is a large, arduous and expensive task. But, compared with the cost of such regular pillorying by the press, the investment is well worth it.
