C. Warren Axelrod

Data Masking: Good … Information Masking: Very Bad

As we learn more and more about the huge data breach of the U.S. Office of Personnel Management (OPM), two aspects are grabbing everyone’s attention. One is the weakness of the security measures implemented by OPM and its contractors; the other is that senior management of OPM and purportedly the Administration were not forthcoming in disclosing the scope of the leak.

Much of the blame for the data exfiltration has been assigned to OPM’s use of an older version of Einstein, a government-built intrusion detection system (IDS). Einstein’s upgrade to an intrusion prevention system (IPS) has been hampered by delayed funding and approvals, according to the article “Breached Network’s Security is Criticized” by Damian Paletta on the front page of The Wall Street Journal of June 24, 2015. From my experience, the move from IDS to IPS is not a trivial exercise, since one needs to be careful not to block legitimate traffic. It takes time and expertise to fine-tune an IPS. The problems at OPM, however, appear from reports to be much more extensive.

More importantly, though, no organization should depend solely on an IDS/IPS to manage its security. There is a whole range of measures, including strong (preferably two-factor) authentication, role-based access control, and data masking. With data masking, sensitive data fields are made available (for read, write, and/or modify) only to those with a need-to-know.

The WSJ article also mentions separating data, so that gaining access to a single database does not reveal the whole picture of someone’s identity. This method has been discussed for some time, but it is not trivial to implement. It depends first on precise data classification; then the computer applications have to be designed to perform the complex operations of merging the data and breaking them apart again. It is important to recognize that this method, along with encryption, is only effective for protecting raw data. If an attacker goes through an application with privileged access rights, the system will bring together (and decrypt) the data and present them in aggregated (and cleartext) form. Believing that data separation and encryption are the answer is a myth of data protection held by many, including lawmakers.
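To make the data-masking idea concrete, here is a minimal Python sketch of field-level masking driven by a need-to-know policy. It is only an illustration under assumed conventions; the roles, fields, and policy table are hypothetical, not anything OPM actually used.

```python
# Field-level data masking driven by a need-to-know policy.
# All role and field names below are hypothetical.

NEED_TO_KNOW = {
    "benefits_clerk": {"name", "agency"},
    "investigator": {"name", "agency", "ssn", "clearance_level"},
}

def mask_record(record: dict, role: str) -> dict:
    """Return a copy of `record` with every field outside the
    role's need-to-know replaced by a masked placeholder."""
    allowed = NEED_TO_KNOW.get(role, set())  # unknown roles see nothing
    return {
        field: value if field in allowed else "***MASKED***"
        for field, value in record.items()
    }

employee = {
    "name": "Jane Doe",
    "agency": "OPM",
    "ssn": "123-45-6789",
    "clearance_level": "TS/SCI",
}

print(mask_record(employee, "benefits_clerk"))
# {'name': 'Jane Doe', 'agency': 'OPM',
#  'ssn': '***MASKED***', 'clearance_level': '***MASKED***'}
```

Note that the masking happens inside the application; an attacker who steals credentials for the “investigator” role still sees everything, which is exactly the privileged-access weakness described above.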

The only approach that has a chance of working is a combination of an effective IAM (Identity and Access Management) system, importantly including stringent registration procedures, and instrumented applications, so that you can actually know, in real time, who is accessing and leaking what data. The other measures are nice to have, but they don’t do much if the attacker steals legitimate credentials through spear-phishing or social engineering.
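As a rough illustration of what instrumenting an application might look like, here is a Python sketch in which a data-access function is wrapped so that every call emits a structured audit event for a real-time monitor to consume. The function, user, and record names are invented for the example.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

def audited(fetch_fn):
    """Wrap a data-access function so every call emits a structured
    audit event recording who asked for what, and when."""
    def wrapper(user_id: str, record_id: str):
        event = {
            "ts": time.time(),
            "user": user_id,
            "record": record_id,
            "op": fetch_fn.__name__,
        }
        audit_log.info(json.dumps(event))  # feed to real-time monitoring
        return fetch_fn(user_id, record_id)
    return wrapper

@audited
def fetch_personnel_record(user_id: str, record_id: str) -> dict:
    # Stand-in for the real database lookup.
    return {"record_id": record_id, "status": "retrieved"}

fetch_personnel_record("analyst_017", "SF-86/44721")
```

The point is that the application itself, not just the network perimeter, produces the who-accessed-what record, so even a stolen legitimate credential leaves a trail.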

So … data masking is an important tool for ensuring that only those with a need-to-know see certain very sensitive information. But implementing it requires a strong understanding of which data are needed by whom, and it can take a lot of programming work, especially with today’s integrated systems of systems.

The second issue is masking (or not disclosing) information so that the true extent of a breach is not made known to those affected and to the public at large. This could be a case of not actually knowing what was taken, although that does not appear to be the major issue at OPM: as described in the article “Officials Masked Severity of Hack” by Devlin Barrett and Damian Paletta in the June 25, 2015 issue of The Wall Street Journal, the obfuscation appears to have been deliberate.

In many cases, an organization does not know what data were taken, when, and by whom. This is due to weak monitoring of networks and a lack of instrumentation within the applications. The former is relatively easy to fix (you just buy some products), but the latter takes a lot of work, especially if a software rewrite is needed. Of course, it is much better to include the need for instrumentation in the requirements phase of the system development lifecycle and to carry those requirements through the design, development, and testing phases. I wrote about this in the article “Creating Data from Applications for Detecting Stealth Attacks” in the September/October 2011 issue of CrossTalk Journal; see http://static1.1.sqspcdn.com/static/f/702523/14121186/1315886331850/201109-Axelrod.pdf?token=0%2FFV5vW3t5qnshouG5UH3mKmNzE%3D
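To suggest what that application-generated data buys you, here is a deliberately crude Python sketch that flags users whose access volume in a monitoring window exceeds an assumed per-role baseline. The events, user names, and threshold are hypothetical; real stealth-attack detection would need far richer baselines.

```python
from collections import Counter

# Hypothetical (user, record_id) audit events of the kind an
# instrumented application would emit.
events = [("clerk_03", f"REC-{i}") for i in range(12)]    # unusually many
events += [("clerk_07", "REC-1"), ("clerk_07", "REC-2")]  # normal volume

BASELINE_PER_WINDOW = 5  # assumed normal access count for this role

def flag_anomalies(events, threshold=BASELINE_PER_WINDOW):
    """Flag users whose access count in the window exceeds the
    assumed baseline -- a crude check for bulk exfiltration."""
    counts = Counter(user for user, _ in events)
    return [user for user, n in counts.items() if n > threshold]

print(flag_anomalies(events))  # ['clerk_03']
```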

When you look at all the things that need to be done, and at the cost, resources, and time needed to implement them, one might despair of any ability to close the floodgates or, unfortunately in many cases, the “barn door.” So, what is left to do? If protection does not solve the short-term problem, then one is left with deterrence and avoidance. Deterrence in the cyber world is highly questionable, especially as attribution is fraught with errors … sources can be spoofed. And sanctions are not as effective for cyber warfare as they may be for kinetic wars.
