C. Warren Axelrod

Did NIST Plagiarize My Security-Privacy Venn Diagram?

… or did I copy theirs? Or did someone else come up with it before either of us did?

Nowadays, it’s really hard, if not impossible, to determine which came first. All I know is that the Venn diagram, which shows the intersection between privacy and security, and which is Figure 1 in my article “Achieving Privacy Through Security Measures” in Volume 2, 2007 of the ISACA Journal, is very similar to Figure 1 in the draft of NIST Special Publication 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations, dated August 2017. And neither of us referenced the other. Of course, Revision 1 of the NIST SP 800-53, dated December 2006, was contemporaneous with my article. But, as far as I can see, it did not include the Venn diagram. I would assume, therefore, that my diagram preceded NIST’s, but who knows what might have occurred earlier?
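For anyone curious what such a figure looks like in practice, here is a minimal sketch. It is my own illustration, not a reproduction of either figure, and it assumes the third-party matplotlib_venn package is installed; the region sizes are arbitrary, since only the overlap matters conceptually.

```python
# Minimal sketch of a security-privacy Venn diagram.
# Assumes the third-party matplotlib_venn package (pip install matplotlib-venn).
import matplotlib.pyplot as plt
from matplotlib_venn import venn2

# subsets = (security only, privacy only, overlap); the sizes here are arbitrary.
venn2(subsets=(1, 1, 1), set_labels=("Security", "Privacy"))
plt.title("The intersection of security and privacy")
plt.show()
```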

The example above is really no big deal. Anyone writing on the relationship between security and privacy would likely come up with such a diagram on their own. However, it does illustrate an originality issue with respect to research projects and publications. The question is: “Who got there first?” With so much being published, it is virtually impossible to uncover all of the relevant “prior art,” as they say in the patent world. Yes, googling the topic at hand is a somewhat effective way to discover precedents. But that depends on your entering precisely the right keywords into the search engine. And the same concept can be described in very different terms depending largely on the field from which it came.
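To make the keyword problem concrete, consider this toy illustration (my own example, not from the article): a literal keyword search finds one phrasing of a concept but misses an equivalent phrasing drawn from another field’s vocabulary.

```python
# Toy illustration of why literal keyword search can miss prior art:
# the same concept, phrased in another field's terms, fails to match.

documents = [
    "Venn diagram of the intersection between security and privacy",
    "set-theoretic overlap of confidentiality and data protection",  # same idea, different terms
]

query = "security privacy intersection"

def keyword_match(doc: str, query: str) -> bool:
    """Return True only if every query term appears verbatim in the document."""
    return all(term in doc.lower() for term in query.lower().split())

for doc in documents:
    print(keyword_match(doc, query), "-", doc)
# True  - Venn diagram of the intersection between security and privacy
# False - set-theoretic overlap of confidentiality and data protection
```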

An interesting case in point is described in an article by Martin Hellman, of Diffie-Hellman fame, in the December 2017 issue of CACM (Communications of the ACM). Hellman’s featured “Turing Lecture” article, “Cybersecurity, Nuclear Security, Alan Turing, and Illogical Logic,” points out that Ralph Merkle “developed the concept of a public key distribution system in the fall of 1974,” and that Whitfield Diffie and Martin Hellman, unaware of Merkle’s work, “proposed a more general framework—a public key cryptosystem—in the Spring of 1975.” Hellman suggests that we should rightfully refer to the discovery as the “Diffie-Hellman-Merkle Key Exchange.” It’s a bit late for that, I’m afraid. Of course, getting deserved credit for a major invention is orders of magnitude more significant than having a Venn diagram in common. It’s a real shame that Ralph Merkle had to wait more than 40 years to get the recognition he deserved. I’m sure that this kind of omission has occurred many times; we just don’t know about such cases unless someone has the knowledge and the decency to point them out, and a forum in which to do so.
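For readers unfamiliar with what Merkle, Diffie, and Hellman actually invented, the core of the key exchange is easy to sketch. The toy Python example below is my own illustration, with deliberately tiny numbers; real implementations use very large primes and vetted cryptographic libraries.

```python
# Toy Diffie-Hellman(-Merkle) key exchange. Tiny numbers for illustration
# only; real systems use large safe primes and vetted crypto libraries.

p = 23  # public prime modulus (toy value)
g = 5   # public generator (toy value)

a = 6   # Alice's private key (kept secret)
b = 15  # Bob's private key (kept secret)

A = pow(g, a, p)  # Alice publishes A = g^a mod p
B = pow(g, b, p)  # Bob publishes B = g^b mod p

# Each party raises the other's public value to its own private exponent.
shared_alice = pow(B, a, p)  # (g^b)^a mod p
shared_bob = pow(A, b, p)    # (g^a)^b mod p

assert shared_alice == shared_bob  # both arrive at g^(ab) mod p
print(shared_alice)  # 2 -- a shared secret an eavesdropper cannot easily derive
```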

So, what’s the point? The point is that you often come across something you weren’t aware of that turns out to be relevant to something else you are looking into, or did long ago. Is there any way to avoid this? Currently, I don’t believe there is. But it will be a true test of the artificial intelligence (AI) engines that the likes of Google and Facebook are working on.

If AI systems do become capable of ferreting out even obscure references to a particular subject, along with the ability to compare diagrams and photographs, we might be in for a surprise: others may already have thought about our ideas and published them; we just weren’t aware of those particular sources.

Another issue, with which AI might help, is determining the relative importance of various ideas. That will be a real challenge.

If Google were to achieve its ambition of becoming the repository of all knowledge (other ambitions might include dominance of the world’s information, true or fake), then it could also become the curator of all knowledge. That is a scary prospect. The algorithms will decide. But those algorithms are developed by somewhat narrow-minded (or, more generously, narrowly focused) engineers, who likely are not familiar with the nuances of different fields. Or, worse yet, the AI systems themselves will make those decisions, and who knows which way their biases will lean?

We are already in a crisis over the genuineness of information. Not only do certain individuals believe that they can label “fake news,” but we read from time to time about respected sources bending the truth, bona fide researchers falsifying their results, and the like. If we hand over the decision as to whether something is relevant or false to obscure systems, we won’t solve the problem; we may well make it worse. Who knows what errors or bad judgments lurk in the depths of AI code that no individual can possibly ferret out? Who knows whether someone has hacked into the system and directed it to decide in their own or their sponsors’ favor? I certainly don’t. And I would challenge anyone who says they do.

Perhaps we are better off with the current system, in which there is a good chance that we will be ignorant of some prior work in an area, but at least we have some measure of control over what we look into and whether we trust the source. An AI system may be attractive because of its ability to rapidly search huge swaths of information. On the other hand, its selection criteria will influence what is presented to the researcher or reader, and we may indeed come to rely more heavily on incomplete and inaccurate information … but we won’t know it!

 
