Disclaimer: The opinions of the columnists are their own and not necessarily those of their employer.
C. Warren Axelrod

Conflict vs. Consensus Cybersecurity Risk Models

I gave a presentation at the end of April 2017 on “A Consensus Model for Optimizing Privacy, Secrecy, Security and Safety” at the IEEE Homeland Security Technology Conference. The topic occurred to me while reading the following quote from Brookings Institution Fellow Susan Hennessey:

“We could set up our laws to reject surveillance outright, but we haven’t … We’ve made a collective agreement that we derive value from some degree of government intrusion.” [Emphasis added]

Until now, I have been a proponent of the traditional adversarial model of security … it’s us versus the bad guys. I hadn’t really thought much about a model in which a certain lack of privacy or security or secrecy or safety might be acceptable … until recently. I was aware, of course, of our general willingness to trade privacy for little or no compensation. Famously, a researcher found that students on campus would gladly reveal their passwords for candy. And we all give up information about our interests, activities, search histories and purchase patterns in exchange for a Google search, a Facebook social interaction, a Twitter broadcast, or the convenience of an Amazon purchase or an Uber ride. For the most part, we are not even aware of the value of the information we are giving up or of how the data may be used for or against our personal interests … but the value is clearly huge, given the enormous profits that these Web companies reap.

The situation has become even more intense with the advent of such “personal assistant” devices as Amazon’s Echo, as described in Haley Sweetland Edwards’ article “Alexa takes the stand: Listening devices raise privacy issues” in the May 15, 2017 issue of TIME magazine. Such devices, which also include Google’s Home and Samsung’s smart TV, are constantly listening to conversations in your home. It is claimed that they record actual conversations only after they hear the appropriate “wake-up” word (e.g., “Alexa,” “Hey, Siri”), but that gating can surely be hacked or otherwise subverted. And now, with Amazon’s latest offering, Echo Show, there is an on-board camera, so that the device can not only hear what you’re saying but also see what you’re doing.
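To make that point concrete, here is a deliberately simplified, hypothetical sketch (in Python) of the wake-word gating these devices are said to use: the device listens continuously, but only what follows the wake word is retained and passed on. The wake phrases, the frame format, and the handle_stream function below are my own illustrative assumptions, not the actual Echo or Google Home implementation; the takeaway is simply that the gate is software, and software can be altered or subverted.

```python
# Hypothetical, simplified illustration of wake-word gating.
# Real devices process raw audio; here each "frame" is pre-transcribed text.

WAKE_WORDS = {"alexa", "hey siri"}  # illustrative wake phrases only


def handle_stream(frames):
    """Discard frames until a wake word is heard, then 'record' (collect)
    subsequent frames until an end-of-request marker, then go back to sleep."""
    awake = False
    recorded = []
    for frame in frames:
        text = frame.strip().lower()
        if not awake:
            # Always listening locally, but nothing is retained or sent.
            if text in WAKE_WORDS:
                awake = True
        elif text == "<end>":
            awake = False
        else:
            recorded.append(frame)  # this is what would leave the device
    return recorded


# Only the frame spoken after "Alexa" is captured:
print(handle_stream(["what's for dinner", "Alexa", "play some music", "<end>"]))
```

In this sketch, the entire privacy assurance rests on the `if not awake` branch behaving as advertised; change that one condition, whether maliciously or under legal compulsion, and the device retains everything it hears.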

This ability to eavesdrop is by no means a new phenomenon. I recall that law enforcement asked General Motors to enable eavesdropping on conversations in cars equipped with its OnStar system, as documented in Declan McCullagh’s November 19, 2003 article (yes, more than 13 years ago!) “Court to FBI: No spying on in-car computers.” The article is at https://www.cnet.com/news/court-to-fbi-no-spying-on-in-car-computers/

Whether or not we support these privacy-destroying devices, it comes down to some virtual collective agreement that we are okay trading privacy or secrecy or safety for some other benefit. You might question the assertion that we are willing to give up secrecy, but that is what happened with Edward Snowden’s leaks, for example: many seemed to accept the revelation of secrets important to national security in exchange for information about government surveillance and possible invasions of privacy, even though that disclosure has so far led to minimal additional protection of personal communications data. Was the damage done worth it? It depends on where you are coming from.

It’s not as though we can vote explicitly for or against particular levels of privacy and the like. It’s more a question of whether protests by victims against entrenched interests (e.g., lobbyists, Web companies, ISPs) are effective, which they mostly are not. Net neutrality is a case in point, except that there are large companies with their own special interests on both sides. Individuals have very little influence.

An interesting article about Facebook’s struggles with issues such as News Feed, the “filter bubble” and clickbait appeared in The New York Times Magazine of April 30, 2017. It is “Social Insecurity” by Farhad Manjoo and carries the following subtitles:

“Mark Zuckerberg now acknowledges the dangerous side of the social-media revolution he helped to start.”

“But can Facebook really fix its own worst bug?”

The author and a New York Times colleague, Mike Isaac, met several times with Mark Zuckerberg and a Facebook P.R. person.

From a consensus model perspective, it is interesting to note that Zuckerberg’s vision, which he articulated in his 2012 manifesto, was “to make the world more open and connected.” This has clearly been achieved in spades. However, he is now questioning whether it was indeed wise to connect the world. It is an open question given “the global ills that have been laid at Facebook’s feet.”
