Disclaimer: The opinions of the columnists are their own and not necessarily those of their employers.
C. Warren Axelrod

Campaign Lessons Learned—Part 3: Authenticity, Authority and Access

From the cybersecurity professionals’ point of view, identity and access management (IAM) comes down to three things: authenticity, authorization, and access permissions. Similarly, if the origin of a news item, blog post, or comment can be traced to a provably authoritative source, it is more likely to be accepted as true … but not always.
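To make that triad concrete, here is a minimal sketch in Python of an IAM-style check: establish the principal’s authenticity, verify its authorization, and only then grant access. Everything in it (the USERS store, the ROLE_PERMISSIONS policy, the request_access helper) is hypothetical, invented for illustration rather than drawn from any particular product.

    import hashlib
    import hmac

    # Hypothetical credential store: user -> salted password hash.
    USERS = {
        "alice": hashlib.sha256(b"salt" + b"correct horse").hexdigest(),
    }

    # Hypothetical authorization policy: user -> set of permitted actions.
    ROLE_PERMISSIONS = {
        "alice": {"read", "comment"},
    }

    def authenticate(user: str, password: str) -> bool:
        """Authenticity: is this principal who it claims to be?"""
        expected = USERS.get(user)
        if expected is None:
            return False
        supplied = hashlib.sha256(b"salt" + password.encode()).hexdigest()
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(expected, supplied)

    def authorize(user: str, action: str) -> bool:
        """Authority: is this authenticated principal permitted to act?"""
        return action in ROLE_PERMISSIONS.get(user, set())

    def request_access(user: str, password: str, action: str) -> str:
        """Access: grant only when both checks above succeed."""
        if not authenticate(user, password):
            return "denied: cannot establish authenticity"
        if not authorize(user, action):
            return "denied: authentic, but not authoritative for this action"
        return f"granted: {user} may {action}"

    print(request_access("alice", "correct horse", "comment"))  # granted
    print(request_access("alice", "correct horse", "publish"))  # denied
    print(request_access("mallory", "guess", "read"))           # denied

The order matters: authorization is meaningless until authenticity has been established, which is precisely the problem with anonymous or spoofed sources, as discussed next.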

There have been occasional cases of reporters at some of the most respected news organizations bending the truth or just plain lying. It doesn’t appear to be a frequent occurrence, but it does happen: content is plagiarized, photographs are manipulated. Here the source is authentic (we know who it is), and the source is generally held to be authoritative. All too often, however, sources, such as the teenagers in Macedonia who fabricated news stories, are anonymous or disguised (spoofed).

The Internet is fertile ground for fakery and anonymity. I have written and presented frequently about the misuse of anonymity. I’ve complained about how easy it is to post unfounded, anonymous comments on articles in respectable publications, and yet how very difficult it is to get letters to the editor published, due in part to space limitations but also as a result of rigorous vetting by editors. I should know … I’ve had four or five letters to the editor published in The Wall Street Journal, The New York Times, The New Yorker, and BusinessWeek, and have gone through their respective review processes. Reviews by The New Yorker are the most stringent, but the others are tough as well. Typically, for letters, a respected publication will engage in extensive background and fact checking, along with some editing, whereas for online comments there appear to be virtually no controls or constraints, except perhaps screening for bad language and blatantly inciting remarks … and even the standards for that screening appear to be quite low. As a result, a number of online sites have discontinued their comments sections altogether.

There have always been fakery and falsehoods, or, as Sir Winston Churchill famously called them, “terminological inexactitudes” (parliamentary protocol forbids calling a fellow Member of Parliament a liar) … but never have we seen such a proliferation of questionable “facts.” What has changed in the past decade or so is the low cost of, and considerable potential gain from, putting out false news and hurtful comments (cyberbullying, harassment), combined with the low likelihood of any meaningful consequences for, or retribution against, the perpetrators.

Vinton G. Cerf, Google’s Chief Internet Evangelist and “one of the fathers of the Internet,” devoted his monthly “Cerf’s up” column in the January 2017 issue of Communications of the ACM to “Information and Misinformation on the Internet.” Cerf considers the following reactions to the burgeoning volume of fake news:

  • Remove bad information
  • Provide more information to allow readers/viewers to decide for themselves what to accept or reject
  • Provide countervailing information (fact checking) to help inform the public
  • Suggest to people that they ignore anything counter to their worldview

While I have the utmost respect and regard for Vint Cerf, as do many of my colleagues, I find his suggestions well-meaning but weak and unlikely to be effective. Many of those publishing falsehoods have too much at stake to be deterred by such measures, and consumers of such lies have little motivation to seek the truth … it is far easier just to believe what they see, hear, or read. What is really needed is a strong legal and regulatory response: stringent laws, crafted and passed, to stem the tsunami of lies. But then, of course, you need to choose adjudicators whom you trust … and trust is highly subjective, varying greatly from one person to another.

In the Sunday Review section of the March 5, 2017 New York Times, Philip Fernbach and Steven Sloman wrote a “Gray Matter” essay on “Why We Believe Obvious Untruths.” They claim that “The problem is that the forces underlying bogus and real knowledge are similar.” They go on to say that “It is remarkable that large groups of people can coalesce around a common belief when few of them individually possess the requisite knowledge to support it.” This seems to be the key: there is a minimum of “requisite knowledge” needed to support a “belief” or an “opinion.” Believed facts do change over time, however, as when the “fact” that the Earth is flat gave way to the knowledge that it is round … although some still believe it to be flat despite overwhelming evidence to the contrary! It comes down to the evidence supporting asserted facts, and to whether the sources of that evidence are trustworthy and sufficiently knowledgeable.
