Disclaimer: The opinions of the columnists are their own and not necessarily those of their employer.
C. Warren Axelrod

Facebook Fallibility—Algorithms vs. Judgment vs. Ourselves

A front-page article by Mike Isaac in the May 21, 2016 issue of The New York Times carries the title “At Facebook, Human Backup for Algorithms Proved Fallible.” The article describes how a team of Facebook leaders discussed how to “use human judgment to make algorithms better at finding news on Facebook.”

This reminds me of a project I worked on early in my career, in which we ran a model of the U.S. economy for the firm’s Chief Economist. Each week he would review the results and modify those with which he disagreed. Yet to the world at large his forecasts were “based on a computer model.” Why, one might ask, didn’t he just improve the model? After all, wasn’t the model based on his judgment in the first place?
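
To make that workflow concrete, here is a minimal sketch in Python (the indicators, numbers, and function names are all invented for illustration): the model produces forecasts, and the expert’s overrides simply replace the values he disagrees with.

```python
# Hypothetical sketch of "human backup for algorithms": a model forecasts,
# and an expert's overrides replace the values he disagrees with.

def model_forecast() -> dict[str, float]:
    """Stand-in for the econometric model's weekly output (invented numbers)."""
    return {"gdp_growth": 2.1, "inflation": 3.4, "unemployment": 5.0}

def apply_overrides(forecast: dict[str, float],
                    overrides: dict[str, float]) -> dict[str, float]:
    """Return the forecast with the expert's judgments layered on top."""
    return {**forecast, **overrides}

# The Chief Economist disagrees with the inflation figure.
published = apply_overrides(model_forecast(), {"inflation": 2.8})
print(published)  # {'gdp_growth': 2.1, 'inflation': 2.8, 'unemployment': 5.0}
```

The point is that the override layer, not the model, determines what gets published.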

Also around that time, I wrote an “artificial intelligence” program for scheduling on-call duty for anesthesiologists at three hospitals. Here the sponsor wanted to be able to inject his personal requirements (not to be on call on certain days at certain times) so that they would override any schedule the program would otherwise have generated.
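
Conceptually, the override worked something like the following sketch (the names, days, and rotation rule are all invented): the program fills a simple rotation, but one doctor’s blackout days are hard constraints that trump whatever the rotation would otherwise produce.

```python
# Hedged sketch: round-robin on-call assignment with one doctor's personal
# blackout days treated as hard constraints. All names and days are invented.

from itertools import cycle

def schedule(days: list[str], doctors: list[str],
             blackouts: dict[str, set[str]]) -> dict[str, str]:
    """Assign each day to the next doctor in rotation who has not blocked it."""
    rotation = cycle(doctors)
    assignments = {}
    for day in days:
        for _ in range(len(doctors)):
            doctor = next(rotation)
            if day not in blackouts.get(doctor, set()):
                assignments[day] = doctor
                break
    return assignments

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
doctors = ["Adams", "Baker", "Clark"]
# The sponsor's requirement: Adams is never on call Tuesdays or Thursdays.
print(schedule(days, doctors, {"Adams": {"Tue", "Thu"}}))
# {'Mon': 'Adams', 'Tue': 'Baker', 'Wed': 'Clark', 'Thu': 'Baker', 'Fri': 'Clark'}
```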

I also recall working with an early commercial spreadsheet program in which you could change end results that you didn’t “like,” and the program would reverse-engineer the model so that the inputs yielded the desired results.
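
This is essentially what later spreadsheet tools called “goal seek”: pick the answer you want and solve backward for the input that produces it. A minimal sketch, with an invented revenue model and target:

```python
# "Goal seek" in miniature: instead of improving the model, choose the
# desired output and search for the input that yields it. The revenue
# model and target below are hypothetical.

def bisect_goal_seek(model, target: float, lo: float, hi: float,
                     tol: float = 1e-6) -> float:
    """Find x in [lo, hi] with model(x) ~= target, assuming model is increasing."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if model(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A toy revenue model: revenue as a function of unit price.
revenue = lambda price: 1000 * price - 20 * price ** 2

# We want revenue to come out to 8000; what price makes that true?
price = bisect_goal_seek(revenue, 8000, lo=0, hi=25)
print(round(price, 2))  # ~10.0
```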

Fast forward a few decades to the situation at Facebook. Facebook’s excuse seems to be that “Trending Topics [the section of the website that is supposed to present the most popular news stories at a particular point in time] was a fledgling, ill-managed group—made up largely of recent college graduates with little work experience—where individual judgment was encouraged.” Surely Facebook could have afforded to hire some of the many experienced journalists who have been displaced by news blogs and social media. Nor was this issue unforeseen: it was anticipated in Andrew Keen’s 2007 book “The Cult of the Amateur: How blogs, MySpace, YouTube, and the rest of today’s user-generated media are destroying our economy, our culture, and our values.” Of course, you should substitute Facebook, Twitter, Reddit, and others for MySpace.

During the first Internet bubble, I talked to a bright recent graduate who, despite having been with the company for only three to six months and having no experience in journalism, was given a key news reporting job at the startup. It seems that not much has changed on that score over the past decade and a half.

The day after the above-mentioned article appeared, on May 22, 2016, Op-Ed columnist Frank Bruni wrote a piece in the Sunday Review section of The New York Times about “How Facebook Warps Our Words.” Bruni doesn’t refute Isaac’s article, which he does not reference specifically, but claims that the manipulation of “trending” news is “just one facet of Facebook,” and not even the most prevalent one. Bruni believes that the “crucial dynamic, algorithm, or whatever you want to call it” is the one that connects users to posts from the friends they follow and from groups of like-minded individuals. Bruni puts the onus of political leanings not on the Internet but on the user population. That isn’t quite the case. Let’s not get confused among dynamics (use), algorithms, and system design, as Bruni seems to be. It’s akin to the old justification that guns don’t kill, people do, versus the argument that if guns were not as readily available to the populace, there would be far fewer gun-related deaths and injuries. Having grown up in the U.K., I can attest to the latter argument … at least for one country.

Getting back to the Internet, Facebook, Google, Amazon, Twitter, and the like, there is little doubt that many “bad” uses of the Internet in general, and of social media in particular, happen because the systems themselves, with their common lack of governance and absence of user self-control, facilitate such negative uses, even if they were not originally intended to. The algorithms that operate within these systems are biased because, to some extent, those who develop them base them on their own views and life experience, whether consciously or subconsciously.
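
A toy illustration of that last point, with invented stories, features, and weights: a feed-ranking algorithm is often just a weighted score, and whoever chooses the weights decides, consciously or not, what surfaces first.

```python
# Toy illustration of embedded bias: a feed ranker is a weighted score,
# and the developer-chosen weights decide what "matters." The stories,
# features, and weights below are all invented.

stories = [
    {"title": "Policy analysis", "shares": 120, "outrage": 0.1},
    {"title": "Outrage bait",    "shares": 90,  "outrage": 0.9},
]

def rank(stories, w_shares: float, w_outrage: float):
    """Order stories by a developer-chosen weighted score, highest first."""
    return sorted(stories,
                  key=lambda s: w_shares * s["shares"] + w_outrage * s["outrage"],
                  reverse=True)

# Two defensible-looking weightings surface different stories first.
print(rank(stories, w_shares=1.0, w_outrage=0.0)[0]["title"])    # Policy analysis
print(rank(stories, w_shares=0.1, w_outrage=100.0)[0]["title"])  # Outrage bait
```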

These arguments as to goodness and badness don’t appear to enter into decisions by Mark Zuckerberg, Eric Schmidt, Jeff Bezos, and others like them. It really comes down to their financial bottom lines, which are affected by the number of users, the length of their stay (the stickiness of the websites’ features), and the potential for revenue gains. That we are all drawn into this whirlpool merely describes what is happening. How we use the features is facilitated by the systems, features, and algorithms, but the ultimate decision is a personal one. It just appears that many of us are making questionable decisions.
