Are Social Media Bots “Good”?

I found it eye-opening to read Nick Bilton’s article “Phony Friends, Real Profit,” which appeared on page E2 of The New York Times on November 20, 2014, and which seems to be in favor of using bots to create fakes. Either Bilton was writing with his tongue in his cheek or he was condoning a truly dishonest activity, namely, misrepresenting followers and tweeters in order to inflate rankings on Facebook, Twitter, and the like, and even suggesting where one might purchase services to facilitate this activity. I would like to think that Bilton was being facetious … I really hope so.

The very idea of creating fake followers is an insidious one, and it only leads to greater distrust of social media and of the Internet in general.

It is bad enough that pseudonymous commenters frequently derail serious discussions of real issues, but the idea that many activities deemed genuine are in fact fakes is one more step in the deterioration of the Web’s credibility. What was once a means of wholesome interchange has become a dangerous and dishonest mess, where the less scrupulous and more criminal elements seem to be taking over.

This really isn’t a funny enough issue to warrant a flippant article. How can we determine which sources are authentic and which content is authoritative and credible? Much of the time it is obvious, as when an online discussion veers away from the topic at hand and descends into highly objectionable insults. But in other cases, especially when real names and affiliations are used to malign real-life individuals, as with impersonation and its sick relative, imposture, the damage can be severe.

The issue at hand is that, as more insidious means of misrepresenting individuals and their personal views and activities are developed, the trustworthiness of the Web is steadily diminished … and hence its usefulness for honest pursuits. Perhaps this is inevitable, but it certainly isn’t helped by flippant articles on the subject that appear to condone wrongdoing.

As human beings, we are, on the whole, flexible and tolerant of inconveniences and abuses, and we generally just get on with our lives. The pity is that so many benefits of innovative technologies are only partially realized, or never attained, because of a lack of trust. So the question remains … where do we go from here? We need, first, to ensure that the persons behind online comments and assertions are authentic and, second, that they are indeed personally responsible for their statements. Such a process would immediately eliminate many fakes and hoaxes. We then need real folks to be more circumspect about what they actually post online and more aware of the consequences of inappropriate postings. Ironically, the latter will be much more difficult to achieve than the former.
