The Threat of Artificial Intelligence

In a recent column I argued that general columnists, such as David Brooks, don’t understand enough about certain technologies, such as artificial intelligence (AI), to assess their impact properly. As a result, the general public considers AI to be much more benign than some technologists believe it to be. I placed greater credence on Elon Musk’s fear that AI threatens our very existence, as mentioned in Brooks’ column, than on Brooks’ own concern about a potential “cold, utilitarian future.”

Two days before my original AI column was posted (on December 5, 2014), Gary Marcus wrote a piece titled “Artificial Intelligence Isn’t a Threat—Yet” in The Wall Street Journal of December 13-14, 2014, in which he described the future dangers of AI and prescribed some remedies. Marcus, who is a professor of psychology and neuroscience at New York University, quotes Elon Musk, as did I, but adds a subsequent December 2, 2014 quote from cosmologist Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race.”

I noticed that Hawking used the word “could” rather than “will,” which leaves us some wiggle room. Musk stated that the AI “demon” could be “our biggest existential threat.” He, too, used “could,” so we are not necessarily completely doomed.

Since the apocalyptic predictions of doom by Hawking and Musk are somewhat toned down in these quotations, the suggested methods for alleviating such horrendous threats are also somewhat diluted. Marcus offers recommendations that are already in place within the system, are completely impractical, or simply don’t go far enough. He suggests that we (presumably meaning government agencies):

  • demand transparency in programs that control important resources
  • fund advances in techniques for “program verification,” making sure that programs do what they are designed to do, and
  • outlaw specific, risky applications

On the surface, these may look like logical and worthwhile ideas. However, they are neither new nor would they be sufficiently effective if attempted. Here is why:

  • Much of the current software that runs critical systems is “open source,” meaning that it is already fully transparent: anyone can view the source code. Yet we have seen major breaches via open-source software, including the widely publicized Heartbleed and Shellshock vulnerabilities. One reason such software is vulnerable is that many important open-source programs, used on the majority of servers and many workstations attached to the Web, are maintained by a few people working on them part-time. This means that many weaknesses are not discovered until they are exploited for nefarious purposes and, when vulnerabilities are discovered, there is no one to go to … or the go-to persons don’t have the resources to address the issues.
  • Sophisticated software verification and validation procedures already exist and are already in use. The military, in particular, spends huge amounts of effort and resources on checking programs that control weaponry, for example. After all, you wouldn’t want a cruise missile to go out of control. But that is just the point: you need not only to test that a program functions as it should, but also to make sure that it doesn’t do what it is NOT supposed to do. Such verification goes by names such as “functional security testing” or “negative testing” (see the sketch after this list), and it is hugely underserved in the computer world, mainly because it can cost orders of magnitude more than regular functional testing and can hold up project completion excessively. However, it is the only reasonably effective way to ensure that securities trading systems won’t go haywire, driverless cars won’t crash, nuclear plants will survive hacker attacks, and the like.
  • The outlawing of “risky” applications is, in my opinion, a pipe dream. An application that puts you personally at risk can be a major money-maker for such companies as Facebook, Amazon, Uber and Google. As business tycoon and philanthropist Leslie Wexner put it (in Forbes magazine’s online “quote of the day” for December 4, 2014): “Society can’t wait. It’s sad there are so many entrepreneurs, business successes and venture capitalists who give no thought to society.” As long as money trumps social welfare, which it always will, there is little that can or will be done to halt, or even slow down, innovation, whether it is for society’s overarching benefit or not.
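
To illustrate the distinction between testing what a program should do and testing what it must not do, here is a minimal sketch in Python using the pytest framework. The transfer() function, its limits, and the tests are hypothetical, invented only to make the point; they are not drawn from any real trading, vehicle or weapons system.

```python
# Minimal sketch of positive vs. negative testing (hypothetical example).
import pytest


def transfer(balance: float, amount: float) -> float:
    """Debit `amount` from `balance`, rejecting anything outside sane bounds."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount


# Functional ("positive") test: the program does what it is designed to do.
def test_transfer_debits_balance():
    assert transfer(100.0, 30.0) == 70.0


# Negative tests: the program must NOT do what it is not supposed to do,
# e.g. accept a negative amount or overdraw the account.
def test_transfer_rejects_negative_amount():
    with pytest.raises(ValueError):
        transfer(100.0, -30.0)


def test_transfer_rejects_overdraft():
    with pytest.raises(ValueError):
        transfer(100.0, 130.0)
```

The negative tests assert what the program must never do, and it is that side of verification which, as noted above, tends to be shortchanged because of its cost.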

There is little doubt that certain technological advances will threaten society to some degree, hopefully at less than an Armageddon level. Nevertheless, it is important that those who really understand technology, such as Elon Musk, be the ones who propose and support mitigation, not a reporter, however skilled, or an expert in neuroscience who is not familiar with the intricacies of software development and the huge amount of resources needed to address AI issues. Generalities, such as “outlaw risky applications,” will not cut it. That is not to say that an essay raising awareness of the issues surrounding AI is a waste of time. No, it is very important to keep the momentum of these discussions going. But discussions alone won’t solve the problem. We need to listen to those knowledgeable enough to understand what has to be done and then proceed under their leadership.
