Artificial Ignorance

On the Op-Ed page of The New York Times of October 3, 2014, David Brooks wrote a column titled “Our Machine Masters,” which discusses how “artificial intelligence” (AI) might be used for good or evil. His thoughts about AI were prompted by the Pandora music service feeding him suggestions as to what other music he might like. This feature of Pandora is not AI, however. My own definition of AI hinges on a system’s ability to evolve and adapt, that is, to “learn” from experience. Pandora’s system isn’t actually “learning” about a person’s music preferences. Its software has cleverly categorized various characteristics of musical works so that it can match your selections against its database and feed back to you other works that correlate closely with your choices. Essentially, Pandora uses a search engine (much as Google does) to come up with hits that match the listener’s preferences.
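To make that concrete, here is a minimal, purely hypothetical sketch (in Python) of the kind of feature matching a Pandora-style service might perform: songs are hand-tagged with fixed attributes, and recommendations are simply the closest matches to what the listener already picked. The catalog, feature names, and numbers are invented for illustration; this is not Pandora’s actual code, and nothing in it changes in response to experience.

```python
# Hypothetical sketch of fixed feature matching -- not any real service's code.
# Songs are hand-tagged with static attributes; recommendations are just the
# nearest neighbors of a song the listener liked. Nothing here "learns."
import math

CATALOG = {
    "Song A": {"tempo": 0.8, "acoustic": 0.2, "vocals": 0.9},
    "Song B": {"tempo": 0.7, "acoustic": 0.3, "vocals": 0.8},
    "Song C": {"tempo": 0.2, "acoustic": 0.9, "vocals": 0.1},
}

def similarity(x, y):
    """Cosine similarity between two feature dictionaries."""
    dot = sum(x[k] * y[k] for k in x.keys() & y.keys())
    nx = math.sqrt(sum(v * v for v in x.values()))
    ny = math.sqrt(sum(v * v for v in y.values()))
    return dot / (nx * ny)

def recommend(liked_song, top_n=2):
    """Return the songs whose fixed feature tags best match the liked song."""
    target = CATALOG[liked_song]
    scored = [(s, similarity(target, f)) for s, f in CATALOG.items() if s != liked_song]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

print(recommend("Song A"))  # e.g. [('Song B', 0.99...), ('Song C', 0.4...)]
```

However often the tags or the catalog are updated by its developers, the matching rule itself stays the same; that is the sense in which such a system is not “learning.”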

While the developers at Pandora no doubt constantly refine their code and their correlations, the computer programs themselves do not modify their own behavior in response to new information. This is an important distinction, in my opinion. The thinking-machine examples that Brooks provides—namely, computers that play chess, win at Jeopardy, and “do math”—are not necessarily AI machines, either. To the extent that they look up and analyze information entered into huge databases and tagged with specific indicators, they are traditional list-processing machines. To the extent that they learn from experience and modify the ways in which they process new inputs, they can be considered AI machines.
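By contrast, here is an equally hypothetical sketch of a program that does modify how it processes new inputs: it keeps per-feature weights and nudges them after every piece of listener feedback, so the same song can be scored differently as experience accumulates. Again, this illustrates only the sense of “learning from experience” used above, not any real product’s implementation.

```python
# Hypothetical sketch of a system that adapts from experience, in contrast
# to the fixed matching above. The scoring weights change with each piece
# of feedback, so future behavior depends on past interactions.

class AdaptiveRecommender:
    def __init__(self, feature_names, learning_rate=0.1):
        # Start with equal weight on every feature.
        self.weights = {f: 1.0 for f in feature_names}
        self.learning_rate = learning_rate

    def score(self, song_features):
        """Weighted sum of a song's features under the current weights."""
        return sum(self.weights[f] * v for f, v in song_features.items())

    def feedback(self, song_features, liked):
        """Nudge weights up for features of liked songs, down otherwise."""
        direction = 1.0 if liked else -1.0
        for f, v in song_features.items():
            self.weights[f] += self.learning_rate * direction * v

# Usage: the same song scores differently before and after feedback.
rec = AdaptiveRecommender(["tempo", "acoustic", "vocals"])
song = {"tempo": 0.8, "acoustic": 0.2, "vocals": 0.9}
print(rec.score(song))           # score under the initial weights
rec.feedback(song, liked=False)  # the listener skips the song
print(rec.score(song))           # lower score: the program changed itself
```

The second program’s answers tomorrow depend on what it experienced today, which is exactly why its behavior is harder to predict.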

Why is the above distinction important? It’s important because data-processing computers are predictable in their behavior, whereas AI machines are not. This gives the latter much more insidious potential. The HAL 9000 computer in Stanley Kubrick’s movie “2001: A Space Odyssey,” which Brooks references in his column, is a good example of AI. HAL (widely claimed to be “IBM” shifted by one letter, as you probably know, though the film’s producers deny it) begins to behave in an unpredicted manner as it kills off the astronauts one by one, or so we are led to believe. After all, who would knowingly program such murderous acts into the original software? We should remember, however, that the film was produced in 1968, well before the Internet was born and before computers were hacked into and controlled for malicious purposes. Today we might well believe that a hostile nation-state or group of terrorists could purposely invade a computer such as HAL and take over space flights or weapons systems and make them perform in nefarious ways.

Brooks also refers to the excellent article “Brain Power” by Kevin Kelly in the November 2014 issue of Wired magazine, quoting Kelly as saying that “the age of artificial intelligence is finally at hand.” But Brooks doesn’t point out that Kelly, while gobsmacked by the wonders of AI, seems blind to some of its more serious potential consequences. Kelly ends his article with the following words: “The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.”

I think that this is a particularly terrifying thought. HAL defined humanity as useless and proceeded to destroy any human being who might threaten “his” dominance. Is that what we really want?

Elon Musk’s opinion of AI reflects such concerns. In the text accompanying an interview on CNN Money (see “Elon Musk warns against unleashing artificial intelligence ‘demon’” by Gregory Wallace, October 26, 2014, at http://money.cnn.com/2014/10/26/technology/elon-musk-artificial-intelligence-demon/), Musk, who is unquestionably one of the foremost technology thought leaders of our time, voices his grave concerns about “unleashing [the] artificial intelligence ‘demon’” on the world. He said that “we should be very careful about artificial intelligence,” as it could be “our biggest existential threat.” This is clearly a much more hair-raising view than Brooks’ concern about a potential “cold, utilitarian future” or Kelly’s view of a “brave new world.”
