I have to admit that the cover of this month’s Atlantic, proclaiming “Why Machines Will Never Beat the Human Mind,” left me rather uninterested in reading the article, since claims of such a proof almost never hold up. And, indeed, to the extent that the article implies it has made the case against artificial general intelligence (AGI), it really hasn’t (for my money, whether AGI is even possible remains an open question).

Nonetheless, Brian Christian’s article is easily the most insightful non-technical commentary on the Turing Test I’ve ever read, and one of the best pieces I’ve read on artificial intelligence in general. If he hasn’t disproved AGI, he has done much to show just how difficult a task achieving it would be: how complicated and inscrutable is the subject that artificial intelligence researchers are attempting to duplicate, imitate, and best, and how far the AI software we have today falls short of the human intelligence to which researchers like to compare it.

Christian recounts his experience participating as a human confederate in the Loebner Prize competition, the annual event in which the Turing Test is carried out on the software programs of various teams. Although no program has yet passed the Turing Test as originally described by Alan Turing, the competition awards some consolation prizes, including the Most Human Computer and the Most Human Human, given respectively to the program that fools the most judges into thinking it is human and to the person who convinces the most judges that he is human.

Christian makes it his goal to win the Most Human Human prize, and, from his studies and efforts to win it, offers a bracing analysis of human conversation, computer “conversation,” and what the difference between the two teaches us about ourselves. I couldn’t do justice to Christian’s nuanced argument if I attempted to boil it down here, so I’ll just say that I can’t recommend this article highly enough, and will leave you with a couple of excerpts:

One of the first winners [of the Most Human Human prize], in 1994, was the journalist and science-fiction writer Charles Platt. How’d he do it? By “being moody, irritable, and obnoxious,” as he explained in Wired magazine — which strikes me as not only hilarious and bleak, but, in some deeper sense, a call to arms: how, in fact, do we be the most human we can be — not only under the constraints of the test, but in life?…

We so often think of intelligence, of AI, in terms of sophistication, or complexity of behavior. But in so many cases, it’s impossible to say much with certainty about the program itself, because any number of different pieces of software — of wildly varying levels of “intelligence” — could have produced that behavior.

No, I think sophistication, complexity of behavior, is not it at all. For instance, you can’t judge the intelligence of an orator by the eloquence of his prepared remarks; you must wait until the Q&A and see how he fields questions. The computation theorist Hava Siegelmann once described intelligence as “a kind of sensitivity to things.” These Turing Test programs that hold forth may produce interesting output, but they’re rigid and inflexible. They are, in other words, insensitive — occasionally fascinating talkers that cannot listen.

Christian’s article is available here, and is adapted from his forthcoming book, The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive. The New Atlantis also has two related essays worth reading: “The Trouble with the Turing Test” by Mark Halpern, which makes many similar arguments but goes more deeply into the meaning of the “intelligence” we seem to see in conversational software, and “Till Malfunction Do Us Part,” Caitrin Nicol’s superb essay on sex and marriage with robots, which features some of the same AI figures discussed in Christian’s article.
