While the Singularity Hub normally sticks to reporting on emerging technologies, its primary writer, Aaron Saenz, recently posted a more philosophical venture that ties nicely into the faux-caution trope of transhumanist discourse raised in our last post on Futurisms.
Mr. Saenz is (understandably) skeptical about efforts being made to ensure that advanced AI will be “friendly” to human beings. He argues that the belief that such a thing is possible is a holdover from the robot stories of Isaac Asimov. He joins a fairly large chorus of critics of Asimov’s famous “Three Laws of Robotics,” although, unlike many of them, he also seems to understand that in the robot stories Asimov himself was exploring the consequences and adequacy of the laws he had created. In any case, Mr. Saenz notes that we already make robots that, by design, violate these laws (such as military drones), and he is very dubious that an intelligence so advanced as to be capable of learning and modifying its own programming could be genuinely restrained by mere human intelligence.
That’s a powerful combination of arguments, playing one anticipated characteristic of advanced AI (self-modification) against another (assured human safety), and showing that the reality of how we use robots already does, and will continue to, trump idealistic plans for how we should use them. So why isn’t Mr. Saenz blogging for us? A couple of intriguing paragraphs tell the story.
As he is warming to his topic, Mr. Saenz provides an extended account of why he is “not worried about a robot apocalypse.” Purposefully rejecting one of the most well-known sci-fi tropes, he makes clear that he thinks that The Terminator, Battlestar Galactica, 2001, and The Matrix all got it wrong. How does he know they all got it wrong? Because these stories were not really about robots at all, but about the social anxieties of their times: “all these other villains were just modern human worries wrapped up in a shiny metal shell.”
There are a couple of problems here. First, what’s sauce for the goose is sauce for the gander: if all of these films are merely interesting as sociological artifacts, then it would only seem fair to notice that Asimov’s robot stories are “really” about race relations in the United States. But let’s let that go for now.
More interesting is the piece’s vanishing memory of itself. At least initially, advanced AI will exist in a human world, and will play whatever role it plays in relation to human purposes, hopes, and fears. But when Mr. Saenz dismisses the significance of human worries about destructive robots, he forgets his own observation that human worries are already driving us toward the creation of robots that will deliberately not be bound by anything that would prevent them from killing a human being. Every generation of robots that human beings make will, of necessity, be human worries and aspirations wrapped up in a shiny metal shell. So it is not a foolish thing to try to understand how the potential powers of robots and advanced AI might play an increasingly large role in the realm of human concerns, since human beings have a serious capacity for doing very dangerous things.
Mr. Saenz is perfectly aware of this capacity, as he indicates in his remarkable concluding thoughts:
We cannot control intelligence — it doesn’t work on humans, it certainly won’t work on machines with superior learning abilities. But just because I don’t believe in control, doesn’t mean that I’m not optimistic. Humanity has done many horrible things in the past, but it hasn’t wiped itself out yet. If machine intelligence proves to be a new form of Armageddon, I think we’ll be wise enough to walk away. If it proves to be benevolent, I think we’ll find a way to live with its unpredictability. If it proves to be all-consuming, I think we’ll find a way to become a part of it. I never bet against intelligence, even when it’s human.
Here, unfortunately, is the transhumanist magic wand in action, sprinkling optimism dust and waving away all problems. Yes, humans are capable of horrible things, but no real worry there. Why not? Because Mr. Saenz never bets against intelligence — examples of which would presumably include the intelligence that allows humans to do horrible things, and to, say, use AI to do them more effectively. And when worse comes to worst, we will “walk away” from Armageddon. Kind of like in Cormac McCarthy’s The Road, I suppose. That is not just whistling in the dark — it is whistling in the dark while wandering about with one’s eyes closed, pretending there is plenty of light.