Mark Walker recently wrote an interesting piece over at The Global Spiral suggesting that when it comes to preventing the extinction of civilization, transhumanism is the best of the bad options we have. He frames the problem in a familiar way: the democratization of existential risks. As things are going now, more and more people will become capable of doing greater and greater harm, particularly via biotechnology. But if business as usual is in effect the problem, relinquishment of the knowledge and tools to do such harm would require draconian measures that hardly seem plausible. Transhumanism, while risky, is less risky than either of these courses of action because “posthumans probably won’t have much more capacity for evil than we have, or are likely to have shortly.” That is to say, once you can already destroy civilization, how much worse can it get? Creating beings who are “smarter and more virtuous than we are” has a greater chance for an upside, as “the brightest and most virtuous” would be “the best candidates amongst us to lead civilization through such perilous times.”
At one level, Walker’s essay might appear as mere tautology. If the transhumanist project works out as advertised (smarter and more virtuous beings), then the transhumanist project will have worked out as advertised (smarter and more virtuous beings will do smarter and more virtuous things). But more interestingly, Walker nicely encapsulates a number of issues that transhumanists regularly seek to avoid thinking seriously about. For example:
1) What is the relationship between human and posthuman civilization? If proponents of “the Singularity” are correct, then the rise of posthumans would likely be just another way of destroying human civilization. Our civilization will not be “led through perilous times”; it will be replaced by something new and radically different. One could say that at least then human civilization would have led to something better, rather than simply lying in ruins. But then the next question arises.
2) What makes Walker think that posthuman wisdom and virtue will look like wisdom and virtue to humans? Leaving aside the fact that humans already don’t always agree about what virtue is, we call certain things virtues because we are the kinds of beings we are. By definition, posthumans will be different kinds of beings. At the very least, why should we expect that we will understand their beneficent intent as such any better than my cat understands that I am doing her a favor by not feeding her as much as she would like?
3) Walker suggests we have “almost hit the wall in our capacity for evil.” I hope he is right, but I fear he simply lacks imagination. The existing trajectory of neuroscience, to say nothing of how it might be redirected by deliberate efforts to create posthumans, seems to me to open exciting new avenues for pain and degradation along with its helping hand. But be that as it may, I wonder if “destruction of human civilization” is really as bad as it gets. As is clear from discussions that have taken place on Futurisms, for some transhumanists that would hardly be enough: nature itself will have to come under the knife. That kind of deliberate ambition makes an accidental oil spill, or knocking down a few redwood groves, look like shoplifting from a dollar store.
So: human beings have made a hash of things, but since we can imagine godlike beings who might save us we should go ahead and try to create them. We might make a hash of that project, but doing anything else would be as bad or worse. That’s what you call doubling down.