I’m not sayin’, I’m just sayin’.

Another day, another cartoon supervillain proposal from the Oxford Uehiro “practical” “ethicists”: use biotech to lengthen criminals’ lifespans, or tinker with their minds, to make them experience greater amounts of punishment. (The proposal actually dates from August, but has been getting renewed attention owing to a recent Aeon interview with its author, Rebecca Roache.) Score one for our better angels. The original post, which opens with a real-world case of parents who horrifically abused and killed their child, uses language like this:

…[the parents] will each serve a minimum of thirty years in prison. This is the most severe punishment available in the current UK legal system. Even so, in a case like this, it seems almost laughably inadequate…. Compared to the brutality they inflicted on vulnerable and defenceless Daniel, [legally mandated humane treatment and eventual release from prison] seems like a walk in the park. What can be done about this? How can we ensure that those who commit crimes of this magnitude are sufficiently punished?…

[Using mind uploads,] the vilest criminals could serve a millennium of hard labour and return fully rehabilitated either to the real world … or, perhaps, to exile in a computer simulated world.

….research on subjective experience of duration could inform the design and management of prisons, with the worst criminals being sent to special institutions designed to ensure their sentences pass as slowly and monotonously as possible. 

The post neither raises, suggests, nor gives passing nod to a single ethical objection to these proposals. When someone on Twitter asks Ms. Roache, in response to the Aeon interview, how she could endorse these ideas, she responds, “Read the next paragraph in the interview, where I reject the idea of extending lifespan in order to inflict punishment!” Here’s that rejection:

But I soon realised it’s not that simple. In the US, for instance, the vast majority of people on death row appeal to have their sentences reduced to life imprisonment. That suggests that a quick stint in prison followed by death is seen as a worse fate than a long prison sentence. And so, if you extend the life of a prisoner to give them a longer sentence, you might end up giving them a more lenient punishment.

Oh. So, set aside the convoluted logic here (a death sentence is worse than a long prison sentence … so therefore a longer prison sentence is more lenient than a shorter one? huh?): to the marginal extent Ms. Roache is rejecting her own idea here, it’s because extending prisoners’ lives to punish them longer might be letting them off easier than putting them to death.

———

Ms. Roache — who thought up this idea, announced it, goes into great detail about the reasons we should do it and offers only cursory, practical mentions of why we shouldn’t — tries to frame this all as a mere disinterested discussion aimed at proactive hypothetical management:

It’s important to assess the ethics *before* the technology is available (which is what we’re doing).

There’s a difference between considering the ethics of an idea and endorsing it.

… people sometimes have a hard time telling the difference between considering an idea and believing in it …

I don’t endorse those punishments, but it’s good to explore the ideas (before a politician does).

What constitutes humane treatment in the context of the suggestions I have made is, of course, debatable. But I believe it is an issue worth debating.

So: rhetoric strong enough to make a gulag warden froth at the mouth amounts to merely “considering” and “exploring” and “debating” and “assessing” new punitive proposals. In response to my tweet about this…

…a colleague who doesn’t usually work in this quirky area of the futurist neighborhood asked me if I was suggesting that Ms. Roache should have just sat on this idea, hoping nobody else would ever think of it (of course, sci-fi has long since beaten her to the punch, or at least the punch’s, um, ballpark). This is, of course, a vital question, and my response was something along these lines: In the abstract, yes, potential future tech applications should be discussed in advance, particularly the more dangerous and troubling ones.

But please. How often do transhumanists, particularly the Oxford Uehiro crowd, use this move? It’s the same from doping the populace to be more moral, to shrinking people so they’ll emit less carbon, to “after-birth abortion,” and on and on: Imagine some of the most coercive and terrible things we could do with biotech, offer all the arguments for why we should and pretty much none for why we shouldn’t, make it sound like this would be technically straightforward, predictable, and controllable once a few advances are in place, and finally claim that you’re just being neutral and academically disinterested; that, like Glenn Beck on birth certificates, you’re just asking questions, because after all, someone will, and better it be us Thoughtful folks who take the lead on Managing This Responsibly, or else someone might try something crazy.

Who knows how much transhumanists buy their own line — whether this is as cynical a media ploy as it seems, or whether they’re under their own rhetorical spell. But let’s be frank about the work these discussions are really doing: how they aim to shape the parameters of discourse, and so of thought, and so of action. Like Herman Kahn and megadeath, when transhumanists claim to be responsibly shining a light on a hidden path down which we might otherwise blindly stumble, what they’re really after is focusing us so intently on this path that we forget we could still take another.

8 Comments

  1. So, you are sceptical of my claim that it's important to consider potentially ethically suspect technologies before they arrive because transhumanists (I'm not one) don't do this?!

    You're clearly suspicious that I don't take seriously the idea that potentially dangerous technology should be ethically evaluated before it arrives. In fact, I published an academic paper arguing for exactly this in 2008 ('Ethics, speculation, and values', available here: http://rebeccaroache.weebly.com/research.html), long before the current furore. So, I hardly think I can be accused of wheeling out that view as a form of whining about people's responses to the Aeon interview.

    I can't speak for the arguments of other people – this much should have been obvious – but since you refer to a paper I co-authored ('Human engineering and climate change'), I'd like to point out that we did not 'offer all the arguments for why we should [do horrible stuff with biotech]', in this case, engineer humans to curb climate change. We argued that the possibility of doing so ought to be debated, assuming we're serious about curbing climate change. That's compatible with deciding that it's a terrible idea.

    1. 1. People are not going to agree to be engineered to reduce their climate impacts (if anybody seriously proposes that), and the climate crisis is one that faces this generation — but we can meet it by reducing waste and moving to solar, nuclear and other carbon-free energy technologies.

      2. There are many ways we might use advanced technology for purposes of torture. We shouldn't.

      Can I get my job at Oxford now?

    2. Heck, forget about advanced technology, there are plenty of ways we might use rusty spoons for purposes of torture.
      It is perhaps surprising that prison systems in the United States have had access to rusty spoons for decades, and yet they seem to have relinquished the exceptional retribution-enhancing pain-infliction capabilities that this technology enables. What is even more surprising is that there does not seem to be any record of systematic ethical reasoning about the acceptability of employing rusty spoon technology for the purposes of inflicting pain on prisoners to increase the severity of punishment.
      Really before ethicists start talking about futuristic technologies, they ought to get busy catching up on the ethics of rusty (and non-rusty) spoons and other common dual-use dining implements that have so many potentially troubling (though also potentially justice-enhancing) uses in our penal system. Clearly, the ethics is lagging far behind the technology in this area.

  2. Ms. Roache: In response to your Twitter query, I hadn't replied to your comment yet because I wasn't quite sure how to, since it manages to at once completely miss the point I'm making and more or less re-illustrate the point I'm making.

    It's a real stretch to read my post as saying we should ignore the future. To reiterate: I have zero beef with a thoughtfully conducted, far-reaching discussion of hypothetical innovations in technology and ethics — one that's honest about its prior commitments and makes a good-faith attempt to represent and reflect on competing views. That, of course, is not an easy thing to do (though it's what this blog and this journal aspire to).

    No, what I'm arguing is that your discussion — both in this instance, and in the overall pattern of Oxford Uehiro writers — is at once highly slanted and thoroughly misleading about how slanted it is. So: either you naively believe that the way in which we debate the future doesn't matter — that there's nothing wrong with an ostensibly intellectual institution churning out shock headlines like a lowest-common-denominator cable news network, and then hiding behind the guise of academic disinterestedness to skirt any responsibility for its own claims — or else you're cynically exploiting the academic guise, doing advocacy and asking us to call it mere philosophy. Which is it?

    1. Isn't there another option, Ari? That Ms Roache and her colleagues genuinely believe that what they are doing is serious moral analysis of future possibles, but that they have only the vaguest and most rudimentary understanding of what such an analysis might contain. For what would a "serious" ethical discussion of these matters look like beginning from the pervasive assumption that it would involve non-rationally grounded "values" that are either culturally or individually determined? The moral framework most readily available to them has worn thin indeed. They want to play symphonic music, but all they have are kazoos.

    2. Nicely put, Charlie. I'm certainly willing to buy that "they have only the vaguest and most rudimentary understanding of what such an analysis might contain." But I continue to be in disbelief that they either really think that they themselves have no stake or opinion in this discussion, or that separating their opinion from their analysis is no more complicated than simply stating that they're separating their opinion from their analysis.

    3. I did not read your post as saying we should ignore the future. I read it as expressing scepticism that I am interested merely in ethically debating futuristic technologies, and justifying this scepticism by claiming that a group of people to which I do not belong (transhumanists) are not interested in such debate. I've just re-read your post to check whether I might have misunderstood you (as you claim), but I think it's pretty clear that this is how you're arguing.

I'm doing neither of the things you mention in the second paragraph of your most recent comment. I do not 'churn out shock headlines': I (and my co-authors) did an interview with Aeon, which was not published with a sensationalist tone. I have little control over what happens when a bunch of hacks (like those at the Daily Mail) get hold of the story. Nor am I doing advocacy. I've spent the morning writing a blog post clarifying my views about all this: http://rebeccaroache.weebly.com/1/post/2014/03/the-future-of-punishment-a-clarification.html – I hope that will help show what I'm actually working on behind the headlines.

  3. The idea of virtual Hell has a long, dark history in speculative fiction, with perhaps the most chilling depiction being Harlan Ellison's quite excellent short novella, I Have No Mouth And I Must Scream. See the recently broadcast Black Mirror / White Christmas BBC TV special for a particularly frightening twist on that old sf trope. Ms. Roache, while I appreciate your participation here, I don't think you've sufficiently addressed the substance of the charges made by Mr. Schulman and backed up with copious references to your own words. Would you care to take another crack at bat?
