Over on his blog Accelerating Future, Michael Anissimov has a few criticisms of our blog. Or at least, a blog sharing our blog’s name; he gets so many things wrong that it seems almost as though he’s describing some other blog. And Mr. Anissimov’s comments beneath his own post range from ill-informed and ill-reasoned to ill-mannered and practically illiterate. They are beneath response — except to note that Mr. Anissimov should know better. But putting aside those comments and the elementary errors that were likely the result of his general carelessness in argument — like misattributing to Charlie something that I wrote — some of the broader strokes of Mr. Anissimov’s ignorant and crude post deserve notice.
First, Mr. Anissimov’s post is intellectually lazy. To label an argument “religious” or “irreligious” does not amount to a refutation. Nor can you refute an argument by claiming to expose the belief structures that undergird it.
Second, Mr. Anissimov’s post is intellectually dishonest. He approvingly quotes an article that claims that “all prominent anti-transhumanists — [Francis] Fukuyama, [Leon] Kass, [and Bill] McKibben — are religious.” But anyone who has read those three thinkers’ books and essays will know that they make only publicly accessible arguments that do not rely upon or even invoke religion. And more to the point, it is an indisputable matter of public fact that none of us here at Futurisms has made the arguments that Mr. Anissimov is imputing to us. None of us has ever argued that we object to transhumanism because “through suffering [we] will enter paradise after [we] are dead.” Not even close.
Once Mr. Anissimov has (falsely) established that those of us who disagree with him do so for religious reasons, he claims that we “want the same damn thing” that he wants. Except that while he wants to achieve immortality through science, his critics “think they can get it through magic.”
To the contrary, our arguments have in fact been humanistic and what you might call earthly — hardly magical thinking or appeals to paradise. The very distinction between humanists and transhumanists should make plain whose beliefs are grounded in earthly affairs and whose instead depend on appeals to fantasy. We are skeptical of transhumanist promises of paradise because their arguments are, by and large, based on faith and fantasy instead of reason and fact; because what they hope to deliver would likely be something quite other than paradise if it became reality; and because the promise of paradise can be used to justify things that ought not be tolerated.
It is too much to ask for Mr. Anissimov to be a charitable reader of our arguments, but if he wants to be taken seriously he should make an effort to seem capable of at least comprehending them. Until he does, it is a peculiar irony that a transhumanist would invoke religion in order to avoid engaging in a substantive debate with his critics.


  1. I challenge you to name a single bioconservative paper of any sort that engages with advanced transhumanist writing of any sort. "Advanced" means on the order of Nick Bostrom, Robin Hanson, Max More, or Eliezer Yudkowsky, not popular transhumanist writers or their books (and specifically not Ray Kurzweil).

  2. I am willing to have a serious debate, but I'd need to know more precisely what to respond to. I'll read some of the papers linked on the right hand column.

    My post doesn't amount to a refutation, sure. A refutation would take more time, which I am willing to give. I disagree, though, that religion doesn't play deeply into our bioethical evaluations even when it isn't specifically invoked. Our religious position deeply influences the way we think about all things, what we assign significance to, and how we choose to go about achieving goals.

    I am willing to grant your arguments a charitable reading. I haven't read any religious material here, so in my closing comments, I was actually referring just to all religious bioethics folks, not specifically you guys, though I assume you are religious.

    Great points about paradise. Part of the reason my post was just a general complaining about religion is because I didn't read anything here yet that really stuck in my mind strongly. I am subscribed to this blog though, so feel free to argue away!

  3. I'm reading various papers on this website (I have before but I'm trying to read more this time), and I'm trying to understand what it is you guys are all going for. Is it a ban on all enhancement, or just careful regulation of enhancement? I am in favor of some version of the latter, personally.

  4. It seems as though the main problem with the 'debate' between bioconservatives and transhumanists is that issues that are in principle pragmatic are too often given a moralistic twist.
    Take the issue of 'cyborg rights'. Critics of the transhumanist view of the future of artificial intelligence, cyborg citizenship, etc. etc. are basically making the point that strong AI projects that attempt to duplicate the relevant features of the human mind are not feasible. This is a pragmatic, technical point concerning the complexity and difficulty of such technological undertakings, but the transhumanists call it 'human racism', in an attempt to make it a moral issue.
    I hope that the transhumanists who talk like this are doing so for rhetorical purposes, because equating doubts concerning artificial intelligence with the moral deficiency of racism is really quite bizarre. Human racism in the sense animal rights advocates use it is at least within the ballpark of actual reality, but claiming that the 'assumption' that cyborgs aren't people is racist is taking a bit of a strange walk outside the real world of moral judgments. It is not an assumption to believe cyborgs aren't people, because there are no cyborgs. There really cannot be a comparison between the belief that another actual human being is not a person, and the belief that entities that do not as yet (and may very well never) exist are not persons. If what I just said sounded strange, that's because it is strange to be talking about our moral obligations to possible beings. It is a strange leap from reality to assume that there is a moral obligation to acknowledge the possibility of certain beings; certainly it is strange that one would have a moral obligation to the possible beings themselves. Also, please do not imagine that this has anything to do with the moral obligations we owe to posterity. That is not the same issue, at all.

    Bioconservatives are also guilty of moralizing practical issues from time to time. Take the issue of radical life extension. It seems that there is some temptation to look for moral quandaries in the extension of life that probably aren't there. Having a longer life is basically a good thing; the most relevant questions regarding such schemes concern their feasibility, not their moral consequences.

  5. Mr. Yudkowsky – As you have heard us say before, we will be engaging with a wide range of transhumanist writers here on Futurisms. But you are not the arbiter of which ones are serious and which are silly. And as much as you would like us to ignore Ray Kurzweil, we don’t intend to: judging by book sales, he is more popularly known and respected than any of the others you name, and judging by his treatment at the most recent Singularity Summit, even the cognoscenti think he deserves a place of prominence.

    As for your “challenge” to us, you might consider starting right here. Our co-blogger Charles Rubin has “engaged with advanced transhumanist writing” repeatedly in the pages of The New Atlantis (see here and here, for example). In the 2008 book Medical Enhancement and Posthumanity, he has a chapter in which he takes on More, Bostrom, David Pearce, and others (it’s available here). He also has a chapter on transhumanism in Human Dignity and Bioethics, a 2008 book published by the President’s Council on Bioethics. Bostrom also wrote a chapter for that book, and Charlie Rubin then was given a few pages to respond to Bostrom’s essay. (Bostrom was offered the opportunity to respond to Rubin but declined.)

    Mr. Anissimov – Here on Futurisms we try to make arguments based on and defended by publicly accessible reasoning; they do not depend on private revelation. When you try to suggest that secret religious motivations underlie our arguments, you are being evasive. The same can be said of your approach to the other thinkers whose religious beliefs you crudely try to suss out in the comments beneath your blog post.

  6. "'Advanced' means on the order of Nick Bostrom, Robin Hanson, Max More, or Eliezer Yudkowsky…."

    -Eliezer Yudkowsky

    I think this speaks for itself.

Comments are closed.