À la XKCD, several recent posts here on Futurisms have stirred up some lively debate in comment threads. In case you missed the action, the “Transhumanist Resentment Watch” has led to a deeper exploration of some of this blog’s major themes — resentment, disease, and normalcy. A post on magic pills has sparked a discussion on medical economics. The question of libertarian enhancement continues to bounce back and forth. And my rather innocuous posting of an Isaac Asimov story has led to tangents on hedonism and accomplishment.

7 Comments

  1. It is YOU that is wrong, not me.

    I'm not trying to impose my beliefs on you. It is you who is trying to impose your beliefs on me. That, by definition, makes you the aggressor, not me. By default, this means that you are wrong and I'm right.

  2. kurt9, forgive my inability to read sarcasm without the aid of visual/aural cues in meatspace, but you are joking, right?

    Just in case you're not: A blog has got to be the most inefficient means known to man of imposing views on others. And the most charitable interpretation I can come up with of the rest of your post is that you're claiming that the burden of proof lies with those who initiate the line of questioning. This is reasonable enough (in the absence of other criteria at least), but a far cry from pre-establishing the outcome of said dialogue.

  3. You seem tremendously unaware of the degree to which everything you say has already been considered and answered by transhumanists. You give the impression of having read nothing except Kurzweil, and to plead that Kurzweil is the most popular author is sort of like reading a nontechnical popularization of quantum mechanics, and then pleading that you've read the most popular such book. Actually worse than that, since many transhumanists, possibly a majority, disagree vehemently with Kurzweil on many points.

    Where's the engagement with Bostrom, or Hanson, or the other serious academic transhumanists?

    Have you even read the Fun Theory Sequence on Less Wrong, in which practically all your questions and criticisms are thoroughly considered? Or have you read it, and are now just pretending that it doesn't exist, because it's more convenient for you that way?

    I suppose it's plausible enough that you've never read anything except Kurzweil, but you may consider yourself as having been put on notice that this is not excusable.

To your other readers, I can only go on saying that if Ari continues in this vein, you should be aware that everything he says has long since been considered and taken into account, and that his presentation of transhumanists as blithely unaware of these points is either a straw argument or simple ignorance. Many of the considerations he presents are not only considered, but taken as major motivations for plans and actions – Ari seems to think that transhumanists just dismiss such questions, or have never thought of them.

    If you're looking for simple summaries, I'll go ahead and repeat that virtually everything Ari says about transhumanism is dealt with in Fun Theory, and you can just look it up in the appropriate section.

    Hopefully some of you will take on the habit of looking up, and posting the URLs to, the standard replies to whatever Ari says. If Ari's not going to engage, then the best we can do is probably repeat in every post that there are standard replies and refer the readers to those standard replies, so that they can see for themselves that Ari's thoughts are far from unconsidered and that there are thorough replies to them that Ari is saying nothing about.

  4. Brian,

    I agree.

    The rest of you,

I had a thought. Let's say, for the sake of argument, that all of this mind emulation/uploading/AI stuff actually gets developed in the next few decades. All of the transhumanists and other techno-geeks then create these computer networks, either underground or in space, and end up uploading themselves and going off into their own cyberspace universe. Everyone else would be left to live their lives in the "real" world, free of the influence of all of those pesky transhumanists. In other words, everyone gets what they want.

    I would think this scenario would be desired by all.

  5. Dear Mr. Yudkowsky –

    Goodness, what an outburst! On behalf of the Futurisms bloggers, let me take a moment to jot off a quick response. Please consider this a reply not just to your comment on the post above, but also to a few of the other unhappy comments (like this and this) that you have contributed in the past few hours.

First, Futurisms is a blog. When we point out an interesting news item or write about something that catches our attention, we won’t take the time to give a complete history of the subject we’re writing about. That’s not what blogs are for. We don’t pretend that posts here are scholarly articles, legal briefs, or encyclopedia entries. Our purpose is simply to offer some brief commentary — a basic glimpse for our readers of some of the larger issues raised by the subjects we cover.

    Second, we are well aware that there are transhumanist arguments about many of the subjects we discuss on this blog, even though we don’t always mention those arguments. To be sure, we are not as well versed as you in the archives and arcana of transhumanist writing. That is why we welcome from our commenters and correspondents suggestions for readings we might not know about. This blog is not yet even six weeks old. In the months to come, we will engage with many more writers and arguments — including some that you like and some that you don’t.

Third, there is another reason we sometimes choose not to link to transhumanist writing: a lot of it is just plain bad writing. Not all of it, to be sure, but a lot. If you’ll forgive the bluntness, much of your own writing fits in this category. While I can’t claim to have worked through your entire oeuvre, those of your writings that I have read are long-winded and repetitive. Your writing style is also self-referential to an almost comical degree — evidence of the very high regard in which you hold yourself. (That self-regard may explain your frustration with the fact that we have not yet committed to memory all the arguments in your ‘Fun Theory’ manifesto.)

    Relatedly, much transhumanist writing is extremely shallow. Here, again, I will use you as an example. So proud are you of your proleptic abilities that you claim in one comment here to have anticipated “every last dot and jot and tittle” of one of Ari’s posts. As evidence, you linked to two posts on your own site — each as tedious and labyrinthine as the 3 a.m. pontifications of a tipsy philosophy undergraduate, and quite irrelevant to what Ari was discussing. And might I add that it is passing strange that you would criticize us for ignoring the corpus of transhumanist writings when transhumanists so often write in a vacuum, without acknowledging the vast extant literature on human nature and ethics? Even in your own writings on human rationalism, you seem to have had very little to say about the many great philosophers in the Western tradition who have written on the subject for millennia before you.

    At any rate, we intend to read more of the writings of the transhumanist community and to respond to its arguments. You are welcome to post comments here, Mr. Yudkowsky, so long as your comments remain civil. We will be happy to engage with you if you have something thoughtful to contribute, even in disagreement. But please understand that it is hard to take you seriously when you claim, quite wrongly, that transhumanists have already answered all conceivable objections to their views; when you do so, moreover, with almost no engagement with either the traditions or the contemporary critics that actually have offered sophisticated critiques of your views; when you petulantly complain that we have not already imbibed every droplet of wisdom on your website; and when you resort to condescension and schoolyard insults.

    Yours,
    Adam Keiper

  6. Really? So I'm too long-winded, especially compared to your own writing? I'm too insulting, as compared to the grave courtesy and charity with which you treat transhumanists?

    Okay. Here's the short version: The 31 Laws of Fun. Here you'll find the quick reference, so that when, say, someone talks about how transhumanists want to make our lives easier and easier until no one has anything worth doing, you can read down the list until you come to, say:

    "3. Making a video game easier doesn't always improve it. The same holds true of a life. Think in terms of clearing out low-quality drudgery to make way for high-quality challenge, rather than eliminating work. (High Challenge.)"

    Then you know to click through to High Challenge and read it to find out why transhumanists think this issue is important, and what one transhumanist wants to do about it.

Let the informed reader simply read and judge. And I hope that some of them take up the habit of posting links to the existing transhumanist replies, so that casual visitors to your site know about what you're not discussing.

    If you think that High Challenge isn't on-topic, go ahead and say it outright, so that the fair reader may judge whether it is so.

    I'm willing to extend Ari some benefit of the doubt, since he shows some signs of analytic intelligence – his depiction of the problems matches a lot of the work I did on Fun Theory, though of course he dwells on how impossible the problems are, rather than trying to solve them. But benefit of doubt is not unlimited, and your lot has a rather obvious motive to pretend the most sophisticated transhumanist writing doesn't exist. The Fun Theory sequence exactly addresses at least half of what Ari has been talking about. If he doesn't engage, doubt runs out.

  7. "Have you even read the Fun Theory Sequence on Less Wrong, in which practically all your questions and criticisms are thoroughly considered? Or have you read it, and are now just pretending that it doesn't exist, because it's more convenient for you that way?"

    No, I for one have never read it. I've never read Dianetics by L. Ron Hubbard either.

Comments are closed.