Editor’s Note: In 2010, Mark Gubrud penned for Futurisms the widely read and debated post “Why Transhumanism Won’t Work.” With this post, we’re happy to welcome him as a regular contributor.

Okay, fair warning, this review is going to contain spoilers, lots of spoilers, because I don’t know how else to review a movie like Transcendence, which appropriates important and not so important ideas about artificial intelligence, nanotechnology, and the “uploading” of minds to machines, wads them up with familiar Hollywood tropes, and throws them all at you in one nasty spitball. I suppose I should want people to see this movie, since it does, albeit in a cartoonish way, lay out these ideas and portray them as creepy and dangerous. But I really am sure you have better things to do with your ten bucks and two hours than what I did with mine. So read my crib notes and go for a nice springtime walk instead.

Set in a near future that is recognizably the present, Transcendence introduces us to a husband-and-wife team (Johnny Depp and Rebecca Hall) that is about to make a breakthrough in artificial intelligence (AI). They live in San Francisco and are the kind of Googley couple who divide their time between their boundless competence in absolutely every facet of high technology and their love of gardening, fine wines, old-fashioned record players and, of course, each other, notwithstanding a cold lack of chemistry that foreshadows further developments.

The husband, Will Caster (get it?), is the scientist who “first wants to understand” the world, while his wife Evelyn is more the ambitious businesswoman who first wants to change it. They’ve developed a “quantum processor” that, while still talking in the flat mechanical voice of a sci-fi computer, seems close to passing the Turing test: when asked if it can prove it is self-aware, it asks the questioner if he can prove that he is. This is the script’s most mind-twisting moment, and the point is later repeated to make sure you get it.

Since quantum computing has nothing to do with artificial intelligence now or in the foreseeable future, its invocation is the first of many signs that the movie deploys technological concepts for jargon and effect rather than realism or accuracy. This is confirmed when we learn that another lab has succeeded in uploading monkey minds to computers, which would require both sufficient processing power to simulate the brain at sub-cellular levels of detail and the data to drive such a simulation. In the movie, this data is gathered by analyzing brain scans and scalp electrode recordings, which would be like reading a phone book with the naked eye from a thousand miles away. Uploading might not be physically impossible, but it would almost certainly require dissection of the brain. Moreover, as I’ve written here on Futurisms before, the meanings that transhumanists project onto the idea of uploading, in particular that it could be a way to escape mortality, are essentially magical.
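For a sense of scale, here is a rough back-of-envelope comparison. The figures below are my own illustrative assumptions (standard textbook estimates for neuron and synapse counts, a generously equipped EEG rig), not anything the movie specifies:

```python
# Back-of-envelope sketch (illustrative assumptions, not figures from the film)
# of why scalp recordings cannot supply the data a brain emulation would need.

neurons  = 8.6e10      # ~86 billion neurons in a human brain (standard estimate)
synapses = 1.0e14      # commonly cited order of magnitude for synapse count

eeg_channels = 256     # a generous high-density EEG cap
neurons_per_channel = neurons / eeg_channels
print(f"Each electrode averages over roughly {neurons_per_channel:.0e} neurons.")

# An emulation needs per-synapse parameters (connectivity, strength, dynamics);
# assume a bare-minimum few bytes for each.
bytes_per_synapse = 4
structural_data = synapses * bytes_per_synapse
print(f"Minimum structural data: ~{structural_data:.0e} bytes "
      f"(hundreds of terabytes, before any sub-cellular detail).")

# No surface recording recovers that; the information never reaches the scalp,
# which is the point of the phone-book-at-a-thousand-miles analogy.
```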

Later, at a TED-like public presentation, Will is shot by an anti-technology terrorist, a member of a group that simultaneously attacks AI labs around the world, and later turns out to be led by a young woman (Kate Mara) who formerly interned in the monkey-uploading lab. Evading the FBI, DHS, and NSA, this disenchanted tough cookie has managed to put together a global network of super-competent tattooed anarchists who all take direct orders from her, no general assembly needed.

Our hero (so far, anyway) survives his bullet wound, but he’s been poisoned and has a month to live. He decides to give up his work and stay home with Evelyn, the only person who’s ever meant anything to him. She has other ideas: time for the mad scientist secret laboratory! Evelyn steals “quantum cores” from the AI lab and sets up shop in an abandoned schoolhouse. Working from the notes of the unfortunate monkey-uploading scientist, himself killed in the anarchist attack, she races against time to upload Will. Finally, Will dies, and a moment of suspense … did the uploading work … well, whaddya think?

No sooner has cyber-Will woken up on the digital side of the great divide than it sets about rewriting its own source code, thus instantiating one of the tech cult’s tropes: the self-improving AI that transcends human intelligence so rapidly that nobody can control it. In the usual telling, there is no way to cage such a beast, or even pull its plug, since it soon becomes so smart that it can figure out how to talk you out of doing so. In this case, the last person in a position to pull the plug is Evelyn, and of course she won’t because she believes it’s her beloved Will. Instead, she helps it escape onto the Internet, just in time before the terrorists arrive to inflict the fate of all mad-scientist labs.

Once loose on the Web — apparently those quantum cores weren’t essential after all — cyber-Will sets about to commandeer every surveillance camera on the net, and the FBI’s own computers, to help the feds take down the anarchists. Overnight, it also makes millions on high-speed trading, the money to be used to build a massive underground Evil Corporate Lab outside an economically devastated town out in the desert. There, cyber-Will sets about to develop cartoon nanotechnology and figure out how to sustain its marriage to Evelyn without making use, so far as we are privileged to see, of any of the gadgets advertised on futureofsex.net (NSFW, of course). Oh, but they are still very much in love, as we can see because the same old sofa is there, the same old glass of wine, the same old phonograph playing the same old song. And the bot bids her a tender good night as she slips between the sheets and off into her nightmares (got that right).

While she sleeps, cyber-Will is busy at a hundred robot workstations perfecting “nanites” that can “rebuild any material,” as well as make the lame walk and the blind see. By the time the terrorists and their new-made allies, the FBI (yes, they team up), arrive to attack the solar panels that power the underground complex, cyber-Will has gained the capability to bring the dead back to life — and, optionally, turn them into cyborgs directly controlled by cyber-Will. This enables the filmmakers to roll out a few Zombie Attack scenes featuring the underclass townies, who by now don’t stay dead when you knock them over with high-caliber bullets. It also suggests a solution to cyber-Will’s unique version of the two-body problem, but Evelyn balks when the ruggedly handsome construction boss she hired in town shows her his new Borg patch, looks her in the eyes, and tells her “It’s me — I can touch you now.”

So what about these nanites? It might be said that at this point we are so far from known science that technical criticism is pointless, but nanotechnology is a very real and broad frontier, and even Eric Drexler’s visionary ideas, from which the movie’s “nanites” are derived, have withstood decades of incredulity, scorn, and the odd technical critique. In his books Engines of Creation and Nanosystems, Drexler proposed microscopic robots that could be programmed to reconfigure matter one molecule at a time — including creating copies of themselves — and be arrayed in factories to crank out products both tiny and massive, to atomic perfection. Since this vision was first popularized in the 1980s, we have made a great deal of progress in the art of building moderately complex nanoscale structures in a variety of materials, but we are still far from realizing Drexler’s vision of fantastically complex self-replicating systems — other than as natural, genetically modified, and now synthetic life.

Life is often cited as an “existence proof” for nanobots, but life is subject to some familiar constraints. If physics and biology permitted flesh to repair itself instantly following a massive trauma, evolution would likely have already made us the nearly unstoppable monsters portrayed in the movie, instead of what we are: creatures whose wounds do heal, but imperfectly, over days, weeks, and months, and only if we don’t die first of organ failure, blood loss, or infection. Not even Drexlerian nanomedicine theorist Robert Freitas would back Transcendence’s CGI nanites coursing through flesh and repairing it in movie time; for one thing, such a process would require an energy source, and the heat produced would cook the surrounding tissue. The idea that nonbiological robots would directly rearrange the molecules of living organisms has always been the weakest thread of the Drexlerian narrative; while future medicine is likely to be greatly enabled by nanotechnology, it is also likely to remain essentially biological.
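A crude estimate shows why. All the numbers below are my own illustrative assumptions (a biosynthesis cost of a few kilojoules per gram, tissue heat capacity near that of water, a 100-gram wound), not figures from Freitas or from the film:

```python
# Crude thermal estimate (my own illustrative assumptions) of "movie time"
# tissue repair by nanites.

synthesis_cost = 3000.0   # J per gram of tissue rebuilt; biosynthesis plausibly costs a few kJ/g
heat_capacity  = 3.5      # J/(g*K), roughly soft tissue (mostly water)
efficiency     = 0.5      # assume half the input energy is lost locally as heat

wound_mass_g = 100.0      # rebuilding 100 g of tissue: serious, but not absurd
waste_heat   = wound_mass_g * synthesis_cost * (1 - efficiency)

# If the repair takes seconds, blood flow and conduction have no time to carry
# the heat away, so treat the wound as adiabatic:
delta_T = waste_heat / (wound_mass_g * heat_capacity)
print(f"Waste heat ~{waste_heat/1000:.0f} kJ, temperature rise ~{delta_T:.0f} K")
# Hundreds of kelvin of heating: a couple of degrees is a fever, ten is fatal.
```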

The movie also shows us silvery blobs of nano magic that mysteriously float into the sky like Dr. Seuss’s oobleck in reverse, broadcasting Will (now you get it) to the entire earth as rainwater. It might look like you could stick a fork in humanity at this point, but wouldn’t you know, there’s one trick left that can take out the nanites, the zombies, the underground superdupersupercomputer, the Internet, and all digital technology in one fell swoop. What is it? A computer virus! But in order to deliver it, Evelyn must sacrifice herself and get cyber-Will — by now employing a fully, physically reconstituted Johnny Depp clone as its avatar — to sacrifice itself … for love. As the two lie down to die together on their San Francisco brass-knob bed, deep in the collapsing underground complex, and the camera lingers on their embraced corpses, it becomes clear that if there’s one thing this muddled movie is, above all else, it’s a horror show.

Oh, but these were nice people, if a bit misguided, and we don’t mean to suggest that technology is actually irredeemably evil. Happily, in the epilogue, the world has been returned to an unplugged, powered-off state where bicycles are bartered, computers are used as doorstops, and somehow everybody isn’t starving to death. It turns out that the spirits of Will and Evelyn live on in some nanites that still inhabit the little garden in back of their house, rainwater dripping from a flower. It really was all for love, you see.

This ending is nice and all, but the sentimentality undermines the movie’s seriousness about artificial intelligence and the existential crisis it creates for humanity.

Evelyn’s mistake was to believe, in her grief, that the “upload” was actually Will, as if his soul were something that could be separated from his body and transferred to a machine — and not even to a particular machine, but to software that could be copied and that could move out into the Internet and install itself on other machines.

The fallacy might have been a bit too obvious had the upload started working before Will’s death, instead of just after it. It would have been even more troubling if cyber-Will had acted to hasten human Will’s demise — or induced Evelyn to do so.

Instead, by obeying the laws of dramatic continuity, the script suggests that Will, the true Will, i.e. Will’s consciousness, his mind, his atman, his soul, has actually been transferred. In fact, the end of the movie asks us to accept that the dying Will is the same as the original, even though this “Will” has been cloned and programmed with software that was only a simulation of the original and has since rewritten itself and evolved far beyond human intelligence.

We are even told that the nanites in the garden pool are the embodied spirits of Will and Evelyn. What was Evelyn’s mistake, then, if that can be true? Arrogance, trying to play God and cheat Death, perhaps — which is consistent with the horror-movie genre, but not very compelling to the twenty-first-century mind. We need stronger reasons for agreeing to accept mortality. In one scene, the pert terrorist says that cutting a cyborg off from the collective and letting him die means “We gave him back his humanity.” That’s more profound, actually, but a lot of people might want to pawn their humanity if it meant they could avoid dying.

In another scene, we are told that the essential flaw of machine intelligence is that it necessarily lacks emotion and the ability to cope with contradictions. That’s pat and dangerous nonsense. Emotional robotics is today an active area of research, from the reading and interpretation of human emotional states, to the simulation of emotion in social interaction with humans, to architectures in which behavior is regulated by internal states analogous to human and animal emotion. There is no good reason to think that this effort must fail even if AI in general succeeds. But there are good reasons to think that emotional robots are a bad idea.
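To make that third strand concrete, here is a deliberately minimal toy sketch of behavior regulated by internal states analogous to emotion. The state variables, update rules, and thresholds are all invented for illustration and correspond to no particular research system:

```python
# Toy sketch (not any specific research architecture) of behavior regulated by
# internal states loosely analogous to emotion.

from dataclasses import dataclass

@dataclass
class Affect:
    fear: float = 0.0          # rises with detected threat, decays over time
    frustration: float = 0.0   # rises with repeated task failure

class Robot:
    def __init__(self) -> None:
        self.affect = Affect()

    def sense(self, threat_level: float, task_failed: bool) -> None:
        # Update the internal "emotional" state from events in the world.
        self.affect.fear = 0.9 * self.affect.fear + 0.5 * threat_level
        self.affect.frustration += 0.3 if task_failed else -0.1
        self.affect.frustration = max(0.0, self.affect.frustration)

    def act(self) -> str:
        # The internal state biases action selection rather than dictating it.
        if self.affect.fear > 0.8:
            return "retreat"
        if self.affect.frustration > 1.0:
            return "ask a human for help"
        return "continue task"

robot = Robot()
robot.sense(threat_level=0.2, task_failed=True)
print(robot.act())   # -> "continue task"
```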

Emotion is not a good substitute for reason when reason is possible. Of course, reason isn’t always possible. Life does encompass contradictions, and we are compelled to make decisions based on incomplete knowledge. We have to weigh values and make choices, often intuitively factoring in what we don’t fully understand. People use emotion to do this, but it is probably better if we don’t let machines do it at all. If we set machines up to make choices for us, we will likely get what we deserve.

Transcendence introduces movie audiences, assuming they only watch movies, to key ideas of transhumanism, some of which have implications for the real world. Its emphasis on horror and peril is a welcome antidote to Hollywood movies that have dealt with the same material less directly and more enthusiastically. But it does not deepen anybody’s understanding of these ideas or how we should respond to them. Its treatment of the issues is as muddled and schizophrenic as its script. But it’s unlikely to be the last movie to deal with these themes — so save your ticket money.

8 Comments

  1. >>"…Once loose on the Web — apparently those quantum cores weren’t essential after all — cyber-Will sets about to…"

    Cyber-will (get it) .. ok, but seriously, moving on…
    Why think of the future transcendence of the human mind as a mere copy of the biological wet-ware of the 20th century?
    ‘Evolution of consciousness’ – I prefer exploring that idea to today’s primitive thinking practiced by some “futurists,” which leads them to proclaim that uploaded minds are “mere copies.”

    But yet… there’s still hope then for a Hard science film to be made on how narrow AI might transcend to full blown AI or Artificial General Intelligence (AGI).

    I think we’re giving people far too little credit in this day of information accessibility and tech knowledge.
    Films like HER, too, didn’t explain the “how” – just the “What” and the What-if.
    After all, A.I. and Electric Dreams and The Lawnmower Man were already made before.
    It’s time to make a film with more low-level (borrowing computer terminology) science.

  2. Mark – thanks for a very thoughtful piece. I came in expecting a polemic against transhumanism in general, but you acknowledge the possibilities of the technology (uploading, "dry" nanobots, emotional AI) while still warning against some of the implications.

    Question:

    I'm still unclear why "emotional robots are a bad idea". Emotions in humans are simply conscious perceptions of hidden underlying logic in the brain (in other words, anger isn't magic, it's an unconscious calculation by the brain which then triggers adrenaline, aggressiveness, etc.). Emotion can often lead to great outcomes – empathy, tenderness, affection. Why wouldn't we want AI to have the same kinds of calculations operating deep in the code that lead to AI perceptions of empathy, etc?

    1. Christopher – Thanks for your thoughtful comment, which I can't fully respond to here.

      I am opposed to transhumanism as a philosophy that seeks to replace humanism and proposes the replacement of humanity by technology. I agree that many of the technical concepts may be possible, although transhumanists tend to oversimplify them and expect them to be realizable far sooner and perform better than I believe is likely.

      As I stated in the movie review, I think that reason is a better basis for decision making than emotion when reason is possible. Basically, if a robot can't make a decision based on reason proceeding from goals and rules laid down to it by humans, I don't want it making the decision. The principles of human control, responsibility, dignity and sovereignty must be upheld. If you ask why, you may as well ask, Why live?

      I also don't want people to relate to robots as if they were human. Any apparent emotional behavior by robots is likely to be a deception, and even if it could be authentic, it would falsely suggest that the robot is in some way human when in fact it simply is not human.

      I regard the transhumanist technical visions as a potential acid bath for our species. I want humanity to continue. It is essential that we recognize the danger and not allow ourselves to be seduced by false appearances and our own tendency to project humanity onto inanimate objects.

      We need to regard robots, AI, and all technology as tools, and we need to design them and their appearances accordingly, so that we don't forget that we are the sovereigns, we made these things for our use, and we have the responsibility to retain our dignity, to make the decisions and stay in control.

    2. Mark,

      I appreciate the time you took to write a detailed response to my question. Thanks!

      I agree that transhumanists probably oversimplify thorny scientific and engineering problems like mind uploading or creating "emotional responses" from machine intelligence. Accelerating computational capabilities and an increasingly real-time global network of knowledge will probably accelerate the pace of discovery in key disciplines like neuroscience. Nonetheless, history shows that futurists are almost always overly optimistic about when these technologies will arrive.

      However, arrive they will, and almost certainly in our lifetimes. The concerns you’re raising aren’t going to be science fiction for much longer, so you’re doing all of us a real service by asking these questions now.

      On the specific question of “emotional robots”, it seems you're making two arguments: 1) That robots should only be put into a position of making decisions if we can follow the chain of logical reasoning and 2) Allowing robots that trigger human affective responses will cause confusion and dangerously blur the line between human and nonhuman.

      On the first argument, I think the horse is already out of the proverbial barn. When the designers of Watson were asked how it made its decisions in the famous 2011 Jeopardy game, they candidly admitted they didn’t know. It was using multiple systems simultaneously and applying Bayesian logic in a combination that was opaque to the outside. We’re already in the black box era for AI decision-making. Adding logic in the black box that approximates human emotional reasoning (reading facial expressions, recognizing pain, adjusting behavior accordingly) doesn’t cross a new line.

      On the second argument, I’m very sympathetic. We’re hard-wired to respond to the appearance of emotions, whether or not there’s any mental life underlying them. Yet where do we draw the line in barring emotional responses in our robotic companions? Should we ban even the basic simulation of emotion in caregiving assistants for nursing homes, even if that will make the patient more comfortable and at ease? Should we ban AI agents from having facial expressions that we can read and respond to (imagine a visual version of Siri)? How would that be enforced?

      Chris

  3. Chris,

    Thanks for the thoughtful conversation. You have restated some of my points perhaps more clearly than I stated them in the first place.

    In general, I am not impressed by "horse is out of the barn" arguments. If need be, we can often get the horse back into the barn, the genie back into the bottle, and even the toothpaste back into the tube (really, I can show you how to do it). In this particular case, your point is just that Watson is executing an algorithm which is too complex for humans to follow… but of course, if desired, one could trace through its steps.

    Lots of complicated machinery that people have used for a long time, and plants and animals as well, work in ways that people do not understand. But when we allow machines to make decisions for us, then we are not making our own decisions. Obviously I don't mean trivial decisions, I mean the ones that are (or are supposed to be) difficult for us. Those decisions are our responsibility to make, and if we delegate that responsibility to machines we are in peril.

    I would regard the simulation of emotion for the comfort of patients in nursing homes, or of children, or others whom our absurdly competitive and needlessly time-taxed society regards as not sufficiently important to command actual attention, as a kind of postmodern horror. We are currently suffering from high unemployment and underemployment, and only a small fraction of those employed are performing actually needed productive work. I say, let the robots do the jobs nobody would choose to do anyway, and let’s reward (e.g. pay) people for taking care of others when they are in need.

    It's not that I would ban emotional robotics; we need to move beyond this sterile dialogue about individual freedom vs. repressive government. What we need is a deeper revolution through which people will come to understand the reason why we must cling to our humanity (it's all we have; if we lose it, we lose everything) and become immune to the seduction of imitation love.

    Mark

  4. Mark,

    I don't really agree with your characterisation of transhumanism as a philosophy that seeks to replace humanism and proposes the replacement of humanity with technology. Especially the last part of that definition is something that I think few if any transhumanists would subscribe to. Are you sure it's really transhumanism you're opposed to and not a caricature of it?

    Peter

  5. Transhumanism seems (to me) to be an evolutionary process accelerated by technologies that will be available in the future.

    Mr. Kurzweil says we are already underway, because we can hold a measure of intelligence in our hands (smart phones) and by other means.

    But the point Christopher made about Watson:

    [i]"When the designers of Watson were asked how it made its decisions in the famous 2011 Jeopardy game, they candidly admitted they didn’t know. It was using multiple systems simultaneously and applying Bayesian logic in a combination that was opaque to the outside. We’re already in the black box era for AI decision-making. "[/i]

    Tells me something interesting.

    Perhaps humans will not be able to complete the "transhuman process" ourselves, because we simply lack the understanding to do so and will always lack the understanding to do so (a "limits to human intelligence and capabilities" hypothesis).

    If future AI does what Watson does at some advanced level, and we cannot figure out "why" or "how" it does that … wouldn't that be a new species evolving from humans, rather than humans evolving themselves?

    It just feels to me like there is a "bridge we cannot cross" here, because of our limitations.

    In short, maybe we'll just remain human, but will be responsible for aiding the evolution of a new technological species?
