I had a really wonderful time in Cambridge the other night talking with Adam Roberts, Francis Spufford, and Rowan Williams about Adam’s novel The Thing Itself and related matters. But it turns out that there are a great many related matters, so since we parted I can’t stop thinking about all the issues I wish we had had time to explore. So I’m going to list a few thoughts here, in no particular order, and in undeveloped form. There may be fodder here for later reflections.

  • We all embarrassed Adam by praising his book, but after having re-read it in preparation for this event I am all the more convinced that it is a superb achievement and worthy of winning any book prize that it is eligible for (including the Campbell Award, for which it has just been nominated).
  • But even having just re-read it, and despite being (if I do say so myself) a relatively acute reader, I missed a lot. Adam explained the other night a few of the ways the novel’s structure corresponds to the twelve books of the Aeneid, which as it happens he and I have just been talking about, and now that I’ve been alerted to the possible parallels I see several others. And they’re genuinely fascinating.
  • Suppose scientists were to build a computer that, in their view, achieved genuine intelligence, an intelligence that by any measure we have is beyond ours, and that computer said, “There is something beyond space and time that conditions space and time. Not something within what we call being but the very Ground of Being itself. One might call it God.” What would happen then? Would our scientists say, “Hmmm. Maybe we had better rethink this whole atheism business”? Or would they say, “All programs have bugs, of course, and we’ll fix this one in the next iteration”?
  • Suppose that scientists came to believe that the AI is at the very least trustworthy, if not necessarily infallible, and that its announcement should be taken seriously. Suppose that the AI went on to say, “This Ground of Being is neither inert nor passive: it is comprehensively active throughout the known universe(s), and the mode of that activity is best described as Love.” What would we do with that news? Would there be some way to tease out from the AI what it thinks Love is? Might we ever be confident that a machine’s understanding of that concept, even if the machine were programmed by human beings, is congruent with our own?
  • Suppose the machine were then to say, “It might be possible for you to have some kind of encounter with this Ground of Being, not unmediated because no encounter, no perception, can ever be unmediated, but more direct than you are used to. However, such an encounter, by exceeding the tolerances within which your perceptual and cognitive apparatus operates, would certainly be profoundly disorienting, would probably be overwhelmingly painful, would possibly cause permanent damage to some elements of your operating system, and might even kill you.” How many people would say, “I’ll take the risk”? And what would their reasons be?
  • Suppose that people who think about these things came generally to agree that the AI is right, that Das Ding an Sich really exists (though “exists” is an imprecise and misleadingly weak word) and that the mode of its infinitely disseminated activity is indeed best described as Love — how might that affect how people think about Jesus of Nazareth, who claimed (or, if you prefer, is said by the Christian church to claim) a unique identification with the Father, that is to say, God, that is to say, the Ground of Being, The Thing Itself?

10 Comments

  1. It's interesting to me that some of these questions arrive sideways at the "AI control problem," currently the subject of serious inquiry & hot debate — not because they are questions about the dangers of AI, but because they concern the overlap (or lack thereof) between an AI's view of the universe (and its goals therein) and our own.

    I guess my first reaction to any of the developments you imagine here would be to ask: to what degree did our own view of the universe — our own culture — leak into this AI during its development and learning? Just as any science fiction featuring two-eyed, upright aliens is automatically suspect (in terms of Actually Thinking About the Real Universe) — not totally discredited, just suspect — so too would be an AI talking about God.

    But much more importantly: I just snagged "The Thing Itself." Sounds fantastic.

  2. Robin, thanks for the great comment — but let me play Yahweh's advocate for a minute. 🙂

    Try this out for size: given that in our society most leading scientists (including computer scientists) are atheists or agnostics, maybe they'd be more likely to write code that ignores or steers computers away from the kinds of evidence that might lead to a religious conclusion.

    Not that I truly think that's likely — but (and this is a point I wanted to raise at the convo the other night but didn't get a chance to) you're making a vital point: how can we count on computers, even ones that achieve genuine AI, to give us a genuinely nonhuman perspective on the cosmos if they are programmed by humans? It's possible that our most intelligent computers will never do anything more than reflect the assumptions and prejudices of their programmers.

  3. Or put it this way: what if the understanding of the universe accessible to a vastly intelligent computer cannot be conveyed in a human language such as English? Look for a future post on this featuring Kenneth Burke's essay "Terministic Screens."

  4. > what if the understanding of the universe accessible to a vastly intelligent computer cannot be conveyed in a human language such as English?

    I'd say this is already the case, at least weakly, and I'm grappling with it in my own personal dorkings with machine learning. Even our current ML systems (sure to be seen as hopelessly primitive by the hyperhumans of 2200) routinely encode images, text, etc. as points in high-dimensional space that are (a) incredibly "perceptive" and extremely useful, but also (b) almost completely inscrutable, even alien.

    These ML systems can learn to "see" an image (or a word, or… this comment) as a vector of hundreds or thousands of values in a corresponding number of dimensions, and increasingly there's a sense that those dimensions are (under certain conditions) very meaningful. So, on the most superficial (and interpretable) level, there might be a dimension for color. Another for contrast: is this image harshly contrasty, or more mellow? Another for spots or dots. Another for… presence of cockatiels. That's all swimming in the very shallow end of the pool. No one, not even the bleeding-edge researchers in the field, has managed to fully interpret these encodings. Some see that as worrisome, a problem to be solved; I get the sense others see it as very exciting, a confirmation that they're on to something big indeed.
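    If it helps to make "a point in high-dimensional space" concrete, here is a minimal, toy sketch using scikit-learn's TF-IDF plus truncated SVD (classic latent semantic analysis) as a deliberately simple stand-in for the large learned encoders described above; the example sentences and the two-dimensional target are purely illustrative, not drawn from any real system.

    ```python
    # Toy sketch: embed a few sentences as points in a shared vector
    # space, then compare them. TF-IDF + truncated SVD (latent semantic
    # analysis) stands in for the much larger learned encoders the
    # comment above describes; texts and dimensions are illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    texts = [
        "a photograph of a cockatiel perched on a branch",
        "a photograph of a parrot perched on a branch",
        "notes on the ground of being and the thing itself",
    ]

    # Step 1: each text becomes a sparse word-weight vector.
    tfidf = TfidfVectorizer().fit_transform(texts)

    # Step 2: compress those into a dense two-dimensional embedding.
    # Each resulting dimension is a learned blend of many words; nothing
    # labels one "color" or "cockatiels". Interpretation is the hard part.
    embeddings = TruncatedSVD(n_components=2).fit_transform(tfidf)

    # Texts that land near each other (high cosine similarity) read as
    # similar, even though the raw coordinates resist human inspection.
    print(cosine_similarity(embeddings).round(2))
    ```

    Run it and the two bird sentences should cluster together while the third sits apart; what the two coordinates "mean," though, is exactly as opaque as the comment above suggests.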

  5. "how can we count on computers, even ones that achieve genuine AI, to give us a genuinely nonhuman perspective on the cosmos if they are programmed by humans? It's possible that our most intelligent computers will never do anything more than reflect the assumptions and prejudices of their programmers"

    It's a good point, of course. I read something somewhere, though I can't recall what it was or where, that the reason programmers have yet to make an AI that passes a Turing Test is that they're trying to replicate human structures of consciousness — trying, that is, and failing. But that's almost certainly the wrong way to go about it. Think of robots: SF in the 1940s and 1950s assumed that robots would be human-sized, humanoid machines, walking on two legs and so on; but actual robots are almost never like that (and those that are exist as curiosities and toys). Actual robots, like the ones that build our cars, are designed to fit their function and look nothing like human beings. It seems to me common sense that, as with material form, so with software and AI: when we actually do break through and make AI it will be because we've given up the 'must think like a human' bias.

  6. R.A. Lafferty's novel Arrive at Easterwine: The Autobiography of a Ktistec Machine is a lively philosophical examination of life and love and being from the perspective of a created machine. It touches, indirectly, on many of the questions that you raise in this blog post.

    Unfortunately, it's probably too much fun for many to take seriously. :-p

    And it probably works best as a novel if you've already read at least a few of Lafferty's Institute for Impure Science stories.

    I'd love to see Alan Jacobs, Adam Roberts, and Robin Sloan have a roundtable discussion on Arrive at Easterwine after they've read this comment and have felt compelled to read the novel immediately! I'll stay tuned for the announcement of that upcoming conversation.

    Alas, I know that I'm only dreaming. 🙂

  7. Adam:

    Gollancz recently obtained UK/Europe ebook rights and has released e-editions of many of Lafferty's novels and short story collections.

    So, technically, the novel is still out of print as a physical book, but it is at least available to read once again. Unfortunately, it's still unavailable in any form in the States.

    https://www.amazon.co.uk/Arrive-Easterwine-Autobiography-Ktistec-Machine-ebook/dp/B01A5TIJ7G/
    http://www.sfgateway.com/books/a/arrive-at-easterwine/

    Used print copies of Easterwine, including a nice hardcover edition, can also still be obtained for relatively low prices. There are a few listed for 8 pounds on Amazon UK. Some other out-of-print Lafferty titles are much rarer and are usually listed for ridiculously high prices.

    I would love to see an Adam Roberts review of an R.A. Lafferty novel. Just the thought of it possibly happening is making me smile. 🙂
