Chalmers isn’t talking about the philosophical requirements of simulation. He’s breaking down the argument behind the exponential-growth claim of Singularity proponents: the idea that once our computers surpass human-level intelligence, the pace of progress will take off as each intelligence improves upon itself. This requires, he says, an extendible method, and neither biological reproduction nor brain emulation is one.
The only way the Singularity will happen, he says, is through self-amplifying intelligence. The general requirement is just that an intelligence be able to create an intelligence smarter than itself. The original intelligence itself need not be very smart. Direct human methods won’t do it. The most plausible way, he says, is simulated evolution. If we arrive at above-human intelligence, he says, it seems likely it will take place in a simulated world, not in a robot or in our own physical environment.
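To make "simulated evolution" concrete, here is a toy sketch of the idea as a minimal genetic algorithm. The task (evolving bitstrings toward all ones), the population size, and the mutation scheme are all illustrative assumptions of mine, not anything Chalmers specifies; the point is only that candidates are selected by fitness and mutated, rather than designed directly.

```python
import random

def evolve(pop_size=20, genome_len=16, generations=50, seed=0):
    """Minimal genetic algorithm: evolve bitstrings toward all ones.

    A toy illustration of simulated evolution: solutions emerge from
    selection and mutation, not from direct engineering.
    """
    rng = random.Random(seed)
    fitness = lambda genome: sum(genome)  # fitness = number of 1 bits
    # Start from a random population of bitstrings.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Reproduction with mutation: each child flips one random bit.
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because survivors are carried over unchanged, the best fitness found never decreases from one generation to the next; improvements accumulate whenever a mutation happens to help.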
Now he’s getting to the practical and philosophical implications if this were to happen, asking “how can we negotiate the Singularity, to maximize the chances of a valuable post-Singularity world?” One possibility, he says, would be to put constraints on the intelligences we create, but that would be impractical because they’re so difficult to predict. The other option is ongoing control, whereby we would monitor new intelligences as they come along and terminate undesirable ones. This, he says, is another point in favor of simulated intelligence.
(One critical objection: Is human-level intelligence in computers really possible? Set aside the questions about replicating and simulating the human brain and consider intelligence in the abstract. Simulating anything approaching human-level intelligence still requires discerning its computable function. If we can’t derive that from the brain or from our own direct engineering, which Chalmers acknowledges has proven so difficult, how else are we going to find it? None of the A.I. programs we have created so far has been smart enough to create something smarter than itself.)
Chalmers is concluding now with moral and ethical questions. How do we integrate into the post-Singularity world? Separatism, inferiority, extinction, or integration are our alternatives, he says. He prefers the last: we upload and self-enhance. That option, he says, raises some big questions: Will an uploaded system be conscious? And will it be me? (This, he says, is where it gets interesting for him, as a philosopher of mind.)
We don’t have a clue how a computational system could be conscious, he says, but we also don’t have a clue how brains could be conscious. But he emphasizes that he doesn’t think there is a fundamental difference between hardware and wetware.
In a gradual uploading scenario, he thinks it most likely that consciousness will be preserved rather than fade out. And Chalmers says that once people see that you are preserved after being uploaded, and once all your friends are doing it, you’ll have to do it too. Goodie.
Chalmers concludes that super-A.I.s will be able to reconstruct us from existing records of us: things we’ve written, recordings, and so on. So if the Singularity happens after our lifetimes, what’s the best way for us to be reconstructed? By giving talks at Singularity conferences, of course!
[UPDATE: Ray Kurzweil, in the talk he gives to end the first day of the conference, remarks on some of Chalmers’s comments.]