Thiel begins by outlining common concerns about the Singularity, and then asks the members of this friendly audience to raise their hands to indicate which they are worried about:
1. Robots kill humans (Skynet scenario). Maybe 10% raise their hands.
2. Runaway biotech scenario. 30% raise hands.
3. War in the Middle East, augmented by new technology. 20% raise hands.
4. Totalitarian state using technology to oppress people. 15% raise hands.
5. Singularity takes too long to happen. 30% raise hands, and there is much laughter and applause.
Thiel says that, although it is rarely talked about, perhaps the most dangerous scenario is that the Singularity takes too long to happen. He notes that several decades ago, people expected American real wages to skyrocket and the amount of time spent working to decrease. Americans were supposed to be rich and bored. (Indeed, Thiel doesn't mention it, but the very first issue of The Public Interest, back in 1965, included essays that worried about precisely this concern, under the heading "The Great Automation Question.") But it didn't happen: real wages have stayed the same since 1973 and Americans work many more hours per year than they used to.
Thiel says we should understand the recent economic problems not as a housing crisis or a credit crisis but as a technology crisis. All forms of credit involve claims on the future. Credit works, he says, against a background of growth: if everything grows every year, you won't have a credit crisis. A credit crisis means that claims on the future can't be met.
He says that if we want to keep society stable, we have to keep growing; otherwise we can't make good on the projected growth we've already leveraged ourselves against. Global stability, he says, depends on a "Good Singularity."
In essence, we have to keep growing because we’ve already bet on the promise that we’ll grow. (I tried this argument in a poker game once for why a pair of threes should trump a flush — I already allocated my winnings for this game to pay next month’s rent! — but it didn’t take.)
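Thiel's argument can be put in arithmetic terms with a toy model (my construction, not his, and the 5% and 3% rates are illustrative assumptions): credit is priced against an assumed growth rate, so if realized growth falls short of the cost of the credit, the debt burden compounds faster than the income meant to service it, and the claims eventually can't be met.

```python
# Toy sketch of credit as a claim on future growth (not Thiel's model):
# debt compounds at the interest rate, income compounds at the growth
# rate, and their ratio shows whether the claims remain payable.

def debt_to_income(years, interest=0.05, growth=0.03,
                   debt=1.0, income=1.0):
    """Evolve debt and income for `years` periods and return
    the resulting debt/income ratio."""
    for _ in range(years):
        debt *= 1 + interest
        income *= 1 + growth
    return debt / income

# If growth keeps pace with the cost of credit, the burden is stable.
print(round(debt_to_income(30, interest=0.05, growth=0.05), 2))  # 1.0

# If growth stalls, the very same claims become unpayable over time.
print(round(debt_to_income(30, interest=0.05, growth=0.00), 2))  # 4.32
```

On these assumed numbers, thirty years of stalled growth more than quadruples the burden of claims written against a 5% future, which is the sense in which "we have to keep growing because we've already bet on the promise that we'll grow."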
Thiel’s talk is over halfway into his forty-minute slot. He is an engaging speaker with a fascinating thesis. The questioners are lining up quickly — far more lined up than for any other speaker so far, including Kurzweil.
In response to the first question about the current recession, Thiel predicts there will be no more bubbles in the next twenty years; either it will boom continuously or stay bust, but people are too aware now, and the cycle pattern has been broken. The next questioner asks about regulation and government involvement — should all this innovation happen in the private sector, or should the government fund it? Thiel says that the government isn’t anywhere near focused enough on science and technology right now, and he doesn’t think it has any role to play in innovation.
Another questioner asks about Francis Fukuyama's book, Our Posthuman Future, in which he argues that once we create superhumans, there will be a superhuman/human divide. (Fukuyama has also called transhumanism one of the greatest threats to the welfare of humanity.) Thiel says this is implausible: technology filters down, just like cell phones did. He says that it's a non-argument and that Fukuyama is hysterical, to rapturous applause from the audience.
After standing in line, holding my laptop in one hand and blogging with the other, I take the microphone and ask Thiel about the limits of his projection: if we're constantly leveraging against the future, what happens when growth reaches its limits? Will we hit some sort of catastrophic collapse? He says that we may reach a point in the future where we have, basically, a repeat of the last two years: growth falls short and there is another collapse. So are there no limits to growth, I ask? He says that if we hit other road bumps, we'll just have to deal with them then. I try again, but the audience grows restless and Thiel essentially repeats his point, so I go sit down.
What I should have asked was: Why is it so crucial to speed up innovation if catastrophic collapse is seemingly inevitable, whether it happens now or later?
October 4, 2009
Thiel makes an error when he says, "…real wages have stayed the same since 1973 and Americans work many more hours per year than they used to." He doesn't look at employer cost per employee, which includes all non-wage compensation: healthcare, workers' compensation, and so on. When these non-wage benefits are included, total compensation has risen dramatically since 1973. Just over the past four years, total employment costs per employee have risen by 17%. His point about hours worked per year is also wrong: in 1973, average annual hours worked were 1,888; by 2008 that had fallen to 1,792, a decrease of 96 hours, or more than two 40-hour workweeks.
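Taking the commenter's 1973 and 2008 figures at face value (I haven't verified them against the underlying data), the arithmetic in the comment checks out:

```python
# Checking the commenter's arithmetic on average annual hours worked.
# The 1888 and 1792 figures are the commenter's claims, not verified here.
hours_1973 = 1888
hours_2008 = 1792

decrease = hours_1973 - hours_2008   # decline in annual hours
workweeks = decrease / 40            # expressed in 40-hour workweeks

print(decrease, workweeks)  # 96 2.4
```

A 96-hour decline is 2.4 forty-hour workweeks, consistent with the comment's "more than two 40-hour workweeks."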
Why is it so crucial to speed up innovation if catastrophic collapse is seemingly inevitable
There are nontechnological catastrophes too: for instance, getting hit by a giant meteor, or a pandemic. Post-Singularity, noticing and deflecting an incoming civilization-ending meteor might be a minor inconvenience; today it would be a tad more difficult. If you worry much about those sorts of threats, getting smarter ASAP seems pretty important.
Glen, I won't disagree that those are catastrophes worth being concerned about. But Thiel was talking specifically about catastrophic economic collapse resulting from having leveraged against future growth that we are then bound to attain. Yet when I questioned him, he also seemed to say that such a collapse was inevitable, which muddles his central claim that avoiding that collapse is imperative.
It could be stated as "stop slowing down natural growth" as well as "speed up growth." When governments remove capital from markets and create uncertainty, natural growth slows dramatically.
Fortunately the opposite is also true.