Sam Altman Doesn’t Want To Be Your AI King

…but he might be anyway.

Sam Altman sits slightly slouched in his chair, taking the interviewer’s questions like punches in the face. He looks both terribly young and deeply tired, with his floppy hair and nervous expression. ABC’s Rebecca Jarvis asks him who should be responsible for putting guardrails on ChatGPT, the powerful chatbot that his company OpenAI recently unveiled, to keep society safe.

“Society should,” he answers.

“Society as a whole? How are we gonna do that?”

“So, I can paint, like, a vision that I find compelling. This will be one way of many that it could go. If you had representatives from major world governments, you know, trusted international institutions, come together and write a governing document, you know, here is what the system should do, here is what the system shouldn’t do, here’s, you know, very dangerous things that the system should never touch … and then developers of language models like us use that as the governing document….”

Altman has been proposing some version of this for years. He told the New Yorker back in 2016 that OpenAI was looking for “a way to allow wide swaths of the world to elect representatives to a new governance board” for AI. And when he appeared before a U.S. Senate committee this May, he emphasized how important it is to create what he called an “alignment data set” to impose human values on AI. This data set must be created “by society as a whole, by governments as a whole.”

This is nice. It’s nice to think that if we could just get the key players in a room to hash it out, they could articulate a broad consensus on behalf of humanity: What we want from ChatGPT (and all the coming breakthroughs in artificial intelligence), what we don’t, and how these insights can be translated into rules for developers to follow. All we need to do is make a plan and get everyone on board. How hard could it be?

Apocalypse and Utopia

In the interview with Rebecca Jarvis, Altman admits that he is “a little bit scared” of ChatGPT and its early successes. He knows it has the potential to eliminate jobs and cause massive social upheaval. Some researchers even think that new breakthroughs in AI may be the beginning of the end for humanity — that they could make us obsolete, wreak havoc, or kill us all. Altman can’t be sure this is wrong.

So he’s worried — but not that worried. “I think people should be happy that we are a little bit scared of this,” he tells Jarvis. We’re worrying in the right way: while we can still do something about it. The fact is, when Sam Altman isn’t fighting to save the world from technology, he’s fighting to save the world with it. The New Yorker profiler, quoting venture capitalist Paul Graham, wrote that “Altman, by precipitating progress in ‘curing cancer, fusion, supersonic airliners, A.I.,’ was trying to comprehensively revise the way we live: ‘I think his goal is to make the whole future.’”

Today, Altman is heavily invested in ambitious startups, biotech, and energy. His projects include Helion Energy, a company attempting to build the world’s first nuclear fusion power plant; Neuralink, Elon Musk’s neurotechnology company working to develop an implantable brain-to-computer interface; and Retro Biosciences, a biotech company trying to add ten years to the healthy human life span. He also co-founded Worldcoin, a crypto-financial project trying to develop the first global blockchain token linked to an individual’s unique biometric markers “for both utility and future governance.”

But Altman’s chief obsession is with generalized AI because it promises to accelerate all other enterprises at once. As he put it on a podcast in May, “I very deeply believe this will be the most positively transformative technology that humanity has yet developed, and it will, to the degree that this is the technology that helps us invent all future technologies, I think it’ll be a super bright future.”

So when Jarvis asks him why he would develop and release a technology that might disrupt society, upend the economy, and kill us all, Altman has a response. “I think it can do the opposite of all of those things too. Properly done, it is going to eliminate a lot of current jobs, that’s true. We can make much better ones…. Would you push a button to stop this if it meant we are no longer able to cure all diseases? Would you push a button to stop this if it meant we couldn’t educate every child in the world super well?”

Jarvis counters, “Would you push a button to stop this if it meant there was a five percent chance it would be the end of the world?”

“I would push a button to slow it down.”

Creating artificial intelligence, slowing it down; destroying the world, saving the world. Whatever the problem, Altman imagines a button he can push to solve it.

Alignment Problems

In a 2022 survey of experts in the field, respondents were asked with what probability they thought “human inability to control future advanced AI systems” would cause either human extinction or some “similarly permanent and severe disempowerment of the human species.” The median answer was 10 percent. This May, a group of researchers and CEOs — including Altman — published a one-line statement on AI risk saying simply: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

To avoid apocalypse, the experts tell us, we need to solve the “AI alignment problem.” We need to figure out how to align AI systems with human desires so that the technology can help us achieve our goals rather than run amok and take over the world. Altman is optimistic that we can solve this problem. This is a man with lots of experience iterating and reiterating systems, finding flaws and fixing bugs.

But for all Altman’s eagerness to acknowledge the risks of AI, he seems oddly blind to the greatest obstacle in the way of the super bright future he envisions. When sufficiently powerful AI is released into the market, Altman predicts in a 2021 article, this “will create phenomenal wealth,” while the price of labor will fall toward zero in many parts of the economy. “The world will change so rapidly and drastically,” he expects, “that an equally drastic change in policy will be needed to distribute this wealth and enable more people to pursue the life they want.” He has ideas for how this can be done, but even if the only political question were how to spread the wealth around, this would be hard to implement. And he doesn’t even ask: What if a lot of people don’t want to live in this kind of world at all?

Who or what will convince all the doctors, lawyers, and teachers to step aside and let AI systems take over the work that currently pays their bills? Who or what will convince their patients, clients, and students to accept artificial substitutes for human help? And if ordinary people refuse to work toward a future in which their labor is unnecessary, how do we get them to take an active role in designing a new policy regime to govern a new world? More likely, political leaders will impose changes from above while tech companies, little by little, entrench AI systems in the economy. But this is hardly consistent with Altman’s democratic vision, in which “society as a whole,” “governments as a whole” decide what AI should and shouldn’t do.

There is a deeper alignment problem at issue here. Before we can figure out how to align AI systems with human desires, we have to align human desires for AI with one another, and this is no easy task.

[Photo: French President Emmanuel Macron meets with Sam Altman on May 23, 2023. Abaca Press / Alamy]
A Thought Experiment

Join me in a thought experiment. The year is 1712. Thomas Newcomen, a blacksmith and sometime Baptist preacher from Devonshire, has overseen the first successful installation of his atmospheric steam engine. His machine has just been used to remove a large quantity of water from a flooded coal mine in a fraction of the time it would have taken with the help of horses or a water wheel.

Now imagine that a prescient newspaperman from London recognizes the gravity of this achievement and travels to Tipton to see the engine and meet its inventor. When they sit down together, the Londoner says: “I believe this machine has the power to alter the world, for it can be used to relieve the burden of many human labors. Who will govern its use and enjoy its fruits?” Newcomen, early tech genius that he is, answers, “This is but a foretaste of the great power of my engine and the many uses to which it will be applied. But I am only a humble blacksmith; it is not for me alone to decide the future history of the world.” So he proposes a summit.

Newcomen will be there, of course, to share his expertise on the workings and development of his engine and ask for input on the human projects to which it should and shouldn’t be applied. Queen Anne should be there too, as should Louis XIV and Peter the Great. But it’s the serfs and peasants whose lives will change the most dramatically in the shortest term, so let’s invite a representative sample of them to attend as well.

How well do you think this will work? Will all the participants at this summit be given an equal voice? Will they anticipate that this is just the beginning of an entire industrial revolution? Will they manage to forestall three centuries of social upheaval and political turmoil, and still usher in an age of unprecedented prosperity? Or even just put their differences aside and come to a consensus for the future that is just, enforceable, and wise?

The Nature of the Obstacle

Years before ChatGPT was released, Altman’s profiler at the New Yorker observed that Altman had “been reading James Madison’s notes on the Constitutional Convention for guidance in managing the transition” to global governance of AI. “If I weren’t in on this,” Altman said, referring to the key players in the race to decide how AI will shape the future, “I’d be, like, Why do these f***ers get to decide what happens to me?”

This is a welcome change from a decade of Mark Zuckerbergs and Jack Dorseys who broke things first and reflected on the pieces later. But like it or not, Sam Altman does have the jump on the rest of us in deciding what happens with AI, and he is putting himself in position, if not to write the rules himself, then to shape how the rules will be shaped, to decide who will get to decide.

So far, representative democratic deliberation is the best process we’ve come up with for making collective decisions in anything like an equitable way. But it has its limitations. It’s messy and inefficient, and the best outcome you can hope for is a compromise. Otherwise kind and lovely people will struggle and fight when they sit down to divvy up goods for the long haul. Try gathering your neighbors together, tell them you are building a new community garden, and ask for input. You will see what I mean.

Democratic deliberation is also reactive. People can’t deliberate effectively about a problem they haven’t yet seen and don’t fully understand. This makes democratic systems particularly bad at regulating emerging new technologies. Recall our global conference of 1712. How much could Anne, Louis, Peter, Newcomen, or the serfs really have foreseen about the economic revolution in the offing the year the steam engine emptied its first mine?

And finally, it’s slow. Deliberative bodies are no match for a charismatic leader with a clear vision when time is of the essence and apocalypse looms.

In a famous passage from the Republic, Socrates suggests that there will be no end to the struggles of political life until kings become philosophers or philosophers become kings. But this suggestion has never been popular, because it’s not what anyone involved would freely choose: not the philosophers, who would rather think in peace, not kings, who would rather rule unencumbered by philosophers, and not the subjects forced to live out someone else’s idea of the good. The obstacle to utopia is freedom.

Sam Altman has stumbled onto an old political problem he can’t push a button to fix. His project is made up of two incompatible parts: one open and democratic, one closed and autocratic. Maybe he’ll find “a way to allow wide swaths of the world to elect representatives to a new governance board” for AI, so that power does not get concentrated in the hands of a few. Or maybe he’ll succeed in forming a like-minded capitalist elite that can leverage AI systems to solve climate change, cure all diseases, and eliminate the need for human labor. Either of these projects is a long shot. But he can’t have both.


Louise Liebeskind, “Sam Altman, the Man Who Would Not Be King,” The New Atlantis, Number 73, Summer 2023, pp. 58–63. Published online as “Sam Altman Doesn’t Want To Be Your AI King.”
Header image: AP Photo/Alastair Grant
