In “Darwin Among the Machines,” a letter to the editor published in 1863, the English novelist Samuel Butler observed with dread how the technology of his time was degrading humanity. “Day by day,” he wrote, “the machines are gaining ground upon us; day by day we are becoming more subservient to them.” For the ironical Butler, the solution was simple: kill the machines. “War to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species.”
In his later novel Erewhon, Butler imagined a people who take his advice and smash their machines — the inspiration for the “Butlerian Jihad” in Frank Herbert’s Dune. But to make his central conceit plausible, even by the loose rules governing Victorian satire, Butler had to drop the society of Erewhon in the middle of “nowhere” (of which the name is an anagram), in a remote valley cut off from the rest of the world. The Erewhonians, Butler recognized, would never have survived centuries of Luddism anywhere else: they would have vanquished the machines only to be vanquished by an antagonist lacking their technological caution. In the real world, Butler suggests, we face a choice: Will we preserve our humanity or our security?
This may be just the choice we face today. From Washington, D.C., to Silicon Valley, champions of new technologies often argue, with good reason, that we must embrace them because, if we don’t, the Chinese will — and then where will we be?
Driven by geopolitical pressures to accelerate technological development, particularly in AI and biotech, we seem forced to choose between two options: channeling innovation toward humane ends, or protecting ourselves against competitors abroad.
To appreciate the difficulty of this choice, we should take a page from military theorists who have wrestled with what is known as the “security dilemma.” Even though it is one of the most important concepts in international relations, it has received little attention from those grappling with the promises and challenges of new technologies. It deserves more, because when we apply its core insights to technological development, we realize that achieving a prosperous human future will be even more difficult than we tend to think.
The security dilemma, as described by the political scientist Robert Jervis in a 1978 paper, is that “many of the means by which a state tries to increase its security decrease the security of others.” Take two nations that don’t know each other’s warfighting capabilities or intentions. With a questionable neighbor next door, one side reasons, it’s only sensible to build up a reliable defense, just in case. But the other nation has the same thought, and similarly proceeds to boost its armaments as a precaution. Each nation sees the other militarizing, which justifies its own defense build-up. Before long, we have a frantic arms race, each nation building up its military to surpass the growing military next door, spiraling toward a conflict neither actually desires.
The dilemma is this: each nation can either militarize, prompting the other to reciprocate and heightening the risk of an ever-more-violent war; or not militarize, endangering itself before a power that is suspiciously expanding its arsenal. The dilemma suggests that each nation can be all but helplessly compelled to militarize but then is no better off than before, for the other nation is doing the very same. In fact, everyone is worse off, as the stakes and lethality of a looming war continue to rise. Perverse geopolitical incentives drive both sides to rationally pursue a course of action that harms them both.
It’s no coincidence that the dilemma was first articulated in the 1950s, amid the Cold War menace of mutual assured destruction. But its underlying logic applies not only to national defense per se but to technological innovation more broadly.
Consider how geopolitical pressures motivate technological advancement. World War II spurred the development of cryptography and early computers, such as America’s ENIAC and Britain’s Colossus. The Cold War rivalry between America and the Soviet Union prompted their race to be the first to put a man on the Moon. And in the 1980s, Japan’s prowess in the semiconductor industry motivated America to launch state projects, like the SEMATECH consortium, to remain competitive.
Just as with the security dilemma, deciding not to act in response to these pressures is a recipe for failure, because it risks making one as helpless against one’s competitors as the armored knight against the musket. Whichever side is technologically superior will gain the upper hand — economically, geopolitically, and, down the road, militarily. So each side, if it hopes to survive, must adopt the more sophisticated technology.
But the risk of this trajectory is not only to other nations, as with militarization itself — it is potentially to one’s own people. As every modern society has come to experience, technological innovation, despite the countless ways it has improved our lives, can also bring not just short-term economic instability and job loss, but also long-term social fracture, loss of certain human skills and agency, the undermining of traditions, and the empowerment of the state over its own people.
In what we might call the “technological security dilemma,” each nation faces a choice: either pursue technological advancement to the utmost, forcing your competitors to reciprocate, even if such advancement jeopardizes your own citizens’ wellbeing; or refuse to do so — say, out of a noble concern that it threatens your people’s form of life — and allow yourself to be surpassed by an adversary without the same concern for its people, or for yours.
As Palantir’s Alex Karp and Nicholas Zamiska recently put it in their book The Technological Republic, “Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed.” So if a nation won’t accept one horn of the dilemma, allowing its geopolitical standing to falter and putting itself at the mercy of the more advanced nation, then it must choose the other, adopting an aggressive approach to technological development, no matter what wreckage may result.
The question is whether that nation can long enjoy both its tech dominance and its humanity.
Today, the technological security dilemma is the very situation America finds itself in with China.
Consider artificial intelligence. Venture capitalist Marc Andreessen writes that “AI will save the world,” but also that it could become “a mechanism for authoritarian population control,” ushering in an unprecedentedly powerful techno-surveillance state. It all depends on who is leading the industry: “The single greatest risk of AI,” he writes, “is that China wins global AI dominance and we — the United States and the West — do not.” In a similar vein, Sam Altman once said on Twitter — when he was the new president of Y Combinator in 2014 — that “AI will be either the best or the worst thing ever.” The difference, he said a decade later, now writing as CEO of OpenAI in the Washington Post, is whether the AI race is won by America with its “democratic vision,” or by China with its “authoritarian” one. Our own good AI maximalism is thus “our only choice” for countering their bad AI maximalism.
But in that case, the AI boosters’ arguments for its benefits and their refutations of popular fears are almost beside the point. As the technological security dilemma suggests, even if all of AI’s speculated downsides were to come about — mass unemployment, retreat into delusional virtuality, learned helplessness among all who can no longer function without ChatGPT, and so forth — we would still need to accelerate AI to stay ahead of China.
A world run by China’s AI-powered digital authoritarianism would indeed be a nightmare for the United States and everyone else, marked by a total disregard for individual privacy, automated predictive policing that renders civil liberties obsolete, and a global social credit system that blacklists noncompliant individuals from applying for credit, accessing their bank accounts, or using the subway. How then can we afford to deliberate about AI’s impact, much less slow down its advancement? Its potential domestic harms, the dilemma suggests, are the necessary price to pay for our national security.
It would therefore be a mistake to dismiss the arguments from Andreessen and Altman as nothing more than self-serving P.R. tactics to lobby for government favors. They are getting at something fundamental: in a technological arms race, the only rational action is to try to win. Once China has entered the race for technological dominance in AI, America, if it wishes to maintain its own political independence and avoid becoming China’s vassal state, has no choice but to enter the race as well, no matter what damage results. As Altman puts it, “there is no third option.”
Consider gene editing as well. If one country achieves the ability to genetically engineer its citizens to boost their intelligence, reflexes, or whatever else, it could have an enormous advantage over other nations, and not only in wartime. How then could any adversary fail to do the same? It couldn’t, not without jeopardizing its own security, even if doing so requires abusing its own citizens. Here too, then, the security dilemma is apposite. As the bioethicist Françoise Baylis has suggested, “there is reason to expect that the science will seed fierce competition among research teams, for-profit companies, and nation states…. similar to that which characterized the 20th century’s space race and nuclear arms race.”
Such a race may be only speculative for now, given both the early state of gene editing technology and widespread aversion to the thought of where it may lead. But we should not count on either constraint lasting long — after all, Americans are already selecting for desirable features in their children through embryo screening in the lab. Any aversion to gene editing will likely crumble in the face of a geopolitical imperative. For the technological security dilemma suggests that it will take only one nation to start experimenting on its citizens before every competing nation finds itself strategically compelled to do the same.
The way that international competition tips the scales toward an accelerationist approach to gene editing has been articulated by military strategist Andrew Krepinevich, Jr. in The Origins of Victory: “There are worries that rivals will use these techniques to enhance their military capabilities, such as by breeding humans with specific ‘enhancements,’ such as a race of ‘super warriors’ to meet the demands of an authoritarian regime.” Any nation not willing to accept the first horn of the dilemma — defeat at the hands of a post-human adversary — would need to accept the second horn — a countervailing effort to replace natural humans with something more competitively fit. It would be the abolition of man as grand strategy.
In China, there are already efforts underway toward this end. In 2013, the Chinese genomics company BGI undertook a gene-sequencing study of over two thousand high-IQ individuals from the United States and elsewhere with the goal of engineering a smarter population. One of the participants who contributed his DNA to the project explained, “even if it only boosts the average kid by five IQ points, that’s a huge difference in terms of economic productivity, the competitiveness of the country, how many patents they get, how their businesses are run, and how innovative their economy is.”
One of BGI’s scientific advisers bluntly stated, “We’re headed for a Brave New World or something like Gattaca. Every country will have to decide what its laws will be, but the technology will definitely be there. It’s just a question of which of the countries will take control of their genetic future.”
Will America be able to resist the urge to follow China’s lead and obsolesce its own people? The technological security dilemma suggests not.
The U.S.–China competition has tempted some observers to conclude that the two powers are ultimately mirror images of each other, with little distinguishing their pursuits of political and technological leadership beyond cosmetic differences in propaganda. But one doesn’t have to fall for such crudely false moral equivalences to spot a glimmer of insight here.
To understand the temptation that China’s example offers to the United States, we need to recognize the structural incentives under which we operate: two nations trapped by the same spiraling logic of global dominance will come to resemble each other in their approach to technology. After all, few American technologists would disagree with the claim that “science and technology are the foundation of national strength and prosperity, and innovation is the soul of national progress” — at least until they learn that it comes from a handbook of the Chinese Communist Party.
How then can America secure its lead over China without sacrificing the very values that make beating them necessary in the first place? While Chinese leadership may wave away concerns about a new technology’s downsides for its people, we should not. Americans who are troubled both by the mindless embrace of new technologies at home and by the threat posed by China’s abuse of them abroad thus face two bad options. As the dilemma warns, we could recognize a new technology’s harm and still be compelled to deploy it.
The least we can do is to avoid false hopes. One sobering lesson from the analogy of tech competition to military competition is not to fall for the wishful thinking that the next powerful device — be it the Gatling gun, the Maxim gun, dynamite, or poison gas — will inspire everyone to lay down their arms. It has never worked out that way, for a simple reason: the fact that all sides would be better off not taking an action is no reason for any particular side not to take it.
In the same way, the tech skeptic might suppose that if enough people realized a device’s harms, we could explode the dilemma, reach a sort of ceasefire, and deliberate upon a better path forward. But the security dilemma suggests that even a widespread recognition of a technology’s harm doesn’t mean we can afford not to use it. If we all took inspiration from the Erewhonians and beat our computers into ploughshares, we would simply give the advantage to our enemies. The tech skeptic, along with the pacifist, would be the easiest enemy to overcome, and for similar reasons.
But if it is madness for the tech skeptic to let China chart the technological frontier, no less mad is unleashing technologies on a helpless populace and denying the possibility of any downsides. The dilemma’s alternative extreme — the no-holds-barred embrace of revolutionary tech championed by today’s “effective accelerationists” — is similarly illuminated by a page from the war manuals. For effective accelerationism resembles nothing so much as the “attack to excess” mindset that captured European military thinking during the lead-up to World War I. According to this school’s “cult of the offensive,” as the political scientist Stephen Van Evera has dubbed it, new technologies gave the nation that struck first an overwhelming strategic advantage.
This cult deeply influenced France’s military planning for a potential war against Germany. As one French official at the time put it, “in the offensive, imprudence is the best safeguard. The least caution destroys all its efficacy.” Military strategists were convinced that any delay in mobilization or slackening of intensity would forfeit the invincible advantage of the initiative. To pause long enough to ask whether the offensive had the virtues ascribed to it was to consign oneself to defeat.
As it happened, the offensive did not have the virtues ascribed to it. As Van Evera shows, it not only motivated all adherents to rush into the Great War; it also made the war even more terrible when it came. The accelerationist-offensive mindset in warfare was compelling just until it ended in a devastation that its adherents had insisted they could not pause to anticipate. There is little reason to expect that the same mindset in technology — call it the “cult of the accelerationist” — would produce a different outcome.
So we seem to be stuck in an iron cage of technology, one that we can exit only at our peril or lock ourselves inside ever more tightly.
Still, nothing is ever certain, as the Cold War’s original security dilemma should remind us. After all, we’re still here. We avoided the nuclear war that the dilemma pointed to, prompting the economist Thomas Schelling to marvel that “the most spectacular event of the past half century is one that did not occur.”
Perhaps, then, there is something paradoxically heartening about the resemblance of our dilemma to the Cold War threat of mutual assured destruction. We managed not to blow ourselves up. Maybe this time around, we’ll manage not to usher in a techno-surveillance, post-human future in our race for survival.
But if we do avoid such a fate, it will be the most spectacular non-event of this century.