How the State Built This AI Moment

The law isn’t lagging; it’s leading.

“The horse has not merely bolted; it is halfway down the road and picking up speed — and no one is sure where it’s heading.” Thus begins a recent Guardian editorial, which joins an increasingly loud chorus of alarm bells registering the risks posed by artificial intelligence. These warn of a host of observed and possible dangers, most strikingly, as a Financial Times piece puts it, that “God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race.”

The authors of such alarms run the gamut from policymakers and journalists to academic researchers and prominent technologists. Their arguments all follow the same logic: that technology is charging ahead and the rest of us are desperately trying to catch up, especially those responsible for creating policies and regulations to protect public wellbeing. Social scientist Sheila Jasanoff calls this the law-lag narrative, which long predates AI, dating back at least to the early twentieth century, when the economist Thorstein Veblen described technological change as persistently outpacing societies and their institutions.

But the idea is strikingly inaccurate, obscuring as it does the important role of human actors, institutions, and policies in creating novel, transformative technologies. What we lose by describing the world in this manner is not only a clear understanding of social and technological change but also the opportunity for public debate about the most significant decisions regarding AI and human wellbeing.

Rather than a runaway technology, AI is a horse we’ve bred and raised and whose course we’ve long been designing, and we need to stop pretending we didn’t.

Searching for a Broncobuster

For an example of how the law-lag narrative frames current public discourse on AI, take the open letter published by the Future of Life Institute and now bearing some 30,000 signatures. Its authors quote the Asilomar AI Principles, according to which “advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” “Unfortunately,” the letter’s authors add, “this level of planning and management is not happening.” This is an account of a world lacking foresight and human agency, where technological developments arrive without anyone planning for them, taking societies and their governing institutions entirely by surprise.

The response that the signatories demand is a six-month pause in AI development past GPT-4 to allow the rest of us to catch up — and to allow “AI developers [to] work with policymakers to dramatically accelerate development of robust AI governance systems.” The same disconnect between technology on the one hand and legal and ethical frameworks on the other is posited in the Financial Times piece, which calls for AI developers to “slow down and let the rest of the world have a say in what they are doing.”

Similarly, well-known public figures like Yuval Noah Harari and Thomas Friedman warn of alarming consequences and call for urgent action to get regulation and ethics right. Key to this discourse is the conviction that these technologies have great potential to do outsized good in the world, from Friedman’s hope that “A.I., in effect, gives us a do-over” on climate change to Harari’s projection that “certainly, AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis.” Both writers insist that this is the moment to ensure we realize the dream and prevent the nightmare, pointing to much-needed “smart” regulation and ethics. “We can still regulate the new AI tools,” Harari writes, “but we must act quickly…. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.” “God save us,” warns Friedman, “if we acquire godlike powers to part the Red Sea but fail to scale the Ten Commandments.”

The fantastical promises AI inspires go hand-in-hand with the specter of impending doom. As Naomi Klein argues, the utopian promises are “hallucinations” that serve as “powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history” — the theft and privatization, that is, of the vast digitized trove of collective human creativity and knowledge. Only promises as extravagant as the end of human drudgery and the climate crisis can help to “rationalize AI’s undeniable perils.”

And yet, the only actors with formal powers to govern and regulate — the policymakers — are always portrayed as lacking the proper knowledge and qualifications to do so, and as arriving too late. Lawmakers have “failed to keep pace,” a recent New York Times piece explained; they “struggle to understand the technology,” said another. By this account, lawmakers must first be educated, and they must work hand in hand with the creators of these systems in order to protect human wellbeing. The horse bolted from the barn without so much as a whinny’s warning, and now lawmakers are blindly chasing it wherever it may go.

Breeding the Horses

The law-lag narrative obscures an important part of the story: that governments have had a leading hand in breeding the horses, constructing the racecourse, instituting the wagering system that makes the race profitable, and ringing the starting bell.

Since at least 2017, amid growing private investment in AI research and development, governments across the world have been producing AI strategies, initiatives, and policies, such as the Pan-Canadian Artificial Intelligence Strategy aimed at driving “the adoption of artificial intelligence across Canada’s economy and society,” or the French AI for Humanity program geared toward making “France a European leader in research in AI.” The aim of such policies is to catalyze the transition from research and development to commercialization and deployment — into sectors such as health care, agriculture, and defense. As Germany’s economic affairs minister put it, the aim of its AI program is “getting the horse-power generated by research onto the roads.” Some countries, for example the United Kingdom, offer significant tax breaks for companies working on AI or robotics.

Whether through funding of AI research or incentivizing startups, governments have joined the “gold rush” in AI innovation. Hardly a week passes without yet another announcement of new funding mechanisms for AI, typically designed as public-private partnerships and presented as powerful tools for securing economic growth, market competitiveness, and social wellbeing.

In May, the U.S. National Science Foundation (NSF) announced a $140 million investment to create seven new National AI Research Institutes “to pursue transformational advances in a range of economic sectors, and science and engineering fields — from food system security to next-generation edge networks.” Through partnership with some of Big Tech’s biggest players, including Google, Amazon, and Intel, NSF director Sethuraman Panchanathan hopes not only to “accelerate discovery and innovation in AI” but also to “improve our lives from medicine to entertainment to transportation and cybersecurity and position us in the vanguard of competitiveness and prosperity.” The 2023 National Defense Authorization Act has already placed AI at the center of the U.S. military and security agenda.

In contrast to the repeated cycles of hype and disappointment about AI in the twentieth century — what are sometimes called periods of “AI winter” — it seems policymakers are seeking to ensure the twenty-first century will bring about an “AI spring.”

Laying Bets

Given the long history of government investment in breakthrough technologies — from nuclear energy and weapons to the Internet, nanotechnology, and even the iPhone — government attention to AI is hardly surprising. Yet the oft-repeated law-lag myth reinforces a false dichotomy between public and corporate agency in the pursuit of innovation, in which the state figures at best as reactive, and at worst as a passive agent of social and technical change. As economist Mariana Mazzucato documented in her 2013 book The Entrepreneurial State, the U.S. government in particular has “often been the source of the most radical, path-breaking types of innovation” and an active creator of new technological markets, rather than merely a regulator of them.

The picture of AI innovation as the exotic offspring of Silicon Valley venture capital, start-ups, and their visionary leaders not only ignores the entrepreneurship of governments that already support high-risk, high-reward innovation, but also obscures the huge opportunity that AI technologies present for more of this kind of entrepreneurship. Hopes that AI could indeed become a general-purpose technology, generating unforeseen rates of economic growth, fuel the entrepreneurial ambitions of governments as much as they inspire utopian “hallucinations” of AI-enabled futures free from climate change, poverty, and pandemics.

Governments today are expected to drive innovation toward grand social challenges, or “missions,” and AI proves to be a perfect moonshot to test the ability of public institutions to catalyze this process. Indeed, political leaders have made sure to express their optimism when it comes to AI. Then-President Barack Obama, in a 2016 Wired interview, said that early in AI development, government regulation should remain minimal so that “a thousand flowers should bloom.” “I tend to be on the optimistic side,” he noted, since “historically we’ve absorbed new technologies, and people find that new jobs are created, they migrate, and our standards of living generally go up.” European Commission President Ursula von der Leyen declared, “I believe in the power of artificial intelligence,” which she thinks is “simply amazing.”

Designing the Racecourse

Rather than being caught off guard by the range of harms AI may usher in for societies and individuals, countries invested in AI have along the way created high-level advisory bodies to address the ethical, legal, and social implications of the technology.

The U.S. National Science and Technology Council was among the first to create norms for “Preparing for the Future of Artificial Intelligence” in 2016, followed by the Villani Report advising the French government on how to achieve “Meaningful Artificial Intelligence,” the Japanese “Social Principles for Human-Centric AI,” and Dubai’s “AI Ethics Principles and Guidelines,” to name but a few. Governments have furthermore joined forces to ensure responsible AI at the international level, most notably by way of the OECD’s “Recommendation of the Council on Artificial Intelligence,” signed by almost fifty of the world’s most advanced economies.

Committees and councils established to deliberate on and produce these guidelines typically take a multi-stakeholder approach in which experts on ethics, law, and policy are joined by AI researchers and high-level corporate representatives. In the EU, for instance, the 52-member High-Level Expert Group on AI featured at least 26 delegates tied to Big Tech, including Google, Meta, and Microsoft; in the United States, in turn, the National AI Advisory Committee is co-chaired by a Google senior vice president and accompanied by top representatives from Amazon and IBM.

Some have accused such AI advisory bodies of functioning as inroads for lobbyists and of enabling “ethics washing” on the part of the industry: “Companies engage in an ethical debate with the goal of postponing or completely preventing legal regulations. The longer the debate continues, the longer it takes for actual laws to disrupt one’s own business model,” a 2019 article by the research and advocacy organization Algorithm Watch noted.

Such AI ethics guidelines are not regulatory interventions. They are legally non-enforceable principles for self-governance, ostensibly designed to anticipate the risks posed by AI technologies without at the same time stifling innovation and market creation. They are “soft law” mechanisms that promise to keep policy responses to AI flexible and governance agile, while creating incentives for actors in AI to behave responsibly and allowing time to anticipate potential bad outcomes.

The EU’s High-Level Expert Group on AI, for example, proposed an “Assessment List for Trustworthy AI” in 2020, “intended for self-evaluation purposes” and “flexible use” by AI tech organizations, helping them to understand “what risks an AI system might generate, and how to minimize those risks while maximizing the benefit of AI.” Similarly, the U.S. Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights” featured a “technical companion” with a range of self-assessment measures, such as encouragement of “proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way.”

Installing the Guardrails

The European Commission’s much-anticipated AI Act, first proposed in 2021 and currently under consideration by European institutions, declares as its primary objective “to give people and other users the confidence to embrace AI-based solutions, while encouraging businesses to develop them.” Notably, the proposed policy framework is not framed as an act of regulation, to protect the public from harms, but as an act of enablement — to encourage the use and continued application of AI.

An important role of regulation as an enabler of innovation is to clarify and harmonize rules across jurisdictions, not only within the EU but in prospective international accords. As European parliamentarians make clear, “we are aware this Regulation, covering the entire EU market, could serve as a blueprint for other regulatory initiatives in different regulatory traditions and environments around the world.” In this way, timely regulation can override and preempt the proliferation of diverging, and what might be interpreted as overly restrictive, regulations in lower jurisdictions, such as the Italian data protection authority’s early decision to block ChatGPT. By delineating spaces of self-governance, then, such regulatory frameworks not only articulate where regulation is necessary, but more importantly, where it is precluded. This process of clarification and boundary-setting is accomplished in part by outlining an overarching concept by which regulation should be structured and interpreted — in this case “risk-based” governance.

The AI Act describes a risk-based approach to the governance of AI by classifying the risks of AI technologies as unacceptable, high, limited, or minimal. The act defines risk primarily in terms of the use of AI technologies, narrowing down regulatory obligations to a limited set of AI systems, such as social scoring systems used by governments, and the application of AI in critical infrastructure, employment processes, or law enforcement. This approach places responsibility first and foremost on the users of AI rather than its creators. General-purpose AI — that is, any AI technology capable of a wide range of tasks and intended for multiple purposes — was hence systematically excluded from the act’s original scope, not least due to efforts by Big Tech to defang harder rules and accountability for their AI systems, instead shifting the burden of regulation to “the companies deploying them in various ways,” as a recent report by the Corporate Europe Observatory notes.

While the latest amendments to the drafted legislation indicate willingness to include general-purpose AI and to balance regulatory responsibilities between providers and users of such technologies, the legislation nevertheless reaffirms the risk-based approach: “The obligations set out in this Regulation only apply to forbidden practices, to high-risk AI systems, and to certain AI systems that require transparency.”

Off to the Races

The recent rise of ChatGPT, one of the most powerful general-purpose AI applications on the market, purportedly “broke the EU plan to regulate AI,” as a recent law-lag–peddling commentary in Politico claims. This is a fantasy — ChatGPT is rather the offspring of policymakers’ intensive breeding of AI technologies. Accordingly, the stated intent of established AI policy and of proposed regulations is to facilitate, and not limit, its continued permeation of our daily lives. Nevertheless, policymakers propagated the law-lag narrative when public discussions around ChatGPT emerged.

For instance, in April members of the European Parliament called their colleagues to action with the declaration that “recent advances in the field of artificial intelligence (AI) have demonstrated that the speed of technological progress is faster and more unpredictable than policymakers around the world have anticipated.” With such “rapid evolution of powerful AI,” they “see the need for significant political attention” — attention that has long been bestowed upon this well-kept thoroughbred, as demonstrated by years of funding, careful strategizing, knowledge creation, and close collaboration with the private sector to ensure its rapid pace and formidable power.

The law-lag narrative — the call for policymakers to “finally catch up” and “do their duty” on AI — actually prevents the critical discussion of AI governance we really need. It keeps us from making collective decisions about what public wellbeing is and how we should pursue it.

If instead we were to drop the law-lag narrative, we could direct our attention to policymakers’ and regulators’ presumed mandate to dedicate significant public resources toward unfettered innovation in the name of the public good. We could have a broader debate about the future to which we aspire and what role we want AI to play in it. This would include deliberation on whether AI technologies should be unleashed to transform societies and economies, in what ways, and what roles our democratic representatives should play in taming them.

Rather than broadly embracing AI and regulating based on a narrow understanding of the risks it may cause, policymakers would do well to consider questions regarding AI’s broader desirability: Do we indeed share a vision of AI as indisputably good for people within and across societies? Are there particular futures AI might bring about that we are interested in preventing? Are we willing to account not only for its anticipated risks and harmful effects, but for the great uncertainty of a widespread use of AI in key sectors of public interest such as education, health, and security? How can our ethical frameworks and legal apparatuses not only be mobilized as means for making technologies more acceptable to societies but also for shaping, controlling, and in some cases countering them? Who should decide on these questions — and how should such decisions be made?

Rejecting a narrative in which technologies lead and policy follows could thus create a much-needed space of democratic debate — for AI policy and beyond. We could talk more clearly about how and whether public resources and institutions should encourage technological development, and how we might ensure they better serve, rather than undermine, human flourishing.


Tess Doezema and Nina Frahm, “All the King’s Horses,” The New Atlantis, Number 73, Summer 2023, pp. 46–53. Published online as “How the State Built This AI Moment.”
