A President’s Council on Artificial Intelligence

The White House’s bland solution doesn’t match the problem. Here’s what would.

Image: A meeting in the White House with artificial intelligence CEOs on May 4, 2023

Last month, President Joe Biden issued an executive order on artificial intelligence. Among the longest executive orders in recent decades, encompassing directives to dozens of federal agencies and certain companies, it is a decidedly mixed bag. It shrinks from the most aggressive proposals for federal intervention but leaves plenty for proponents of limited government to fret about. For instance, the order invokes the Defense Production Act — a law designed to assist private industry in providing necessary resources during a national security crisis — to regulate an emerging technology. This surely strains the law’s intent, and it allows the executive branch to circumvent Congress.

Setting aside the merits or demerits of the order itself, however, it is worth stepping back to consider what this move by the White House tells us about the politics of AI — and in particular what it leaves out. The executive order is just the latest and most prominent instance of a recurring pattern in the debate over AI policy: a reflexive turn toward federal regulatory tools as the mechanism of choice for grappling with the risks and opportunities posed by the emerging technology.

The regulatory reflex has been an especially prominent feature of the Biden administration’s AI initiatives, from agency actions to proposed audits to impact assessments. Even the administration’s Blueprint for an AI Bill of Rights, released last year by the Office of Science and Technology Policy, and the AI Safety Institute, recently announced by Vice President Kamala Harris, though not strictly regulatory, are oriented toward frameworks, principles, and best practices for implementing regulations.

The White House is hardly alone in reaching for regulatory tools. The emphasis among political leaders of both parties is on what concrete regulatory steps the government should take to prevent and mitigate the harms of AI — everything from algorithmic discrimination and job loss to bioterrorism and other national security risks. In Congress, this argument is often couched in regret for not having done more in the past to forestall social media’s detrimental effects on children and teens.

There is, of course, an important role for federal regulation — although precisely what role remains hotly contested. Should Congress create a new agency for AI, or simply rely on existing agencies? Should the government regulate all of AI, broadly construed, or reserve strict rules only for the most powerful foundation models or for certain uses of AI?

At issue are the same questions that attend every major technological shift: What are the benefits? What are the risks? Which will outweigh the other? Those who see mostly upside fear that regulation will stifle innovation and deprive us of the technology’s many fruits. By contrast, many commentators — including some prominent AI experts — have been raising alarms about the potential dangers of the new technology, ranging from short-term concerns about deepfakes and discrimination to longer-term fears about job loss or even existential risk.

Yet what this debate overlooks is that federal regulation is not always the most appropriate way to deal with the consequences of technological change. Nor is regulation the only option available to executive branch institutions.

In 2001, President George W. Bush offered a different model of technology governance when he issued an executive order establishing the President’s Council on Bioethics. In the years that followed, that council pioneered a new form of public deliberation about emerging technologies that may contribute to human flourishing but also pose serious ethical questions about freedom, dignity, and even what it means to be human.

Today, we find ourselves in a similar situation with AI. A new set of transformative technologies is on the rise that promises to increase economic productivity and prosperity, and thereby improve the quality of life, while also challenging us to consider the purposes to which our growing technological power might be put. Unfortunately, the discourse surrounding AI so far is dominated by the technocratic rationality that befits a bureaucratic state and the industries it seeks to regulate.

A more holistic and humanistic approach is needed. We need an approach that considers not only the best — for example the safest or most efficient — means to achieve given ends, but also the ends we ought to be pursuing in the first place, and whether and how new technologies can or cannot help us attain them. Here the President’s Council on Bioethics can provide a model for the Biden White House to follow on AI.

Why Regulation Doesn’t Match the Problem

Artificial intelligence poses distinctively complex challenges. That’s because AI itself is distinctive and complex. The very concept is vexed. To begin with, what is it?

AI is, on one level, a field of scientific research — the main objective of which is not, like neuroscience or psychology, say, to advance our understanding of human intelligence, but rather to simulate human intelligence. Here we bump up immediately against thorny questions with practical implications: What is human intelligence? When will genuine artificial intelligence be achieved? Can it be achieved? How will we know? What will it mean if it is achieved? Should we be trying to achieve it in the first place? These questions — at once scientific, philosophical, technological, and ethical — are all matters of controversy, and have been for the last three-quarters of a century.

But, of course, AI is not only a field of scientific research. It is also a highly successful technology — or rather a collection of technologies — that has already begun to affect and even transform various aspects of social and economic life. Already, AI is finding an astonishingly wide range of applications, from drug discovery and clinical medicine to communications and manufacturing, warfare, and art, and just about everything in between. In recent years, one particular branch of AI — so-called “generative AI,” based on a branch of machine learning known as “deep learning” — has proven far more capable than its greatest prophets and its critics alike seem to have predicted, with the success of public-facing applications like ChatGPT and Midjourney.

But while generative AI has, rightly, engendered the most public attention recently, there are in fact a variety of approaches to artificial intelligence that hold promise for future development or are already in wide use. While many experts, buoyed by the astonishing success of large language models, think current “neural net” and “deep learning” approaches will continue to succeed where others have failed, some argue that these approaches face inherent technical limits. They suggest other avenues for development, such as drawing on causal inference methods or even returning to “good old-fashioned AI” based on symbolic reasoning, or some combination thereof.

These disagreements turn not only on technical questions but also philosophical and ethical questions about how to understand human reasoning, how to model it, and what the aim of AI development should be: Should we simulate and perhaps one day replace humans by creating so-called artificial general intelligence? Or should we complement and augment human capabilities by focusing on automating discrete tasks? Partly owing to these rival approaches, experts vary widely in their assessment of current AI and its future prospects, both positive and negative. Many seem to believe artificial general intelligence is just around the corner; others believe it is decades off; some believe it impossible; and a few even claim it has already arrived.

One thing nearly all parties to these debates do seem to agree on is that AI is — or at least will be — transformative. Yet AI is arguably distinct from past transformative technologies, such as the telegraph or the automobile or the smartphone, which also had enormous implications for economic growth, employment, and social interaction. By its very nature, and given its potential applications, AI raises deep questions about the nature and place of the human person in a way and to a degree that few other technologies do. (This may help explain the seductive appeal of the outlandish and sci-fi-like scenarios of technological apocalypse that tend to grab headlines.)

Clearly, the emergence of AI demands careful moral reflection and political deliberation. But the federal bureaucracy is not the right venue for this. The problem is not regulation per se, but that the language of federal regulation is ill-suited to the task at hand. Its moral vocabulary is too narrow, adapted as it is to such instrumental concerns as safe deployment and fair implementation. A narrow focus on regulatory tools risks obscuring more fundamental questions about ends, not just means, thus depriving us of the moral resources needed to grapple with the challenges and opportunities we are likely to face in an age of AI.

A Richer Debate

President Bush’s bioethics council emerged in response to a scientific and technological development that bears some interesting parallels to AI, and to a certain extent overlaps with it: biotechnology. Like AI, biotechnology emerged as a kind of hybrid field, at once scientific and technological. And it was (and is) rapidly transforming the world around us — by advancing science, medicine, and innovation, and improving medical care and thus the length and quality of life. At the same time, biotechnology poses distinctively complex questions about ethics, risk, and the human person, whether it involves genetic engineering, abortion, end-of-life care, human enhancement, or research on pathogens of pandemic potential.

How to think about these issues — and especially how to reap the benefits of technological innovation while guarding against its downsides — has been a subject of concern for over half a century among experts in and around the fields of biotechnology, from biomedical research to bioengineering to clinical medicine. Bioethics arose to grapple with such issues during the 1960s and ’70s and has since developed into a mature field of academic research as well as a professional practice, informing and shaping medical research, clinical medicine, and health care decision-making.

During the 1990s, the pace of biotechnological development accelerated, engendering renewed public focus on bioethical concerns, especially on issues related to reproductive technologies, stem cell research, and the prospect of human cloning. Leon Kass, one of the pioneers of the field of bioethics, elegantly characterized the situation in the first issue of this journal in 2003:

As nearly everyone appreciates, we live near the beginning of the golden age of biotechnology. For the most part, we should be very glad that we do. We are many times over the beneficiaries of its cures for diseases, prolongation of life, and amelioration of suffering, psychic as well as somatic. We should be deeply grateful for the gifts of human ingenuity and cleverness, and for the devoted efforts of scientists, physicians, and entrepreneurs who have used these gifts to make those benefits possible. And, mindful that modern biology is just entering puberty, we suspect that the finest fruit is yet to come.

Yet, notwithstanding these blessings, present and projected, we have also seen more than enough to make us anxious and concerned. For we recognize that the powers made possible by biomedical science can be used for ignoble purposes, serving ends that range from the frivolous and disquieting to the offensive and pernicious. These powers are available as instruments of bioterrorism…; as agents of social control…; and as means of trying to improve or perfect our bodies and minds and those of our children…. Anticipating possible threats to our security, freedom, and even our very humanity, many people are increasingly worried about where biotechnology may be taking us. We are concerned about what others might do to us, but also about what we might do to ourselves. We are concerned that our society might be harmed and that we ourselves might be diminished, indeed, in ways that could undermine the highest and richest possibilities of human life.

Kass went on to observe that such considerations are not merely academic, but bear directly on practical questions of technological development, scientific research, and medical and personal decision-making. As he put it: “Decisions we today are making…will shape the world of the future for people who will inherit, not choose, life under its utopia-seeking possibilities. It is up to us now to begin thinking about these matters.” It was precisely to grapple with such decisions, and the complex range of problems they posed, that Bush had established the President’s Council on Bioethics two years earlier, tapping Kass as chairman.

The Bush-era council was unique in several respects. It was an expert advisory council rather than a regulatory body, aiming to inform but not dictate actual policy decisions. Yet unlike those of other executive branch advisory committees, the council’s roster was populated not just with technical experts, but also with legal scholars, social scientists, ethicists, and philosophers. The diverse composition of the council — Kass saw it as “a Council on bioethics, not a Council of bioethicists” — reflected the breadth and depth of the issues that fell under its purview. The reports the council produced touched not just on whether or how new technologies, such as cloning, could be deployed safely but also on the nature of human dignity, flourishing, and identity, all of which are implicated by biotechnological innovation.

This humanistic approach — what philosopher Adam Briggle has described as a “rich public bioethics” — was arguably unique even within the field of bioethics. According to Briggle, since the 1970s, professional bioethics had grown increasingly technocratic, contributing to a “thinning” of public debate about bioethical issues. As a result, “discussion about ends and goods was largely displaced by a discussion about the means to ensure the assumed needs of autonomy, beneficence, and justice.” In contrast to such “instrumentalism,” Kass invoked what he called a “mandate to raise questions not only about the best means to certain agreed-upon ends, but also about the worthiness of the ends themselves, a mandate to be clear about all of the human goods at stake that we seek to promote or defend.”

Even as the council considered a broader and deeper range of issues than most federal advisory committees, it did not seek to achieve consensus where there was none, establishing instead a distinctive model of deliberation and dissent. At the same time, the council provided detailed policy recommendations and analysis intended to serve the practical needs of government decision-making.

For example, in its 2002 report on human cloning, the council held unanimously that “cloning-to-produce-children is unethical, ought not to be attempted, and should be indefinitely banned by federal law, regardless of who performs the act or whether federal funds are involved.” But as to whether cloning for the purposes of biomedical research should be permitted, the council was split, with seven members in favor and ten opposed. Ultimately, the report offered a range of policy proposals, and gave council members the opportunity to explain the rationales for their personal positions.

The council was not perfect, of course, nor was it uncontroversial. Some critics argued that it suffered from a dearth of technical expertise and a surfeit of groupthink. Others, including the incoming Obama administration, saw its aversion to consensus as impractical. But the council’s commitment to taking the most fundamental questions raised by technological change with the utmost moral seriousness provided a model for deliberation about complex technical issues of great public concern — a model that could be fruitfully adopted for AI today.

A Presidential Council on AI Ethics

We are arguably living at the beginning of a golden age of AI. And, just as Kass said about biotechnology two decades ago, this golden age presents new opportunities for which we should be deeply grateful, but it also raises profound questions about the ends to which our technological power might be directed. It is up to us to address them, to furnish our political leaders — and ourselves — with moral resources to help us strike the ever-delicate balance between freedom and responsibility. Doing so requires moving beyond instrumentalism — a narrow focus on only the safest or most efficient means to achieve already agreed-upon ends. The White House can, and should, take the lead by establishing a presidential council on AI ethics modeled on Kass’s bioethics council.

A presidential council on AI ethics should be similarly composed, with a diverse range of experts, drawing from but going beyond the burgeoning academic field of AI ethics. It should include practitioners in AI as well as experts in computer science, mathematics, logic, statistics, philosophy, psychology, and other social sciences. Like Kass’s council, it should be advisory rather than regulatory, aiming to provide substantive and actionable advice to policymakers. At the same time, it should be similarly comfortable with deep and sometimes irreconcilable disagreements when it comes to policy recommendations and the ethical principles that inform them. Above all, such a council should aim to transcend the narrow instrumental questions that dominate current AI debates, reaching instead for the difficult but ultimately more important questions about human goods and goals at the root of those debates.

Of course, an AI ethics council would be no silver bullet. And, like the Bush-era council on bioethics, it would surely draw political criticism. Though a non-regulatory body, Bush’s council was nevertheless criticized (especially but not exclusively from the left) as anti-science, anti-progress, and too powerful, accused of imposing its “theocratic” views on an unsuspecting public under the guise of disinterested expertise. Ironically, a presidential council on AI ethics today, though it might draw some libertarian ire, would likely garner the most criticism from those (especially but not exclusively on the left) who seek more aggressive federal action against this emerging technology.

Yet taken together, these critiques — that a presidential ethics council is too powerful, on the one hand, and too impotent, on the other — are reflections of the core tensions underlying our public debates over technological innovation and change. These critiques thus point, however inadvertently, to the very need that such a council could fill. The dominant approach to policymaking today, in science and technology policy and beyond, focuses almost exclusively on inputs, outputs, and implementation. But as Kass reminds us, some technologies are not only transformative but raise questions that challenge our conceptions of our own humanity. Like biotechnology, AI calls for careful reflection and prudential action, at least if we are to sustain the conditions for — and to serve the goals of — freedom, dignity, and human flourishing.

M. Anthony Mills, “A President’s Council on Artificial Intelligence,” The New Atlantis, Number 75, Winter 2024, pp. 100–107. Published online on November 17, 2023.
