Recently in Paris, Vice President J. D. Vance warned that “excessive regulation in the AI sector could kill a transformative industry just as it’s taking off.” “AI, I really believe,” he told an audience gathered for the AI Action Summit, “will facilitate and make people more productive. It is not going to replace human beings — it will never replace human beings.” No, “the AI future is not going to be won by hand-wringing about safety. It will be won by building.”
Vance is right. And the reason goes well beyond the general tech optimism of his speech. The real reason the AI future belongs to the builders and not the regulators is that figuring out how to use AI well is not a matter of creating abstract rules and then imposing them on AI development. No, the ethics of AI is something we will find by actually building the technology. We will find it in the very practice of programming.
This may seem counterintuitive. The current call for regulation of AI is motivated in large part by a recognition that the development of the Internet — including social media — would have been less damaging to individuals, businesses, the economy, and culture at large if it had been better regulated from the start. It’s true that there has not been much regulation of the Internet, and because that was the last technology boom before the current one, this fact focuses the mind on not making the same mistake twice. But AI is not the Internet, and we have to grasp what it really is before we build up any kind of regulatory framework for it.
There will always be a place for hard ethical rules. And there will be a place for tangible assessments of AI technology’s likely consequences in the near term, like a loss of trust in policing and the judiciary, or extensive job losses in work that can be easily automated. Near-term consequences might be combated with punishment of people who use AI badly, like those who create deep-fake pornography and pedophilic material, or those who let AI make bad health diagnoses under their watch.
But any AI ethics that isn’t centered on the human practice of designing the technology is destined to fail. It will only ever be reactive.
Let’s start with a persuasive worry about whether we can get AI ethics right. Recently in these pages, R. J. Snell argued that ethics will not save us from AI, as a way of critiquing tech entrepreneur Brendan McCord’s ambitious project to infuse the design of AI systems with Enlightenment philosophy.
Here at the University of Oxford we have a new Human-Centered AI Lab, supported by McCord’s Cosmos Institute. The lab’s goal is to create a “philosophy-to-code pipeline” that will “bring together leading philosophers and AI practitioners to embed concepts such as reason, decentralisation, and human autonomy into the AI technologies that are shaping our world.” Snell is concerned that the philosophy part of philosophy-to-code will advance the Enlightenment’s idea of reason without realizing that the Enlightenment ultimately failed as a moral project.
Snell’s 2023 book Lost in the Chaos provides a comprehensive analysis of why he thinks rationalism fails. In a chapter titled “Fever Dreams of Rationalism,” he cautions against thinking that our social problems can be solved by appealing to reason alone. For Snell, Enlightenment rationalism fails to address truly moral questions. He draws on the English philosopher Michael Oakeshott, for whom rationalists demand “perfection, for problems to be solved, for uniform order, and [they] want these immediately and completely.” Rationalism is not really moral reasoning, but more like a way of reconfiguring human dilemmas as math puzzles.
The impatience Snell is worried about treats the ethical as a direct conclusion from what would be optimal for us to achieve universally — a temptation all the greater at a time when technological solutionism runs rampant in politics and law. This approach ignores the reality of human freedom and the journey toward moral purpose that is different from one person to the next. The ethical, Snell argues, is rooted in all dimensions of what it means to be human, not just our rationality. It must be grounded in attention to circumstance, biography, and anthropology. It involves real choices and real people. In this sense, finding the right universal laws is an insufficient guarantor of morality, which also requires wisdom, habit, and relationships.
All these realities set the rationalist up for something dangerous. When a rationalist appeals to reason and gets a disappointingly inconclusive result, he will despair. As a result,
Destruction and creation come far more naturally to this disposition than does reform or patching up. Good enough is not good enough for the rationalist, and instead of gratitude for the good attained by custom, he “puts something of his own making” — the rationalist has a plan, always modeled on the dispositions of the engineer rather than the elder.
Snell worries that we are now committing the same mistake with AI. McCord’s appeal to reason will, Snell suggests, lead us to a view of humanity as able to engineer its own standards of conduct, which is the self-invention and self-projection heralded by Nietzsche.
The alternative Snell proposes is to go back to the basics of Aristotelian ethics: recognize moral goods as real and objective, and grounded in a fuller account of what it means to be human. We are not only rational creatures in pursuit of goals; we are also relational and political beings, and if our broad acceptance of all things AI fails to account for that, we are surely setting ourselves up for more harm than good.
Snell is right: we shouldn’t bring a narrow conception of Enlightenment goals to bear on all it means to be human, or to be good. The problem with this argument is that what it means to be human, or to be good, isn’t the only question we have to answer to get AI right. We need to get much more specific. That’s because AI is a tool. It is a particular kind of tool that we make and remake through our use, which means that the question is really how we should conduct certain human practices well. If we can answer this question about other practices, we should be able to answer it about rather new practices like programming AI. Moreover, AI is a simple and limited tool — simple enough that the rationalist procedure of thinking about how to achieve specific moral ends with this tool is actually close to the right approach. The “philosophy-to-code pipeline” may be on to something after all.
The starting point for AI ethics must be the recognition that AI is a simple and limited instrument. Until we master this point, we cannot hope to work back toward a type of ethics that best fits the industry.
Unfortunately, we are constantly being bombarded with the exact opposite: an image of AI as neither simple nor limited. We are told instead that AI is an all-purpose tool that is now taking over everything. There are two prominent versions of this image and both are misguided.
The first is the appeal to the technology’s exponential improvement. Moore’s Law is a good example of this widespread sentiment: it more or less successfully predicted that the number of transistors on an integrated circuit would double approximately every two years. That looks like a lot, but remember: all you have in front of you is more transistors. The curve of exponential change looks impressive on a graph, but really the most important change came when we had no transistors at all and then William Shockley, John Bardeen, and Walter Brattain invented one. The multiple of change from zero to one is infinite, so any subsequent “exponential” rate of change is a climb-down from that original invention.
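The doubling arithmetic behind this sentiment is trivial to write down. Here is a toy sketch, taking as its starting point the commonly cited 2,300-transistor count of the 1971 Intel 4004; the function name and parameters are invented for illustration:

```python
# A toy model of Moore's Law-style growth: transistor counts doubling
# roughly every two years from the 1971 Intel 4004's 2,300 transistors.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count under a strict two-year doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011):
    print(year, round(transistors(year)))
# Twenty years of doubling multiplies the count about a thousandfold;
# forty years, about a millionfold. Impressive on a graph, but every
# step is still just "more transistors."
```

The point of the sketch is how little it contains: one multiplication repeated. Nothing in the curve tells you anything about the zero-to-one leap of the original invention.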
When technology becomes faster, smaller, or lighter, it gives us the impression of ever-faster change, but all we are really doing is failing to come up with new inventions, such that we have to rely on reworking and remarketing our existing products. That is not exactly progress of the innovative kind, and it by no means suggests that a given technology is unlimited in future potential.
The second argument we often hear is that AI is taking on more and more tasks, which is why it is unlimited in a way that is different from other, more single-use technologies of the past. We are also told that AI is likely to take on ever more cognitively demanding activities, which seems to be further proof of its open-ended possibilities.
This is sort of true but actually a rather banal point, in the sense that technologies typically take on more and more uses than the original designers could have expected. But that is not evidence that the technology itself has changed. The commercially available microwave oven, for example, came about when American electrical engineer Percy Spencer developed it from British radar technology used in the Second World War, allegedly discovering the heating effect when the candy in his pocket melted in front of a radar set. So technology shifts and reapplies itself, and in this way naturally takes on all kinds of unexpected uses. But new uses for something do not mean its possible uses will be infinite.
It is also true that the multifarious applications of AI will cause wide market instability, in a similar way to what happened with the World Wide Web. But like the Internet, AI is and will be a medium well suited to certain tasks and not to others. While we so-called scholars make every effort to avoid talking about computer games (lest we get accused of having played one), in that world the point has already been abundantly clear for a while now. Gaming has some very fine examples of AI bots, and yet multiplayer human-to-human games have continued to thrive, if not increase in demand, at the very same time as these bots have been perfected. It seems that computer gaming is an industry for which AI has limited use, even though the programmed bots outplay many human gamers. The reason is that play is a basic good that engages the social side of being human. We need to play with people like ourselves.
Sometimes, nevertheless, we are surprised by AI’s application to a new type of activity and view it as ushering in a new type of AI altogether — such as a large language model first being able to write a poem, or AI deployment in fintech. In these examples the comparison with the microwave seems to fall flat and it appears AI may indeed be unlimited in its future possibilities. Here, however, what we are dealing with is not AI’s expansion but more and more things being called AI.
It is no secret that the buzz around AI means that every CEO wants shareholders to feel they are not missing out. There is much re-description of existing mechanisms as able to be transformed through AI, which means that our popular understanding of what counts as AI is ever-expanding. But calling more and more mechanisms AI does not provide meaningful evidence for future capacities being unlimited.
A case in point is ChatGPT, which is really at the center of the current enthrallment with AI. ChatGPT is, at the time of writing, described by Wikipedia as “a generative artificial intelligence chatbot.” But when the Wikipedia page was first created on December 5, 2022, it was more humbly “a chatbot developed by OpenAI focused on usability and censoring inappropriate prompts.” The change in the tool’s description to an AI chatbot occurred on February 25, 2023 — the day after Meta released a rival model — despite the fact that throughout this time ChatGPT’s underlying large language model mechanism remained the same. ChatGPT rightly counts as AI and that categorization is now beyond debate, but it is beyond debate in part because our definition of AI is steadily expanding to include any algorithm-based processing of information done with the aid of computers.
Because the dynamics of pretty much anything can be expressed — however primitively — in algorithmic form, it seems almost any mechanism can be reworked to fit under the definition. Are weather monitoring systems a type of AI? Is translation software a type of AI? Is a points-based scoring of candidates for university a type of AI? We may have the impression that AI is expanding, but all we are doing is calling everything AI.
If AI is everything, an all-purpose tool ever expanding in scope and capabilities, then R. J. Snell is right and closes out the debate: we need an all-encompassing ethics to guide its development and use. But what if AI’s distinctive feature is machine learning — the only novel thing that AI-related technologies have brought to the table over the past few decades — and it is a simple and limited instrument? Can we not, then, have an ethics that is specific to AI?
Machine learning is no more and no less than a method of pattern identification and response. Machine learning helps to identify patterns in large data sets, and generates optimized responses to the patterns that have been identified. It is a method that one can choose to adopt for a problem or task at hand. This means that — like chess, or painting, or science — the design of machine-learning techniques is a type of practice, a human activity we can employ to achieve certain kinds of goals.
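What “pattern identification and optimized response” means can be shown in a minimal sketch, in ordinary Python with invented data: find a pattern in a data set (here, a linear trend, fitted by ordinary least squares) and generate an optimized response to new input:

```python
# A minimal sketch of machine learning as described above: identify a
# pattern in data, then generate an optimized response to new input.
# The data points are invented and roughly follow y = 2x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
# The slope and intercept that minimize squared error: the "optimization."
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    """The learned response to a new input."""
    return slope * x + intercept

print(round(predict(6.0), 1))  # extrapolates the identified trend
```

Real machine-learning systems fit vastly richer patterns with vastly more parameters, but the shape of the activity is the same: a human chooses the data, the form of the pattern, and the notion of “optimal,” and the machine does the fitting.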
Chess, painting, and science have their own rules and norms that help establish what is good within them, and thus what makes someone good at them. For example, a scientist must identify repeatable results and offer an honest and transparent account of his or her methodology. The chess player agrees to alternate using white and black pieces in a tournament and abide by the rules of the game. Society’s general rules and regulations — like laws against harassment and stealing — are helpful, but they do not give the full picture of what it means to be a good chess player, a good painter, or a good scientist.
A guide for thinking this through is the late Scottish philosopher Alasdair MacIntyre. In his book After Virtue, he explains that human practices can foster internal moral goods. There are qualities of goodness specific to the kinds of specialized activities one undertakes. Being virtuous isn’t just about being a great person in general, but also being noble in the particular role one plays in society.
Now, many will think that such a heavy moral conceptualization of AI and the practice of actually doing machine learning cannot apply because MacIntyre was describing cooperative human activities — the opposite, it would seem, of an AI that is presently making humans obsolete. But AI is doing no such thing. If AI is simply removing humans from moral decisions, then yes, it has to be taken as bad, point blank. But the reality is actually much more complicated. Machine learning is a mechanism for releasing human intentions into the world, so even if some jobs are changed or no longer needed, human intentionality will continue to direct what AI is. To believe that AI is part of a general trend to remove human agency and human community is to buy the narrative of AI as a capable self-mover, which is as ridiculous as saying language bots will remove the need for language.
The idea that AI is about human replacement remains dominant, and it is ruining our ability to foster a genuine debate about the ethics of how AI gets built. It is quite amazing how much reflection on AI ethics is coming out with the barest of mentions — or with complete omission — of the human programmers behind it. AI programs and machine learning techniques are the product of human authorship and design. This means that if AI has the effect of removing humans from an activity, which can sometimes be a horrible thing, other humans enter in through a different path, and the substitution is a problem if these new agents fail to understand and deliver on the true meaning of the practice. The fact that AI programs and machine learning techniques are all products of human authorship and design means that ethical responsibility will never be detachable from the humans involved in AI’s making and use.
In addition to the fact of programming being an irreducibly human activity, of all types of human activities it is also highly cooperative. Any programmer will be happy to explain how little they can do on their own. Open source, for example, is not just a generous gesture by hippie Californians; it is also an essential condition of success for many programs and platforms, because anything from add-ons to bug fixes to new versions depends on a community of developers who share an interest in seeing the software brought to its full potential. Not to mention the rampant poaching of code that goes on everywhere and is generally endorsed by writer and receiver alike as a method for creative innovation, no matter how much it departs from the older insistence on patents and copyright as necessary to incentivize creativity. I once had lunch in Philadelphia with one of the developers of Google Docs and she was just over the moon that everyone was using it and developing it further. Of course there are the secret algorithms of the search engines, but even these are massively collaborative affairs at the firms offering the services.
The two real reasons we should doubt whether programming counts as a practice in the ethical sense are that, first, it is not a practice with a very long tradition and, second and most importantly, it is not yet evident — to borrow from MacIntyre — that “human conceptions of the ends and goods involved” are being “systematically extended.” While the first of these reasons is something that just needs time, the second brings us full circle: it is true that programmers need philosophy to better think through what they are doing. There has been a growth in vocational training combined with philosophy, like the College of St. Joseph the Worker, and this applies also to programmers, with degrees like the B.A. in Computer Science and Philosophy at Oxford, or places like CatholicTech, a fascinating American research university in Italy that seeks to offer degrees in STEM steeped in philosophy, theology, and ethics.
Now, if you walk around saying you are a philosophical technologist, people will think you are trying a bit too hard to distinguish yourself. But what if this is the only way of really doing technology? MacIntyre seems to think there is good reason we call someone in the tradition of making things with wood a carpenter, or someone in the tradition of making buildings an architect: these are unique practices with their own standards of excellence. Because we do not have much of a tradition of programming, we struggle to think of it as anything more than functional, but that is precisely what we need philosophy for — helping us find the ethics within the practice itself.
There are signs of movement in the right direction: some thinkers are turning away from absolutist questions of whether technology is good or bad at face value, and toward more refined views of the ways of technologizing that are best. For example, the appeal to the virtues, or moral habits, needed for AI ethics is at the forefront of Josiah Ober’s and John Tasioulas’s push to bring Aristotle into the discussion on AI ethics, asking us to think about the habitual ways of doing technology that will be most conducive to human flourishing.
Philipp Koralus, who runs the Human-Centered AI Lab at Oxford, describes the need for “a new class of philosopher-technologists” — for AI developers “who ask how to build systems that truly contribute to human well-being.” This, he writes, requires “learning by doing,” as both philosophy and engineering involve active engagement rather than only theorizing. This means that AI ethicists should be in touch with programmers themselves so as to best articulate the moral ends of what programmers are doing and the moral habits needed to do it well.
There are going to be habits of reason and habits-in-line-with-reason that best make sense of the practice of programming when done well, such as norms of collaborative sharing, which, when given more thought and articulation, can set standards for programming as an ethical practice.
For my part, it seems clear to me that digital technologists have gone overboard in trying to make a product that everyone can use at all times, which has influenced the AI community’s self-perception of its mission. While that ambition may help create a stock market bubble, it is ultimately a misguided attempt to repeat the post-war economic boom of providing white goods — washing machines, refrigerators, tumble dryers, and so forth — to every household. The benefits of AI instead lie in specialized pattern identification and programmed optimized responses, capacities that are of subject-specific utility and require tailored application.
The practice of doing programming well requires being close to the users and end beneficiaries. This means training medical-programmers, legal-programmers, linguist-programmers — and letting go of the insistence that AI is an all-things-to-all-people new deity.