This is a response to the Fall 2025 essay “AI Ethics Is Simpler Than You Think” by Dominic Burbidge.
Dominic Burbidge rightly highlights the need for AI ethics to begin with the humans coding the algorithms, emphasizing computer programming as a practice with internal “moral ends” requiring “moral habits.” His essay suggests that all AI ethics needs is for practitioners to be informed by good moral theory, which will equip them to make good, moral algorithms. Pope Leo XIV recently tweeted a similar idea, exhorting “all builders of #AI to cultivate moral discernment as a fundamental part of their work.” The sentiment struck a chord: When venture capitalist Marc Andreessen issued a mocking meme in reply, enough people leapt to the pope’s defense that Andreessen deleted the tweet.
Burbidge rightly turned to philosopher Alasdair MacIntyre in his exposition of how human practices have internal standards of excellence and distinctive goods that the practitioners themselves must understand, judge, and pursue. The creation of AI will be no different, he says. Just as medical ethics is best accomplished by the virtuous doctor’s pursuit of the patient’s health, and business ethics involves virtuous accountants keeping faithful records, so AI ethics needs virtuous programmers seeking to articulate and pursue the internal moral ends of the practice of programming.
Yet, as MacIntyre taught his students, what you do not say can be more important than what you do say. (An example the philosopher once used in class was that of your hypothetical overbearing aunt with a preposterously ugly hat. If asked what you think of it, the only acceptable answer would be along the lines of “why, what an interesting hat you have!” as it would come closest to satisfying both honesty and courtesy.) What Burbidge does not say in his otherwise excellent essay is how the “philosopher-technologists” he calls for are to be employed, or how the practice of building AI for flourishing can be institutionally housed. MacIntyre’s insight into the importance of practices only leads to a functioning ethical system when practices are housed in well-ordered institutions, and when the leaders of such practices and institutions have an implicit understanding of how their domain-specific goods relate to the good human life as a whole. This in turn requires participation in a broader tradition of moral inquiry, such as classical liberalism or Roman Catholicism.
Virtuous computer scientists are necessary but insufficient to enact AI ethics. For builders to apply the “moral discernment” the pope calls for — a digital discernment, if you will — they will need companies and ecosystems that welcome this rather than scorn it, as Andreessen did. Recall how OpenAI was founded as an earnest nonprofit with plans to open-source its aligned AI for the good of humanity; in its transition to becoming Sam Altman’s for-profit, power-maximizing fiefdom, it has of necessity pushed out many of the programmers who held that ideal.
Burbidge points to the Cosmos Institute’s “philosophy-to-code pipeline” in seeking to imbue programmers with a classical-liberal vision of human flourishing. Equally important is Cosmos Holdings, its paired investment fund for launching companies that share this vision. Likewise, the pope’s tweet did not come out of nowhere. It is an extract from a statement he made to the Builders AI Forum, an annual gathering in Rome of Catholic technologists who recognize that “every design choice expresses a vision of humanity” and are thus building companies like Magisterium AI to encode those visions.
In sum: Burbidge’s essay offers the right starting point for AI ethics in the moral formation and vision of AI programmers. From there, we need to add AI companies that pursue profit only insofar as it results from the promotion of human flourishing, and we need a clearer articulation of the standards of flourishing in different traditions, with an emphasis on seeking harmonization of contrasting views.
Andreessen had one thing right, at least: it’s time to build. But what we should be building are tools as well as institutions that better serve our wellbeing.