A year or two ago, when she was in high school, my sister-in-law mentioned that her peers had taken to calling each other “bots,” a term of abuse meant in much the same way that my peers had once, regrettably, used the r-word. This seems to have been tied to the brief period when “bot” was the term of abuse du jour among online progressives, the equivalent of the online right’s “NPC,” or non-player character.
“Bot” — premised mostly on a misunderstanding about Russian intervention in the 2016 election — was typically used to accuse right-wing social media users of being insincere agitators in service to the Kremlin. “NPC” somewhat more playfully refers to a video game character directed by a computer rather than a human subject, deployed to suggest that progressives’ servility to political fashion had overcome their powers of independent thought.
Contextual differences aside, “bot” and “NPC” mean basically the same thing. They are facetious expressions of the same earnest suspicion: that our interlocutors, for all their semblance of a shared humanity, are not really human, not really possessed of an inner world — that, as American Psycho’s Patrick Bateman put it under slightly different circumstances, “I simply am not there.”
A great deal of online argumentation leans on accusations like these. Chronic Twitter users will be familiar with “Types-of-Guy” taxonomies. In a 2019 essay in The Point, Justin E.H. Smith describes a still-popular device that does the same sort of thing:
There are memes circulating that are known as “bingo cards,” in which each square is filled with a typical statement or trait of a person who belongs to a given constituency, a mouth-breathing mom’s-basement-dwelling Reddit-using Men’s Rights Activist, for example, or, say, an unctuous white male ally of POC feminism.
The cards function as a comprehensive table charting the types of things a person of that group typically says, every move he can make. And so
whenever the poor sap tries to state his considered view, his opponent need only pull out the table and point to the corresponding box, thus revealing to him that it is not actually a considered view at all, but only an algorithmically predictable bit of output from the particular program he is running. The sap is sapped of his subjectivity, of his belief that he, properly speaking, has views at all.
A bot’s soft, human exterior (or, more likely, his human-looking profile picture) only masks a lifeless, algorithmic interior. These are accusations not only of insincerity but of inhumanity. Whereas the inner world of a human being is populated by sincere questions and heartfelt convictions, bots and NPCs have “talking points” and programmed responses — either cold machine logic or unthinking and brutish imitation.
This is perhaps the most uncharitable accusation that one could possibly level against an opponent. It also sticks to most of us. You almost certainly know someone to whom it applies in an exceptional way — someone whose every thought has been taken captive by some political discourse or another, who can apparently think or speak of nothing else, who seems to respond to all intellectual stimuli in an automatic and predetermined way, and to be constitutionally incapable of exploring any new conceptual territory. Be honest: It more than likely describes you, even if just a little bit. It certainly describes me. Nothing is so predictable, after all, as having recourse to prefabricated rhetorical bingo cards. Nothing is so typical of an NPC as whinging about bots.
How many people in our country go about their lives burdened with the well-founded suspicion that their human-looking fellow citizens are not true humans at all, but partisan automata? We are a nation of human beings who constantly, conspicuously fail the Turing Test. The possibility conceived in that test — of a world in which human and machine behaviors are effectively indistinguishable — was inherent in the project of creating lifelike machines from the outset. But it goes back further. Thomas Hobbes, who proposed that mechanical automata “have an artificial life,” also treated them as the paradigm for human life. And we’re familiar with the descendants of this anxiety in the work of Philip K. Dick. For the most part, we have imagined that it would be machines whose behavior would become indistinguishable from that of humans, not the other way around.
Perhaps the first work of art to take the latter tack — unsurprisingly, a product of the MySpace era — was Basshunter’s 2006 Eurodance hit “Boten Anna,” which tells the story of a young woman mistaken for a chatroom bot. Something similar is at work in the popular “pitchbot” Twitter accounts — New York Times Pitchbot, Federalist PitchBot, and so on — which mock those publications for their rote predictability. These are instances of art-imitating-life-imitating-art: people pretending to be bots imitating human writers who write like bots.
More seriously, Will Arbery’s play Heroes of the Fourth Turning, the finest work of art to come out of 2019, offers an explicitly demonological presentation of media-saturation. Each of the play’s young conservatives is possessed: veteran and Catholic convert Justin is possessed by his complicity in violence, Simone Weil–like Emily by the world-embracing sympathy engendered by her own chronic illness, developmentally-arrested Kevin by something like sexual pathology, and far-right media influencer Teresa by her inner pitchbot. While the other forms of possession on display are more spectacular, finding at times somewhat Exorcist-like modes of expression, Teresa’s is perhaps the most profound. Her friends all have inner lives, albeit vexed and fraught. Teresa, by all appearances, has none. She’s been hollowed out. This is horror as harrowing as anything supernatural, courtesy of our decade’s own peculiar demons.
Adam Elkus, in the previous issue of this journal, writes that behind the new regime of social media speech policing is “the fear of chaotic human behavior.” But what scares us about our neighbors most enraptured by some cause is not human unpredictability so much as algorithmic inhumanity. We fear less that their political convictions will lead them to act in ways we can’t foresee than that, like robots, they never stopped to think about the program they are running. This is a robot uprising of a different sort than we are used to hearing about, one whose shape is only just becoming clear to our collective imagination, yet which makes a familiar impression. That automated systems deserve blame is well understood. Whatever one thinks of the desire now to use those systems to suppress the unruly output of our flesh-and-blood hardware — and I don’t think at all highly of it myself — it’s at least consistent. There’s a word for this kind of procedure: debugging.