Put Not Thy Trust in Nate Silver

How simulation replaced reality

On the evening of November 8, 2016, much of the country found itself transfixed by the vacillations of a tiny needle. It was the graphical representation of the New York Times’s presidential election forecast, which initially rated Hillary Clinton’s chances of victory at above 80 percent. Around 8 p.m. Eastern on election night, it began to lurch unsteadily toward her opponent. In the end, the needle correctly projected his victory a few hours before it was called. The needle’s jittery motions made it the object of collective outrage on the part of Clinton partisans, but it was fundamentally doing what it was designed to do: model the likelihood of outcomes based on available data.

The New York Times election forecast needle as it appeared on election night in 2016 at 7:18 p.m. Eastern.

By the end of the night, it was clear that the needle had been as much a barometer of the shifting mood of Clinton supporters as a visualization of shifting probabilities. As it haltingly revised the chances of Democratic victory downward, the needle tracked the bewildered state of many who had counted on it and similar predictions to pacify their unease about the already disorienting political trajectory of the country. “Keep Calm and Trust Nate Silver,” one meme counseled, referring to the celebrity data analyst behind the prediction site FiveThirtyEight. Such forecasts performed a tranquilizing function as long as the needle rested on the blue-shaded side of the digital gauge, but wound up stoking the collective panic provoked by Trump’s victory.

The following year, the Times needle came back into liberal favor when it anticipated Democrat Doug Jones’s win in Alabama’s special Senate election. But as the needle’s creators explained, it had done its job equally well in 2016 and 2017. The problem was that people had demanded something it could not deliver: certainty. The element that had most upset Clinton supporters in 2016 was the needle’s jittering, which did not reflect actual moment-to-moment changes, but rather had been incorporated as an extra visual cue to “reflect the uncertainty of the forecast.” However, people’s emotional fixation on mathematical election modeling had scant relation to its real capacities. In 2017, the Times bowed to reader preferences and turned the jittering off.
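
To make the mechanism concrete, here is a minimal sketch, in Python, of how jitter of this kind can be generated, assuming (as the Times described) that each redraw samples the needle’s position at random from within the forecast’s current margin of error rather than pinning it to the point estimate. The function and numbers below are illustrative inventions, not the paper’s actual code.

```python
# A minimal sketch of uncertainty jitter. Assumption: each animation
# frame samples the needle's position from within the forecast's
# current margin of error instead of showing the point estimate.
import random

def needle_position(point_estimate: float, margin_of_error: float) -> float:
    """One frame's needle position on a 0-100 scale (50 is a toss-up),
    sampled uniformly within the margin of error and clamped."""
    low = point_estimate - margin_of_error
    high = point_estimate + margin_of_error
    return max(0.0, min(100.0, random.uniform(low, high)))

# Each redraw samples anew, so the needle quivers even when the
# underlying estimate has not moved at all.
frames = [needle_position(72.0, 8.0) for _ in range(5)]
print(frames)
```

Turning the jittering off, as the Times did, simply pins the needle to the point estimate, which reads as more certain while conveying strictly less information about the forecast’s error bars.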

The controversial Times needle offers a revealing vantage point on the psychosocial history of American liberal politics over the past four years. The real question we should be asking is not how well forecasts have performed in relation to eventual outcomes, but why so many have come to use volatile statistical models as a way of (supposedly) grounding themselves in reality. The recent history begins with Nate Silver, whose career took off with Barack Obama’s victory in 2008 and contributed to the optimistic spirit of the early Obama years. But prediction and modeling have been tied to the technocratic agenda of modern liberalism for far longer. Across decades, their fortunes have risen and fallen together.

A ‘Fabulous Electronic Machine’

Jill Lepore’s new book, If Then: How the Simulmatics Corporation Invented the Future, recounts an earlier phase of the overlap between liberal politics and the attempt to apply predictive methodologies to the political and social realms. Drawing from the archives of the little-remembered Simulmatics Corporation, Lepore reveals that the effort to introduce computer-based modeling into the political arena shaped not only election strategy but various liberal enterprises of the 1960s, including the most disastrous one: the Vietnam War. The story ends with Simulmatics bankrupt and postwar technocratic optimism in ruins. Nevertheless, Lepore argues, the worldview forged out of these projects remains with us today.

Despite the attention it receives, election prediction is a relatively low-stakes game. The illusion of control it provides over the chaotic flux of politics has lately made it the object of large investments, but in the end it does little more than satisfy the 24-hour news cycle’s demand for immediacy. But the intellectual project surrounding such efforts — the aspiration to use computer simulations to model uncertain, multifaceted realities — has had a broader impact. The question Lepore’s book raises but leaves unanswered is how this enterprise has continued to thrive to this day despite its only modest success even in the well-defined domain of electoral prediction — and its catastrophic failures elsewhere.

As Lepore notes, the attempt to introduce computers to politics began alongside the rise of a more visibly influential technology. By 1952, more than a third of U.S. households owned TV sets, making that year’s presidential campaign the first of the televisual era. Computation had also advanced rapidly during and after World War II, but was still largely confined to corporations and labs. The first major televised appearance of a computer occurred during CBS’s 1952 election night coverage, during which Walter Cronkite presented the public with a “fabulous electronic machine” known as the Universal Automatic Computer (UNIVAC).

Much like the Times election needle many decades later, the UNIVAC was supposed to forecast the eventual outcome based on early returns. (In reality, the machine on display in the CBS studio was a fake, and the calculations were to be called in from a real UNIVAC in Philadelphia.) As would also be the case with the needle, the machine failed to impress most of its audience. Perhaps wary of outraging viewers if its predictions faltered, the computer’s operators withheld its initial forecast from the public, leaving Cronkite unable to offer the high-tech election-night splash CBS had hoped for. But according to Lepore, UNIVAC was right: It forecast an Eisenhower landslide, and that’s what happened.

The botched CBS spectacle is the origin story of the Simulmatics Corporation. The company’s founder, Edward L. Greenfield, was a Madison Avenue ad man who was also an enthusiastic liberal Democrat. The ’52 election left him disappointed by the loss of his candidate, Adlai Stevenson, but he also thought he glimpsed the future in the UNIVAC — both his own and his party’s. As Lepore writes: “the way Greenfield figured it, Republicans had made better use of television in 1952, by way of advertising, which meant that Democrats ought to figure out how to make better use of the computer, and fast, because what could be more valuable to a campaign than a computer that could predict the vote?”

To make this dream a reality, Greenfield recruited top academics from M.I.T., Stanford, and UC Berkeley. Many researchers he brought into Simulmatics were linked to two new fields that exercised wide influence in the postwar era: behavioral science and cybernetics. Greenfield hoped that “simulmatics,” a portmanteau of “simulation” and “automatic,” would enter the popular lexicon, as “cybernetics” had due to the success of Norbert Wiener’s 1948 book of that name.

It didn’t, but the Simulmatics project anticipated a great deal of what was to come. As Lepore writes, “by the early twenty-first century, the mission of Simulmatics had become the mission of many corporations, from manufacturers to banks to predictive policing consultants. Collect data. Write code. Detect patterns. Target ads. Predict behavior. Direct action. Encourage consumption. Influence elections.” Yet both the company and the neologism it coined have been forgotten, partly because of Greenfield’s failure to deliver on what he promised and partly because of the downfall of Great Society liberalism, with which he and his company became inextricably linked.

The ‘A-Bomb of the Social Sciences’

From the outset, there was a tension between Greenfield’s immediate goal of improving his beloved Democratic Party’s electoral prospects and his towering ambition to create a new field of science that could use computer simulations to predict human behavior as accurately as they could foresee the trajectory of a missile. Moreover, many Democratic strategists were skeptical of Simulmatics. Advisor Newton Minow — later of “vast wasteland” fame — wrote in a memo during the 1960 campaign, “Without prejudicing your judgment, my own opinion is that such a thing (a) cannot work, (b) is immoral, (c) should be declared illegal.” Despite these misgivings, the Kennedy campaign eventually contracted Simulmatics.

What it promised the campaign was a “People Machine”: a model of the electorate that divided voters into demographic categories associated with certain values, beliefs, and behaviors. Using the latest IBM machine available, the company ran complicated simulations that predicted how the virtual voters constructed within the program would respond to particular messaging and policy choices. The company would go on to claim that its predictions had played a role in two major Kennedy campaign decisions: embracing a strong civil rights agenda, which solidified its support among African American voters; and addressing the candidate’s Catholicism head-on.
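
For a sense of what such a model involves, here is a toy sketch in the spirit of the “People Machine”: a handful of voter types with assumed base rates of support, a policy lever that shifts them, and a tally of simulated outcomes. Every name and number below is invented for illustration; the actual Simulmatics model was far more elaborate.

```python
# A toy microsimulation in the spirit of the "People Machine."
# All voter types, weights, and effect sizes are invented.
import random

# voter type -> (share of electorate, baseline prob. of support)
VOTER_TYPES = {
    "urban_catholics":    (0.15, 0.55),
    "southern_whites":    (0.25, 0.50),
    "black_northerners":  (0.10, 0.60),
    "suburban_swing":     (0.50, 0.48),
}

# Hypothetical effect of embracing a strong civil rights plank,
# expressed as a shift in each type's support probability.
CIVIL_RIGHTS_PLANK = {
    "urban_catholics":    +0.02,
    "southern_whites":    -0.08,
    "black_northerners":  +0.15,
    "suburban_swing":     +0.01,
}

def simulate(message_shift, trials=10_000):
    """Estimate P(win) for a given message under crude turnout noise."""
    share = sum(
        weight * min(1.0, max(0.0, base + message_shift.get(name, 0.0)))
        for name, (weight, base) in VOTER_TYPES.items()
    )
    wins = sum(1 for _ in range(trials) if share + random.gauss(0, 0.01) > 0.5)
    return wins / trials

print(f"P(win) with civil rights plank: {simulate(CIVIL_RIGHTS_PLANK):.2f}")
print(f"P(win) without:                 {simulate({}):.2f}")
```

The output is only as good as the assumed base rates and shifts, which is precisely why, as Lepore suggests, such a machine may do little more than restate the conventional wisdom fed into it.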

But it’s unclear, according to Lepore, whether Simulmatics did anything more than provide a technically impressive confirmation of what was already “fairly commonplace political wisdom among his close circle of advisers.” Nevertheless, after Kennedy’s narrow victory, Simulmatics’ work for the campaign became a succès de scandale when Harper’s published a story that dubbed the “People Machine” the “a-bomb of the social sciences” and implied that Kennedy’s win owed a great deal to its clandestine operations. The outraged reaction to this exposé anticipated in nearly every detail the panic around Cambridge Analytica after the 2016 election. Pundits and journalists described Simulmatics as a creepy mind-control machine and accused the company of manipulating the public. The Kennedy campaign was embarrassed, fearing that the story cast a pall of illegitimacy over the incoming administration.

As it turned out, the Harper’s piece was written by Thomas Morgan, one of Simulmatics’ own PR men (he did not mention the affiliation in his bio). While the article seemed to cast the company’s work in a sinister light, it also touted the efficacy of its methods. In this, too, Simulmatics resembled Cambridge Analytica, since the dystopian fears about Cambridge’s ability to influence social media users resulted in part from journalists taking the company’s PR at face value.

For Simulmatics, the gambit worked. While the scandal around the Harper’s story upset the Kennedy White House, it also raised the profile of the company. Soon after — like FiveThirtyEight many years later — it was brought on board by the New York Times to enhance the paper’s coverage of the 1962 midterm elections with granular statistical analyses and precise predictions.

War Games

The Simulmatics stint at the New York Times went poorly, and the partnership ended acrimoniously after the election. But the company failed upward, partly on the strength of its founder’s Democratic Party connections. While Greenfield’s political affiliation scored Simulmatics large contracts, it also mired the company in the policy failures that would eventually leave 1960s liberalism in tatters. The fatal blow came when the company found a kindred spirit in Secretary of Defense Robert McNamara, whose decisions under the Kennedy and Johnson administrations contributed to the disastrous escalation of the Vietnam War.

What Simulmatics had tried to do with elections, with mixed results, McNamara hoped to do with warfare: turn it into an exact predictive science. Computer simulation had first been used to determine missile paths during the Second World War; McNamara took this logic to its ultimate conclusion. He wanted to run simulations that would determine the inputs (troops, ammunition, enemy body counts) required to produce the desired output (the victorious resolution of the war in the minimum necessary time). Simulmatics claimed it could do just that. By many accounts, McNamara’s fixation on quantitative indices of success blinded him to the disasters on the ground. He was fighting a simulated war while the real one raged elsewhere.
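
To see why this inverse logic was seductive, and why it could float free of reality, consider a deliberately crude sketch: given a made-up model that maps inputs to a “progress index,” a brute-force search for the cheapest inputs that reach a target. Everything here (the production function, the cost weights, the target) is invented; the point is that the optimization runs flawlessly while answering questions only about the model itself.

```python
# A crude sketch of the inverse problem: find the cheapest inputs
# that push a modeled "progress index" past a target. The model,
# costs, and target are all invented for illustration.

def progress_index(troops_k: float, sorties_k: float) -> float:
    """A made-up production function standing in for the war model."""
    return 0.4 * troops_k ** 0.5 + 0.6 * sorties_k ** 0.5

def cheapest_inputs(target: float):
    """Brute-force the least-cost (troops, sorties) meeting the target."""
    best = None
    for troops in range(0, 501, 10):        # thousands of troops
        for sorties in range(0, 501, 10):   # thousands of sorties
            if progress_index(troops, sorties) >= target:
                cost = troops * 1.0 + sorties * 0.5   # arbitrary cost weights
                if best is None or cost < best[0]:
                    best = (cost, troops, sorties)
    return best

print(cheapest_inputs(target=12.0))
# Whatever this prints, it says nothing about whether the modeled
# "progress index" tracks anything happening on the ground.
```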

“The U.S. war in Vietnam was the first war waged by computer,” writes Lepore. Simulmatics and its better-known counterpart, the RAND Corporation, made that dubious feat possible. By the mid-sixties, Simulmatics had an office in Saigon, and most of its revenue came from Defense Department contracts. The People Machine, which the fervent liberal Ed Greenfield had imagined as an instrument to help Democrats win elections and implement progressive policies, had instead become part of the pretext for continuing to fight a futile war at the cost of many lives. Unintentionally exposing the callousness of the whole operation, top Simulmatics consultant and M.I.T. professor Ithiel de Sola Pool referred to Vietnam as “the greatest social-science laboratory we have ever had.”

Simulmatics became embroiled in other dubious Cold War liberal enterprises, including a Johnson administration–sponsored plan to develop a “Riot Prediction Machine” for American cities and an attempt to build a computer model of the Venezuelan economy to help make policy recommendations that would forestall a communist insurgency. The results were no more promising than those of the simulations it ran in Vietnam, albeit with fewer harmful consequences. By the time of Richard Nixon’s presidential victory in 1968, the bold liberalism of the Kennedy and Johnson administrations was in retreat. Simulmatics would file for bankruptcy just two years later.

Lepore concludes that while Simulmatics’ guiding vision of “decision by numbers, knowledge without humanity, the future in figures” resoundingly failed, it also endured. “In the twenty-first century,” she writes, “it would organize daily life, politics, war, commerce. Everything.” If the attempt to use computer simulations to model complex social phenomena bore little fruit, how did it persist and propagate itself? One explanation is that later iterations of the grand schemes pioneered by Simulmatics were more effective because of technological advances. Another is that they were not — but the underlying logic of simulation managed to impose itself by other means.

The Rise of Simulation Nation

In the final pages of If Then, Lepore ominously links Simulmatics to the notorious Cambridge Analytica, but then goes on to dismiss Cambridge as another overhyped flop: “Faster, better, fancier, pricier, but the same hucksterism, and as for the claims of its daunting efficacy, the same flimflam.” Despite the prolonged panic after 2016 — which, like the Harper’s exposé on Simulmatics, mainly served to reinforce Cambridge’s own exaggerated claims about its techniques — there’s good reason to believe that the company’s impact on the election was nil. Even if we look at targeted social media advertising broadly, some researchers have concluded that, contrary to the narrative of most post-election coverage, it had only minimal influence on voters’ decisions.

Despite more sophisticated machines and a vastly greater volume of data available for analysis, the oracular power we attribute to simulation appears in many ways as intellectually dubious as it did in the 1960s. FiveThirtyEight has a solid enough track record with its election predictions, and it has generally framed its forecasts with appropriate skepticism. But strangely, many of the same observers who treated it and other prediction sites with disdain after they supposedly overrated Hillary Clinton’s chances in 2016 were prepared to accept the far-fetched claims of Cambridge Analytica whistleblowers about that company’s ability to predict and change voter behavior. Today’s attempts to “weaponize” simulation to achieve social and political ends don’t necessarily seem to work any better than McNamara’s schemes did in the 1960s.

Lepore’s book provides some hints of how simulation has continued to inspire both faith and fear despite its mixed track record. But for a more complete answer, we must look elsewhere. Simulmatics is only a small part of the much larger story of how computer simulation, from its limited beginnings guiding missiles in the Second World War, came to shape the broader cultural logic of the contemporary world.

Today, simulation is central to our collective imagination, as the iconic status of films like The Matrix shows. Eminent scientists now seriously debate the hypothesis that we all live in a computer simulation. How we got here was the theme of a great deal of what is now called “postmodern theory,” much of which seemed preposterous when it appeared, but looks more reasonable now.

This cultural evolution has been hastened by the way that simulation alters the relationship between the proverbial map and territory. Commenting on Simulmatics’ attempt to dynamically model the evolution of public opinion leading up to the 1962 midterms, Lepore notes that “real-time computing collapses the distance between something happening and a machine conducting an analysis of it. Covering an election … is like watching out for incoming missiles: both involve evaluating data instantly, with no lag for processing, because delay spells failure, and even disaster.”

As a result of this process, map and territory become chronologically indistinct, because simulation renders them simultaneous and places them in a constant feedback loop. Moreover, because the simulation is supposed to permit the territory to be acted on, the territory may recede into the background. Robert McNamara sometimes seemed to be waging war against a mathematical model of the Viet Cong more than the real insurgency.

The Hyperreal

In the 1970s, the decade after Simulmatics went bankrupt, the French philosopher Jean Baudrillard would make the collapse of map and territory central to his account of the intellectual consequences of simulation. In the traditional concept of representation, reality precedes its reproduction. But through the logic of simulation, Baudrillard argues, the reproduction fuses with the reality it represents. The phenomena it attempts to capture are never not being modeled, tested, and probed, and the goal is to be a step ahead of them at all times. As a result, “the territory no longer precedes the map, nor does it survive it.”

The aspiration to blur the two, according to Baudrillard, is implicit in all simulation. The Simulmatics project was infused with just this ambition. Its scientists believed their models would anticipate election victories, Viet Cong attacks, and riots. The virtual versions of these events would precede their real versions, and thereby enable their prevention. Simulmatics, then, hoped to generate realities that would never really exist. In Baudrillard’s terms, such simulation seeks “the generation by models of a real without origin or reality: a hyperreal.”

One implication of this approach is that the predictions generated by a given simulation may be non-falsifiable. In one application of the riot prediction model it developed, Simulmatics identified a precise date and time when, it claimed, a riot would occur in Rochester, New York. Were they right? As Lepore recounts, “according to Simulmatics, the riot occurred, exactly as predicted, but police were there to keep the mayhem to a minimum.” If the aim is to predict whether a given event will occur in order to prevent it from happening, successful and unsuccessful simulations will be indistinguishable. In both cases, after all, the event will not occur in reality; but in the case of a successful simulation, it will exist as simulated “hyperreality.” For Baudrillard, the totalizing aspiration of simulation is for the model to always precede, and preempt, reality.

When a simulation aims to calculate the likelihood of different possible outcomes, it might seem easier to rate the accuracy of the predictions. However, since the forecasts generated by simulation are necessarily probabilistic, their failure to predict a given result correctly will never discredit the underlying approach. Once we accept the basic premises of simulation, in other words, it is never truly “wrong.” Washington Post columnist Dana Milbank made this point about Nate Silver before the 2016 election:

How can Silver be predicting a healthy Clinton victory while also noting she is in danger of losing (and simultaneously making allowance for the possibility she’ll win in a landslide)? Well, this is the result of a complex statistical method known as covering your bases.
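
Milbank’s quip aside, there is a standard statistical answer here, and it only sharpens the point: a probabilistic forecaster can be held to account, but only in aggregate, across many forecasts, never by any single race. A brief sketch using the conventional Brier score, with invented forecasts and outcomes:

```python
# Scoring probabilistic forecasts in aggregate with the Brier score:
# the mean squared error between stated probabilities and 0/1
# outcomes (lower is better). All numbers below are invented.

def brier_score(forecasts, outcomes):
    """Mean squared error of probabilities against binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Ten hypothetical races: stated win probabilities and actual results.
probs   = [0.85, 0.70, 0.60, 0.95, 0.30, 0.55, 0.80, 0.40, 0.90, 0.65]
results = [1,    1,    0,    1,    0,    1,    1,    0,    1,    1]

print(f"Brier score: {brier_score(probs, results):.3f}")   # 0.110 here
# A forecaster who always answers 0.5 scores 0.250; skill shows up
# only across many forecasts, never in a single race.
```

But this kind of accounting takes dozens of contests and years of patience, which is exactly what an election-night audience fixated on a single needle does not have.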

Keep Calm and Trust the Model

In an account of Simulmatics’ efforts to turn Vietnam into a “real-time war game,” Lepore quotes an analyst who remarks that the models they built were “precise but not necessarily accurate: ‘We could tell to the second decimal place from the results what was going on, but we had no idea whether it was true.’” Does a one-point shift in a polling average reflect a genuine shift in public sentiment? It doesn’t matter: the average is what becomes the subject of news reports. Again, the model supplants what it’s supposed to represent.
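
A toy example of the analyst’s distinction, with invented numbers: polls that share an unmodeled bias average out to a reassuringly exact figure that is still systematically wrong.

```python
# Precise but not accurate: an average reported to two decimal
# places can still be several points off if every poll shares a
# bias the model does not capture. All numbers are invented.
import statistics

TRUE_SUPPORT = 46.0   # the quantity the polls are trying to measure
SHARED_BIAS  = 3.0    # e.g., a turnout assumption every pollster shares

polls = [TRUE_SUPPORT + SHARED_BIAS + noise
         for noise in (-0.4, 0.9, -0.2, 0.1, -0.6)]
average = statistics.mean(polls)

print(f"Polling average: {average:.2f}")       # 48.96: precise to two decimals
print(f"Actual support:  {TRUE_SUPPORT:.2f}")  # and three points off
```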

When reality and a model of that reality appear to be mismatched, in other words, we may discard the model, or we may discard reality. Baudrillard argues that we have collectively discarded reality. The cultural logic of simulation has altered the epistemic framework that determines what is real, leaving us with the hyperreal where the real once was.

Baudrillard made this argument well before the appearance of social media, but the rapid embrace of these virtual social realms confirms that most of us were already comfortable with the idea that reality could be plugged into a coded model in which digital avatars compete for empty numerical rewards. To run its simulations, whether of the U.S. electorate or the Vietnamese peasantry, Simulmatics constructed model populations made up of ideal individuals matching the demographic distributions of the relevant population. By joining platforms like Facebook, we have enabled the creation of a far vaster model that can be used to similar ends — one that can no longer be meaningfully distinguished from what it simulates.

In his 1940 story “Tlön, Uqbar, Orbis Tertius,” Jorge Luis Borges (a key influence on Baudrillard) tells the strange tale of the encyclopedia of a fantastical planet called Tlön. Once this monumental text is widely disseminated, it gradually causes human beings to reject the real world in favor of its imaginary one. “Reality,” Borges’s narrator remarks, “longed to yield. How could one do other than submit to … the minute and vast evidence of an orderly planet?” The “orderliness” that is the secret of Tlön’s appeal is also the selling point of simulation. It offers up a version of reality that can be monitored, tested, tracked, controlled, and predicted at every stage.

But Borges’s narrator remarks upon the limitations of this virtual realm: “Enchanted by its rigor, humanity forgets over and again that it is a rigor of chess masters, not of angels.” Back in 2016, many experienced the uncanny moments when the Times needle quivered rightward as a “glitch in the Matrix.” But as long as the precision of the simulated model retains its seductive power, such glitches will be fleeting. Simulation, as Ed Greenfield predicted, is fully automatic.

Geoff Shullenberger, “Put Not Thy Trust in Nate Silver,” The New Atlantis, Number 63, Winter 2021, pp. 43–51.