Rethinking Public Opinion


It is hard to overstate the importance of public opinion polling in American political life. Surveying, sorting, collecting, reporting, and analyzing public opinion is a multibillion-dollar integrated industry reaching so deeply into our politics that its influence can no longer be untangled from the functioning of our broader political system. Public officials and political types commonly think about and discuss changes of national mood in the standardized language of polling reports, while for print and electronic media the idea of “public opinion” has by now become synonymous with the numbers provided by the convenient technology of polling. But while the industry has established itself as a crucial and respected player in our public life, we hear very little about its internal workings and its limitations.

Opinion surveys are conducted by specialized opinion research organizations as well as by newspaper chains, TV networks, conglomerates of corporate advertising and market research firms, policy institutes within universities, and well-funded, quasi-public foundations. Clients sponsoring surveys to learn about public attitudes include journalists, lobbyists, special interest and advocacy groups, product associations, government agencies, and product marketers, along with aspiring candidates for office, or incumbents who hope to remain there.

Impressive too is the diversity of the issues, partisan causes, problems, and organizational interests for which information is sought. Curriculum content in schools, capital punishment, environmental regulations, same-sex marriage, gun control, nuclear energy, genetic testing, confidence in government, urban-rural sprawl, civil rights, stem cell research, immigration reform, health care, federal and state poverty programs, foreign trade policies, and an enormous array of other issues are recurring subjects of polling. However complex the circumstances surrounding each public question, opinions about it are solicited, converted into quantified reports for poll sponsors, and summarized for media audiences.

From time to time, journalists and others who are routinely occupied with politics might complain about the quality and accuracy of information collected about public opinion through polling. Most often, they cite distortions and bias built into the wording of poll questions, or the exaggeration of reported findings. But while the polling industry acknowledges some concerns along these lines, it seldom admits to more pervasive problems that, if widely understood, would raise fundamental challenges to its well-established methods. The immense importance of public opinion polling in American politics, and the under-reported problems at the heart of the enterprise, combine to call for a serious critique of the polling industry, its assumptions, and its methods.

Quantifying Opinion

In 1936, George Gallup, a father of modern opinion polling, successfully predicted the re-election of President Roosevelt over Alf Landon, while his chief competition, the Literary Digest poll of voters, incorrectly predicted a Landon victory. Gallup accomplished his feat by directly interviewing a representative selection of voters about their intentions, thereby avoiding the built-in bias of the Digest’s mailed “straw ballots.” His innovative method was welcomed by politicians and journalists as a superior and technologically advanced approach to gauging the views of the public.

But Gallup’s triumph provided only modest guidance to the burgeoning field. There was no consensus in the new polling industry on how opinions might be captured and assessed when the question at hand was not an either/or choice in a national election that everyone knew about and had presumably given some thought to. Would the new technique also serve to assess less distinct views about more poorly understood and complex issues and questions? What, exactly, was to be granted the status of current public opinion? Where was it located for collection? And how to measure it when found? These questions posed serious challenges and evoked a critical intellectual moment for the field. Responsible practice required a clear sense of that loosely bounded something waiting out there to be studied and described. It called for a psychology of public opinion.

As it turned out, these concerns would be overwhelmed by the sheer pull of Gallup’s technology. Questions of theory would be abandoned for a single conceptual orientation with rules for observation, recording, and interpretation that we now call the opinion research paradigm. It reduced the enormous diversity of linked issues awaiting pollsters’ study to a set of discrete and aggregated data lifted from narrowly defined frames of reference to constitute “public opinion.” The concept of designed and directive interviews provided the basis for a convenient and efficient technology well suited to wide application and subsequent marketing promotion. It would produce straightforward numbers that would bear a resemblance to the kind of results produced by elections, and so would fit neatly into the structure of our politics. Alternative approaches to learning about public opinion as a distinct reality, or about its sources and processes of formation, stability, or evolution, were set aside in favor of a basic posture of inquiry still with us: call people on the phone, ask them a few questions, then add up the coded answers, and call the results public opinion.

Acceptance of that now-standardized methodology followed from its fortuitous intersection with a rising academic movement: the doctrine of positivism that sought to unify social science. Historians and others who had been observing the rise and fall of nations, empires, and civilizations wanted to get beyond armchair ruminations and archeological records to reveal (or construct) historical and societal laws similar to invariant physical laws discovered by the eminently successful physical sciences. In sum, the human world would be studied in the same ways as the natural world. Meanwhile, by the 1920s and 30s academic psychologists had grown impatient with the proliferation of loosely bounded theories of mind, associational or introspective explanations of affect, and especially the psychoanalytical school which then stood in the way of acceptance of a science of psychology as a legitimate academic discipline. Their turn toward naturalism would thereafter join social research with the physical sciences and require measurements based on visible evidence to qualify for empiricist objectivity. Certainty demanded facts from sense experience and confirmed observation, not theory and abstraction.

Measuring physical or material things is often simple and straightforward: density, weight, dimension, temperature, volume, motion, and so forth yield numerically articulable measures. But for those of the naturalist persuasion who wanted to escape the old mentalist outlooks for studying societal interaction, the actual measuring of collective opinion presented a difficult problem. With so many aspects and expressions of opinion and belief, convincing measurement seemed elusive.

Everyone took for granted the existence of public opinion; how could it be doubted? One need only recall its force when expressed in historical upsurges of popular fervor; of raging mobs in city streets; of puzzling bursts of hysteria, frenzy, and panic breaking out in crowds — or even just the expression of public opinion at the polling place on election day. So how might the rationality of science comprehend the often irrational and highly volatile attitudes of the masses? Could the light of consciousness be caught in a specimen jar?

Questions about how to study opinion paralleled those presented by human intelligence. Although important differences among individuals were commonly recognized, psychological researchers had long disagreed on just what intelligence is. Like intelligence, opinion lacks a visible presence or a discrete location for ready examination. And scientific discipline demands a uniform vocabulary with accepted meanings, as in the given world of physical substance. So which objectifying gauge, meter, lens, prism, chemical test, or other cognitive metric would be most fitting, comprehensive, and dependably accurate, considering the peculiar nature of the subject? For both opinion and intelligence, the answer offered would be clearly visible performance.

The winning solution for studying public opinion was modeled on the work of a (then) radical school of theory and research practice called behaviorism, which became prominent in psychology faculties in the 1930s. It was particularly notable for its studies of reinforcement learning and the physical reactions of confined (often hungry) rodents to imposed stimuli, whereby the creature’s response is a “dependent variable.” By running many such animal experiments designed to permit direct observation, simple tabulations, and easy replication in totally controlled laboratory settings, behaviorism asserted scientific objectivity. It claimed to sweep away fuzzy speculations about interior mental attributes, which led in turn to new methods for improving people — or more correctly, their “behavior.”

Despite a mechanistic orientation and rigid methods that led to its later decline, behaviorism had introduced a radical intellectual innovation. All at once, the troublesome mind-body problem was settled by collapsing one inside the other. Focus had shifted from feelings and thoughts; what counted instead was their expression in movement or “verbal behavior.” Simply stated: for this scientistic orientation there is only “behavior.” Purposes, meanings, memories and integrated experience, choice and resistance, normative demands, linguistic framings, and so forth, capable of shaping reality for individuals, were displaced by an austere technology.

This new approach came at an opportune time for the emerging opinion-polling industry, which wanted not only to predict election outcomes, but to measure people’s views about social problems or government policies. Emphasis on visible or “empirical” behavior also promised an efficient way to deal with the scale, complex character, and transience of opinion. The Stimulus/Response concept, moved from the experimental lab to telephone interviews, provided a research model: opinions could be seen as measurable reactions to the stimuli of delivered questions. When added together, quantities of numerically coded answers (“responses”) to a questionnaire on specific topics by selected individuals would be equivalent to, and now constitute, the opinion of a public.
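
To make the mechanics concrete, here is a minimal sketch, in Python and with invented replies, of the aggregation step the stimulus/response paradigm implies: a uniform question, numerically coded answers, and a tally reported as “public opinion.” The codes and labels are hypothetical, not any polling organization’s actual scheme.

    from collections import Counter

    # Hypothetical numeric codes for replies to one uniform question:
    # 1 = favor, 2 = oppose, 9 = don't know / refused
    responses = [1, 2, 1, 9, 2, 1, 1, 2, 9, 1]

    tally = Counter(responses)
    for code, label in [(1, "favor"), (2, "oppose"), (9, "no opinion")]:
        print(f"{label}: {100 * tally[code] / len(responses):.0f}%")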

Of course, if polls were to collect opinions as responses, the questions themselves would have to be stated in exactly the same way to every respondent to ensure a uniform stimulus. Individuals would listen to and answer the identical question asked of all others interviewed, so that each answer could be consistently observed and counted in the same way as were “behavioral” reactions by confined experimental subjects. From this elevated perch, the only opinions the survey recorder would accept and include would be those said in reply to specific questions, no matter what else opinion-rich repliers might have liked to say at the time about the topic.

Since poll questions had to be formulated as distinct stimuli, the industry soon realized that replies by those interviewed were vulnerable to particular wording and to the sequencing of the questions asked. If pollsters want to find out what citizens will say about secret government use of electronic technology, for example, they get different reactions depending on whether questions refer to “surveillance,” “eavesdropping,” “wiretapping,” “listening in,” or “telephone intercepts.” While considerable attention has been given over the years to designing questions that foreclose alternative interpretations, any question formulated with words inevitably carries the potential for personal connotations and inferred meanings, which alters the stimulus for one or another individual. Varied or alternative phrasing also can produce different opinions on the same topic from the same person. Obtained opinion thereby becomes an artifact and partial creation of the investigative method itself instead of an ample depiction of what people have been thinking, feeling, and saying to each other. The range of potential responses, as well as the content of questions, must be carefully designed for presentation to respondents as choices on a multiple-choice list, which constrains their options and forces them to compress a bundle of views into a single point on the offered spectrum.

The cognitive model and conditions of the laboratory were retained in another, less-noticed practice of opinion polling: the anonymous, encaged status of those selected by pure chance to stand as compliant objects for observation and measurement. The assumption underlying this approach is that opinion is an almost physical characteristic of the subject, which can be isolated and studied in ideal laboratory conditions. The method enforces a kind of conceptual isolation by preventing contagion between selected subjects so none can talk with or influence any of the others surveyed. This practice is, of course, entirely foreign to the ways opinions, attitudes, and issue outcomes are commonly shaped in the busy hive of everyday life in a democratic republic. And if each of those interviewed were to be told he is standing as a delegate whose replies to questions would be attributed to perhaps hundreds of thousands of other non-interviewed people outside the sampled group, would not that realization make him wonder, “How can I get it right?”

Information from polling surveys, moreover, becomes encapsulated and soon depleted of significance by the absence of a time dimension. Attitudes and opinions are presented in media summaries as having no duration; they are wholly punctual; they show what was said by individuals in a sample on a particular day and date, but usually without indicating continuity with or change from their previous attitudes. Inasmuch as none will be interviewed again, their contributions record a moment of suspended time, thought-presence as ephemeral as crowd faces in a journalist’s photoshoot. Persistence or change depicted as “trends” therefore has to be represented by comparing additional surveys done on later days with different respondents. Public attitudes thus earn a constructed, temporal existence by sequential polling. While today’s poll report may display only a page of charted facts, an illusion of motion and ongoing narrative can be imagined as a succession, each slightly altered, reminiscent of the sketched cartoons of the early zoetrope. Other conventions, like showing change with curved or jagged lines, present a further illusion of equivalence.
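
The zoetrope effect is easy to mimic. In the toy Python sketch below (the numbers are invented), each point on the “trend” comes from a different sample of different people, yet the connected sequence reads as one continuous public changing its mind.

    surveys = [
        ("Jan", 52),  # percent "approve" in January's sample
        ("Feb", 49),  # a different thousand respondents in February
        ("Mar", 47),  # and a third, unrelated sample in March
    ]
    for month, approve in surveys:
        print(f"{month}: {'#' * (approve // 2):<30} {approve}%")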

To choose just one prominent example, the General Social Survey, conducted by the National Opinion Research Center since 1972, has regularly recorded public expressions on a spectrum of topics including capital punishment, religious adherence, consumer confidence, public trust, and approval ratings of the president. Each is depicted in published graphs, compressing states of mind, ideologies, national moods, and shared concerns. But viewed together, can they be commensurate? Since different people in separate samples were interviewed each time, and the meaning of questionnaire terms has surely changed over thirty years, the actual reasons for charted change are obscured by the method.

The Central Dogma: Statistical Samples

The industry’s claims for the scientific accuracy of its research practices rest on a single platform: the statistically derived formulas for random or unguided selection of subjects for interviews. Probability sampling provides, we are told, unbiased inference about a largely unobserved population from observed small samples. Randomized selection in polling relies on statistical probability methods that had already been widely used in agricultural research, industrial quality control, risk calculation by gambling casinos, and insurance actuaries long before the advent of modern polling — but the significance of this device, as both a cognitive leap and a practical challenge for democratic politics, went largely unrecognized when introduced. By the logic of sampling, a comparatively few people remotely selected for a poll interview would be elevated as proxies or surrogates authorized by arcane theory to voice the needs, wants, and aspirations of an entire country, and to speak for very many silent citizens who would never be heard on the matter at hand. Moreover, as polls became more influential, some of those selected for phone interviews could, without acquainting themselves with relevant facts or spending time in tedious or disconfirming deliberations, cast a significant polling “vote” for either side of an issue or legislative proposal.

Polling organizations rely on samples of one thousand or so individuals to represent a nation of 300 million. Each respondent thus presumably expresses the views of perhaps 200,000 other adults. But do they really? In its reliance on statistical probability theory, the polling industry always asserts that a sample of a thousand or so individuals will provide the same results, within a margin of error of plus or minus 3.5 percentage points, and with a 95 percent “confidence level” that similar samples would obtain the same results. A small thought experiment will show how that result need not follow.
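
The arithmetic behind that claim is the textbook margin-of-error formula for a simple random sample, sketched below in Python (the standard formula, not any pollster’s proprietary method). It shows where the familiar “plus or minus three points or so” comes from; published figures like 3.5 typically allow extra room for the design effects of weighting and clustering.

    import math

    n = 1000   # typical national sample size
    p = 0.5    # worst case: an evenly split question
    z = 1.96   # multiplier for a 95 percent confidence level

    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"margin of error: ±{100 * moe:.1f} percentage points")  # about ±3.1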

Suppose all of the one thousand adults in a random sample were to be asked to respond to a typical (and actually surveyed) opinion question on a topical issue, such as “Do you support or oppose using nuclear power to generate electricity?” or “Do you favor or oppose a law allowing homosexuals to marry?” The same participants would be re-interviewed, this time at length and in person, when they could speak freely on whatever they thought about the issue, expressing their attitudes, doubts, concerns about effects, alternative approaches, and so forth. That formerly excluded material, fully taken down by attentive interviewers, would provide a quantity of amplified and explanatory content for their opinions and attitudes, along with personal reflection and recalled experience of a more interpretive quality than the thousand brief “responses” permitted by the usual survey. Does this body of amply elaborated views more accurately represent actual “public opinion” of the nation’s population? How could we know? Can both the short form of either/or replies (or a five-point scale) and the fully attentive exercise be equally valid, and equally representative of the country? And would the greater nuance introduced by more detailed surveys make the small sample more or less representative of our large country?

Describing and reporting opinion by the standard method appears to show consistency because the content and complexity of the interviewee’s thought has been compressed to fit the categories of the questionnaire. If my Agree (or Strongly Disagree) was not intended to carry the same weight, personal import, residual doubt, or expected consequences as your Agree (or Strongly Disagree), reporting both of us as in agreement (or not) on a topical question fails to distinguish important differences in our attitudes and priorities.

The purpose of random sampling in survey work is commonly misunderstood. While the technique can confidently select individuals for interviews because they have tangible physical presence, opinions and attitudes have no such materiality. On the other hand, if the distribution of individuals in the population is assumed to be accessible to sampling, it follows (and conveniently so) that opinions are similarly distributed for similar access. The randomness of a sample of one thousand individuals therefore is important not because they are somehow equivalent to the general population, but because they produce replies which, taken together, can be imputed as representing a portion of the total body of attitudes throughout the country.

However, in order for opinions-in-themselves to be randomly sampled, they must be identified as distinct units or well-bounded items that can be folded into statistical calculations. To make that possible, survey organizations have devised scales and coding mechanisms to trim down and compress the immense variety of shapes, weights, and dimensions of belief, attitude, prejudice, and the like. That shrinkage, of course, requires a willingness to disregard actual but unspoken differences in intended meanings, interpretations, and contextual relevance within each individual reply. That move in turn conceptually assumes that the opinions of the vast unsampled (“unobserved”) majority of the population might be similarly coded into the same numerical scale, while ignoring its even greater (unsurveyed) potential for diversity. This potential grows substantially the further the issue in question is from a simple yes or no choice.
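
A small sketch (invented replies, a generic five-point scale) shows how the coding step erases exactly the differences described above: two respondents in quite different states of mind leave identical marks in the data.

    SCALE = {"strongly agree": 5, "agree": 4, "neutral": 3,
             "disagree": 2, "strongly disagree": 1}

    # What was said aloud, and the single box it was squeezed into:
    verbatims = [
        ("I agree, but only if the costs are shared fairly and reviewed later.", "agree"),
        ("I suppose I agree; honestly, I've never thought about it.", "agree"),
    ]

    coded = [SCALE[choice] for _, choice in verbatims]
    print(coded)  # [4, 4] -- two very different minds, one identical code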

The Consumer Confidence Index offers a ready illustration of these underlying cognitive tensions. Sponsored by the Conference Board, the Index and its component scales are used to chart rising or falling changes in consumer sentiment and producer expectations. Its reports are widely watched by marketing and financial interests, while movement of its numerically scaled levels is used as a predictor of economic change. The Index has gradually become a reified entity accepted as a distinct and consequential public reality, even though it has no tangible materiality.

So can we postulate an actual existence of “confidence” itself that does not rely on quantitative confirmation by continued surveys, and prevails independently of assenting interviews? Does surveying actually detect and disclose an objective, overarching national confident mood really out there, and not just a packaged creation of a methodological practice which would evaporate if surveys ceased? And if the confidence of consumers is an existing real entity, is that also the case with other significant public opinions, such as the often-polled and disputed “trust” level among our citizens? Answers to a question about personal confidence or trust will surely be given if the question is asked, but to what extent do the replies, even when considered, tell us about an amorphous overarching national something that exists and prevails before poll questions are asked?

The industry sometimes claims that replies to a poll’s questions reflect national opinion because the random sample itself is a “cross section” of average or typical Americans that follows the contours of the nation’s demographic mix. Samples are also commonly weighted — that is, patched with answers imputed by undisclosed formulae to correct for non-response, or adjusted to balance demographics of age, location, sex, household size, and so forth. But how do pollsters know which criteria are relevant? If the universe of descriptive criteria comes, as it often does, from the Census, then its poorly bounded and crammed classifications become an issue in themselves. With the doubling of the nation’s population during the last half-century of enormous social and technological change, more opportunities have become available for individuals to adopt alternative self-identities and orientations not amenable to stereotypical profiles. The U.S. Census Bureau recognizes six categories of marital or “partners” status, eight educational levels, ten levels of income, nine adult age levels, eight categories of households, and now offers the option to select more than one race. But even though Census classifications have been divided and augmented over the years, they still do not reflect the enormous variety and difference among the nation’s people. It is far from clear how a typical sample of one thousand automatically dialed phone numbers can dependably select an assortment of individuals which captures in extent and distribution a true picture of the mingled, interactive, often diffuse, unarticulated, and inconsistent views within the populace.
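
In its simplest “cell weighting” form, the adjustment just mentioned is a ratio of population share to sample share within each demographic category. The Python sketch below uses invented shares; real pollsters use more elaborate (and, as noted, often undisclosed) formulae, such as raking across several variables at once.

    # Invented illustration: the sample under-reached younger adults.
    population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}  # e.g. census targets
    sample_share     = {"18-34": 0.20, "35-64": 0.50, "65+": 0.30}

    weights = {g: population_share[g] / sample_share[g] for g in population_share}
    print(weights)  # {'18-34': 1.5, '35-64': 1.0, '65+': 0.666...}
    # Each hard-to-reach young respondent now "speaks for" half again
    # as many citizens as an easily reached older one.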

Meanwhile, a variety of limiting factors constrains the ability of pollsters to capture key segments of the population — segments representing a significant portion of the public. For instance, English is a second language for some 10 percent of the total population, which would suggest that many of them actually “hear” different questions in a poll about an American issue or problem. Persons who are deaf, or those afflicted with autism or dyslexia, present particular difficulties for interview question sequences. Hard to reach in other ways are individuals who are deeply depressed and those who live in custodial settings. Others who formerly would have been confined to psychiatric hospitals now live among us with the help of continued psychotropic medication, but still are not easily located for interviews. Also less available are people who suffer from dementia or severe mobility limitations and reside in assisted living arrangements as well as those at hospices in stages of terminal diseases. Significant as well are people left out of opinion-collection circuits because they are among the hundreds of thousands in transit as migrant laborers or, for example, interstate truckers. Over a million men and women are in prison in this country, with many more politically marginalized as disenfranchised felons on parole. The Department of Housing and Urban Development reported in 2007 that more than 750,000 homeless people live in shelters, transitional housing, and on the street. Then there are the many others dwelling in the shadows of drug addiction or medical alcoholism, and those barely able to read or write. These categories easily add up to millions of Americans not proportionately represented by opinion polls, while the lived circumstances and conditions of the anonymous and marginalized reduce the relevance of the stereotyping demographic profiles used for polling reports. Simply because their views may not be readily accessible to the metered attention of the currently favored research technology, their perspectives and understandings need not count for nothing.

The Ethnographer Visits

Even if sampling procedures were able to deliver statistically accurate cross-sections of the entire population, replies by those selected would still be of limited value because too many lack meaningful information about the issue at hand, or even familiarity with it. Survey organizations have long recognized the way limited understanding of complex issues biases poll results and have tried to correct for it, but deeper problems remain.

One such problem was raised by Daniel Yankelovich, a senior eminence in the polling industry and the founder of the opinion research organization bearing his name, when speaking at a 1998 conference on wider participation in policy development for genetic research. He pointed out that such public policy issues often lack resolution because of a failure of engagement with the affected public, and the continuing disconnect between policymakers and the public itself: “Experts and elites live in one world, the public in another. These worlds have different concerns, agendas, vocabularies, and subcultures, and are barely connecting with each other.” His diagnosis can be confirmed throughout standard opinion polling, mostly done not in face-to-face interviews or in focus groups, but by telephone calls.

Those who have agreed to participate in a polling interview commonly assume that the opinions they offer are their own, personally arrived at, without previous rehearsal. They seldom appreciate how the issues or problems being read to them have been defined in a particular form by a professional staff shaping the descriptive content of the questions and thereby the “opinion” given in reply. Inasmuch as replies must be fitted without qualifiers or reservations into some set of options offered, the phrasing of views may sound more confident, assertive, final, or dismissive than the respondent actually feels or than he would express himself. Moreover, the issue raised by the pollster’s questionnaire has been extracted as a distinct and free-standing topic from a grid of associated concerns, but respondents will not learn why sponsoring interests decided to pay for its study over others meriting attention. No close reading of poll questions is needed, however, to recognize the authoritative expert presuming to reduce complex economic or political situations to compact alternatives: “Do you think it is more important to pass additional tax cuts to stimulate the economy now or do you think it is better to hold off on tax cuts to make sure that the budget does not go into a deeper deficit?” “On the whole, do you think our investment in space research is worthwhile or do you think it would be better spent on domestic programs such as health care and education?”

In everyday life, of course, people who have opinions on an issue or who are dissatisfied with politics will say so plainly to friends or family; they will justify their feelings without recourse to the vocabulary of professional political analysts. When speaking as poll respondents therefore, they tend to conflate a cited issue or topic with their own ideas about “the real problem” and mundane concerns seen from the windows of their own problem-filled lives. And their “issues,” if they ever use that term, are often not the same as those identified by elites and academics. But when they find themselves speaking to an interviewer, they must adopt a different stance of attention, and reply to questions-as-written or else be put down (in both senses) as “don’t know” or “no opinion.”

That interviewing situation might remind us of ethnographers’ stories in which an anonymous emissary from the First World appears among long-settled residents of the hinterlands in their everyday surroundings. Unexpectedly put in an awkward position toward an intruder who asks questions (perhaps while the rest of the household eats dinner and listens), the nominated informant must confront a complicated social issue, legislative proposal, or name of a barely-known office seeker. As in ethnographers’ visits to faraway places, the interviewee will not learn the reason for sudden attention to “us out here,” or anything about the political culture of the well-educated experts who select and define those issues. And the situation is anomalous in another way for interviewees whose work life is spent in low-skill jobs performing highly routine, confining, and closely supervised tasks, where daily experience consists of direction and evaluation flowing from the few to the many, with hardly any initiative or critique flowing the other way.

Important aspects of the interviewee’s self-identity and shared affiliations may also be obscured by the artificiality of the occasion and the projected authority of the remote caller. Confronted by really big problems, such as trade imbalances or charter schools or the country’s agricultural policies, what to do but improvise a passable reply — or apologize? Being called to be interviewed, itself a gratuitous moment, imposes stress of its own; how to reckon unsayability (one’s many doubts, hopes, or half-remembered, disorganized facts) against implicit expectations to give a credible, coherent speech performance then and there before the projected image of a judgmental caller? Surely, the situation can revive doubt and anxieties of classroom recitation in childhood, or interrogation as a courtroom witness.

But the natives soon catch on to the rules of engagement. Standard style for asking poll questions portrays problematic issues as abstract and generalized, so the interviewee hears a tacit invitation to adopt the same perspective as the confident caller. To take on the role of a proper respondent, he must think of the issue at hand in the phrasing and elevated outlook of distant others, as in this one about the nation’s energy needs: “Should increasing the production of petroleum, coal, and natural gas be a priority, or should conservation?” After hearing that arbitrary version of a complex national energy problem, respondents are not permitted to speculate, as elites themselves dependably do, about its buried assumptions, or alternatives, or long-term costs, or possible divisive consequences, or necessary trade-offs, or conflicts with other issues — always involved in real-world practice. Similar this-or-that oppositions are routinely imposed in questionnaires for sloganized goals of a “better environment,” a “stronger military,” or “lower taxes.”

Beneath the assurances of neutrality offered by trained interviewers, the downward-looking, in-charge mentality shows through. They want to get the numbers and go on to the next call. No time for, or interest in, the particularity of the surroundings or a full picture of those asked to reply. Those who are interviewed also begin with the obvious disadvantage of being unable to consider the problem in advance, or prepare their comments so as to show themselves favorably, as do members of the political class invited to appear on TV panels.

Often, the awkward immediacy of inquiry about serious policy options is relieved by the caller turning a questionnaire solicitation into a breezy chat. A permissive tone may be adopted to imply that answering the posed questions is like taking an easy, fill-in-the-blanks quiz: “There are no wrong answers.” Few of the interviewed notice the emollient phrases commonly used to preface (and dumb down) the questions posed: “As you may know…” or “Generally speaking…” A recent poll question on the large and difficult issue of immigration policy starts with another permissive lead, while offering just two seemingly simple options that nonetheless ended up in media reports portraying a divided nation of polarized views and hardened positions: “Overall, would you say immigrants are having a good or bad influence on the way things are going in your country?”

Such language also serves as unctuous assurance that the person interviewed need not have factually grounded views to express. The caller’s demeanor and voice of detached rationality imply that both the interviewee and interviewer are reasonable people with temperate attitudes arrived at by a broad “overall” grasp of an issue, no matter the emotional content (fear? anger? disgust?) of a topic of inquiry. Well-managed interviews attenuate intensity of feelings and beliefs with dumbing-down phrases like “the way things are going” while offering closed-end choices that discourage nuanced views. Subtle slanting and the pretense of fair-minded equivalence also show in cool turns of phrase: “Which concerns you more right now? That the government will fail to enact strong new anti-terrorism laws, or that the government will enact new anti-terrorism laws which will restrict the average person’s civil liberties?” All that metered charm, of course, makes it difficult to call attention to neglected aspects of an issue which could evoke quite different survey results.

Some interviewed people are unwilling to admit, “I don’t know anything about that,” while, “I couldn’t care less” would sound rude or ignorant. Or they may be reluctant to talk about their feelings (“attitudes”), or may have been raised to avoid appearing too “opinionated” with an unseen stranger. Or they may not say what they really think to an official-sounding voice, or may just give any made-up answer so they can get back to what they were doing before the phone interrupted them. All such replies will be counted with equal weight as opinions from those who answered with substantial understanding of the issue, and since the sampling method greatly magnifies cases by extrapolation, each unconsidered reply contributes to a misleading portrayal of national opinion.
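
A toy simulation makes the arithmetic of that distortion visible. Assume (the figures are invented) that 70 percent of respondents hold a considered view, of whom 40 percent favor a measure, while the other 30 percent answer at random. The blended poll reports roughly 43 percent in favor — a number belonging to no actual group — and extrapolation then attributes it to millions.

    import random

    random.seed(1)
    n, careless_share = 100_000, 0.30
    favor = 0
    for _ in range(n):
        if random.random() < careless_share:
            favor += random.random() < 0.5   # a made-up, coin-flip answer
        else:
            favor += random.random() < 0.4   # a considered respondent: 40% favor
    print(f"reported 'favor': {100 * favor / n:.1f}%")  # about 43%, a figure no one holds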

Scripted interview procedures and narrowly structured survey instruments are also poor substitutes for undemanding conversational talk. Telephone empiricism has particular difficulty in observing what is known as “emotional prosody,” a linguistic function of nonverbal aspects of “talk” which signal feelings and help to maintain meaningful exchange. After years of daily practice, talking is a routine back-and-forth of saying and hearing, then saying more. Opinions are reshaped even as they are expressed. Spontaneous everyday exchange is supplemented with facial expression, eye contact, tone, gesture, posture, and comportment. In ordinary talking, intended meaning is clarified and adjusted to replies (or silence) from listeners. If an interlocutor is not known well, talk is steered by a quick estimate of what he can be expected to hear and understand. Attempts to communicate intended meaning are not always successful, so a second or third version may be attempted, each speaker all the while trying to maintain the appearance of a sensible, right-thinking person and not to lose esteem or provoke the listener to break contact. All such nuance is lost in a fleeting interview limited to talking within constricted choices over the phone.

The Politics of Public Opinion Polling

From the beginning, opinion polls not only measured public views but also shaped them. The flow of news about opinion and attitudes, along with the daily approval/disapproval ratings of major political figures, has a recursive effect, circling back to influence the content and strength of opinions and attitudes waiting out there for gathering. The polling industry, having appointed itself as a sort of supplementary branch of representative government that delivers periodic dispatches expressing the People’s Will, shows little interest in conveying its ambiguities and branching consequences, and takes itself to be an observer more than a player in politics. When TV network anchors announce, “A new poll shows that a majority of Americans agree that [the government should whatever…]” the subtext suggests that a particular issue has been depoliticized and is all but settled, so further contention is unnecessary. Easily read charts and prominent numbers displayed on the screen convey an implicit claim of speaking clearly for the nation, or identifying a grievance that demands redress by courts and legislators.

Frequent reports from polling organizations in print and electronic media on opinions and attitudes of the public can ironically result in a less active or inclusionary democracy. Although publicity about polls creates an appearance of public participation and expression, survey respondents have no actual opportunity to discuss an issue or an alternative proposal among themselves. Polling methodology keeps respondents separated from each other to prevent them from forming a quorum or caucus — a common event in free societies — but thereby affords no opening for clarifying positions, exchanging information or assurances, and sharpening or amending existing views, to arrive at common ground and shared understandings. How this contrasts with the exuberant progressivism of George Gallup and the early advocates of polling! They believed that polling would inaugurate the next, purer phase of democracy; engaged citizens would debate issues in a conceptual public square toward eventual influence on legislation. But the absence of deliberation between respondents to questions during polling encounters continues to go unremarked, as does their being deposed from the status of conversant citizens into autonomous units who briefly support or oppose, agree or disagree. For its part, the TV audience at home watching sound-bite summaries has little disposition to take notes, much less to seek opportunities to discuss implications of the posted numbers — so our opinions begin to look more like polling results, rather than the other way around.

From a longer perspective, support for positivist social research by survey organizations, as well as by media which also conduct or sponsor polls, has in effect confirmed and strengthened the secular technocratic culture of late modernity and its pervasive themes of utilitarian, rationalistic, and instrumental control from a distance. Opinion polling is itself one more routinized technology used to assemble, under the banner of scientific objectivity, masses of statistical information of every kind for steering large systems.

Whatever may lie behind individuals’ answers to uniform survey questions, when aggregated they are accepted as defining fact events, carrying an apparent solidity in venues that matter and an abstract authority to crowd out (or obviate) attention to other, less well formulated sources of evidence of change and stability. A truly remarkable result of the polling technique has been the creation of an invented, semi-official “public opinion” entirely composed of increments — that is, separate and brief units of speech about specific issues or problems. Our ordinary and vernacular sense of a broadly experienced state of mind or expectation among the nation’s people is never confirmed, however, because a comprehensive portrait cannot be realized from such increments. Obviously, numerical results reported for the many surveys done one at a time for different purposes and topics are not additive.

This state of affairs brings to mind political scientist Sidney A. Pearson, Jr.’s comments in a 2004 article on the illusion of certainty emitted by computational operations:

The numbers that emerge from opinion polling do less to illuminate the problems involved than to create a false sense of precision where imprecision is properly called for…. The quantification of public opinion as the fundamental reality of opinion is typically purchased at the price of moral reasoning, which tends not to be quantifiable by its very nature.

How remarkable, then, that the exacting habits of the laboratory, and its controlling mentality alert to exterior, measurable movement, still prevail in the polling enterprise, which confines itself to spoken and heard expressions of ideation as its means of capturing fuzzy, loosely bounded “objects” called opinions and attitudes from one corner of a vast realm.

What to Do About a Failing Technology?

The one thing the polling industry has always counted on is that people would give their opinion when asked. But that assumption is growing increasingly tenuous, and with it each of the potential distortions and problems discussed above grows sharper and deeper. Through the years, the industry has relied on virtuoso statisticians to justify patching and “adjusting” samples, filling no-answer spaces with typified answers from demographically “equivalent” stand-ins. But polling professionals have recently begun to express more concern about refusals and other sources of “non-response,” which have sharply increased because of cell phones, caller ID, voicemail, the Do-Not-Call Registry, and similar factors. A special issue of Public Opinion Quarterly last year was given over to the problem. Most interesting was the warning voiced in that issue by political scientist Cliff Zukin, then-president of the American Association for Public Opinion Research: Because cell-phone use has made it harder to find and interview representative samples — thereby creating non-random error in household surveys — “our operating model, or paradigm, is breaking down.”
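
The “patching” described above can be pictured as something like hot-deck imputation, sketched below in Python with invented records: a refusal’s missing answer is borrowed from a demographically matching respondent, so the published total conceals how much of it was never said by anyone.

    import random

    random.seed(0)
    respondents = [
        {"age": "35-64", "region": "south", "answer": "favor"},
        {"age": "35-64", "region": "south", "answer": "oppose"},
        {"age": "35-64", "region": "south", "answer": None},  # a refusal
    ]

    donors = [r for r in respondents if r["answer"] is not None]
    for r in respondents:
        if r["answer"] is None:
            # Find "equivalent" stand-ins by profile and borrow an answer.
            matches = [d for d in donors
                       if d["age"] == r["age"] and d["region"] == r["region"]]
            r["answer"] = random.choice(matches)["answer"]
    print([r["answer"] for r in respondents])  # the gap is now invisible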

But given the other sources of concern we have discussed, declining response is surely not the deepest problem the polling industry faces. Imputing public opinion by inductive reasoning from standard samples is already becoming less trustworthy as diverging characteristics multiply in an expanding and diversifying population, and as it does, the methodological flaws that have always plagued polling become more evident and problematic. Still, it is no use to complain to survey organizations about their methods and practices. Survey staffs can always wave around a statistical study from their files to reject any critical comment out of hand. Found useful by clients, and with no opposition from the press, national public opinion polling has become a methodological fixture; the medium has become its own message. Beyond that, the central dogma — that a random sample of individuals can speak for and represent an entire country — cannot be questioned by outsiders.

The opinion and attitude research industry can be seen as a continuing body of work over seventy years, most of it done from a single cognitive orientation: research findings must be quantified to achieve the authority and certainty of science. Quantification is made possible by treating public opinion as composed of choices among options made by selected individuals replying to designed questions posed by an interviewer so as to yield countable units. Fundamental conceptual problems have been left unsettled: Does all that “opinion” reside within individual persons (the familiar “individualist bias” of opinion research that leaves out valid sources of opinion and its maintenance, such as established institutions and advocates)? Or, following positivist economy, can it be harvested from observations of reactions to a bracketed verbal inquiry? Although polling methods offered a new way of learning what people might say on matters of civic importance, their practitioners showed little interest in epistemic theory or longstanding intellectual concerns about the implications of alternative cognitive paradigms. As a result, certain cognitive tensions built into the assumptive foundations of the survey enterprise remain there today.

The field in its current state is reminiscent of another behavioral technology: the IQ test. Its techniques were in wide use a half-century ago, and years of criticism were required to bring down the presumptive authority of its hardened numbers in employment and educational decisions. Absurd as it may now seem, the basic notion was that by scoring penciled replies to a set of puzzle-like questions, testers could assign to each person an intelligence “quotient” — a number — to define and rank how intelligent (or not) an individual is, and thereby to predict what prospects each would have for school success and, inevitably, to restrict access to employment in large organizations. Its instrumentally produced scores were presented both as indicative performance numbers on an arbitrary scale and as actually portraying underlying differences in inherited or formed “intelligence.” Although there was little agreement within the psychological disciplines about the constituents of intelligence or whether they could be separately measured, the appeal of numerical measurement was then irresistible. Almost no one believes in the monolithic IQ anymore; like opinionating, intelligence appears in different occasions and contexts of realization and forms of expression.

Breaking the grip of the standard media-ready opinion poll will be more difficult; the distance between the measurement technology and the nature or character of the things being measured is much harder to discern or explain. Continued attention by political interests and advocacy organizations has promoted confidence in the results of opinion polls, while the industry’s methods for identifying, describing, recording, and aggregating them have formed a resident background by which we now refer to and even think about “opinion” and “opinions.” The eminent success throughout late modernity of products and processes derived from scientific research continues to distract popular awareness from the often equivocal nature of their benefits, even as they come to define the conditions and circumstances of everyday life. At the same time, professional vocabularies and managerial grammar, institutionally located and imposed, continue to shape our understanding and to deflect opposition. One component of that loosely linked system, the remote observation and reporting of the opinions and attitudes of the public, has been codified by a widely practiced soft technology now folded into the media landscape. Habituation to that firmly established and assertive presence acts to defer recognition by the polity of the erratic, unreflective, and misleading interpretations about us delivered by its narrowed forms of social inquiry and tossed into the contentious public square.

Among the strengths of our own history there remains a tradition of confidence in the processes of self-government in a republic of free and responsible people. Although public understanding flourishes unpredictably, it can move in spontaneous ways through dense political environments to express itself and to influence the course of the nation. Our apprehending of a still prevailing common mind, divided or joined as it often has been, and our own local ways of looking, listening, talking, and working together, still need to be supported and supplemented with alternative sources of reliable information and wise counsel. The voice of reason may be low, but it is persistent.

Thomas Fitzgerald, "Rethinking Public Opinion," The New Atlantis, Number 21, Summer 2008, pp. 45-62.