Surveillance Humanism

The unholy union of AI and HR is coming.

Bloomberg columnist Ben Schott recently suggested that Amazon and Facebook should be given seats at the United Nations. The argument is that since these giant corporations exercise the power and influence of a modern nation state — and in many cases, considerably more power and influence — they should be held accountable to the same mechanisms governing international relations. There is certainly some logic to this proposal, especially given concerns over the role Big Tech already plays in global politics, whether facilitating electoral interference, censoring information, or providing the facial recognition software for tracking down political dissidents.

Yet the political use of technology is also nothing new. Technical innovation — particularly the development of “machine intelligence” — has always evolved hand-in-hand with political innovation. The English mathematician Charles Babbage is generally credited with inventing the first computer, but in fact he made a more lasting contribution to political economy. A prototype of his Difference Engine, a calculating machine, was produced in 1822, but like many generously funded government schemes, the finished product disappeared beneath mounting costs, increasing delays, and contractual disputes with the chief engineer. And perhaps just as well; the completed machine would have involved 25,000 individual parts and weighed over four tons. Yet as Babbage explained in his book On the Economy of Machinery and Manufactures (1832), the mechanization of mental labor was only partly intended to redress “the inattention, the idleness, or the dishonesty of human agents.” Its principal benefit was managerial. Mechanization allowed one to clearly individuate the various sub-processes involved in any complex operation, and thereby improve their overall efficiency. This became known as the Babbage Principle, the financially cut-throat extension of Adam Smith’s division of labor and a forerunner of modern management practices of reducing labor costs by differentiating between high-skill and low-skill tasks and paying workers accordingly. From its very beginning, then, the concept of “machine intelligence” had a double meaning, ambiguous between the apparently purposeful behavior of the machinery and the information such machinery allows you to gather about your employees.

Reviewed in this essay
Human-Centered AI
By Ben Shneiderman
Oxford ~ 2022 ~ 377 pp.
$24.95 (cloth)

These two interrelated issues of technological control and political control frame the recommendations advanced in Human-Centered AI, a new book by computer scientist Ben Shneiderman. Beginning in the 1980s, Shneiderman pioneered the research and design of some of the familiar ways in which we have all come to operate personal computers through screens — for example, using highlighted hyperlinks and touchscreen keyboards. He has also long been critical of the direction of artificial intelligence research, and its narrow-minded focus on purely technical innovation. By contrast, his proposed “Human-Centered Artificial Intelligence” (HCAI) framework recommends using AI to provide practical solutions to everyday problems, seeking “not to replace people but to empower them.”

It sounds promising in principle. Yet in the execution, we encounter passages like this:

HCAI is based on processes that extend user-experience design methods of user observation, stakeholder engagement, usability testing, iterative refinement, and continuing evaluation of human performance in the use of systems that employ AI algorithms such as machine learning. The goal is to create products and services that amplify, augment, empower, and enhance human performance. HCAI systems emphasize human control, while embedding high levels of automation.

This is not a rare instance of infelicitous phraseology; Human-Centered AI is largely composed of such jargon and bureaucratic buzzwords, short on concrete examples but amply illustrated with PowerPoint-style flowcharts and numbered action points. The book functions, therefore, both as a discussion of contemporary AI research and as a kind of corporate manual for middle management, one as religiously devoted to the human-resources regimentation and tick-box assessment of modern employees as a nineteenth-century industrialist tract. Babbage would have loved it.

According to Shneiderman, AI research has focused on the wrong objective. It has pursued what he calls “science goals” — utopian projects to build computers that replicate human activity by means of pattern recognition, language translation, game playing, or more nebulous notions of “consciousness.” These projects measure success by how well a computer performs some task relative to human benchmarks, rather than by how useful it actually is to have a computer do this. They are often guided by an implicit assumption that people are ultimately just complex machines too, and that AI research will eventually resolve the outstanding problems of humanity.

The problem with AI’s grandiose science goals is that blue-sky thinking can often lose sight of the reality on the ground. In 2016, a Florida man was killed when the autopilot on his Tesla Model S failed to recognize a large white eighteen-wheeler crossing the highway. A statement by the U.S. National Transportation Safety Board cautioned that the crash was “an example of what can happen when automation is introduced ‘because we can’ without adequate consideration of the human element.” The Tesla’s limited automation technology was designed only to keep the car in its lane and adjust speed in response to vehicles driving ahead of it, but the term “autopilot” encouraged false expectations and less vigilance on the part of the driver. In 2019, the Boeing 737 MAX was grounded following a crash off the coast of Indonesia and another in Ethiopia. A faulty sensor caused the autonomous control system to pitch the plane downwards — an error that could have been rectified if the pilots had known that the new system had been installed. As Shneiderman argues, these tragedies were the result of replacing human operation without properly considering its context — the result of technical development for its own sake.

These are problems of out-of-control technology. But another problem is that as ever more aspects of our lives become governed by autonomous algorithms — everything from Netflix recommendations to mortgage applications — we must feed the machine ever more data about ourselves. A reliable algorithm requires a reliable population, neatly packaged into useful information and operating within prescribed parameters. Automation has come a long way from cranking the handle on several tons of rotating metal cylinders in the 1820s, but it has always reflected broader intuitions about society. The earliest “chatbots” in the 1960s, for instance, worked on the basis of symbolic manipulation, isolating individual words and phrases and transforming them according to purely formal rules of syntax. Joseph Weizenbaum’s ELIZA program could approximate human conversation simply by repeating statements back to the user in the form of a question, punctuated with a handful of rote expressions of interest or sympathy. To take an example Weizenbaum offered:

User: My boyfriend made me come here.

Eliza: Your boyfriend made you come here?

User: He says I’m depressed much of the time.

Eliza: I am sorry to hear you are depressed.

User: It’s true. I am unhappy.

Eliza: Do you think coming here will help you not to be unhappy?

User: I need some help, that much seems certain.

Eliza: What would it mean to you if you got some help?

User: Perhaps I could learn to get along with my mother.

Eliza: Tell me more about your family.

As Weizenbaum made clear, the system deliberately parodied psychoanalytic techniques — “tell me more about your family” — a joke singularly lost on the therapeutic community, which seriously considered mass implementation of the program, predicting that “several hundred patients an hour could be handled by a computer system.” If Charles Babbage saw the workforce as cogs in the machine of industry to be optimized for economic efficiency, the self-help gurus of the Sixties added that workers only needed to be fed the right inputs in the right order to find self-fulfillment.
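
The trick is thinner than the transcript suggests. The following is a minimal sketch of ELIZA-style pattern matching; the rules and canned phrases are illustrative approximations, not Weizenbaum’s original script:

```python
import random
import re

# A handful of illustrative pattern-and-response rules. These roughly
# approximate ELIZA's approach; they are not Weizenbaum's original script.
RULES = [
    (re.compile(r"my (.+) made me (.+)", re.I), "Your {0} made you {1}?"),
    (re.compile(r"i am (.+)", re.I), "I am sorry to hear you are {0}."),
    (re.compile(r"i need (.+)", re.I), "What would it mean to you if you got {0}?"),
    (re.compile(r"mother|father|family", re.I), "Tell me more about your family."),
]

# Rote expressions of interest, used when nothing matches.
DEFAULTS = ["Please go on.", "I see.", "How does that make you feel?"]

def respond(statement: str) -> str:
    """Produce a 'therapeutic' reply by purely formal manipulation of the input."""
    statement = statement.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            # Echo the user's own words back inside a canned template.
            return template.format(*match.groups())
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("My boyfriend made me come here."))
    # -> Your boyfriend made you come here?
    print(respond("Perhaps I could learn to get along with my mother."))
    # -> Tell me more about your family.
```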

By the twenty-first century, AI had adopted large-scale statistical techniques. Google Translate does not simply match individual words according to some pre-programmed dictionary, but rather trawls the Internet to identify similar patterns in other documents. The short-lived Google Flu Trends tracked the spread of influenza through searches for flu symptoms and online purchases of chicken soup — until the self-reinforcing nature of Google’s search algorithms, in which the engine’s own recommendations alter what people search for, infected the results. Another example: in finance, programmers can “train” an AI system to assess the credit rating of prospective customers by feeding it vast numbers of case studies and letting it adjust its own weights until it correctly identifies those at high risk of default. One consequence has been the disproportionate rejection of mortgage applications among African Americans: Their past lack of access to credit now makes them less likely to have a credit history deemed worthy of granting a mortgage, a self-reinforcing injustice that credit-rating algorithms have only made more efficient. In 2013, a Wisconsin court handed down an increased prison sentence because an algorithm predicted a high likelihood of recidivism. We are no longer cogs in the machine, perhaps, but data points on a spreadsheet — and the more information we feed our AI systems, the more control they can exercise over us.
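
To make that training loop concrete, here is a minimal sketch assuming a toy dataset and a simple logistic-regression model, both chosen purely for illustration; real credit-scoring pipelines are far more elaborate, but the logic of learning from a biased historical record is the same:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Purely illustrative: each row describes a past applicant by a few features,
# here [income in $1,000s, years of credit history, prior defaults].
past_applicants = np.array([
    [45, 10, 0],
    [30,  2, 1],
    [80, 15, 0],
    [25,  0, 1],
    [60,  8, 0],
    [35,  1, 1],
])
# 1 = defaulted on a past loan, 0 = repaid. The model learns whatever
# patterns, fair or not, are baked into this historical record.
defaulted = np.array([0, 1, 0, 1, 0, 1])

# "Training" adjusts the model's weights until it reproduces the past labels.
model = LogisticRegression().fit(past_applicants, defaulted)

# A new applicant with a thin credit history is then scored against the past.
new_applicant = np.array([[40, 0, 0]])
print(model.predict_proba(new_applicant)[0, 1])  # estimated probability of default
```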

Shneiderman’s solution to these troubling applications of AI is a renewed focus on what he calls “innovation goals.” Whereas a “science goal” replaces humans with AI, for example building a humanoid robot that welcomes you to the airport and carries your bags to check-in, an “innovation goal” uses AI to improve on existing products and services, perhaps a better app with real-time information about the waiting time at security. Or as Shneiderman puts it, less vividly:

AI researchers and developers who shift from one-dimensional thinking about levels of automation/autonomy to the two-dimensional HCAI framework may find fresh solutions to current problems. The HCAI framework guides designers to give users appropriate control while providing high levels of automation. When successful, these technologies amplify, augment, enhance, and empower users, dramatically increasing their performance.

The best sections of Human-Centered AI focus on the role that various design metaphors — like that of the “social robot” — play in guiding AI research, and the pitfalls associated with building machines intended to replicate or replace human activity. As Lewis Mumford pointed out in Technics and Civilization (1934), copying designs found in nature does not always produce the best results — airplanes do not flap their wings.

Certainly some recent robotic devices have singularly failed to capture the public imagination. Honda retired its friendly-looking humanoid robot Asimo in 2018, and the company now focuses on less humanoid devices designed to assist mobility or perform other specific tasks. Hanson Robotics’s Sophia was granted citizenship by Saudi Arabia in 2017, but nobody seems to have bought it, so the company has shifted to marketing a toy version. The world’s first robot-staffed hotel, in Nagasaki, actually had to increase its human workforce to deal with all the customer complaints. Production of SoftBank’s Pepper robot was halted last summer due to lack of demand, even though (or maybe because) it enjoyed widespread deployment in shopping malls in France throughout the pandemic to help “remind” customers to comply with mask mandates — perhaps a little too on the nose for those concerned about the ambiguities of machine intelligence.

By contrast, Shneiderman is clearly delighted with the “prominent consumer success” of the Roomba vacuum-cleaning robot. Early designs tended toward autonomous thinking, with few buttons for user control, but more recent versions can be programmed from your smartphone with different cleaning schedules for different rooms. It performs a single job in a way most convenient for the user. Shneiderman is similarly impressed with the “notable success for the idea of active appliances … such as Amazon’s Alexa, Apple’s Siri, Google’s Home, and Microsoft’s Cortana.” Crucially, the voice recognition software is generally seen as a hands-free device for performing certain tasks — searching for music, turning on the lights — rather than as a conversation partner. It is just a tool, one that uses AI to “amplify, augment, enhance, and empower users.”

But here things become tricky. In order to work, Alexa needs to gather a great deal of intelligence about its user, leading to concerns about privacy. At the same time, the results of any search performed by the device will be far from neutral, either because of deliberate interference by tech companies, or simply through the sort of positive feedback loops that derailed Google Flu Trends. These devices may assist their users through everyday problems, but only within carefully approved parameters.

Similarly, Shneiderman argues that rather than building “self-driving” cars, we should find ways in which AI systems can help the human driver, such as parking assistance, lane following, or collision avoidance. This is the approach favored by Waymo, a Google spin-off that, rather than attempting to replace the driver, aims to build a “safety first” car — one that automatically slows at busy intersections, for example. As Shneiderman sees it:

These changes can then be refined and adapted in ways that increase safety, eventually leading to less need for driver control, with greater supervisory control from regional centers, akin to air-traffic control centers.

And in special cases:

A possible solution is to equip authorized police, fire, and ambulance crews to control nearby cars, in the same way that they have special keys to take over control of elevators and activate apartment fire control displays in emergency situations.

This is definitely a “human-centered” approach. Only it is not the human one expects: the driver is now secondary not to autonomous AI, but to “supervisory control” from regional operators, emergency services, and the police. By clearly differentiating the individual components of our social activities, Shneiderman’s AI would provide as much information and managerial regulation as anything accomplished by Babbage’s Difference Engine.

Much of Human-Centered AI is thus about the need for governance structures, and how exactly they will help ensure an effective approach to new technology. The discussion exemplifies many of the familiar tropes of modern corporate culture. In a section on best engineering practices, for example, Shneiderman recommends that when designing software we pivot away from traditional “waterfall” workflows — from defining the problem to deploying the solution — to a less linear system:

The newer workflows are based on the lean and agile models, with variants such as scrum, in which teams work with the customers throughout the lifecycle, learning about user needs (even as they change), iteratively building and testing prototypes…. Agile teams work in one- to two-week sprints, intense times when big changes are needed. The agile model builds in continuous feedback to ensure progress towards an effective system, so as to avoid big surprises.

Similarly, we are assured that:

Commitment to a safety culture is shown by regularly scheduled monthly meetings to discuss failures and near misses, as well as to celebrate resilient efforts in the face of serious challenges. Standardized statistical reporting of events allows managers and staff to understand what metrics are important and to suggest new ones.

More generally, Shneiderman proposes four levels of nested governance, beginning with “sound software engineering practice,” overseen by a “safety culture through business management strategies,” embedded within a system of “trustworthy certification by independent oversight,” and ultimately covered by “regulation by government agencies” — presumably because ever greater layers of bureaucracy always help people feel more empowered.

The superficiality of this approach becomes glaringly apparent in the discussion of bias. As Cathy O’Neil brilliantly outlined in her 2016 book Weapons of Math Destruction, there are many ways in which inequalities can be perpetuated by technology, whether through biases implemented at the design stage or through unintended consequences of the algorithm itself. Recent examples include the soap dispenser that failed to recognize dark skin tones and voice recognition software that listens only to men. These are genuine problems, and ones that a more “human-centered” approach should be tailor-made to address. Shneiderman, however, merely proposes that

better outcomes are likely for development teams that appoint a bias testing leader who is responsible for assessing the training data sets and the programs themselves. The bias testing leader will study current research and industry practices and respond to inquiries and concerns…. Continued monitoring of usage with reports returned to the bias testing leader will help to enhance fairness.

This is the sort of political waffle that companies produce in order to avoid addressing the problem. It also raises more questions than it answers. Who ensures the fairness of the bias testing leader — who watches the watchers? What exactly do we mean by “fairness”? This is a politically loaded concept, not a fungible commodity that needs only to be optimized. Shneiderman concludes his recommendation by acknowledging that:

The question of bias is vital to many communities that have suffered from colonial oppression, including Indigenous people around the world. They often have common shared values that emphasize relationships within their local context, foregrounding their environment, culture, kinship, and community. Some in Indigenous communities question the rational approaches of AI, while favoring empirical ways of knowing tied to the intrinsically cultural nature of all computational technology.

No, I don’t get what this means either — but fortunately our bias-testing leader will be continuously monitoring usage through reports that meet the latest industry practices to ensure fairness.

The central idea behind Shneiderman’s critique of contemporary AI research is that it pursues technological innovation for its own sake. It attempts to replicate or replace human agency simply to see if it can be done — a narrow-minded approach that inevitably causes problems. Yet his own “human-centered” approach amounts to an endless appeal to ever more levels of bureaucratic control — recalling the saying, sometimes attributed to Oscar Wilde, that “bureaucracy is expanding to meet the needs of the expanding bureaucracy.” This leads to further problems of its own, not least the increasing standardization of those within the system. It is the darker side of Babbage’s notion of machine intelligence: not the ingenuity of the machine but the surveillance and control of its users.

Human-Centered AI concludes with some predictions about the future. Shneiderman notes that “China’s large and well-educated population gives the country advantages that make its plans for leadership in AI a realistic possibility.” More generally:

China’s cultural differences lead to some advantages in broad application of AI. While privacy is stated as a Chinese value, so is community benefit. This means that government collection of personal information, such as medical data, is expected. The data from individual healthcare are used to support public health practices that will bring the greatest benefits to all. This contrasts with Western practices, especially in the European Union, which prioritize individuals and strongly protect their privacy. Similarly, widespread surveillance in China via a vast system of cameras enables local and national governments to track individuals. These data contribute to a Social Credit System, a national system for personal economic and social reputation for each individual. While some see this as an invasive surveillance system, it gives China huge databases that enable rapid progress on AI machine learning algorithms and scalable implementation techniques.

Indeed, any concerns about the apparently symbiotic relationship between AI research and political authoritarianism are brushed aside.

Shneiderman concedes that China’s plan “has little to say” about human-centered, “responsible AI.” But fear not. A “follow-on document from China’s Ministry of Science and Technology does address some of the familiar principles of HCAI, such as fairness, justice, and respect for privacy, while adding open collaboration and agile governance.” All of which will be extremely reassuring for the Hong Kong protestors tracked down by facial recognition technology.

Charles Babbage had no doubt regarding the importance of those who oversaw the mechanization of mental work; indeed, once human intelligence had been ceded to the overall function of the machine, it became ever more important for the manager — a well-educated, socially respectable, right-thinking individual just like Babbage himself — to oversee the entire operation. Babbage thought they should be given political power, knighthoods, and honorary titles for life in recognition of their importance. Today, he would have wanted them to have a seat at the U.N.

Paul Dicken, “Surveillance Humanism,” The New Atlantis, Number 68, Spring 2022, pp. 113–121.