Attending to Technology

Theses for Disputation

Aphorisms are essentially an aristocratic genre of writing. The aphorist does not argue or explain, he asserts; and implicit in his assertion is a conviction that he is wiser or more intelligent than his readers.

– W. H. Auden and Louis Kronenberger,
The Viking Book of Aphorisms
Author’s Note

I hope that the statement above is wrong, believing that certain adjustments can be made to the aphoristic procedure that will rescue the following collection from arrogance. The trick is to do this in a way that does not sacrifice the provocative character that makes the aphorism, at its best, such a powerful form of utterance.

 

Here I employ two strategies to enable me to walk this tightrope. The first is to characterize the aphorisms as “theses for disputation,” à la Martin Luther — that is, I invite response, especially response in the form of disagreement or correction. The second is to create a kind of textual conversation, both on the page and beyond it, by adding commentary (often in the form of quotation) that elucidates each thesis, perhaps even increases its provocativeness, but never descends into coarsely explanatory pedantry.

 
Theses and Commentary
 

When we concentrate on a material object, whatever its situation, the very act of attention may lead to our involuntarily sinking into the history of that object. Novices must learn to skim over matter if they want matter to stay at the exact level of the moment. Transparent things, through which the past shines!

– Vladimir Nabokov, Transparent Things

Everything begins with attention.

There is something in our soul that loathes true attention much more violently than flesh loathes fatigue. That something is much closer to evil than flesh is. That is why, every time we truly give our attention, we destroy some evil in ourselves. If one pays attention with this intention, fifteen minutes of attention is worth a lot of good works.

– Simone Weil

As so often with Weil, the formulation seems extreme at first, and probably on reflection as well: it is her habit to describe perception not as some neutral function but as a set of choices charged with moral significance. What we fail to perceive we have on some level chosen not to perceive; we have looked away; we have allowed indifference to have sway over us. Genuinely to attend is to give of oneself with intent; it is to say: For as long as I contemplate this person, or this experience, or even this thing, I grant it a degree of dominion over me. But I will choose where my attention goes; it is in my power to grant or withhold.

Yet as soon as we think in this way, the way Simone Weil urges that we think, questions press insistently upon us: Do I really have the power to grant or withhold? If not, how might I acquire that power? And even if I possess it, on what grounds do I decide how to use it?

The question of what I should give attention to is inseparable from the question of what I should decline to give attention to.

In the thirtieth canto of the Inferno, Dante the pilgrim and his guide Virgil, trudging through the depths of the eighth circle of Hell, encounter those who falsify — impersonators, counterfeiters, perjurers. Two of these sinners grow suddenly angry with each other; they clash and strike and curse. We hear for thirty lines or so their recriminations, and then Virgil says quietly to the pilgrim, “Go right on looking / and it is I who’ll quarrel with you.” Dante blushes when he realizes how utterly rapt he has been — how lost (in more than one way) in the cursing and mocking. Virgil is pleased to see his pupil’s shame, but leaves him with a stern warning: “The wish to hear such things is base.”

Attention is both given and paid.

There was a young couple strolling along half a block ahead of me. The sun had come up brilliantly after a heavy rain, and the trees were glistening and very wet. On some impulse, plain exuberance, I suppose, the fellow jumped up and caught hold of a branch, and a storm of luminous water came pouring down on the two of them, and they laughed and took off running, the girl sweeping water off her hair and her dress as if she were a little bit disgusted, but she wasn’t. It was a beautiful thing to see, like something from a myth. I don’t know why I thought of that now, except perhaps because it is easy to believe in such moments that water was made primarily for blessing, and only secondarily for growing vegetables or doing the wash. I wish I had paid more attention to it. My list of regrets may seem unusual, but who can know that they are, really? This is an interesting planet. It deserves all the attention you can give it.

– John Ames, in Marilynne Robinson’s Gilead

“I wish I had paid more attention…. all the attention you can give it.” Both verbs are necessary. “Give” reminds us of the freedom of our choice to attend, or not; “pay” reminds us of attention’s costliness, and of the value of that to which we attend. The planet deserves attention; it is “meet and right,” to borrow an old phrase, to look closely at it. The beauty of water is something like the opposite of two foul sinners cursing each other: it would be base not to be interested in it.

Attention given to one thing cannot be given, then and there, to another; and no moment comes to us twice.

Attention is not an infinitely renewable resource — but it is partially renewable, if well-invested and properly cared for.

In Pilgrim at Tinker Creek, Annie Dillard writes of her urge to see. “I still try to keep my eyes open. I’m always on the lookout for antlion traps in sandy soil, monarch pupae near milkweed, skipper larvae in locust leaves. These things are utterly common, and I’ve not seen one…. I squint at the wind because I read Stewart Edward White: ‘I have always maintained that if you looked closely enough you could see the wind — the dim, hardly-made-out, fine debris fleeing high in the air.’” Who indeed can see? “The lover can see, and the knowledgeable…. The point is that I just don’t know what the lover knows.”

The lover gives freely of her attention, but nevertheless is repaid. Yet it is still a gift, in part because one never knows how attention will be repaid. Weil again: “If we concentrate our attention on trying to solve a problem of geometry, and if at the end of an hour we are no nearer to doing so than at the beginning, we have nevertheless been making progress each minute of that hour in another more mysterious dimension. Without our knowing or feeling it, this apparently barren effort has brought more light into the soul.”

Attention is therefore not just paid, but also invested.

In The World Beyond Your Head, Matthew B. Crawford writes about “jigs”:

A jig is a device or procedure that guides a repeated action by constraining the environment in such a way as to make the action go smoothly, the same each time, without his having to think about it…. A physical jig reduces the physical degrees of freedom a person must contend with. By seeding the environment with attention-getting objects (such as a knife left in a certain spot) or arranging the environment to keep attention away from something (as, for example, when a dieter keeps certain foods out of easy view), a person can informationally jig it to constrain his mental degrees of freedom. The upshot is that to keep action on track, according to some guiding purpose, one has to keep attention properly directed. To do this, it helps a great deal to arrange the environment accordingly, and in fact this is what is generally done by someone engaged in a skilled activity.

To make a jig, then, is to offload or automate forms of attention that do not reward investment; it is to say that I want to invest my attention here, not there, because it is here where I hope to find my reward. There are all sorts of jigs: making up a large batch of muesli and storing it in a plastic container, creating a template for a form letter, writing a Python script to automate repetitive programming tasks.
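
To make the last of these concrete: below is a minimal sketch of such a jig in Python, a hypothetical script that fills in a form letter from a template so that the boilerplate no longer asks for attention. The template and field names are invented for illustration; they come from me, not from Crawford.

# A small attentional jig: the routine parts of a letter are fixed in a
# template, so only the few variable fields still require thought.
# (Illustrative only; the template and names are hypothetical.)
from string import Template

FORM_LETTER = Template(
    "Dear $name,\n\n"
    "Thank you for your note about $topic. I will reply at greater length "
    "when the term ends.\n\n"
    "With thanks,\n$sender"
)

def draft_reply(name, topic, sender="[your name]"):
    # Fill the template; everything else has been decided in advance.
    return FORM_LETTER.substitute(name=name, topic=topic, sender=sender)

if __name__ == "__main__":
    print(draft_reply("Professor Smith", "the attentional commons"))

The code is trivial, and that is the point: once the jig exists, the letter-writer's attention is free for the one sentence in the reply that actually matters.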

The jig-maker practices a kind of attentional austerity — “austerity” being a troubling word in the current global economic climate, and yet the right one in this case. Ivan Illich, following Thomas Aquinas, writes of austerity not simply as a trait but as a virtue:

In the Summa Theologica, II, II, [in the 168th question, article 4] Thomas deals with disciplined and creative playfulness. In his third response he defines “austerity” as a virtue which does not exclude all enjoyments, but only those which are distracting from or destructive of personal relatedness. For Thomas “austerity” is a complementary part of a more embracing virtue, which he calls friendship or joyfulness. It is the fruit of an apprehension that things or tools could destroy rather than enhance eutrapelia (or graceful playfulness) in personal relations.

This is a powerful idea: that austerity is virtuous because it helps us to place outside our sphere of attention those temptations that are “destructive of personal relatedness,” that detract from our legitimate joys. The best reason to make a jig is to preserve our attention for personal relatedness, for friendship and joyfulness.

Play is one of the most beautiful and essential forms of attention.

Illich refers to “eutrapelia (or graceful playfulness),” and indeed there is grace in all true play. It is the grace of acting freely — and attending freely, to what delights or moves. To practice “graceful playfulness” is to embody a humanistic politics of attention.

In our age, the mere making of a work of art is itself a political act. So long as artists exist, making what they please and think they ought to make, even if it is not terribly good, even if it appeals to only a handful of people, they remind the Management of something managers need to be reminded of, namely, that the managed are people with faces, not anonymous members, that Homo Laborans [working man] is also Homo Ludens [playing man]….

Among the half dozen or so things for which a man of honor should be prepared, if necessary, to die, the right to play, the right to frivolity, is not the least.

– W. H. Auden, “The Poet and the City”

We are fed by what we attend to.

This is to shift the metaphors from the economic to the nutritive. Francis Bacon, in the essay “Of Studies,” provided a stringent model for how to narrow our attentiveness: “Some books are to be tasted, others to be swallowed, and some few to be chewed and digested: that is, some books are to be read only in parts, others to be read, but not curiously, and some few to be read wholly, and with diligence and attention.” In her wonderful book Too Much to Know, Ann Blair explains that Bacon in this essay offered instruction in the skills of intellectual triage for people afflicted by information overload. Blair points out that one of the most common complaints of literate people in the sixteenth and seventeenth centuries was the proliferation of stuff to read. Cried Erasmus in 1525, “Is there anywhere on earth exempt from these swarms of new books?”

And many of those books were simply not good — not good for you, lacking nutrition. Therefore Bacon recommends that we begin with tasting; and in many cases that will be sufficient. It is unhealthy to read worthless books “with diligence and attention.”

However, it is not only the great and the noble that have worth. We must play, and as Auden says, the “right to frivolity” is essential to human flourishing. The book or show or song that makes us laugh in “graceful playfulness” has rewarded our attention in a distinctively sweet way.

An essential question is, “What form of attention does this phenomenon require? That of reading or seeing? That of writing also? That of laughter or play? Or silence?”

The chief danger of seeking to be attentive is the accompanying desire to be acknowledged as seeking to be attentive. But true attentiveness may not be compatible with displaying one’s attentiveness, for instance in the form of public writing. The character John Ames, in Robinson’s Gilead, saw the water pour down on the young couple and said nothing — merely noting to himself that it was “like something from a myth” — and then, later, recorded the moment in a letter to the son whom he will not live to see reach adulthood. This is the virtue of austerity at work.

“Mindfulness,” which many have recommended as a response to the perils of incessant connectivity, confines itself to the cultivation of a mental stance without objects to attend to.

In the New Republic article “The Mindfulness Racket,” Evgeny Morozov writes that “a long list of celebrities — Arianna Huffington, Deepak Chopra, Paulo Coelho — are all tirelessly preaching the virtues of curbing technology-induced stress and regulating the oppressiveness of constant connectivity, often at conferences with titles like ‘Wisdom 2.0.’” This curbing goes by the name of “mindfulness,” but “In essence, we are being urged to unplug — for an hour, a day, a week — so that we can resume our usual activities with even more vigor upon returning to the land of distraction…. In our maddeningly complex world, where everything is in flux and defies comprehension, the only reasonable attitude is to renounce any efforts at control and adopt a Zen-like attitude of non-domination. Accept the world as it is — and simply try to find a few moments of peace in it.”

That is, the gospel of mindfulness reduces mental health to a single, simple technique that delivers its users from the obligation to ask any awkward questions about what their minds are and are not habitually attending to. But the only mindfulness worth cultivating will be teleological through and through: it will be mindfulness for something — for personal formation, for service, for love.

Goal-directed mindfulness, and all other healthy forms of attention — healthy for oneself and for others — can only happen with the creation of and care for an attentional commons.

Matthew Crawford, in the New York Times article “The Cost of Paying Attention,” writes that “we’ve auctioned off more and more of our public space to private commercial interests, with their constant demands on us to look at the products on display or simply absorb some bit of corporate messaging…. In the process, we’ve sacrificed silence — the condition of not being addressed. And just as clean air makes it possible to breathe, silence makes it possible to think.” If we saw the conditions necessary for attention in a way similar to how we see air or water, as a valuable and shared resource — what Crawford calls an “attentional commons” — then perhaps “we could figure out how to protect it.”

Protecting our attentional commons will not be easy to do in a culture for which surveillance has become the normative form of care.

When Danielle and Alexander Meitiv of Silver Spring, Maryland tried to teach their two children, ages six and ten, how to make their way home on foot from a mile away, the children were picked up by police and the parents charged with child neglect. Yet whether the Meitivs were right or wrong in the degree of responsibility they entrusted their children with, what they did is the opposite of neglect — it is thoughtful, intentional training of their children for responsible adulthood. They instructed their children with care and the children practiced responsible freedom before being fully entrusted with it. And then the state intervened before the lesson could be completed. (The story made national and even international news for months in 2014 and 2015; charges against the parents have since been dropped.)

I think this event is best described as the state enforcing surveillance as the normative form of care. The state cannot raise its citizens, whose natural and social home is the family; it can only place them under observation. Perfect observation — panopticism — then becomes the goal, which the state justifies and universalizes by imposing a responsibility to surveil on the very citizens already being surveilled. The state’s commandment to parents: Do as I do.

But by enforcing surveillance as the normative form of care, the state effectively erases the significance of all other forms of care. Parents might teach their children nothing of value, no moral standards, no self-discipline, no compassion for others — but as long as those children are incessantly observed, then according to the state’s standards the parents of those children are good parents. And they are good because they are training their children for a lifetime of passive acceptance of surveillance.

If Simone Weil is correct in claiming that “Attention is the rarest and purest form of generosity,” then surveillance is the opposite of attention.

The primary battles on social media today are fought by two mutually surveilling armies: code fetishists and antinomians.

Modern liberal society tends toward a kind of “code fetishism,” or nomolatry. It tends to forget the background which makes sense of any code — the variety of goods which rules and norms are meant to realize — as well as the vertical dimension which arises above all these.

– Charles Taylor, “The Perils of Moralism”

Taylor goes on to explain that “Code fetishism means that the entire spiritual dimension of human life is captured in a moral code” — a view that effectively begins with Kant and then almost immediately generates its opposite: the antinomianism, or rebellion against codes and disciplines, that drives much of the Romantic movement. The same tension is replicated today on multiple fronts: for example, those who find and report abusive or even just hurtful language online (such as the progressives sometimes dubbed “Social Justice Warriors”) versus those who seek online venues for maximal linguistic freedom (such as many denizens of Reddit and 4chan).

The intensity of the battles on social media is increased by the failure of all parties to consider the importance of intimacy gradients.

The concept of “intimacy gradients” was first articulated in A Pattern Language: Towns, Buildings, Construction, by Christopher Alexander and several coauthors. It is common to refer to the book as about architecture, and it is, but it is also about many other things. In one section the authors describe how a street cafe ideally functions: It provides “a place where people can sit lazily, legitimately, be on view, and watch the world go by.” But “in addition to the terrace which is open to the street, the cafe contains several other spaces: with games, fire, soft chairs, newspapers…. This allows a variety of people to start using it, according to slightly different social styles.” And of course any given person will find, at varying times, a use for the various levels of intimacy the cafe affords: there are times to “be on view” and times to play a game or have a quiet conversation with a single friend.

Twitter has a fairly sophisticated set of intimacy gradients: public and private accounts, replies that will be seen automatically only by the person you are replying to and people who are connected to both of you, direct messages, and so on. Where it has failed so far is in the provision of intimate places — smaller rooms where friends can talk without being interrupted. It gives you the absolute privacy of one-on-one conversations (direct messages) and it gives you all that comes with “being on view” at a table that extends out into the street, where anyone who happens to go by can listen in or make comments; but, for public accounts anyway, not much in between. Twitter’s recent implementation of group direct messages hasn’t helped, because people have turned instead to tools like Slack, designed from the ground up for group interaction — but it has become common for people to bemoan the avalanche of Slack messages in much the same way that we have for decades bemoaned the avalanche of e-mail. Social media so far have been wholly inept at managing intimacy gradients: people using them always seem to feel either isolated or overwhelmed by crowds.

“And weeping arises from sorrow, but sorrow also arises from weeping.” — Bertolt Brecht, writing about Twitter.

Actually, Brecht was writing about the theater — but also, and necessarily according to his theory, the whole of human society.

One easily forgets that human education proceeds along highly theatrical lines. In a quite theatrical manner the child is taught how to behave; logical arguments only come later. When such-and-such occurs, [the child] is told (or sees), one must laugh. It joins in when there is laughter, without knowing why; if asked why it is laughing it is wholly confused. In the same way it joins in shedding tears, not only weeping because the grown-ups do so but also feeling genuine sorrow. This can be seen at funerals, whose meaning escapes children entirely. These are theatrical events which form the character. The human being copies gestures, miming, tones of voice. And weeping arises from sorrow, but sorrow also arises from weeping.

Brecht goes on to say that this theatrical education of learning by imitating both actions and emotions continues throughout life: “Only the dead are beyond being altered by their fellow-men.”

Twitter and Facebook are extraordinarily effective instruments for teaching proper responses, because their cues are so explicit and consistent, like experiments in operant conditioning, where behaviors are taught through reinforcement. Retweets, likes, faves, hashtags, memes — all these are associated with easily learned behaviors and powerful emotions.

It is impossible to understand social media without grasping that, as Craig Raine has said, “All emotion is pleasurable.”

Raine, an English poet, said this in response to literary critic A. D. Nuttall, who asked why tragedy gives pleasure. The pleasure that all strong emotions confer is one of the reasons that, as Walker Percy pointed out, people are happy during hurricanes. Thus also Tennyson, wishing to hold on to grief, cries out in his poem in memory of his dead friend Arthur Henry Hallam, “O last regret, regret can die!” People who are relentlessly angry on Twitter should be understood as addicts, in the grip of a compulsion to recharge their emotions.

Some have suggested that the Internet is on the verge of becoming a failed state. But the Internet is not a state; it is an ecosystem.

Sean Gallagher, an editor of the Ars Technica website, suggested in early 2015 that the Internet may become the next failed state. It is a compelling metaphor, one that many are attracted to because of the relentlessness of the anger and hatred one finds on the Internet, and the inability of anyone in authority to restrain those overcome by such emotions.

Such failure has not happened yet, says Gallagher: “There is a cacophony of hateful speech, vice of every kind …, and policemen of various sorts trying to keep a lid on all of it — or at least, trying to keep the chaos away from most law-abiding citizens. But people still use the Internet every day, though the ones who consider themselves ‘street smart’ do so with varying levels of defenses installed. Things sort of work.” But that doesn’t mean they always will: “the Internet might soon look less like 1970s New York and more like 1990s Mogadishu: warring factions destroying the most fundamental of services, ‘security zones’ reducing or eliminating free movement, and security costs making it prohibitive for anyone but the most well-funded operations to do business without becoming a ‘soft target’ for political or economic gain.”

Yet none of these eventualities — all of which seem eminently possible to me, and disturbing — would affect the Internet itself. Rather, they would disturb a variety of services that are built on the Internet, which is, essentially, three things: a set of software protocols, millions of miles of cable, and millions of routers (“core” and “edge” routers) that transfer data from one cable to another. These are the resources of the ecosystem that we call the Internet, which allow everything that happens online to happen. All three elements must be present and functional for the ecosystem to work, but they are widely distributed and the system can adapt to the failure of many of its parts. For instance, if a cable is cut, routers send data through other cables, perhaps by a more circuitous route; when one router fails, the protocols send the data through others. There is a great deal of redundancy in the system, which means that while it can be slowed down it could only be broken by a cataclysm so immense that the loss of connectivity would be the least of our worries.
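
The adaptability described above can be pictured with a toy model. The sketch below, in Python, treats a few routers as nodes in a graph and finds a path between two of them before and after a link is cut. It is an illustration of redundancy in miniature, not a depiction of how any real routing protocol works, and the little network it describes is invented.

from collections import deque

# A toy network: routers as nodes, cables as undirected links. Entirely invented.
links = {
    ("A", "B"), ("B", "D"),              # a short, direct route from A to D
    ("A", "E"), ("E", "F"), ("F", "D"),  # a longer, redundant route
}

def neighbors(node, links):
    return {b for a, b in links if a == node} | {a for a, b in links if b == node}

def find_path(src, dst, links):
    # Breadth-first search: return some path from src to dst, or None if the
    # network has been cut into disconnected pieces.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbors(path[-1], links):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("A", "D", links))                 # ['A', 'B', 'D']: the direct route
print(find_path("A", "D", links - {("B", "D")}))  # ['A', 'E', 'F', 'D']: rerouted

Cut one link and the data takes the longer way around; only when every route is severed at once does the path disappear, which is the scale of cataclysm imagined above.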

Individual companies (Google, for example) and their services (such as Gmail) are considerably more vulnerable, though not in an electromechanical way: they spend a lot of money on building and maintaining their own redundant systems. But Google could shut down Gmail tomorrow if it wanted to, as it has shut down other services in the past; there are no legal barriers to it doing so.

Facebook, by contrast, is a state — a Hobbesian state.

I Authorise and give up my Right of Governing my selfe, to this Man, or to this Assembly of men, on this condition, that thou give up thy Right to him, and Authorise all his Actions in like manner. This done, the Multitude so united in one Person, is called a COMMON-WEALTH, in latine CIVITAS. This is the Generation of that LEVIATHAN, or rather (to speake more reverently) of that Mortall God, to which wee owe under the Immortall God, our peace and defence.

– Thomas Hobbes, Leviathan, Part II, Chapter XVII

To sign up for a Facebook account is to yield sovereignty over our data — our family pictures, our conversations with friends — to Facebook, trusting in our “mortal God” to gather it and keep it safe.

If instead of thinking of the Internet in statist terms we apply the logic of subsidiarity, we might be able to imagine the digital equivalent of a Mondragon cooperative.

In May 2012, some researchers reported their surprise at learning that many people in Indonesia said they did not use the Internet but they did use Facebook. Similar patterns have since been observed elsewhere. The researchers are not sure precisely what these people mean, but the most likely explanation is that their online lives happen wholly, or almost wholly, on Facebook; if they click links that take them out of Facebook, they are not aware of that. This apparent confusion is likely to spread as Facebook continues to roll out its Internet.org, which provides a number of developing countries with free mobile Internet access — access limited to particular services. And even in developed countries, as Facebook deepens its relationships with what we are learning to call “content providers,” the links it provides will rarely if ever lead the user away from Facebook’s domain.

An innocent reading of this phenomenon would say that it is a version of calling all soft drinks “Coke”; a less innocent one would say that it is like the world envisioned by WALL-E, a world in which both the ruined Earth and the spaceships that allow people to escape from it are controlled by the megacorporation Buy n Large.

In response to this kind of colonization-by-technological-behemoth, some people are re-emphasizing the open web, the non-paywalled, un-boxed-in Internet commons where people can own and speak from their own digital turf — either individually or in small collectives of their own choosing, just as it was before post-industrial giants figured out how much money can be made online. This is a digital implementation of the principle of subsidiarity, especially in its distinctively Catholic form, according to which everything that can be done locally should be done locally. Assigning authority to greater spheres will sometimes be necessary, but to do so when one need not is to invite a diminishment of human flourishing.

The principle of subsidiarity may be seen lived out in the worker collectives of the Mondragon Corporation in the Basque region of Spain. The Mondragon model of worker-owned-and-operated capitalism, which was if not quite invented then introduced and sustained in the 1950s by a priest named José María Arizmendiarrieta, is a beautiful one for those invested in the Internet but dismayed by its transformation. The Internet groans in travail as it awaits its Arizmendiarrieta.

Digital textuality, especially within a flourishing digital commons, offers us the chance to restore commentary to its pre-modern place as the central scholarly genre.

In his powerful and illuminating book Religious Reading, Paul Griffiths explains that religious reading — he looks at early Christian and at Buddhist writings — has historically centered on two genres: commentary and anthology. Note that these are genres of writing as well as reading. To write a commentary on the text is to acknowledge that the text is provocative and worthy of attention: it demands response. Silence would be disrespectful. But such writing is, Griffiths argues, fundamentally a mode of reading, a means of finding out what you think about what you have read by attempting a record of your response. This is, to put it mildly, not the spirit in which most Internet commentary — indeed most critical commentary in all media — is carried out. One might argue that this is because the stuff written online is not worthy of such a reverential response, and perhaps that is generally true, although surely not always. But to the extent it is true, it raises the question: Why are you reading so much that does not deserve a serious response?

Anthologies too are means of organizing one’s experience as a reader. They too indicate deep respect for, and contemplative attention to, texts that matter. (This is why much of what I offer in these theses is quotation. If I accomplish nothing else in these theses, I hope to have provided a brief anthology of serious reflection on technology.)

Recent technologies enable a renewal of commentary, but struggle to overcome a post-Romantic belief that commentary is belated and derivative.

Comment threads often seethe with resentment at the status of commentary itself. I should be the initiator, not the responder! Or, as Paul Ford put it in a 2011 essay about the Internet, “‘Why wasn’t I consulted,’ which I abbreviate as WWIC, is the fundamental question of the web. It is the rule from which other rules are derived. Humans have a fundamental need to be consulted, engaged, to exercise their knowledge (and thus power), and no other medium that came before has been able to tap into that as effectively.”

Only a Bakhtinian view of the primacy of response in understanding could genuinely renew online discourse.

In the actual life of speech, every concrete act of understanding is active: it assimilates the word to be understood into its own conceptual system filled with specific objects and emotional expressions, and is indissolubly merged with the response, with a motivated agreement or disagreement. To some extent, primacy belongs to the response, as the activating principle: it creates the ground for understanding, it prepares the ground for an active and engaged understanding. Understanding comes to fruition only in the response.

– Mikhail Bakhtin, “Discourse in the Novel” (emphasis added)

Nevertheless certain texts will generate communities of comment around them, communities populated by the humbly intelligent.

One of the best ways of evaluating written work is to begin with the question: What sort of response does this text invite?

Blessed are they who strive to practice commentary as a legitimate, serious genre of responsiveness to others’ thoughts.

The poet Charles Simic writes,

Wherever and whatever I read, I have to have a pencil, not a pen — preferably the stub of a pencil so I can get close to the words, and underline well-turned sentences, brilliant or stupid ideas, interesting words and bits of information. I like to write short or elaborate comments in the margins, put question marks, check marks, and other private notations next to paragraphs that only I — and sometimes not even I — can later decipher. I would love to see an anthology of comments and underlined passages by readers of history books in public libraries, who despite the strict prohibition of such activity could not help themselves and had to register their complaints about the author of the book or about the direction in which humanity has been heading for the last few thousand years.

But those warm thoughts from Simic should have set next to them these colder words from Nietzsche’s “On the Uses and Disadvantages of History for Life”:

The historical culture of our critics will no longer permit any effect at all in the proper sense, that is an effect on life and action: their blotting-paper at once goes down even on the blackest writing, and across the most graceful design they smear their thick brush-strokes which are supposed to be regarded as corrections: and once again that is the end of that. But their critical pens never cease to flow, for they have lost control of them and instead of directing them are directed by them. It is precisely in this immoderation of its critical outpourings, in its lack of self-control, in that which the Romans call impotentia, that the modern personality betrays its weakness.

Distinguishing genuine commentary from the “outpourings” of those afflicted by impotentia — this is a great task for one who would be wise.

“Since we have no experience of a venerable text that ensures its own perpetuity, we may reasonably say that the medium in which it survives is commentary.”

This statement is from literary critic Frank Kermode, who continues,

All commentary on such texts varies from one generation to the next because it meets different needs; the need to go on talking is paramount, the need to do it rather differently is equally urgent, and not less so because the provision of commentary is a duty that has now devolved upon a particular profession, a profession which, at any rate until recently, has tended to judge the achievement of its members by their ability to say something new about canonical texts without defacing them.

This passage, from Kermode’s Forms of Attention: Botticelli and Hamlet, is very rich, and several points are worth serious contemplation: first, the good and the bad in the circumstances that led to commentary becoming a profession (that of the critic/scholar) that has duties; second, the double-felt need to comment and to comment differently; third, the notion of speech that is new but does not deface.

A whole ethics of commentary can profitably be generated from these three points. But our current circumstances call us to reflect on the way that the Internet enables amateur commentary — in both the best and the worst senses of “amateur,” and every other sense in between. The amateur commentator does not feel so strongly the impulse to novelty, which is not necessarily a bad thing. But the amateur commentator is also not vocationally committed to avoiding the defacement of that on which he or she comments.

We should seek technologies that support the maximally beautiful readerly sequence of submission, recovery, comment.

Submission, recovery, comment: This sequence of actions also comes from Kermode, that master commentator and devoted lover of commentary, in his book Pleasure and Change: The Aesthetics of Canon. He speaks here of critics overcome by the beauty of some lines of poetry: “They have surrendered and recovered and are trying to think of something to say.” There is, for Kermode, a deep mystery about that “network of responses” that invite “submission, recovery, and comment.” (Kermode may be thinking of actual neural networks here, launched into fizzy fireworks by those “great moments” of poetic beauty.) It will often be that what we want — even when “what we find to say amounts to no more than an expression of astonishment” — is to “induce an equivalent submission in our hearers.” But this in itself is valuable because it helps to draw our hearers into “the conversation that prevents such lines from becoming rubbish in the end.”

The question I wish to ask is a very practical and particular one: What technologies promote this sequence of responses and acts? There is little point in commending the value of such readerly behavior if we do not know how to realize it, how to embody it. Consider, then, the following possibilities:

● Writing a critical essay to submit to a literary quarterly

● Pursuing a doctoral degree in literature in hopes of sharing the texts you love with generations of students

● Starting a neighborhood book club

● Volunteering to teach literature in a nearby prison

● Writing ecstatic commentary in the pages of a library book so that future readers of that volume will see and perhaps share your enthusiasm

● Starting a blog devoted to your favorite poet

● Searching for existing online commentary on works you love and joining the conversation with your own comments

Each of these is a practice either enabled by or accompanied by certain technologies. Each practice merits the painstaking labor of a discipline we might call techno-ethnography to understand better how these technologies help or hinder the sequence of submission, recovery, and comment. But none has been studied in this way.

“Technology wants to be loved,” says Kevin Kelly, wrongly: but we want to invest our technologies with human traits to justify our love for them.

Kelly, a founding editor of Wired magazine, tells us “what technology wants” (and has written a book by that name). But technology doesn’t want — we want, with technology as our instrument. In the 1970s, some philosophers and theorists ascribed a kind of agency to language; today, Kelly and likeminded writers are doing something similar with technology. These are, I believe, evasions of the human and instances of what is called the “pathetic fallacy” — the attribution of our own emotions to something else. Responding to an earlier version of these theses, Ned O’Gorman wrote,

Of course technologies want. The button wants to be pushed; the trigger wants to be pulled; the text wants to be read — each of these want as much as I want to go to bed, get a drink, or get up out of my chair and walk around, though they may want in a different way than I want. To reserve “wanting” for will-bearing creatures is to commit oneself to the philosophical voluntarianism that undergirds technological instrumentalism.

I do not see any way to make the distinction, one essential to O’Gorman’s position, between “wanting” and “willing.” What would it mean to be without will and yet wanting? If buttons want to be pushed “in a different way than I want” something, what is this difference? And given this difference, whatever it is, how can buttons want “as much as I”?

Kelly himself has struggled to articulate this point, in a Wired article on the origin of ideas:

As I started thinking about the history of technology, there did seem to be a sense in which, during any given period, lots of innovations were in the air, as it were. They came simultaneously. It appeared as if they wanted to happen. I should hasten to add that it’s not a conscious agency; it’s a lower form, something like the way an organism or bacterium can be said to have certain tendencies, certain trends, certain urges. But it’s an agency nevertheless.

But in what sense is it “an agency”? Is Kelly really talking about anything more than propitious circumstances? Is O’Gorman really talking about anything other than affordances — affordances designed by people for their own purposes? (Consider how much physical comedy — Buster Keaton trying to control a train, Charlie Chaplin wrestling with a folding chair — centers on the complications we experience when our tools give the appearance of disobedience. If they were actually disobedient it wouldn’t be funny.)

When Kelly says, “I think technology is something that can give meaning to our lives,” he seeks to promote what technology does worst.

Our current electronic technologies make competent servants, annoyingly capricious masters, and tragically incompetent gods. We try to give power to our idols so as to be absolved of the responsibilities of human agency. The more power they have, the less responsibility we bear for our use of them.

First we would immanentize the eschaton, then code it in Java.

In a sense there is no God as yet achieved, but there is that force at work making God, struggling through us to become an actual organized existence, enjoying what to many of us is the greatest conceivable ecstasy, the ecstasy of a brain, an intelligence, actually conscious of the whole, and with executive force capable of guiding it to a perfectly benevolent and harmonious end.

That’s either George Bernard Shaw in 1907, or Kevin Kelly last week. The techno-utopians believe there is a force working within us that will bring about heaven on earth through technological progress. (Who needs the élan vital when we have Moore’s Law?) They even personify this force — much as children personify their toys — and imagine that it wants things. But to project our desires onto our technologies is to court permanent psychic infancy.

The cyborg dream is an extension of the idolatry of technology: to erase the boundaries between our selves and our tools.

The writers who make their fictional cyborgs humorless know what they are doing: the fusion of person and tool disables self-irony. The requisite distinction between self and environment is missing. (There are no Buster Keatons and Charlie Chaplins among the cyborgs.)

The “what technology wants” model cannot be reconciled with the “hacker” model of engagement with technology.

The “hacker” model is better: given imagination and determination, we can bend technologies to our will. Thus we should stop thinking about “what technology wants” and start thinking about how to cultivate imagination and determination. Speaking of “what technology wants” is an unerring symptom of akrasia — a lack of self-control — and a way of deflecting responsibility for our actions.

When I reflect on how I feel about Twitter, or the Internet more generally, or even just my laptop computer, I can’t help recalling the question that Frodo asked Gandalf about Gollum’s hatred of the Ring: “If he hated it, why didn’t he get rid of it, or go away and leave it?” And Gandalf’s reply: “You ought to begin to understand, Frodo, after all you have heard…. He hated it and loved it, as he hated and loved himself. He could not get rid of it. He had no will left in the matter.”

The physical world is not infinitely re-describable, but if you had to you could use a screwdriver to clean your ears.

In his book Kant and the Platypus, Umberto Eco tells the story of a debate he had with the philosopher Richard Rorty in 1990, during which Rorty “denied that the use made of a screwdriver to tighten screws is imposed by the object itself, while the use made of it to open a parcel is imposed by our subjectivity” and “alluded to the right we would have to interpret a screwdriver as something useful to scratch our ears with.”

To this claim Eco replied that “A screwdriver can serve also to open a parcel (given that it is an instrument with a cutting point, easy to use in order to exert force on something resistant); but it is inadvisable to use it for rummaging about in your ear precisely because it is sharp and too long to allow the hand to control the action required for such a delicate operation; and so it would be better to use not a screwdriver but a light stick with a wad of cotton at its tip.”

Eco’s comment is a shrewd one, and an important one. The world resists human will, and does so whether that will is exercised interpretatively or physically. And yet human creativity so often arises from the refusal to take the world’s “No” as final. It speaks well of us that we would “interpret a screwdriver as something useful to scratch our ears with” rather than, with no other tool at hand, just growing irritated at the itch in our ear canal.

The contemporary version of the pathetic fallacy is to attribute agency not to nature but to algorithms.

John Ruskin, in the third volume of Modern Painters:

I want to examine the nature of the … error, that which the mind admits when affected strongly by emotion. Thus, for instance, in Alton Locke,—

They rowed her in across the rolling foam—
The cruel, crawling foam.

The foam is not cruel, neither does it crawl. The state of mind which attributes to it these characters of a living creature is one in which the reason is unhinged by grief. All violent feelings have the same effect. They produce in us a falseness in all our impressions of external things, which I would generally characterize as the “Pathetic fallacy.”

Now, compare that passage — the coinage of the term “pathetic fallacy” in the mid-1800s — to the New York Times article from March 2015 titled “If an Algorithm Wrote This, How Would You Even Know?”:

Let me hazard a guess that you think a real person has written what you’re reading. Maybe you’re right. Maybe not. Perhaps you should ask me to confirm it the way your computer does when it demands that you type those letters and numbers crammed like abstract art into that annoying little box.

Because, these days, a shocking amount of what we’re reading is created not by humans, but by computer algorithms. We probably should have suspected that the information assaulting us 24/7 couldn’t all have been created by people bent over their laptops.

Not one word in the article acknowledges the (rather significant!) fact that the algorithms were written by humans bent over their laptops. This is our version of the pathetic fallacy; our minds are “unhinged” by our incomprehension of algorithmic programming.

It seems not enough for some people to attribute consciousness to algorithms; they must also grant them dominion.

In the film The Avengers (2012), Tom Hiddleston as Loki delivers with magnificent superciliousness these words: “It’s the unspoken truth of humanity, that you crave subjugation. The bright lure of freedom diminishes your life’s joy in a mad scramble for power, for identity. You were made to be ruled. In the end, you will always kneel.” In a very different context, and in relation to a very different deity, C. S. Lewis once said in conversation with a friend: “I was not born to be free — I was born to adore and obey.” Perhaps both Loki and Lewis are right.

“Any sufficiently advanced logic is indistinguishable from stupidity.”

So says economist Alex Tabarrok:

The problem isn’t artificial intelligence but opaque intelligence. Algorithms have now become so sophisticated that we humans can’t really understand why they are telling us what they are telling us.

Tabarrok quotes from a story in the Wall Street Journal about Orion, the system UPS uses to plan the routes for its delivery vans: “One driver, who declined to speak for attribution, said he has been on Orion since mid-2014 and dislikes it, because it strikes him as illogical.” Tabarrok comments, “Human drivers think Orion is illogical because they can’t grok Orion’s super-logic. Perhaps any sufficiently advanced logic is indistinguishable from stupidity.”

The real function of the Turing Test is to establish our level of credulity and submissiveness before algorithms.

The Turing Test, which involves a questioner interacting with a hidden entity and judging whether it is a person or a computer, is a proposed method for determining whether a computer can be said to think. But as Jaron Lanier writes in You Are Not a Gadget, “The Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart.” What does it say about our understanding of human intelligence that we consider it to be something that can be assessed by a one-off “test” — and one that is no test at all, but an impression of the moment, an improvised intuition?

The chief purpose of consumer technology is to make commonplace actions that had long been done painlessly seem intolerable.

Here is a piece of incontrovertible evidence for this claim: the ceaseless conversation in America about how impossible it is for a family to get by on anything less than two full incomes, the pious invocations of how much better it was in Grandpa and Grandma’s day, without even a momentary acknowledgment of how much less there was for Grandpa and Grandma to buy. They had but one bathroom in a house cooled in the summer by fans, no microwave oven, a very basic refrigerator, no dishwasher, possibly no clothes dryer, a small black-and-white television that got four stations (at most) via broadcast, one automobile without air-conditioning or power windows, vacations only within driving range of that single automobile…. One could go on and on. Were any of us willing to live as they did we might find a single income sufficient for an entire family. But that would be crazy — wouldn’t it?

Everyone should sometimes write by hand, to recall what it’s like to have second thoughts before the first ones are completely recorded.

Human beings have long wanted (perhaps we have always wanted) our technologies of writing to approximate as closely as possible the speed of thought: from writing on clay, to writing on paper, to writing some version of shorthand, to typing on a typewriter, to typing on a computer’s keyboard, to dictating to voice-recognition software — each technological development bringing writing asymptotically closer to the pace of our thinking. Rarely is the possibility considered that thought moves too quickly, and that matching our writing to the pace of the body’s movements may yield something, well, more thoughtful.

In a powerful passage from A Room of One’s Own, Virginia Woolf writes of the ways that women of her time, given their particular social standing and responsibilities, needed to adjust their ambitions in order to produce the kinds of books that were possible for them. But the way she formulates this problem is curious: “The book has somehow to be adapted to the body.” For Woolf, the (woman’s) book had to be adapted to the (woman’s) body because of certain unfortunate but unavoidable social constrictions. Yet there may be a categorical imperative lurking here, a more general law that writing benefits less from striving to match the pace of thought, more from slowing itself to the pace of the body.

I have often in writing by hand realized, midway through inscribing a sentence, that it should not go in the direction I had thought it should go; or that it should not be written at all. Conversely, I have often typed in haste and repented of what I have typed at leisure. To write by hand is to revisit and refresh certain synaptic connections, links between mind and body. To shift from typing to handwriting to speaking is to be instructed in the relations among minds, bodies, and technologies. And if you can set aside your instincts for speed, writing by hand can be immensely enjoyable.

A desirable replacement for the Myers-Briggs personality test: a “personality inventory” based on your preferences in tools.

I have in mind not the tools you use, but the ones you prefer — the ones you feel drawn to, that you enjoy looking at or touching, the ones whose use gives you pleasure. (Thinking of a technology as a means of pleasure may be ethically limited, but it’s much healthier than turning it into an idol.) But such an “inventory” would only be truly helpful if repeated at intervals over time. How have your preferences changed? How does the digital world hold your heart now? Only if we see how the preferences change can we discern the forces that change them. The always-connected forget the pleasures of disconnection, then become impervious to them.

The Dunning-Kruger effect — an illusion of competence — grows more pronounced when online and offline life are functionally unrelated.

A history of personal preferences is valuable insofar as it can help dispel the illusion that we simply use the best tools for the job — that we choose the tools rather than being pressed towards the use of certain tools by enormously powerful social forces. This illusion of controlling our technological choices feeds into — intensifies and is in turn intensified by — what is often called the Dunning-Kruger effect, the belief held by many unskilled and unknowledgeable people that they are skilled and knowledgeable. The very invitation to commentary on websites, in its universal openness, is a technological embodiment of the notion that one person’s opinion is as good as another’s, and constantly reinforces that notion. In our daily encounters with others whom we know, and by whom we are known, we regularly run up against the limits of our knowledge — in a job-performance review, for instance, or the failure of an ambitious investment scheme. Online, error rarely has unavoidable consequences.

The digital environment disembodies language in this sense: it prevents me from discerning the incongruity between my self-presentation and my person.

The Dunning-Kruger effect is a type of cognitive bias and, because everyone is afflicted by cognitive biases of one kind or another, should not be thought of as pathological. But I fear it can become pathological. A hundred years ago the French neurologist Joseph Babinski discovered and named a condition he called anosognosia: a disabled person’s complete inability, often produced by neural trauma, to perceive his or her disability. A related and more commonly used term is what psychologists usually call “denial,” which describes not primarily an effect of neural damage but the more general condition of being beyond help, not amenable to therapeutic treatment because the patient does not perceive the injury. It is a frightening but in my judgment not implausible thought that the various illusions produced by total immersion in an online social world, with its consequence-free errors and walled-off echo chambers, could produce in many people a condition we might call “digitally amplified anosognosia.”

Consistent pseudonymity creates one degree of disembodiment; varying pseudonymity and anonymity create infinite disembodiment.

Again, there are hardly any unavoidable consequences in online discourse: if I make a fool of myself under one name, I can just post under another, or, when allowed, post under no name at all. My embodied life is fully removed from my online actions. To return to the theme of writing by hand: an inchoate awareness of the effects of such disembodiment may be seen in the demands made in November 2015 by students at the University of Missouri that the university’s president write an apology to them by hand; or, in a tragic rather than farcical key, the practice in Japan, mainly in the seventeenth and eighteenth centuries, of forcing suspected Christians to trample on a fumi-e, a carved image of Christ, in order to prove their disdain for the alien religion. When people speak of “embodying” their beliefs they often mean it only metaphorically; embodiment is a concept that should be purged of metaphor when possible.

Social media are bread and circuses without the bread.

The darkest manifestation of this point is the people who collapse and even die at their computers while playing a multiplayer online game, or worse, while generating goods in such a game only to sell them to others. But only slightly less dark is the condition of those who achieve sufficient disembodiment to take pleasure in the unimpeded infliction of pain on others — a tendency that has been evident since the early days of the Internet, as documented in Julian Dibbell’s great and terrifying 1993 essay, “A Rape in Cyberspace.” As the writer Warren Ellis recently commented, “It’s kind of fascinating to see the craters left in the real world from digital bombing runs.” “Fascinating” is just one word for it; “horrifying” is another. But perhaps the single most universally applicable term is “mysterious.”

The idea of the blind man’s stick invokes a mystery to be contemplated, not a problem to be solved.

In his Phenomenology of Perception, Maurice Merleau-Ponty imagined a blind man using his cane. What is the relationship between the cane and the man’s perceptual apparatus? “The cane is no longer an object that the blind man would perceive, it has become an instrument with which he perceives.” Or, as Gregory Bateson — who does not mention and probably was not aware of Merleau-Ponty — put the same thought experiment in Steps to an Ecology of Mind,

Consider a blind man with a stick. Where does the blind man’s self begin? At the tip of the stick? At the handle of the stick? Or at some point halfway up the stick? These questions are nonsense, because the stick is a pathway along which differences are transmitted under transformation, so that to draw a delimiting line across this pathway is to cut off a part of the systemic circuit which determines the blind man’s locomotion.

Self, perceptual apparatus, technology — all flow along a continuum in which differences can be neither erased nor made absolute. The cyborg dream of perfect fusion with our technologies is an illusion; so too is the instrumentalist model in which tools are neutral, ready-to-hand, malleable by our intentions. As Sara Hendren has written, “All technology is assistive technology”; the question, then, is: What does any given technology assist? But such a question can only be answered meaningfully and profitably from within an acceptance of the mystery of our relations with all that we make.

Precisely because of this mystery, we need to evaluate our technologies according to the criteria established by our need for “conviviality.”

I use the term with the particular meaning that Ivan Illich gives it in Tools for Conviviality:

I intend it to mean autonomous and creative intercourse among persons, and the intercourse of persons with their environment; and this in contrast with the conditioned response of persons to the demands made upon them by others, and by a man-made environment. I consider conviviality to be individual freedom realized in personal interdependence and, as such, an intrinsic ethical value. I believe that, in any society, as conviviality is reduced below a certain level, no amount of industrial productivity can effectively satisfy the needs it creates among society’s members.

In my judgment, nothing is more needful in our present technological moment than the rehabilitation and exploration of Illich’s notion of conviviality, and the use of it, first, to apprehend the tools we habitually employ and, second, to alter or replace them. For the point of any truly valuable critique of technology is not merely to understand our tools but to change them — and us.

Alan Jacobs, “Attending to Technology: Theses for Disputation,” The New Atlantis, Number 48, Winter 2016, pp. 16–45.
