This defense of Facebook by Tal Yarkoni is telling in so many ways. Let me count some of them.
Yarkoni begins by taking note of the results of the experiment:
The largest effect size reported had a Cohen’s d of 0.02, meaning that eliminating a substantial proportion of emotional content from a user’s feed had the monumental effect of shifting that user’s own emotional word use by two hundredths of a standard deviation. In other words, the manipulation had a negligible real-world impact on users’ behavior. To put it in intuitive terms, the effect of condition in the Facebook study is roughly comparable to a hypothetical treatment that increased the average height of the male population in the United States by about one twentieth of an inch (given a standard deviation of ~2.8 inches). Theoretically interesting, perhaps, but not very meaningful in practice.
This seems to be missing the point of the complaints about Facebook’s behavior. The complaints are not “Facebook successfully manipulated users’ emotions” but rather “Facebook attempted to manipulate users’ emotions without informing them that they were being experimented on.” That’s where the ethical question lies, not with the degree of the manipulation’s success. “Who cares if that guy was shooting at you? He missed, didn’t he?” — that seems to be Yarkoni’s attitude.
Here’s another key point, according to Yarkoni:
Facebook simply removed a variable proportion of status messages that were automatically detected as containing positive or negative emotional words. Let me repeat that: Facebook removed emotional messages for some users. It did not, as many people seem to be assuming, add content specifically intended to induce specific emotions.
It may be true that “many people” assume that Facebook added content, but I have not seen even one say that. Does anyone really believe that Facebook is generating false content and attributing it to users? The concern I have heard people express is that they may not be seeing what their friends or family are rejoicing about or lamenting, and that such hidden information could be costly to them in multiple ways. (Imagine a close friend who is hurt because you didn’t commiserate with her when she was having a hard time. After all, the two of you are friends on Facebook, and she posted her lament there — you should have responded.)
But here’s the real key point that Yarkoni makes — key because it reveals just how arrogant our technological overlords are, and how deep their sense of entitlement:
It’s not clear what the notion that Facebook users’ experience is being “manipulated” really even means, because the Facebook news feed is, and has always been, a completely contrived environment. I hope that people who are concerned about Facebook “manipulating” user experience in support of research realize that Facebook is constantly manipulating its users’ experience. In fact, by definition, every single change Facebook makes to the site alters the user experience, since there simply isn’t any experience to be had on Facebook that isn’t entirely constructed by Facebook…. So I don’t really understand what people mean when they sarcastically suggest — as Katy Waldman does in her Slate piece — that “Facebook reserves the right to seriously bum you out by cutting all that is positive and beautiful from your news feed”. Where does Waldman think all that positive and beautiful stuff comes from in the first place? Does she think it spontaneously grows wild in her news feed, free from the meddling and unnatural influence of Facebook engineers?
Well, I’m pretty sure that Katy Waldman thinks “all that positive and beautiful stuff comes from” the people who posted the thoughts and pictures and videos — because it does. But no, says Yarkoni: All those stories you told about your cancer treatment? All those videos from the beach you posted? You didn’t make that. That doesn’t “come from” you. Yarkoni completely forgets that Facebook merely provides a platform — a valuable platform, or else it wouldn’t be so widely used — for content that is provided wholly by its users.
Of course “every single change Facebook makes to the site alters the user experience” — but not all changes are ethically or substantively the same. Some manipulations are more extensive than others; changes in user experience can be made for many different reasons, some of which are better than others. That people accept some changes without question while vigorously protesting others isn’t a sign of inconsistency, it’s a sign that they’re thinking, something that Yarkoni clearly does not want them to do. Most people who use Facebook understand that they’ve made a deal in which they get a platform to share their lives with people they care about, while Facebook gets to monetize that information in certain restricted ways. They have every right to get upset when they feel that Facebook has unilaterally changed the deal, just as they would if they took their car to the body shop and got it back painted a different color. And in that latter case they would justifiably be upset even if the body shop pointed out that there was small print in the estimate form they signed permitting it to change the color of their car.
One last point from Yarkoni, and this one is the real doozy: “The mere fact that Facebook, Google, and Amazon run experiments intended to alter your emotional experience in a revenue-increasing way is not necessarily a bad thing if in the process of making more money off you, those companies also improve your quality of life.” Get that? In Yarkoni’s ethical cosmos, Facebook, Google, and Amazon — and presumably every other company you do business with, and for all I know the government (why not?) — can manipulate you all they want as long as they “improve your quality of life” according to their understanding, not yours, of what makes for improved quality of life.
Why do I say their understanding and not yours? Because you are not consulted in the matter. You are not asked beforehand whether you wish to participate in a life-quality-improving experiment, and you are not informed afterwards that you did participate. You do not get a vote about whether your quality of life actually has been improved. (Our algorithms will determine that.) The Great Gods of the Cloud understand what is best for you; that is all ye know on earth, and all ye need know.
In addition to all this, Yarkoni makes some good points, though they’re generally along the other-companies-do-the-same line. I may say more about those in another post, if I get a chance. But let me wrap this up with one more note.
Tal Yarkoni directs the Psychoinformatics Lab in the Psychology department at the University of Texas at Austin. What do they do in the Psychoinformatics Lab? Here you go: “Our goal is to develop and apply new methods for large-scale acquisition, organization, and synthesis of psychological data.” The key term here is “large-scale,” and no one can provide vast amounts of this kind of data as well as the big tech companies that Yarkoni mentions. Once again, the interests of academia and Big Business converge. Same as it ever was.