I’ve written here about my interest in Amazon’s recently implemented “Popular Highlights” feature, which lets Kindle readers know what passages other Kindle readers are taking note of. But Ted Striphas points to a rather worrisome aspect of this technology:
When people read, on a Kindle or elsewhere, there’s context. For example, I may highlight a passage because I find it to be provocative or insightful. By the same token, I may find it to be objectionable, or boring, or grammatically troublesome, or confusing, or…you get the point. When Amazon uploads your passages and begins aggregating them with those of other readers, this sense of context is lost. What this means is that algorithmic culture, in its obsession with metrics and quantification, exists at least one level of abstraction beyond the acts of reading that first produced the data.

I’m not against the crowd, and let me add that I’m not even against this type of cultural work per se. I don’t fear the machine. What I do fear, though, is the black box of algorithmic culture. We have virtually no idea of how Amazon’s Popular Highlights algorithm works, let alone who made it. All that information is proprietary, and given Amazon’s penchant for secrecy, the company is unlikely to open up about it anytime soon.

In the old cultural paradigm, you could question authorities about their reasons for selecting particular cultural artifacts as worthy, while dismissing or neglecting others. Not so with algorithmic culture, which wraps abstraction inside of secrecy and sells it back to you as, “the people have spoken.”
Perhaps a more mysterious algorithm would have better served the quoted author's argument. Amazon.com all but reveals the secret sauce in its recipe: "We combine the highlights of all Kindle customers and identify the passages with the most highlights. The resulting Popular Highlights help readers to focus on passages that are meaningful to the greatest number of people." http://kindle.amazon.com/
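The aggregation Amazon describes is simple enough to sketch in a few lines. Here is a minimal, hypothetical illustration of "identify the passages with the most highlights" — the passage IDs, data shapes, and function names are assumptions for illustration, not Amazon's actual implementation:

```python
from collections import Counter

def popular_highlights(highlight_events, top_n=3):
    """Tally highlight events (one per reader per passage) and
    return the most-highlighted passages, per Amazon's description:
    'identify the passages with the most highlights.'"""
    counts = Counter(passage for _reader, passage in highlight_events)
    return counts.most_common(top_n)

# Hypothetical data: (reader_id, passage_id) pairs.
events = [
    ("r1", "p7"), ("r2", "p7"), ("r3", "p7"),
    ("r1", "p2"), ("r2", "p2"),
    ("r3", "p9"),
]
print(popular_highlights(events))  # → [('p7', 3), ('p2', 2), ('p9', 1)]
```

Nothing in such a tally records *why* any reader marked a passage — which is precisely the loss of context the quoted author worries about.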
The following blog post suggests the algorithm is rather crude; the most frequently highlighted passages, not surprisingly, are those with the most obscure words. http://erratasec.blogspot.com/2010/05/popular-highlights-on-amazon-kindle.html
Though Amazon is a "closed" company, algorithmic culture is not especially opaque. The secrets behind these sorts of "social web" algorithms are revealed in dozens, if not hundreds, of books. Google "collective intelligence" for a sampling.
You're straying into my territory now, Alan, so you'll pardon me if I reach in and make an adjustment. 🙂
I think the real danger of "algorithmic culture" is the broad perception that the "unbiased, perfectly informed and programmed machine has spoken".
And with the explosion of information, we are more and more dependent on these "unbiased, perfectly informed and programmed machines" to parse questions that have traditionally been the province of human judgement: fair use, fighting words, and other things of more or less importance.
Tony: I thought I might hear from you about this!
Dave: I think Striphas's chief point is that the algorithms don't tell us as much as we might tend to think. E.g., it can know what we highlight but not why we highlight it. And that point stands whether the algorithm is simple or complex.
I believe Striphas makes two points: that the algorithm denudes the highlights, stripping them of context, and that it is opaque. His ultimate conclusion—that algorithmic culture generates a deceptive aura of objectivity—rests on both points.
I am not as pessimistic as Striphas seems to be about the reader's ability to criticize algorithmic culture—i.e., to take it for what it's worth. Amazon itself admits that the aggregator is rather crude, simply tallying how many times an item has been highlighted. I imagine that most readers would intuitively understand these limitations. Thus, I cannot conclude with any confidence how much "we might tend to think" about algorithms. (Indeed, I have learned to question all uses of the first person plural in tech commentary. "We will no longer use PCs, we are all becoming shallow thinkers, we place too much trust in algorithmic culture," and whatnot.) Perhaps a better question is whether there is anyone who has not learned from experience to adopt a healthy skepticism towards Amazon.com's product recommendations.
"E.g., it can know what we highlight but not why we highlight it. And that point stands whether the algorithm is simple or complex." Certainly, an algorithm can never know why a particular individual chooses to highlight a particular sentence or word. But a sufficiently complex algorithm can begin to make "educated" guesses about certain passages have been highlighted more often than others, whether by taking into account each user's highlighting habits or by "learning" what particular types of marks (e.g., long vs. short) tend to indicate.
Seems like the only time I hear people complain about algorithmic recommendations, it's not from the people searching (e.g., "Google's giving me bland results!") but from people worried that their products will be excluded ("Google's not listing my website!").
I don't hear people saying, "I'm not finding what I want." I hear people saying, "Other people might not be finding what I want them to find."
Michael, what you're saying is, if you'll pardon my French, bullshit.
(Virtually) the only people who notice what sort of results are excluded are those who have some skin in the game, because that's the way it works: you don't see what you don't know you're missing.
If this matters, it's because part of what Shirky, Jobs, et al. are pitching is greater democratization of access, but that's not really what they're selling.
The dark vision: Yes, you'll have more computing power in your smartphone than all of NASA circa 1966, but you'll be less empowered than ever. It's James Poulos's "Pink Police State," but instead of X and all-night rave parties, it's lolcatz and other "creative capital"; it's a sense of freedom and liberty instead of freedom and liberty.
And I suspect that's enough for most people, maybe even you. 😉
Do you, personally, have trouble finding good information in the field you know best? Do you have reason to believe that when you do a determined search for information in a field you don't know well that you get bad/superficial information because the available tools are insufficient?
Maybe I'm a deluded sheeple, but my answer to both questions is emphatically no. I have an easier time finding what I think is good information in both categories now than at any time in my life, and I don't see a trend for it getting worse.
Well, since we're doing anecdotes, Mike, how about this one:
After the birth of their first child, my sister and brother-in-law were having some (not uncommon) marital issues; and my sister turned to the internet for information, advice, and perhaps a little solace.
My sister is not a stupid person. Nor is she a novice researcher, or computer user. In fact, she's doing graduate work in musicology, and is a part-time professional researcher.
Despite the above, and diligent searching on her part, the only information the internet seemed interested in providing her was either evangelical Christian marital advice, or MILF and/or prego porn. My sister found none of this information useful; and more than that, she found the experience furthered her sense of isolation.
Fortunately for her and her marriage, sexual information and the internet is an area in which I have some expertise, and I was able to point her to resources that did not rank as especially Relevant under the internet's algorithmic interpretation of her various queries. And because sexual information and the internet is an area in which I have some expertise, I have an understanding of why the information she was looking for is not considered Relevant by the algorithms she hoped would help her; and when I consider these reasons, my enthusiasm for what the internet provides me is tempered by an awareness that it can only provide me with what it is programmed to provide me.
As always, your mileage may vary.