Several times in recent posts and comments I’ve mentioned Jaron Lanier’s book You Are Not a Gadget. It’s wildly uneven, a product of too many overlapping and non-overlapping interests, but there’s a lot of wonderful stuff in it. I thought I might reproduce one little section. Says Lanier, “These are some of the things you can do to be a person instead of a source of fragments to be exploited by others.”

* Don’t post anonymously unless you really might be in danger.

* If you put effort into Wikipedia articles, put even more effort into using your personal voice and expression outside of the wiki to attract people who don’t yet realize that they are interested in the topics you contributed to.

* Create a website that expresses something about who you are that won’t fit into the template available to you on a social networking site.

* Post a video once in a while that took you one hundred times more time to create than it takes to view.

* Write a blog post that took weeks of reflection before you heard the inner voice that needed to come out.

* If you are twittering, innovate in order to find a way to describe your internal state instead of trivial external events, to avoid the creeping danger of believing that objectively described events define you, as they would define a machine.

One more random but really interesting point from Lanier. He looks at the famous Turing test in a very distinctive way:

But the Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?

Text Patterns

August 3, 2010

6 Comments

  1. This:

    "Post a video once in a while that took you one hundred times more time to create than it takes to view.

    And then this:

    "you've just lowered your own standards"

    I am too young to remember it myself but have been told that when video first arrived, there were news directors who said something along the lines of "Not in my newsroom. Not now, not ever." But the sheer convenience of video over film was too much for these curmudgeons to withstand.

    Similarly, BetaSP still looks better than MiniDV or DVD (let alone VHS), but again, convenience wins the day. A Pentax SLR (the standard student model for many years) still takes more beautiful images than any digital point-and-shoot, but again, it's not about beauty, it's about convenience; and all we need to take advantage of the convenience of MiniDV or a Canon Elph or whatever is to lower our expectations.

    Sometimes it's a thoughtful and fair trade, but I suspect that often it's simply a convenient one. This suspicion of mine is reinforced by the fact that over and over again I see people finding advantage in doing what most regard as inconvenient.

  2. Hooray for Lanier's Talbottesque critical attitude toward the Turing test! Today's facile equation of mind with computing can scarcely be called into question enough to suit me.

  3. This is utterly peripheral to your point (and Lanier's) but I need to say this to get it off my chest.

    The Turing test says nothing whatsoever about a human being's intelligence. Nothing. (It may say something about a computer program's intelligence, but nothing fundamental, which I'll get to in a moment.) That's because a conversation is not, as most people seem to think, an interaction in which you have an idea "inside" your head, translate it into words, push it "outside," and the other person then takes it into his head (with some noise) and extracts its meaning.

    Instead, a conversation is a two-way street in which the listener is as important as the speaker. What holds it together is the background of interpretation that the listener brings to it. That's why Joseph Weizenbaum's Eliza worked. Because Eliza was introduced to the students as a therapist, they interpreted her responses in that light and did their best to make sense of them. This does NOT mean that those students were getting less intelligent. All it means is that a conversation is not what we think it is, a back-and-forth passing of ideas, but something different, more collaborative in nature.

    Harold Garfinkel did some interesting research on this (there's also "Conversation Analysis," a field pioneered by Harvey Sacks). Here's one that I particularly like. The first table there shows how much is left out of our utterances and how the listener brings the entire background of the conversation to bear while trying to understand what the speaker says. It's just so effortless for us that we don't even notice it.

    So, to recap, the fact that someone was fooled by a cleverly written computer program into thinking that the program was a person says nothing whatsoever about that person's intelligence. Moreover, once you understand conversations in this way, it becomes easy to see why programs like Weizenbaum's Eliza succeed. All they have to do is pass a bare minimum threshold of intelligibility and the listeners will take care of the rest.
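
    To make that concrete, here is a minimal sketch of the idea (a toy illustration in Python, under assumed rules of my own invention, not Weizenbaum's actual DOCTOR script): the program only matches a keyword, reflects a fragment of the speaker's own words back, and lets the listener supply the rest.

    ```python
    # Toy Eliza-style responder: the point is how little the program has to do.
    # Illustrative sketch only; the rules below are invented for this example.

    import re
    import random

    # Hypothetical keyword rules: pattern -> canned reflections of the match.
    RULES = [
        (re.compile(r"\bi need (.*)", re.I),
         ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (re.compile(r"\bi am (.*)", re.I),
         ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (re.compile(r"\bmy (mother|father|family)\b", re.I),
         ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
    ]

    # Stock phrases for when nothing matches at all.
    FALLBACKS = ["Please go on.", "I see.", "How does that make you feel?"]

    def respond(utterance: str) -> str:
        """Reply using the first matching keyword rule, else a stock phrase."""
        for pattern, templates in RULES:
            match = pattern.search(utterance)
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(FALLBACKS)

    if __name__ == "__main__":
        print(respond("I need a vacation"))        # e.g. "Why do you need a vacation?"
        print(respond("My mother calls too often"))
        print(respond("The weather is odd"))       # falls back to a stock phrase
    ```

    That's the whole trick: a bare minimum of intelligibility, with the listener doing the interpretive work.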

  4. _"All they have to do is pass a bare minimum threshold of intelligibility and the listeners will take care of the rest."_

    One of my pithy things to say at directors' Q&As is that on his best day Steven Spielberg, at most, gives his audience 49%, and that they provide the other 51% themselves; and that since I have neither Spielberg's talent nor resources, I'm lucky if I can get up to 19%, leaving a lot of work to be done by the audience.

    I guess that puts filmmakers, AI programmers, and magicians all in the same class!

  5. The rule about anonymity is OK if you define "danger" broadly enough. Anonymity allows people to:

    * offer "lay" advice in their field of expertise without fear of a lawsuit

    * comfort a victim of trauma by sharing a very personal traumatic story

    * ask questions about sex or some other embarrassing topic

    * put out feelers for other job opportunities when they've already got a job

    * tell jokes or make comments that are appropriate in one community but that might offend your mother or your boss or your grandma's minister or your ten-year-old nephew or your next-door neighbor

    And that's before you get to the more obvious dangers: whistleblowing, criticizing powerful governments or businesses, organizing unions, voicing unpopular opinions, etc.

  6. They say computers in Japan are more likely to pass the Turing test because the judges don't want the computer to lose face.

    But seriously, the Turing test is not about how "smart" a computer (or a person) is. It's a test of whether a computer is doing a particular kind of thinking that (so far) only humans can do. And it's a kind of thinking that humans are not always engaged in, so articles that snark about a human being "failing" a Turing test are stupid. I'd "fail" a driving test if I weren't in a car.

    I don't have a definition for this kind of thinking, and neither did Turing. Rather, I have a conviction that in a prolonged conversation I could detect whether a computer was really doing the kind of thinking that so far only humans can do.

Comments are closed.