Psycho Babble

Tuesdays with Siri

By Jessica Love | December 11, 2014

Jessica Love’s last Psycho Babble column appeared on Oct. 2, 2014. We are re-posting several of our favorite columns, including this one, published on August 29, 2013.

By all accounts, most of us talk very differently to computers—Google Search, Siri, the automated phone system tasked with extracting our pizza orders—than to fellow humans. We are inclined to speak to machines loudly, slowly: not unlike talking to a stupid child, as one friend puts it. We are careful to curb our regional accents. We e-nun-ci-ate. We do not squander our best jokes.

Still, human-computer conversations bear some rather surprising resemblances to conversations of the human-human ilk. One such resemblance is, well, resemblance. As I’ve written about in the past, when we chat with other people, we tend to adopt some of their language patterns, and they adopt some of ours. Should our friend call a couch a sofa, we too (at least in his presence) call it a sofa. When he asks us to help carry the sofa home, we are likelier to agree to carry the sofa home than to carry home the sofa. If our friend is a fast talker, we may even converge upon his speech rate.

Our conversations with computers are also characterized by this subtle, often unconscious linguistic mimicry, according to work by the University of Edinburgh’s Holly Branigan and her colleagues. Some studies have even found that we are more likely to pattern our language after a computer than after a human. Why? Branigan has suggested that it’s because we believe machines to be less linguistically savvy than people. In other words, we want to help a computer out: if it understands carry the couch home, why take a chance with carry home the couch? Indeed, in another study, Branigan and her colleagues report that people were more likely to replicate the linguistic patterns of basic, out-of-date computers than of advanced, up-to-speed ones.

And yet. Our responsiveness to language produced by computer algorithms goes far beyond what could plausibly be expected to aid communication. For instance, implicitly wary of hurt “feelings,” we offer computers more generous performance reviews when questioned directly by them than by other computers, or via pen and paper. We also respond to social cues: we are delighted when our virtual conversants look us in the eyes as if to say, “Go on, I am listening.” We’re won over when they indicate on their handsome, pixelated faces that they’d like to speak next.

It seems clear, then, that many of the behaviors that we’ve internalized over a lifetime of human conversations are unlikely to change just because our conversation partner has a microprocessor instead of a brain. But this has me wondering: What happens, then, when our daily conversations are as likely to take place with computers as with humans—something futurists have long predicted but that has recently come to seem more real and urgent?

Just this month, the New York Times’s Ian Urbina reported on the increasing ubiquity of socialbots: robotic programs designed to lure actual humans into virtual conversations, and then, more often than not, convince them to do something: buy stock, adopt a political stance, even fall in love. (And who better to fall for than a Nigerian Prince?) “Within two years,” Urbina writes, “about 10 percent of the activity occurring on social online networks will be masquerading bots, according to technology researchers.”

Make no mistake: these bots will get good. As will Siri and Google Search and any number of algorithms programmed—for reasons insidious or otherwise—to behave as humanly as possible. I find it very probable that, in my lifetime, I’ll be able to have entire conversations without ever quite knowing whom or what I’m talking to.

But here’s the thing: given that the mechanisms underlying computers’ humanly behavior remain very different from those underlying ours—which is to say, given that computers largely stay computers and we largely stay human—it seems likely that there will always be some circumstances under which machines have an easier time “passing” than others. There will always be some conversation styles or linguistic structures that computers will find easier, less error-prone—and will therefore prioritize.

As these features figure more prominently in our linguistic environment, we too are likely to embrace them. I can therefore picture not only a world in which computers have adapted to human language but also one in which, ever so slightly, human language itself has shifted to make communication easier for computers.
