Acting Like a Baby

Why is it so hard to hear non-native sounds?

 

My last two columns focused on the psychology of acting, from the memory challenges faced by those learning obscene amounts of dialogue to the broader challenges of performing an identity onstage. This week’s post concludes this jaunt into thespianism with a discussion of yet another challenge: the accent.

According to W magazine, until the 1980s nobody—not directors, actors, or theatergoers—really cared about how authentically an accent or dialect was depicted. If the proliferation of articles like “Keanu Reeves Dominates the List of 20 Worst Accents in Film History” is any indication, however, audiences are less forgiving these days. (Two equally powerful forces have been identified as responsible for the change: globalism and Meryl Streep.)

Perhaps, though, it is unfair to come down too harshly on actors for their mangled accents. Mastering the fine-grained motor movements necessary to produce unfamiliar sounds is, as one might expect, difficult. But the process is made all the harder because we aren’t always able to hear these unfamiliar sounds with any degree of precision.

A bit of background: humans tend to perceive sounds as discrete, rather than continuous, in nature. Consider the syllables ba and da. They are produced similarly, except that we close our lips to say ba, and we put our tongue against the roof of our mouth to say da. We may accidentally produce a sound somewhere in between—our tongue may touch the back of our teeth, for instance—but our listeners will still probably treat the sound as either ba or da. (Which they go with will depend on a number of factors, such as who we are and what we’re talking about.) This is great! Our perceptual systems are wired to help us make meaningful distinctions in our language.
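One rough way to picture this (my own toy illustration, not anything from the research): imagine a listener who has stored just two category prototypes, ba and da, at opposite ends of a made-up acoustic scale, and who assigns any sound she hears to whichever prototype is closer.

    # A hypothetical sketch of categorical perception. The 0-to-1 scale and
    # the prototype values are invented for illustration.
    CATEGORY_PROTOTYPES = {"ba": 0.2, "da": 0.8}

    def perceive(sound: float) -> str:
        """Assign a sound to the category whose prototype is closest."""
        return min(CATEGORY_PROTOTYPES, key=lambda c: abs(CATEGORY_PROTOTYPES[c] - sound))

    # Even an "in-between" production comes out as one category or the other.
    print(perceive(0.45))  # -> ba
    print(perceive(0.55))  # -> da

A listener built this way never reports hearing something halfway between ba and da; the in-between details are simply discarded.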

But our listeners give something up to treat sounds as discrete, namely some of the details of the sound itself. This can be troublesome when it comes time to perceive the new distinctions that are relevant for an unfamiliar language or dialect. In Hindi, the continuum between ba and da is divided into three meaningful sounds, not two—that is, a Hindi speaker will hear a da produced against the roof of the mouth and a da produced against the teeth as two distinct sounds. Speakers of English and Hindi simply don’t hear the ba-da continuum in the same way.

Adult speakers, that is. As infants we would have been equally sensitive to the distinction between Hindi’s two da’s. Humans are born with an ability to discriminate among all of the sounds that carry meaning in any human language. Then, between six and ten months of age, we lose sensitivity to the distinctions that aren’t meaningful to us. Indeed, it would be worrisome if we didn’t.

A 2005 study led by Patricia Kuhl at the University of Washington found a relationship between an infant’s sound-discrimination abilities at seven months and her language abilities up through age two and a half. But the relationship was a complicated one: better discrimination between native sounds predicted better later language development, while better discrimination between non-native sounds actually predicted poorer language development. In other words, English-speaking infants who still have an easy time discriminating between Hindi’s da’s at seven months are behind the curve.

Here’s a puzzle (not even ostensibly related to acting): while a very young infant can easily discriminate b from d, distinguishing ball from doll is far more difficult. It is something of a mystery, then, how infants can know which sound distinctions are meaningful in their language (and thus should be attended to) before they can recognize words well. Distinctions between b and d only matter, after all, because when we say ball we mean something very different from doll.

Some psychologists suspect that infants are sensitive to the distributions of sounds in their languages. Because English sounds on the ba-da spectrum tend to cluster at the two extremes, English-learning children may come to form two categories; three clusters, or just one, would have prompted them to divide the spectrum into thirds, or not at all. Other researchers argue that young infants can indeed learn to recognize words, but only after hearing them spoken by a wide range of speakers. With more variability in the ways in which a word is spoken, the thinking goes, it’s easier to perceive what all of these different ways have in common—which is to say, the properties of the word itself.
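For readers who like to tinker, here is a minimal sketch of the distributional idea, using made-up numbers and an off-the-shelf clustering model rather than anything from the studies themselves: feed a model tokens that pile up at the two ends of the ba-da continuum, and a two-category description wins out.

    # A toy simulation (invented data) of distributional learning: if tokens
    # along an acoustic continuum cluster at two extremes, a simple Gaussian
    # mixture model prefers two categories over one or three.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Hypothetical English-like input: tokens pile up near the "ba" end (0.2)
    # and the "da" end (0.8) of the continuum.
    tokens = np.concatenate([
        rng.normal(0.2, 0.05, 500),  # ba-like tokens
        rng.normal(0.8, 0.05, 500),  # da-like tokens
    ]).reshape(-1, 1)

    # Compare one-, two-, and three-category models by BIC (lower is better).
    for k in (1, 2, 3):
        model = GaussianMixture(n_components=k, random_state=0).fit(tokens)
        print(f"{k} categories: BIC = {model.bic(tokens):.1f}")
    # The two-category model should come out on top here; input clustered in
    # three places (as in Hindi) would instead favor three categories.

Nothing about this little model is meant to capture how infants actually do it; it just makes concrete the claim that the shape of the input alone can hint at how many categories to carve.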

Sound perception is wonderfully complicated, and there’s no doubt about it: actors older than six months are at a severe disadvantage (though they can speak—I’ll give them that). It seems the country’s dialect coaches have their work cut out for them if they hope to develop the next Meryl Streep.

 


Jessica Love holds a doctorate in cognitive psychology and edits Kellogg Insight at Northwestern University’s Kellogg School of Management.
