Time Traveling Through a Sentence
To understand language, we relive the past and predict the future
How quickly does language comprehension happen? Do we allow words and structures to accumulate like snowflakes on a porch swing, scooping them up at a sentence’s end to take stock of what we’ve got? Or does sense-making never stop? Do we perhaps update our understanding mid-sentence, mid-phrase, mid-word, mid-syll—? After decades of research, psycholinguists can say with some confidence that we understand language as we perceive it: incrementally, like a striptease, its message laid bare a word, or a fraction of a word, at a time.
Sometimes, however, language comprehension appears to happen in reverse. Take a sentence like The *eel was on the table, with the asterisk signifying a cough, a burst of white noise, a feline caterwaul—anything, really, that prevents us from hearing that phoneme, or sound. We will never perceive the phoneme as missing; our minds will replace it seamlessly. Because the sentence continues was on the table, we’ll mentally replace the missing phoneme with an m to form meal. But had the sentence continued was on the axle, we’d have plugged in a w to form wheel. We process language almost as quickly as it unfolds, yet we can use new information to restore a missing phoneme four words back and then continue on, oblivious to the hiccup.
At other times language comprehension seems to anticipate the future. In a famous study, Gerry Altmann and Yuki Kamide showed participants a cartoon scene of a boy surrounded by a cake and some toys. It is a truth semi-universally acknowledged that people, whenever possible, look at what they’re thinking about, so as participants listened to sentences, their eye movements to the various pictures in the scene were recorded. Participants who heard The boy will move the cake shifted their eyes to the cake at the beginning of the word cake. But when move was replaced with eat, their eyes shifted to the cake much earlier. A verb’s meaning, then, seemingly allows us to predict what words we’ll hear next, or at least to ignore the ones we probably won’t.
Of course, it’s one thing to hear eat and predict cake while staring at five objects, only one of which is edible. It’s quite another to predict future words that refer to things not in our immediate environment. But how to test whether a participant is predicting that, say, the word kite will appear at the end of a sentence like The day was breezy so the boy went outside to fly a ______ ? If you ask the participant directly what she thinks will come next, her prediction is no longer spontaneous. If you investigate how she responds to the word kite, well, she’s definitely thinking kite—now.
So Katherine DeLong, Thomas Urbach, and Marta Kutas measured the brain’s N400 waveform in response to sentences like the one about the kite. The N400 is a neural signature of meaning construction that kicks in between 200 and 500 milliseconds after we’re presented with a word, a picture, a symbol, or some other “meaningful” stimulus; its amplitude is understood to increase in response to more incongruous, or less predictable, input.
Instead of measuring N400 responses to the highly predictable word kite, however, the researchers measured responses to the article, a or an, presented just before it. Critically, they manipulated this article so that it fit either the predicted noun or a less predicted one, e.g., The day was breezy so the boy went outside to fly an airplane. They found that the amplitude of N400 responses to the article increased as the predictability of the following noun decreased. This is strong evidence that, by the time participants encountered fly, they already expected a kite to follow. Recent research by Jakub Szewczyk and Herbert Schriefers suggests that we spontaneously predict more than single words: we generate predictions for whole classes of words, such as animate versus inanimate nouns.
And yet, exciting as these findings are, they have obvious limits. The problem, readers, is that much of the time—outside of carefully constructed, highly constrained contexts—the probability of correctly predicting the next word is quite low, sometimes essentially zero. How often we really bother to predict, then, may well depend on the consequences of predicting wrong.