Language change is inevitable. The English of my grandparents is not the English of my parents, which is not my English, which will not be my children’s English, and so it goes until schoolchildren are, 600 years from now, struggling over Hemingway as we struggle over Chaucer. Why does language change? Technological and historical developments have a role: we wouldn’t have the word Internet without the Internet, nor gerrymandering without Vice President Elbridge Gerry. So does individual creativity. According to the OED, we have Shakespeare to thank for 1,600 new words. Contact with other languages and cultures also has its effects (though for English the phrase “you should see the other guy” feels relevant, given how many of our words and phrases have made their way abroad), as does humans’ tendency to shift toward linguistic styles we find prestigious or desirable.
But one of the most psychologically interesting drivers of language change is economy, or the ease with which a given sound, word, or syntactic construction is used. The same frugal whittling that leads us to contract can and not (though not am and not) may also affect which aspects of a language withstand the test of time and which don’t.
The linguist George Kingsley Zipf observed an inverse relationship between word length and word frequency: the words we encounter most often also tend to be the shortest. (This "law of abbreviation" is distinct from the rank-frequency relationship more commonly called Zipf's Law, which the mathematician Benoit Mandelbrot later generalized.) Certainly most of the fixed class of English function words—including pronouns, prepositions, and articles—are both monosyllabic and extremely common. This relationship makes a lot of sense. When a word is long and clunky, it is shortened. When a word is long and clunky and said often, it is shortened quickly. Each of the clunkily named cities I’ve called home (Cincinnati, Columbus, and Champaign-Urbana) has been given a nickname (Cincy, C-bus, and Chambana) by those most at risk of having to say them.
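The pattern is easy to check on even a scrap of text. A minimal Python sketch (the toy corpus and the frequency cutoff here are invented purely for illustration):

```python
from collections import Counter

# A tiny invented corpus; any stretch of running English shows the pattern.
text = (
    "the cat sat on the mat and the dog sat on the rug "
    "while the children watched the television in the evening"
)
counts = Counter(text.split())

# Split words into "frequent" (seen more than once) and "rare" (seen once).
frequent = [w for w, c in counts.items() if c > 1]
rare = [w for w, c in counts.items() if c == 1]

def avg_len(words):
    return sum(len(w) for w in words) / len(words)

# Frequent words (the, sat, on) average under 3 letters;
# one-off words (children, television, evening...) average closer to 5.
print(avg_len(frequent), avg_len(rare))
```

Even on a sample this small, the high-frequency function words drag the average length of the frequent set well below that of the rare set.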
Thus language is shaped by ease of production. What about ease of transmission? A relatively new line of research suggests that the learnability of a language may similarly act as a driving force for change. This research uses iterative learning tasks, where participants are taught artificial (and extremely simplified) languages generated by the experimenters. The output of one participant’s learning then serves as input for a second participant’s. In essence, such tasks press “fast-forward” on language evolution, allowing several generations of learners to shape the language in a single researcher’s lifetime, preferably before lunch.
In one study, published in Cognition two years ago by Kenny Smith of Northumbria University and Elizabeth Wonnacott of Oxford, 10 participants were presented with visual scenes containing common animals (e.g., cows), each accompanied by text. Participants would be shown, for instance, a single moving cow accompanied by the sentence Glim cow, or a pair of moving cows accompanied by either the sentence Glim cow fip or Glim cow tay. Thus, both fip and tay served as plural markers in the artificial language, much as –s does in English. There was no meaningful distinction between the two: tay and fip had the same function, and both appeared with each possible noun. But in some versions of the original language, tay was more common than fip, while in others fip appeared more often than tay.
During the training phase, participants typed out the accompanying sentence on a keyboard. During the test phase, however, they had to generate the sentence from the visual scene alone. Each set of sentences typed during the test phase was then used to train a new participant, a process repeated for five generations in each of the 10 original languages.
What happened? For the most part, individuals successfully learned the idiosyncrasies of the language they inherited, matching their output to their input even when the distribution of plural markers didn’t make a lot of sense. The cumulative effect after five generations, however, was powerful. In nine of the 10 languages, randomness significantly decreased: either the use of plural markers became meaningful (i.e., fip was consistently used as the plural marker for some animals and tay for others) or one of the markers disappeared altogether. In other words, learnability constraints ushered in regularity, a universal property of human languages.
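The shape of that result can be mimicked with a toy simulation. The sketch below is not the study’s learning model; it simply assumes each learner has a small “sharpening” bias—a tendency to overproduce whichever marker is already in the majority—and shows how five generations of that bias, compounding, can drive one marker out of the language entirely:

```python
def sharpen(p, bias=2.0):
    """A toy regularization bias (assumed for illustration, not the
    study's model): nudge an estimated probability toward whichever
    marker already dominates."""
    if p in (0.0, 1.0):
        return p
    return p**bias / (p**bias + (1 - p)**bias)

def next_generation(p_tay, n_items=60):
    """A learner estimates p(tay) from the previous generation's
    sentences, then produces n_items new sentences. Production here
    is idealized (rounded to whole sentences, no sampling noise)."""
    p = sharpen(p_tay)
    n_tay = round(p * n_items)
    return n_tay / n_items

p = 0.6  # seed language: tay somewhat more common than fip
history = [p]
for _ in range(5):
    p = next_generation(p)
    history.append(p)

# Proportion of tay grows each generation until fip vanishes.
print(history)
```

Each learner only slightly exaggerates the majority marker, yet over five generations the exaggeration compounds until tay is used every time—an idealized version of the marker-elimination outcome the study observed.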
There are, of course, limits to what we can and should extrapolate from research on artificial languages, particularly ones this basic. But next week I’ll discuss decades-old work on a natural language, Nicaraguan Sign Language, which suggests that an extrapolation wouldn’t be entirely unfounded.