The human brain and language go together like toothpicks and cocktail wieners. The only question is which came first: Were our brains customized for language, or did language adapt to our brains?
The ’90s was the era of the language instinct. Indeed, Steven Pinker’s book The Language Instinct topped bestseller lists and inspired a whole generation of psycholinguists, including me. “Language is no more a cultural invention than is upright posture,” Pinker wrote. Bats use Doppler sonar to hunt insects, birds read constellations to navigate, and humans have a “biological adaptation to communicate information.” We must have helpful biases encoded in our genes: What else could explain the fact that the most complicated skill most humans will ever master is acquired by age four?
But during the last decade, the pendulum of scientific thought has begun its inevitable swing in the other direction. These days, general cognitive mechanisms, not language-specific ones, are all the rage. We humans are really smart. We’re fantastic at recognizing patterns in our environments—patterns that may have nothing to do with language. Who says that the same abilities that allow us to play the violin aren’t also sufficient for learning subject-verb agreement? Perhaps speech isn’t genetically privileged so much as babies are just really motivated to learn to communicate.
If the brain did evolve for language, how did it do so? An idea favored by some scholars is that better communicators may also have been more reproductively successful. Gradually, as the prevalence of these smooth talkers’ offspring increased in the population, the concentration of genes favorable to linguistic communication may have increased as well.
But two recent articles, one published in 2009 in the Proceedings of the National Academy of Sciences and a 2012 follow-up in PLOS ONE (freely available), challenge this account. Researchers Nick Chater, Morten Christiansen, and their colleagues mathematically modeled the relationship between genetic variants, or “alleles,” and linguistic “principles”—abstract features that vary across languages, such as how tense is marked, or whether we should home in on word order to find structure in words.
In their model, people or “agents” are composed of genes, languages are composed of principles, and each principle has a corresponding genetic variant. It is the job of agents to guess all of a language’s principles. These principles are binary—a language either has a given feature or it doesn’t—and corresponding alleles can cause agents to be biased in favor of or against guessing that feature. Or, the agent could exhibit no bias at all.
Then the best communicators—that is, the agents whose alleles make it easiest to correctly guess the language’s principles—couple and reproduce. Their offspring are composed of alleles selected randomly from their parents. Again and again this happens, with the fastest language-learners coupling each time. Over the course of many generations, the gene pool thickens with helpful alleles until—voila!—the overwhelming majority of alleles are helpful and learners’ guesses are so uncannily accurate as to seem instinctual.
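To make the setup concrete, here’s a toy version of that kind of simulation in Python. It’s my own sketch of the model as described above, not the researchers’ code, and every number in it (twenty principles, a hundred agents, and so on) is an arbitrary placeholder:

```python
import random

# Illustrative parameters -- not taken from the papers.
N_PRINCIPLES = 20    # binary features that define a language
POP_SIZE = 100       # number of agents per generation
N_PARENTS = 50       # how many of the best learners get to reproduce
GENERATIONS = 500

# Each allele is one of: +1 (biased toward guessing "feature present"),
# -1 (biased toward guessing "feature absent"), or 0 (no bias at all).
ALLELES = (+1, -1, 0)

def random_language():
    """A language is just a vector of binary principles."""
    return [random.choice((0, 1)) for _ in range(N_PRINCIPLES)]

def random_agent():
    """An agent is a vector of alleles, one per principle."""
    return [random.choice(ALLELES) for _ in range(N_PRINCIPLES)]

def fitness(agent, language):
    """Score an agent's ease of learning: a bias pointing the right way
    makes a principle easier to guess, a bias pointing the wrong way
    makes it harder, and a neutral allele does neither."""
    score = 0
    for allele, principle in zip(agent, language):
        if allele == 0:
            continue
        target = +1 if principle == 1 else -1
        score += 1 if allele == target else -1
    return score

def next_generation(population, language, size=POP_SIZE, n_parents=N_PARENTS):
    """The best learners pair up; each child inherits every allele from a
    randomly chosen parent."""
    ranked = sorted(population, key=lambda a: fitness(a, language), reverse=True)
    parents = ranked[:n_parents]
    children = []
    for _ in range(size):
        mum, dad = random.sample(parents, 2)
        children.append([random.choice(pair) for pair in zip(mum, dad)])
    return children

# A fixed language: generation after generation, the best guessers breed.
language = random_language()
population = [random_agent() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = next_generation(population, language)

neutral = sum(agent.count(0) for agent in population) / (POP_SIZE * N_PRINCIPLES)
print(f"Share of neutral alleles after {GENERATIONS} generations: {neutral:.2f}")
```

With the language held fixed, selection should steadily squeeze out the neutral alleles in favor of helpful biases, which is the gene-culture story in miniature.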
Makes sense, no? But now consider that languages change. (And in the real world they do—quickly.) If the language’s principles switch often, many of those helpfully biased alleles are suddenly not so helpful at all. For fast-changing languages, the model finds, neutral alleles win out: they alone provide agents with the needed flexibility to learn the language in whatever form it currently exists.
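In the sketch above, that wrinkle amounts to flipping principles between generations. The mutation rates below are, again, numbers I’ve made up, and this toy version only gestures at the result reported in the papers:

```python
def mutate_language(language, rate):
    """Each generation, every principle flips with probability `rate`."""
    return [1 - p if random.random() < rate else p for p in language]

def neutral_share(change_rate):
    """Run the simulation with a changing language and report what fraction
    of the final gene pool is made up of neutral (unbiased) alleles."""
    lang = random_language()
    pop = [random_agent() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop = next_generation(pop, lang)
        lang = mutate_language(lang, change_rate)
    return sum(agent.count(0) for agent in pop) / (POP_SIZE * N_PRINCIPLES)

# The faster the language moves, the more often a once-helpful bias is
# suddenly pointing the wrong way.
print("Neutral share, slow-changing language:", round(neutral_share(0.001), 2))
print("Neutral share, fast-changing language:", round(neutral_share(0.05), 2))
```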
In another set of simulations, the researchers divide the population of agents in half—simulating, for instance, a geographic split in which one tribe takes the high road and another the low. The language then continues to mutate separately for each of the populations (and each population’s genetic make-up changes differentially in response). The researchers wondered: How different would the two “geographically separated” groups be, both in terms of genes and linguistic principles?
Again they find that when the languages are programmed to hardly mutate at all, the genes have a chance to adapt to their local language. The two populations become genetically distinct, their alleles heavily biased toward the idiosyncrasies of their local language—precisely what we don’t see in the real world, where a Chinese infant raised in America will have little trouble learning English. But sure enough, when the languages are programmed to change quickly, neutral alleles are again favored.
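The split scenario can be bolted onto the same sketch: copy the language, divide the agents, and let the two lineages evolve separately. The divergence measure here is a crude one of my own, not the researchers’:

```python
def allele_profile(pop):
    """Per-principle fraction of 'feature present' biases -- a rough
    genetic fingerprint of a population."""
    return [sum(agent[i] == +1 for agent in pop) / len(pop)
            for i in range(N_PRINCIPLES)]

def split_divergence(change_rate):
    """Split one tribe into two, let language and genes evolve separately
    in each half, and measure how far apart the two gene pools end up."""
    lang = random_language()
    pop = [random_agent() for _ in range(POP_SIZE)]
    pops = [pop[:POP_SIZE // 2], pop[POP_SIZE // 2:]]
    langs = [list(lang), list(lang)]   # each group takes a copy of the language
    for _ in range(GENERATIONS):
        for i in (0, 1):
            pops[i] = next_generation(pops[i], langs[i],
                                      size=POP_SIZE // 2, n_parents=N_PARENTS // 2)
            langs[i] = mutate_language(langs[i], change_rate)
    a, b = allele_profile(pops[0]), allele_profile(pops[1])
    # Mean per-principle difference in allele frequencies between the groups.
    return sum(abs(x - y) for x, y in zip(a, b)) / N_PRINCIPLES

print("Genetic divergence, slow-changing languages:", round(split_divergence(0.001), 2))
print("Genetic divergence, fast-changing languages:", round(split_divergence(0.05), 2))
```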
So what does this all mean? We can quibble with the specifics of the model—the idea that there is a one-to-one correspondence between genes and linguistic features is obviously laughable, and there’s an argument to be made that weaker genetic biases might underlie regional variation while still allowing everyone to learn all languages—but it remains an interesting argument: maybe our brains couldn’t have evolved to handle language’s more arbitrary properties, because languages never stay the same and, as far as we know, they never have. What goes unspoken here is that the simulations seem to suggest that truly universal properties—such as language’s hierarchical nature—could have been encoded in our brains.