Historians and Nature

We can hardly speak of evolution without mentioning the name of Charles Darwin. So I will begin with the founding father. Early in 1834 the young naturalist Darwin left his berth aboard the H.M.S. Beagle to venture across the grassy plains of Patagonia. Those plains made an impression on his mind almost as powerful as that of the more famous Galápagos Islands. Despite their dusty monotony, the plains offered him vivid glimpses into the deep, unknown history of life on earth–vestiges of creatures now extinct and ways of life now vanished.

At Port St. Julian, where grassland meets ocean, he discovered the fossilized remains of a large mammal. It was the first time in his explorations that he had come across such ancient bones. They turned out to be bones of an extinct giant llama, not the mastodon he supposed, but unmistakably they spoke of “the changed state of the American continent,” where species succeeded species, throwing “more light on the appearance of organic beings on our earth, and their disappearance from it, than any other class of facts.” The grasslands forced Darwin to confront the power of nature, the mutability of organisms, and the ecological revolutions that had occurred in the past.

Darwin is surely the most influential scientist in modern times, not only as the founder of evolutionary biology and ecology but also as the inspirer of anthropologists, economists, psychologists, and philosophers. Despite the stubborn resistance of many religious people, his science has profoundly reshaped our modern worldview–indeed, evolution is its very foundation. His book On the Origin of Species, published in 1859, argued that life has evolved by wholly natural processes, without any supernatural intervention. Every organism, he pointed out, varies in some degree or trait from all other organisms, and that variability is basic to evolution. In a world of limited resources, variation must compete against variation, and those individuals that survive and leave offspring provide the next generation of organisms that, in turn, may leave heirs of their own. The less successful–those less fitted to their environment–eventually vanish into the earth, unless conditions suddenly change in their favor.

Today we know far more about the history of life than Darwin did, although the main outlines of his theory of evolution through natural selection have held up amazingly well. Since the discovery of the structure of DNA in 1953, scientists have learned how to track natural selection and other forms of evolution back through millions of years. Through the science of ecology we know a lot more about how complex ecosystems evolve, often in response to large-scale, sometimes sudden and violent environmental disturbance by shifting climatic regimes, drifting continents, and crashing meteors. And not least in significance, we have directly observed what Darwin never saw in Patagonia or the Galápagos Islands: evolution actually taking place, not in some long-ago period and at imperceptible rates, but in the present, at a measurable pace.

We also know much more than Darwin did about how humans have evolved in mind and body, and we are beginning to establish scientifically how much our behavior is rooted in our genes, so that we are not solely the product of society or culture. Edward O. Wilson was not the first, back in 1975, to argue that our behavior has much in common with that of other species, but today that argument no longer meets the fierce resistance it once did. The study of evolutionary psychology is making significant gains toward explaining how the brain has evolved and how it shapes what we see and how we behave. More and more social scientists, following the lead of the natural scientists, are eagerly pursuing neo-Darwinian theories at the level of the individual mind, group interaction, and even religion and culture.

Human beings, those scientists tell us, are not born into this world with minds like blank pages, waiting to be written on by others–family, church, politicians, advertising executives. We emerged as a species a half-million years ago, during the Pleistocene, and ever since we have followed what Wilson calls “epigenetic rules,” which he defines as “innate operations in the sensory system of the brain. These are rules of thumb that allow organisms to find rapid solutions to problems encountered in the environment. They predispose individuals to view the world in a particular innate way and automatically to make certain choices as opposed to others.”

This is not to say that our genes explain every bit of human behavior. In many species, evolution can be cultural as well as biological, as Darwin himself realized and as modern scientists agree. Cultural beliefs and ideas–or what some call “memes,” the cultural counterpart to genes–pass from individual to individual or from group to group and, like genes, are selected for survival. Particularly in the case of Homo sapiens, that process of cultural evolution deserves at least an equal place alongside that of biological evolution.

The anthropologist Clifford Geertz has defined culture “as a set of control mechanisms–plans, recipes, rules, instructions (what computer engineers call ‘programs’)–for the governing of behavior.” Those control mechanisms can be a powerful means of survival–more rapid and flexible in their response to environmental change than genetic variation alone.

Human evolutionary theory thus rests on the concept of a “dual inheritance,” in which genes and cultures both are powerful determinants and each co-evolves with the other. We can distinguish cultural change from genetic change and can see how cultural change can follow an independent path from genetic change, but no great or eternal chasm separates them. Over time, genes and cultures interact repeatedly, constraining or reinforcing each other, forming the dual inheritance that shapes the life ways of the human organism.

If ever there was a scientific theory that is fundamentally historical, that purports to explain change over time, it is evolution through natural selection and its corollary, humankind’s dual inheritance. Yet I have to admit that my fellow historians, teaching in history departments and professing to study that process of change, have been highly resistant to evolutionary theory. Why has that been generally so? Why have historians insisted on drawing a rigid boundary separating culture from nature? And what are the possibilities for overcoming this separation?

The good news is that the field of environmental history, which has been emerging over the past two or three decades, has successfully contested the old dualism that separates historians from the natural sciences. Environmental historians focus on the relations that humans have carried on with the rest of nature. They take for granted that humans are part of the natural world and that historians should make history more truthful by placing human life in that broader context. In contrast to social or political historians, environmental historians read books and articles on evolution and ecology and have been trying to bridge the gap separating them from the natural sciences.

So far, however, environmental historians have focused mainly on the human impact on nature–i.e., how humans have changed the land, exploited natural resources, and replaced the wilderness with cities. Let me give you an example that represents an effort to bring evolution into history but does so by emphasizing the growing human impact on evolution.

In 2003 Edmund Russell, a historian at the University of Virginia, published an important essay entitled “Evolutionary History.” He begins by noting that evolution can occur through artificial selection, like the breeding of dogs or cattle, as well as through natural selection. Darwin, Russell reminds us, began his book On the Origin of Species by comparing the two kinds of selection. He observed how breeders change their domesticated plants and animals and then concluded that nature does for the whole earth what the cattle breeder does with his livestock. Artificial selection led him to natural selection. However, Russell turns Darwin’s reasoning around to show that what the cattle breeder does for the farm, humans are doing for the whole earth–guiding evolution to suit their needs. The chasm between what is natural and what is artificial vanishes, and in Russell’s view the whole environment is becoming increasingly a product of human intervention.

But there are a few anomalies in this picture of an increasingly man-made planet. Many organisms are evolving in response to our human presence, but without any human control or management–far from it. Insects, for example, have evolved rapidly to withstand the barrage of modern pesticides. Deadly germs have evolved even in sterilized hospital rooms, defying all efforts to stamp them out. Cougars have learned how to wait for new kinds of prey along the jogger’s trail. This is not natural selection as Darwin understood it, nor is it similar to deliberate breeding or hybridization, which is to say, artificial selection. An insecticide-resistant mosquito or herbicide-resistant thistle is not the product of human intention in the way that a prize bull or a book of poetry is. In such cases organisms are self-evolving. They do so in an environment that humans have created but do not truly manage. Such organisms elude control and often pose a nuisance or a danger to their hosts.

Russell has offered us an important way to bring history and evolution together. However, I want to propose another way of thinking, one that regards human cultures not as completely independent forces changing the world, but as strategies that people develop in order to adjust to the natural world and exploit its resources. Instead of making nature a subset of culture, as Russell does, historians might see culture as a subset of nature. We can think of this approach, following the lead of biologists, as redefining culture as a mental response to opportunities or pressures posed by the natural environment. In other words, culture can be defined as a form of “adaptation.”

The word adaptation is as familiar to historians as it is to biologists. Historians often talk of cultures clashing and adapting to one another, mixing and merging through trade, immigration, and mass communications, or they talk about societies adapting to new technologies like the automobile or computer. More rarely, however, do they talk about people adapting to their natural environments. And this is a huge failing: historians have paid insufficient attention to evolutionary adaptation in general and in particular to the role that culture plays in adaptation to environment–adaptation to the capacity of soils to grow crops, the supply of water needed to sustain life, the vicissitudes of climate, the limits to growth and material consumption in a finite landscape.

The Oxford Dictionary of Science defines adaptation as “any change in the structure or functioning of an organism that makes it better suited to its environment.” After the word “organism,” we should add the phrase, “or society or culture.” In the case of biological organisms, adaptation occurs whenever a new shape of wing or beak allows a bird to fly better or crack more seeds than its rivals. Such change does not, of course, depend on intention or willpower; it proceeds blindly to fit organisms better to their environments, enabling them to use resources more efficiently. In the case of artifact-making organisms–the beaver building a lodge or the heron a nest–adaptation can mean modifying the environment to improve the organism’s chances of survival and those of her offspring. In the case of culture-making organisms like humans, adaptation can involve acquiring new information, learning new rules, and altering one’s behavior.

People have developed strategies to meet changes in climate, in energy sources, or in the diseases they confront. In some cases they have developed, through thoughtful observation, ways to avoid degrading or depleting their environment. They have learned how to become more resilient in the face of change.

But adaptation, even in nature, has never been perfect or sufficient. Before Darwin, naturalists like the Reverend William Paley, author of the 1802 religious classic Natural Theology, liked to talk about the marvelous fitness of plants and animals to their environments; a world that was perfectly harmonized and perfectly adapted showed, they believed, the handiwork of a rational God. They insisted that everything in the world is perfectly organized, that every creature has its assigned place. But Darwin’s theory of evolution overturned the notion that we live in the “best of all possible worlds.” Darwin, for all of his admiration of natural selection, forced us to begin paying attention to the reality and frequency of maladaptation.

After him, the science of adaptation could no longer claim to reveal a perfect world in which everything works for the best or nature always achieves the ideal solution to a problem. Nature cobbles together solutions from whatever material is available. When those solutions fail, the costs of maladaptation can be severe. Contrary to modern critics like Stephen Jay Gould and Richard Lewontin, the so-called “adaptationist program” in modern biology does not teach that we live in the best of all conceivable worlds. Nature shows us failure, impoverishment, dysfunction, and death as often as fitness, functionality, and good health. And this maladaptation is certainly evident when we examine human cultures through history.

No society, whether the mammoth hunters in ancient North America or the commuters in today’s suburbs, has ever reached a state of perfect fitness to its environment. True, some societies have managed to sustain themselves far longer than others. Historians should ask why that was so; why some endured over long stretches of time while others did not; why some societies created more effective environmental constraints than others; and how those rules changed as conditions changed. Historians, in other words, should join evolutionists to ask what both adaptation and maladaptation look like. We should be investigating how societies can become trapped by a maladapted cultural inheritance and even become extinct, leaving their remains behind in libraries and museums.

As young Darwin discovered in South America, the grasslands are a highly legible place to watch evolution, adaptation, and maladaptation at work. This is true in North America as much as in South America. As recently as the 19th century, North America’s grassland ecosystems stretched across almost a billion acres. At their maximum extent those grasslands swept from the Texas panhandle into Saskatchewan and extended westward from Ohio and Indiana to the Rockies, with significant outliers in the intermountain West. Grasses dominated that vast expanse because the environment was neither consistently humid nor consistently arid, because droughts were frequent, and because temperatures fluctuated dramatically.

The grasses survived by investing most of their growth underground in elaborate root structures that captured the moisture and held the soil in place. They never achieved a static equilibrium but spread and then shrank back, nearly died and then recovered. Today, because of human intervention, the original grasslands are much diminished: the tall-grass prairies of the Midwest have shrunk to a single-digit percentage of their extent in 1800; they have been replaced by corn and soybeans. At the same time the native short grasses of the High Plains have lost more than a third of their original extent.

The first historian of the North American grasslands was Walter Prescott Webb, author of The Great Plains (1931). No historian before him had made adaptation so important a theme. The dry, treeless plains, he argued, forced Americans to alter their way of life. They came out of a forest environment and forest history, going back far into the European past. They were unprepared for the ecological challenge of the prairies and plains. Eventually, through trial and error, they adapted, changing their weaponry, their fencing, their modes of transportation, their laws governing water rights, their farms and agricultural practices. A grassland version of what it meant to be American emerged, distinct from that of the eastern United States.

Nowadays, historians tend to dismiss Webb’s work as a case of “environmental determinism.” They are wrong to do so, because most of the adaptations Webb chose were impossible to dismiss: they were indeed examples of how nature determines what tools or techniques would work on the plains and what would not. On the other hand, Webb’s critics are right in the sense that his list of adaptations was limited to material culture–techniques and technologies–and did not include deeper levels of culture and attitude. Webb convincingly showed, for example, how barbed wire had to be invented where there was little wood for fencing, how cattle came to be herded on open range, and how windmills were needed to pump up groundwater where rainfall failed. But that kind of change did not necessarily indicate a change in the values and beliefs that settlers brought to the grasslands. Their non-material culture did not undergo any significant metamorphosis. The Texas rancher’s view of the world did not differ significantly from that of the Georgia cotton planter or the Massachusetts textile manufacturer.

That American culture did not adapt at any deeper level is a fact that a few other historians have had trouble explaining. Following closely on Webb’s heels, James Malin published in 1936 a study of what he called “adaptation of the agricultural system” to the grasslands; he therefore deserves credit, along with Webb, for trying to apply an evolutionary approach to history. But Malin was a champion of the plow. His idea of adaptation was hard to distinguish from conquest.

He had begun writing during the Dust Bowl years of the 1930s, when drought, high winds, exposed soils, and massive erosion turned the region into a world-class disaster. Conservationists, scientists, and government officials were pointing fingers at the plow as the main culprit. Farmers had not adapted, they said; rather, they had destroyed the vegetation that was well adapted to the plains environment and were now suffering the consequences. Contemporary scientists like John Weaver, Frederic Clements, and Paul Sears, along with many rueful ranchers and farmers, had concluded that the plowman was partly responsible for his plight. A committee appointed by President Franklin Roosevelt called for major changes, not merely in machinery or techniques, but in the plowman’s “attitudes of mind,” including his attitudes of environmental domination, economic individualism, and extreme risk-taking for the sake of profit. These attitudes, the committee argued, were maladaptive.

Observers have continued to ask whether white settlers who turned the North American prairies into plowed fields of wheat and corn knew what they were doing. Did they know what they were undoing? Has the culture associated with plains farming ever truly respected or adapted to nature to the point of becoming sustainable?

According to the historian Geoff Cunfer, the answer is yes. His recent prize-winning book On the Great Plains marks an impressive milestone in evolutionary history. Cunfer makes far more sophisticated use of census data, plant ecology, and soil chemistry than Malin did to provide a fuller history, but he ends up with the same shaky conclusion: Anglo agriculture on the plains, he argues, evolved, after an initial period of maladaptation, to become more adaptive and sustainable. “Farmers quickly learned,” he writes, “which land could support crops and which would serve only as pasture for cattle.” Already by the 1930s, he believes, the region’s agriculture had reached a state of evolutionary fitness, and only an unforeseeable change in the weather could disrupt that adaptation. The Dust Bowl was, he goes on, “a temporary disruption in a stable system.”

Such an upbeat assessment, however, cannot be reconciled with the reality of economic decline and persistent vulnerability all over the region. It pays little attention to how farmers have long dismissed the value of the native grasslands as buffers against disaster, to how they continue to take risks in a competitive economy, or to how they put short-term profit over long-term stability.

If farmers had learned how to listen to nature, then why did her voice prove less audible whenever crop prices rose substantially and promised quick and easy returns? In such boom times farmers were easily convinced by the financial and manufacturing sectors to buy more machinery, use more chemicals, put more acres under the plow, or use their lands more intensively. Over and over during the 20th century, farmers chose temporary panaceas over new “attitudes of mind.” For example, in the decades that followed the “dirty ’30s” they put their hopes in the miracle of deep-well irrigation, purchasing powerful water pumps to tap the immense Ogallala Aquifer in order to irrigate fields and free themselves from the persistent threat of drought. This “man-made rain” earned billions of dollars by irrigating crops fed to cattle in confined animal feedlot operations. But it also led to the breaking out of marginal lands with unstable soils and to keeping lands in production that should have reverted to natural vegetation. And finally, of course, that strategy exposed the irrigators to a new potential catastrophe: the eventual depletion of the groundwater, leading to economic collapse.

The voice of government, like the voice of credit and industry, has often been a force working against, not in favor of, adaptation. Take, for example, federal subsidies and disaster assistance, which have become a perennial prop to the Great Plains economy. Federal assistance to agriculture has shifted large amounts of cash from urban taxpayers to the region’s farms while encouraging a mass-production, factory mentality, often in defiance of environmental realities. It was a government official who, in a moment of exuberance, proclaimed that through deep-well irrigation farmers could achieve a “climate-free agriculture” on the plains. Not only was it a false promise, it was potentially a dangerous one. In an era of global warming and long-term desiccation of the plains, such wishful thinking might prove to have severe consequences.

The economic culture of the Great Plains is not indigenous to the region. Market or capitalist economics first emerged on the other side of the Atlantic Ocean more than three centuries ago, long before Europeans made it to the interior of North America. Since then those beliefs have traveled far and wide like the invasive dandelion, until they have established themselves in every corner of the region.

The full story of the origin of that economic culture, which has come to dominate American agriculture, is more than we can explore today. To my knowledge, it has never been explored fully as a form of evolutionary adaptation by economists, historians, or sociologists. But let us simply note that the rise of market culture coincided with the “Age of Discovery,” the period from about 1500 on when ambitious navigators like Christopher Columbus, Ferdinand Magellan, and Francis Drake, through their voyages to the New World and the Pacific, stimulated Europeans of all classes to dream of vast natural resources lying on the other side of the oceans, resources far more abundant than those of their depleted, overtaxed home environments. Capitalists and many others began to ask how those resources might be possessed and exploited.

That discovery of an entirely “new” hemisphere lying on the other side of the Atlantic Ocean, not to mention the hitherto unknown or underappreciated world of the Pacific and Asia, was one of the most extraordinary events in human history. It represents, to borrow from the language of invertebrate paleontologists, a moment of punctuated equilibrium, when evolution abruptly picks up new energy and goes forward at a more rapid pace, after a long period of relative stasis. As in biology, so in culture: A new global environment and an influx of natural resources challenged the fairly stable body of European culture, which had long been characterized by slow, gradual change.

Evolutionists have shown how biological traits like the scales of reptiles or the hemispherical eyeballs of fish can persist more or less unaltered for very long periods. On average, mammalian species survive for a million years, clams for 10 million. In contrast, human cultures can rise and fall much more rapidly.

The life span of a cultural innovation may be less than a year, but market culture has survived much longer than that. No passing fashion, it has been around since what one historian has called “the long 16th century.” From that period down to the present, market culture has changed its phenotype a great deal as it has encountered different societies and ecosystems. It has thrived in all kinds of environments and seems to be in no immediate danger of extinction. But its stunning success in cultural evolution and geographic diffusion neither erases its past failures nor guarantees it a permanent future.

The adaptive advantages of market culture lie mainly in its ability to mobilize capital and labor quickly and efficiently in order to seize resources in distant lands and make them available to consumers living far from the site of extraction. But market culture looks more successful as an adaptation at the global level than at the local level. While it was spreading rapidly across continents to discover and exploit resources, it was often destroying local ecosystems, depleting soils, forests, and minerals, and piling up wastes–a strategy that can work only until every locale has been appropriated and exploited.

The maladaptive aspects of this economic culture need to be examined as carefully as its triumphs. Historians should ask whether market culture has produced in any place a sustainable way of life or whether it has typically led to land depletion and land degradation, pollution of air and water, population instability, derelict rural or industrial districts, dying towns, and abandoned farmsteads. Could this culture have survived as long as it has without the windfall of the New World’s rich ecosystems and geological deposits?

In biology the evolutionist tries to explain change over time by constructing what the evolutionary biologist Ernst Mayr calls a “historical narrative.” Such a narrative addresses questions like these: Why did this trait appear in an organism when it did, and what function did it serve? How did it reshape the whole organism and help it reproduce itself? When did the trait decline and disappear? These are good questions for historians as well as biologists to pursue. They should lead us to create narratives around changes in soil or climate conditions and in the accessibility of resources, and to relate those changes to technological innovation, the rules people make up and follow, and the moral ideals they invent to guide their relations with the natural world.

Telling such stories would require that historians follow the natural sciences by taking the environment more seriously as a force in human life. Historians would need to acknowledge, with the aid of evolutionary psychology, the reality of a human nature that evolves through time. At the same time they would need to think about the role of cultural beliefs and rules as a quasi-independent but never isolated force on the planet–a force that never functions in an ecological void, a force that can have a devastating effect on other forms of life and can enhance or threaten our survival.

The human mind is remarkable for finding multiple pathways through the natural world, but those paths are always contingent on what came before and what is happening now to the planet. Historians need to acknowledge the importance of the environment and to embrace the theory and worldview of evolution for the dazzling light it sheds on the origins, development, and fate of humanity.

Donald Worster is an emeritus professor of history at the University of Kansas and the author, most recently, of Shrinking the Earth and A Passion for Nature: The Life of John Muir.
