The Fear Factor
Long-held predictions of economic chaos as baby boomers grow old are based on formulas that are just plain wrong
For the past half century, the biggest demographic changes in the United States have often reflected the stages of life for the 76 million baby boomers. The demographer Dowell Myers has described their passage as “a rolling tsunami that first rocked the schools, then flooded the labor market, and next drove up house prices and triggered waves of gentrification.” When the first wave of boomers turned 65 in 2011, the rest of the country started paying attention to what demographers had been warning about for a generation: the tsunami was turning gray and driving the United States toward fundamental shifts in social policy.
Demographers use a measure called the dependency ratio to prove their point. They add the number of Americans who are regarded as not in the workforce (traditionally, those 14 and younger and 65 and older) and divide that total by the number of people who are regarded as in the workforce (those 15 to 64). Then they multiply by 100. An increase in the ratio is understood to mean a growing burden on each person in the workforce to support the economically dependent.
The calculation is often modified to yield what’s known as the old-age dependency ratio, comparing the numbers of Americans 65 and older and those of working age. For the past four decades, that ratio has been relatively steady, but between 2010 and 2030, it is projected to shoot up from 22 to 35, an increase of close to 60 percent. The U.S. government takes this ratio and projected jump seriously, as the headline in a 2010 government press release indicated: “Aging Boomers Will Increase Dependency Ratio, Census Bureau Projects.”
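For readers who want to see the arithmetic, here is a minimal sketch in Python. The population counts in it are round numbers assumed purely for illustration, not Census data; only the projected ratios of 22 and 35 echo the figures cited above.

```python
# Sketch of the dependency-ratio arithmetic described above. The population
# counts are assumed, round figures for illustration only, not Census data.

def dependency_ratio(dependents_millions: float, working_age_millions: float) -> float:
    """Dependents per 100 people of working age."""
    return dependents_millions / working_age_millions * 100

# Old-age variant with assumed counts: roughly 40 million people 65 and older
# supported by roughly 185 million people of working age.
print(round(dependency_ratio(40, 185)))   # about 22

# The projected jump cited above, from 22 to 35, works out to:
print(round((35 - 22) / 22 * 100))        # 59, i.e., close to 60 percent
```

The point of the exercise is only to show how mechanical the measure is: it counts heads on either side of a fixed age line and nothing more.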
The addition of “old-age” as the modifier adds to the starkness of the dividing line between the dependent and the productive: it bluntly separates older Americans from everyone else and presents them as unable to care for themselves. With an objective formula, it seems to document that those who make up the gray tsunami are a large and growing liability, undermining the country’s assets. The old-age dependency ratio appears to justify the view of alarmists like former Federal Reserve Board Chairman Alan Greenspan, who testified before the Senate that this outsized group of aging Americans “makes our Social Security and Medicare programs unsustainable in the long run.”
The dependency ratio is only occasionally mentioned in debates about public policy, but its premise—that the growth in the ratio indicates how greatly baby boomers will burden the rest of society—is shaping some of the most consequential debates in the United States today: about the size of the federal government, about how government expenditures should be allocated, and about the nation’s financial viability in the next generation.
A demographic tool has become an economic one, treating a demographic challenge as both an economic crisis and a basis for pessimism justifying drastic reductions in bedrock government programs, including those supporting children and the poor. Even at state and local levels, the aging boomer demographic is repeatedly blamed for our economic difficulties. That is a lamentable mistake. The United States has serious economic problems, and the aging population poses significant challenges, but those challenges are not the main cause of the problems. They should not be treated that way.
The dependency ratio does not justify the solutions that the alarmists propose. Just as important, perhaps, it fails to account for the striking benefits accruing from the dramatic increase in life expectancy in the United States during the 20th century—what the MacArthur Foundation’s Research Network on an Aging Society called “one of the greatest cultural and scientific advances in our history.”
Although the concept of the dependency ratio dates back to Adam Smith’s 1776 The Wealth of Nations, remarkably, there seems to be no published history of the concept as it is used today. The traditional ratio’s inclusion of young people over the age of 14 in the productive segment suggests that it was developed in the 19th century, when America’s farm economy still required their help. But as late as 1933, in Recent Social Trends, a vast and definitive statistical portrait of the United States in that era, the dependency ratio was not referred to or used.
By the 1940s, however, the ratio became a regular tool among the different measures the government employed to describe the state of the nation and to project what it would look like in the future. Although no group was spared in the Great Depression, older Americans were especially hurt. Social Security was the first national program to ensure that Americans 65 and older would have at least a minimum of income to pay for food, clothing, and shelter. That program, which became law in 1935, started to make monthly payments in 1940. Social Security’s large scale required the government to anticipate how many Americans it would cover, which made the dependency ratio, and the old-age ratio in particular, an important demographic measure.
The U.S. government has been calculating what it calls the economic dependency ratio, which counts in the productive segment people who are still working after the presumed retirement age of 65. In 2010, that group included 22 percent of men 65 and older and 14 percent of women; by 2020, those figures are projected to reach 27 percent of men and 19 percent of women. Even that ratio is crude, but it is less crude and more accurate than the traditional old-age dependency ratio.
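The adjustment is simple to express. In the sketch below, the population counts are assumed round numbers used only for illustration; the 22 percent and 14 percent participation figures for 2010 are the ones quoted above.

```python
# Sketch of the economic dependency ratio: people over 65 who are still
# working are moved from the dependent side to the productive side.
# Population counts (in millions) are assumed for illustration; the 22% and
# 14% participation rates for 2010 come from the passage above.

men_65_plus = 17.0        # assumed, millions
women_65_plus = 23.0      # assumed, millions
working_age = 185.0       # assumed, millions, ages 15 to 64

working_seniors = 0.22 * men_65_plus + 0.14 * women_65_plus
dependent_seniors = (men_65_plus + women_65_plus) - working_seniors

old_age_ratio = (men_65_plus + women_65_plus) / working_age * 100
economic_ratio = dependent_seniors / (working_age + working_seniors) * 100

print(f"traditional old-age ratio: {old_age_ratio:.1f}")   # about 22
print(f"economic dependency ratio: {economic_ratio:.1f}")  # noticeably lower
```

Even this small correction pulls the ratio down by several points, which is the sense in which the economic version is less crude.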
Still, the revised calculation does not address fundamental concerns that scholars have raised in the recent past. They have criticized the dependency ratio for its blatant oversimplification of reality and for its ideological bias. In 1986, sociologists Toni M. Calasanti and Alessandro Bonanno described the bias as the “social construction of the elderly’s obsolescence.” They meant that our society has chosen to regard older people as a burden when age alone does not make them so. Age is accompanied by decline, but different people decline in different ways. A person’s race, ethnicity, wealth, and level of education are often better predictors of that decline than age.
Sociologist Donald E. Gibson wrote in 1989 that “it has become commonplace to predict or assume that demographic trends will lead to an economic crisis in the third or fourth decade of the next century.” He went on: “All current versions of the dependency ratio, however, share one important deficiency.” They fail to take account of economic productivity, and how improvements in productivity lead to progress through increases in income and in “the country’s capacity to support people.”
Projections in the 1980s of an aging crisis rested on the assumption that, in the subsequent half century, the American economy would perform poorly and produce little improvement in real income. In fact, between then and now, real income has grown somewhat more than was predicted, with dramatically unequal distribution. (The rise in real income for nine out of 10 Americans has been slight. The rise for the top 10 percent has been much higher—and higher still for the top one percent.) Gibson’s point was more fundamental, however. “Very often this assumption is not explicated,” he wrote. “Not to do so is to present an economic problem as a demographic problem.”
In a similar argument last year, economists Ronald Lee at Berkeley and Andrew Mason at the University of Hawaii criticized the dependency ratio for being “incomplete and misleading” and for exaggerating “the adverse impact on the macro-economy of population aging,” because it does not reflect that in the United States the elderly “rely heavily” on income from their own private wealth to support them—in economic terms, to pay for their consumption.
In general, Lee and Mason reported, “Net transfers from the working age population, mostly through the public sector in the form of Social Security benefits, Medicare, and Medicaid, make up only about 40 percent or less of funding for consumption.” As a result, the aging of the U.S. population will mean an increase in the number of older people who are only partially dependent on the government, not wholly dependent, as the dependency ratio assumes.
Where the ratio assumes dependence, Lee and Mason assume that the increasing number of elderly boomers in the United States will arrive at old age having accumulated, on average, assets similar in amount to those of the currently elderly. More older people per capita will mean more assets per capita. These assets will generate national income, boost productivity, and contribute to a future that warrants a more optimistic outlook than the pessimistic one the ratio is used to justify.
In 2010, a respected international team published a study finding that old age generally arrives later than the dependency ratio assumes—if old age is defined as the point at which older people need permanent care, that is, when they are disabled. The demographers Warren C. Sanderson and Sergei Scherbov wrote in Science magazine, “Alternative measures that account for life-expectancy changes”—improvements in health and longevity—“show slower rates of aging than their conventional counterparts,” based on “fixed chronological ages.”
They wrote that chronological age is less useful than life expectancies in predicting national health costs, because “most of those costs occur in the last few years of life.” Sanderson and Scherbov developed a measure they called the adult disability dependency ratio, defined as the number of adults 20 and over with disabilities, divided by the number of adults 20 and over without them. In the United States, this measure will likely remain flat for the next generation, meaning that the cost of caring for the disabled is not likely to skyrocket as a result of a major increase in the number of disabled people.
John Shoven, a Stanford economist, takes that idea a step further: in a scholarly paper called “New Age Thinking,” he argues that age should be defined differently from the universal convention of years since birth. “The measurement of age with different measures is not like choosing between measuring temperature on a Fahrenheit or Centigrade scale,” he warned. The reason to change how age is measured is that the connection between the universal definition of age and the alternatives he proposes is constantly changing. Because of advances in nutrition, sanitation, and other factors, as well as health care, someone who has lived a long time is no longer as old as his or her numerical age once indicated.
A man born in 1900 was expected to live until he was 51 ½ and had less than a 50 percent chance of living until he reached 65. A man born in 2000 is expected to live until he is 80 and has an 86 percent chance of reaching 65. That dramatic advance in longevity indicates that knowing how many years a person has been alive tells only so much about the person’s risk of dying.
Shoven proposes that instead of measuring age backward, as in years since birth, we measure it forward, as in years until projected death. One option is to measure age by mortality risk. A 51-year-old man in 1970 had the same mortality risk (a one percent chance that he would die) as a 58-year-old man in 2000: in one generation, longevity advanced by seven years for that level of risk. Another option is to measure age by remaining life expectancy, a more accessible measure because it is computed in years rather than as a percentage. In 1900, a man who reached 65 had a remaining life expectancy of about 13 years. In 2000, a man who reached 65 had a life expectancy of about 21 years.
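A small sketch may make the forward-looking measure concrete. The two mini mortality tables below are invented placeholders, tuned only so that a one percent annual risk falls at age 51 in the 1970 table and at age 58 in the 2000 table, matching the example above; real actuarial life tables are far more detailed.

```python
# Sketch of "measuring age forward" by mortality risk. The two mini mortality
# tables are invented placeholders, tuned so that a 1% annual risk falls at
# age 51 in the 1970 table and age 58 in the 2000 table, as in the example
# in the text. Real actuarial life tables are far more detailed.

def age_at_risk(table: dict[int, float], threshold: float) -> int:
    """First listed age at which annual mortality risk reaches the threshold."""
    for age in sorted(table):
        if table[age] >= threshold:
            return age
    raise ValueError("threshold never reached in this table")

men_1970 = {45: 0.006, 48: 0.008, 51: 0.010, 54: 0.013, 58: 0.017}  # assumed
men_2000 = {45: 0.004, 48: 0.005, 51: 0.006, 54: 0.008, 58: 0.010}  # assumed

print(age_at_risk(men_1970, 0.01))  # 51
print(age_at_risk(men_2000, 0.01))  # 58: the same "mortality-risk age"
```

Measured this way, the 58-year-old of 2000 and the 51-year-old of 1970 are the same age.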
Measuring backward yields starkly different results from measuring forward. “Consider two alternative definitions of who is elderly in the population,” Shoven writes, “those who are currently 65 or older and those who have a mortality rate of 1.5 percent or worse.” In 2007, when he wrote this paper, the two definitions were equal: the average mortality rate was 1.5 percent or worse for 65-year-olds. According to the U.S. Census, the population of those who are 65 or older will increase from about 12.5 percent of the population today to about 20.5 percent in 2050. But “the percent of the population with mortality risks higher than 1.5 percent (currently also 12.5 percent of the population) never gets above 16.5 percent,” because of what James Fries of the Stanford School of Medicine called “the compression of morbidity”—the tendency of illnesses to occur during a short period before death if the first serious illness can be postponed. That number “is projected to be just slightly below 15 percent and declining by 2050.”
By the conventional measure of years since birth, the population considered elderly is expected to grow by 64 percent. By Shoven’s measure, on the other hand, it is expected to grow by just 32 percent. “The point,” he says, “is the great aging of our society is partly a straightforward consequence of how we measure age.”
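The two growth figures follow directly from the population shares; here is the arithmetic, using only the percentages quoted above, with 16.5 percent as the peak under Shoven’s definition.

```python
# Quick check of the two growth figures, using only the shares quoted above:
# 12.5% of the population today, 20.5% in 2050 by the years-since-birth
# definition, and a peak of about 16.5% under Shoven's mortality-risk definition.

def percent_growth(start: float, end: float) -> float:
    return (end - start) / start * 100

print(percent_growth(12.5, 20.5))   # 64.0 -> growth by the conventional measure
print(percent_growth(12.5, 16.5))   # 32.0 -> growth by Shoven's measure
```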
To Laura Carstensen, a psychologist who directs the Stanford Center on Longevity, the striking advance in lifespan requires “us to answer a uniquely twenty-first-century question: What are we going to do with super-sized lives?” In her book A Long Bright Future, she envisions a transformation in American culture and society that would “expand youth and middle age” as well as old age, in “a new model for longer life” that would “harness the best of each stage at its natural peak.”
She proposes that young adults should ease into the work force, “working fewer hours during the years that they’re caring for young children, completing their educations, and trying to find the right careers.” Around 40, full-time work life would begin, when people “have developed the emotional stability that guides them as leaders.” Older workers, rather than “vaulting into full retirement on their sixty-fifth birthdays,” would continue to work for more years but for fewer hours, and retirement “could be the pinnacle of life, rather than its ‘leftovers.’ ”
Carstensen’s proposal rests on findings in her work about the capabilities of older workers. She learned that they are generally more stable emotionally than younger workers and better at dealing with stress, and that while younger workers, by and large, pick up new information faster, older workers often have wider knowledge and more expertise. One important study by a group at the Rush University Medical Center casts doubt even on the cognitive advantage of younger workers. The decline in cognitive processing speed found in older workers turns out to be negligible when people who later developed Alzheimer’s disease are removed from the group studied. That excluded group amounts to one out of every nine people who are 65 and over.
Carstensen and others are building on the work of the late Robert N. Butler, a psychiatrist whose biographer described him as a “visionary of healthy aging.” The founding director of the National Institute on Aging at the National Institutes of Health, Butler believed that the extension of American lives—especially the extension of the healthy years—requires new thinking about some of America’s basic institutions. “Many of our economic, political, ethical, health, and other institutions, such as education and work life, have been rendered obsolete by the added years of life for so many citizens,” Butler wrote in his 2008 book, The Longevity Revolution.
Butler was a realist about the discrimination that older Americans can face in addition to declines in physical capability, health, and cognitive ability. He coined the term ageism for this form of discrimination and catalogued how it can manifest itself in problems finding appropriate work, housing, transportation, and satisfying other basic needs. But Butler was an optimist, convinced that many healthy older Americans represent not a liability but a great asset of experience, skill, and drive that the country should learn how to exploit.
In a nation whose motto is E pluribus unum, a fundamental disagreement about social policy in recent decades has been over how policymakers should treat the relations between generations. They could reinforce the mutual support called for in the motto, emphasizing the value of older Americans working on behalf of children in education, for example, and of younger Americans supporting older ones who need help. Or policymakers could strive to ease the allegedly large conflict between generations over the allocation of scarce resources. The shorthand for this difference of opinion in our splintered political culture is “warfare” versus “interdependence” between the boomer generation and the generations that follow. Our emphasis should be on generational interdependence.
Social Security is at the heart of this debate. The country’s largest program, it represents a quarter of the federal budget: in 2012, 56.9 million people each month received a payment from it—one out of every six Americans and one out of every four households. They included 36.9 million retired workers, 8.8 million disabled workers, 4.3 million widows and widowers, 2.4 million spouses, one million adults disabled since childhood, and 3.4 million children.
The program is also one of the country’s most successful: in the past half century, it has played a vital role in reducing the poverty rate for Americans who are 65 and older from 43.6 percent to 8.7 percent. It has done so by being deliberately progressive, replacing a higher percentage of income for someone with low earnings (56 percent of $20,000) than for someone with high earnings (28 percent of $113,700).
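That progressivity comes from a benefit formula that credits lower slices of a worker’s average earnings at higher rates. The sketch below captures the general shape of such a formula; the 90, 32, and 15 percent credit rates and the dollar “bend points” are approximate 2014 values, and dividing annual earnings by 12 to stand in for the career-average monthly figure is a simplification, so the replacement rates it prints only roughly match the ones quoted above.

```python
# Sketch of a bend-point benefit formula of the kind that makes Social
# Security progressive. The rates and bend points are approximate 2014
# values, and using annual earnings divided by 12 as the career-average
# monthly figure is a simplification for illustration only.

BEND_POINTS = (816, 4917)          # approximate 2014 monthly bend points
RATES = (0.90, 0.32, 0.15)         # credit rates for successive earnings slices

def monthly_benefit(avg_monthly_earnings: float) -> float:
    b1, b2 = BEND_POINTS
    slices = (min(avg_monthly_earnings, b1),
              max(0.0, min(avg_monthly_earnings, b2) - b1),
              max(0.0, avg_monthly_earnings - b2))
    return sum(rate * s for rate, s in zip(RATES, slices))

for annual in (20_000, 113_700):
    monthly = annual / 12
    benefit = monthly_benefit(monthly)
    print(f"${annual:,}: replaces about {benefit / monthly:.0%} of earnings")
# Lower earnings are replaced at a much higher rate than higher earnings.
```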
Given how ardently the program’s finances have been debated, the most remarkable thing about it may be the conspicuous conservatism of the government’s approach to financing it. The people who oversee the program, called trustees, have a legal duty to report annually to Congress about those finances for every year going out 75 years, based on projections of revenue supporting the program and of benefits it will pay out. Because the revenue has generally exceeded money paid out, the surplus in Social Security trust funds is now $2.8 trillion. It is projected to grow to $2.9 trillion by 2020. Most of the revenue to support Social Security comes from taxes (87 percent), but some comes from interest on those reserves (13 percent).
In 2021, the trustees project, Social Security benefits will exceed tax revenues plus interest income and the program will start to draw down its reserves. In 2033, when the last boomers will be turning 69 and the number of Americans on Social Security is projected to double, the reserves are projected to be depleted and tax revenues are projected to cover only 77 percent of benefits due, unless there is a change in revenues or benefits.
For a generation, critics on a spectrum from neoliberals to conservatives have cited projections like these to argue that, unless it is overhauled, Social Security will create a crisis for the nation. Invoking the dependency ratio, they argue that as fewer people earning incomes in the U.S. economy pay Social Security taxes to support more recipients, productive generations will be increasingly supporting unproductive ones in an indefensible breach of generational equity, since each generation should provide for itself. They propose cutting benefits, raising the age when people qualify for Social Security, and instituting a variety of more aggressive reforms, such as making 401(k) and other personal savings accounts part of the Social Security system. Reining in Social Security has been integral to recommendations of bipartisan blue-ribbon commissions about how to balance the U.S. budget after a decade and a half of economic turmoil has badly unbalanced it.
There are other ways to solve the problem of financing Social Security. Robert M. Ball, who was commissioner of Social Security under Presidents Kennedy, Johnson, and Nixon, proposed that the program return to taxing 90 percent of American earnings, the level set in 1983 when Congress last significantly reformed Social Security. The program gives a “benefit credit” for every dollar of income on which a taxpayer pays the Social Security tax. But there is a limit on how much income is taxed because, as Ball wrote, “most people would find it inappropriate in a social insurance system to pay the very high benefits that would result from crediting million-dollar-plus salaries for benefit purposes.”
“At the same time,” Ball went on, “it would seem unfair—and fundamentally change the nature of Social Security—to require contributions without granting benefit credits.” Balancing the need for revenue against the need for fairness, the government sets a cut-off point each year for taxation of earnings based on the amount of average American earnings. In 2014, that number is $117,000. But since high-end earnings have grown faster than the average, today only about 83.5 percent of earnings are taxed. In Ball’s view, the system has been over-protective of high-end earners and “a major part of the Social Security shortfall is due to the fact that a higher and higher proportion of earnings is escaping Social Security taxation.” If 90 percent of earnings were taxed, the maximum would rise to about $217,000. In 2004, when he was 89, Ball wrote, “This proposal is not a new policy, but an old one restored.”
Despite the good sense of this and similar proposals that make clear how hyperbolic the warnings of financial doom are, the concept of generational equity, as the sociologists John B. Williamson and Diane M. Watts-Roy of Boston College have documented, has leaped beyond the Social Security debate to become a broadly popular one. It has markedly influenced news coverage of the government’s budget challenges and spread to discussion of conservation, ecology, waste reduction, and other issues.
But the fundamental standard of generational equity—that each generation should provide for itself—has never applied to Social Security. Ida May Fuller, the first beneficiary of the system, paid in only $24.75 before she started to receive benefits in 1940. Her first check, for $22.54, used up most of that contribution. By the time she died at age 100, she had received almost $23,000.
The gap between her contributions and her benefits is part of the “legacy debt,” as the economists Peter A. Diamond and Peter Orszag identified it: the difference between what almost all current and past Social Security beneficiaries have received and the lesser amount they have paid into the system through Social Security taxes. Diamond and Orszag wrote, “That is, if earlier cohorts had received only the benefits that could be financed by their contributions plus interest, the trust fund’s assets would be much greater.” They were referring to the Social Security reserve.
“The key issue,” they continued, “is how to finance this legacy debt across different generations, and across different people within generations.” They proposed doing that through “a legacy tax above the maximum taxable earnings base, so that very high earners contribute to financing the legacy debt in proportion to their full earnings” and through other means. The government has not imposed a separate legacy tax, so funds paid by others in Social Security taxes have covered the difference between the amount beneficiaries received and the lesser amount they paid into the system.
Social Security is an outstanding example of generational interdependence—of how one generation contributes to another. That is one kind of interdependence that is central to the American success story. A feisty reminder about this ethic came from Elizabeth Warren in 2011, when she was campaigning in Massachusetts for a seat in the U.S. Senate.
“You built a factory out there? Good for you,” she said. “But I want to be clear: you moved your goods to market on the roads the rest of us paid for; you hired workers the rest of us paid to educate; you were safe in your factory because of police forces and fire forces that the rest of us paid for. You didn’t have to worry that marauding bands would come and seize everything at your factory, and hire someone to protect against this, because of the work the rest of us did.” She continued: “Now look, you built a factory and it turned into something terrific, or a great idea? God bless. Keep a big hunk of it. But part of the underlying social contract is you take a hunk of that and pay forward for the next kid who comes along.”
Interdependence, intergenerational and otherwise, is the foundation of the nation’s most important statement of purpose: “We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defense, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.”
A lot of recent evidence demonstrates how much the generations depend on each other. In 2011, the Pew Research Center reported that one out of every 10 children in the United States was living with a grandparent—most often in the grandparent’s house, under the grandparent’s care. In 2012, 36 percent of millennials between the ages of 18 and 31 lived with their parents in their parents’ home, the highest percentage in four decades. Pew also reported that the percentage of the U.S. population living in family households with two or more adult generations has climbed to more than 16 percent, or 49 million Americans—up from 12 percent in 1980.
This mutual dependence has resulted from young people delaying marriage (by an average of about five years since 1970), from past waves of immigration and the habit of some immigrants of living in multigenerational households, and from baby boomers increasingly providing shelter and care to their parents. It has also been a means for families to cope with economic challenges such as, in recent years, unemployment and foreclosures. Relying on one another is how the United States has dealt with many national challenges, including the acute problem of poverty among older Americans, which Social Security has addressed since its inception.
As old age has turned into longevity for many Americans, that stage of life has fragmented into very different experiences. The division between the young-old and the old-old, proposed 40 years ago by the gerontologist Bernice Neugarten—between those enjoying good health and solid finances and those suffering what historian W. Andrew Achenbaum of the University of Houston called “the difficulties and decrements associated with old age”—has been further divided by gender, class, race, and geography. Mark R. Cullen, Clint Cummins, and Victor Fuchs of Stanford University found that 82 percent of white women could expect to live until 70, but only 54 percent of black men. They found that the probability of a white male living until 70 in Massachusetts is close to 80 percent, but for a white male in the southeastern ridge of Appalachia, only 55 percent.
Those who are better educated and better off financially generally live longer, and live longer in good health. Those who struggle economically and have less education do not. People in the first group are generally prepared for retirement. People in the second group are not. Of people 65 and older, nearly one in three gets almost all of his or her income from Social Security; nearly two in three get half or more of theirs. For people in that age group in the lowest income quartile, Social Security provides 85 percent of total income, and pensions and other assets only 5.2 percent. For people in the highest quartile, pensions and other assets provide 35.6 percent of total income and continued work 43.7 percent; Social Security is only 18.1 percent.
Those numbers reflect the marked gap in opportunity between this country’s wealthy or comfortable and virtually everyone else. They also reflect what the Yale political scientist Jacob Hacker calls the “great risk shift” of the past generation: “the massive long-term transfer of economic risk from broad structures of insurance—whether sponsored by the corporate sector or by government—onto the fragile balance sheets of American families.” It is a much more daunting problem than how to ensure that Social Security is well financed.
A generation ago, the American system of retirement security included Social Security, private pensions, and personal savings—often described as a three-legged stool. More than 60 percent of workers had a defined benefit plan—a real pension. Today, pensions are largely unavailable: the system depends on Social Security and private savings—and private savings are inadequate for many people and negligible for many more. The stool is now two- or even one-legged: wholly unstable.
Economic struggles lead many people to start taking Social Security before they qualify for so-called full benefits at 66; 45 percent of men and 50 percent of women claim Social Security benefits at 62, the earliest possible starting age. At 62, a beneficiary receives 75 percent of “full” benefits. Since a beneficiary gets an extra eight percent a year by waiting from 66 until 70 to start, people starting at 62 get only a little more than half what they would obtain if they waited until 70—which is really when full benefits begin.
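The comparison is simple arithmetic on the figures just cited: 75 percent of the full benefit at 62, plus an 8 percent delayed-retirement credit for each of the four years between 66 and 70.

```python
# Arithmetic behind the claiming-age comparison: a benefit claimed at 62 is
# 75% of the "full" benefit payable at 66, and each year of waiting between
# 66 and 70 adds 8% of the full benefit.

full_benefit = 1.00                      # benefit at the full retirement age of 66
claim_at_62 = 0.75 * full_benefit
claim_at_70 = full_benefit + 4 * 0.08    # four years of delayed-retirement credits

print(f"62 vs. 70: {claim_at_62 / claim_at_70:.0%}")   # about 57 percent
```

Roughly 57 percent: only a little more than half, as the comparison above says.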
Many experts believe it is essential to increase Social Security payments for people whose lifetime earnings have been moderate, low, or lower still. The dean of experts on the economics of health care and social insurance, Henry J. Aaron at the Brookings Institution, recently described U.S. social insurance as “parsimonious,” with Social Security benefits 30 to 40 percent lower than the average in other developed economies. The best prospect for all concerned, he explained in The American Prospect, would be overall economic growth that benefits the middle class and below as well as America’s wealthy. That would enable “people simultaneously to enjoy rising living standards and support social expenditures,” while lowering the tax rate needed to pay for Social Security.
The vision of Robert Butler and others about a new stage of opportunity for older Americans is by no means fully worked out, and these sobering realities will make it harder to realize the vision for the substantial majority of them. Still, that vision is supported by many compelling facts that come together to make a different picture of life for tens of millions of people in their 60s and older, compared to the situation older people found themselves in during the Great Depression. Matilda White Riley, a pioneering gerontologist, observed that the 20th century saw enormous change in human development and aging, but “no comparable revolution” in how American culture understands the roles of older people or in the country’s organizing structures to accommodate this major change.
Many more older people will realize that vision when the nation can turn their energy and capability into a well-directed force for good: working in literacy programs such as AARP Experience Corps, joining national and community service programs like AmeriCorps and Senior Corps, or starting new careers as nurses or teachers, often after going back to school.
Social change takes time, even when it is visibly underway. But change accelerates when outdated ways of thinking are put aside in favor of useful new ones. It is time to replace the dependency ratio, especially the old-age one, with a more accurate measure that takes account of how many of today’s older Americans remain productive—how many continue to “promote the general Welfare, and secure the Blessings of Liberty.” Baby boomers are imposing significant costs on this society, yes; but more than any generation their age before them, they are also conferring important benefits on “our Posterity,” with a great prospect of changing what the future means for every generation that follows.