Opioids and Paternalism
To help end the crisis, both doctors and patients need to find a new way to think about pain
Until recently, I worked as a physician at a primary care clinic one day a week in addition to my job as a science reporter for a newspaper. In two decades, I had only one patient taking opioids long term. She was in her 50s when I met her. She’d spent many years working in factories making cups, boxes, and other things, until they’d all closed down. She ended up at a commercial laundry, on her feet eight hours a day.
Of her many medical problems, the one that concerned her most was low-back pain. She eventually left her job and sought, and received, Social Security disability. By then she was taking Tylenol No. 3, a combination of acetaminophen and codeine, twice a day. Later, she graduated to Percocet—acetaminophen and oxycodone. When from time to time a new pain supervened, she would demand more medication but at my insistence always returned to the baseline dose.
At some point, I consulted an orthopedic surgeon to see whether she was a candidate for surgery. He said she wasn’t and strenuously advised that she stop taking the narcotics. “This is not something she should be on for the rest of her life,” he wrote. I then referred her to the hospital’s pain clinic. The anesthesiologist there sent her back after one visit with the message: “When I have a patient on a low-dose narcotic whose pain is reasonably well controlled, I call that success!” His comment rang true. Nevertheless, reared in medicine’s puritanical tradition, I wrote the prescriptions grudgingly until she died of a heart attack in her 70s.
Looking back, I was lucky. My experience was nothing like what doctors and the nation face now.
The proliferation of opioid use in the United States is called an epidemic, but it more resembles metastatic cancer. The malignant effects extend far beyond the 300,000 Americans who’ve died since 2000. Prescription opioids are creating a pharmaceutically damaged underclass, trapping millions of people in a culture of victimhood and economic dependence or, for the unlucky, a world of criminal behavior and lethal illicit drugs. At the same time, opioids are damaging the medical profession and its practitioners in ways that will take years to acknowledge and redress.
Perhaps once his more pressing problems are solved, our new president will take this one on. It’s a Donald Trump kind of problem—dystopian, apocalyptic, and disproportionately affecting the white, underemployed people who voted for him in great numbers. If he does go searching for solutions, he might look backward. What he’ll see is that American medicine needs a dose of old-fashioned paternalism, something he might also appreciate.
“Paternalism” is a hoary concept in medicine. It has nothing in particular to do with men and fathers except, of course, that it arose at a time when few physicians were women. An essay in the AMA Journal of Ethics several years ago defined it as “an action performed with the intent of promoting another’s good but occurring against the other’s will or without the other’s consent. In medicine, it refers to acts of authority by the physician in directing care and distribution of resources to patients.”
The notion that your doctor knows best, and will make decisions about your treatment with little attention to your desires, has been out of fashion for decades. There are many reasons. Among them are the erosion of authority and its replacement with an overconfident skepticism on the part of people not necessarily well informed; the influx of women into medicine and the unwillingness of some patients to view them as experts; the growth of medical journalism and its ability to disseminate information once confined to scientific journals; the arrival of the Internet, which provides medical “facts” often without statistical and physiological context; and the rise of “self-actualization,” which has turned personal choice into a civil religion.
In most ways, the decline of medical paternalism was a salutary trend, and an inevitable one. It took a few thousand years for Western medicine to learn that knowledge and choice aren’t conjoined twins. Patients may not know a lot, but that doesn’t diminish their right to have a say in their treatment. The trend, however, has gone too far.
This evolution is nowhere better illustrated than in how medicine views pain, a province where the patient now rules. Gone is the day when a doctor could ignore, question, or minimize a patient’s experience of pain. Gone, too, is a doctor’s reluctance to relieve it based on the belief that relief might cause greater hazard.
When all else fails, the search for pain relief usually ends at opioid analgesics. Opium is one of the few ancient drugs still in use, whether in natural (morphine and codeine), semisynthetic (oxycodone), or fully synthetic (fentanyl) form. William Osler, one of the most respected physicians at the turn of the 20th century, called morphine “God’s own medicine”—an acknowledgment of not only its venerability but also its perfection in solving a problem. But Osler, who represented the apotheosis of scientific medicine, overstated the case.
Of late there is evidence that misuse of opioids has peaked, at least by some measures. The Centers for Disease Control and Prevention (CDC) reported in the summer that the average per-capita dose of “morphine equivalents” fell between 2012 and 2015, although it is still three times higher than it was in 1999 and four times higher than it is in Europe now. Despite that trend, deaths from opioids continue to rise. Unlike in the past, when drug dealers diluted heroin with inert compounds to stretch the supply, today they cut the drug with cheap but more powerful synthetics, such as fentanyl and carfentanil, which can cause a person to stop breathing moments after use.
The opioid epidemic is entangled with economic and cultural forces, but it begins with pain. People—physicians especially—are going to have to think differently about pain for the epidemic to end.
For a very long time, physicians viewed pain with a puritanical eye. A patient’s sensitivity to pain, and demand for its immediate relief, revealed a childish lack of toughness. Tolerance of pain, and silence in its presence, earned a doctor’s respect. Although no doctor would say so, stoicism revealed a patient as worthy of being cured.
Invasive procedures, some of them gruesome, were made more tolerable to the practitioner, if not to the patient, by soft-pedaling the pain they were likely to cause. (“You may feel some discomfort now,” the doctor says as he aspirates bone marrow through a large-bore needle plunged into the wing of the pelvis.) Relieving pain with opiates was further complicated in doctors’ minds by the euphoria-producing aspect of the drugs. In an essay written a quarter-century ago, James S. Goodwin, now the chair of geriatric medicine at the University of Texas Medical Branch, wrote that undertreatment of pain was partly a consequence of “the wariness with which we view unbridled, unearned, and undeserved pleasure.”
About 30 years ago, pain began to raise medicine’s consciousness. It joined heart rate, respiratory rate, temperature, and blood pressure as the fifth vital sign. The medical profession concluded that there was an epidemic of undertreated pain, especially in surgical patients, people with cancer, and the dying. The first clinical practice guideline published by the U.S. Department of Health and Human Services Agency for Health Care Policy and Research (in 1992) was on the management of acute pain. About the same time, some hospitals started requiring doctors to read articles about pain control in order to get privileges to practice in their institutions.
Puritanism was out, and so was paternalism. When it came to pain, the patient’s experience was ascendant, its presence and severity not to be questioned. This situation led not only to more use of opioid drugs but also to friendlier modes of delivery: sustained-release capsules, drug-infused patches and lollipops, and devices allowing post-op patients to dial up their morphine drips by themselves.
Notably, however, the epidemic of undertreatment decried by professional societies had to do with acute pain and terminal pain, not chronic pain lasting for years. But just as the liberal use of opioids took hold, chronic pain was also on the rise, fueled by joint-grinding obesity, physical inactivity, diabetic neuropathy, sports injuries, and the graying of the population.
As important as those trends was a newfound respect for a phenomenon that had been held in contempt during the century-long rise of scientific medicine: the existence of symptoms without disease. The aches and pains that practitioners might once have attributed to hypochondria, psychosomatic causes, exaggeration, or fakery suddenly became worthy of attention because patients declared they were. This change in attitude happened nowhere more dramatically than in the strange phenomenon that came to be known as Gulf War syndrome.
In 1991, roughly 700,000 American men and women went to fight in Saudi Arabia, Kuwait, and Iraq. In the years that followed, hundreds of thousands complained of a long list of irksome but not life-threatening symptoms. No cause could be found, although the federal government has spent $505 million on more than 450 studies looking for one (and for an effective treatment). Contrary to what much of the public believes, no evidence exists that Gulf War syndrome was caused by war-theater exposures such as vaccines, pesticides, oil-fire smoke, artillery shells tipped with dense depleted uranium, or trace amounts of nerve gas released when an Iraqi munitions dump was blown up. Neither could the symptoms be “reliably ascribed to any known psychiatric disorder,” a panel convened by the National Academy of Sciences Institute of Medicine concluded in 2010.
Nevertheless, as of March 15, 2017, the Department of Veterans Affairs had received 144,655 claims for a “medically unexplained chronic multi-symptom illness” or an “undiagnosed illness” (the names for the condition that the department now prefers). About 27,000 veterans have been awarded service-connected disability for these conditions.
Pain wasn’t the main problem in Gulf War syndrome. Fuzzy thinking, poor concentration, flagging memory, and restless sleep were the bigger complaints. Nevertheless, Gulf War syndrome helped fertilize the soil for the chronic pain epidemic. Its appearance on the national stage confirmed that we had entered a new era in which subjective experience, rather than physiological evidence of damage, was the key to the diagnosis of disabling conditions. It made the sufferers’ explanations and demands paramount; to question their views was considered heartless. This situation almost certainly amplified symptoms for Gulf War veterans, as it has for chronic pain patients. Research reveals (and so, frankly, does ordinary life) that when you anticipate and mentally chronicle every unexpected or unpleasant sensation, you end up feeling bad much of the time.
The two-decade search for causes of Gulf War syndrome made more plausible explanations—ones that invoked cultural and historical circumstances rather than biology—off limits. It now seems likely that Gulf War syndrome was neither a physical nor a psychological illness, but a strange search for victimhood in a war that had relatively few American victims. (In the four days of ground combat, 148 American troops died.) Through mid-2015, the Iraq and Afghanistan wars that followed had produced 6,855 American dead and 1,645 amputees—and no similar outbreaks of undiagnosed disease.
As the Gulf War syndrome mystery deepened, disability in general was on the rise, both as a way Americans described themselves and as a way their government viewed them. This trend also laid the groundwork for the opioid epidemic. A 2013 survey of Americans’ health found that 22 percent of adults reported some sort of disability, up from 17 percent in 2000. Disability was especially prevalent, at about 30 percent, in the central Appalachian states—West Virginia, Kentucky, and Tennessee. Problems walking and climbing stairs were the most common complaints. A different study found that back pain and neck pain were the leading causes of disability in the United States, which, indirectly at least, explains the problems with mobility.
The number of government claims for disability was also on the rise. In 2015, 13 million American adults under age 65 collected disability payments from Social Security Disability Insurance or Supplemental Security Income—almost double the number in 1996.
That same year, 1996, welfare reform under the Clinton administration unwittingly boosted incentives for people to seek disability status. The reforms tightened welfare eligibility rules and tied benefits to looking for work. Welfare payments were also capped at five years. Disability benefits, on the other hand, can last a lifetime. Thus an incentive was created for a poor person in a high-unemployment region to be labeled as disabled. The result was what social scientists termed the “medicalization of poverty.”
Today, 11 percent of adults in Kentucky and 12 percent of adults in West Virginia get disability checks, compared with six percent nationwide. In 2010, in 14 percent of the nation’s counties, the proportion of working-age adults collecting disability was twice the national average or more. Most of those counties were in Appalachia and the southeastern United States, the regions with the highest prevalence of disability on general health surveys. Kentucky had 13 counties in which adults collected disability at five times the national rate.
Disability was no longer the province of blinded eyes, broken backs, and lost limbs. Increasingly, it was awarded for conditions that cause chronic pain, both physical and psychological. Diseases of the musculoskeletal system and connective tissue accounted for 23 percent of disability awards in 1996 but 36 percent in 2015. (The prevalence of back pain in the general population was stable over that period.) Today, only one in 25 disabled workers receives benefits because of an actual injury, while three times as many attribute their disability to a mood disorder, such as depression.
However, pain alone isn’t enough to get you a disability check. The pain must come from an “underlying disease” judged to be disabling. One way to affirm that a disease is disabling is to treat its symptom—pain—with the strongest drugs available. For musculoskeletal system and connective tissue diseases, that often means opioids. People getting disability checks therefore have a perverse incentive to take opioids forever. (A person’s disability is periodically reviewed, and about eight percent of people are dropped from the rolls each year, although more are added each year, too.)
This domino effect of poverty leading to disability to chronic pain to opioids to addiction and overdose isn’t theoretical. It’s happening.
Recently, researchers at the Dartmouth Institute for Health Policy and Clinical Practice examined the experience of adults under age 65 who received Medicare and disability payments from 2006 through 2012. Just under half of the people in that group filled at least one opioid prescription a year. Among those filling six or more prescriptions a year, 94 percent had a “musculoskeletal condition.” The overdose death rate in this population was 10 times the national rate. Incredibly, in 2008, “nearly 1 in 4 deaths from prescription-opioid overdose” in the United States occurred in someone on disability, the researchers wrote.
“The medicalization of poverty leads to new risks, including the iatrogenic [medically induced] side effects of the same pharmaceuticals they must take to qualify for income support,” sociologists Helena Hansen, Philippe Bourgois, and Ernest Drucker wrote in 2013. Medical anthropologist Nicholas B. King of McGill University was more sweeping in the Annals of Internal Medicine last year: “We suggest that welfare reform and the medicalization of social support through disability insurance programs may have contributed to increased opioid-related mortality since the mid-1990s.”
Acute pain is caused by torn, cut, compressed, or oxygen-starved tissue. It goes away, almost always, as the damage heals. Chronic pain is different. It arises from tissue that’s no longer healing. Chronic pain is hard to study, although functional MRI and PET imaging shows that, like acute pain, it too causes changes in the spinal cord and brain. How that disordered neurophysiology is affected by environmental cues, thoughts, and social relationships is less clear, and the subject of intense study.
What’s not surprising is that both types of pain have been treated the same way—chiefly with nonsteroidal anti-inflammatories and opioids, both of which work most of the time. Unfortunately, chronic pain rarely disappears, so the treatment continues, as does the false equivalency of the two types of pain. “It is fallacious to treat chronic pain as if it was acute pain; nature has already demonstrated that she is not capable of healing the injury or disease,” neurosurgeon John D. Loeser wrote a decade ago.
Only recently has the medical profession awakened to how little it knows about treating chronic pain. A review conducted in 2015 by the federal Agency for Healthcare Research and Quality found no studies of pain relief, quality of life, or addiction risk comparing opioids with nonopioids (or placebos) for pain lasting more than a year. Most studies of the drugs, in fact, lasted less than six weeks. It’s inconceivable that another class of pharmaceuticals could have taken hold so widely with so little study. At the time of the review, the annual number of opioid prescriptions was almost equal to the number of adults in the U.S. population.
But perhaps we shouldn’t be surprised. In contemporary medical practice, doctors find it increasingly hard to refuse painkillers to people who say they are in pain.
Of course, some prescribing of opioids may simply be the practice of substandard medicine. Last year, reporters from The Washington Post visited three long-term opioid users—one each in Virginia, West Virginia, and Pennsylvania—and made a short video about them. What they found is revealing. One man took opioids for back pain (and had stopped). One woman took them for gout, and another woman took them for rheumatoid arthritis. Those are diseases for which opioids aren’t recommended. Characterized by intense inflammation, they should be treated with anti-inflammatory drugs (such as prednisone or ibuprofen) or preventive drugs (allopurinol for gout, and a long list of “disease-modifying” agents for rheumatoid arthritis). To say whether the care was inappropriate is impossible; clinical details weren’t given. But there’s at least a hint that these two women should never have been on opioids in the first place.
So, are physicians responsible for the epidemic?
“Yes, along with the pharmaceutical companies,” is the usual answer, and there’s considerable evidence to support it.
A national survey conducted by The Washington Post and the Kaiser Family Foundation in 2016 found that nearly all long-term users of opioid pills began with a prescription from a doctor. Chronic pain was a more common reason (at 44 percent) for those initial prescriptions than postsurgery pain and injury (both 25 percent).
An unusually interesting study, published this year in The New England Journal of Medicine, quantified just how risky a doctor with a prescription pad can be. The researchers sampled 380,000 people with Medicare drug plans who’d gotten a prescription for an opioid after a visit to the emergency room. The treating physicians were ranked by how frequently they prescribed opioids and by the strength of the doses they wrote. The 25 percent of doctors who were the “highest-intensity” prescribers wrote opioid prescriptions for 24 percent of the patients they saw, while the “lowest-intensity” quarter did so for only seven percent of theirs. A year after the ER visit, people who’d been treated by a liberal prescriber were 30 percent more likely to be long-term opioid users than people treated by a conservative prescriber. (“Long-term use” was defined as 180 days or more.) Disabled people, people with depression, and southerners were the most likely to become long-term users.
The researchers made another curious and important observation. They reasoned that patients whose pain was inadequately treated would return to the ER sometime during the month after their first visit. What the study found, however, was that patients treated by parsimonious prescribers were no more likely to return than the patients of the liberal prescribers. Whether patients got opioids or not didn’t seem to affect how well they tolerated their pain.
(That finding brought to mind something an internist friend of mine said recently: “No matter where you draw the line for how much you’re willing to prescribe, there always seems to be the same number of people who think it’s adequate as people who think it’s not enough.”)
Nevertheless, doctors aren’t the only problem. Patients have a lot to answer for, too. People rarely require opioids for pain control for more than a few days. (Burns, pancreatitis, severe fractures, major trauma, and joint replacement are some of the exceptions.) Physicians, however, rarely prescribe just three or four pills. Instead, generous prescriptions, sometimes with refills, are the rule. This practice telegraphs the physician’s sympathy and keeps the patient from returning to ask for more.
The trouble is, generous prescriptions provide an opportunity for patients to experiment with drugs not easily obtained, and to indulge in a guilty pleasure with the rationalization that they’re just doing what the doctor ordered. However, someone who gets a prescription for 48 Percocets after arthroscopy and takes them all bears considerable responsibility for any habit that develops.
Yet that’s what tens of thousands of people are doing. A recent study of 36,000 elective-surgery patients who filled prescriptions for opioids (average number of pills: 53) found that six percent were still taking them three months later. That was true for people who’d had minor surgery (carpal tunnel release, hemorrhoid removal) and people who’d had major surgery (hysterectomy, colon resection). “Patients likely continue opioids for reasons other than intensity of surgical pain,” the authors noted dryly. Indeed, in the Washington Post-Kaiser Family Foundation survey, 20 percent of painkiller users said they took the drugs “for fun or to get high,” 14 percent “to deal with day-to-day stress,” and 10 percent “to relax.”
Long-term use of opioids doesn’t inevitably lead to addiction. How frequently it does is a matter of dispute. Estimates in the medical literature range from less than one percent to about 25 percent—an unhelpfully wide range. The essayist and epidemiologist Philip Alcabes argued in these pages last year that heroin is not as addictive as most people think. He cited a study from the 1970s showing that only five percent of soldiers who used heroin regularly while serving in Vietnam continued to use it when they returned to the United States. Though acknowledging opiates’ ability to create dependence, he said that “the user’s mindset and the social setting in which drugs are used take precedence over pharmacology in determining how we use all medications.”
Alcabes is correct that environmental cues and social context are important in addiction. GIs were returning to a country lacking the myriad “classical conditioning” (Pavlovian) cues associated with heroin use in Vietnam, allowing them to quit the drug easily. That rare natural experiment, however, shouldn’t reassure anyone today. Quite the opposite. For many of America’s opioid users, the environmental cues—chronic pain, joblessness, boredom, domestic spaces cramped by poverty and dilapidation, a sparse landscape—are things not likely to change. As a consequence, they reinforce the habit-forming potential built into the drugs’ chemistry.
That opioids are quickly and deeply addictive really doesn’t need to be argued. The evidence is everywhere. But even if addiction as formally defined doesn’t result, long-term users are guaranteed to develop physical dependence. They will have unpleasant, sometimes harrowing, withdrawal if they stop abruptly. A few will also develop hyperalgesia, a heightened sensitivity to pain that leaves them in worse shape than before.
With these predictable problems added to a fatal overdose rate that is four times higher than it was 15 years ago, and a nonfatal overdose rate that is six times higher, the country needs to think differently about chronic pain and opioids. And soon.
It’s hard to overstate how deeply chronic pain, and the struggle over how to treat it, has colored primary care medicine. In a national survey, 19 percent of American adults reported “persistent pain.” In a large health plan in Washington state, 14 percent of adults said they had “high-impact chronic pain” that was materially affecting their lives. A survey of primary care practices in Massachusetts found that nearly 40 percent of adult visits involved chronic pain complaints.
Chronic pain is, in its prevalence, the new hypertension. For many doctors it is ruining the job. Hardly a day goes by when a general internist or family practitioner doesn’t have to decide whether to write an opioid prescription for someone who complains of chronic pain. An array of skeptical questions, urine tests, and signed “pain contracts” follows, polluting the clinical encounter.
Young doctors learn quickly to expect the difficulties that result. A couple of years ago, while supervising medical residents in a clinic, I heard a patient shouting about how his doctor didn’t believe him and “didn’t give a shit” how he felt. A door slammed, and a thin man in a track suit strode by, told the receptionists to go to hell, and headed for the elevators. A minute later, the resident appeared. Ten weeks out of medical school, she sat down ashen-faced. At that stage of training, every case the resident sees in clinic must be presented to the attending physician for review. Her patient, whom she’d never seen before, had run out of his month’s supply of Percocet early and wanted more. The resident had said no. Halfway through her presentation, she started to cry.
If the use of opioids for chronic pain were just making the practice of medicine less rewarding, the problem would be tolerable. But it’s changing the country, creating a new underclass in the United States, no less real (and no less fraught with the potential for controversy) than the black underclass whose existence has been so central to American history of the past half century. The new underclass, mostly white, is distributed widely, with hot spots—Appalachia, rural New England, and, surprisingly, far-northern California. Like those in the black underclass, members of the new underclass usually have no more than a high school education and suffer high unemployment. Unlike the black underclass, whose chief impediments are discrimination, social dysfunction, and the trauma of imprisonment, the new underclass is stymied by economic obsolescence, a sense of victimhood, and an exaggerated view of its own physical damage.
In a presentation to the Federal Reserve Bank of Boston in 2016 called “Where Have All the Workers Gone?” Princeton economist Alan B. Krueger sketched a picture of one segment of this new underclass, men ages 25 to 54. About 12 percent of these “prime age men” are not even looking for work: 43 percent of that group describe their health as fair or poor, 44 percent take painkilling drugs on any given day, and 40 percent say that pain prevents them from taking a job for which they qualify. They spend 30 percent of their time alone and, compared with employed men, have higher measures of what they consider “unpleasant time” and less belief that their activities are meaningful.
Prime age men make up 24 percent of the nation’s population and have been and continue to be the backbone of the labor force. Today, millions of these men are living passive, depressed, bored, and isolated lives. For many, chronic pain is how they announce their status to the world.
So, how bad is their pain—really? That’s something doctors can’t help wondering when a patient walks into the examination room, looking like everyone else, and describes disabling pain. But doctors ask the question out loud only in each other’s company. It’s out of fashion, and strategically unwise, to question a patient’s sense of reality. The Social Security Administration actually prohibits it. Starting in 2016, the SSA instructed disability adjudicators to stop assessing the credibility of an applicant’s symptoms. “We clarify that subjective symptom evaluation is not an examination of an individual’s character,” it declared. Henceforth, adjudicators should consider whether symptoms “can reasonably be accepted as consistent with the objective medical … evidence.” That less judgmental stance is, of course, no less subjective—and is where every discussion of pain ends up.
The monster wave of pain breaking over men in their prime is certainly real enough to warrant attention. But that doesn’t mean it’s not exaggerated, or that doctors should take it at face value. Medical scientists in recent years have done a lot of research on how a person’s response to pain is affected by thoughts, mood, and social context. A major finding is that depression and anxiety about unpleasant sensations (“catastrophizing”) make pain worse. Acceptance of pain, on the other hand, makes it more tolerable. These different ways of experiencing pain are visible on functional MRI. They’re in the brain as well as in the head.
Unfortunately, for many patients chronic pain is an identity as much as it’s a sensation. That assertion may be hard to prove, but doctors and nurses will tell you it’s true. They see it every day. Their exam rooms are full of disabled people who can mow lawns, grow vegetable gardens, carry children, put up storm windows, and perform dozens of other tasks. The inescapable conclusion is that they have an exaggerated view of disability, an unwarranted sensitivity to discomfort, or both. It’s not heartless to make that judgment (although, of course, it can be made to seem so). Over time, many of these patients also develop considerable self-pity—another trait that’s dangerous for doctors to identify, even when the evidence is clear.
These patients don’t need opioid painkillers. Many will feel little difference after being slowly and sympathetically weaned off them, and by quitting they’ll avoid the treatment’s worst complication. In 2004, 21 percent of heroin users had previously used prescription pills; in 2013, 45 percent had. The road from prescription narcotics to heroin addiction is now wide and well traveled. Doctors need to do everything they can to keep their patients off it. The first step is to say no.
That’s all? No, it’s not all. But even if it were, saying no is far harder than it sounds.
Doctors want to help patients, and they also want to be liked by them. Primary care physicians, who seek sustained bonds with their patients, want to avoid conflict (especially when an appointment lasts only 20 minutes). To appreciate the hazards of this relationship, you don’t need to look at opioid prescribing. You can look at antibiotic prescribing.
Despite two decades of campaigns against the practice, doctors give prescriptions for antibiotics to more than half of those who go to them for coughs and colds. Overall, one-third to one-half of antibiotic prescribing in doctors’ offices is unnecessary. Practitioners give all kinds of reasons for this. The big one—rarely mentioned because it’s a source of shame—is that they want to please the patient. When people come to you feeling ill, seeking help, and having waited patiently, you want to give them something.
Saying no to opioids will be much harder than saying no to antibiotics. Doctors will have to tell patients they won’t prescribe a drug that may work for a while, or that they won’t keep prescribing one that’s still sort of working. Instead, they’ll need to prescribe medicines that may not work as well, and advise patients to learn to live with the pain left over. Asked why they’re doing this, honest physicians should answer: “Trust me. It’s for your own good.” And of course, explain why. The people hearing this will be poorer, less educated, less healthy, and more skeptical than average patients. Many will be unemployed, disabled, depressed, and saddled with responsibilities they can hardly bear. They will cry, they will yell, they will leave unhappy, and few will offer thanks.
Evidence exists that doctors—surgeons, specifically—can sharply cut the number of opioid pills they prescribe without making patients actually suffer. That’s good news, but it’s low-hanging fruit. Postoperative patients have wounds that will heal and stop hurting. The harder task will be prescribing fewer opioids to people with chronic pain, who can’t look forward to their wounds healing. But progress is being made there, too.
A number of years ago, the Department of Veterans Affairs realized that it had a big problem with opioid use. Twenty-three percent of Iraq and Afghanistan veterans using VA health care had been prescribed opioids at some point, and eight percent were taking them more than 90 days a year. In October 2013, the department launched an initiative to reduce opioid use. Doctors were provided a “taper decision tool” to help them wean patients off the drugs. Patients were urged to use physical therapy, acupuncture, yoga, and alternative medicine to manage their pain, and the VA health system created an app called Pain Coach to help them.
In August 2016, there were 187,000 fewer VA patients taking opioids than in August 2012—a 27 percent reduction. (This number includes far more than just Iraq and Afghanistan veterans.) The number of patients taking the drugs long-term dropped 33 percent, and the percentage taking high doses fell 40 percent. The VA was ahead of most medical systems in addressing this problem, as it has been with other therapeutic trends over the past 25 years—a fact at odds with popular perception.
Today, the Centers for Disease Control and Prevention recommends that clinicians “evaluate benefits and harms with patients within 1 to 4 weeks of starting opioid therapy for chronic pain,” and after that, every three months. It’s not terribly helpful advice. What’s to be gained by starting oxycodone if things aren’t likely to change in one month or three?
“You have to consider the other reality,” Nora D. Volkow, director of the National Institute on Drug Abuse at the National Institutes of Health, told me recently. “Doctors are faced with the situation where the patient is in a tremendous amount of suffering and they have tried other things. So they turn to opioids. It is not black and white.”
It’s true. For some patients with chronic pain, opioids are the answer. But for most, treatment must begin with the doctor saying no. This needn’t be done callously, and people in pain don’t have to be left with nothing. Many things help a little—nonnarcotic drugs, acupuncture, transcutaneous electrical nerve stimulation (TENS), yoga, massage, exercise. Time and sympathy from a doctor, nurse, therapist, or coach are just as important as any of these treatments. The journey back from opioid drugs—or through the land of chronic pain without them—should not be taken alone.
Even if President Trump never gets back to repealing and replacing the Affordable Care Act, he can leave his mark on medicine by making sure that Obamacare, Medicaid, and the Veterans Health Administration have the resources to treat chronic pain without opioids. It would be a huge accomplishment. He could shepherd millions of people to a place where they can work, be happy, and not feel like victims.
In the meantime, however, physicians should embrace a New Paternalism without apology. A New Maternalism, as well, given that 47 percent of medical students are women.
For ages, paternalism was one of the few things physicians could rely on—the shamanistic voice of authority. As society became more sophisticated, medical paternalism faded. Eventually, it became nothing more than a way for doctors to massage their egos and hide their ignorance. The New Paternalism is different. It’s built on knowledge as well as authority—and on the courage to tolerate the patient’s anger.
Getting there won’t be easy. It will require that physicians weather their own version of withdrawal. But like their patients, they’ll feel better on the other side.