To Die of Having Lived
A neurological surgeon reflects on what patients and their families should and should not do when the end draws near
Diminished by a lifetime of smoking, rich food, and inactivity that had limited him to a hobble, my father confided to me that he’d had 75 wonderful years followed by three mediocre ones. Two months later, he and my healthy mother returned home from an evening out with friends; he ate a bowl of ice cream and fell over dead at the age of 78.
A few years later, my mother closed her checking account, left the Midwestern town where she’d lived for most of her life, and moved to a desert community in Arizona, where she flowered. She found new friends, switched political parties, hiked in the canyons, and was happy. One night when she called me, she was a little confused but able to let me know that she’d rapidly lost 15 pounds and, for the first time I could recall, admitted that something hurt: her abdomen. Later, after a CT scan and a liver biopsy, an oncologist in Seattle told her that she had metastatic carcinoma of the pancreas. A frighteningly high serum calcium level caused by the cancer had brought on her confusion; that problem was corrected, and so she was lucid when she said no to the chemotherapy that might have extended her life by several months. At 85, she gathered around her my siblings and me, her brother and sister, and all her grandchildren, and said goodbye. She died in her sleep a few weeks later.
For both of my parents, death came unobstructed. This is what all of us wish to have happen: to run up to the edge and fall over. But it doesn’t happen to many people who ignore how the end might come for them. Although it may be a form of arrogance to attempt the management of one’s own death, is it better to surrender that management to the arrogance of someone else? We know we can’t avoid dying, but perhaps we can avoid dying badly.
Dodging a bad death has become more complicated over the past 30 or 40 years. Before the advent of technologies that permit vital functions to be sustained so well artificially, medical ethics were less entangled in abstract definitions of death. The current, generally agreed-upon criteria for brain death have simplified some of these confusions, but they have not resolved them. The broad middle ground between our usual health and consciousness as the expected norm on the one hand, and clear death of the brain on the other, lacks certainty. Doctors and other health-care workers can provide patients and families with probabilities for improvement or recovery, but statistics are hardly what is wanted. Even after profound injury or the diagnosis of an illness that statistically is nearly certain to be fatal, what people hear is the word nearly. How do we not allow the death of someone who might be saved? How do we avoid the equally intolerable salvation of a clinically dead person?
Injecting political agendas into these end-of-life complexities only confuses the problem without providing a solution. Of course we will die, and of course we as a society should ponder the implications of management solutions. The questions are how, when, and on whose terms we depart. It is curious that people might be convinced to avoid confronting death while they are healthy, and that society tolerates ad hominem arguments that obstruct rational debate over an authentic problem of ethics in an uncertain world.
Forty years ago, when I was an intern at the University of Chicago Hospitals, we were the local emergency room for everyone on the South Side: bankers, barbers, heroin dealers, professors, prostitutes, real-estate developers, housewives, students. In those days, most discovery of disease was clinical: listening, looking, feeling, thinking. One of the most common reasons for admission to the hospital by way of the emergency room was a stroke. Diagnosis and treatment for a stroke were based on the patient’s history and a physical exam.
From outside a curtained-off cubicle in the ER, a senior resident might inform the house officer on call, “Another fascinating case of left middle-cerebral stroke down here.” Admitting stroke victims to the ward was not a sought-after duty for the neurology and general internal-medicine residents who divided that chore on alternate days. The correct diagnosis was usually known by the time the patient reached the ward, but trying to get a complete medical history, do a thorough physical, and attend to the paperwork meant more sleep deprivation for the house staff on the receiving end. Stroke patients were put to bed on a ward where they either died or recovered enough to go to a nursing home or to rehab. Along the way, the interns and residents sorted out the patients’ hypertension, cardiac abnormalities, and diabetes, and the nurses nursed them. The costs were low because testing was scant, drugs were few, and therapies were limited. The patients either died or recovered enough to be quickly moved off the acute-care wards.
All that changed with the advent of computers and other new technologies. Today’s stroke patient, if he is a big-city resident, is rushed to a tertiary care center and seen within minutes by board-certified neurologists and interventional neuroradiologists. He is certain to have a CT scan, probably an MRI, and maybe an angiogram. If he meets certain criteria, he will be given tPA or another clot buster. The manufacturers of the drugs and hospital equipment, as well as the hospitals, promote the advantages of this kind of care, and it may improve the chances of survival by a few percentage points. If the patient does survive the stroke and treatment and makes it into the critical care unit (CCU), the bill will already have reached thousands of dollars.
Any seriously ill older person who winds up in a modern CCU immediately yields his autonomy. Even if the doctors, nurses, and staff caring for him are intelligent, properly educated, humanistically motivated, and correct in the diagnosis, they are manipulated not only by the tyranny of technology but also by the rules established in their hospital. In addition, regulations of local and state licensing agencies and the federal government dictate the parameters of what the hospital workers do and how they do it, and every action taken is heavily influenced by legal experts committed to their client’s best interest—values frequently different from the patient’s. Once an acutely ill patient finds himself in this situation, everything possible will be done to save him; he is in no position to offer an opinion.
Eventually, after hours or days (depending on the illness and who is involved in the care), the wisdom of continuing treatment may come into question. But by then the patient will likely have been intubated and placed on a ventilator, a feeding tube may have been inserted, a catheter placed in the bladder, IVs started in peripheral veins or threaded through a major blood vessel near the heart, and monitors attached to record an EKG, arterial blood pressure, temperature, respirations, oxygen saturation, even pressure inside the skull. Sequential pressure devices will have been wrapped around the legs. All the digital marvels have alarms, so if one isn’t working properly, an annoying beep, like the sound of a backing truck, will fill the patient’s room. Vigilant nurses will add drugs by the dozens to the IV or push them into ports. Families will hover uncertainly. Meanwhile, tens and perhaps hundreds of thousands of dollars will have been transferred from one large corporation—an insurer of some kind—to another large corporation—a health care delivery system of some kind.
While the expense of the drugs, manpower, and technology required to make a diagnosis and deliver therapy does sop up resources and thereby deny treatment that might be more fruitful for others, including the 46.3 million Americans who, according to the Census Bureau, have no health insurance, that isn’t the real dilemma of the critical care unit. The problem is in divining from the outset who will live, who will resume a meaningful life, and who will spend his final days or weeks comatose or in misery. The last person able to comment is an older patient already critically ill with stroke, trauma, metastatic cancer, organ failure, heart disease, or some combination of all these conditions. In America, the problem isn’t getting into or out of a CCU; the predicament is in knowing who should be there in the first place.
Before we become ill, we tend to assume that everything can be treated and treated successfully. The prelate in Willa Cather’s Death Comes for the Archbishop was wiser. Approaching the end, he said to a younger priest, “I shall not die of a cold, my son. I shall die of having lived.”
The best way to avoid unwanted admission to a critical care unit at or near the end of life is to write an advance directive (a living will or durable power of attorney for health care) while healthy. Regrettably, few people do this, and, more regrettably, the document often never makes it into the patient’s chart or goes unnoticed there. In one large study of geriatric patients from a large group practice, only about 7 percent had written advance directives, and of those documents, slightly more than half reached the patient’s chart. Even then, few staff members knew a directive was there: fewer than half the charts containing one identified that fact in any way, and in about a third the presence of the document was not recorded at all. So even if an advance directive has been executed, it is up to the family and friends of the disabled sick person to make this fact known to the attending medical staff, to make sure the document is properly included in the records, and to be unanimous in agreement with the patient’s wishes. The bedside in a CCU is not the place to adjudicate a family argument, for, as Tolstoy put it, “Every unhappy family is unhappy in its own way.”
I once cared for an elderly man who had been in good health until a car hit him in a crosswalk. He had a hemorrhage in his brain that would soon have been fatal had he not wound up being intubated in our emergency room, then transferred to the CCU. This patient was widowed, had six children, and did have advance directives in his chart. The large family gathered, except for one son who lived in another state. After 36 hours an EEG confirmed brain death. I suggested that recovery was not possible and that we should abide by the patient’s clear wish that he be allowed to die. Everyone present sobbed in agreement but wanted to maintain the ventilator until the last brother arrived from out of town. This brother, they all warned, probably would not agree with them.
Indeed, the sick man’s youngest son didn’t get along with the rest of the family, and as soon as he walked into the CCU, he began to obstruct his father’s wishes. For six days the son demanded we continue treatment, until finally the old man’s heart slowly stopped on its own. Medical devices cluttered the dark room, which smelled like coma (an identifiable odor). Everyone was sad and angry. Tears and shouts became the epitaph for a thoughtful old man who had made it plain that in such a situation he simply hoped to die. The things his children remembered about his final hours were the squeak of a ventilator and the slowing beep of the cardiac monitor measuring a heart rate headed for zero.
ER doctors and internists fear making errors of omission much more than errors of commission, and so questions arising from an excess of caution are often asked over and over. Not long ago, an emergency-room doctor called asking me to look at the MRI of an 80-year-old woman found collapsed in her bathroom. She had hemorrhaged into her brainstem—the entirety of that small tightly wound bundle of cells connecting the brain to the spinal cord. The clot leaped out at me from the computer screen; it clearly meant death. Blood had replaced all the vital neurons connecting the patient to her life. “She’s already dead,” I explained to the doctor on the phone. “There isn’t anything to treat.”
An hour later, our intensivist phoned from the CCU with the same questions. “Shouldn’t we treat her hydrocephalus?” he wanted to know. While it was true that the clot was so huge the patient’s spinal fluid pathways were obstructed and the hollow center of her brain where spinal fluid is manufactured had grown larger, that was hardly the problem. Treating it would only delay her heart from stopping. “No,” I replied. “She’s dead. She shouldn’t even be in the CCU. Why don’t we transfer her to the ward?” At the end of the morning clinic, my pager went off again. This time it was the internist calling from the ward. “What are we going to do about this older woman with the clot in her brainstem?” she wondered.
Any older patient admitted through a big-city emergency department these days will be thoroughly evaluated and imaged while he is there. This is the moment for family members and friends to state the person’s wishes strongly. Though outcomes can’t be predicted with complete accuracy, in many cases the odds at least are very well known. If there is only a small chance of meaningful recovery, it is best to keep the patient out of critical care units and let him take what comes on the regular ward. A sick person may still recover there, but only on his own terms. Retaining this control requires advance directives and the agreement of family and friends. It also requires those people to be present at the hospital, and to be vigilant. Many members of the professional staff in large hospitals—residents, fellows, junior attending staff, and some nurses—are young people with great learning but little experience. They do best when offered direction.
Though accidents will always happen, some personal behaviors also can lead to ruin. Part of our ability to marginalize death comes from our early experiences in living. Because chance allows us to get away with a good deal of risky behavior when we are young—smoking, drinking and eating to excess, driving fast—we assume too often that it will always be so. But statistics catch up with us, and excess increases the odds against us. To reduce the odds of dying badly, we should avoid living badly, and to do that we have to examine our motivations.
The other side of this same page is the necessity of treating chronic conditions with proven remedies rather than ignoring them. Diabetes, heart disease, and high blood pressure are illnesses that, while not often curable, can be effectively controlled. Controlling a chronic disease early, rather than trying to treat it after it has already done its damage, can mean greater longevity and a happier life. Edith Piaf famously sang, “Non, je ne regrette rien” (No, I regret nothing), but most sick, old people are unlikely to feel that way.
Since we are sure to die of having lived, we should prepare for death before the last minute. Entire corporations are dedicated to teaching people how to retire well. All of their written materials, Web sites, and seminars begin with the same advice: start planning early. Shouldn’t we at least occasionally think about how we want to leave our lives?
We tend to plot our lives, as we view the course of history, in decades, but our bodies speak to us in their own language and at their own rate. When I played football in college on Saturday afternoons, I was beaten up and spent most of the next day recovering. But by Monday it was over, and I was able to swagger back to the gym for practice. Now that I am nearing retirement, my recoveries are longer—or do not come at all. When I get up off the floor, I grunt as I remember my grandfather doing. My hair is gone, and my beard is white. These are changes in my physical self that should be accepted. Flannery O’Connor, who died young of systemic lupus, wrote, “Sickness before death is a very appropriate thing and I think those who don’t have it miss one of God’s mercies.”
Doctors don’t cure much; they only delay things. As conventional medical treatment has become more technological and less humanistic, some patients seek magic in other disciplines: acupuncture, chiropractic, holistic medicine or naturopathy, faith healing. These hopeful ideas will all fail too. In the end, we will die not only because we have lived, but as we have lived. Sick people acquire no great insights just because they are sick. If you hope for a miracle, look now, because one isn’t likely to find you when you’re on a ventilator in the CCU.
Because we understand the metaphor of conflict so well, we are easily sold on the idea that we must resolutely fight against our afflictions (although there was once an article in The Onion titled “Man Loses Cowardly Battle With Cancer”). And there is a place to contest an abnormal metabolism, a mutation, a trauma, or an infection. But there is also a place to surrender. When the organs have failed, when the mind has dissolved, when the body that has faithfully housed us for our lifetime has abandoned us, what’s wrong with giving up?