Point of Departure - Autumn 2009

What’s Wrong (and Right) with Science Journalism


Remarks to the University of Iowa on October 8, 2008, for the Project on Rhetoric of Inquiry

By David Brown

September 1, 2009


I want to take the next little while to talk about writing about science in the popular press. My comments will be principally confined to science as it is reported in newspapers—how it’s done, and how to do it better.

I have been writing for newspapers for much of my working life. They are now what might be considered the woolly mammoths of the American media. People like me are hoping the Ice Age lasts as long as possible. But even when it ends, and the last newspaper keels over into the peat, there will be descendants of these once-thriving behemoths, and some of them will be writing about science.

Enough of that metaphor.

What I’m trying to say is that science writing will go on forever, in one form or another.

Why? Because there’s so much science in our world, and it’s so interesting. There is also a desire—even a hunger, I believe—to learn things about our world that are not likely to be eclipsed by the next day’s events, as is the case with so much news. There is considerably more hunger for science reporting—and, in my opinion, a considerably more sophisticated readership for it—than many of the people who run newspapers realize. I’ll speculate on why that may be in a minute.

Suffice it to say, science reporting is a growth industry in journalism, and has been for about 20 years. It will continue to be because science is something Americans do very well. Science is also something that Americans are counting on, rightly or wrongly, to solve a lot of the problems facing them.

I believe this truth—that people want to learn about scientific discoveries, the systematic exploration of our material world—carries with it an opportunity for journalism to improve itself and in some sense to remake itself.

An opportunity for journalism? I hear some of you ask. Who cares about opportunities for journalism? Well, I think we should all care about the future health and quality of this particular business. It is, like it or not, the disposable addition to our education that either lands on the doorstep, or pops up on the computer screen, every day.

But journalism is more than just a business and more than just a continuing education course. It is something that both reflects the spirit of the times, the zeitgeist, and can shape it. Good journalism can make public discourse more honest and less oversimplified, more dominated by logical thinking and less dominated by rhetoric.

Good journalism can do this. And science journalism can lead the way. At least that’s the argument I will make.

What does science journalism have that makes it a model for good journalism as a whole? Quite a few things, I think. But the most important thing it has is evidence. Science is built on speculative ideas—hypotheses—and the evidence that either confirms them, challenges them, or occasionally overthrows them.

Scientific evidence in a form that is explicable, even if boiled down, should be a part of almost every story about a discovery, a new insight, a revised theory, a more precise diagnostic strategy, a better therapy. Science reporting, in fact, should be the model for evidence-based journalism.

So what is evidence? For starters, evidence usually involves numbers. Science requires measuring things, and numbers are the language of measurement. So, numbers in a news story are important. But they alone don’t constitute evidence. Too often, numbers in news stories are used to garland and decorate what is treated as the real news—namely, someone telling you how great something is, what it all means, and all the ways it’s going to change our world. But of course, that isn’t evidence.

In science, there is a natural tension between evidence and opinion, and evidence always wins. Unfortunately, in a lot of science reporting, as in a lot of reporting in general, that isn’t the case. Take a look sometime at a science story in the daily news. Say a story about the effectiveness of a new cancer treatment, or the discovery of a skull that may revise our view of human history, or environmental changes suggesting a faster rate of global warming, or experiments showing that bisphenol A damages health.

Read the story. Then notice how much space is devoted to describing the evidence for what is purportedly new in this news, and how much is devoted to someone telling you what to think about it. Ask yourself whether there is enough information in the story to permit you to reach your own opinion about its newsworthiness.

I think you’ll be surprised. If there isn’t enough information to give you, the reader, a fighting chance to decide for yourself whether something is important, then somebody isn’t doing his or her job.

The reporter has two tasks. (And I will say parenthetically this is all reporters; it’s just a clearer assignment when the subject is science.) The first is to provide enough information so a reader can judge the strength of a claim. The second is to describe how the news fits into what’s already known about the subject. How much of a breakthrough is this new study that found that statin drugs can reverse atherosclerosis? How plausible are these descriptions of Gulf War syndrome the Senate committee is hearing? Is the so-called ABC (abstinence, be faithful, use condoms) approach to AIDS prevention in Africa a success or a failure? Are standardized tests changing public school curriculums, as this critic claims?

All of these are questions for which there is an “evidence base” that has a life quite apart from the Yes or No answers people may offer when the question is put to them. Any story that wants to engage these questions—and thousands more like them—needs to seek out that evidence and present at least a bit of it to the reader.

This seems like a tall order—and it is. That’s because it isn’t always easy to boil down research findings to a few numbers that capture the essence of what happened in an experiment or a study. And it isn’t always possible to find out how a new piece of knowledge fits into what’s already known about a subject. Sometimes it just can’t be done, or can’t be done by deadline.

On the first task—knowing how much to boil something down without making the final dish indigestible—that’s a big part of the fun of being a science reporter. And on the second task, we have technology on our side. Thanks to the Internet and e-mail, search engines like PubMed and Google Scholar, and databases like the Cochrane Library, we can round up information faster and more systematically than anyone could have imagined even 10 years ago.

When I say this is a tall order I don’t mean to imply nobody is doing it. There is lots of great daily reporting on the flood of scientific knowledge and discovery that washes over us. But as a rule, this “evidence-based paradigm” hasn’t caught on in the reporting of scientific and technical subjects to the extent one might hope.

I have a few theories why it hasn’t. First, the people who are in charge of American journalism tend to be intimidated by science and scientists. They are often unwilling to bring their own natural (and legendary) skepticism to scientific topics. Few had anything other than the minimum number of science courses required in college. They tend to think of science as a realm of priestly knowledge where the thinking, logic and judgment of ordinary people fear to tread. This last misconception is particularly unfortunate, in my opinion. And I hasten to add, in Albert Einstein’s, as well.

Einstein said, in 1936, “The whole of science is nothing more than a refinement of everyday thinking.” He probably should have added, “except for quantum mechanics.” Nevertheless, I think this is a profound, and profoundly democratic, assertion. Every executive editor should have it taped to his or her computer.

Second, most editors—and here I’m talking about managing editors and executive editors, not science editors—don’t understand that science is iterative and conditional. It moves in small steps and those steps are sometimes in the wrong direction. Like most Americans, editors were weaned on the notion that American science (and especially medicine) is one breakthrough after another, interrupted by the occasional scandalous failure. Furthermore, there is the notion that things are either right or wrong. Look at how the word “flawed” is thrown around by the press. It’s wielded like some sort of wand that freezes a piece of research or a report in its tracks. Someone asserts a study is “flawed” and the assumption is the conversation is over. In science, though, calling something “flawed” says nothing. It starts the conversation rather than ending it.

How is the study flawed? Do the flaws invalidate it? Do the flaws give it less credibility even though it retains some? Is there any way to compensate for the flaws? Every piece of scientific research, like every human being, has some flaw if you look hard enough. The question is: How serious is the flaw, and what is its effect?

Third, the press tends to be very interested in what authority figures say about an event in the news. This, I think, is an important and overlooked fact. It explains why many editors, faced with a scientific assertion in the news, are more likely to say to a reporter: “Find out what other people think about this” than to ask: “Well, what does the evidence say about this claim?”

In politics, diplomacy, popular culture, and even economics—the subjects that editors are most conversant with—what authority figures have to say actually is evidence much of the time. What Larry Sabato of the University of Virginia says about the latest doings of the presidential candidates is evidence. His opinion—or, more precisely, the press’s opinion of his opinion—can change minds and alter the perception of things. The same is true, although to a lesser extent, of what the chairman of the Federal Reserve says about the economy. In politics and economics, a changed perception can mean a changed reality.

But that’s not true with science and medicine. Until about 20 years ago, peptic ulcers were a disease of the high-strung, the impatient, the worried. And when they weren’t caused by stress, they were caused by spicy food. This is what every authority said for a century. But it turns out 90 percent of ulcers are caused by a bacterium called Helicobacter pylori. That discovery won Australian physicians Barry Marshall and Robin Warren a Nobel Prize in 2005.

This means that all those years when men in gray flannel suits came home to their attentive wives to consume suppers of milk toast and bouillon didn’t do anything to change the fact that a bacterium was sitting in their duodenums eroding their intestinal mucosa. Nor did the surgical procedures that presumed to cure the disease mean much. Nor did the assertions of full professors of medicine who proclaimed the truth of these therapies.

What authority figures have to say about findings in science is ultimately irrelevant. When all is said and done, all that counts is the evidence. So, to give more space to the commentary than to the evidence probably isn’t a good idea. And yet that’s often the case.

Here is one recent example. In the summer of 2008, the United States Preventive Services Task Force released a new recommendation that men over 75 years of age not be given prostate specific antigen, or PSA, blood tests as a screen for prostate cancer. The task force, which is assembled by the U.S. Public Health Service, is strictly evidence-based, examining every study on a subject and rating the strength of its recommendations based on the strength of the evidence reviewed.

The 1,000-word news story contained these statements:

“There is this idea that more is always better, and if a test is available we should use it,” said Howard A. Brody, a professor of family medicine at the University of Texas Medical Branch at Galveston. “A lot of times, we’re doing more harm than good.”

E. David Crawford, a professor of surgery at the University of Colorado at Denver: “You have to individualize treatment. If a 75-year-old man is found to have high-grade prostate cancer, it’s going to kill him, and we can intervene and do something for him.”

And then there was J. Brantley Thrasher, chairman of the urology department at the University of Kansas and a spokesman for the American Urological Association, who said: “We have seen a dramatic drop in mortality. They’re not paying attention to that.”

What the story didn’t contain was any data on the relationship between PSA testing and cancer mortality in older men, what kind of studies the task force drew upon, and what the most credible of them showed. It also provided no data on younger men, for whom (according to the article) there may be some benefits of testing. And this got on Page 1, which is pricey real estate in the newspaper world.

Giving the reader the evidence, rather than the words of an expert telling him what to think, is also the democratic thing to do. I find it curious that the press—the great defender of the people’s right to know—often doesn’t give the reader enough information to make a decision about how important something is.

The fourth reason the “evidence-based paradigm” hasn’t caught on is that some version of a narrative—with a protagonist, emotional engagement, conflict and resolution—is the archetypal journalistic form. In pure form, narrative structure exists only in long feature stories, but elements of it suffuse all of journalism. It is even captured in what we in the trade call anecdotal ledes—the two- or three-paragraph vignette, usually featuring one person, that starts a story. It often, too, is apparent when the writer returns to that person at the end of the story, the kicker.

There is a lot to be said for narrative, and even for the overdone anecdotal lede. But for science writing, it is a hazardous form.

Why? Because the narrative form makes an anecdote evidence. That’s okay, as long as the anecdote is representative of the body of evidence and that body is adequately described. However, problems arise when the anecdote, or often three or four anecdotes, become all the evidence there is. All the evidence, that is, except for the opinions of people interpreting the anecdotes as evidence.

This is what happened with Gulf War syndrome. Gulf War syndrome was built largely from the empathetic recounting of soldiers’ stories, even though when they were investigated rigorously there was little or no evidence that any new illness arose from the United States’ first war against Iraq.

This phenomenon is also what led to the demonization of insurance companies in the 1990s when there was uncertainty over whether high-dose chemotherapy, followed by bone marrow transplantation, was the best strategy for treating breast cancer. Needy vulnerable patients, and heartless penny-pinching insurers, battling each other while cancer slowly killed one of the combatants—this was the story line in hundreds of news stories. What many stories failed to mention was that the evidence base for this unusually painful, expensive and dangerous treatment was extremely weak. In fact, when randomized clinical trials were finally performed—which was after some state legislatures made it illegal for insurers to refuse to pay for this treatment—they showed that bone marrow transplant did not lead to better survival than standard treatment. In the meantime, 48,000 women had undergone it.

The fifth and most troubling reason we don’t see more evidence in a lot of science stories is that often it is not in the interest of reporters or editors to tell a story completely. We in the media live in a time of incredible competition for readers’ attention—from television, radio, magazines both real and online, the gigantic blogosphere, and sometimes even our own websites. At the same time, getting on Page 1 is almost all that counts. It has always been important. Reporters who wrote accounts of the Battle of Gettysburg I am sure wanted them on the front of their oversize, gray, art-free, and barely readable newspapers. But today Page 1 is a Golden Calf we worship obscenely.

Science stories, and especially medical stories, have a really good shot at getting on Page 1. They are inherently interesting and they appeal to what might be termed, somewhat cynically, the narcissism of the reader. But that often isn’t enough to get them on the front page. To get there, the story must emphasize novelty, potency, and certainty in a way that, as a general rule, rarely exists in a piece of scientific research. That truth is why so many medical stories only mention the relative magnitude of change that occurs with a new diagnostic test or treatment, and not the absolute change it brings about.

Consider, as an example, a hypothetical innovation that increases the diagnosis of a certain illness by 50 percent, or triples the survival duration of patients with a specific disease. That sounds like big news—and of course it may be. However, if a condition is diagnosed in time to cure it in only 5 percent of cases, and the new test increases that to 7.5 percent of cases, that may not be such a breakthrough—although of course, it is an improvement of 50 percent. Same with a disease, such as liver cancer, in which the survival after diagnosis is roughly six months. Tripling it to 18 months is not a trivial improvement, but is probably not what a reader thinks of when he reads that a new treatment “triples the life expectancy” of someone with liver cancer. The absolute risk of an outcome, and not just the relative risk of it, should be absolutely required of any story about a medical innovation.

But it isn’t, and part of the reason is that saying that the likelihood of an event (such as a cure) goes from 5 percent to 10 percent, instead of saying that there was a 100 percent improvement in cure rate, tends to deflate the perceived news value of a story. Consequently, it isn’t surprising that it’s almost never said the first way, and almost always said the second way.

I would like to give you an example of some of these forces in play. The story I have chosen is a disposable one. The inaccurate reporting surrounding it didn’t cause much outrage at the time, and by now has been forgotten by just about everyone. A lot of people might not even consider what I will now describe to be a problem. Which, to me, frames the problem perfectly. In 2006, at the American College of Cardiology conference, a study was presented showing that when patients with blockages in their coronary arteries were given high doses of a statin—the family of drugs that millions of Americans take for their cholesterol—the blockages actually shrank in size.

This was presented as a breakthrough—a new path leading to a cure for coronary heart disease, the leading cause of death in the United States. The study was led by a prominent cardiologist who was the incoming president of the organization at whose meeting it was presented. At a press conference, this man told reporters that previous studies had “shown slowing of coronary disease, but not regression.” The pharmaceutical company that makes the drug used in the study, and the American College of Cardiology, each put out press releases saying the study demonstrated regression of coronary disease “for the first time.”

This news literally went around the world. It was on Page 1 of the Daily Mail in London, where the story asserted the study showed the “first conclusive evidence” the drugs could reduce coronary plaque. It was on Page 1 of the West Australian in Perth, under the headline “Miracle drug to stop heart attacks.” It was on the front page of USA Today, in the Wall Street Journal, and on NPR—each time with claims that this was the first time that drugs had reversed coronary artery disease.

The only trouble was it wasn’t the first. It wasn’t the second. It wasn’t even the fifth. The reporters and editors should have smelled something funny when they were told a dramatic effect like this had burst forth fully formed 19 years after the first of these drugs arrived on the market. Science just doesn’t work that way. Science moves in small steps. Something unusual is noticed in a few people and then an astute researcher explores what is noticed by magnifying the conditions under which it occurred (assuming this can be done ethically) to get a better look at it. That is exactly what happened with statins and the shrinking of coronary artery blockages.

The first drug in this family, a compound called lovastatin, was approved in 1987. Three years later, in 1990, a study appeared in the New England Journal of Medicine that described how out of about 150 patients taking lovastatin, 32 percent showed regression of coronary blockages compared to 11 percent of patients not getting the drugs. That was the first signal that a statin could reverse coronary artery disease. There followed about a dozen more studies that reproduced, refined, and magnified the finding. The study presented in 2006 at the American College of Cardiology by its incoming president was the latest refinement. It wasn’t even remotely the first.

But we in the press love “firsts”. We are total suckers for “firsts”.
“Firsts” get stories on Page 1. “Betters” don’t ever get stories on Page 1. The researchers, the American College of Cardiology, the medical reporters, and their editors know this. The ironic thing is that there was a “first” in this study. It was the first time that the majority of patients getting an extremely aggressive dose of statins showed a shrinkage of their coronary blockages. But somebody judged that wasn’t quite good enough. The news had to be juiced to be the first time it ever happened.

So you ask, who cares? What damage is done? Especially if the story gets doctors and patients to think about using statins at higher doses—blockage-shrinking doses. The trouble is that it’s an exaggeration. And exaggerations at some point—it’s hard to say exactly at what point—become lies. But the sad truth is that people, as a rule, don’t complain about exaggerations. Nobody is damaged by them, and many people actually benefit from them. Only truth is damaged by a “first” that didn’t actually happen first.

Which is why exaggerations do more damage to the credibility of science—and I think to all of society—than outright lies or mistakes. They float by, and eventually people come to expect them. It’s like inflation. It creeps up and one day words—like dollars—just aren’t worth what they used to be.

Is the assertion that in 2006, statin drugs were shown to shrink coronary blockages for the first time any different from the assertion by Sarah Palin that she stopped the “Bridge to Nowhere,” or Barack Obama’s assertion that John McCain is willing to have the Iraq war go on another 100 years? I don’t think so.

We live in a time when saying something over and over makes it true, or at least gives what’s said some of the heft and power of truth. As Tommy Smothers said at the Emmy Awards the other night, “Truth is that you believe what I told you.” We also live in a time when—with blogs, zines, and blast e-mails—everyone can buy ink by the barrel (at least metaphorically speaking), even if almost nobody can afford to run a newspaper.

In this new world, science reporting—done without exaggeration, and with an eye to giving the reader enough information to make up his own mind—can be a model for intelligent discourse. It can be an exercise in truth-telling and democracy. And what could be better than that?

David Brown has been a staff writer for The Washington Post since 1991. A journalist and a physician, he works four days a week at the Post and two-thirds of a day at a general internal medicine clinic in Baltimore supervising third-year medical students.
