Volume 50, Number 4, Fall 2001
AVOID THE SEVEN DEADLY SINS OF MEDICAL REPORTING

by Trudy Lieberman

In the name of news and the desire to build audience, the media are stimulating demand for medical tests and treatments that are unproven and untested, and may even be harmful. The lure of stories about medical breakthroughs and miracles is so strong that the press rushes to report on them even if there is little or no evidence that they are safe and effective. "The cultural proclivity to see medicine as heroic and triumphant is incredible," says Barton Laws, a senior investigator at the Latin American Health Institute in Boston.

The press is part of that culture. Print and TV journalists know that the public has high interest in health-related stories. If your ratings are low, says Tom Bettag, executive producer of Nightline, run a medical story. If your ratings are really low, he adds, run two medical stories.

But more than a cultural bias is at work. Journalists often fall victim to powerful public-relations machines representing some very big money. Reporting on a product or technology not yet proven clinically effective generates sales for manufacturers and builds a momentum that is hard to reverse. "It's extremely frustrating for us to figure out what works and what doesn't," says David Eddy, a physician and medical commentator. The media don't help. Dr. Andrew Wiesenthal, who is in charge of clinical systems for the Kaiser Permanente HMOs, says that the way journalists cover new medical technologies often has a "crippling" effect on getting them to the public safely and effectively.

Too many journalists take a formulaic approach to supposed medical breakthroughs. They start with the premise that a technology works or is effective, so the formula almost always dictates a positive spin and produces a predictable story. Too often, stories omit contrary information or fail to acknowledge the uncertainty that often surrounds new tests and treatments.

Janet Vasil is a reporter/producer for Medstar Television, a firm that produces prepackaged medical news segments for TV stations. Almost all the stories her company does are "patient-based," she says: "Mary Jones had this problem. Dr. Smith had this answer. It's a problem-solution structure." Most important, Vasil adds, there must be a payoff for viewers: "a kernel of information they can take away." All too often the take-away message is: buy this drug, have this test, ask for this technology, whether or not it is appropriate.

It's a classic story model. In 1998, the NBC News correspondent Robert Hager reported on a new medical device, the Ultrafast CT, which takes pictures of the heart and detects calcium buildup in the coronary arteries. Calcium has been associated with heart disease. The segment showed a man who had the test, discovered calcium deposits, and eventually needed heart bypass surgery. It included the requisite poke at the insurance industry for not covering the new procedure. Hager did not discuss any of the scientific data on calcium scanning for coronary artery disease or acknowledge that there were (and still are) serious doubts about its value.
The segment ended with news that the man who had needed bypass surgery was now healthy and that "some believe this new scanner could keep many others healthy too." Hager did not say that the doctor from George Washington University Hospital whom he'd interviewed on camera for the story was also the medical director for HeartScan, a diagnostic facility in Washington, D.C., and could benefit from patients coming his way. Nor did he disclose that GE Medical Systems, part of the same corporate family as NBC, was negotiating with Imatron, the manufacturer, to market the Ultrafast CT. The deal was signed two months after Hager's broadcast. Diagnostic imaging devices make up a large share of GE Medical's $8 billion business, and the firm is one of the biggest players in the $60 billion U.S. medical device market.

Alan Garber, a physician and economist at Stanford who has studied the data on heart scanning, says: "There is no persuasive evidence that calcium is a better predictor of heart attacks or coronary events than other risk factors. The message the public should get is that this is one of several technologies, but there are still unanswered questions. Few cardiologists would say this is the best way to learn if you are at increased risk of having a heart attack."

Last year, the American College of Cardiology and the American Heart Association issued a joint statement saying that electron beam CT, the technology's official name, could not be recommended for diagnosing coronary artery disease because of the high percentage of false positives the test produces. The technology may have some use for monitoring treatment, the statement said, but only after more research corroborates "the small number of published studies" that have been done.

The format for covering medical stories may help build audiences, but it does little to help those audiences understand complex issues that are seldom all black or all white. Too many stories suffer from what we'll call the seven deadly sins of medical reporting.

Sin 1: Accentuating the Positive and Ignoring the Negative

Because so much medical news is manufactured by commercial interests trying to sell a product, it is not surprising that many stories carry a positive twist. In their haste to report any new medical achievement, many news outlets either ignore the negative or slip it in at the end of a story that has already been framed as a positive report. What's worse is omitting the negative altogether, even when good scientific evidence shows that a treatment is not effective.

Coverage of Nickolas Zervos earlier this year is an example. In January, Zervos sued his insurance company, Empire HealthChoice, in New York City, for refusing to pay for treatment for his late-stage breast cancer, claiming that Empire was discriminating against him because he was a man. Zervos wanted a treatment involving the transplantation of blood stem cells. Throughout the 1990s, some 30,000 women underwent similar treatment, which involved very high doses of chemotherapy to kill cancer cells and then the implantation of stem or bone marrow cells to replace those killed during treatment. There was no proof that the $50,000-to-$100,000 procedure arrested the disease. In fact, two years ago, results from four clinical trials gave the definitive answer: it did not work.
Insurance companies that had refused to pay for the treatment, or had done so only in the face of pressure from the media, politicians, and doctors with a financial stake in promoting the treatment, had been right all along. When Zervos and his lawyer sought publicity to force Empire to pay, the media jumped on the story, but not the facts. Negative comments about the treatment's effectiveness came in the form of a rebuttal from Empire and were framed to make the insurer, which had stopped paying for the treatment once clinical results came in, look like the bad guy. MSNBC didn't even include Empire's side.

Sin 2: Generalizing from Anecdotes

Story after story on the Ultrafast CT begins with anecdotes about patients claiming that scanning saved their lives. Often, sellers of the technology or their PR firms recommend the people featured as leads for stories. Last October, the Dallas Morning News ran a piece about heart scans. A sidebar was titled "Success stories: three patient profiles." One of those profiled, identified as a marketing manager for Imatron, said: "If it weren't for Imatron and my heart scan, I could have been out riding my bike and had the mystery heart attack."

Too often, such quotes imply that the test, treatment, or technology is for everyone when it may not be, or, as in the case of heart scanning, when evidence of benefits is not clear. Sometimes, such leaps result in bad outcomes. In a sample of 74 stories about heart scanning, only the Rocky Mountain News and the Washington Post mentioned patients who did not benefit from the test.

Sin 3: Failing to Recognize Weaknesses in Scientific Studies

Without knowing the level of rigor that went into a particular study, journalists can't reliably tell the public what to make of the results and whether they should have the test or treatment. An AP story in 1996 discussed early research on heart scanning done at St. Francis Hospital, in Roslyn, N.Y. Dr. Alan Guerci, the chief investigator, was quoted as saying that scanning proved to be 10 times more powerful a predictor of heart attacks and blockages than the standard nonsurgical techniques such as cholesterol testing. But the story did not discuss the study's shortcomings. Guerci later told me these could have included "selection bias," that is, the lack of a representative sample of patients, a weakness other researchers would consider significant enough to negate the results.

Sin 4: Failing to Interpret the Numbers

Too often medical stories do not report the key concepts that are crucial to understanding what a test will and will not show. These are: sensitivity, which tells what proportion of people with a disease will test positive; specificity, which tells what proportion of people without the disease will correctly test negative; positive predictive value, which tells what proportion of people who test positive actually have the disease; and negative predictive value, which tells what proportion of people who test negative do not have the disease. A test with high sensitivity avoids false negatives; one with high specificity avoids a lot of false positives. Without some notion of how a test stacks up on those parameters, it's impossible for a reporter to convey to the public whether to have a test, particularly one that can lead to risky and sometimes unnecessary treatment. Few journalists writing about either heart scanning or ThinPrep, a Pap smear test for cervical cancer, offered a numerical context for making a judgment.
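To make those four measures concrete, here is a minimal sketch in Python. The numbers are hypothetical, chosen for illustration rather than drawn from any real test; the point is that the same test, with the same sensitivity and specificity, yields very different predictive values depending on how common the disease is in the group being screened.

```python
# A minimal sketch, with hypothetical numbers, of the four test statistics.
# Sensitivity and specificity are properties of the test itself; the
# predictive values also depend on how common the disease is in the
# population being screened.

def predictive_values(sensitivity, specificity, prevalence, n=10_000):
    """Return (PPV, NPV) for a test applied to n people."""
    sick = n * prevalence
    healthy = n - sick
    true_pos = sick * sensitivity        # sick people the test catches
    false_neg = sick - true_pos          # sick people the test misses
    true_neg = healthy * specificity     # healthy people correctly cleared
    false_pos = healthy - true_neg       # healthy people wrongly flagged
    ppv = true_pos / (true_pos + false_pos)   # positive predictive value
    npv = true_neg / (true_neg + false_neg)   # negative predictive value
    return ppv, npv

# The same hypothetical test -- 90% sensitive, 90% specific -- applied to
# two different populations:
for label, prevalence in [("older men, 20% disease prevalence", 0.20),
                          ("young women, 1% disease prevalence", 0.01)]:
    ppv, npv = predictive_values(0.90, 0.90, prevalence)
    print(f"{label}: PPV = {ppv:.0%}, NPV = {npv:.1%}")

# Output:
# older men, 20% disease prevalence: PPV = 69%, NPV = 97.3%
# young women, 1% disease prevalence: PPV = 8%, NPV = 99.9%
```

In the low-prevalence group, more than nine of every ten positive results are false positives, even though the test itself has not changed. That is why a reassuring-sounding figure means little until a reporter asks which population it describes.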
With ThinPrep, media coverage largely ignored the question of specificity: how many false positives result from preparing cells in a different way, which is what ThinPrep does. "That should have been automatic in the questioning by reporters," says Alan Garber. They should have asked whether the technology falsely led women to believe they had precancerous lesions, he said.

When reporters did take a stab at the numbers, they sometimes used them in inaccurate or misleading ways. A July 2000 story on heart scanning in the Chicago Sun-Times reported that "A finding of no blocks is 98 percent assurance you won't have a heart attack for several years." The reporter simply tossed out that number without saying where it came from or adding any caveats. He did not, for instance, identify the population he was talking about. That percentage might be different for a group of young women, who don't have many heart attacks, than for a group of older men, who do.

Sin 5: Failing to Disclose Sources' Conflicts of Interest

Conflicts of interest, primarily financial ties to manufacturers and sellers of technology, abound in medical reporting. Those quoted may be experts, but their judgments may be colored if some fraction of their income comes from those who make the technology. Yet journalists eager to quote an expert, or someone who appears to be an expert, don't routinely inquire about those conflicts. Dr. Kenneth Noller, head of OB-GYN at the New England Medical Center, who was the spokesman for the American College of Obstetrics and Gynecology during the campaign for ThinPrep, says he was asked no more than once or twice whether he had any financial connection to companies that made Pap smear slides. Noller, in fact, received no money from ThinPrep's manufacturer. But how would journalists know that unless they asked?

Mainstream journalists are not the only ones failing to disclose conflicts of interest. Medical and scientific journals, which publish the results of studies that eventually make their way to lay audiences, often don't disclose them either. Dr. Sheldon Krimsky, a professor at Tufts University, found that two-thirds of the peer-reviewed journals that had disclosure policies were nonetheless not requiring disclosure in practice. "I think you should know where funding is coming from," Krimsky says.
Sometimes bias is subtle and hard to detect. In its June 1999 story on scanning, Better Homes and Gardens quoted an Atlanta cardiologist, Dr. John Cantwell, who called it a "very promising technology. In the past, we've had people take a treadmill test, walk away, and think everything was okay, only to have a heart attack soon after." The magazine failed to note that Cantwell analyzes scans for Lifetest Cardiac Imaging. In a sidebar next to his comments, Lifetest was listed as one of 36 places around the country where people could have their hearts scanned.

Sin 6: Confusing an Intermediate Outcome with a Health Outcome

Too often journalists mistake an intermediate outcome for an ultimate health outcome, which results in a misleading presentation. An intermediate outcome is one that portends the expected health outcome. Lowering blood pressure is an intermediate outcome; reducing deaths from heart attacks and strokes is a health outcome. There is no absolute connection between the two: a particular treatment may reduce blood pressure, but there's no guarantee the person won't have a stroke.

The press often failed to make those distinctions in its stories on the treatment known as autologous bone marrow transplantation (ABMT) for women with late-stage breast cancer. In this case, oncologists became advocates for the treatment because X-rays showed that tumors appeared to recede more often, an intermediate outcome. Journalists took their word. The St. Louis Post-Dispatch in 1995 allowed a local doctor to tell how his patient's tumor had shrunk from "seven centimeters to a grain of sand" as a result of conventional chemotherapy. He then implied that the same might happen with higher doses of the drugs. "I have no doubt that every expert in the country would favor giving this treatment to this patient," he said. The story did not point out that randomized controlled trials were needed to determine whether the treatment actually improved survival rates, a health outcome.

Sin 7: Offering Tips that May Be Misleading or Harmful

Despite concerns about the use of scanning as a mass screening tool, and despite a letter from the FDA warning Imatron that it was making unapproved claims for its machine, several news outlets encouraged the public to seek a heart scan. The St. Petersburg Times ran an item in 1999 that told readers where they could attend a forum on heart scanning, as well as the phone number of a diagnostic facility. The item was presented benignly, as if it were an announcement of a garden club meeting. Better Homes and Gardens also published a list in 1999 and went on to tell readers that, while hospitals generally require patients to have referrals from doctors, those with heart disease who want to take the test anyway should simply "call a screening center and make your own appointment."

For journalists who want to do a good job covering new technology, there are few places to consult for independent assessments. Congress killed the Office of Technology Assessment during Newt Gingrich's term as House speaker; biotech firms, which had much to gain from the agency's demise, were heavy contributors to Gingrich's political action committee. Few reporters turn to the Blue Cross and Blue Shield Association Technology Evaluation Center, which assesses clinical evidence and sometimes the cost-effectiveness of a procedure, but does not tell individual Blue Cross plans whether to pay for a technology. Of the 74 stories about heart scanning, only two mentioned evaluations by the center.
Of the 91 stories sampled about ThinPrep, only one noted that the center had evaluated the evidence for the technology. During the 1990s, when newspapers were publishing hundreds of stories about women with late-stage breast cancer who wanted ABMT therapy, few journalists turned to ECRI, a nonprofit, independent health-care research organization in Pennsylvania that evaluates medical technology. In early 1995, ECRI published a report that examined the data on the treatment and found no evidence that ABMT produced any advantage over conventional chemotherapy. A Nexis search of news outlets turned up only eight stories in the general media and seven in the trade press that mentioned the report in the months before official publication and in the two years after it was released.

Pressures from editors to shorten, simplify, and produce a dramatic story line can also work against thoughtful and honest coverage. And as the budget for the National Institutes of Health continues to increase, the difficulty of conveying accurate information to patients, along with the public policy questions surrounding the use of new technology, will only intensify. This huge infusion of federal money will spawn more research, more new devices, and more public relations efforts on behalf of sellers who will profit from the government's largesse. "We're great at inventing new tests and treatments, but terrible at figuring out whether they work, and even worse at limiting their uses to proven effective indications," says Dr. Mark Chassin, chairman of the health policy department at Mt. Sinai School of Medicine, in New York City.

Journalists trying to sort out new technology, and to judge the claims of sellers, manufacturers, and health-care providers against the needs of patients and the costs to the health-care system, would do well to think of Archie Cochrane. Cochrane was the British physician and epidemiologist whose work promoting evidence-based medicine is memorialized in the Cochrane Collaboration, an organization of more than 5,000 investigators from more than 40 countries who prepare systematic reviews of research on health care. "Until the second quarter of this century, therapy had very little effect on morbidity and mortality," Cochrane wrote in 1972. "One should, therefore, 40 years later, be delightfully surprised when any treatment at all is effective, and always assume that a treatment is ineffective unless there is evidence to the contrary." Today his remarks are more relevant than ever.

Trudy Lieberman, a contributing editor to Columbia Journalism Review, was a fellow last semester at the Joan Shorenstein Center on the Press, Politics and Public Policy at Harvard's Kennedy School of Government, where she looked at media coverage of new medical technology. This story is based on that research. Ryan Smee, a CJR intern, provided additional reporting.

"Covering Medical Technology: The Seven Deadly Sins," CJR September/October 2001.