Clinical Trials Boot Camp, Part I

During the first Clinical Trials Basic Training session, a Food and Drug Administration official and an academic explained the basic standards and issues involved in properly designing a human clinical trial. The goal was to teach reporters enough about the process to analyze trial data competently and avoid inaccurate coverage of results. To that end, the speakers pointed out specific questions reporters should ask and information they should request and scrutinize to properly assess trial results.

As Robert Temple, an associate director for medical policy at the FDA, explained, beginning in 1962 FDA approval of a treatment became dependent on favorable results in "Adequate and Well-Controlled Studies." Prior to that, and perhaps of interest to anyone who was around before 1962 and taking drugs, there was no such requirement. Meeting this standard requires a valid control group for comparison and the minimization of bias, both of which may sound simple enough but pose substantial challenges depending on the drug or other treatment and its target. "There are millions of ways that clinical trials can go wrong," said Temple, and he and the second speaker clearly explained the general classes into which these problems fall.

Of particular interest to reporters was Temple's discussion of bias and some of the ways it can be incorporated into studies, intentionally or not. The discussion covered the importance of properly randomizing participants and of masking participant details from those who analyze experimental results. ("Masked" study, we learned, is now the politically correct replacement for "blind" study.)

A particularly illustrative, if extreme, case study comes from 1980, when a group of researchers presented favorable results from a study in which a few--just six--participants out of a large sample had been disqualified and thus excluded from the final analyses. Only after the research was published was the interesting tidbit discovered that the criterion for disqualification was that the patients had died, presumably due to the test drug. The medical community apparently frowns on this sort of thing, and Temple suggested that to avoid publishing such falsities, medical journals should demand that researchers provide their complete protocols for review. This led to a discussion of whether journalists are likely to gain access to full protocols. Research groups may or may not be willing to disclose their protocols, according to Temple, but legislation is pending that would make this information more readily available.

Temple also discussed such issues as the still widely debated ethics of placebo use. Potential trial participants would no doubt be pleased to learn that it is considered unethical and, hence, unallowable, to deny someone lifesaving treatment by feeding them a placebo in their time of need. However, doctors are still free, for now, to subject participants to a bit of discomfort, for instance extending the duration of a headache through placebo administration.

When placebos can't be used, other options include comparing a new drug to an established drug, though this poses its own problems, including the difficulty of knowing for sure that the established drug is performing as it did in past trials. Even when placebos can be used, a "perfectly good drug" may fail. Trials for antidepressants, for instance, are notorious for failing, in part because patients can improve spontaneously for a variety of reasons, including the special care received during a study, which even placebo recipients get. Assessing antidepressant effects is also highly subjective, adding further difficulty.

A second speaker, Sheryl Kelsey, director of the University of Pittsburgh's Epidemiology Data Center, highlighted several potential difficulties in designing and reporting on clinical trials. She cautioned reporters to look closely at the raw statistics generated by a study to evaluate their significance. For instance, a study that can honestly claim a 50 percent reduction in mortality or some other measure of success might reflect only a small absolute reduction, such as a 5 percent mortality rate in the treatment group compared to a 10 percent rate in the control group. Considering absolute change can at times be critical to understanding the true significance of results, she said.
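Kelsey's relative-versus-absolute distinction comes down to simple arithmetic. The short sketch below (not from the session itself) uses the hypothetical 10 percent versus 5 percent mortality rates from the example above to show how the same trial result yields a dramatic relative figure and a modest absolute one:

```python
# Hypothetical rates matching the example: 10% mortality in the
# control group, 5% in the treatment group.
control_rate = 0.10
treatment_rate = 0.05

# Relative risk reduction: the headline "50 percent" figure.
relative_reduction = (control_rate - treatment_rate) / control_rate

# Absolute risk reduction: the change in percentage points.
absolute_reduction = control_rate - treatment_rate

print(f"Relative reduction: {relative_reduction:.0%}")  # 50%
print(f"Absolute reduction: {absolute_reduction:.0%}")  # 5%
```

A related figure sometimes reported alongside these, the number needed to treat, is just the reciprocal of the absolute reduction--here, 1/0.05, or 20 patients treated to avert one death.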

One statistical challenge faced by researchers is getting a large enough sample size for a trial, because it is often difficult to find participants. As a result, some studies may include fewer participants than needed for strong statistical results. Another of Kelsey's suggestions for proper study analysis was to find out from the researcher how many participants they set out to include to determine if the actual number was much smaller, which could be a red flag. Kelsey also discussed the need to analyze the degree of statistical manipulation, offering a quote from Fred Menger to describe the potential problem: "If you torture data sufficiently, it will confess to almost anything."
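The sample-size concern Kelsey raises can be made concrete with a standard power calculation. The sketch below was not presented at the session; it uses the common normal-approximation formula for comparing two proportions to show why detecting even a halving of mortality (10 percent down to 5 percent) requires hundreds of participants per group, not dozens:

```python
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a difference
    between two event rates (normal-approximation formula)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # desired statistical power
    p_bar = (p1 + p2) / 2                # pooled event rate
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Detecting a drop from 10% to 5% mortality at the usual thresholds
# requires on the order of 400-plus participants per group.
print(round(sample_size_two_proportions(0.10, 0.05)))
```

A trial that enrolls far fewer participants than such a calculation demands is exactly the red flag Kelsey describes: it may simply lack the statistical power to detect the effect it was designed to find.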

One of Kelsey's parting messages was a plea to reporters to write about the need for individuals to take part in clinical trials even if participation will not benefit them personally, because trials often aid medical advances. One attendee found this request "troubling" because so many trials, including some that have received wide press coverage, have been badly designed or otherwise improper. To avoid becoming part of a bad trial, Kelsey suggested that potential participants ask detailed questions about the trial to decide for themselves whether it is valuable. She pointed attendees to part II of the session for further information on the topic of patient participation. In a later discussion Kelsey pointed out the importance of separating cases where researchers or companies had done something fraudulent related to a trial from cases where they merely spun the data to make the results seem as favorable as possible.

Mark Schrope is a full-time freelance based in Melbourne, Florida. His articles have appeared in Nature, New Scientist, Popular Science, Outside, and others.

Oct. 29, 2006

Biedler Prize for Cancer Journalism