RESEARCH ON REPORTING

by Sharon Dunwoody


This is one of an intermittent series of columns about science communication research that has utility for the practice of science writing.

Some years ago, scientists at two separate universities succeeded in implanting bits of human immune system into mice. Each team wrote up its results, and two prestigious journals said "You betcha!" and scheduled the manuscripts for publication.

The publication dates for the two issues were only a week or two apart; that meant the teams would share priority for the discovery.

But the scientists consigned to the journal with the later publication date successfully pleaded with its editor to break the publication's information embargo so that news of their success could accompany stories written about the work of the other team.

Why would a group of scientists go to all this trouble to get into the mass media? The officially sanctioned reaction of the scientific culture to media coverage, after all, is disdain. Consonant with that stereotype, the few studies that have queried scientists and physicians about their use of mass media are peppered with results like those that, years ago, fell into the lap of M. Timothy O'Keefe. The North Carolina physicians/researchers in his sample said that stories about their own medical specialties appeared only intermittently, at best, in the media and got short shrift from physician-readers.

That "intermittent" assessment may have been quite accurate, but the North Carolina physicians were otherwise way off the mark. Recent research suggests that scientists use media accounts of science not only to keep track of what's going on but also--and this was unexpected--to decide what's important science and what's not within their own specialty areas.

Back in the 1970s, a sociologist hinted at this legitimizing function of the media when she wrote up her popularizing experiences for the journal, The American Sociologist. Laurel Richardson Walum had presented a paper at a meeting of the American Sociological Association in New York City, prompting a New York Times story that went out on the wire and resulted in interview requests from scores of other journalists worldwide.

In her journal article and a subsequent personal interview, Walum noted that the plurality of information requests did not come from journalists, however; they came from other sociologists who had read about her work in the media and were eager to learn more. Additionally, she noted somewhat ruefully, although the particular research project discussed in that paper was by no means her most important, it received far more attention within sociology than did her other work. Other sociologists were using the mass media not only as mechanisms to alert them to sociological work but also as arbiters of good sociology!

This tendency of scientists to buy into the argument that "if it's in the mass media then it's important science" has been illuminated most clearly by a team of California sociologists, who devised a nifty test of the legitimation principle. Briefly described in a 1992 issue of ScienceWriters, it deserves elaboration here.

UC-San Diego sociologist David P. Phillips and colleagues compared peer-reviewed articles from The New England Journal of Medicine that got coverage in The New York Times with those that did not, examining the number of times each article was cited by other scientists during the ensuing 10 years. They found that, in the first year after publication, the typical article written up in the NYT received 72.8% more citations than did NEJM articles that attracted no NYT attention. The popularity of the NYT-covered articles among other scientists persisted for the entire 10-year period.
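
For readers who want to see the arithmetic behind a figure like "72.8% more citations," here is a minimal sketch in Python. The citation counts are invented placeholders, not data from the Phillips study; only the shape of the calculation is the point.

    # A toy version of the study's core comparison: mean first-year
    # citations for NEJM articles covered by the NYT ("study group")
    # versus articles that got no NYT coverage ("control group").
    # The counts below are invented for illustration, NOT the data
    # from Phillips et al. (1991).

    def percent_more(study, control):
        """Percent by which the study-group mean exceeds the control mean."""
        mean_study = sum(study) / len(study)
        mean_control = sum(control) / len(control)
        return 100 * (mean_study - mean_control) / mean_control

    covered_by_nyt = [14, 22, 9, 31, 18]   # hypothetical citation counts
    not_covered = [8, 12, 5, 16, 10]       # hypothetical citation counts

    print(f"{percent_more(covered_by_nyt, not_covered):.1f}% more citations")
    # prints: 84.3% more citations (with these made-up numbers)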

Simply put, scientists judged articles covered by the NYT to be more important "science" than those not covered by the NYT. And, given that the sociologists who did this study used citations within the peer-reviewed literature as their measure of influence, we can conclude that this judgment took place within the specialty arenas that these scientists presumably knew best.

The beauty of this little study is that it takes the time to test some alternate explanations. For example:

-- What if the NYT reporters simply picked the best science offered in NEJM, so we're seeing nothing more than a consensus about quality? Phillips and colleagues came up with an ingenious way to test this possibility. They selected NEJM articles published in two years: 1978 and 1979. As luck would have it, the NYT was on strike for 12 weeks in 1978. During that time, the paper continued to produce a smaller edition that was not circulated publicly, so reporters kept selecting and writing stories that no one saw.

Thus, the sociologists had a sample of NYT-selected articles that saw the light of day and another sample that didn't. That second group, chosen by reporters in 1978 but never seen by the public, received no more citations than did 1978 NEJM articles that attracted no NYT attention at all. In other words, mere selection by Times reporters, without public visibility, conferred no citation advantage; it was the coverage itself that mattered.

-- What if only a few outstanding articles account for the difference in number of citations? If this were so, then these powerful outliers would be a better explanation than would media attention. Phillips and colleagues wrestled with this possibility by statistically ratcheting down the effect of articles that engendered either very high or very low citation levels (a toy version of this kind of check is sketched after this list). The big citation difference persisted.

-- Finally, what if the difference is a product of the Science Citation Index's use of a wide variety of journals and publications to assemble its citation totals? If one could look only at the most important journals--where the most important scientists presumably publish--perhaps the effect of NYT visibility would go away. Phillips and colleagues limited their citation analysis to cites in NEJM, Lancet, the Journal of the American Medical Association, Nature and Science. The citation difference not only persisted but became even more pronounced! As the authors note (p. 1182): "When we restricted our attention to core journals, we found that the average study-group article generated 92.7% more citations than the average control-group article."
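
For the curious, here is a toy Python sketch of those last two checks together. The trimmed mean merely stands in for whatever downweighting procedure the authors actually used, and the journal names and citation records are invented for the example.

    # Toy illustration of two robustness checks, with invented data:
    # (1) comparing trimmed means so that extreme articles cannot
    #     drive the result, and (2) restricting the count to citations
    #     appearing in a short list of "core" journals.
    # The trimmed mean stands in for whatever downweighting Phillips
    # and colleagues actually applied; see their paper for the real
    # procedure.

    CORE_JOURNALS = {"NEJM", "Lancet", "JAMA", "Nature", "Science"}

    def trimmed_mean(values, trim_fraction=0.1):
        """Mean after dropping the top and bottom trim_fraction of values."""
        values = sorted(values)
        k = int(len(values) * trim_fraction)
        kept = values[k:len(values) - k] if k else values
        return sum(kept) / len(kept)

    # Each article is a list of (citing_journal, count) pairs -- all invented.
    study_group = [
        [("NEJM", 6), ("Obscure Rev.", 9)],
        [("Lancet", 4), ("JAMA", 3)],
        [("Science", 11), ("Obscure Rev.", 2)],
    ]
    control_group = [
        [("Obscure Rev.", 7), ("NEJM", 1)],
        [("Lancet", 2)],
        [("Obscure Rev.", 3), ("JAMA", 1)],
    ]

    def totals(group, core_only=False):
        """Per-article citation totals, optionally counting core journals only."""
        return [
            sum(n for journal, n in article
                if not core_only or journal in CORE_JOURNALS)
            for article in group
        ]

    for core_only in (False, True):
        s = trimmed_mean(totals(study_group, core_only))
        c = trimmed_mean(totals(control_group, core_only))
        label = "core journals only" if core_only else "all journals"
        # With these made-up numbers, the study/control gap widens
        # under the core-journal restriction.
        print(f"{label}: study mean {s:.1f} vs control mean {c:.1f}")

With real data one would add significance tests, but the shape of the check is the same: the citation gap should survive both the trimming and the restriction to core journals, as the authors report it did.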

In sum, media coverage of science seems to serve as a kind of social legitimizer not only for the general public but also for scientists. Scientists seem to make the assumption that research covered by The New York Times (and, presumably, by other high-status media outlets as well) is more important, more worthy of attention than research that does not get such visibility. And most remarkably, they apparently buy into this assumption even within their own specialty areas!

No wonder the scientists at the top of this story worked so assiduously to "make the papers."

#

Read all about it:

O'Keefe, M.T. (1970) The mass media as sources of information for doctors. Journalism Quarterly 47(1), 95-100.

Phillips, D.P., E.J. Kanter, B. Bednarczyk & P.L. Tastad (1991) Importance of the lay press in the transmission of medical knowledge to the scientific community. The New England Journal of Medicine 325(16), 1180-1183.

Walum, L.R. (1975) Sociology and the mass media: Some major problems and modest proposals. The American Sociologist 10 (February), 28-32.
