Issues in science writing

A New York hospital group's PR firm sends out press releases urging reporters to quote its experts on embargoed studies by other researchers, Ivan Oransky writes on Embargo Watch. A clever way to help writers and get ink? Or too clever, since the journal publishing the studies did not consent? "Other hospitals, playing closer to the rules, might wish they had thought of this strategy before. But I wonder if this isn’t the scientific embargo version of insider trading."

Blame writers who can't tell the difference between solid and shoddy medical research, let alone explain it to their readers, David H. Freedman writes in CJR: "Even while following what are considered the guidelines of good science reporting, they still manage to write articles that grossly mislead the public, often in ways that can lead to poor health decisions with catastrophic consequences." More from Colin Lecher in Popular Science.

Kevin Lomangino warns on HealthNewsReview.org about the dangers in reporting on studies that use composite outcomes to gain greater statistical power: "When these studies report a benefit, reporters should evaluate whether there was a similar effect on all components of the composite; if not, they should identify which component of the composite was primarily responsible for the result, and explain whether that component is more or less important than the others."

The New York Times gets taken to Gary Schwitzer's woodshed for a story about a study on coffee consumption and oral cancer deaths. It's not that the story wasn't accurate, Schwitzer writes on HealthNewsReview.org. But it should have emphasized that such "observational" studies cannot show cause-and-effect relationships: "Please, Grey Lady, don’t let your writers contribute to the back-and-forth ping-pong games of 'coffee lowers risk,' 'coffee heightens risk' stories."

Brisk debate last week over whether a PhD is a sensible route into science writing, prompted by these SciLogs posts from Akshat Rathi and Jalees Rehman and this reply from SciCurious. Writes the latter: "It's true that science writing isn't the most lucrative way to pay the bills. But getting a PhD to go INTO science writing is hardly better ... You'd be better off building your portfolio as a full time science writer, supplementing with other types of writing or other work."

Science has triumphed recently by forecasting Hurricane Sandy's path and predicting President Obama's re-election, Larry Pryor writes at Online Journalism Review. Now, Pryor says, science writers need to educate the public on climate change models: "Effort is now being spent on making scientists into better communicators, but more might be accomplished if mainstream journalists ... made themselves better acquainted with satellite technology and its impact on science."

Two posts on Elsevier Connect discuss Retraction Watch, the Oransky/Marcus blog that digs out details on scientific paper retractions. First, Tom Reller advises editors on handling an RW inquiry. Then NASW's David Levine collects thoughts from four science writers, including former USA Today reporter Doug Levy, who says RW is "bringing the scientific community itself more directly into the discussion. That’s a good thing for everyone."

Ivan Oransky has disturbing news from Europe. French researchers offered a paper on genetically modified food to reporters under embargo, but only if the journalists pledged not to consult other scientists before the embargo lifted: "One of the main reasons for embargoes ... is to give reporters more time to write better stories. Part of how you do that is talking to outside experts." Comments from Carl Zimmer and Deborah Blum.

Kelly McBride on the Poynter site discusses something called "patchwriting," which she calls "a dishonest writing technique that is common on college campuses and among journalists." What is it? Unlike plagiarism, it's not verbatim copying. Rather, it's a clumsy and incomplete paraphrasing in which the writer lifts ideas and thoughts if not entire sentences: "It’s a form of intellectual dishonesty that indicates that the writer is not actually thinking for herself."

There's cause for hope in recent bad news about science writing, Seth Mnookin writes on his PLOS blog. True, "one of our biggest stars was revealed as a fraud; publications that should be exemplars of nuanced, high-quality reporting are allowing confused speculation to clutter their pages; researchers and PIOs are nudging reporters towards overblown interpretations," and so on. But shoddy journalists are quickly set straight by a barrage of authoritative responses.