Criticism
Tom Siegfried examines how journalism's ingrained definitions of what makes a big news story can lead reporters to write about studies that are anything but sound science: "Many of the criteria that confer newsworthiness on a scientific report tend to skew coverage toward results that are unlikely to stand up to future scrutiny. Journalists like to write stories about findings that are 'contrary to previous belief,' for instance. But such findings are often bogus."

The venerable Columbia Journalism Review devotes its new cover to a glowing profile of Elise Andrew, creator of the popular IFLScience Facebook page. But as Nadia Drake writes, the story includes only a brief mention of a long history of complaints against Andrew for copyright infringement and other sins: "For CJR to celebrate the rise of IFLS and its creator without giving proper weight to legitimate, serious concerns is a major misstep."

The disruptions associated with global warming have become the "new normal" in many news stories, but Dawn Stover sees a problem with that formulation: "There’s a dangerous psychology at work here. The 'new normal' meme, put forward by climate scientists as well as headline writers, is intended to make people realize how weird the world’s weather has become, and to encourage them to plan for an altered climate. But what if the phrase is having just the opposite effect?"

After 10,000 posts on science writing's hits and misses, the Knight Science Journalism Tracker will shut down at the end of this year, the incoming and interim directors of MIT's Knight Science Journalism fellowships announced Thursday. Deborah Blum and Wade Roush write that they want "to clear some space for experimentation." Commenters on their post were unconvinced. More from trackers Faye Flam and Charlie Petit.

Regarding a White House video linking global warming to wildfires, Charlie Petit argues that reporters need to break the habit of saying that no single event can be tied to climate change: "As journalists, we don't have to ape public speakers if they trot out such fuzzy thinking. If weather and climate are different, this way of stressing the distinction only feeds confusion and may backfire. What does it mean, we can't blame any single weather event on climate change?"

A new blood test for Alzheimer's disease is 87% accurate. That's a big story, right? Not so fast, according to the British NHS Choices site, which complained that none of the news coverage "reported the positive predictive value of the test. This reduces the impressive sounding 87% accurate figure to around the 50% level … giving the test the same predictive value as a coin toss." More from Gary Schwitzer, John Gever.

Gary Schwitzer discusses what journalists can learn from a BMJ editorial on sloppy animal tests: "If you saw the number of hyped stories about animal research that I've seen in 8 years of daily scouring of journalism, you'd know why I direct this note to journalists," he writes. "There usually aren't enough caveats. There usually isn't adequate explanation provided of how long and tortuous is the path between animal research and any human implication or application."

Nate Silver's new web site has launched, but the reviews so far have been decidedly mixed. At the Knight Science Journalism Tracker, Paul Raeburn gives an "abysmal" rating to one of the site's initial science stories. Meanwhile, one of Silver's science reporters, University of Colorado Professor Roger Pielke Jr., is accused by Kiley Kroh of "a long history of data distortion and confrontations with climate scientists."

John Gever looks at the statistics in the Alzheimer's blood-test study that made the news last week, and what he finds suggests that many reporters missed a crucial detail — the test's low "positive predictive value" for cognitive impairment. Giving the test to 1,000 patients, Gever writes, "produces a total of 140 positive results, 45 of which are correct and 95 false." Because there's no follow-up test to weed out the false positives, the test may have little value.
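Gever's arithmetic is easy to check: with the counts he quotes (45 true positives out of 140 total positives among 1,000 patients tested), the positive predictive value comes out to roughly 32% — around a one-in-three chance that a positive result is real. A minimal sketch, using only the figures from the quote above (the study's underlying sensitivity and prevalence are not given here, so only PPV is computed):

```python
def ppv(true_positives: int, false_positives: int) -> float:
    """Positive predictive value: of all positive test results,
    what fraction are actually correct?"""
    return true_positives / (true_positives + false_positives)

# Counts quoted from Gever's worked example: testing 1,000 patients
# yields 140 positives, of which 45 are correct and 95 are false.
value = ppv(true_positives=45, false_positives=95)
print(f"PPV = {value:.0%}")  # prints "PPV = 32%"
```

This is why an "87% accurate" headline can mislead: accuracy blends true positives and true negatives, while PPV asks the question a patient with a positive result actually cares about.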

If a lot of people read your science stories, that means they're low in accuracy and your readers are low in knowledge. That, in a nutshell, is the theory proposed by Swedish physicist Sabine Hossenfelder in a post reviewed by Poynter's Andrew Beaujon. But the Knight Tracker's Faye Flam calls out Hossenfelder for some flaws in her pretty but "fishy" graphs: "The graphs do not appear to be based on any data. There is not a data point to be found."