April 2010


Questioning the Value of Recommendations in Peer Review

Virtually every professor has stories about grant applications and research article submissions that were (according to the tale) unfairly rejected. Given that the journal in which a research article is published and the amount of money squeezed from funding agencies can make or break a professor's career, this isn't an issue that can be taken lightly.

There's ample reason to question the peer review and editorial process as commonly practiced. Many technical medical editors are indifferent to plagiarism; nonplagiaristic fraud is common in biomedical and clinical research; and, surprisingly, research on "popular" scientific topics is more error-prone than research on less popular topics.

Most proposed solutions center on somehow "improving" peer review, the process by which manuscripts are sent to other scientists for critical evaluation. Based on their research, Richard Kravitz (University of California Davis, United States) and coworkers recommend, among other possibilities, that technical medical and science editors consider dispensing with reviewer recommendations and focusing more on reviewer comments.

Why dispense with reviewer recommendations?

There's a difference between reviewer recommendations and reviewer comments. A reviewer recommendation is the part where a reviewer recommends that a manuscript be accepted (with or without revisions) or rejected by a journal; reviewer comments are the detailed assessments of the manuscript itself.

With the goal of evaluating the state of peer review, Kravitz and coworkers set out to determine the importance editors place on reviewer recommendations and reviewer comments in their decision to publish a manuscript. They also set out to determine whether there was more consistency in multiple evaluations by the same reviewer, or between multiple reviews of the same manuscript.

They studied nearly 6000 reviews of over 2000 manuscripts sent out for peer review by the Journal of General Internal Medicine (an interdisciplinary clinical and health journal), between 2004 and 2008. Almost all of these manuscripts were reviewed by two or three scientists.

Seven percent of the manuscripts came back with unanimous agreement to reject; 89% of these manuscripts ended up being rejected. Forty-eight percent of the manuscripts came back with unanimous agreement to accept; 80% of these manuscripts ended up being accepted.

Forty-five percent of the manuscripts came back with reviewer disagreement on rejection or acceptance; 71% of these manuscripts ended up being rejected. All is in working order, right? Reviewer recommendations seem to closely inform editorial decisions.

Not so fast. Look closely at these numbers. They show that although the probability of two reviewer recommendations agreeing with each other only slightly exceeds what would be expected by chance, editors nevertheless rely heavily on those recommendations.
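To see what "agreement barely above chance" means concretely, here's a minimal sketch of Cohen's kappa, the standard chance-corrected agreement statistic (the study itself reports chance-corrected agreement; the reviewer pairs below are hypothetical, not the paper's data). Note how raw agreement can look high even when kappa is low:

```python
# Sketch: quantifying "agreement only slightly above chance" with
# Cohen's kappa. The reviewer pairs below are hypothetical examples,
# not data from the Kravitz et al. study.

def cohens_kappa(pairs):
    """Chance-corrected agreement for two raters' paired verdicts."""
    n = len(pairs)
    # Raw (observed) proportion of pairs where both raters agree
    observed = sum(a == b for a, b in pairs) / n
    # Each rater's marginal probability of saying "accept"
    p1 = sum(a == "accept" for a, _ in pairs) / n
    p2 = sum(b == "accept" for _, b in pairs) / n
    # Agreement expected if the raters were statistically independent
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - expected) / (1 - expected)

# Hypothetical reviewer pairs: raw agreement is 70%, which sounds
# reassuring...
pairs = ([("accept", "accept")] * 6 + [("accept", "reject")] * 2
         + [("reject", "accept")] * 1 + [("reject", "reject")] * 1)

# ...but because both reviewers lean "accept" anyway, much of that
# agreement is expected by chance; kappa comes out around 0.21.
print(round(cohens_kappa(pairs), 2))
```

The lesson is that high raw agreement between reviewers tells you little when most reviewers lean the same way; the chance-corrected figure is what the study's critique rests on.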

Furthermore, across all of the manuscripts, the same reviewer was more likely to make consistent recommendations on different manuscripts than different reviewers were to agree on the same manuscript. Taken together, it's clear that drawing a "tough" reviewer increases a manuscript's odds of rejection; the effect of these "tough" reviewers is not averaged out.

What can be done?

The scientists propose that journals can attempt to educate reviewers on how to conduct a proper peer review, or consider abolishing peer review recommendations altogether. I personally question whether either one will work, for most journals.

I've known some professors who put very little time into their reviews because it takes time away from their research. They brag about it too; being able to complain about "not having the time" for peer review (or anything else that has no effect on either tenure, grant money, or "prestige") is a status symbol.

These professors (not all professors) are contributing to the problem they complain about so much. You can lead a jackass to water, but you can't make him drink.

Additionally, it's usually easy to tell whether a reviewer approves or disapproves of a manuscript from the commentary alone. Removing the explicit recommendation to accept or reject may not stop an editor from following the implied advice anyway.

I personally recommend taking the approach of PLoS ONE, which is to accept all manuscripts as long as they're technically sound. The peer review process definitely has its place, but instead of allowing reviewers to make or break a solid body of research, let the broad scientific community decide for itself whether a manuscript is a worthy contribution.

For more information:
Kravitz, R. L., Franks, P., Feldman, M. D., Gerrity, M., Byrne, C., & Tierney, W. M. (2010). Editorial peer reviewers' recommendations at a general medical journal: Are they reliable and do editors care? PLoS ONE, 5(4), e10072. doi:10.1371/journal.pone.0010072