Peer Review Review

From a variety of sources, I have learned about this not-at-all recent study. From the abstract:

A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables. 
The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices. 
With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.
While I don't want to get carried away or anything, this would seem to be a somewhat serious indictment of the peer-review practices of the psychology journals punked, er, investigated by the researchers. Looks bad. The best-case scenario is that publishing in these journals is a crapshoot where the odds are an abysmal eight-to-one against; the worst-case scenario is that we're all getting butchered.

However, I was a little disappointed that the study didn't include anything in the way of a control group. It seems to me that the design would have been stronger if they'd substituted (actual) prestigious institutional affiliations on some of the articles, instead of using fictitious institutions for all of them. And, along the same lines, it would have been better still if they'd given the same treatment to a group of papers originally published in C-level journals: submitted them to the high-level journals, half with affiliations at fictitious institutions and half with affiliations at real, prestigious ones. A study with that design would be a lot more conclusive. (Not to say 'conclusive.')
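To make the proposed control explicit: what I have in mind is a 2x2 design, crossing where the paper originally appeared (top-tier vs. C-level journal) with the affiliation attached on resubmission (fictitious vs. real, prestigious). Here is a minimal sketch of that layout in Python; it is purely illustrative, not part of the actual study, and all names and any data you might feed it are hypothetical placeholders.

from collections import defaultdict
from itertools import product

# The two factors of the proposed (hypothetical) follow-up design.
ORIGINAL_TIER = ("top-tier", "C-level")
AFFILIATION = ("fictitious", "prestigious")

# The four cells: every combination of original tier and resubmission affiliation.
design_cells = list(product(ORIGINAL_TIER, AFFILIATION))

def acceptance_rates(decisions):
    """decisions: iterable of (original_tier, affiliation, accepted: bool) records."""
    totals = defaultdict(lambda: [0, 0])  # cell -> [number accepted, number submitted]
    for tier, affiliation, accepted in decisions:
        cell = (tier, affiliation)
        totals[cell][1] += 1
        if accepted:
            totals[cell][0] += 1
    return {cell: (acc / n if n else None) for cell, (acc, n) in totals.items()}

if __name__ == "__main__":
    print("Design cells:", design_cells)
    # With real resubmission outcomes filled in, comparing acceptance rates across
    # the affiliation factor while holding original tier fixed would help separate
    # institutional-prestige bias from the quality of the manuscripts themselves.

The point of the fictitious-vs.-prestigious comparison within each tier is that it isolates the affiliation effect; the original study, with only fictitious affiliations, can't distinguish prestige bias from garden-variety unreliability.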

(Also, a commenter at Philosophers' Cocoon says that the journals investigated all practiced non-blind review procedures. I'm working from home today, and am unwilling to jump through the hoops I'd need to in order to read the article, so I'm just going to take her word for it. But if that's right, it takes almost all of the "wow" factor away. It's still kind of bad that they didn't recognize the articles as ones they had already published, but if your job is mostly to receive submissions, send them out to review, and deal with the results, it's easy to imagine that you wouldn't catch on to something like that. I, for one, wouldn't be on the lookout for it.)

--Mr. Zero
