
A former editor of the prestigious scientific journal the BMJ has recently argued that the peer review process for scientific publications is broken and should be abandoned. He argues (my emphasis):

Richard Smith, who edited the BMJ between 1991 and 2004, told the Royal Society’s Future of Scholarly Scientific Communication conference on 20 April that there was **no evidence that pre-publication peer review improved papers or detected errors or fraud**.

Referring to John Ioannidis’ famous 2005 paper “Why most published research findings are false”, Dr Smith said “most of what is published in journals is just plain wrong or nonsense”. He added that an experiment carried out during his time at the BMJ had seen eight errors introduced into a 600-word paper that was sent out to 300 reviewers.

“No one found more than five [errors]; the median was two and 20 per cent didn’t spot any,” he said. “If peer review was a drug it would never get on the market because we have lots of evidence of its adverse effects and don’t have evidence of its benefit.”

Is he right that there is no evidence that peer review improves the quality or reduces the errors of scientific publications?

Update and clarification

I'm guilty of expressing this question in a way that missed the point it should be addressing. The question as worded above allows the interpretation that any improvement ever in the history of scientific publishing would prove the claim false. That wasn't my intent.

There is a serious argument (perhaps expressed too strongly by Smith) that peer review does, on balance, a rotten job. We expect it to eliminate serious statistical errors, conclusions that the evidence doesn't actually support, and other related mistakes. He argues it does a poor job of that and that there are alternatives.

I hoped people would try to answer this question by addressing the underlying evidence, not by aiming for the rhetorically satisfying hit of treating any proof that a paper was ever improved as a satisfactory answer. Smith shows that the majority of known errors are missed: most reviewers miss most errors, and some miss all of them.

What I hoped was that answers might address whether Smith's evidence is any good and whether other people have similar or contradictory evidence. That evidence will be the same whether or not we all agree on the detailed purpose of peer review. And it is an important question for this site as it puts a lot of weight on the "peer-reviewed" scientific literature.

I'm adding this as an update as I suspect that rewriting the whole question would just annoy those who have already answered.

matt_black
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/57992/discussion-on-question-by-matt-black-is-there-no-evidence-that-pre-publication-p). – Oddthinking May 01 '17 at 10:37

2 Answers


Pre-publication peer review is not supposed to catch all errors or mistakes. It is a relatively superficial review (reviewers are not generally going to perform a rigorous study to validate or invalidate the findings at the pre-publication step) to check that basic standards of professionalism have been followed before publication.

Peer review is not currently designed to detect deception, nor does it guarantee the validity of research findings. It should, however, identify flaws in the design, presentation, analysis and interpretation of science and provide prompt, detailed, constructive criticism to improve research. (Lee, Bero, Nature (2006))

Nature (2006): Ethics - Increasing accountability; peer review debate

The actual post-publication process is the part that really is "peer review": this is where methods, data and conclusions are rigorously examined and tested, and where judgments about the validity of papers are really made. No scientific finding is considered valid or accepted until other scientists have examined, critiqued and attempted to replicate the results, using either the same methods as the original experiments/studies or different ones.

The process of peer review does not end after a paper completes the pre-publication peer review process. After being put to press, or having been digitally published, the process of peer review continues as publications are read.

Wikipedia: Scholarly peer review

By his own examination, 80% of reviewers caught errors in the submitted paper, so in 80% of cases the paper would have undergone some correction, or would not have been published as it was, and that's before the more rigorous post-publication review process. That is quite different from "no evidence" of improvement. Mind you, catching some, but not all, errors is still an improvement.
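
As a back-of-the-envelope illustration of what those numbers imply for a multi-reviewer panel, here is a minimal sketch. It assumes, purely for illustration, that each reviewer catches each seeded error independently at the median rate of 2 in 8; that independence assumption is mine, not part of Smith's data:

```python
# Toy model of Smith's experiment: 8 errors seeded into a paper, with the
# median reviewer finding 2, giving an assumed per-error detection
# probability of p = 2/8 per reviewer (an illustrative assumption).
p = 2 / 8
n_errors = 8

# Chance a single reviewer finds none of the 8 errors
# (Smith reported that 20% of reviewers spotted none).
print(f"P(one reviewer finds nothing) = {(1 - p) ** n_errors:.2f}")  # ~0.10

# Chance a given error slips past a panel of k independent reviewers.
for k in (1, 2, 3):
    print(f"P(error missed by a panel of {k}) = {(1 - p) ** k:.3f}")
# With three independent reviewers, roughly 58% of such errors would be
# caught by at least one of them, even at this low individual rate.
```

Even this crude model suggests that a panel of reviewers catches substantially more than any individual reviewer does.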

Sounds like his claims don't match his data. Perhaps if he had submitted his speech as a paper, it would not have been published in a peer-reviewed journal.

PoloHoleSet
  • I think you are redefining what "peer review" is. In the usual use of the term it does not refer to post-publication challenge of results. That's the scientific process (when it works), not what we normally call peer review. – matt_black Apr 26 '17 at 22:45
  • The term "peer review" can mean different things in a professional academic setting depending on context. This is why your own source qualifies one form as "pre-publication" peer review. In other contexts, it can be understood that your community of peers critiquing the work, attempting to reproduce the results, and publishing their follow-ups is also a form of peer review. Note that most scientists that I'm familiar with use the term "refereed" to refer to pre-publication peer review to better distinguish between the two. – KAI Apr 26 '17 at 23:55
  • The "refereed" term also highlights the correct points from this response - the point of pre-publication peer review is not to detect fraud or even to guarantee correctness. It is to ensure that the study meets some minimal standards - only obvious flaws are generally found. The greater community ultimately will judge correctness. – KAI Apr 26 '17 at 23:56
  • @matt_black I don't think you have read the answer properly; I read PoloHoleSet's answer as saying that pre-publication review is not supposed to detect errors; that is the purpose of post-publication review. The answer also points out that the editor's own study refutes his claim, as the reviewers did spot some of the errors, and thus it does improve the papers and/or detect some of the errors. For some papers, the error ***can't*** be reliably detected without repeating the study, which the reviewer simply doesn't have the time or resources to do. –  Apr 27 '17 at 06:37
  • @KAI my source makes no distinction about types of peer review. It simply uses a very clear piece of terminology. And that is what the question is about. You can't use their language to argue that they agree most peer review happens after publication. Besides, the distinction is irrelevant, as the question is about pre-publication peer review, not anything else scientists do. – matt_black Apr 27 '17 at 08:28
  • @DikranMarsupial I read the quoted evidence very differently. Deliberately introduced errors in papers that *should* have been spotted by pre-publication review usually were not. And while we don't expect peer review to replicate results, we should expect it to spot obvious errors in statistics and design. – matt_black Apr 27 '17 at 08:35
  • You do what many papers do when you say that "80% caught errors in the submitted paper": you use a misleading statistic. That is the number of reviewers who spotted *any* errors. On average only 2 in 8 errors were caught and even the best reviewer let 3/8 through. That isn't a very positive picture of the effectiveness of review. – matt_black Apr 27 '17 at 08:39
  • @matt_black scholarly peer review is not supposed to be a guarantee of finding all errors in a paper. Whether they *should* be picked up or whether they are *obvious* is a matter of opinion, however a peer review that identifies *any* error will have *improved* the paper, and hence the results refute the hypothesis as posed. –  Apr 27 '17 at 08:39
  • @matt_black "you use a misleading statistic. " no, not at all, the original claim was "no evidence that pre-publication peer review improved papers or detected errors or fraud.". If you detect even one error, you have improved the paper, and the question as posed does not say pre-publication peer review detects all errors, just "errors". Before accusing others of using misleading statistics, perhaps you should consider whether you have understood the point being made. –  Apr 27 '17 at 08:47
  • @DikranMarsupial I didn't accuse *you* of using a misleading statistic; I was responding to the answer (so PoloHoleSet). Besides, you insist on a rigid binary interpretation of Smith's argument rather than a more reasonable one that looks at the other statistics. If the process we used to approve medicines missed *any* problems 20% of the time and missed the majority of the problems the rest of the time, most people wouldn't interpret that as a successful process. Your binary logic would. – matt_black Apr 27 '17 at 08:57
  • I think it is also worth pointing out that papers generally have more than one reviewer (normally three in my field), which suggests that between them they are likely to catch more of the errors than any of them do individually. –  Apr 27 '17 at 08:58
  • @matt_black you need to apologise to PoloHoleSet then, as there is nothing in their answer that uses a misleading statistic. As I pointed out, the original claim doesn't require that peer review catches all errors. I disagree that your interpretation of the claim is more reasonable; it is more extreme, as you appear to want review to detect **all** errors, rather than just "detect errors", which is what Smith actually wrote. If your last point were reasonable, we wouldn't have need of meta-analysis, which is very commonly used in medical journals. –  Apr 27 '17 at 09:04
  • If an individual reviewer has an 80% chance of finding one error, and reviewers are independent, then if you have three reviewers, the chance of none of them identifying an error is (1-0.8)^3 = 0.008 (i.e. 0.8%). Of course the process used to approve medicines is far more careful than the process used to approve papers for publication, but it would be ridiculous for it to be any other way as it would require huge resources to be put into the review of papers entirely disproportionate with the consequences of getting it wrong (unlike those for medicines). –  Apr 27 '17 at 09:22
  • @matt_black - it's not really a "clear piece of terminology" if it has many different meanings, depending on the context. Also, there's nothing misleading, since I explicitly point out that I'm not claiming that they are catching all errors. It would be more misleading to claim that catching some errors is "no improvement." I'm not redefining anything, I'm quoting what others say it is. – PoloHoleSet Apr 27 '17 at 13:35

While many agree that the current peer review system is not as effective as people would like it to be, it is not true that there is "no evidence that pre-publication peer review improved papers or detected errors or fraud". Authors overwhelmingly say that peer review improved the quality of their own last published paper.

In a 2009 survey of academics [1],

Ninety-one percent (±0.9%, p < 0.05, n = 4,037) of respondents agreed that the review process improved the quality of the last paper they published.

They further indicated which aspects of their last paper were improved by peer review:

[Chart: percentage of respondents reporting that peer review improved each aspect of their last published paper]

Similarly, in a 2008 survey of academics [2] supported by the Publishing Research Consortium, a group representing publishers and societies interested in research on scholarly communication,

the large majority of authors (around 90%) were clear that peer review had improved their own last published paper and a similar proportion agreed with the more general statement ‘peer review improves the quality of the published paper’.

These respondents were also asked to identify the specific area of improvement in their own last published paper due to peer review:

[Chart: specific areas of respondents' own last published papers improved by peer review]

(Notably, 64% said that peer review of their last published paper had identified scientific errors.)

This is not evidence based on a blinded controlled study, so it is not the highest standard of evidence, but it is evidence nonetheless.

There is, however, general agreement that peer review is not as effective at detecting fraud and plagiarism as academics think it should be.

In [1],

Eighty-one percent (±1.2%, n = 4,037, p < 0.05) expect peer review to detect plagiarism, but just 38% (±1.5%, p < 0.05) feel that the current system is able to do this. Similarly, 79% (±1.3%, p < 0.05, n = 4,037) would like peer review to detect fraud, compared with 33% (±1.5%, p < 0.05, n = 4,037) who feel it is successful in this aspect.
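
Those margins of error (and the ±0.9% quoted earlier) are consistent with a standard 95% confidence interval for a proportion. Here is a minimal sketch of the check; the normal approximation and z = 1.96 are my assumptions about how the survey computed its figures:

```python
from math import sqrt

# Normal-approximation 95% CI half-width for a sample proportion:
# z * sqrt(p * (1 - p) / n), with z = 1.96 and the survey's n = 4,037.
def margin(p, n=4037, z=1.96):
    return z * sqrt(p * (1 - p) / n)

for p in (0.91, 0.81, 0.38, 0.79, 0.33):
    print(f"p = {p:.2f}: +/- {100 * margin(p):.1f}%")
# Output matches the reported figures: 0.9%, 1.2%, 1.5%, 1.3%, 1.5%.
```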

And in [2], far fewer researchers surveyed said that peer review is effective at detecting fraud and plagiarism, than said it improves paper quality:

[Chart: researchers' ratings of peer review's effectiveness at detecting fraud and plagiarism versus improving paper quality]

Whether peer review is "broken", and whether it is worth the costs associated with it, are matters of opinion. (Researchers' opinions on this are also included in the two surveys cited below.)


References

[1] Mulligan, Adrian, Louise Hall, and Ellen Raphael. "Peer review in a changing world: An international study measuring the attitudes of researchers." Journal of the American Society for Information Science and Technology 64, no. 1 (2013): 132-161. DOI: 10.1002/asi.22798

[2] Ware, Mark. "Peer review: benefits, perceptions and alternatives." Publishing Research Consortium 4 (2008). URL

ff524
  • Whether *authors* like peer review isn't the same as whether peer review does a good job for science as a whole. Indeed, one major criticism is that the process is nepotistic, allowing bad publications from small communities of peers who agree with each other while often rejecting interesting, controversial but well-conducted work from outsiders. But, apart from that, a useful answer. – matt_black Apr 27 '17 at 09:03
  • @matt_black The question doesn't ask if peer review is good for science (that's a matter of opinion), it asks if peer review improves paper quality. "Author perception of paper quality" is a valid quality metric (though certainly, not the only one). – ff524 Apr 27 '17 at 09:12