Regarding “formal peer review”: the same goes for writing source code or novels. It’s easy to automate linting and typesetting. It takes moderate effort to verify that some formal requirements of “good code”, a “promising novel”, or “substantial science” are met. It’s harder, but still possible, to run the reviewed code, try a novel on a small audience, or reproduce scientific research. It’s next to impossible to distinguish a valuable artifact from white noise.
“People tend to stress the ability of ‘formal peer review’ to set apart the good work from the less significant work. People greatly overestimate the value of formal peer review.”
The reject/revise/accept classification process is clearly biased towards conservatism. Both the best and the worst papers are often rejected. But formal peer review is more than this reject/revise/accept classification. I have benefited greatly from the detailed comments of reviewers. This detailed feedback is far more important than the simple reject/revise/accept classification. Some publishers, such as PLOS, de-emphasize the reject/revise/accept classification and focus much more on the detailed comments of reviewers. It would be a huge service to science if all publishers followed the example of PLOS. In particular, reviewers are often good at spotting errors and lack of clarity, but they are often very bad at estimating the importance of a paper, especially if the paper has some truly new ideas.