Science is self-regulatory… really?
Many theoretical systems are self-regulatory. For example, in an ideal free market, prices adjust until supply and demand balance and everyone gets a fair price. But such perfectly free markets are a mathematical abstraction.
The business of science should also be self-regulatory. Scientists who produce bad work should acquire poor reputations. We have journals with strict peer review that are supposed to filter out insignificant and poor work. Yet I believe that the business of science fails to be self-regulatory. Ioannidis et al. (2010) make several observations that support my belief.
- Myth 1: peer review is a sign of quality. There is what looks like a never-ending “bubble” in academic publishing: nowadays, some authors co-author more than 100 papers annually. Some of these researchers published only 3 or 4 papers per year until their mid-forties or fifties. How do you explain that so many researchers suddenly became so prolific? The increasing competition for funding and jobs has a role to play… but can competition increase the productivity of researchers that much? It is doubtful. The dirty little secret in science is that you can endlessly resubmit your work to as many journals as you want. In fact, you can even submit similar papers to several different journals simultaneously. Eventually, if only through random chance, your work will be accepted. Why do people expect such a system to be self-regulatory? There is no penalty for writing poor papers, ignoring reviewers, or publishing junk… but there are great rewards for being prolific. As long as you stay away from outright fraud, there is no price to pay for empty work. The truth is that peer review is not regulation; peer review is an honor-based system: it only works well when both reviewers and authors are committed to the greater good. But there is no penalty for being evil! If I spot fraud or a substantial failure in a paper I review, the authors can simply resubmit the paper elsewhere, free of charge! Compare this with blogging: I could write 12 boring blog posts a day… and what would happen? People would quickly start ignoring me. I have a strong incentive to limit the frequency of my posts and keep the quality high if I want to attract and retain readers. And before you object that journals face the same incentive, consider that journals are not assessed on their readership: at best, they are measured by the number of citations they receive (the so-called Impact Factor).
- Myth 2: even if journals publish junk, counting how many citations a paper has received will tell you how good it is. This assumes that authors cite only the very best work, after reading it carefully. According to Ioannidis et al., almost anything gets cited: two decades ago, only 45% of papers indexed in the Web of Science received at least one citation within 5 years; by contrast, 88% of medical papers published in 2002 had been cited by 2007. Almost anything published this year will eventually be cited. But what about looking at how often papers get cited? The problem is that if citations are attributed at random, and you publish many papers, then eventually you will get lucky and collect a few highly cited papers (see the short simulation after this list). It is not a matter of producing quality, just quantity. And, of course, I frequently find great papers that have received few or no citations. The hard truth is that there is no substitute for reading the papers!
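To make the luck-of-the-draw argument concrete, here is a minimal sketch of my own (not from Ioannidis et al.): assume every paper, good or bad, draws its citation count from the same heavy-tailed distribution, and compare an author who publishes a handful of papers a year with one who publishes a hundred. The distribution and the publication rates are assumptions chosen purely for illustration.

```python
import random

random.seed(42)

def citations():
    # Toy assumption: every paper draws its citation count from the same
    # heavy-tailed (Pareto-like) distribution, regardless of its quality.
    return int(random.paretovariate(1.5)) - 1

def most_cited(papers_per_year, years=10):
    # Citation count of the author's single most-cited paper over a career.
    return max(citations() for _ in range(papers_per_year * years))

careful = most_cited(papers_per_year=4)     # modest output
prolific = most_cited(papers_per_year=100)  # prolific output

print("careful author's most-cited paper: ", careful)
print("prolific author's most-cited paper:", prolific)
# Even though no paper is "better" than any other in this model, the
# prolific author almost always ends up with a few highly cited papers,
# by sheer volume.
```

Under these assumptions, the prolific author's best-cited paper typically dwarfs the careful author's, even though every paper was generated by the same random process: quantity alone manufactures apparent "hits".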
So you disagree with me? You believe that the business of science is self-regulatory? Then please, explain the mechanism.