Creating incentives for better science
Popper argued that science should be falsifiable. To determine truth, we simply try to disprove a hypothesis until we are exhausted. It is a nice theory, but actual science does not follow this process. I read many funding proposals, and I have yet to read one that says: "This other guy came up with a hypothesis and wrote about it. We are going to try to prove him wrong." In science, everyone is busy promoting their own ideas and gathering evidence to convince others. Publishing a negative result is very difficult, so there is little incentive to disprove ideas. There is simply not as much falsification going on as Popper thought.
Instead, much of the truth of modern science is determined by authority. In this sense, we have not evolved far beyond the Middle Ages. For example, whenever someone makes a controversial scientific argument, you can be certain that someone will ask to see the "peer-reviewed version". This blind faith in peer review is dangerous. Indeed, peer review is an honor-based system: it is implicitly assumed that the authors have done everything possible to check their work. No reviewer will redo the experiments or check every tiny mathematical argument. Moreover, between an unexciting article reporting mundane findings and an exciting article making bold claims, journals will often prefer the latter even though it is much more likely to be wrong. In fact, all scientists learn to "boost" their claims as much as possible to improve their chances during peer review. Carefully reporting all the limitations of your work is probably unwise.
Though it might be that truth eventually wins in science, in the long run, we are all dead. So what should we do? Nosek et al. have practical recommendations:
- Make it easier to publish results. This appears counter-intuitive. Surely, raising the barrier to entry should make science more reliable? The problem with strict filters is that they create an incentive for volume. That is, because it is difficult to publish a research paper, the more you publish, the higher your status. In principle, there is no reason to believe that highly prolific researchers are less reliable. In fact, if you write many research papers, then you are probably good at it. But consider the evidence. It used to be that only a few people in the world could publish tens of major research articles per year. These days, we have one or several such researchers per department. In fact, today, many assistant professors have published more than star researchers like Claude Shannon, John Nash or Richard Feynman did in their entire careers. Maybe we spend too much time on the publication process, and too little time on the actual science? Moreover, if it were easier to publish, we could expect to see more articles whose focus is not novelty, but rather assessing our current knowledge. More introspection could only be beneficial. Next time you are involved in determining the acceptance rate at a conference or a journal, consider promoting an increase. Also, do not be so quick to dismiss publications on arXiv: a good paper is a good paper wherever you find it.
- Do not use formal peer review to predict significance. Though it is often believed that peer review is meant to verify whether the work is correct, its actual purpose in practice is to determine what is likely to be important. Unfortunately, I believe that our ability to predict which work will prove important is quite limited. Could you really have recognized the seminal work of the past as seminal at the time? By trying to predict significance with committees of experts, we encourage people to focus on the positive and ignore the negative. We also focus on novelty at the expense of deeper work. Some would argue that we need filters because we cannot read everything. Indeed, we do need filters. However, we have years of experience building automated recommender systems, and we find that manual curation by experts is only one of several effective strategies. Journals like PLoS One are showing that it is quite practical and effective to publish more papers without trying to select only those most likely to be significant. Whenever you are asked to review a paper, please be humble and do not spend too much time guessing its long-term significance. Focus on assessing its overall quality irrespective of how exciting its findings are. I call this being generous.
- Be clearer about how scientists are assessed. I have participated in a few committees to assess both young and senior researchers. Rarely is the number of research papers a determining factor. Many other scientists have had the same experience. Routinely, scientists with moderate productivity are hired or promoted over scientists who published more. For example, I was recently able to renew my federal research grant. Comparing myself with colleagues who publish two or three times as many research papers, I find that I am funded on par with them. That is, at least in this particular case, the actual number of research papers I wrote was not a determining factor in how I was assessed. I am not advocating that we tell young researchers to publish sparingly. In science, if you don't publish, you don't exist. But, similarly, if you don't breathe, you die, yet breathing more does not make you more alive. Increasing your number of research papers without increasing their quality might not help you as much as you think. Consider how many new products Apple markets every year: a small handful. If it is enough for Apple, it might be enough for you.
Source: Thanks to Marcel Blattner for pointing out the original article to me.