Is peer review slowing down science and technology?
Ten years ago, a team led by Irina Conboy at the University of California at Berkeley showed something remarkable in a Nature paper: if you take old cells and put them in a young environment, you effectively rejuvenate them. This influential work has been cited hundreds of times. It shows that vampire stories have a grain of truth in them. It seems that old people could be made young again by using the blood of the young. But unlike vampire stories, this is serious science. So maybe there are “aging factors” in the blood of old people, or “youth factors” in the blood of young people. Normalizing these factors to a youthful level, if possible, could keep older people from getting sick. Even if that is not possible in the end, the science is certainly fascinating.
So whatever happened to this work? It was cited and it led to further academic research… There were a few press releases over the years…
But, on the whole, not much happened. Why?
One explanation could be that the findings were bogus. Yet they have proven remarkably robust.
So why did we not see much progress in the last ten years? Conboy et al. have produced their own answer regarding this lack of practical progress:
If all this has been known for 10 years, why is there still no therapeutics?
One reason is that instead of reporting broad rejuvenation of aging in three germ layer derivatives, muscle, liver, and brain by the systemic milieu, the impact of the study published in 2005 became narrower. The review and editorial process forced the removal of the neurogenesis data from the original manuscript. Originally, some neurogenesis data were included in the manuscript but, while the findings were solid, it would require months to years to address the reviewer’s comments, and the brain data were removed from the 2005 paper as an editorial compromise. (…)
Another reason for the slow pace in developing therapies to broadly combat age-related tissue degenerative pathologies is that defined strategies (…) have been very difficult to publish in high impact journals; (…)
If you have never been subjected to peer review, it might be hard to understand how peer comments can slow down researchers so much… and even discourage entire lines of research. To better understand the process… imagine that you have to convince four strangers of some result… and the burden is entirely on you to convince them… and if just one of them refuses to accept your argument, for whatever reason, he may easily convince an editor to reject your work… The adversarial referee does not even have to admit that he does not believe your result; he can simply say benign things like “they need to run larger or more complicated experiments”. In one of my projects, a referee asked us to redo all the experiments in a more realistic setting. So we did. Then he complained that they were not extensive enough. We extended them. By that time I had invested months of research on purely mundane tasks like setting up servers and writing data management software… then the referee asked for a 100× increase in the data sizes… which would have implied a complete overhaul of all our work. I wrote a fifteen-page rebuttal arguing that no other work had been subjected to such levels of scrutiny in the recent past, and the editor ended up agreeing with us.
Your best strategy in such cases might be to simply “give up” and focus on producing “uncontroversial” results. So there are research projects that neither I nor many other researchers will touch… I was reminded of what a great computer scientist, Edsger Dijkstra, wrote on this topic:
Not only does the mechanism of peer review fail to protect us from disasters, in a certain way it guarantees mediocrity (…) At the time it is done, truly original work—which, in the scientific establishment, is as welcome as an unwanted baby (…)
Dijkstra was a prototypical blogger: he wrote papers that he shared with his friends. Why can’t Conboy et al. do the same thing and “become independent” of peer review? Because they fear that people would dismiss their work as being “fringe” research with no credibility. They would not be funded. Without funding, they would quickly lose their laboratory, and so forth.
In any case, the Conboy et al. story reminds us that seemingly innocent cultural games, like peer review, can have a deep impact on what gets researched and how much progress we make over time. Ultimately, we have to allocate finite resources, if only the time of our trained researchers. How we do it matters very much.
Thankfully, since Conboy et al. published their 2005 paper, the world of academic publishing has changed. Of course, the underlying culture can only change so much; people are still tailoring their work so that it will get accepted in prestigious venues… even if it makes said work much less important and interesting… But I also think that the culture is being transformed. Initiatives like the Public Library of Science (PLoS), launched in 2003, have shown the world that you can produce serious, high-impact work without going through an elitist venue.
I think that, ultimately, it is the spirit of open source that is gaining ground. That is where the true meaning of science thrives: it does not matter who you are; what matters is whether what you are proposing works. Good science is good science no matter what the publishing venue is… And there is more to science than publishing papers… Increasingly, researchers share their data and software… instead of trying to improve your impact through prestige, you can improve your impact by making life easier for people who want to use your work.
The evolution of how we research may end up accelerating research itself…