Trading latency for quality in research
I am not opposed to the Publish or Perish mantra. I am an academic writer. I am what I publish. We all think of researchers as people wearing laboratory coats, working on exotic devices. And my own laboratory includes a one-million-dollar computer cluster with a SAN server as large as a fridge. I also produce a lot of software. But you know what? The writing is what matters.
And publishing is easy. Write and submit many papers conforming to the expectations of the editors. Eventually, some of your work will be accepted. And there are thousands of journals, conferences and workshops. Just write a lot.
Yet you should not publish everything you write, even when what you wrote looks like a research paper. Hold on to it. Publishing everything that looks like a research paper leads to what Feynman famously described as Cargo Cult Science. Indeed, there is a real danger that we become so good at faking science that we are no longer doing science at all! We become dishonest.
In our haste to be published…
- we cut corners in our experiments, when we validate our ideas at all;
- we pretend that our work is applicable in the real world, when it isn’t;
- we don’t take the time to reproduce and reflect on known results;
- we report the positive aspects of our research while omitting the negatives;
- we overcomplicate the issues so that our research looks fancier;
- we get lost in abstract nonsense.
If you want your work to really matter, you should be honest. You should not fool yourself or others. So what do we do? Maybe we should publish carefully. While barely reducing our output rate as academic writers, we can introduce extra steps to keep us honest. What do we need?
- Diverse points of view: it is easy to fool a small group of like-minded experts, but comparatively more difficult to fool the readers of my blog.
- Time to reflect: if you reread what you wrote months ago and feel no urgency to communicate it more broadly, maybe it wasn’t all that good to begin with.
The problem is that once a paper is published in a journal or a conference, we tend to move on. In any case, we cannot easily revise our published work. Are there other models? Economists regularly publish working papers, commonly known in Computer Science as technical reports. But the difference between computer scientists and economists is that economists revise their working papers. Only when their work has stood the test of time, that is, has been freely available for months or years, do they submit it to conventional peer review.
This year, I will try the following experiment. Both on this blog and on my publication page, I will “publish” working papers and specifically ask readers to be critical of my work. Only after a couple of months have passed (or more) will I submit my work to a journal or conference.
This will introduce some latency in my publication output. Can I trade latency for quality? I plan to report back in a year on this (very public) experiment.
Further reading: Time for computer science to grow up by Lance Fortnow.