Daniel Lemire's blog


Is science more art or industry?

11 thoughts on “Is science more art or industry?”

  1. If the reviewers’ point of view is merely tainted (in an “honest” way), we don’t need double-blind reviews. We already require 3-4 different people to evaluate a paper in order to average out any reasonable bias related to a paper’s *content*.

    In my view, double-blind reviews are a tacit acknowledgment that some reviewers are not just biased, but downright hostile and dishonest when evaluating other people’s work when they know who they are. We should be concerned that we let such people review papers in the first place.

    A conference which I won’t name even had to let each member of the PC select one paper to be accepted without argument. I presume the reason for this measure is that paper reviews regularly degenerate into cockfights, to the point of being dysfunctional.

    Bottom line: having to patch your review process is probably a sign of a deeper problem in your community that you are avoiding addressing.

  2. @Hallé

    To some extent, people react how we expect them to react. We keep the reviewers anonymous so that they can be nasty to the authors without fear of reprisal. And guess what happens?

    The assumption, it seems, is that people would not be openly critical in the open. But we know that to be false. For example, most of us got our Ph.D.s under an open process. You know who reads your thesis, and they know who you are. [After all, we do put our names on our Ph.D. theses, don’t we?] Well… nobody that I know has ever hinted that getting a Ph.D. was a joke. It is hard work and much rigor is needed. The system works rather well. Getting a Ph.D. does not mean that you are a genius, but generally speaking, most people who have a Ph.D. have done a sizeable chunk of scientific work.

    It works because external reviewers will come down hard on a Ph.D. student when needed. Why? Because nobody wants to have been the external on a “garbage Ph.D. thesis”. It looks bad on a c.v.

    Overwhelmingly, external reviewers, when they are not anonymous, try to be fair… because they know that being unfair would tarnish their reputation. You don’t want to sink a good student out of spite, because it can come back to haunt you later.

    As I wrote, we need more transparency, not less. This will make communities healthier.

  3. @Daniel: I agree with your view of openness. I once stumbled upon this text:

    http://www.cs.rutgers.edu/~muthu/ccmfun.pdf

    Go look at p. 9, section “E”. The author suggests that the reviewing process be made public, so that the reviews of both accepted and rejected papers would be freely available in some form of centralized repository. I would totally go for such an open model. We have all received reviews that were plain nasty, and I bet the reviewers would be more nuanced if their responses were published.

  4. Paul says:

    Anonymity != Interchangeability. Even on the assembly line there are varying levels of skill. One worker might thread a screw perfectly every time; another might leave it loose now and again. That mistake could cause my GPS to die after 6 months. I have no knowledge of who put my technology together, but it can certainly matter. Thus anonymous authorship by no means implies that you could evaluate a researcher by article count. You could, however, read anonymized articles by all candidates, and rank them that way.

    What if Einstein hadn’t signed his papers? From a human-drama angle, sure, our cache of scientific stories would be poorer without the Annus Mirabilis. But if nobody had known that the stochastic explanation of Brownian motion and special relativity came from the same person, would science have been held back?

    Which isn’t to say that we should write anonymous articles. From a PR standpoint, from a practicality standpoint, from a transparency standpoint, it all makes sense to sign your work. But I, personally, don’t see the signature as at all central to the process of science. Maybe it’s because I exist outside academia, but when I need to learn about the cutting edge on some topic, I find the relevant papers and read them, without particularly noticing who wrote what.

  5. Pradeep Padala says:

    @Daniel Nice arguments. You have probably seen this already, but for the readers of the blog: there is a workshop on organizing conferences (http://www.usenix.org/events/wowcs08/index.html), with a good collection of experiences in conference organizing.

  6. @Pradeep

    A workshop about organizing workshops?

    What next, we are going to have papers about writing papers?

    ;-)

  7. Pradeep Padala says:

    “A workshop about organizing workshop?”

    I actually wish there were more workshops like these to bring more transparency. This is especially useful for new grad students and some research communities that don’t have a good ecosystem.

    “What next, we are going to have papers about writing papers?”

    🙂 There are plenty of papers like those already.

  8. Denzil says:

    Daniel,

    “Let us be candid here. When reviewing research papers, there is no such thing as objectivity. ”

    If objectivity isn’t maintained, I don’t see how any type of review process would help. I would rather review work in the context of the conference than in the context of the author’s previous work. As I mentioned in your earlier blog post, the proposed review system is heavily loaded in favor of people who have worked in an area for some time and may, hence, act as an extra barrier to new entrants.

  9. Itman says:

    What next, we are going to have papers about writing papers?

    Actually, not a bad idea. If I had to write such a paper, I would start it with: Never, ever, ever use Microsoft Word!

    I support this idea of open reviews. It certainly would not stop people from being critical, but it would make them polite and constructive. If reviewers make bogus claims, they must be held responsible for them.

  10. @Pradeep I agree it is a good idea. I was joking.

  11. alain_desilets says:

    Since 2009, the Agile conference has been using a completely open, collaborative process for selecting most of its content. I am not sure that they have applied this process to ALL tracks (I think the more academic track still follows a traditional process).

    http://agilepainrelief.com/notesfromatooluser/2011/04/how-we-reviewedagile-2011-coaching-stage.html

    I participated in the 2009 edition (both as a reviewer and a submitter), and I have to say that this process did have the advantage of offering early feedback to authors. I am not sure that it is necessarily fairer, or that it leads to the best papers being selected in the end. It’s too complex a question to be answerable through a single experiment like that.