Daniel Lemire's blog


Open Access is the short-sighted fight

22 thoughts on “Open Access is the short-sighted fight”

  1. @Tunkelang I’m glad that I have at least one person on my side! Thanks!

  2. Not surprisingly, I agree with you–though I’ll make a mental note that I have to call you out as silly one of these days. It makes sense for prestige to be scarce, but there’s no reason that publication should mirror that scarcity. We want to promote a system where anyone can publish findable research, but you have to earn reputation for your research to be sought after. I think we’ll inevitably get there, but the process will be very disruptive to both universities and the ecosystem of journals and conferences. Progress is fun!

  3. That’s an interesting take on my position.

    But it does offer the interpretation – not stated in my comment – that Harnad’s argument in effect dismisses work like mine as ‘not real academic work’.

    I think that if this is the case, then my objection is a bit more substantial than you suggest.

  4. Stevan Harnad says:

    PRENONS LES CHOSES DANS L’ORDRE [let us take things in order]

    (1) Green OA self-archiving means providing immediate, permanent, free access to the text of the refereed journal article online, not a “free preview.”

    (2) The work of Woody Allen is not an author give-away, done only for uptake, usage and impact, as all refereed research articles are. (That is why refereed research, and not “every digital product under the sun” is OA’s primary target content.)

    (3) Academic culture does not just reward the prestige of research publications, but their uptake, usage and impact. That’s what OA metrics measure and what OA mandates maximize.

    (4) PLoS is certainly prestigious (and impactful) enough: Its half dozen journals are, however, only 6 out of 25,000 peer reviewed journals, 80% of them not OA (and most of the OA ones nowhere near PLoS in quality). There’s a long, long way to go on that Golden road, and ever arriving successfully at the destination is very far from a certainty.

    (5) The only silly thing is to just keep pressing directly for publication reform instead of providing immediate Green OA by self-archiving (and mandating self-archiving). It is silly precisely because not only does it hold Universal Green OA back, but it holds back Gold OA publication reform too (for that will only follow universal Green OA, not precede it).

    (6) None of this has anything whatsoever to do with unrefereed self-publishing. No one is holding people back from doing that if they wish, and there’s no need to reform the publishing system in order to do it.

    (7) As I’ve said many times before, OA’s objective is to free peer-reviewed research from access-barriers, not to free it from peer review.

    (8) PLoS journals are all (rigorously) peer-reviewed.

  5. (1) If scientific culture rewarded readership and impact above all else, we would not have to force authors toward Open Access. You know full well that many researchers are just happy to have the paper appear in a prestigious journal. They will not make any effort to make their work widely available because they are not rewarded for it. Publishing is enough to receive tenure, grants and promotions. And the reward system is what needs to be fixed.

    (2) I love peer review. My blog is peer reviewed. You are a peer and just reviewed my blog post. And you made me feel good because you indicated, by posting this comment, that my blog matters enough to you for you to write a correction. (Of course, you are fully aware that I was fishing for it: I was just waiting for you to come by. But fair is fair.)

    (3) PLoS has different types of peer review where correctness is reviewed, but no prediction is made as to the perceived importance of the work. Let me quote them:

    “Too often a journal’s decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership — both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLoS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them).”

    (4) Moreover, PLoS does publish non-peer-reviewed material, see PLoS Currents: Influenza for example.

  6. Corrections:
    title: PRENONS
    (6) …doing that if they wish…
    (7) …before, OA’s objective is…

  7. (continued)

    I refuse to be limited to traditional peer review, that is, filtering early, as the high standard. We inherited this from another era and the time has come to question it.

    The high standard I adhere to is to produce high quality research outputs. The rest is unneeded artificial scarcity.

    As an aside, OA presently offers drastically limited access, by your own arguments, stated on your own blog. Open Access is to be limited to research articles, and should exclude books (your own books are not OA), conference proceedings and so on. If I am stranded without an acquisition budget, I can only access a small fraction of the science. I do not get the full picture. That is especially true in Computer Science where 80% of the research is published in conference proceedings which are excluded from Open Access policies.

    By preserving clique-based peer review and locking up books and proceedings, we are failing to move toward Open Scholarship which should be our only goal.

    Open Scholarship is the noble long term goal we must work toward. Open Access policies are short-sighted because they fail to address the real problem which is our very own culture and reward system.

  8. @Smith The same arguments you use could be made to justify limiting blogging to few certified individuals, and to require all blog posts to be privately screened and authorized.

    They can also be used to demonstrate that Wikipedia cannot work. After all, an online encyclopedia where anyone can edit the content is going to be dominated by “the obsequious or contentious”?

    And what might happen if anyone can post a research paper on arXiv, without an editor’s stamp of approval? Surely, most work will be of low quality? No?

    Well. Grigori Perelman happened, that’s what. Arguably the most important mathematical result of the century was posted on arXiv, without any trace of formal peer review.

    You see, once you lift the artificial barriers to publication, serious people have little incentive to publish nonsense. People who want to get ahead are forced to focus on the quality of their work once there is no artificial scarcity in the number of publication slots.

    You can no longer stand out by how many papers you published or by where you published them. You are left to worry about producing work that people will want to read, work that will prove useful.

    Artificial scarcities are inefficient. They lower the overall quality. No matter how we justify them.

  9. EMPTOR CAVEATS

    DL: “(1) If scientific culture rewarded readership and impact above all else, we would not have to force authors toward Open Access.”

    (a) University hiring and performance evaluation committees do reward impact. (It is no longer true that only publications are counted: their citation impact is counted and rewarded too.)

    (b) Soon readership (e.g., download counts, link counts, tags, comments) too will be counted among the metrics of impact, and rewarded — but this will only become possible once the content itself is Open Access (OA), hence fully accessible online, its impact measurable and rewardable. (See references cited at the end of this commentary under heading METRICS.)

    (c) OA mandates do not force authors toward OA — or no more so than the universal “publish or perish” mandates force authors toward doing and publishing research: What these mandates do is close the loop between research performance and its reward system.

    (d) In the case of OA, it has taken a long time for the world scholarly and scientific community to become aware of the causal connection between OA and research impact (and its rewards), but awareness is at long last beginning to grow. (Stay tuned for the announcement of more empirical findings on the OA impact advantage later today, in honor of OA week.)

    DL: “You know full well that many researchers are just happy to have the paper appear in a prestigious journal. They will not make any effort to make their work widely available because they are not rewarded for it. Publishing is enough to receive tenure, grants and promotions. And the reward system is what needs to be fixed.”

    This is already incorrect: Publishing is already not enough. Citations already count. OA mandates will simply make the causal contingency between access and impact, and between impact and employment/salary/promotion/funding/prizes more obvious and explicit to all. In other words, the reward system will be fixed (including the development and use of a rich and diverse new battery of OA metrics of impact) along with fixing the access system.

    DL: “(2) I love peer review. My blog is peer reviewed. You are a peer and just reviewed my blog post.”

    Peer commentary is not peer review (as surely I — who founded and edited for a quarter century a rather good peer-reviewed journal that also provided open peer commentary — ought to be in a position to know!). Peer commentary (as well as post-hoc metrics themselves) are an increasingly important SUPPLEMENT to peer review, but they are themselves neither peer review nor a SUBSTITUTE for it. (Again, see references at the end of this commentary under the heading PEER REVIEW.)

    DL: “(3) PLoS has different types of peer review where correctness is reviewed, but no prediction is made as to the perceived importance of the work. Let me quote them:”

    “Too often a journal’s decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership — both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLoS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them).”

    You have profoundly misunderstood this, Daniel:

    (i) It is most definitely a part of peer review to evaluate (and where necessary correct) the quality, validity, rigor, originality, relevance, interest and importance of candidates for publication in the journal for which they are refereeing.

    (ii) Journals differ in the level of their peer review standards (and with those standards co-vary their acceptance criteria, selectivity, acceptance rates — and hence their quality and reliability).

    (iii) PLoS Biology and PLoS Medicine were created explicitly in order to maintain the highest standards of peer review (with acceptance criteria, selectivity and acceptance rates at the level of those of Nature and Science [which, by the way, are, like all peer judgments and all human judgment, fallible, but also corrigible post-hoc, thanks to the supplementary scrutiny of peer commentary and follow-up publications]).

    (iv) PLoS ONE was created to cater for a lower level in the hierarchy of journal peer review standards. (There is no point citing the lower standards of mid-range journals in that pyramid as if they were representative of peer review itself.)

    (v) Some busy researchers need to know the quality level of a new piece of refereed research a priori, at point of publication — before they invest their scarce time in reading it, or, worse, their even scarcer and more precious research time and resources in trying to build upon it — rather than waiting for months or years of post-hoc peer scrutiny or metrics to reveal it.

    (vi) Once again: commentary — and, rarer, PEER commentary — is a SUPPLEMENT, not a SUBSTITUTE for peer review.

    DL: “(4) Moreover, PLoS does publish non-peer-reviewed material, see PLoS Currents: Influenza for example.”

    And the journal hierarchy also includes unrefereed journals at the bottom of the pyramid. Users are quite capable of weighting publications by the quality track-record of their provenance, whether between journals, or between sections of the same journal. Caveat Emptor.

    METRICS:

    Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003) Digitometric Services for Open Archives Environments. In Proceedings of European Conference on Digital Libraries 2003, pp. 207-220, Trondheim, Norway. http://eprints.ecs.soton.ac.uk/7503/

    Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment. Technical Report, ECS, University of Southampton. http://eprints.ecs.soton.ac.uk/12130/

    Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to Metrics. Research Fortnight pp. 17-18. http://eprints.ecs.soton.ac.uk/14329/

    Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3). http://eprints.ecs.soton.ac.uk/14418/

    Harnad, S. (2008) Self-Archiving, Metrics and Mandates. Science Editor 31(2) 57-59
    http://www.councilscienceeditors.org/members/secureDocument.cfm?docID=1916

    Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8(11) doi:10.3354/esep00088 (special issue: The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance) http://eprints.ecs.soton.ac.uk/15619/

    Harnad, S., Carr, L. and Gingras, Y. (2008) Maximizing Research Progress Through Open Access Mandates and Metrics. Liinc em Revista 4(2). http://eprints.ecs.soton.ac.uk/16617/

    Harnad, S. (2009) Multiple metrics required to measure research performance. Nature (Correspondence) 457 (785) (12 February 2009)

    Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79(1). Also in Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. (2007)

    Harnad, S; Carr, L; Swan, A; Sale, A & Bosc H. (2009) Maximizing and Measuring Research Impact Through University and Research-Funder Open-Access Self-Archiving Mandates. Wissenschaftsmanagement 15(4) 36-41

    PEER REVIEW:

    Harnad, S. (1978) BBS Inaugural Editorial. http://www.ecs.soton.ac.uk/%7Eharnad/Temp/Kata/bbs.editorial.html

    Harnad, S. (ed.) (1982) Peer commentary on peer review: A case study in scientific quality control, New York: Cambridge University Press. http://eprints.ecs.soton.ac.uk/3389/

    Harnad, S. (1984) Commentaries, opinions and the growth of scientific knowledge. American Psychologist 39: 1497 – 1498.

    Harnad, Stevan (1985) Rational disagreement in peer review. Science, Technology and Human Values, 10 p.55-62.
    http://cogprints.org/2128/

    Harnad, S. (1986) Policing the Paper Chase. (Review of S. Lock, A difficult balance: Peer review in biomedical publication.) Nature 322: 24 – 5.

    Harnad, S. (1995) Interactive Cognition: Exploring the Potential of Electronic Quote/Commenting. In: B. Gorayska & J.L. Mey (Eds.) Cognitive Technology: In Search of a Humane Interface. Elsevier. Pp. 397-414. http://cogprints.org/1599/

    Harnad, S. (1996) Implementing Peer Review on the Net: Scientific Quality Control in Scholarly Electronic Journals. In: Peek, R. & Newby, G. (Eds.) Scholarly Publishing: The Electronic Frontier. Cambridge MA: MIT Press. Pp 103-118. http://cogprints.org/1692/

    Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright. Learned Publishing 11(4) 283-292. Short version appeared in 1997 in Antiquity 71: 1042-1048. Excerpts also appeared in the University of Toronto Bulletin: 51(6) P. 12. http://cogprints.org/1694/

    Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (5 Nov. 1998), Exploit Interactive 5 (2000): and in Shatz, B. (2004) (ed.) Peer Review: A Critical Inquiry. Rowland & Littlefield. Pp. 235-242. http://cogprints.org/1646/

    Harnad, S. (2002) BBS Valedictory Editorial. http://www.ecs.soton.ac.uk/%7Eharnad/Temp/bbs.valedict.html

  10. Arthur Smith says:

    Scarcity of research output has a benefit to researchers on the other end, who save valuable time if they only need to keep up with a limited selection of top journals and single definitive versions of research rather than multiple partial accounts. In an age of abundance that time-saving function is still needed, but we seem not yet to have developed suitable tools. Open commentary is far too often dominated by the obsequious or contentious. If real shared insight is limited to private offline communications we have gone backwards, not forwards. The current anonymous peer-review system serves subtle functions in science communication that this discussion doesn’t seem to acknowledge.

  11. BREAKING DOWN OPEN DOORS

    DL: “I refuse to be limited to traditional peer review.”

    No one is blocking you. (But my access to peer-reviewed articles is being blocked by toll-barriers until they are made OA).

    DL: “The high standard I adhere to is to produce high quality research outputs. The rest is unneeded artificial scarcity.”

    The same is probably said by drug-manufacturers — but I’d still rather have external quality control — and not only through hearing about how many users poisoned themselves from trying out a new drug without external quality testing. (Sorry for the shrillness of the analogy, but do we really rate scholarly/scientific reliability so much lower than health-safety?)

    (By the way, I’d say the scarcity of high quality research is not artificial but a natural consequence of traits that tend to be distributed Gaussianly or even logarithmically in the population: even the criterion for “high” is relative to the distribution…)

    DL: “As an aside, OA presently offers drastically limited access, by your own arguments, stated on your own blog. Open Access is to be limited to research articles, and should exclude books (your own books are not OA), conference proceedings and so on.”

    OA offers nothing. Authors offer drastically limited access to their own research, because most are not yet self-archiving their work to make it OA. That’s why the mandates are needed. But it is the mandates that are limited to refereed research articles, not OA, which any author can provide to whatever work (of their own) they wish to make freely accessible online (just as you can post whatever you wish, without having to submit to peer review).

    The problem is the toll-access barriers, and authors’ failure to free what they want to give away already (namely, all their refereed journal articles, written only for usage and impact, not for income). But for most authors that does not mean their books (or music, or films, or software).

    I have not written any books, only edited a few. The one book (edited by Okerson and O’Donnell) the lion’s share of whose contents I wrote is completely free online. So are all the chapters I have written in edited books, including the ones I edited myself. But again, I self-archived those facultatively; it is premature to speak of self-archiving content that most authors don’t want to give away free today when we have not even mandated self-archiving the content that they all do want to give away free today.

    DL: “If I am stranded without an acquisition budget, I can only access a small fraction of the science. I do not get the full picture. That is especially true in Computer Science where 80% of the research is published in conference proceedings which are excluded from Open Access policies.”

    Daniel, forgive me, but what you are saying makes no sense. Authors can self-archive anything they want to give away free online. The purpose of the institutional policies is to mandate self-archiving journal articles, which they all already want to give away free, but are not doing so. Many may also want to give away their book chapters, and some may even want to give away their books. None of that can be mandated, because it may conflict with what the author does and does not want to give away, but it is not “excluded,” any more than self-archived unrefereed content is excluded. (By the way, most Green OA mandates also cover refereed conference articles.)

    DL: “By preserving clique-based peer review and locking up books and proceedings, we are failing to move toward Open Scholarship which should be our only goal.”

    You are conflating the problem of access with the problem of quality control. They are orthogonal, and insisting on freedom from peer review as part and parcel of freedom from toll-access neither makes sense nor does it do the cause of OA any good. (There is nothing stopping scholars from freeing their scholarship from peer review today — except possibly, as Stephen Downes pointed out, their career prospects.)

    DL: “Open Scholarship is the noble long term goal we must work toward. Open Access policies are short-sighted because they fail to address the real problem which is our very own culture and reward system.”

    We have “Open Scholarship” already, for any author who wants it; the barriers are to the access that you find so “short-sighted”…

  12. @Stevan

    I’ll make a new blog post out of this.

    “I’d still rather have external quality control — and not only through hearing about how many users poisoned themselves from trying out a new drug without external quality testing. (Sorry for the shrillness of the analogy, but do we really rate scholarly/scientific reliability so much lower than health-safety?)”

    How can you say this when Wikipedia has been shown to be more reliable than the traditionally-reviewed alternatives?

    Traditional peer review is not the way we determine what is wrong and what is right in science; it is also a recent “innovation” (very recent in the history of science), and it is certainly not reliable:

    http://www.daniel-lemire.com/blog/archives/2009/01/09/the-purpose-of-peer-review/

    Here is another reference:

    “Empirical evidence on expert opinion shows that it is extremely unreliable.”

    Reference: Why Most Published Research Findings Are False, http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124

    To put it bluntly, you’d be silly to trust your health to peer review. What you can trust, a lot more, is independent verification by several teams of the same findings.

    Basically, there is strength in numbers. If many people have independently verified something, then chances are that it is right (but it may still be wrong!!!).

    But simply because a bunch of anonymous professors read a paper and think it is good enough to appear in Nature… that does not say anything about the truth of the paper.

    There is no way reviewers can know whether the experiments were done properly. Often they don’t even read the papers carefully, and sometimes they are not even qualified to read them. There is no way reviewers can verify the truth. Reading over a paper is not the same as independently verifying it.

    That is not to say that feedback from your peers is not extremely valuable.

    “There is nothing stopping scholars from freeing their scholarship from peer review today — except possibly, as Stephen Downes pointed out, their career prospects.”

    Stephen has not freed himself from peer review.

    And I don’t advocate that anyone frees himself from peer review.

    Heck! All my research papers even go through traditional peer review. But I have no illusion that it makes them “right”.

    And the reason I still go through traditional peer review is that this is the only way that I can force people to criticize my research. (I’m not famous or infamous enough to be able to pull a Perelman and just have the whole world scrutinizing my arXiv submissions.)

    “We have ‘Open Scholarship’ already, for any author who wants it; the barriers are to the access that you find so ‘short-sighted’…”

    It seems to me that we have Open Access already, for any author who wants it. We are building Open Scholarship right now, but we still don’t have all the pieces. (Including the part where there is no good alternative to traditional peer review for many of us.)

  13. @Harnad

    “unless it is reviewed and accepted by a journal with an established track-record for quality standards, most busy researchers will not invest their scarce time and resources into reading and using it”

    Just like none of them are reading my blog?

    (Lots of famous people comment on my blog, besides you: http://www.daniel-lemire.com/blog/my-readers/ And my blog is not exceptional in any way.)

    “Should we also each individually hand-test our mushrooms, to see whether they are toxic or safe to eat, rather than relying on experts, because they are human, and might have occasional lapses?”

    You are misrepresenting my point. And by your arguments, Wikipedia ought to be highly unreliable compared to expert-reviewed solutions, which it isn’t.

    “My quote about Wikipedia was directly from Daniel Lemire…”

    No.

    I did not write or imply anywhere that people should replace journals with wikipedia.

    By your arguments, busy people would not bother with Wikipedia. They would prefer the more rigidly reviewed alternatives (of which several are free).

    My answer to the rest of your comments took the form of a new blog post:

    Become independent of peer review
    http://www.daniel-lemire.com/blog/archives/2009/10/26/become-independent-of-peer-review/

  14. TRANSMUTING SCIENCE INTO WIKIPEDALCHEMY

    DL: “Wikipedia has been shown to be more reliable than the traditionally-reviewed alternatives”

    Really? So researchers have abandoned their peer-reviewed journals and are now relying instead on the anonymous gremlins of Wikipedia, or ought to? (That’s an interesting stance…)

    DL: “Traditional peer review is not the way we determine what is wrong and what is right in science”

    But it’s our best bet for filtering in advance what is worth investing our time to read and taking the risk of trying to use and build upon…

    DL: “‘Empirical evidence on expert opinion shows that it is extremely unreliable.’ http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124”

    Less reliable than inexpert opinion, or than no binding, answerable advance expert filtering at all?

    DL: “You’d be silly to trust your health to peer review.”

    Trust it instead to Wikipedia, or web chat groups?

    DL: “What you can trust, a lot more, is independent verification by several teams of the same findings.”

    It’s whether or not enough of that verification has already happened that peer review is meant to determine.

    DL: “If many people have independently verified something, then chances are that it is right.”

    That’s what the collective, cumulative, self-corrective process of scientific publication is meant to converge on in the long run. But newly published research findings are not the long run, and active research cannot wait for everything to be eventually checked out in the long run. Peer review is an advance quality-control filter to maximize in advance the probability that findings are sound enough to warrant reading and using now.

    DL: “that a bunch of anonymous professors read a paper and think it is good enough to appear in Nature… does not say anything about the truth of the paper.”

    It says a lot more than posting raw papers on the web and waiting to see if there turns out to be anything wrong with them.

    DL: “There is no way reviewers can know whether the experiments were done properly.”

    Even less can the reader and user know.

    DL: “Often [reviewers] don’t even read the papers carefully, and sometimes they are not even qualified to read them.”

    True. And your point is…?

    Editors and the journal’s reputation and track-record are answerable for the reliability of their peer review. Different journals have different quality standards, but if those standards fall (not occasionally, because of lapses in human nature, but systematically, because of declining rigor), so does their readership.

    DL: “There is no way reviewers can verify the truth. Reading over a paper is not the same as independently verifying it.”

    There is even less way for potential readers and users to verify the truth. Do you propose that everyone must first try to independently verify every purported new finding rather than continue to rely on the advance filtering provided by classical peer review? But why on earth would you propose that?

    (Should we also each individually hand-test our mushrooms, to see whether they are toxic or safe to eat, rather than relying on experts, because they are human, and might have occasional lapses?)

    DL: “the reason I still go through traditional peer review is that this is the only way that I can force people to criticize my research.”

    Another reason is that unless it is reviewed and accepted by a journal with an established track-record for quality standards, most busy researchers will not invest their scarce time and resources into reading and using it (unless you are famous).

    DL: “It seems to me that we have Open Access already, for any author who wants it.”

    You miss the point. It is *users* who want Open Access, and only 15% of authors are *providing* it (except if mandated). Hence users do not have it.

    DL: “We are building Open Scholarship right now, but we still don’t have all the pieces. (Including the part where there is no good alternative to traditional peer review for many of us.)”

    I can only repeat: You are dreaming if you think that the already overloaded specialists in each field have nothing better to do than to review everything that is posted (unbidden, and at the risk of having their recommendations unheeded) — when they can barely keep up with the formal requests to peer review papers from the editors of journals who seek their expertise and will see to it that their advice is heeded…

  15. …only the anonymous gremlins at Wikipedia have that kind of time on their hands — and that largely matches their absence of “expertise” in the tiny parts of Wikipedia that purport to cover what peer-reviewed journals cover…

  16. > So researchers have abandoned their peer-reviewed journals and are now relying instead on the anonymous gremlins of Wikipedia, or ought to?

    This is a blatant misrepresentation of the position, and Stevan Harnad should know better. There are many forms of online publication other than Wikipedia.

  17. My quote about Wikipedia was directly from Daniel Lemire…

    The contention is not that there cannot be the occasional rare unrefereed gem.

    The contention is that (1) unrefereed postings are not a viable way of reporting, following and using the vast daily body of scholarly and scientific research; (2) post-posting “corrective” commentary comes too late (if it comes at all); and (3) no one has tested and demonstrated the capability of any alternative to classical peer review to generate a research literature of at least the same quality and usability as the one we have now, in our 25,000 peer-reviewed journals.

    That’s the literature that the Open Access movement is trying to set free from fee-based access barriers — not from peer review.

  18. Seb says:

    Wow. You guys have unbelievable stamina. You know what? I’ll let the future sort it all out. It’ll be faster 🙂

  19. This is informal discussion. Peer-reviewed journals are for reporting scholarly and scientific research. And, no, that’s not what people use Wikipedia for.

  20. I use my blogs to report scholarly and scientific information. You cannot by some act of stipulation make it otherwise.

  21. Reporting one’s research findings in unrefereed blogs instead of refereed journals can certainly be done, but it is a strategy that does not scale, either for authors or for users. And you’ve already described some of the disadvantages.

  22. Problem is, the future has been *incredibly* slow for Open Access, which has been reachable for at least two decades now, through Green OA self-archiving, and still not grasped. All these keystrokes speculating about other things — copyright, peer review, Gold OA publishing — have been distracting us from doing the few keystrokes it would have taken to provide universal OA…