This topic came up at an ICML workshop on evaluation methods in machine learning I recently attended. I’ve written up a summary of what I thought were the highlights. Janez Demsar made the argument that there is too much emphasis on positive results and, consequently, as a reviewer it is difficult not to prefer a paper that shows big improvements over existing techniques over one that does not, but puts forward an otherwise interesting idea.
Publishing bad results is also valuable as a warning sign for future researchers. If bad results are unpublishable, then we are faced with a situation where future generations are condemned to repeat intuitions that have already been proved wrong.
Refuting a thesis should be as valid as proving it. And it is.
Sylvie says:
This is also a problem in psychology, where the same mindset prevails: once an experiment has “proven” something, it is rarely redone (although it does happen).
In biomedicine, they have a Journal of Negative Results (http://www.jnrbm.com/). We need more of those types of journals.
Anonymous says:
I’m in complete agreement, but I would go further and argue that finding out what doesn’t work and why is a lot more informative than the majority of positive results.
There is a once well-known paper by Drew McDermott in the ACM SIGART Bulletin (Issue 57, April 1976, pp. 4–9) entitled “Artificial Intelligence Meets Natural Stupidity” that concludes with:
“…AI as a field is starving for a few carefully documented failures. Anyone can think of several theses that could be improved stylistically and substantively by being rephrased as reports on failures. I can learn more by just being told why a technique won’t work than by being made to read between the lines.”
Related:
Journal of Interesting Negative Results
http://jinr.site.uottawa.ca/
Journal of Negative Results in Biomedicine
http://www.jnrbm.com/
Journal of Negative Results: Ecology and Evolutionary Biology
http://www.jnr-eeb.org/
How to Maximize Citations
(7. Positivity)
http://tinyurl.com/5laylg
The work of John Ioannidis in other fields such as medicine and epidemiology (http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pmed.0020124) must have implications for computing science.
Nice cartoon on negative results at the Vadlo website.