I totally agree. My favorite papers always have something negative to say about their results.
I also like totally negative results. If something doesn't work the way you expected based on your intuition, you can learn much more from that.
elad says:
This is perhaps the best post I've read on your blog. "I believe this is the main reason why practitioners ignore researchers." You hit the nail on the head.
Is there any way to fix this situation? Medical researchers can't hide negative evidence. Students in all disciplines can be instructed to maintain academic honesty (e.g., to report all joint work on exercises), and they seem to comply. Is there a good way to encourage academic honesty in our community as well? Maybe a mandatory section in referee reports, titled "criticism/drawbacks of results"?
Nice post. But I wonder if the problem is deeper: it's not that researchers are hiding the negative evidence, but that they are oblivious to it. I've seen information retrieval researchers who don't consider efficiency an interesting metric, and therefore see no cost when their improvements lead to much higher computational expense. And I suspect that many of those same researchers are also serving as reviewers.