Thank you, Daniel, for voicing such an important concern. I’ve encountered a closely resonating concept, aptly named “research debt” (https://distill.pub/2017/research-debt/), and we should definitely incentivize paying it off!
jld says:
Yes, the overwhelming majority of publications are plain junk.
If you remove the glosses, the sales pitches, and the “surveys,” only a few paragraphs remain worthy of any interest.
And they are obviously not meant to be of any use: the structure of the “novel points” is mentioned only in passing, while lengthy “proofs” that rarely bring much enlightenment are rehashed over and over.
The end results are correct but contrived and useless.
(I have read over 10,000 CS papers.)
I agree completely. I suspect publications have become the primary metric because they are easy to count. I’m not sure what should replace it, but I think science, and therefore possibly the world, would be better if the incentives encouraged a broader definition of success.
This was my biggest complaint about the academic world during my PhD. I always say the most important thing I learned is that I don’t want to be a professor. I don’t want to be a professor because a professor’s output is papers, which felt like a waste. Don’t get me wrong: the best papers are valuable! But I would argue that a large percentage (95%?) are not worth reading, which means writing and reviewing them was also a waste.
I certainly do not recommend anyone read any of the papers I’ve published! But I was trying to play the game, so I had to try to publish something to “prove” that I could.
RAD says:
I agree wholeheartedly. Well said.
Question: how much of the problem is human agency, and how much is structural, owing to the physical medium of “paper” (paper in the materials-science sense)?
We have lived through a digital revolution in terms of web/mobile/cloud, yet the publishing processes surrounding academic research haven’t changed. At the very least, all data and content should be stored and revised in Git (or an equivalent), the content should be in Markdown (or another equivalent representation of HTML), and the licensing should be sane (unlike the tradition of handing copyright over to the publishing journal).
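For what it’s worth, here is a minimal sketch of what such a workflow could look like; the repository name, manuscript contents, and author identity are hypothetical placeholders, and it assumes git is installed:

```python
# Minimal sketch of a "paper as a versioned Git repository" workflow.
# Assumes git is installed; the repository name, file contents, and
# author identity below are hypothetical placeholders.
import subprocess
from pathlib import Path

repo = Path("example-paper")
repo.mkdir(exist_ok=True)

def git(*args):
    """Run a git command inside the paper repository."""
    subprocess.run(["git", "-C", str(repo), *args], check=True)

# The manuscript is plain Markdown with an explicit version field.
(repo / "paper.md").write_text(
    "---\ntitle: Example Study\nversion: 1.1.0\n---\n\n# Introduction\n"
)
# Sane licensing: the authors keep copyright and license the text openly.
(repo / "LICENSE").write_text("CC-BY-4.0\n")

git("init")
git("config", "user.name", "Jane Researcher")    # hypothetical author
git("config", "user.email", "jane@example.org")  # hypothetical email
git("add", ".")
git("commit", "-m", "Revise results section")
git("tag", "v1.1.0")  # readers can cite a version, not just a date
```

Every revision is then a commit, and every citable state of the paper is a tag, which is exactly the property a “publishing date” alone cannot give you.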
I think that the problem is closely related to peer review. By putting a stamp of approval on papers (physical or digital), we create units that can be viewed as accomplishments. Whether it is actual paper or not does not matter much.
In contrast, the blog post you just read is not approved or reviewed. Thus it is not, for me, an accomplishment. Hence, my primary motivation is to communicate an idea… not to earn points in some background game.
How we evolve beyond where we are is not clear and maybe not easy to predict.
One thing is for certain, however. Professional scientists are in charge of the system. They are the ones granting jobs and promotions.
RAD says:
I agree with your assessment of peer review and the flawed incentive system built around it. I should have thought through my question better. I guess I’m wondering out loud whether the future of peer-reviewed research lies not only in correcting past mistakes but also in embracing new tools that make research papers “living documents,” easy to integrate into new research. A research document should have a version number, not just a publishing date.
And almost on cue, the real world confirms that “necessity is the mother of invention”: see “‘A completely new culture of doing research.’ Coronavirus outbreak changes how scientists communicate” by Kai Kupferschmidt in Science.
“This is a very different experience from any outbreak that I’ve been a part of,” says epidemiologist Marc Lipsitch of the Harvard T.H. Chan School of Public Health. The intense communication has catalyzed an unusual level of collaboration among scientists that, combined with scientific advances, has enabled research to move faster than during any previous outbreak. “An unprecedented amount of knowledge has been generated in 6 weeks,” says Jeremy Farrar, head of the Wellcome Trust.
Krishnan says:
Most papers don’t have a limit on the number of citations. This creates mutual back-scratching incentives. If conferences limited the number of citations per paper to 5 or 10, then the output of a researcher’s career might look more honest.
Universities should aim to strike a balance between publishing and transfer. For every 10 papers published, the goal should be to have at least one transfer. Transfer does not have to mean a startup. Even a small software library that industry finds useful can be counted as a transfer. It is probably more valuable than the multiple papers, because the real test of good research is someone using it in an implementation.
Andrew Dalke says:
I have a hard time understanding how there is so much enthusiastic support when I see a bunch of real concerns.
If I’m an academic researcher in archeology, and I publish a paper on 18th century farmstead construction methods in Lower Saxony, how do my results “transfer”?
How do others judge if my paper transfers enough to meet, for example, Krishnan’s proposal that “[f]or every 10 papers published, the goal should be to have at least one transfer”? In my archeology example, the next link in the chain may come 20 years later for that sort of scholarship!
If I collaborate on a paper with someone in industry, does that count as an automatic transfer? If not, and if my industry partner decides to not pursue the work, what should I do if I don’t know that domain nor have other contacts?
Similarly, if one of my PhD students does some excellent work in a subfield related to mine, and we publish, then the student graduates and decides to work in another field, then is it my “job” to change my research focus and continue my previous student’s work? Even if it doesn’t really interest me? Do I need to get all of my other PhD students to join in that new focus?
Because in that scenario I can see a senior professor pointing out the previous work that his team did, and comment that “nobody ever did anything with it” … with himself as part of the “nobody.” If you’ve had 15 PhD students, can you really follow all of the paths each of them went down?
What about publishing negative results? If I’m an academic medical researcher, I may need to register a clinical trial and report negative results. This helps minimize selective reporting and the skew of publications towards positive results. How are these supposed to transfer? And how would anyone even determine that there was a transfer?
Lastly, suppose I publish a method which is 5x faster than the existing algorithms. There’s initial industry interest because there haven’t been any big improvements for over a decade. Then three weeks later another group publishes a fundamentally different algorithm which is another 7x faster than mine. The field switches to that new algorithm and my work becomes a footnote. Does that count as a transfer, or does it count against me because there was no transfer? How long should I continue to develop my algorithm if my job is to do something with it?
Andrew:
I am not arguing that you should keep up the work forever. If you do some work and a student of yours picks it up and makes something out of it… then you can, and maybe should, move on. Same story if you invent something and later someone invents something even better: then you should move on. In fact, I would urge folks to move out of the way as soon as possible.
There is a finite number of jobs, grants, and promotions. So when you ask “what if an archeologist wrote this narrow paper?”… currently, this does get assessed. How? Probably on the prestige of the venue where it was published. And then, maybe, by counting citations. If the researcher worked on something that was not too fashionable, then it is likely that it will be published in a little-known journal, it will not get much of a readership, and the researcher in question may struggle to translate the work into a job or a promotion. It had better not be a negative result, because that is hard to publish. And it has to be about a “new” idea. That is how it works right now.
My post is less about how we should assess researchers… and more about what should motivate them. I am saying that they should not write the paper and consider their job done. They should keep working until the work bears fruit.
What if the work is not fruitful… what if the work will never amount to anything for anyone…
Then I will say: do something else.
What is “transfer”? It is up to the researcher to know.
Everyone’s life ought to be impactful. But I don’t think that there can be one universal measure of “impact”.
Currently, we are running along under the assumption that a research paper that is never read is “impact” because, well, because it has received a stamp of approval. That is putting the bar very low.
It is a recent social construction. The whole thing would have made no sense to scholars from the first half of the twentieth century.
My 2 cents: the PhD became an “industry” a long time back, and one cannot really judge a “transfer” objectively, as Andrew Dalke’s comment points out very well. Learning and knowledge shouldn’t be an industry, though. The solution, for me, is to remove the ample incentives that make this a kind of freebie club. A PhD researcher should have free accommodation and food coupons, along with access to the Internet, books, papers, and libraries, but no salary: a learner should be in it for learning’s own sake. It’s a radical proposal (not so radical if one considers that this is how monks and Brahmins used to live and study in the old days), but the current system doesn’t work. As for professors, they should be judged purely on teaching outcomes, rather than on their paper output.
Whether PhD students have a salary or not, they are assuredly looking for a job after the PhD.
I am not sure how assessing professors strictly on their teaching would help research. It might help teaching though.
Ivan says:
Love the directness of the post, but I am not a fan of the black-and-white dichotomy. The academic world isn’t so simple. Universities are first and foremost institutions of learning, intended to train students. So, naturally, some academics approach research primarily as a way to teach MSc and PhD students. Who cares whether or not the research was eventually used by anyone outside of academia; what matters is that the students (and the PI) on the project learned something along the way. Universities are also intended to contribute to the sum of human knowledge. If you adopt the view that “If your idea is worthwhile and you let it end with a research paper, you have failed,” then researchers who are working on ideas that are 10-20 years away from being commercialized or used have failed.
Reductionism isn’t appropriate for describing complex systems like universities. It’s like saying that government is a necessary evil and should be minimized at all costs. But I do appreciate your point, especially in the context of engineering-focused research that intentionally aims to solve real problems.
I agree that it is worthwhile to do a project even if the sole outcome is that a student acquired new skills or knowledge. I personally do this all the time. But if that’s the outcome, then you do not need the research paper. And I would argue that it is not research, but rather training/teaching. That is fine, but let us not get distracted.
I did not write that people had to achieve commercialization of their ideas to be successful. Industrial transfer is certainly one way to go, but we should not expect such an outcome in general.
One of my colleagues works with folks who live in locations with industrial contamination. Much of her work does not go into the research paper. She has to meet with the people, learn about them, figure out the problems… and then, when she thinks she has figured it out, she goes back to them to make sure, and so forth. It would be tragic to assess her work based on the publication record alone.
Couldn’t agree more. I’m constantly amazed by how many good ideas in the data structures field have languished for decades. Here are a couple from the ’70s (!) that I’ve never seen implemented anywhere, but that yielded great results when I implemented and benchmarked them:
https://github.com/senderista/rotated-array-set
https://github.com/senderista/hashtable-benchmarks
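For flavor, here is the kind of tiny harness one might use for such a comparison; the bisect-based sorted array below is a generic stand-in for illustration, not the code from the repositories linked above:

```python
# Illustrative micro-benchmark: membership queries on a sorted array
# (via bisect) versus a built-in hash set. A generic stand-in, not the
# code from the repositories linked above.
import bisect
import random
import timeit

n = 100_000
keys = random.sample(range(10 * n), n)
sorted_keys = sorted(keys)
hash_set = set(keys)
probes = random.sample(range(10 * n), 1_000)

def query_sorted_array():
    # Binary search: O(log n) per probe.
    hits = 0
    for p in probes:
        i = bisect.bisect_left(sorted_keys, p)
        hits += i < len(sorted_keys) and sorted_keys[i] == p
    return hits

def query_hash_set():
    # Hash lookup: expected O(1) per probe.
    return sum(p in hash_set for p in probes)

print("sorted array:", timeit.timeit(query_sorted_array, number=100))
print("hash set:    ", timeit.timeit(query_hash_set, number=100))
```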
I agree that the publishing game is often damaging, and sometimes pointless. However, it really isn’t such a bad idea to make people write up what they have done. In many ways, I feel there is a bit of an unhealthy focus on where to publish in the first place. It shouldn’t be that hard (and often is not) to write up results in a way that the community can refer to. Journal metrics, on the other hand, are truly ridiculous. Rather than recognizing that most people will simply use a result aggregator like Google Scholar or the Web of Science, some researchers I have known will doggedly try to get their work published in a “reputed” journal instead of just publishing it in a specialized journal and moving on.