Science and Technology links (October 10th 2021)
- Evans and Chu suggest, using data and a theoretical model, that as the number of scientists grows, progress may stagnate. Simply put, in a large field with many researchers, a few papers and a few people can acquire a decisive advantage over newcomers. Large fields allow more inequality. One can review their model critically. Would you not expect large fields to fragment into smaller fields? And have we not seen much progress in fields that have exploded in size?
- Keeping your iron stores low might be important to slow your aging. Sadly, much of what you eat has been supplemented with iron because a small fraction of the population needs iron supplementation. It is widely believed that you cannot get too much iron, but, to my knowledge, the long-term effects of iron supplementation have not been carefully assessed.
- If you lose weight while having high levels of insulin, you are more likely to lose lean tissue (e.g., muscle) than fat.
- A drug similar to Viagra helps mice fight obesity.
- Age-related hair loss might be due to stem cells escaping the hair follicle bulge. This new work contradicts the commonly held belief that the stem cells die over time. This work may not relate to male-pattern baldness.
- People tend to stress the ability of formal peer review to set the good work apart from the less significant work, but they greatly overestimate its value. They forget that much of the greatest work in science occurred before formal peer review had even been considered. Cortes and Lawrence have assessed the quality of peer review at a (or “the”) major conference these days, NeurIPS. They found several years ago that when two committees of referees independently assessed the same submissions, they only agreed on about half of their assessments. They have extended their work with a new finding:
> Further, with seven years passing since the experiment we find that for accepted papers, there is no correlation between quality scores and impact of the paper as measured as a function of citation count.
The lesson is pretty clear. Formal peer review can identify and reject the obviously bad work, but that is not a difficult task. In fact, I am quite certain that a fully automated system could quickly identify work that nobody should ever read. However, if a piece of work has been visibly well executed, following the state of the art, citing the required prior work, using the right data for the problem, and so forth, then it becomes really difficult to tell whether it is great work or merely work that meets minimal standards. This is unsurprising if you know how formal peer review works. Between two and five researchers, with varying levels of familiarity with the domain of the paper, read it over. They do not redo the experiments. They do not redo the demonstrations. Sometimes you get lucky and find a referee with deep knowledge of the domain (often because they have worked on a similar problem) who can be really critical. Even so, in the context of a conference peer-review process, there is only so much a highly knowledgeable referee can do, since they cannot freely interact with the authors.
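To make the two statistics concrete, here is a minimal sketch (in Python, with purely hypothetical data) of how one might compute them: the agreement between two independent committees on the same submissions, and the rank correlation between review scores and later citation counts. Everything here, from the acceptance rate to the citation distribution, is an assumption for illustration; it is not the authors' data or methodology.

```python
# A minimal sketch with hypothetical data: committee agreement and
# the score-citation correlation for a simulated conference.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

# Hypothetical: two committees independently review the same 100
# submissions, each accepting about 25% (both numbers are assumptions).
n_papers = 100
committee_a = rng.random(n_papers) < 0.25
committee_b = rng.random(n_papers) < 0.25

# Among papers accepted by committee A, what fraction did committee B
# also accept? This is one way to quantify "agreement" on accepted papers.
accepted_by_a = np.flatnonzero(committee_a)
agreement = committee_b[accepted_by_a].mean()
print(f"fraction of A's accepted papers also accepted by B: {agreement:.2f}")

# Hypothetical: average review scores and citation counts for 50 accepted
# papers, generated independently of each other by construction.
scores = rng.normal(6.0, 1.0, size=50)
citations = rng.poisson(30, size=50)

# Spearman rank correlation between review scores and citation counts.
rho, pvalue = spearmanr(scores, citations)
print(f"Spearman correlation: {rho:.2f} (p={pvalue:.2f})")
```

With data generated independently at random, the agreement hovers near the acceptance rate and the correlation near zero; that is the chance baseline a useful review process would need to beat.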