Daniel Lemire's blog


Productivity measures are counterproductive?

Michael has a long post on why it seems foolish to measure scientists according to a single unidimensional metric (such as the H-index). His argument is mostly that you can game these metrics rather easily if you have a large enough social network. Given how hard people work at gaming the PageRank metric, and the often-quoted claim that over 50% of all married people cheat on their spouse, we would be naive to think that researchers do not game the metrics. For that matter, several journals are known to cheat to increase their impact factor (another unidimensional metric).

The question really is, does it hurt us that people play these games? After all, if we accept that the rule of the game is to get a high H-index, then why should I care how people go about it?

Michael is actually reacting to an article, The Mismeasurement of Science, which identifies several ill effects of these unidimensional measures, including the following:

  • many authors ignore or hide results that do not fit the story being told in the paper, because doing so makes the paper less complicated and thus more appealing;
  • science is becoming a more ruthlessly self-selecting field where those who are less aggressive and less self-aggrandizing are also less likely to receive recognition.

In turn, I conjecture that we have the following measurable effects:

  • Science is becoming less attractive as a career. If pursuing a high H-index becomes your goal, how is that game any more interesting than making a lot of money? Should we be surprised that Science Faculties are bleeding students while Business Schools are turning down students? When accounting becomes sexier than Physics, we have a problem. Women, who are less attracted to careers where you compare the size of your appendage, are harder to find than ever in Computer Science. Should we get a clue?
  • Research papers, while becoming easier to read and cite, fail to provide enough data to properly assess the results and their applications. In particular, research papers are increasingly dismissed by practitioners, who need not only a nice story but the full story, including the dirty secrets.

Whatever rules we set, they have consequences. I am particularly worried that we are making science uninteresting by redefining it from “scientific discovery” to “achieving a high H-index”. Maybe we have to go back and ask fundamental questions. Why do we do science? What do we really expect from scientists? What should we really reward?

See also my posts Are we destroying research by evaluating it?, On the upcoming collapse of peer review, and Assessing a researcher… in 2007.