Thursday, August 30, 2012

Bad Incentives in Academia

It's really tough to incentivize the correct amount of "general research." This is research that advances technology or humanity in important ways but cannot be easily monetized because, once it is known, "anyone" can apply it. Thus, there is little incentive to do it, since people can free-ride on others' hard work. Patents don't usually help because there is usually no product to protect. Prizes don't usually help because it's really hard to know the value of research and thus what the optimal prize amount would be. Since neither patents nor prizes usually help, most general research is done at universities or in government, using government funding or university money (and you thought universities paid professors to teach!). Sometimes it's even hard to determine after the fact whether research was important or valuable.

Because of this, academia has settled on using citations as a proxy for importance. The more papers written and citations received, the more important one's research is perceived to be. The idea is that you are doing a lot of work if you publish a lot of papers (or more papers in better journals), and that if lots of people have cited particular papers, those papers must have been important.

Of course, because citations, and not general research itself, are incentivized, there are distortions as academics and journals try to game the system. One distortion is over-citation of irrelevant articles, encouraged by journals in order to inflate their perceived quality and importance. The WSJ recently reported that Scientific World Journal and Cell Transplantation have had their rankings suspended because of excessive gaming of citations. Also from the article:
One in five academics in economics, sociology, psychology, and business said they had been asked by editors to pad their papers with unnecessary citations to articles in the same journal, according to a study published in Science in February.
Another distortion is under-citation of relevant articles by academics. The emphasis on citation counts in determining job status leads to an emphasis on producing new research over disseminating existing quality research. The higher up the academic chain one goes, the less emphasis there is on teaching or on repackaging existing research for broader audiences or other disciplines. In addition, if you find an article covering the same research you are doing, your own work is much less likely to be published. So guess what? There is an incentive not to even look for past research. This quality article on dismissive literature reviews (HT: Marginal Revolution) explains the incentive to claim your research is unique and the first of its kind, even when there is already extensive work on the topic, and provides many examples; even Chicago economists are guilty of it. As Phelps puts it:
Research has accumulated in many fields to such a volume that familiarity with an entire literature would now be too time-consuming for any individual. In most fields, when someone writes a dismissive review and claims command of an entire research literature, they claim a near impossible accomplishment.
In addition,
Whereas rich professional rewards await those considered to be the first to study a topic, conducting a top-notch, high-quality literature review bestows none. After all, it isn’t “original work.” (Note also which of the two activities is more likely to be called a “contribution” to scholarship.) In addition, there are substantial opportunity costs. Thorough reviews demand a huge investment of time—one that grows larger with the accumulation of each new journal issue. In a publish-or-perish environment, really reviewing the research literature before presenting one’s own research impedes one’s professional progress.
Why has this happened? Phelps cites three reasons: review complacency, the proliferation of subject fields, and "winning":
Claiming that others’ work does not exist is an easy way to win a debate.
This is especially true, he claims, in politicized fields like education research. Phelps cites dozens of examples of false claims of "firstness" in the education literature, and it's worth a read-through. He even lists some of my current and former professors as culprits.

There are also incentives to overproduce some types of general research and to underproduce others, in particular replication of experiments and studies. But those are topics for another day.
