What Do (Academic) Economists Value?

Which do economists consider more important in evaluating one another: the quality of the journals in which they publish, or the number of citations to the articles they have written?

It turns out, according to a recent study by John Gibson, David L. Anderson, and John Tressler forthcoming in Economic Inquiry, that economists seem to place more value on the prestige of the journal than on whether anyone cites–or even reads–the research. At least among top-ranked economics departments. Or, more precisely, among the top-ranked economics departments in the University of California system, though there’s little reason to think economists across the UC system behave differently from the rest of the profession in this regard. Their abstract reads:

Research quality can be evaluated from citations or from the prestige of journals publishing the research. We relate salary of tenured University of California (UC) economists to their lifetime publications of 5,500 articles and to the 140,000 citations to these articles. Citations hardly affect salary, especially in top-ranked UC departments where impacts of citations are less than one-tenth those of journals. In lower ranked departments, and when journal quality is less comprehensively measured, effects of citations on salary increase. If journal quality is just measured by counting articles in journal tiers, apparent effects of citations are overstated.

This is an interesting–and to my mind, sad–result. As the authors explain in their paper, there are many reasons why citations would be a more meaningful measure of a particular paper’s quality and impact than the journal in which it is published. After all, the decision to publish is based on the opinions of at most a handful of individuals (i.e., the editor and a few reviewers picked by the editor). Citations, on the other hand, reflect the opinion of the broader academic community.

And in terms of relevance to anything of value, one might also argue that citations are a much better metric for the “So What?” question. If no one cites a paper, it suggests either that no one read it or that no one found the ideas or results it contained worth mentioning in the marketplace of ideas or in future research. Which raises the question of whether the paper is really all that important or meaningful, whether within academia or, heaven forbid, more broadly. And if it is not meaningful, then what is the basis of its “quality”?

The authors also identify another wrinkle. Among lower-ranked economics departments, journal quality is less important and citations tend to be given more weight. The authors, citing Liebowitz (2014), suggest this may be because “faculty at lower ranked schools may rely on easily available proxies, such as counts of articles or of citations, rather than making their own determination based on reading the articles when they have to evaluate a research record in order to make a labor market decision.” This may be because faculty at lower-ranked programs are more likely to have published in lower-ranked journals–and there is no general consensus on relative journal quality rankings beyond the top few journals. Hence the appeal, and use, of metrics such as Impact Factors.

I’d like to think there’s something more going on. Rather than simply using citations as a low-cost way of evaluating research quality, perhaps lower-ranked programs, which tend to be more teaching-focused, actually value citations in and of themselves, as an indication that the work has meaningful relevance and impact.

One would expect some correlation between citations and the quality of the journal in which an article appears. The traditionally more highly ranked journals likely have larger readerships, which should translate into more citations. Indeed, one metric of journal quality–its impact factor–is based on the number of citations the journal receives. But that is an average for the journal as a whole, not a measure of any one specific article. As illustrated here, it’s quite likely that a small percentage of articles in a given journal generate a substantial proportion of its citations, meaning the journal’s quality metric may be a rather poor proxy for any given article’s impact.
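To make that concrete, here is a minimal sketch in Python, using made-up citation counts rather than anything from the Gibson et al. data, of how a skewed citation distribution lets a journal-wide average overstate the impact of the typical article:

```python
# A toy illustration (hypothetical numbers, not data from the study) of why a
# journal-level citation average can misrepresent a typical article's impact.
from statistics import mean, median

# Hypothetical citation counts for 20 articles in one journal:
# a couple of heavily cited papers and a long tail of rarely cited ones.
citations = [310, 122, 45, 12, 8, 6, 5, 4, 3, 3, 2, 2, 1, 1, 1, 0, 0, 0, 0, 0]

journal_average = mean(citations)    # what an impact-factor-style metric reflects
typical_article = median(citations)  # what the typical individual article looks like
top_two_share = sum(sorted(citations, reverse=True)[:2]) / sum(citations)

print(f"Journal-wide average citations: {journal_average:.1f}")     # 26.2
print(f"Median article's citations: {typical_article}")             # 2.5
print(f"Citations from the top two articles: {top_two_share:.0%}")  # 82%
```

In this toy example the journal averages about 26 citations per article, yet the median article has fewer than 3, and the top two articles account for over 80% of all citations; judging the median article by the journal-wide number flatters it considerably.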

When push comes to shove, however, Gibson et al. suggest that what matters most to academic economists–especially those at the more prestigious departments–is not necessarily how much influence or relevance a particular paper has for shaping the intellectual debate, but whether it appears with a more prestigious cover.

That says something about the profession, I think. And perhaps not something good.


The (Fake) Academic Publishing Game

Last month Vox reported on a “scientific paper” written by Maggie Simpson, et al., being accepted by two scientific journals. The paper, a spoof generated by engineer Alex Smolyanitsky using a random text generator, was allegedly peer reviewed and accepted for publication by two of the many for-profit open access science journals that have sprung up over the past decade. The article (here) provides a nice overview of how rampant the trolling by fake scientific journals has become and of some of the economic incentives behind them.

If you’re in academia, you probably receive email solicitations from these predatory journals regularly; I delete a handful per day. I had just assumed they were bogus, but the Vox article also provided a link to a useful list of suspected predatory publishers compiled by Jeffrey Beall. Sure enough, my most recent email was from one of the publishers on that list.

While the article focuses on the problems these journals create for trust in scientific publications, for the credibility of genuinely peer-reviewed scientific research, and for the evaluation of a given scholar’s publication record, it fails to mention the complementary cause of the problem: …