What Do (Academic) Economists Value?

Which do economists consider more important in evaluating one another? The quality of the journals in which they publish, or the number of citations their articles receive?

It turns out, according to a recent study by John Gibson, David L. Anderson, and John Tressler forthcoming in Economic Inquiry, that economists seem to place more value on the prestige of the journal than on whether anyone cites–or even reads–the research. At least among top-ranked economics departments. Or, more precisely, among the top-ranked economics departments in the University of California system, though there is little reason to think the behavior of economists across the UC system is unrepresentative of the profession as a whole in this regard. Their abstract reads:

Research quality can be evaluated from citations or from the prestige of journals publishing the research. We relate salary of tenured University of California (UC) economists to their lifetime publications of 5,500 articles and to the 140,000 citations to these articles. Citations hardly affect salary, especially in top-ranked UC departments where impacts of citations are less than one-tenth those of journals. In lower ranked departments, and when journal quality is less comprehensively measured, effects of citations on salary increase. If journal quality is just measured by counting articles in journal tiers, apparent effects of citations are overstated.
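
To make the mechanics concrete, here is a minimal sketch, with entirely hypothetical data and variable names (not the authors’ actual specification or dataset), of the kind of salary regression the abstract describes: log salary regressed on a journal-quality-weighted publication measure and on citations.

```python
# Stylized sketch of the exercise described in the abstract: regress (log) salary
# on journal-quality-weighted publications and on citations. All numbers and
# variable names are hypothetical, purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical sample of tenured faculty

df = pd.DataFrame({
    # publications weighted by a journal-quality index (e.g., AER-equivalents)
    "journal_weighted_pubs": rng.gamma(shape=2.0, scale=3.0, size=n),
    # lifetime citation counts (heavily skewed, as citation data tend to be)
    "citations": rng.lognormal(mean=4.0, sigma=1.2, size=n),
})

# Simulated salaries that load mostly on journal-weighted output and only
# weakly on citations -- mimicking the pattern the study reports.
df["log_salary"] = (
    11.5
    + 0.04 * df["journal_weighted_pubs"]
    + 0.005 * np.log1p(df["citations"])
    + rng.normal(scale=0.1, size=n)
)

model = smf.ols("log_salary ~ journal_weighted_pubs + np.log1p(citations)", data=df).fit()
print(model.summary().tables[1])  # coefficient estimates and standard errors
```

In this toy setup the citation coefficient is small by construction; the study’s finding is that the real-world estimates look similarly lopsided, especially at the top-ranked UC departments.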

This is an interesting–and to my mind, sad–result. As the authors explain in their paper, there are many reasons why citations would be a more meaningful measure of the quality and impact of a particular paper than would be the journal in which it is published. After all, the decision to publish is based on the opinions of a handful (at most) of individuals (i.e., the editor and a few reviewers picked by the editor). Citations, on the other hand, reflect the opinion of the broader academic community.

And in terms of relevance to anything of value, one might also argue that citations are a much better metric for the “So What?” question. If no one cites a paper, it suggests either that no one read it or that no one found its ideas or results worth mentioning in the marketplace of ideas or in future research. That raises the question of whether the paper is really all that important or meaningful, whether within academia or, heaven forbid, more broadly. And if it is not meaningful, then what is the basis of its “quality”?

The authors also identify another wrinkle. Among lower ranked economics departments, journal quality is less important and citations tend to be given more weight. The authors, citing Liebowitz (2014), suggest this may be because “faculty at lower ranked schools may rely on easily available proxies, such as counts of articles or of citations, rather than making their own determination based on reading the articles when they have to evaluate a research record in order to make a labor market decision.” This may be because faculty at lower ranked programs are more likely to have published in lower ranked journals–and there is no general consensus on relative journal quality rankings beyond the top few journals. Hence the appeal and use of metrics such as Impact Factors.

I’d like to think there is something more going on. Rather than simply using citations as a low-cost way of evaluating research quality, perhaps lower ranked programs, which tend to be more teaching-focused, actually value citations in and of themselves, as an indication that the work has meaningful relevance and impact.

One would expect there to be some correlation between citations and the quality of journal in which an article appears. The traditionally more highly ranked journals likely have larger readership, which should translate into more citations. One metric of journal quality–its impact factor–is based on the number of citations it receives. But that is based on the average for the journal as a whole, not any one specific article. As illustrated here, it’s quite likely that a small percentage of articles in a given journal generate a substantial proportion of its citations, meaning the journal’s quality metric may be a rather poor proxy for any given article’s impact.
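
To see why, here is a small, purely illustrative sketch (hypothetical citation counts, not data from the study or any real journal): with a heavy-tailed citation distribution, the journal-wide average that drives an impact factor can sit far above what the typical article receives.

```python
# Toy illustration: when citations are heavily skewed, a journal-level average
# (the impact-factor logic) says little about the typical article. All numbers
# here are made up for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical journal: 100 articles with citation counts drawn from a
# heavy-tailed (lognormal) distribution.
citations = rng.lognormal(mean=1.0, sigma=1.5, size=100).round().astype(int)

journal_mean = citations.mean()          # what an impact-factor-style metric reflects
median_article = np.median(citations)    # what the typical article actually receives
top_decile_share = np.sort(citations)[-10:].sum() / citations.sum()

print(f"Journal-wide mean citations:  {journal_mean:.1f}")
print(f"Median article citations:     {median_article:.1f}")
print(f"Citations earned by the most-cited 10% of articles: {top_decile_share:.0%}")
```

In a run like this the skew typically pushes the mean well above the median, which is exactly why a journal-level quality metric can be a weak proxy for the impact of any single article published in it.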

When push comes to shove, however, Gibson et al. suggest that what matters most to academic economists–especially those at the more prestigious departments–is not necessarily how much influence or relevance a particular paper has for shaping the intellectual debate, but whether it appears with a more prestigious cover.

That says something about the profession, I think. And perhaps not something good.

NSF Funding for Economics Research: Good or Bad?

The latest Journal of Economic Perspectives includes a pair of papers debating the social value of economics research funding from the National Science Foundation (NSF), featuring Robert Moffitt from Johns Hopkins and Tyler Cowen and Alex Tabarrok from George Mason. The abstracts of their respective viewpoints follow:

Robert Moffitt: “In Defense of the NSF Economics Program”
The NSF Economics program funds basic research in economics across all its disparate fields. Its budget has experienced a long period of stagnation and decline, with its real value in 2013 below that in 1980 and having declined by 50 percent as a percent of the total NSF budget. The number of grants made by the program has also declined over time, and its current budget is very small compared to that of many other funders of economic research. Over the years, NSF-supported research has supported many of the major intellectual developments in the discipline that have made important contributions to the study of public policy. The public goods argument for government support of basic economic research is strong. Neither private firms, foundations, nor private donors are likely to engage in the comprehensive support of all forms of economic research if NSF were not to exist. Select universities with large endowments are more likely to have the ability to support general economic research in the absence of NSF, but most universities do not have endowments sufficiently large to do so. Support for large-scale general purpose dataset collection is particularly unlikely to receive support from any nongovernment agency. On a priori grounds, it is likely that most NSF-funded research represents a net increase in research effort rather than displacing already-occurring effort by academic economists. Unfortunately, the empirical literature on the net aggregate impact of NSF economics funding is virtually nonexistent.

Tyler Cowen & Alex Tabarrok: “A Skeptical View of the National Science Foundation’s Role in Economic Research”
We can imagine a plausible case for government support of science based on traditional economic reasons of externalities and public goods. Yet when it comes to government support of grants from the National Science Foundation (NSF) for economic research, our sense is that many economists avoid critical questions, skimp on analysis, and move straight to advocacy. In this essay, we take a more skeptical attitude toward the efforts of the NSF to subsidize economic research. We offer two main sets of arguments. First, a key question is not whether NSF funding is justified relative to laissez-faire, but rather, what is the marginal value of NSF funding given already existing government and nongovernment support for economic research? Second, we consider whether NSF funding might more productively be shifted in various directions that remain within the legal and traditional purview of the NSF. Such alternative focuses might include data availability, prizes rather than grants, broader dissemination of economic insights, and more. Given these critiques, we suggest some possible ways in which the pattern of NSF funding, and the arguments for such funding, might be improved.

The Editorial Process in Economics and Social Sciences

Marc Bellemare offers some thoughts about the editorial review process in economics and the social sciences…from an editor’s perspective. His insights are helpful for new or younger scholars, and a good reminder for those more seasoned.

On May 1, I will become editor of Food Policy, replacing the University of London’s School of Oriental and African Studies’ Bhavani Shankar, and sharing the role of editor with the University of Bologna’s Mario Mazzocchi, serving for an initial term of three years.

Given that, I thought now would be as good a time as any to write my thoughts about the editorial process. This will allow me to go back to these thoughts once my term as editor ends, to see what else I might have learned. So here goes–in no particular order–some thoughts I’ve accumulated on the editorial process in the social sciences. I hope others with editorial experience can chime in with their own additional thoughts in the comments.

The (Fake) Academic Publishing Game

Last month Vox reported on a “scientific paper” written by Maggie Simpson, et al., being accepted by two scientific journals. The paper, a spoof generated by engineer Alex Smolyanitsky using a random text generator, was allegedly peer reviewed and accepted for publication by two of the many for-profit open access science journals that have sprung up over the past decade. The article (here) provides a nice overview of how rampant the trolling by fake scientific journals has become and some of the economic incentives behind them.

If you’re in academia, you probably receive email solicitations from these predatory journals regularly. I probably delete a handful of solicitations per day from such journals. I just assumed they were bogus, but the Vox article also provided a link to a useful listing of suspected predatory publishers created by Jeffrey Beall. Sure enough, my most recent email was from one of the publishers on this list.

While the article focuses on the problems these journals create for trust in scientific publications, the credibility of real peer reviewed scientific research, and evaluation of a given scholar’s publication resume, it fails to mention the complementary cause of the problem: …

Research Productivity of New Economics PhDs

The Economist posted a blog entry last week about the research productivity of new PhDs in economics, pointing to a recent paper by John Conley and Ali Sina Önder in the Journal of Economic Perspectives. Below is the abstract:

We study the research productivity of new graduates from North American PhD programs in economics from 1986 to 2000. We find that research productivity drops off very quickly with class rank at all departments, and that the rank of the graduate departments themselves provides a surprisingly poor prediction of future research success. For example, at the top ten departments as a group, the median graduate has fewer than 0.03 American Economic Review (AER)-equivalent publications at year six after graduation, an untenurable record almost anywhere. We also find that PhD graduates of equal percentile rank from certain lower-ranked departments have stronger publication records than their counterparts at higher-ranked departments. In our data, for example, Carnegie Mellon’s graduates at the 85th percentile of year-six research productivity outperform 85th percentile graduates of the University of Chicago, the University of Pennsylvania, Stanford, and Berkeley. These results suggest that even the top departments are not doing a very good job of training the great majority of their students to be successful research economists. Hiring committees may find these results helpful when trying to balance class rank and place of graduate in evaluating job candidates, and current graduate students may wish to re-evaluate their academic strategies in light of these findings.

I remember one of my graduate advisers, Lee Benham, claiming that the modal number of publications among PhD economists was zero. I think that was Lee’s way of encouraging grad students who were sweating out their dissertations and trying to get papers out for publication. Conley and Önder’s results would seem to substantiate his claim.