
What Do (Academic) Economists Value?


Which do economists consider most important in evaluating one another? The quality of the journals in which they publish, or the number of citations of the articles they have written?

It turns out, according to a recent study by John Gibson, David L. Anderson, and John Tressler forthcoming in Economic Inquiry, that economists seem to place more value on the prestige of the journal than on whether anyone cites–or even reads–the research. At least, that is the case among top-ranked economics departments. Strictly speaking, the finding applies to the top-ranked economics departments in the University of California system, but there is little reason to think the behavior of economists across the UC system is unrepresentative of the profession as a whole in this regard. Their abstract reads:

Research quality can be evaluated from citations or from the prestige of journals publishing the research. We relate salary of tenured University of California (UC) economists to their lifetime publications of 5,500 articles and to the 140,000 citations to these articles. Citations hardly affect salary, especially in top-ranked UC departments where impacts of citations are less than one-tenth those of journals. In lower ranked departments, and when journal quality is less comprehensively measured, effects of citations on salary increase. If journal quality is just measured by counting articles in journal tiers, apparent effects of citations are overstated.
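To make the comparison in the abstract concrete, here is a rough sketch of the kind of salary regression involved, using made-up numbers (not the paper's data or its actual specification): salary is regressed on a journal-quality index and a citation count, and the two coefficients are compared.

```python
import numpy as np

# Hypothetical data for six tenured economists (illustrative only, not the paper's data).
journal_index = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # quality-weighted publication index
citations = np.array([0.5, 3.0, 1.0, 6.0, 2.0, 9.0])      # career citations, in hundreds

# Construct salaries (in $1,000s) so that journal quality matters far more than citations.
salary = 100 + 20 * journal_index + 1.5 * citations

# Ordinary least squares: salary ~ intercept + journal_index + citations
X = np.column_stack([np.ones_like(journal_index), journal_index, citations])
coef, *_ = np.linalg.lstsq(X, salary, rcond=None)
print(coef)
```

With data built this way, the regression recovers an effect of citations that is a small fraction of the effect of journal quality, which is the pattern the authors report for top-ranked UC departments.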

This is an interesting–and to my mind, sad–result. As the authors explain in their paper, there are many reasons why citations would be a more meaningful measure of the quality and impact of a particular paper than would be the journal in which it is published. After all, the decision to publish is based on the opinions of a handful (at most) of individuals (i.e., the editor and a few reviewers picked by the editor). Citations, on the other hand, reflect the opinion of the broader academic community.

And in terms of relevance to anything of value, one might also argue that citations are a much better metric of the “So What?” question. If no one cites a paper, it suggests either that no one read it or that no one found its ideas or results worth mentioning in the marketplace of ideas or in future research. That raises the question of whether the paper is really all that important or meaningful, whether within academia or, heaven forbid, more broadly. And if not meaningful, then what is the basis of “quality”?

The authors also identify another wrinkle. Among lower ranked economics departments, journal quality is less important and citations tend to be given more weight. The authors, citing Liebowitz (2014), suggest this may be because “faculty at lower ranked schools may rely on easily available proxies, such as counts of articles or of citations, rather than making their own determination based on reading the articles when they have to evaluate a research record in order to make a labor market decision.” This may be because faculty at lower ranked programs are more likely to have published in lower ranked journals–and there is no general consensus on relative journal quality rankings beyond the top few journals. Hence the appeal and use of such metrics as Impact Factors.

I’d like to think there’s something more going on. Rather than simply using citations as a low-cost way of evaluating research quality, perhaps lower ranked programs, which tend to be more teaching-focused, actually value the citations in and of themselves as an indication that the work has meaningful relevance and impact.

One would expect there to be some correlation between citations and the quality of journal in which an article appears. The traditionally more highly ranked journals likely have larger readership, which should translate into more citations. One metric of journal quality–its impact factor–is based on the number of citations it receives. But that is based on the average for the journal as a whole, not any one specific article. As illustrated here, it’s quite likely that a small percentage of articles in a given journal generate a substantial proportion of its citations, meaning the journal’s quality metric may be a rather poor proxy for any given article’s impact.
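The skew argument can be illustrated with a quick back-of-the-envelope calculation using hypothetical citation counts for one journal volume (the numbers are invented, but the shape of the distribution is typical of citation data):

```python
# Hypothetical citation counts for ten articles in one journal volume:
# a couple of highly cited papers dominate the total.
citations = [120, 45, 8, 5, 3, 2, 1, 1, 0, 0]

total = sum(citations)
mean = total / len(citations)                     # this average drives the impact factor
median = sorted(citations)[len(citations) // 2]   # the "typical" article

# Share of all citations earned by the two most-cited articles
top_two_share = sum(sorted(citations, reverse=True)[:2]) / total
print(f"mean={mean}, median={median}, top-2 share={top_two_share:.0%}")
```

Here the journal-level average is 18.5 citations per article while the median article has 3, and the top two articles account for roughly 89% of all citations. Judging the median article by the journal's average would flatter it considerably.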

When push comes to shove, however, Gibson et al. suggest that what matters most to academic economists–especially those at the more prestigious departments–is not necessarily how much influence or relevance a particular paper has for shaping the intellectual debate, but whether it appears with a more prestigious cover.

That says something about the profession, I think. And perhaps not something good.


The Old College ROI


Today I ran across a graphic from The Economist in March 2015 that shows the return on investment (ROI) to different college majors by level of selectivity of the college the student attended. The charts show that while college pays, it does not pay the same for everyone. More specifically, it does not pay the same for every major. Engineering and math majors have high ROIs, followed by business and economics majors. Humanities and arts majors have lower ROIs on average.

If you’re underwhelmed by the realization, you should be. After all, it’s really common sense and something I’ve written about before here. But it’s a fact that seems incomprehensible to so many (for starters, count the number of votes Bernie Sanders has received). This is important because college education is subsidized not by degree, but by the expense of the school the student chooses. An arts major at Stanford is paying the same tuition as the engineering major–and likely borrowing just as much money–but their returns on investment for those educations are vastly different. Put another way, the value of those degrees is very different, even if the price of the degrees is the same.

Interestingly, though, the ROI by degree does not change much based on the selectivity of the school (typically a measure of quality). Looking at each of the degree types, there is very little obvious correlation between selectivity and ROI (taking into account financial aid; i.e., based on net cost, not list-price tuition). While students from more selective schools may earn higher starting salaries, the higher cost of their education means they are getting no better return on their financial investment than students with similar majors at much less selective schools.

This suggests that the market for college graduates is actually working pretty darn well when you take into account students’ degrees (i.e., the value of the human capital they develop in college).

It also suggests we should reconsider federal policy for student loans. If we insist on continuing to subsidize higher education (and all the ills that creates), at least we could do it more intelligently by tying loan amounts to degree programs rather than tuition levels.

How Federal Student Loans Increase College Costs


A recent paper by researchers at the Federal Reserve Bank of New York shows how increases in federal student loan programs–intended to make college more affordable–actually increase the cost of college. As with other markets, when the supply of money available to pay tuition increases, the price of tuition rises. The abstract reads:

When students fund their education through loans, changes in student borrowing and tuition are interlinked. Higher tuition costs raise loan demand, but loan supply also affects equilibrium tuition costs—for example, by relaxing students’ funding constraints. To resolve this simultaneity problem, we exploit detailed student-level financial data and changes in federal student aid programs to identify the impact of increased student loan funding on tuition. We find that institutions more exposed to changes in the subsidized federal loan program increased their tuition disproportionately around these policy changes, with a sizable pass-through effect on tuition of about 65 percent. We also find that Pell Grant aid and the unsubsidized federal loan program have pass-through effects on tuition, although these are economically and statistically not as strong. The subsidized loan effect on tuition is most pronounced for expensive, private institutions that are somewhat, but not among the most, selective.
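To see what a 65 percent pass-through means in practice, here is a simple illustrative calculation (the dollar figures are hypothetical, not from the paper):

```python
# Illustrative pass-through arithmetic (hypothetical numbers, not from the paper).
pass_through = 0.65          # share of extra loan capacity captured by tuition increases
loan_cap_increase = 1_000    # hypothetical increase in the annual subsidized loan limit ($)

tuition_increase = pass_through * loan_cap_increase
net_affordability_gain = loan_cap_increase - tuition_increase
print(tuition_increase, net_affordability_gain)
```

Under these assumptions, every extra $1,000 of subsidized loan capacity raises tuition at exposed institutions by about $650, so only about $350 of the intended affordability gain actually reaches the student.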
But the effects don’t stop with rising tuition. This increased demand for college education also exacerbates income inequality by inflating the supply of college graduates. (See this piece by George Leef for a full overview of both the NY Fed paper and the income inequality effects).
It’s not rocket science. It’s pretty simple supply-and-demand stuff, actually. No matter how good the intentions, policies that ignore these effects tend to do more harm than good. In this case, generous federal student loan programs not only lead to increases in tuition that result in even higher loans, but reduce the earning power of graduates (on average) and decrease their ability to repay those loans. A pretty perverse circle of effects indeed.
