Race, College Admissions, Harvard & Opportunity Costs

Reading this article in the Chronicle of Higher Education about the ongoing affirmative action lawsuit against Harvard, I was struck by this line (buried deep in the article):

An applicant’s race, they [Harvard admission officials] said, can help, but not hurt, his or her chances of admission.

Earlier in the article, the author pointed out that Harvard has 37,000 applicants for 2019 (8,200 with perfect GPAs and 2,700 with perfect verbal SAT scores) but only 1,700 spots to offer entering students.

If you're a student of opportunity costs, you immediately see the problem: It is impossible for an applicant's race to "help, but not hurt" the applicant's chance of admission to Harvard.

If one applicant's race helps that student's chance of admission, it reduces the number of slots remaining for other applicants. If an applicant's race does not help them, their chances of admission, all else equal, are lower because there are fewer slots available. In other words, it hurts the chances of the "not helped" applicants because the total number of slots is limited.

The Harvard admissions officials' comment could only be true if there were no limit on the number of students offered admission. With no admissions cap, admitting one more student with "Attribute A" would have no consequence for the admission of applicants without "Attribute A." Capping admissions makes slots a scarce resource, which means offering a slot to any one person carries an opportunity cost in the form of fewer remaining slots for others. Consequently, any criterion that advantages one applicant (or group of applicants) necessarily hurts the chances of applicants who don't match that criterion, race included.
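
To make the arithmetic concrete, here's a minimal sketch using the applicant and seat counts from the article plus an entirely hypothetical "boost" (the 5,000 and 500 figures are invented for illustration, not a model of Harvard's actual process):

```python
# Illustrative only: a stylized admissions pool with a fixed number of seats.
# The 37,000 and 1,700 come from the article; the boost numbers are hypothetical.

applicants = 37_000   # total applicants
seats = 1_700         # fixed number of offers

# Baseline: every applicant competes for every seat on equal footing.
baseline_odds = seats / applicants

# Hypothetical: suppose a preference shifts 500 of the seats to members of a
# 5,000-person group who would not otherwise have received them.
boosted_group = 5_000
seats_shifted_by_boost = 500

remaining_seats = seats - seats_shifted_by_boost
remaining_applicants = applicants - boosted_group
others_odds = remaining_seats / remaining_applicants

print(f"Odds if no one is helped: {baseline_odds:.4f}")  # ~0.0459
print(f"Odds for everyone else:   {others_odds:.4f}")    # ~0.0375
```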

So in Harvard’s case, if race helps anyone, it must hurt others. By definition. Because opportunity costs.

To LSAT, Or Not to LSAT?

An article in today's WSJ Online reports that a growing number of US law schools are planning to forgo the LSAT (Law School Admission Test) as their required entrance examination and begin accepting the GRE (Graduate Record Examinations), which most (general) graduate schools accept for entry into MS and PhD programs.

Proponents argue that it will broaden the applicant pool by letting law schools consider individuals who may not want, for whatever reason, to take both the LSAT and the GRE as they weigh grad school options. Opponents argue it will dilute the quality and preparation of students for the rigors of law school. Well, that, and cut revenues from test fees if you're the Law School Admission Council, which administers the test. Can both (or all) sides be right?

Yes, they can.

Admission to graduate school (or any academic program really) suffers from a hidden information problem. Applicants have a better idea of their ability to succeed in grad school than do admissions committees. They also have an incentive to over-represent their abilities. (They also may actually over-estimate their abilities, if you follow the behavioral economics literature, but we don’t even need that for the story to be interesting.) Likewise, admissions committees have a better idea of how rigorous their program is and what it takes to succeed than do prospective students. And they don’t want to waste time on students who are not likely to be successful in their program.

In economics we refer to this as an 'adverse selection' problem. It arises any time there is an information asymmetry between potential trading partners: one party has better information than the other and may have an incentive to hide that information (because the truth would hurt their prospects in the trade).
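
For a sense of how adverse selection can play out, here's a stylized "market for lemons" sketch with hypothetical quality values of my own choosing (not anything from the article): buyers who can't observe quality offer only the average value, and high-quality sellers drop out until little but the lowest quality trades.

```python
# Stylized Akerlof-style unraveling with hypothetical numbers.
# Sellers know their item's quality; buyers only know the distribution,
# so buyers offer the average value of whatever quality remains for sale.

qualities = [100, 200, 300, 400, 500]  # sellers' (hidden) reservation values

for_sale = list(qualities)
while for_sale:
    offer = sum(for_sale) / len(for_sale)  # buyers pay expected quality
    # Sellers whose item is worth more than the offer withdraw it.
    still_selling = [q for q in for_sale if q <= offer]
    if still_selling == for_sale:
        break  # stable: everyone left is willing to trade at the offer
    for_sale = still_selling

print("Qualities still traded:", for_sale)  # only the lowest quality survives
```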

There are (basically) two solutions for dealing with this adverse selection information problem. First, the information-disadvantaged party (in our case, the admissions committee) can use any number of screening devices to reduce the information asymmetry and sort out the good prospects from the bad prospects. That’s exactly the purpose the LSAT (or the GRE) serves. It reveals something about the applicant’s reasoning ability (especially the LSAT) or general knowledge base (more so the GRE). So the question is, are the attributes the LSAT and the GRE screen for sufficiently similar that relying on either one would be a good screen? Obviously, some law schools–and some very good ones–appear to believe that’s the case. Or they at least believe it’s likely enough to give it a shot. Since some schools have allowed the GRE as a special case in the past, they may even have evidence to support that conclusion.

The second way of dealing with adverse selection problems is for the party with the information advantage (in our case, the law school applicants) to signal their quality by undertaking some special effort that would only make sense if they were actually good prospects. In other words, choosing to send the signal is a self-selection mechanism that makes the applicant’s information claim more credible.
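
Here's a minimal numeric sketch of that self-selection logic, with hypothetical payoffs (the values are invented for illustration): the signal separates applicants only when it's worth the cost to serious applicants and not to lukewarm ones.

```python
# Hypothetical payoffs to illustrate a separating (self-selection) condition.
# A signal is credible only if serious applicants find it worth sending
# and non-serious applicants do not.

value_to_serious_applicant = 100.0  # payoff a dedicated applicant puts on law school
value_to_casual_applicant = 20.0    # payoff a lukewarm applicant puts on it
signal_cost = 40.0                  # time, money, and pain of prepping for the LSAT

serious_sends = value_to_serious_applicant > signal_cost  # True: worth it
casual_sends = value_to_casual_applicant > signal_cost    # False: not worth it

separating = serious_sends and not casual_sends
print(f"LSAT separates serious from casual applicants: {separating}")
```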

Here is where critics of the GRE standard are likely correct. If schools accept only the LSAT, applicants have to go out of their way to take this "grueling test" if they want to go to law school. Only students who really want to go to law school, and who believe they have the ability, are willing to put themselves through that (and pay the explicit costs as well). If schools accept either the LSAT or the GRE, on the other hand, some GRE-takers may apply to law school who wouldn't have if they had to take the LSAT. Which means, as critics point out, there may be more applicants who aren't really dedicated to the idea of law school and therefore may be less desirable students.

However, it might also be the case that some prospective students have broader interests, and the GRE has greater utility across a wider set of possible graduate programs. In a world where college students have limited dollars and time to prep for a graduate admission test, the GRE is the more economical investment. While that might make the signal value of taking the LSAT that much higher, since you really have to want to go to law school to take it, it might also be reasonable to infer that the value of the LSAT signal to admissions committees is not high enough to risk missing out on good potential students who choose to economize on their admission-test choice.

So ultimately, it's not about whether one side or the other is right about its concern. The real question is how much difference those concerns actually make in the outcomes of law school admissions, and whether that difference is worth the costs associated with either policy.

Of course, there is yet another possibility–and one the American Bar Association seems to be considering. That is, that neither screening device is valuable enough to require it for admission to law school. At least not enough to make requiring a test part of the accreditation standards for law schools. Apparently the ABA believes the information asymmetry may no longer be so great that the screen/signal adds all that much value after all.

If I were in the business of selling either of those screening devices and the ancillary services that go with them (e.g., test-prep courses), I'd be a bit concerned about the future of my business model.

What Do (Academic) Economists Value?

Which do economists consider most important in evaluating one another? The quality of the journals in which they publish, or the number of citations of the articles they have written?

It turns out, according to a recent study by John Gibson, David L. Anderson and John Tressler coming out in Economic Inquiry, that economists seem to place more value on the prestige of the journal than on whether anyone cites–or even reads–the research. At least among top-ranked economics departments. Or, more precisely, at the top-ranked economics departments in the University of California system, but there's little reason to think the behavior of economists across the UC system is not representative of the profession as a whole in this regard. Their abstract reads:

Research quality can be evaluated from citations or from the prestige of journals publishing the research. We relate salary of tenured University of California (UC) economists to their lifetime publications of 5,500 articles and to the 140,000 citations to these articles. Citations hardly affect salary, especially in top-ranked UC departments where impacts of citations are less than one-tenth those of journals. In lower ranked departments, and when journal quality is less comprehensively measured, effects of citations on salary increase. If journal quality is just measured by counting articles in journal tiers, apparent effects of citations are overstated.

This is an interesting–and to my mind, sad–result. As the authors explain in their paper, there are many reasons why citations would be a more meaningful measure of the quality and impact of a particular paper than would be the journal in which it is published. After all, the decision to publish is based on the opinions of a handful (at most) of individuals (i.e., the editor and a few reviewers picked by the editor). Citations, on the other hand, reflect the opinion of the broader academic community.

And in terms of relevance to anything of value, one might also argue that citations are a much better metric for the "So what?" question. If no one cites a paper, it suggests either that no one read it or that no one found the ideas or results it contained worth mentioning in the market of ideas or in future research. That raises the question of whether the paper is really all that important or meaningful, whether within academia or, heaven forbid, more broadly. And if not meaningful, then what is the basis of "quality"?

The authors also identify another wrinkle. Among lower-ranked economics departments, journal quality is less important and citations tend to be given more weight. The authors, citing Liebowitz (2014), suggest this may be because "faculty at lower ranked schools may rely on easily available proxies, such as counts of articles or of citations, rather than making their own determination based on reading the articles when they have to evaluate a research record in order to make a labor market decision." This may be because faculty at lower-ranked programs are more likely to have published in lower-ranked journals, and there is no general consensus on relative journal quality rankings beyond the top few journals. Hence the appeal and use of metrics such as impact factors.

I'd like to think there's something more going on. Rather than simply using citations as a low-cost way of evaluating research quality, perhaps lower-ranked programs, which tend to be more teaching-focused, value the citations in and of themselves as an indication that the work actually has meaningful relevance and impact.

One would expect there to be some correlation between citations and the quality of journal in which an article appears. The traditionally more highly ranked journals likely have larger readership, which should translate into more citations. One metric of journal quality–its impact factor–is based on the number of citations it receives. But that is based on the average for the journal as a whole, not any one specific article. As illustrated here, it’s quite likely that a small percentage of articles in a given journal generate a substantial proportion of its citations, meaning the journal’s quality metric may be a rather poor proxy for any given article’s impact.
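
Here's a small simulation of that point, assuming (purely for illustration) that per-article citations follow a heavy-tailed lognormal distribution; the parameters are invented, not estimated from any actual journal:

```python
import random
import statistics

# Illustrative only: assume per-article citations in a journal follow a
# heavy-tailed (lognormal) distribution; parameters chosen arbitrarily.
random.seed(42)
citations = [int(random.lognormvariate(1.5, 1.2)) for _ in range(500)]

mean_cites = statistics.mean(citations)      # what an impact factor reflects
median_cites = statistics.median(citations)  # what a typical article gets

top_decile = sorted(citations, reverse=True)[: len(citations) // 10]
share_from_top = sum(top_decile) / sum(citations)

print(f"Journal-average citations (impact-factor-like): {mean_cites:.1f}")
print(f"Median article's citations:                     {median_cites:.1f}")
print(f"Share of all citations from top 10% of papers:  {share_from_top:.0%}")
```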

When push comes to shove, however, Gibson et al. suggest that what matters most to academic economists–especially those at the more prestigious departments–is not necessarily how much influence or relevance a particular paper has for shaping the intellectual debate, but whether it appears with a more prestigious cover.

That says something about the profession, I think. And perhaps not something good.


The Old College ROI

Today I ran across a graphic from The Economist in March 2015 that shows the return on investment (ROI) for different college majors by the selectivity of the college the student attended. The charts show that while college pays, it does not pay the same for everyone. More specifically, it does not pay the same for every major. Engineering and math majors have high ROIs, followed by business and economics majors. Humanities and arts majors have lower ROIs on average.

If you're underwhelmed by the realization, you should be. After all, it's really common sense and something I've written about before here. But it's a fact that seems incomprehensible to so many (for starters, count the number of votes Bernie Sanders has received). This is important because college education is subsidized not by degree, but by the expense of the school the student chooses. An arts major at Stanford is paying the same tuition as the engineering major, and likely borrowing just as much money, but their returns on investment for those educations are vastly different. Put another way, the values of those degrees are very different, even if the prices of the degrees are the same.

Interestingly, though, the ROI by degree does not change much based on the selectivity of the school (typically a measure of quality). Looking at each of the degree types, there is very little obvious correlation between selectivity and ROI (taking into account financial aid; i.e., based on net cost, not list tuition). While students from more selective schools may earn higher starting salaries, the higher cost of their education means they are getting no better return on their financial investment than students with similar majors at much less selective schools.
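
For the curious, here's a rough sketch of the ROI arithmetic with hypothetical salaries and costs (these numbers are mine, not The Economist's data):

```python
# Hypothetical numbers to sketch the ROI arithmetic: the gain in earnings over
# a high-school baseline, net of the cost of the degree, relative to that cost.
# Not the actual data behind The Economist's chart.

def college_roi(net_cost: float, grad_earnings: float,
                baseline_earnings: float, years: int = 20) -> float:
    """Return (earnings gain over `years`, minus cost) divided by cost."""
    gain = (grad_earnings - baseline_earnings) * years
    return (gain - net_cost) / net_cost

# Same list price, very different value:
engineering = college_roi(net_cost=120_000, grad_earnings=95_000,
                          baseline_earnings=45_000)
arts = college_roi(net_cost=120_000, grad_earnings=52_000,
                   baseline_earnings=45_000)

print(f"Engineering ROI: {engineering:.1f}x")  # ~7.3x
print(f"Arts ROI:        {arts:.1f}x")         # ~0.2x
```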

This suggests that the market for college graduates is actually working pretty darn well when you take into account students’ degrees (i.e., the value of the human capital they develop in college).

It also suggests we should reconsider federal policy for student loans. If we insist on continuing to subsidize higher education (and all the ills that creates), at least we could do it more intelligently by tying loan amounts to degree programs rather than tuition levels.

How Federal Student Loans Increase College Costs

A recent paper by researchers at the Federal Reserve Bank of New York shows how increases in federal student loan programs–intended to make college more affordable–actually increase the cost of college. As with other markets, when the supply of money available to pay tuition increases, the price of tuition rises. The abstract reads:

When students fund their education through loans, changes in student borrowing and tuition are interlinked. Higher tuition costs raise loan demand, but loan supply also affects equilibrium tuition costs—for example, by relaxing students’ funding constraints. To resolve this simultaneity problem, we exploit detailed student-level financial data and changes in federal student aid programs to identify the impact of increased student loan funding on tuition. We find that institutions more exposed to changes in the subsidized federal loan program increased their tuition disproportionately around these policy changes, with a sizable pass-through effect on tuition of about 65 percent. We also find that Pell Grant aid and the unsubsidized federal loan program have pass-through effects on tuition, although these are economically and statistically not as strong. The subsidized loan effect on tuition is most pronounced for expensive, private institutions that are somewhat, but not among the most, selective.
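
Taking the abstract's roughly 65 percent pass-through estimate at face value, the arithmetic for a hypothetical $1,000 increase in the subsidized loan cap looks like this:

```python
# The ~65% pass-through from the paper's abstract, applied to a hypothetical
# increase in the subsidized federal loan cap.

pass_through = 0.65          # from the paper's estimate
loan_cap_increase = 1_000.0  # hypothetical: raise the subsidized cap by $1,000

tuition_increase = pass_through * loan_cap_increase
extra_aid_kept_by_student = loan_cap_increase - tuition_increase

print(f"Tuition rises by about ${tuition_increase:,.0f}")                    # ~$650
print(f"Net new purchasing power: about ${extra_aid_kept_by_student:,.0f}")  # ~$350
```
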
But the effects don't stop with rising tuition. This increased demand for college education also exacerbates income inequality by inflating the supply of college graduates. (See this piece by George Leef for a full overview of both the NY Fed paper and the income inequality effects.)

It's not rocket science. It's pretty simple supply-and-demand stuff, actually. No matter how good the intentions, policies that ignore these effects tend to do more harm than good. In this case, generous federal student loan programs not only lead to increases in tuition that result in even higher loans, but also reduce the earning power of graduates (on average) and decrease their ability to repay those loans. A pretty perverse circle of effects indeed.