The Labeling Problem

(Part 1 of 2)

“Labeling” is a big thing these days. After all, as I hope you have concluded if you’ve read many (any?) of my previous posts, information (or lack thereof) is one of the biggest challenges for an effective market-based economy. But does that mean “labeling” is necessarily a good thing?

I bet you thought I was going to talk about food, didn’t you? A little bit, but first…

The university where I work has a “Writing Intensive” requirement for which students must take at least two courses that are designated as “writing intensive” or WI. WI courses have to be approved as satisfying certain criteria, including a minimum number of pages of revisions and a significant portion of the overall grade being based on students’ writing. It’s largely up to the instructor as to whether to apply for the WI designation in any particular course.

Naturally, not all students are thrilled about taking WI courses that actually require them to write, in real English, with some evidence of proper grammar and structure and all those nasty, time-consuming details. (Like OMG uv got 2 b kidding!)

Some students complained about unwittingly enrolling in courses that were WI. You know, because heaven forbid they end up in a class that requires writing they could have avoided. Professional advisors and administrators concurred, and contrived a labeling scheme to make it clear when a course is approved as WI–because the language in the course description itself was apparently insufficient. (Perhaps the dislike for writing stems from a dislike for reading as well.) Now courses have to be reviewed and approved in time to be listed in the course catalogue for registration the following semester, so that a W can be affixed to the end of the course number (e.g., ABM 4971W vs. ABM 4971).

But this course designator has created a different kind of problem: Students are now upset anytime a course without a W on the number includes any substantive amount of writing.

The presence of a WI label changes students’ expectations and perceptions
about all courses, not just WI courses.

And this is part of the problem in the larger ‘labeling’ debates we face as a society. A label doesn’t just provide information; it shapes consumers’ perceptions not only of the labeled product, but of all similar products. This is especially true for mandatory labels for attributes that consumers do not fully understand and for which consumers’ personal valuations are more subjective and varied.

Take foods containing genetically modified (GM), or genetically engineered (GE), organisms, for instance. Survey results suggest that a majority of US consumers have little knowledge or understanding of what GM foods are (for instance, see here and here), never mind the fact that a consensus report from the National Academies of Sciences, Engineering, and Medicine (p. 2) “found no substantiated evidence that foods from GE crops were less safe than foods from non-GE crops.”

Notwithstanding that lack of knowledge, a large majority of consumers, if asked, will agree with the idea that consumers have a right to know what’s in their food and that GM content should be labeled. Kind of like college kids who object simply to the idea of writing. That said, only a small percentage of consumers (1 in 6) actually care deeply about having that information themselves.

Despite the value of more information, GM labeling runs a couple of different risks. First, if food products containing GMOs are required to be labeled as such, almost all food would carry the label because most prepared foods contain products derived from soybeans and corn, which are predominantly grown using GM biotechnologies. If everything in a store carries the same ‘warning’ label, the label doesn’t convey any relative information. That is, it doesn’t help distinguish between products. In fact, it would be harder to find the products without the label. Imagine students scrolling through the 90%+ of courses marked “Not WI” in order to find the 10% that are WI. A “GMO-Free” label, on the other hand, would communicate more effectively by standing out relative to other products.

Second, even with a (voluntary) “GMO-Free” label, the label itself implies that GMOs are bad by comparison, just like having WI courses marked “W” seemingly implies courses without “W” don’t involve much writing. In that case, the label actually misinforms consumers, or at least misleads them, relative to the science of GM foods and their safety (or to instructors’ pedagogy, as the case may be).

The presence of a GM label changes consumers’ expectations and perceptions
about all food products, not just the ones labeled.

Having labels that misinform or mislead consumers, whether explicitly or implicitly, defeats the purpose of labeling to begin with, and is therefore an ineffective policy tool. There is also the question of what constitutes an economically sensible policy for providing information in the marketplace, even if labeling were effective. Stay tuned for a follow-up post on that.

To LSAT, Or Not to LSAT?

An article in today’s WSJ Online reports that a growing number of law schools in the US are planning to forgo the LSAT (Law School Admission Test) as their required entrance examination and to begin accepting the GRE (Graduate Record Examinations) that most (general) graduate schools accept for entry into MS and PhD programs.

Proponents argue that it will broaden the applicant pool for law schools to consider individuals who may not want–for whatever reason–to take both the LSAT and the GRE as they weigh grad school options. Opponents argue it will dilute the quality and preparation of students for the rigors of law school. Well that, and cut revenues from test fees, if you’re the Law School Admission Council that administers the test. Can both (or all) sides be right?

Yes, they can.

Admission to graduate school (or any academic program really) suffers from a hidden information problem. Applicants have a better idea of their ability to succeed in grad school than do admissions committees. They also have an incentive to over-represent their abilities. (They also may actually over-estimate their abilities, if you follow the behavioral economics literature, but we don’t even need that for the story to be interesting.) Likewise, admissions committees have a better idea of how rigorous their program is and what it takes to succeed than do prospective students. And they don’t want to waste time on students who are not likely to be successful in their program.

In economics we refer to this as an ‘adverse selection’ problem. It arises any time there is an information asymmetry between potential trading partners–one party has better information than the other–and the better-informed party may have an incentive to hide that information (because the truth would hurt its prospects in the trade).
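
To see the mechanics, here is a minimal sketch (with entirely made-up numbers, in the spirit of Akerlof’s classic “market for lemons”) of how one-sided information can drive good prospects out of a market:

```python
# Minimal "market for lemons" sketch with made-up numbers.
# Sellers know their item's quality; buyers only observe the
# average quality of what's offered, so they bid 1.5x that average.

qualities = [100, 200, 300, 400, 500]  # sellers' private valuations

offered = qualities
while True:
    avg = sum(offered) / len(offered)
    new_price = 1.5 * avg              # buyers pay 1.5x expected quality
    still_offered = [q for q in offered if q <= new_price]
    if still_offered == offered:
        break                          # stable: everyone offered still sells
    offered = still_offered            # high-quality sellers exit the market

print(f"equilibrium price: {new_price:.0f}")
print(f"qualities still traded: {offered}")
```

With these numbers the market only partially unravels: the highest-quality sellers exit because an average-based price undervalues them. That loss of good prospects is precisely why screens (like the LSAT) and signals are worth something.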

There are (basically) two solutions for dealing with this adverse selection information problem. First, the information-disadvantaged party (in our case, the admissions committee) can use any number of screening devices to reduce the information asymmetry and sort out the good prospects from the bad prospects. That’s exactly the purpose the LSAT (or the GRE) serves. It reveals something about the applicant’s reasoning ability (especially the LSAT) or general knowledge base (more so the GRE). So the question is, are the attributes the LSAT and the GRE screen for sufficiently similar that relying on either one would be a good screen? Obviously, some law schools–and some very good ones–appear to believe that’s the case. Or they at least believe it’s likely enough to give it a shot. Since some schools have allowed the GRE as a special case in the past, they may even have evidence to support that conclusion.

The second way of dealing with adverse selection problems is for the party with the information advantage (in our case, the law school applicants) to signal their quality by undertaking some special effort that would only make sense if they were actually good prospects. In other words, choosing to send the signal is a self-selection mechanism that makes the applicant’s information claim more credible.

Here is where critics of the GRE standard are likely correct. If schools only accept the LSAT, applicants have to go out of their way to take this “grueling test” if they want to go to law school. Only students who really want to go to law school and who believe they have the ability are willing to put themselves through that (and pay the explicit costs as well). If schools will accept the LSAT or the GRE, on the other hand, some GRE-takers may apply to law school who wouldn’t have if they had to take the LSAT. Which means, as critics point out, there may be more applicants that aren’t really dedicated to the idea of law school and therefore may be less desirable students.

However, it might also be the case that some prospective students have broader interests, and the GRE has greater utility across a wider set of possible graduate programs. In a world where college students have limited dollars and time to prep for a graduate admission test, the GRE is the more economical investment. While that might make the signal value of taking the LSAT that much higher–since you really have to want to go to law school to take it–it might also be reasonable to infer that the value of the LSAT signal to admissions committees is not high enough to risk missing out on good potential students who choose to economize on their admission test choice.

So ultimately, it’s not about whether (or which) side is right about its concern. The real question is how much difference those concerns actually make in the outcome of law school admissions, and whether that difference is worth the costs associated with either policy.

Of course, there is yet another possibility–and one the American Bar Association seems to be considering. That is, that neither screening device is valuable enough to require it for admission to law school. At least not enough to make requiring a test part of the accreditation standards for law schools. Apparently the ABA believes the information asymmetry may no longer be so great that the screen/signal adds all that much value after all.

If I were in the business of selling either of those screening devices and the ancillary services that go with them (e.g., test prep courses), I’d be a bit concerned about the future of my business model.

Certification, Teacher Quality, and Click-bait Academic Publishing

Today’s email brought a content alert from Economic Inquiry on a newly accepted paper titled “New Evidence on National Board Certification as a Signal of Teacher Quality”. The abstract of the paper reads:

“Using longitudinal data from North Carolina that contains detailed identifiers, we estimate the effect of having a National Board for Professional Teaching Standards (NBPTS) teacher on academic achievement. We identify the effects of an NBPTS teacher exploiting multiple sources of variation including traditional-lagged achievement models, twin- and sibling-fixed effects, and aggregate grade-level variation. Our preferred estimates show that students taught by National Board certified teachers have higher math and reading scores by 0.04 and 0.01 of a standard deviation. We find that an NBPTS math teacher increases the present value of students’ lifetime income by $48,000.” (emphasis added)

Based on the abstract, one might infer that having NBPTS certification makes for a better teacher and that having NBPTS certification allows math teachers to have a meaningful lifetime income effect on students. If you read just a bit further, you might feel comfortable that you made the right inference when you read:

With aggregation and school-by-year fixed effects only variation between cohorts is used to identify the effect of NBPTS on test scores.

Unfortunately, if you concluded that being NBPTS certified has a meaningful relevance to student performance, you’d be completely wrong–as the paper itself explains.

Reading the abstract, the first question one should ask is “How does one become NBPTS certified, and what does that have to do with teacher quality?” In statistical terms, there’s a significant question of endogeneity and causation. Namely, does getting certified make one a better teacher, or do only better teachers get certified? If the latter, then whether or not one is certified has nothing to do with academic outcomes. It might provide a signal that the teacher is already a good teacher, but having the certification would have no meaningful effect on academic outcomes or lifetime earnings.

And indeed, that is exactly the case, as the authors themselves explain if you read just a little further than the statement about their method “to identify the effect of NBPTS on test scores.” What matters is the teacher and the teacher’s skills and practices. The certification itself is superfluous to the academic achievement result.

“Comparisons of teacher performance before and after certification suggest that greater average effectiveness of certified teachers reflects fixed quality differences identified by the certification as opposed to human capital effects. Implementing policies with a primary goal to modify the effectiveness of teachers should place little weight on the NBPTS certification as a potential facilitator. Rather the certification can be used to reward more effective teachers where use of direct evidence on performance in the districts is not feasible.” (emphasis added)

In other words, it’s the teacher effect, not the NBPTS effect, that matters–to the point that the authors specifically say that little weight should be placed on NBPTS certification as a potential facilitator (i.e., a policy tool) for improving student outcomes.

What the authors really purport to show, as the paper title alludes, is that being NBPTS certified is a pretty good indicator that a teacher is a good teacher. The NBPTS standards appear to be well-aligned with effective teaching practices. But if that’s the actual research objective, then the authors should also have looked at the causation from the other direction and tried to sort out the selection bias in who wants to get certified and why.

It’s unfortunate that the abstract of the paper is so misleading, because many people economize on their time by only reading the abstracts of articles to get a sense of the paper’s results. After all, that’s the purpose of an abstract. In this case, however, the abstract is written so poorly that it buries the actual results beneath a misleading presentation and might be perceived as a serious case of ‘bait and switch’. While the title of the paper is still correct–having national board certification is a signal of teacher quality–the abstract’s wording risks painting a false picture of the relevance of certification for student achievement. And that false picture is perpetuated by their description of their methods.

The editors of Economic Inquiry should be a bit ashamed for allowing such a bait and switch. The authors should as well. At best, it’s carelessly poor writing. At worst, it’s the academic equivalent of click-bait. Unfortunately, some people may look no further than the abstract as the basis for what would ultimately be a misguided potential education policy.

The Old College ROI

Today I ran across a graphic from The Economist in March 2015 that shows the return on investment (ROI) to different college majors by level of selectivity of the college the student attended. The charts show that while college pays, it does not pay the same for everyone. More specifically, it does not pay the same for every major. Engineering and math majors have high ROIs, followed by business and economics majors. Humanities and arts majors have lower ROIs on average.

If you’re underwhelmed by the realization, you should be. After all, it’s really common sense and something I’ve written about before here. But it’s a fact that seems incomprehensible to so many (for starters, count the number of votes Bernie Sanders has received). This is important because college education is subsidized not by degree, but by the expense of the school the student chooses. An arts major at Stanford is paying the same tuition as the engineering major–and likely borrowing just as much money–but their returns on investment for those educations are vastly different. Put another way, the values of those degrees are very different, even if the price of the degrees is the same.
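
For concreteness, here’s a back-of-envelope sketch of that point. All of the dollar figures below are hypothetical (not The Economist’s), but the structure is the same: identical price, very different value.

```python
# Back-of-envelope ROI comparison (all numbers hypothetical).
# ROI here = (20-year earnings premium over a non-degree baseline,
# minus the net cost of the degree) divided by the net cost.

def degree_roi(annual_premium, years=20, net_cost=120_000):
    """Simple undiscounted ROI on the net cost of a degree."""
    total_premium = annual_premium * years
    return (total_premium - net_cost) / net_cost

# Same net cost (tuition), different earnings premiums:
engineering = degree_roi(annual_premium=30_000)  # hypothetical premium
arts        = degree_roi(annual_premium=9_000)   # hypothetical premium

print(f"engineering ROI: {engineering:.0%}")
print(f"arts ROI: {arts:.0%}")
```

Even this crude version makes the policy point: subsidizing by tuition level treats those two investments as identical when their payoffs are anything but.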

Interestingly, though, the ROI by degree does not change much based on the selectivity of the school (typically a measure of quality). Looking at each of the degree types, there is very little obvious correlation between selectivity and ROI (taking into account financial aid; i.e., based on net cost, not listed tuition). While students from more selective schools may earn higher starting salaries, the higher cost of their education means they are getting no better return on their financial investment than students of similar majors at much less selective schools.

This suggests that the market for college graduates is actually working pretty darn well when you take into account students’ degrees (i.e., the value of the human capital they develop in college).

It also suggests we should reconsider federal policy for student loans. If we insist on continuing to subsidize higher education (and all the ills that creates), at least we could do it more intelligently by tying loan amounts to degree programs rather than tuition levels.

How Federal Student Loans Increase College Costs

A recent paper by researchers at the Federal Reserve Bank of New York shows how increases in federal student loan programs–intended to make college more affordable–actually increase the cost of college. As with other markets, when the supply of money available to pay tuition increases, the price of tuition rises. The abstract reads:

When students fund their education through loans, changes in student borrowing and tuition are interlinked. Higher tuition costs raise loan demand, but loan supply also affects equilibrium tuition costs—for example, by relaxing students’ funding constraints. To resolve this simultaneity problem, we exploit detailed student-level financial data and changes in federal student aid programs to identify the impact of increased student loan funding on tuition. We find that institutions more exposed to changes in the subsidized federal loan program increased their tuition disproportionately around these policy changes, with a sizable pass-through effect on tuition of about 65 percent. We also find that Pell Grant aid and the unsubsidized federal loan program have pass-through effects on tuition, although these are economically and statistically not as strong. The subsidized loan effect on tuition is most pronounced for expensive, private institutions that are somewhat, but not among the most, selective.

But the effects don’t stop with rising tuition. This increased demand for college education also exacerbates income inequality by inflating the supply of college graduates. (See this piece by George Leef for a full overview of both the NY Fed paper and the income inequality effects.)

It’s not rocket science. It’s pretty simple supply-and-demand stuff, actually. No matter how good the intentions, policies that ignore these effects tend to do more harm than good. In this case, generous federal student loan programs not only lead to increases in tuition that result in even higher loans, but also reduce the earning power of graduates (on average) and decrease their ability to repay those loans. A pretty perverse circle of effects indeed.
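
The 65 percent pass-through estimate is easy to put in dollar terms. A quick sketch (the loan-increase amount below is hypothetical; only the 65 percent figure comes from the paper):

```python
# Pass-through of a loan expansion into tuition (illustrative sketch).
# The NY Fed paper estimates roughly 65% pass-through for subsidized
# loans; the $1,000 loan increase below is purely hypothetical.

PASS_THROUGH = 0.65

def tuition_increase(extra_loan_dollars, pass_through=PASS_THROUGH):
    """Tuition rise implied by an increase in available loan funding."""
    return pass_through * extra_loan_dollars

extra_loans = 1_000  # hypothetical: $1,000 more in subsidized loan capacity
rise = tuition_increase(extra_loans)
print(f"${extra_loans} more in loans -> ~${rise:.0f} higher tuition")

# The student keeps only the residual affordability gain, and the
# higher tuition in turn raises loan demand again.
net_affordability_gain = extra_loans - rise
print(f"net affordability gain to the student: ~${net_affordability_gain:.0f}")
```

In other words, most of each additional loan dollar shows up in the price of tuition rather than in students’ pockets, which is the perverse circle described above.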

(Not-so) Public Higher Education

NPR reported yesterday on a Government Accountability Office (GAO) report which finds that as of Fiscal Year 2012 (the 2011-12 school year), more of public colleges’ revenues came from students’ tuition than from state (public) funding. The following graph from the GAO report shows the breakdown in revenue sources over the previous 10 years:

As a faculty member at Missouri’s flagship public university, all I can say is, “Welcome to the party! What took you so long?” At the University of Missouri, tuition topped State appropriations for the first time in 2004. According to the FY13 Budget Book for the University of Missouri System (which includes four autonomous campuses and two hospital systems), State appropriations constituted less than 15% of total revenue, with another 2% from State grants. The table to the right shows the breakdown by each “business segment” of the University system. For the flagship campus (MU), net tuition and fees were 40% more than State appropriations. The largest source of income for the University came from “sales and services of educational activities and auxiliary enterprises.” That includes things like Residential Life and Campus Dining (which are also paid by students), Parking & Transportation Services, the University Store (which is much more than just textbooks, but includes those as well), and Athletics (which proudly boasts that it is self-funding from ticket sales, radio and television revenues, licensing, etc.).

The neighboring graph from the University’s 2014 Budget Update shows the breakdown of MU’s Operating Budget revenue, which might be considered the heart of the direct educational expenses. It shows that over the past 25 years, State support has dropped from 70% to just 32% of operating revenue. Meanwhile, tuition has increased from just 27% to 62% of operating revenue. And that is over a period in which operating expenditures increased, so tuition is a much bigger slice of an even bigger pie. Looking at the University as a whole (not just operating), students foot the bill for about 33% of total revenues (including room and board), compared to just under 17% in State funding. And that doesn’t include parking fees or bookstore purchases.

Some complain about the cost of higher education skyrocketing, and total expenditures have increased substantially (largely a result of increased administrative expenses). However, when students complain about the costs of higher education, they are focused on their tuition bills. And tuition has gone up at public universities, no doubt. Since 2000, the average annual increase in tuition at MU is about 16% (much of that in the early 2000s), which is much higher than the rate of inflation. But students (and their parents) need to recognize that the reason tuition rates have grown so much is to offset the decline in State appropriations (which, not coincidentally, started hitting hard in the early 2000s). Expenditures have gone up nowhere near as much as tuition has.

Which is all to say, the myth of “public higher education” is really just that: a myth. Yes, there is still some State funding for “public” universities, but it is an increasingly small percentage. Public universities are now much more dependent on tuition–just like private universities–than on State funds. And while the scales may have tipped only recently across the country as a whole, in Missouri it has been that way for quite some time.


The Blockbuster Lesson for Higher Education

I currently have the…pleasure?…of serving on a campus committee whose charge ostensibly is “to advise the vice chancellor for Administrative Services on the facility needs of the campus.” This is my second year (of a three-year term) on the committee. At one of our meetings last year, as we were being briefed on several planned construction and remodeling projects, I raised the question, “Has anyone considered that we may be acting like Blockbuster in an age of Netflix? Given trends in higher education, with increasing use of online technology, does it make sense to continue investing so much in brick-and-mortar facilities?” Few seemed to understand (or appreciate) my question, and it went largely unaddressed.

Earlier this week, I ran across an article by Clayton Christensen and Michael Horn in the New York Times arguing that online education is going to be an agent of transformation in higher education. They argue that most traditional higher education institutions are, at best, reacting to online education in the same way sailing ship companies reluctantly adopted steam engine technology by just supplementing their sailing vessels with a steam engine, rather than embracing the new technology and replacing their sailing ships with steamships. Yes, that is precisely the way most universities–including mine–seem to be reacting to online education. I sent a link to the article to my fellow committee members, reminding them of my comments last year about Blockbuster.

It seems that, in the minds of some at least, advising the vice chancellor on the facilities needs of the campus does not include taking into account the potential changing nature of higher education and its implications for the facilities needs of the campus. Such a “bigger picture” is beyond our pay grade. Not surprisingly, I suppose, those observations were shared by administrators who sit on the committee. I was referred to our University’s fairly fresh “strategic plan,” which itself ignores the external higher education environment in which we operate. Point made.

But then came the real irony of the story. Within hours of these email exchanges, DISH Network announced they are shuttering the remaining Blockbuster stores in the US and shutting down Blockbuster’s mail-order delivery service. The company that, in its heyday, revolutionized the video rental industry is now dead, a victim of ignoring what it perceived to be an inconsequential technological change. As Larry Downes and Paul Nunes share in their Harvard Business Review blog today, Blockbuster became a casualty of “Big Bang Disruption.”

I doubt my committee colleagues noticed the announcement. If they did, I’m sure most disregarded it as purely coincidental and not relevant to our work–if they even connected the dots. And yet the questions remain for traditional universities and colleges, including mine:

  • Are we ignoring a big bang, disruptive technology in online education? (For a nice piece arguing yes, see Alex Tabarrok’s “Why Online Education Works”)
  • Are we still investing in long-lived brick-and-mortar assets that are likely ill-suited to compete in the future market for higher education?
  • Do we recognize that our market niche, if we are to have one at all, will likely be less about selling higher education than about selling a collegiate experience?
  • What kind of facilities are best suited to serving that market?

But those aren’t questions for me or this committee. Above our pay grade. Not the kind of question the facilities committee should be asking. I wonder who in Blockbuster was similarly dismissed?