The Labeling Problem, Part 2

In The Labeling Problem, I explained how the presence of a label, whether on a college course or a food item, does more than just identify the product: it can actually influence consumers’ perceptions of the attribute the label identifies, and, by extension, their perceptions of similar products that don’t carry the label.

Consequently, labels have the potential not just to inform consumers, but to misinform them, particularly when the label is for an attribute that consumers do not fully understand, as is the case with genetically modified (GM), or genetically engineered (GE), food products.

There is another dimension of the labeling issue that I promised to return to: what makes economic sense? Remember the Three Simple Rules? What makes economic sense comes down to this: What’s the marginal benefit of providing the additional information? What’s the marginal cost of providing that information? And because we’re talking about a diverse set of consumers with different interests, that leads to the question of “who should pay for it?”

So What’s the Marginal Benefit?

Information is economically valuable only if it will change the outcome of a decision. Consequently, a GM label would create personal (or private) benefit only if the label would change the consumer’s decision to purchase the product. A Pew Research Center study found only 1-in-6 people (16%) “care a great deal about the issue of GM foods.” Another 37% “care some.” But do they care enough to change their behavior even if it costs them additional money to buy the GMO-free product? Some scholars have attempted to estimate consumers’ willingness-to-pay (WTP) for GMO-free products as a measure of the value of labels (for instance, see here and here). The results tend to show individual consumers, on average, are willing to pay at least a little more for GMO-free products, whether the label denotes the presence or the absence of GMOs. Of course, those estimates also reflect the fact that a large percentage of consumers lack knowledge about what GMOs are.
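To see how those pieces fit together, here is a quick back-of-the-envelope sketch in Python. The population shares come from the Pew survey cited above, but the willingness-to-pay premiums are purely hypothetical placeholders I’m assuming for illustration, not estimates from the studies:

```python
# Back-of-the-envelope: aggregate private benefit of GM label information.
# The population shares are from the Pew survey cited above; the WTP
# premiums are hypothetical placeholders, NOT estimates from the studies.

US_POPULATION = 320e6          # rough U.S. population
SHARE_CARE_GREAT_DEAL = 0.16   # "care a great deal about the issue of GM foods"
SHARE_CARE_SOME = 0.37         # "care some"

wtp_great_deal = 20.0   # hypothetical dollars per person per year
wtp_some = 5.0          # hypothetical dollars per person per year

private_benefit = US_POPULATION * (
    SHARE_CARE_GREAT_DEAL * wtp_great_deal + SHARE_CARE_SOME * wtp_some
)
print(f"Illustrative aggregate private benefit: ${private_benefit / 1e9:.2f}B per year")
# With these made-up premiums: ~$1.62B/year. The point is the structure
# (share who care, times what they would pay), not the particular number.
```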

From a public policy perspective, the label only has value if it would lead consumers to make decisions that improve public well-being. The consensus of the scientific community is that there is no substantive difference in nutrition, quality, or safety between foods containing GM ingredients and GMO-free foods (see here, here, and here). That suggests there is no real public benefit to having the information provided.

What’s the Marginal Cost?

A wide range of numbers has been thrown around about the potential cost of mandatory labeling. At the low end, the Consumers Union (a pro-labeling group) commissioned a study that found the cost would be only $2.30 per person (or about $740 million) per year. That estimate is based primarily on the costs of the labeling itself. It does not include the costs of regulatory enforcement, or the increased costs of sourcing inputs, keeping those inputs segregated to prevent contamination by GM inputs, and reformulating products. Other studies (funded by anti-labeling groups) have suggested costs on the order of $450 per household (or about $56.7 billion) per year. In addition to taking a more systemic view of the costs, these studies also make assumptions about manufacturers shifting more of their products to being GMO-free to avoid the negative stigma of a “contains GMOs” label.
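The arithmetic behind those two headline numbers is easy to check, and checking it shows where the estimates agree and where they diverge. A quick sketch, using only the totals and per-unit figures quoted above:

```python
# Checking the arithmetic behind the two cost estimates quoted above.

low_total = 740e6        # Consumers Union-commissioned study: ~$740M/year
low_per_person = 2.30    # ~$2.30 per person per year
print(low_total / low_per_person / 1e6)       # ~322 (million people)

high_total = 56.7e9      # industry-funded studies: ~$56.7B/year
high_per_household = 450.0
print(high_total / high_per_household / 1e6)  # ~126 (million households)

# Both imply plausible U.S. denominators (people vs. households). The
# roughly 77x gap between the totals comes from scope: labels alone
# vs. segregation, reformulation, enforcement, and product-mix shifts.
print(high_total / low_total)                 # ~76.6
```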

So while the predicted cost is wide-ranging, one thing is clear: The costs are bigger than zero.

Sound economic decision making (and therefore sound policy) requires the marginal benefits of any action to be at least as big as the marginal costs of the action. From a public perspective, the benefits are arguably zero, while the costs are greater than zero. That suggests a regulation requiring labels would not make economic sense.
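Stated as code, the rule itself is almost trivially simple; all the hard work is in measuring the two sides. A minimal sketch, plugging in the section’s numbers:

```python
# The marginal-benefit/marginal-cost rule from the paragraph above.

def label_mandate_makes_sense(marginal_benefit: float, marginal_cost: float) -> bool:
    """Sound policy requires benefits at least as big as costs: MB >= MC."""
    return marginal_benefit >= marginal_cost

# Public benefit ~= $0 (per the scientific consensus cited earlier);
# the cost is somewhere between the two estimates above.
for annual_cost in (0.74e9, 56.7e9):
    print(label_mandate_makes_sense(marginal_benefit=0.0, marginal_cost=annual_cost))
# False either way: any positive cost fails the test when the benefit is zero.
```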

Does that mean there should be no labels? Not at all. It simply means it doesn’t make sense to have a law that forces all consumers (many of whom are not concerned about GMOs anyhow) to pay for a regulation that has little or no public benefit in the first place.

But the fact that there are potential private benefits to labeling suggests that voluntary labeling may be desirable. Clearly, the value of labeling information to some consumers is greater than zero. And some of those consumers would both pay for that information and change their consumption decisions based on it. Manufacturers who believe they can deliver that value at a low enough cost to make a profit have every incentive to make that happen. And in fact, that’s exactly the situation we have now in the US with voluntary “GMO-Free” and/or “Organic” labeling.

One might still object that these voluntary labels may create a negative stigma about non-labeled products. And that’s a fair point. But it also means that industry has an incentive to more proactively educate consumers about the science behind GM foods, so they won’t be fooled into paying more for something that may not provide the benefits they think.

Cass Sunstein, a Harvard law professor, summarizes the whole point fairly well in the abstract of a recent paper:

Many people favor labeling GM food on the ground that it poses serious risks to human health and the environment, but with certain qualifications, the prevailing scientific judgment is that it does no such thing. In the face of that judgment, some people respond that even in the absence of evidence of harm, people have “a right to know” about the contents of what they are eating. But there is a serious problem with this response: there is a good argument that the benefits of such labels would be lower than the costs.

Consumers would obtain no health benefits from such labels. To the extent that they would be willing to pay for them, the reason (for many though not all) is likely to be erroneous beliefs, which are not a sufficient justification for mandatory labels. Moreover, GMO labels might well lead people to think that the relevant foods are harmful and thus affirmatively mislead them.


The Labeling Problem

(Part 1 of 2)

“Labeling” is a big thing these days. After all, as I hope you have concluded if you’ve read many (any?) of my previous posts, information (or lack thereof) is one of the biggest challenges for an effective market-based economy. But does that mean “labeling” is necessarily a good thing?

I bet you thought I was going to talk about food, didn’t you? A little bit, but first…

The university where I work has a “Writing Intensive” requirement for which students must take at least two courses that are designated as “writing intensive” or WI. WI courses have to be approved as satisfying certain criteria, including a minimum number of pages of revisions and a significant portion of the overall grade being based on students’ writing. It’s largely up to the instructor as to whether to apply for the WI designation in any particular course.

Naturally, not all students are thrilled about taking WI courses that actually require them to write, in real English, with some evidence of proper grammar and structure and all those nasty, time-consuming details. (Like OMG uv got 2 b kidding!)

Some students complained about unwittingly enrolling in courses that were WI. You know, because heaven forbid they end up in a class that requires writing when they could have avoided it. Professional advisors and administrators concurred, and contrived a labeling scheme to make it clear that a course is approved as WI, because the language in the course description itself was apparently insufficient. (Perhaps the dislike for writing stems from a dislike for reading as well.) Now courses have to be reviewed and approved in time to be listed in the course catalogue for registration the following semester, so a W can be affixed to the end of the course number (e.g., ABM 4971W vs. ABM 4971).

But this course designator has created a different kind of problem: Students are now upset anytime a course without a W on the number includes any substantive amount of writing.

The presence of a WI label changes students’ expectations and perceptions
about all courses, not just WI courses.

And this is part of the problem in the larger ‘labeling’ debates we face as a society. When we add information on labels, it doesn’t just provide information; it shapes consumers’ perceptions not just of the labeled product, but of all similar products. This is especially true for mandatory labels for attributes that consumers do not fully understand and for which consumers’ personal valuations are more subjective and varied.

Take foods containing genetically modified (GM), or genetically engineered (GE), organisms, for instance. Survey results suggest that a majority of US consumers have little knowledge or understanding of what GM foods are (for instance, see here and here), never mind the fact that a consensus report from the National Academies of Sciences, Engineering, and Medicine (p. 2) “found no substantiated evidence that foods from GE crops were less safe than foods from non-GE crops.”

Notwithstanding that lack of knowledge, a large majority of consumers, if asked, will agree with the idea that consumers have a right to know what’s in their food and that GM content should be labeled. Kind of like college kids who object simply to the idea of writing. That said, only a small percentage of consumers (1 in 6) actually care deeply about having that information themselves.

Despite the value of more information, GM labeling runs a couple of different risks. First, if food products containing GMOs are required to be labeled as such, almost all food would carry the label, because most prepared foods contain products derived from soybeans and corn, which are predominantly grown using GM biotechnologies. If everything in a store carries the same ‘warning’ label, the label doesn’t convey any relative information. That is, it doesn’t help distinguish between products. In fact, it would be harder to find the products without the label. Imagine students scrolling through the 90%+ of courses marked “Not WI” in order to find the 10% that are WI. A “GMO-Free” label, on the other hand, would communicate more effectively by standing out relative to other products.
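One way to make that “no relative information” point precise (an information-theoretic framing I’m adding here, not a claim from the post’s sources) is surprisal: the information in seeing a label is -log2(p), where p is the share of products that carry it. Using the 90/10 split from the WI analogy:

```python
import math

# Surprisal of seeing a label, given the share of products carrying it.
# A near-universal label tells you almost nothing; a rare one tells you a lot.
# The 90%/10% split mirrors the WI-course analogy above; it is not a
# measured share of GM-containing products.

def surprisal_bits(share_labeled: float) -> float:
    return -math.log2(share_labeled)

print(surprisal_bits(0.90))  # "contains GMOs" on ~90% of products: ~0.15 bits
print(surprisal_bits(0.10))  # "GMO-Free" on ~10% of products:      ~3.32 bits
```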

Second, even with a (voluntary) “GMO-Free” label, the label itself implies that GMOs are bad by comparison, just like having WI courses marked “W” seemingly implies courses without “W” don’t involve much writing. In that case, the label actually misinforms consumers, or at least misleads them, relative to the science of GM foods and their safety (or to instructors’ pedagogy, as the case may be).

The presence of a GM label changes consumers’ expectations and perceptions
about all food products, not just the ones labeled.

Having labels that misinform or mislead consumers, whether explicitly or implicitly, defeats the purpose of labeling to begin with, and is therefore an ineffective policy tool. There is also the issue of what constitutes an economically sensible policy for providing information in the marketplace, even if labeling were effective. Stay tuned for a follow-up post on that.

To LSAT, Or Not to LSAT?

An article in today’s WSJ Online reports that a growing number of law schools in the US are planning to forgo the LSAT (Law School Admission Test) as their required entrance examination and to begin accepting the GRE (Graduate Record Examinations) that most (general) graduate schools accept for entry into MS and PhD programs.

Proponents argue that it will broaden the applicant pool for law schools to consider individuals who may not want, for whatever reason, to take both the LSAT and the GRE as they weigh grad school options. Opponents argue it will dilute the quality and preparation of students for the rigors of law school. Well, that, and cut revenues from test fees, if you’re the Law School Admission Council, which administers the test. Can both (or all) sides be right?

Yes, they can.

Admission to graduate school (or any academic program really) suffers from a hidden information problem. Applicants have a better idea of their ability to succeed in grad school than do admissions committees. They also have an incentive to over-represent their abilities. (They also may actually over-estimate their abilities, if you follow the behavioral economics literature, but we don’t even need that for the story to be interesting.) Likewise, admissions committees have a better idea of how rigorous their program is and what it takes to succeed than do prospective students. And they don’t want to waste time on students who are not likely to be successful in their program.

In economics we refer to this as an ‘adverse selection’ problem. It results anytime there is an information asymmetry between potential trading partners in which one party has better information than the other, and may have an incentive to hide that information (because the truth would hurt their prospects in the trade).

There are (basically) two solutions for dealing with this adverse selection information problem. First, the information-disadvantaged party (in our case, the admissions committee) can use any number of screening devices to reduce the information asymmetry and sort out the good prospects from the bad prospects. That’s exactly the purpose the LSAT (or the GRE) serves. It reveals something about the applicant’s reasoning ability (especially the LSAT) or general knowledge base (more so the GRE). So the question is, are the attributes the LSAT and the GRE screen for sufficiently similar that relying on either one would be a good screen? Obviously, some law schools–and some very good ones–appear to believe that’s the case. Or they at least believe it’s likely enough to give it a shot. Since some schools have allowed the GRE as a special case in the past, they may even have evidence to support that conclusion.

The second way of dealing with adverse selection problems is for the party with the information advantage (in our case, the law school applicants) to signal their quality by undertaking some special effort that would only make sense if they were actually good prospects. In other words, choosing to send the signal is a self-selection mechanism that makes the applicant’s information claim more credible.

Here is where critics of the GRE standard are likely correct. If schools only accept the LSAT, applicants have to go out of their way to take this “grueling test” if they want to go to law school. Only students who really want to go to law school and who believe they have the ability are willing to put themselves through that (and pay the explicit costs as well). If schools will accept the LSAT or the GRE, on the other hand, some GRE-takers may apply to law school who wouldn’t have if they had to take the LSAT. Which means, as critics point out, there may be more applicants that aren’t really dedicated to the idea of law school and therefore may be less desirable students.
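That self-selection logic is the classic costly-signaling story. Here is a minimal sketch with made-up payoffs (all the numbers are assumptions for illustration) showing why a cheaper, more general test weakens the separation:

```python
# Costly signaling with made-up payoffs: a signal separates applicant types
# only if it is worth sending for dedicated applicants and not for casual ones.

def sends_signal(value_of_law_school: float, cost_of_test: float) -> bool:
    return value_of_law_school > cost_of_test

LSAT_COST = 100.0   # hypothetical: fees plus "grueling" prep effort
GRE_COST = 40.0     # hypothetical: cheaper, since it keeps other options open

for applicant, value in [("dedicated", 150.0), ("casual", 60.0)]:
    print(applicant, sends_signal(value, LSAT_COST), sends_signal(value, GRE_COST))
# dedicated True True   -> applies under either rule
# casual    False True  -> applies only once the GRE is accepted
# Accepting the GRE pools both types in the applicant pool, which is
# exactly the critics' worry.
```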

However, it might also be the case that some prospective students have broader interests, and the GRE has greater utility for a wider set of possible graduate programs. In a world where college students have limited dollars and time to prep for a graduate admission test, the GRE is the more economical investment. While that might make the signal value of taking the LSAT that much higher, since you really have to want to go to law school to take it, it might also be reasonable to infer that the value of the LSAT signal to admissions committees is not high enough to risk missing out on good potential students who choose to economize on their admission test choice.

So ultimately, it’s not about which side is right about its concern. The real question is how much difference those concerns actually make in the outcome of law school admissions, and whether that difference is worth the costs associated with either policy.

Of course, there is yet another possibility–and one the American Bar Association seems to be considering. That is, that neither screening device is valuable enough to require it for admission to law school. At least not enough to make requiring a test part of the accreditation standards for law schools. Apparently the ABA believes the information asymmetry may no longer be so great that the screen/signal adds all that much value after all.

If I were in the business of selling either of those screening devices and the ancillary services that go with them (e.g., test prep courses), I’d be a bit concerned about the future of my business model.