Shifting the (Online) Sales Tax Burden

Recently a friend (and former student) Tweeted about the politics of regulation (see the Tweet below). Today, the city (and county) where I live is voting on a new “use tax” that would be applied to all “purchases made from out-of-state vendors”. Ostensibly, this is to offset lost sales tax revenue due to online shopping–the bugaboo of many local retailers and governments.

This morning I lectured on the relationship of demand elasticity to the question of who pays for a sales tax. As illustrated in the simple example below, both the consumer and the seller pay portions of the sales tax–provided neither Demand nor Supply is perfectly elastic (flat) or perfectly inelastic (vertical). When the supply curve shifts up due to the tax, it reduces how many units consumers buy (Q1 vs Q). Consumers pay more than they would have without the tax (P1 vs P), and sellers receive a lower price (net of the tax) than they would have without the tax (Ps vs P). So both sides bear some of the burden of the sales tax. The question is, which side pays more? It’s a pretty simple exercise to show that the side with the “steeper” curve pays the bigger share just by redrawing the picture with lines of different relative steepness. Try it for yourself using this simple graph as an example.
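The same exercise works numerically. Below is a minimal sketch of my own (not from the lecture or the graph itself) using linear curves: Demand is P = a − b·Q and Supply is P = c + d·Q, and a per-unit tax t drives a wedge between what consumers pay and what sellers keep. It falls out of the algebra that consumers bear the share b/(b+d) of the tax–i.e., the steeper side pays more.

```python
# Tax incidence with linear Demand (P = a - b*Q) and Supply (P = c + d*Q).
# A per-unit tax t means consumers pay Pd while sellers keep Ps = Pd - t.

def incidence(a, b, c, d, t):
    """Return (consumer_price_rise, seller_price_fall) caused by the tax t."""
    # No-tax equilibrium: a - b*Q = c + d*Q  ->  P* = (a*d + c*b) / (b + d)
    p0 = (a * d + c * b) / (b + d)
    # With the tax: a - b*Q = c + d*Q + t  ->  Pd = (a*d + c*b + b*t) / (b + d)
    pd = (a * d + c * b + b * t) / (b + d)
    ps = pd - t
    return pd - p0, p0 - ps

# Equally steep curves: a $1 tax splits 50/50.
up, down = incidence(a=10, b=1, c=2, d=1, t=1)
print(up, down)   # 0.5 0.5

# Demand three times as steep as Supply: consumers bear three-quarters.
up, down = incidence(a=10, b=3, c=2, d=1, t=1)
print(up, down)   # 0.75 0.25
```

The burden shares depend only on the relative slopes, not on the size of the tax–redrawing the picture with a steeper Demand line is exactly the second call above.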

The steepness of the curve reflects the sensitivity of consumers (on the Demand side) and sellers (on the Supply side) to changes in price. The more sensitive either side is, the more their quantity decision will change when price changes. More sensitivity means ‘flatter’ lines. Less sensitivity means ‘steeper’ lines.

Consider demand for something that’s very important to the consumer, like insulin. Even if the price goes up, insulin-dependent consumers will not reduce their consumption of insulin very much at all–unless the price really goes up a lot. In that case, the Demand line would be very steep.

What affects the steepness of Demand? Lots of things, but one of the biggest is the availability of other goods that provide the same (or very similar) benefits to the consumer. For instance, if you need insulin, you don’t have many alternatives. Orange juice, on the other hand, has several. If the price of orange juice goes up much, I might choose another juice, milk, or just plain water.

What else has substitutes? Stuff sold in local stores that can also be bought online. If I’m willing to wait two days, a large percentage of the things I might buy locally can be delivered right to my doorstep without me having to leave my house. If the price of buying things online is lower than the price of things sold locally, buying online is a really good substitute–if I can exercise a little patience. And if the local sales tax does not apply to my online purchases, that gives online sellers an automatic price advantage over local sellers–as much as 4.5% where I live (and that doesn’t count State sales tax).

So, what does that have to do with who pays the sales tax or Per’s tweet above?

First, demand for local purchases is more price-sensitive because the price of substitute items (from online sellers) is lower. If we impose (and actually enforce) a use tax on online sales, that will make consumers a little less price sensitive in their demand for local stuff, i.e., their Demands for local goods will get steeper. That means the relative tax burden will shift from local sellers to consumers. And this is even before considering the retailer’s ability to increase prices (or to not lower them as much to compete with online sellers). Little wonder local retailers are more than happy to “level the playing field” with online sellers by imposing the tax. They get to increase prices and pay less of the sales tax themselves; a win-win.

Second, the process giving rise to this proposed tax follows exactly the process Per outlines:

  1. Government imposes a regulation (a sales tax) that makes local retailers less competitive in the marketplace. There are other forms of taxation municipalities can and do use. Ours chooses to use a sales tax (for lots of things).
  2. Higher sales taxes encourage more people to buy online to avoid the sales tax, which not only reduces local sales, but local government tax revenue. Two problems for the price of one!
  3. Blame “The Market” for the loss of sales and tax revenue. Our City Manager even suggested it is immoral for people to live in the City and use City services while buying stuff online.
  4. Demand new regulations (i.e., the use tax) to “fix” the problem created in Step 1.

So here we are on election day in Boone County, Missouri. The voters that bother to show up will get to decide: who should pay more of the tax burden? Consumers or retailers?

Epilogue
Voter turnout was record low (suggesting few people cared–if they were even aware of the special election), and the proposed use taxes failed 49.2-50.8 and 45.3-54.7 in the city and county, respectively. Not surprisingly, those residents for whom brick-and-mortar shopping is even less convenient voted even more heavily to keep the price of the substitute lower.

Gambling on Your Kid’s Life

My Twitter feed brought me an interesting piece by Christian Britschgi at Reason’s Hit & Run blog. In it, he lambasts this Washington Post op-ed decrying what it portrays as the rampant abuse of life insurance policies by individuals who insure children only to then kill them and collect the insurance benefits. WaPo goes on to call for stricter regulations, naturally, to put a stop to this abuse.

Britschgi adeptly points out the self-refuting assertions in the WaPo piece–particularly that in each case cited either a) the victim wasn’t a child (not that adults’ lives don’t count, but it doesn’t comport with the drama of the headline), and/or b) the killer had already committed fraud (i.e., violated existing laws and regulations) in the process of buying the life insurance policy and then didn’t receive the payment because they committed further “fraud” by killing the insured. He also highlights that there’s nothing very ‘rampant’ about this kind of fraudulent behavior.

These articles reminded me of a conversation in one of my classes just a few weeks ago–and one I posted on four years ago here. Namely, the idea that life insurance is basically a bet that the insured person is going to die in the next year–and that when you lose the bet (i.e., the person doesn’t die), you pay up again for the next year.

This is a rather disturbing perspective for (apparently only an overwhelming majority of, but not all) parents who consider taking out life insurance policies on their children. After all, how many parents would admit to gambling on the prospect that their kid will die in the next year? And yet, that is exactly what they do when they buy that insurance policy. (Yes, I know; there are other reasons to insure children, as I discussed in that earlier post…but the point remains.)

In Vegas, bets don’t get paid if the bettor is found to have rigged the game–counting cards, loading dice, rigging jackpot machines, etc. And the house has a strong incentive to monitor betting behavior to weed out cheaters. As the Reason blog shows, the same is true for life insurance companies.

What Do (Academic) Economists Value?

Which do economists consider most important in evaluating one another? The quality of the journals in which they publish, or the number of citations of the articles they have written?

It turns out, according to a recent study by John Gibson, David L. Anderson, and John Tressler coming out in Economic Inquiry, that economists seem to place more value on the prestige of the journal than on whether or not anyone cites–or even reads–the research. At least among top-ranked economics departments. Or, at the top-ranked economics departments in the University of California system, but there’s little reason to think that the behavior of economists across the UC system is not representative of the profession as a whole in this regard. Their abstract reads:

Research quality can be evaluated from citations or from the prestige of journals publishing the research. We relate salary of tenured University of California (UC) economists to their lifetime publications of 5,500 articles and to the 140,000 citations to these articles. Citations hardly affect salary, especially in top-ranked UC departments where impacts of citations are less than one-tenth those of journals. In lower ranked departments, and when journal quality is less comprehensively measured, effects of citations on salary increase. If journal quality is just measured by counting articles in journal tiers, apparent effects of citations are overstated.

This is an interesting–and to my mind, sad–result. As the authors explain in their paper, there are many reasons why citations would be a more meaningful measure of the quality and impact of a particular paper than would be the journal in which it is published. After all, the decision to publish is based on the opinions of a handful (at most) of individuals (i.e., the editor and a few reviewers picked by the editor). Citations, on the other hand, reflect the opinion of the broader academic community.

And in terms of relevance to anything of value, one might also argue that citations are a much better metric of the “So What?” question. If no one cites a paper, it suggests either no one read it or no one found the ideas or results it contained worth mentioning in the market of ideas or future research. Which raises the question of whether it is really all that important or meaningful, whether within academia or, heaven forbid, more broadly. And if not meaningful, then what is the basis of “quality”?

The authors also identify another wrinkle. Among lower ranked economics departments, journal quality is less important and citations tend to be given more weight. The authors, citing Liebowitz (2014), suggest this may be because “faculty at lower ranked schools may rely on easily available proxies, such as counts of articles or of citations, rather than making their own determination based on reading the articles when they have to evaluate a research record in order to make a labor market decision.” This may be because faculty at lower ranked programs are more likely to have published in lower ranked journals–and there is no general consensus on relative journal quality rankings beyond the top few journals. Hence the appeal and use of such metrics as Impact Factors.

I’d like to think there’s something more going on. Rather than simply using citations as a low-cost way of evaluating research quality, perhaps lower ranked programs, which tend to be more teaching-focused, actually value the citations in and of themselves as an indication that the work has meaningful relevance and impact.

One would expect there to be some correlation between citations and the quality of journal in which an article appears. The traditionally more highly ranked journals likely have larger readership, which should translate into more citations. One metric of journal quality–its impact factor–is based on the number of citations it receives. But that is based on the average for the journal as a whole, not any one specific article. As illustrated here, it’s quite likely that a small percentage of articles in a given journal generate a substantial proportion of its citations, meaning the journal’s quality metric may be a rather poor proxy for any given article’s impact.

When push comes to shove, however, Gibson et al. suggest that what matters most to academic economists–especially those at the more prestigious departments–is not necessarily how much influence or relevance a particular paper has for shaping the intellectual debate, but whether it appears with a more prestigious cover.

That says something about the profession, I think. And perhaps not something good.

Economics and the Millennial Marriage Drought

“Why aren’t Millennials getting married? Despite the popularity of dating apps like Tinder, Grindr, and OKCupid, Millennials are not pairing off.”

That’s the opening line of an interesting post by Olivia Gonzalez and Erikagrace Davies over at LearnLiberty.org. It’s an important question. And an important observation.

They go on to suggest that a significant reason for the drop in marriage rates among millennials is due to lower real incomes relative to prior generations, combined with the need to be more mobile in today’s society and the difficulty that creates for young, dual-income couples. They suggest that the gig economy may create more non-traditional income opportunities that make it easier for such couples to work and, thereby, afford to get married.

I’d propose an alternate hypothesis–though one still rooted in economics–and it’s based on the opening lines themselves. Dating apps like Tinder, Grindr, and OKCupid do more than just increase the ease of finding one’s true love. They highlight the great diversity of potential mates–not just locally, but around the globe. What’s more, such apps and other social media make it easier to access–and assess–that much more diverse population of potential mates.

So what’s economic about that? Option theory.

An option is the right, but not the obligation, to take some action. Think of the decision to marry as an option. Dating allows someone to do more than just sow their wild oats. It allows them to consider different potential mates to find the “best one” for them, however one might define “best”. But when one exercises the option to marry, the option to continue looking for a better mate is killed (at least in a society still dominated by norms of monogamy). In other words, there is an opportunity cost to getting married in the form of the foregone opportunity to find someone even better.

That means the opportunity cost of getting married is higher when there is a greater diversity of potential mates in the world. To use the language of option theory, the value of the “option” to marry is higher when there is a greater variance in the value (or quality) of potential mates. But once one executes the option and gets married, that value is lost.
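That variance effect is easy to see in a toy simulation (my own sketch, on a made-up “mate quality” scale, not anything from the post or formal option-pricing models). The option to keep looking only has upside–a worse draw can simply be declined–so its expected value grows with the dispersion of the pool:

```python
import random

def option_value(current, sigma, n=200_000, seed=1):
    """Expected gain from one more draw versus settling for `current` now.
    'Quality' draws are normal(0, sigma); a worse draw is simply declined,
    so only the upside counts -- the defining feature of an option."""
    rng = random.Random(seed)
    gain = 0.0
    for _ in range(n):
        draw = rng.gauss(0.0, sigma)
        gain += max(draw - current, 0.0)
    return gain / n

low  = option_value(current=0.0, sigma=1.0)   # homogeneous pool of mates
high = option_value(current=0.0, sigma=3.0)   # diverse pool of mates
print(low < high)   # True: more dispersion makes the option to keep looking more valuable
```

Tripling the dispersion roughly triples the value of continuing to search–which is exactly the sense in which a more diverse pool of potential mates raises the opportunity cost of marrying now.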

This is really nothing new. According to the US Census Bureau, the rate of marriage among young adults has been declining for decades (as shown in the nearby graph from FiveThirtyEight)–well before social media and well before the economic constraints Gonzalez and Davies describe, but in line with the increased education and labor force participation of women. This provided women more economic independence, which allowed them to retain the option to marry longer–not needing to “cash it in” at a discount in return for economic security. Similarly, men encountered a more heterogeneous population of potential mates with a higher variance in potential quality. Throw in changing norms on same-sex partners and marriage over the past few decades, and the pool of potential partners is even more diverse.

Social media has not only amplified that fact, but has made it easier to consider and explore the greater variety of potential partners–making it that much easier to draw a sample from the population distribution. That means one can keep looking at relatively low cost, which increases the probability of drawing someone from the “best” end of the distribution. That makes the option all the more valuable–and the opportunity cost of executing the option that much higher.

Marriage is a complex issue, as reflected in this Pew Research Center report. (Staying married is even more complex.) I’m not suggesting an option framework fully captures the motivation and explanation for the “millennial marriage drought”, but understanding that perspective sheds more light onto what otherwise might be oversimplified as a simple budget-constraint argument that fails to account for the value created in the uncertainty of the process.

Board Independence Gone Too Far?

The corporate governance literature has long argued that corporate boards should be comprised of a majority of independent directors. This is the result of a simple agency theory argument: Boards comprised of insiders (i.e., firm employees) will put their own interests ahead of the shareholders’. Moreover, any insider other than the CEO may have incentive to accommodate, rather than challenge, the CEO in the boardroom. Independent directors are assumed not to have such conflicts of interest and therefore to be better monitors of management on behalf of shareholders.

This argument, combined with corporate scandals in the early 2000s, has led to both regulatory requirements and shareholder activist pressure for increased board independence–to the point that many firms now have only one insider on the board, the CEO. That’s well beyond the theoretical justification for increased independence. But is it actually a good thing for the CEO to be “home alone” as the sole insider on the board? Has the push for board independence gone too far?

A forthcoming paper in the Strategic Management Journal by Michelle Zorn, Christine Shropshire, John Martin, James Combs and David Ketchen, titled “Home Alone: The Effect of Lone-Insider Boards on CEO Pay, Financial Misconduct, and Firm Performance,” suggests that such extreme independence is actually a bad thing. The abstract follows:

ABSTRACT

Research summary

Corporate scandals of the previous decade have heightened attention on board independence. Indeed, boards at many large firms are now so independent that the CEO is ‘home alone’ as the lone inside member. We build upon ‘pro-insider’ research within agency theory to explain how the growing trend toward lone-insider boards affects key outcomes and how external governance forces constrain their impact. We find evidence among S&P 1500 firms that having a lone-insider board is associated with (1) excess CEO pay and a larger CEO-top management team pay gap, (2) increased likelihood of financial misconduct, and (3) decreased firm performance, but that stock analysts and institutional investors reduce these negative effects. The findings raise important questions about the efficacy of leaving the CEO ‘home alone’.

Managerial summary

Following concerns that insider-dominated boards failed to protect shareholders, there has been a push for greater board independence. This push has been so successful that the CEO is now the only insider on the boards of more than half of S&P 1500 firms. We examine whether lone-insider boards do in fact offer strong governance or whether they enable CEOs to benefit personally. We find that lone-insider boards pay CEOs excessively, pay CEOs a disproportionately large amount relative to other top managers, have more instances of financial misconduct, and have lower performance than boards with more than one insider. Thus, it appears that lone-insider boards do not function as intended and firms should reconsider whether the push towards lone-insider boards is actually in shareholders’ best interests.

Franchising and Firm Performance

Much of the research on franchising as an organizational form relies on an agency theory explanation. In short, it assumes operators of local franchise establishments will have greater incentive to operate efficiently if they are owners of the establishment (i.e., franchisees) rather than managers employed by the franchisor-owner. However, there isn’t a lot of empirical research substantiating that assumption. A recent working paper by Matt Sveum and me finds that there does appear to be a franchise effect–but it depends on the nature of the business format. We use US Census data for essentially all limited- and full-service restaurants in the US and find franchising explains differences in establishment performance for full-service, but not for limited-service, restaurants. The abstract follows:

While there has been significant research on the reasons for franchising, little work has examined the effects of franchising on establishment performance. This paper attempts to fill that gap. We use restricted-access US Census Bureau microdata from the 2007 Census of Retail Trade to examine establishment-level productivity of franchisee- and franchisor-owned restaurants. We do this by employing a two-stage data envelopment analysis model where the first stage uses DEA to measure each establishment’s efficiency. The DEA efficiency score is then used as the second-stage dependent variable. The results show a strong and robust effect attributed to franchisee ownership for full service restaurants, but a smaller and insignificant difference for limited service restaurants. We believe the differences in task programmability between limited and full service restaurants results in a very different role for managers/franchisees and is the driving factor behind the different results.
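For intuition about the first stage, here is a stripped-down sketch with hypothetical numbers (my own illustration; the paper’s actual DEA model handles multiple inputs and outputs). In the degenerate one-input, one-output case under constant returns to scale, an establishment’s DEA efficiency score reduces to its output/input ratio relative to the best performer:

```python
def dea_crs_single(inputs, outputs):
    """DEA efficiency, one input and one output, constant returns to scale:
    each unit's output/input ratio relative to the frontier (best) ratio."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    frontier = max(ratios)
    return [r / frontier for r in ratios]

# Hypothetical restaurants: labor hours in, weekly sales out.
labor = [100, 120, 80]
sales = [500, 540, 480]
scores = dea_crs_single(labor, sales)
print([round(s, 3) for s in scores])   # [0.833, 0.75, 1.0]
```

In the paper’s second stage, scores like these become the dependent variable in a regression on a franchisee-ownership indicator (plus controls), which is what identifies the “franchise effect.”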

Innovation trends in agriculture and their implications for M & A analysis

This is a repost from the Mergers in Ag-Biotech blog symposium over at Truth on the Market. If you’re interested in more perspectives on the topic, I encourage you to read the other posts there.  If you’d like to comment, please do so on the TOTM version so it’s part of the general discussion.

The US agriculture sector has been experiencing consolidation at all levels for decades, even as the global ag economy has been growing and becoming more diverse. Much of this consolidation has been driven by technological changes that created economies of scale, both at the farm level and beyond.

Likewise, the role of technology has changed the face of agriculture, particularly in the past 20 years since the commercial introduction of the first genetically modified (GMO) crops. However, biotechnology itself comprises only a portion of the technology change. The development of global positioning systems (GPS) and GPS-enabled equipment has created new opportunities for precision agriculture, whether for the application of crop inputs, crop management, or yield monitoring. The development of unmanned and autonomous vehicles and remote sensing technologies, particularly unmanned aerial vehicles (i.e., UAVs, or “drones”), has created new opportunities for field scouting, crop monitoring, and real-time field management. And currently, the development of Big Data analytics promises to combine all of the different types of data associated with agricultural production in ways intended to improve the application of all the various technologies and to guide production decisions.

Now, with the pending mergers of several major agricultural input and life sciences companies, regulators are faced with a challenge: How to evaluate the competitive effects of such mergers in the face of such a complex and dynamic technology environment—particularly when these technologies are not independent of one another? What is the relevant market for considering competitive effects and what are the implications for technology development? And how does the nature of the technology itself implicate the economic efficiencies underlying these mergers?

Before going too far, it is important to note that while the three cases currently under review (i.e., ChemChina/Syngenta, Dow/DuPont, and Bayer/Monsanto) are frequently lumped together in discussions, the three present rather different competitive cases—particularly within the US. For instance, ChemChina’s acquisition of Syngenta will not, in itself, meaningfully change market concentration. However, financial backing from ChemChina may allow Syngenta to buy up the discards from other deals, such as the parts of DuPont that the EU Commission is requiring to be divested or the seed assets Bayer is reportedly looking to sell to preempt regulatory concerns, as well as other smaller competitors.

Dow-DuPont is perhaps the most head-to-head of the three mergers in terms of R&D and product lines. Both firms are in the top five in the US for pesticide manufacturing and for seeds. However, the Dow-DuPont merger is about much more than combining agricultural businesses. The Dow-DuPont deal specifically aims to create and spin-off three different companies specializing in agriculture, material science, and specialty products. Although agriculture may be the business line in which the companies most overlap, it represents just over 21% of the combined businesses’ annual revenues.

Bayer-Monsanto is yet a different sort of pairing. While both companies are among the top five in US pesticide manufacturing (with combined sales less than Syngenta and about equal to Dow without DuPont), Bayer is a relatively minor player in the seed industry. Likewise, Monsanto is focused almost exclusively on crop production and digital farming technologies, offering little overlap to Bayer’s human health or animal nutrition businesses.

Despite the differences in these deals, they tend to be lumped together and discussed almost exclusively in the context of pesticide manufacturing or crop protection more generally. In so doing, the discussion misses some important aspects of these deals that may mitigate traditional competitive concerns within the pesticide industry.

Mergers as the Key to Unlocking Innovation and Value

First, as the Dow-DuPont merger suggests, mergers may be the least-cost way of (re)organizing assets in ways that maximize value. This is especially true for R&D-intensive industries where intellectual property and innovation are at the core of competitive advantage. Absent the protection of common ownership, neither party would have an incentive to fully disclose the nature of its IP and innovation pipeline. In this case, merging interests increases the efficiency of information sharing so that managers can effectively evaluate and reorganize assets in ways that maximize innovation and return on investment.

Dow and DuPont each have a wide range of areas of application. Both groups of managers recognize that each of their business lines would be stronger as focused, independent entities; but also recognize that the individual elements of their portfolios would be stronger if combined with those of the other company. While the EU Commission argues that Dow-DuPont would reduce the incentive to innovate in the pesticide industry—a dubious claim in itself—the commission seems to ignore the potential increases in efficiency, innovation and ability to serve customer interests across all three of the proposed new businesses. At a minimum, gains in those industries should be weighed against any alleged losses in the agriculture industry.

This is not the first such agricultural and life sciences “reorganization through merger”. The current manifestation of Monsanto is the spin-off of a previous merger between Monsanto and Pharmacia & Upjohn in 2000 that created today’s Pharmacia. At the time of the Pharmacia transaction, Monsanto had portfolios in agricultural products, chemicals, and pharmaceuticals. After reorganizing assets within Pharmacia, three business lines were created: agricultural products (the current Monsanto), pharmaceuticals (now Pharmacia, a subsidiary of Pfizer), and chemicals (now Solutia, a subsidiary of Eastman Chemical Co.). Merging interests allowed Monsanto and Pharmacia & Upjohn to create more focused business lines that were better positioned to pursue innovations and serve customers in their respective industries.

In essence, Dow-DuPont is following the same playbook. Although such intentions have not been announced, Bayer’s broad product portfolio suggests a similar long-term play with Monsanto is likely.

Interconnected Technologies, Innovation, and the Margins of Competition

As noted above, regulatory scrutiny of these three mergers focuses on them in the context of pesticide or agricultural chemical manufacturing. However, innovation in the ag chemicals industry is intricately interwoven with developments in other areas of agricultural technology that have rather different competition and innovation dynamics. The current technological wave in agriculture involves the use of Big Data to create value using the myriad data now available through GPS-enabled precision farming equipment. Monsanto and DuPont, through its Pioneer subsidiary, are both players in this developing space, sometimes referred to as “digital farming”.

Digital farming services are intended to assist farmers’ production decision making and increase farm productivity. Using GPS-coded field maps that include assessments of soil conditions, combined with climate data for the particular field, farm input companies can recommend the types and rates of applications for soil conditioning pre-planting, seed types for planting, and crop protection products during the growing season. Yield monitors at harvest provide outcomes data for feedback to refine and improve the algorithms that are used in subsequent growing seasons.

The integration of digital farming services with seed and chemical manufacturing offers obvious economic benefits for farmers and competitive benefits for service providers. Input manufacturers have incentive to conduct data analytics that individual farmers do not. Farmers have limited analytic resources and relatively small returns to investing in such resources, while input manufacturers have broad market potential for their analytic services. Moreover, by combining data from a broad cross-section of farms, digital farming service companies have access to the data necessary to identify generalizable correlations between farm plot characteristics, input use, and yield rates.

But the value of the information developed through these analytics is not unidirectional in its application and value creation. While input manufacturers may be able to help improve farmers’ operations given the current stock of products, feedback about crop traits and performance also enhances R&D for new product development by identifying potential product attributes with greater market potential. By combining product portfolios, agricultural companies can not only increase the value of their data-driven services for farmers, but more efficiently target R&D resources to their highest potential use.

The synergy between input manufacturing and digital farming notwithstanding, seed and chemical input companies are not the only players in the digital farming space. Equipment manufacturer John Deere was an early entrant in exploiting the information value of data collected by sensors on its equipment. Other remote sensing technology companies have incentive to develop data analytic tools to create value for their data-generating products. Even downstream companies, like ADM, have expressed interest in investing in digital farming assets that might provide new revenue streams with their farmer-suppliers as well as facilitate more efficient specialty crop and identity-preserved commodity-based value chains.

The development of digital farming is still in its early stages and is far from a sure bet for any particular player. Even Monsanto has pulled back from its initial foray into prescriptive digital farming (called FieldScripts). These competitive forces will affect the dynamics of competition at all stages of farm production, including seed and chemicals. Failure to account for those dynamics, and the potential competitive benefits input manufacturers may provide, could lead regulators to overestimate any concerns of competitive harm from the proposed mergers.

Conclusion

Farmers are concerned about the effects of these big-name tie-ups–and they may be rightly concerned, but for the wrong reasons. Ultimately, the role of the farmer continues to be diminished in the agricultural value chain. As precision agriculture tools and Big Data analytics reduce the value of idiosyncratic or tacit knowledge at the farm level, the managerial human capital of farmers becomes relatively less important in terms of value-added. It would be unwise to confuse farmers’ concerns regarding the competitive effects of the kinds of mergers we’re seeing now with the actual drivers of change in the agricultural value chain.
