More than 20 countries have introduced taxation on e-cigarettes and other vapor products. In the United States, several states and local jurisdictions have enacted e-cigarette taxes.

The concept of tobacco harm reduction began in 1976 when Michael Russell, a psychiatrist and lecturer at the Addiction Research Unit of Maudsley Hospital in London, wrote: “People smoke for nicotine but they die from the tar.” Russell hypothesized that reducing the ratio of tar to nicotine could be the key to safer smoking.

Since then, it has been well established that much of the harm from smoking is caused almost exclusively by toxicants released through the combustion of tobacco. Public Health England and the American Cancer Society have concluded that non-combustible tobacco products, as well as pure nicotine products, are considerably less harmful than combustible products. Earlier this year, the American Cancer Society shifted its position on e-cigarettes, recommending that individuals who do not quit smoking “… should be encouraged to switch to the least harmful form of tobacco product possible; switching to the exclusive use of e-cigarettes is preferable to continuing to smoke combustible products.”

In contrast, some public health advocates urge a precautionary approach under which the introduction and sale of e-cigarettes would be limited or halted until the products are demonstrably safe.

Policymakers face a wide range of strategies regarding the taxation of vapor products. On the one hand, principles of harm reduction suggest vapor products should face no taxes, or low taxes relative to conventional cigarettes, to guide consumers toward a safer alternative to smoking. The U.K. House of Commons Science and Technology Committee concludes:

The level of taxation on smoking-related products should directly correspond to the health risks that they present, to encourage less harmful consumption. Applying that logic, e-cigarettes should remain the least-taxed and conventional cigarettes the most, with heat-not-burn products falling between the two.

In contrast, the precautionary principle and principles of tax equity point toward taxing vapor products at rates similar to those on conventional cigarettes.

Analysis of tax policy issues is complicated by the divergent—and sometimes obscured—intentions of such policies. Some policymakers claim that the objective of taxing nicotine products is to reduce nicotine consumption. Others indicate the objective is to raise revenues to support government spending. Often missed in the policy discussion is the effect of fiscal policies on innovation and on the development and commercialization of harm-reducing products. Also often missed are the consequences for current consumers of nicotine products, including smokers seeking to quit harmful conventional cigarettes.

Policy decisions regarding taxation of vapor products should take into account both long-term fiscal effects and broader economic and welfare effects. These effects might (or might not) suggest very different tax policies from those that have been enacted or are under consideration.

Apart from being a significant source of revenue, cigarette taxes have been promoted as “sin” taxes to discourage consumption, whether because of externalities caused by smoking (increased costs for third-party health payers and health consequences) or out of paternalism. According to the U.S. Centers for Disease Control and Prevention (CDC), smoking-related illness costs the U.S. more than $300 billion each year, including (1) nearly $170 billion for direct medical care for adults and (2) more than $156 billion in lost productivity, of which $5.6 billion is due to secondhand smoke exposure.

The CDC’s cost estimates raise important questions regarding who bears the burden of smoking-related illness. Much of the cost is borne by private insurance, which charges steeper premiums for customers who smoke. In addition, the CDC estimates reflect costs imposed by people who have smoked for decades—many of whom have now quit. A proper accounting of the costs vis-à-vis tax policy would measure the incremental discounted costs imposed by today’s smokers.
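To make that last sentence concrete, the sketch below works through a discounted incremental-cost calculation. It is a minimal illustration with placeholder assumptions, not an estimate derived from the CDC figures above.

```python
# Minimal sketch: present value of the incremental health costs imposed
# by one of today's smokers. All inputs are illustrative assumptions.

def discounted_incremental_cost(annual_excess_cost, years, discount_rate):
    """Present value of a constant annual excess cost over a horizon."""
    return sum(annual_excess_cost / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Assume $2,000/year of excess medical cost over 30 years, discounted at 3%.
pv = discounted_incremental_cost(2_000, 30, 0.03)
print(f"Discounted incremental cost: ${pv:,.0f}")  # ~ $39,201
```

The point of discounting is simply that a dollar of medical cost incurred decades from now weighs less, for tax-setting purposes, than a dollar incurred today.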

According to Levy et al. (2017), a strategy of replacing cigarette smoking with e-cigarettes would yield substantial life-year gains, even under pessimistic assumptions regarding cessation, initiation, and relative harm. Increased longevity does not simply extend the individual’s years of retirement and reliance on government transfers; it also supports greater work effort and productivity, together with higher tax payments on consumption.

Vapor products that cause less direct harm or have lower externalities (e.g., the absence of “secondhand smoke”) should be subject to a lower “sin” tax. A cost-benefit analysis of the desired excise tax rate on vapor products would count reduced health spending as an offset against the excise tax revenue forgone by applying a lower rate to those products.

State and local governments in the U.S. collect more than $18 billion a year in tobacco taxes. While some jurisdictions earmark a portion of tobacco taxes for prevention and cessation efforts, in practice most tobacco taxes are treated by policymakers as general revenues to be spent in whatever way the legislative body determines.

In the long run, the goals of reducing or eliminating consumption of the taxed good and generating revenues are in conflict. If the tax is successful in reducing consumption, it falls short in generating revenue. Conversely, if the tax succeeds in generating revenues, it falls short in reducing or eliminating consumption.
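The trade-off can be made concrete with a toy constant-elasticity demand model. Every parameter below is a hypothetical assumption chosen for illustration; the point is only that the same tax cannot both collapse consumption and sustain revenue.

```python
# Toy model of the revenue-vs-consumption conflict. Demand has constant
# own-price elasticity; the tax is a per-unit excise added to a fixed
# pre-tax price. All parameters are hypothetical.

PRE_TAX_PRICE = 5.00    # $/unit, assumed
BASE_QUANTITY = 1_000   # units sold with no tax, assumed
TAX = 4.00              # per-unit excise, assumed

for label, elasticity in (("inelastic demand", -0.4),
                          ("elastic demand", -1.6)):
    q = BASE_QUANTITY * ((PRE_TAX_PRICE + TAX) / PRE_TAX_PRICE) ** elasticity
    drop = 100 * (1 - q / BASE_QUANTITY)
    print(f"{label}: consumption falls {drop:.0f}%, revenue = ${TAX * q:,.0f}")

# inelastic demand: consumption falls 21%, revenue = $3,161
# elastic demand: consumption falls 61%, revenue = $1,562
```

When demand is inelastic, the tax raises roughly twice the revenue but barely dents consumption; when demand is elastic, consumption falls sharply but the revenue goal suffers.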

Substitutability is another consideration. An increase in the tax on spirits will result in an increase in beer and wine purchases. A high toll on a road will divert traffic to untolled streets that may not be designed for increased traffic volumes. Evidence from the U.S. and Europe indicates that high or rising tobacco taxes in one jurisdiction result in increased sales in bordering jurisdictions, as well as increased illegal cross-jurisdiction sales or smuggling.

As of March 2018, eight U.S. states and the District of Columbia have enacted taxes on e-cigarettes:

California: 65.08% of wholesale price
Delaware: $0.05 per milliliter
DC: 70% of wholesale price
Kansas: $0.05 per milliliter
Louisiana: $0.05 per milliliter
Minnesota: 95% of wholesale price
North Carolina: $0.05 per milliliter
Pennsylvania: 40% of wholesale price
West Virginia: $0.075 per milliliter

In addition, 22 countries outside of the U.S. have introduced taxation on e-cigarettes.

The effects of different types of taxation on usage, and thus on economic outcomes, vary. Research to date finds a wide range of own-price and cross-price elasticities for e-cigarettes. While most researchers conclude that the demand for e-cigarettes is more elastic than the demand for combustible cigarettes, some studies find inelastic demand and others find highly elastic demand. Economic theory would point to e-cigarettes as a substitute for combustible cigarettes. Some empirical research supports this hypothesis, while other studies conclude the two products are complements.
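For readers new to the terminology, the sketch below shows how such elasticities are typically estimated: regress log quantity on log prices, so the coefficients are the own-price and cross-price elasticities (a positive cross-price coefficient indicates substitutes; a negative one, complements). The data here are simulated, not drawn from any of the studies discussed.

```python
# Sketch of own- and cross-price elasticity estimation via a log-log
# regression on simulated data. Real studies use large panels with many
# controls; this only illustrates what the elasticity coefficients mean.
import numpy as np

rng = np.random.default_rng(0)
n = 500
log_p_ecig = rng.normal(1.0, 0.2, n)  # log e-cigarette prices
log_p_cig = rng.normal(1.5, 0.2, n)   # log combustible-cigarette prices

# Simulated demand: own-price elasticity -1.5, cross-price +0.6
# (positive cross-price elasticity means the goods are substitutes).
log_q = 3.0 - 1.5 * log_p_ecig + 0.6 * log_p_cig + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), log_p_ecig, log_p_cig])
beta, *_ = np.linalg.lstsq(X, log_q, rcond=None)
print(f"own-price elasticity:   {beta[1]:+.2f}")  # ~ -1.5
print(f"cross-price elasticity: {beta[2]:+.2f}")  # ~ +0.6
```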

In addition to e-cigarettes, little cigars and smokeless tobacco are also potential substitutes for cigarettes. The results from Zheng et al. (2016) suggest that increases in sales of little cigars and smokeless tobacco products would account for about 14 percent of the decline in cigarette sales associated with a hypothetical 10 percent increase in the price of cigarettes. On the other hand, another study using a seemingly identical data set (Zheng et al., 2017) suggests that sales of little cigars and smokeless tobacco would decrease in the face of an increase in cigarette prices.

The wide range of estimated elasticities calls into question the reliability of published estimates. Because this is a nascent area of research, the policy debate would benefit from additional studies that involve larger samples with better statistical power, reflect the dynamic nature of this relatively new product category, and account for the wide variety of vapor products.

More importantly, demand and supply conditions for e-cigarettes, heated tobacco products, and other electronic nicotine delivery products have been changing rapidly over the past few years—and are expected to change rapidly for the foreseeable future. Thus, estimates of demand parameters, such as own-price and cross-price elasticities, are almost certain to vary over time as users gain knowledge and experience and as new products and suppliers enter the market.

Because the market for e-cigarettes and other vapor products is small and developing, the tax-bearing capacity of these new product segments is untested and unknown. Moreover, current tax levels and prices could also be misleading given the relatively sparse empirical data, in which case more data points and further evaluation are needed. Given the slow growth rates of these segments in many markets, one can argue that current prices of e-cigarettes and heat-not-burn products are relatively high compared to cigarettes, and that a new tax, or an increase in an existing tax, would slow the segments’ growth or even lead to a decline.

Separately, the challenges in assessing a tax on electronic nicotine delivery products indicate that the costs of collecting the tax, especially an excise tax, may be much higher than for similar taxes levied on combustible cigarettes. In addition, as discussed above, heavy taxation of this relatively new industry would likely stifle innovation in a way that is contrary to the goal of harm reduction.

Principles of harm reduction recognize that every proposal has uncertain outcomes as well as potential spillovers and unforeseen consequences. Nevertheless, the basic principle of harm reduction is a focus on safer rather than safe. Policymakers must make their decisions by weighing the expected benefits against the expected costs. With such high risks and costs associated with cigarettes and other combustible tobacco use, taxes and regulations must be developed in an environment of uncertainty and with an eye toward a net reduction in harm, rather than an unattainable goal of zero harm.

Ours is not an age of nuance. It’s an age of tribalism, of teams—“Yer either fer us or agin’ us!” Perhaps I should have been less surprised, then, when I read the unfavorable review of my book How to Regulate in, of all places, the Federalist Society Review.

I had expected some positive feedback from reviewer J. Kennerly Davis, a contributor to the Federalist Society’s Regulatory Transparency Project. The “About” section of the Project’s website states:

In the ultra-complex and interconnected digital age in which we live, government must issue and enforce regulations to protect public health and safety. However, despite the best of intentions, government regulation can fail, stifle innovation, foreclose opportunity, and harm the most vulnerable among us. It is for precisely these reasons that we must be diligent in reviewing how our policies either succeed or fail us, and think about how we might improve them.

I might not have expressed these sentiments in such pro-regulation terms. For example, I don’t think government should regulate, even “to protect public health and safety,” absent (1) a market failure and (2) confidence that systematic governmental failures won’t cause the cure to be worse than the disease. I agree, though, that regulation is sometimes appropriate, that government interventions often fail (in systematic ways), and that regulatory policies should regularly be reviewed with an eye toward reducing the combined costs of market and government failures.

Those are, in fact, the central themes of How to Regulate. The book sets forth an overarching goal for regulation (minimize the sum of error and decision costs) and then catalogues, for six oft-cited bases for regulating, what regulatory tools are available to policymakers and how each may misfire. For every possible intervention, the book considers the potential for failure from two sources—the knowledge problem identified by F.A. Hayek and public choice concerns (rent-seeking, regulatory capture, etc.). It ends up arguing:

that recognizing property rights, rather than allocating usage, is the best way to address the tragedy of the commons;

that market-based mechanisms, not shareholder suits and mandatory structural rules like those imposed by Sarbanes-Oxley and Dodd-Frank, are the best way to constrain agency costs in the corporate context;

that insider trading restrictions should be left to corporations themselves;

that antitrust law should continue to evolve in the consumer welfare-focused direction Robert Bork recommended;

against the FCC’s recently abrogated net neutrality rules;

that occupational licensure is primarily about rent-seeking and should be avoided;

that incentives for voluntary disclosure will usually obviate the need for mandatory disclosure to correct information asymmetry;

that the claims of behavioral economics do not justify paternalistic policies to protect people from themselves; and

that “libertarian-paternalism” is largely a ruse that tends to morph into hard paternalism.

Given the congruence of my book’s prescriptions with the purported aims of the Regulatory Transparency Project—not to mention the laundry list of specific market-oriented policies the book advocates—I had expected a generally positive review from Mr. Davis (whom I sincerely thank for reading and reviewing the book; book reviews are a ton of work).

I didn’t get what I’d expected. Instead, Mr. Davis denounced my book for perpetuating “progressive assumptions about state and society” (“wrongheaded” assumptions, the editor’s introduction notes). He responded to my proposed methodology with a “meh,” noting that it “is not clearly better than the status quo.” His one compliment, which I’ll gladly accept, was that my discussion of economic theory was “generally accessible.”

Following are a few thoughts on Mr. Davis’s critiques.

Are My Assumptions Progressive?

According to Mr. Davis, my book endorses three progressive concepts:

(i) the idea that market based arrangements among private parties routinely misallocate resources, (ii) the idea that government policymakers are capable of formulating executive directives that can correct private ordering market failures and optimize the allocation of resources, and (iii) the idea that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.

I agree with Mr. Davis that these are progressive ideas. If my book embraced them, it might be fair to label it “progressive.” But it doesn’t. Not one of them.

Market Failure

Nothing in my book suggests that “market based arrangements among private parties routinely misallocate resources.” I do say that “markets sometimes fail to work well,” and I explain how, in narrow sets of circumstances, market failures may emerge. Understanding exactly what may happen in those narrow sets of circumstances helps to identify the least restrictive option for addressing problems and would thus seem a prerequisite to effective policymaking for a conservative or libertarian. My mere invocation of the term “market failure,” however, was enough for Mr. Davis to kick me off the team.

Mr. Davis ignored altogether the many points where I explain how private ordering fixes situations that could lead to poor market performance. At the end of the information asymmetry chapter, for example, I write,

This chapter has described information asymmetry as a problem, and indeed it is one. But it can also present an opportunity for profit. Entrepreneurs have long sought to make money—and create social value—by developing ways to correct informational imbalances and thereby facilitate transactions that wouldn’t otherwise occur.

I then describe the advent of companies like Carfax, AirBnb, and Uber, all of which offer privately ordered solutions to instances of information asymmetry that might otherwise create lemons problems. I conclude:

These businesses thrive precisely because of information asymmetry. By offering privately ordered solutions to the problem, they allow previously under-utilized assets to generate heretofore unrealized value. And they enrich the people who created and financed them. It’s a marvelous thing.

That theme—that potential market failures invite privately ordered solutions that often obviate the need for any governmental fix—permeates the book. In the public goods chapter, I spend a great deal of time explaining how privately ordered devices like assurance contracts facilitate the production of amenities that are non-rivalrous and non-excludable. In discussing the tragedy of the commons, I highlight Elinor Ostrom’s work showing how “groups of individuals have displayed a remarkable ability to manage commons goods effectively without either privatizing them or relying on government intervention.” In the chapter on externalities, I spend a full seven pages explaining why Coasean bargains are more likely than most people think to prevent inefficiencies from negative externalities. In the chapter on agency costs, I explain why privately ordered solutions like the market for corporate control would, if not precluded by some ill-conceived regulations, constrain agency costs better than structural rules from the government.

Disregarding all this, Mr. Davis chides me for assuming that “markets routinely fail.” And, for good measure, he explains that government interventions are often a bigger source of failure, a point I repeatedly acknowledge, as it is a—perhaps the—central theme of the book.

Trust in Experts

In what may be the strangest (and certainly the most misleading) part of his review, Mr. Davis criticizes me for placing too much confidence in experts by giving short shrift to the Hayekian knowledge problem and the insights of public choice.

a. The Knowledge Problem

According to Mr. Davis, the approach I advocate “is centered around fully functioning experts.” He continues:

This progressive trust in experts is misplaced. It is simply false to suppose that government policymakers are capable of formulating executive directives that effectively improve upon private arrangements and optimize the allocation of resources. Friedrich Hayek and other classical liberals have persuasively argued, and everyday experience has repeatedly confirmed, that the information needed to allocate resources efficiently is voluminous and complex and widely dispersed. So much so that government experts acting through top down directives can never hope to match the efficiency of resource allocation made through countless voluntary market transactions among private parties who actually possess the information needed to allocate the resources most efficiently.

Amen and hallelujah! I couldn’t agree more! Indeed, I said something similar when I came to the first regulatory tool my book examines (and criticizes), command-and-control pollution rules. I wrote:

The difficulty here is an instance of a problem that afflicts regulation generally. At the end of the day, regulating involves centralized economic planning: A regulating “planner” mandates that productive resources be allocated away from some uses and toward others. That requires the planner to know the relative value of different resource uses. But such information, in the words of Nobel laureate F.A. Hayek, “is not given to anyone in its totality.” The personal preferences of thousands or millions of individuals—preferences only they know—determine whether there should be more widgets and fewer gidgets, or vice-versa. As Hayek observed, voluntary trading among resource owners in a free market generates prices that signal how resources should be allocated (i.e., toward the uses for which resource owners may command the highest prices). But centralized economic planners—including regulators—don’t allocate resources on the basis of relative prices. Regulators, in fact, generally assume that prices are wrong due to the market failure the regulators are seeking to address. Thus, the so-called knowledge problem that afflicts regulation generally is particularly acute for command-and-control approaches that require regulators to make refined judgments on the basis of information about relative costs and benefits.

That was just the first of many times I invoked the knowledge problem to argue against top-down directives and in favor of market-oriented policies that would enable individuals to harness local knowledge to which regulators would not be privy. The index to the book includes a “knowledge problem” entry with no fewer than nine sub-entries (e.g., “with licensure regimes,” “with Pigouvian taxes,” “with mandatory disclosure regimes”). There are undoubtedly more mentions of the knowledge problem than those listed in the index, for the book assesses the degree to which the knowledge problem creates difficulties for every regulatory approach it considers.

Mr. Davis does mention one time where I “acknowledge[] the work of Hayek” and “recognize[] that context specific information is vitally important,” but he says I miss the point:

Having conceded these critical points [about the importance of context-specific information], Professor Lambert fails to follow them to the logical conclusion that private ordering arrangements are best for regulating resources efficiently. Instead, he stops one step short, suggesting that policymakers defer to the regulator most familiar with the regulated party when they need context-specific information for their analysis. Professor Lambert is mistaken. The best information for resource allocation is not to be found in the regional office of the regulator. It resides with the persons who have long been controlled and directed by the progressive regulatory system. These are the ones to whom policymakers should defer.

I was initially puzzled by Mr. Davis’s description of how my approach would address the knowledge problem. It’s inconsistent with the way I described the problem (the “regional office of the regulator” wouldn’t know people’s personal preferences, etc.), and I couldn’t remember ever suggesting that regulatory devolution—delegating decisions down toward local regulators—was the solution to the knowledge problem.

When I checked the citation in the sentences just quoted, I realized that Mr. Davis had misunderstood the point I was making in the passage he cited (my own fault, no doubt, not his). The cited passage was at the very end of the book, where I was summarizing the book’s contributions. I claimed to have set forth a plan for selecting regulatory approaches that would minimize the sum of error and decision costs. I wanted to acknowledge, though, the irony of promulgating a generally applicable plan for regulating in a book that, time and again, decries top-down imposition of one-size-fits-all rules. Thus, I wrote:

A central theme of this book is that Hayek’s knowledge problem—the fact that no central planner can possess and process all the information needed to allocate resources so as to unlock their greatest possible value—applies to regulation, which is ultimately a set of centralized decisions about resource allocation. The very knowledge problem besetting regulators’ decisions about what others should do similarly afflicts pointy-headed academics’ efforts to set forth ex ante rules about what regulators should do. Context-specific information to which only the “regulator on the spot” is privy may call for occasional departures from the regulatory plan proposed here.

As should be obvious, my point was not that the knowledge problem can generally be fixed by regulatory devolution. Rather, I was acknowledging that the general regulatory approach I had set forth—i.e., the rules policymakers should follow in selecting among regulatory approaches—may occasionally misfire and should thus be implemented flexibly.

b. Public Choice Concerns

A second problem with my purported trust in experts, Mr. Davis explains, stems from the insights of public choice:

Actual policymakers simply don’t live up to [Woodrow] Wilson’s ideal of the disinterested, objective, apolitical, expert technocrat. To the contrary, a vast amount of research related to public choice theory has convincingly demonstrated that decisions of regulatory agencies are frequently shaped by politics, institutional self-interest and the influence of the entities the agencies regulate.

Again, huzzah! Those words could have been lifted straight out of the three full pages of discussion I devoted to public choice concerns with the very first regulatory intervention the book considered. A snippet from that discussion:

While one might initially expect regulators pursuing the public interest to resist efforts to manipulate regulation for private gain, that assumes that government officials are not themselves rational, self-interest maximizers. As scholars associated with the “public choice” economic tradition have demonstrated, government officials do not shed their self-interested nature when they step into the public square. They are often receptive to lobbying in favor of questionable rules, especially since they benefit from regulatory expansions, which tend to enhance their job status and often their incomes. They also tend to become “captured” by powerful regulatees who may shower them with personal benefits and potentially employ them after their stints in government have ended.

That’s just a slice. Elsewhere in those three pages, I explain (1) how the dynamic of concentrated benefits and diffuse costs allows inefficient protectionist policies to persist, (2) how firms that benefit from protectionist regulation are often assisted by “pro-social” groups that will make a public interest case for the rules (Bruce Yandle’s Bootleggers and Baptists syndrome), and (3) the “[t]wo types of losses [that] result from the sort of interest-group manipulation public choice predicts.” And that’s just the book’s initial foray into public choice. The entry for “public choice concerns” in the book’s index includes eight sub-entries. As with the knowledge problem, I addressed the public choice issues that could arise from every major regulatory approach the book considered.

For Mr. Davis, though, that was not enough to keep me out of the camp of Wilsonian progressives. He explains:

Professor Lambert devotes a good deal of attention to the problem of “agency capture” by regulated entities. However, he fails to acknowledge that a symbiotic relationship between regulators and regulated is not a bug in the regulatory system, but an inherent feature of a system defined by extensive and continuing government involvement in the allocation of resources.

To be honest, I’m not sure what that last sentence means. Apparently, I didn’t recite some talismanic incantation that would indicate that I really do believe public choice concerns are a big problem for regulation. I did say this in one of the book’s many discussions of public choice:

A regulator that has both regular contact with its regulatees and significant discretionary authority over them is particularly susceptible to capture. The regulator’s discretionary authority provides regulatees with a strong motive to win over the regulator, which has the power to hobble the regulatee’s potential rivals and protect its revenue stream. The regular contact between the regulator and the regulatee provides the regulatee with better access to those in power than that available to parties with opposing interests. Moreover, the regulatee’s preferred course of action is likely (1) to create concentrated benefits (to the regulatee) and diffuse costs (to consumers generally), and (2) to involve an expansion of the regulator’s authority. The upshot is that those who bear the cost of the preferred policy are less likely to organize against it, and regulators, who benefit from turf expansion, are more likely to prefer it. Rate-of-return regulation thus involves the precise combination that leads to regulatory expansion at consumer expense: broad and discretionary government power, close contact between regulators and regulatees, decisions that generally involve concentrated benefits and diffuse costs, and regular opportunities to expand regulators’ power and prestige.

In light of this combination of features, it should come as no surprise that the history of rate-of-return regulation is littered with instances of agency capture and regulatory expansion.

Even that was not enough to convince Mr. Davis that I reject the Wilsonian assumption of “disinterested, objective, apolitical, expert technocrat[s].” I don’t know what more I could have said.

Social Welfare

Mr. Davis is right when he says, “Professor Lambert’s ultimate goal for his book is to provide policymakers with a resource that will enable them to make regulatory decisions that produce greater social welfare.” But nowhere in my book do I suggest, as he says I do, “that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.” What I mean by “social welfare” is the aggregate welfare of all the individuals in a society. And I’m careful to point out that only they know what makes them better off. (At one point, for example, I write that “[g]overnment planners have no way of knowing how much pleasure regulatees derive from banned activities…or how much displeasure they experience when they must comply with an affirmative command…. [W]ith many paternalistic policies and proposals…government planners are really just guessing about welfare effects.”)

I agree with Mr. Davis that “[t]here is no single generally accepted methodology that anyone can use to determine objectively how and to what extent the welfare of society will be affected by a particular regulatory directive.” For that reason, nowhere in the book do I suggest any sort of “metes and bounds” measurement of social welfare. (I certainly do not endorse the use of GDP, which Mr. Davis rightly criticizes; that term appears nowhere in the book.)

Rather than prescribing any sort of precise measurement of social welfare, my book operates at the level of general principles: We have reasons to believe that inefficiencies may arise when conditions are thus; there is a range of potential government responses to this situation—from doing nothing, to facilitating a privately ordered solution, to mandating various actions; based on our experience with these different interventions, the likely downsides of each (stemming from, for example, the knowledge problem and public choice concerns) are so-and-so; all things considered, the aggregate welfare of the individuals within this group will probably be greatest with policy x.

It is true that the thrust of the book is consequentialist, not deontological. But it’s a book about policy, not ethics. And its version of consequentialism is rule, not act, utilitarianism. Is a consequentialist approach to policymaking enough to render one a progressive? Should we excise John Stuart Mill’s On Liberty from the classical liberal canon? I surely hope not.

Is My Proposed Approach an Improvement?

Mr. Davis’s second major criticism of my book—that what it proposes is “just the status quo”—has more bite. By that, I mean two things. First, it’s a more painful criticism to receive. It’s easier for an author to hear “you’re saying something wrong” than “you’re not saying anything new.”

Second, there may be more merit to this criticism. As Mr. Davis observes, I noted in the book’s introduction that “[a]t times during the drafting, I … wondered whether th[e] book was ‘original’ enough.” I ultimately concluded that it was because it “br[ought] together insights of legal theorists and economists of various stripes…and systematize[d] their ideas into a unified, practical approach to regulating.” Mr. Davis thinks I’ve overstated the book’s value, and he may be right.

The current regulatory landscape would suggest, though, that my book’s approach to selecting among potential regulatory policies isn’t “just the status quo.” The approach I recommend would generate the specific policies catalogued at the outset of this response (in the bullet points). The fact that those policies haven’t been implemented under the existing regulatory approach suggests that what I’m recommending must be something different than the status quo.

Mr. Davis observes—and I acknowledge—that my recommended approach resembles the review required of major executive agency regulations under Executive Order 12866, President Clinton’s revised version of President Reagan’s Executive Order 12291. But that order is quite limited in its scope. It doesn’t cover “minor” executive agency rules (those with expected costs of less than $100 million) or rules from independent agencies or from Congress or from courts or at the state or local level. Moreover, I understand from talking to a former administrator of the Office of Information and Regulatory Affairs, which is charged with implementing the order, that it has actually generated little serious consideration of less restrictive alternatives, something my approach emphasizes.

What my book proposes is not some sort of governmental procedure; indeed, I emphasize in the conclusion that the book “has not addressed … how existing regulatory institutions should be reformed to encourage the sort of analysis th[e] book recommends.” Instead, I propose a way to think through specific areas of regulation, one that is informed by a great deal of learning about both market and government failures. The best audience for the book is probably law students who will someday find themselves influencing public policy as lawyers, legislators, regulators, or judges. I am thus heartened that the book is being used as a text at several law schools. My guess is that few law students receive significant exposure to Hayek, public choice, etc.

So, who knows? Perhaps the book will make a difference at the margin. Or perhaps it will amount to sound and fury, signifying nothing. But I don’t think a classical liberal could fairly say that the analysis it counsels “is not clearly better than the status quo.”

A Truly Better Approach to Regulating

Mr. Davis ends his review with a stirring call to revamp the administrative state to bring it “in complete and consistent compliance with the fundamental law of our republic embodied in the Constitution, with its provisions interpreted to faithfully conform to their original public meaning.” Among other things, he calls for restoring the separation of powers, which has been erased in agencies that combine legislative, executive, and judicial functions, and for eliminating unchecked government power, which results when the legislature delegates broad rulemaking and adjudicatory authority to politically unaccountable bureaucrats.

Once again, I concur. There are major problems—constitutional and otherwise—with the current state of administrative law and procedure. I’d be happy to tear down the existing administrative state and begin again on a constitutionally constrained tabula rasa.

But that’s not what my book was about. I deliberately set out to write a book about the substance of regulation, not the process by which rules should be imposed. I took that tack for two reasons. First, there are numerous articles and books, by scholars far more expert than I, on the structure of the administrative state. I could add little value on administrative process.

Second, the less-addressed substantive question—what, as a substantive matter, should a policy addressing x do?—would exist even if Mr. Davis’s constitutionally constrained regulatory process were implemented. Suppose that we got rid of independent agencies, curtailed delegations of rulemaking authority to the executive branch, and returned to a system in which Congress wrote all rules, the executive branch enforced them, and the courts resolved any disputes. Someone would still have to write the rule, and that someone (or group of people) should have some sense of the pros and cons of one approach over another. That is what my book seeks to provide.

A hard core Hayekian—one who had immersed himself in Law, Legislation, and Liberty—might respond that no one should design regulation (purposive rules that Hayek would call thesis) and that efficient, “purpose-independent” laws (what Hayek called nomos) will just emerge as disputes arise. But that is not Mr. Davis’s view. He writes:

A system of governance or regulation based on the rule of law attains its policy objectives by proscribing actions that are inconsistent with those objectives. For example, this type of regulation would prohibit a regulated party from discharging a pollutant in any amount greater than the limiting amount specified in the regulation. Under this proscriptive approach to regulation, any and all actions not specifically prohibited are permitted.

Mr. Davis has thus contemplated a purposive rule, crafted by someone. That someone should know the various policy options and the upsides and downsides of each. How to Regulate could help.

Conclusion

I’m not sure why Mr. Davis viewed my book as no more than dressed-up progressivism. Maybe he was triggered by the book’s cover art, which he says “is faithful to the progressive tradition,” resembling “the walls of public buildings from San Francisco to Stalingrad.” Maybe it was a case of Sunstein Derangement Syndrome. (Progressive legal scholar Cass Sunstein had nice things to say about the book, despite its criticisms of a number of his ideas.) Or perhaps it was that I used the term “market failure.” Many conservatives and libertarians fear, with good reason, that conceding the existence of market failures invites all sorts of government meddling.

At the end of the day, though, I believe we classical liberals should stop pretending that market outcomes are always perfect, that pure private ordering is always and everywhere the best policy. We should certainly sing markets’ praises; they usually work so well that people don’t even notice them, and we should point that out. We should continually remind people that government interventions also fail—and in systematic ways (e.g., the knowledge problem and public choice concerns). We should insist that a market failure is never a sufficient condition for a governmental fix; one must always consider whether the cure will be worse than the disease. In short, we should take and promote the view that government should operate “under a presumption of error.”

That view, economist Aaron Director famously observed, is the essence of laissez faire. It’s implicit in the purpose statement of the Federalist Society’s Regulatory Transparency Project. And it’s the central point of How to Regulate.

A recent exchange between Chris Walker and Philip Hamburger about Walker’s ongoing empirical work on the Chevron doctrine (the idea that judges must defer to reasonable agency interpretations of ambiguous statutes) gives me a long-sought opportunity to discuss what I view as the greatest practical problem with the Chevron doctrine: it increases both politicization and polarization of law and policy. In the interest of being provocative, I will frame the discussion below by saying that both Walker & Hamburger are wrong (though actually I believe both are quite correct in their respective critiques). In particular, I argue that Walker is wrong that Chevron decreases politicization (it actually increases it, vice his empirics); and I argue Hamburger is wrong that judicial independence is, on its own, a virtue that demands preservation. Rather, I argue, Chevron increases overall politicization across the government; and judicial independence can and should play an important role in checking legislative abdication of its role as a politically-accountable legislature in a way that would moderate that overall politicization.

Walker, along with co-authors Kent Barnett and Christina Boyd, has done some of the most important and interesting work on Chevron in recent years, empirically studying how the Chevron doctrine has affected judicial behavior (see here and here) as well as that of agencies (and, I would argue, through them the Executive) (see here). But the more important question, in my mind, is how it affects the behavior of Congress. (Walker has explored this somewhat in his own work, albeit focusing less on Chevron than on how the role agencies play in the legislative process implicitly transfers Congress’s legislative functions to the Executive).

My intuition is that Chevron dramatically exacerbates Congress’s worst tendencies, encouraging Congress to push its legislative functions to the executive and to do so in a way that increases the politicization and polarization of American law and policy. I fear that Chevron effectively allows, and indeed encourages, Congress to abdicate its role as the most politically-accountable branch by deferring politically difficult questions to agencies in ambiguous terms.

One of, and possibly the, best ways to remedy this situation is to reestablish the role of judge as independent decisionmaker, as Hamburger argues. But the virtue of judicial independence is not endogenous to the judiciary. Rather, judicial independence has an instrumental virtue, at least in the context of Chevron. Where Congress has problematically abdicated its role as a politically-accountable decisionmaker by deferring important political decisions to the executive, judicial refusal to defer to executive and agency interpretations of ambiguous statutes can force Congress to remedy problematic ambiguities. This, in turn, can return the responsibility for making politically-important decisions to the most politically-accountable branch, as envisioned by the Constitution’s framers.

A refresher on the Chevron debate

Chevron is one of the defining doctrines of administrative law, both as a central concept and focal debate. It stands generally for the proposition that when Congress gives agencies ambiguous statutory instructions, it falls to the agencies, not the courts, to resolve those ambiguities. Thus, if a statute is ambiguous (the question at “step one” of the standard Chevron analysis) and the agency offers a reasonable interpretation of that ambiguity (“step two”), courts are to defer to the agency’s interpretation of the statute instead of supplying their own.

This judicially-crafted doctrine of deference is typically justified on several grounds. For instance, agencies generally have greater subject-matter expertise than courts so are more likely to offer substantively better constructions of ambiguous statutes. They have more resources that they can dedicate to evaluating alternative constructions. They generally have a longer history of implementing relevant Congressional instructions so are more likely attuned to Congressional intent – both of the statute’s enacting and present Congresses. And they are subject to more direct Congressional oversight in their day-to-day operations and exercise of statutory authority than the courts so are more likely concerned with and responsive to Congressional direction.

Chief among the justifications for Chevron deference is, as Walker says, “the need to reserve political (or policy) judgments for the more politically accountable agencies.” This is at core a separation-of-powers justification: the legislative process is fundamentally a political process, so the Constitution assigns responsibility for it to the most politically-accountable branch (the legislature) instead of the least politically-accountable branch (the judiciary). In turn, the act of interpreting statutory ambiguity is an inherently legislative process – the underlying theory being that Congress intended to leave such ambiguity in the statute in order to empower the agency to interpret it in a quasi-legislative manner. Thus, under this view, courts should defer to the Congressional intent that the agency be empowered to interpret its statute (and, should this prove problematic, it is up to Congress to change the statute or face the political ramifications), and they should defer to the agency’s interpretation of that statute because agencies, like Congress, are more politically accountable than the courts.

Chevron has always been an intensively studied and debated doctrine. This debate has grown more heated in recent years, to the point that there is regularly scholarly discussion about whether Chevron should be repealed or narrowed and what would replace it if it were somehow curtailed – and discussion of the ongoing vitality of Chevron has entered into Supreme Court opinions and the appointments process with increasing frequency. These debates generally focus on a few issues. A first issue is that Chevron amounts to a transfer of the legislature’s Constitutional powers and responsibilities over creating the law to the executive, where the law ordinarily is only meant to be carried out; the underlying concern is that this has contributed to the increase in the power of the executive relative to the legislature. A second, related, issue is that Chevron contributes to the (over)empowerment of independent agencies – agencies that are already out of favor with many of Chevron’s critics as Constitutionally-infirm entities whose already-specious power is dramatically increased when Chevron limits the judiciary’s ability to check their use of already-broad Congressionally-delegated authority.

A third concern about Chevron, following on these first two, is that it strips the judiciary of its role as independent arbiter of judicial questions. That is, it has historically been the purview of judges to resolve statutory ambiguities and fill in legislative interstices.

Chevron is also a focal point for more generalized concerns about the power of the modern administrative state. In this context, Chevron stands as a representative of a broader class of cases – State Farm, Auer, Seminole Rock, Fox v. FCC, and the like – that have been criticized for centralizing legislative, executive, and judicial powers in agencies, allowing Congress to abdicate its role as politically-accountable legislator, eroding the judiciary’s role in interpreting the law, and raising due process concerns for those subject to rules promulgated by federal agencies.

Walker and his co-authors have empirically explored the effects of Chevron in recent years, using robust surveys of federal agencies and judicial decisions to understand how the doctrine has affected the work of agencies and the courts. His most recent work (with Kent Barnett and Christina Boyd) has explored how Chevron affects judicial decisionmaking. Framing the question by explaining that “Chevron deference strives to remove politics from judicial decisionmaking,” they ask whether “Chevron deference achieve[s] this goal of removing politics from judicial decisionmaking?” They find that, empirically speaking, “the Chevron Court’s objective to reduce partisan judicial decision-making has been quite effective.” By instructing judges to defer to the political judgments (or just statutory interpretations) of agencies, judges are less political in their own decisionmaking.

Hamburger responds to this finding somewhat dismissively – and, indeed, the finding is almost tautological: “of course, judges disagree less when the Supreme Court bars them from exercising their independent judgment about what the law is.” (While a fair critique, I would temper it by arguing that it is nonetheless an important empirical finding – empirics that confirm important theory are as important as empirics that refute it, and are too often dismissed.)

Rather than focus on concerns about politicized decisionmaking by judges, Hamburger focuses instead on the importance of judicial independence – on it being “emphatically the duty of the Judicial Department to say what the law is” (quoting Marbury v. Madison). He reframes Walker’s results, arguing that “deference” to agencies is really “bias” in favor of the executive. “Rather than reveal diminished politicization, Walker’s numbers provide strong evidence of diminished judicial independence and even of institutionalized judicial bias.”

So which is it? Does Chevron reduce bias by de-politicizing judicial decisionmaking? Or does it introduce new bias in favor of the (inherently political) executive? The answer is probably that it does both. The more important answer, however, is that neither is the right question to ask.

What’s the correct measure of politicization? (or, You get what you measure)

Walker frames his study of the effects of Chevron on judicial decisionmaking by explaining that “Chevron deference strives to remove politics from judicial decisionmaking. Such deference to the political branches has long been a bedrock principle for at least some judicial conservatives.” Based on this understanding, his project is to ask whether “Chevron deference achieve[s] this goal of removing politics from judicial decisionmaking?”

This framing, that one of Chevron’s goals is to remove politics from judicial decisionmaking, is not wrong. But this goal may be more accurately stated as being to prevent the judiciary from encroaching upon the political purposes assigned to the executive and legislative branches. This restatement offers an important change in focus. It emphasizes the concern about politicizing judicial decisionmaking as a separation of powers issue. This is in opposition to the concern that, on consequentialist grounds, judges should not make politicized decisions – that is, that judges should avoid political decisions because such decisionmaking leads to substantively worse outcomes.

It is of course true that, as unelected officials with lifetime appointments, judges are the least politically accountable to the polity of any government officials. Judges’ decisions, therefore, can reasonably be expected to be less representative of, or responsive to, the concerns of the voting public than decisions of other government officials. But not all political decisions need to be directly politically accountable in order to be effectively politically accountable. A judicial interpretation of an ambiguous law, for instance, can be interpreted as a request, or even a demand, that Congress be held to political account. And where Congress is failing to perform its constitutionally-defined role as a politically-accountable decisionmaker, it may do less harm to the separation of powers for the judiciary to make political decisions that force politically-accountable responses by Congress than for the judiciary to respect its constitutional role while the Congress ignores its role.

Before going too far down this road, I should pause to label the reframing of the debate that I have impliedly proposed. To my mind, the question isn’t whether Chevron reduces political decisionmaking by judges; the question is how Chevron affects the politicization of, and ultimately accountability to the people for, the law. Critically, there is no “conservation of politicization” principle. Institutional design matters. One could imagine a model of government where Congress exercises very direct oversight over what the law is and how it is implemented, with frequent elections and a Constitutional prohibition on all but the most express and limited forms of delegation. One can also imagine a more complicated form of government in which responsibilities for making law, executing law, and interpreting law, are spread across multiple branches (possibly including myriad agencies governed by rules that even many members of those agencies do not understand). And one can reasonably expect greater politicization of decisions in the latter compared to the former – because there are more opportunities for saying that the responsibility for any decision lies with someone else (and therefore for politicization) in the latter than in the “the buck stops here” model of the former.

In the common-law tradition, judges exercised an important degree of independence because their job was, necessarily and largely, to “say what the law is.” For better or worse, we no longer live in a world where judges are expected to routinely exercise that level of discretion, and therefore to have that level of independence. Nor do I believe that “independence” is necessarily or inherently a criterion for the judiciary, at least in principle. I therefore somewhat disagree with Hamburger’s assertion that Chevron necessarily amounts to a problematic diminution in judicial independence.

Again, I return to a consequentialist understanding of the purposes of judicial independence. In my mind, we should consider the need for judicial independence in terms of whether “independent” judicial decisionmaking tends to lead to better or worse social outcomes. And here I do find myself sympathetic to Hamburger’s concerns about judicial independence. The judiciary is intended to serve as a check on the other branches. Hamburger’s concern about judicial independence is, in my mind, driven by an overwhelmingly correct intuition that the structure envisioned by the Constitution is one in which the independence of judges is an important check on the other branches. With respect to the Congress, this means, in part, ensuring that Congress is held to political account when it does legislative tasks poorly or fails to do them at all.

The courts abdicate this role when they allow agencies to save poorly drafted statutes through interpretation of ambiguity.

Judicial independence moderates politicization

Hamburger tells us that “Judges (and academics) need to wrestle with the realities of how Chevron bias and other administrative power is rapidly delegitimizing our government and creating a profound alienation.” Huzzah. Amen. I couldn’t agree more. Preach! Hear-hear!

Allow me to present my personal theory of how Chevron affects our political discourse. In the vernacular, I call this Chevron Step Three. At Step Three, Congress corrects any mistakes made by the executive or independent agencies in implementing the law or made by the courts in interpreting it. The subtle thing about Step Three is that it doesn’t exist – and, knowing this, Congress never bothers with the politically costly and practically difficult process of clarifying legislation.

To the contrary, Chevron encourages the legislature expressly not to legislate. The more expedient approach for a legislator who disagrees with a Chevron-backed agency action is to campaign on the disagreement – that is, to politicize it. If the EPA interprets the Clean Air Act too broadly, we need to retake the White House to get a new administrator in there to straighten out the EPA’s interpretation of the law. If the FCC interprets the Communications Act too narrowly, we need to retake the White House to change the chair so that we can straighten out that mess! And on the other side, we need to keep the White House so that we can protect these right-thinking agency interpretations from reversal by the loons on the other side that want to throw out all of our accomplishments. The campaign slogans write themselves.

So long as most agencies’ governing statutes are broad enough that those agencies can keep the ship of state afloat, even if drifting rudderless, legislators have little incentive to turn inward to engage in the business of government with their legislative peers. Rather, they are freed to turn outward towards their next campaign, vilifying or deifying the administrative decisions of the current government as best suits their electoral prospects.

The sharp-eyed observer will note that I’ve added a piece to the Chevron puzzle: the process described above assumes that a new administration can come in after an election and simply rewrite all of the rules adopted by the previous administration. Not to put too fine a point on the matter, but this is exactly what administrative law allows (see Fox v. FCC and State Farm). The underlying logic, which is really nothing more than an expansion of Chevron, is that statutory ambiguity delegates to agencies a “policy space” within which they are free to operate. So long as agency action stays within that space – which often allows for diametrically-opposed substantive interpretations – the courts say that it is up to Congress, not the Judiciary, to provide course corrections. Anything else would amount to politically unaccountable judges substituting their policy judgments (that is, acting independently) for those of politically-accountable legislators and administrators.

In other words, the politicization of law seen in our current political moment is largely a function of deference combined with a lack of stare decisis. A virtue of stare decisis is that it forces Congress to act to directly address politically undesirable opinions. Because agencies are not bound by stare decisis, an alternative, and politically preferable, way for Congress to remedy problematic agency decisions is to politicize the issue – instead of addressing the substantive policy issue through legislation, individual members of Congress can campaign on it. (Regular readers of this blog will be familiar with one contemporary example of this: the recent net neutrality CRA vote, which is widely recognized as having very little chance of ultimate success but is being championed by its proponents as a way to influence the 2018 elections.) This is more directly aligned with the individual member of Congress's own incentives: by keeping and placing more members of her party in Congress, her party will be able to control the leadership of the agency, which will in turn control the shape of that agency's policy. In other words, instead of channeling the attention of individual congressional actors inwards, to work together to develop law and policy, deference channels it outwards, towards campaigning on the ills and evils of the opposing administration and party as against the virtues of their own.

The virtue of judicial independence, of judges saying what they think the law is – or even what they think the law should be – is that it forces a politically accountable decision. Congress can either agree or disagree, but it must do something. Merely waiting for the next administration to come along will not be sufficient to alter the course set by the judicial interpretation of the law. Where Congress has abdicated its responsibility to make politically accountable decisions by deferring those decisions to the executive or agencies, the political-accountability justification for Chevron deference fails. In such cases, the better course for the courts may well be to enforce Congress's role under the separation of powers by refusing deference and returning the question to Congress.

The cause of basing regulation on evidence-based empirical science (rather than mere negative publicity) – and of preventing regulatory interference with First Amendment commercial speech rights – got a judicial boost on February 26.

Specifically, in National Association of Wheat Growers et al. v. Zeise (Monsanto Case), a California federal district court judge preliminarily enjoined application against Monsanto of a labeling requirement imposed by a California regulatory law, Proposition 65. Proposition 65 mandates that the Governor of California publish a list of chemicals known to the State to cause cancer, and also prohibits any person in the course of doing business from knowingly and intentionally exposing anyone to the listed chemicals without a prior "clear and reasonable" warning. In this case, California sought to make Monsanto place warning labels on its popular Roundup weed killer products, stating that glyphosate, a widely-used herbicide and key Roundup ingredient, was known to cause cancer. Monsanto, joined by various agribusiness entities, sued to enjoin California from taking that action. Judge William Shubb concluded that there was insufficient evidence that the active ingredient in Roundup causes cancer, and that requiring Monsanto to place such warnings on Roundup would violate Monsanto's First Amendment rights by compelling it to engage in false and misleading speech. Salient excerpts from Judge Shubb's opinion are set forth below:

[When, as here, it compels commercial speech, in order to satisfy the First Amendment,] [t]he State has the burden of demonstrating that a disclosure requirement is purely factual and uncontroversial, not unduly burdensome, and reasonably related to a substantial government interest. . . . The dispute in the present case is over whether the compelled disclosure is of purely factual and uncontroversial information. In this context, “uncontroversial” “refers to the factual accuracy of the compelled disclosure, not to its subjective impact on the audience.” [citation omitted]

On the evidence before the court, the required warning for glyphosate does not appear to be factually accurate and uncontroversial because it conveys the message that glyphosate’s carcinogenicity is an undisputed fact, when almost all other regulators have concluded that there is insufficient evidence that glyphosate causes cancer. . . .

It is inherently misleading for a warning to state that a chemical is known to the state of California to cause cancer based on the finding of one organization [, the International Agency for Research on Cancer] (which as noted above, only found that substance is probably carcinogenic), when apparently all other regulatory and governmental bodies have found the opposite, including the EPA, which is one of the bodies California law expressly relies on in determining whether a chemical causes cancer. . . . [H]ere, given the heavy weight of evidence in the record that glyphosate is not in fact known to cause cancer, the required warning is factually inaccurate and controversial. . . .

The court’s First Amendment inquiry here boils down to what the state of California can compel businesses to say. Whether Proposition 65’s statutory and regulatory scheme is good policy is not at issue. However, where California seeks to compel businesses to provide cancer warnings, the warnings must be factually accurate and not misleading. As applied to glyphosate, the required warnings are false and misleading. . . .

As plaintiffs have shown that they are likely to succeed on the merits of their First Amendment claim, are likely to suffer irreparable harm absent an injunction, and that the balance of equities and public interest favor an injunction, the court will grant plaintiffs’ request to enjoin Proposition 65’s warning requirement for glyphosate.

The Monsanto Case commendably highlights a little-appreciated threat of government overregulatory zeal. Not only may excessive regulation fail a cost-benefit test and undermine private property rights, it may also violate the First Amendment speech rights of private actors when it compels inaccurate speech. The negative economic consequences may be substantial when the government-mandated speech involves a claim about a technical topic that not only lacks empirical support (and thus may be characterized as "junk science"), but is deceptive and misleading (if not demonstrably false). Deceptive and misleading speech in the commercial marketplace reduces marketplace efficiency and social welfare (both consumer surplus and producer surplus). In particular, it does this by deterring mutually beneficial transactions (for example, purchases of Roundup that would occur absent misleading labeling about cancer risks), generating suboptimal transactions (for example, purchases of inferior substitutes for Roundup due to misleading Roundup labeling), and distorting competition within the marketplace (the reallocation of market shares between Roundup and substitutes not subject to labeling). The short-term static effects of such market distortions may be dwarfed by the dynamic effects, such as firms' disincentives to invest in innovation in (or even participate in) markets subject to inaccurate information concerning the firms' products or services.
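To put the static welfare point in rough quantitative terms, consider a minimal sketch (my own stylization, not drawn from the opinion or from any record evidence): if an unfounded cancer warning lowers each consumer's perceived value of the product by some amount $\delta$ per unit, then purchases whose true net benefit is positive but smaller than $\delta$ are forgone. With quantity falling by $\Delta q$, the familiar deadweight-loss triangle gives an approximate welfare loss of

\[ \mathrm{DWL} \approx \tfrac{1}{2}\,\delta\,\Delta q. \]

The misleading warning thus operates like a per-unit tax that raises no revenue for anyone, destroying the surplus from the marginal transactions it deters.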

In short, the Monsanto Case highlights the fact that government regulation not only imposes an implicit tax on business – it affirmatively distorts the workings of individual markets if it causes the introduction of misleading or deceptive information that is material to marketplace decision-making. The threat of such distortive regulation may be substantial, especially in areas where regulators interact with "public interest clients" that have an incentive to demonize disfavored activities by private commercial actors – one example being the health and safety regulation of agricultural chemicals. In those areas, there may be a case for federal preemption of state regulation, and for particularly close supervision of federal agencies to avoid economically inappropriate commercial speech mandates. Stay tuned for future discussion of such potential legal reforms.

This week the FCC will vote on Chairman Ajit Pai’s Restoring Internet Freedom Order. Once implemented, the Order will rescind the 2015 Open Internet Order and return antitrust and consumer protection enforcement to primacy in Internet access regulation in the U.S.

In anticipation of that, earlier this week the FCC and FTC entered into a Memorandum of Understanding delineating how the agencies will work together to police ISPs. Under the MOU, the FCC will review informal complaints regarding ISPs’ disclosures about their blocking, throttling, paid prioritization, and congestion management practices. Where an ISP fails to make the proper disclosures, the FCC will take enforcement action. The FTC, for its part, will investigate and, where warranted, take enforcement action against ISPs for unfair, deceptive, or otherwise unlawful acts.

Critics of Chairman Pai’s plan contend (among other things) that the reversion to antitrust-agency oversight of competition and consumer protection in telecom markets (and the Internet access market particularly) would be an aberration — that the US will become the only place in the world to move backward away from net neutrality rules and toward antitrust law.

But this characterization has it exactly wrong. In fact, much of the world has been moving toward an antitrust-based approach to telecom regulation. The aberration was the telecom-specific, common-carrier regulation of the 2015 Open Internet Order.

The longstanding, global transition from telecom regulation to antitrust enforcement

The decade-old discussion around net neutrality has morphed, perhaps inevitably, to join the larger conversation about competition in the telecom sector and the proper role of antitrust law in addressing telecom-related competition issues. Today, with the latest net neutrality rules in the US on the chopping block, the discussion has grown more fervent (and even sometimes inordinately violent).

On the one hand, opponents of the 2015 rules express strong dissatisfaction with traditional, utility-style telecom regulation of innovative services, and view the 2015 rules as a meritless usurpation of antitrust principles in guiding the regulation of the Internet access market. On the other hand, proponents of the 2015 rules voice skepticism that antitrust can actually provide a way to control competitive harms in the tech and telecom sectors, and see the heavy hand of Title II, common-carrier regulation as a necessary corrective.

While the evidence seems clear that an early-20th-century approach to telecom regulation is indeed inappropriate for the modern Internet (see our lengthy discussions on this point, e.g., here and here, as well as Thom Lambert’s recent post), it is perhaps less clear whether antitrust, with its constantly evolving, common-law foundation, is up to the task.

To answer that question, it is important to understand that for decades, the arc of telecom regulation globally has been sweeping in the direction of ex post competition enforcement, and away from ex ante, sector-specific regulation.

Howard Shelanski, who served as President Obama’s OIRA Administrator from 2013-17, Director of the Bureau of Economics at the FTC from 2012-2013, and Chief Economist at the FCC from 1999-2000, noted in 2002, for instance, that

[i]n many countries, the first transition has been from a government monopoly to a privatizing entity controlled by an independent regulator. The next transformation on the horizon is away from the independent regulator and towards regulation through general competition law.

Globally, nowhere perhaps has this transition been more clearly stated than in the EU's telecom regulatory framework, which asserts:

The aim is to reduce ex ante sector-specific regulation progressively as competition in markets develops and, ultimately, for electronic communications [i.e., telecommunications] to be governed by competition law only. (Emphasis added.)

To facilitate the transition and quash regulatory inconsistencies among member states, the EC identified certain markets in which national regulators were to decide, consistent with EC guidelines on market analysis, whether ex ante obligations were necessary in their respective countries because an operator held "significant market power." In 2003 the EC identified 18 such markets. After observing technological and market changes over the next four years, the EC reduced that number to seven in 2007 and, in 2014, reduced it further to four wholesale markets that could potentially require ex ante regulation.

It is important to highlight that this framework is not uniquely achievable in Europe because of some special trait in its markets, regulatory structure, or antitrust framework. Determining the right balance of regulatory rules and competition law, whether enforced by a telecom regulator, antitrust regulator, or multi-purpose authority (i.e., with authority over both competition and telecom) means choosing from a menu of options that should be periodically assessed to move toward better performance and practice. There is nothing jurisdiction-specific about this; it is simply a matter of good governance.

And since the early 2000s, scholars have highlighted that the US is in an intriguing position to transition to a merged regulator because, for example, it has both a “highly liberalized telecommunications sector and a well-established body of antitrust law.” For Shelanski, among others, the US has been ready to make the transition since 2007.

Far from being an aberrant move away from sound telecom regulation, the FCC’s Restoring Internet Freedom Order is actually a step in the direction of sensible, antitrust-based telecom regulation — one that many parts of the world have long since undertaken.

How antitrust oversight of telecom markets has been implemented around the globe

In implementing the EU’s shift toward antitrust oversight of the telecom sector since 2003, agencies have adopted a number of different organizational reforms.

Some telecom regulators assumed new duties over competition — e.g., Ofcom in the UK. Non-European countries, including, e.g., Mexico, have also followed this model.

Other European Member States have eliminated their telecom regulator altogether. In a useful case study, Roslyn Layton and Joe Kane outline Denmark’s approach, which includes disbanding its telecom regulator and passing the regulation of the sector to various executive agencies.

Meanwhile, the Netherlands and Spain each elected to merge its telecom regulator into its competition authority. New Zealand has similarly adopted this framework.

A few brief case studies will illuminate these and other reforms:

The Netherlands

In 2013, the Netherlands merged its telecom, consumer protection, and competition regulators to form the Netherlands Authority for Consumers and Markets (ACM). The ACM’s structure streamlines decision-making on pending industry mergers and acquisitions at the managerial level, eliminating the challenges arising from overlapping agency reviews and cross-agency coordination. The reform also unified key regulatory methodologies, such as creating a consistent calculation method for the weighted average cost of capital (WACC).
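For readers unfamiliar with the term, the WACC is the standard weighted average of a firm's costs of equity and debt financing; the formula below is the textbook version, given only for context (the source does not describe the ACM's exact parameterization):

\[ \mathrm{WACC} = \frac{E}{V}\, r_E + \frac{D}{V}\, r_D \,(1 - t_c), \qquad V = E + D, \]

where $E$ and $D$ are the values of equity and debt, $r_E$ and $r_D$ the required returns on each, and $t_c$ the corporate tax rate. Harmonizing these inputs across sectors matters because regulated firms' allowed returns are commonly set by reference to the WACC.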

The Netherlands also claims that the ACM's ex post approach is better able to adapt to "technological developments, dynamic markets, and market trends":

The combination of strength and flexibility allows for a problem-based approach where the authority first engages in a dialogue with a particular market player in order to discuss market behaviour and ensure the well-functioning of the market.

The Netherlands also cited a significant reduction in the risk of regulatory capture as staff no longer remain in positions for long tenures but rather rotate on a project-by-project basis from a regulatory to a competition department or vice versa. Moving staff from team to team has also added value in terms of knowledge transfer among the staff. Finally, while combining the cultures of each regulator was less difficult than expected, the government reported that the largest cause of consternation in the process was agreeing on a single IT system for the ACM.

Spain

In 2013, Spain created the National Authority for Markets and Competition (CNMC), merging the National Competition Authority with several sectoral regulators, including the telecom regulator, to "guarantee cohesion between competition rulings and sectoral regulation." In a report to the OECD, Spain stated that moving to the new model was necessary because of increasing competition and technological convergence in the sector (i.e., the ability of different technologies to offer substitute services, such as fixed and wireless Internet access). It added that integrating its telecom regulator with its competition regulator ensures

a predictable business environment and legal certainty [i.e., removing “any threat of arbitrariness”] for the firms. These two conditions are indispensable for network industries — where huge investments are required — but also for the rest of the business community if investment and innovation are to be promoted.

As in the Netherlands, additional benefits include significantly lowering the risk of regulatory capture by "preventing the alignment of the authority's performance with sectoral interests."

Denmark

In 2011, the Danish government unexpectedly dismantled the National IT and Telecom Agency and split its duties among four regulators. While the move came as a surprise, it did not engender national debate — vitriolic or otherwise — nor did it receive much attention in the press.

Since the dismantlement, scholars have observed less politicization of telecom regulation. And even though the competition authority didn't take over telecom regulatory duties, the Ministry of Business and Growth implemented a light-touch regime, which, as Layton and Kane note, has helped to turn Denmark into one of the "top digital nations" according to the International Telecommunication Union's Measuring the Information Society Report.

New Zealand

The New Zealand Commerce Commission (NZCC) is responsible for antitrust enforcement, economic regulation, consumer protection, and certain sectoral regulations, including telecommunications. By combining functions into a single regulator, New Zealand asserts that it can more cost-effectively administer government operations. Combining regulatory functions also created spillover benefits as, for example, competition analysis is a prerequisite for sectoral regulation, and merger analysis in regulated sectors (like telecom) can leverage staff with detailed and valuable knowledge. Similar to the other countries, New Zealand also noted that the possibility of regulatory capture "by the industries they regulate is reduced in an agency that regulates multiple sectors or also has competition and consumer law functions."

Advantages identified by other organizations

The GSMA, a mobile industry association, notes in its 2016 report, Resetting Competition Policy Frameworks for the Digital Ecosystem, that merging the sector regulator into the competition regulator also mitigates regulatory creep by eliminating the prodding required to induce a sector regulator to roll back regulation as technological evolution requires it, as well as by curbing the sector regulator’s temptation to expand its authority. After all, regulators exist to regulate.

At the same time, it’s worth noting that eliminating the telecom regulator has not gone off without a hitch in every case (most notably, in Spain). It’s important to understand, however, that the difficulties that have arisen in specific contexts aren’t endemic to the nature of competition versus telecom regulation. Nothing about these cases suggests that economic-based telecom regulations are inherently essential, or that replacing sector-specific oversight with antitrust oversight can’t work.

Contrasting approaches to net neutrality in the EU and New Zealand

Unfortunately, adopting a proper framework and implementing sweeping organizational reform is no guarantee of consistent decisionmaking in its implementation. Thus, in 2015, the European Parliament and Council of the EU went against two decades of telecommunications best practices by implementing ex ante net neutrality regulations without hard evidence of widespread harm and absent any competition analysis to justify the decision. The EU placed net neutrality under the universal service and users' rights prong of the regulatory framework, and the resulting rules lack coherence and economic rigor.

Notably, while BEREC has not provided clear guidance, a 2017 report commissioned by the EU’s Directorate-General for Competition weighing competitive benefits and harms of zero rating concluded “there appears to be little reason to believe that zero-rating gives rise to competition concerns.”

The report also provides an ex post framework for analyzing such deals in the context of a two-sided market by assessing a deal’s impact on competition between ISPs and between content and application providers.

The EU example demonstrates that where a telecom regulator perceives a novel problem, competition law, grounded in economic principles, brings a clear framework to bear.

In New Zealand, if a net neutrality issue were to arise, the ISP's behavior would be examined in the context of existing antitrust law, including a determination of whether the ISP is exercising market power, and by the Telecommunications Commissioner, who monitors competition and the development of telecom markets for the NZCC.

The TCF Code, a mandatory code of practice issued by New Zealand's Telecommunications Carriers' Forum (TCF), establishes the information ISPs must disclose to consumers about their services. For example, ISPs must disclose any arrangements that prioritize certain traffic. Regarding traffic management, complaints of unfair contract terms — when not resolved by a process administered by an independent industry group — may be referred to the NZCC for an investigation in accordance with the Fair Trading Act. Under the Commerce Act, the NZCC can prohibit anticompetitive mergers, or practices that substantially lessen competition or that constitute price fixing or abuse of market power.

In addition, the NZCC has been active in patrolling vertical agreements between ISPs and content providers — precisely the types of agreements bemoaned by Title II net neutrality proponents.

In February 2017, the NZCC blocked Vodafone New Zealand’s proposed merger with Sky Network (combining Sky’s content and pay TV business with Vodafone’s broadband and mobile services) because the Commission concluded that the deal would substantially lessen competition in relevant broadband and mobile services markets. The NZCC was

unable to exclude the real chance that the merged entity would use its market power over premium live sports rights to effectively foreclose a substantial share of telecommunications customers from rival telecommunications services providers (TSPs), resulting in a substantial lessening of competition in broadband and mobile services markets.

Such foreclosure would result, the NZCC argued, from exclusive content and integrated bundles with features such as “zero rated Sky Sport viewing over mobile.” In addition, Vodafone would have the ability to prevent rivals from creating bundles using Sky Sport.

The substance of the Vodafone/Sky decision notwithstanding, the NZCC’s intervention is further evidence that antitrust isn’t a mere smokescreen for regulators to do nothing, and that regulators don’t need to design novel tools (such as the Internet conduct rule in the 2015 OIO) to regulate something neither they nor anyone else knows very much about: “not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.” Instead, with ex post competition enforcement, regulators can allow dynamic innovation and competition to develop, and are perfectly capable of intervening — when and if identifiable harm emerges.

Conclusion

Unfortunately for Title II proponents — who have spent a decade at the FCC lobbying for net neutrality rules despite a lack of actionable evidence — the FCC is not acting without precedent by enabling the FTC’s antitrust and consumer protection enforcement to police conduct in Internet access markets. For two decades, the object of telecommunications regulation globally has been to transition away from sector-specific ex ante regulation to ex post competition review and enforcement. It’s high time the U.S. got on board.

The FTC will hold an "Informational Injury Workshop" in December "to examine consumer injury in the context of privacy and data security." Defining the scope of cognizable harm that may result from the unauthorized use or third-party hacking of consumer information is, to be sure, a crucial inquiry, particularly as ever more information is stored digitally. But the Commission — rightly — is aiming at more than mere definition. As it notes, the ultimate objective of the workshop is to address questions like:

How do businesses evaluate the benefits, costs, and risks of collecting and using information in light of potential injuries? How do they make tradeoffs? How do they assess the risks of different kinds of data breach? What market and legal incentives do they face, and how do these incentives affect their decisions?

How do consumers perceive and evaluate the benefits, costs, and risks of sharing information in light of potential injuries? What obstacles do they face in conducting such an evaluation? How do they evaluate tradeoffs?

Understanding how businesses and consumers assess the risk and cost “when information about [consumers] is misused,” and how they conform their conduct to that risk, entails understanding not only the scope of the potential harm, but also the extent to which conduct affects the risk of harm. This, in turn, requires an understanding of the FTC’s approach to evaluating liability under Section 5 of the FTC Act.

The core of the problem arises from the Commission's reliance on what it calls a "reasonableness" standard for its evaluation of data security. By its nature, a standard that assigns liability only for unreasonable conduct should incorporate concepts resembling those of a common law negligence analysis — e.g., establishing a standard of due care, determining causation, and evaluating the costs and benefits of conduct that would mitigate the risk of harm. Unfortunately, the Commission's approach to reasonableness diverges from the rigor of a negligence analysis. In fact, as it has developed, it operates more like a strict liability regime in which largely inscrutable prosecutorial discretion determines which conduct, which firms, and which outcomes will give rise to liability.
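The canonical formalization of that negligence calculus is Judge Learned Hand's formula from United States v. Carroll Towing: a defendant is negligent only if the burden of precautions is less than the expected loss those precautions would avert,

\[ B < P \cdot L, \]

where $B$ is the cost of the precaution, $P$ the probability of the harm, and $L$ the magnitude of the loss. A genuinely negligence-like data security standard would ask the analogous question: whether the security measures the FTC faults a firm for omitting would have cost less than the expected breach losses they would have prevented.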

Most troublingly, coupled with the Commission’s untenably lax (read: virtually nonexistent) evidentiary standards, the extremely liberal notion of causation embodied in its “reasonableness” approach means that the mere storage of personal information, even absent any data breach, could amount to an unfair practice under the Act — clearly not a “reasonable” result.

The notion that a breach itself can constitute injury will, we hope, be taken up during the workshop. But even if injury is limited to a particular type of breach — say, one in which sensitive, personal information is exposed to a wide swath of people — unless the Commission’s definition of what it means for conduct to be “likely to cause” harm is fixed, it will virtually always be the case that storage of personal information could conceivably lead to the kind of breach that constitutes injury. In other words, better defining the scope of injury does little to cabin the scope of the agency’s discretion when conduct creating any risk of that injury is actionable.

Our comments elaborate on these issues and provide our thoughts on how the subjective nature of informational injuries can fit into Section 5, with a particular focus on the problem of assessing informational injury given evolving social context, and on the need to appropriately assess benefits in any cost-benefit analysis of conduct leading to informational injury.

On July 24, as part of their newly announced "Better Deal" campaign, congressional Democrats released an antitrust proposal ("Better Deal Antitrust Proposal" or BDAP) entitled "Cracking Down on Corporate Monopolies and the Abuse of Economic and Political Power." Unfortunately, this antitrust tract is really an "Old Deal" screed that rehashes long-discredited ideas about "bigness is badness" and "corporate abuses," untethered from serious economic analysis. (In spirit it echoes the proposal for a renewed emphasis on "fairness" in antitrust made by then-Acting Assistant Attorney General Renata Hesse in 2016 – a recommendation that ran counter to sound economics, as I explained in a September 2016 Truth on the Market commentary.) Implementation of the BDAP's recommendations would be a "worse deal" for American consumers and for American economic vitality and growth.

The BDAP’s Portrayal of the State of Antitrust Enforcement is Factually Inaccurate, and it Ignores the Real Problems of Crony Capitalism and Regulatory Overreach

The Better Deal Antitrust Proposal begins with the assertion that antitrust has failed in recent decades:

Over the past thirty years, growing corporate influence and consolidation has led to reductions in competition, choice for consumers, and bargaining power for workers. The extensive concentration of power in the hands of a few corporations hurts wages, undermines job growth, and threatens to squeeze out small businesses, suppliers, and new, innovative competitors. It means higher prices and less choice for the things the American people buy every day. . . [This is because] [o]ver the last thirty years, courts and permissive regulators have allowed large companies to get larger, resulting in higher prices and limited consumer choice in daily expenses such as travel, cable, and food and beverages. And because concentrated market power leads to concentrated political power, these companies deploy armies of lobbyists to increase their stranglehold on Washington. A Better Deal on competition means that we will revisit our antitrust laws to ensure that the economic freedom of all Americans—consumers, workers, and small businesses—come before big corporations that are getting even bigger.

This statement’s assertions are curious (not to mention problematic) in multiple respects.

First, since Democratic administrations have held the White House for sixteen of the past thirty years, the BDAP appears to acknowledge that Democratic presidents have overseen a failed antitrust policy.

Second, the broad claim that consumers have faced higher prices and limited consumer choice with regard to their daily expenses is baseless. Indeed, internet commerce and new business models have sharply reduced travel and entertainment costs for the bulk of American consumers, and new “high technology” products such as smartphones and electronic games have been characterized by dramatic improvements in innovation, enhanced variety, and relatively lower costs. Cable suppliers face vibrant competition from competitive satellite providers, fiberoptic cable suppliers (the major telcos such as Verizon), and new online methods for distributing content. Consumer price inflation has been extremely low in recent decades, compared to the high inflationary, less innovative environment of the 1960s and 1970s – decades when federal antitrust law was applied much more vigorously. Thus, the claim that weaker antitrust has denied consumers “economic freedom” is at war with the truth.

Third, the claim that recent decades have seen the creation of “concentrated market power,” safe from antitrust challenge, ignores the fact that, over the last three decades, apolitical government antitrust officials under both Democratic and Republican administrations have applied well-accepted economic tools (wielded by the scores of Ph.D. economists in the Justice Department and Federal Trade Commission) in enforcing the antitrust laws. Antitrust analysis has used economics to focus on inefficient business conduct that would maintain or increase market power, and large numbers of cartels have been prosecuted and questionable mergers (including a variety of major health care and communications industry mergers) have been successfully challenged. The alleged growth of “concentrated market power,” untouched by incompetent antitrust enforcers, is a myth. Furthermore, claims that mere corporate size and “aggregate concentration” are grounds for antitrust concern (“big is bad”) were decisively rejected by empirical economic research published in the 1970s, and are no more convincing today. (As I pointed out in a January 2017 blog posting at this site, recent research by highly respected economists debunks a few claims that federal antitrust enforcers have been “excessively tolerant” of late in analyzing proposed mergers.)

More interesting is the BDAP’s claim that “armies of [corporate] lobbyists” manage to “increase their stranglehold on Washington.” This is not an antitrust concern, however, but, rather, a complaint against crony capitalism and overregulation, which became an ever more serious problem under the Obama Administration. As I explained in my October 2016 critique of the American Antitrust Institute’s September 2008 National Competition Policy Report (a Report which is very similar in tone to the BDAP), the rapid growth of excessive regulation during the Obama years has diminished competition by creating new regulatory schemes that benefit entrenched and powerful firms (such as Dodd-Frank Act banking rules that impose excessive burdens on smaller banks). My critique emphasized that, “as Dodd-Frank and other regulatory programs illustrate, large government rulemaking schemes often are designed to favor large and wealthy well-connected rent-seekers at the expense of smaller and more dynamic competitors.” And, more generally, excessive regulatory burdens undermine the competitive process, by distorting business decisions in a manner that detracts from competition on the merits.

It follows that, if the BDAP really wanted to challenge "unfair" corporate advantages, it would seek to roll back excessive regulation (see my November 2016 article on Trump Administration competition policy). Indeed, the Trump Administration's regulatory reform program (which features agency-specific regulatory reform task forces) seeks to do just that. Perhaps then the BDAP could be rewritten to focus on endorsing President Trump's regulatory reform initiative, rather than emphasizing a meritless "big is bad" populist antitrust policy that was consigned to the enforcement dustbin decades ago.

The BDAP’s Specific Proposals Would Harm the Economy and Reduce Consumer Welfare

Unfortunately, the BDAP does more than wax nostalgic about old-time “big is bad” antitrust policy. It affirmatively recommends policy changes that would harm the economy.

First, the BDAP would require “a broader, longer-term view and strong presumptions that market concentration can result in anticompetitive conduct.” Specifically, it would create “new standards to limit large mergers that unfairly consolidate corporate power,” including “mergers [that] reduce wages, cut jobs, lower product quality, limit access to services, stifle innovation, or hinder the ability of small businesses and entrepreneurs to compete.” New standards would also “explicitly consider the ways in which control of consumer data can be used to stifle competition or jeopardize consumer privacy.”

Unlike current merger policy, which evaluates likely competitive effects on price and quality within economically relevant markets, these new standards are open-ended. They could justify challenges based on such a wide variety of factors that they would incentivize direct competitors not to merge, even in cases where the proposed merged entity would prove more efficient and better able to enhance quality or innovation. Certain less efficient competitors – say, small businesses – could argue that they would be driven out of business, or that some jobs in the industry would disappear, in order to prompt government challenges. But such challenges would tend to undermine innovation and business improvements, and the inevitable redistribution of assets to higher-valued uses that is a key benefit of corporate reorganizations and acquisitions. (Merger activity might shift instead, for example, toward inefficient conglomerate acquisitions among companies in unrelated industries, of the sort incentivized by the overly strict 1960s rules that prohibited mergers among direct competitors.) Such a change would represent a retreat from economic common sense, and would be at odds with the consensus, economically sound merger enforcement guidance that U.S. enforcers have long recommended other countries adopt. Furthermore, questions of consumer data and privacy are more appropriately dealt with as consumer protection questions, which the Federal Trade Commission has handled successfully for years.

Second, the BDAP would require “frequent, independent [after-the-fact] reviews of mergers” and require regulators “to take corrective measures if they find abusive monopolistic conditions where previously approved [consent decree] measures fail to make good on their intended outcomes.”

While high profile mergers subject to significant divestiture or other remedial requirements have in appropriate circumstances included monitoring requirements, the tone of this recommendation is to require that far more mergers be subjected to detailed and ongoing post-acquisition reviews. The cost of such monitoring is substantial, however, and routine reliance on it (backed by the threat of additional enforcement actions based merely on changing economic conditions) could create excessive caution in the post-merger management of newly-consolidated enterprises. Indeed, potential merged parties might decide in close cases that this sort of oversight is not worth accepting, and therefore call off potentially efficient transactions that would have enhanced economic welfare. (The reality of enforcement error cost, and the possibility of misdiagnosis of post-merger competitive conditions, is not acknowledged by the BDAP.)

Third, a newly created “competition advocate” independent of the existing federal antitrust enforcers would be empowered to publicly recommend investigations, with the enforcers required to justify publicly why they chose not to pursue a particular recommended investigation. The advocate would ensure that antitrust enforcers are held “accountable,” assure that complaints about “market exploitation and anticompetitive conduct” are heard, and publish data on “concentration and abuses of economic power” with demographic breakdowns.

This third proposal is particularly egregious. It is at odds with the long tradition of prosecutorial discretion that has been enjoyed by the federal antitrust enforcers (and law enforcers in general). It would also empower a special interest intervenor to promote the complaints of interest groups that object to efficiency-seeking business conduct, thereby undermining the careful economic and legal analysis that is consistently employed by the expert antitrust agencies. The references to “concentration” and “economic power” clarify that the “advocate” would have an untrammeled ability to highlight non-economic objections to transactions raised by inefficient competitors, jealous rivals, or self-styled populists who object to excessive “bigness.” This would strike at the heart of our competitive process, which presumes that private parties will be allowed to fulfill their own goals, free from government micromanagement, absent indications of a clear and well-defined violation of law. In sum, the “competition advocate” is better viewed as a “special interest” advocate empowered to ignore normal legal constraints and unjustifiably interfere in business transactions. If empowered to operate freely, such an advocate (better viewed as an albatross) would undoubtedly chill a wide variety of business arrangements, to the detriment of consumers and economic innovation.

Finally, the BDAP refers to a variety of ills that are said to affect specific named industries, in particular airlines, cable/telecom, beer, food prices, and eyeglasses. Airlines are subject to a variety of capacity limitations (limitations on landing slots and the size/number of airports) and regulatory constraints (prohibitions on foreign entry or investment) that may affect competitive conditions, but airline mergers are closely reviewed by the Justice Department. Cable and telecom companies face a variety of federal, state, and local regulations, and their mergers also are closely scrutinized. The BDAP's reference to the proposed AT&T/Time Warner merger ignores the potential efficiencies of this "vertical" arrangement involving complementary assets (see my coauthored commentary here), and resorts to unsupported claims about wrongful "discrimination" by "behemoths" – issues that in any event are examined in antitrust merger reviews. Unsupported claims of harm to competition and consumer choice are thrown out in the references to beer and agrochemical mergers, which also receive close economically-focused merger scrutiny under existing law. Concerns raised about the price of eyeglasses ignore the role of potentially anticompetitive regulation – that is, bad government – in harming consumer welfare in this sector. In short, the alleged competitive "problems" the BDAP raises with respect to particular industries are no more compelling than the rest of its analysis. The Justice Department and Federal Trade Commission are hard at work applying sound economics to these sectors. They should be left to do their jobs, and the BDAP's industry-specific commentary (sadly, like the rest of its commentary) should be accorded no weight.

Conclusion

Congressional Democrats would be well-advised to ditch their efforts to resurrect the counterproductive antitrust policy from days of yore, and instead focus on real economic problems, such as excessive and inappropriate government regulation, as well as weak protection for U.S. intellectual property rights, here and abroad (see here, for example). Such a change in emphasis would redound to the benefit of American consumers and producers.

The Consumer Financial Protection Bureau (CFPB) recently released its final Arbitration Rule governing consumer financial contracts. In the Bureau's own summary, the Rule does two things. First, the final rule prohibits covered providers of certain consumer financial products and services from using an agreement with a consumer that provides for arbitration of any future dispute between the parties to bar the consumer from filing or participating in a class action concerning the covered consumer financial product or service. Second, the final rule requires covered providers that are involved in an arbitration pursuant to a pre-dispute arbitration agreement to submit specified arbitral records to the Bureau and also to submit specified court records. The Bureau is also adopting official interpretations to the regulation.

The Arbitration Rule’s effective date is 60 days following its publication in the Federal Register (which is imminent), and it applies to contracts entered into more than 180 days after that.

Cutting through the hyperbole that the Arbitration Rule protects consumers from "unfairness" that would deny them "their day in court," this Rule is in fact highly anti-consumer and harmful to innovation. As Competitive Enterprise Institute Senior Fellow John Berlau put it, in promulgating this Rule, "[t]he CFPB has disregarded vast data showing that arbitration more often compensates consumers for damages faster and grants them larger awards than do class action lawsuits. This regulation could have particularly harmful effects on FinTech innovations, such as peer-to-peer lending." Moreover, in a coauthored paper, Professors Jason Johnston of the University of Virginia Law School and Todd Zywicki of the Scalia Law School debunked a CFPB study that sought to justify the agency's plans to issue the Arbitration Rule. They concluded:

The CFPB’s [own] findings show that arbitration is relatively fair and successful at resolving a range of disputes between consumers and providers of consumer financial products, and that regulatory efforts to limit the use of arbitration will likely leave consumers worse off . . . . Moreover, owing to flaws in the report’s design and a lack of information, the report should not be used as the basis for any legislative or regulatory proposal to limit the use of consumer arbitration.

Unfortunately, the Arbitration Rule is just the latest of many costly regulatory outrages perpetrated by the CFPB, an unaccountable bureaucracy that offends the Constitution’s separation of powers and should be eliminated by Congress, as I explained in a 2016 Heritage Foundation report.

Legislative elimination of an agency, however, takes time. Fortunately, in the near term, Congress can apply the Congressional Review Act (CRA) to prevent the Arbitration Rule from taking effect, and to block the CFPB from passing rules similar to it in the future.

[The CRA is] Congress’s most recent effort to trim the excesses of the modern administrative state. The act requires the executive branch to report every “rule” — a term that includes not only the regulations an agency promulgates, but also its interpretations of the agency’s governing laws — to the Senate and House of Representatives so that each chamber can schedule an up-or-down vote on the rule under the statute’s fast-track procedure. The act was designed to enable Congress expeditiously to overturn agency regulations by avoiding the delays occasioned by the Senate’s filibuster rules and practices while also satisfying the [U.S. Constitution’s] Article I Bicameralism and Presentment requirements, which force the Congress and President to collaborate to enact, revise, or repeal a law. Under the CRA, a joint resolution of disapproval signed into law by the President invalidates the rule and bars an agency from thereafter adopting any substantially similar rule absent a new act of Congress.

Although the CRA was almost never invoked before 2017, in recent months it has been used extensively as a tool by Congress and the Trump Administration to roll back specific manifestations of Obama Administration regulatory overreach (for example, see here and here).

Today, the International Center for Law & Economics (ICLE) released a study updating our 2014 analysis of the economic effects of the Durbin Amendment to the Dodd-Frank Act.

The new paper, Unreasonable and Disproportionate: How the Durbin Amendment Harms Poorer Americans and Small Businesses, by ICLE scholars Todd J. Zywicki, Geoffrey A. Manne, and Julian Morris, can be found here; a Fact Sheet highlighting the paper's key findings is available here.

Introduced as part of the Dodd-Frank Act in 2010, the Durbin Amendment sought to reduce the interchange fees assessed by large banks on debit card transactions. In the words of its primary sponsor, Sen. Richard Durbin, the Amendment aspired to help “every single Main Street business that accepts debit cards keep more of their money, which is a savings they can pass on to their consumers.”
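For context, the price control works roughly as follows (the cap parameters are those of the Federal Reserve's implementing rule, Regulation II): covered issuers may charge interchange of no more than 21 cents plus 0.05 percent of the transaction value, with an additional 1-cent fraud-prevention adjustment for qualifying issuers. On a $40 debit purchase, for example, the capped fee is

\[ \$0.21 + 0.0005 \times \$40 + \$0.01 = \$0.24, \]

roughly half of the pre-regulation average interchange fee of about 44 cents per transaction.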

Unfortunately, although the Durbin Amendment did generate benefits for big-box retailers, ICLE’s 2014 analysis found that it had actually harmed many other merchants and imposed substantial net costs on the majority of consumers, especially those from lower-income households.

In the current study, we analyze a welter of new evidence and arguments to assess whether time has ameliorated or exacerbated the Amendment’s effects. Our findings in this report expand upon and reinforce our findings from 2014:

Relative to the period before the Durbin Amendment, almost every segment of the interrelated retail, banking, and consumer finance markets has been made worse off as a result of the Amendment.

Predictably, the removal of billions of dollars in interchange fee revenue has led to the imposition of higher bank fees and reduced services for banking consumers.

In fact, millions of households, regardless of income level, have been adversely affected by the Durbin Amendment through higher overdraft fees, increased minimum balances, reduced access to free checking, higher ATM fees, and lost debit card rewards, among other things.

Nor is there any evidence that merchants have lowered prices for retail consumers; for many small-ticket items, in fact, prices have been driven up.

Contrary to Sen. Durbin’s promises, in other words, increased banking costs have not been offset by lower retail prices.

At the same time, although large merchants continue to reap a Durbin Amendment windfall, there remains no evidence that small merchants have realized any interchange cost savings — indeed, many have suffered cost increases.

And all of these effects fall hardest on the poor. Hundreds of thousands of low-income households have chosen (or been forced) to exit the banking system, with the result that they face higher costs, difficulty obtaining credit, and complications receiving and making payments — all without offset in the form of lower retail prices.

Finally, the 2017 study also details a new trend that was not apparent when we examined the data three years ago: Contrary to our findings then, the two-tier system of interchange fee regulation (which exempts issuing banks with under $10 billion in assets) no longer appears to be protecting smaller banks from the Durbin Amendment’s adverse effects.

This week the House begins consideration of the Amendment’s repeal as part of Rep. Hensarling’s CHOICE Act. Our study makes clear that the Durbin price-control experiment has proven a failure, and that repeal is, indeed, the only responsible option.

On February 22, 2017, an all-star panel at the Heritage Foundation discussed "Reawakening the Congressional Review Act" – a statute that gives Congress sixty legislative days to disapprove a proposed federal rule (subject to presidential veto), under an expedited review process not subject to Senate filibuster. Until very recently, the CRA was believed to apply only to newly promulgated regulations. Thus, according to conventional wisdom, while the CRA might prove useful in blocking some non-cost-beneficial Obama Administration midnight regulations, it could not be invoked to attack serious regulatory agency overreach dating back many years.

Last week's panel, however, demonstrated that conventional wisdom is no match for the careful textual analysis of laws – the sort of analysis that too often is given short shrift by commentators. Applying straightforward statutory construction techniques, my Heritage colleague Paul Larkin argued persuasively that the CRA actually reaches back over 20 years to authorize congressional assessment of regulations that were not properly submitted to Congress. Paul's short February 15 article on the CRA (reprinted from The Daily Signal), intended for general public consumption, lays it all out and merits being reproduced in its entirety:

In Washington, there is a saying that regulators never met a rule they didn’t like. Federal agencies, commonly referred to these days as the “fourth branch of government,” have been binding the hands of the American people for decades with overreaching regulations.

All the while, Congress sat idly by and let these agencies assume their new legislative role. What if Congress could not only reverse this trend, but undo years of burdensome regulations dating as far back as the mid-1990s? It turns out it can, with the Congressional Review Act.

The Congressional Review Act is Congress’ most recent effort to trim the excesses of the modern administrative state. Passed into law in 1996, the Congressional Review Act allows Congress to invalidate an agency rule by passing a joint resolution of disapproval, not subject to a Senate filibuster, that the president signs into law.

Under the Congressional Review Act, Congress is given 60 legislative days to disapprove a rule and receive the president’s signature, after which the rule goes into effect. But the review act also sets forth a specific procedure for submitting new rules to Congress that executive agencies must carefully follow.

If they fail to follow these specific steps, Congress can vote to disapprove the rule even if it has long been accepted as part of the Federal Register. In other words, if the agency failed to follow its obligations under the Congressional Review Act, the 60-day legislative window never officially started, and the rule remains subject to congressional disapproval.

The legal basis for this becomes clear when we read the text of the Congressional Review Act.

According to the statute, the period that Congress has to review a rule does not commence until the later of two events: either (1) the date when an agency publishes the rule in the Federal Register, or (2) the date when the agency submits the rule to Congress.

This means that if a currently published rule was never submitted to Congress, then the nonexistent “submission” qualifies as “the later” event, and the rule remains subject to congressional review.

This places dozens of rules going back to 1996 in the congressional crosshairs.

The definition of “rule” under the Congressional Review Act is quite broad—it includes not only the “junior varsity” statutes that an agency can adopt as regulations, but also the agency’s interpretations of those laws. This is vital because federal agencies often use a wide range of documents to strong-arm regulated parties.

The Congressional Review Act is especially powerful because once Congress passes a joint resolution of disapproval and the president signs it into law, the rule is nullified and the agency cannot adopt a “substantially similar” rule absent an intervening act of Congress.

This prevents federal agencies from finding backdoor ways of re-imposing the same regulations.

The Congressional Review Act gives Congress ample room to void rules that it finds are mistaken. Congress may find it to be an indispensable tool in its efforts to rein in government overreach.

Now that Congress has a president who is favorable to deregulation, lawmakers should seize this opportunity to find some of the most egregious regulations going back to 1996 that, under the Congressional Review Act, still remain subject to congressional disapproval.

In the coming days, my colleagues will provide some specific regulations that Congress should target.
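Larkin's timing argument can be compressed into a single expression. Writing $t_{\mathrm{pub}}$ for the date a rule is published in the Federal Register and $t_{\mathrm{sub}}$ for the date it is submitted to Congress, the review window opens at

\[ t_{\mathrm{start}} = \max\left(t_{\mathrm{pub}},\, t_{\mathrm{sub}}\right), \]

so for a rule that was published but never submitted, the later of the two events has never occurred, the 60-legislative-day clock has never started, and the rule remains open to disapproval today.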

For a fuller exposition of the CRA's coverage, see Paul's February 8 Heritage Foundation Legal Memorandum, "The Reach of the Congressional Review Act." Hopefully, Congress and the Trump Administration will take advantage of this newly discovered legal weapon as they explore the most efficacious means of reducing the daunting economic burden of federal overregulation (for a subject-matter-specific exploration of the nature and size of that burden, see the most recent Heritage Foundation "Red Tape Rising" report, here).