The following post comes to us from Eric Posner, Kirkland & Ellis Distinguished Service Professor of Law and Aaron Director Research Scholar at the University of Chicago.

The incentive to take socially costly financial risks is inherent in banking: because of the interconnected nature of banking, one bank’s failure can increase the risk of failure of another bank even if they do not have a contractual relationship. If numerous banks collapse, the sudden withdrawal of credit from the economy hurts third parties who depend on loans to finance consumption and investment. The perverse incentive to take financial risk is further aggravated by underpriced government-supplied insurance and the government’s readiness to play the role of lender of last resort.

The major regulatory tool for countering these incentives is the minimum capital requirement, which ensures that banks raise funds through equity rather than debt at the margin, thus placing a greater portion of the downside of failure on shareholders. Starting in 1981, U.S. regulators issued rules that set out minimum capital-asset ratios. Over the years, the rules have become more detailed and elaborate and (nominally) stricter. However, it is clear that the ratios mandated by the rules were always too low, and that the rules were so riddled with exceptions that they could be easily evaded. Market forces alone caused most banks to maintain higher capital-asset ratios than the rules required. Insufficient capitalization of banks contributed to the financial crisis of 2007-2008.
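The ratio at the heart of these rules is simple arithmetic: equity capital divided by total assets, compared against a regulatory floor. A minimal sketch of how such a test might work, using a purely hypothetical 8% threshold and made-up balance-sheet figures (neither the threshold nor the figures come from any actual regulation):

```python
# Illustrative sketch of a minimum capital-asset ratio test.
# The 8% floor and all dollar figures below are hypothetical,
# chosen only to show the mechanics of the requirement.

def capital_asset_ratio(equity: float, total_assets: float) -> float:
    """Capital-asset ratio: equity capital divided by total assets."""
    return equity / total_assets

def meets_minimum(equity: float, total_assets: float,
                  minimum: float = 0.08) -> bool:
    """Does the bank satisfy the (hypothetical) minimum ratio?"""
    return capital_asset_ratio(equity, total_assets) >= minimum

# A hypothetical bank with $6 of equity against $100 of assets falls
# short of an 8% floor; raising two more dollars of equity cures it.
print(capital_asset_ratio(6.0, 100.0))  # 0.06
print(meets_minimum(6.0, 100.0))        # False
print(meets_minimum(8.0, 100.0))        # True
```

The point of the essay is not the arithmetic, which is trivial, but where the floor is set: a floor below what market forces already produce (as with the hypothetical bank that easily clears it) constrains almost no one.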

Why have U.S. regulators always issued inadequate capital regulations? Many familiar culprits can be identified, including the political influence of the financial sector and the intellectual ascendance of market ideology. But a major source of the problem is that banking regulators did not use cost-benefit analysis—or any sophisticated economic reasoning—to determine what capital requirements should be. Instead, they responded in an ad hoc way, gradually increasing or reducing ratios in response to financial conditions and technological change. Explanations in regulatory materials showed that regulators did not understand cost-benefit analysis and did not use it properly in the rare instances when it was discussed. A consistent theme is that regulators did not believe that new capital regulations would affect more than a handful of banks. In regulatory documents, they repeatedly assured banks that the regulations would not affect the vast majority of them. Thus, a common justification for a regulation was precisely that it would have little effect on banks.

This style of regulation can be called “norming”—choosing a regulatory standard that does not interfere with the mean or modal behavior of regulated entities, and rules out only outliers at the low end. Norming may be a reasonable regulatory strategy in some contexts—for example, when little is known about the social costs of a behavior, so that the proper government response is exploratory and experimental. But this is not the case for the financial sector. Because all banks, not just the least capitalized banks, have strong incentives to engage in risky behavior that is socially costly, norming could not have given, and did not give, banks an adequate incentive to increase capital. The norming style of regulation likely contributed to the financial crisis of 2007-2008.

This massive regulatory failure raises the question whether a different style of regulation could have prevented or mitigated the crisis. It is possible that the answer is no—that banks will always be underregulated because they have enormous political influence, and no interest group is powerful enough to oppose them. But another view is that the banking regulators failed because they lacked the intellectual resources necessary to oppose deregulatory efforts led by the financial industry.

The best approach to regulating banks is to rely on cost-benefit analysis, which would force regulators to lay out explicitly what they think the costs and benefits of a capital adequacy rule might be. On the cost side, a capital adequacy rule requires banks to switch at the margin from debt to equity. Many economists are skeptical that the social cost of debt is high; if they are right, then even strict capital adequacy rules would not create significant social costs, although banks would see them as burdensome. On the benefit side, a capital adequacy rule reduces the risk of a financial collapse and the massive costs associated with it. Estimating these costs and benefits is challenging, but progress has already been made. Some commentators have argued that cost-benefit analysis of financial regulation is impossible because of the difficulty of determining risks and valuations, but similar criticisms were made of cost-benefit analysis of environmental regulation, and many of the valuation challenges in that area have since been overcome.
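The structure of the comparison described above can be shown with a toy calculation: the expected benefit of a stricter rule is the reduction in crisis probability times the cost of a crisis, set against the annual cost the rule imposes. A minimal sketch, using purely hypothetical numbers for every input (none of these figures are estimates from the paper):

```python
# Toy cost-benefit comparison for a stricter capital adequacy rule.
# All inputs are hypothetical assumptions for illustration only: the
# rule is assumed to lower the annual probability of a systemic
# crisis at some annual compliance cost to the banking sector.

def net_benefit(crisis_prob_before: float,
                crisis_prob_after: float,
                crisis_cost: float,
                annual_compliance_cost: float) -> float:
    """Expected annual avoided crisis cost minus annual rule cost."""
    expected_benefit = (crisis_prob_before - crisis_prob_after) * crisis_cost
    return expected_benefit - annual_compliance_cost

# Hypothetical: the rule cuts annual crisis probability from 4% to 3%,
# a crisis costs $5 trillion, and compliance costs $20 billion a year.
nb = net_benefit(0.04, 0.03, 5_000_000_000_000, 20_000_000_000)
print(nb > 0)  # True: benefits exceed costs under these assumptions
```

The hard empirical work lies in estimating the inputs, not in the arithmetic; but making regulators state the inputs explicitly is precisely what cost-benefit analysis demands, and what the norming approach avoided.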

In How Do Bank Regulators Determine Capital Adequacy Requirements?, I document the explanations that banking regulators have given for the capital adequacy regulations they have issued since 1981. As I show, these explanations reveal a “norming” mentality that led to weak regulations and an undercapitalized banking sector. I argue that if banking regulators had been required to use cost-benefit analysis—as many other regulators were required to do at the time—they would likely have issued stricter regulations.