Category: HELR Blog

Introduction

The Farm Bill affects nearly every aspect of agriculture and forestry in the United States. Therefore, its next reauthorization offers an important opportunity to better manage the risks of climate change on farms, forests, and ranches by supporting resilience practices that also offer greenhouse gas (GHG) emission reductions.

Agriculture is vulnerable to the impacts of climate change, including rising temperatures, changes in rainfall and pest migration patterns, extreme weather events, and drought. In addition to being heavily affected by climate change, agriculture is also a significant contributor to climate change. Agricultural practices are responsible for about eight percent of U.S. GHG emissions.[4] Estimates of total food system emissions, which include the CO2 emissions from energy use and transportation, increase the agricultural industry’s proportion of U.S. GHG emissions to between 19 and 29 percent.[5]

Farmers and ranchers can better align their practices with their long-term interests by adopting practices that enhance resilience while also reducing GHG emissions and increasing carbon sequestration. Many of these practices improve the long-term productivity and profitability of farms. Indeed, some farmers are already adopting practices that reduce emissions or sequester carbon in the soil and in woody biomass while also improving productivity and resilience on their land.

This paper proposes a suite of practices that should be considered during the next authorization of the Farm Bill to improve on-farm efforts to adapt to and mitigate climate impacts. It is organized into four main sections. Part I provides background on the Farm Bill and the ways that the U.S. agricultural system contributes to GHG emissions. Part II provides an overview of opportunities for on-farm mitigation and adaptation. Many of the practices we recommend can reduce on-farm emissions and build a more resilient agricultural system. Part III identifies a set of metrics that we used to assess potential proposals. Lastly, Part IV summarizes how climate practices can be incorporated across titles and highlights three policy options.

I. Background

A. Agricultural Sources of GHG Emissions

Greenhouse gases trap heat in the atmosphere and contribute to increases in global temperatures. Although the greenhouse effect is a natural process, emissions since the industrial revolution have raised atmospheric greenhouse gas concentrations to levels never before recorded. Agriculture, including raising crops and animals as well as the resulting land use changes and farm equipment usage, is a source of three GHGs: methane (CH4), nitrous oxide (N2O), and carbon dioxide (CO2).[6]

Globally, emissions from food systems are responsible for nearly a third of all GHG emissions.[8] Domestically, EPA’s Inventory of U.S. Greenhouse Gas Emissions and Sinks divides agriculture-related emissions into different categories. N2O and CH4 emissions are categorized as “Agricultural” and accounted for 8.3 percent of total greenhouse gas emissions in the United States in 2014.[9] In 2014, N2O emissions were 336 million metric tons of carbon dioxide equivalent (MMT CO2 Eq.); these emissions were caused primarily by soil management practices such as the use of synthetic fertilizers, tillage, and organic soil amendments.[10] Manure management and biomass burning also contribute to N2O emissions. CH4 emissions were 238 MMT CO2 Eq. and were produced by enteric fermentation during ruminant digestion (164 MMT CO2 Eq.), manure management (61 MMT CO2 Eq.), and the wetland cultivation of rice (12 MMT CO2 Eq.).[11]
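The inventory figures above can be sanity-checked with quick arithmetic. A minimal sketch follows; the 2014 gross U.S. total of roughly 6,870 MMT CO2 Eq. is an assumption drawn from the same EPA inventory, not a figure stated above:

```python
# Consistency check of the EPA inventory figures cited above.
# All values are in million metric tons of CO2 equivalent (MMT CO2 Eq.), 2014.
n2o_agricultural = 336  # soil management, manure management, biomass burning
ch4_sources = {
    "enteric fermentation": 164,
    "manure management": 61,
    "rice cultivation": 12,
}
ch4_agricultural = sum(ch4_sources.values())  # 237; reported as 238 after rounding
us_gross_total = 6870  # assumed 2014 gross U.S. total, from the same inventory

ag_share = (n2o_agricultural + ch4_agricultural) / us_gross_total
print(f"CH4 subtotal: {ch4_agricultural} MMT CO2 Eq.")
print(f"agricultural share of U.S. emissions: {ag_share:.1%}")
```

The component sums and the resulting share (about 8.3 percent) line up with the inventory's reported "Agricultural" category, with small differences attributable to rounding.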

CO2 emissions from agriculture-related land use changes and equipment usage are accounted for in the “Land Use, Land-Use Change, and Forestry” and the “Energy” categories, respectively. Estimates of total food system emissions, which include the CO2 emissions from energy use and transportation, increase the agricultural industry’s proportion of U.S. GHG emissions to between 19 and 29%.[12]

II. Strategies for Managing Climate Risk through Mitigation and Adaptation

Agriculture contributes to the GHG emissions driving climate change, which in turn affects agricultural productivity. It is therefore appropriate to consider how climate change can be addressed across the titles of the Farm Bill. The anticipated reauthorization in 2018 can play a critical role in addressing climate change in the United States by promoting mitigation and adaptation practices on farms.

Adopting new agricultural practices can be challenging, especially for small farmers or operations without access to large amounts of capital or information about adaptation opportunities. However, doing so will not only help U.S. farmers and ranchers confront shifting seasons, more severe storm events, new pests, drought, and other challenges,[13] it will also reduce the Farm Bill’s fiscal burden on taxpayers.[14] A number of land managers are already adopting strategies that not only reduce emissions or sequester carbon in the soil, but also have the important co-benefits of improving productivity and resilience.[15]

A. Mitigation Measures

First, land managers can reduce the GHG emissions of their farming practices in a number of ways. Practices such as conservation tillage reduce soil disturbance and prevent some erosion, which can lower soil carbon loss. Precision agriculture strategies can reduce fertilizer inputs on cropland, which in turn reduces GHG emissions from fertilizer production and application.[17] Reincorporating livestock manure into cropland, as well as improved management of liquid manure using anaerobic digesters or other on-farm technology, can reduce methane emissions from livestock waste by capturing the methane rather than emitting it.[18]

Second, land managers can sequester additional carbon through on-farm practices. Soil carbon can be increased by incorporating cover crops, including legumes, into crop rotations, reducing tillage, and adopting agroforestry practices.[19] In addition, planting perennial crops or incorporating trees into farms through alley cropping, hedgerows, and riparian forest buffers can lead to long-term sequestration of carbon in woody biomass.

Finally, land managers can take steps to avoid future emissions. The most critical way to avoid new on-farm emissions is to avoid land conversion, which releases carbon that was previously sequestered in the soil and in woody biomass.

B. Adaptation Measures

Adapting to a changing climate will require farmers, foresters, and ranchers to prepare for and respond to new risks, including extreme weather events, shifts in growing seasons, and different pests and plant diseases. Figure 3 provides an overview of the range of practices that farmers can undertake to adapt to climate change.

To make farming operations more resilient, farmers can enhance soil health, which will make agricultural systems better able to withstand extreme weather, drought, and erosion due to high winds or flooding.[21] Strategies for enhancing soil health include adjusting production inputs, timing of planting and soil amendments, cover crops, tillage, new crop species, and diversified crop rotations.[22]

Farmers can also take additional steps to make their farms more resilient to other climate risks. For example, to prepare for flooding, heavy rainfall, and other risks, farmers can implement resilient farm landscapes that include buffer strips and the return of marginal cropland to native vegetation. To prepare for new pests and diseases, farmers can diversify their crop selection and alter crop rotations. To adjust to changing seasons and a warming climate, farmers can plant different crops; crop scientists can also develop more heat- and drought-resistant crop varieties. Resilience planning is also important on the community level, as rural communities can ensure that new infrastructure investments supported by the Farm Bill, such as rural water and energy systems, are resilient to climate change effects.

C. Opportunities for Complementary Mitigation and Adaptation

Importantly, many on-farm practices can help with both climate adaptation and mitigation.[24] For example, improving soil health not only mitigates climate change, it also makes farms more resilient and better able to withstand the shifting, and at times extreme, conditions of a changing climate. Efficient fertilizer application will reduce GHG emissions while enhancing soil resilience. Similarly, cover cropping, diversified crops, and other practices that stabilize the soil will reduce GHG emissions from the soil while building soil health. It is important to note that the efficiency of these on-farm practices will vary by region, impacting the ways they can and should be implemented.[25]

Mitigation and adaptation strategies for agricultural systems often require long-term planning to strengthen “climate-sensitive assets,” such as soil and water, over time and in changing conditions.[26] Developing better regionally specific agricultural climate and conservation practice adoption data is required for this long-term planning to be successful. From those baseline data, regional efforts will be critical to identify mitigation opportunities, develop strategic adaptation planning, and implement enhanced soil and livestock management practices.[27]

III. Metrics for Prioritizing Reform Proposals

As the summary above indicates, there are many actions that can promote climate change mitigation or adaptation in agriculture. In addition, changes can be made to every Title of the Farm Bill that would promote one or more of these mitigation and adaptation strategies. Given this complexity, the uncertainties associated with quantitative estimates of the mitigation potential of different strategies, and the qualitative differences between mitigation and adaptation as goals, we developed a range of qualitative metrics that we used to analyze potential reforms. In particular, we considered:

Potential magnitude of climate impact: Priority was given to proposals that had proven climate benefits, did not require significant additional research, and targeted the largest sources of agricultural GHG emissions.

Co-benefits: Priority was given to proposals that could increase resiliency or economic benefits of farms.

Equity: Priority was given to programs that could benefit small and large farms in all regions.

Scalability: Priority was given to proposals that seemed replicable and applicable to farms across the country, or that Climate Hubs could help tailor to regional differences.

Enforceability/Administrability: Priority was given to proposals that could be tied in with or build upon existing requirements or programs in the Farm Bill.

Feasibility: Feasibility considerations included the technical, economic, and political ease of implementation. Because any legislative change will need to pass Congress, political feasibility was determined to be one of the most important considerations. Accordingly, we prioritized proposals that seemed, based on stakeholder engagement, suitable for the next Farm Bill, given competing interests for funding and stakeholder sentiment towards climate action.

An analysis of these metrics is included throughout our recommendations. However, these should be considered as only a first step. While we have attempted to target the largest sources of GHG emissions, more detailed proposals will be required before there can be precise estimates of the potential for emission reductions. The USDA’s COMET-Farm, an online farm and ranch GHG accounting tool, can likely facilitate this effort.[28] Similarly, determining the economic feasibility of specific reform proposals has been difficult because of taxpayer subsidization, the uncertainty of how appropriations may be allocated, and the varying degrees of stringency that reforms could encompass (e.g. mandate vs. incentive). Finally, while previous Farm Bill reauthorizations can serve as a guide, the ongoing transitions at U.S. federal agencies engaged in Farm Bill programs will likely have impacts on the political feasibility of proposals that cannot be appropriately assessed at this time. For these reasons, we recommend that additional research measure the climate impact of proposals, outline the benefits and co-benefits for farmers and the public, articulate the administrability of the program, and gather stakeholder input and support for proposals.

IV. Pathways for Addressing Climate Change in the Farm Bill

To determine how the Farm Bill could better address climate change, we first categorized the range of mitigation and adaptation practices identified in Figures 2 and 3, above, in terms of their potential applicability to the Farm Bill. We then examined how these practices mapped onto the current titles in the Farm Bill. Finally, we assessed how the upcoming Farm Bill could better incentivize these actions across titles, with an eye toward win-win practices with both mitigation and adaptation benefits.

Figure 4 contains the range of possibilities we identified for addressing climate mitigation and adaptation by title. Fully assessing the impact of each of these policy options – and its interaction with other policies and programs – requires additional research and outreach to affected stakeholders. We discuss in more detail below a set of recommendations that best fit our metrics, indicated by bold font in this table.

Figure 4. Options for Addressing Climate Change by Farm Bill Title

All of these areas for reform have the potential to advance climate-ready agricultural practices through the Farm Bill. Many of them also have wide-ranging benefits beyond climate change mitigation or adaptation, such as enhancing on-farm productivity and using taxpayer dollars more efficiently. We elected to focus on three recommendations we judged to be particularly important based on the metrics we established in Part III.

Recommendation 1: Reform the crop insurance program to remove disincentives to climate-friendly practices and tie coverage to a new conservation compliance provision for building soil health.

Recommendation 2: Ensure the best available science and research—including the outcome of pilot programs—are incorporated into Farm Bill programs; support dissemination of downscaled climate data through USDA regional offices and land grant universities to develop agricultural climate mitigation and adaptation capacity under Title VII.

Recommendation 3: Advance manure management collection and storage methods, as well as biogas development under Title IX to mitigate GHG contributions from livestock.

Recommendation 1: Reform Crop Insurance and Conservation Compliance to Build Soil Health

Crop insurance is deeply subsidized by the federal government, and it represents the single largest federal outlay in the farm safety net.[31] On average, taxpayers cover 62 percent of crop insurance premiums.[32] The insurance companies’ losses are reinsured by USDA, and the government also reimburses their administrative and operating costs.[33] The Congressional Budget Office anticipates that this program will cost taxpayers over $40 billion from 2016 to 2020.[34]

These subsidies disproportionately benefit large farms: while only about 15 percent of farms use crop insurance, insured farms account for 70 percent of U.S. cropland.[35] Small farmers struggle to utilize crop insurance because of the high administrative burden and challenges of insuring specialty crops.[36] In addition to clear equity concerns involving access to crop insurance, this situation is problematic from a climate perspective because larger farms are more likely to grow monocultures, which are both more vulnerable to pests and extreme weather events and can degrade soil health. Indeed, just four crops—corn, cotton, soybeans, and wheat—make up about 70 percent of total acres enrolled in crop insurance.[37]

The current loss coverage policies in the crop insurance program can discourage farmers from proactively reducing their risks by taking steps to enhance soil health and resilience. Because farmers with crop insurance are protected against losses incurred from impacts likely to increase with climate change, farmers may not be properly incentivized to respond to changing conditions.[38] Some environmental organizations have even raised concerns that, in response to the transfer of risk that crop insurance provides, some farmers may be more willing to engage in unsustainable practices, such as aggressive expansion, irresponsible management, and use of marginal land.[39] In addition, farmers may make planting decisions based on the insurance program incentives rather than market-based signals.[40] In these ways, crop insurance can push farmers towards practices that pose risks to both their operations and taxpayer obligations.[41] It is therefore important that the crop insurance program better align farmers’ risk management incentives with the real and growing risks they face from climate change.

a. Incentivize or require soil health practices through crop insurance policies

One way to achieve this objective is through incentivizing or requiring farmers to undertake actions to improve soil management and promote soil health. Some specific changes to the crop insurance program that could promote these practices include:

Adjusting the length of policies to better reflect the value added from changes that improve long-term soil health.

Writing soil health requirements into insurance policies.

More generally, changes to the crop insurance program that reduce the magnitude of the subsidy offered to farmers, such as setting a dollar-per-acre cap, could reduce the moral hazard that current policies create.[42] The methodology used to set premiums could also be adjusted to be based more on the projected frequency and intensity of events such as droughts and floods rather than on backward-looking data. USDA’s Risk Management Agency (RMA) has started to incorporate climate-related risk metrics into annual rates by weighting recent loss experience more heavily, thereby more accurately reflecting the risks that growers face. However, it is important to consider future risks from climate change as well.
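The rate-setting idea described above, weighting recent loss experience more heavily, can be sketched with a simple discounted average. All of the numbers and the decay factor below are hypothetical; RMA's actual ratemaking methodology is far more involved:

```python
# Illustrative sketch of recency-weighted rating: a loss-cost average that
# discounts older years. The loss history and the decay factor are
# hypothetical; RMA's actual methodology is considerably more complex.
def weighted_loss_cost(ratios, decay=0.8):
    """Average loss-cost ratios, ordered oldest to newest, discounting older years."""
    weights = [decay ** (len(ratios) - 1 - i) for i in range(len(ratios))]
    return sum(w * r for w, r in zip(weights, ratios)) / sum(weights)

# Hypothetical loss history with losses rising in recent years.
history = [0.05, 0.06, 0.05, 0.09, 0.12]
flat = sum(history) / len(history)
weighted = weighted_loss_cost(history)
print(f"unweighted rate: {flat:.3f}  recency-weighted rate: {weighted:.3f}")
```

With losses trending upward, the recency-weighted rate sits above the flat five-year average, which is the direction of adjustment the text describes; a forward-looking rate incorporating projected climate risk would go a step further.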

Requirements of the crop insurance program that act as disincentives to climate-friendly farming practices should be updated to account for the growing climate risks farmers face. For example, RMA has guidelines in place on the termination of cover crops because of concerns that these crops will scavenge water from the commodity crops.[43] This requirement can act as a disincentive to farmers’ adoption of cover cropping, a practice that builds the soil and reduces runoff in the non-growing season.[44] The next Farm Bill could specify that no particular termination requirements apply to cover crops.

Insurance policies may also serve to incentivize some environmentally harmful practices, such as early and excess fertilizer application and cultivation of environmentally sensitive land.[45] Because early application maximizes crops’ uptake of nitrogen, it can increase yield in the short term, but it contributes to nitrous oxide emissions, unhealthy soils that become less able to fix nitrogen and must rely increasingly on fertilizer, and polluted runoff. In addition, synthetic fertilizers, which are made from non-renewable materials, including petroleum and potash, are produced at a huge energy cost.[46] Some studies have suggested that crop insurance may incentivize some farmers to convert highly erodible land or wetlands to farmland.[47] Therefore, the next Farm Bill could clarify that such practices are not required for crop insurance eligibility. This change could be complemented by an increase in the length of insurance policies, as discussed above, because insurance companies would benefit from the longer-term improvements in soil health.

b. Tie crop insurance to a new conservation compliance provision for building soil health for climate-ready agriculture

Currently, in order to qualify for crop insurance, farmers must satisfy two conservation compliance requirements, the Wetland Conservation (“Swampbuster”) and Highly Erodible Land Conservation (“Sodbuster”) provisions.[48] These provisions ensure, respectively, that farmers do not convert a wetland or plant crops on a previously converted wetland, and that they do not plant crops on highly erodible land.[49] While these current conservation requirements are beneficial in addressing some climate impacts, adding a conservation compliance requirement directly targeted at climate-related practices would improve upon them.

With 70 percent of U.S. cropland enrolled in the crop insurance program, changes in conservation compliance through the next Farm Bill or through RMA’s policies can drive significant climate benefits. Under Title II, Congress could create an additional conservation compliance requirement for climate-friendly agricultural practices, which could either be required to obtain crop insurance or could make farmers eligible for rebates. The types of on-farm practices that could mitigate risk and enhance climate resilience include more precise irrigation and fertilizer application, reduced tillage of the soil, cover cropping, altered crop rotations, and the building of buffer strips and riparian buffers. Particularly beneficial practices for building resilient soil include cover cropping, diversified crop rotations, reduced tillage, and efficient irrigation.[50]

In addition, enforcement gaps have limited the success of the existing conservation compliance requirements. To make the mechanism effective, it will be important to establish simple and effective enforcement, for example by using remote sensing, and to ensure that Natural Resources Conservation Service (NRCS) offices have sufficient resources to carry out enforcement efforts.

First, these proposals could produce significant climate benefits from increasing soil health, in terms of both mitigation and adaptation. Reform of the crop insurance and conservation titles could also help address some of the equity issues that currently exist between small and large farms. Existing USDA programs, described in the next section, could help with scalability and administrability. Finally, in terms of feasibility, while any change may be difficult, our stakeholder engagement indicated that farmers are open to programs that target soil health, given the potential economic benefits to their farms. While the actual on-farm impacts will vary based on how the program is designed and constructed, building more resilient, healthy soil can help improve environmental outcomes and decrease the risk of crop loss.[51]

Recommendation 2: Ensure Best Available Science and Research Guides Farm Bill Programs

Agricultural practices that promote climate change mitigation and adaptation, including those described above, are often regionally specific in their implementation. For many new climate-ready practices to be included in conservation compliance or crop insurance, the USDA would need to account for this regional specificity. For example, the benefits of many of the on-farm practices that improve soil health, including more precise irrigation and fertilizer application, reduced tillage of the soil, and altering crop rotations, vary by region and soil type. In some areas, no-till methods may be infeasible; farmers who try to implement no-till in these areas would likely continue to till to some degree or after a short period of time, resulting in quick reversal of the achieved carbon sequestration benefits. Furthermore, the technical specificity of choosing among these practices and correctly implementing them requires guidance at a local level.

To address these types of knowledge gaps and to provide technical assistance to states and farmers, the USDA has created a range of programs, including Climate Hubs, which were established at public land-grant universities in 2014.[52] The Hubs deliver science-based knowledge, practical information, and program support that enable farmers to engage in “climate-informed decision-making.”[53]

Increasing funding for Title VII, the Research title, in the 2018 Farm Bill could solidify and expand USDA’s ability to administer and scale climate research and outreach efforts across all regions of the country. Additionally, creating systems to collect and analyze regional data on pilot programs and to ensure best practices are adopted could assist long-term efforts to incorporate climate policies into Farm Bill programs.[54] For these reasons, the Farm Bill should provide additional funding for climate research and monitoring, especially focused on regional resilience.

Recommendation 3: Advance Manure Management and Biogas Development to Mitigate Livestock Emissions

Improving livestock management, especially manure management, is a significant opportunity for mitigating methane emissions and achieving several co-benefits for the public and farmers. There is currently very little regulation of livestock manure management. Manure is sometimes stored, uncovered, in a single collection site, which allows the methane to be released directly into the atmosphere. In addition to being a major source of GHG emissions, uncovered manure storage can cause a range of considerable environmental harms.[55]

a. Require improved manure management, including the covering of lagoons

First, the upcoming Farm Bill could address manure management collection and storage methods. Practices can be improved through actions such as allowing livestock to roam,[56] covering manure lagoons, flaring the methane produced, or producing biogas for use. Simply covering a manure lagoon results in significant decreases in methane emissions, as well as decreased odors. Flaring is the combustion of methane, which yields water and carbon dioxide. Although flaring still emits GHGs, carbon dioxide is a far less potent GHG than methane.
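The benefit of flaring can be made concrete with the combustion stoichiometry and a global warming potential. The sketch below assumes the 100-year GWP of 25 for methane used in EPA's inventory for this period (an AR4 value not stated in the text):

```python
# Why flaring helps even though it emits CO2: burning one tonne of methane
# (CH4 + 2 O2 -> CO2 + 2 H2O) produces 44/16 = 2.75 tonnes of CO2, which
# carries far less warming potential than the methane it destroys.
GWP_CH4 = 25  # 100-year global warming potential of CH4 (assumed AR4 value)

tonnes_ch4 = 1.0
vented = tonnes_ch4 * GWP_CH4        # released directly: 25 t CO2 Eq.
flared = tonnes_ch4 * (44.0 / 16.0)  # combusted to CO2: 2.75 t CO2 Eq.
reduction = 1 - flared / vented
print(f"flaring cuts CO2-equivalent emissions by about {reduction:.0%}")
```

On these assumptions, flaring eliminates roughly nine-tenths of the CO2-equivalent impact of venting the same methane, which is why even simple capture-and-flare systems are worthwhile before any biogas use is considered.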

The Farm Bill could promote these practices through either incentives or mandates in the Conservation or Crop Insurance titles. For example, the Farm Bill could mandate or incentivize farmers with more than a threshold number of cattle, swine, or poultry to cover manure lagoons and flare the produced methane in order to be eligible for crop insurance. Such a mandate would have the greatest impact at Concentrated Animal Feeding Operations (CAFOs), which may also be better able to bear the high capital costs associated with biogas production.

b. Pursue strategies to decrease methane emissions, including biogas and other on-farm renewable energy production

Second, the Energy Title could incentivize on-farm biogas production. On farms, many different substrates may be used to produce biogas, including animal excrement (from cattle, swine, poultry,[57] and horses), food waste, milling by-products, and catch crops (such as clover grass on farms without livestock).[58] Farmers can realize substantial savings from biogas production: by substituting biogas for other energy sources, by substituting digestate[59] for commercial fertilizers,[60] and by avoiding disposal and treatment of substrates (such as waste-water treatment). Farmers may also be able to sell carbon offsets.[61] In addition, farmers producing biogas can avoid some of the worst problems of animal agriculture: farmers must do something with the manure, and its storage can produce strong odors,[62] unhealthy conditions for workers and families,[63] and, in the worst scenarios, pollution through runoff.[64]

Farmers have two main options for biogas use: (1) generation of electricity for on-site use or sale to the grid; and (2) direct use of biogas locally, either on-site or nearby.[65] Using the biogas to fuel a generator that produces electricity is considered the most profitable use for most farms.[66] A further option is to upgrade the biogas to biomethane, which can be injected into the national natural gas pipeline network as a substitute for extracted natural gas.

Because farmers could benefit financially from on-farm use or the sale of biogas, the Farm Bill should continue and expand funding for the Rural Energy for America Program, which offers cost-sharing grants and loans for renewable energy improvements.[67] However, these programs are most likely to benefit large farms, because anaerobic digesters are expensive and require a large and constant supply of substrate to produce a return on investment. We therefore suggest that the Farm Bill also fund pilot programs to help small farm communities form cooperatives so that they, too, are able to utilize this technology and participate in the grant or loan program.
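The economics behind this point can be illustrated with a simple payback calculation. Every figure below is hypothetical and chosen only to show why digesters favor large operations and why cost-share support matters; real costs and revenues vary widely by herd size, region, and energy prices:

```python
# Hypothetical payback sketch for an on-farm anaerobic digester.
# All numbers are assumptions for illustration only.
capital_cost = 600_000              # digester + generator, USD (assumed)
cost_share_grant = 0.25             # REAP-style cost share fraction (assumed)
annual_energy_savings = 45_000      # electricity offset or sales, USD/yr (assumed)
annual_fertilizer_savings = 10_000  # digestate replacing fertilizer, USD/yr (assumed)
annual_o_and_m = 15_000             # operation and maintenance, USD/yr (assumed)

net_capital = capital_cost * (1 - cost_share_grant)
net_annual = annual_energy_savings + annual_fertilizer_savings - annual_o_and_m
payback_years = net_capital / net_annual
print(f"simple payback: {payback_years:.1f} years")
```

Even with a one-quarter cost share, the payback here runs over a decade; on a small farm with lower substrate volumes and smaller energy savings, it stretches further, which is the rationale for cooperative arrangements that pool substrate and capital.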

Even with the available grants and loans, farmers still take on substantial financial risk. USDA or land-grant universities should actively help communities or cooperatives with the planning and application process. Large farms or cooperatives that are unable or unwilling to operate and maintain anaerobic digesters themselves could hire a company to lease the equipment and manage the biogas production process.[68] USDA Rural Development agencies could be a valuable liaison between biogas management companies and farmers.

Given their greater contribution to climate change and other environmental harms, CAFOs could either participate in a voluntary program or be required to use anaerobic digesters. Because CAFOs are responsible for high levels of greenhouse gas emissions, and because anaerobic digesters are economically feasible for large operations, there is reason to consider the benefits that could be achieved by requiring these practices for large CAFOs in the Farm Bill.

Livestock management is a critical area for addressing climate impacts, and biogas has the potential to be a win-win for farmers willing to invest in alternative energy production.

Conclusion

The U.S. agricultural system must evolve to mitigate climate change and adapt to the effects of a changing climate. Opportunities for climate change mitigation and adaptation exist across the Farm Bill titles, from bolstering climate resilient infrastructure in the Rural Development title to incentivizing sustainable forest management in the Forestry Title. Taking action on climate measures in the next Farm Bill reauthorization will help farmers better plan for changing conditions, protect taxpayers from increasing risks, and assist the United States in meeting its global climate commitments. The next Farm Bill should incorporate climate risk management provisions, and state and local actors should consider ways to support these efforts.

[7] EPA, Overview of Greenhouse Gas Emissions [hereinafter EPA, Overview], https://perma.cc/7WS6-JXQY. The two to three percent of emissions unaccounted for are fluorinated gases, which are synthesized during industrial processes. Id.

[13] See U.S. Dep’t of Agric., USDA Agriculture Climate Change Adaptation Plan 9 (2014) [hereinafter USDA, Adaptation Plan], https://perma.cc/8SM9-5NDX; Louise Jackson & Susan Ellsworth, Scope of Agricultural Adaptation in the United States: The Need for Agricultural Adaptation, in The State of Adaptation in the United States (2012), https://perma.cc/HS57-K35T.

[14] For example, a recent report from the Office of Management and Budget and the Council of Economic Advisers estimates that the annual cost of the crop insurance program will increase by $4 billion per year in 2080 as a result of the impacts of climate change. OMB & CEA, Climate Change: The Fiscal Risks Facing the Federal Government 6 (Nov. 2016), https://perma.cc/4Y22-P85V; see also USDA, Adaptation Plan, supra note 13, at 9.

[15] U.S. Dep’t of Agric., Climate Change and Agriculture in the United States: Effects and Adaptation 126–27 (2013) [hereinafter USDA, Effects and Adaptation], https://perma.cc/QW8T-Y4RL.

[19] For a more detailed review of how carbon sequestration can be increased in agriculture, see Daniel Kane, Nat’l Sustainable Agric. Coal., Carbon Sequestration Potential on Agricultural Lands: A Review of Current Science and Available Practices (2015), https://perma.cc/R4WA-2PPK.

[25] For example, in the Central Valley of California, an adaptation plan that included integrated changes in crop mix and altered irrigation, fertilization, and tillage practices, was found to be most effective for managing climate risk. Id. Along with the USDA Climate Hubs, the following organizations have undertaken projects related to regional agricultural adaptation research and planning: California Healthy Soils Initiative; Wisconsin Initiative on Climate Change Impacts; Southeast Florida Regional Climate Change Compact; The Mid-Atlantic Water Program; U.S. Midwest Field Research Network for Climate Adaptation.

[35] U.S. Dep’t of Agric., Structure and Finances of U.S. Farms: Family Farm Report, 2014 Edition 32–33 (2014), https://perma.cc/S9YP-P6CY.

[36] Generally, the more diverse or specialized the crops and livestock a farmer produces, the harder it is to obtain insurance. These policies are not designed to support small producers; they are administratively complex and burdensome for small farmers and carry high premiums. On the one hand, if small farmers used yield-based or revenue-based insurance policies, they would need to purchase insurance for each crop, which requires producing a significant volume of each single crop to justify the paperwork and setting up a contracted purchase price from a processor. On the other hand, whole farm insurance policies are based on the average adjusted gross revenue of the farm, regardless of the variety of products the farmer grows. This type of policy is more appropriate for diversified farmers, but may still be too cumbersome for small farms to participate. See Jeff Schahczenski, Nat’l Sustainable Agric. Info. Serv., Crop Insurance Options for Specialty, Diversified, and Organic Farmers (2012), https://perma.cc/64P6-CTRC; Nat’l Sustainable Agric. Coal., Have Access Improvements to the Federal Crop Insurance Program Gone Far Enough?, NSAC’s Blog (July 28, 2016), https://perma.cc/PT37-RNNL.

[45] USDA’s Economic Research Service found that “[l]ands brought into or retained in cultivation due to these crop insurance subsidy increases are, on average, less productive, more vulnerable to erosion […] than cultivated cropland overall. Based on nutrient application data, these lands are also associated with higher levels of potential nutrient losses per acre.” USDA Economic Research Service, Report Summary: Environmental Effects of Agricultural Land Use Change (Aug. 2006); see also Daniel Sumner & Carl Zulauf, The Conservation Crossroads in Agriculture: Insight from Leading Economists. Economic and Environmental Effects of Agricultural Insurance Programs, The Council on Food, Agricultural and Resource Economics (2012).

[46] See Stephanie Ogburn, The Dark Side of Nitrogen, Grist (Feb. 5, 2010), https://perma.cc/9J6E-ZD9J (“About one percent of the world’s annual energy consumption is used to produce ammonia, most of which becomes nitrogen fertilizer.”).

[54] The existing ARS LTAR system, which conducts long-term sustainability research, could be used to inform the regional best practices communicated in outreach efforts. See Agric. Research Serv., U.S. Dep’t of Agric., Long-Term Agroecosystem Research (LTAR) Network, https://perma.cc/6XRT-FBTC.

[55] For example, manure management practices can create a public nuisance for which neighbors have little recourse. In addition, runoff from agriculture is not adequately regulated under the Clean Water Act and results in pollution to the nation’s waterways. Every year a hypoxic zone, also called a dead zone, develops where the Mississippi River dumps pollution from Midwest livestock and fertilizers into the Gulf of Mexico. See Kyle Weldon & Elizabeth Rumley, Nat’l Agric. L. Ctr., States’ Right to Farm Statutes, https://perma.cc/Y8XA-KUBR; Ada Carr, This Year’s Gulf of Mexico “Dead Zone” Will Be the Size of Connecticut, Researchers Say, Weather.com (Jun. 15, 2016), https://perma.cc/36ZZ-NKY9.

[56] Farms where cattle range freely do not release as much methane into the atmosphere because the less consolidated manure is more likely to be absorbed into the soil rather than anaerobically digested to produce methane.

[57] Using poultry manure as a substrate can be difficult because feathers and poultry litter can clog anaerobic digesters. See Donald L. Van Dyne & J. Alan Weber, Special Article, Biogas Production from Animal Manures: What Is the Potential?, Industrial Uses/IUS-4 20, 22 (Dec. 1994).

[59] Digestate is the solid that is left over after biogas has been produced. Digestate can be sold or used on farm as fertilizer. It smells better than manure, is free of harmful bacteria, and contains nitrogen in a form that is more bioavailable for crops.

[60] Forty organic farms in Germany, in a region without livestock, have found it worthwhile to cooperate in supplying and transporting clover grass up to 50 km to an anaerobic digester (AD) because the digestate provides them with a flexible organic fertilizer. See SustainGas, supra note 60, at 28. They find that the digestate leads to higher quality for their food crops. Id. “Biogas has to serve food production via improved nutrient supply,” one farmer says. Id.

[61] If farmers can show that they have reduced their methane emissions, they may be able to sell the carbon offsets in exchanges such as the California GHG cap and trade market. See Cal. Air Resources Bd., Compliance Offset Protocol, Livestock Projects: Capturing and Destroying Methane from Manure Management Systems (2014), https://perma.cc/68EF-2SB9.

[62] The odor-reducing benefits are viewed as especially desirable for poultry and swine farms.

[63] Biogas plants dispose of waste and sewage, making conditions healthier. Not only does the anaerobic digestion process remove pathogens, but because biogas production requires collecting manure at a central location, some unhygienic conditions are avoided. See Julia Bramley, et al., Tufts Department of Urban & Environmental Policy & Planning, Agricultural Biogas in the United States: A Market Assessment 122 (2011), https://perma.cc/Z4ER-S4SD.

[64] Livestock manure generated at cattle yards and dairy farms can contaminate surface and ground water through runoff. Anaerobic digestion sanitizes the manure to a large extent, decreasing the risk of water contamination. Id.

[68] This model is frequently used for wind energy production. See Agric. Research Serv., U.S. Dep’t of Agric., Wind and Sun and Farm-Based Energy Sources, Agric. Res., Aug. 2006, https://perma.cc/ZBJ9-R74Q.

The California Cap-and-Trade Program (“CAT”) is derived from the California Global Warming Solutions Act of 2006 (“Global Warming Act”), which requires the State to reduce its greenhouse gas (“GHG”) emissions to 1990 levels by 2020.[1] The California Air Resources Board (“CARB”) is the State regulatory agency responsible for administering the program.[2] In 2011, the CARB adopted cap-and-trade regulations and created the CAT to set limits on GHG emissions.[3] The first auctions for the CAT were held in 2012, and the program went into full effect on January 1, 2013.[4]

The CAT operates in two phases each year. First, a number of emission allowances are freely distributed to entities that fall under the purview of the program.[5] Second, the remaining allowances are auctioned off on a quarterly basis.[6] The free distributions are reduced annually, and eventually all the allowances will be distributed via auctions.[7] The program also permits carbon offsets to satisfy up to eight percent of an entity’s compliance obligations.[8] The ultimate objective is to create incentives for businesses to craft environmentally friendly industrial practices as the number of yearly allowances decreases over time.

The CAT also has an enormous scope, and it is the world’s second largest market-based mechanism designed to reduce GHG emissions.[9] This size makes the successful implementation of the program especially impressive. The success is due largely to a design structure that draws lessons from the shortcomings of previous cap-and-trade initiatives, such as the Regional Greenhouse Gas Initiative (“RGGI”) in the northeastern United States and the Emissions Trading System (“ETS”) in the European Union.

II. Lessons Learned from the Regional Greenhouse Gas Initiative

The CAT was not the first emissions marketplace in the United States. In 2009, the RGGI went into effect as a cap-and-trade marketplace for CO2 emissions in the following nine states: Connecticut, Delaware, Maine, Maryland, Massachusetts, New Hampshire, New York, Rhode Island, and Vermont.[10] However, the RGGI has been plagued with numerous shortcomings that have frustrated the performance of the initiative and which impart several lessons on how to more effectively design a cap-and-trade system.

A. Lesson 1: Cap-and-Trade Programs Need a Broad Scope

A key drawback of the RGGI is its limited scope. The program applies exclusively to CO2 emissions and only covers electrical power plants with the capacity to generate twenty-five or more megawatts.[11] Predictably, the results of the RGGI have been underwhelming, as only 163 facilities fall under the regulatory reach of the program.[12] Furthermore, CO2 emissions merely account for twenty percent of the GHG emissions in the nine participant states—a number that shrinks even further since the RGGI only regulates the electrical sector.[13] This narrowed scope has undermined the efficacy of the RGGI so drastically that Congress considers the program’s contribution to global GHG reductions to be “arguably negligible.”[14]

B. Lesson 2: Emission Forecasts Must Be Accurate

The second significant failing of the RGGI was that it overestimated the amount of CO2 emissions among the member states.[15] In fact, the RGGI set an initial emissions cap that was above actual emissions levels.[16] This was a gross oversight that stemmed from two key defects in the RGGI’s design.

First, the RGGI emission limits for the first cap period, which ran from 2009–2013, were based on emission estimations made in 2005.[17] Between 2005 and 2009, the amount of electricity generation in the member states decreased by thirty-six percent due to energy efficiency improvements and structural changes in energy generation portfolios.[18] Second, the RGGI distorted its emission forecasts by including all electrical power plants that had the capacity to generate twenty-five or more megawatts in its estimates.[19] Limiting the emission calculations to power plants that actually generated twenty-five or more megawatts would have produced more accurate projections.

These errors have been catastrophic for the initiative. The initial regulations had no effect on most businesses, which were already emitting below the inflated emissions cap.[20] Participation in the RGGI was therefore minimal, since many of the targeted businesses had no need to reduce emissions, purchase allowances, or generate offset credits.[21] Furthermore, because the RGGI does not limit the amount of allowances that can be “banked” and used in subsequent years, many companies have stored substantial amounts of these initial surplus allowances for future use.[22]

The administrators of the RGGI have taken extreme measures to try to remedy these miscalculations. Most notably, they implemented a “revised emissions cap,” running from 2014–2020, that slashes the emission limits by forty-five percent in an effort to match actual emission levels.[23] Such radical action would not have been necessary if the initial emissions cap had been more precise.

C. Lesson 3: Auctions Need Robust Price Floors

A final pitfall of the RGGI is its undervalued price floor for auctions. The reserve price has hovered around two dollars per allowance, despite being scheduled to increase according to the Consumer Price Index (“CPI”).[24] But the fact that auctioned allowances have been sold at prices exceeding five dollars indicates that businesses are willing to pay more.[25] The program therefore severely underestimated corporate demand for allowances and forfeited substantial potential earnings. Moreover, by greatly undervaluing the price floor, the RGGI administrators failed to protect against suboptimal years when allowance prices plummeted. A higher reserve price would have preserved the revenue generation capacity of the program, even during these off years.[26]

III. Lessons Learned from the European Union’s Emission Trading System

There are also numerous lessons to be learned from the deficiencies of the European Union’s ETS, which is the world’s largest market-based mechanism for reducing GHG emissions.

A. Lesson 1: Cap-and-Trade Programs Need Ambitious Initial Targets

At the conclusion of Phase I of the ETS, the “Learning Phase” that ran from 2005–2007, it was apparent that the initial targets for emission reductions were far too lenient.[27] Indeed, the lax regulations during Phase I only produced GHG reductions of three percent.[28] The EU was forced to compensate by crafting extreme targets for Phases II and III of the program, setting emissions goals of six percent below 2005 levels for Phase II and twenty-one percent below 2005 levels for Phase III.[29] If the EU had formulated a more ambitious target for Phase I rather than over-prioritizing the transition of members into the program, it would have avoided the need for these drastic adjustments.

B. Lesson 2: Allowances Must Be Apportioned Judiciously

Similar to the RGGI, the ETS grossly over-allocated emission allowances. In fact, ETS allowances initially exceeded the amount of actual emissions by four percent.[30] This miscalculation was devastating for Phase I of the ETS, as it enabled European businesses to emit 130 million tons more in GHGs than they had emitted prior to the implementation of the program.[31] This surplus destroyed the demand for allowances in the ETS marketplace, and auction prices fell precipitously.[32] The EU was forced to heavily reconfigure ETS allowance allocations to try to mitigate the damage caused by these initial overestimations, and it is still attempting to normalize the ETS marketplace.[33]

C. Lesson 3: Cap-and-Trade Programs Need Balanced Market Designs

The ETS has also been hamstrung by its inferior market design. Phase I of the program did not permit any allowances to be banked for future use.[34] Coupled with the initial over-allocation of allowances, this meant that most regulated entities possessed surplus allowances they had to expend by the year-end. This resulted in extreme downward price volatility at the conclusion of trading periods, as many companies attempted to dump the remainder of their emission allowances into the auctions.[35] The EU was once again forced to implement significant revisions to correct this oversight.[36] And while the ETS now permits allowances to be banked, the initial trading instability across Europe nearly destroyed the program.[37]

The EU also does not set a reserve price for ETS auctions, meaning there is no price protection for emission allowances.[38] This remains a gross oversight by the EU, as the lack of a price floor fails to account for the inevitable fluctuation of allowance prices due to changes in weather or energy price cuts. As a consequence, the ETS has lost significant revenue during periods of low auction demand where allowances have sold for pennies on the dollar, and the program will continue to be financially vulnerable until this design flaw is remedied.[39]

D. Lesson 4: Cap-and-Trade Programs Need Administrative Uniformity

Administrative inefficiencies have also plagued the ETS. The most glaring hole was the initial lack of a single registry for ETS participants.[40] Prior to 2012, each nation participating in the ETS had its own registry, which resulted in inconsistent regulation across the system.[41] The Danish registry, for example, failed to vet its registrants for two years.[42] The registry ultimately became so saturated with fraudulent companies that over ninety percent of account holders had to be deleted in 2010.[43] Even after the EU moved all participants into a single registry, the credibility lost among consumers during these initial years continues to plague the reputation of the program.

E. Lesson 5: Cap-and-Trade Programs Need Strong Cyber-Security

The final shortcoming of the ETS is that its cyber-security has been extremely vulnerable. “Phishing” has been one particularly vexing problem. The scam involves the creation and promotion of fake registries that solicit users to reveal their ETS identification codes. The “phishers” then use this information to carry out carbon trading transactions in legitimate registries. These deceptions have had severe economic ramifications, and as much as three million euros has been stolen in a single month.[44]

Hacking has been another key cyber-security issue for the ETS. Hackers have been able to infiltrate users’ computer systems and sell off all their allowances for immediate cash payments on the “spot market.”[45] Numerous companies have been crippled by this scam, and hackers have defrauded certain businesses of more than seven million euros worth of emission allowances.[46]

IV. The Success of the California Cap-and-Trade Program

When considering the numerous oversights of the RGGI and ETS programs, the success of the CAT is doubly impressive. This success is due to the balanced design of the CAT, which incorporates the strengths of the RGGI and ETS while mitigating their weaknesses.

A. Success 1: The CAT Accurately Forecasted Emissions

Both the RGGI and ETS erred by overestimating actual emission levels and allocating excessive allowances. The CARB avoided this mistake by crafting a precise allocation methodology that prevented surplus allowances from derailing the auction marketplace. Foremost, the CARB calculated California emission levels for the years immediately preceding the creation of the CAT to more accurately forecast future emissions. The CARB also narrowed the variability of its emissions estimates by only including emitters who had actually emitted 25,000 or more metric tons of CO2 or equivalents.[47] Emitters who merely had the capacity to emit beyond the 25,000 metric ton threshold were not included in the calculations. The greater accuracy of the CAT estimates was evidenced during the program’s first quarterly auction in 2012, where all twenty-three million allowances offered at the auction were purchased above the reserve price.[48]

B. Success 2: The CAT Began Ambitiously While Also Facilitating Transition

Another common error of the RGGI and ETS was that their design strategies over-prioritized transitioning members into their systems. The programs initially neglected to implement substantive emission reduction targets for fear of overwhelming participants, and they have subsequently instituted dramatic reforms to compensate. By contrast, the CARB recognized the need to balance the transition of members into the program against regulatory efficacy, lest one derail the other.

The CARB facilitated the transition of participants into the CAT by narrowing the scope of the first compliance period to only cover electrical and industrial sectors. It waited until the second compliance period to expand into the transportation and heating fuel sectors to provide companies time to adjust their business practices.[49] Yet the CARB also implemented considerable GHG reduction targets. The CARB initially set a 2020 reduction goal of seventeen percent below 2013 levels, which still eclipses the target of the RGGI.[50] Due to these ambitious benchmarks, the CAT has already produced “non-negligible” emission reductions and economic gains, with 2013 alone seeing GHG reductions of over a million and a half metric tons and statewide economic growth of two percent.[51] The CAT has benefitted greatly from such a stable infrastructure, and it remains on track to reach its ultimate emission reduction target by 2020.[52]

C. Success 3: The CAT Has a Broad Scope

The CARB also built off the mistakes of the RGGI by broadening the regulatory scope of the CAT. Because it only regulates CO2 emissions, the RGGI covers less than twenty percent of the GHG emissions generated across its nine participating states.[53] By contrast, the CAT emulates the ETS by also covering CO2 equivalents such as CH4, N2O and other fluorinated GHGs, resulting in more effective emission restrictions.[54] The CARB also recognized that the RGGI erred in solely regulating electrical power plants. Accordingly, the CARB extended CAT regulations into other sectors heavy in GHG emissions, such as industrial, transportation, and heating fuel sectors.[55] Because of this broader scope, the CAT already covers over 600 facilities in California, whereas the RGGI only reaches 163 facilities across nine states.[56] The CAT also covers more than eighty-five percent of California’s GHG emissions, which is almost four times the amount of GHG coverage under the RGGI.[57]

D. Success 4: The CAT Has a Balanced Market Design

The CAT also avoided the severe design blunders of the RGGI and ETS. Rather than undervaluing or ignoring auction price floors, the CARB instituted a strong reserve price of ten dollars in 2012, which has been set to increase each year thereafter by five percent (in addition to increases for inflation).[58] Allowances have consistently sold above these amounts, but the price floor has provided steady protection against downward price volatility during poor trading periods.[59] Moreover, the built-in mechanism for annual increases to the reserve price has ensured that the price floor continues to increase irrespective of CPI circumstances.[60]

The CAT further protects against precarious price drops by permitting allowances to be banked.[61] This avoids the price instability problems of the ETS by discouraging businesses from dumping surplus allowances into auctions at the end of trading periods. Nevertheless, the CAT imposes limits on the maximum amount of allowances that can be held by a business.[62] This circumvents the design flaw of the RGGI that allows businesses to bank an inordinate amount of allowances and eliminate any need to subsequently reduce emissions.[63]

The revenues generated by the CAT best demonstrate the success of its market design. The first auction raised more than $289 million, and the first compliance period generated $969 million in revenue for California.[64] Projections estimate that the CAT will generate two billion dollars or more per year as the program’s regulatory scope continues to scale upwards.[65]

E. Success 5: The CAT Has Strong Administrative and Security Practices

The CAT has also benefitted immensely from its efficient administration and strong security practices. Foremost, the CAT keeps a single registry for all its regulated entities, ensuring vigilant and orderly monitoring of all participants.[66] The cyber-security protocols of the CAT have been extremely successful as well.[67] To prevent hackers and phishers from infiltrating the program, CAT auctions take place over a four-hour window that is constantly supervised by state employees.[68] The bidders and supervisors remain undisclosed to the public, and all parties must surrender their electronic devices during the auction.[69] This “sealed bid” approach to the auctions has protected the CAT from the fraud and counterfeiting issues that tormented the RGGI and ETS.[70]

V. A Recent Legal Challenge: Are Cap-and-Trade Auctions Tax Programs?

Despite the success of the CAT, the program has faced serious legal obstacles. The principal challenge took place in the recent Morning Star Packing Company v. California Air Resources Board case, where the plaintiffs alleged that the auctions were unconstitutional and violated California law.[71] The chief contention was that the CAT constituted a tax on companies for emitting GHGs.[72] The plaintiffs argued that the statutory authorization of the CAT, the Global Warming Act, therefore fell under the purview of California’s Proposition 13, which requires legislators to pass by two-thirds vote “any act to increase state taxes for the purpose of increasing revenue.”[73] Because the Global Warming Act was not passed by a two-thirds vote, the plaintiffs asserted that the CARB exceeded its regulatory authority when it created the CAT.[74]

The dispositive issue in the case was whether the auctions were unconstitutional taxes or whether they were permissible regulatory fees placed on tradable commodities.[75] The Sacramento superior court ultimately upheld the CAT, concluding that emission allowances were tradable commodities in a marketplace.[76] The court considered several distinctions between taxes and regulatory fees, but the chief difference seemed to be that whereas the government sets tax prices, the market determined the auction price of the emission allowances.[77] Thus, the fact that the allowances had no value independent of the California regulatory scheme did not transform the auctions into a tax program, and the allowances remained tradable commodities.[78]

Yet the superior court ruling did not mark the end of the contentious litigation. The Morning Star decision was appealed to the California Court of Appeal in Sacramento, which affirmed the lower court judgment in a two-to-one decision.[79] In turn, the appellate ruling was appealed to the California Supreme Court, which ultimately declined to hear the case in June of 2017.[80] What should have been a resounding victory, however, was diminished by the fact that the State Supreme Court did not issue a written opinion on the program itself.[81] Nevertheless, the affirmation of the CAT provided market-based environmentalism with a new lease on life and has galvanized California policymakers and legislators.

VI. The Aftermath of Morning Star

The ramifications of Morning Star have already been substantial in California. State legislators quickly capitalized on the State Supreme Court’s dismissal of the case by voting to extend the CAT an additional ten years, through 2030.[82] The extension produced newfound confidence in environmentalism and revitalized the market economy surrounding the CAT: whereas previous quarterly auction sales had dropped sharply, the California government sold every emission permit offered in the August 2017 auction.[83]

Yet these successes have not been replicated on a national scale. This is somewhat perplexing, as the CAT provides a workable model upon which to base the creation of a federal cap-and-trade program. In particular, Congress could convincingly argue that the Morning Star case supports the notion that cap-and-trade programs deal with tradable commodities and do not constitute tax programs. Congress could therefore avoid having to rely on the Taxing and Spending Clause of the Constitution to justify the creation of an auction program and, instead, could derive its authority from the broader powers of the Commerce Clause.

The affirmation of Morning Star also provides strong persuasive reasoning for Congress to resolve the longstanding debate on whether emission allowances are “physical” (or “nonfinancial”) commodities, which are physically deliverable and consumable, or “financial” commodities that are satisfied through cash settlements.[84] Relying upon the Morning Star court’s description of allowances as being consumable and involving the physical transfer of title, Congress now has a strong basis for asserting, on the federal level, that allowances are physical commodities.[85] This would shield a federal cap-and-trade program from the administrative burdens of complying with the Commodity Exchange Act and other commercial regulations.[86]

Despite the reasoning provided by Morning Star, recent federal policy has demonstrated a marked shift away from the environmentalist approach espoused by the Obama Administration. The recent withdrawal of the Clean Power Plan, the Obama-era rule regulating greenhouse gas emissions, best evinces this change in protocol.[87] Indeed, with the Environmental Protection Agency a consistent target of President Trump’s proposed budget cuts, environmentalism on a national level has been placed in a precarious position.[88]

It remains to be seen whether this federal paradigm shift will take a toll on the CAT. It is certain, however, that the demise of the CAT would be the death knell for market-based environmentalism in the United States. Fortunately, the CAT has several contingency protocols to counteract market volatility. In particular, the CARB can hold unsold allowances off the market for at least nine months to compress the supply and force participants back to the auctions.[89] This foresight proved invaluable in the wake of the initial Morning Star appeal in 2016, during which time the May 2016 and August 2016 auctions only sold eleven percent and thirty-five percent, respectively, of the allowances offered.[90] The remedial mechanisms built into the CAT allowed administrators to re-stabilize the market, and the November 2016 auction resulted in the successful sale of eighty-nine percent of the offered allowances.[91] Nevertheless, these contingencies are merely stopgap solutions, and hesitation among market participants will likely resurface as Californian and national policy progress along their collision course. Until a clear and unified path towards environmentalism is forged across the nation, an ominous shadow will remain cast over the CAT.

VII. Conclusion

The CAT has been a landmark initiative for environmentalism in the United States. Incorporating lessons from the RGGI and ETS, the program has struck a masterful balance in its market design and has produced significant environmental and financial gains for California. The affirming decision of the California judiciary and recent expansion of the program by the California legislature have been beacons of hope for cap-and-trade. Despite these successes, the future of the CAT remains in doubt, plagued by an uncertain socio-political climate where federal support for environmentalism has recently waned. And while the CAT has withstood previous legal and economic challenges, it is undeniable that the decisive battle for market-based environmentalism across the United States has begun.

[5] Id. From 2013–2015, the program covered electrical and industrial power plants that emitted 25,000 or more metric tons of CO2 or equivalent gases per year. Since 2015, fuel distributors have also been covered.

[86]See, e.g., 7 U.S.C. § 1a(47)(B)(ii) (2012) (excluding from the definition of “swap” “any sale of a nonfinancial commodity or security for deferred shipment or delivery, so long as the transaction is intended to be physically settled”).

The history of the American west is inextricably intertwined with damming rivers.[1] Whether for navigation, irrigation, or hydroelectric power, nearly every American river has been dammed.[2] In fact, stretching back to the day the Founding Fathers signed the Declaration of Independence, determined Americans have finished an average of one large-scale dam every day.[3] Currently, there are at least 76,000 dams in this country.[4]

While these dams have vastly contributed to America’s efforts to settle the west, they have come with significant costs. Although these dams’ harms are varied,[5] one of the primary concerns among advocates in the Pacific Northwest is the dramatic impacts dams have on species of anadromous fish, particularly salmonids.[6] In the Columbia River basin, dams block salmon and steelhead migration to more than 55% of historically available spawning grounds.[7] Since many anadromous fish species in the Pacific Northwest are listed as either threatened or endangered,[8] the Endangered Species Act[9] (ESA) can be a valuable tool to induce voluntary dam removals by requiring the Federal Energy Regulatory Commission (FERC) to include costly fish passage upgrades in any relicensing proceeding.[10]

Northwest salmon advocates rejoiced in 2014 when, following a lengthy campaign from a coalition of tribal and environmental activist groups,[11] construction crews completed the largest dam-removal project in American history by removing both the Elwha and the Glines Canyon Dams.[12] Removing these dams started the process of restoring seventy miles of the Elwha River to natural flows that had not existed since construction of the dams first began in 1911.[13] Since the dams came down, the river’s ecological quality has improved at an astonishing rate.[14] In fact, salmon and steelhead populations in the Elwha River have already reached thirty-year highs.[15]

The tremendous success of freeing the Elwha cannot be overstated, but the dams required decades of activist toil to remove.[16] In contrast, removing the Little Sandy and Marmot dams from the Sandy River in Oregon was accomplished in only eight years.[17] There are certainly many core differences between these campaigns that help explain this discrepancy, but chief among these is the fact that Federal Power Act[18] (FPA) amendments incentivized the owner of the Little Sandy and Marmot dams to privately fund the removal, while the Elwha removal languished waiting on federal funding for over a decade.[19]

This Essay will discuss the statutory changes to the FERC relicensing process that have worked to improve fish passage at hydropower facilities in recent decades and will continue to fuel upgrades and dam removals in the future. Part II lays out an overview of the environmental requirements of FERC relicensing and analyzes the Bull Run Hydropower Project as an example of a successful dam removal prompted by its owner's pursuit of relicensing. Part III then reviews the relicensing schedule for several dams in Oregon and Washington to discuss how these fish passage improvements will continue occurring for the foreseeable future.

The current regulatory process will—at least marginally—improve fish passage at many hydropower facilities in the near future as older dams apply for relicensing through FERC. Privately operated hydroelectric dams can only operate under a license from FERC.[20] For older dams, the cost of installing fish passage during the FERC relicensing process can exceed the cost of removal, thereby incentivizing the dam owner to opt for removal.[21] For dams that successfully obtain a license to continue operation, the current statutory relicensing framework requires FERC to include any recommended fish passage upgrades as mandatory conditions in the license.[22] Due to new environmental statutes and regulations passed during the lifetime of the preceding license, many hydroelectric dams in the Columbia River basin are likely to require passage upgrades.[23]

FERC is in the midst of a massive relicensing period.[24] The FERC relicensing process has had a tremendous impact on fish passage in the Columbia River basin in recent history, as both Oregon and Washington were included in FERC’s list of states requiring the most dam relicenses between 2005 and 2015.[25] As discussed below, absent a congressional amendment of the FPA, the FERC relicensing process will mandate fish passage upgrades at Northwest hydroelectric facilities for decades to come.

A. The FERC Licensing Process

In 1920, Congress passed the FPA, authorizing the federal government to regulate private hydroelectric dams.[26] While older dams may have been constructed without a FERC license,[27] all dams must eventually obtain a license to continue operation.[28]

Initially, FERC only considered a dam’s power-generation potential when reviewing a license application, while ignoring the environmental impacts.[29] Then in 1986, Congress amended the FPA[30] to require FERC to include permit conditions protecting fish and wildlife.[31] Now, FERC licenses “require the construction, maintenance, and operation by a licensee at its own expense of such . . . fishways as may be prescribed by” the United States Fish & Wildlife Service or the National Oceanic and Atmospheric Administration (NOAA) Fisheries.[32] FERC cannot “modify, reject, or reclassify any prescriptions submitted by” those agencies.[33] If FERC disagrees with the fish passage conditions, FERC must either withhold the license or dispute the conditions before the relevant court of appeals.[34]

New FERC permits may last up to fifty years.[35] Due to this timeframe, FERC will spend the foreseeable future considering relicensing applications for dams whose original permits were approved with minimal environmental consideration. For instance, FERC will review relicensing applications for dams that were approved without an Environmental Impact Statement (EIS) through 2020,[36] dams that were approved without wildlife permit conditions through 2036,[37] and dams that were approved prior to Endangered Species Act protections for anadromous fish through the 2040s.[38]

When owners of these dams apply for relicensing, modern environmental and endangered species protections will likely require project owners to significantly upgrade the dams’ fish passage facilities. FERC has proven willing to attach extremely costly fish passage conditions to its relicensing decisions, which can make removal the most cost-effective next step for hydroelectric dam operators.[39] For those dams that remain standing, new FERC licenses will still likely improve fish passage because relicensing will be conditioned upon upgrading fish passage to meet modern environmental and ESA requirements.[40]

The FERC relicensing process has proven to be an effective tool in persuading operators of large hydroelectric dams to negotiate effective and efficient dam removals that are entirely funded by the dam operators. Few cases better illustrate how this process can facilitate dam removals than the Marmot and Little Sandy dams of the Bull Run Hydropower Project. The Bull Run project is the gold standard in dam removal for many reasons, including 1) it was entirely funded by the operator without predetermined cost caps;[41] and 2) the dams came out quickly, with minimal confrontation between the affected parties.[42]

Twenty-six miles east of Portland, Oregon, the Bull Run River flows through the Mt. Hood National Forest.[43] The Bull Run River drains a 102 square-mile watershed and is almost entirely fed by rain and snowmelt from Mt. Hood.[44] As the main source of water for Portland, the Bull Run watershed provides tap water for nearly one-fifth of all Oregonians.[45] Development on the Bull Run began in the 19th century,[46] and the river became an important source of both water and electricity for the surrounding area.[47]

In 1912, Portland General Electric (PGE) completed the primary stage of one of the largest developments in the watershed: the Bull Run Hydropower Project.[48] To increase the powerhouse’s capacity, PGE constructed the Little Sandy Dam to divert water from the Little Sandy River to Roslyn Lake, the reservoir behind the project’s powerhouse.[49] The dam completely diverted the Little Sandy River 1.7 miles upriver from its confluence with the Bull Run River.[50] The dam blocked salmon migration upstream and decreased flows to the remaining salmon habitat downstream.[51]

The following year, PGE completed the Marmot Dam on the Sandy River.[52] This dam diverted water from the mainstem Sandy River to the Little Sandy upstream from the Little Sandy Dam, thereby increasing the capacity at Roslyn Lake.[53] The original Marmot Dam was a wood and sediment structure.[54] Unlike the Little Sandy Dam, the Marmot Dam did not block all salmon migration because the original structure included a fish ladder.[55] In 1989, PGE replaced the original Marmot Dam with a forty-seven foot concrete dam.[56]

The Bull Run Hydropower Project’s dams and diversions decreased fish runs in the Sandy River and Bull Run watersheds to 10%–25% of their historic runs.[57] PGE, the operator of the Marmot and Little Sandy Dams, operated four hydroelectric systems that would all require FERC relicensing in the early 2000s.[58] Due to the increasing burden of maintaining century-old dams, relatively low summer flows, and modern environmental regulations,[59] PGE determined that the Bull Run Hydropower System’s costs were simply insurmountable.[60] PGE chose to voluntarily surrender its FERC license.[61] After negotiating a settlement agreement with all affected parties,[62] FERC granted PGE’s petition to surrender its license in 2004.[63] Because of the inclusive settlement process,[64] public support for the final project was high, and PGE obtained all necessary environmental permits to move forward with the dam removal in only eighteen months.[65]

On July 24, 2007, engineers began the process of removing the Marmot Dam by setting off explosives to crack the concrete face.[66] The process ended that October with the breach of a temporary diversion dam built just upstream.[67] At the time, this was the largest dam removed in the Pacific Northwest, both in terms of height and trapped sediment.[68] The Sandy River recovered much more rapidly than expected, with migrating coho salmon reported swimming past the old dam site just one day after engineers completed the removal process.[69] The Little Sandy Dam was removed the following summer.[70]

An important takeaway from the Bull Run Hydropower Project’s removal is that, under the right circumstances, environmental conditions placed on FERC relicensing approvals can act as a tremendous hammer to force dam removals. In fact, PGE decided to pursue settlement negotiations before it even received the final fish passage requirements.[71] Preliminary estimates were enough for PGE to determine that the Bull Run system would not be economical. The Bull Run removal process shows just how effectively the FERC regulatory process can trigger rapid dam removals with minimal delays and no public funding.

III. The Glut of Pending and Upcoming License Expirations Will Require FERC to Revisit Fish Passage in the Pacific Northwest for Several Decades

Because of the fifty-year lifetime of its licenses, FERC is currently in the process of relicensing the final pre–National Environmental Policy Act[72] (NEPA) hydroelectric dams.[73] Several dams in both Washington and Oregon are still operating under such licenses.[74] Although the relicensing process has proceeded slowly, one certainty is that fish passage upgrades will be a mandatory condition for almost any new FERC license. This Part discusses a few dams in both Northwest states that are scheduled for relicensing in the coming decades and provides contemporary examples of the fish passage upgrades that FERC has already required at Northwest dams in recent years.

A. Washington Dam Relicensing

FERC currently licenses fifty-five privately operated hydroelectric dams in Washington.[75] Two of these dams—Sullivan Lake and Packwood Lake—were licensed prior to the mandatory environmental review process codified in NEPA.[76] The Packwood Lake dam, for example, was last licensed in July 1960.[77]

Packwood’s initial license was set to expire in 2010, but the dam has been operating under annual interim permits while the parties work to determine what mandatory conditions will attach to the final new license.[78] As part of this relicensing process, Energy Northwest—the operator of Packwood Dam—has had to cooperate with NOAA Fisheries to determine the impact that the dam’s continued operation will have on listed species.[79] NOAA Fisheries found that three listed species were likely to be affected by the dam’s operation: Lower Columbia River Chinook, coho, and steelhead.[80] To mitigate these harms, Energy Northwest has built an exclusionary screen to keep migrating salmonids out of the channel leading to the powerhouse,[81] but more expansive requirements may be included before FERC can issue the final license.[82]

Along with the pre-NEPA dams, FERC also oversees seventeen dams that are operating under licenses issued prior to the Electric Consumers Protection Act and, as such, did not require any wildlife considerations.[83] These dams will pursue relicensing through the 2030s, a process that will inevitably mandate new fish passage conditions, thereby improving salmonid access to spawning grounds.[84]

B. Oregon Dam Relicensing

Of the twenty-five actively licensed dams in Oregon,[85] there are three dams operating under pre-NEPA licenses: the Klamath, Hell’s Canyon, and Carmen-Smith dams.[86] The greatest fish-passage improvements will occur in the Klamath River, where PacifiCorp—the dams’ owner—has agreed to remove four huge dams by 2020, opening up 570 miles of riparian habitat for returning salmon.[87] Under the agreement, PacifiCorp will provide $200 million for the removal, and the state of California will fund up to an additional $250 million by selling general obligation bonds.[88]

On top of this monumental dam removal, the Carmen-Smith dam near Eugene, Oregon also agreed to significant improvements for salmon in order to relicense.[89] The Carmen-Smith license was issued in 1959 and expired in 2008.[90] As part of its relicensing effort, the Eugene Water and Electricity Board (EWEB) entered into a settlement agreement with sixteen other parties consisting mainly of government agencies, Native American Tribes, and environmental organizations.[91] This agreement included extensive salmonid habitat enhancements and a fish passage–system upgrade.[92] However, a precipitous decline in utility prices triggered a renegotiated agreement, and the fish passage upgrade was replaced with a trap-and-haul system to transport the fish around the dam’s powerhouses.[93] The parties submitted this amended agreement to FERC in 2016.[94] However, should NOAA Fisheries find this trap-and-haul system insufficient to protect the listed species, then EWEB could still be required to install the original fish passage upgrades.[95]

In addition, FERC oversees seven additional dam licenses that were approved prior to the Electric Consumers Protection Act.[96] The last of these licenses expires in 2039.[97]

IV. Conclusion

Dam removals have become much more common in recent decades, and FERC relicensing has played a large role by requiring expensive fish passage upgrades as a mandatory condition of an extended operating license. This uptick in FERC-triggered removals reflects the fact that many of the last dams licensed without any environmental oversight have sought relicensing in the past decade. While almost all the pre-NEPA dams have been relicensed at this point, FERC relicensing will continue to trigger fish passage upgrades at facilities that were originally licensed before FERC started attaching mandatory wildlife considerations in 1986. Organizations operating dams in the Pacific Northwest that were licensed prior to these wildlife conditions will be pursuing relicensing through 2039.

In some cases—like the Little Sandy and Marmot Dams in Oregon—the economic cost of the Electric Consumers Protection Act’s fish passage requirements will exceed the benefit of continued operation and make removal the more cost-effective option. In most other cases, the new FERC license will still mandate fish passage upgrades like installing a fish ladder or implementing a trap-and-haul system. Through either dam removal or upgrades, these FERC conditions will improve fish passage at hydroelectric dams throughout the Pacific Northwest.

[1] U.S. Army Corps of Eng’rs, Water in the U.S. American West 6 (2012).

[14] Lynda V. Mapes, Elwha: Roaring Back to Life, Seattle Times (Feb. 13, 2016), http://projects.seattletimes.com/2016/elwha/ (Scientists have been “amazed at the speed of change under way in the Elwha.”).

[36] National Environmental Policy Act of 1969, 42 U.S.C. §§ 4321–4347. NEPA was signed into law in 1970. What is the National Environmental Policy Act?, Envtl. Protection Agency, https://www.epa.gov/nepa/what-national-environmental-policy-act (last visited Sept. 30, 2017).

[39] For example, FERC would have required PacifiCorp to spend over $30 million on fish passage upgrades to relicense the Condit Dam, so PacifiCorp chose to remove the dam at a cost of approximately $17 million. David H. Becker, The Challenges of Dam Removal: The History and Lessons of the Condit Dam and Potential Threats from the 2005 Federal Power Act Amendments, 36 Envtl. L. 812, 826–27 (2006).

[46] The City of Portland first diverted water from the Bull Run in 1894. Andrew Theen, From Bull Run to Mount Tabor: The History of Portland’s Open Reservoirs (Timeline), Oregonian (Dec. 17, 2014), http://www.oregonlive.com/portland/index.ssf/2014/12/from_bull_run_to_mount_tabor_t.html.

[47] Bull Run: The Town That Time Forgot, PDX Hist. (Oct. 28, 2016), http://www.pdxhistory.com/html/bull_run.html.

[48] The main powerhouse was completed in 1912. The Century-Old Bull Run Powerhouse Finds New Life, Thanks to 3 Portland Preservationists, Oregonian (Dec. 6, 2012), http://www.oregonlive.com/gresham/index.ssf/2012/12/the_century-old_bull_run_power.html.

[58] Of PGE’s four hydroelectric systems, the Bull Run project was the smallest. Julie A. Keil, Bull Run Decommissioning: Paving the Way for Hydro’s Future, Hydro Rev. (Mar. 1, 2009), http://www.hydroworld.com/articles/hr/print/volume-28/issue-2/feature-articles/dam-removal/bull-run-decommissioning-paving-the-way-for-hydrorsquos-future.html.

[59] The Bull Run system affected fish passage, temperature pollution, and river flows; several threatened fish species also migrated to the rivers. Id.

[60] This is understandable when you consider the fact that PGE would have had to upgrade two century-old dams just to continue electricity production at a single powerhouse. Id.

[62] There were a total of twenty-two parties in the settlement. Id. PGE also agreed to pay all costs for the removal in the settlement, thereby circumventing the arduous process of securing federal funding. Blumm, supra note 17, at 1070.

I. Introduction

The scope of the Clean Water Act’s jurisdiction has been controversial throughout the statute’s history. Reconciling the extent of Congress’ Commerce Clause authority with the reality of vast hydrological connections across the United States has been an unenviable task delegated to the United States Environmental Protection Agency (EPA) and the United States Army Corps of Engineers (the Corps). This post is a comprehensive, though certainly not exhaustive, examination of EPA’s and the Corps’ efforts to define the jurisdictional scope of the Clean Water Act. The issue is once again embroiled in litigation, and regulation is in the hands of an Administration seeking to depart substantially from prior policies. For that reason, I also discuss potential outcomes of the litigation and President Trump’s Executive Order.

II. History of the “Waters of the United States” Rule

In 1972, Congress amended the Federal Water Pollution Control Act to create what we know today as the Clean Water Act.[1] For the first time, federal jurisdiction based on the Commerce Clause power extended beyond traditional navigable waters, as the Act defined “navigable waters” to mean “waters of the United States, including the territorial seas.”[2] EPA and the Corps (the Agencies) share regulatory authority under the Act; however, EPA has ultimate authority to interpret the term “navigable waters.”[3]

A. The Regulatory Evolution of Waters of the United States

The first substantive definition of “waters of the United States” came from EPA’s Office of General Counsel in 1973.[4] EPA believed that removal of the word “navigable” from the definition evidenced congressional intent to regulate “pollution of waters . . . capable of affecting interstate commerce.”[5] The definition included navigable waters, tributaries of navigable waters, interstate waters, and interstate lakes, rivers, and streams used by interstate travelers for recreational purposes, for commercial fishing, or for industrial purposes.[6] EPA issued an official regulatory definition shortly thereafter, changing the final three categories of jurisdictional waters to intrastate waters used by interstate travelers for recreational purposes, for commercial fishing, or for industrial purposes.[7]

The Corps issued its regulatory definition in 1974, covering “those waters of the United States which are subject to the ebb and flow of the tide, and/or are presently, or have been in the past, or may be in the future susceptible for use for purposes of interstate or foreign commerce.”[8] In 1975, the United States District Court for the District of Columbia determined that Congress’ intent in defining “navigable waters” as “waters of the United States” was to assert “federal jurisdiction over the nation’s waters to the maximum extent permissible under the Commerce Clause of the Constitution” and that the term was “not limited to the traditional tests of navigability” as they appeared in the Corps’ definition.[9] The Corps was ordered to publish new regulations based on this interpretation.[10]

Ultimately, after some political controversy,[11] the Corps published an interim final rule aligning with EPA’s regulation.[12] Notably, the Corps definition included wetlands, intrastate waters used for agricultural production, and other waters that, on a case-by-case basis, may be determined by the Corps to “necessitate regulation for the protection of water quality” as defined in EPA’s guidelines.[13] In 1977, the Corps published its final definition distinguishing its jurisdiction under the Act from its jurisdiction under older laws such as the Rivers and Harbors Act.[14] The 1977 definition included five categories of waters including a Commerce Clause-based category: “All other waters of the United States not identified in Categories 1–3, such as isolated lakes and wetlands, intermittent streams, prairie potholes, and other waters . . . the destruction of which could affect interstate commerce.”[15]

The Commerce Clause category, once codified, was adopted by EPA in later regulations.[16] This basis for jurisdiction remained on the books until the latest attempt at defining “waters of the United States” in 2015.[17] By 1982, the Agencies had matching regulatory definitions (the 1982 Rule).[18]

B. Challenges to the 1982 Rule in the Supreme Court

Over the decades, the 1982 Rule faced repeated challenges in court. However, three Supreme Court rulings have fundamentally defined the jurisdiction of the Clean Water Act, influencing the Agencies’ interpretation of the 1982 Rule, and ultimately straining that interpretation to the point where revision was necessary.

1. Riverside Bayview

United States v. Riverside Bayview Homes, Inc.[19] (Riverside Bayview) originated as an enforcement action against defendants who commenced filling wetlands located on their property before the Corps took action on their permit application.[20] The issue before the Court was whether the defendants’ land fell within the Clean Water Act’s jurisdiction.[21]

The Court noted that the language, legislative history, and underlying policy of the Clean Water Act regarding its jurisdictional reach was ambiguous.[22] Based on this ambiguity, the Court analyzed the reasonableness of the Corps’ assertion of jurisdiction over adjacent wetlands.[23] The Court determined:

In view of the breadth of federal regulatory authority contemplated by the Act itself and the inherent difficulties of defining precise bounds to regulable waters, the Corps’ ecological judgment about the relationship between waters and their adjacent wetlands provides an adequate basis for a legal judgment that adjacent wetlands may be defined as waters under the Act.[24]

In deferring to the Corps, the Court upheld the 1982 Rule as permissible under the Clean Water Act.

2. SWANCC

In Solid Waste Agency of Northern Cook County v. United States Army Corps of Engineers[25] (SWANCC), the petitioner was a municipal corporation seeking to develop a parcel of real estate for use as a balefill (a type of landfill).[26] Based on a finding that migratory birds utilized gravel pits on the parcel, the Corps asserted jurisdiction, and denied the petitioner’s applications for a section 404 permit.[27]

The controversy in this case arose from language in the preamble to a Federal Register publication by the Corps suggesting that “other waters” as defined in the 1982 Rule included waters utilized by migratory birds.[28]

Distinguishing this case from Riverside Bayview, the Court planted the seed of the now-familiar “significant nexus” standard.

It was the significant nexus between the wetlands and “navigable waters” that informed our reading of the CWA in Riverside Bayview Homes. Indeed, we did not “express any opinion” on the “question of the authority of the Corps to regulate discharges of fill material into wetlands that are not adjacent to bodies of open water . . . .” In order to rule for respondents here, we would have to hold that the jurisdiction of the Corps extends to ponds that are not adjacent to open water. But we conclude that the text of the statute will not allow this.[29]

This statement arguably eliminated the entire category of “other waters” from the jurisdictional scope of the Clean Water Act. A narrower interpretation of the holding focuses on the Migratory Bird Rule. The Court chose to read the Clean Water Act “to avoid the significant constitutional and federalism questions raised by respondent’s interpretation,” meaning the Migratory Bird Rule, and therefore gave the Corps no deference.[30] The Court held that the “other waters” provision “as clarified and applied to petitioner’s balefill site pursuant to the ‘Migratory Bird Rule’ exceeds the authority granted to respondents under § 404(a) of the CWA.”[31]

The Corps interpreted this holding narrowly by issuing guidance advising regulators to no longer assert jurisdiction based on the presence of migratory birds, but to “consult legal counsel” if a water body in question might be connected with interstate commerce.[32]

3. Rapanos

In 2006, the Supreme Court issued its decision in Rapanos v. United States.[33] This decision vacated and remanded two decisions from the United States Court of Appeals for the Sixth Circuit upholding the Corps’ assertion of jurisdiction based on a “significant nexus” standard;[34] however, the Court fractured and no majority opinion emerged.[35] Justice Scalia authored the plurality opinion, joined by Chief Justice Roberts and Justices Thomas and Alito.[36] Chief Justice Roberts and Justice Kennedy wrote concurring opinions.[37] Justice Stevens authored the dissenting opinion, joined by Justices Souter, Ginsburg, and Breyer.[38]

In this 4–1–4 split, five justices agreed that the lower court decisions should be vacated.[39] Four justices agreed that the lower court decisions should be affirmed.[40] Eight justices agreed that Scalia’s test would confer jurisdiction.[41] Five justices agreed that Kennedy’s test would confer jurisdiction.[42] Because both tests were approved by a majority of the justices, a definitive test for determining the appropriate connection between traditional navigable waters and other hydrological features once again proved elusive.

i. Justice Scalia’s Plurality Opinion

The plurality simplified the jurisdictional inquiry by focusing on the word “waters,” which appears in both sections 502(7) and 502(12).[43] The plurality examined a dictionary definition of “waters” and concluded that based “[o]n this definition, ‘the waters of the United States’ include only relatively permanent, standing or flowing bodies of water.”[44]

The plurality noted that the Court in Riverside Bayview found the line between waters of the United States and dry land to be ambiguous, and therefore deferred to the Corps’ determination.[45] Without pointing to any particular language from SWANCC, the plurality stated that in SWANCC the Court rejected the notion that ecological considerations can provide an independent basis for jurisdiction.[46] Based on this assumption, the plurality added a second requirement to its jurisdictional test: “[O]nly those wetlands with a continuous surface connection to bodies that are ‘waters of the United States’ in their own right, so that there is no clear demarcation between ‘waters’ and wetlands, are ‘adjacent to’ such waters and covered by the Act.”[47]

The plurality’s test therefore requires a determination that 1) the “adjacent” water is relatively permanent, and 2) that there is a continuous surface connection to the “adjacent” water.[48]

ii. Justice Kennedy’s Concurrence

In his concurring opinion, Justice Kennedy’s view of the “significant nexus” standard suggests that it is more than an indicator of “adjacency”—he found that Riverside Bayview stood for the proposition that “the connection between a nonnavigable water or wetland and a navigable water may be so close, or potentially so close, that the Corps may deem the water or wetland a ‘navigable water’ under the Act.”[49] Justice Kennedy characterized SWANCC as standing for the inverse: if there is “little or no connection” between a nonnavigable water and a traditional navigable water, then that water is not jurisdictional.[50]

His concurrence discusses how a “significant nexus” may be established: “[W]etlands possess the requisite nexus, and thus come within the statutory phrase ‘navigable waters,’ if the wetlands, either alone or in combination with similarly situated lands in the region, significantly affect the chemical, physical, and biological integrity of other covered waters more readily understood as ‘navigable.’”[51]

iii. Aftermath of Rapanos

Because there is no single “logical subset” from which a clear rule can be divined, courts have disagreed on how to apply the law. Nonetheless, most courts agreed that a water was jurisdictional under the Act at least where Justice Kennedy’s significant nexus test was satisfied, and no court has held that a water is jurisdictional only if it meets the plurality’s “continuous surface connection” requirement.[52] In 2008, the Agencies issued a guidance document instructing regulators as to what waters were now considered jurisdictional considering the Supreme Court’s opinion.[53]

III. The Current Status of the 2015 Clean Water Rule

Accepting Justice Kennedy’s invitation to clarify CWA jurisdiction through a new rulemaking,[54] the Agencies promulgated the final Clean Water Rule (2015 Rule) on June 29, 2015.[55] Several states, industry groups, and environmental stakeholders challenged the 2015 Rule on the day of promulgation.[56] One day before the effective date, a district court judge in North Dakota granted a preliminary injunction in favor of state petitioners.[57] Meanwhile, the Agencies sought to transfer nine district court cases for centralized pretrial proceedings.[58] The United States Judicial Panel on Multidistrict Litigation denied the government’s motion based on a lack of discovery or questions of fact.[59] By the end of 2015, over one hundred parties had filed twenty-three petitions for review in the courts of appeals, and almost one hundred parties had filed seventeen district court complaints.[60]

A. The Sixth Circuit’s Jurisdictional Ruling

Many petitions originating in the courts of appeals were consolidated in the Sixth Circuit.[61] Determining that state petitioners had demonstrated a substantial possibility of success on the merits, the court issued a nationwide stay of the Clean Water Rule which remains in place today.[62] Several petitioners then moved to dismiss their own petitions due to lack of jurisdiction.[63] Before a panel of judges, petitioners and intervenors argued that the Clean Water Act’s judicial review provision, section 509(b)(1), should be read narrowly to exclude the 2015 Rule from its scope.[64] The federal defendants, on the other hand, argued that either sections 509(b)(1)(E) or 509(b)(1)(F) could be used to invoke court of appeals jurisdiction.[65]

Judge McKeague, delivering the opinion of the court, agreed with federal defendants that section 509(b)(1)(E) applied because the 2015 Rule “indirectly produce[d] various limitations on point-source operators and permit issuing authorities.”[66] Furthermore, section 509(b)(1)(F) applied as well, since the extension of jurisdiction found in the 2015 Rule “indisputably expand[ed] regulatory authority and impact[ed] the granting and denial of permits in fundamental ways.”[67] Judge Griffin, concurring in the judgment, did so only because of circuit precedent.[68] Therefore, petitioners’ and intervenors’ motions to dismiss were denied, and the Sixth Circuit retained jurisdiction based on section 509(b)(1)(F).[69]

B. The Supreme Court Case

Sixth Circuit intervenor National Association of Manufacturers petitioned the Supreme Court for a writ of certiorari in September 2016.[70] The Court granted the petition on the following issue:

[W]hether the Sixth Circuit erred when it held that it has jurisdiction under 33 U.S.C. § 1369(b)(1)(F) to decide petitions to review the waters of the United States rule, even though the rule does not “issu[e] or den[y] any permit” but instead defines the waters that fall within Clean Water Act jurisdiction.[71]

At the time of this writing, opening briefs are due to be filed on April 27, 2017.[72]

IV. President Trump’s Executive Order and the Future of Rulemaking and Litigation

The election of Donald Trump undoubtedly ushered in the beginnings of a seismic shift in federal environmental policy. On February 28, 2017, President Trump signed an Executive Order directing the Agencies to review the 2015 Rule for consistency with the following policy: “It is in the national interest to ensure that the Nation’s navigable waters are kept free from pollution, while at the same time promoting economic growth, minimizing regulatory uncertainty, and showing due regard for the roles of the Congress and the States under the Constitution.”[73] The Agencies are next directed to “publish for notice and comment a proposed rule rescinding or revising the rule, as appropriate and consistent with law.”[74] Finally, the Order mandates that the Agencies “shall consider interpreting the term ‘navigable waters,’ . . . in a manner consistent with the opinion of Justice Antonin Scalia” in Rapanos.[75] A rule defining “waters of the United States” in accordance with this Order would represent a significant and unprecedented narrowing of Clean Water Act jurisdiction. Notably, this Executive Order has no immediate regulatory effect. However, for the remainder of this post, the discussion will assume that the Agencies share the interests of the State petitioners, and will seek a litigation strategy leading to the collapse of the 2015 Rule.

As a result of the Executive Order, the federal respondents in National Ass’n of Manufacturers v. United States Department of Defense sought to hold the Supreme Court briefing schedule in abeyance. This motion was denied, however, after facing opposition from several parties.[76]

A. Potential Outcomes of Current Litigation

Given the federal respondents’ failure to convince the Court to hold briefing in abeyance, it is likely that the Court will decide the jurisdictional issue before the 2015 Rule’s revision or rescission. Once litigation returns to the Sixth Circuit or the district courts (depending on the ruling), it is unclear whether a court will decide the merits of the 2015 Rule. If the Agencies take regulatory action either before or shortly after the litigation becomes active again, those who petitioned for review of the 2015 Rule may find their petitions mooted.

A two-part test determines mootness: a case is moot if “(1) it can be said with assurance that there is no reasonable expectation that the alleged violation will recur, and (2) interim relief or events have completely and irrevocably eradicated the effects of the alleged violation.”[77] “When both conditions are satisfied it may be said that the case is moot because neither party has a legally cognizable interest in the final determination of the underlying questions of fact and law.”[78] Undoubtedly, the Agencies’ rescission or revision of the rule would qualify as “interim relief or events,” but it is an open question whether a rescission would leave “no reasonable expectation that the alleged violation will recur,” or, in the event of a revision, whether the revision would have “completely and irrevocably eradicated the effects of the alleged violation.”[79] Additionally, an exception to mootness occurs when a petitioner demonstrates that: “(1) the challenged action was in its duration too short to be fully litigated prior to its cessation or expiration, and (2) there [is] a reasonable expectation that the same complaining party [will] be subjected to the same action again.”[80]

As an example, if the Agencies rescind the 2015 Rule without immediately replacing it with a new rule, the “capable of repetition yet evading review” exception may apply regarding certain claims. In another scenario, if a revision of the 2015 Rule presents many of the same issues, then the effects of the alleged violations are not completely and irrevocably eradicated and litigation may continue. On the other hand, if the Agencies are not prepared to rescind or revise the 2015 Rule upon resumption of litigation, and they attempt to argue that the case is moot by virtue of the Executive Order’s expression of intent alone, it is unlikely that such an argument will meet the Davis test for mootness.

Precedent from the United States Court of Appeals for the Ninth Circuit suggests that if the Agencies cannot immediately revise or rescind the rule, they may have another option—a consent decree. In Turtle Island Restoration Network v. United States Department of Commerce[81] (Turtle Island), the court held that no Administrative Procedure Act[82] (APA) rulemaking procedure was necessary when environmental plaintiffs and federal defendants entered into a consent decree vacating a portion of a final rule, temporarily reinstating the previous rule, and remanding the rule to the agency to reconsider a new rule.[83] Industry defendant-intervenors appealed the consent decree, citing the United States Court of Appeals for the District of Columbia Circuit’s decision in Consumer Energy Council of America v. Federal Energy Regulatory Commission[84] (Consumer Energy) for the proposition that notice and comment is required prior to repeal.[85] The Ninth Circuit distinguished Consumer Energy by finding the concerns motivating the agency in that case to be different from those raised during the original rulemaking and by noting that no party in that case had suggested repeal as a remedy.[86] In Turtle Island, by contrast, the environmental plaintiffs sought repeal for reasons that they had raised during the initial rulemaking.[87] The court also noted that no substantive changes were made to the rule—repealing the provision at issue simply reinstated the prior rule.[88]

In a more recent Ninth Circuit opinion, the court simultaneously reaffirmed its holding in Turtle Island while limiting the types of consent decrees that may alter a regulation:

It follows that where a consent decree does promulgate a new substantive rule, or where the changes wrought by the decree are permanent rather than temporary, the decree may run afoul of statutory rulemaking procedures even though it is in form a “judicial act.” . . . We therefore hold that a district court abuses its discretion when it enters a consent decree that permanently and substantially amends an agency rule that would have otherwise been subject to statutory rulemaking procedures.[89]

Together, Turtle Island and Conservation Northwest v. Sherman create the following positive rule: if a consent decree repeals or vacates an agency action, the legal effect is to restore the status quo, and if this restoration is temporary and subject to further agency action—the substance of which remains within the agency’s discretion—then the consent decree may be upheld.[90]

Entering a consent decree may be an attractive option if federal defendants see it as the quickest escape route from litigation. However, petitioners who prefer the 2015 Rule to the prior rule will likely object to the decree. If objection is unsuccessful, the court may consider whether all petitioners’ claims have been mooted by the terms of the consent decree.

B. Possible Regulatory Actions

Shortly following the issuance of President Trump’s Executive Order, the Agencies published a notice of their intent to engage in a rulemaking consistent with that Order.[91]

It is unlikely that they will accomplish this task quickly. Considering the nine-year gap between Rapanos and the final 2015 Rule, the prospect of a final rule being promulgated within the current administration is questionable. In the meantime, the Agencies may attempt to use a guidance document similar to the 2008 guidance issued after Rapanos. A guidance document based on the plurality opinion in Rapanos would be less susceptible to challenge if implemented while the 1982 Rule is in force, as opposed to the 2015 Rule, which relies heavily on the “significant nexus” test. However, any guidance document that substantively changes the legal meaning of a regulation may be set aside by a court if challenged.[92]

Therefore, if the federal defendants are unable to dispose of the 2015 Clean Water Rule via litigation, they may attempt to revoke the 2015 Rule without immediately replacing it. Such a revocation may be subject to challenges based on procedure, substance, or both. If the Agencies fail to utilize APA notice and comment procedures in revoking the 2015 Rule, a court could invalidate the revocation.

In Consumer Energy, federal defendants argued that their revocation of the rule at issue rendered the case moot.[93] However, the court held that the Federal Energy Regulatory Commission’s revocation order was invalid because the agency did not follow APA rulemaking procedures: “The value of notice and comment prior to repeal of a final rule is that it ensures that an agency will not undo all that it accomplished through its rulemaking without giving all parties an opportunity to comment on the wisdom of repeal.”[94] Substantive challenges to a revocation, rescission, or revision are discussed in the following section.

C. Challenging a New Rule

The oft-cited Motor Vehicle Manufacturers Ass’n of the United States v. State Farm Mutual Automobile Insurance Co.[95] stands for the proposition that “an agency changing its course by rescinding a rule is obligated to supply a reasoned analysis for the change beyond that which may be required when an agency does not act in the first instance.”[96] This case arose when the Reagan Administration, in a nationwide deregulation effort, rescinded a rule requiring automakers to install either airbags or passive restraints.[97]

However, in Federal Communications Commission v. Fox Television Stations, Inc.[98] (Fox Television), the Court clarified its ruling: “[O]ur opinion in State Farm neither held nor implied that every agency action representing a policy change must be justified by reasons more substantial than those required to adopt a policy in the first instance.”[99] The Court explained that the distinction being made in State Farm was between a § 706(1) review of a failure to act and a § 706(2)(A) review of agency action, not initial and subsequent agency action as in a rulemaking and rescission.[100]

Describing the support required for a change, the Fox Television Court highlighted that an agency must “[1)] display awareness that it is changing position,” and it is sufficient if the record shows that “[2)] the new policy is permissible under the statute, [3)] that there are good reasons for it, and [4)] that the agency believes [the new policy] to be better.”[101] The fourth element is similar to a free space in bingo—the agency’s change in policy “adequately indicates” its belief that the new policy is better.[102]

Fox Television also holds that an agency must “provide a more detailed justification than what would suffice for a new policy created on a blank slate” when 1) “its new policy rests upon factual findings that contradict those which underlay its prior policy” or 2) “when its prior policy has engendered serious reliance interests that must be taken into account.”[103] In these scenarios, “a reasoned explanation is needed for disregarding facts and circumstances that underlay or were engendered by the prior policy.”[104]

Although it seems relatively easy for an agency to justify a change in policy based on the Fox Television standard, a question remains regarding what role the extensive scientific record used to justify the 2015 Rule may play in challenging a new rule.[105]

Ninth Circuit precedent emerging from litigation over the “Roadless Rule” provides useful guidance for how a court may handle a changed policy. In a case regarding the “Tongass Exemption” to the Roadless Rule, the court held that the United States Department of Agriculture’s 2003 Record of Decision (ROD) adopting the Tongass Exemption (which was based on the 2001 Roadless Rule Final Environmental Impact Statement) was arbitrary and capricious based on a Fox Television analysis of the change in policy.[106] In the 2001 ROD for the Roadless Rule, the agency found that “the long-term ecological benefits to the nation of conserving these inventoried roadless areas outweigh the potential economic loss to southeast Alaska communities from application of the Roadless Rule.”[107] In the 2003 ROD for the Tongass Exemption, however, the agency reversed its policy based on “concern about economic and social hardship that application of the roadless rule’s prohibitions would cause in communities throughout Southeast Alaska.”[108]

The Ninth Circuit found that the agency “made factual findings directly contrary to the 2001 ROD and expressly relied on those findings to justify the policy change.”[109] The court was careful to note that agencies are entitled to give more weight to certain concerns, but may not “simply discard prior factual findings without a reasoned explanation.”[110] The finding at issue was the necessity of the Roadless Rule to maintain important roadless area values.[111] The 2001 ROD made this finding, but the 2003 ROD found that the Roadless Rule was unnecessary because roadless values were protected by the Tongass Forest Plan.[112] The agency concluded that the sufficiency of the Forest Plan struck a new balance in its analysis, causing socioeconomic concerns to outweigh the benefits of the Roadless Rule’s protections.[113] The court found that the 2003 ROD violated the APA because the agency provided no reasoned explanation for “why an action that it found posed a prohibitive risk to the Tongass environment only two years before now poses merely a ‘minor’ one.”[114]

However, the Agencies may reevaluate their policy choices based on the facts available to them if the statute permits the resulting rule. In National Ass’n of Home Builders v. United States Environmental Protection Agency,[115] the D.C. Circuit rejected petitioners’ argument that the amendment of a rule was invalid because the promulgating agency merely revisited old arguments rather than basing its amendment on new facts or circumstances.[116] The court held that “a reevaluation of which policy would be better in light of the facts” is permissible, as Fox Television made clear that “this kind of reevaluation is well within an agency’s discretion.”[117]

The court also rejected petitioners’ contention that “because the [r]ule eliminates a provision that was consistent with congressional intent, the Court should not defer to EPA in making such a decision.”[118] The court held that “the fact that the original [rule] was consistent with congressional intent is irrelevant as long as the amended rule is also permissible under the statute.”[119] However, the court also emphasized that EPA found the rule’s amendment to promote “to a greater extent, the statutory directive.”[120] The court noted that “it was hardly arbitrary or capricious for EPA to issue an amended rule it reasonably believed would be more reliable, more effective, and safer than the original rule.”[121]

Again, assuming that any new rule will be promulgated pursuant to the Executive Order, the rule will likely follow the standard enunciated by the late Justice Scalia in Rapanos. A petitioner may have an uphill battle in arguing that such a policy is not permitted by the Clean Water Act—not only is the plain language of the Act uncommonly vague, but an interpretation crafted by a Supreme Court justice and accepted as sufficient to establish jurisdiction by a majority of the Court is uncommonly valid. However, those who wish to see a more protective rule may be able to argue congressional intent despite the D.C. Circuit’s holding in National Ass’n of Home Builders—since EPA, in that case, believed its new regulation increased conformity with the purpose of the statute rather than deviated from it.[122]

V. Conclusion

Although it is difficult to see clearly into the future, those who have studied or practiced administrative law know that APA notice and comment rulemaking requires substantial resources. To develop an administrative record supporting a rule based on Justice Scalia’s plurality in Rapanos may take years. Once the rule is final, it will face opposition from many directions. In the case of Clean Water Act jurisdiction, a change in the regulatory landscape affects a wide swath of interests—state sovereignty, landowner rights, industry flexibility, human health, conservation, and recreation. Considering that even a guidance conforming with Justice Scalia’s test could be subject to judicial review, there seems to be no tool that the Trump Administration can utilize to rapidly change the regulatory landscape of the Clean Water Act. In the words of former President Barack Obama, “the federal government and our democracy is not a speedboat, it’s an ocean liner.”[123]

* Kacy Manahan is a clinical student at Earthrise Law Center at Lewis & Clark Law School where she assists in representing Respondent Waterkeeper Alliance in National Ass’n of Manufacturers v. United States Department of Defense before the Supreme Court of the United States. She is the 2017–2018 Symposium Editor for Environmental Law, and can be reached at kmanahan@lclark.edu.

[53] U.S. Envtl. Prot. Agency & U.S. Dep’t of the Army, Clean Water Act Jurisdiction Following the U.S. Supreme Court’s Decision in Rapanos v. United States & Carabell v. United States (Dec. 2, 2008), https://perma.cc/XE8Q-UJ53.

[105] See generally Office of Research & Dev., U.S. Envtl. Prot. Agency, EPA/600/R-14/475F, Connectivity of Streams and Wetlands to Downstream Waters: A Review and Synthesis of the Scientific Evidence (2015), https://perma.cc/7V2L-ZLQ8.

Coastal municipalities are struggling to address the uncertain future risks created by sea level rise. Conventional models of ex ante protection and ex post relief are both too costly and often insufficient to mitigate the impacts of climate change. Sea level derivative instruments provide an alternative model for financing adaptation projects, allowing municipalities to transfer the risk of climatic uncertainty to parties willing and able to take a counter position. Two sea level derivative instruments—a sea level default swap and a nuisance flooding futures contract—are proposed. They are designed to reduce the risk that a given sea-level rise adaptation project will be either under- or over-protective while providing additional capital for project development. Due to the lack of a ready counterparty with an obvious need to hedge, markets for these sea level derivatives may be susceptible to excessive speculation. However, trading would be subject to regulation by the Commodity Futures Trading Commission—providing oversight while facilitating adaptation funding for coastal municipalities.

I. Introduction

Regardless of public debate on the anthropogenicity of climate change, the existence of sea level rise is not in dispute. Although models have predicted that the U.S. economy will receive a net benefit as global temperatures rise,[1] the costs of sea-level fluctuation will be borne disproportionately by coastal governments facing local impacts. Nuisance flooding—tidal flooding of private property and public infrastructure caused by an exceedance of historic tide levels—has become commonplace in coastal municipalities and is projected to increase in frequency.[2] In addition, there is a strong correlation between sea level fluctuation and storm-related property damage. Even modest sea level rise magnifies storm impacts. Climate prediction models project a range of future increases in coastal storm surge frequency and intensity.[3] This is likely to hit coastal communities hard.

The costs of relief and repair can easily exceed municipal or even state government capacity because flooding often involves widespread correlated loss. Currently, ex post flood response spending falls under the Disaster Relief Fund overseen by the Federal Emergency Management Agency (“FEMA”).[4] Expenditures from the Disaster Relief Fund have been increasing as weather events causing more than one billion dollars in damage become more frequent.[5] In addition, the National Flood Insurance Program (“NFIP”)—a public-private partnership that federally backs insurers willing to cover flood damages—has struggled under its massive debt to the U.S. Treasury in recent years.[6] The inability of NFIP to charge actuarial risk premiums to insureds and the increasing magnitude of sea level related weather damages have pushed the program to the brink of failure.[7] NFIP is currently indebted to the U.S. Treasury for more than twenty-three billion dollars and has net exposure in excess of 1.2 trillion dollars.[8] In addition, problems with claim settlement following Hurricanes Katrina and Sandy have eroded public confidence in NFIP.[9]

Adaptation projects that mitigate flood risk are economically preferable to ex post disaster relief through FEMA and/or NFIP. One widely accepted estimate approximates that each dollar spent on hazard reduction saves society an average of four relief dollars.[10] Again, the challenge is local. Municipalities have been slowed by the political drag of raising large sums of money for projects that deal with uncertain future risks. For example, the City of Miami Beach, Florida, has developed a storm water management plan to raise streets, install water pump systems, and build up sea walls to mitigate the impacts of nuisance flooding.[11] The plan comes with a price tag of 400 million dollars and a lot of uncertainty, two factors that have generated political pushback.[12] Although the Miami Beach plan builds in a 10 million dollar adjustment for uncertain increases over predicted absolute sea level, the adjustment could be an unnecessary over-protection or it could be insufficient to cope with sea level rise within the 20-year lifespan of the project.[13] Its wisdom remains uncertain.

Federal funds are available to assist States with funding adaptation projects, but these expenditures often fall short of providing all of the support needed to implement desired adaptation strategies for coping with more extreme sea level outcomes.[14] First, these funds often come out of the same bucket used to fund disaster relief in high-risk areas, relief criticized as an unwelcome drain on federal sea-level adaptation support.[15] Second, they often provide too little. For example, contrast the Obama Administration’s pledge of 100 million dollars to assist with implementation of a sea level adaptation plan in Norfolk, Virginia, with the city’s 1.2 billion dollar sea level adaptation “wish list.”[16]

Coastal municipalities, as well as others along tidal waterways, are stuck. They need to be able to access capital for adaptation projects but avoid spending unnecessary funds on preempting sea level predictions that never materialize. Predicting the impact of climate change on sea level is only possible within certain ranges due to the complexity of climate variables—inevitably, some adaptation projects will be over-protective and some will be under-protective.[17] Derivative contracts that “commoditize” sea levels[18] would provide a financing vehicle for adaptation projects that effectively shifts this uncertainty risk onto willing counterparties. This approach reduces reliance on federal grants, avoids overreliance on ex post flood relief, and limits the political pushback that may attend large-scale bond-financed projects. To illustrate how this might work, we now turn to derivative contract design.

II. Two Sea-Level Derivatives Proposed

The idea for sea level derivatives has been around for several years. Writing shortly before the enactment of the Dodd-Frank Act, Daniel Bloch and others proposed a climate default swap wherein counterparties receive a payout in the event sea level reaches a pre-defined trigger level.[19] Bloch’s proposal is an adaptation of a weather swap of the sort that has been traded since the late 1990s.[20] In these swaps, parties with interests subject to climatic uncertainty trade financial positions with counterparties, so that the costs of the exchange to the party seeking to shed risks are paid for by the economic benefits of a favorable climatic outcome. In the sea level context, a coastal interest could sell the risk of higher than anticipated relative or absolute mean sea levels (“MSL”)[21] by taking premiums from the counterparty under an agreement to pay the counterparty in the event an agreed upon MSL is reached. Bloch’s climate default swap, and related climate default “bond,” would allow municipalities to avoid under-protective adaptive measures by transferring the risk that a given proposal is over-protective to willing private counterparties.[22] Ideally, such counterparties would have a reciprocal risk so that they would be hedging against a lower than expected sea level rise.[23]

As a simplified example of how a Bloch style sea level default swap would work, consider a municipality seeking to build a sea wall. The municipality knows with high certainty that it needs to build the first six feet because storm surges will frequently reach this height under the most conservative models. But beyond this, sea level rise projection variability makes it less clear that the municipality should build any higher. The municipality bears the risk that each extra inch of sea wall could be an unnecessary expense. So it executes a swap. The counterparty pays a premium to the municipality up front, financing the additional inches on the sea wall, in exchange for a payout in the event MSL reaches a certain level within a specified period. Economically, the costs of the extra inches (plus interest) should equal the economic value of the flood avoided. So if the MSL trigger is reached and the municipality pays out, that cost is offset by the gain of staying dry created by those extra inches. If the MSL trigger is not reached, the municipality does not pay out and the counterparty takes a loss. Under a “bond” type structure for a sea level default swap, a storm surge trigger could be used and reset at the beginning of each coupon period, with coupon payments by the municipality offset by the value of economic loss avoided. Redemption value adjustment may be necessary to ensure that the value of the “bond” swap tracks the real world value of flood avoidance.
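The cash flows just described can be sketched numerically. The following is a minimal illustration, not a pricing model: the premium, payout, and trigger figures are purely hypothetical, and interest and the economic value of avoided flooding (which accrue outside the contract) are ignored.

```python
def settle_sea_level_swap(premium, payout, observed_msl, trigger_msl):
    """Settle a simplified sea level default swap.

    The counterparty pays `premium` up front, financing the extra
    inches of sea wall. If observed mean sea level (MSL) reaches the
    agreed trigger within the contract period, the municipality pays
    `payout` to the counterparty. Returns net cash flows as
    (municipality_net, counterparty_net).
    """
    if observed_msl >= trigger_msl:
        # Trigger reached: the municipality's payout is, in theory,
        # offset by the value of the flooding the extra inches avoided.
        return premium - payout, payout - premium
    # Trigger not reached: the municipality keeps the premium and the
    # counterparty takes a loss.
    return premium, -premium

# Hypothetical terms: $2M premium, $5M payout, 0.5 ft MSL trigger.
print(settle_sea_level_swap(2_000_000, 5_000_000, 0.6, 0.5))  # trigger reached
print(settle_sea_level_swap(2_000_000, 5_000_000, 0.3, 0.5))  # trigger not reached
```

Note that the two net positions always sum to zero: the swap transfers the uncertainty risk between the parties rather than creating value, which is precisely why it suits the over-/under-protection problem described above.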

Sea level default swaps do not completely hedge against the risk that adaptation projects are under-protective. The sea level could wildly exceed expectations, thus triggering a payout and leaving the municipality to deal with flood damages—albeit lessened by the adaptation project. Or flood events could occur due to storm surges or project failures that do not involve the MSL trigger. Nuisance flooding futures are a way to hedge against this risk by commoditizing floods in the same way that markets have commoditized weather. Rather than heating degree days or cooling degree days, a futures contract can be executed on nuisance flooding inch days (“NFIs”)—quantified as the amount of flooding along vertical and horizontal metrics over an agreed-upon bound. However, NFI futures suffer from two immediate problems. First, risk mitigation untethered to adaptation projects may pull capital investment away from innovation.[24] Further, this sort of loss relief is a less efficient use of capital than the loss prevention that would be facilitated by public adaptation projects.[25] Second, it is less likely that a counterparty would take a long position on NFIs for hedging than for sea level default swaps, potentially fostering a speculative market on flooding.[26] Notwithstanding these shortcomings, NFI futures can provide an alternative or supplement to conventional flood relief to guard against the risks of under-protection. To the extent that the absence of an obvious counterparty for NFI shorts and sea level default swaps fosters speculation, the applicable regulatory framework should be considered.
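By analogy to heating and cooling degree day contracts, NFI settlement might be sketched as follows. The index construction, threshold, strike, and tick value here are all hypothetical assumptions for illustration; an actual contract specification would fix these by rule.

```python
def nfi_index(daily_flood_inches, threshold=0.0):
    """Accumulate nuisance flooding inch days (NFIs) over a contract
    period: each day contributes its flood depth (inches above an
    agreed datum) in excess of the threshold, analogous to the way
    degree day indices accumulate temperature deviations."""
    return sum(max(depth - threshold, 0.0) for depth in daily_flood_inches)

def settle_nfi_future(daily_flood_inches, strike_nfi, tick_value):
    """Cash-settle a long NFI futures position: the long side gains
    when realized flooding exceeds the strike index level, hedging
    the holder against worse-than-expected flooding."""
    return (nfi_index(daily_flood_inches) - strike_nfi) * tick_value

# Hypothetical month with flood depths of 2, 1.5, and 3 inches on
# three days (zero elsewhere), a 5-NFI strike, and $10,000 per NFI.
print(settle_nfi_future([2.0, 1.5, 3.0], 5.0, 10_000))
```

Because settlement is a pure function of the realized index, either side can hold the contract without owning coastal property—which is the source of both the price discovery benefit and the speculation concern noted above.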

III. Regulatory Considerations

Sea level default swaps and NFI futures would be subject to regulation by the Commodity Futures Trading Commission (“CFTC”) under the Commodity Exchange Act (“CEA”). The CEA grants the CFTC exclusive jurisdiction over transactions involving swaps or contracts of sale of a commodity for future delivery.[27] The threshold question for regulation is therefore whether these instruments are based on an underlying “commodity” within the meaning of the CEA.[28] If so, municipalities seeking to finance adaptation projects through sea level default swaps or seeking to hedge against under-protection through NFI futures must be aware of the regulatory significance attached to such actions.

These instruments are based on an underlying financial “commodity” within the meaning of the CEA. Similar to weather-based derivatives, the underlier of these instruments is the occurrence of an independent measurable event—absolute or relative MSL in the case of a swap, and NFI in the case of a futures contract. The CFTC has repeatedly found that these kinds of intangible underliers are valid bases for futures contracts, even though there is no ready spot market for them and they may not be directly traded, because they represent some measure of an economic event that can be hedged against by contract.[29]

Regulation of these instruments is first scoped by how the commodity is characterized. The underlier of these instruments, as indexed measures of water levels, should be considered a financial commodity because it cannot be physically delivered and is not subject to the shared risks attending most physical commodities, such as supply fluctuation, damage, theft, or deterioration.[30] This characterization as a financial commodity accurately represents the translation of MSL or NFIs into economic gain or loss, even though the rise or fall of water is an event occurring in the physical world. Because these instruments are based on financial commodities, they are subject to CFTC jurisdiction and do not qualify for the exemptions attending to contracts for future sale or delivery of physical commodities. The regulatory import of the CFTC’s jurisdiction is further scoped by how these instruments are characterized and by whom they are exchanged.

A. Sea Level Default Swaps

Sea level default swaps, although used to finance adaptation projects, are simply the exchange of economic streams between parties.[31] Section 1a(47) of the CEA defines a “swap” to include any agreement or contract providing for payment “dependent on the occurrence, nonoccurrence, or the extent of the occurrence of an event or contingency associated with a potential financial, economic, or commercial consequence . . . .”[32] Swaps are also defined to include instruments that provide a basis for the exchange of payments based upon indices and/or quantitative measures.[33] “Mixed swaps”—swaps that appear to fall within the jurisdiction of both the CFTC and the Securities and Exchange Commission (“SEC”)—are subject to joint regulation.[34]

Sea level default swaps should be used only to finance adaptation projects,[35] raising the question of whether these can be characterized as bonds traditionally regulated by the SEC.[36] However, these instruments should not be characterized as bonds or other debt securities simply because they are used to finance public projects and are issued by a traditional issuer of bonds. Sea level default swaps do not entitle a borrower to repayment, but rather entitle a long position party to settlement in the event of MSL default. In addition, these instruments cannot be characterized as “security-based” because the underlier is a commodity, not a security or security-based index. Nor can they be characterized as a forward or commodity option contract that would be excluded from CFTC regulation.[37] Sea level default swaps are subject to exclusive CFTC jurisdiction.

The CFTC’s regulations on swaps are authorized and informed by the 2010 Dodd-Frank Act’s[38] reforms to the CEA. The regulations check the conduct of contract participants by requiring swap agreements either to be cleared and subject to regulation by an exchange facility, or to be traded off-exchange exclusively among “eligible contract participants.”[39] Given the geographic and project-specific factors that must be considered in crafting sea level default swaps to finance municipal climate adaptation, it is likely that these transactions—at least at the outset—would be off-exchange.[40] In this realm, CFTC regulations target participant conduct more than the contract terms.

“Eligible contract participants” are broken down into sub-categories with particular regulatory import. Municipalities are classified as “special entities”[41] and are provided with enhanced protections under Dodd-Frank to ensure that they receive unbiased independent advice before entering into swap transactions.[42] “Swap dealers” or “major swap participants”—which may include the financial institutions best positioned to enter into sea level default swaps with municipalities—would have to reasonably believe that coastal municipalities have truly independent representatives with sufficient knowledge to evaluate transactions, provide written representations as to fairness on that basis, act in the best interest of the municipality, and make all appropriate disclosures to the municipality.[43] The regulations do not make clear whether an eligible contract participant who is not defined as a swap dealer or major swap participant[44] would have the same obligations towards a municipal counter-party. However, reporting and record keeping requirements would still apply.[45] These regulations limit the pool of eligible counterparties and would appear to shield municipalities from overtly predatory speculators.

B. NFI Futures

Futures contracts on NFIs would be based on a standardized indexed measurement that correlates to economic losses caused by flooding related to sea-level rise. These contracts would allow municipalities—or even private property owners—to hedge against the under-protection of public adaptation projects. Although actual delivery is not contemplated, these are contracts for the future delivery of a financial commodity and are thus subject to regulation by the CFTC.[46] Unlike sea level default swaps, these futures are well suited to standardization and exchange trading, even though they are based on local measures.[47] This exchange access would deepen the pool of market participants and ideally allow coastal interests to better assess flood exposure through the price discovery function of futures trading.[48]

To allow for the trading of NFI futures on an exchange, contracts would be standardized and would need to be cleared by the CFTC.[49] Contract markets may not list contracts that are readily susceptible to manipulation, and the fact that these futures are based on an intangible commodity may raise some concern.[50] The NFI indices will have to be developed as standardized, independent, and verifiable sources of information to ensure veracity.[51] Past experience with futures pricing data has shown that self-reporting or phone call surveys are susceptible to manipulation.[52] To maintain integrity, a localized NFI index should be based on data pulled from scientific monitoring devices rather than the reporting of local residents. This should hinder manipulation attempts that could not otherwise be excused as potentially legitimate market activity.[53] NFI futures contracts can thus be designed to pass muster with the CFTC, harness price discovery benefits, and hedge against under-protective adaptation projects.
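As a rough illustration of the design principle above, an index settled purely from instrument readings might look like the following sketch. The flood threshold, gauge readings, and exceedance-sum methodology are all assumptions made for illustration; the text does not specify an actual NFI methodology.

```python
# Hypothetical sketch: an NFI-style index settled from tide-gauge readings
# rather than self-reported flood losses. Threshold, data, and scaling are
# illustrative assumptions, not a real index specification.

FLOOD_THRESHOLD_MM = 1500  # assumed local flood stage, in mm above gauge datum

def nfi_settlement_value(daily_peak_readings_mm, threshold=FLOOD_THRESHOLD_MM):
    """Sum each day's exceedance of peak water level over flood stage.

    Settling solely on instrument readings leaves no discretionary
    reporting step (surveys, phone calls) for a participant to manipulate.
    """
    return sum(max(0, reading - threshold) for reading in daily_peak_readings_mm)

# One settlement period of hypothetical peak readings (mm).
readings = [1400, 1450, 1520, 1610, 1490, 1700]
print(nfi_settlement_value(readings))  # → 330  (20 + 110 + 200)
```

The design choice is the one the text argues for: because every input is a physical measurement, the only remaining manipulation route is tampering with the underlier itself, which cannot be passed off as legitimate market activity.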

IV. Conclusion

Sea level derivatives provide a promising path for funding climate adaptation. Use of these derivatives to hedge against risks inherent in sea level rise adaptation projects is warranted in light of the mounting expense and uncertainty of adaptation, as well as the increasing inability of the NFIP and FEMA to mitigate flood loss. Two derivative instruments, sea level default swaps and flooding futures, are proposed as vehicles to shift the risks of under-protection and over-protection onto willing and able private parties. However, the lack of an obvious counterparty need to hedge may lead to heavy speculation. This carries the risks that counterparties may be overexposed and that markets will be susceptible to manipulation. But these transactions would be subject to CFTC regulation either on or off an approved exchange market. They must be structured to comport with those regulations to avoid the problematic incentives of an overly speculative market. The uncertainty of future sea level rise demands we adapt our cities. The economic reality of flooding demands we adapt our finances. Municipalities can strategically use derivative contracts to meet these needs, funding adaptation.

* Editor-in-Chief of the Virginia Environmental Law Journal, J.D. Candidate, Class of 2017, University of Virginia School of Law.

[14]See The Role of Mitigation in Reducing Federal Expenditures for Disaster Response: Hearing before the Subcomm. on Emergency Management, Intergovernmental Relations, and the District of Columbia of the S. Comm. on Homeland Security and Governmental Affairs, 113th Cong. 2, 6–8 (2014) (statement of David Miller, Associate Administrator, Federal Insurance and Mitigation Administration).

[17] Daniel Bloch et al., Climate Hedging Explained 6 (2010), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1676146. Note that this uncertainty also may have negative implications for municipal bond credit ratings, although impacts to date have been negligible. See Larry Levitz et al., Sea Level Rise May Challenge Some Local US Governments, Fitch Ratings (Sept. 16, 2015 5:11 PM), http://www.fitchratings.com/site/pr/990900.

[18] To “commoditize” in this sense is to designate as an underlying commodity solely to facilitate risk transfer through derivative contracts. Weather derivatives are one example of this: parties exchange weather derivatives but it is not possible to produce, buy, or sell the underlying weather. Cf. Felix Carabello, Market Futures: Introduction to Weather Derivatives, Investopedia, http://www.investopedia.com/articles/optioninvestor/05/052505.asp (last visited Mar. 31, 2017).

[23] Consideration of these risks is beyond the scope of this post, but they may include property developers positioned to benefit from sea level rise or parties in other sectors that may benefit from climate change in some other way. See supra note 1 and accompanying text.

[25]See supra note 10 and accompanying text. Note that Bloch’s climate default instruments can also be used for non-adaptive but necessary public expenses such as funding easement purchases to move coastal property owners away from flood prone areas. Bloch et al., supra note 22, at 95.

[26] For heating or cooling degree days, there are ready counterparties who need to hedge against the opposite outcome—farmers and energy companies complement each other. See Travis L. Jones, Agricultural Applications of Weather Derivatives, 6(6) Int’l. Bus. & Econ. Res. J. 53, 56–57 (2007). For NFIs, it is difficult to imagine a robust market of non-speculator counterparties beyond local repair and construction interests.

[35] Sea level default swaps used otherwise may be characterized as a sort of flood insurance, excluded from regulation as a swap but inevitably plagued by the same problems as the NFIP because the effect of correlated loss is multiplied by the sector’s inefficient use of capital. See supra notes 4–9.

[47] City specific heating or cooling degree days provide a ready example.

[48]Cf. Sharon Brown-Hruska, Comm’r, U.S. Commodity Futures Trading Comm’n., The Functions of the Derivative Market and Role of the Market Regulator, Speech at the 2006 Planalystics GasBuyer Client Conference (May 18, 2006), available at http://www.cftc.gov/PressRoom/SpeechesTestimony/opabrownhruska-45.

[51] Gary Gensler, Libor, Naked and Exposed, NY Times (Aug. 6, 2012), http://www.nytimes.com/2012/08/07/opinion/libor-naked-and-exposed.html (opining that derivative markets work best where the underlier is observable and integrity can be verified). This is not foolproof; it still may be possible for an unscrupulous trader to physically manipulate the underlier by somehow inducing flood or otherwise move the value of the futures contract. See, e.g., Commodity Futures Trading Commission v. Amaranth Advisors LLC, 554 F. Supp. 2d 523, 528–30 (S.D.N.Y. 2008) (discussing “marking the close” manipulation); Ryan Jacobs, The Forest Mafia: How Scammers Steal Millions Through Carbon Markets, The Atlantic (Oct. 11, 2013), https://www.theatlantic.com/international/archive/2013/10/the-forest-mafia-how-scammers-steal-millions-through-carbon-markets/280419/ (discussing how intangible commodities like carbon credits are susceptible to hacking and scams).

In November 2015, New York Attorney General Eric Schneiderman began an investigation into whether ExxonMobil made public statements about climate change that conflicted with its own internal research.[1] Schneiderman issued a subpoena to ExxonMobil ordering production of documents related to its internal climate change research and the use of that research in making strategic decisions.[2] This investigation differs from previous climate change litigation by attempting to hold companies responsible for their contributions to climate change using laws unrelated to climate change. If New York’s investigation is successful, it could signal a new wave of climate change litigation centered on issues tangentially related to climate change.

The current investigation pursues a different strategy than past approaches to climate change litigation. Climate change litigation against greenhouse gas (“GHG”) emitters usually involves violations of federal law (such as the Clean Air Act or the National Environmental Policy Act).[3] If a GHG emitter has not violated any federal or state environmental law, plaintiffs bringing an action against a GHG emitter face the difficult task of proving tort law nuisance claims.[4] Avoiding this, New York’s investigation aims to hold GHG emitters accountable by attacking something more tangible: disclosures to investors.[5] The Martin Act, a New York statute, gives broad power to the attorney general to investigate companies for finance-related “deception, misrepresentation, concealment, suppression, [or] fraud.”[6] Investigating large GHG emitters under the Martin Act allows New York to confront the problem of climate change without the difficulties of pursuing a climate change nuisance claim under tort law.

New York has been leading the movement for an increase in the disclosures companies must make about their internal climate change research. Before its current investigation, New York settled with Xcel Energy and Dynegy, Inc. in 2008 and AES Corp. in 2009, after investigating their omissions of climate change risks in SEC filings.[7] As part of the settlement agreements, the energy companies agreed to disclose their potential financial exposure due to climate change, the incorporation of their internal climate change research and projections into their overall strategic plans, and the companies’ efforts to reduce, offset, or limit their GHG emissions.[8]

Following these settlements, the federal government also sought to provide investors with information about companies’ financial exposure to various aspects of climate change.[9] In 2010, the SEC began to require companies to make certain climate change related disclosures.[10] These disclosures include the impact of climate change regulations, climate change’s effect on business trends (e.g. decreased consumer demand for goods that produce significant GHG emissions), and climate change’s physical effects (e.g. rising sea levels threatening property).[11]

These new SEC regulations give New York and other states an additional basis to investigate GHG emitters; however, it is unclear whether this government-led litigation will become a standard means of regulating the conduct of GHG emitters. The evolution of cigarette litigation provides insight. Individuals and families led the initial two waves of litigation but were unsuccessful due to skepticism about the emerging scientific research and vigorous opposition from cigarette companies.[12] State governments, and not individuals, assumed the lead in the third wave of litigation and sued tobacco companies for the costs of treating people with tobacco-related illnesses.[13] This approach was successful, resulting in $246 billion in settlements and the disclosure of millions of documents about the health risks of cigarettes and the deceptive marketing of cigarettes.[14]

Like cigarette litigation, climate change litigation also faces political hurdles. Congress and the attorneys general of Alabama and Oklahoma opposed New York’s investigation into ExxonMobil.[15] Lamar Smith, Chairman of the House Space, Science, and Technology Committee, subpoenaed both Schneiderman and Massachusetts Attorney General Maura Healey, seeking information regarding their investigations of ExxonMobil.[16] Both Smith and the attorneys general criticized the investigation as an infringement on the First Amendment protections that allow ExxonMobil and other companies to disagree about the science of climate change.[17]

Other states seeking to investigate GHG emitters using an approach similar to New York’s may face additional restrictions. The Martin Act gives the Attorney General of New York powers that are not available to many other attorneys general.[18] The Martin Act differs from other fraud statutes because the attorney general does not need to prove intent to demonstrate liability.[19] It also allows multiple remedies. While injunctive relief was the original remedy, the Martin Act now allows the attorney general to bring criminal or civil charges.[20] Of the seventeen attorneys general supporting New York’s investigation into ExxonMobil, only Massachusetts Attorney General Maura Healey has similarly issued a subpoena.[21]

New York may broaden its investigation as it receives more information from ExxonMobil. Potential targets include organizations that both publicly question climate change research and receive funding from GHG emitters.[22] Discrepancies between the public statements of these organizations and the internal documents of the companies providing them funding could offer more proof for attorneys general seeking charges. Because many of the large GHG emitters have funded the same climate change denying organizations (such as the Global Climate Coalition), the investigation into these organizations could ensnare other GHG emitters in similar Martin Act investigations.[23]

New York’s investigation is still young, and there is little certainty about the outcome. If successful, the investigation could encourage other states to pursue similar investigations into ExxonMobil and other large GHG emitters. Climate change litigation may follow a similar path to cigarettes and other toxic torts, in which the increasing scientific evidence does not result in litigation success until the evidence becomes “overwhelming.”[24] Until climate change science becomes overwhelmingly accepted, litigation involving residual considerations, such as fraudulent misrepresentations to investors, may be the most effective legal option for affecting companies’ contributions to climate change.

[7] Assurance of Discontinuance Pursuant to Executive Law §63(15), In the Matter of Xcel Energy Inc. (2008) (No. 08-012) [available at http://www.ag.ny.gov/sites/default/files/press-releases/archived/xcel_aod.pdf]; Assurance of Discontinuance Pursuant to Executive Law §63(15), In the Matter of Dynegy, Inc. (2008) (No. 08-132) [available at http://www.ag.ny.gov/sites/default/files/press-releases/archived/dynegy_aod.pdf]; Assurance of Discontinuance Pursuant to Executive Law §63(15), In the Matter of AES Corp. (2009) (No. 09-159) [available at http://www.ag.ny.gov/sites/default/files/press-releases/archived/AES%20AOD%20Final%20fully%20executed.pdf].

[17] This argument appears misguided as it does not address whether ExxonMobil committed actionable fraud or deception, as determined by the Martin Act, by not disclosing internal information about climate change.

I. INTRODUCTION

New York City is thought by many to be one of the most incredible, majestic, and beautiful cities in the world. Its prominence and prosperity have grown just like the skyline, continuously reaching new heights. Ironically, one of the most beautiful places in New York City, Central Park, is also home to one of the ugliest and most archaic realities of not just the city, but of the country. Walking through midtown Manhattan you will find iconic buildings, thousands of business professionals and tourists, and incredible culture. What you will also find is animal cruelty, on full display.

There is a large horse and carriage industry in New York City, and carriage drivers exploit horses by charging upwards of fifty dollars for a short twenty-minute ride.[1] Under the Animal Welfare Act, horses do not receive any federal protection.[2] As such, day in and day out, horses are treated in an inhumane manner by the carriage industry in New York City.[3] The American Society for the Prevention of Cruelty to Animals (“ASPCA”) stated, “[t]he life of a carriage horse on New York City streets is extremely difficult and life threatening . . . carriage horses were never meant to live and work in today’s urban setting.”[4]

Carriage horses are forced to work in a harsh environment, foreign to their natural habitat. They work roughly nine hours per day, seven days a week, walking on hard pavement, pulling a carriage weighing hundreds of pounds, and inhaling unhealthy air from cars, buses, and taxis.[5] Among the many effects of working in this unnatural environment are incidents where horses will react by taking off at full speed into the busy city streets.[6] These horses are denied their fundamental needs to live healthy lives, such as pasture time to “graze, stroll and socialize freely on grass.”[7]

II. BACKGROUND

The protection of horses was a central policy of Mayor Bill de Blasio’s 2013 mayoral campaign.[8] Mayor de Blasio gained the support of animal rights activists because of his political stance on the issue.[9] In January of 2016, an “agreement in concept” was announced by Mayor de Blasio’s office, which would have significantly reduced the horse carriage industry.[10] The agreement would have shrunk the horse and carriage industry from its current size of two hundred and twenty horses down to ninety-five by 2018.[11] The agreement would have required the building of a new stable in Central Park, large enough to house seventy-five horses at a time.[12]

In order to effectuate and implement the agreement and plan, the City Council needed to approve the deal. Unfortunately, in February of 2016, the New York City Council did not simply reject the deal; rather, it canceled the vote altogether.[13] The legislation failed as a result of the Teamsters union pulling support for Mr. de Blasio.[14] When the Teamsters pulled out, the legislation no longer had a sufficient number of votes to pass. After the cancellation of the vote, a carriage driver and spokesman for the industry, Ian McKeever, told reporters, “It’s a great day for the horse and carriages.”[15] However, for the horses, it meant the continuation of inhumane treatment, starvation, and borderline torture.

Currently, most carriage horses are housed in Clinton Park Stables, a building located on 52nd Street near the Hudson River.[16] Most of the stalls inside the Clinton Park Stables are eight feet by ten feet.[17] However, according to customary standards for humane horse housing, the ideal stall size for a horse of one thousand pounds or larger is twelve feet by twelve feet.[18] For more “compact breeds,” such as ponies, the ideal size is ten feet by ten feet.[19] This entails another level of inhumane treatment of these horses. When they are not subject to the harsh life on the job, they retire to stalls that are too small to house their large frames.

In addition to ill-equipped housing, horses are treated inhumanely throughout the course of their lives. Even as the horses grow tired, sick, and old, they are still required to work long, excruciating hours every day.[20] They often suffer from respiratory ailments as a result of breathing exhaust fumes on a daily basis, and typically develop extreme leg issues from traversing the city streets on hard, unforgiving surfaces all day.[21]

In many instances, these issues go untreated. On September 14, 2006, a horse that had worked pulling carriages through New York City for nearly two decades collapsed in Central Park.[22] After the horse collapsed, the carriage driver began to whip the defeated horse repeatedly in attempts to get the horse to stand up and continue working.[23] A terrified crowd gathered around, urging the carriage driver to stop.[24] Eventually, a police trailer took the horse away to her stable, and the horse died early the next morning.[25] Similarly, in April of 2014, a carriage driver falsified records to force an “old, asthmatic” horse to continue working.[26] The horse was involved in a horse-carriage accident in September 2013, and the carriage driver was previously charged with working horses for more than twelve hours in a twenty-hour period.[27]

As another example, on February 23, 2015, a horse was found in his stall unable to stand up.[28] Thereafter, it was discovered that the horse had suffered a fractured leg, and he was later euthanized.[29] In another event, in December of 2013, a carriage driver was charged with cruelty to animals after he was discovered to be working a horse that was “visibly injured and struggling to pull the weight of the carriage.”[30] A veterinarian later found that the horse had “thrush—an infection of the hoof that, if left untreated, can lead to permanent lameness and sometimes even require euthanasia.”[31] These are just a few examples from the multiple pages of reports of inhumane treatment of these horses, which occurs far too often.

The horses are not the only living beings at risk due to the horse and carriage industry. Horses are prey animals and, therefore, have a “highly developed flight drive that is easily triggered when they are startled by an unexpected or threatening stimulus.”[32] In other words, the loud, busy, and chaotic streets of New York City seem like the worst place for an extremely sensitive thousand-pound animal to be.

There have been over thirty carriage horse accidents in the past few years alone. Many of these incidents involve horses being “spooked,” to which their natural reaction is to take off running.[33] On June 9, 2014, something in the city spooked a horse and he bolted through the city streets.[34] An innocent bystander attempted to stop the horse by grabbing its reins and was then dragged by the horse.[35] On October 19, 2014, a witness video shows a spooked horse bolting up 11th Avenue in Manhattan, running full speed through a busy intersection.[36] In another incident involving a bolting horse, on October 28, 2011, a witness recounted, “The horse took off at top speed and could not be stopped. He could have easily trampled a pedestrian.”[37] Had the horse trampled an innocent pedestrian, who would have been to blame? In our society, it is more likely than not that the media would have depicted the horse as out of control, when it was merely acting instinctively, but in the wrong environment.

Therefore, having horses in an overpopulated New York City is incredibly dangerous to everyone in the area. At any moment, a horse may be spooked and take off, putting everyone in its path at risk of serious injury, if not death. The risk to public safety does not end there, however. Spending about nine hours a day on the job, horses naturally defecate on the same streets they traverse. There are two hundred and twenty horses in the horse and carriage industry in New York City.[38] Making matters worse, “carriage drivers often do not clean up after the horses, leaving waste and rotting debris.”[39] Therefore, city health officials have the additional burden of regularly monitoring the horses to ensure that they are not carrying diseases that could be transmitted to other animals or to humans.[40]

III. SOLUTION

Given that previous efforts by Mayor de Blasio and animal rights activists have failed, a different approach is needed to end the exploitation of horses in New York City. I will propose two approaches that would result in cleaner streets in New York City and eliminate the inhumane treatment of horses. The first approach would end the horse and carriage industry through a phase-out process; as the more ambitious approach, it is likely to be met with the strongest opposition. The second approach would still allow the industry to operate, but would reduce the number of horses allowed and raise the standards of horse keeping, ensuring that the horses are treated humanely.

A. FIRST APPROACH

Too drastic a change, made too quickly, will not only be logistically difficult to implement but will also displace many workers who rely on the industry for income. Therefore, the first approach that I propose is a phase-out process by which carriage drivers will still be able to operate for a certain amount of time, after which operation will be in violation of the law. This process will be implemented through newly promulgated regulations and a strict permit process.

Under this approach, beginning January 1, 2018, no more permits will be issued to operate a horse and carriage. After this date, those with current, up-to-date permits will be allowed to continue operation. Those with said permits will be legally allowed to continue operation until July 1, 2019. After this date, any operation of a horse and carriage will be in direct conflict with the law and will be subject to minimum fines of $5,000, confiscation of the horse, and any additional penalties imposed by law. This approach is similar to the workings of amortization periods for non-conforming uses in the context of zoning laws.

B. SECOND APPROACH

In the alternative, this more lenient approach is more of a compromise. It will minimize the possibility that horses are treated inhumanely and reduce the number of horses used in the city, while still allowing the industry to operate. This approach, like the approach of Mayor de Blasio previously discussed, will reduce the number of horses in the New York City horse and carriage industry from two hundred and twenty to seventy-five.

In addition, these seventy-five horses will be placed on a rotation system and will, by law, be limited to a maximum of six months of service every three years. This will help ensure that the polluted air, hard concrete, and busy Manhattan streets have as little impact on each horse’s longevity as possible. Furthermore, this rotation will ensure that large, full-grown horses do not spend the majority of their lives living in stalls that are too small for them. Finally, this compromise will require carriage operators to attain a higher level of horse training, submit to city inspections of the horses’ housing arrangements, and face high fines and relinquishment of their licenses after only one violation.

IV. CONCLUSION

In recent years, the debate over the horse and carriage industry in New York City has grown more and more contentious. Meanwhile, these horses are treated inhumanely and subjected to harsh working conditions, and the public is put at risk. Through either of the approaches I have proposed, we can begin to protect horses and make the streets of New York City a more sanitary environment. Implementation will be met with strong opposition, but that cannot deter action. Mistreated horses cannot speak for themselves and tell us the pain that they go through, but the record of inhumane treatment speaks volumes. If we stand by and allow this inhumane treatment to continue, that, too, speaks volumes about us as a society.

By Breanna Hayes, Managing Editor, Vermont Journal of Environmental Law

I. Introduction

Human use of fossil fuels dates back to prehistoric times.[1] Before the Industrial Revolution, humans mostly relied on wood, wind, and water as energy sources.[2] But as the Industrial Revolution progressed, humans developed a dependence on fossil fuels.[3] In addition, the advancements of the Industrial Revolution allowed the human population to grow rapidly.[4] Combined, these facts indicate that not only were humans developing a greater dependence on fossil fuels, but there were also more humans on earth than ever before, making that dependence even more severe.

While humans blindly relied on fossil fuels for centuries, by the 1940s scientists began predicting the impact that fossil fuels would have on the environment.[5] In 1949, M. King Hubbert made a prediction known as “Hubbert’s Peak.”[6] According to this prediction, fossil fuels would peak in the 1970s.[7] Hubbert further predicted that despite the peak in fossil fuels, humans would still have a rising demand for energy.[8] According to Hubbert’s predictions, the energy sector would need to replace fossil fuels with renewable energy sources to meet the demand.[9] As predicted, oil peaked in 1971, with other fossil fuels soon to follow.[10] Yet, by the time the world began to acknowledge Hubbert’s peak, fossil fuels had “become so firmly interwoven into human progress and economy, that changing this energy system would drastically alter the very way we have lived our lives.”[11] While the transition will be difficult, many nations around the world have begun to move away from fossil fuels.[12] A major victory in the movement away from fossil fuels was the Paris Agreement, which has been ratified by more than 100 countries.[13] In the Paris Agreement, countries committed to “holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change.”[14]

While world governments are progressing towards greener energy to combat climate change, a problem arises in the energy industry. Publicly traded energy companies remain a considerable market force,[15] even as demand for fossil fuels continues to decrease and demand for renewable energy sources rises.[16] In 2015, even though fossil fuel prices were at a multi-year low, “over half of global power capacity additions in 2015 came from wind, solar, hydro and nuclear.”[17] Additionally, the Paris Agreement, formulated in December 2015, creates the expectation that policy-makers will advance progressive ideas to help countries meet the agreed-upon two degree Celsius cap.[18] An obvious way for energy companies to manage this shift is to start diversifying their portfolios to include renewable sources.[19] However, timing is key for both the market and the climate. Concerning the market, “If [companies] move too quickly, money could be left on the table from their fossil fuels business. But too slowly, and they could miss their window of opportunity.” On the other hand, the world has a very restricted carbon budget if it is going to honor the two degree Celsius cap embodied in the Paris Agreement.[20]

This article focuses on the window of time that companies have to shift from fossil fuels to renewable energy. It begins with a brief overview of public companies and the use of stock. It then discusses the “Carbon Bubble” and how it compares and contrasts with both the dotcom and housing market bubbles. Finally, it discusses the environmental impact of the energy industry’s financial choices.

II. Background: Brief Overview of Public Companies

Public companies differ from private companies in two ways: public companies trade stock on a public stock exchange, and they make regular, legally required disclosures to the Securities and Exchange Commission (SEC).[21] Because the stock trades on a public exchange, any person can purchase stock in the company. In addition to individual investors, institutional investors—such as pension funds, insurance companies, and mutual funds—may purchase public stock.[22] The SEC requires publicly traded companies to make regular disclosures to protect both individual and institutional investors.[23] According to the SEC, public companies must “disclose meaningful financial and other information” so there can be a “common pool of knowledge for all investors to use to judge for themselves whether to buy, sell, or hold a particular security.”[24]

When a company decides to sell stock to the public, multiple factors determine the stock’s price, and that price fluctuates constantly. Regardless of what a person pays for a share of stock, each share entitles its holder to the same thing: a share of the company’s equity. Equity can be determined by a simplified formula. All companies have assets and liabilities: assets are the infrastructure and inventory the company could liquidate, and liabilities are the debts the company owes. Because a company’s equity is subordinate to its debts, equity equals assets minus liabilities. The equity is then divided by the number of shares the company has issued. For example, if a company owns $50 million worth of assets and has $20 million in liabilities, the company’s equity would be $30 million. If the company issued 2 million shares, each share would represent $15 of equity.
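The simplified book-value formula above can be sketched in a few lines of code. This is only an illustration of the arithmetic in the example (all figures are the hypothetical ones from the text, not drawn from any actual filing):

```python
def equity_per_share(assets: float, liabilities: float, shares_outstanding: float) -> float:
    """Equity = assets - liabilities; dividing by shares gives per-share book value."""
    equity = assets - liabilities
    return equity / shares_outstanding

# $50 million in assets, $20 million in liabilities, 2 million shares issued.
print(equity_per_share(50_000_000, 20_000_000, 2_000_000))  # 15.0
```

As the next paragraph explains, this per-share equity figure is only a floor for analysis; the market price routinely departs from it.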

As mentioned, a stock’s price is not locked to its current equity value. Rather, many factors, such as investor enthusiasm, may alter the price.[25] If investors believe that the company will grow, stock may sell for more than its equity value. A problem arises, however, when the stock price increases rapidly but the company’s assets do not catch up. When asset prices appreciate beyond their underlying value, a market bubble emerges.[26] For example, “Investors may bid up the price of an asset in the belief that its price will continue to rise and when the ever-higher price results in an ever-smaller number of buyers, the price eventually declines rapidly.”[27] Inevitably, the bubble bursts and the price drops.[28]

Some are concerned that energy companies are creating such a bubble.[29] The concern stems from how energy companies value their assets. Energy companies count reserves of fossil fuels as assets.[30] But, in light of current political and social action regarding climate change, many believe that energy companies will not be able to use all of the fossil fuel reserves now counted among their assets.[31]

III. The Carbon Bubble

Prior to the Paris Agreement, nearly 200 countries signed the Cancun Agreement, which embodied an international commitment to keep the global temperature from rising more than two degrees Celsius above pre-industrial levels.[32] The Cancun Agreement also acknowledged the possible need to further restrict warming to 1.5 degrees Celsius.[33] In November 2011, the Carbon Tracker Initiative (CTI), a nonprofit think tank, used the Cancun Agreement as a reference point and pioneered the concept of “the Carbon Bubble.”[34] Relying on the assumption that the world would limit carbon usage within the bounds of the Cancun Agreement, CTI calculated that the world’s carbon budget was about one-third of what energy companies held in reserves.[35] The other two-thirds would be “stranded assets,” which CTI defined as:

Fossil fuel energy and generation resources which, at some time prior to the end of their economic life (as assumed at the investment decision point), are no longer able to earn an economic return (i.e. meet the company’s internal rate of return), as a result of changes in the market and regulatory environment associated with the transition to a low-carbon economy.[36]

CTI further calculated that, by 2011, the world had already used a third of its usable carbon budget.[37]
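The arithmetic behind CTI’s stranded-assets claim can be made concrete. The sketch below uses stylized round numbers chosen only to reproduce the roughly one-third/two-thirds split described above; they are illustrative orders of magnitude, not CTI’s actual figures (see the CTI report for those):

```python
def stranded_fraction(reserves: float, carbon_budget: float) -> float:
    """Fraction of booked reserves that would be unburnable under a carbon budget.

    If the budget exceeds the reserves, nothing is stranded.
    """
    burnable = min(reserves, carbon_budget)
    return (reserves - burnable) / reserves

# Stylized figures: reserves roughly three times the usable budget.
print(round(stranded_fraction(2_800, 900), 2))  # ~0.68, i.e. roughly two-thirds stranded
```

Under these assumptions, about two-thirds of the reserves carried on company balance sheets could never earn a return, which is precisely the overvaluation that the “carbon bubble” label describes.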

CTI also asserted that the carbon bubble could pose financial risks to investors. The report states that:

The current system of market oversight and regulatory supervision is not adequate to send the required signals to shift capital towards a low carbon economy at the speed or scale required. The current short-term approach of the investment industry leaves asset owners exposed to a portfolio of assets whose value is likely to be seriously impaired.[38]

CTI further criticized the energy industry for continuing to spend invested money on exploration for more fossil fuel reserves, even though the reserves already located would exceed the carbon budget.

IV. The Other Bubbles: Housing and Dotcom

To understand the possible effects of a carbon bubble, it is useful to look at the two most recent economic bubbles: the dotcom and housing bubbles. The dotcom bubble ran from 1995 to 2001 and revolved around the growing tech industry catalyzed by the advent of the internet.[39] The housing bubble began to grow in 2000 and burst in 2006, after banks and other originators approved more and more subprime and nonprime mortgages.[40] The distinguishing factor between the two was their impact on the economy: the housing bubble hit the broader economy hard, while the dotcom bubble did not.[41]

In the 1990s, the internet became increasingly integrated into everyday life. In response to growing dependence on the internet, many online retail companies began springing up.[42] Investors enthusiastically invested in companies that were taking advantage of the internet frontier.[43] Investor enthusiasm was so high, some companies saw stock prices double within one day of an IPO.[44] The flow of investments fueled the “dotcom bubble.”[45]

The intense investor enthusiasm made stock prices rise;[46] however, some companies were losing as much as $10 million to $30 million per quarter.[47] Due to these unsustainable losses, many internet-based companies folded.[48] Between March and April of 2000, roughly a trillion dollars’ worth of investments was lost.[49]

Just as the dotcom bubble popped, the housing bubble began to grow.[50] In the early 2000s, banks and other originators approved more subprime and nonprime loans.[51] These mortgages were high risk because approved borrowers often had low credit scores or were charged rates and fees higher than those for which they qualified.[52] Some mortgage loans had layered risks, including structures that deferred potential repayment problems by permitting “adjustable” payments.[53] Such structures allowed borrowers to select monthly payments lower than the fully amortizing rate.[54] This meant that, for the first few years, borrowers could make no principal payments and send in only a fraction of the interest accruing each month. With this type of adjustable rate mortgage, the principal balance would grow.[55] When the rates reset a few years later to the fully amortizing rates, monthly payments would spike to a level that many borrowers could not afford. After approving subprime mortgages, banks would pool them and use them to back securities that they then sold to investors, including other banks.[56] Certain slices of these mortgage-backed securities were sometimes repackaged into new pools that also issued securities.[57] This scheme helped supply needed credit to the housing market as long as borrowers could afford their payments. But when payments spiked and many borrowers defaulted, the mortgage-backed securities began to decline in value. Some banking firms that held mortgage-linked securities in their portfolios began to collapse.[58]
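The payment-shock mechanics described above can be simulated in a short sketch. The loan amount, rates, and teaser payment below are hypothetical, chosen only to show how paying less than the accruing interest grows the balance (negative amortization) and how the payment jumps at reset:

```python
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Standard fully amortizing monthly payment for a fixed-rate loan."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

balance = 300_000.0        # hypothetical loan amount
teaser_payment = 800.0     # borrower's chosen minimum payment
annual_rate = 0.06         # hypothetical note rate

# Three years of minimum payments: the unpaid interest capitalizes each month.
for _ in range(36):
    interest = balance * annual_rate / 12   # interest accrued this month
    balance += interest - teaser_payment    # shortfall is added to principal

# At reset, the loan must amortize fully over the remaining 27 years.
reset_payment = monthly_payment(balance, annual_rate, 27 * 12)
print(round(balance))        # balance has grown above the original 300,000
print(round(reset_payment))  # payment jumps far above the $800 teaser
```

Even in this modest scenario the balance grows by tens of thousands of dollars, and the reset payment is more than double the teaser payment, which is the spike that pushed many borrowers into default.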

The bursts of the dotcom and housing bubbles each caused a loss of roughly $6 trillion in household wealth.[59] While the two bubbles caused similar losses, the housing bubble’s burst had a much greater effect on the rest of the economy.[60] The difference lay in the population affected.[61] When the dotcom bubble burst, the majority of investors were wealthy and less indebted.[62] Even though those investors lost money, they still had disposable income. In contrast, the people who felt the shock of the housing bubble were mostly low-income homeowners.[63] The bubble was fueled by subprime and nonprime loans that many people could not afford to repay, so most of these homeowners’ income went toward trying to pay their mortgages and save their homes.[64] Unlike the wealthy, albeit unlucky, investors who lost wealth in the dotcom bubble, the homeowners hit by the housing bubble could not afford to continue retail spending.[65] Consequently, the economy felt a much greater shock from the housing bubble’s burst than it had from the dotcom bubble’s.

V. The Carbon Bubble Mirrors the Dotcom Bubble on a Financial Scale

It is unlikely that the carbon bubble will have the same detrimental effect on the economy that the housing bubble had. This is because the carbon bubble differs from the housing bubble in two significant ways. First, the carbon bubble is not fueled by debt, subprime or otherwise. Second, the carbon bubble is more similar to the dotcom bubble because the people who will most likely feel the shock are wealthy investors who will be able to absorb the loss without halting retail spending.

In the housing crisis, the assets that were overvalued were the mortgage-backed securities. Borrowers could not repay the high-risk loans, so there was no cash flow to support the mortgage-backed securities. Fossil fuels, by contrast, are still consumed at high rates.[66] Whereas the housing bubble was built on unsustainable loans, the carbon bubble is forming around anticipated legislation. The carbon bubble stems not from the industry’s inability to produce its reserves, but from the anticipated regulation of carbon.

Another factor that fueled the housing bubble was government intervention: the government promoted home ownership, leading more people to borrow money.[67] In contrast, governments are not promoting fossil fuel usage. The recent election of Donald Trump to the presidency may, however, affect how “stranded” the energy companies’ assets really are. During the Obama Administration, the United States made strides toward greener energy, including signing the Paris Agreement.[68] President-Elect Trump has pledged to withdraw from the Paris Agreement and has supported the use of fossil fuels.[69] The United States therefore may not enact the restrictive legislation that would burst the carbon bubble. Nevertheless, while a pro-fossil-fuel administration in the United States may delay the shock, the shock will still come. The United States is only one of the countries that ratified the Paris Agreement, and while fossil fuels may retain a market in the United States under a Trump Administration, the global market will still shrink.

The carbon bubble will most likely affect the economy the way the dotcom bubble did. If energy shares plummet, those affected will be mostly wealthy or institutional investors. For example, on the Forbes Global 2000 list of the World’s Biggest Public Companies, ExxonMobil ranked ninth and Chevron twenty-eighth.[70] The companies also ranked first and third, respectively, among public companies in oil and gas operations.[71] Of ExxonMobil’s 4.15 billion outstanding shares, company insiders own over 500 million and institutions own over 2 billion.[72] Similarly, of Chevron’s 1.89 billion outstanding shares, corporate insiders own approximately 75 million and institutional investors own more than 1.18 billion.[73] While personal wealth would be lost if energy stocks plummeted, the loss would not have the same detrimental effect on retail spending that the housing bubble did.

Furthermore, shareholders are holding the companies accountable for their practices. Recently, ExxonMobil shareholders agreed to the “prudent use of investor capital in light of the climate change related risks of stranded carbon assets.”[74] Some shareholders are also bringing a securities class action alleging that ExxonMobil materially misrepresented its assets,[75] although the class has not yet been certified.[76] Shareholders can use these avenues to assist legislators in holding these companies to the carbon budget.

VI. The Environment Will Still Suffer

While the carbon bubble is unlikely to wreak havoc on the economy, the threat to the environment is still very real. In fact, the limited effect on the economy may increase the threat to the environment. If the world is committed to keeping global warming below two degrees Celsius, then, as of 2013, 60–80 percent of fossil fuel reserves must stay in the ground.[77] The use of fossil fuels and the transition to renewables may not lead the world into a financial crisis, but if the transition is not quick, the world may face an environmental crisis. If energy companies are too slow in transitioning from fossil fuels to renewable sources, they will overspend the carbon budget. If that happens, the likelihood that global temperatures will exceed the agreed-upon two-degree-Celsius cap increases.[78] Rising temperatures could have many effects on the environment, including shrinking glaciers, loss of sea ice, accelerated sea level rise, longer and more intense heat waves, shifts in plant and animal ranges, and earlier flowering of trees.[79] Many of these effects are already documented.[80]

If these changes continue, communities will feel the impact. The effects could be health-based, social, or cultural due to a change in the availability of natural resources.[81] According to the Environmental Protection Agency,

Climate change may especially impact people who live in areas that are vulnerable to coastal storms, drought, and sea level rise or people who live in poverty, older adults, and immigrant communities. Similarly, some types of professions and industries may face considerable challenges from climate change. Professions that are closely linked to weather and climate, such as outdoor tourism, commerce, and agriculture, will likely be especially affected.[82]

While energy companies inflate the carbon bubble by burning carbon and contributing to these environmental effects, it is unlikely that courts will hold the companies liable.[83] Without a federal cause of action, causation is difficult to prove.[84] Because climate change is a global problem, it is challenging to attribute particular environmental harms to individual companies.

VII. Conclusion

There is reason to be concerned about the carbon bubble, but that reason is not the stock market. Most likely, the carbon bubble will not have the effect on the economy that the housing bubble did, because, on the financial side, either the companies will divest from fossil fuels or the people affected by the carbon bubble’s burst will be wealthy enough to absorb the shock.

On the other hand, if companies continue to burn carbon and inflate the carbon bubble, people will feel the environmental effects at a societal level. Natural resources may become scarcer, cultural ways of life may fade for lack of resources, and communities may be destroyed by harsh storms. The impact on communities will not come from a stock market crash; it will come from environmental catastrophes.