Category: Environmental Law Review Syndicate

Introduction

The Farm Bill affects nearly every aspect of agriculture and forestry in the United States. Therefore, its next reauthorization offers an important opportunity to better manage the risks that climate change poses to farms, forests, and ranches by supporting resilience practices that also offer greenhouse gas (GHG) emission reductions.

Agriculture is vulnerable to the impacts of climate change, including rising temperatures, changes in rainfall and pest migration patterns, extreme weather events, and drought. In addition to being heavily affected by climate change, agriculture is also a significant contributor to climate change. Agricultural practices are responsible for about eight percent of U.S. GHG emissions.[4] Estimates of total food system emissions, which include the CO2 emissions from energy use and transportation, increase the agricultural industry’s proportion of U.S. GHG emissions to between 19 and 29 percent.[5]

To better align their practices with their long-term interests, farmers and ranchers can adopt practices that enhance their resilience while also reducing GHG emissions and increasing carbon sequestration. Many of these practices improve the long-term productivity and profitability of farms; indeed, farmers are already adopting practices that reduce emissions or sequester carbon in the soil and in woody biomass while improving productivity and resilience on their land.

This paper proposes a suite of practices that should be considered during the next reauthorization of the Farm Bill to improve on-farm efforts to adapt to and mitigate climate impacts. It is organized into four main sections. Part I provides background on the Farm Bill and the ways that the U.S. agricultural system contributes to GHG emissions. Part II provides an overview of opportunities for on-farm mitigation and adaptation. Many of the practices we recommend can reduce on-farm emissions and build a more resilient agricultural system. Part III identifies a set of metrics that we used to assess potential proposals. Lastly, Part IV summarizes how climate practices can be incorporated across titles and highlights three policy options.

I. Background

A. Agricultural Sources of GHG Emissions

Greenhouse gases trap heat in the atmosphere and contribute to increases in global temperatures. Although this is a natural process, emissions since the industrial revolution have raised atmospheric greenhouse gas concentrations to levels never before recorded. Agriculture, including raising crops and animals as well as resulting land use changes and farm equipment usage, is a source of three GHGs: methane (CH4), nitrous oxide (N2O), and carbon dioxide (CO2).[6]

Globally, emissions from food systems are responsible for nearly a third of all GHG emissions.[8] Domestically, EPA’s Inventory of U.S. Greenhouse Gas Emissions and Sinks divides agriculture-related emissions into several categories. N2O and CH4 emissions are categorized as “Agricultural,” and accounted for 8.3 percent of total greenhouse gas emissions in the United States in 2014.[9] In 2014, N2O emissions were 336 million metric tons of carbon dioxide equivalent (MMT CO2 Eq.); these emissions were caused primarily by soil management practices such as the use of synthetic fertilizers, tillage, and organic soil amendments.[10] Manure management and biomass burning also contribute to N2O emissions. CH4 emissions were 238 MMT CO2 Eq. and were produced by enteric fermentation during ruminant digestion (164 MMT CO2 Eq.), manure management (61 MMT CO2 Eq.), and the wetland cultivation of rice (12 MMT CO2 Eq.).[11]
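The “CO2 equivalent” units in the inventory figures above simply scale each gas’s mass by its global warming potential (GWP). A minimal sketch of that bookkeeping, assuming the AR4 100-year GWP values (CH4 = 25, N2O = 298) that EPA inventories of this period used:

```python
# Illustrative sketch of CO2-equivalent accounting. The GWP values are
# assumptions (AR4 100-year values); they are not taken from the text above.
GWP = {"CH4": 25, "N2O": 298}  # 100-year global warming potentials (CO2 = 1)

def to_co2_eq(gas: str, mmt_of_gas: float) -> float:
    """Convert million metric tons of a gas into MMT CO2 Eq."""
    return mmt_of_gas * GWP[gas]

# Working backwards from the inventory's CH4 line items (all in MMT CO2 Eq.):
ch4_sources = {"enteric fermentation": 164,
               "manure management": 61,
               "rice cultivation": 12}
total_ch4 = sum(ch4_sources.values())  # 237, vs. 238 reported (rounding)

# The 164 MMT CO2 Eq. from enteric fermentation corresponds to only
# about 6.6 MMT of actual CH4:
actual_ch4_mmt = 164 / GWP["CH4"]
```

The line items sum to 237 rather than the reported 238 MMT CO2 Eq., consistent with independent rounding of each category.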

CO2 emissions from agriculture-related land use changes and equipment usage are accounted for in the “Land Use, Land-Use Change, and Forestry” and the “Energy” categories, respectively. Estimates of total food system emissions, which include the CO2 emissions from energy use and transportation, increase the agricultural industry’s proportion of U.S. GHG emissions to between 19 and 29 percent.[12]

II. Strategies for Managing Climate Risk through Mitigation and Adaptation

Given that agriculture contributes to the GHG emissions driving climate change, which in turn affects agricultural productivity, it is appropriate to consider how climate change can be addressed across the titles of the Farm Bill. The anticipated reauthorization in 2018 can play a critical role in addressing climate change in the United States by promoting on-farm mitigation and adaptation practices.

Adopting new agricultural practices can be challenging, especially for small farmers or operations without access to large amounts of capital or information about adaptation opportunities. However, doing so will not only help U.S. farmers and ranchers confront shifting seasons, more severe storm events, new pests, drought, and other challenges,[13] but will also reduce the Farm Bill’s fiscal burden on taxpayers.[14] A number of land managers are already adopting strategies that not only reduce emissions or sequester carbon in the soil, but also have the important co-benefits of improving productivity and resilience.[15]

A. Mitigation Measures

First, land managers can reduce the GHG emissions of their farming practices in a number of ways. Practices such as conservation tillage reduce soil disturbance and prevent some erosion, which can lower soil carbon loss. Precision agriculture strategies can reduce fertilizer inputs on cropland, which in turn reduces GHG emissions from fertilizer production and application.[17] Reincorporating livestock manure onto cropland, as well as improved management of liquid manure using anaerobic digesters or other on-farm technology, can reduce methane emissions from livestock waste by capturing the methane rather than releasing it.[18]

Second, land managers can sequester additional carbon through on-farm practices. Soil carbon can be increased by incorporating cover crops, including legumes, into crop rotations, reducing tillage, and adopting agroforestry practices.[19] In addition, planting perennial crops or incorporating trees into farms through alley cropping, hedgerows, and riparian forest buffers can lead to long-term sequestration of carbon in woody biomass.

Finally, land managers can take steps to avoid future emissions. The most critical way to avoid new on-farm emissions is to avoid land conversion, which releases carbon that was previously sequestered in the soil and in woody biomass.

B. Adaptation Measures

Adapting to a changing climate will require farmers, foresters, and ranchers to prepare for and respond to new risks, including extreme weather events, shifts in growing seasons, and different pests and plant diseases. Figure 3 provides an overview of the range of practices that farmers can undertake to adapt to climate change.

To make farming operations more resilient, farmers can enhance soil health, which will make agricultural systems better able to withstand extreme weather, drought, and erosion due to high winds or flooding.[21] Strategies for enhancing soil health include adjusting production inputs, timing of planting and soil amendments, cover crops, tillage, new crop species, and diversified crop rotations.[22]

Farmers can also take additional steps to make their farms more resilient to other climate risks. For example, to prepare for flooding, heavy rainfall, and other risks, farmers can implement resilient farm landscapes that include buffer strips and the return of marginal cropland to native vegetation. To prepare for new pests and diseases, farmers can diversify their crop selection and alter crop rotations. To adjust to changing seasons and a warming climate, farmers can plant different crops; crop scientists can also develop more heat- and drought-resistant crop varieties. Resilience planning is also important on the community level, as rural communities can ensure that new infrastructure investments supported by the Farm Bill, such as rural water and energy systems, are resilient to climate change effects.

C. Opportunities for Complementary Mitigation and Adaptation

Importantly, many on-farm practices can help with both climate adaptation and mitigation.[24] For example, improving soil health not only mitigates climate change, it also makes farms more resilient and better able to withstand the shifting, and at times extreme, conditions of a changing climate. Efficient fertilizer application will reduce GHG emissions while enhancing soil resilience. Similarly, cover cropping, diversified crops, and other practices that stabilize the soil will reduce GHG emissions from the soil while building soil health. It is important to note that the effectiveness of these on-farm practices will vary by region, affecting the ways they can and should be implemented.[25]

Mitigation and adaptation strategies for agricultural systems often require long-term planning to strengthen “climate-sensitive assets,” such as soil and water, over time and in changing conditions.[26] Developing better regionally specific agricultural climate and conservation practice adoption data is required for this long-term planning to be successful. From those baseline data, regional efforts will be critical to identify mitigation opportunities, develop strategic adaptation planning, and implement enhanced soil and livestock management practices.[27]

III. Metrics for Prioritizing Reform Proposals

As the summary above indicates, there are many actions that can promote climate change mitigation or adaptation in agriculture. In addition, changes can be made to every Title of the Farm Bill that would promote one or more of these mitigation and adaptation strategies. Given this complexity, the uncertainties associated with quantitative estimates of the mitigation potential of different strategies, and the qualitative differences between mitigation and adaptation as goals, we developed a range of qualitative metrics that we used to analyze potential reforms. In particular, we considered:

Potential magnitude of climate impact: Priority was given to proposals that had proven climate benefits, did not require significant additional research, and targeted the largest sources of agricultural GHG emissions.

Co-benefits: Priority was given to proposals that could increase resiliency or economic benefits of farms.

Equity: Priority was given to programs that could benefit small and large farms in all regions.

Scalability: Priority was given to proposals that seemed replicable and applicable to farms across the country or where Climate Hubs could facilitate regional diversity.

Enforceability/Administrability: Priority was given to proposals that could be tied in with or build upon existing requirements or programs in the Farm Bill.

Feasibility: Feasibility considerations included technical, economic, and political ease of implementation. Because any legislative change must be passed by Congress, political feasibility was determined to be one of the most important considerations. Accordingly, we prioritized proposals that seemed, based on stakeholder engagement, suitable for the next Farm Bill, given competing interests for funding and stakeholder sentiment towards climate action.

An analysis of these metrics is included throughout our recommendations. However, these should be considered as only a first step. While we have attempted to target the largest sources of GHG emissions, more detailed proposals will be required before there can be precise estimates of the potential for emission reductions. The USDA’s COMET-Farm, an online farm and ranch GHG accounting tool, can likely facilitate this effort.[28] Similarly, determining the economic feasibility of specific reform proposals has been difficult because of taxpayer subsidization, the uncertainty of how appropriations may be allocated, and the varying degrees of stringency that reforms could encompass (e.g., mandate vs. incentive). Finally, while previous Farm Bill reauthorizations can serve as a guide, the ongoing transitions at U.S. federal agencies engaged in Farm Bill programs will likely have impacts on the political feasibility of proposals that cannot be appropriately assessed at this time. For these reasons, we recommend that additional research measure the climate impact of proposals, outline the benefits and co-benefits for farmers and the public, articulate the administrability of the program, and gather stakeholder input and support for proposals.

IV. Pathways for Addressing Climate Change in the Farm Bill

To determine how the Farm Bill could better address climate change, we first categorized the range of mitigation and adaptation practices identified in Figures 2 and 3, above, in terms of their potential applicability to the Farm Bill. We then examined how these practices mapped onto the current titles in the Farm Bill. Finally, we assessed how the upcoming Farm Bill could better incentivize these actions across titles, with an eye toward win-win practices with both mitigation and adaptation benefits.

Figure 4 contains the range of possibilities we identified for addressing climate mitigation and adaptation by title. Fully assessing the impact of each of these policy options, and its interactions with other policies and programs, requires additional research and outreach to affected stakeholders. We discuss in more detail below a set of recommendations that best fit our metrics, indicated by bold font in this table.

Figure 4. Options for Addressing Climate Change by Farm Bill Title

All of these areas for reform have the potential to advance climate-ready agricultural practices through the Farm Bill. Many also have wide-ranging benefits beyond climate change mitigation or adaptation, such as enhancing on-farm productivity and using taxpayer dollars more efficiently. We elected to focus on three recommendations we judged to be particularly important based on the metrics we established in Part III.
Recommendation 1: Reform the crop insurance program to align farmers’ risk management incentives with growing climate risks and to promote soil health, through changes to the Crop Insurance and Conservation titles.

Recommendation 2: Ensure the best available science and research—including the outcome of pilot programs—are incorporated into Farm Bill programs; support dissemination of downscaled climate data through USDA regional offices and land grant universities to develop agricultural climate mitigation and adaptation capacity under Title VII.

Recommendation 3: Advance manure management collection and storage methods, as well as biogas development under Title IX to mitigate GHG contributions from livestock.

Recommendation 1: Reform the Crop Insurance Program to Align Incentives with Climate Risks

Crop insurance is deeply subsidized by the federal government, and it represents the single largest federal outlay in the farm safety net.[31] On average, taxpayers cover 62 percent of crop insurance premiums.[32] The insurance companies’ losses are reinsured by USDA, and the government also reimburses their administrative and operating costs.[33] The Congressional Budget Office anticipates that this program will cost taxpayers over $40 billion from 2016 to 2020.[34]

These subsidies disproportionately benefit large farms: while only about 15 percent of farms use crop insurance, insured farms account for 70 percent of U.S. cropland.[35] Small farmers struggle to utilize crop insurance because of the high administrative burden and challenges of insuring specialty crops.[36] In addition to clear equity concerns involving access to crop insurance, this situation is problematic from a climate perspective because larger farms are more likely to grow monocultures, which are both more vulnerable to pests and extreme weather events and can degrade soil health. Indeed, just four crops—corn, cotton, soybeans, and wheat—make up about 70 percent of total acres enrolled in crop insurance.[37]

The current loss coverage policies in the crop insurance program can discourage farmers from proactively reducing their risks by taking steps to enhance soil health and resilience. Because farmers with crop insurance are protected against losses incurred from impacts likely to increase with climate change, farmers may not be properly incentivized to respond to the changing conditions.[38] Some environmental organizations have even raised concerns that in response to the crop insurance transfer of risk, some farmers may be more willing to engage in unsustainable practices, such as aggressive expansion, irresponsible management, and use of marginal land.[39] In addition, farmers may make planting decisions based on the insurance program incentives rather than market-based signals.[40] In these ways, crop insurance can push farmers towards practices that pose risks to both their operations and taxpayer obligations.[41] It is therefore important that the crop insurance program better align farmers’ risk management incentives with the real and growing risks they face from climate change.

a. Reform crop insurance policies to incentivize soil health

One way to achieve this objective is through incentivizing or requiring farmers to undertake actions to improve soil management and promote soil health. Some specific changes to the crop insurance program that could promote these practices include:

Adjusting the length of policies to better reflect the value added from changes that improve long-term soil health.

Writing soil health requirements into insurance policies.

More generally, changes to the crop insurance program that reduce the magnitude of the subsidy offered to farmers, such as setting a dollar-per-acre cap, could reduce the moral hazard that current policies create.[42] The methodology used to set premiums could also be adjusted to rely more on the projected frequency and intensity of events such as droughts and floods rather than on backward-looking data. The USDA’s Risk Management Agency (RMA) has started to incorporate climate-related risk metrics into annual rates by weighting recent loss experience more heavily, thereby more accurately reflecting the risks that growers face. However, it is important to consider future risks from climate change as well.
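As an illustration of what “weighting recent loss experience more heavily” might look like, consider a hypothetical sketch. The decay parameter and the loss-ratio history below are invented for illustration; RMA’s actual rating methodology is far more involved than this.

```python
# Hypothetical sketch: an exponentially weighted average of historical loss
# ratios, so that recent (climate-driven) loss years move the rate more than
# old ones. All numbers here are invented, not RMA's actual method.

def weighted_loss_ratio(loss_ratios, decay=0.8):
    """Exponentially weighted average; the most recent year is last in the
    list and receives weight 1, with each older year discounted by `decay`."""
    n = len(loss_ratios)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * x for w, x in zip(weights, loss_ratios)) / sum(weights)

# Twenty stable years followed by five drought years of heavy losses:
history = [0.8] * 20 + [1.4] * 5

plain_mean = sum(history) / len(history)   # 0.92: droughts diluted by history
weighted = weighted_loss_ratio(history)    # ~1.20: recent droughts dominate
```

The weighted figure responds to the recent drought years far more strongly than the unweighted average, which is the intuition behind forward-looking rating; the text notes that projected future climate risk, not just recent experience, should also enter the rate.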

Requirements of the crop insurance program that act as disincentives to climate-friendly farming practices should be updated to account for growing climate risks farmers face. For example, RMA has guidelines in place about the termination of cover crops, because of concerns that these crops will scavenge water from the commodity crops.[43] This requirement can act as a disincentive to farmers’ adoption of cover cropping, a practice that builds the soil and reduces runoff in the non-growing season.[44] The next Farm Bill could specify that there should be no specific termination requirements for cover crops.

Insurance policies may also serve to incentivize some environmentally harmful practices, such as early and excess fertilizer application and cultivation of environmentally sensitive land.[45] Because early application maximizes crops’ uptake of nitrogen, it can increase yield in the short term, but it contributes to nitrous oxide emissions, unhealthy soils that become less able to fix nitrogen and must rely increasingly on fertilizer, and polluted runoff. In addition, synthetic fertilizers, which are made from non-renewable materials, including petroleum and potash, are produced at a huge energy cost.[46] Some studies have suggested that crop insurance may induce some farmers to convert highly erodible land or wetlands to farmland.[47] The next Farm Bill could therefore specify that such practices are not required for crop insurance eligibility. This change could be complemented by an increase in the length of insurance policies, as discussed above, because insurance companies would benefit from the longer-term improvements in soil health.

b. Tie crop insurance to a new conservation compliance provision for building soil health for climate ready agriculture

Currently, in order to qualify for crop insurance, farmers must satisfy two conservation compliance requirements, the Wetland Conservation (“Swampbuster”) and Highly Erodible Land Conservation (“Sodbuster”) provisions.[48] These provisions ensure, respectively, that farmers do not convert a wetland or plant crops on highly erodible land or a previously converted wetland.[49] While these current conservation requirements are beneficial in addressing some climate impacts, adding a conservation compliance requirement directly targeted at climate-related practices would improve upon them.

With 70 percent of cropland enrolled in the crop insurance program, changes in conservation compliance through the next Farm Bill or through RMA’s policies can drive significant climate benefits. Under Title II, Congress could create an additional conservation compliance requirement for climate-friendly agricultural practices, which could either be required to obtain crop insurance or could make farmers eligible for rebates. The types of on-farm practices that could mitigate risk and enhance climate resilience include more precise irrigation and fertilizer application, reduced tillage of the soil, cover cropping, altering crop rotations, and building buffer strips and riparian buffers. Particularly beneficial practices for building resilient soil include cover cropping, diversified crop rotations, reducing tillage, and efficient irrigation.[50]

In addition, enforcement gaps have limited the success of the existing conservation compliance requirements. To make the mechanism effective, it will be important to establish simple and effective enforcement, for example by using remote sensing, and to ensure that Natural Resources Conservation Service (NRCS) offices have sufficient resources to carry out enforcement efforts.

These proposals could produce significant climate benefits from increasing soil health, in terms of both mitigation and adaptation. Reform of the crop insurance and conservation titles could also help address some of the equity issues that currently exist between small and large farms. Existing USDA programs, described in the next section, could help with scalability and administrability. Finally, in terms of feasibility, while any change may be difficult, our stakeholder engagement indicated that farmers are open to programs that target soil health, given the potential economic benefits to their farms. While the actual on-farm impacts will vary based on how the program is designed and constructed, building more resilient, healthy soil can help improve environmental outcomes and decrease the risk of crop loss.[51]

Recommendation 2: Ensure Best Available Science and Research Guides Farm Bill Programs

Agricultural practices that promote climate change mitigation and adaptation, including those described above, are often regionally specific in their implementation. For many new climate-ready practices to be included in conservation compliance or crop insurance, the USDA would need to account for this regional specificity. For example, the benefits of many of the on-farm practices that improve soil health, including more precise irrigation and fertilizer application, reduced tillage of the soil, and altering crop rotations, vary by region and soil type. In some areas, no-till methods may be infeasible; farmers who try to implement no-till in these areas would likely continue to till to some degree or after a short period of time, resulting in quick reversal of the achieved carbon sequestration benefits. Furthermore, the technical specificity of choosing among these practices and correctly implementing them requires guidance at a local level.

To address these types of knowledge gaps and to provide technical assistance to states and farmers, the USDA has created a range of programs, including Climate Hubs, which were established at public land-grant universities in 2014.[52] The Hubs deliver science-based knowledge, practical information, and program support to enable “climate-informed decision-making” by farmers.[53]

Increasing funding in the 2018 Farm Bill in Title VII, the Research title, could solidify and expand USDA’s ability to administer and scale climate research and outreach efforts across all regions of the country. Additionally, creating systems to collect and analyze regional data on pilot programs and ensure best practices are adopted could assist long-term efforts to incorporate climate policies into Farm Bill programs.[54] For these reasons the Farm Bill should provide additional funding for climate research and monitoring, especially focused on regional resilience.

Recommendation 3: Advance Manure Management and Biogas Development to Mitigate Livestock Emissions

Improving livestock management, especially manure management, is a significant opportunity to mitigate methane emissions while achieving several co-benefits for the public and farmers. There is currently very little regulation of livestock manure management. Manure is sometimes stored, uncovered, in a single collection site, allowing methane to be released directly into the atmosphere. In addition to being a major source of GHG emissions, uncovered manure storage can cause a range of considerable environmental harms.[55]

a. Require improved manure management, including the covering of lagoons

First, the upcoming Farm Bill could address manure management collection and storage methods. Practices can be improved through actions such as allowing livestock to roam,[56] covering manure lagoons, flaring the methane produced, or producing biogas for use. Simply covering a manure lagoon results in significant decreases in methane emissions, as well as decreased odors. Flaring is the combustion of methane, which yields water and carbon dioxide. Although flaring still emits GHGs, carbon dioxide is a less potent GHG than methane.
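The flaring arithmetic can be made concrete. A minimal sketch, assuming the AR4 100-year GWP of 25 for methane (the value used in EPA inventories of this period):

```python
# Illustrative arithmetic for flaring manure-lagoon methane rather than
# venting it. The GWP value is an assumption (AR4 100-year GWP of 25 for
# CH4); combustion follows CH4 + 2 O2 -> CO2 + 2 H2O.

GWP_CH4 = 25                   # 100-year global warming potential (CO2 = 1)
M_CO2, M_CH4 = 44.01, 16.04    # molar masses, g/mol

def co2e_if_vented(tonnes_ch4):
    """CO2-equivalent (tonnes) if the methane escapes to the atmosphere."""
    return tonnes_ch4 * GWP_CH4

def co2_if_flared(tonnes_ch4):
    """CO2 (tonnes) if the methane is fully combusted: one mole of CO2
    is produced per mole of CH4 burned."""
    return tonnes_ch4 * (M_CO2 / M_CH4)

# Flaring one tonne of CH4 trades 25 t CO2e for about 2.7 t of CO2,
# roughly an 89 percent reduction in warming impact:
reduction = 1 - co2_if_flared(1.0) / co2e_if_vented(1.0)
```

This is why flaring, despite still emitting a GHG, is such a large improvement over venting: each tonne of methane destroyed removes far more warming potential than the combustion CO2 adds back.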

The Farm Bill could promote these practices either through incentives or mandates in the Conservation or Crop Insurance titles. For example, the Farm Bill could require or incentivize farmers with a threshold number of cattle, swine, or poultry to cover manure and flare the produced methane in order to be eligible for crop insurance. Such a mandate would have the greatest impact at Concentrated Animal Feeding Operations (CAFOs), which may also be better able to bear the high capital costs associated with biogas production.

b. Pursue strategies to decrease methane emissions, including biogas and other on-farm renewable energy production

Second, the Energy Title could incentivize on-farm biogas production. On farms, many different substrates may be used to produce biogas, including animal excrement (from cattle, swine, poultry,[57] and horses), food waste, milling by-products, and catch crops (such as clover grass on farms without livestock).[58] Farmers can realize substantial savings from biogas production, including by substituting biogas for other energy sources, by substituting digestate[59] for commercial fertilizers,[60] and by avoiding disposal and treatment of substrates (such as waste-water treatment). Farmers may also be able to sell carbon offsets.[61] In addition, farmers producing biogas can avoid some of the worst problems with animal agriculture: farmers must do something with the manure, and its storage can produce strong odors,[62] unhealthy conditions for workers and families,[63] and, in the worst scenarios, pollution through runoff.[64]

Farmers have two main options for biogas use: (1) generation of electricity for on-site use or sale to the grid; and (2) direct use of biogas locally, either on-site or nearby.[65] Using the biogas to fuel a generator that produces electricity is considered the most profitable use for most farms.[66] Another option is to upgrade the biogas to biomethane, which can be injected into the natural gas pipeline network as a substitute for extracted natural gas.

Because farmers could benefit financially from on-farm use or the sale of biogas, the Farm Bill should continue and expand funding for the Rural Energy for America Program, which offers cost-sharing grants and loans for renewable energy improvements.[67] However, these programs are most likely to benefit large farms, because anaerobic digesters are expensive and require a large and constant supply of substrate to produce a return on investment. We therefore suggest that the Farm Bill also fund pilot programs to help small farm communities form cooperatives so that they, too, can utilize this technology and participate in the grant or loan program.

Even with the available grants and loans, farmers are still taking a substantial financial risk. USDA or land-grant universities should actively help communities or cooperatives with the planning and application process. Large farms or cooperatives who are unable or unwilling to operate and maintain anaerobic digesters themselves could hire a company to lease the equipment and manage the biogas production process.[68] USDA Rural Development Agencies could be a valuable liaison between biogas management companies and farmers.

CAFOs could be enrolled in a voluntary program or required to use anaerobic digesters, given their greater contribution to climate change and other environmental harms. Because CAFOs are responsible for high levels of greenhouse gas emissions, and because anaerobic digesters are economically feasible for large operations, there is reason to consider the benefits that could be achieved by requiring these practices for large CAFOs in the Farm Bill.

Livestock management is a critical area for addressing climate impacts, and biogas has the potential to be a win-win for farmers willing to invest in alternative energy production.

Conclusion

The U.S. agricultural system must evolve to mitigate climate change and adapt to the effects of a changing climate. Opportunities for climate change mitigation and adaptation exist across the Farm Bill titles, from bolstering climate resilient infrastructure in the Rural Development title to incentivizing sustainable forest management in the Forestry Title. Taking action on climate measures in the next Farm Bill reauthorization will help farmers better plan for changing conditions, protect taxpayers from increasing risks, and assist the United States in meeting its global climate commitments. The next Farm Bill should incorporate climate risk management provisions, and state and local actors should consider ways to support these efforts.

[7] EPA, Overview of Greenhouse Gas Emissions [hereinafter EPA, Overview], https://perma.cc/7WS6-JXQY. The two to three percent of emissions unaccounted for are fluorinated gases, which are synthesized during industrial processes. Id.

[13]See U.S. Dep’t of Agric., USDA Agriculture Climate Change Adaptation Plan 9 (2014) [hereinafter USDA, Adaptation Plan], https://perma.cc/8SM9-5NDX; Louise Jackson & Susan Ellsworth, Scope of Agricultural Adaptation in the United States: The Need for Agricultural Adaptation, in The State of Adaptation in the United States (2012), https://perma.cc/HS57-K35T.

[14] For example, a recent report from the Office of Management and Budget and the Council of Economic Advisers estimates that the annual cost of the crop insurance program will increase by $4 billion per year by 2080 as a result of the impacts of climate change. OMB & CEA, Climate Change: The Fiscal Risks Facing the Federal Government 6 (Nov. 2016), https://perma.cc/4Y22-P85V; see also USDA, Adaptation Plan, supra note 13, at 9.

[15] U.S. Dep’t of Agric., Climate Change and Agriculture in the United States: Effects and Adaptation 126–27 (2013) [hereinafter USDA, Effects and Adaptation], https://perma.cc/QW8T-Y4RL.

[19] For a more detailed review of how carbon sequestration can be increased in agriculture, see Daniel Kane, Nat’l Sustainable Agric. Coal., Carbon Sequestration Potential on Agricultural Lands: A Review of Current Science and Available Practices (2015), https://perma.cc/R4WA-2PPK.

[25] For example, in the Central Valley of California, an adaptation plan that included integrated changes in crop mix and altered irrigation, fertilization, and tillage practices, was found to be most effective for managing climate risk. Id. Along with the USDA Climate Hubs, the following organizations have undertaken projects related to regional agricultural adaptation research and planning: California Healthy Soils Initiative; Wisconsin Initiative on Climate Change Impacts; Southeast Florida Regional Climate Change Compact; The Mid-Atlantic Water Program; U.S. Midwest Field Research Network for Climate Adaptation.

[35] U.S. Dep’t of Agric., Structure and Finances of U.S. Farms: Family Farm Report, 2014 Edition 32–33 (2014), https://perma.cc/S9YP-P6CY.

[36] Generally, the more diverse or specialized the crops and livestock a farmer produces, the harder it is to obtain insurance. Existing policies are not designed to support small producers: they are administratively complex and burdensome, and they carry high premiums for small farmers. On the one hand, if small farmers used yield-based or revenue-based insurance policies, they would need to purchase insurance for each crop, which requires producing a significant volume of each single crop to justify the paperwork and setting up a contracted purchase price from a processor. On the other hand, whole farm insurance policies are based on the average adjusted gross revenue of the farm, regardless of the variety of products the farmer grows. This type of policy is better suited to diversified farmers, but may still be too cumbersome for small farms. See Jeff Schahczenski, Nat’l Sustainable Agric. Info. Serv., Crop Insurance Options for Specialty, Diversified, and Organic Farmers (2012), https://perma.cc/64P6-CTRC; Nat’l Sustainable Agric. Coal., Have Access Improvements to the Federal Crop Insurance Program Gone Far Enough?, NSAC’s Blog (July 28, 2016), https://perma.cc/PT37-RNNL.

[45] USDA’s Economic Research Service found that “[l]ands brought into or retained in cultivation due to these crop insurance subsidy increases are, on average, less productive, more vulnerable to erosion […] [than] cultivated cropland overall. Based on nutrient application data, these lands are also associated with higher levels of potential nutrient losses per acre.” USDA Economic Research Service, Report Summary: Environmental Effects of Agricultural Land Use Change (Aug. 2006); see also Daniel Sumner & Carl Zulauf, The Conservation Crossroads in Agriculture: Insight from Leading Economists, Economic and Environmental Effects of Agricultural Insurance Programs, The Council on Food, Agricultural and Resource Economics (2012).

[46] See Stephanie Ogburn, The Dark Side of Nitrogen, Grist (Feb. 5, 2010), https://perma.cc/9J6E-ZD9J (“About one percent of the world’s annual energy consumption is used to produce ammonia, most of which becomes nitrogen fertilizer.”).

[54] The existing ARS LTAR system, which conducts long-term sustainability research, could be used to inform the regional best practices communicated in outreach efforts. See Agric. Research Serv., U.S. Dep’t of Agric., Long-Term Agroecosystem Research (LTAR) Network, https://perma.cc/6XRT-FBTC.

[55] For example, manure management practices can create a public nuisance for which neighbors have little recourse. In addition, runoff from agriculture is not adequately regulated under the Clean Water Act and results in pollution to the nation’s waterways. Every year a hypoxic zone, also called a dead zone, develops where the Mississippi River dumps pollution from Midwest livestock and fertilizers into the Gulf of Mexico. See Kyle Weldon & Elizabeth Rumley, Nat’l Agric. L. Ctr., States’ Right to Farm Statutes, https://perma.cc/Y8XA-KUBR; Ada Carr, This Year’s Gulf of Mexico “Dead Zone” Will Be the Size of Connecticut, Researchers Say, Weather.com (June 15, 2016), https://perma.cc/36ZZ-NKY9.

[56] Farms where the cattle range freely do not release as much methane to the atmosphere because the less consolidated manure is more likely to be absorbed into the soil rather than anaerobically digested to produce methane.

[57] Using poultry manure as a substrate can be difficult because feathers and poultry litter can clog anaerobic digesters. See Donald L. Van Dyne & J. Alan Weber, Special Article, Biogas Production from Animal Manures: What Is the Potential?, Industrial Uses/IUS-4 20, 22 (Dec. 1994).

[59] Digestate is the solid that is left over after biogas has been produced. Digestate can be sold or used on-farm as fertilizer. It smells better than manure, is free of harmful bacteria, and contains nitrogen in a form that is more bioavailable for crops.

[60] Forty organic farms in Germany, in a region without livestock, have found it worthwhile to cooperate in supplying and transporting clover grass up to 50 km to an anaerobic digester because the digestate provides them with a flexible organic fertilizer. See SustainGas, supra note 60, at 28. They find that the digestate leads to higher-quality food crops. Id. “Biogas has to serve food production via improved nutrient supply,” one farmer says. Id.

[61] If farmers can show that they have reduced their methane emissions, they may be able to sell the carbon offsets in exchanges such as the California GHG cap and trade market. See Cal. Air Resources Bd., Compliance Offset Protocol, Livestock Projects: Capturing and Destroying Methane from Manure Management Systems (2014), https://perma.cc/68EF-2SB9.

[62] The odor-reducing benefits are viewed as especially desirable for poultry and swine farms.

[63] Biogas plants dispose of waste and sewage, making conditions healthier. Not only does the anaerobic digestion process remove pathogens, but because biogas production requires collecting manure at a central location, some unhygienic conditions are avoided. See Julia Bramley et al., Tufts Department of Urban & Environmental Policy & Planning, Agricultural Biogas in the United States: A Market Assessment 122 (2011), https://perma.cc/Z4ER-S4SD.

[64] Livestock manure generated at cattle yards and dairy farms can contaminate surface and ground water through runoff. Anaerobic digestion sanitizes the manure to a large extent, decreasing the risk of water contamination. Id.

[68] This model is frequently used for wind energy production. See Agric. Research Serv., U.S. Dep’t of Agric., Wind and Sun and Farm-Based Energy Sources, Agric. Res., Aug. 2006, https://perma.cc/ZBJ9-R74Q.

The California Cap-and-Trade Program (“CAT”) is derived from the California Global Warming Solutions Act of 2006 (“Global Warming Act”), which requires the State to reduce its greenhouse gas (“GHG”) emissions to 1990 levels by 2020.[1] The California Air Resources Board (“CARB”) is the State regulatory agency responsible for administering the program.[2] In 2011, the CARB adopted cap-and-trade regulations and created the CAT to set limits on GHG emissions.[3] The first auctions for the CAT were held in 2012, and the program went into full effect on January 1, 2013.[4]

The CAT operates in two phases each year. First, a number of emission allowances are freely distributed to entities that fall under the purview of the program.[5] Second, the remaining allowances are auctioned off on a quarterly basis.[6] The free distributions are reduced annually, and eventually all the allowances will be distributed via auctions.[7] The program also permits carbon offsets to satisfy up to eight percent of an entity’s compliance obligations.[8] The ultimate objective is to create incentives for businesses to craft environmentally friendly industrial practices as the number of yearly allowances decreases over time.

The CAT also has an enormous scope: it is the world’s second largest market-based mechanism designed to reduce GHG emissions.[9] This size makes the successful implementation of the program especially impressive. The success is due largely to a design structure that learns from the shortcomings of previous cap-and-trade initiatives, such as the Regional Greenhouse Gas Initiative (“RGGI”) in the northeastern United States and the Emissions Trading System (“ETS”) in the European Union.

II. Lessons Learned from the Regional Greenhouse Gas Initiative

The CAT was not the first emissions marketplace in the United States. In 2009, the RGGI went into effect as a cap-and-trade marketplace for CO2 emissions in the following nine states: Connecticut, Delaware, Maine, Maryland, Massachusetts, New Hampshire, New York, Rhode Island, and Vermont.[10] However, the RGGI has been plagued with numerous shortcomings that have frustrated the performance of the initiative and which impart several lessons on how to more effectively design a cap-and-trade system.

A. Lesson 1: Cap-and-Trade Programs Need a Broad Scope

A key drawback of the RGGI is its limited scope. The program applies exclusively to CO2 emissions and only covers electrical power plants with the capacity to generate twenty-five or more megawatts.[11] Predictably, the results of the RGGI have been underwhelming, as only 163 facilities fall under the regulatory reach of the program.[12] Furthermore, CO2 emissions merely account for twenty percent of the GHG emissions in the nine participant states—a number that shrinks even further since the RGGI only regulates the electrical sector.[13] This narrowed scope has undermined the efficacy of the RGGI so drastically that Congress considers the program’s contribution to global GHG reductions to be “arguably negligible.”[14]

B. Lesson 2: Emission Forecasts Must Be Accurate

The second significant failing of the RGGI was that it overestimated the amount of CO2 emissions among the member states.[15] In fact, the RGGI set an initial emissions cap that was above actual emissions levels.[16] This was a gross oversight that stemmed from two key defects in the RGGI’s design.

First, the RGGI emission limits for the first cap period, which ran from 2009–2013, were based on emission estimations made in 2005.[17] Between 2005 and 2009, the amount of electricity generation in the member states decreased by thirty-six percent due to energy efficiency improvements and structural changes in energy generation portfolios.[18] Second, the RGGI distorted its emission forecasts by including all electrical power plants that had the capacity to generate twenty-five or more megawatts in its estimates.[19] Limiting the emission calculations to power plants that actually generated twenty-five or more megawatts would have produced more accurate projections.

These errors have been catastrophic for the initiative. The initial regulations had no effect on most businesses, which were already emitting below the inflated emissions cap.[20] Participation in the RGGI was therefore minimal, since many of the targeted businesses had no need to reduce emissions, purchase allowances, or generate offset credits.[21] Furthermore, because the RGGI does not limit the amount of allowances that can be “banked” and used in subsequent years, many companies have stored substantial amounts of these initial surplus allowances for future use.[22]

The administrators of the RGGI have taken extreme measures to try to remedy these miscalculations. Most notably, they implemented a “revised emissions cap,” running from 2014–2020, that slashes the emission limits by forty-five percent in an effort to match actual emission levels.[23] Such radical action would not have been necessary if the initial emissions cap had been more precise.

C. Lesson 3: Auctions Need Robust Price Floors

A final pitfall of the RGGI is its undervalued price floor for auctions. The reserve price has hovered around two dollars per allowance, despite being scheduled to increase according to the Consumer Price Index (“CPI”).[24] But the fact that auctioned allowances have sold at prices exceeding five dollars indicates that businesses are willing to pay more.[25] The program therefore severely underestimated corporate demand for allowances and forfeited substantial potential earnings. Moreover, by greatly undervaluing the price floor, the RGGI administrators failed to protect against suboptimal years when allowance prices plummeted. A higher reserve price would have preserved the revenue generation capacity of the program, even during these off years.[26]

III. Lessons Learned from the European Union’s Emission Trading System

There are also numerous lessons to be learned from the deficiencies of the European Union’s ETS, which is the world’s largest market-based mechanism for reducing GHG emissions.

A. Lesson 1: Cap-and-Trade Programs Need Ambitious Initial Targets

At the conclusion of Phase I of the ETS, the “Learning Phase” that ran from 2005–2007, it was apparent that the initial targets for emission reductions were far too lenient.[27] Indeed, the lax regulations during Phase I only produced GHG reductions of three percent.[28] The EU was forced to compensate by crafting extreme targets for Phases II and III of the program, setting emissions goals of six percent below 2005 levels for Phase II and twenty-one percent below 2005 levels for Phase III.[29] If the EU had formulated a more ambitious target for Phase I rather than over-prioritizing the transition of members into the program, it would have avoided the need for these drastic adjustments.

B. Lesson 2: Allowances Must Be Apportioned Judiciously

Similar to the RGGI, the ETS grossly over-allocated emission allowances. In fact, ETS allowances initially exceeded the amount of actual emissions by four percent.[30] This miscalculation was devastating for Phase I of the ETS, as it enabled European businesses to emit 130 million tons more in GHGs than they had emitted prior to the implementation of the program.[31] This surplus destroyed the demand for allowances in the ETS marketplace, and auction prices fell precipitously.[32] The EU was forced to heavily reconfigure ETS allowance allocations to try to mitigate the damage caused by these initial overestimations, and it is still attempting to normalize the ETS marketplace.[33]

C. Lesson 3: Cap-and-Trade Programs Need Balanced Market Designs

The ETS has also been hamstrung by its inferior market design. Phase I of the program did not permit any allowances to be banked for future use.[34] Coupled with the initial over-allocation of allowances, this meant that most regulated entities possessed surplus allowances they had to expend by year-end. This resulted in extreme downward price volatility at the conclusion of trading periods, as many companies attempted to dump the remainder of their emission allowances into the auctions.[35] The EU was once again forced to implement significant revisions to correct this oversight.[36] And while the ETS now permits allowances to be banked, the initial trading instability across Europe nearly destroyed the program.[37]

The EU also does not set a reserve price for ETS auctions, meaning there is no price protection for emission allowances.[38] This remains a gross oversight by the EU, as the lack of a price floor fails to account for the inevitable fluctuation of allowance prices due to changes in weather or energy price cuts. As a consequence, the ETS has lost significant revenue during periods of low auction demand where allowances have sold for pennies on the dollar, and the program will continue to be financially vulnerable until this design flaw is remedied.[39]

D. Lesson 4: Cap-and-Trade Programs Need Administrative Uniformity

Administrative inefficiencies have also plagued the ETS. The most glaring flaw was the initial lack of a single registry for ETS participants.[40] Prior to 2012, each nation participating in the ETS had its own registry, which resulted in inconsistent regulation across the system.[41] The Danish registry, for example, failed to vet its registrants for two years.[42] The registry ultimately became so saturated with fraudulent companies that over ninety percent of account holders had to be deleted in 2010.[43] Even after the EU moved all participants into a single registry, the credibility lost among consumers during these initial years continues to plague the reputation of the program.

E. Lesson 5: Cap-and-Trade Programs Need Strong Cyber-Security

The final shortcoming of the ETS is that its cyber-security has been highly vulnerable. “Phishing” has been one particularly vexing problem. The scam involves the creation and promotion of fake registries that solicit users to reveal their ETS identification codes. The “phishers” then use this information to carry out carbon trading transactions in legitimate registries. These deceptions have had severe economic ramifications: as much as three million euros have been stolen in a single month.[44]

Hacking has been another key cyber-security issue for the ETS. Hackers have been able to infiltrate users’ computer systems and sell off all their allowances for immediate cash payments on the “spot market.”[45] Numerous companies have been crippled by this scam, and hackers have defrauded certain businesses of more than seven million euros worth of emission allowances.[46]

IV. The Success of the California Cap-and-Trade Program

When considering the numerous oversights of the RGGI and ETS programs, the success of the CAT is doubly impressive. This success is due to the balanced design of the CAT, which incorporates the strengths of the RGGI and ETS while mitigating their weaknesses.

A. Success 1: The CAT Accurately Forecast Emissions

Both the RGGI and ETS erred by overestimating actual emission levels and allocating excessive allowances. The CARB avoided this mistake by crafting a precise allocation methodology that prevented surplus allowances from derailing the auction marketplace. Foremost, the CARB calculated California emission levels for the years immediately preceding the creation of the CAT to more accurately forecast future emissions. The CARB also narrowed the variability of its emissions estimates by only including emitters that had actually emitted 25,000 or more metric tons of CO2 or equivalents.[47] Emitters that merely had the capacity to emit beyond the 25,000 metric ton threshold were not included in the calculations. The greater accuracy of the CAT estimates was evidenced during the program’s first quarterly auction in 2012, where all twenty-three million allowances offered were purchased above the reserve price.[48]

B. Success 2: The CAT Began Ambitiously While Also Facilitating Transition

Another common error of the RGGI and ETS was that their design strategies over-prioritized transitioning members into their systems. The programs initially neglected to implement substantive emission reduction targets for fear of overwhelming participants, and they have subsequently instituted dramatic reforms to compensate. By contrast, the CARB recognized the need to balance the transition of members into the program against regulatory efficacy, lest one derail the other.

The CARB facilitated the transition of participants into the CAT by narrowing the scope of the first compliance period to only cover electrical and industrial sectors. It waited until the second compliance period to expand into the transportation and heating fuel sectors to provide companies time to adjust their business practices.[49] Yet the CARB also implemented considerable GHG reduction targets. The CARB initially set a 2020 reduction goal of seventeen percent below 2013 levels, which still eclipses the target of the RGGI.[50] Due to these ambitious benchmarks, the CAT has already produced “non-negligible” emission reductions and economic gains, with 2013 alone seeing GHG reductions of over a million and a half metric tons and statewide economic growth of two percent.[51] The CAT has benefitted greatly from such a stable infrastructure, and it remains on track to reach its ultimate emission reduction target by 2020.[52]

C. Success 3: The CAT Has a Broad Scope

The CARB also built off the mistakes of the RGGI by broadening the regulatory scope of the CAT. Because it only regulates CO2 emissions, the RGGI covers less than twenty percent of the GHG emissions generated across its nine participating states.[53] By contrast, the CAT emulates the ETS by also covering CO2 equivalents such as CH4, N2O and other fluorinated GHGs, resulting in more effective emission restrictions.[54] The CARB also recognized that the RGGI erred in solely regulating electrical power plants. Accordingly, the CARB extended CAT regulations into other sectors heavy in GHG emissions, such as industrial, transportation, and heating fuel sectors.[55] Because of this broader scope, the CAT already covers over 600 facilities in California, whereas the RGGI only reaches 163 facilities across nine states.[56] The CAT also covers more than eighty-five percent of California’s GHG emissions, which is almost four times the amount of GHG coverage under the RGGI.[57]

D. Success 4: The CAT Has a Balanced Market Design

The CAT also avoided the severe design blunders of the RGGI and ETS. Rather than undervaluing or ignoring auction price floors, the CARB instituted a strong reserve price of ten dollars in 2012, which has been set to increase each year thereafter by five percent (in addition to increases for inflation).[58] Allowances have consistently sold above these amounts, but the price floor has provided steady protection against downward price volatility during poor trading periods.[59] Moreover, the built-in mechanism for annual increases to the reserve price has ensured that the price floor continues to increase irrespective of CPI circumstances.[60]

The CAT further protects against precarious price drops by permitting allowances to be banked.[61] This avoids the price instability problems of the ETS by discouraging businesses from dumping surplus allowances into auctions at the end of trading periods. Nevertheless, the CAT imposes limits on the maximum amount of allowances that can be held by a business.[62] This circumvents the design flaw of the RGGI that allows businesses to bank an inordinate amount of allowances and eliminate any need to subsequently reduce emissions.[63]

The revenues generated by the CAT best demonstrate the success of its market design. The first auction raised more than $289 million, and the first compliance period generated $969 million in revenue for California.[64] Projections estimate that the CAT will generate two billion dollars or more per year as the program’s regulatory scope continues to scale upwards.[65]

E. Success 5: The CAT Has Strong Administrative and Security Practices

The CAT has also benefitted immensely from its efficient administration and strong security practices. Foremost, the CAT keeps a single registry for all its regulated entities, ensuring vigilant and orderly monitoring of all participants.[66] The cyber-security protocols of the CAT have been extremely successful as well.[67] To prevent hackers and phishers from infiltrating the program, CAT auctions take place over a four-hour window that is constantly supervised by state employees.[68] The bidders and supervisors remain undisclosed to the public, and all parties must surrender their electronic devices during the auction.[69] This “sealed bid” approach to the auctions has protected the CAT from the fraud and counterfeiting issues that tormented the RGGI and ETS.[70]

V. A Recent Legal Challenge: Are Cap-and-Trade Auctions Tax Programs?

Despite the success of the CAT, the program has faced serious legal obstacles. The principal challenge took place in the recent Morning Star Packing Company v. California Air Resources Board case, where the plaintiffs alleged that the auctions were unconstitutional and violated California law.[71] The chief contention was that the CAT constituted a tax on companies for emitting GHGs.[72] The plaintiffs argued that the statutory authorization of the CAT, the Global Warming Act, therefore fell under the purview of California’s Proposition 13, which requires legislators to pass by two-thirds vote “any act to increase state taxes for the purpose of increasing revenue.”[73] Because the Global Warming Act was not passed by a two-thirds vote, the plaintiffs asserted that the CARB exceeded its regulatory authority when it created the CAT.[74]

The dispositive issue in the case was whether the auctions were unconstitutional taxes or permissible regulatory fees placed on tradable commodities.[75] The Sacramento County Superior Court ultimately upheld the CAT, concluding that emission allowances were tradable commodities in a marketplace.[76] The court considered several distinctions between taxes and regulatory fees, but the chief difference was that whereas the government sets tax prices, the market determines the auction price of the emission allowances.[77] Thus, the fact that the allowances had no value independent of the California regulatory scheme did not transform the auctions into a tax program, and the allowances remained tradable commodities.[78]

Yet the superior court ruling did not mark the end of the contentious litigation. The Morning Star decision was appealed to the California Court of Appeal, which affirmed the lower court judgment by a two-to-one majority.[79] In turn, the appellate court ruling was appealed to the California Supreme Court, which ultimately declined to hear the case in June of 2017.[80] What should have been a resounding victory, however, was diminished by the fact that the State Supreme Court did not issue a written opinion on the program itself.[81] Nevertheless, the affirmation of the CAT provided market-based environmentalism with a new lease on life and has galvanized California policymakers and legislators.

VI. The Aftermath of Morning Star

The ramifications of Morning Star have already been substantial in California. State legislators quickly capitalized on the State Supreme Court’s dismissal of the case by voting to extend the CAT an additional ten years, through 2030.[82] The extension produced newfound confidence in environmentalism and revitalized the market economy surrounding the CAT: whereas previous quarterly auction sales had dropped sharply, the California government sold every emission permit offered in the August 2017 auction.[83]

Yet these successes have not been replicated on a national scale. This is somewhat perplexing, as the CAT provides a workable model upon which to base the creation of a federal cap-and-trade program. In particular, Congress could convincingly argue that the Morning Star case supports the notion that cap-and-trade programs deal with tradable commodities and do not constitute tax programs. Congress could therefore avoid having to rely on the Taxing and Spending Clause of the Constitution to justify the creation of an auction program and, instead, could derive its authority from the broader powers of the Commerce Clause.

The affirmation of Morning Star also provides strong persuasive reasoning for Congress to resolve the longstanding debate on whether emission allowances are “physical” (or “nonfinancial”) commodities, which are physically deliverable and consumable, or “financial” commodities that are satisfied through cash settlements.[84] Relying upon the Morning Star court’s description of allowances as being consumable and involving the physical transfer of title, Congress now has a strong basis for asserting, on the federal level, that allowances are physical commodities.[85] This would shield a federal cap-and-trade program from the administrative burdens of complying with the Commodity Exchange Act and other commercial regulations.[86]

Despite the reasoning provided by Morning Star, recent federal policy has demonstrated a marked shift away from the environmentalist approach espoused by the Obama Administration. The recent withdrawal of the Clean Power Plan, the Obama-era rule regulating greenhouse gas emissions, best evinces this change in protocol.[87] Indeed, with the Environmental Protection Agency consistently the choice target of President Trump’s proposed budget cuts, environmentalism on a national level has been placed in a precarious position.[88]

It remains to be seen whether this federal paradigm shift will take a toll on the CAT. It is certain, however, that the demise of the CAT would be the death knell for market-based environmentalism in the United States. Fortunately, the CAT has several contingency protocols to counteract market volatility. In particular, the CARB can hold unsold allowances off the market for at least nine months to compress the supply and force participants back to the auctions.[89] This foresight proved invaluable in the wake of the initial Morning Star appeal in 2016, during which the May 2016 and August 2016 auctions sold only eleven percent and thirty-five percent, respectively, of the allowances offered.[90] The remedial mechanisms built into the CAT allowed administrators to re-stabilize the market, and the November 2016 auction resulted in the successful sale of eighty-nine percent of the offered allowances.[91] Nevertheless, these contingencies are merely stopgap solutions, and hesitation among market participants will likely resurface as California and national policy continue along their collision course. Until a clear and unified path toward environmentalism is forged across the nation, an ominous shadow will remain cast over the CAT.

VII. Conclusion

The CAT has been a landmark initiative for environmentalism in the United States. Incorporating lessons from the RGGI and ETS, the program has struck a masterful balance in its market design and has produced significant environmental and financial gains for California. The affirming decision of the California judiciary and recent expansion of the program by the California legislature have been beacons of hope for cap-and-trade. Despite these successes, the future of the CAT remains in doubt, plagued by an uncertain socio-political climate where federal support for environmentalism has recently waned. And while the CAT has withstood previous legal and economic challenges, it is undeniable that the decisive battle for market-based environmentalism across the United States has begun.

[5] Id. From 2013–2015, the program covered electrical and industrial power plants that emitted 25,000 or more metric tons of CO2 or equivalent gases per year. Since 2015, fuel distributors have also been covered.

[86] See, e.g., 7 U.S.C. § 1a(47)(B)(ii) (2012) (excluding from the definition of “swap” “any sale of a nonfinancial commodity or security for deferred shipment or delivery, so long as the transaction is intended to be physically settled”).

The history of the American west is inextricably intertwined with damming rivers.[1] Whether for navigation, irrigation, or hydroelectric power, nearly every American river has been dammed.[2] In fact, stretching back to the day the Founding Fathers signed the Declaration of Independence, determined Americans have finished an average of one large-scale dam every day.[3] Currently, there are at least 76,000 dams in this country.[4]

While these dams have vastly contributed to America’s efforts to settle the west, they have come with significant costs. Although these dams’ harms are varied,[5] one of the primary concerns among advocates in the Pacific Northwest is the dramatic impacts dams have on species of anadromous fish, particularly salmonids.[6] In the Columbia River basin, dams block salmon and steelhead migration to more than 55% of historically available spawning grounds.[7] Since many anadromous fish species in the Pacific Northwest are listed as either threatened or endangered,[8] the Endangered Species Act[9] (ESA) can be a valuable tool to induce voluntary dam removals by requiring the Federal Energy Regulatory Commission (FERC) to include costly fish passage upgrades in any relicensing proceeding.[10]

Northwest salmon advocates rejoiced in 2014 when, following a lengthy campaign from a coalition of tribal and environmental activist groups,[11] construction crews completed the largest dam-removal project in American history by removing both the Elwha and the Glines Canyon Dams.[12] Removing these dams started the process of restoring seventy miles of the Elwha River to natural flows that had not existed since construction of the dams first began in 1911.[13] Since the dams came down, the river’s ecological quality has improved at an astonishing rate.[14] In fact, salmon and steelhead populations in the Elwha River have already reached thirty-year highs.[15]

The tremendous success of freeing the Elwha cannot be overstated, but the dams required decades of activist toil to remove.[16] In contrast, removing the Little Sandy and Marmot dams from the Sandy River in Oregon was accomplished in only eight years.[17] There are certainly many core differences between these campaigns that help explain this discrepancy, but chief among these is the fact that Federal Power Act[18] (FPA) amendments incentivized the owner of the Little Sandy and Marmot dams to privately fund the removal, while the Elwha removal languished waiting on federal funding for over a decade.[19]

This Essay will discuss the statutory changes to the FERC relicensing process that have improved fish passage at hydropower facilities in recent decades and will continue to fuel upgrades and dam removals in the future. Part II lays out an overview of the environmental requirements of FERC relicensing and analyzes the Bull Run Hydropower Project as an example of a successful dam removal prompted by its owner’s pursuit of relicensing. Part III then reviews the relicensing schedule for several dams in Oregon and Washington to discuss how these fish passage improvements will continue for the foreseeable future.

II. FERC Relicensing and the Bull Run Dam Removal

The current regulatory process will—at least marginally—improve fish passage at many hydropower facilities in the near future as older dams apply for relicensing through FERC. Privately operated hydroelectric dams can only operate under a license from FERC.[20] For older dams, the cost of installing fish passage during the FERC relicensing process can exceed the cost of removal, thereby incentivizing the dam owner to opt for removal.[21] For dams that successfully obtain a license to continue operation, the current statutory relicensing framework requires FERC to include any recommended fish passage upgrades as mandatory conditions in the license.[22] Due to new environmental statutes and regulations passed during the lifetime of the preceding license, many hydroelectric dams in the Columbia River basin are likely to require passage upgrades.[23]

FERC is in the midst of a massive relicensing period.[24] The FERC relicensing process has had a tremendous impact on fish passage in the Columbia River basin in recent history, as both Oregon and Washington were included in FERC’s list of states requiring the most dam relicenses between 2005 and 2015.[25] As discussed below, absent a congressional amendment of the FPA, the FERC relicensing process will mandate fish passage upgrades at Northwest hydroelectric facilities for decades to come.

A. The FERC Licensing Process

In 1920, Congress passed the FPA, authorizing the federal government to regulate private hydroelectric dams.[26] While older dams may have been constructed without a FERC license,[27] all dams must eventually obtain a license to continue operation.[28]

Initially, FERC only considered a dam’s power-generation potential when reviewing a license application, while ignoring the environmental impacts.[29] Then in 1986, Congress amended the FPA[30] to require FERC to include permit conditions protecting fish and wildlife.[31] Now, FERC licenses “require the construction, maintenance, and operation by a licensee at its own expense of such . . . fishways as may be prescribed by” the United States Fish & Wildlife Service or the National Oceanic and Atmospheric Administration (NOAA) Fisheries.[32] FERC cannot “modify, reject, or reclassify any prescriptions submitted by” those agencies.[33] If FERC disagrees with the fish passage conditions, FERC must either withhold the license or dispute the conditions before the relevant court of appeals.[34]

New FERC licenses may last up to fifty years.[35] Due to this timeframe, FERC will spend the foreseeable future considering relicensing applications for dams whose original licenses were approved with minimal environmental consideration. For instance, FERC will review relicensing applications for dams that were approved without an Environmental Impact Statement (EIS) through 2020,[36] dams that were approved without wildlife permit conditions through 2036,[37] and dams that were approved prior to Endangered Species Act protections for anadromous fish through the 2040s.[38]

When owners of these dams apply for relicensing, modern environmental and endangered species protections will likely require project owners to significantly upgrade the dams’ fish passage facilities. FERC has proven willing to attach extremely costly fish passage conditions to its relicensing decisions, which can make removal the most cost-effective next step for hydroelectric dam operators.[39] For those dams that remain standing, new FERC licenses will still likely improve fish passage because relicensing will be conditioned upon upgrading fish passage to meet modern environmental and ESA requirements.[40]

B. The Bull Run Hydropower Project

The FERC relicensing process has proven to be an effective tool in persuading operators of large hydroelectric dams to negotiate efficient dam removals that are entirely funded by the operators themselves. Few cases illustrate how well this process can facilitate dam removals better than the Marmot and Little Sandy dams of the Bull Run Hydropower Project. The Bull Run project is the gold standard in dam removal for many reasons, including that 1) it was entirely funded by the operator without predetermined cost caps;[41] and 2) the dams came out quickly, with minimal confrontation between the affected parties.[42]

Twenty-six miles east of Portland, Oregon, the Bull Run River flows through the Mt. Hood National Forest.[43] The Bull Run River drains a 102 square-mile watershed and is almost entirely fed by rain and snowmelt from Mt. Hood.[44] As the main source of water for Portland, the Bull Run watershed provides tap water for nearly one-fifth of all Oregonians.[45] Development on the Bull Run began in the 19th century,[46] and the river became an important source of both water and electricity for the surrounding area.[47]

In 1912, Portland General Electric (PGE) completed the primary stage of one of the largest developments in the watershed: the Bull Run Hydropower Project.[48] To increase the powerhouse’s capacity, PGE constructed the Little Sandy Dam to divert water from the Little Sandy River to Roslyn Lake, the reservoir behind the project’s powerhouse.[49] The dam completely diverted the Little Sandy River 1.7 miles upriver from its confluence with the Bull Run River.[50] The dam blocked salmon migration upstream and decreased flows to the remaining salmon habitat downstream.[51]

The following year, PGE completed the Marmot Dam on the Sandy River.[52] This dam diverted water from the mainstem Sandy River to the Little Sandy upstream from the Little Sandy Dam, thereby increasing the capacity at Roslyn Lake.[53] The original Marmot Dam was a wood and sediment structure.[54] Unlike the Little Sandy Dam, the Marmot Dam did not block all salmon migration because the original structure included a fish ladder.[55] In 1989, PGE replaced the original Marmot Dam with a forty-seven foot concrete dam.[56]

The Bull Run Hydropower Project’s dams and diversions decreased fish runs in the Sandy River and Bull Run watersheds to 10%–25% of their historic runs.[57] PGE operated four hydroelectric systems, all of which would require FERC relicensing in the early 2000s.[58] Due to the increasing burden of maintaining century-old dams, relatively low summer flows, and modern environmental regulations,[59] PGE determined that the Bull Run Hydropower System’s costs were simply insurmountable.[60] PGE chose to voluntarily surrender its FERC license.[61] After negotiating a settlement agreement with all affected parties,[62] FERC granted PGE’s petition to surrender its license in 2004.[63] Because of the inclusive settlement process,[64] public support for the final project was high, and PGE obtained all necessary environmental permits to move forward with the dam removal in only eighteen months.[65]

On July 24, 2007, engineers began the process of removing the Marmot Dam by setting off explosives to crack the concrete face.[66] The process ended that October with the breach of a temporary diversion dam built just upstream.[67] At the time, this was the largest dam removed in the Pacific Northwest, both in terms of height and trapped sediment.[68] The Sandy River recovered much more rapidly than expected, with migrating coho salmon reported swimming past the old dam site just one day after engineers completed the removal process.[69] The Little Sandy Dam was removed the following summer.[70]

An important takeaway from the Bull Run Hydropower Project’s removal is that, under the right circumstances, environmental conditions placed on FERC relicensing approvals can act as a tremendous hammer to force dam removals. In fact, PGE decided to pursue settlement negotiations before it even received the final fish passage requirements.[71] Preliminary estimates were enough for PGE to determine that the Bull Run system would not be economical. The Bull Run removal process shows just how effectively the FERC regulatory process can trigger rapid dam removals with minimal delays and no public funding.

III. The Glut of Pending and Upcoming License Expirations Will Require FERC to Revisit Fish Passage in the Pacific Northwest for Several Decades

Because of the fifty-year lifetime of its licenses, FERC is currently in the process of relicensing the final pre–National Environmental Policy Act[72] (NEPA) hydroelectric dams.[73] Several dams in both Washington and Oregon are still operating under such licenses.[74] Although the relicensing process has proceeded slowly, one certainty is that fish passage upgrades will be a mandatory condition for almost any new FERC license. This Part discusses a few dams in both Northwest states that are scheduled for relicensing in the coming decades and provides contemporary examples of the fish passage upgrades that FERC has already required at Northwest dams in recent years.

A. Washington Dam Relicensing

FERC currently licenses fifty-five privately operated hydroelectric dams in Washington.[75] Two of these dams—Sullivan Lake and Packwood Lake—were licensed prior to the mandatory environmental review process codified in NEPA.[76] The Packwood Lake dam, for example, was last licensed in July 1960.[77]

Packwood’s initial license was set to expire in 2010, but the dam has been operating under annual interim permits while the parties determine what mandatory conditions will attach to the new license.[78] As part of this relicensing process, Energy Northwest—the operator of Packwood Dam—has had to cooperate with NOAA Fisheries to determine the impact that the dam’s continued operation will have on listed species.[79] NOAA Fisheries found that three listed species were likely to be affected by the dam’s operation: Lower Columbia River Chinook, coho, and steelhead.[80] To mitigate these harms, Energy Northwest has built an exclusionary screen to keep migrating salmonids out of the channel leading to the powerhouse,[81] but more expansive requirements may be included before FERC issues the final license.[82]

Along with the pre-NEPA dams, FERC also oversees seventeen dams operating under licenses issued prior to the Electric Consumers Protection Act, which therefore did not require any wildlife considerations.[83] These dams will pursue relicensing through the 2030s, and their new licenses will inevitably include fish passage conditions, improving salmonid access to spawning grounds.[84]

B. Oregon Dam Relicensing

Of the twenty-five actively licensed dams in Oregon,[85] three operate under pre-NEPA licenses: the Klamath, Hell’s Canyon, and Carmen-Smith dams.[86] The greatest fish-passage improvements will occur in the Klamath River, where PacifiCorp—the dams’ owner—has agreed to remove four large dams by 2020, opening up 570 miles of riparian habitat for returning salmon.[87] Under the agreement, PacifiCorp will provide $200 million for the removal, and the state of California will fund up to an additional $250 million by selling general obligation bonds.[88]

On top of this monumental dam removal, the operator of the Carmen-Smith dam near Eugene, Oregon, also agreed to significant improvements for salmon in order to relicense.[89] The Carmen-Smith license was issued in 1959 and expired in 2008.[90] As part of its relicensing effort, the Eugene Water and Electricity Board (EWEB) entered into a settlement agreement with sixteen other parties consisting mainly of government agencies, Native American Tribes, and environmental organizations.[91] This agreement included extensive salmonid habitat enhancements and a fish passage–system upgrade.[92] However, a precipitous decline in utility prices triggered a renegotiated agreement, and the fish passage upgrade was replaced with a trap-and-haul system to transport the fish around the dam’s powerhouses.[93] The parties submitted this amended agreement to FERC in 2016.[94] However, should NOAA Fisheries find this trap-and-haul system insufficient to protect the listed species, then EWEB could still be required to install the original fish passage upgrades.[95]

FERC also oversees seven additional dam licenses that were approved prior to the Electric Consumers Protection Act.[96] The last of these licenses expires in 2039.[97]

IV. Conclusion

Dam removals have become much more common in recent decades, and FERC relicensing has played a large role by requiring expensive fish-passage upgrades as a mandatory condition of an extended operating license. This uptick in FERC-triggered removals reflects the fact that many of the last dams licensed without any environmental oversight have sought relicensing in the past decade. While almost all of the pre-NEPA dams have now been relicensed, FERC relicensing will continue to trigger fish passage upgrades at facilities originally licensed before FERC began attaching mandatory wildlife conditions in 1986. Organizations operating dams in the Pacific Northwest that were licensed prior to these wildlife conditions will be pursuing relicensing through 2039.

In some cases—like the Little Sandy and Marmot Dams in Oregon—the economic cost of the Electric Consumers Protection Act’s fish passage requirements will exceed the benefit of continued operation and make removal the more cost-effective option. In most other cases, the new FERC license will still mandate fish passage upgrades, such as installing a fish ladder or implementing a trap-and-haul system. Through either removal or upgrades, these FERC conditions will improve fish passage at hydroelectric dams throughout the Pacific Northwest.

[1] U.S. Army Corps of Eng’rs, Water in the U.S. American West 6 (2012).

[14] Lynda V. Mapes, Elwha: Roaring Back to Life, Seattle Times (Feb. 13, 2016), http://projects.seattletimes.com/2016/elwha/ (Scientists have been “amazed at the speed of change under way in the Elwha.”).

[36] National Environmental Policy Act of 1969, 42 U.S.C. §§ 4321–4347. NEPA was signed into law in 1970. What is the National Environmental Policy Act?, Envtl. Protection Agency, https://www.epa.gov/nepa/what-national-environmental-policy-act (last visited Sept. 30, 2017).

[39] For example, FERC would have required PacifiCorp to spend over $30 million on fish passage upgrades to relicense the Condit Dam, so PacifiCorp chose to remove the dam at a cost of approximately $17 million. David H. Becker, The Challenges of Dam Removal: The History and Lessons of the Condit Dam and Potential Threats from the 2005 Federal Power Act Amendments, 36 Envtl. L. 812, 826–27 (2006).

[46] The City of Portland first diverted water from the Bull Run in 1894. Andrew Theen, From Bull Run to Mount Tabor: The History of Portland’s Open Reservoirs (Timeline), Oregonian (Dec. 17, 2014), http://www.oregonlive.com/portland/index.ssf/2014/12/from_bull_run_to_mount_tabor_t.html.

[47]Bull Run: The Town That Time Forgot, PDX Hist. (Oct. 28, 2016), http://www.pdxhistory.com/html/bull_run.html.

[48] The main powerhouse was completed in 1912. The Century-Old Bull Run Powerhouse Finds New Life, Thanks to 3 Portland Preservationists, Oregonian (Dec. 6, 2012), http://www.oregonlive.com/gresham/index.ssf/2012/12/the_century-old_bull_run_power.html.

[58] Of PGE’s four hydroelectric systems, the Bull Run project was the smallest. Julie A. Keil, Bull Run Decommissioning: Paving the Way for Hydro’s Future, Hydro Rev. (Mar. 1, 2009), http://www.hydroworld.com/articles/hr/print/volume-28/issue-2/feature-articles/dam-removal/bull-run-decommissioning-paving-the-way-for-hydrorsquos-future.html.

[59] The Bull Run system affected fish passage, temperature pollution, and river flows; several threatened fish species also migrated to the rivers. Id.

[60] This is understandable when you consider the fact that PGE would have had to upgrade two century-old dams just to continue electricity production at a single powerhouse. Id.

[62] There were a total of twenty-two parties in the settlement. Id. PGE also agreed to pay all costs for the removal in the settlement, thereby circumventing the arduous process of securing federal funding. Blumm, supra note 17, at 1070.

Introduction

As the public has become more aware of the close connection between the practices of electric utilities and greenhouse gas emissions, interested groups have shone a brighter spotlight on the regulation of utilities in the United States. Some have called on the Federal Energy Regulatory Commission (“FERC”) to take on a more environmentally conscious role when exercising its authority to set wholesale rates.[1] While FERC has not yet explicitly incorporated environmental considerations into wholesale rate setting, it has taken steps to ensure continued reliability as the composition of the nation’s energy portfolio shifts.[2]

Generally, under the Federal Power Act, FERC has jurisdiction over sales of electricity for resale in interstate commerce (wholesale sales), electricity transmission, and practices “affecting” rates.[3] The Supreme Court recently endorsed a construction of FERC’s jurisdiction in FERC v. Electric Power Supply Association (“EPSA”) that includes practices that “directly affect” wholesale rates.[4] This decision was seen as good for clean energy: it removed barriers to demand response resources[5] competing in the wholesale market in the short term, while giving FERC more regulatory flexibility in the long term.[6]

At the state level, legislators and regulatory bodies generally retain the authority to set retail rates, maintain and site local facilities, and establish resource portfolios.[7] There is a wide range of potential policies that can be used to foster clean energy, including feed-in tariffs,[8] renewable portfolio standards,[9] rebates for renewables,[10] a carbon tax,[11] a ban on carbon imports and new coal plant construction,[12] and net-metering policies.[13] A majority of states have passed some form of renewable portfolio standard mandating that a certain percentage of the state’s electricity come from renewable resources.[14] These policies can originate in the state legislature or can come from the state utility regulator directly.[15] They rely on several different regulatory tools, from market-based incentives like renewable energy credits to other state-law mechanisms such as long-term power purchase agreements or mandated utility-owned renewable generation.

Some of these state clean energy policies have recently been challenged or are currently being challenged in the federal courts on preemption and dormant commerce clause grounds.[16] Challenges to these policies typically allege that the state programs are either preempted by the Federal Power Act, or are an impermissible intrusion into Congress’s exclusive power to regulate interstate commerce.

The Court, by authorizing an expansion of FERC’s jurisdiction in EPSA, and by failing to clarify the preemption analysis under the Federal Power Act in another recent case, Hughes v. Talen Energy Marketing LLC, may have inadvertently created considerable uncertainty about the extent of federal and state authority—or at least failed to remedy existing uncertainty. More thorough discussions on the shifting approach to the division of state and federal authority in energy law can be found elsewhere.[17] This Article will instead offer some speculation about the impacts of EPSA and Hughes on state policymaking.

FERC v. EPSA and Hughes v. Talen Energy Marketing

In Federal Energy Regulatory Commission v. Electric Power Supply Ass’n, the Supreme Court upheld FERC’s assertion of jurisdiction by allowing it to regulate practices that “directly affect” wholesale rates.[18] At issue in EPSA was whether FERC had authority to regulate demand response transactions (where a provider contracts with consumers to reduce energy consumption), or whether those transactions should be classified as “retail sales.”[19] The Federal Power Act grants FERC jurisdiction over practices affecting rates, and in EPSA, the Court adopted a D.C. Circuit test that cabined that authority to practices “directly affecting” rates.[20] After adopting the directly affecting test, the Court found that FERC had jurisdiction over demand response practices, that the rule did not impermissibly tread into authority reserved to the states, and that FERC did not act arbitrarily and capriciously in its decision to compensate electricity users at the same rates as electricity generators.

Whereas EPSA dealt primarily with the extent of FERC’s jurisdiction under the Federal Power Act, Hughes v. Talen Energy tackled the separate but related issue of whether a state program was preempted under the Federal Power Act.[21] The case was on review from the Fourth Circuit, where the appellate court found that a Maryland program was preempted both as a matter of field preemption (because FERC “occupies the field” of setting wholesale rates) and as a matter of conflict preemption (because rates under Maryland’s program conflicted with FERC-approved rates).[22] The Supreme Court affirmed the lower court’s ruling, albeit on narrow grounds, finding that the Maryland program “impermissibly intrude[d] upon the wholesale electricity market, a domain Congress reserved to FERC alone.”[23]

One could argue that the Supreme Court narrowed the scope of the Fourth Circuit holding. For example, the Court distinguished between contracts-for-differences (which was the regulatory mechanism that Maryland deployed to encourage new natural gas plant development) and other more traditional long-term power purchase agreements.[24] However, in other ways, the Court’s opinion is actually more ambiguous—the Court does not clarify whether the correct analytical approach here should be conflict, field, or another form of preemption analysis,[25] and two Justices wrote concurring opinions to advocate for their distinct approaches.[26]

Because the opinion addressed only a narrow set of circumstances, the Court did little, if anything, to indicate whether other state regulatory mechanisms designed to encourage renewable deployment would be preempted under the Federal Power Act, and it specifically limited its holding to Maryland’s program.[27] The decision provides no guidance on how to analyze these state regulatory programs unless they contain contracts-for-differences pegged to a FERC-approved wholesale price, as Maryland’s program did. The case is therefore unlikely to forestall the litigation ongoing in the lower courts.[28] It makes one wonder why the Supreme Court took the case in the first place—there was no circuit split after the Fourth Circuit’s decision, and the Court failed to use the case as an opportunity to instruct the lower courts.

Combining the holding of EPSA with Hughes, along with some of the more archaic language in previous energy preemption cases, provides ample fuel for challenges to state renewable energy policies. Simply put, if the Federal Power Act draws a jurisdictional “bright-line,”[29] or if “[i]t is common ground that if FERC has jurisdiction over a subject, then the States cannot have jurisdiction over the same subject,”[30] then any practice that “directly affects” wholesale rates should be exclusively within FERC’s jurisdiction. The result could be an effective shrinking of state regulatory authority after EPSA and Hughes.

Still, the full range of practices that come within FERC’s “affecting” jurisdiction is unknown, and it may be that FERC must first exercise that jurisdiction over a particular practice before it has preemptive effect. That uncertainty, however, does not prevent litigants from arguing in the lower courts that clean energy programs are invalid, and Hughes may stand as a missed opportunity to clarify the scope of preemption under the Federal Power Act.

In fact, litigants are already citing Hughes and EPSA to challenge state clean energy programs. In October 2016, the Coalition for Competitive Energy filed a challenge to the New York Public Service Commission’s Clean Energy Standard in the Southern District of New York.[31] The Clean Energy Standard was issued in August,[32] and set a target for New York to obtain fifty percent of its electricity from renewable resources by 2030.[33] In addition to continuing New York’s renewable energy credit program,[34] the Clean Energy Standard included a requirement that load-serving entities purchase Zero-Emissions Credits that correlate with electricity generated by nuclear facilities.[35] The Coalition for Competitive Energy is challenging this specific program (the zero-emissions credits) in its complaint, alleging that it “operates within the area of FERC’s exclusive jurisdiction” and should therefore be preempted.[36] The petition cites EPSA to argue that “[s]tate actions that ‘directly affect the wholesale rate’” are invalid.[37]

Additionally, the Second Circuit recently granted Allco’s request for an injunction to prevent state officials from conducting a clean energy request for proposals (“RFP”) in Connecticut.[38] The decision did not enjoin state officials in Massachusetts and Rhode Island who are also participating in the RFP.[39] While the Second Circuit did not disclose its reasoning when it granted the injunction,[40] Allco’s petition pointed to Hughes in arguing that the program was preempted under the Federal Power Act.[41]

While it may seem that uncertainty in the preemption context is a net loss for those concerned about accelerating the transition to clean energy, climate advocates may also weaponize Hughes in other contexts to argue that state policies propping up coal and natural gas plants are preempted by the Federal Power Act. For example, the Ohio Public Utilities Commission recently attempted to use power-purchase agreements—which can sometimes be a tool to procure renewables[42]—to subsidize coal plants in the state.[43] FERC blocked the proposal before it could take effect,[44] but the program could have been challenged under Hughes had it remained in place.

Both examples citing Hughes involve challenges to state energy programs that operate outside of FERC-approved markets, unlike the Maryland program at issue in Hughes, where the parties adjusted the FERC-approved rate.[45] Perhaps the biggest challenge going forward for clean energy advocates will be distinguishing state programs that do not advance climate goals (like the Maryland program at issue in Hughes) from those that do (such as the program at issue in Allco), when both often use the exact same regulatory tools.

The Supreme Court may return to the question of the extent of federal and state authority under the Federal Power Act sometime within the next few years. It could reach one of several conclusions. It may reaffirm past language about the “bright-line” between federal and state regulatory authority—confirming that EPSA represented an expansion of FERC’s power and a simultaneous restriction on state authority. It may endorse some form of concurrent jurisdiction, as it did in the Natural Gas Act context in Oneok Inc. v. Learjet, Inc.,[46] and if it does, it may then decide how to restructure the preemption analysis under this concurrent jurisdictional model. It may establish some method of floor preemption,[47] or alternatively, it may leave the preemption decision up to the federal agency,[48] as it does in some other contexts.[49] Also, the Court may simply leave the resolution of these issues up to the lower federal courts.

Conclusion

Regardless of the approach the Court takes, the fact that all of these questions remain open creates considerable legal uncertainty for state regulators trying to craft and update effective clean energy laws. States are already testing the boundaries of their authority in many instances,[50] and many may continue to do so despite these new uncertainties. Further, it may be impossible to disaggregate the influence of legal uncertainty on state regulators from other influences, such as political pressure. State legislators and regulators—some of whom are designing state laws to ensure compliance with the Clean Power Plan—would likely prefer clarity on what regulatory mechanisms they may use without running afoul of the Supremacy Clause. Hughes thus represents a missed opportunity, and the recent power trio of Oneok, EPSA, and Hughes may shortly turn into a quartet.

* J.D. Candidate, Harvard Law School. The author would like to thank Ari Peskoe, Senior Fellow in Electricity Law at the Harvard Environmental Law Program Policy Initiative, and Robin Smith and Nate Bishop for their help and advice. Any mistakes or omissions are the author’s own.

[1]See, e.g., Christopher Bateman and James T.B. Tripp, Towards Greener FERC Regulation of the Power Industry, 38 Harv. Envtl. L. Rev. 275 (2014) (arguing that consideration of environmental consequences by FERC is permissible under the Federal Power Act); Joel B. Eisen, FERC’s Expansive Authority to Transform the Electricity Grid, 49 U.C. Davis L. Rev. 1783, 1788 (2016) (arguing that under recent case law, FERC may now include environmental considerations into wholesale rates so long as those considerations “directly affect” those rates); Steven Weissman & Romany Webb, Berkeley Center for Law, Energy & the Environment, Addressing Climate Change Without Legislation: Volume 2, How the Federal Energy Regulatory Commission Can Use Its Existing Legal Authority to Reduce Greenhouse Gas Emissions and Increase Clean Energy Use (2014), https://perma.cc/JH8H-FLYT (arguing that FERC can add the cost of carbon when setting the prices in the wholesale market).

[5] FERC defines demand response as “a reduction in the consumption of electric energy by customers from their expected consumption in response to an increase in the price of electric energy or to incentive payments designed to induce lower consumption of electric energy.” 18 C.F.R. § 35.28(b)(4) (2015).

[11] The State of Washington considered a carbon tax in a 2016 ballot initiative. See Initiative Measure No. 732 (filed Mar. 29, 2016), https://perma.cc/26ZL-Z9D8.

[12] Minn. Stat. § 216H.03, subd. 3(2) and (3) (2007) (“no person shall . . . (2) import or commit to import from outside the state power from a new large energy facility that would contribute to statewide power sector carbon dioxide emissions; or (3) enter into a new long-term power purchase agreement that would increase statewide power sector carbon dioxide emissions.”)

[14] Jocelyn Durkay, State Renewable Portfolio Standards and Goals, Nat’l Conference of State Legislatures (July 27, 2016), https://perma.cc/DV9L-JRRL (reporting that “Twenty-nine states, Washington, D.C. and three territories have adopted an RPS, while eight [additional] states have set renewable energy goals”).

[15]See Public Service Commission of N.Y., Order Adopting a Clean Energy Standard (Aug. 1 2016). https://perma.cc/3GSF-Q36Z.

[24]Id. at 1299 (“But the contract at issue here differs from traditional bilateral contracts in this significant respect: The contract for differences does not transfer ownership of capacity from one party to another outside the auction.”).

[25]Id. at 1297 (“A state law is preempted where Congress has legislated comprehensively to occupy an entire field of regulation, leaving no room for the States to supplement federal law,” as well as “where, under the circumstances of a particular case, the challenged state law stands as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress” (citations omitted).

[26]Id. at 1300 (Sotomayor, J., concurring) (clarifying that the purpose of the Federal Power Act should serve as the “ultimate touchstone” for the preemption analysis and the Court should resist “talismanic” preemption vocabulary); id. at 1301 (Thomas, J., concurring) (stating that he would not rest his holding on principles of implied-preemption).

[27]Id. at 1299 (“Our holding is limited: We reject Maryland’s program only because it disregards an interstate wholesale rate required by FERC. We therefore need not and do not address the permissibility of various other measures States might employ to encourage development of new or clean generation, including tax incentives, land grants, direct subsidies, construction of state-owned generation facilities, or re-regulation of the energy sector. Nothing in this opinion should be read to foreclose Maryland and other States from encouraging production of new or clean generation through measures untethered to a generator’s wholesale market participation.”).

[42]Cf. American Council on Renewable Energy, Renewable Energy in Massachusetts (2014), https://perma.cc/V6GF-GEF8 (“In February 2014, the state approved 12 long-term power purchase agreements with four Massachusetts utilities for 409 MW of wind projects in Maine and New Hampshire”).

Abstract

The Dakota Access Pipeline (DAPL) has become a contentious topic in recent months. The controversy centers on Dakota Access, LLC,[1] a subsidiary of Energy Transfer Crude Oil Company, LLC, and the Standing Rock Sioux Tribe of North and South Dakota[2] (the Tribe or Sioux), a federally recognized Indian tribe. The Tribe’s reservation, the Standing Rock Indian Reservation, is half a mile upstream from where DAPL’s crude oil pipeline would cross the Missouri River underneath Lake Oahe in North Dakota.[3] While much of the recent media attention surrounding Dakota Access and the Tribe has focused on the destruction of the Tribe’s ancestral burial grounds, the underlying issue can be traced back to the nationwide permits issued by the Army Corps of Engineers (the Corps) in 2012.[4] More specifically, this article examines Nationwide Permit 12 (NWP 12), one of the fifty NWPs issued by the Corps in 2012[5] and the permit at the heart of the current legal battle between Dakota Access and the Tribe.

Introduction

The Tribe and environmentalists alike raised concerns about the potential health and environmental consequences of oil spills, given that the Missouri River “provides drinking water for millions of Americans and irrigation water for thousands of acres of farming and ranching lands.”[6] Beyond its concern about the proximity of the pipeline to its reservation, the Tribe was also concerned that the pipeline would disrupt sacred ancestral burial grounds and places of cultural significance to the Sioux people.[7] In particular, the Sioux have traditionally placed significance on the convergence of the Missouri and Cannonball Rivers because their ancestors gathered at that location to peacefully trade with other tribes.[8] Ironically, this is not the first time the Army Corps of Engineers or the federal government has taken the Tribe’s land in this particular location without its consent. In 1958 the Corps dredged the sacred Cannonball River to construct the Oahe Dam, which created the man-made Lake Oahe that now covers the confluence of the two rivers and is the planned site of DAPL’s crossing.[9] The Oahe Dam not only destroyed a site of spiritual significance to the Sioux, but also flooded nearly fifty-six thousand acres of the Standing Rock Reservation and over one hundred four thousand acres of the Cheyenne River Reservation.[10] Overall, the construction of the Oahe Dam destroyed more Indian land than any other public works project in America.[11] Nonetheless, the Tribe continues to use the banks of the Missouri River for “spiritual ceremonies, and the River, as well as Lake Oahe, plays an integral role in the life and recreation of those living on the reservation.”[12] With that poignant history in mind, it comes as no surprise that the Tribe would fight so vehemently against a pipeline that would affect both the Missouri River and Lake Oahe.

Fearing, once again, the possibility of sacred burial grounds being destroyed, the Tribe pursued legal action against the Army Corps of Engineers, the federal agency that approved DAPL’s permits, in hopes of obtaining an injunction that would block DAPL’s construction of the pipeline.[13] In its decision of September 9, the D.C. District Court held that the Corps had sufficiently followed federal law in approving the pipeline.[14] Minutes after the court’s decision came down, the Department of Justice, the Department of the Army, and the Department of the Interior issued a joint statement temporarily halting the work.[15]

The future of DAPL underneath Lake Oahe remains unclear, and the pipeline will more than likely continue to be a political hot potato for months to come. In its simplest form, the conflict comes down to the permitting process and the Corps’ alleged failure to adequately consult the Tribe before issuing the permit.[16] The permit granted to DAPL is a type of general permit known as Nationwide Permit 12 (NWP 12), which has caused considerable controversy in the past several years.

Nationwide Permits

Although one might logically assume that a crude oil pipeline traversing thousands of miles across the United States would require an extensive federal appraisal and permitting process, that assumption would be incorrect. Domestic oil pipelines require no general approval from the federal government.[17] For example, DAPL needed almost no federal permitting of any kind because “99% of its route traversed private land.”[18] However, when construction activity occurs in waters of the United States, meaning federally regulated waters such as Lake Oahe, the Corps must permit the activity under the Clean Water Act (CWA), the Rivers and Harbors Act, or sometimes both.[19]

Section 404(e) of the CWA is the provision the Corps has primarily used to issue general permits.[20] Nationwide permits (NWPs) are a type of general permit issued or reissued every five years by Corps headquarters,[21] whereas regional permits are issued by an individual Corps District for a specific geographical area.[22] NWPs authorize small-scale activities that are “similar in nature and result in no more than minimal individual and cumulative adverse environmental effects.”[23] Because NWPs pre-approve categories of activities upfront, there is considerably less federal involvement once an individual project commences. Indeed, in most cases project proponents can commence their activities without ever notifying the Corps.[24] Some of the NWPs, including NWP 12, require the project proponent to submit a Pre-Construction Notification (PCN) to the relevant Corps District Engineer, who then confirms whether the proposed activities qualify for NWP authorization.[25] If the District Engineer determines that the proposed activity qualifies, he or she issues a verification letter to the project proponent. It is important to note that the District Engineer is merely verifying that the activity is one the Corps already pre-authorized when it promulgated the NWP reissuance.[26]

NWPs are designed to streamline the permitting process and are often considered more cost-efficient and cost-effective for both the Corps and the individual or business seeking the permit.[27] Although NWPs can have important benefits when used for their intended purpose, some of them, NWP 12 in particular, are often used by the oil and gas industries to fast-track the permitting process by avoiding project-specific environmental review and skirting a more comprehensive public participation process.[28] The oil and gas industries thereby circumvent stricter federal regulation by evading the National Environmental Policy Act’s (NEPA) “hard look” review, which requires federal agencies to analyze the environmental consequences of all “major Federal actions significantly affecting the quality of the human environment.”[29] If the federal action is one that would significantly affect the environment, the level of federal involvement and regulation is substantially elevated.[30] Although NEPA review applies only to major federal actions and imposes obligations only on federal agencies, “it is well-settled that ‘federal involvement in a non-federal project may be sufficient to federalize the project for purposes of NEPA.’”[31] In other words, the Corps can have “sufficient control and responsibility”[32] over a project sufficient to give it authority over portions of the project that would normally fall outside its jurisdiction. The district engineer determines whether the scope of the Corps’ involvement warrants federalizing the entire project.[33] For example, if a pipeline spans 100 miles and 40 miles of the project fall within federal control, the district engineer can determine that the scope of the project gives the Corps sufficient control to warrant federalizing all 100 miles, even if the other 60 miles would otherwise be purely private action.[34]

NWP 12

The Corps renewed fifty nationwide permits on February 21, 2012, and they will expire on March 19, 2017.[35] The Corps, however, has no intention of letting these NWPs expire: on June 1, 2016, it proposed to reissue the NWPs and published the proposed rules in the Federal Register to solicit public comments.[36] The renewal included NWP 12, which covers “construction, maintenance, repair and removal of utility lines . . . provided the activity does not result in the loss of greater than 1/2 acre of waters of the United States for each single and complete project.”[37] The Corps defined NWP 12 to include “pipeline[s] for the transportation of gaseous, liquid, liquescent, or slurry substance, and any cable, line, or wire. . . .”[38] Accordingly, the construction of a pipeline may qualify for NWP 12 as long as the construction is a single and complete project and does not result in a loss of greater than 1/2 acre of jurisdictional waters. At this point NWP 12 seems innocuous enough; however, the conflict arises over the Corps’ definition of a single and complete project as “[the] portion of the total linear project proposed or accomplished by one owner/developer . . . that includes all crossings of a single water of the United States (i.e., a single waterbody) at a specific location.”[39]

The effect of this definition is that it allows each water crossing to be verified under NWP 12 separately, essentially creating many “single and complete projects” along one proposed route.[40] In other words, the Corps allows pipeline proponents to “stack” NWP 12 hundreds, if not thousands, of times along a single pipeline.[41] For instance, TransCanada’s Gulf Coast Pipeline, which is the bottom half of the Keystone XL Pipeline, is 485 miles long and crosses United States waters 2,227 times, meaning it “crosse[d] . . . waters about once every 1150 feet.”[42] The Corps verified the Gulf Coast Pipeline under NWP 12, even though NWP 12 was used 2,227 times in the process.[43] Another example is the Corps’ verification of Enbridge’s Flanagan South Pipeline under NWP 12 despite the pipeline traversing 27 miles of federal land and crossing waters of the United States 1,950 separate times.[44] The Corps is essentially allowing project proponents to piecemeal a pipeline into separate smaller projects, which is seemingly inconsistent with NEPA.[45] What is perhaps more extraordinary is that the Corps’ definition of a single and complete non-linear project requires the project to have independent utility,[46] defined as the ability to be “constructed absent other projects in the project area.”[47] Not only does the definition of a single and complete non-linear project require independent utility, it also specifically states that “[s]ingle and complete non-linear projects may not be ‘piecemealed’ . . . .”[48] It is bewildering why the Corps distinguishes so drastically between linear and non-linear projects, especially considering that linear projects that cannot function independently are, by their very nature, neither “single” nor “complete.”

The Corps justifies the expansive nature of NWP 12 by requiring the project proponent to submit a PCN to the Corps District Engineer (DE).[49] The DE then reviews the PCN and determines whether the proposed action “will result in more than minimal individual or cumulative adverse environmental effects or may be contrary to the public interest.”[50] On its face, requiring the DE to perform an extra layer of review may alleviate concerns about the open-ended nature of NWP 12. However, the review rests solely on the DE’s discretion and his or her determination of whether there will be cumulative effects.[51] The PCN verification of the Gulf Coast Pipeline illustrates the considerable discretion granted to the Corps. The Gulf Coast Pipeline passes through three Corps districts: Galveston, Fort Worth, and Tulsa. Even though all three districts issued verification letters, none of the letters “provide a reasoned basis for any cumulative impacts analysis.”[52] As District Judge Martinez’s dissent points out, the verification letters issued by the three districts attempted to circumvent the analysis by “simply stat[ing] the legal standard and then recit[ing] that it made a ‘determination’ that such criteria were satisfied.”[53] Even though the DE and the Corps provided no specific findings as to why authorizing the use of NWP 12 2,227 times would not have a cumulative effect, the Tenth Circuit Court of Appeals approved the Corps’ use of discretion in verifying NWP 12.[54]

As seen above, the Corps’ definition of “single and complete” essentially allows the project proponent to segment a pipeline into smaller projects, which, in turn, allows the Corps to treat the project as not significant enough to warrant “control and responsibility”[55] over the entire project.[56] The approval of the Gulf Coast Pipeline is an example of how easily NWP 12 can be manipulated. Judge Martinez’s dissent challenges the Corps’ conclusion that its involvement did not give it sufficient control and responsibility, asserting that “[c]onsidering the number of permits [2,227] issued by the Corps . . . it is patently ludicrous for Appellees to characterize the Corps’ involvement in the subject project as minimal . . . .”[57]

NWP 12 and DAPL

The malleability of NWP 12 is seen once again in its application to the Dakota Access Pipeline.[58] DAPL differs from the Gulf Coast and Flanagan South Pipelines in that the Corps did not seemingly abuse its authority by granting the use of NWP 12 thousands of times; rather, the application of NWP 12 in DAPL’s context is offensive because the Corps approved the pipeline even though the Tribe alleged it was not adequately consulted[59] as required under Section 106 of the National Historic Preservation Act (NHPA).[60]

Section 106, also known as the “stop, look, and listen” provision,[61] requires “[f]ederal agencies [to] take into account the effects of their undertakings on historic properties and afford the Council a reasonable opportunity to comment on such undertakings.”[62] In other words, the Corps is required to consider, prior to the reissuance of the NWPs, the effects of the permits on properties of cultural and historical significance.[63] This would have required the Corps to consult with the Tribe before it reissued the NWPs in 2012. Additionally, the consultation cannot be a mere rubber-stamping process; it “must recognize the government-to-government relationship between the Federal Government and Indian tribes.”[64]

The Corps claimed, and District Court Judge Boasberg agreed, that the Corps “made a reasonable effort to discharge its duties under NHPA prior to promulgating NWP 12” and that “the Corps’ effort to speak with those it thought [would] be concerned was sufficient . . . .”[65] This “reasonable effort” to consult the Tribe included the Corps sending notification letters containing information pertaining to its proposed NWPs, holding listening sessions and workshops with tribes, and eventually sending letters to the Tribe inviting it to begin consultations.[66] The Advisory Council on Historic Preservation (ACHP), the federal agency that promulgates the regulations used to implement Section 106,[67] wrote five letters[68] to the Corps questioning the adequacy of the tribal consultations. The EPA and the Department of the Interior also wrote letters to the Corps questioning its use of NWP 12 and the adequacy of tribal consultations.[69] The ACHP’s final letter states its belief that the “findings made by the Corps are premature, based on an incomplete identification effort, which was not sufficiently informed by the knowledge and perspective of consulting parties . . . .”[70] Despite all the objections from the Tribe and three other federal agencies, the Corps and Judge Boasberg emphasized that the Corps’ efforts were reasonable “given the nature of the permit.”[71] In other words, because NWP 12 is broad and overinclusive, apparently the Corps’ consultation requirements can be viewed in the same way.

Conclusion

This article has attempted to highlight a fundamental problem with how the United States permits domestic oil pipelines. The controversy surrounding the Dakota Access Pipeline has the potential for both negative and positive implications. The most obvious negative consequence is that the Sioux Tribe may, once again, lose sites of cultural significance at the hands of the U.S. government. However, one positive outcome that has emerged from the controversy is a national dialogue regarding not only nationwide permits and pipelines but, more importantly, how we, as citizens, view and understand the rights of Native Americans.

Introduction

Many recent decisions by the Ninth Circuit[1] have required the court to review agency actions under the Administrative Procedure Act’s[2] (APA) arbitrary or capricious standard.[3] The Supreme Court has held that the arbitrary or capricious standard is a “highly deferential” standard of review, though the inquiry must nonetheless “be searching and careful.”[4] Furthermore, the agency’s decision is “‘entitled to a presumption of regularity,’ and [the Court] may not substitute [its] judgment for that of the agency.”[5] For purposes of this discussion, it is important to note that “traditional deference to the agency is at its highest where a court is reviewing an agency action that required a high level of technical expertise.”[6]

In cases where a petitioner is challenging an agency action under the Endangered Species Act[7] (ESA), the court will usually be tasked with reviewing whether the action was arbitrary or capricious in light of the ESA’s “best available science” mandate.[8] The ESA requires an agency to insure that its actions will not jeopardize the continued existence of any endangered species,[9] and the best available science mandate requires the agency to utilize the best available scientific data to inform its no-jeopardy review.[10] A challenger claiming that an agency action is arbitrary and capricious for failure to utilize the best available science must show that the agency ignored the relevant available science.[11]

Given the heightened level of deference for decisions based on science and the low standard of what constitutes the best available science, the ESA mandate rarely threatens to invalidate an agency’s decision.[12] In fact, none of the Ninth Circuit cases in the last year that have considered the issue have substantively evaluated an agency decision under the best available science mandate.[13] Rather, the agencies were given heightened deference to make their own decisions as to what constituted best available science.[14] This leaves us to wonder whether the ESA’s best available science mandate serves as a purposeful requirement in the Ninth Circuit.

The APA and the Arbitrary and Capricious Standard

The APA provides the standard for judicial review of an agency decision. Specifically, section 10 addresses judicial review and provides:

To the extent necessary to decision and when presented, the reviewing court shall decide all relevant questions of law, interpret constitutional and statutory provisions, and determine the meaning or applicability of the terms of an agency action.[15]

Section 10 further establishes the arbitrary and capricious standard by stating that the reviewing court shall “hold unlawful and set aside agency action, findings, and conclusions found to be … arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.”[16]

The APA’s arbitrary and capricious standard of review, however, is only applied when the governing legislation does not set forth its own standard of review.[17] There are several examples of legislation that utilize the APA as a default,[18] but key to this commentary is the fact that the ESA also relies on the APA as its default standard of review.

Meaning of Arbitrary and Capricious

Based on the text of the applicable legislation, it is easy to know when the arbitrary and capricious standard will be applied as the governing standard of review. However, in addition to understanding when the standard of review will be applied, it is helpful for both agencies and courts to have the same understanding of what is meant by “arbitrary and capricious.”

Congress did not define precisely what it meant by “arbitrary and capricious” within the text of the APA.[19] Instead, courts have looked to the terms’ ordinary meaning for a definition.[20] For example, Black’s Law Dictionary defines “arbitrary” as a decision “founded on prejudice or preference rather than on reason or fact.”[21] Additionally, “capricious” is defined as “unpredictable or impulsive behavior” or “contrary to the evidence or established rules of law.”[22]

Deference

The arbitrary and capricious standard is a very narrow standard of review that requires the reviewing court to assume a deferential posture, such that the court may not simply substitute its judgment for that of the agency.[23] Although the court’s deference must be at its highest when reviewing agency decisions relying on technical expertise, the reviewing court still has an affirmative obligation under the APA to ensure the agency exercised sound judgment and made a reasonable decision based on its available information.[24] Thus, in its review the court must walk a fine line between substituting its judgment for that of the agency and simply affirming agency decision making because it was the decision of the agency.

The U.S. Supreme Court has somewhat defined this line by stating that courts are only to determine if the agency considered the “relevant factors” and if the agency made a “clear error of judgment,” rendering its actions arbitrary and capricious.[25] Because terms such as “clear error of judgment” do not provide a clear standard, the Supreme Court articulated four specific scenarios for when agencies’ actions are considered arbitrary and capricious:

The agency “relied on factors which Congress has not intended it to consider.”

The agency “entirely failed to consider an important aspect of the problem.”

The agency “offered an explanation for its decision that runs counter to the evidence before the agency.”

The agency offered an explanation “so implausible that it could not be ascribed to a difference in view or the product of agency expertise.”[26]

These rules provide clarity to both courts and agencies because they set forth a specific standard for determining whether an agency has acted arbitrarily and capriciously.

Best Available Science Under the Endangered Species Act

History

The Endangered Species Preservation Act of 1966[27] (ESPA) was the first environmental statute to impose a requirement to utilize science in environmental decisions made by an administrative body.[28] The statute required the Secretary of the Interior to make determinations as to which species were at risk of extinction and directed the secretary to consult with relevant scientists in creating the list of endangered species.[29] The ESPA did not require the ultimate listing decisions to rest on the scientific information, but Congress intended the consultations to provide the foundation for the listings.[30]

The “best available science” requirement was later introduced in the Endangered Species Conservation Act of 1969[31] and remained largely unchanged in the current ESA.[32] However, Congress neither defined “best available science” nor provided instruction as to how to apply the requirement in either the 1969 Act or the current 1973 Act.[33] It has been suggested that the term “best available science” was not further defined in either the 1969 or 1973 statutes because Congress simply intended to continue the ESPA requirement to seek input from scientists prior to making listing decisions.[34]

What is Required Under the Best Available Science Mandate?

Without an explicit statutory definition or guidelines of how to apply the best available science mandate, we are forced to rely on judicial opinions interpreting the ESA to ascertain what is required by the mandate. Two distinct guidelines emerge from looking at these opinions: (1) an agency cannot ignore relevant available data and (2) an agency does not have an obligation to generate new data, even if only relatively weak data is available.[35]

The Ninth Circuit has repeatedly held that an agency “cannot ignore available biological information.”[36] Put more specifically, the agency “must not disregard available scientific evidence that is in some way better than the evidence it relies on.”[37] Furthermore, the court has held that an agency is not necessarily in noncompliance with the best available science mandate if it disagrees with or discredits the available scientific data.[38] For example, in Kern County Farm Bureau v. Allen[39] (Kern) the court rejected Kern’s argument that the United States Fish & Wildlife Service (FWS) violated the best available science mandate by misinterpreting three studies. In Kern, the fact that the FWS cited the studies and did not ignore them was enough to comply with the best available science mandate.[40] Therefore, a challenger must specifically point to relevant data that was omitted from consideration to sustain a claim that an agency failed to utilize the best available science.[41]

Although the Ninth Circuit has required an agency to utilize the best scientific data available, the court has also held that the mandate “does not . . . require an agency to conduct new tests or make decisions on data that does not yet exist.”[42] This holding is consistent with other circuits that have addressed this issue.[43] For example, the D.C. Circuit has held that an agency must utilize the best scientific data available, not the best scientific data possible.[44]

This approach has been met with criticism because agencies are allowed to rely on data that is weak or inconclusive when it is the only data available.[45] Because few data are available for many endangered species,[46] there exists the possibility that many decisions regarding endangered species will be made with little to no scientific data in support. If that were the case, the purpose of consulting scientific data prior to making a decision would be entirely undermined.

Application of the Best Available Science Mandate Under the Current Endangered Species Act

The best available science mandate is triggered any time an agency contemplates an action that might impact an endangered species. Section 7(a) of the ESA requires the agency to “insure that any action authorized, funded, or carried out by such agency is not likely to jeopardize the continued existence of any endangered or threatened species or result in destruction or adverse modification of the habitat of such species.”[47] Section 7(a) further requires that in fulfilling the requirements under the section the agency “shall use the best available scientific and commercial data.”[48]

Deference

The deference afforded to agencies in review of science-based decisions raises doubt as to whether the best available science mandate actually operates as a substantial requirement to an agency proposing an action under section 7. The Ninth Circuit in particular has held that when the analysis of an agency decision requires a high level of technical expertise, the court “must defer to the informed discretion of the responsible federal agencies.”[49] In fact, it is common practice across the circuits to give an “extreme degree” of deference to decisions founded on the scientific or technical expertise of an agency.[50]

Ninth Circuit Deference on Matters of Science

A Substantive Mandate in 2005

In 2005 the Ninth Circuit decided Pacific Coast Federation of Fishermen’s Ass’ns v. Bureau of Reclamation[51] (Pacific Coast) and breathed life into the best available science mandate. Prior to this decision, many courts had used deference to avoid upholding the substantive mandate requiring agencies to insure against jeopardy.[52] In Pacific Coast, the Ninth Circuit inserted itself into the Klamath Basin conflict.[53] The conflict stemmed from the National Marine Fisheries Service (NMFS) issuing a biological opinion (BiOp) requiring the Bureau of Reclamation (BOR) to limit diversion of water from the Klamath River for irrigation purposes because the diversion would jeopardize the continued existence of the endangered suckerfish and coho salmon.[54] The resulting curtailment of irrigation deliveries caused significant agricultural losses, as 2001 also saw record drought.[55]

After the drought of 2001, the Departments of the Interior and Commerce commissioned the National Research Council (NRC) to perform a “scientifically rigorous peer review” of whether the BiOp was consistent with available scientific information.[56] The conclusion of the NRC study questioned the validity of the 2001 BiOp.[57] The study found that “the 2001 BiOp’s drastic halting of water diversions was not scientifically supported,” but the study did not offer comment as to the minimum water levels necessary to maintain the endangered fish.[58]

In 2002, BOR prepared a long-range biological assessment and proposed a new flow regime that would vary the river flow by “water year type.”[59] The NMFS concluded that the BOR’s proposed actions would jeopardize the continued existence of coho salmon, and it issued a new BiOp that developed a reasonable and prudent alternative (RPA) to replace the BOR proposal.[60] That RPA was the subject of Pacific Coast.

The Northern District of California found that the short-term measures of the RPA were not arbitrary and capricious.[61] On appeal, the Ninth Circuit did not grant the customary heightened deference to the agency’s decision.[62] Rather, the court engaged in a “careful and searching” review of the BiOp, stating that the agency “is obligated to articulate a rational connection between the facts found and the choices made.”[63] Specifically, the court found that

Although . . . the agency believed that the RPA would avoid jeopardy to the coho, this assertion alone is insufficient to sustain the BiOp and the RPA. The agency essentially asks that we take its word that the species will be protected if its plans were followed. If this were sufficient, the NMFS could simply assert that its decisions were protective and so withstand all scrutiny.[64]

Therefore, the Ninth Circuit found the authorized short-term measures of the BiOp to be arbitrary and capricious.[65]

This decision marked an important step in making the ESA’s best available science requirement a substantive mandate. Despite the deference due to the agency, the court looked substantively at the BiOp to find that it could not insure against jeopardy. This case sent a message that an agency could not rely on heightened deference to avoid judicial review of its actions.

Clarification of the Arbitrary and Capricious Standard in 2008

In 2008, the Ninth Circuit sought to “clarify some of [its] environmental jurisprudence” by hearing en banc Lands Council v. McNair (Lands Council III).[66] The court felt a need for uniformity because Ecology Center, Inc. v. Austin[67] “defied well-established law concerning the deference [the court] owe[s] to agencies and their methodological choices.”[68] Additionally, the court likely wanted to address the fact that “in recent years, [the Ninth Circuit’s] environmental jurisprudence has, at times, shifted away from the appropriate standard of review and could be read to suggest” that judges should sit on the bench and “act as a panel of scientists.”[69]

The en banc review resulted in a reversal of the preliminary injunction initially granted by the Ninth Circuit in The Lands Council v. McNair (Lands Council II)[70] and the overruling of Ecology Center.[71] Lands Council III overruled Ecology Center’s instruction that courts may suggest how an agency is required to validate its scientific methodology.[72] In Ecology Center, the court required the Forest Service to “demonstrate the reliability of its scientific methodology or the hypothesis underlying the Service’s methodology with on the ground analysis,”[73] but the court in Lands Council III concluded that the Forest Service may use a particular analysis “if it deems it appropriate or necessary, but it is not required to do so.”[74] In other words, as long as “there is a reasonable scientific basis to uphold the legitimacy of [the] modeling,” the courts are required to give deference to the agency and uphold its model.[75] Therefore, Lands Council III significantly reined in the court’s ability to question how agencies justify scientific methodology.

In addition to precluding courts from prescribing the means by which an agency validates its scientific methodologies, Lands Council III also established that courts do not have the authority to choose which scientific studies support agency actions.[76] If the agency considered the scientific evidence available to it, courts must defer to the agency’s interpretations of that scientific evidence.[77] Therefore, because the Forest Service considered many different studies, the court in Lands Council III explicitly deferred to the agency’s interpretation of the scientific evidence.[78]

Finally, Lands Council III overruled Ecology Center’s requirement that an agency must present every scientific uncertainty in the evidence used to inform a decision.[79] Consequently, an agency no longer bears “the burden to anticipate questions that are not necessary to its analysis, or to respond to uncertainties that are not reasonably supported by any scientific authority.”[80] The Ninth Circuit only requires that an agency “acknowledge and respond to comments by outside parties that raise significant scientific uncertainties and reasonably support that such uncertainties exist.”[81]

Thus, the en banc court established three rules to guide Ninth Circuit jurisprudence when using the arbitrary and capricious standard of review for an agency’s use of science:

Courts may not prescribe the specific means by which an agency must validate methodologies.

Courts may no longer choose which scientific studies support an agency’s action, so long as the agency provides an explanation for its conclusion.

An agency no longer needs to address every scientific uncertainty surrounding the science it uses to support its position. The agency only needs to “acknowledge” and “respond” to the claims by parties raising and supporting that “significant scientific uncertainties” exist.[82]

Current Cases

Pacific Coast marked what commentators believed was a change toward a more substantive science requirement.[83] A decade later, however, it does not appear that the Ninth Circuit has continued down the Pacific Coast path of reducing the deference it affords to agencies when reviewing compliance with the best available science mandate. Rather, the Ninth Circuit has stayed consistent with the “rules” issued by the Lands Council III en banc court. Indeed, the three cases decided by the Ninth Circuit in 2015 reviewing the best available science requirement under the ESA[84] show that heightened agency deference is rendering the science mandate utterly meaningless.

In Alliance for the Wild Rockies v. Bradford,[85] the Ninth Circuit issued a memorandum opinion affirming that the United States Forest Service (USFS) did not violate the ESA by concluding that its Grizzly Project would not likely adversely affect the grizzly bear population.[86] The court noted that USFS met the requirements of the ESA by consulting the Wakkinen Study when making its determination.[87] The court also noted that its review of the scientific judgments and technical analyses made within an agency’s field of expertise should be at its most deferential.[88] Therefore, the court concluded that USFS had complied with the ESA’s best available science mandate.[89]

In Center for Biological Diversity v. United States Fish & Wildlife Service,[90] the Center for Biological Diversity (CBD) challenged the FWS’s decision to sign a memorandum of agreement (MOA) for groundwater pumping based on conclusions reached in its biological opinion.[91] CBD sued for declaratory and injunctive relief against the FWS alleging, among other things, that the BiOp failed to meet the best available science standard set forth by § 7 of the ESA.[92]

Specifically, CBD argued that the BiOp’s no jeopardy finding was based on expediency, not on science.[93] CBD attempted to support its argument by pointing to the fact that the conservation measures’ flow reduction triggers were negotiated and not biologically based.[94] The Ninth Circuit noted that the ESA does not require FWS to design or plan its projects using the best science possible.[95] Rather, “once action is submitted for formal consultation, the consulting agency must use the best scientific and commercial evidence available in analyzing the potential effects of that action on endangered species in its biological opinion.”[96] Therefore, the court concluded that negotiated terms do not of themselves prove that the BiOp analysis failed to utilize the best available science.[97]

Additionally, CBD argued that the BiOp’s conclusions should not be given deference because the FWS failed to address concerns raised by its own scientists regarding the effectiveness of the MOA’s conservation measures.[98] The Ninth Circuit explained that CBD’s claim failed as there was no evidence supporting a conclusion that FWS scientists’ concerns were supported by better science than the science used in the BiOp, or that FWS disregarded better scientific information than the evidence FWS relied upon.[99] Thus, the Ninth Circuit concluded that CBD was unable to prove that the no jeopardy conclusion in the BiOp was arbitrary or capricious for failing to utilize the best available science.[100]

In Cascadia Wildlands v. Thrailkill,[101] Cascadia Wildlands (Cascadia) brought action seeking to enjoin the Douglas Fire Complex Recovery Project (Recovery Project), which authorized salvage logging of roughly 1,600 acres of fire-damaged forest.[102] In approving the Recovery Project, the Medford District of the Bureau of Land Management relied on a biological opinion issued by the FWS.[103] This biological opinion concluded that the Recovery Project was not likely to result in jeopardy to the Northern Spotted Owl species or in destruction or adverse modification of the critical habitat.[104] Cascadia claimed the FWS biological opinion failed to comply with the requirements of the ESA because the FWS did not base the opinion on scientific data.[105]

As to the no jeopardy conclusion, the court found that the record showed the FWS relied on several surveys to reach its conclusion, and it deferred to the agency’s determination that the data it used was the best available scientific data.[106] With regard to the effects on the habitat, the court found that the FWS utilized several lengthy scientific reports regarding pre-fire and post-fire habitats to support the conclusion in its biological opinion.[107] Furthermore, the court noted that a reviewing court cannot substitute its judgment for that of the agency when the agency used adequate and reliable data.[108]

Cascadia also argued that the FWS’s 2011 Northern Spotted Owl Recovery Plan constituted the best available science and that the FWS was required to follow it.[109] The court rejected this argument, stating that recovery and jeopardy are two distinct concepts.[110] The court noted that a Recovery Project does not necessarily need to promote or bring about a long-term recovery of the species.[111] Rather, the biological opinion should and does focus on the Recovery Project’s ability to conserve the habitat so as not to have a detrimental effect on the species population.[112]

The court ultimately concluded that Cascadia failed to show that the FWS did not utilize the best available scientific information when issuing its biological opinion that the Recovery Project would not jeopardize the Northern Spotted Owl or its critical habitat.[113] Therefore, the Ninth Circuit affirmed the district court’s denial of the preliminary injunction to enjoin the Recovery Project.[114]

These three cases illustrate that the Ninth Circuit is still affording agencies heightened deference in scientific judgments and technical analyses. The court appears to look merely at whether the agency consulted scientific data prior to making decisions without reviewing the adequacy of the scientific data. Therefore, the ESA’s best available science mandate can be easily satisfied and will be subject to little scrutiny in the Ninth Circuit.

Conclusion

When reviewing scientific decisions based on agency expertise, the standard practice across the circuits is to afford deference to the agency unless it is shown that the agency ignored relevant scientific data when making its decision.[115] Unfortunately, this practice leaves little recourse for petitioners seeking to hold an agency accountable for substantiating its decision. As it stands now, the best available science requirement is satisfied as long as the agency considers the available data.[116] The agency is free to disagree with the data, discredit the data, or rely on weak or inconclusive data if it is the only data available.[117] As long as the agency articulates a rational connection between the data and the decision made, the court will uphold the agency action.[118] This means that as long as an agency communicates a justification for its decision, the justification itself will more than likely not be reviewed by the court.

In 2005, the Ninth Circuit substantively reviewed an agency decision and found the agency relied heavily on unstated assumptions rather than scientific evidence.[119] Had the court simply given deference to the agency’s conclusion because it articulated a justification for its decision, the court would have failed to notice that the agency was not actually basing that decision on science. Pacific Coast exemplifies the need for substantive review of agency decisions, even though the court does not like to assume the role of technical expert.[120]

Although the Ninth Circuit demonstrated in Pacific Coast that it was willing to substantively review agency decisions relating to science, the court has since shifted back to the more customary deferential approach. As the three 2015 cases show, the Ninth Circuit is reluctant to substitute its judgment for that of an agency with regard to science and as a result affords agencies great deference when reviewing decisions based on the agency’s scientific expertise.

It is unclear why the Ninth Circuit has shifted back to the deferential standard of review. Perhaps it is because Congress has remained silent on the science standard for over three decades, or perhaps the court is reluctant to proceed differently than the other circuits. Whatever the reason, it is clear that until courts engage in substantive review of agencies’ scientific decisions or Congress establishes an explicit standard of the type and quality of scientific data required, the best available science mandate will continue to operate as a fiction in the review of agency decisions.

[18] The National Forest Management Act (NFMA), 16 U.S.C. §§ 1600–1687 (2012), and the National Environmental Policy Act of 1969 (NEPA), 42 U.S.C. §§ 4321–4370h (2012), are other examples of legislation that rely on the APA as a default standard of review.

[20] See Fed. Commc’ns Comm’n v. Fox Television Stations, Inc., 556 U.S. 502, 516 (2009) (stating that the arbitrary and capricious standard is satisfied so long as the Commission’s action was not arbitrary or capricious in the ordinary sense); United States v. Locke, 471 U.S. 84, 95 (1985) (deference to the supremacy of the Legislature, as well as recognition that Congressmen typically vote on the language of a bill, generally requires us to assume that the legislative purpose is expressed by the ordinary meaning of the words used).

In 1971, the Peruvian theologian and Dominican priest Gustavo Gutiérrez published his seminal work, A Theology of Liberation, in which he advocated an activist approach to Christianity based on the belief that it is only through living in solidarity with exploited and impoverished populations that all people can ultimately become free from all forms of injustice, oppression, and suffering.[1] Recognizing that “the signs of the times” demanded a theology that synthesized spiritual contemplation and direct action,[2] Gutiérrez identified Christ’s description of the Last Judgment as the foundation of this call to solidarity with the poor[3]:

“I was hungry and you gave me food. I was thirsty and you gave me drink. I was a stranger and you took me in. I was naked and you clothed me. I was sick and you visited me. I was in prison and you came unto me…insofar as you did this to one of the least of my brethren, you did it to me.”[4]

More than four decades later, Pope Francis used similar language of liberation when he declared climate change to be the imperative moral issue of our time, asserting “the earth herself, burdened and laid waste, is among the most abandoned and maltreated of our poor.”[5] Moreover, both Gutiérrez and Pope Francis identified rampant consumerism and a self-centered notion of economic progress as the greatest contributors to deplorable conditions in the developing world. Just as Gutiérrez decried social and economic poverty as “the fruit of injustice and coercion” sown by wealthy nations and force-fed to poorer ones,[6] so too Pope Francis lamented that human beings frequently seem “to see no other meaning in their natural environment than what serves for immediate use and consumption.”[7]

Liberation theology, although most strongly associated with the Catholic Church in Latin America,[8] is not uniquely Catholic, or even uniquely Christian. Rather, the concept of liberation is a facet of all religions that challenge the injustice and poverty that are the byproducts of neoliberal economics.[9] Moreover, though the term “liberation” often carries a religious connotation,[10] liberationist principles can exist even within secular ethical theories, notably environmental justice,[11] that do not expressly use the term “liberation.” Similar to how liberation extends beyond the bounds of religion, steadily growing concerns over climate change and other environmental problems are also not confined to religion,[12] let alone any particular religion.[13] The twenty-first century is witnessing the emergence of a new ecological conscience, and as the world’s largest economic power, the United States has the opportunity to place itself in the vanguard of a global environmental movement toward greener and more sustainable practices.[14]

Rising sea levels, unpredictable weather, and dwindling natural resources make it increasingly difficult to maintain the notion that nature is beyond our ability to hurt and its bounty beyond our ability to deplete.[15] Americans’ changing attitudes and behaviors regarding sustainability in this Anthropocene era[16] indicate a sobering realization that unchecked greenhouse gas emissions have created a tragedy of the atmospheric commons.[17] Increasing awareness of the magnitude of climate change and other pressing environmental concerns has begun shifting our collective environmental values toward an ethical posture that acknowledges the continuity and interdependence of all life,[18] thus laying bare the logical conclusion that our mistreatment of the natural world translates into mistreatment of the poor, who are especially vulnerable to environmental harms.[19] The mutability of environmental ethics, however, strains against the intractability of environmental law, whose overreliance on economic principles and stilted doctrine has locked it into a narrow and anthropocentric outlook that perceives environmentally responsible practices solely as instrumental, rather than intrinsic, goods.[20]

Changes in climate, both literal and metaphorical, have created a world where environmental rights and human rights are no longer distinct concepts.[21] Yet current environmental law fails to adequately serve the public good because an outdated approach to valuing the environment and situating humans in relation to it prevents the law from evolving to conform to contemporary values.[22] Though remedying this problem is a gargantuan task with no simple solution,[23] this paper argues that the market-based principles and inflexible legal doctrines that have historically governed environmental law should yield to a liberationist ideal already taking root in environmental ethics, an ideal that recognizes “[t]here is no separating human beings from ecological nature,”[24] and therefore seeks to protect human interests by protecting the interests of the natural world.

Part II of this paper provides an overview of several strands of environmental ethics that rose to prominence over the last forty years, most notably value theory, which strongly influenced the policies underlying many of the major pieces of environmental legislation passed in the late 1960s and early 1970s. That section also explores the concepts of ecojustice and environmental justice, two approaches to humanity’s ethical duties toward the environment rooted in social justice. It further argues that environmental ethics has taken a backseat to utilitarian, economics-centered policies because of its perennial struggle to find purchase in the realm of environmental law. Part III argues that although lawmakers on the federal and state levels are finally formulating legislative and regulatory plans to address major environmental problems like climate change, efforts to put these plans into action are hindered by two systemic shortcomings of current environmental law: cost-benefit analysis and standing doctrine. Part IV returns to the concept of liberation, first analyzing how it overcomes or avoids many of the problems other theories of environmental ethics have faced. Next, it explains that emergent twenty-first century environmental values indicate a movement toward a liberationist approach to environmental ethics, and concludes by exploring how the truest expressions of this movement—the notions of uncanniness and planetarian identity—can correct the shortcomings of existing environmental law.

[Note: This piece has been modified from its original content for the ELRS submission. A subsequent publication will include this article in its entirety. For those who would like to read further, please see the citation in the following footnote.][25]

Environmental Ethics and Their Divorce from Environmental Law

Given the vast history of environmental ethics, even just in the United States,[26] this paper will limit its focus to several major developments in environmental ethics from the latter half of the twentieth century and their interaction with environmental law. Of particular interest is the influence of value theory—“what matters and why”—on environmental ethics and law.[27] Value theory was at the forefront of environmental ethics from the late 1960s through the 1970s, the “golden age of environmental law” that saw Congress enact the most significant of the country’s environmental legislation,[28] including the National Environmental Policy Act (NEPA),[29] Clean Air Act,[30] Clean Water Act,[31] and Endangered Species Act (ESA).[32]

This section is divided into three parts. The first offers a quick overview of value theory as applied to environmental ethics, focusing on the distinction between nature as an intrinsic good and an instrumental good. The second part considers the concepts of “ecojustice,” a Christian strategy of environmental ethics that views nature as an intrinsic good, and “environmental justice,” a (mostly) secular approach to environmental ethics that regards nature as more of an instrumental good. The third part explains the limits of value theory, and why these limits ostensibly make it unworkable from the perspective of environmental law.

Value Theory and the Strategy of Nature’s Standing

Willis Jenkins, a professor of environmental theology and ethics at the University of Virginia, has noted that, compared to other fields of “practical ethics,” environmental ethics struggles to reach a consensus on what it is actually trying to achieve and how it should go about achieving it.[33] This is because environmental ethics has trouble agreeing on why people should find that nature has value, and thus regard environmental issues as morally important.[34] Several different strategies have arisen that attempt to answer this question, and arguably the best known of these is something Jenkins identifies as “the strategy of nature’s standing,” a name that carries obvious legal overtones.[35] This strategy attempts to situate moral value within nature itself, but when it emerged during the golden age of environmental law, ethicists quickly realized “that the inherited vocabularies of ethics could not capture the value of nature, focused as they were on human interests (consequentialism) and rights (in deontological and contract theories).”[36] Accordingly, a new theory of nature’s value was needed, and the question became whether nature held “intrinsic value” for humanity in addition to mere “instrumental value.”[37] In other words, is the natural world just “a means to some other end” (instrumental value), or is it “an end in itself” (intrinsic value)?[38]

Advocates for nature’s intrinsic value asserted that traditional “anthropocentric” conceptions of the natural world should be replaced with a “biocentric” approach “locating value in life itself (and other aspects of self-organizing nature such as species, ecosystems, and even the planet),” or with an even stronger “ecocentric” or “deep ecology”[39] approach “presenting human interests and rights as just one example of the ethical weight of all self-organizing nature.”[40] On the other side of the argument, advocates for an instrumental conception of nature’s value held to an anthropocentric view that “the concept of value makes no sense independent of human beings for whom the value matters.”[41] The debate between intrinsic and instrumental was not (nor does it continue to be) black and white. Some environmental ethicists occupied a middle ground, acknowledging that although nature has intrinsic value, “such value does not . . . entail any obligation on the part of human beings,” because that intrinsic value by itself does not necessarily “contribute[] to the well-being of human agents.”[42]

Ecojustice and Environmental Justice

Just as he identifies three major strategies for making environmental problems intelligible to a secular moral experience, Jenkins also identifies three major strategies for explaining the importance of the environment from a Christian moral perspective.[43] Of greatest interest to this paper is ecojustice, which mirrors the value theory-focused approach of the strategy of nature’s standing[44] and generally reflects the environmental values of Roman Catholicism,[45] the soil from which liberation theology grew. According to Jenkins, ecojustice holds that nature has intrinsic moral value for Christians by virtue of being part of God’s creation: “The strategy of ecojustice makes respect for creation a mode of response to God. Right relations with God require right relations with God’s creation, which by virtue of its own relationship with God, calls for moral response.”[46]

As the name implies, ecojustice takes the concept of justice “as its overarching moral category,”[47] meaning it shares more than just a similar developmental timeline with liberation theology.[48] Like liberation theology, ecojustice is pastoral, which means it operates largely at the interstitial places between base Christian communities and the Church, bringing the two together to foster a more productive dialogue.[49] Moreover, by implicating environmental concerns in questions of economic and social justice, ecojustice expressly links harm to the environment with harm to the poor. For example, in 1989 a Presbyterian committee declared that “nature has become co-victim with the poor, that the vulnerable earth and the vulnerable people are oppressed together.”[50]

Ecojustice’s arguably secular counterpart “for bringing environmental issues within the purview of justice,” is called (unsurprisingly) environmental justice,[51] and is generally defined as “the fair treatment and meaningful involvement of all people regardless of race, color, national origin, or income with respect to the development, implementation and enforcement of environmental laws, regulations and policies.”[52] Though often regarded as an offshoot of the civil rights movement,[53] environmental justice did not truly begin developing in earnest until roughly a decade after the emergence of ecojustice in the early 1970s.[54] In a little over ten years, the movement gained enough momentum that the U.S. Environmental Protection Agency (EPA) created its own Office of Environmental Justice in 1992.[55] Two years later, President Clinton issued Executive Order 12,898, instructing every federal agency to “make achieving environmental justice part of its mission by identifying and addressing . . . disproportionately high and adverse human health or environmental effects of its programs, policies, and activities on minority populations and low-income populations.”[56]

Possibly due to their intertwining histories, the line separating ecojustice from environmental justice is not clear. Some environmental ethicists appear to regard environmental justice merely as a constitutive part of ecojustice, noting that several principles of environmental justice are basically restatements of ecojustice’s “integrative view” that strives for a “synthesis of justice and ecology, a single mission of religious reform responding to both environmental degradation and human oppression.”[57] Others, such as Jenkins, note that although ecojustice and environmental justice both concern themselves with the link between environmental degradations and human dignity, they differ in where they situate the locus of that dignity: “Ecojustice focuses on creation’s integrity; environmental justice on humanity’s ecological integrity.”[58]

Viewed from this perspective, ecojustice appears to intrinsically value nature because it “evaluate[s] right relations directly in reference to creation’s own dignity,”[59] whereas environmental justice seems to instrumentally value nature because it “critique[s] environmental degradations with respect to human dignity.”[60] Richard Bohannon and Kevin O’Brien seem to support this proposition,[61] but also go a step further, arguing that although environmental justice may have religious elements or be religiously motivated, its ties to religion, unlike ecojustice’s, have “not been prominent or explicit.”[62] More specifically, they note that the national survey of every registered toxic waste facility in the U.S. that the United Church of Christ produced in the wake of the Warren County protest included “no discussion of [religious] values, no mention of God or faith, and no emphasis on connecting the fight against injustice to the ministry of the church. This is a practical and political document, seeking to support community organizing and change public policy for the sake of social justice.”[63]

Ultimately, Bohannon and O’Brien conclude, the differences between ecojustice and environmental justice trace back to “the social location of [their] advocates. While environmental justice is a movement that emerged in inner cities and poor rural areas, eco-justice was developed by scholars, ministers, and academic theologians on university campuses.”[64] In other words, ecojustice comes from a place of social and economic privilege that environmental justice does not, and therefore ecojustice, despite all its good intentions, lacks self-awareness when it attempts to synthesize human and nonhuman interests under a single holistic vision.[65] This limitation on ecojustice’s ability to fully connect with those suffering the worst instances of injustice thus seems to eliminate it from the running as a truly practical Christian environmental ethic.

Similarly, the strategy of nature’s standing, which also seems unable to generate a fully inclusive theory of the natural world’s value, appears to be unworkable as a secular environmental ethic. Indeed, some commentators suggest that environmental justice holds an advantage over the strategy of nature’s standing because whereas that value theory-laden approach struggles to find agreement on the criteria that give nature its moral worth (and therefore struggles to identify social practices adequate to protect that worth), environmental justice’s “ecological anthropology” lends itself to economic approaches that better jibe with the strictures of environmental law.[66] As we will see in Part III, however, even though environmental justice should in theory be able to curtail the consequentialist excesses of economic theories of environmental value, in practice cost-benefit principles frequently arrive at notions of “public good” that actually do more harm than good.

The Limits of Value Theory

Jedidiah Purdy identifies two limits on value theory’s practical application that, despite the theory’s prominence in both secular and religious environmental ethics in the 1970s, undermined its ability to have a lasting effect on environmental law. The first limit boils down to the fact that because “value” is an ineluctably human construction, any claims about the value of nature necessarily rely on considerations that only humans can regard as values.[67] This is most true of anthropocentric conceptions of value, where “[a]ny claim about the value of nature must call on considerations that humans can regard as values, that is, which they can imagine themselves pursuing and respecting.”[68] But this limit also applies to biocentric and ecocentric theories that value nature intrinsically, because even if we do not confer value on nature, we still respond to value, and such response is contingent on our ability to recognize something as being “of value” in the first place.[69]

This limitation on value theory gives rise to the second: an inability to promote action. In other words, regardless of whether we adopt an intrinsic or instrumental approach to valuing nature, neither one tells us anything about how to protect that value.[70] Purdy uses the Endangered Species Act to illustrate this point, explaining that neither interpreting the Act from an intrinsic perspective (e.g., spotted owls have intrinsic value because the Act prioritizes their survival over nearly any competing human interest), nor from an anthropocentric perspective (e.g., the Act expresses a human preference for species’ survival) does anything to inform the Act’s operation.[71]

Purdy also notes a second pair of ethical theories, individualism and holism, which initially appear to be more promising than intrinsic and instrumental valuations of nature, yet also become unworkable as practical environmental ethics.[72] Individualism, in an environmental context, essentially operates as a narrower version of the biocentric and ecocentric strands of intrinsic value theory,[73] locating value in individual organisms’ “interests, points of view, or, perhaps, the very existence of individual animals and plants,”[74] but drawing the line at attributing moral standing to “holistic entities like species or ecosystems.”[75] This approach is attractive because valuing individuals creates an obligation to prevent, or at least not deliberately cause, the suffering of any living thing.[76] Followed to its logical end, however, this obligation becomes problematic for two reasons. First, because it attributes value to individuals and not larger natural systems, individualism appears to preclude valuing one species more than any other, even if one species is endangered and the other is invasive.[77] Second, this approach’s imperative to value the lives of all individual organisms ostensibly produces an absurd result in which environmental ethics stands in opposition to all natural systems: “consistent commitment to avoiding the suffering of sentient beings would seem to imply exterminating predators, even genetically engineering wild species so that the survival of some no longer requires the suffering of others—creating, that is, a world either without foxes and grizzlies, or with herbivorous versions of them.”[78]

On the other side of the spectrum is holism, which takes a “big picture” view of the environment, and “locates value in self-organizing systems such as ecosystems, species, or ‘nature’ itself.”[79] This means holism runs into the same wall as ecojustice: it fails to account for the values of and differences among individuals.[80] Just as ecojustice risks erroneously assuming that everybody, regardless of their personal experiences within their communities, will be fine so long as they share its vision of an integrated and harmonious environmental ethic,[81] so too does a holistic approach lead environmentalists to the unpleasant conclusion that the suffering of individual members of a species is morally acceptable so long as the species as a whole survives.[82] Holism also hits a second snag in that it “dissolves the distinction between human and nonhuman,”[83] resulting in a perverse syllogism that declares any human activity, no matter how destructive, to be “natural”: “If we are part of nature, then everything we do is part of nature, and is natural in that primary sense.”[84]

As with intrinsic and instrumental valuations of nature, individualism’s and holism’s uncompromising stances undermine their usefulness as practical environmental ethics. Each of these competing theories stubbornly refuses to acquiesce to any kind of moral pluralism in the belief that “seiz[ing] on one aspect of environmental value and exclud[ing] competing considerations [is] in the service of theoretical consistency.”[85] The irony, however, is that environmental law turned away from value theory precisely because its competing variants could not generate a consistent answer to the question of how we should value nature.[86]

Mechanisms Responsible for the Gulf Between Environmental Ethics and Law

[Omitted]

Toward A Liberationist Approach in Environmental Ethics

[Omitted]

Conclusion

The persistence of disputes over how we should morally value the environment and the natural world demonstrates the difficulty of crafting practical yet ethical solutions to vast and abstract problems. But in the classic tradition of making lemonade out of lemons, a burgeoning unity of will among Americans to take action against today’s “crucibles of ethical development”[87] can hopefully galvanize ethical development, which in turn can both inform and be made “more palatable” by law.[88] A liberationist approach to environmental law, with its integrative view of social and environmental justice, as well as a vision of collaborative engagement among community members on the local, regional, national, and global levels, could smooth the process of adapting our outdated environmental laws to our evolving environmental values. Even liberation theology has limits on its practical application, however. Gustavo Gutiérrez admitted that he could not do more than “sketch these considerations [i.e., the Church’s role in the process of liberation], or more precisely, outline new questions—without claiming to give conclusive answers.”[89]

Accordingly, liberation theology, like any other religious tradition with an activist social agenda, struggles to have a lasting impact on law and public policy because it must render unto Caesar what is Caesar’s.[90] Liberation theology resides simultaneously in separate realms. On one side is the realm of the spirit, where liberation theology dwells in eternity, infinity, and possibility. On the other side is the material world, where temporality, finitude, and necessity hold sway. Fortunately for environmental law, it only has to worry about the here and now. Unfortunately, we live in a time when the nation’s environmental values are swiftly changing in the face of anthropogenic environmental problems of global significance, thereby demanding a significant overhaul of environmental law in order for it to adequately safeguard these values.

[5] Pope Francis I, Laudato Si’ ¶ 2 (2015); see also Cristina Maza, One Year Later, How a Pope’s Message on Climate Change Has Resonated, Christian Science Monitor (June 24, 2016), http://www.csmonitor.com/Environment/2016/0624/One-year-later-how-a-Pope-s-message-on-climate-has-resonated (“In the year since Pope Francis released his encyclical, Laudato Si’, imploring his followers and fellow believers to care for the earth and its creatures, observers say more and more Roman Catholics are beginning to view climate change as a moral issue in which caring for the earth and caring for the poor intersect.”).

[8] See Leonardo Boff & Clodovis Boff, Introducing Liberation Theology 9 n.1 (Paul Burns trans., 24th prtg. 2011) (identifying the second Latin American bishops’ conference held at Medellín, Colombia in 1968, which met to discuss strategies for implementing the pronouncements of the Second Vatican Council, as the “official launching” of the theme of liberation in Latin America).

[9] See generally The Hope of Liberation in World Religions (Miguel A. De La Torre ed., 2008) (providing an analysis of the liberationist elements within a number of religious traditions).

[10] This is not always true, however. For example, consider the women’s liberation and animal liberation movements.

[12] See, e.g., Sarah Krakoff, Planetarian Identity Formation and the Relocalization of Environmental Law, 64 Fla. L. Rev. 87, 92-93 (2012) (identifying the rapid growth of localism—“placing value on working and buying locally”—as a response to growing awareness about the dangers of climate change).

[13] See, e.g., Malavika Vyawahare, Faith Leaders Call for Climate Change Action, ClimateWire, Nov. 12, 2015, http://www.eenews.net/climatewire/stories/1060027860/search?keyword=pope+francis (reporting on a symposium where more than fifty delegates representing a range of faiths expressed their hopes that members of all religions would rally around fighting both climate change and poverty).

[14] See Press Release, White House, U.S. Leadership and the Historic Paris Agreement to Combat Climate Change (Dec. 12, 2015), https://www.whitehouse.gov/the-press-office/2015/12/12/us-leadership-and-historic-paris-agreement-combat-climate-change (announcing the U.S.’s commitment to achieving the goals for combating climate change set forth in the Paris Agreement reached at the 21st Conference of the Parties of the United Nations Framework Convention on Climate Change).

[15] See Richard Herrmann, Pew Oceans Commission, America’s Living Oceans: Charting a Course for Sea Change 5 (2003), http://www.pewoceans.org/oceans/press_release.asp (“We have reached a crossroads where the cumulative effect of what we take from, and put into, the ocean substantially reduces the ability of marine ecosystems to produce the economic and ecological goods and services that we desire and need. What we once considered inexhaustible and resilient is, in fact, finite and fragile.”).

[16] See Jedidiah Purdy, After Nature: A Politics for the Anthropocene 1-2 (2015) [hereinafter After Nature] (acknowledging the general consensus in the scientific community that for some time the earth has been in a new geological epoch, one in which “humans are a force, maybe the force, shaping the planet.”).

[17] Krakoff, supra note 12, at 98 (“The global atmosphere is a common-pool resource, and since industrialization, agents have acted in their rational self-interest by emitting greenhouse gases in order to benefit from inexpensive energy. Even now that we know about the market’s failure to internalize the cost of greenhouse gas emissions, rational actors will still opt for cheap energy over reductions in greenhouse gas emissions because of the possibility that a defector could undermine the regime of curbing emissions.”).

[18] See After Nature, supra note 16, at 2 (“The Anthropocene finds its most radical expression in our acknowledgment that the familiar divide between people and the natural world is no longer useful or accurate.”).

[19] See id. at 46 (arguing that “natural catastrophe amplifies existing inequality” because the wealthy are better able to absorb and acclimate to the harmful consequences of man-made ecological damage).

[20] See Jedidiah Purdy, Our Place in the World: A New Relationship for Environmental Ethics and Law, 62 Duke L.J. 857, 871-77 (2013) [hereinafter Our Place in the World] (explaining how philosophical accounts of environmental ethics in the 1970s struggled to produce an agreed-upon basis for valuing nature that could be translated into law, thereby leading policymakers to turn to the economic theories that have defined environmental law for the last four decades).

[22] See Our Place in the World, supra note 20, at 883 (arguing that the divide that has grown between environmental ethics and environmental law over the last forty years demands that the law reshape itself to reflect our creative ethical capacity).

[23] See After Nature, supra note 16, at 262 (“[E]verything is connected to everything else, often in subtle and hidden ways, and any attempt to master the whole from a single standpoint is hubris and likely to turn out badly.”).

[26] For an insightful and detailed analysis of the evolution of American views on the value of the environment over the country’s history, see generally Jedidiah Purdy, American Natures: The Shape of Conflict in Environmental Law, 36 Harv. Envtl. L. Rev. 169 (2012).

[33] See Willis Jenkins, Ecologies of Grace: Environmental Ethics and Christian Theology 31-32 (2008) (arguing that unlike biomedical ethics or business ethics, environmental ethics has no “discernible social practices” upon which to base its inquiries).

[35] Id. at 42. Jenkins identifies two other secular strategies besides nature’s standing: the strategy of moral agency, id. at 46-51, and the strategy of ecological subjectivity, id. at 51-57. I have chosen to concentrate on the strategy of nature’s standing because its efforts to correlate “normative obligations with the moral status of the nonhuman world” typically set it in direct opposition to the “blinkered economic rationalism of many public policy justifications.” Id. at 42.

[38] John O’Neill, The Varieties of Intrinsic Virtue, 73 Monist 119, 119 (1992); see also Gary Varner, Biocentric Individualism, in Environmental Ethics 90, 92 (David Schmidtz & Elizabeth Willot eds., 2d ed. 2012) (“Intrinsic value is the value something has independently of its relationships to other things. If a thing has intrinsic value, then its existence (flourishing, etc.) makes the world a better place, independently of its value to anything else or any other entity’s awareness of it.”).

[39] Arne Naess, The Shallow and the Deep, Long-Range Ecology Movements, 16 Inquiry 95 (1973), reprinted in Environmental Ethics, supra note 37, at 129, 129 (contrasting “the Shallow Ecology movement,” which Naess describes as the “[f]ight against pollution and resource depletion” and having as its central objective “the health and affluence of people in the developed countries,” with “the Deep Ecology movement,” which he describes as “rejection of the man-in-environment image in favor of the relational, total-field image.”).

[40] Our Place in the World, supra note 20, at 871; see also Jenkins, supra note 33, at 42-43 (comparing J. Baird Callicott’s view of nature’s intrinsic value, which could generally be described as “biocentric,” with that of Holmes Rolston, which could generally be described as “ecocentric.”).

[45] See id. at 19-20 (explaining that the correspondence of Roman Catholicism, Protestantism, and Eastern Orthodoxy with ecojustice, stewardship, and ecological spirituality, respectively, are only tendencies and not hard rules).

[48] See Michael Moody, Caring for Creation: Environmental Advocacy by Mainline Protestant Organizations, in The Quiet Hand of God 237, 239 (Robert Wuthnow & John Evans eds., 2002) (reporting that the term “ecojustice” was either coined or “made its public debut” in a 1972 strategic planning group of the American Baptist Churches).

[49] Compare Boff & Boff, supra note 8, at 14-15 (describing “pastoral theology” as a “middle level” of liberation theology that works as a “progressively integrating factor among pastors, theologians, and lay persons, all linked together around the same axis: their liberative mission.”), with Jenkins, supra note 33, at 62 (“In order to make environmental issues part of its churches’ enduring pastoral concerns, [ecojustice] redeployed Christian notions of justice to make appropriate response to nature fit with the rationale for existing humanitarian mission commitments.”).

[53] See Worsham, supra note 51, at 633-34 (crediting either a 1979 Texas environmental rights suit or a 1982 citizens’ protest “modeled after the civil rights protests of the 1960s” in Warren County, North Carolina against a polychlorinated biphenyl landfill as the root of the modern environmental justice movement). Worsham, though writing from a legal perspective, appears vulnerable to a criticism Jenkins levels against “[s]ociological observers of [environmental justice],” namely that they “tend to skip [environmental justice’s] associations with religion.” Willis Jenkins, The Future of Ethics: Sustainability, Social Justice, and Religious Creativity 206 (2013). Case in point, Jenkins notes that when the North Carolina citizens began their protest, “they marched out from a church,” see id., a fact Worsham omits.

[54] See Moody, supra note 47, at 239 (“[Ecojustice] predates—by more than a decade—the widespread recognition within the secular environmental movement of the importance of highlighting justice connections.”).

[61] See Bohannon & O’Brien, supra note 56 (relying on the “Principles of Environmental Justice” developed by the First National People of Color Environmental Leadership Summit in 1991, “which have been used ever since to summarize the moral impulse behind the movement,” to argue that environmental justice does not “explicitly advocate on behalf of the nonhuman world for its own sake—the ‘health’ of the nonhuman world is implicitly for the benefit of ‘present and future generations’ of humans”).

[65] See id. (“Those of us . . . who do not come from oppressed communities must be cautious about claiming that we can fully understand or summarize the interests and ideas of environmental justice activists, and we must allow these activists to speak for themselves.”).

[67] See Our Place in the World, supra note 20, at 873 (“Conceptually, the issue of intrinsic versus [instrumental] value rapidly produces a dilemma, an irresolvable standoff between anthropocentric and biocentric perspectives.”).

[69] See id. (“The mind is the theater, so to speak, in which we experience value; but that does not make the mind value’s source, any more than it creates the other people with whom we have relationships.”). Purdy identifies a potential resolution to this problem in the concept of uncanniness, which will be explored in Part IV.

[84] Sober, supra note 76, at 137; see also After Nature, supra note 16, at 240 (making a similar point by asserting that human exploitation of domesticated animals should be no more “immune to ethical scrutiny” because humans “co-evolved” with those species than “slavery and gender segregation should be immune because they are widespread in human history.”).

[86] See Jenkins, supra note 33, at 49 (quoting Bronislaw Szerszynski, Wallace Heim & Claire Waterton, Nature Performed: Environment, Culture and Performance 1 (2003)) (“[P]ractical rationality . . . . ‘is being driven not just by intellectual curiosity but also by an increasing sense that existing ways of thinking about nature are inadequate to practical needs,’ that in order to describe the dynamic relations among environment and society, one is ‘not well served by the noun-dominated languages used for describing both.’”).

[87] Our Place in the World, supra note 20, at 863 (identifying the crucibles as “agricultural and food systems, the ethical status of animals, and climate change”).

Before World War II, Japanese Admiral Yamamoto wrote: “Because I have seen the motor industry in Detroit and the oilfields of Texas, I know Japan has no chance if she goes to war with America, or if she starts to compete in building warships.”[1] As he anticipated, after hostilities broke out the United States government quickly began to mobilize the nation’s considerable natural resources and manufacturing capacity.

The War Production Board (WPB) was established in 1942 in order to “increase, accelerate, and regulate the production and supply of materials, articles and equipment and the provision of emergency plant facilities . . . required for the national defense.”[2] The WPB and similar entities had the ability to determine how various raw materials would be used, set prices, and enter into novel contractual arrangements with defense contractors. Some contracts provided that contractors would operate temporary facilities owned by the government,[3] or be subject to recapture of excess profits.[4] Profit margins were typically low, but in return contractors sometimes received favorable contract terms to insulate them from unexpected costs.[5]

The Contract Settlement Act of 1944 (CSA) recognized that, because of the extent to which American industry had been integrated into the war effort, any issues with the payment of claims when the war ended could imperil the entire economy. The CSA provided procuring agencies with authority “notwithstanding any provisions of law” to “agree to assume, or indemnify the war contractor against, any claims by any person in connection with such termination claims or settlement.”[6]

As was the intent of these legislative and executive acts, American industry roared to life, flooding the operational theaters with ships, planes, tanks, ordnance, and fuel, and propelling the Allies to victory. But this overwhelming effort had ill effects as well. Due in part to the extraordinary pace of production and the less stringent environmental regulations of the time, large amounts of toxic chemicals were released at hundreds of sites around the country.

Three-and-a-half decades later, in the face of mounting public concern about environmental pollution, Congress enacted the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA).[7] CERCLA authorizes the Environmental Protection Agency, if it determines a site poses “an imminent and substantial endangerment to the public health,” to sue certain responsible parties for the costs of cleanup.[8] Many of the sites identified by the EPA under CERCLA (commonly called “Superfund” sites) are the product of the extraordinary war-time effort, and the extraordinary defense contracts that enabled it. This set the stage for decades of litigation to allocate financial responsibility for the cleanup between the contractors (and often their corporate successors) and the government.

CERCLA LIABILITY GENERALLY

CERCLA liability will attach to any entity that owns or operates a contaminated facility, or owned or operated a facility where hazardous substances were disposed of in the past, as well as a few other categories related to transporting or arranging for the improper release of hazardous materials.[9] CERCLA liability is strict, joint, and several.[10] This means that often one party may be compelled to begin cleanup (or reimburse EPA for beginning cleanup) and then will have to seek contribution from other liable parties.[11] The liability was structured this way to ensure that there would always be a party available to pay for cleanup, and to disincentivize companies from engaging in prohibited activities. Even if a corporation sells a polluted facility before the pollution is discovered, it will still be liable as a “past owner” or “operator.” Courts often note that CERCLA should be construed liberally in view of its remedial purpose to achieve its twin goals: “(1) enabling the EPA to respond efficiently and expeditiously to toxic spills, and (2) holding those parties responsible for the releases liable for the costs of the cleanup.”[12]

Under these standards, both defense contractors and the government (which specifically waived sovereign immunity related to CERCLA claims)[13] may be liable for some of the cleanup costs. But the extent of liability for each party is determined by comparing the role that each played in causing the pollution. The characterization of which entity was an “operator” is significant because of the way courts equitably apportion CERCLA contributions among the responsible parties. There is no fixed formula – instead, courts look at various sets of factors. One such set is the “Gore” factors, named after an unsuccessful but nonetheless influential attempt to pass an amendment to CERCLA in 1980 by then-Representative Al Gore.[14] A similar set of factors is known as the Torres factors.[15] A common theme is that liability will be more heavily apportioned to a party with more “knowledge and/or acquiescence […] in the contaminating activities.”[16] The tests established for the “operator” label tend to track closely with this language, and therefore being designated as an “operator” often leads to a large share of liability.[17] The analysis of which entity was “operating” a facility, or portion thereof, has evolved over time as discussed in the next section.

FMC Corp. Suggested Broad Government Liability Even for Regulatory Oversight

In 1994, the Third Circuit decided FMC Corp. v. United States.[18] The case established a framework by which the US government could be held liable as an “operator” for acts it took in a regulatory capacity. Commentators at the time were concerned that, because the government is the ultimate “deep pocket,” this could lead to a massive amount of CERCLA liability looping back onto the government.[19] While not explicitly overruled, FMC Corp. has been limited by subsequent cases. But the decision is still relevant as its fact pattern, while rare, is not unique in the WWII-era contracting context.

FMC Corp. concerned a facility located in Front Royal, VA (then owned by corporate predecessor American Viscose) that produced high tenacity rayon (“HTR”) for plane and vehicle tires. Ordinarily the tires would have used rubber, but 90% of the United States’ rubber supply came from the Pacific, which was cut off after Pearl Harbor.[20] The facility was in fact converted from producing textile rayon to HTR largely at the behest of the government.[21] In 1982, inspections revealed elevated levels of carbon disulfide in the ground water around the plant.[22] Carbon disulfide is a volatile organic compound capable of causing neurological damage with chronic exposure.[23]

After the EPA notified FMC of its potential liability under CERCLA, FMC filed suit seeking monetary contribution from the government under section 113(f) of CERCLA.[24] FMC argued that the government was “so pervasively” involved in directing the activities at the facility that it should pay some, if not all, of the cleanup costs.[25] The government admitted that it effectively controlled many aspects of the operation at the American Viscose plant, but argued that it did so only in a regulatory capacity, and that it could not be held to be an “operator” for purposes of CERCLA when it was acting only as a regulator.[26]

The FMC Corp. court looked to cases in the parent-subsidiary liability context, and chose to apply the same “substantial control” and “active involvement” test to governmental actions for purposes of CERCLA liability.[27] The court found it important that, even if the government was primarily “regulating,” it:

“determined what product the facility would manufacture, controlled the supply and price of the facility’s raw materials, in part by building or causing plants to be built near the facility for their production, supplied equipment for use in the manufacturing process, acted to ensure that the facility retained an adequate labor force, participated in the management and supervision of the labor force, had the authority to remove workers who were incompetent or guilty of misconduct, controlled the price of the facility’s product, and controlled who could purchase the product.”[28]

The court ultimately found that the government was an “operator” of the plant.[29] To the extent that this result was not what Congress may have intended when it adopted CERCLA, the court noted that amending the statute was within the power of Congress, not the Courts.[30]

Bestfoods Narrowed the Operator Liability Standard

Four years later, the Supreme Court decided United States v. Bestfoods.[31] Bestfoods dealt with the question of under what circumstances a corporate parent could be held liable as an operator under CERCLA for the actions of a subsidiary corporation. Because FMC Corp. and other earlier defense-contract-related decisions had examined governmental vicarious liability under CERCLA as being the same as the inquiry for a “non-governmental entity,”[32] Bestfoods would have a direct impact on government CERCLA liability.

Bestfoods found that a parent corporation that “so pervasively controlled” a subsidiary as to warrant veil piercing in the corporate law context could be held derivatively liable for the acts of the subsidiary.[33] This is a high standard – even a parent and a subsidiary that share officers and directors will not necessarily meet it.[34] But even if the conduct of a parent would not warrant veil piercing, the court found that “CERCLA prevents individuals from hiding behind the corporate shield when, as ‘operators,’ they themselves actually participate in the wrongful conduct.”[35] Thus, “[u]nder CERCLA, an operator is simply someone who directs the workings of, manages, or conducts the affairs of a facility… specifically related to pollution, that is, operations having to do with the leakage or disposal of hazardous waste, or decisions about compliance with environmental regulations.”[36] This standard has been interpreted to require involvement in environmental decisions on a frequent, often “day-to-day” basis.[37]

It is unclear how FMC Corp. would have been decided under this standard. While it appears that the government did exercise some day-to-day control, it is not clear that this control had the required nexus to the actual pollution. What is clear is that this standard is intensely factual in nature. For all the record developed in FMC Corp., more might have been needed to determine if the government’s day-to-day input over personnel and other issues had the required nexus to the pollution.

Recent Cases Exemplify This Narrower Standard

Two recent cases demonstrate how much more difficult it is to assign “operator” liability to the government after Bestfoods. Exxon Mobil Corp. v. United States involved two sites in Louisiana where the production of avgas[38] for the war effort led to contamination of the Mississippi River.[39] Exxon argued that many activities at the site were performed out of fear that the refineries would be seized by the WPB if production quotas were not met.[40] The court rejected this argument, finding that the government acted more like a “very interested consumer,” and did not direct day-to-day activities.[41] The court also found persuasive the fact that some of Exxon’s contracts contained clauses stating that certain specifications and quantities would be “determined by negotiation,” as opposed to simply dictated by the government.[42]

Exxon further argued that government personnel were at the site every day, performing inspections. The court cited in response other post-Bestfoods cases where daily inspections related to contract compliance and worker safety were insufficient.[43] Ultimately, the government was not determined to be an “operator” of the avgas refineries under CERCLA.[44]

A second case, TDY Holdings, reached a similar result.[45] TDY was the corporate successor of several corporations that had operated a facility near San Diego International Airport that performed aeronautical fabrication and testing as a contractor to the government between 1939 and 1999.[46] Even though it was undisputed that the government “owned some of the equipment related to the contamination, and observed and knew of TDY’s production processes and maintenance practices that released contaminates into the environment,” the government was found to be merely a “past owner” and not an “operator.”[47] TDY argued that adherence to military specifications (mil specs) led inevitably to pollution, but the court found that the mil specs did not dictate how by-product chemicals should be managed, contained, or disposed of.[48] The court also explicitly distinguished FMC Corp. on the grounds that TDY actively sought out defense work, and was never “ordered, coerced, or forced” to operate as a defense plant.[49] TDY was assigned 100% of the cleanup costs as the “operator,” even though the government had been found to be a “past owner” of some facilities.[50]

Shell and E.I. du Pont Establish the Framework for Litigation over Indemnification Clauses

With the window to assign the government “operator” liability in all but extreme cases closed, litigants have explored other ways to shift cleanup costs to the government. One method that has succeeded has been to rely on special indemnification clauses that were included in some WWII-era contracts. Unlike the in-depth factual analysis required to establish “operator” liability, this analysis involves primarily questions of law. Specifically, application of these clauses depends on whether the clauses extend to CERCLA liability (which was unforeseen at the time of their execution). If the clauses do cover CERCLA liability, the question becomes whether the Anti-Deficiency Act (ADA) prohibits payment of indemnification under the clauses and, if so, whether the ADA was effectively waived by the Contract Settlement Act of 1944.[51]

In 1940, the government contracted with E.I. du Pont to build a plant in Morgantown, WV to produce munitions-related chemicals. E.I. du Pont was to construct and operate the plant, but the facilities would be owned by the government. E.I. du Pont was to be paid a fixed fee for the operation of the plant, but the government effectively owned all of the output; there were no products "sold" to the government.[52] The contract contained an indemnification clause that read:

“the Government shall hold [E.I. du Pont] harmless against any loss, expense (including expense of litigation), or damage (including damage to third persons because of death, bodily injury or property injury or destruction or otherwise) of any kind whatsoever arising out of or in connection with the performance of the work”[53]

The court had no difficulty determining that this clause extended to CERCLA liability based on its broad, unrestricted language.[54] The court then turned to the question of whether the Anti-Deficiency Act (ADA) barred payment under the indemnification clause. The trial court had determined that the ADA, which bars payments in excess of the amounts appropriated by Congress for a particular contract,[55] did bar payment of CERCLA indemnification. The Federal Circuit did not question this general conclusion, but instead focused on whether payment was otherwise "authorized by law" as an exception to the ADA.[56]

Specifically, the Federal Circuit considered whether the Contract Settlement Act of 1944 (CSA), designed to ensure rapid settlement of war-related claims, could overcome the general prohibition of the ADA. The CSA provided that certain agencies:

“shall have authority, notwithstanding any provisions of law other than contained in this chapter, (1) to make any contract necessary and appropriate to carry out the provisions of this chapter; (2) to amend by agreement any existing contract, either before or after notice of its termination, on such terms and to such extent as it deems necessary and appropriate to carry out the provisions of this chapter; and (3) in settling any termination claim, to agree to assume, or indemnify the war contractor against, any claims by any person in connection with such termination claims or settlement.”[57]

The Federal Circuit agreed with E.I. du Pont that this language “grant[ed] the President the authority to delegate to departments and agencies contracting power virtually unfettered by contract law, including the ADA.”[58] The case was remanded for entry of judgment in E.I. du Pont’s favor: the government would be liable for any CERCLA costs that might be imposed on E.I. du Pont.[59]

Subsequent cases have confirmed that if the CSA is applicable to the contract at issue, then the ADA restriction is not effective.[60] The only issue that remains is whether the particular indemnification clause is “(1) specific enough to include CERCLA liability or (2) general enough to include any and all environmental liability which would, naturally, include subsequent CERCLA claims.”[61] Shell Oil Co. v. United States concerned a contract in which the relevant agency had agreed to pay “any now existing taxes, fees, or charges . . . imposed upon [the Oil Companies] by reason of the production, manufacture, storage, sale or delivery of [avgas].”[62] The Federal Circuit held that future CERCLA liability was a “charge” within the meaning of the contract, and the government was therefore liable to reimburse Shell for it.[63]

The courts’ findings that certain WWII-era contractor indemnification clauses cover CERCLA liability make this an attractive litigation tactic now that it is more difficult to assign the government “operator” liability. Of course, not all contracts contained a version of either of the provisions discussed above. Those that did are more likely to be contracts of the type at issue in E.I. du Pont and FMC Corp., where the government and the contractor were undertaking a mode of operation that would not normally be undertaken outside of wartime. These extraordinary contracts are more likely to involve fact scenarios in which the government might also still be determined to be an “operator,” even under the narrow Bestfoods test. But even where the government might be deemed an “operator,” the indemnification clause strategy has the advantage of providing a complete bar to contractor liability, as opposed to requiring apportionment, and it does not require intense factual investigation.

[14] The “Gore” factors include: “(i) the ability of the parties to demonstrate that their contribution to a discharge release or disposal of a hazardous waste can be distinguished; (ii) the amount of the hazardous waste involved; (iii) the degree of toxicity of the hazardous waste involved; (iv) the degree of involvement by the parties in the generation, transportation, treatment, storage, or disposal of the hazardous waste; (v) the degree of care exercised by the parties with respect to the hazardous waste concerned, taking into account the characteristics of such hazardous waste; and (vi) the degree of cooperation by the parties with Federal, State, or local officials to prevent any harm to the public health or the environment.” United States v. A & F Materials Co., 578 F. Supp. 1249, 1256 (S.D. Ill. 1984).

[17] See TDY Holdings, LLC v. United States, 122 F. Supp. 3d 998, 1015 (S.D. Cal. 2015) (“In circumstances where the Government was found to be such an ‘operator’ due to its control or management, in whole or in part, of the disposal practices at a site, courts have found it equitable to burden the Government with a substantial portion of the

[44] The government was determined to be an operator of several discrete facilities related to the litigation, including an ordnance shop. This finding was based on correspondence showing that the government “made specific decisions about waste disposal and environmental compliance,” was aware of the pollution, and decided to continue polluting. The ordnance works was described as “resembl[ing] a United States Army base more than a chemical plant” in terms of who actively managed it and its operational procedures. Id. at 530–32.

[63] Id. at 1284. Reyna, J., dissented, primarily on the ground that the provision in question was located in a section of the contract related to taxes and that, interpreted in that context, CERCLA liability was not a “charge.” Id. at 1303–05.