Tuesday, December 24, 2013

The IEA's latest long-term forecast highlights the growth of unconventional oil and gas, especially in North America, but does not see this leading to much lower oil prices.

In their main scenario fossil fuels will still meet more than three-fourths of the world's energy needs by 2035, despite significant growth in renewable energy.

The International Energy Agency (IEA) released its latest World Energy Outlook (WEO) in November, looking twenty-plus years into our energy future. The trends it describes add nuance and detail to last year's projections, rather than upending them. Among other things, they advance the expected date of global oil production leadership by the US to 2015, but suggest these gains may be short-lived and will not lead to "cheap oil." The IEA also envisions a reshuffling of the traditional roles of energy importing, exporting and consuming countries, against a backdrop of steadily increasing energy-related greenhouse gas emissions.

As in previous years, the new WEO examines the full range of energy supply and demand, with a focus this time on the sources and uses of petroleum, and on the emergence of Brazil as an oil and energy power. The IEA recognizes that it might be underestimating the potential for technology or additional resource discoveries to sustain the growth of "light tight oil," or shale oil, which together with oil sands and gas liquids is a primary driver of oil supply growth today. Nevertheless, it forecasts that tight oil output will peak by 2025. That puts the burden of supporting oil demand growth and replacing supplies lost to natural decline after 2025 back onto the Middle East producers. So in the IEA's view, OPEC's loss of market power appears temporary.

A corollary to this is that the agency does not anticipate a sustained drop in oil prices, but rather a gradual increase of about 16% by 2035. That's because the unconventional oil helping to drive current market shifts is still relatively high-cost, compared to the large conventional oil resources of the Middle East. Although the IEA expects the global oil market to grow from its present level of around 90 million barrels per day (MBD) to 101 MBD in 2035, that change would be less than its forecasted equivalent global growth in gas, renewables or even coal.
The concentration of oil demand in transport and petrochemicals would also increase, while other uses contract slightly. This is consistent with last year's observation that the center of the oil market is shifting towards Asia, since around one-third of the total anticipated growth in oil demand is for diesel to fuel goods deliveries in Asia. The shift toward Asia applies to other forms of energy as well, including natural gas and the expanded use of renewable energy. This trend is already altering global energy trading patterns, and with the US becoming more energy self-sufficient, the IEA sees a new role for energy exports from Canada to supply Asia. That includes both LNG and oil sands, which Fatih Birol, the IEA's chief economist, recently indicated the agency sees as only a minor, incremental threat to the climate compared to growing coal use.

An added nuance in this year's outlook is that the IEA now expects world-leading energy growth in China to be overtaken in a decade or so by faster growth in India, while rapidly growing consumption in the Middle East could result in that region posting the second-highest growth in primary energy demand through 2035, especially for natural gas.

In the launch presentation in London, Dr. Birol assessed the consequences of strong North American energy growth and shifting exports and imports for the prices that industries pay for energy. Because any exports of low-cost North American shale gas must be priced to cover the cost of liquefaction and long-haul freight, plus a margin, global natural gas prices should converge somewhat but still not equalize among the major consuming regions. As a result, the IEA expects US-based energy-intensive industries to have a persistent cost advantage in both gas and electricity, enabling them to increase their share of global markets.
That has implications for employment and economic growth, while sustained energy price disparities should also drive energy efficiency improvements in response.

Another issue that received prominent attention at the launch was the always controversial matter of subsidies, for both conventional and renewable energy. The IEA estimated global fossil fuel subsidies at $544 billion in 2012--mainly in developing countries and Middle East oil producers--resulting in "wasteful consumption" and fewer benefits for the poor than commonly claimed. And while supporting the use of subsidies to promote greater use of renewable energy, the agency's Executive Director, Maria van der Hoeven, made a particular point about the necessity for such subsidies to be carefully targeted and very responsive to changes in technology cost.

The IEA was founded in the aftermath of the 1973-74 Arab Oil Embargo and will celebrate its 40th anniversary next year. I couldn't help thinking about that as I reviewed the updated WEO materials. They're interesting as an annual update, but also in reflecting how the world of energy has changed since the oil shocks of the 1970s.

The rapid development of unconventional oil and gas that underpins the IEA's latest forecast would likely have amazed the industry veterans I met at the start of my career, but it would still have fit within their worldview. I think they would have found the projected growth of renewable energy, supported by climate-change-inspired subsidies that surpassed $100 billion per year in 2012, more futuristic and surprising. Yet despite the anticipated expansion of renewable energy sources over the next 22 years, the IEA envisions the share of fossil fuels in the world's total energy supply only falling from 82% today to 76% in its main "New Policies" scenario. That will seem overly cautious to many, but it underlines the challenges involved in changing such massive systems.

I'd like to wish my readers all the joys of the holiday season and a happy and prosperous New Year.

A different version of this posting was previously published on the website of Pacific Energy Development Corporation.

Thursday, December 19, 2013

The expiration of the federal subsidy for wind power on 12/31/13 provides an opportunity to replace it with a smaller benefit, more focused on innovation.

Comprehensive tax reform is the best way to approach this, including making tax incentives for energy consistent across the board.

With the end of the year fast approaching, the US wind power industry faces yet another scheduled expiration of federal tax credits for new wind turbines. The wind Production Tax Credit, or PTC, was due to expire at the end of 2012 but was extended for an additional year as part of last December’s “fiscal cliff” deal. With the PTC and other energy-related “tax expenditures” subject to Congressional negotiations on tax reform, it was looking like this might truly be its last hurrah in its current form, until Senator Baucus, Chairman of the Senate Finance Committee, released his draft proposal yesterday. Unfortunately, from what I have seen so far it falls short of sunsetting this overly generous subsidy and replacing it with a new policy emphasizing innovation.

In its 20-year history, minus a few year-long expirations in the past, the PTC has promoted tremendous growth in the US wind industry, from under 2,000 MW of installed wind capacity in 1992 to over 60,000 MW as of today. For most of its tenure, the PTC did exactly what it was intended to do: reward developers for generating increasing amounts of renewable electricity for the grid at a rate tied to inflation.

However, unlike the federal investment tax credit for solar power and some other renewables, the amount of the subsidy didn't automatically decrease as the technology improved, with wind turbines growing steadily larger, more efficient, and cheaper to build. Instead, the PTC's subsidy for wind power increased from 1.5¢ per kilowatt-hour (kWh) to its present level of around 2.3¢. That figure equates to up to $39 per oil-equivalent barrel, depending on which conversion from kWh to BTUs you choose.
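The per-barrel comparison can be checked with simple arithmetic. This is a rough sketch assuming the common thermal conversions of 5.8 million BTU per oil-equivalent barrel and 3,412 BTU per kWh; other conversion choices yield somewhat different figures:

```python
# Rough check: express the wind PTC in dollars per oil-equivalent barrel.
# Conversion factors are common thermal equivalents, assumed for illustration.
PTC_PER_KWH = 0.023    # $/kWh, current wind Production Tax Credit
BTU_PER_BOE = 5.8e6    # BTU per oil-equivalent barrel (assumed)
BTU_PER_KWH = 3412     # BTU per kilowatt-hour (thermal equivalent)

kwh_per_boe = BTU_PER_BOE / BTU_PER_KWH      # ~1,700 kWh per barrel
subsidy_per_boe = PTC_PER_KWH * kwh_per_boe  # ~$39 per barrel

print(f"${subsidy_per_boe:.0f} per oil-equivalent barrel")
```

Converting at a power plant's heat rate instead of straight thermal equivalence would give an even higher figure, which is why the text says "up to" $39.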

It's also roughly one-third of today’s average US retail electricity price for industrial customers and exceeds most estimates of typical operating and maintenance costs for wind power. The latter point has serious implications for the impact of wind farms on other generators in a regional power grid.

If wind turbine installations continued at their remarkably depressed rate of just 64 MW in the first three quarters of this year, the cost of extending the current PTC for another four years and beyond, as Senator Baucus seems to be proposing, would be negligible. However, it’s evident from industry data that a major reason installations are so low in 2013 is that the uncertainty over last year’s scheduled expiration caused developers to accelerate projects into the record-setting fourth quarter of 2012. The American Wind Energy Association cites over 2,300 MW of new wind capacity under construction as of the end of September, while installations over the last three years averaged just under 8,400 MW annually.

At that rate, a one-year extension of the current PTC would add around $5 billion annually to the federal budget over the succeeding 10 years that each year's new wind farms would receive benefits. Congress’s Joint Committee on Taxation apparently came up with a slightly higher estimate of $6.1 billion for a one-year extension.
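The ~$5 billion figure can be reproduced from the installation rate above. This is a back-of-envelope sketch in which the one-third capacity factor is my own illustrative assumption, not a figure from the Joint Committee:

```python
# Back-of-envelope cost of a one-year PTC extension, paid out over the
# 10 years that each year's new wind farms would receive the credit.
annual_additions_mw = 8400  # recent three-year average of new US wind capacity
capacity_factor = 0.33      # assumed average for US onshore wind
ptc_per_kwh = 0.023         # $/kWh
credit_years = 10           # years of PTC eligibility per project

# kW * hours/year * capacity factor = kWh generated per year
annual_generation_kwh = annual_additions_mw * 1000 * 8760 * capacity_factor
lifetime_cost = annual_generation_kwh * ptc_per_kwh * credit_years

print(f"${lifetime_cost / 1e9:.1f} billion")  # → $5.6 billion
```

The result lands between the $5 billion estimate above and the Joint Committee's $6.1 billion, with the difference driven mainly by the assumed capacity factor.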

Before reflexively supporting or opposing another status quo PTC extension, we should ask what we’d be getting for that $5 or $6 billion a year. One of the commonest rationales I encounter justifying the continuation of the current PTC is that conventional energy still receives billions of dollars in subsidies each year. Without getting bogged down in arguments over the definition of a subsidy, or the real and imagined externalities associated with using fossil fuels, it is certainly true that the US oil and gas industry benefits from deductions and tax credits in the federal tax code to the tune of around $4.3 billion per year, based on figures in the latest White House budget.

If we compare these benefits on the basis of the energy production they yield, the PTC starts to look pretty expensive. For example, wind capacity additions in 2012 of over 13,100 MW increased wind generation by 20 billion kWh over the previous year. That’s the energy equivalent of about 140 billion cubic feet of natural gas in power generation, or 66,000 barrels per day of oil. (Although less than 1% of US oil consumption is used to generate electricity, oil is still an easily visualized common denominator.)

By comparison, US oil production expanded by 837,000 bbl/day, while natural gas production grew by the equivalent of another 606,000 bbl/day. So on this somewhat apples-to-oranges basis, oil and gas added more than 20 times as much new energy output to the US economy as wind power did, for roughly the same cost to the federal government.
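The equivalences in these two paragraphs follow from the same BTU conversions, a rough sketch assuming a gas-fired generation heat rate near 7,000 BTU/kWh (my assumption) and 5.8 million BTU per barrel:

```python
# Convert 2012's incremental wind generation into gas and oil equivalents,
# then compare with the year's oil & gas production growth.
wind_growth_kwh = 20e9   # added US wind generation in 2012
gas_heat_rate = 7000     # BTU of gas burned per kWh generated (assumed)
btu_per_cf_gas = 1030    # BTU per cubic foot of natural gas
btu_per_bbl = 5.8e6      # BTU per barrel of oil

displaced_btu = wind_growth_kwh * gas_heat_rate
gas_equiv_bcf = displaced_btu / btu_per_cf_gas / 1e9   # ~140 Bcf
oil_equiv_bpd = displaced_btu / btu_per_bbl / 365      # ~66,000 bbl/day

oil_gas_growth_bpd = 837_000 + 606_000  # 2012 US oil + gas output growth
ratio = oil_gas_growth_bpd / oil_equiv_bpd             # more than 20x

print(f"{gas_equiv_bcf:.0f} Bcf, {oil_equiv_bpd:,.0f} bbl/day, {ratio:.0f}x")
```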

Now, it’s true that domestic oil and gas both had banner years for growth in 2012, reversing longer-term declines, but US wind also had its biggest year ever. Another factor making this comparison more reasonable than it might otherwise seem is that these are all essentially mature technologies. Wind turbines are still improving, but those improvements are mainly incremental at this point. Nor do they or the billions in annual subsidies for wind address the single biggest obstacle to the wider adoption of wind energy: its fundamental intermittency and disjunction with typical daily and seasonal electricity demand cycles.

When the PTC was first implemented in 1992, by its very existence it fostered innovation in a technology that was still in its infancy as a commercial means of generating meaningful quantities of electricity. That’s no longer the case. I’ve seen various ideas for reforming the PTC to make it more innovation-focused, but while these might be preferable to the status quo, they strike me as overly narrow. We don’t just need wind innovation, but energy innovation, and in fact innovation across the whole US economy if we want to remain globally competitive, and if we want to make more than incremental reductions in our greenhouse gas emissions.

It’s ironic in that context that the federal 20% research and development tax credit is also due to expire at the end of the year. If it came down to a choice between extending the R&D tax credit and extending the PTC, I’d hope that even the wind industry would opt for the R&D credit. That’s not entirely a false choice, considering the scale of ongoing federal deficits and debt, and the need for the government to borrow around 20% of what it spends.

Now is the ideal time to rethink the Production Tax Credit. Its expiration now wouldn’t be as abrupt as was foreseen at the end of 2011 or 2012, because last year’s extension redefined how projects qualify for the PTC. Any wind project that has either started significant work or spent 5% of its budget by year-end could still qualify for the current PTC in 2014. I have seen analysis suggesting a project begun now might even qualify after 2015, as long as work on it had been continuous.

That sets up a smoother transition, while Congress and the wind industry reevaluate what role, if any, specific wind-energy subsidies have in a national energy economy that looks very different from the one in which the PTC was first conceived in the 1990s. Making tax incentives more uniform across competing energy technologies, as Chairman Baucus's draft would do, is a good start. But instead of locking in a perpetual subsidy for current wind power technology at 50 times the rate of today's disputed oil & gas tax incentives, Congress should focus on making the tax incentives for all energy production consistent across the board, at levels that taxpayers can afford no matter how much these energy sources grow in the future.

Thursday, December 12, 2013

Increased US production of LPG and natural gas liquids is an outgrowth of the shale gas revolution and a key ingredient for translating its benefits into industrial growth.

The infrastructure investments, export opportunities and price relationships for these liquids represent a microcosm of the similar issues for shale gas and LNG.

An article in the Wall St. Journal last month on the impact of a Midwest propane shortage on farmers trying to dry their corn harvest caught my attention. How could propane be in short supply, when US production is soaring due to shale gas? While it turns out that the shortfall in question was localized and temporary, it prompted me to take a closer look at LPG supply and demand than I have in many years. I found yet another market that is being transformed by the shale gas revolution.

Like most Americans--except for those in the roughly 5% of US homes heated with it--I normally think about LPG only when I have to change the tank on my barbecue grill. That wasn't always the case; early in my career I traded LPGs for Texaco's west coast refining system. I'm happy to see that some of my former colleagues from that period are still involved and frequently quoted as experts on this market. Although the LPG market is obscure to many, it represents a microcosm of the issues of reindustrialization and product exports arising from the recent turnaround in US energy output trends.

In order to follow these developments, we first need to clarify some confusingly similar acronyms, starting with LPG. Although often used synonymously with propane, it actually stands for "liquefied petroleum gas" and covers mainly propane and butane, though some in the industry include ethane in this category. The term reflects the oil refinery source of much of their supply, both historically and to an important extent today. LPG overlaps with natural gas liquids (NGLs)--ethane, propane, butane, isobutane and "natural gasoline"--that have been separated from "wet" (liquids-rich) natural gas during processing. NGLs are entirely distinct from the anagrammatical LNG, or liquefied natural gas, which consists mainly of methane that has been chilled until it becomes a liquid. By contrast, NGLs and LPG are typically stored at or near ambient temperature, but under pressure to keep them in the liquid state.

LPG and NGLs make up a distinct segment of US and global energy markets, falling between the markets for natural gas and refined petroleum products. They are also linked to these larger markets, both logistically and economically. For example, gas marketers vary the amount of liquids they leave in "dry gas" to meet pipeline natural gas specifications based on price and other factors, and oil refiners blend varying quantities of butane into gasoline, depending on seasonal requirements. Propane and butane are mainly used as fuels, while ethane and isobutane are chiefly chemical feedstocks.

The development of shale gas in the US and Canada has affected the supply of NGLs and LPG in several important ways. First, starting around 2007 increasing shale gas output helped to halt and then reverse the decline in US natural gas production from which US NGLs are sourced. Then, following the financial crisis, diverging natural gas and crude oil/liquids prices pushed shale drillers toward the liquids-rich portions of shale basins like the Eagle Ford in Texas, in order to maximize their revenue. The resulting surge of US NGL production in late 2009 reinforced the decline of US LPG imports that began with the recession. According to US Energy Information Administration data, the US became a fairly consistent net exporter of LPG in 2011.

The current US LPG surplus is around 100,000 bbl/day, out of total production of around 2.7 million bbl/day. That surplus and its expected growth provides the basis for a number of announced LPG export projects, as well as the anticipated development of new domestic chemical facilities such as ethylene crackers that would consume substantial portions of new supply, particularly of ethane.

The success of those projects depends on significant investments in new infrastructure, including gas processing, NGL fractionators to split the raw NGL into its components, and pipelines to deliver NGL to fractionators and LPG to markets. This is particularly true for the Marcellus and Utica shale gas in the Northeast, from which little or no ethane has been extracted due to limited local demand. Not only is that a missed manufacturing opportunity, but it constitutes a potential constraint on further liquids-rich gas development, since leaving too much ethane in the marketed gas would cause it to exceed pipeline BTU specifications.

In the meantime we're left with a situation that's analogous to the growth of tight oil production from the Bakken shale. New sources of production have come on-stream faster than the infrastructure necessary to deliver them efficiently to where they can be processed or consumed. That puts a growing US surplus of propane and other NGLs in tension with tight regional markets for these fuels in the Midwest and Northeast, where residential propane prices are running well ahead of last year's at this time. The resolution of this apparent paradox will depend on which infrastructure and demand projects are eventually completed, and how soon.

A different version of this posting was previously published on the website of Pacific Energy Development Corporation.

Tuesday, December 03, 2013

R&D is under way in Germany to see whether CO2 emitted from power plants or other facilities could become a useful feedstock for manufacturing chemicals.

This could have several advantages over producing fuels from CO2, while providing modest emission reduction benefits.

A recent article in Chemical & Engineering News described current German research and development work focused on devising new industrial processes for making organic chemicals from CO2. These public/private partnerships capitalize on that country’s long expertise in industrial chemistry and its highly successful chemical sector. They are also extremely timely, not just because of growing concern about steadily increasing levels of CO2 in the atmosphere, but because Germany’s “Energiewende”, which includes the rapid phase-out of nuclear power, appears to be raising the country’s emissions as it relies increasingly on coal for baseload electricity generation.

In my last post I explained why it is unlikely that fossil fuels could be phased out rapidly enough to threaten the current valuations of oil and gas firms. But if carbon-based fuels will be with us for some time, that leaves open the large question of what to do about the CO2 emitted when they are burned, particularly from stationary installations like factories and power plants. The long-mooted approach of carbon capture and sequestration (CCS) still faces significant obstacles in terms of cost and social acceptance. That makes CO2 utilization efforts such as those underway in Germany especially intriguing as a way of turning lemons into lemonade.

It’s impossible to predict today whether any of the CO2 utilization processes that German companies and universities are pursuing will ever become commercial. However, they share some key advantages over “classic” CCS and various efforts to produce fuels and other chemicals from CO2 captured directly from the atmosphere:

Producing chemicals, rather than fuels, finesses a fundamental obstacle to recycling CO2. Thermodynamics dictates that reversing the results of combustion requires more energy than the fuel released when it was burned. As long as most energy globally comes from fossil fuels, it will be hard to come out ahead from an energy, emissions or cost perspective when turning CO2 back into fuels. However, if the output is valuable chemicals, that energy deficit might not be such a hindrance.

The target chemicals for these projects, including polyols, polypropylene carbonate, and acrylates, are widely used and have a global market. While most don’t quite fall into the category of premium specialty chemicals, they are unlikely to become as commoditized as motor fuels. So while cost is an important consideration, there’s probably a bit more leeway for a new process to compete and become successful.

The scale of production for these chemicals is much smaller than for motor fuels, by orders of magnitude. That means that a company investing in producing them from CO2 can hope to capture meaningful revenue and market share with a manageable scale-up from the laboratory. Yet they’re not so small that a single new plant on a scale large enough to demonstrate CO2 utilization would swamp the global market and destroy the margins that made the investment attractive in the first place.

These projects appear to be focused mainly on using the CO2 effluent from other industrial processes or power generation, ranging from 4-14% for power plants and up to 90% for some industrial processes, rather than having to collect it from the atmosphere, where it is present at just 0.04%. Starting with a CO2 concentration 100-1000 times higher than in air entails much less surface area for absorption, and likely lower energy consumption and overall capture cost.
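A minimal arithmetic check of those concentration ratios, using the figures cited above:

```python
# Ratio of CO2 concentration in various source streams vs. ambient air.
atmosphere = 0.0004         # ~0.04% CO2 in air
power_plant = (0.04, 0.14)  # 4-14% in power plant flue gas
industrial = 0.90           # up to 90% in some industrial process streams

low_ratio = power_plant[0] / atmosphere   # 100x
high_ratio = power_plant[1] / atmosphere  # 350x
industrial_ratio = industrial / atmosphere

print(f"{low_ratio:.0f}x to {high_ratio:.0f}x for power plants, "
      f"{industrial_ratio:.0f}x for industrial streams")
```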

Germany is committed to significant CO2 reduction, but the German public seems uncomfortable with the prospect of burying CO2 underground. Because Germany also lacks large numbers of mature oil fields that could be revived by CO2 injection, a commercial-scale CO2 utilization industry would solve the problem of what to do with at least some of the CO2 the country will eventually want to capture from its coal- and gas-fired power plants and other sources.

As promising as these efforts look, they are unlikely to reduce global CO2 emissions by enough to meet current goals. While chemical markets are big enough to take up some captured-and-converted CO2, they are much smaller than the global fossil fuel consumption responsible for most man-made CO2 emissions. If carbon capture really took off, the volumes of concentrated CO2 involved would require multiple additional large-scale dispositions, including enhanced oil recovery, fuel production (perhaps driven by advanced nuclear power), underground burial, and possibly chemical sequestration as carbonate rock.

In the meantime, turning some CO2 that would otherwise end up in the atmosphere into organic chemicals that will end up in more durable products seems worth pursuing. If these processes can become commercial, they will help move us in the right direction, and more cost-effectively than some other approaches receiving large ongoing government subsidies, rather than the modest seed money involved in these cases. I’ll be very interested to see how these efforts turn out.

Friday, November 22, 2013

The idea that efforts to mitigate climate change expose fossil fuel assets to the risk of a bubble-like collapse has attracted some high-profile supporters.

However, the notion of a "carbon bubble" depends on questionable assumptions concerning our current knowledge of climate change, the rate of adoption of renewable energy technology, and how such assets are valued.

In their recent Wall St. Journal op-ed, Al Gore and one of his business partners characterized the current market for investments in oil, gas and coal as an asset bubble. They also offered investors some advice for quantifying and managing the risks associated with such a bubble. This is a timely topic, because I have been seeing references to this concept with increasing frequency in venues such as the Financial Times, as well as in the growing literature around sustainability investing.

Although bubbles are best seen in retrospect, investors should always be alert to the potential, particularly after our experience just a few years ago. In this case, however, I see good reasons to believe that the case for a “carbon asset bubble” has been overstated and applied too broadly. The following five myths represent particular vulnerabilities for this notion:

1. The Quantity of Carbon That Can Be Burned Is Known Precisely
Mr. Gore is careful to differentiate risks, which can be quantified, from uncertainties, which cannot. For quantifying the climate risk to carbon-heavy assets, he refers to the widely cited 2°C threshold for irreversible damage from climate change, and to the resulting “carbon budget” determined by the International Energy Agency (IEA). As Mr. Gore interprets it, “at least two-thirds of fossil fuel reserves will not be monetized if we are to stay below 2° of warming.” That would have serious consequences for investors in oil, gas and coal.

The IEA’s calculation of a carbon budget depends on a factor called “climate sensitivity.” This figure estimates the total temperature change resulting from a doubling of atmospheric CO2 concentrations. The discussion of climate sensitivity in the recently released Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) sheds more light on this parameter, which turns out not to be known with certainty. Their Summary for Policymakers includes an expanded range of climate sensitivity estimates, compared to the IPCC’s 2007 assessment, of 1.5°-4.5°C with a likelihood defined as 66-100% probability. It also states, “No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.”

The draft technical report that forms the basis for the Summary for Policymakers provides more detail on this. It further assesses a probability of 1% or less that the climate sensitivity could be less than 1°C. That shouldn’t be surprising, since temperatures have already apparently risen by 0.8°C above pre-industrial levels. At the same time, the report indicates that recent observations of the climate — as distinct from the output of complex climate models — are consistent with “the lower part of the likely range.”

In other words, while continued increases in atmospheric CO2 resulting from increasing emissions are widely expected to result in warmer temperatures in the future, the extent of the warming from a given increase in CO2 can’t be determined precisely before the fact. For now, at least, the CO2 level necessary to reach a 2°C increase would be consistent with calculated carbon budgets both larger and smaller than the IEA’s estimate. That means that the basis of Mr. Gore’s suggested “material-risk factor” — as distinct from an uncertainty — is itself uncertain.

2. The Transition to Low-Carbon Energy Is Occurring Fast Enough to Threaten Today’s Investments in Fossil Fuels
There is no doubt that renewable energy sources such as wind and solar power are growing at impressive rates. From 2010 through 2012, global solar installations grew by an average of 58% per year, while wind installations increased by 20% per year. Yet it’s also true that they make up a small fraction of today’s energy production, and that the risks for investors of extrapolating high growth rates indefinitely proved to be very significant in the past.

For further clarity on this, consider the IEA’s latest World Energy Outlook, the agency’s analysis of global energy trends, which was just released on November 12. The IEA projects global energy consumption to grow by 33% from 2011 to 2035 in its primary scenario, which reflects expanded environmental policies and incentives over those now in place. In that scenario, the global market share of fossil fuels is expected to fall from 82% to 76%, but with total fossil fuel consumption still growing by 24% over the period. Only in their “450” scenario, based on similar assumptions to its carbon budget, would fossil fuel consumption fall by 2035, and then only by 11%.
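Those three figures are mutually consistent, as a quick check shows (normalizing 2011 total consumption to 1 unit; the small gap from the reported 24% reflects rounding in the published shares):

```python
# Check: 33% total energy growth with fossil share falling from 82% to 76%
# still implies roughly 24% growth in absolute fossil fuel consumption.
total_2011 = 1.0
total_2035 = total_2011 * 1.33   # 33% growth in total energy demand
fossil_2011 = total_2011 * 0.82  # 82% fossil share in 2011
fossil_2035 = total_2035 * 0.76  # 76% fossil share in 2035

growth = fossil_2035 / fossil_2011 - 1
print(f"{growth:.0%}")  # roughly 23-24%, consistent with the IEA's figure
```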

Moreover, in its April 2013 report on “Tracking Clean Energy Progress,” the IEA warned, “The drive to clean up the world’s energy system has stalled.” This concern was based on their observation that from 1990 to 2010 the average carbon dioxide emitted to provide a given unit of energy in the global economy had “barely moved.” That’s hardly a finding to be celebrated, but it serves as an important reminder that while some renewable energy sources are growing rapidly, fossil fuel consumption is also growing, especially in the developing world — and from a much larger base.

The transition to lower-carbon energy sources is inevitable. However, it will take longer than many suppose, and it cannot be accomplished effectively with the technologies available today. That’s a view shared by observers with better environmental credentials than mine.

3. All Fossil Fuels Are Equally Vulnerable to a Bubble
As Mr. Gore correctly notes, “Not all carbon-intensive assets are created equal.” Unfortunately, that’s a distinction that some other supporters of the carbon asset bubble meme don’t seem to make, particularly with regard to oil and natural gas. The vulnerability of an investment in fossil fuel reserves or hardware to competition from renewable energy and decarbonization doesn’t just depend on the carbon intensity of the fuel type — its emissions per equivalent barrel or BTU — but also on its functions and unique attributes.

The best example of this might be a recent transaction involving the sale of a leading coal company’s mines. What’s behind this wasn’t just new EPA regulations making it much harder to build new coal-fired power plants in the US, but some fundamental, structural challenges facing coal. Power generation now accounts for 93% of US coal consumption, as non-power commercial and industrial demand has declined. This leaves coal producers increasingly reliant on a utility market that has many other--and cleaner--options for generating electricity. That’s particularly true as the production of natural gas, with lower lifecycle greenhouse gas emissions per megawatt-hour of generation, ramps up, both domestically and globally. Coal accounts for about half of the global fossil fuel reserves that Mr. Gore and others presume to be caught up in an asset bubble.

Compare that to oil, which at 29% of global fossil fuel reserves, adjusted for energy content, still has no full-scale, mass-market alternative in its primary market of transportation energy. Despite a decade-long expansion, biofuels account for just over 3% of US liquid fuels consumption, on an energy-equivalent basis. They’re also encountering significant logistical challenges and concerns about the degree to which their production competes with food. This has contributed to efforts in the EU to limit the share of crop-based biofuels to around 6% of transportation energy. Biofuels have additional potential to displace petroleum use, particularly as technologies for converting cellulosic biomass become commercial, but barring a prompt technology breakthrough they appear incapable of substituting for more than a fraction of global oil demand in the next two decades.

Electric vehicles offer more oil-substitution potential in the long run, though they are growing from an even smaller base than wind and solar energy. Their growth will also impose new burdens on the power grid and expand the challenge of displacing the highest-emitting electricity generation with low-carbon sources.

Meanwhile, natural gas, at 20% of global fossil fuel reserves, offers the largest-scale, economic-without-subsidies substitute for either coal or oil. It also has the lowest priority for substitution by renewables on an emissions basis, and so should be least susceptible to a notional carbon bubble.

4. A Large Change in Future Fossil Fuel Demand Would Have a Large Impact on Share Prices
Although Mr. Gore’s article includes a good deal of investor-savvy terminology, it is entirely lacking in two of the most important factors in the valuation of any company engaged in discovering and producing hydrocarbons: discounted cash flow (DCF) and production decline rates. Unlike tech companies such as Facebook or even Tesla, whose primary investor value proposition depends on rapid growth and far-future profitability, oil and gas companies are typically valued on risked DCF models in which near-term production and profits count much more than distant ones.

At a conservative discount rate of 5%, the unrisked cash flow from ten years hence counts only 61% as much as next year’s, while cash flow 20 years hence counts only 38% as much. Announced changes in near-term cash flow due to unexpected fluctuations in production or margins would normally be expected to have a much bigger impact on share prices than an uncertain change in demand a decade or more in the future.
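Those present-value weights are easy to reproduce. A minimal sketch, assuming a flat 5% discount rate and no risking applied to the cash flows:

```python
# Present-value weight of a cash flow received t years from now,
# discounted at a flat 5% rate with no risking applied.
def discount_factor(t, rate=0.05):
    return 1 / (1 + rate) ** t

# A cash flow 10 years out is worth roughly 61% of face value today;
# 20 years out, roughly 38%.
print(round(discount_factor(10), 2))  # 0.61
print(round(discount_factor(20), 2))  # 0.38
```

At higher discount rates, typical of riskier upstream projects, distant cash flows shrink even faster, which is the point: valuations are dominated by the near term.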

This is compounded by the decline curves typical of many large hydrocarbon projects. If the first 3-5 years of a project account for more than half its undiscounted cash flows, it won’t be very sensitive to long-term uncertainties, nor would a company made up of the aggregation of many projects with this characteristic. This is even truer of shale gas and tight oil projects, which yield faster returns and decline more rapidly.

I can’t speak for Wall Street's oil and gas analysts, but I’d be surprised based on past experience in the industry if the risk of a 10% or greater drop in global demand for oil or gas in the 2030s would have much of an effect on their price targets for companies — certainly not enough to qualify as a bubble.

5. Fossil Fuel Share Prices Don’t Already Account for Climate Risks
The assertion of a carbon bubble in fossil fuel assets ultimately depends on investor ignorance of climate-response risks, presumably because companies haven’t quantified those risks for them. To the extent the latter condition is true, it represents an opportunity for companies seeking to capitalize on the boom in sustainability-based investing.

However, you needn’t subscribe to the Efficient Markets Hypothesis, for which Eugene Fama shared this year’s Nobel Prize in Economics, to realize that thanks to the Internet, average investors have access to most of the same information on this subject as Mr. Gore and his partners. Institutional investors, who make up the bulk of the shareholding for at least the larger energy firms, and the analysts who follow these companies have the resources to access even more information.

Nor is the idea of a carbon bubble exactly new. Mr. Gore didn't create it, and I’ve been following it for a couple of years, as it took over from waning interest in Peak Oil. It’s not an obscure risk, either, in the sense that sub-prime mortgages and credit default swaps were in the lead-up to the failure of Lehman Brothers in 2008. It’s becoming more mainstream every day, although the burden of proof that this risk is mispriced rests with those advocating this view.

Before concluding, a word of disclosure is in order. As you may gather from my bio, I spent many years working with and around fossil fuels, though my ongoing involvement in energy is much broader than that. As a result of that experience, my portfolio includes investments in companies with significant fossil fuel holdings. I strive for objectivity, but I can’t claim to be disinterested. However, neither can Mr. Gore. As a major investor in renewable energy and other technologies through the firm cited in the article and other roles, he has as much at stake in promoting the idea of a carbon bubble — and on a very different scale — as I might have in dispelling it.

The carbon bubble is an interesting hypothesis, even if I don’t yet find the arguments made in support of it convincing. Despite that, I see nothing wrong with investors wanting to track their carbon exposure, consider shadow carbon prices, or ensure they are properly diversified. However, the biggest risk I see that might eventually warrant considering divestment of fossil-fuel-related assets isn’t based on the merits of this analysis, but on the possibility of creating a self-fulfilling prophecy by drumming up social pressure on institutional investors. You might very well think that applies to this Wall St. Journal op-ed. I couldn’t possibly comment.

Monday, November 18, 2013

As the ethanol blend wall arrives, the EPA has proposed adjusting downward the federally mandated level of corn ethanol to be blended into gasoline.

This would relieve pressure on fuel blenders and retailers, but doesn't solve a problem widely expected to require bigger adjustments each year.

Last Friday the US Environmental Protection Agency proposed significant adjustments to the 2014 Renewable Fuel Standard, the federal biofuel mandate that the EPA administers. The headline change was a nearly 3 billion gallon reduction in the required biofuel volume for next year. However, as various observers, including the editors of the Washington Post, failed to note, less than half of that reduction was truly discretionary. The remainder was a necessary acknowledgement of the persistently slow pace of cellulosic biofuel development and entirely in keeping with precedent.

I've written extensively about the ethanol blend wall, the need to reform the RFS, and what that reform might look like. I don't intend to rehash those issues today. Rather, I'd like to focus on the specifics of the EPA's announcement, and why, as the Post stated, "it doesn't go far enough." Because of the way the RFS targets roll up, it's not easy to see exactly what the Agency has proposed doing with each category of biofuel under the mandate.

The first aspect requiring clarification is that the roughly 99% cut in the most restrictive category of the RFS, the target for cellulosic biofuel, is nothing new. It's at least the fourth consecutive annual reduction by my count, reflecting that the substantial volumes of cellulosic biofuel projected back in 2007 were more than merely ambitious. Several new cellulosic facilities, including plants belonging to DuPont and POET, are scheduled to start up within the next year. I hesitate to call them commercial-scale, not just because their output will be less than that of typical corn-ethanol plants, but because their commerciality can't truly be known until they're started up, de-bugged and running smoothly.

Together with another plant that has already started up, these facilities will still not come close to producing the 1.75 billion gallons of cellulosic biofuel originally mandated for 2014. For the first time, though, EPA's newly revised range of 8-30 million gallons might prove realistic.

Because the RFS's "cellulosic" category rolls up within the larger, less-restrictive "advanced" biofuel category, it wasn't obvious that the effective new 2014 target for non-cellulosic advanced biofuel, which includes biodiesel, as well as ethanol from sugar cane, actually represents a modest increase from 2013 and essentially no change from its original level of 2 billion gallons.

The only truly discretionary change in the EPA's proposal falls on the least-restrictive RFS category of "renewable biofuel." As a result, the 2014 mandate for ethanol produced from corn and other grains would be cut from 14.4 billion gallons to 13.01 billion gallons--the most important figure in the entire proposal and one you won't find in the EPA's press release. 2014 US gasoline sales are expected to be just sufficient to absorb that quantity of ethanol without exceeding the 10% blending limit in place for most US gasoline, other than E85 and the literal handful of stations selling E15, the EPA's approved 15% blend. This reduction represents a milestone and should be welcomed by consumers worried about the cost and quality of the fuel they buy.

The corn ethanol industry is understandably displeased with this proposal, which makes it clear that when push comes to shove, the EPA's preference is for more advanced biofuels over corn ethanol. But the bigger issue is the one to which the Washington Post's editorial alludes: a one-year fix cannot address the structural problems of a rule that is on a trajectory to diverge farther from its planned version of the future with each passing year.

The outcome is far from settled. The EPA's 60-day comment period is just beginning, and numerous legislators, trade associations, and companies will want to have their say about it. They should hear from ordinary consumers, too.

Thursday, November 14, 2013

Innovators are developing the systems necessary for cars to drive themselves. Some, including Google, have already staged impressive demonstrations.

However, synergies with alternative fuels appear modest, and the largest efficiency gains from self-driving cars are likely to be deferred until they dominate the market.

Self-driving cars, also referred to as autonomous cars, have been in the news for several years. Interest in them spiked in September 2012, when Google announced it would make the technology available to the public within five years. Yet while this could be revolutionary in many ways, the most relevant question for us here concerns their potential to reduce transportation energy demand. At this point the likely effects of self-driving cars on fuel consumption and fuel choice appear less spectacular and more uncertain than their other selling points.

Although the entire concept of a self-driving car might seem science-fictional, it shouldn't greatly surprise anyone who has reflected on the implications of drone aircraft, GPS, smartphones, and the steadily increasing electronic content of average cars over the last several decades. From that perspective, the most important constraints on their emergence probably depend less on technology than on social and regulatory factors.

The development of self-driving cars and their precursors has been embraced by some of the biggest names in the global automotive industry, including GM, Toyota, Audi, BMW, Volvo, and Nissan, which announced plans to make the technology available across its entire product line sometime in the next decade. (Nissan also recently reported that its EV sales are lagging years behind plan.)

Suppliers to the OEMs are also making important contributions. I vividly recall driving a car equipped with radar-based adaptive cruise control and other then-cutting-edge safety features in city traffic at the 2009 D.C. Auto Show, courtesy of Robert Bosch, LLC. All I had to do was tap the gas pedal to engage the system and then steer, while the car did the rest. Systems like this are already appearing in production models.

The two main ways in which self-driving cars could affect future transportation energy usage involve making the operation of vehicles more efficient and enabling bigger changes in vehicle design than would otherwise be feasible. Some of these benefits would start to accrue from the day the first autonomous car left a dealership, but most would require either a critical mass of such cars in the fleet, or overwhelming dominance of the fleet. That could happen sooner in fast-growing developing countries, where legacy fleets are smaller, than in the developed world.

Consider operational changes first. Highway fuel economy could be improved by 20% by means of "drafting"--one car using the car ahead to reduce wind resistance--in automated, self-organized "platoons" of multiple cars. This, together with the avoidance of collisions, would also reduce traffic congestion, variously estimated to cost up to 2.9 billion gallons of fuel each year in the US, or up to 2% of US gasoline demand. The combined potential of these savings, assuming 100% market penetration of autonomous cars, might reach 10 billion gallons per year, a quantity larger than the gasoline displaced by corn ethanol in the US. Of course achieving such savings depends on having large numbers of self-driving cars on the road; imagine the risks if a daring driver in a conventional car attempted to join a platoon of tightly packed autonomous cars.
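A rough scale check on those figures is straightforward. The demand and ethanol volumes below are my own round-number assumptions for circa-2013 conditions, not values taken from the studies mentioned:

```python
# Rough scale check on the savings cited above. All input figures
# are illustrative assumptions, not values from the cited studies.
us_gasoline_demand = 134e9  # gallons/year, approx. 2013 US gasoline demand
congestion_loss    = 2.9e9  # gallons/year, upper-end congestion estimate
ethanol_blended    = 13e9   # gallons/year of corn ethanol, approx.
energy_ratio       = 0.67   # ethanol energy content relative to gasoline

# Congestion losses come to roughly 2% of demand, as stated.
congestion_share = congestion_loss / us_gasoline_demand

# Gasoline displaced by that ethanol on an energy basis falls short
# of the 10 billion gallon/year combined-savings estimate.
gasoline_displaced = ethanol_blended * energy_ratio
print(round(congestion_share, 3))           # ~0.022
print(round(gasoline_displaced / 1e9, 1))   # ~8.7 billion gallons
```

On these assumptions, 10 billion gallons per year would indeed exceed corn ethanol's current gasoline displacement, though only by a modest margin.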

The efficiency gains from unattended autonomous parking don't require critical mass, and they might be significant, especially in congested urban areas, where one study suggested that searching for parking accounts for up to 40% of the gasoline used. However, most of these potential fuel savings could also be achieved through simpler and more easily implemented means, such as parking-space sensors and smartphone apps. And while self-driving cars might make car-sharing more popular, fewer vehicles wouldn't automatically translate into less fuel consumption if the same or more miles are driven.

The second major category of energy savings is associated with structural changes made possible by self-driving cars, mainly resulting in smaller and lighter vehicles. If cars no longer collided with each other or with inanimate objects, they wouldn't need to be nearly as robust. Saving weight saves lots of fuel. Yet it's hard to see how this process could begin before autonomous cars reached nearly 100% market penetration, since for many years they must share the road with millions of cars driven by fallible humans.

Nor is it obvious that self-driving cars would be infallible. We've already seen ordinary models exhibit random self-starting, due to malfunctioning of remote starter systems that would make up just one small subsystem of an extraordinarily complex self-driving architecture.

Some have suggested that the downsizing and weight savings facilitated by autonomous cars would hasten the adoption of battery-electric cars. The cost of today's EVs is driven largely by battery size, which is in turn a function of the vehicle's weight and its desired performance. A smaller, lighter car could make do with a smaller, cheaper battery pack. Cheaper EVs might well sell faster. However, if that must wait until enough self-driving cars are on the road for downsizing and radical lightening to become safe, it's a reasonable bet that improvements in battery technology in the intervening decades will have largely bypassed this potential benefit.

In the interim, while there might be some less-significant synergies between EVs and autonomous vehicles, neither technology is likely to depend on the other for its attraction to potential buyers. Nor do I see any obvious benefits from self-driving cars for helping alternative fuels like CNG, LNG or biofuels to gain market share.

On balance, if the average medium-term unique fuel savings of self-driving cars are limited to the 10-15% that I calculate--impressive but not game-changing--then the opportunities to improve safety and driver productivity seem like much more important motivators for this technology, for now. I also discovered a fair amount of skepticism about how soon fully autonomous cars would be widely acceptable to both consumers and regulators. Today's energy concerns might look quaint by the time such cars arrive in sufficient numbers to have a meaningful impact on them.

A different version of this posting was previously published on the website of Pacific Energy Development Corporation.

Thursday, November 07, 2013

The Arab Oil Embargo of 1973-74 focused our attention on energy security and set in motion drastic changes in the way we produce, trade and consume energy.

With US energy output approaching or exceeding 1970s levels, some experts now advocate prioritizing competition from non-petroleum fuels over reducing oil imports.

Forty years ago this month the United States and other Western countries experienced a new phenomenon as an embargo on oil deliveries from a group of the world’s largest oil exporters took effect. The embargo was a response to the military support that the US and some of its allies were providing to Israel during the Yom Kippur War then underway in the Middle East. A recent session hosted by the US Energy Security Council commemorating these events included a fascinating conversation between Ted Koppel and Dr. James Schlesinger, US Secretary of Defense at the time of the embargo and later the first US Energy Secretary.

The other, related purpose of the meeting was a presentation and discussion on the proposition that fuel competition provides a surer means of achieving energy security than the pursuit of energy independence that has shaped US policy in the four decades since the Arab Oil Embargo. This idea warrants serious consideration, since energy independence, at least in the sense of no net imports from outside North America, is finally beginning to appear achievable.

The 1973-74 embargo was the first oil shock of a tumultuous decade, and it triggered a true crisis. The US had relied on oil costing around $3 per barrel (bbl), not just to fuel our transportation system, but also for 17% of our electricity generation and numerous other uses. The US was then one of the world’s largest oil producers but required imports comprising about one-third of supply to balance our growing demand. With the sudden loss of over a million barrels per day of oil imports from the Middle East, and lacking the sort of strategic petroleum reserve that was established a few years later, an economy already battling inflation was tipped into recession.

The embargo rattled more than the US economy; it challenged basic assumptions of American life, including our sense of entitlement to cheap and plentiful gasoline. Before the oil crisis, gasoline prices hovered around the mid-30-cent mark, with occasional local “gas wars” taking the price down to the high-20s--the inflation-adjusted equivalent of $1.60 per gallon now. Of course with average fuel economy around 13 miles per gallon, the effective real cost per mile wasn’t necessarily lower than today’s.

Within a year gas was over 50¢ at the pump, and by the end of the decade it passed $1.00/gal. for the first time. The gas lines that resulted from the unexpected supply shortfall and the federal government’s efforts to limit the ensuing increase in prices were an affront to drivers, a category that encompassed most of the over-16 population.

That first oil crisis and the subsequent energy crisis resulting from the Iranian Revolution in 1979 set in motion a number of important changes, including a sharply increased focus on energy efficiency, a deliberate effort to diversify our sources of imported oil, a pronounced shift away from oil in power generation — to the point that it now makes up less than 1% of US power plant fuel — and the beginnings of our search for affordable, renewable alternatives to oil.

The US Energy Security Council is an impressive group that includes many former government officials and captains of industry. They’ve clearly spent a lot of time studying this issue, and their report is worth reading. As I understand their conclusions and recommendations, they regard high oil prices as a bigger risk to the US economy than oil imports, per se, because of the impact of oil prices on consumer spending and the balance of trade. They have concluded that the most effective way to apply downward pressure on prices is not simply to reduce US oil imports, but to introduce meaningful fuel competition into transportation markets, where oil remains dominant with a share of around 93%.

The group doesn’t dismiss the benefits of increasing US oil production from sources such as the Bakken, Eagle Ford and other shale formations, but because these are relatively high-cost supplies, they have concluded that their leverage on global oil prices is limited. That means that higher US oil output couldn’t provide a path back to the price levels that prevailed before the Iraq War, when West Texas Intermediate crude averaged $26/bbl in 2002 and gasoline retailed for $1.35/gal.

This is a reasonable argument, though it’s worth considering that a return to $75/bbl might be feasible, if US production kept rising. That could yield US retail gasoline prices around $2.75/gal., equating to $2.15 in 2002 dollars. This isn’t as far-fetched as it might seem, because the global oil price is determined not by the entire 90 million bbl/day of world supply and demand, but by the last few million bbl/day of incremental supply, demand, and inventory changes.
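The arithmetic behind that scenario can be sketched simply. The non-crude cost adder and the inflation factor below are my assumptions, chosen to be roughly consistent with the prices quoted:

```python
# Back-of-envelope check on the $75/bbl scenario. The non-crude cost
# adder and the CPI ratio are illustrative assumptions.
crude_price   = 75.0   # $/bbl
gal_per_bbl   = 42     # gallons per barrel
non_crude     = 0.96   # $/gal for refining, distribution, taxes (assumed)
cpi_2013_2002 = 1.28   # approx. CPI ratio, 2013 vs. 2002 dollars (assumed)

retail_2013 = crude_price / gal_per_bbl + non_crude  # ~$2.75/gal
retail_2002_dollars = retail_2013 / cpi_2013_2002    # ~$2.15/gal
print(round(retail_2013, 2), round(retail_2002_dollars, 2))
```

The same relationship works in reverse: any scenario for retail gasoline prices implies a corresponding crude price, once non-crude costs are netted out.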

The Council’s view also appears to emphasize the direct impact of oil prices on consumer spending without recognizing that rising production and falling imports shield the economy as a whole from the worst effects of high oil prices. With oil’s contribution to the trade deficit shrinking steadily, the main impact of higher oil prices is to divert money from consumers to shareholders of oil companies — of which I should disclose I am one. While exacerbating income inequality, that should at least result in a smaller impact on GDP and employment than the combination of rising oil prices and rising imports.

If the discussion had stopped at that point, the meeting would have been just another interesting Washington gabfest. However, the group’s analysis includes a set of actions it has identified as necessary for achieving their desired outcome: US energy security extending beyond the current US oil boom, underpinned by an expanding unconventional gas revolution that is widely expected to last for decades.

Their recommendations include giving fuels like methanol derived mainly from natural gas the chance to compete with gasoline made from oil, and with biofuels. They would start with revisions to the current US Corporate Average Fuel Economy standards to give carmakers incentives — not cash subsidies or mandates — to make at least half of all new vehicles fully fuel-flexible, capable of tolerating a wide range of blends of methanol, ethanol and gasoline. That seems like a no-regrets approach that could be achieved at a very low incremental cost per car. Even if you never bought a gallon of E85, M85, or M15, it could pay for itself by protecting your car from the damage that might result if you inadvertently filled up with gasoline containing more than the 10% of ethanol that carmakers believe is safe for non-flex-fuel cars. Other recommendations include easing regulations for retrofitting existing cars for flex-fuel and forming an alcohol-fuels alliance with China and Brazil.

Yet while I repeatedly heard that the group wasn’t promoting any single fuel, talk of methanol dominated the conversation. The moderator, Ann Korin, even joked that the session sounded like an “alcohol party.” As I later pointed out to her, there wasn’t a single mention of drop-in fuels — gasoline and diesel lookalikes derived from natural gas or biomass. I regard that as a crucial omission, because such fuels would be fully compatible with the billion cars already on the road, rather than just the 60 million or so new cars produced each year. They could provide greater leverage on oil prices by producing pipeline-ready products with which consumers are already familiar, from sources other than crude oil.

Part of the appeal of methanol seemed to be the potential for producing it from shale gas at a cost well below the cost of gasoline, even on an energy-equivalent basis — an important caveat, because a gallon of methanol contains half the energy of a gallon of gasoline. I hear the same argument in support of various pathways for producing jet fuel from non-oil sources, and it subscribes to the same fallacy: that market prices are set by manufacturing costs rather than supply and demand.

Fuel is a volume game. For a non-oil gasoline substitute to drive down oil prices –and thus motor fuel prices– as far as the Council apparently envisions, it would take at least several million barrels per day, on an oil-equivalent basis. Producing six million bbl/day of methanol from natural gas would consume 20 billion cubic feet per day of it. That’s 30% of last year’s US dry natural gas production, requiring 100% of the Energy Information Administration’s forecasted growth of US natural gas production through 2034. A number of other entities have their eyes on that same gas for other applications.
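Those volumes can be checked with simple energy accounting. The heat contents and gas-to-methanol conversion efficiency below are my assumptions, picked as typical round numbers:

```python
# Energy balance behind the 20 Bcf/day figure. Heat contents and the
# conversion efficiency are illustrative assumptions.
methanol_bbl_day = 6e6      # target methanol output, bbl/day
methanol_btu_bbl = 2.6e6    # Btu/bbl (roughly half of gasoline's ~5.2e6)
conversion_eff   = 0.76     # assumed gas-to-methanol energy efficiency
gas_btu_per_cf   = 1030     # Btu per cubic foot of natural gas
us_dry_gas_prod  = 66e9     # cf/day, approx. 2012 US dry gas production

gas_needed_cfd = (methanol_bbl_day * methanol_btu_bbl
                  / conversion_eff / gas_btu_per_cf)
share = gas_needed_cfd / us_dry_gas_prod
print(round(gas_needed_cfd / 1e9, 1))  # ~20 Bcf/day
print(round(share, 2))                 # ~30% of US dry production
```

On these assumptions the numbers hang together: about 20 billion cubic feet per day, or roughly 30% of last year's dry gas output.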

As many of the speakers at the Energy Security Council event reminded us, the world is a very different place than it was in 1973. Among other changes, US energy trends are headed in the right direction, with oil demand flat or declining, production rising and imports falling. That alone makes us more energy secure than we were, either five years ago or in 1972. Future oil supply disruptions are also unlikely to look much like the Arab Oil Embargo.

The Council is certainly correct that our unexpected shale gas bonanza, producing large quantities of new energy at a price equivalent to oil at $25 or less per barrel, provides a unique opportunity to weaken OPEC’s influence on oil prices. In pursuing that goal, however, it’s essential to remain flexible concerning the best pathways for gas to compete in transportation fuel markets, whether as CNG or LNG, or through conversion to electricity, methanol, or petroleum-product lookalikes. Consumer acceptance could prove to be the biggest uncertainty governing the ultimate outcome.

Thursday, October 31, 2013

Repeal of the RFS looks unlikely, but equitable reforms addressing the needs of all affected groups are possible, if Congress is willing to compromise.

Earlier this month, National Journal hosted an event on the “Biofuels Mandate: Defend, Reform, or Repeal” in Washington, DC. I encourage you to skim through the replay. The session highlighted a wide range of views concerning the US Renewable Fuels Standard (RFS), including those of the corn ethanol and advanced biofuels industries, poultry growers, chain restaurants, environmentalists, and small engine manufacturers. Although these broke down pretty sharply along pro- and anti-RFS lines, I thought I detected hints of the kind of compromise that might resolve this issue. I’d like to focus on the elements of such a deal, rather than rehashing the positions of all of the participants, with one necessary exception.

The most disappointing contributions to the discussion occurred during the interview with Representative Steve King (R, IA) by National Journal’s Amy Harder. If we accept Mr. King’s perspective, we should embrace the RFS as being as relevant today as when it was conceived, with no changes required. That flies in the face of the serious market distortions now manifesting in the “blend wall” at 10% ethanol content in gasoline.

Among other things, Mr. King claimed that a 2008 reduction of $0.06 per gallon in the now-expired ethanol blenders tax credit brought the expansion of the corn ethanol industry to a standstill. The industry’s own statistics tell a very different story, with US ethanol production capacity having grown by a further 86% since that point.

Rep. King also characterized “food vs. fuel” concerns as a bumper sticker issue, with no basis in fact. That issue might be controversial, but it is far too substantive to dismiss so cavalierly. The latest evidence of that is a vote by the European Parliament to cap the contribution of conventional biofuel — ethanol and biodiesel derived from food crops — at 6% of transportation energy out of a 2020 target of 10%, based on concerns about sustainability and competition with food. It seemed fairly clear that the Congressman views the RFS more as a farm support measure than an energy program.

The only one of Mr. King’s comments that seemed to find traction with the other pro-RFS panelists was his odd suggestion that without a mandate for biofuels, the only federal mandate in place would be one for petroleum-based fuels. Certainly, gasoline and diesel have advantages in terms of infrastructure, energy density and the legacy fleet, but he appeared to have something else in mind. From the way others picked up on this, perhaps it was his earlier reference to the tax benefits that conventional fuel producers have long enjoyed. This is the first and easiest element on which to compromise.

If ethanol producers and advanced biofuels developers are convinced that fossil fuels get a better deal from the federal government than the one they have under the RFS and the $1.01 per gallon producer tax credit for second-generation biofuels, it would be a simple matter to replace these programs with the same incentives received by oil and gas producers and petroleum refiners. After all, the biofuel industry already benefits from the Section 199 tax deduction that accounts for a third of budgeted federal tax benefits for the oil industry, and it shouldn’t be hard to devise an accelerated depreciation benefit analogous to “percentage depletion” and the expensing of intangible drilling expenses. Combined, the value of these tax benefits is about 1.3¢ per equivalent gallon of oil or natural gas produced this year.

Other concerns came across clearly. Despite the endorsement of 15% ethanol blends by the Environmental Protection Agency, blending more than 10% ethanol in gasoline creates serious risks for the US’s 500 million existing gasoline engines, large and small. The scale of corn diversion necessary to go beyond 10% is also distorting the US agricultural economy and food value chain, all the way to the restaurants in our communities. However, those engaged in developing new biofuels that don’t rely on edible crops, or that are fully compatible with existing infrastructure and engines, are legitimately worried that the repeal of the entire mandate would strand the significant investments in new technology that have already been made, and possibly smother their industry just as it nears its first commercial-scale deployments. All these points of view struck me as eminently reconcilable within a reformed RFS that recognizes that most of the assumptions of the 2007 mandate are no longer valid.

The starting point for reform of the RFS should be a 10% cap on ethanol from all sources in mass-market gasoline — excluding E85 — combined with measures to give ethanol from non-food sources priority within that cap over ethanol produced from corn or other food crops. The advanced biofuel targets of the RFS should also be scaled back significantly to reflect the reality that the 2007 targets were wildly optimistic. Ideally, they should be adjusted each year based on the previous year’s actual output. In return, the current producer tax credit for cellulosic and other second-generation biofuels could be extended beyond its scheduled expiration at the end of this year, and then phased out over a reasonable, predictable period, perhaps tied to cumulative output.

Finally, since few on the panel seemed impressed by the EPA’s exercise to date of its statutory power to adjust the RFS to fit changing circumstances, that authority should be transferred to another agency, along with clearer guidelines on when adjustments would become mandatory.

I’d be the first to admit that the reforms I’ve outlined above fall well short of the outright repeal of the RFS that many, including myself, would prefer. That’s the essence of compromise. Having just experienced a government shutdown and debt ceiling crisis brought on by the clash of two intransigent positions, we might find such a compromise preferable to an impasse that leaves an unsustainable status quo untouched. And if the assessment of Representative Welch (D-VT) concerning the appetite of the Congress to take up this matter is accurate, something along these lines might just be achievable.

Reform of the RFS would leave in place for a while longer the outlines of a mechanism that one of the session's panelists accurately described as a Rube Goldberg construction. Short of a guarantee to bail out everyone who invested in biofuels production or research on the basis of the RFS that Congress put in place in 2007, should they fail in a post-repeal market, I’m not sure there’s another course that would be sufficiently equitable to all parties involved.

Wednesday, October 23, 2013

An agreement to build the UK's first new nuclear power plants since 1995 endorses the role of baseload generation in the future low-emission energy mix.

Rather than constituting a choice of nuclear instead of renewables, this looks like nuclear plus renewables as a hedge on rising UK natural gas prices.

Monday's agreement among the UK government, French utility EDF and a pair of Chinese firms marks the start of the long-awaited turnover of the country's aging nuclear power infrastructure. The deal is controversial, not least for the power price of £92.50 per megawatt-hour (MWh) guaranteed to the developers. That's equivalent to about $0.15 per kilowatt-hour (kWh) at today's exchange rates. It's also strikingly different from the choices Germany and France itself have made recently.

The UK has a long history with nuclear power, having started up the world's first commercial-scale civilian reactor in 1956, following demonstration units in the US and USSR a few years earlier. Many of the plants built in the construction wave that followed have already been retired, and none has been started up since 1995. Another 40% of the country's remaining 10,000 MW of nuclear capacity is due to shut down by the end of this decade, with all but the newest, largest nuclear plant at Sizewell scheduled for retirement by the early 2020s. Even if the two new reactors that EDF and its partners will build at the Hinkley Point site in Somerset--adjacent to two 1970s-vintage reactors still in service--are completed on schedule, Britain's nuclear output is likely to shrink before it grows again.

The Hinkley C deal hinged on a question that can still only be answered theoretically today: What is the most effective future electric generating mix for achieving the necessary combination of affordability, reliability and low greenhouse gas emissions? In the aftermath of the Fukushima accident the German government decided that nuclear had no place in that mix and doubled down on its commitment to renewable energy, particularly wind and solar power, though pulling off that shift appears to require an increase in coal-fired generation. Meanwhile, France, which currently gets 75% of its electricity from nuclear, has embarked on a plan to reduce its share to 50% while expanding renewables.

The electricity mix in the UK is already changing as large offshore wind projects and onshore wind farms come online, and as the country's inexplicable flirtation with solar power increases. The gas turbines that dominated the previous wave of power plant construction are becoming more expensive to operate as waning UK North Sea gas output must increasingly be replaced by imported gas, while more coal plants shut down. All of this is underpinned by a legally binding commitment to reduce greenhouse gas emissions by 80%, compared to 1990, by 2050.

The UK's options for devising a reliable low-emission electricity mix are limited. If it wanted to build that mix around the combination of gas and renewables that California has chosen, then it would need a cheaper source of gas. That explains the government's interest in shale gas, although the outcome--both in terms of the rate of development and the future extent and cost of shale gas production--remains uncertain. It also can't rely nearly as much on solar, since it receives on average around half as much sunlight as the Golden State. Coal won't fit without carbon capture and sequestration (CCS) that is still expensive, and large-scale hydropower potential appears to be limited. That leaves nuclear as the largest-scale low-emission baseload option to anchor the energy mix, with quick-reacting natural gas turbines left to even out the fluctuations of offshore and onshore wind, and possible future wave and tidal installations.

In that context, it was surprising that the UK energy minister apparently chose to frame this week's transaction as a choice for nuclear over the "blight" of the tens of thousands of wind turbines required to generate the same amount of electricity annually. Configuring wind power to provide enough reliable baseload energy to make nuclear unnecessary would require more overcapacity, grid upgrades and energy storage than even California's legislators could imagine. That would cost far more than the £92.50/MWh price tag for new nuclear.
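The scale mismatch behind that comparison can be checked with simple capacity-factor arithmetic. The sketch below uses illustrative round-number assumptions only -- Hinkley C's planned 3,200 MW of capacity, a 90% nuclear capacity factor, and a fleet-average wind capacity factor of 27% -- none of which comes from the agreement itself:

```python
# Rough equivalence between Hinkley C's output and nameplate wind capacity.
# All figures are illustrative assumptions for a back-of-envelope check.

HOURS_PER_YEAR = 8766

def annual_twh(capacity_mw, capacity_factor):
    """Annual generation in TWh for a given nameplate capacity and capacity factor."""
    return capacity_mw * capacity_factor * HOURS_PER_YEAR / 1e6

nuclear_twh = annual_twh(3200, 0.90)      # two reactors, assumed 90% CF
wind_cf = 0.27                            # assumed fleet-average wind CF
wind_mw_needed = 3200 * 0.90 / wind_cf    # nameplate wind MW for the same average output

print(round(nuclear_twh, 1))              # ~25 TWh per year
for turbine_mw in (1.0, 2.0, 3.0):        # turbine counts at various unit sizes
    print(turbine_mw, round(wind_mw_needed / turbine_mw))
```

Under these assumptions, matching Hinkley C's roughly 25 TWh per year would take from a few thousand large turbines to more than ten thousand smaller ones -- before accounting for the overcapacity, grid upgrades and storage needed to make that output dependable.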

And that brings us back to the price guarantee, or "strike price", which was apparently the key to getting EDF and its partners to commit to proceed on Hinkley Point. Since the UK's coalition partners had previously determined to provide no subsidies for nuclear power, arrangements such as the loan guarantees offered to US nuclear developers were out of the question. Whether or not the "contract for difference" scheme chosen to support Hinkley Point's future revenue--funded by ratepayers rather than taxpayers--constitutes a subsidy by another name, it is functionally similar to the Feed-In Tariffs (FITs) offered to wind, solar and other renewables in Germany and elsewhere. For comparison, the current German solar FIT guarantees utility-scale installations the equivalent of £84/MWh for a period extending past the planned start-up of Hinkley C.

Solar and nuclear power aren't interchangeable on the grid, but the spread between them highlights the financial risks involved in the current deal. The UK is placing a potentially expensive bet on low-emission baseload power from nuclear energy, while its biggest neighbors on the Continent are turning away from nuclear to pursue steadily rising shares of intermittent wind and solar power, the cost of which keeps falling. The government's call looks justifiable today for reasons of reliability and as a long-term investment--Hinkley C should still be producing billions of kilowatt-hours a year when the wind turbines and solar panels installed in Britain this year are rust and dust. However, if the UK's Bowland shale turns out to be the first Marcellus-like play outside the US, that price guarantee could cost future British ratepayers hundreds of millions of pounds per year.

Thursday, October 17, 2013

The view that methane leaks render shale gas "worse than coal" has been further undermined by the release of a new study based on actual measurements at hundreds of gas wells.

Previous estimates of methane leakage relied on modeling or extrapolation from remote measurements. The University of Texas study addresses these shortcomings.

Since the late 1990s natural gas has been identified by both energy experts and environmentalists as a likely "bridge fuel" to facilitate the transition to cleaner energy sources. This view has recently been challenged by suggestions that methane leakage from natural gas systems--particularly from shale gas development--might be significant enough to negate the downstream climate benefits of switching to natural gas. The results of a new study from the University of Texas, sponsored by the Environmental Defense Fund (EDF) and nine energy companies, should alleviate many of those concerns.

In order to understand why indications of potential natural gas leakage rates well above the previously assumed level of around 1% would cast doubt on the environmental benefits of gas, a brief primer on greenhouse gases (GHGs) is necessary. When present in the atmosphere, these gases contribute to global warming by trapping infrared radiation that would otherwise be emitted to space. Carbon dioxide is the primary GHG implicated in climate change. It currently makes up roughly 400 parts per million (ppm)--equivalent to 0.04%--of earth's atmosphere and is increasing by around 2 ppm per year.

The main constituent of natural gas is methane. Although the atmospheric concentration of methane is much lower than that of CO2, totaling less than 2 ppm, pound for pound it is a much stronger GHG. Its "global warming potential" is 25 times higher than CO2's over a 100-year time horizon, and even higher over shorter time spans. While most atmospheric methane has been traced to natural or agricultural sources, a large increase in atmospheric methane from natural gas production could overwhelm the undisputed downstream emissions benefits of gas in electricity generation, compared to coal.
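The break-even logic in that last point can be made concrete with a back-of-envelope calculation. The sketch below is purely illustrative: the heating value, plant efficiency and coal emissions figures are round-number assumptions, not values from any of the studies discussed here:

```python
# Back-of-envelope break-even methane leakage rate for gas vs. coal power.
# All input values below are illustrative assumptions, not study results.

LHV_CH4_KWH_PER_KG = 13.9   # lower heating value of methane, ~50 MJ/kg
CO2_PER_CH4 = 44.0 / 16.0   # kg CO2 produced per kg CH4 burned
GAS_PLANT_EFF = 0.50        # assumed combined-cycle gas plant efficiency
COAL_CO2_PER_KWH = 1.0      # assumed kg CO2 per kWh from coal generation

def break_even_leakage(gwp):
    """Leak fraction at which gas power's CO2-equivalent per kWh equals coal's."""
    gas_kg_per_kwh = 1.0 / (LHV_CH4_KWH_PER_KG * GAS_PLANT_EFF)
    combustion_co2 = gas_kg_per_kwh * CO2_PER_CH4      # ~0.40 kg CO2/kWh
    headroom = COAL_CO2_PER_KWH - combustion_co2       # CO2e budget left for leaks
    # Leaked CH4 per kg delivered is L/(1-L); solve headroom = gas * L/(1-L) * gwp.
    ratio = headroom / (gas_kg_per_kwh * gwp)
    return ratio / (1.0 + ratio)

print(round(break_even_leakage(25), 3))   # 100-year GWP
print(round(break_even_leakage(72), 3))   # ~20-year GWP
```

Under these assumptions the break-even leak rate is around 14% using the 100-year GWP of 25, but only about 5-6% using a 20-year GWP near 72 -- which is why the choice of time horizon features so prominently in this debate.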
Several academic studies raised precisely this concern with regard to natural gas produced from shale by hydraulic fracturing, or "fracking", starting with a widely-publicized paper from Cornell University professor Robert Howarth in 2010. This work relied on estimates and limited data from early shale production to conclude that shale gas wells leak 3.6-7.9% of their cumulative output. A more recent series of studies from the National Oceanic and Atmospheric Administration (NOAA) and the University of Colorado Boulder used airborne remote sensing techniques to calculate leakage rates similar to Professor Howarth's. Other studies from groups as diverse as IHS CERA, Carnegie Mellon University, and the Worldwatch Institute and Deutsche Bank addressed the same question but arrived at much lower leakage rates and impacts. And earlier this year the US Environmental Protection Agency reduced its previous estimate of overall natural gas leakage to a figure equivalent to 1.7%.

However, until now all scientific studies of this issue--on both sides--were based on limited data, or on indirect measurements obtained at a significant distance from actual production sites. They relied heavily on assumptions about what was happening at large numbers of gas wells, in the absence of direct observations at these sites. That's what makes the UT study so significant: it is based on a wealth of data from actual, on-site measurements at "190 production sites throughout the US, with access provided by nine participating energy companies." That translates to roughly 500 shale gas wells in different stages of development and production. Overall, for the segment of the gas lifecycle they investigated, the UT team found methane emissions that were lower than EPA's latest estimates. Emissions from "completion flowbacks" were 98% lower, partially offset by somewhat higher observed leaks from valves and other equipment.
Although this study did not measure emissions from the entire gas lifecycle, including pipelines, it would be very hard to reconcile their observed average leakage rate of 0.4% of gross gas production with leakage estimates as high as those embraced by many of shale's critics.

Immediate criticisms of this study also missed several crucial points. First, without the industry involvement that they characterized as a "fatal flaw", access on this scale for direct measurements at production sites--surely the gold standard for emissions studies compared to estimates based on assumption-laden models--would have been difficult or impossible to obtain. More importantly, they also ignored the fact that the principal sources of methane emissions found by the UT team involved valves and equipment by no means unique to shale development, many of which should be amenable to hardware improvements or different technology choices.

While the UT team and their sponsors at EDF stated clearly that more work needs to be done to measure methane emissions from other parts of the gas value chain, the current paper convincingly dispels the notion that the emissions from shale gas development are inherently much higher than those for gas produced from vertical wells in conventional oil and gas reservoirs. Since shale gas already accounts for over a third of US natural gas production and is widely expected to dominate future production, that result has large implications for the environmental benefits of further fuel switching and other applications for natural gas.

A different version of this posting was previously published on the website of Pacific Energy Development Corporation.