Evaluating and Explaining Climate Science

Renewables XVII – Demand Management 1

The respected Grattan Institute in Australia hosted a discussion among energy insiders – grid operators, distributors, the regulator. It’s well worth reading for many reasons. When I was thinking about this article I remembered the discussion. Here are a few extracts:

MIKE: Andrew, one of the elements in the room here is the growth in peak demand. I can put however many air conditioners I want in my house and as long as I can pay for the electricity, I can turn them on and I don’t have to worry about that. You certainly can’t regulate for it. When are we going to allow you to regulate for peak demand? Obviously it’s not in the interests of the network operators who get a guaranteed rate of return on investment in growing the grid, as I understand it. It’s not there in the business model anyway. Do you see that coming?

…

MIKE: Well, controlling this thing which is really driving a lot of the issues that we have which is peak demand growth. The issue at the moment is that we haven’t had peak demand growth in the last few years because we haven’t had hot weather. We just don’t know how many air conditioners are out there that have never been turned on – three or four per household? People have made those investments, and when the next hot weather comes they’re going to recoup their investments by running them full bore. We don’t know what the load will be like when that happens.

ANDREW: Mike’s quite right. Unless there is a change in usage, there’s the risk of this ongoing growth in demand and the ongoing necessity for investment in the network, and a continued increase in prices. That is the key to it. Then the question becomes who’s responsible for managing the demand? Ought it to be the businesses themselves, and providing the businesses with the incentives to go for the lowest cost solution, whether that is network augmentation or demand management. That’s a very good way of approaching it. The other is to look at the pricing structures such that those consumers who are putting the extra load on the network, with the four air conditioners, are paying for their load on the network. At the moment everybody pays on the basis of average use rather than paying for how much demand they put on the network. Now that’s a pretty radical change in the way electricity is charged. That would lead to arguably a much better outcome in terms of the economics, it would then give people the right signals to manage their demand…

MATT: I think customers face network charges and at the moment they don’t have any way to manage their network bill because it’s just based on average usage rather than peak demand and they don’t get a signal that tells them use less peak power.

GREG: How far are we away from consumers being able to control that?

TRISTAN: In other parts of the world it’s already working. For large customers at the moment they can already do that. We have a number of customers within Victoria and Australia who when the wholesale price of power goes high they curtail their usage. Smelters who just stop potlines for a couple of hours to reduce their usage at that point in time. The reason they can do that is they can see the price signal. They have a contract which tells them in times of high prices if you turn off you get a financial reward for doing it. And they say, it’s worth doing it, I’ll turn off. Retail customers don’t get any of those price signals at the moment.

GREG: Should they?

TRISTAN: We think they should. We think there’s about $11b of installed electricity infrastructure that’s used for about eight days a year, but no-one sees that price signal. If you’ve got something that’s not used very often, it’s very expensive. The reality is if you want people to use less of something, charge them what it costs. If they’re willing to pay it, they can use it. If they’re not willing to pay it, then they’ll do something about it. In terms of enablers, though, then you do have to have things like smart meters which allow people to actually see what’s happening in their household, and you have to have products from retailers and other participants that can allow them to do something about it. Some of the things that we’re exploring in that field are pricing mechanisms like time-of-use pricing, linkages to smart appliances, so your fridge, your air conditioner, your washing machine, your dishwasher, can all be interrupted based on a price signal received by the smart meter that turns the appliance on and off. We’re getting to the point where we can do that, but we need to have the regulatory infrastructure that just enables that sort of competition and pricing to occur.

—

Demand management is an important topic for the electricity industry regardless of any questions about renewable energy.

Tristan’s point in the last statement – $11b of infrastructure used for about eight days a year – is the key: to cope with peak demand, lots of investment has to be added that will only ever be rarely used. Earlier in the discussion (not shown in the extract) there was talk about discussing with the community the tradeoff between prices and grid reliability. Basically, making the grid 99.99% reliable imposes a lot of costs.

Maybe consumers would rather have the option to pay 2/3 of their current bill and go without electricity for half a day every 5 years.

Imagine for example, that you live in a place with hot summers and this is scenario A:

you pay 20c per kWh

across a year you pay $2,000 for electricity

Now the next year the rules are changed and you have scenario B

you pay 12c per kWh

a saving of $800 per year on your bill

on 20 or so hot days you would pay $1 per kWh from 11am to 3pm

on one day of the year between midday and 3pm you would pay $20 per kWh.
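To make the arithmetic concrete, here is a minimal sketch of the two bills. The 10,000 kWh/year of usage is implied by scenario A’s numbers ($2,000 at 20c/kWh); the peak-window consumption figures are my own hypothetical assumptions, not from the article:

```python
# Rough comparison of the two scenarios above.
# 10,000 kWh/year follows from scenario A ($2,000 at 20c/kWh);
# the peak-window usage figures below are hypothetical assumptions.
ANNUAL_KWH = 10_000

bill_a = ANNUAL_KWH * 0.20                  # scenario A: flat 20c/kWh

hot_days = 20
peak_kwh_per_hot_day = 8                    # assumed usage, 11am to 3pm
extreme_day_kwh = 6                         # assumed usage, midday to 3pm

peak_kwh = hot_days * peak_kwh_per_hot_day + extreme_day_kwh
bill_b = ((ANNUAL_KWH - peak_kwh) * 0.12    # scenario B: base rate
          + hot_days * peak_kwh_per_hot_day * 1.00
          + extreme_day_kwh * 20.00)

print(f"Scenario A: ${bill_a:,.0f}")        # $2,000
print(f"Scenario B: ${bill_b:,.0f}")        # ~$1,460 with usage unchanged
```

Even running the air conditioner exactly as before, the bill comes out ahead under B; shift or cut the peak-window usage and the saving approaches the full $800.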

This is all, by the way, because we can’t store electricity (not at any reasonable cost). For the same reason, economical storage of electricity was in high demand long before intermittent renewable energy came on the scene. But it wasn’t, and still isn’t, available.

If we picture the change from scenario A to B, a lot of people would be happy. Most people would take B if it was an option. Sure it’s hot, but lots of people survived without air conditioners a decade ago, definitely a generation ago (lots still do). Fans, ice cubes, local swimming pools, beaches… Saving $800 a year means a lot to some people. Of course, there would be winners and losers. The losers would be the air conditioning industry, which would lose a large chunk of its business; suppliers of transmission and distribution equipment no longer needed to upgrade the networks; hospitals that had to pay the high costs to keep people alive…

Of course, what actually happens in this scenario, given the regulated nature of the industry in most (all?) developed countries, would be a little different. So it isn’t a case of no one buying electricity at $1 per kWh: as peak demand drops off, the price falls. Supply and demand – an equilibrium is reached where people are willing to pay the real cost. And based on the new peak demand patterns the industry forecasts what network upgrades or expansion it needs over the next 5-10 years and negotiates with the regulator how this affects prices.

But the key is people paying for the very expensive peak demand they want to use at the real cost, rather than having their costs subsidized by everyone else.

It makes perfect sense once you understand a) how an electricity grid operates and b) that electricity cannot be stored.

Let’s consider a different country. Although England has some hot summers, the problem of peak demand in England is a different one – cold winter evenings. Now I haven’t checked any real references, but my understanding is that lots of people die indoors due to the cold each year in cold countries, and it’s more of a problem than people dying due to heat in the middle of the day in hot countries. (I might be wrong on this, but I’m thinking of the subset of countries where electricity is available and affordable by the general population).

If you add demand management in a cold country maybe the problem becomes a different one – poorer old people already struggling with their electricity bills now turn off the heating when they need it the most, the cost having been pushed up by prosperous working people with their heating set on maximum for comfort. The principle is the same, of course – demand management means higher prices for electricity and so, on average, people use less heating.

So in my hugely over-simplified world, demand management has different questions around it in different climates. Air conditioning in the middle of the summer day as a luxury vs heating in the winter evenings as a necessity.

The problem becomes more complicated when considering renewables. Now it is less about reducing peak demand and more about trying to match demand to a variable supply.

There are a lot of studies in demand management, essentially pilot studies, where a number of consumers get charged different rates and the study looks at the resulting reduction in electricity use. Some of them suggest possible large demand reductions, especially with smart meters. Some of them suggest fairly pedestrian reductions. We’ll have a look at them in the next article.

Consumer demand management can come in a few different ways:

Change in schedule – e.g., you run the dishwasher at a different time. There is no reduction in overall demand, but you’ve reduced peak demand. This is simply a choice about when to use a device, and it has little impact on you, the consumer, other than minor planning or a piece of technology that needs to be programmed.

Energy storage – e.g., during winter you heat up your house during the middle of the day when demand is low – and electricity rates are low – so it’s still warm in the evening. You’ve actually increased overall demand because energy will be lost (insulation is not perfect), but you have reduced peak demand.

Cutting back – e.g., you don’t turn on the air conditioning during the middle of the summer day because electricity is too expensive. In this example, you suffer some small character-building inconvenience. This is not energy use deferred or changed, it’s simply an overall reduction in usage. In other examples the suffering might be substantial.

The demand management “tools” don’t create energy storage. Apart from the heat capacity of a house, reduced by less than perfect insulation, and the heat capacities of fridges and freezers, there is not much energy storage (and there’s effectively no electricity storage). So the choices come down to changing a schedule (washing machine, dishwasher) or to cutting back.

It’s easy to reduce total demand. Just increase the price.

The challenge of demand management to help with intermittent renewables also depends on whether solar or wind is the dominant energy source. We’ll look at this more in a subsequent article.

So it seems that winter peak domestic demand should be driven by lighting and consumer electronics. The efficiency potential here suggests a possible change in future peak demand, but that all depends on how electrification works out in heating, I guess.

You quote Tristan: “We think there’s about $11b of installed electricity infrastructure that’s used for about eight days a year, but no-one sees that price signal. If you’ve got something that’s not used very often, it’s very expensive.”

Sounds impressive at first. But a few minutes of searching reveals that Australia produces some 240 TWh of electricity a year and that the average cost to the consumer is $0.28/kWh. That comes to $67 billion per year. The cost of having the rarely used capacity is maybe $1 billion per year, so 1-2% of the total. So if Scenario A is $0.200/kWh, a more reasonable guess for Scenario B might be $0.197/kWh. I’d much rather pay the higher price than do without air conditioning.
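As a sanity check on that arithmetic (a minimal sketch, taking the quoted figures at face value):

```python
# Quick check of the figures above.
generation_kwh = 240e9        # 240 TWh of annual production, in kWh
retail_price = 0.28           # $/kWh, average consumer price as quoted
total_spend = generation_kwh * retail_price

peak_capacity_cost = 1e9      # $/yr, rough annualised cost of the $11b asset

print(f"Total spend: ${total_spend / 1e9:.0f}b/yr")                    # ~$67b
print(f"Peak capacity share: {peak_capacity_cost / total_spend:.1%}")  # ~1.5%
```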

In the U.K., air conditioning might be a luxury. In much of the U.S., it is a necessity, especially in modern large buildings.

I think there are two fallacies here. One is forgetting that a number that is really big in one context ($11 billion!) might not be big in a different context (maybe $2 trillion in electricity production over the entire life of the infrastructure). The other, often made in discussions of renewables, is to look at the cost of one part of a complex system in isolation, without regard to how it affects the system as a whole. Getting rid of the peaking capacity would not just impact the energy hogs; it would impose rolling brownouts, or worse, on everyone.

You’re right, my figure of 2/3 is invented and has no real justification.

But I can’t believe that the cost of providing peak electricity is only 1-2% of the total cost. If it was, that would mean that the cost of the fuel to create the power (i.e. the running cost) was the dominant cost seen by consumers.

Yet, the average wholesale electricity price (from AEMO) for the last few years was about A$0.055/kWh (I used the table at the bottom of the page and didn’t try to weight by population in each state).

This tells me that the cost of generation must be under A$0.05 per kWh. So consumers are paying over A$0.20 per kWh for transmission, distribution, metering..

Note: they calculate total electricity consumption in 2012-2013 as 196 TWh, with consumers at 54 TWh – “Total consumption and the classification of customers is based on information provided by companies in the industry.”

There are about 8M households in Australia and the average residential electricity bill is $1600 (lost the webpage with that last stat). That equates to $13BN per annum.

I definitely goofed by using the household price for all consumption. Commercial and industrial customers likely pay less, so my 1-2% estimate is surely low. But I don’t think I am so far out as to change the main point. The capacity used only for peak power is only a fraction (maybe 15%?) of total capacity. It is smaller as a fraction of total capital cost, since such plants are designed to be relatively cheap. The cost of that capacity is an even smaller fraction of total generation cost, at least if generation is mainly fossil fuel, and a still smaller fraction of total cost to the consumer. So maybe it is 5%, still not a big deal.

Of course if you calculate the marginal cost of that power, it is quite high (although I doubt as much as $1/kWh). It is a small part of total cost since peak power is a very small fraction of total power generated. So if you focus on marginal cost, it is a big deal; if you look at the effect on total cost, not so much.

If you increase the price of electricity to reduce demand, people may well turn to natural gas to run their stoves, water heaters, refrigerators and HVAC. Unless, of course, you do things like ban hydraulic fracturing and force a higher price for gas. While the Germans are having fun shooting themselves in the foot with Energiewende, I doubt you could do that in the USA for long.

Jacobson’s solution assumes that 2/3 of demand is flexible and can be met with local storage of heat or cold (for air-conditioning).

“CONUS loads for 2050–2055 for use in LOADMATCH are derived as follows. Annual CONUS loads are first estimated for 2050 assuming each end-use energy sector (residential, transportation, commercial, industrial) is converted to electricity and some electrolytic hydrogen after accounting for modest improvements in end-use energy efficiency (22). Annual loads in each sector are next separated into cooling and heating loads that can be met with thermal energy storage (TES), loads that can be met with hydrogen production and storage, flexible loads that can be met with DR, and inflexible loads (Table 1).

Most (50–95%) air conditioning and refrigeration and most (85–95%) air heating and water heating are coupled with TES (Table 1). Cooling coupled with storage is tied to chilled water (sensible-heat) TES (STES) and ice production and melting [phase-change material (PCM)-ice] (SI Appendix, Table S1). All building air- and water-heating coupled with storage uses underground TES (UTES) in soil. UTES storage is patterned after the seasonal and short-term district heating UTES system at the Drake Landing Community, Canada (23). The fluid (e.g., glycol solution) that heats water that heats the soil and rocks is itself heated by sunlight or excess electricity.
Overall, 85% of the transportation load and 70% of the loads for industrial high temperature, chemical, and electrical processes are assumed to be flexible or produced from H2 (Table 1).

Six types of storage are treated (SI Appendix, Table S1): three for air and water heating/cooling (STES, UTES, and PCM-ice); two for electric power generation [pumped hydropower storage (PHS) and phase-change materials coupled with concentrated solar power plants (PCM-CSP)]; and one for transport or high-temperature processes (hydrogen). Hydropower (with reservoirs) is treated as an electricity source on demand, but because reservoirs can be recharged only naturally they are not treated as artificially rechargeable storage. Lithium-ion batteries are used to power battery-electric vehicles but, to avoid battery degradation, not to feed power from vehicles to the grid. Batteries for stationary power storage work well in this system too. However, because they currently cost more than the other storage technologies used (24), they are prioritized lower and are found not to be needed. PHS is limited to its present penetration plus preliminary and pending permits as of 2015. CSP is coupled with a PCM rather than molten salt because of the greater efficiency and lower cost of the PCM (25). The maximum charge rate of CSP storage (thus mirror collector size) can be up to a factor of 5 the maximum discharge rate of CSP steam turbines to increase CSP’s capacity factor (26). Here, the maximum CSP charge rate is ∼2.6 times the maximum discharge rate (SI Appendix, Tables S1 and S2), but more CSP turbines are used than needed solely to provide annual CONUS power to increase the discharge rate of stored CSP power during times of peak power demand.

The 2050 annual cooling and heating loads (Table 1) are distributed in LOADMATCH each 30-s time step during each month of 2050–2055 in proportion to the number of cooling- and heating-degree days, respectively, each month averaged over the United States from 1949 to 2011 (27). Hydrogen loads and flexible loads are initially spread evenly over each year. Annual 2050 and 2051 inflexible loads are scaled by the ratio of hourly to annual 2006 and 2007 CONUS-aggregated loads, respectively (SI Appendix, Figs. S2 and S3) (28) to give hourly 2050 and 2051 inflexible loads, which are then applied alternately between 2052 and 2055 and distributed evenly each 30-s time step each hour. DR allows initial flexible loads to be pushed forward in 30-s increments but by no more than 8 h in the base case, at which point they are made inflexible loads. However, sensitivity tests indicate that the system is also stable with no DR (SI Appendix, Fig. S14).

The issue I have with calculations like Jacobson’s is that the assumptions made are everything; but it can be hard to evaluate, or sometimes even identify, the assumptions. And Jacobson has a history of making questionable assumptions.

As Frank points out, Jacobson relies heavily on storage. But the costs assumed in Table S1 on page 19 look suspiciously low. For instance, $14/kWh for pumped hydro would imply that a facility the size of Ludington (roughly 20 GWh) could be built for about $300 million. That is what Ludington cost when it was built over 40 years ago and is much less than what is being spent ($800 million) to refurbish and upgrade it.
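The implied figure is easy to reproduce (a one-line check using the numbers quoted above; the ~20 GWh size is the comment’s figure, not mine):

```python
# Implied build cost at the pumped-hydro price assumed in Table S1.
storage_price = 14        # $/kWh, as quoted from Jacobson's Table S1
ludington_kwh = 20e6      # ~20 GWh, the size used in the comment
print(f"Implied cost: ${storage_price * ludington_kwh / 1e6:.0f}M")  # ~$280M
```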

Jacobson also relies heavily on H2. That is an extremely expensive means of storage (more costly than batteries, I think) but all he says about cost seems to be in a confusing footnote on page 4. And it seems that he assumes that electricity (and H2?) can be shipped around the continent as needed at no significant cost.

Mike and SOD: Jacobson is doing much the same thing as Budischak, except he is using the whole CONUS rather than a section of the country. The sun is usually shining and the wind blowing somewhere, making the intermittency problem easier and the transmission problem more challenging (assuming they actually deal with it). Rightly or wrongly, this PNAS paper is likely to be more influential than Budischak’s, so it may be worth a look.

Jacobson envisions a vastly “superior” system to the one optimized by Budischak, which wasted about 2/3 of the wind power generated because it was too expensive to store. Jacobson’s losses amount to only 11%. And the capital cost of storage is less than $1T. Some numbers are given below. (I didn’t know that PNAS had turned into a journal for fantasy/science fiction, but I suppose this is the logical conclusion of the PNAS pal review system.)

I tried looking up one reference (#36) on cost of thermal energy storage. (A variant of this was used at one job site where I worked. Water for air-conditioning roughly 50 buildings was chilled on summer nights and stored in a large central tank for use during the next day.) Reference #36 turns out to be mostly speculation with only one operational example with incomplete cost data. Storage of solar heat during the summer cut the cost of winter heating by 50%. Jacobson thinks that 85% of the energy for heating and cooling can be provided from thermal energy storage when excess wind and solar are available.

This all sounds entirely too much like the wedge concept for reducing CO2 emissions. It’s all too easy to assume that each wedge will produce greater reductions than it’s actually capable of doing while grossly underestimating the cost.

$0.005/kWh for hydrogen production, compression and storage is, indeed, fantasy. I’m also not clear on how the hydrogen will be converted back to electricity. You can’t simply reverse an electrolytic hydrogen plant. Fuel cells aren’t cheap and burning it in a thermal plant wouldn’t be very efficient.

Superficially, Jacobson is indeed doing much the same thing as Budischak. But there is a difference that is much more important than the area examined. Budischak et al. seem to confine themselves to realistic assumptions, whereas Jacobson seems to make whatever assumptions are needed to arrive at his pre-ordained conclusion. And he seems to downplay his most critical and questionable assumptions. Sadly, you are probably right that Jacobson’s paper will be more influential than Budischak’s.

Being generous, one might say that Jacobson is envisioning a society very different from today’s. Today, almost everyone gets as much electrical power as they demand, whenever they demand it. Therefore we don’t need to store power. As best I can tell, Jacobson is envisioning a society where everyone stores all the power they need for seasonal heating and cooling in large hot and cold reservoirs that are filled when there is excess solar or wind power available. Excess power is also stored as hydrogen for re-generating electricity. His society has completely adapted to a system where locally generated electrical power can be highly intermittent; though he doesn’t discuss any form of rationing by price or other means when power is short. He does discuss the cost of importing power from distant regions where wind and solar electrical generation are uncorrelated with local generation, but that cost is only 0.3 cents/kWh. If/when we run out of fossil fuels and fissionable materials, we might be forced to construct such a society.

Many older large cities are laid out around the railroad transportation that dominated during their formative years. Newer large cities in the US are laid out around a network or “beltway” of freeways. Both are “adapted” to a dominant form of transportation. It is hard to build practical mass transportation in low density cities adapted to the automobile. Jacobson may be giving hints as to the kind of radical adaptation that will be required to survive on fully “renewable” energy.

“Newer large cities in the US are laid out around a network or ‘beltway’ of freeways.”

Having lived in Southern California in the late 1950’s, I think you have it backward. The population came first. The beltways and freeways were built to deal with a natural urban sprawl, the conspiracy theory underlying the Roger Rabbit movie notwithstanding. Back then, you couldn’t build tall buildings because of earthquakes and lack of a bedrock base. LA City Hall was the tallest building in Los Angeles for quite a while. And it had a large base to support the high rise part.

Also, Charlotte, NC, for example, didn’t have beltways until quite recently and they’re still not finished.

I think you are being overly generous. If Jacobson is envisioning a society very different from today’s, then he should say so. Instead he says, in the abstract, “these results hold for many conditions, suggesting that low-cost, reliable 100% WWS systems should work many places worldwide” and, under significance, “The large-scale conversion to 100% wind, water, and solar (WWS) power … is currently inhibited by a fear of grid instability and high cost”. Both statements imply that WWS can be made to work with societies as they exist, not that society has to be redesigned around the desired energy system.

Based on my limited personal knowledge, I suspect that I-680, I-580 and SR-24 prompted the development of Contra Costa Valley. In LA, Thousand Oaks appears to have been prompted by the completion of the Ventura freeway to LA, but I don’t know much about the other major suburbs such as the San Fernando Valley.

Building a freeway will definitely accelerate development. But, absent collusion between a developer and the highway department, there has to be some population already there before a multi-lane divided highway gets planned and built.

Mike: I didn’t mean to imply that I (personally) was feeling generous towards Jacobson’s analysis. I hate lack of candor and deception among those trying to “make the world a better place”. Budischak and Jacobson arrive at very different results because Budischak is trying to supply our current system of local electricity-on-demand while Jacobson is supplying a society which consumes more than 50% (?) of its electricity only when the wind is blowing. Among the many things I don’t understand, I suspect communities (not the grid) pay to store thermal energy underground locally.

I found an old reference suggesting that about 1/3 of the cost of natural gas is the pipeline needed to deliver it, and a report saying that hydrogen pipelines might be constructed for the same cost in existing rights-of-way or modestly higher cost in new rights-of-way. This reference suggests that hydrogen storage is expected to be comparable with batteries, but I don’t think anyone has built an operating facility. California is forcing its grids to purchase a large amount of electrical storage capacity, so we may see hydrogen storage operating there soon.

At the same pressure, methane has nearly five times the volumetric energy density of hydrogen. The energy density of hydrogen at 700 bar, which is seven times the highest pressure used in natural gas pipelines, is about 1/6 the energy density of gasoline. Nobody is going to build a 10,000 psi hydrogen pipeline for long distance energy transport. At 1 bar, natural gas contains 0.0364 MJ/L, hydrogen 0.0080 MJ/L. So you would need four to five times the pipeline capacity to transport the same amount of energy with hydrogen at the same pressure. And that’s not to mention that hydrogen will leak through almost anything and causes embrittlement in many metals.

I notice in your link that they talk about geologic storage for hydrogen gas. Good luck with that. You can’t simply drill a hole in the ground and pump in hydrogen and expect to get much back. A salt dome might work, but those are not found everywhere.

Of course, the low energy density of hydrogen is most relevant to storage of hydrogen. The rate of transport depends on both density and flux. The fact that natural gas pipelines are built with different diameters suggests the existence of a practical limit to the velocity gas travels in a pipeline. The viscosity of hydrogen is about 15% less than methane, but that doesn’t do much to offset the energy density difference (a factor of 4-5).

Hydrogen is delivered by pipeline in several industrial areas of the United States, Canada, and Europe. Typical operating pressures are 1-3 MPa (145-435 psig) with flows of 310-8,900 kg/h (685-20,000 lb/h) (Hart 1997; Zittel and Wurster 1996; Report to Congress 1995). Germany has a 210 km (130 mi) pipeline that has been operating since 1939, carrying 8,900 kg/h (20,000 lb/h) of hydrogen through a 0.25 m (10 in.) pipeline operating at 2 MPa (290 psig) (Hart 1997). The longest hydrogen pipeline in the world is owned by Air Liquide and runs 400 km (250 miles) from Northern France to Belgium (Hart 1997). The United States has more than 720 km (447 mi) of hydrogen pipelines concentrated along the Gulf Coast and Great Lakes (Hart 1997; Report to Congress 1995).

There is a sample pipeline calculation on page B-20, with only 2MPa (290 psi) of pressure. The energy cost to pressurize the pipeline may be a bigger limiting factor than the strength of the pipeline itself. However, I don’t understand some aspects of this calculation.

So I suspect we already have a decent idea of how much it costs to transport hydrogen by pipeline. None of the reports I looked at provided a firm estimate of the cost difference between methane and hydrogen in the most appropriate units ($/power), so I suspect the answer isn’t particularly favorable to hydrogen. However, the biggest capital cost is labor, not the pipe, so it doesn’t cost 2-4X as much to install a pipe 2 feet in diameter as a pipe 1 foot in diameter. (See Figure 26 of the first reference.) The brittleness problem may occur where sections of pipe are welded together.

I agree with you that hydrogen presents severe practical problems that its proponents tend to sweep under the rug.

I suppose I am being picky, but I question your numbers. The difference in volumetric heating value is more like a factor of 3: HHV is 889 kJ/mol for methane (same as your value) and 286 kJ/mol for H2 (40% larger than your value). Using LHV increases the ratio from 3.1 to 3.3. At one bar, equal volume is essentially the same as equal moles. I only mention this because when I get numbers wrong I appreciate being corrected so that I can get them right in future.
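A minimal sketch of that check, assuming ideal-gas behavior at 1 bar (so equal volumes contain equal moles):

```python
# Volumetric heating values at 1 bar and 25 C from the molar HHVs above.
R, T, P = 8.314, 298.15, 1e5               # J/(mol K), K, Pa
mol_per_litre = P / (R * T) / 1000         # ~0.0403 mol/L for an ideal gas

hhv_kj_per_mol = {"CH4": 889, "H2": 286}   # higher heating values, kJ/mol
for gas, hhv in hhv_kj_per_mol.items():
    print(f"{gas}: {hhv * mol_per_litre / 1000:.4f} MJ/L")
print(f"Ratio: {hhv_kj_per_mol['CH4'] / hhv_kj_per_mol['H2']:.1f}")  # ~3.1
```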

It does not look like pipeline costs can be a big fraction of what electric utilities pay. Note that the residential price is 4 to 5 times the price that electric utilities pay, with commercial prices in between. As a check on these numbers, I note that I pay my provider 2.5 to 3 times what they pay for the gas (according to the bills they send me) which is consistent with the above source if what they pay is “citygate price”. So it looks like for the consumer, what matters is the cost of distribution rather than the cost of gas.

To change the subject: The above data does not bode well for the ability of a carbon tax to change consumer behavior.

Thanks for the correction. I divided the number in the Wikipedia energy density article for MJ/L for hydrogen at 700 bar by 700. I’m not surprised it’s a little off. I doubt hydrogen behaves like an ideal gas at a pressure of 700 bar.

Another fun fact about hydrogen is that it has a negative Joule-Thomson coefficient above 200K. That means it heats when it expands. As a result, a high pressure hydrogen leak can self-ignite. This is actually a good thing as hydrogen also has the widest explosive range in air, 4-75%.

One problem with pushing the price of electricity up so much on a hot day is that those who really need to use their a/c – the pensioners, the sick, those with young kids – will be incentivised not to use them. This already happened in Japan because of price rises in electricity to cover FF imports – the results can be fatal.

Office buildings here can be stultifying in summer, for the same reason – leading to poor productivity, sweat-stained clothes, and extra washing cycles. Now if hospitals were forced to go down that path…

In the US, the wireless phone industry is competitive, largely unregulated and capital intensive. The incremental cost of providing the next Gig of data is very low and demand has huge peaks and deep valleys.

Why don’t these profit-making providers charge more during peak usage periods? Why don’t big users pay a premium for hogging the bandwidth? Why do wireless providers use mobile cell-sites to soak up excess demand at large public events rather than just charging more? Maybe 99.999% availability is an important feature. It has been the standard for wireline phones for decades.

Some years ago the electricity industry decided that it was cheaper to pay big users (steel mills etc.) to not use electricity during peak demand periods (hot summer days for example) than to build more capacity. I don’t know if this is still the case.

My guess is that any attempt by electricity regulators to jack up the price of air conditioning for residences and businesses large and small (imagine waiting for a delayed flight in an airline terminal with un-openable windows) would meet with a lot of public resistance.

In any case, peak demand for air conditioning should coincide with peak availability of solar power.

DeWitt wrote: “Another fun fact about hydrogen is that it has a negative Joule-Thomson coefficient above 200K. That means it heats when it expands. As a result, a high pressure hydrogen leak can self-ignite. This is actually a good thing as hydrogen also has the widest explosive range in air, 4-75%.”

I’ve been looking at 2D molecular mechanics simulations of gases with a Lennard-Jones force-field at:

Originally, I had fun turning on a gravitational field, watching the molecules “fall”, which created a density and temperature gradient, and watching the temperature gradient quickly vanish as collisions conducted heat upwards. So much for a thermogravitational effect (which exists with only a few molecules in the box).

Looking for other phenomena that arise from the behavior of large groups of molecules, I also tried expanding the size of the box of gas molecules and observed cooling. In my ignorance, I interpreted cooling to be PdV work until you dropped the above pearl of wisdom about free expansion. I now assume that the temperature drops after the volume is increased (pressure drops) because fewer molecules are “sticking together” in (lower energy) dimers held together by van der Waals forces. However, no matter how high I make the temperature before expansion, the gas never heats up after expansion. Do you understand the molecular origin of negative Joule-Thomson coefficients?

One obvious place to look for energy is the repulsive region of the Lennard-Jones potential (though my intuition doesn’t like this hypothesis). This region might not be properly explored during the 0.002 picoseconds between calculation steps.

To get an unambiguous result for an expansion, you must specify what remains constant during the expansion. Common choices are entropy (adiabatic reversible expansion), internal energy (free adiabatic expansion), temperature, enthalpy. The last is the Joule-Thomson expansion. I took a look at your link (looks like fun), but it is not at all obvious what is held constant when you change the volume.

Mike M: Thanks for the reply. There is a pause/resume button on the simulation at any moment that lets one change the volume of the gas and then allow the molecules to continue with their current velocity. The simulation is fun for me, since you can “turn on” gravity and see whether it produces a temperature gradient (in addition to the expected density gradient). It does, but the temperature gradient quickly disappears when a large number of molecules are present.

After a lot of thought, this passage from Wikipedia finally makes some sense:

If the expansion process is reversible, meaning that the gas is in thermodynamic equilibrium at all times, it is called an isentropic expansion. In this scenario, the gas does positive work during the expansion, and its temperature decreases.
In a free expansion, on the other hand, the gas does no work and absorbs no heat, so the internal energy is conserved. Expanded in this manner, the temperature of an ideal gas would remain constant, but the temperature of a real gas may either increase or decrease, depending on the initial temperature and pressure.
The method of expansion discussed in this article, in which a gas or liquid at pressure P1 flows into a region of lower pressure P2 via a valve or porous plug under steady state conditions and without change in kinetic energy, is called the Joule–Thomson process. During this process, enthalpy remains unchanged (see a proof below).

Contrary to what I originally thought before DeWitt introduced J-T, this is obviously not a reversible adiabatic expansion that cools off because of PdV work. Since the calculated temperature changes upon expansion, but not the quantity E, I think the simulation models expansion at constant internal energy. An expansion process at constant enthalpy is a little harder for me to grasp: The expansion is being done against some residual pressure after the throttle, so some work is being done. I don’t think the simulated expansion is at constant enthalpy.

As Mike M. points out, a Joule-Thomson expansion means enthalpy, H, is held constant. If repulsive forces dominate, then potential energy decreases as the pressure drops and the kinetic energy increases. The higher the gas temperature, the more likely this will happen. The J-T coefficient for He, for example, goes negative at about 50K, H2 at 200K and N2 at 600K. The Wikipedia article on the Joule-Thomson effect has a detailed derivation of the Joule-Thomson (Kelvin) coefficient. Obviously, an ideal gas would not change temperature in a free expansion.

Which repulsive forces? Intermolecular forces. At low temperature and high pressure, I can see some fraction of molecules existing as dimers held together by van der Waals forces. Dimer formation releases heat (enthalpy?) and dimer breakup takes up heat? When the molecules move further apart on average after expansion, there will be fewer dimers and therefore the temperature drops. (All of this is apparent in the simulation.)

At high enough pressure, I can imagine squeezing gas molecules together so they spend some time at distances where the intermolecular force is repulsive. These may be the “repulsive forces” you are talking about. When liquids (which have molecules that are touching each other) evaporate at 1 atm, they typically expand about 1000-fold in volume. So roughly 1000 atm will cause molecules to touch a large fraction of the time and experience repulsive intermolecular forces. If we drop to 100 psi, the molecules on average are a molecular diameter apart – probably relatively few experience repulsive forces at any time, while many will be in the attractive region of the Lennard-Jones potential. If the Boltzmann distribution applies, the “relatively few” experiencing strong repulsive forces will increase with temperature. In conclusion, I can see where intermolecular repulsive forces might come from, but I’m surprised they are relevant at a few atm pressure.

DeWitt wrote: “The Wikipedia article on the Joule-Thomson effect has a detailed derivation of the Joule-Thomson (Kelvin) coefficient.”

Read it several times. It doesn’t derive a value for alpha or (∂T/∂P)_H from more fundamental parameters arising from intermolecular forces and/or molecular surface area.

The Joule–Thomson effect depends crucially on the small deviation from an ideal gas, given by intermolecular forces, specifically both the attractive and the repulsive parts of the Van der Waals force as approximated (for example) by the Lennard-Jones potential.

Water is liquid at room temperature because of strong hydrogen bonding, not the Van der Waals force. Absent that, a molecule with a molecular weight of 18 would have a very low boiling point. Neon, for example, has an atomic weight of 20 and a boiling point of 27K. Methane, MW 16, has a boiling point of 111K. Talking about molecules ‘touching’ isn’t helpful either.

Calculation of a Joule-Thomson coefficient from first principles would be a complicated exercise in quantum mechanics. One can derive it from models of a non-ideal gas such as the Van der Waals or Redlich-Kwong, but it’s not trivial and not exact. Still, one can demonstrate that for a non-ideal gas at very low temperature, approaching 0K, the J-T coefficient must be positive and at very high temperature it must be negative.
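For reference, here is the standard textbook identity behind such a derivation (my addition, standard thermodynamics rather than anything from the comment above). The coefficient is defined at constant enthalpy:

$$\mu_{JT} \equiv \left(\frac{\partial T}{\partial P}\right)_H = \frac{1}{C_p}\left[T\left(\frac{\partial V}{\partial T}\right)_P - V\right]$$

For an ideal gas the bracket vanishes, so \mu_{JT} = 0. In the low-pressure limit of a Van der Waals gas,

$$\mu_{JT} \approx \frac{1}{C_p}\left(\frac{2a}{RT} - b\right), \qquad T_{inv} = \frac{2a}{Rb} = \frac{27}{4}T_c$$

which is positive (cooling on expansion) at low temperature where the attractive term 2a/RT dominates, and negative (heating) above the inversion temperature.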

To take this even further off topic: Does anyone here have experience in modifying Wikipedia articles? I ask because the “Physical mechanism” part of the Wikipedia article on the Joule-Thomson effect is clearly wrong (or so confused as to be the same thing). I say this because it talks about a process as constant energy, whereas the J-T process is at constant enthalpy. Also, it says nothing about the effect of repulsive forces, which are (I think) critical to the inversion. But I don’t want to just jump in and start editing, because I am not an expert on this specific topic. Any advice?

The physical properties of hydrogen are not off topic, or at least not very far off, because hydrogen is proposed as an energy storage and transport medium by Jacobson and others as a factor in demand management. So things like volumetric energy density and safety are issues. The fact that a hydrogen leak doesn’t always self-ignite makes it even less safe, IMO.

As for Wikipedia, as I remember, anyone can edit an article. The editing history, sort of, is included in the talk page for that article that can be accessed from the tab at the top of the page.

Thanks for taking the time to share your expertise, DeWitt. Looking more closely, I see that the minimum value for the J-T coefficient is -0.05 K/bar, so a change of 100 bar would be -5 K. As I noted above, gas molecules at 100 bar average about one molecular diameter apart. The temperature change from 2 bar to 1 bar is fairly small, and compared with the amount of energy in other intermolecular interactions (heats of vaporization and fusion), it is fairly small too.

FWIW, both hydrogen “bonds” and van der Waals forces are fundamentally electrostatic interactions; one with permanent dipoles and the other transient. (A carbonyl oxygen accepting a linear hydrogen bond isn’t sp hybridized, nor is one receiving a bent hydrogen bond sp2. As best I can tell, there is no bonding in the orbital sense, just electrostatics – until you get to ionic species like bifluoride. However, this is a controversial subject.) If you’ve ever looked at drug interactions with their binding sites, you might value the idea that molecules touch and the energetics of that interaction. (:)) And if you spend a few minutes looking at the simulation, you might find some “touching” between gas molecules and perhaps even a clearer understanding of the water vapor continuum.

I did some more looking around. The temperature rise from the Joule-Thomson effect for hydrogen is not sufficient for autoignition at any reasonable pressure. Spontaneous ignition of a hydrogen leak, which doesn’t always occur as I had thought before, is thought to be due, in most cases, to an electrical discharge, as the energy to ignite an explosive mixture of hydrogen and air is very low, according to this reference, 0.017 mJ for a stoichiometric mixture of hydrogen and oxygen. I wouldn’t keep a hydrogen powered car in a garage.

While the ball and stick (or spring) or snap-together molecular models have their uses, they are only crude approximations to what’s actually happening at the quantum level. Atoms aren’t tiny billiard balls even though that can be a decent approximation. You could say that all the forces at the molecular level are electrostatic in origin. Hydrogen bonding in water, however, is many orders of magnitude stronger than Van der Waals forces. Protons exchange between water molecules constantly. That makes hydrogen bonding fundamentally different.

Frank – A negative coefficient corresponds to heating, so although the Joule-Thomson effect cannot produce more than a few degrees of heating, it can produce cooling of several tens of degrees.

DeWitt wrote: “That makes hydrogen bonding fundamentally different.” That is what I was taught 40 years ago. But recent research supports Frank’s position: There is nothing special about a hydrogen bond; it is just a strong dipole-dipole interaction.

Mike M: It is obvious that you know more about the thermodynamic aspects of this phenomenon than I do. Since you asked for comments, I’ll try to explain what is still confusing and what might be clearer.

The section you wrote comparing isothermal and iso-enthalpic expansions would be clearer if you specified reversible/irreversible and ideal/non-ideal (or both when you mean both).

You added: “The Joule–Thomson effect depends crucially on the small deviation from an ideal gas given by intermolecular forces, specifically both the attractive and the repulsive parts of the Van der Waals force as approximated (for example) by the Lennard-Jones potential.”

I might expand on this section (and move it?): In an ideal gas, there are no intermolecular (Van der Waals) forces. The Joule–Thomson effect in real gases depends crucially on the deviation produced by intermolecular forces, specifically both the attractive and the repulsive parts of the Van der Waals force (for example, as approximated by the Lennard-Jones potential). As a dilute real gas is compressed, work (F*ds) is performed both against or with these intermolecular forces, subtracting from or adding to the internal energy (U) of the gas. The work done against and with these short-range molecular forces is analogous to the work done moving a piston in a cylinder against or with a force equal to pressure times piston area. At low temperature, work done with the weak attractive force upon compression dominates non-ideal behavior; but at higher temperature, the work done against the stronger repulsive forces is more important. As a consequence, PV for a real gas can be greater or less than nRT. This non-ideal behavior usually decreases or increases the temperature change associated with the adiabatic (no heat exchanged) expansion of real gases. The change in temperature experienced by the gas during expansion depends not only on the initial and final pressure, but also on the manner in which the expansion is carried out.

These last few sentences were copied from the “description” section and the whole paragraph could be included there. The description section continues with my thoughts added in [brackets].

“If the expansion process is reversible, meaning that the gas is in thermodynamic equilibrium at all times, it is called an isentropic expansion. In this scenario, the gas does positive work during the expansion, and its temperature decreases. [For an ideal gas, the quantity p^(1-gamma)*T^gamma remains constant during a reversible adiabatic expansion, but the intermolecular forces in real gases can increase or decrease the expected temperature change.]

In an irreversible free expansion, on the other hand, the gas does no work and absorbs no heat, so the internal energy (U) is conserved. [An irreversible free expansion of an ideal gas is also iso-enthalpic, because P1V1 = P2V2 and H = U + PV.] Expanded in this manner, the temperature of an ideal gas remains constant, [but the temperature of a real gas may either increase or decrease. The free expansion of a non-ideal gas is not iso-enthalpic because P1V1 is usually not equal to P2V2.]

The method of expansion discussed in this article, in which a gas or liquid at pressure P1 flows into a region of lower pressure P2 via a valve or porous plug under steady state conditions and without change in kinetic energy, is called the Joule–Thomson process. During this process, enthalpy [U+PV] remains unchanged [whether the gas is ideal or non-ideal] (see a proof below).” [A J-T expansion of an ideal gas results in no change in temperature. However, real gases usually warm or cool because of the work done against and with intermolecular forces as the spacing between molecules increases.]

Although I think I understand the reasons for non-ideal behavior of gases, the last paragraph didn’t initially give me confidence in my understanding of a Joule-Thomson expansion. As I wrote the questions below, I modified the above discussion of the three kinds of expansions as the situation seemed to partially clarify itself. Those modifications could be wrong.

1b) Is any work being done in a J-T expansion? In a free expansion, the gas molecules are flowing against zero pressure (at least at first). In a J-T expansion, the flow is moving against P2. Is the gas doing work on itself?

2a) Are both an adiabatic free expansion and a J-T expansion of an ideal gas iso-enthalpic? [P1V1 = P2V2 for the adiabatic free expansion, so I think so. Elsewhere I’m told U is constant. I added this to the paragraph on free expansion, but I may be confused here.]

2b) Is adiabatic free expansion of a non-ideal gas iso-enthalpic? [Not in general, since P1V1 isn’t required to equal P2V2. Added this too.]

3) How does a J-T differ from a free expansion, if both free expansion and Joule-Thomson expansion of an ideal gas result in no temperature change and both are iso-enthalpic? [In free expansion of a gas, U and PV are both constant for ideal gases, but U remains constant and PV generally changes for a non-ideal gas. In a J-T expansion only the sum U + PV is constant, but the sum remains constant for both ideal and non-ideal gases. Since the PV term for real gases includes the non-ideal effects produced by intermolecular forces and H is constant … I think I should be able to continue with something meaningful about U and T.]

4) After a throttle, has the kinetic energy of bulk flow changed? [I think so. With lower density in the P2 region, doesn’t the gas need to move faster?] Is the Bernoulli effect relevant? [I need to review the origin of the Bernoulli effect.]

Thanks for the thoughtful comments. It will take me some time to think through them all and decide how the article might be modified.

There is one big idea that addresses many of your points. If I want to describe the initial and final points of a gas expansion, all I need to do is to define the thermodynamic “state” of the gas at each point; the path taken does not matter. For a system with a fixed amount of one component and one phase, I must specify two state functions to define the state. For a gas expansion one of those will be either pressure or volume, at both points. The other can be pretty much anything else; but it is often convenient to specify something (T, S, U, H, etc.) that is held constant. All that is really required is for that something to be the same at the beginning and end.

Some things do depend on the path taken. The most important examples are heat, work, and the change in entropy of the universe. And sometimes we specify a path for the convenience of computing something from the heat and work.

So if I want the temperature change in a constant enthalpy expansion of an ideal gas from P1 to P2, the path is irrelevant. Or more to the point, the path can be anything I choose for the convenience of the calculation. It is likely (actually, certain) that things like S and U will change and that q and/or w will be non-zero. But computing those things is a separate problem.

This is a real problem with Wikipedia articles that deal with scientific concepts and theories rather than facts. Everything is connected to everything else, so sequence is important to understanding. I am starting to wonder what I have got myself into.

Again thanks for your comments on the Wikipedia article. After considering them, I realized where I went wrong. I had written something suitable for a thermodynamics textbook, not an encyclopedia. After several false starts, I think I figured out the right approach and have rewritten my revisions.

It did say in the intro that it is an inherently irreversible process. I now repeated that in the description.

“1b) Is any work being done in a J-T expansion? Is the gas doing work on itself?”
Yes, work is done on the high pressure side (to reduce its volume) and by the low pressure side (to increase volume). I believe I have made this clear in the revision. The total can be positive or negative. Work must be done against an external force.

“2a) Are both an adiabatic free expansion and a J-T expansion of an ideal gas iso-enthalpic?”

Yes.

“2b) Is adiabatic free expansion of a non-ideal gas iso-enthalpic? [Not in general, since P1V1 isn’t required to equal P2V2. Added this too.]”

You are correct.

“3) How does a J-T differ from a free expansion, if both free expansion and Joule-Thomson expansion of an ideal gas result in no temperature change and both are iso-enthalpic?”

For an ideal gas, the change in state is the same, but brought about via different paths. For a real gas, net work is done in a J-T expansion.

“4) After a throttle, has the kinetic energy of bulk flow changed? [I think so. With lower density in the P2 region, doesn’t the gas need to move faster?]”

This can be important in flow problems, but in the J-T experiment the flow is small enough that bulk kinetic energy can be neglected.

Collision Induced Absorption, of which the water vapor continuum may (or may not) be an example, still does not involve touching in any real sense. Even using the term collision is suspect. But since you can only vaguely approximate quantum mechanics using a language other than mathematics, perhaps touching and collision are reasonable approximations.

Speaking of molecules ‘touching’, question:

Is the critical temperature of a gas, above which a liquid phase doesn’t exist at any pressure, related to the Joule-Thomson effect?

You wrote “Is the critical temperature of a gas, above which a liquid phase doesn’t exist at any pressure, related to the Joule-Thomson effect?”

They are related in the sense of the “law of corresponding states” which is the idea that if you scale T and P to the critical values, you get broadly similar behavior for most gases. So for a van der Waals gas, if the critical temperature is Tc then the Boyle temperature (at which the gas is most nearly ideal) is (27/8)*Tc and the Joule-Thomson inversion temperature is (27/4)*Tc. But rules like these are crude. For example, experimental values for argon are 151 K for the critical T, 412 K for the Boyle T, and 723 K for the inversion T.
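A quick sketch of just how crude, using the argon numbers quoted above:

```python
# Van der Waals corresponding-states estimates vs experiment for argon.
Tc = 151.0                   # K, critical temperature (experimental)
boyle_vdw = 27 / 8 * Tc      # vdW prediction for the Boyle temperature
inversion_vdw = 27 / 4 * Tc  # vdW prediction for the inversion temperature

print(f"Boyle T:     vdW {boyle_vdw:.0f} K vs 412 K measured")      # ~510 K
print(f"Inversion T: vdW {inversion_vdw:.0f} K vs 723 K measured")  # ~1019 K
```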

DeWitt wrote: “You could say that all the forces at the molecular level are electrostatic in origin. Hydrogen bonding in water, however, is many orders of magnitude stronger than Van der Waals forces. Protons exchange between water molecules constantly. That makes hydrogen bonding fundamentally different.”

Yes, all forces are electrostatic. You can place four hydrogens around a carbon and ask where the electrons will be found. Shared pairs of electrons and hybridized molecular orbitals (sp3) work well for many systems. However, when you consider the linear hydrogen bond in an alpha helix and the bent hydrogen bonds in DNA base pairs, you need to change the hybridization on the carbonyl oxygen from sp to sp2 using molecular orbitals. Then what do you do with hydrogen bonds at 150 degrees, many of which are formed in proteins? Is the oxygen in water receiving a hydrogen bond planar or pyramidal with respect to the three surrounding oxygens? Both problems are addressed better by using purely electrostatic considerations. Bifluoride (F2H-) is a different story. So is the difference between base pairs where all of the donors are on one heterocycle and the acceptors on the other, vs pairs where the acceptors and donors alternate.

To estimate the strength of a hydrogen “bond”, calculate partial charges on all atoms in the molecules involved and use electrostatics to calculate the attractive force (or dE for bringing the molecules together from far apart). More sophisticated approaches don’t add much.

DeWitt wrote: “Collision Induced Absorption, of which the water vapor continuum may (or may not) be an example, still does not involve touching in any real sense. Even using the term collision is suspect. But since you can only vaguely approximate quantum mechanics using a language other than mathematics, perhaps touching and collision are reasonable approximations.”

You are often quicker and clearer thinking than I am about some subjects, but it does appear as if you haven’t watched any simulations at:

Consider taking a few minutes and then tell me why I’m wrong to think of gas molecules as “touching” or held together briefly as dimers by Van der Waals forces (this simulation) or hydrogen bonds (water vapor in the atmosphere). I usually reduce the number of molecules (atoms) so that only about 10% of the volume is filled (about 150 molecules). If you want your liquid phase not to float off, turn on 0.01 of gravity. Click start. Click “Faster” five times. The energy added goes into evaporation and the temperature doesn’t rise (which would increase the velocity of every atom). About 15 clicks on Faster gets rid of most of the liquid phase. (The gas pressure rises, so don’t expect a clear boiling point.) Between 15 and 20 clicks of Faster the temperature begins to rise rapidly to about 1.0 (which is 140 K). You are now well above the boiling point (70 K). Once the system is at equilibrium, slow down the simulation (time step and steps per frame) and watch the slower moving molecules spend time as dimers and multimers – non-ideal behavior. The spectra and thermodynamics of these dimers will differ from monomers. Is this relevant to collision-induced absorption and the water vapor continuum? I think so, but that doesn’t prove anything.

Reasons to be suspicious of what I see: 1) It’s a 2D simulation of a 3D world. 2) Filling 10% of the volume with liquid produces a gas with a pressure of about 100 atm after evaporation in the real world. One atmosphere is much emptier.

With water vapor, it isn’t just dimers and polymers. D2O and H2O, even in the vapor phase, are in equilibrium with HDO. When the bond breaks between two molecules of water, sometimes it’s the covalent end of the bond that breaks. Autoprotolysis is an example of that in water.

The article is about a retracted paper on bio-gas production. The link to the original paper is in the article.

To a large degree SOD seems to be about reading papers and analyzing them so I thought it would be of interest.

When I read the original paper, giving it a cursory glance, nothing in it seemed amiss to me. It didn’t jump out as a bunch of hype. It really took someone who was knowledgeable in the field with access to the original data to track down the error.

“The attractive interactions act over a much longer range than the repulsive interactions. Since distances between molecules are large compared to the size of the molecules, the energy of a gas is influenced mainly by the attractive part of the potential. However, the repulsive part of the potential, which keeps two molecules from occupying the same space, can have a significant impact on the volume of the gas.”

The weaker attractive interaction acts over a much longer range than the stronger repulsive interactions and dominates non-ideal behavior at low pressure and temperature. The opposite is true at high temperature, explaining why the J-T coefficient decreases and becomes negative with increasing temperature. (See Fig. 1.)

Setting aside primitive models of molecules as hard spheres, since atoms and molecules are mostly empty space, it isn’t clear why two molecules can’t “occupy” the same space. It just takes a lot of work/energy to squeeze them together. When this happens, I suspect the gas crosses the line into a supercritical fluid.

The Bernoulli effect is derived by converting PV into the kinetic energy of bulk flow, and both terms are part of the enthalpy of the gas. It is unlikely to contribute significantly to non-ideal behavior in a J-T expansion.

It seems that the discussion is about curtailing demand on a few days a year at best. The first question is whether there is a significant overall curtailing of emissions. Questionable also is the consumer model, which assumes elasticity in demand. For energy in general, that is patently not the case; where average costs – i.e. regular plus peak day – don’t increase on the swings and roundabouts, perhaps more so. Think hospitals, office buildings and most factories. We lost electricity for a week after a cyclone last year. We obviously can go back to practices of the past – cold showers, sitting on deck chairs under the house because that’s where it’s cool, sleeping on the veranda, etc. My neighbor went twice to the hospital with her baby in heat stress – and ultimately drove south to where there was electricity.

Curtailing average costs may be a different story: avoiding cost-plus contracting for network expansion, imposing shared network costs on those with solar installations – or excluding them from the network – and having more distributed power sources to minimise network costs.

‘We think there’s about $11b of installed electricity infrastructure that’s used for about eight days a year, …’
Leaving aside the accuracy of the numbers, there is a reason why you may have rarely used infrastructure and that is resilience. If part of the system fails you may need to re-route power using other parts of the grid. That extra capacity may only be used rarely but without it the system becomes far more unreliable. That’s a feature not a bug.