The Strange Case of Stratospheric Water Vapor, Non-linearities and Groceries

Redefining Physics

Dexter Wright re-defined the radiative transfer equations in his American Thinker article “Global Warming on Trial” with these immortal words:

Clearly, H2O absorbs more than ten times the amount of energy in the IR spectrum as does CO2. Furthermore, H2O is more than one hundred times more abundant in the atmosphere than CO2. The conclusion is that H2O is more than one thousand times as potent a greenhouse gas (GHG) as CO2. With such immutable facts facing the EPA, how will they explain their stance that CO2 is a greater danger to the public than water vapor?

But in wondering why they hadn’t, it did occur to me that non-linearity is something that most people struggle with. Or don’t struggle with because they’ve never heard of it.

I think that the non-linear world we live in is not really understood because of the grocery factor.

(And it would be impolite of me to point out that Dexter didn’t know how to interpret the transmittance graphs he showed).

Groceries and Linearities

Dexter is in the supermarket. His car has broken down so he walked a mile to get here. He has collected a few groceries but his main buy is a lot of potatoes. He has a zucchini in his hand. He picks up a potato in the other hand and it weighs three times as much. He needs 100 potatoes – big cooking plan ahead – clearly 100 potatoes will weigh 300 times as much as one zucchini.

Carrying them home will be impossible, unless the shopping trolley can help him negotiate the trip.

Perhaps this is how most people are thinking of atmospheric physics.

In a book on Non-linear Differential Equations the author commented (my memory of what he stated):

The term “non-linear differential equations” is a strange one. In fact, books about linear differential equations should be called “linear differential equations” and books about everything else should just be called “differential equations” – after all, this subject describes almost all of the real-world problems.

What is the author talking about?

Perhaps I can dive into some simple maths to explain. I usually try and avoid maths, knowing that it isn’t a crowd-puller. Stay with me.

If we had the weight of a zucchini = Mz, and the weight of a potato = Mp, then the weight of our shopping expedition would be:

Weight = Mz x 1 + Mp x 100, or more generally

Weight = Mz Nz + Mp Np , where Nz = number of zucchinis and Np = number of potatoes. (Maths convention is that AB means the same as AxB to make it easier to read equations)

Not so hard? This is a linear problem. If you change the weight (or number) of potatoes the change in total is easy to calculate because we can ignore the number and weight of zucchinis to calculate the change.

Suppose instead the equation was:

Weight = (Mz Nz) Np² + (Mp Np) Nz³

What happens when we halve the number of potatoes? It’s much harder to work out because the term on the left depends on the number of zucchinis and the number of potatoes (squared) and the term on the right depends on the number of potatoes and the number of zucchinis (cubed).

So the final result from a change in one variable could not be calculated without knowing the actual values of the other variables.
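To make this concrete, here is a minimal sketch in Python, using the article’s ratio of a potato weighing three times a zucchini (the unit weights are my illustrative choice, not anything from the article):

```python
# Linear vs non-linear versions of the grocery "weight" equations above.
# Weights are in "zucchini units": Mz = 1, Mp = 3 (a potato weighs 3x a zucchini).

def weight_linear(Mz, Nz, Mp, Np):
    # Weight = Mz Nz + Mp Np: each term depends on one variable only
    return Mz * Nz + Mp * Np

def weight_nonlinear(Mz, Nz, Mp, Np):
    # Weight = (Mz Nz) Np^2 + (Mp Np) Nz^3: every term mixes both variables
    return (Mz * Nz) * Np**2 + (Mp * Np) * Nz**3

Mz, Mp = 1, 3

# Linear case: halving the potatoes (100 -> 50) changes the total by the
# same amount whatever the number of zucchinis.
for Nz in (1, 10):
    print(Nz, weight_linear(Mz, Nz, Mp, 100) - weight_linear(Mz, Nz, Mp, 50))
    # prints 150 for both values of Nz

# Non-linear case: the same halving produces a change that depends
# strongly on the number of zucchinis.
for Nz in (1, 10):
    print(Nz, weight_nonlinear(Mz, Nz, Mp, 100) - weight_nonlinear(Mz, Nz, Mp, 50))
    # prints 7650 for Nz=1 and 225000 for Nz=10
```

In the linear equation the zucchini term cancels out of the difference; in the non-linear one it dominates it.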

This is most real-world science/engineering problems in a nutshell. When we have a linear equation – like groceries but not engineering problems – we can nicely separate it into multiple parts and consider each one in turn. When we have a non-linear equation – real world engineering and not like groceries – we can’t do this.

It’s the grocery fallacy. Science and engineering does not usually work like groceries.

Stratospheric Water Vapor

In many blogs, the role of water vapor in the atmosphere (usually the troposphere) is “promoted” and CO2 is “diminished” because of the grocery effect. Doing the radiative transfer equations in your head is pretty difficult, no one can disagree. But that doesn’t mean we can just randomly multiply two numbers together and claim the result is reality.

A recent (2010) paper, Contributions of Stratospheric Water Vapor to Decadal Changes in the Rate of Global Warming by Solomon and her co-workers has already attracted quite a bit of attention.

This is mainly because they attribute a significant proportion of late 20th century warming to increased stratospheric water vapor, and the last decade of cooling/warming/pause in warming/statistically significant “stuff” (delete according to preferences as appropriate) to reduced water vapor in the stratosphere.

Firstly, take a look at the basic physics. The graph on the left is the effect of 1ppmv change in water vapor in 1km “layers” at different altitudes (from solving the radiative transfer equations).

Notice the very non-linear effect of “radiative forcing” of stratospheric water vapor vs height. This is a tiny 1ppmv of water vapor. Higher up in the stratosphere, 1 ppmv change doesn’t have much effect, but in the lower stratosphere it does have a significant effect. Very non-grocery-like behavior.

Unfortunately, historical stratospheric water vapor measurements are very limited: prior to 1990 they come from just one site, above Boulder, Colorado. After 1990, and especially from the mid-1990s, much better quality satellite data is available. Here is the Boulder data with the later satellite data for that latitude “grafted on”:

Stratospheric water vapor measured at 40°N, 1980-2010, Solomon et al (2010)

And the global changes (post-2000 minus pre-2000) from satellite data:

It looks as though the major (recent) changes have occurred in the most sensitive region – the lower stratosphere.

The paper comments:

Because of a lack of global data, we have considered only the stratospheric changes, but if the drop in water vapor after 2000 were to extend downward by 1 km, Fig. 2 shows that this would significantly increase its effect on surface climate.

The calculations done by Solomon compare the increases in radiative forcing from changes in CO2 with the stratospheric water vapor changes.

Increases in CO2 have caused a radiative forcing change of:

From 1980-1996, about +0.36 W/m2

From 1996-2005, about +0.26 W/m2

Changes in stratospheric water vapor have caused a radiative forcing change of:

From 1980-1996, between 0 and +0.24 W/m2

From 1996-2005, about -0.10 W/m2

The range in the 1980-1996 number for stratospheric water vapor reflects the lack of available data. The upper end of the range comes from the assumption that the changes recorded at Boulder are reflected globally. The lower end that there has been no global change.
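The arithmetic behind calling this “very significant” is easy to check; this is just the ratio of the two forcing changes, using the figures quoted above:

```python
# Radiative forcing changes quoted from Solomon et al (2010), in W/m^2
co2_1980_1996 = 0.36
co2_1996_2005 = 0.26
h2o_1980_1996_max = 0.24   # upper end: Boulder changes assumed to be global
h2o_1996_2005 = -0.10

# Stratospheric water vapor change as a fraction of the CO2 forcing change
print(round(h2o_1980_1996_max / co2_1980_1996, 2))  # 0.67 -> up to ~70% on top of CO2
print(round(h2o_1996_2005 / co2_1996_2005, 2))      # -0.38 -> offsetting ~40% of CO2
```

These ratios are where the “70% on top / 40% offset” figures in the conclusion come from.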

What Causes Stratospheric Water Vapor Changes?

There are two mechanisms:

methane oxidation

transport of water vapor across the tropopause (i.e., from the troposphere into the stratosphere)

Methane oxidation has a small contribution near the tropopause – the area of greatest effect – and the paper comments that studies which only consider this effect have, therefore, found a smaller radiative forcing than this new study.

Water transport across the tropopause – the coldest point in the lower atmosphere – has of course been studied but is not well-understood.

Is this All New?

Is this effect something just discovered in 2010?

From Stratospheric water vapour changes as a possible contributor to observed stratospheric cooling by Forster and Shine (1999):

This study shows how increases in stratospheric water vapour, inferred from available observations, may be capable of causing as much of the observed cooling as ozone loss does; as the reasons for the stratospheric water vapour increase are neither fully understood nor well characterized, it shows that it remains uncertain whether the cooling of the lower stratosphere can yet be fully attributable to human influences. In addition, the changes in stratospheric water vapour may have contributed, since 1980, a radiative forcing which enhances that due to carbon dioxide alone by 40%.

(Emphasis added)

From Radiative Forcing due to Trends in Stratospheric Water Vapour (2001):

A positive trend in stratospheric H2O was first observed in radiosonde data [Oltmans and Hofmann, 1995] and subsequently in Halogen Occultation Experiment (HALOE) data [Nedoluha et. al., 1998; Evans et. al., 1998; Randel et. al., 1999]. The magnitude of the trend is such that it cannot all be accounted for by the oxidation of methane in the stratosphere which also show increasing trends due to increased emissions in the troposphere. This leads to the hypothesis that the remaining increase in stratospheric H2O must originate from increased injection of tropospheric H2O across the tropical tropopause.

And back in 1967, Manabe and Wetherald said:

It should be useful to evaluate the effect of the variation of stratospheric water vapor upon the thermal equilibrium of the atmosphere, with a given distribution of relative humidity... The larger the stratospheric mixing ratio, the warmer is the tropospheric temperature... The larger the water vapor mixing ratio in the stratosphere, the colder is the stratospheric temperature...

Conclusion

The potential role of stratospheric water vapor on climate is not a new understanding – but finally there are some observations which can be used to calculate the effect on the radiative balance in the climate.

The paper does illustrate the non-linear effect of various climate mechanisms. It shows that small, almost unnoticed, influencers can have a large effect on climate.

And it demonstrates that important climate mechanisms are still not understood. The paper comments:

It is therefore not clear whether the stratospheric water vapor changes represent a feedback to global average climate change or a source of decadal variability. Current global climate models suggest that the stratospheric water vapor feedback to global warming due to carbon dioxide increases is weak, but these models do not fully resolve the tropopause or the cold point, nor do they completely represent the QBO, deep convective transport and its linkages to SSTs, or the impact of aerosol heating on water input to the stratosphere. This work highlights the importance of using observations to evaluate the effect of stratospheric water vapor on decadal rates of warming, and it also illuminates the need for further observations and a closer examination of the representation of stratospheric water vapor changes in climate models aimed at interpreting decadal changes and for future projections.

Given that the calculated changes add up to 70% on top of CO2 radiative forcing in an earlier period and then offset CO2 radiative forcing by 40% in a later period, this is a very significant effect.

I expect that uncovering the mechanisms behind stratospheric water vapor change is an area of focus for the climate science community.

References

Contributions of Stratospheric Water Vapor to Decadal Changes in the Rate of Global Warming, by Solomon et al, Science (2010)

Thermal Equilibrium of the Atmosphere with a Given Distribution of Relative Humidity, by Manabe and Wetherald, Journal of Atmospheric Sciences (1967)

Stratospheric water vapour changes as a possible contributor to observed stratospheric cooling, by Forster and Shine, Geophysical Research Letters (1999)

Radiative Forcing due to Trends in Stratospheric Water Vapour, by Smith et al, Geophysical Research Letters (2001)


75 Responses

Consider the possibility that it is not the CO2 and positive feedback that cause the increased water vapor and thus warming of the surface and cooling of the upper troposphere, but rather the warming due to other causes (ocean long term cycles, cloud variation from cosmic rays, etc.) that causes both the near surface warming and upper level cooling, and this may cause the water vapor concentration to vary with altitude. Cause and effect have not been established for the source of water vapor change vs temperature change. The reversal of lower stratosphere water vapor level occurred for about the last decade even though the CO2 has continued to rise strongly. In addition, upper troposphere temperature only dropped immediately following the heat pulse from Pinatubo in the early 1990s, and has not dropped any more since. How much more time is needed to be convincing?

OMG – those citations!
I don’t believe I’ve ever seen such extensive use of the subjunctive tense in any writing outside of – well, [ I snip myself ]
The subjunctive tense is created for the purpose of discussing things that are not. It is not even conditional. It is made for fantasy – hence it has largely vanished as a distinct verb form in English and is conspicuously labored to perform. It is also conspicuously devoid of legitimate declaration of fact vis a vis causation in the form of an argument that connects various elements in some logical pattern.
That much effort to avoid a simple declarative statement is a well remarked hallmark of unreason in masquerade demanding attention it has no right to claim.

Here’s a joke that it immediately brought to mind:
How do you get out of a locked room with only a mirror and a table?
You look into the mirror and see what you saw.
You take the saw and cut the table in half.
Two halves makes a whole and you crawl out.

The papers don’t show that the earth should do anything, sir.
They show that somebody WISHES it should, never mind the fact that it did or didn’t. They show that a man has spent a lot of time trying to create an explanation for a set of a priori assumptions and has completely failed.
It is pretty much straight up argumentum ad ignorantiam.

It’s been around for thousands of years, of course (who do you think put all this chaos in the universe- vast impersonal forces or something!!??!! It’s got to be Goddess! How about a latter day climate chiliast?)

Maybe you could edit that and remove all the subjunctive nonsense and boil it down to a simple sentence? Like – ‘meh, dunno but I tried to play ball for the team’.

40% of something imaginary is not just a bigger pile – there’s nonlinear for you. (Emphasis added)

I didn’t like those quotations at all. People who come by with WatchTower talk like that.

So, this “predicted” drop in stratospheric temperature due to CO2 is an independent effect to the one above? (Still got issues “getting” that one.)

Yes – in brief.

Less in brief – Each trace gas has a radiative effect. In the case of the stratosphere: ozone, CO2 and water vapor are all important and affect both the tropospheric temperatures and stratospheric temperatures.

Remember that this is “with all other things being equal” – especially important when we consider the surface and tropospheric temperatures. Less happens in the stratosphere so it is more predictable – countered by the fact that until quite recently there was a lot less measurement of the stratospheric trace gases (and temperatures).

Not wanting to make my comment go on too long, but it is important to understand that the “radiative forcing” is a real and verifiable value. The physics of calculating radiative effects are very solid and we can point a measuring instrument in the right direction and get a value which matches theory.

Temperature effects on the other hand rely on understanding radiative effects PLUS everything else in climate.

In the article on Stratospheric Cooling you can see model results (radiative transfer equations solved) for all 3 trace gases and then the combination.

That article is very much about the stratospheric effects. This article is about the surface and tropospheric effects.

Great Blog. Finished slogging through the CO2 tonight, then saw this last post. You have a very good conceptual approach to laying out each issue. It used to be common knowledge that “greenhouse” warming heated the atmosphere.

I don’t yet get the stratospheric lapse rate and cooling cause yet. Will have to slog through that post more.

WRT 2000-2009 decline H2O in the stratosphere, perhaps this is where the “missing” heat went (condensation?)

The 0.5 ppmv H2O increase from 1980 to 2000 is within the time frame of two lower stratosphere warming periods (MSU TLS) associated with major volcanic events.

“The term “non-linear differential equations” is a strange one. In fact, books about linear differential equations should be called “linear differential equations” and books about everything else should just be called “differential equations” – after all, this subject describes almost all of the real-world problems”

Another real world reality – Almost 0% of equations describing the real world are solvable analytically. I realized this about half way through a numerical analysis class way back in the mid 70s after having spent several semesters learning how to analytically solve differential and partial differential equations.

Yes, non-linearity is tricky. I have a pan of water on the stove, the gas set to ‘1’. The temperature is 80C. I turn the gas up to ‘2’ – the temperature goes up to 160C, right? Oh, hang on a sec: what if we’re measuring in Fahrenheit? Or Kelvin? But when we look, it seems to have gone up to 100C. I turn the gas up to ‘3’. 100C. The gas goes up to ‘4’ and things are looking very exciting in there. The boiling is vigorous. The temperature is… 100C.

When people speak of so many W/m^2 here and so W/m^2 many there, they expect that you can add them all up and that the total will tell you what the overall effect on temperature is going to be. If CO2 adds forcing, that difference will be added (possibly with a constant multiplier) to the total effect of all the forcings.

I have to say, I thought you kept switching direction a lot. At first I thought you were going to talk about what proportion of the greenhouse effect was caused by CO2 and H2O respectively – a question whose complications are well worth bringing out. Then you seem to switch to non-linearity. Then to stratospheric water vapour changes. And then conclude that we don’t really understand why it has changed or what effect it will have, so we shouldn’t jump to hasty conclusions. Or maybe that we are certain it will increase warming by 40%. I wasn’t sure.

There’s a much simpler way of explaining why you can’t easily divide the greenhouse effect up amongst the component gases. Imagine we have blinds that block half the light. We put another set up in front of the first that again blocks half the light. And a third. So now we have, under the linear picture, three halves of the light blocked.

So what proportion of the blocking does the third blind really contribute? A half, because that’s what it would block on its own? A third, because the three blinds are identical? Or an eighth, because that’s how much adding the third blind to the other two reduced the light by?

When the blocking is unequal, and the blinds overlap across part of their range, it all gets even more difficult. And because of the non-linearity, the proportion of the effect caused by CO2 is not necessarily the same number as the proportion of the change contributed by a change in CO2.
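The blinds thought-experiment is easy to put into code; the point is that transmissions multiply, so the blocked fractions do not add:

```python
# n identical blinds, each transmitting half the incident light
def blocked(n, transmission=0.5):
    return 1 - transmission ** n

print(blocked(1))  # 0.5
print(blocked(2))  # 0.75
print(blocked(3))  # 0.875 -- not "three halves" of the light

# Marginal contribution of the third blind: an eighth, as described above
print(blocked(3) - blocked(2))  # 0.125
```

All three answers in the comment (a half, a third, an eighth) are “true” for the third blind; they are just answers to different questions.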

But having introduced the question, I didn’t really see it answered by the rest of the post. Never mind. A lot of interesting stuff, nevertheless.

This is me pointing out that I am assuming something not quite explicit in the Ramanathan and Coakley paper.

Later papers, like Trenberth & Kiehl, and everyone else using the RTE find water vapor has around 2.5x the effect of CO2. This is done by calculating the value of radiative forcing with all the GHGs in place, then removing each in turn and recalculating the number. You can see it described in reasonable detail in the 1997 Kiehl & Trenberth paper (link in Part Five ).

Sure. But it’s a matter of understanding how/why. And what the numbers mean.

The point about discussing the complexity of the issue is to say that there are several numbers that could validly be taken to be “the” contribution of CO2.

Are you talking about absorption, radiative forcing, or temperature change? Because all three are different, and the relationships non-linear. A is 50% reduction from B and B is a 100% increase over A, is that a 50% change or a 100% change? And so on.

What I initially expected you to say was that Dexter was wrong because the question is meaningless, unless you are very specific about what question you are asking and why. And that this is because of the nonlinearity. What I didn’t expect you to say was that there was in fact a definite answer and this was 2.5x.

From a certain point of view, yes. From other points of view, not quite.

It doesn’t really matter to the AGW question, but it’s good to understand anyway.

So given that this Dexter Wright was talking about energy absorbed, have you really answered the question?

He says quantity x is one in a thousand, you say no, and to prove it show quantity y is 25%. (Or rather, cite a paper that states it without showing any working.)

Now it may well be that quantity x is around 25% as well, and Wright has got it Wrong.
And I rather suspect that the amount of energy absorbed is the wrong number to look at anyway, because it depends a lot on where it is absorbed. But whatever the answer might be – it hasn’t been clearly shown, and readers could be more confused than ever over exactly what all these different percentages refer to.

With my pan on the stove, I have changed the gas setting from ‘1’ up to ‘4’; you could call it four times as much thermal forcing, a 300% increase, the removal of a 75% decrease from ‘4’ to ‘1’, a 20C temperature change, which is 25% increase in Celsius or 5% in Kelvin, or no doubt many other ways of looking at it.

“The right equations to solve are the radiative transfer equations – not multiplying 2 numbers together. And solving the radiative transfer equations comes up with a very different result.”

I don’t disagree, but I think there’s a problem with the way you argue it.

To the layman, they see two different approaches come to conflicting answers. One of them clearly must be wrong, but which one? One approach is complicated, difficult, and has a lot of “trust me” in it and many steps skipped over in the presentation. The other is simple, fairly intuitive, and sounds plausible.

It’s also a valid method. In a complicated calculation it’s very easy to go wrong, and because of the complexity not be able to see the error. But it’s normal practice to do a ‘sanity check’ approximate calculation to see that the answer makes sense. 12,345×81 = 9,999,945. But ten thousand times a hundred is a million, so we seem to be an order of magnitude out. Which calculation is more likely to contain the error: the one with many complex steps in it (in which I haven’t even shown you my working), or the one with only one simple one?

“You will see that the radiative transfer equations are derived from 1st principles and can’t be solved with a pocket calculator.”

I know that. But you know that to apply them to the real atmosphere you have to interpret, approximate, measure, and model, and it isn’t always clear what “first principles” you’re supposed to be applying. Does the calculation you’ve done apply to the situation you’re interested in? The arithmetic may be perfect, but the physical interpretation wrong.

Take this term “radiative forcing”. It is the radiative imbalance at the tropopause that would result from making the change to atmospheric composition but before letting the temperature adjust. Except that in practice the temperature always does adjust. So it’s an imaginary number; a hypothetical situation.

That doesn’t stop a lot of people calculating with it as if this was the “imbalance”, the net power input going into the system that constituted the global warming. 4W/m^2 multiplied by the number of seconds in a year is 126 MJ/m^2 build up per year. I’ve seen people do it! But fairly obviously, that’s got to be wrong. Instead, the troposphere increases its output to compensate.
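For reference, the (fallacious) accumulation arithmetic described above works out like this:

```python
# If a 4 W/m^2 "radiative forcing" were naively treated as a permanent
# net power input with no temperature adjustment:
seconds_per_year = 365 * 24 * 3600      # 31,536,000 s
energy_per_year = 4 * seconds_per_year  # J per m^2 per year
print(energy_per_year / 1e6)            # 126.144 -> ~126 MJ/m^2, as stated
```

The arithmetic is right; it is the physical interpretation (no adjustment of outgoing radiation) that is wrong.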

So it’s important to be clear on what all these different numbers mean, and why the one you’re using is the right one to use.

That said, I think the message that you’re trying to get across with the stratospheric bits of this post is an important one, and one that I think people like Dexter Wright would probably appreciate if they saw it spelled out clearly. That you can have a tiny effect in an unnoticed and poorly measured part of the climate system (like water vapour in the stratosphere) have a relatively massive effect on the final result at the surface, that could easily explain a significant chunk of observed warming or cooling without the need for your main hypothesis – this is an important thing to know when trying to decide whether the observations “prove” the hypothesis true.

I don’t know if that’s what you were trying to say. Like I said, the post seemed to jump around to me. But I hope you realise that in being critical I’m not being unappreciative.

harrywr,

“Almost no articles pointing out that each ‘ppm’ of additional CO2 has less impact then the previous.”

I’ve seen lots of articles that do. It’s true that many others don’t, but I certainly wouldn’t say ‘almost none’ do. The internet is a big place.

Solomonic esoterical studies aside, can any current instrumentation accurately measure stratospheric water vapor changes of 1 ppmv at every 1km of elevation? If so please explain how this works. If not, what in Blazes is accomplished by discussing this gibberish at all! If Solomon et al’s study can never be tested (1ppmv/1km) how can we understand whether strato-water vapor feedback means anything at this time?

I was thinking of the nature of the problem of explaining non-linearity to someone unfamiliar with it.
First, I had to figure out that there is a reason why someone might be unfamiliar with it.
That person will have never designed a wheel, cooked a meal, or worked out the interest on a business deal.
I guess that’s why I totally lost the thread?

Anyhow, here’s an excerpt from a nice children’s story- children my age, when we were very very young, had this sort of thing instead of Toxic Avenger.

“The humble poet declined the gold and begged to be rewarded in the following manner:
I will be content if we simply get a chess board, and have your treasurer put one grain of rice on the first square. For each turn, the treasurer will move the rice grain to the next square, but double it as he does so. In the first move, I’ll have one grain; in the second, two; in the third, four; and so on till we’ve moved through all sixty-four squares.”

The emperor was delighted, and agreed at once.

LOL – it’s the emperors and their offspring who are the ones with no clue about nonlinearity- not the poets or elephant washers.
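For anyone who wants to check the poet’s arithmetic (this is exponential, rather than merely polynomial, non-linearity):

```python
# One grain on the first square, doubling on each of the 64 squares
grains_on_last_square = 2 ** 63
total_grains = sum(2 ** k for k in range(64))  # equals 2**64 - 1

print(grains_on_last_square)  # 9223372036854775808
print(total_grains)           # 18446744073709551615, about 1.8e19 grains
```

Roughly a thousand times the world’s current annual rice production, which is presumably why the emperor came to regret the deal.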

Solomonic esoterical studies aside, can any current instrumentation accurately measure stratospheric water vapor changes of 1 ppmv at every 1km of elevation? If so please explain how this works.

It’s a very technical and (my opinion) dull field – from the rest of your comment you seem to think it can’t be measured this accurately. If you have a specific reason for believing that this can’t be done accurately, feel free to post it and I can have a look.

Satellite measurements provide a high level of geographical coverage but don’t provide as much vertical resolution. Radiosondes do provide the vertical resolution needed. The point of the paper showing the sensitivity at various levels in the atmosphere is to demonstrate the non-linear effect of different heights, and therefore why the location of water vapor changes needs to be known.

Here’s a paper which reviewed the accuracy of the HALOE measurement: Validation of measurements of water vapor from the Halogen Occultation Experiment (HALOE), Harries et al (1996). When I get the chance to read the whole paper I will post another comment:

The Halogen Occultation Experiment (HALOE) experiment is a solar occultation limb sounder which operates between 2.45 and 10.0 μm to measure the composition of the mesosphere, stratosphere, and upper troposphere. It flies onboard the Upper Atmosphere Research Satellite (UARS) which was launched in September 1991. Measurements are made of the transmittance of the atmosphere in a number of spectral channels as the Sun rises or sets behind the limb of the atmosphere. One of the channels, at 6.60 μm, is a broadband filter channel tuned to detect absorption in the ν2 band of water vapor. This paper describes efforts to validate the absolute and relative uncertainties (accuracy and precision) of the measurements from this channel. The HALOE data have been compared with independent measurements, using a variety of observational techniques, from balloons, from the ground, and from other space missions, and with the results of a two‐dimensional model. The results show that HALOE is providing global measurements throughout the stratosphere and mesosphere with an accuracy within ±10% over most of this height range, and to within ±30% at the boundaries, and to a precision in the lower stratosphere of a few percent.

You can also see the error bars on the graph in the body of the article for the HALOE results, which seem to be in tune with this paper on accuracy.

There are plenty more papers out there as well which review the accuracy of stratospheric water vapor measurements.

It’s not a field I find particularly fascinating, unlike trying to understand why the Eemian interglacial ended, for example. If you have a particular reason for believing that instruments can’t tell the difference between 4ppm and 6ppm of water vapor after taking a look at the latter paper, feel free to add another comment.

Eyeballing the Vostok data, glacial/interglacial transitions look very much like impulse responses. If that is the case, then an interglacial will always end and decay back to glacial conditions. The end of an interglacial does not require a trigger. It’s the end of a glacial that requires a trigger and also a stored source of energy or something to magnify the rather small initial forcing, assuming Milankovitch cycles are indeed the trigger. Once the stored energy is used up, the temperature will decline and start the storage process again. The Holocene was distorted by the Antarctic Cold Reversal and the resulting Younger Dryas in the NH so the peak temperature wasn’t as high as the Eemian and the decay hasn’t been as fast. AGW may also be contributing, a la Ruddiman.

A small point. You might wish to distinguish non-linearity from recursivity. In many mathematical models of physical phenomena the terms are recursive such that, say, y = f(x,z) and x = g(z,q,r) and r = h(y,m) (purely hypothetical example). This is entirely apart from the question whether r, say, appears as r squared, or cubed, or to the 1/2, or etc.

“He has a zucchini in his hand. He picks up a potato in the other hand and it weighs three times as much. He needs 100 potatoes – big cooking plan ahead – clearly 100 potatoes will weigh 300 times as much as one zucchini.”

I suggest you rephrase for accuracy. 100 potatoes will weigh three times as much as 100 zucchini, not one zucchini.

The REAL story is at the end, but first a clarification on “non-linear”:

The discussion on “non-linear” is, well, bush-league, as some of the respondents have noticed. It makes NO difference whatsoever to a model/equation if it has, say y=x^2. You can always apply a change of variables, let u=x^2, so now y = u and Bob’s your uncle, no more non-linearity.

Yes, x^2 is non-linear in that it’s a curve, not a straight line, but in mathematical modelling, differential equations, etc … that is not the issue.

The TWO important non-linearities are:

1) Non-linearity in coefficients e.g. y = a(x,y) * x, where a is a function of possibly both x and y. That is a proper (normal) non-linear equation.

This is mostly the sense in which the term non-linear (Partial) Differential Equation (PDE) arises. E.g. which of these are non-linear DE’s?

Only the last one is a non-linear (P)DE, even though there is no “^anything” in sight. Notice, a) is a linear DE, even though it has a “^2” in it.

HOWEVER, the much much more difficult non-linearity arises in the study of non-linear dynamics, which includes chaos, fractals, and generally aperiodic systems.

For the most part, if the dynamic you are trying to model is aperiodic (i.e. non-linear in this fractal sense) … you are SCREWED. It is a near certainty that you will NOT (ever) be able to forecast in any practical sense, even if you got the model equations exactly right.

If you don’t believe me, prove it for yourself in a few seconds. Look up a thing called the Logistic equation: x(i+1) = 4L*x(i)*[1 – x(i)], where i is the time step counter. Let x(0) be something like 0.5 and L something like 0.9333. Assume this is exactly correct as a model of some dynamic. Whack that into your spreadsheet, and forecast a few time steps.

What happens if you get the initial condition x(0) a little wrong (say it should be 0.501), or what happens if your “physical parameter” L was mis-measured and it should be 0.9393? Or both? Put that in your spreadsheet to see what “real” non-linearity is.
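For those without a spreadsheet to hand, here is a minimal sketch (in Python, my choice of tool, not the commenter’s) of exactly the experiment described above: two runs of the Logistic equation whose starting values differ by only 0.001.

```python
# Logistic map x(i+1) = 4*L*x(i)*(1 - x(i)) from the comment above.
# Two runs whose initial conditions differ by only 0.001 diverge
# completely within a few dozen iterations in the chaotic regime.

def logistic_run(x0, L, steps):
    """Iterate the map and return the full trajectory as a list."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * L * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_run(0.500, 0.9333, 50)
b = logistic_run(0.501, 0.9333, 50)  # tiny error in the initial condition

for i in (0, 10, 30, 50):
    print(f"step {i:2d}: {a[i]:.6f} vs {b[i]:.6f}  |diff| = {abs(a[i] - b[i]):.6f}")
```

The early steps track each other closely; within a few dozen iterations the two trajectories bear no resemblance to one another, which is the sensitivity to initial conditions the comment is describing.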

Any problem that is aperiodic will be subject to these often intractable problems, and be (in many cases) entirely unpredictable for all practical purposes.

NOTICE, this is a fundamental property of the space-time continuum.

…. oh what a surprise, the climate is an aperiodic system … Oh well, you’re screwed if you want to forecast the climate with those precious GCM’s … it can’t be done (no matter how many math geeks or supercomputers you throw at it).

Incidentally, you need NOT KNOW ANY MATHS to prove conclusively in just a few seconds that GCM’s must necessarily be rubbish. You can do this with just one word … volcanoes. Look at Fig 8.1 in the AR4, or Fig 9.8 in the AR5. How is it possible for those IPCC models to track sudden global cooling just after each of the major volcanoes so perfectly?

It is NOT! Unless you cheat.

Is there any proper scientist who believes or has proof that volcanoes are predictable (and apparently decades in advance, according to the IPCC)? Nonsense!

They cheated to force the model to track known data. The models would have been about 2C (massive error) above the 0.7C warming of the last century, if they had not cheated. Indeed, they use very clever cheating sometimes called Volcano Response Models (VRM’s) so that then they can add an extra layer of “solicitors’ tricks” to bamboozle the reader with “techno-babble”.

… if you can’t predict … it’s ALL OVER, since without prediction, you CAN’T SET POLICY …

… if you would like a few pages with mostly pictures, not too many equations, let me have an email address, and I will post one.

BTW, all the nonsense about radiative forcing being well understood is just that. Those Myhre et al and similar forcing “formulas” are just summary measures of model results … but models which cannot possibly be correctly calibrated to either initial/boundary conditions (for the PDE’s etc), and especially NOT for the dynamics between the two dates. It is absolutely correct to say those forcing relationships are (at least in part) “circular”.

The absolute reliance on “pure” model results is a fatal problem for the real world. When the data contradict your predictions, it is the model that is wrong … at least in proper science.

Phenomena of the Earth system are chaotic, but that alone does not tell, whether useful predictions can be made or not. Actually it’s clear that many predictions that can be made are very likely correct. Just to give examples:

– All surface temperatures will stay in the range −100 C to +100 C far into the future.
– Summers are warmer than winters at high latitudes.

On the other hand it’s certainly impossible to predict the weather in New York for the day 23 Oct 2023, or to predict with an accuracy of 0.01 C the average surface temperature in year 2100.

These examples are enough to tell that something can be predicted while other things cannot. Arguments based on supposed logic, as presented in your message, cannot tell where the limit of predictability is. Therefore such arguments are useless.

Only practical work with GCMs and various tests performed on them can tell what we can conclude from them and what not. There are large gaps in the knowledge on their usefulness in making climatic predictions. The most straightforward method of testing – making predictions about the future and comparing results with the real outcome – is extremely slow in adding to our knowledge. Other tests can be done based on historical data, but interpreting them is much more difficult, because model builders knew much about the history when they built and tuned the models. Even so, comparisons with history, and also with details of the present, tell about the validity of the models, though scientists themselves are unlikely to agree on how much such comparisons tell.

Thank you for your extended remarks. Unfortunately, and please don’t take this personally, there are a number of fatal errors in your submission. I offer a few remarks to demonstrate this:

1) If a model’s predictions are contradicted by real data, then that’s that. It’s back to the drawing board for that model. If the GCM is predicting 0.22 C/decade, and the actual value is consistently about half that, around 0.14 C/decade, then there is a serious problem.

It makes no difference whatsoever whether anyone else can come up with a (different) working model (to replace the error of the original model). If it is wrong, then it is wrong, and that’s that. Good scientists accept this, and go back to the drawing board … they don’t insist that they are right when the data contradict them.

2) As far as limits of predictability go, in fact I had provided explicit examples of the nature of the error in the IPCC GCM predictions, as you can prove within a couple of seconds yourself. I offered the evidence of the “volcano cheating” in Fig 8.1 AR4 or Fig 9.8 AR5. In particular, as it is not possible to forecast volcanoes (I trust we agree on that), the “model cheating” for the period 1900–2000 clearly demonstrates an error on the order of 1–2 C/100 years, and during a period when the net warming was 0.7 C/100 years, as I stated earlier.

That is, the IPCC error is about 2–3 times the size of the entire warming over 100 years. If that’s not fatal, then ….

That explicit example of prediction error is just for the volcano cheating, there are many other fatal problems.

2) As far as the prediction horizon for typical comparable aperiodic dynamics is concerned, I point you to weather forecasts (a much simpler, but still mostly intractable problem). In reality, most weather forecasts start to fall apart badly after a 2-day horizon.

… and there is very very much more money, math geeks, supercomputers, instrumentation etc devoted to weather forecasting.

As such, we can accept that processes with forecast horizons measurable in days are completely ill-suited to human needs 100 years out.

3) If we are going to debase this discussion to the point of suggesting that knowing temps will be within +/- 100 C is worth considering … then I must respond with:

a) Then, why spend even a penny on GCM’s, if that is the order of accuracy that satisfies you.

… You have just completed the entire “modelling” effort, and the IPCC et al can go home.

b) If that is the order of accuracy of interest, then the entire Climate Policy/Debate is OVER, since there is not a single sensible decision that can be made with forecast accuracy of that sort.

Clearly, the forecast precision required must be commensurate with needs/objectives at hand.

3) It is the nature of aperiodic systems to make virtually any forecasting utterly meaningless, even if you got the model equations exactly right. That is the essential theme with aperiodic (chaotic) systems, as explained earlier.

I had provided in my note the Logistic equation, and some inputs, and asked the reader to take a few seconds to whack it into a spreadsheet (since I could not see a way to upload my sheets/charts). Have you tried that forecast? If you had, you would immediately see the fatal difficulty with forecasts that have (high) sensitivity to initial conditions, and to parametrisation … even when you have the model equations exactly right.

==> If ScienceOfDoom is interested, please send me an email address or upload instructions, and I will provide (as offered in the initial posting) a short pedestrian (mostly pictures, not too many equations) note illustrating these issues.

Finally, the GCM’s are built on deterministic models which do not explicitly anticipate aperiodic issues. As such, they have no chance at all (save for the odd lucky guess), as they are not even in the right ball park. A bit like trying to use a deterministic equation to predict the roll of dice (i.e. a stochastic problem).

Then, the reliance on “ensemble averaging” is yet another “solicitor’s trick”. And so on, and so on ….

Please read the note again for the explicit data, and do have a go at the Logistic equation … who knows, you may learn something you haven’t seen before.

2) I don’t think that anybody has ever tried to predict volcanic eruptions. When their effects are present in the model results, we are always looking at results of calculations that use as input historical forcings that include in some way the volcanic eruptions, typically based on the estimated effect from the stratospheric aerosols that originate from the eruption. Most meaningful results for comparison are produced when the forcings are based on known history but the final results are produced by the model that’s not tuned in other ways based on the observed behavior.

3) Models are used in fundamentally different way in weather forecasting and in climate projection.

Weather forecasts are calculated from an initial state determined from most recent observations. Nowadays it’s a common practice to vary the initial state a little and to compare the resulting forecasts. Even in this case we are looking at initial states that are almost the same and the prediction is expected to tell much more than that the temperature is within the limits normal for the season.

In climate projections the goal is to determine the limits of variability of temperature and some other climatic variables for a future date. Determining the limits of variability is obviously a very different task from predicting the actual weather. Therefore it’s possible at some level. How accurately that can be done is another matter and I don’t go into that.

4) All GCM type models have chaotic behavior in each model run. In that respect they are similar to the real atmosphere. Again the quantitative success is another matter.

—

You may continue to argue that the models are not good enough, but to do that in a credible way you must give up the qualitative arguments based on false logic and learn enough about the models to argue using justifiable quantitative observations.

You need to do some homework on chaos theory. One can get chaotic behavior from a completely deterministic set of coupled non-linear differential equations. That’s what Lorenz found.

Even planetary orbits are chaotic over a long enough period. It’s the three body problem. There is no general analytic solution for a gravitationally bound system with more than two bodies. One uses finite difference techniques to solve the problem. That’s how the GCM’s solve the Navier-Stokes equations, which are also coupled and non-linear, for flow.
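As a concrete sketch of the point about Lorenz and deterministic chaos, here are his classic 1963 equations integrated numerically; the parameter values are the standard ones, but the integrator and step size are my own illustrative choices, not anything from the comment.

```python
# Lorenz's 1963 system: three coupled, deterministic, non-linear ODEs.
# Integrated with a fixed-step RK4; two runs started 1e-8 apart in x
# end up macroscopically different -- deterministic chaos.

def lorenz_deriv(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    def shift(u, v, h):
        return tuple(ui + h * vi for ui, vi in zip(u, v))
    k1 = lorenz_deriv(s)
    k2 = lorenz_deriv(shift(s, k1, dt / 2))
    k3 = lorenz_deriv(shift(s, k2, dt / 2))
    k4 = lorenz_deriv(shift(s, k3, dt))
    return tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

s1, s2 = (1.0, 1.0, 1.0), (1.0 + 1e-8, 1.0, 1.0)
max_sep = 0.0
for _ in range(4000):                     # 40 time units at dt = 0.01
    s1, s2 = rk4_step(s1, 0.01), rk4_step(s2, 0.01)
    max_sep = max(max_sep, abs(s1[0] - s2[0]))
print(max_sep)   # grows from 1e-8 to the scale of the attractor itself
```

Both trajectories remain bounded on the attractor, yet a perturbation of one part in a hundred million grows to the size of the attractor within a few tens of time units — the behaviour Lorenz found in a fully deterministic system.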

I’m a little less sanguine than Pekka about the utility of GCM’s for projecting future climate. It’s clear to all but the true believers that most GCM’s have a climate sensitivity to GHG forcing that is too high and that they rely too heavily on aerosol effects, which are treated differently in different models, to match the instrumental data from the late nineteenth and twentieth century.

Wouldn’t it be nice if people had actually read any of the posts I have provided:

1) In each one I speak about the Logistic equation, and ask readers to have a look at its behaviour, and I have even posted a link to a spreadsheet you can download to save you the 3 seconds it would take to type it in.

It is a deterministic equation.

2) Nowhere do I claim that deterministic equations cannot be chaotic, so I have no idea where you get your information from.

3) For the record, the Navier-Stokes equation(s) are NOT NECESSARILY chaotic.

You will find, for example, that in general 1- or 2-dimensional continuous (P)DE’s may require at the very least special forcing to be chaotic, while higher-dimensional versions can, under some circumstances, exhibit chaotic behaviour spontaneously.

A 1-D N-S may have trouble “getting off”, even with many forcing functions.

By contrast, even 1-D difference equations can be chaotic, as with (and the reason for) the example I provided … the deceptively simple-looking but not-so-easy Logistic equation. Who would have thought that what is, roughly speaking, the equation for a parabola could be this complicated?

… I’ve only been doing this for some decades, so maybe I got it wrong.

HOWEVER, the real question is not whether the models have stochastic components or not (in fact there is stochastic chaos, too). Rather, it is whether the modellers had built models with the intention of modelling chaotic behaviour.

I have not seen in the IPCC literature that they had done so.

Moreover, if it is one’s intention to build a model in anticipation of chaotic elements, then there are not just special/explicit theoretical matters to address during the derivation of the equations of motion, etc, but also much special preparation. For example, to determine the nature of the phase space/attractors/fractal dimensions etc etc.

In reality, even just that preparation would not be possible for an explicit chaotic climate model, since generally, and amongst other things, one needs extremely long time-series for estimating embedding dimensions, fractal dimensions, etc etc … which not only do we not have, but can’t possibly have.

… and all that is before we speak of accounting for volcanoes, and other “terminal” factors, etc.

HOWEVER, all that is rubbish to some extent, since if in fact the models are intended or understood to be for a chaotic system from the outset, THEN there is a substantial probability that the process is NOT predictable, in any practical sense!

All of my previous posts make this point. The repeated urging for the reader to actually look at the Logistic equation example, and to play with the IC’s and parameters, is so you can PROVE to yourself exactly this point.

AS SUCH, and why I asked Pekka for his sources, if the IPCC modellers knew in advance somehow that indeed the model would have to cover aperiodic states, THEN is it not INSANITY to spend billions over decades to build models that we have not even checked to see if they can predict ANYTHING at all?

For example, and why you should play with the Logistic equation: if you get your IC even a tiny bit wrong (e.g. you incorrectly measure the initial simulation temperature(s) with a 0.001 C error), and/or your lab measurements for parameters, say, viscosity (e.g. lambda in the equation) came up as 0.9393 when in fact they are 0.9343 (or whatever), then you are well and truly screwed … after just a few time steps/iterations into the “future”, the prediction will have (fatally) departed from the correct answers, and it’s over before you have even begun.

… and yes, that is with a 100% deterministic equation.

Put differently, under those conditions any notion of predictability vanishes.

If you can’t predict, then you can’t set policy either … that puts an emphatic END to the entire climate policy discussion.

… would you not consider that an important result?

At the very least, should we not have asked these question some decades ago, and then spent those wasted billions on feeding the hungry etc.?

Cheers

DrO

PS

Also, the use of RF in climate models is not a particularly good idea. It is a summary measure that, even if arrived at by “precise” means, is really just used because proper modelling is much more difficult. Strictly speaking, RF = a ln(C/Co) is rather silly, since it is not really connected to real atmosphere data (only lab spectrum data, and “more models”), and why is it not something like:

RF = a ln(C/Co) + b ln(H2O/H2Oo) + c ln (volcano something/VSo) … ??

especially when the IPCC’s ARs start with statements like “water vapour is by far the most important GHG”, etc
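For readers who want to see what the single-term expression being criticised actually produces: the published simplified CO2 formula (Myhre et al. 1998) takes a ≈ 5.35 W/m², and the code below just evaluates it for a doubling; the function name is my own.

```python
# The simplified CO2 expression under discussion: RF = a * ln(C/C0),
# with a = 5.35 W/m^2 (Myhre et al. 1998). A doubling of CO2 gives
# the oft-quoted ~3.7 W/m^2 of forcing.
import math

def co2_forcing(c, c0, a=5.35):
    """Radiative forcing in W/m^2 for concentration c (ppm) relative to c0."""
    return a * math.log(c / c0)

print(round(co2_forcing(560.0, 280.0), 2))  # doubling: ~3.71 W/m^2
```

Note this is exactly the kind of single-variable summary measure the comment objects to: there is no H2O or volcano term in it, only the CO2 ratio.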

BTW, just for trivia, our understanding of chaotic systems (or more correctly non-linear dynamics) very considerably pre-dates Lorenz. Personally, I am a bit of a Poincaré fan.

I tried to avoid presenting judgment in either direction on the usefulness of GCMs emphasizing only that the usefulness must be judged based on their observed performance, not through generic arguments.

I have stated elsewhere that my feeling is that presently the best way of figuring out what can be said about future warming during this century is to use directly the estimates of the transient climate response (TCR) rather than complex models like GCMs. Using TCR directly may require support from some simple energy balance model, but complex models like GCMs reduce transparency without improvement in the accuracy. GCMs may be of some value in estimating other changes than average warming, but even that is questionable, as their reliability in making regional forecasts is not good.

Models have also some role in the estimation of TCR, but in a way that’s likely to make the model dependence very limited.

In fact, science is precisely about making judgements. It is simply that the rules for that judgement are also very precise. Notably, if the outcome of your theory is contradicted by the outcome of reality, then the judgement of science is that (usually) the theory is “toast”.

If the IPCC insist that a +2 C or −2 C difference 100 years from now spells the difference between Heaven and Hell, then surely the models MUST have a prediction reliability much better than +/- 1–2 C.

Disallowing just the last four bits of “volcano cheating” in Fig 8.1 AR4 (or equivalently, Fig 9.8 AR5) shows the IPCC models to have an error on the order of 1 – 2 C/100 years. That is, their error alone is the difference between Heaven and Hell (according to them).

So, without knowing anything about PDE’s, fractals, or even parabolas, what should science’s judgement be (on the current state of AR forecasts)?

… clearly, in the judgement of “this” scientist :-), the models are “toast” :-(.

That is based on clearly (and perhaps reasonably) quantified analysis, not some hand waving arguments.

Without any loss of generality, we can say, since volcanoes cannot be predicted, the IPCC models are well and truly and necessarily “toast”.

However, as the analysis of non-linear dynamics shows, the problem is very likely even more certainly “toast”. If the models exhibit SIC etc, then there is no hope whatsoever that the models have any ability for any practical forecasting under any circumstances (at any price).

I have not passed through a black-hole (at least not that I recall), but I believe Einstein’s equations to be a (mostly) correct predictor of what will happen to me near the event-horizon.

It is by appeal to the same type of deep property of the cosmos that we can generalise about certain properties of models/mathematics.

I would be extremely happy if anyone proved me wrong on this, really. Since in that case, it would mean we would be able to solve pretty much every important remaining problem out there, not just the climate problem.

… and Mr Payne could win the Millennium Prize for solving one of the last remaining great problems in physics/math – existence and smoothness of Navier-Stokes (right up there with proving the Riemann Hypothesis).

Until then, IPCC model forecasts are well and truly toast.

… in a sense I don’t really mind that the models are toast (they really are much too ambitious/heroic), rather it is more how the IPCC abuses the results … it makes me ashamed that they are called scientists too (I would be happier if they were called “Slientists”). Also there is a non-trivial probability that if they get their way, they could actually destroy the planet (or at least, life as we know it, Jim).

All atmospheric scientists since Lorenz have known that GCM type models have chaotic behavior. They do surely expect also that the real atmosphere has chaotic behavior. Some recent posts on this site have discussed these facts. All atmospheric models have been built well aware of this situation. Thus models with chaotic behavior have been built to describe a system with chaotic behavior. That has resulted in models that have been very useful in weather forecasting, while their value in climate science cannot be tested as easily.

There are arguments to support the view that similar models may also be useful in studying climatic questions. Thus building climate models was not initiated in a situation where it could be known that the attempts will be futile. At present it’s still reasonable to expect that the models will be useful. As commented by DeWitt Payne and also by myself, their value in projecting future temperatures is still questionable, but that’s not at all enough for concluding that they will be useless.

When we want to learn how useful GCMs are, we must study GCMs. It’s of no value whatsoever to look at the logistic equation in that quest, because it’s a different equation with different properties, and because the modelers are well aware of such examples of chaos.

When we are using equations that have chaotic behavior to study long-term behavior, the hope is that the attractor is small enough. There will be chaotic behavior within the attractor, but knowing the properties of the attractor has a meaning similar to being able to project the development of climate.

The Earth system is in many ways different from simple examples of deterministic chaos. Thus the concept of the attractor is not as clear as in the case of simple examples of chaos, but the idea should be clear.

GCM type models have the radiative forcing as input at a more detailed level than as a simple number calculated from a logarithmic formula. The forcing is introduced as a local property of the atmosphere that affects absorption and emission of radiation. That’s true both for the influence of CO2 and for aerosols from volcanic activity.

All chaotic systems are different and only some general ideas are common to all of them. Therefore each of them must be studied separately.

Ugh. Not trying to be funny or anything, but we seem to be in the “a little knowledge is a dangerous thing” arena on chaos.

… but much more worrying is that you have proven with your own statements that you really have missed the really important issues.

So, I offer three parts:

First, I will explain again, and also using your own statements, how and why using any models to predict the climate with the view to completely alter the course of human activity is not only futile, and necessarily so, but also very very dangerous and dishonest.

Importantly, it is the raison d’etre of SIC etc to defy predictability, and this is written into the fabric of the cosmos.

Second, I will provide just a little information about chaos etc to allow you to start looking in certain directions for assistance. I simply cannot commit to the hundreds of hours it would require to provide you a proper introduction to the subject.

Finally, I revisit the moral dimension that arose in our previous exchange, but I do so for the last time.

… just for interest, systems of coupled non-linear PDE’s and non-linear dynamics are my area of expertise and have been for some decades. I derive models, solve the equations, and deal with all manner of attractor/state-space and related issues on a regular basis as a vocation (i.e. for a living, not just theoretical). I have written and lectured on the subject at a graduate level, which is why I know it would require hundreds of hours of my time to get you from where you are now to the “starting line”.

1) IPCC-like models CANNOT under any circumstances have any value for the type of predictions they ARE making. Namely, the IPCC/UN objective is to alter completely the nature and welfare of the entire population of the planet. Indeed, the models, in this use, can have very negative and unconscionable value.

Do you understand the context? If, as I proposed earlier, their/your only interest is “research for fun” and purely academic, that is a very different context. I have no problem with that, though in that case I would not wish my tax dollars supporting that “just for fun” work.

… but, if you intend in any way to sway or trick the masses by inept or dishonest use of these tools, then we are at a serious impasse … do you understand this context?

Just the volcano story alone fully and thoroughly ends any discussion on the “meaningfulness” of the IPCC models … in the big, screw the masses over context.

Next, it is a bush-league mistake to suggest that the Logistic equation is not fundamentally telling of the terminal problem with these types of dynamics. A child’s bicycle is a far cry from a Grand Prix motorcycle, but the principles of steering with the handlebar, that you need to be moving at a minimum speed to avoid falling on your head, etc etc are the same.

So it is with aperiodic systems. All forecasting models must know the initial conditions (IC’s) and generally you must also estimate boundary conditions (BC’s), and various parameters, which are often complex non-linear forms of their own. If the model is, for example, highly sensitive to IC’s (called SIC), then a tiny error in the IC will completely derail the forecast very quickly into the simulation. Notice, this collapse will happen even if you got your model equations exactly right. GCM’s need IC’s. To predict “tomorrow’s” temperature, you need to know “today’s” temperature. If the model is SIC at, say, a 0.001 C error in the IC, and your thermometer is only reliable to, say, +/- 0.05 C, then, mon ami, you are well and truly screwed. It will be fundamentally impossible to make any meaningful forecast past one or a few time steps into the future … it’s over!

That is the same regardless if it is the Logistic equation, a full set of Navier-Stokes equations, etc etc. If it blows up on SIC then it blows up on SIC … it’s over!

… Again, it’s over in the “you are now screwing the masses” context, not in some fun & frolic research just for the Hell of it.

Of course, and repeating again, the situation is much worse since the models will have sensitivities to errors also in parameters etc.

A tiny error anywhere at anytime is magnified exponentially as the error feeds-back on itself with each time step.

The Logistic equation demonstrates this beautifully, as does any chaotic system with SIC etc. SIC is one of the fundamental properties of chaotic systems. If a system cannot demonstrate SIC, then (almost surely) it is not chaotic.
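One rough way to quantify the SIC being described is to estimate the largest Lyapunov exponent of the Logistic equation by averaging log|f′(x)| along an orbit; a positive value means exponential error growth. This sketch (my own, not from the comment) uses the same 4L parametrisation as earlier.

```python
# Estimate the Lyapunov exponent of x -> 4L*x*(1-x) by averaging the
# log of the absolute derivative along the orbit. Positive => SIC
# (exponential divergence of nearby trajectories), negative => stable.
import math

def lyapunov_logistic(L, x0=0.5, transient=500, steps=5000):
    r = 4.0 * L
    x = x0
    for _ in range(transient):          # discard transient behaviour
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(steps):
        # |f'(x)| = |r * (1 - 2x)|; the tiny offset guards against log(0)
        total += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)
        x = r * x * (1.0 - x)
    return total / steps

print(lyapunov_logistic(0.9333))   # positive: the chaotic regime used above
print(lyapunov_logistic(0.70))     # negative: a stable, periodic regime
```

The exponent roughly sets the forecast horizon: an initial error of size e grows like e·exp(λn) after n steps, which is why a 0.001 error swamps the forecast within a few dozen iterations when λ is positive.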

Notice all this assumes the model equations are exactly correct (i.e. you know with precision what the attractor(s) look like). In reality, especially in climate science, you have NO HOPE at all of getting a handle on that, and NO HOPE at all of getting the right equations/attractors, as explained earlier.

Notice this also assumes that the attractors themselves are stationary in some sense, which is very often not the case, further destroying any hope at all of obtaining meaningful predictions.

Put differently, knowing that a physical system is chaotic in some sense is an ENTIRELY DIFFERENT problem compared to trying to model/predict that reality. One has little to do with the other, since in many cases regardless of your understanding of the dynamics, the SIC etc issues are pathological and put a full and final end to any notion of predictability in any practical sense.

If you believed that your GCM’s were useful in some sense, should you FIRST not prove that there is manageable SIC etc? Or should you, as you put it, “hope” that models will work, and then just force the population into massive upheaval on the “hope” that you got the models right?

… incidentally, where in the scientific method is there a notion that one should “hope” they are right, rather than use data/facts to prove they are right?

2) Just a few illustrations to point you in the right direction Re chaos/non-linear dynamics (as noted I simply can’t put several hundred hours into your education to get you to the “starting line”).

a) It is, I am sorry to say, gibberish to make comments like “the hope is that the attractor is small enough”. The size of the attractor is not particularly important in this context. The things that are important, indeed crucial, include:

i) Is the attractor a fractal, and if so, do we even have a chance to characterise it?

ii) Is it stationary?

iii) Is there more than one in the state space?

iv) If there are multiple attractors, what is the nature of the transitions and “jumps” between them?

…. and so on, and so on, …

Of course, you can run the simulation with guesses and see what character those guesses produce, BUT THEN:

i) How do you know that is the correct “character” of the real chaos vs. your guessed chaos?

ii) Are the SIC’s with your guesses the same SIC’s that the real system exhibits, etc?

iii) Is there any chance that the SIC’s are manageable?

… and so on, and so on …

b) To say that all GCM’s are chaotic, or that all climate scientists know that the climate is chaotic, is utterly meaningless, and in many cases just plain wrong. The exact same set of equations may or may not exhibit chaotic behaviour depending just on the values of the parameters.

The Logistic equation demonstrates this beautifully. When lambda (L) is less than about 0.8’ish the system is a purely periodic dynamic. As the parameter L increases, at some point the system moves from periodic to aperiodic. This is a bit like having two attractors: one fractal, one not. The change in the parameter causes the system to transition between the attractors.

Then, getting L wrong (even a little, in many cases) radically alters the regime of the dynamics. Much worse, in the real world, the parameters are often themselves non-linear, so they change over the simulation. Thus the regime of the dynamics is partially controlled by the non-linearity of the parameters (non-linearities embedded in non-linearities etc etc).
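The regime change with L can be seen with a crude sketch (the classifier, thresholds, and rounding below are my own illustrative choices): iterate the map past a transient, then count how many distinct rounded values the orbit visits.

```python
# Classify the long-run behaviour of x -> 4L*x*(1-x): iterate past a
# transient, then count distinct rounded values visited. A small count
# indicates a periodic cycle; a large count an aperiodic orbit.

def attractor_size(L, x0=0.3, transient=2000, sample=500):
    x = x0
    for _ in range(transient):
        x = 4.0 * L * x * (1.0 - x)
    seen = set()
    for _ in range(sample):
        x = 4.0 * L * x * (1.0 - x)
        seen.add(round(x, 6))
    return len(seen)

for L in (0.70, 0.80, 0.87, 0.9333):
    print(L, attractor_size(L))   # small counts = periodic; hundreds = aperiodic
```

Running this shows the progression the comment describes: at lower L the orbit settles onto a short cycle, and by L = 0.9333 it wanders over hundreds of distinct values, never repeating — the periodic-to-aperiodic transition driven purely by one parameter.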

c) Then, suppose you actually manage to build a model. How do you calibrate it to the real world? Since these types of dynamics have high-dimensional attractors, and since the data requirements increase exponentially with each increase in attractor dimension, the amount of real-world (especially time-series) data that is required becomes huge.

Where do you get time-series tens of thousands of points long for water vapour, or for decadal ocean cycles, or for how the heat coming up from 6,000 K (near-Sun) temperatures a few kilometres below your feet behaves, etc etc etc.

Once again, you may consider it important and useful to research/study those things, fine, but you do not get to use my tax dollars, and under no circumstances is it permissible to screw over humans on your whim.

Finally, you are absolutely incorrect in your generalisation that the difference amongst different chaotic systems necessitates studying each one. As stated, it is a tautological requirement for chaotic systems to exhibit SIC etc. As such, all chaotic systems must have SIC. If they have SIC, then in many if not most cases, you’re thoroughly screwed in the “practical forecasting” context. It makes no difference how many math geeks or supercomputers you throw at it, it is NOT POSSIBLE to produce any practical predictions.

Put differently, it is the raison d’etre of SIC etc to defy predictability, and this is written into the fabric of the cosmos.

3) Finally, as it was considered in an earlier exchange, it may be that your morals are fundamentally different compared to mine. In my world, one is NOT permitted to screw over others on a whim and for self-serving purposes. If your morals are different, then science and mathematics are irrelevant, and my interest has expired.

So, if you are one of those people who desperately needs to get the last word in, go for it. I won’t respond unless you actually provide real equations, verifiable data, and in general things that are direct scientific truths. The hand-waving approach, with its many instances of “it may be interesting to someone somewhere for some study” or “hope that it is right”, is not something I can support any further, and my “educate Pekka” budget has been consumed.

Cheers

DrO

PS: Radiative Forcing is a tool to help simplify the problem, but at the cost of reliability. It is actually something of a bad idea from a modelling perspective, though summary measures of the RF type are “convenient”. The correct, rigorous approach for prediction is to solve the entire system explicitly.

Moreover, RF’s as used by the IPCC are just “models on models”, and even then with massive abuses. Never mind for a moment that the HITRAN-type embedded models are tied only to lab data (not the atmosphere); in the end they somehow simply arrive at

RF = a * Ln (CO2/CO2o)

to explain the planet’s change from 1850 to about now.
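For what it is worth, that expression is easy to check numerically. The coefficient a is not given in the comment; the value a ≈ 5.35 W/m² is the fit commonly attributed to Myhre et al. (1998), and with it a CO2 doubling reproduces the familiar ≈3.7 W/m² figure quoted later in this thread:

```python
import math

A = 5.35  # W/m^2; fit coefficient commonly attributed to Myhre et al. (1998)

def rf_co2(c_ppm, c0_ppm):
    """Simplified (summary-measure) radiative forcing for CO2,
    RF = a * ln(C / C0). A fit to detailed calculations, not what
    GCMs integrate internally."""
    return A * math.log(c_ppm / c0_ppm)

rf_doubling = rf_co2(560.0, 280.0)  # doubled CO2 vs. pre-industrial 280 ppm
print(round(rf_doubling, 2))  # 3.71
```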

If you are going to use this type of less reliable but convenient summary measure, why wouldn’t it be something like RF = a Ln (CO2/CO2o) + b Ln (H2O/H2Oo) + … etc.?

Indeed, reading Myhre’s paper (apparently the most cited paper for calculating RF), I could not find a single instance of the words water, water vapour, H2O etc. anywhere. How does that square with the IPCC’s standard comment at the beginning of the ARs that “water vapour is by far the most important GHG”?

Where do they account for the massive drop in air density through the troposphere, and the near-absence of air by the stratosphere? I mean, so what if CO2 is 400 ppm; the entire volume of the stratosphere contains, what, three and a half molecules? By 20 km you would need about 12 × 400 ppm to have the same amount of “stuff” as at sea level, etc.

So there is much to be considered, not only at the big-picture level of the IPCC models, but also in the internal bits, particularly since their model predictions are clearly contradicted by reality (in the “let’s screw humans over” context).

This discussion seems to be leading only towards longer messages from you repeating the same points. I skimmed your latest comment with no interest in looking at it carefully, as you seem to be making only more unjustified generalizations with zero real content. I have explained my point above, and that applies equally well to your latest comment as far as I can judge from skimming through it.

I can’t speak for Pekka, but I don’t carefully read your posts because you aren’t saying anything new. Gerald Browning, in articles at Climate Audit years ago, raised similar points with much more rigor and active comments from a wide variety of people (here and here, with comments continued for the second article here and here).

But the point is that whether it is even possible to model the global climate system doesn’t really matter. Simple energy balance models produce similar results, which are that increasing GHGs will cause temperatures to increase.

The people who want to use climate change as a lever to increase government control love it when people like you spend time and effort attacking climate models. It’s a wasted effort. The public neither knows nor cares. The weak points in the arguments justifying drastic intervention are the estimates of the costs from increasing temperature, which appear to be highly inflated, and the costs of mitigation, which are seriously underestimated both in fiscal and human terms. These are to be found in the IPCC Working Groups 2 and 3 reports.

We seem to be in perfect agreement on the really big issues. I am completely with you on the notion that the biggest problem is that the public cannot get a full grasp on the models, and many attempts to do so will result in “eyes glazing over” etc. Indeed, that may be why the mainstream media make no attempt even to mention these problems (though some media are themselves ideologically driven).

This is why I had focused my initial submission on the volcano and Fig 8.1 AR4 story (and also the downloadable doc here http://www.thebajors.com/climategames.htm). Two pretty pictures and just a couple of pages of pedestrian explanation.

However, I had also included the SIC angle for those who wished also to obtain a tiny bit of insight as to why the mathematics must be impossible as a fundamental property of the universe, e.g. via the “Kneading Dough thought experiment” in that doc.

The objective in both cases is to allow the uninitiated to have a chance at understanding the (lack of) value of IPCC forecasts, and to think twice before voting for a global upheaval.

My submission soon digressed into deeper details of non-linear dynamics, primarily to assist Pekka in understanding the facts and mathematics. I too saw that the Pekka route was a cul-de-sac regardless of how much reason or fact was provided, so, as you see, “I bailed” (eventually :-).

… and thus the dilution of the main point, due to the effort to help Pekka.

Regarding your links to other analyses of apparently similar issues, thank you for that … but those are, to a large extent, not actually the same issues.

… this may be where more eyes start glazing over, so some readers may wish to skip to the last “Objective” part below, but I offer just a few points to illustrate the differences:

1) Many of those model issues focus on “micro” issues, such as storm forecasting. While there is uncertainty in those, some of those problems are “easier” in a sense. For example, forecasting the path of a hurricane is in a sense an easier problem, since the hurricane has huge inertia, and then Newton’s 1st law may be more important than, say, the Reynolds number, etc. I know … it’s an oversimplification, but I need to limit this digression.

… only a few of the many entries in those blogs touch on proper SIC, and it is not clear if any of those points actually make it into the models.

2) Some of the issues there arise because they are dealing with systems that have a natural or core exponential character. Those may exhibit SIC even if they are not chaotic. For example, forecasting a moon launch is SIC in the sense that a small error at launch time may cause the ship to miss the moon entirely, though the dynamics are not chaotic.

3) Some of those discussions revolve around an entirely different type of SIC, due PURELY to mathematical convenience. Notably, when we model flows etc., the PDE’s will require Initial Conditions (IC’s). HOWEVER, there are times when the “initial” conditions are only known (or more conveniently calculated) at the “end of time” (or other boundary). In those cases, the PDE’s IC’s are actually taken at the “end”, and the numerical simulators used to solve the PDE’s are “run backwards in time”. Notice, this is not “back testing”; it is purely a convenience in how the simulators are used, due to the “practicalities” of IC’s etc.

As it happens, when you flip the IC’s from the beginning of time to the end of time, the character of the PDE’s changes. Sometimes the “reverse” problem has a property referred to as “ill posed”, which shows up in the solvers as a non-diagonally dominant Jacobian etc., i.e. ill-conditioned matrices. Trying to solve an ill-conditioned system is the equivalent of dividing by zero, or very nearly. Not surprisingly, the solutions “blow up”, and before that exhibit something similar to SIC.
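The blow-up of a time-reversed diffusion problem is easy to demonstrate. The sketch below (my illustration, not from the thread) takes the 1-D heat equation u_t = D·u_xx with an explicit finite-difference scheme: run forward in time, the solution decays smoothly; the same scheme run “backwards” (u_t = −D·u_xx) amplifies the highest-frequency component of a tiny perturbation by many orders of magnitude within a hundred steps:

```python
import math

N = 64
dx = 1.0 / N
D = 1.0
dt = 0.2 * dx * dx / D  # satisfies the stability limit for the FORWARD problem

def step(u, sign):
    """One explicit Euler step of u_t = sign * D * u_xx, ends held at zero."""
    new = u[:]
    for i in range(1, N):
        new[i] = u[i] + sign * D * dt * (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
    return new

# Smooth profile plus a tiny high-frequency "measurement error".
u0 = [math.sin(math.pi * i * dx) + 1e-6 * math.sin(30 * math.pi * i * dx)
      for i in range(N + 1)]

fwd, bwd = u0, u0
for _ in range(100):
    fwd = step(fwd, +1.0)   # well-posed: diffusion smooths the profile
    bwd = step(bwd, -1.0)   # ill-posed: high-frequency modes grow explosively

max_fwd = max(abs(v) for v in fwd)
max_bwd = max(abs(v) for v in bwd)
print(max_fwd, max_bwd)  # forward stays O(1); backward blows up
```

The 10⁻⁶ wiggle, invisible in the forward run, dominates the backward run completely, which is the "blow up" behaviour described above.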

Hope I haven’t diluted the story too much.

Objective:

TO BE SURE: my primary goal is anything that helps spread the understanding of what (IPCC) forecasts actually mean (and do not mean). Clearly, they do not (and cannot) mean what the IPCC would like/wish/hope them to mean. So, how do we get that message to the public in a proper yet pedestrian scientific manner without eyes glazing over?

Is the volcano story a good way to go? Or???

Regardless, I am happy to make myself available for any (proper/scientifically true) effort that achieves that.

I am also fully sympathetic to your point that there will always be “some” model that produces an IPCC “wished for” result, which will then be touted by “believers” regardless of science … and in any case there is some barrier to truth, since maybe there will never be a proper explanation that can be shoehorned into a “sound bite” or 140 characters.

Many people have read Myhre’s paper without the background of the previous hundreds of papers in this field and, like you, concluded that the IPCC does not understand radiative forcing, or has created something useless, or the models are clearly wrong because the radiative forcing equation is … etc., etc.

GCMs don’t use this formula. The paper of Myhre has a particular point. Without Myhre’s paper – i.e., if it had never been written and no one had this formula – it would have zero effect on climate models and zero effect on most climate research.

I can explain more if it matters.

On Chaotic Systems

I believe you do make a valid point, and although I am a novice in the field of chaotic systems (only a couple of books, Lorenz’s papers, and some playing around with solutions to some chaotic equations) it’s clear that without having a perfect model it is impossible to know the “chaoticness” – to use a non-technical term – of the system.

As many chaotic systems demonstrate, a tiny change to one parameter in an equation for a chaotic system can cause a major transition from a well-described, well-understood system to “a total mess” – again to write in non-technical language.

Thank you for the offer to explain further, I am interested, but first I think some clarification may better focus the “further” bits. I have reversed the order from RF & Chaos to Chaos & RF for clarity:

Chaos (and Volcanoes): Crucially, even if you get the (chaotic) models exactly right (e.g. just suppose the climate was exactly modelled by the Logistic equation), still it may well be an entirely useless result. That is because not only are the systems sensitive to parameters, but also sensitive to initial conditions (SIC). If you get “today’s” temperature, pressure, etc even slightly wrong, “tomorrow’s” forecast can be massively wrong (with the exact correct model).

For those who don’t “care” that the forecast puts the values at ±100 C, then “that” model may be acceptable. However, if the planet is to undergo a massive upheaval on a movement of just +2 C, then clearly you are likely screwed in your forecast on SIC alone.
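A minimal illustration of SIC (mine, not from the thread): iterate the logistic map in its chaotic regime from two starting values that differ by 10⁻¹⁰, roughly “today’s temperature measured slightly wrong”. The trajectories are indistinguishable at first and then diverge to order one:

```python
r = 3.9                   # a chaotic parameter value for x -> r*x*(1-x)
x, y = 0.4, 0.4 + 1e-10   # "true" vs. slightly mis-measured initial state

early_gap = None
late_gap = 0.0
for n in range(1, 101):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n == 5:
        early_gap = abs(x - y)               # still microscopic
    if n >= 40:
        late_gap = max(late_gap, abs(x - y))  # grows to order one

print(early_gap, late_gap)  # tiny at step 5; O(1) well before step 100
```

With the exact correct model and an initial error at the tenth decimal place, the "forecast" is eventually no better than a random draw from the attractor.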

However, none of that matters in a sense, since you still have the problem of incorporating the effect of things like volcanoes, which I trust we all agree cannot be predicted, and for which even the IPCC’s data have huge implications (e.g. Fig 8.1 of AR4 shows four volcanoes with an impact of around 1–2 C/100 years, when the total warming was 0.7 C/100 years).

RF: Yes, I would be grateful for additional detail, but I think I may not have made my earlier comments sufficiently clear. I have NOT “concluded that the IPCC does not understand radiative forcing, or has created something useless …” as you suggest. Rather, I have questions about what they ACTUALLY have included, and how exactly they use those results to arrive at their forecasts. There are many conflicting submissions on the subject, and I haven’t found conclusive proof of what the IPCC actually used. I have asked people commenting on the RF formulas to provide citations in this respect, but so far no joy.

I would like this information to get a better understanding of what they use, not what they may have used. A few examples:

1) Is something like RF = a Ln (CO2/CO2o) in the forecasts? If so,

a) Is that the exact form, or ???

b) Which models are multi-dimensional, and especially which have an independent variable in the vertical/altitude (or, in my parlance, radial) direction?

2) If there are GCM’s with an active radial dimension, do they use RF, and if so, which one, and are those density adjusted, etc.?

… I hope that does not sound too cheeky, and any thoughts, or pointing in the right direction would be appreciated.

DrOli: Please pick narrower topics for comments. You appear to be confused about some aspects of climate models and radiative forcing, which are two very different things.

Radiative forcing is a system for calculating how much various phenomena will INITIALLY perturb the balance between incoming and outgoing radiation. The simplest forcing is to increase (or decrease) solar radiation, for example by 1% or 2.4 W/m2 (post albedo, averaged over the earth). This imbalance will eventually cause the earth to warm until it radiates an additional 2.4 W/m2 to space – but radiative forcing is only concerned with the initial imbalance or “forcing” before the system responds to it. (The IPCC definition for radiative forcing lets the stratosphere respond and uses flux measured at the tropopause rather than the top of the atmosphere, but these are minor refinements that obscure the big picture.)

The radiative forcing for changes in CO2 and most other GHGs can be calculated from highly accurate absorption data measured in the lab. We have a good understanding of how absorption and emission vary with pressure, temperature and mixing with other gases. So, it is a relatively simple job (once you have all the needed absorption data) to calculate how much doubling CO2 will initially reduce the radiative flux leaving the earth – if you FIRST SPECIFY the composition (including water vapor and ozone) and temperature of the atmosphere at all altitudes. Myhre did radiative forcing calculations for a variety of scenarios: tropical, temperate, polar, winter, summer, clear sky and cloudy sky, and came up with the forcing averaged over the whole planet. In cloudy areas, upward radiation leaves the cloud tops, rather than the surface of the earth, so he also needed to specify cloud temperature and composition. That is where the 3.7 W/m2 figure for CO2 comes from. (Your formula RF = a Ln (CO2/CO2o) is a fit to the output from such calculations when CO2 is varied within a practical range.) “Saturation” and overlapping absorbing species are handled properly, but water vapor is somewhat problematic since it forms dimers and makes clouds with widely different properties. The optical properties of particulates have also been studied in the lab, but these are complicated, non-homogeneous materials and the IPCC admits to great uncertainty in their radiative forcing.

Since you must specify the temperature of the atmosphere before calculating radiative transfer through the atmosphere and since radiative forcing is the immediate imbalance in the radiative flux to and from the planet, radiative forcing does NOT directly predict temperature change. One can calculate an equilibrium warming of 1 degC for a radiative forcing of 3.7 W/m2 (2XCO2) for a simple greybody, which is sometimes called the no-feedbacks climate sensitivity. Since the photons that escape to space are emitted from all altitudes of the atmosphere and about 10% come from the surface, this greybody calculation is a rough approximation. As the earth changes in response to this warming, the surface and atmosphere will change, producing feedbacks.
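The 1 degC "no-feedbacks" figure above follows from linearizing the Stefan-Boltzmann law F = σT⁴ at the earth's effective emission temperature, giving ΔT ≈ ΔF / (4σT³). A quick numerical check (my arithmetic, using the figures in the comment):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0     # effective emission temperature of the earth, K
DELTA_F = 3.7     # radiative forcing for doubled CO2, W/m^2

# Linearize F = sigma*T^4:  dF = 4*sigma*T^3 * dT  =>  dT = dF / (4*sigma*T^3)
no_feedback_sensitivity = DELTA_F / (4 * SIGMA * T_EFF**3)
print(round(no_feedback_sensitivity, 2))  # 0.98, i.e. about 1 degC
```

As the comment notes, this greybody calculation is only a rough approximation, since the escaping photons are emitted from many altitudes rather than a single effective surface.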

AOGCMs calculate radiative fluxes, air and water flow (and coupling between the two at the surface), daily and seasonal changes in incoming solar radiation, cloud formation, precipitation, and ? for an earth broken up into grid cells. The initial temperature and composition of the air and water in each grid cell are specified, but then evolve with time. Radiative forcing is NOT input into models, but the absorption data used to calculate radiative forcing are also used to calculate radiative fluxes in the model. Climate sensitivity and feedbacks are not directly input into the model either; they are calculated from model output. Radiative forcing is concerned only with radiative imbalance, but climate models convert any imbalance in heat flux (radiative or convective, sensible or latent) into a temperature change.

GCMs have been used to reproduce the historical temperature record WHEN SUPPLIED with estimates of historical levels of GHGs, anthropogenic and volcanic aerosols and solar output. The models didn’t “predict” volcanoes; they were told how much aerosol various eruptions added to the atmosphere on what dates, and they calculated the transient cooling. They do the same with the increase of CO2 in the atmosphere. (Future warming projected by models is caused by a projected decrease in aerosols as well as the increase in CO2.)

In 1991, Lorenz wrote a prophetic paper (“Chaos, spontaneous climatic variations and detection of the greenhouse effect”) discussing the future use of climate models in detection and attribution of climate change despite the complications created by chaos and persistence. Lorenz believed that such models could be used to detect GHG-mediated warming (and presumably also project future warming) under the right conditions. He certainly understood the problem, including sensitivity to initial conditions (which can be handled by using a variety of different starting conditions). I don’t think it makes much sense to argue with Lorenz on this subject, but read his short paper before you do.

However, climate models do contain one to two dozen parameters that control cloud formation, precipitation, turbulent flow and other processes that occur on scales far too small to be calculated for large grid cells from fundamental principles. Lorenz clearly explained that knowledge of historical trends MUST NOT be used to refine the parameters used by models, but it is obvious that the developers of the IPCC’s models have (intentionally or unintentionally) tuned these parameters so that the models fit the historical trend: models with high climate sensitivity have high sensitivity to aerosols, while the models with low climate sensitivity have lower sensitivity to aerosols. IMO, every range the IPCC reports for their models is statistically meaningless for this reason. In Section 10.1 of AR4 WG1, the IPCC says basically the same thing, except that they call statistical interpretation of the range “problematic”.

Modelers have been experimenting with ensembles of simplified models (“perturbed physics ensembles”) where the model parameters are randomly chosen from within the range established by (non-modeling) experiments. These ensembles – which haven’t been tuned to the historical record – exhibit a much wider range of climate sensitivity and past warming than the IPCC’s models. See Stainforth (2005). Furthermore, no one set of parameters has been found to perform better at predicting CURRENT observations of temperature, TOA flux, and precipitation, so the process by which the IPCC’s more complicated models are “tuned” appears to be completely arbitrary. So the range of projections from the IPCC’s models appears to significantly underestimate the full range of possible future climates that are compatible with known physics and chemistry. The IPCC’s likely range for climate sensitivity (1.5–4.5 degC, with 1.0 degC being possible) is a much better estimate of the uncertainty of future projections than model output. So warming could be 1/3 to double the IPCC’s central projections for each emission scenario, not the range they show on their graphs. Such a wide range would be useless for policymakers, so the IPCC shows ranges they admit are “problematic”. If the range of projected warming were valid, the IPCC would be able to report a much narrower range for climate sensitivity.

AOGCMs are similar to the models used to predict the weather, but the latter deal mostly with the atmosphere for short periods of time. The accuracy of weather forecasts has been demonstrated by statistical analysis of their predictions, but this is impossible for climate models. The predictions of climate models are difficult to test in a few decades, but they have clearly over-predicted warming over the last two decades. Different parameters would have allowed them to do better. However, it isn’t clear that any model is capable of producing long-term variability (presumably associated with ocean currents) like the PDO, AMO, and alternating warm and cold periods (LIA, MWP, etc.). It is not clear what the apparent failure of models to reproduce decadal variability like the PDO, and even short-term variability like the Madden-Julian oscillation, means about the accuracy of their central projections of warming for the next century, but it suggests that their over-estimate of recent warming doesn’t mean as much as skeptics believe.

Stainforth et al, Nature 433, 403-406 (2005). A non-paywalled pdf can be found with a Google search for the title: “Uncertainty in predictions of the climate response to rising levels of greenhouse gases”, but I can’t link it properly.

Cheers for the thoughtful response, but I fear something has gone horribly wrong in the communication.

1) I fully understand RF, and I am saying (mostly) what you are saying (not what you say I am saying). Notably, I can point to the lines of code in NCAR and their maths, etc., to demonstrate that my comments on GCM’s are based on what they actually use. It is along the lines you say: in those models absorption etc. is calculated on the fly by integration, based in part on HITRAN and similar databases.

However, if you had looked at the code or their equations/methods, you would have noticed that even on the rare occasions when they use their “best” estimates, such as via LBL’s etc., those are still approximations. Since even small integration errors cause a big difference in the transport equations, this rather emphasises the weakness of even the best incarnations of the models, even for the things they include (not to mention the many things they do not).

My comments on RF = a Ln(CO2/CO2o) are not what you say. Rather, because there are those who claim that this “model of a model” can be used to “prove” that CO2 is responsible for “everything” since 1850 (some of your bloggers, for example), I wonder why those people do not at least consider that, if they are to use that approach, why not something like RF = a Ln(CO2/CO2o) + b Ln(H2O/H2Oo) … etc.?

Indeed, I had even solicited input in at least one of my submissions, just in case there were models with RF in them that I may not be aware of.

I am not sure how or where my comments could have been interpreted as the “opposite”, but if you have time, I would like to know.

As for volcanoes, I think you may have completely missed the crucial point. I say exactly what you say, but observe something further. In my comments, I specifically say that in places like Fig 8.1 they are “manually intervening”, or more correctly “cheating”.

That is, Fig 8.1 is used by the IPCC to imply that they have “verified” their models against real data (i.e. 1900–2000).

That is the basis on which they then pretend that they can use the same models to predict, say, 2000–2100.

… but surely that must be rubbish, and deliberate rubbish. Do you not see that?

If you had to cheat to get agreement with the volcanoes for 1900–2000, then what in blazes are you going to “manually intervene” with for 2000–2100? I trust we agree that they cannot possibly know anything about volcanoes in the future … and that’s that. There is no way around it.

So either they fess up that they put information into the back-test that they cannot possibly have for forecasting, and then come clean that the models can’t be used for forecasting (rather than putting the entire population of the planet into much grief over a dishonesty), or they have to change Fig 8.1 etc. to show the non-cheating version.

… you can’t have it both ways (at least not in proper science).

Re Chaos etc., I won’t go into a detailed explanation here, save for two points:

I am afraid your comments display a deep misunderstanding of what it means to model chaotic systems. You can perturb your models all you want … it is entirely meaningless unless you can show that your models (without cheating) can replicate the character of the real world. Perturbing a model with the wrong fractal dimension etc. will have nothing in common with reality.

Moreover, not having even bothered to test for real-world non-linearities (in the fractal sense), there is NO way to know what degree of SIC and SP (sensitivity to parameters) actually exists. That alone will almost surely make long-term predictions meaningless.

Finally, you could have saved yourself much grief, and your credibility, by asking some questions first. For example, my area of expertise is mathematical modelling, PDE’s and non-linear dynamics, and I have been doing this for decades. Not only have I created many models and a colossal software arsenal, but I have fractal-based models that actually make money in the real world … not some academic guessing. I am quite certain I know quite a bit on the subject, certainly more than the heavy bit of stomping you provided assumes, and certainly considerably more than somebody who may have read a paper or a book somewhere, sometime.

Given your heavy-handed, assuming approach, I trust you would be willing to put money where your mouth is … let’s see who can get a chaotic model verified against the real world, and make some money … I am here, bring it.

PS: I was not aware of any rules relating to the length of posts. I responded to several posts in a single go, as that seemed more efficient. I apologise if I have breached some rules or upset anyone.

DrOli: One of your posts suggested that radiative forcing was a component of AOGCMs. Radiative transfer calculations are used in models, but the concept of a 3.7 W/m2 radiative forcing or the ln(C/C0) formula is not used. AOGCMs use dramatically simplified schemes (broadband rather than line-by-line) for calculating radiative transfer as fast as possible. I once read that the first inter-model comparison project apparently showed that different simplified RT methods gave significantly different results in some situations, and everyone had to go back to line-by-line methods to find out who was right and wrong. These problems supposedly have been resolved.

I agree with you when you say that reproduction of the historical temperature record by climate models – using historical forcings as input – proves nothing when it is obvious that knowledge of the historical record was used in refining the models. High climate sensitivity is always found with high sensitivity to aerosols and low with low. Ask the question: What would happen to the funding for a climate model that didn’t reproduce the historical record? Work with perturbed physics ensembles suggests that one-by-one “tuning” of the parameters in a climate model was unlikely to have led to a global optimum set of parameters.

I didn’t intend to claim any expertise in chaos; I referred you to a paper by Lorenz about the role of chaos in using climate models for detection and attribution of anthropogenic climate change. His conclusions are very clear – models developed to reproduce the historical record aren’t suitable. Perturbed physics ensembles don’t have this problem, but the range of their output makes attribution impossible. The Stainforth paper illustrates the range of uncertainty associated with randomly varying only about 1/3 of the parameters in a model. From my perspective, the IPCC is hiding the uncertainty associated with “parameter uncertainty”, and this appears to be far greater than the uncertainty arising from initialization conditions.

Cheers for all that, but … ugh … again you make claims about my statements that are simply not true … I have repeatedly emphasised, including to you specifically, that I am aware that RF=Ln() is NOT used in the GCM’s. Indeed, I told you I have a detailed understanding of LBL and other estimations in the GCM’s. Not sure why you keep suggesting otherwise.

In any case, after this post, I will simply not respond to any of your claims relying on distortions of my statements and you will be free to fabricate whatever you like.

As far as LBL’s are concerned, your “leap of faith” that, as you put it, “These problems supposedly have been resolved” is simply not true, nor could it be true. I refer you to the detailed modelling and usage docs, e.g. from NCAR et al. Some observations include:

a) Not all models have moved to LBL’s.
b) The ones that have do so sporadically.
c) Even when used, they do not cover the entire spectrum, and the resolution of integration is not sufficiently high. In addition, there is some question about integrating atmospheric spectra based on idealised lab-derived databases … but that’s for another day.
… these are decisions made by the modellers due to the enormous computational cost of including even “rough” LBL calcs.

… it’s kind of a trade-off: how many grids/slices/orders of basis functions, etc., do we have? Each of those makes a huge difference to the amount of CPU time, and so they juggle those to be able to obtain results without requiring their grandchildren to assess them.

The immediate point (as has been repeated ad absurdum) is that those choices introduce approximation errors. Small errors in LBL calcs still lead to substantial errors in the transport calcs.

Indeed, some bloggers, and some on this site, use EdGCM etc., demonstrating for example EdGCM’s forecast for CO2 doubling. Of course, those results are rubbish, but they are produced with the appearance of sophisticated machinery, and that alone lends false credibility to the fear mongering relying on the EdGCM (etc.) CO2/temp charts.

Now, if you wish to do modelling as Pekka has described, where all you care about is the forecast being ±100 C … then fine.

HOWEVER, if you can’t show (and really show) that your models have at least, say, ±0.5 C resolution 100 years out (especially if, according to the IPCC, +2 C is already doom & gloom)

… THEN YOU CANNOT MAKE ANY STATEMENT ABOUT FORCING HUMANS to undertake the greatest upheaval in history.

… do you understand the different contexts?

If science “fun & frolic” is all that matters, then go nuts … if screwing over the population matters, then models need much, much more reliability … indeed, one can show the near certainty that models may not ever be able to accomplish that (e.g. do you know anyone who can forecast the financial markets, or lava lamps, or Hele-Shaw cells, etc., and will any of that ever be possible? Of course people still try, but no one expects/accepts those attempts being used to screw over billions of people).

… some real-world processes defy predictability as a fundamental property of the space-time continuum … that is fact.

… on several earlier occasions I have mentioned that there have been NO TESTS (that I know of) to show whether the climate has any chance of predictability … shouldn’t we prove that first?

Incidentally, it was responsible of you to mention your lack of understanding of chaos, and your statements certainly support that. As a bit of friendly advice, it would help your case to stop relying on Lorenz’s paper in general, though you had correctly sussed the main thrust there, and crucially, it is the main thrust here:

models can’t be used to predict the climate.

HOWEVER, your comments like “Perturbed physics ensembles don’t have this problem” imply a very deep and catastrophic misunderstanding of what modelling means.

The “fine tuning” you speak of is, for all practical purposes, a variation on “curve fitting” (it’s a fancier version relying on parametrising PDE’s, but it amounts to the same thing).

That comment is pretty much an “own goal” for you. If all you can do is “fit a curve” to historical data, YOU DO NOT HAVE A FORECASTING MODEL.
… all you have is a curve fit.

Prove this for yourself with a simple example. Take any time series, say, from the financial markets. Apply as fancy a curve (model) fit as you like to the entire history … the curve fit may end up looking “extremely clever” (appear “accurate”).

… now, take, say, 100,000 of your dollars (or your life savings), and make a trade based on that (curve fit) model, and insist that everybody on the face of the planet must take their life savings and bet it on that trade also.

Do you think that would be a “safe” or “good” use of the model?

The point is, curve fitting, regardless of how good it looks over some known history, in NO WAY says anything at all about the forecasting reliability of the model.
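The experiment described above is easy to run. Here is a sketch, where the random-walk “price” series, the polynomial degree, and the train/test split are all arbitrary illustrative choices:

```python
# A sketch of the "curve fit is not a forecast" experiment: fit a flexible
# curve to the history of a random-walk "price" series, then see how it
# performs out of sample. Series, degree, and split are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=200))   # random-walk "market history"
x = np.arange(200) / 200.0                 # normalized time axis

# "The model": a degree-15 polynomial fit to the first 150 points only
coeffs = np.polyfit(x[:150], prices[:150], deg=15)
fit = np.polyval(coeffs, x)

in_sample_rmse = np.sqrt(np.mean((fit[:150] - prices[:150]) ** 2))
out_sample_rmse = np.sqrt(np.mean((fit[150:] - prices[150:]) ** 2))

# The fit looks "clever" over the known history; the extrapolation is far worse.
print(in_sample_rmse, out_sample_rmse)
```

Rerunning with any other seed or any other flexible fit gives the same qualitative result: in-sample accuracy says nothing about out-of-sample reliability.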

INDEED, and we come back to my very first point: the “curve fitting” of the GCMs to the last 100 years also includes “curve fitting”, well, “cheating”, in regard to the volcanoes. If volcanoes are incorporated into the curve fits, and since volcanoes are not predictable and are not correctly incorporated into the “real” forecasts (to 2100), and since there was 1 – 2C of volcanic cooling in the last 100 years … your forecast is immediately screwed … at least in the sense of enslaving the planet.

So the volcanoes alone are sufficient to topple the IPCC “model based dictatorship”.

… again, if the interest is pure science, then fine … but NOT if the interest is to be used to enslave the planet.

Also, if all you did was “curve fit”, then you could use almost any model as your model, and just keep fiddling with the parameters until there was a fit. As such, a completely ridiculous model could be fit to the data, and one that could be proven to have nothing to do with climate as such.

… again, crushing the notion of reliable forecasting.

Why not just use a dartboard that has values on it matching the range of temperatures seen in the past?

Why not use trend/statistical analysis, as some say? OK, but then where do you take your initial conditions (IC) from? If it’s 100 years ago you get one answer; if it’s 18,000 years ago you may get a kind of opposite answer. Then we must also accept that the forecast resolution is, say, +/- 10C or worse.

… fine for science “fun & frolic”, but not suitable for planet usurping.

Finally, your continued promotion of ensemble averaging is worrying. If I have two models that are known to be rubbish, but I am allowed to average them with “fine tuning”, then I can prove anything I want via the “fine tuning”, regardless of reality. BUT, and much worse, I can then “spin” the story to say something like “many independent efforts/models yield something similar, therefore it must be believable” … sound familiar?

… complete and utter rubbish of the sort solicitors use to trick people, courts, and judges.

In fact, in a highly charged controversy in which the planet’s fate is at stake, even the hint of solicitor trickery should not and must not be allowed.

Once again, to be absolutely clear, I don’t care what research you wish to do, I don’t care what models you arrive at, I don’t care how good you think your models are … at least not until you insist that those models allow you to enslave me or others.

It is an absolute certainty that the IPCC models are, in this respect, rubbish, and that they are relying on tricks and dishonesty in an attempt to take control over global policy.

… that is a serious problem, and, in my view, it requires/obliges responsible scientists to make clear what (if any) science can actually be used for planet-enslaving purposes.

DrOli: We can debate the reliability of the RT calculations done by GCMs without facts, but we won’t get anywhere. There have been several efforts to compare RT calculations made by GCMs to more sophisticated LBL methods and to observations. It is easy to find accessible papers on this subject, but difficult to understand the significance of the disagreements, which grow from about 1% in clear skies to around 10% in more challenging skies with high humidity, aerosols, or clouds. A 1% error is about the same size as the radiative forcing for 2XCO2. However, we are interested in calculating the change in temperature that will be caused by 2XCO2, not the exact mean global temperature that 600 ppm of CO2 will produce. The existence of these intermodel comparison projects suggests that mistakes and weaknesses in RT have been and are being exposed and corrected, and that broadband methods are accurate enough not to add to the large uncertainty already inherent in a GCM.

One of the references I found was a blog post from Judith Curry on the reliability of RT calculation by GCMs, with linked papers. Since she frequently criticizes over-confidence by climate scientists and since she has personal experience in the field, I currently accept her confidence in the RT calculations done by climate models. Most importantly, she says the RT change for doubled CO2 will be reliable. Her Mlawer ref says the radiative forcing for 2XCO2 calculated with model and LBL RT methods agree within 0.24 W/m2.

DrOli: The following passages from your 10/24 comment caused me to assume – rightly or wrongly – that you may have thought radiative forcing, RF, was being used in models to make forecasts. Perhaps you used the term RF instead of RT, radiative transfer. Since RF is an immediate response without warming and feedbacks and since forecasting requires both, I thought some clarification might be needed. No offense was intended.

“RF: Yes I would be grateful for additional detail, but I think I may not have made my earlier comments sufficiently clear. I have NOT “concluded that the IPCC does not understand radiative forcing, or has created something useless …” as you suggest. Rather, I have questions about what they ACTUALLY have included, and how exactly they use those results to arrive at their forecasts.”

“1) Is something like RF = a Ln (CO2/CO2o) in the forecasts?”

“2) If there are GCMs with an active radial dimension, do they use RF, and if so, which one, and if so, are those density adjusted, etc?”

Chaos
I understand the point about initial conditions. Some simple chaotic models will be “well behaved” with one parameter in one place, and yet once you change that one parameter by a tiny amount then slight changes in initial conditions cause massive changes in results.
So I believe I am agreeing with you about the problems of “almost intransitive systems” – as Lorenz would call them.
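The “tiny change in initial conditions, massive change in results” behavior described above can be seen in a few lines with the classic Lorenz-63 system. This is a sketch only, not any climate model; the integrator, step size, and perturbation size are arbitrary choices:

```python
# Sensitivity to initial conditions in the Lorenz-63 system: two
# trajectories starting 1e-9 apart end up macroscopically different.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One simple Euler step of the Lorenz-63 equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # perturb x by one part in a billion
for _ in range(5000):                 # integrate for 50 "time units"
    a, b = lorenz_step(a), lorenz_step(b)

separation = np.linalg.norm(a - b)
print(separation)  # of order the attractor size, not 1e-9
```

The separation grows roughly exponentially until it saturates at the size of the attractor, which is why pointwise trajectory prediction fails even when the equations are known exactly.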

The Schwarzschild equation considers absorption and emission at each height in the atmosphere, at each wavelength, for each radiatively-active gas. The plane-parallel assumption allows for inclusion of each solid angle of radiation.

However, this equation requires integration through each gas, at each wavenumber (at resolutions of maybe 0.1 or 1 cm-1), and at many heights in the atmosphere. This is not an efficient use of the computing power of the computers running GCMs, so approximations of these equations are used instead. There is much literature comparing band models and the like, which get “close” results without doing the entire Schwarzschild equation integration at hundreds or thousands of grid points multiple times per day.
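For what it’s worth, the layer-by-layer recursion at a single wavenumber can be sketched in a few lines. This is purely illustrative: the layer temperatures, the optical depths, and the schematic (unscaled) Planck function are made-up assumptions, not values from any GCM:

```python
# Minimal sketch of the Schwarzschild solution at ONE wavenumber: each
# plane-parallel layer attenuates the incoming radiance and adds its own
# thermal emission. Real codes repeat this over thousands of wavenumbers,
# many angles, and every grid cell -- hence the band-model approximations.
import math

C2 = 1.4388  # second radiation constant, cm K

def planck(nu, T):
    """Planck function at wavenumber nu (cm^-1); absolute scale omitted
    (schematic units), only relative magnitudes matter for this sketch."""
    return nu**3 / math.expm1(C2 * nu / T)

def upward_radiance(nu, layer_T, layer_tau, I_surface):
    """March upward: I_out = I_in * exp(-tau) + B(T) * (1 - exp(-tau))."""
    I = I_surface
    for T, tau in zip(layer_T, layer_tau):
        trans = math.exp(-tau)
        I = I * trans + planck(nu, T) * (1.0 - trans)
    return I

# Illustrative 5-layer atmosphere cooling with height, near the 667 cm^-1
# CO2 band, above a 288 K surface (all numbers invented for the example).
Ts = [280.0, 260.0, 240.0, 220.0, 200.0]
taus = [0.8, 0.6, 0.4, 0.3, 0.2]
nu = 667.0
toa = upward_radiance(nu, Ts, taus, planck(nu, 288.0))
print(toa)
```

Because each layer blends the radiance toward its own (colder) Planck emission, the top-of-atmosphere value lands between the surface and the coldest layer, which is the qualitative behavior the full integration produces.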

As in finite element analysis, many sensitivity studies are conducted to determine the effect of the various approximations: grid sizes, number of atmospheric levels, etc.

So back to the questions asked:

1) Is something like RF = a Ln (CO2/CO2o) in the forecasts? If so,

a) Is that the exact form, or ???

No and no.

There is also quite a bit of literature on the usefulness of a general “radiative forcing value”.
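For reference, the expression being asked about is the widely quoted simplified fit to detailed RT results, with a ≈ 5.35 W/m2 (Myhre et al. 1998). As the answer above notes, it is a summary diagnostic, not something a GCM integrates internally:

```python
# The commonly cited simplified CO2 forcing expression,
# RF = a * ln(C / C0), with a ~= 5.35 W/m2 (Myhre et al. 1998).
# A summary fit to detailed RT calculations, not a GCM internal.
import math

def simplified_co2_forcing(c, c0, a=5.35):
    """Radiative forcing in W/m2 for CO2 changing from c0 to c (ppm)."""
    return a * math.log(c / c0)

rf_2x = simplified_co2_forcing(560.0, 280.0)
print(rf_2x)  # doubling CO2 gives ~3.7 W/m2
```

The logarithmic form is why each successive doubling of CO2 adds roughly the same forcing rather than a proportional one.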

Again, I think we are mostly in agreement. You seem to have at least some understanding of chaos.

However, what I am very much concerned about is that, regardless of what analysis/model properties one speaks of, if it is not tied to reality then, especially with non-linear dynamics, you are almost surely screwed.

Crucially, even if you get the model equations exactly right (fat chance), you are almost surely still screwed due to all the special sensitivities (and the potential for a phase space dense in unstable attractors, etc.).

… is that not something that should be established in fact/data?

Here is one (perhaps overly dramatic) approach. Put the modellers on “notice”. If the models are outside of some useful resolution, then they don’t get paid. If they cheat, they go to prison …. that’s actually what happens in my industry (no kidding). I wonder how much that would alter their efforts, their models, and especially their verifications?

Some years ago, I wrote a series of three short notes on chaos and getting it to work in the real world. It was for people in the financial markets, but I am sure you would immediately see the parallels. Let me know if you would like to have a go (it would have to be via some direct exchange, since those notes aren’t permitted for willy-nilly distribution/public domain).

As for the comments on stability etc. analysis of FE simulators (I am not fussed, but strictly speaking they use FD/SM simulators, though that’s close enough): numerical stability, convergence etc. are not at all the same thing as fundamentally unstable phenomena. Lava lamps are unstable, whether you model them numerically or not.

However, even if we restrict the discussion to the simulator’s properties (c.f. the model’s), then, and there is no reason why you should know this, you could still not apply any of the standard analyses … because the standard analyses are only proven for linear PDEs, and break down on (even “ordinary” non-linear) PDEs, never mind chaotic ones.

I think we have converged on the RF story, though interestingly, the people I was wishing to hear from are the ones that like that approach, to get their take … so far only the “anti’s” have posted.

Lorenz observed that his atmospheric GCM-type models had chaotic behavior. He proceeded to study simpler models with chaos and learned interesting things from that, but it’s not possible in practice to transfer quantitative understanding of chaos from the simple models to large atmospheric models.

How the chaotic features manifest themselves in the GCMs can be studied by experimenting with a large number of runs with different initial conditions, and by doing very long runs with the boundary conditions maintained unchanged. Such experiments tell how variable the results will be in practice. Many “spaghetti” graphs included in IPCC reports tell about that. They tell that the chaotic variability in the model runs is not so large that it alone would make the results worthless.

We do get results from the models, but that does not prove that the results describe the real Earth system well enough.

A second question is the extent of chaotic variability in the Earth system. What we know about the past climates sets some limits on that. The climate has varied within rather tight limits over the whole Holocene. The average temperature has stayed in a range a few degrees wide. The more precisely known variability of the most recent past has also been limited in amplitude. There has obviously been enough variability to make attribution of the recent changes difficult, but not more than that.

If the warming turns out to be as strong as the present mainstream estimates consider most likely, the warming will in a few decades reach a level where attribution is straightforward. That level deviates from the present and recent temperatures significantly more than the climate has varied since the beginning of the Holocene according to all estimates. That level also deviates from the present much more than the chaotic variability of the GCMs has been observed to be.

Based on all the above we can conclude that the chaoticity of both the real climate and of the models seems to be so limited in range that it does not make meaningful climate projections using GCM-type models impossible. It makes developing and testing models more difficult than it would be otherwise, but not impossible.

The existing climate models have obvious weaknesses. They are not all related to chaotic behavior; the other weaknesses may actually be worse than those related to chaotic phenomena. I have in mind mainly the inadequate modeling of cloud formation and of oceans. (Chaos is probably involved in both, but it is not the only problem in them.)

Thank you for your thoughts. Unfortunately, this round of exchanges exactly matches the pattern of the last round. You seem to like to pull things out of thin air: no proofs, no equations, no data. Some of the statements you make in connection with chaos, as before, and again I am sorry to say, are completely devoid of reality, and in some cases gibberish.

I am again forming the view that you would be happy to decide the world’s fate with completely or nearly unverified models.

I just don’t have the hundreds of hours it would take to explain these things to you, even with an open mind.

So, like last time … please feel free to get the last word in, my time on this exchange has expired.

I believe I know really well what you are talking about, when you discuss chaos. There’s hardly anything to gain there.

You should also have noticed I have great reservations on the present models, and that SoD, DeWitt, and Frank have made comments along this same line.

It’s clear in this discussion, as it has been in numerous other discussions, that people do not understand what others are trying to tell them. Each of us should remember that others cannot tell all they know and that every comment is built on some hidden assumptions. The author may think that others realize those assumptions, but that’s seldom totally true.

As far as I have seen you have not discussed the properties of attractors at all. Without that it’s not possible to understand the proposed rationale of the use of GCMs in climate projections. Attractors do have properties, and these properties are affected by model parameters. CO2 concentration has an effect through that. I don’t claim that considering attractors solves the problem well, but that’s the rationale that the modelers propose. If you want to argue that the approach does not work, you must analyze that approach. Criticizing model use based on failure of another approach does not prove anything when that other approach is not used. (I use the term “attractor” as if it were equally precisely definable for climate models as it is for simple examples of chaos. This is not strictly true, but the idea is applicable.)

Another problem is with your comments on volcanoes. You have considered their influence fatal for modeling, but you have done that based on a hugely exaggerated estimate of their influence on the climate. Some volcanoes have a rather strong effect over a couple of years, but their effect is much smaller in the long term than what you have written. Therefore the inability to predict volcanic eruptions is not a serious problem.

I understand that fig. 2.A is the result of a line-by-line calculation. But is it possible to put into words why 1 ppmv more water vapor has so much more impact just above the tropopause than it has some km further up, where the temperature is even higher? This “very non-linear effect” comes unexpectedly.

The atmospheric pressure drops by about 12% with every km of altitude, or equivalently is halved by 5.5 km of extra altitude. As the concentration was given relative to the overall density, we can see that the effect is roughly proportional to the amount of water vapor above the altitude of the maximal effect.
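The two quoted numbers are consistent with a simple isothermal scale-height picture, p(z) = p0·exp(−z/H); H ≈ 8 km is an assumed round value for the lower stratosphere:

```python
# Checking the quoted figures with an isothermal scale height H ~= 8 km:
# p(z) = p0 * exp(-z/H) gives ~12% pressure loss per km of altitude
# and a halving of pressure every ~5.5 km (= H * ln 2).
import math

H = 8.0  # km, assumed round scale height for the lower stratosphere

drop_per_km = 1.0 - math.exp(-1.0 / H)  # fractional loss over 1 km
halving_distance = H * math.log(2.0)    # km needed to halve the pressure

print(drop_per_km, halving_distance)
```

So the "12% per km" and "halved every 5.5 km" statements are the same fact expressed two ways.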

Thank you for the prompt answer, Pekka!
Now I have to figure out why water vapor below 13 km (=Tropopause?) should have less of an impact than the maximum at 13 km.
Could it be that there is so much water vapor that 1ppmv doesn’t make much difference?

That would presume, I think, that hurricanes would increase with increasing temperature. While some have published arguments that they would, the subject is controversial, and so far actual hurricane frequency has not increased globally.

That would tend to invalidate models that predict hurricanes increase with temperature. What models are left standing? Could the energy per hurricane increase with temperature, leaving the number of hurricanes relatively stable?

Increasing destructiveness of tropical cyclones over the past 30 years

Kerry Emanuel

Program in Atmospheres, Oceans, and Climate, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Correspondence: emanuel@texmex.mit.edu

Theory and modelling predict that hurricane intensity should increase with increasing global mean temperatures, but work on the detection of trends in hurricane activity has focused mostly on their frequency and shows no trend. Here I define an index of the potential destructiveness of hurricanes based on the total dissipation of power, integrated over the lifetime of the cyclone, and show that this index has increased markedly since the mid-1970s. This trend is due to both longer storm lifetimes and greater storm intensities. I find that the record of net hurricane power dissipation is highly correlated with tropical sea surface temperature, reflecting well-documented climate signals, including multi-decadal oscillations in the North Atlantic and North Pacific, and global warming. My results suggest that future warming may lead to an upward trend in tropical cyclone destructive potential, and—taking into account an increasing coastal population—a substantial increase in hurricane-related losses in the twenty-first century.

Second, I am doubtful of the theoretical claim of increased temperature leading to more/stronger cyclones. It is not temperature that matters, it is temperature differences since the latter are needed to convert random thermal motion into organized motion (Second Law of Thermodynamics). And from what I have seen, global warming reduces temperature gradients.

Third, I do not believe a claimed trend based on 30 years of data. It is well established that the AMO has a big impact on Atlantic hurricanes and the AMO has a period of about 60 years. I suspect there are linked cycles in other basins. So to spot a genuine trend requires several times 60 years of data.

Confirmation bias is a natural hazard of human thought and can be found in all areas of science. It seems to me that the global warming field has more than the usual amount of confirmation bias. So I apply more than my usual (fairly high) level of skepticism to claims in this field.

Take it easy Mike. I’m not religious about this. It is, however, interesting to me how _easy_ it is to find supporting evidence to trigger your aversion to “confirmation bias”. Indeed, by raising the issue of “confirmation bias” you close off rational dialogue by presuming any supporting arguments, research or evidence is irrational. I’m not interested in that kind of “dialogue”.

I think you are over reacting, perhaps because my style may have come across as overly aggressive. I am merely pointing out why I am doubtful of your suggestion. I mentioned confirmation bias not to accuse you, but only to point out that one has to be very aware of that bias in reading the climate science literature. Finding a paper or two that supports an argument might be more due to people liking the result rather than solid evidence for the result.

For example, the bias of the field is such that increased storminess is a desirable result. But that result seems opposed by basic thermodynamics. So one should be very careful about accepting it.

Jabowery: If hurricanes were a major source of stratospheric water vapor, wouldn’t you expect more water vapor in the Northern Hemisphere stratosphere than the Southern? Do aerosols from tropical volcanos tend to remain more concentrated in the hemisphere from which they originated?

This link should provide a graph of stratospheric water vapor. As one might expect, the coldest – and therefore driest – air at the tropopause lies above the ITCZ. The dryness of the stratosphere parallels the dryness at the tropopause immediately below. It sure doesn’t look as if the tropics are a major source of water vapor. Brewer-Dobson circulation presumably spreads that drier air to the rest of the stratosphere.

The abstract of the paper you cite says:

“Using infrared satellite imagery, best-track data, and reanalysis data, tropical cyclones are shown to contain a disproportionate amount of the deepest convection in the tropics. Although tropical cyclones account for only 7% of the deep convection in the tropics, they account for about 15% of the deep convection with cloud-top temperatures below the monthly averaged tropopause temperature and 29% of the clouds that attain a cloud-top temperature 15 K BELOW the temperature of the tropopause. This suggests that tropical cyclones COULD play an important role in setting the humidity of the stratosphere.” [my emphasis]

Presumably colder means drier. Clouds 15 K colder than the typical tropopause temperature suggest drier air to me. Note the “could” in the abstract and the vague nature of the concluding paragraphs. Little appears certain.

You wrote: ” If hurricanes were a major source of stratospheric water vapor, wouldn’t you expect more water vapor in the Northern Hemisphere stratosphere than the Southern?”

Excellent point. From your figures, it does look like at any given altitude there is a bit more water vapor in the NH, but it is not much.

You wrote: “As one might expect, the coldest – and therefore driest – air at the tropopause lies above the ITCZ. The dryness of the stratosphere parallels the dryness at the tropopause immediately below. It sure doesn’t look as if the tropics are a major source of water vapor. Brewer-Dobson circulation presumably spreads that drier air to the rest of the stratosphere.”

I think you are misinterpreting the graphs. There are two sources of water vapor to the stratosphere. One is transport of water vapor from the troposphere. Air enters the stratosphere, mostly across the tropical tropopause, and spreads poleward. Once the air passes through the cold trap at the tropopause, the H2O mixing ratio should not change. So if that were the only source, the mixing ratio in the stratosphere would be pretty much uniform, somewhere between 3 and 4 ppm.
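The 3–4 ppm figure can be sanity-checked from the saturation mixing ratio at the cold trap. This is a sketch assuming a ~190 K tropical tropopause at ~100 hPa, with the saturation vapor pressure over ice from the Murphy & Koop (2005) formula:

```python
# Sanity check of the 3-4 ppm "cold trap" figure: saturation mixing ratio
# over ice at an assumed tropical tropopause of ~190 K and ~100 hPa.
import math

def esat_ice(T):
    """Saturation vapor pressure over ice (Pa), Murphy & Koop (2005)."""
    return math.exp(9.550426 - 5723.265 / T
                    + 3.53068 * math.log(T) - 0.00728332 * T)

T_tropopause = 190.0    # K, assumed typical tropical tropopause temperature
p_tropopause = 100e2    # Pa (100 hPa), assumed tropopause pressure

mixing_ratio = esat_ice(T_tropopause) / p_tropopause
print(mixing_ratio * 1e6)  # in ppmv, consistent with the 3-4 ppm figure
```

Air passing through a cold trap this cold simply cannot carry more water than this, which is why the transported component of stratospheric H2O is so uniform.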

The second source is the oxidation of hydrogen containing compounds. Mostly, that means methane. Each methane oxidized gives two water molecules. So as the air spreads away from the source region (upward and poleward) water vapor increases.

The ironic thing is I put the inventor of the atmospheric vortex engine in touch with Peter Thiel’s Breakout Labs for funding, and therefore have a vested interest in there being no contribution to global warming by tropical atmospheric vortexes. See slide 21 of this presentation for why the tropical oceans are the best place to locate AVEs that extract CO2 from the atmosphere and synthesize carbon neutral transportation fuel:

What kind of global warming risks would equatorial oceanic deployment (specifically eastern Pacific) of tens of terawatts of generating capacity by AVEs pose due to injection of water vapor into the lower stratosphere?

Exactly. If totally oxidized, 1.8 ppm of methane produces 3.5 ppm of water to supplement the 3-4 ppm that crosses the tropopause “cold trap”. However, if all stratospheric methane is oxidized, then it is NOT a well-mixed GHG. Hansen’s 2005 paper on forcing efficacy (the source of my Figure?) discusses production of stratospheric water vapor from methane. The starting assumption was that the scale height for methane is 50 km and peak methane oxidation is 1-10 mb over the equator. So, something is fishy here. If you want to produce a significant amount of water vapor from oxidation of methane, you need to consider depletion of methane and production of water vapor in each cell of the stratosphere as air is transported into, through, and out of the stratosphere. The latest versions of climate models added additional layers of stratosphere and now exhibit a QBO. Perhaps they have a process that properly represents what is happening in the stratosphere.
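The stoichiometry behind those numbers is just CH4 + 2 O2 → CO2 + 2 H2O, i.e. two water molecules per oxidized methane:

```python
# Stoichiometry behind "1.8 ppm of methane -> ~3.5 ppm of water":
# CH4 + 2 O2 -> CO2 + 2 H2O, so each oxidized CH4 yields two H2O.
ch4_ppm = 1.8  # roughly the current tropospheric methane mixing ratio

h2o_full = 2.0 * ch4_ppm    # full oxidation: 3.6 ppm (~the quoted 3.5)
h2o_half = 0.5 * h2o_full   # if only ~50% is oxidized in the stratosphere

print(h2o_full, h2o_half)
```

Which of these applies in practice depends on how much of the methane is actually oxidized before the air leaves the stratosphere, which is the point of contention in the comments that follow.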

Methane is pretty well mixed in the troposphere, but not in the lower stratosphere. See, for example, the figures at http://earthobservatory.nasa.gov/IOTD/view.php?id=5270. And CH4 certainly declines with altitude in the stratosphere; there is almost none in the mesosphere.

From the second figure above, it looks like perhaps a quarter to a half the methane that enters the stratosphere gets oxidized there. That would agree with 1 to 2 ppmv of water vapor from methane oxidation, which seem to be implied by the figures you posted.

The IPCC does not seem to actually give a definition of WMGHG, but it seems to mean well mixed in the troposphere, since many of the gases they list (methane, HCFCs) are certainly not well mixed in the stratosphere. Few things are.

Jabowery: I read about the ability to reproduce the QBO with climate models somewhere. Perhaps it was in AR5 WG1. Below is a 2016 reference. (I must confess to complete ignorance of gravity waves.)

Simulating the QBO in an Atmospheric General Circulation Model: Sensitivity to Resolved and Parameterized Forcing

James A. Anstey and John F. Scinocca
Canadian Centre for Climate Modelling and Analysis, University of Victoria, Victoria, British Columbia, Canada
Martin Keller
DOI: http://dx.doi.org/10.1175/JAS-D-15-0099.1

Abstract
The quasi-biennial oscillation (QBO) of tropical stratospheric zonal winds is simulated in an atmospheric general circulation model and its sensitivity to model parameters is explored. Vertical resolution in the lower tropical stratosphere finer than ≈1 km and sufficiently strong forcing by parameterized nonorographic gravity wave drag are both required for the model to exhibit a QBO-like oscillation. Coarser vertical resolution yields oscillations that are seasonally synchronized and driven mainly by gravity wave drag. As vertical resolution increases, wave forcing in the tropical lower stratosphere increases and seasonal synchronization is disrupted, allowing quasi-biennial periodicity to emerge. Seasonal synchronization could result from the form of wave dissipation assumed in the gravity wave parameterization, which allows downward influence by semiannual oscillation (SAO) winds, whereas dissipation of resolved waves is consistent with radiative damping and no downward influence. Parameterized wave drag is nevertheless required to generate a realistic QBO, effectively acting to amplify the relatively weaker mean-flow forcing by resolved waves.

Mike M: The link you provided for stratospheric methane came from a model. If you want to see real measurements throughout the atmosphere, the following site was very interesting. Since 10 mb contains 10X as many molecules as 1 mb, the more extensive oxidation higher in the stratosphere produces less water vapor and less forcing. The higher altitudes with 6 ppm of water vapor have too low a density to matter; most of the forcing is in the lower stratosphere.

The IPCC says the forcing from anthropogenic stratospheric methane oxidation to water vapor is 0.07 (0.02-0.12 W/m2) and direct methane forcing is 0.48 W/m2. A paper by Solomon claims that 1 ppm more water vapor everywhere above the tropopause produces a forcing of 0.24 W/m2 according to RT calculations. The anthropogenic increase in methane is about 1 ppm and if 50% of that were oxidized, that would be an additional 1 ppm of water vapor.

Solomon was trying to blame The Pause on an ambiguous drop in stratospheric water vapor. Other alarmists are coming up with scary scenarios of increasing stratospheric water vapor. In reality, our understanding of this problem, especially the overall dryness of the stratosphere, is fairly poor.

Modeling stratospheric water vapor is indeed a very difficult problem. But getting the contribution from methane oxidation should be rather straightforward. You can get it from observations via the difference between measured methane at any location and that in the troposphere. Or you can get it from models since it should be possible to model stratospheric methane with a high degree of confidence.

You wrote: “The IPCC says the forcing from anthropogenic stratospheric methane oxidation to water vapor is 0.07 (0.02-0.12 W/m2) and direct methane forcing is 0.48 W/m2. A paper by Solomon claims that 1 ppm more water vapor everywhere above the tropopause produces a forcing of 0.24 W/m2 according to RT calculations.”

OK, so the IPCC stratospheric indirect methane forcing corresponds to 0.3 ppm (0.1 to 0.5 ppm) of extra water vapor. That ought to be a fairly reliable number, so I am surprised by the wide range.
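That 0.3 ppm figure follows directly from dividing the two quoted numbers:

```python
# Back-of-envelope behind "0.07 W/m2 corresponds to ~0.3 ppm of water":
# divide the IPCC indirect-methane forcing by Solomon's sensitivity of
# ~0.24 W/m2 per ppm of stratospheric water vapor.
forcing_per_ppm = 0.24                 # W/m2 per ppm (Solomon's RT figure)
central, low, high = 0.07, 0.02, 0.12  # W/m2, IPCC indirect CH4 forcing

ppm_central = central / forcing_per_ppm                      # ~0.29 ppm
ppm_range = (low / forcing_per_ppm, high / forcing_per_ppm)  # ~0.08-0.5 ppm

print(ppm_central, ppm_range)
```

The wide 0.08–0.5 ppm spread implied by the IPCC range is exactly what seems surprising for a quantity that should be well constrained by methane observations.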

You wrote: “The anthropogenic increase in methane is about 1 ppm and if 50% of that were oxidized, that would be an additional 1 ppm of water vapor.”

I think your estimate is high for two reasons. One is that even if methane is reduced by 0.5 ppm in some places, it is reduced by less in other places. The other is that the large reductions are at high altitude; as you point out, those regions don’t contribute much to the column.

In the sounding graphs you linked to, it looks to me like at 49.3 mbar (above most of the stratosphere), the average methane oxidation is less than 20%; to get half of the methane oxidized you need to get up to something like 15 mbar. So I see no reason to doubt the IPCC number.

Mike wrote: “I think your estimate is high for two reasons … So I see no reason to doubt the IPCC [central estimate].”

I tried to come up with a consistent picture of what is happening and failed. (Hansen’s (2005) paper on the efficiency of methane forcing was very confusing. So was Solomon’s.) Part of the problem is that much information comes from modeling, rather than measurements. Part of the problem is that the rate of radical chain reactions depends greatly on the rate of the “chain termination” steps (when two dilute odd-electron species collide and end the chain). In the Antarctic, much of the chemistry occurs on the surface of ice (polar stratospheric clouds). These phenomena can be difficult to simulate and measure accurately in the laboratory. So, I SUSPECT that the existence of IPCC’s wide range for the forcing from stratospheric water vapor added by methane reflects controversy over the parameters that go into models of stratospheric chemistry.

The website with daily methane concentrations from satellites suggests that reliable observations of methane (but not water vapor) are now being made, and they should eventually refine and create agreement among the stratospheric chemistry modules in AOGCMs. Exactly when this happened or will happen isn’t clear. Aren’t most CMIP5 models still using forcing for aerosols widely recognized as being too negative? Loss of methane tells us how much anthropogenic water vapor exists, but the transport of “natural” water vapor into the stratosphere (and therefore its forcing) is still challenging. My reading suggested that the coldness of the tropical tropopause and the dryness of the stratosphere are still a challenge.

And, indeed, the positive feedback scenario is _directly_ supported in this paper:

http://onlinelibrary.wiley.com/doi/10.1029/2009GL037396/full
…
It is well known that increases in stratospheric water vapor lead to surface warming [de F Forster and Shine, 1999; Shindell, 2001]. It is also widely believed that global warming will lead to changes in the frequency and intensity of tropical cyclones [Emanuel, 1987, 2005; Knutson et al., 2008]. Therefore, the results presented here establish the possibility for a feedback between tropical cyclones and global climate.