Dr. Kiehl’s Paradox

Back in 2007, in a paper published in GRL entitled “Twentieth century climate model response and climate sensitivity”, Jeffrey Kiehl noted a curious paradox. All of the various climate models operated by different groups were able to do a reasonable job of emulating the historical surface temperature record. In fact, much is made of this agreement by people like the IPCC, who claim it shows that the models are valid, physically based representations of reality.

The paradox is that the models report greatly varying climate sensitivities, yet they all give approximately the same answer … what’s up with that? Here’s how Kiehl described it in his paper:

[4] One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5°C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

[5] The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?

How can that be? The models have widely varying sensitivities … but they all are able to replicate the historical temperatures? How is that possible?

Not to keep you in suspense, here’s the answer that Kiehl gives (emphasis mine):

It is found that the total anthropogenic forcing for a wide range of climate models differs by a factor of two and that the total forcing is inversely correlated to climate sensitivity.

This kinda makes sense, because if the total forcing is larger, you’ll have to shrink it more (smaller sensitivity) to end up with a temperature result that fits the historical record. However, Kiehl was not quite correct.

My own research in June of this year, reported in the post Climate Sensitivity Deconstructed, has shown that the critical factor is not the total forcing as Kiehl hypothesized. What I found was that the climate sensitivity of the models is emulated very accurately by a simple trend ratio: the trend of the model output divided by the trend of the forcing.

Note that Kiehl’s misidentification of the cause of the variations is understandable. First, the outputs of the models are all fairly similar to the historical temperature. This allowed Kiehl to ignore the model output, which simplifies the question but increases the inaccuracy. Second, the total forcing is an anomaly which starts at zero at the beginning of the historical reconstruction, so the total forcing is roughly proportional to the trend of the forcing. Again, however, this increases the inaccuracy. But as a first cut at solving the paradox, and for being the first person to write about it, I give high marks to Dr. Kiehl.

Now, I probably shouldn’t have been surprised by the fact that the sensitivity as calculated by the models is nothing more than the trend ratio. After all, the canonical equation of the prevailing climate paradigm is that forcing is directly related to temperature by the climate sensitivity (lambda). In particular, they say:

Change In Temperature (∆T) = Climate Sensitivity (lambda) times Change In Forcing (∆F), or in short,

∆T = lambda ∆F

But of course, that implies that

lambda = ∆T / ∆F

And the right hand term, on average, is nothing but the ratio of the trends.

So we see that once we’ve decided what forcing dataset the model will use, and decided what historical dataset the output is supposed to match, at that point the climate sensitivity is baked in. We don’t even need the model to calculate it. It will be the trend ratio—the trend of the historical temperature dataset divided by the trend of the forcing dataset. It has to be, by definition.
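
The arithmetic is easy to verify. Here’s a minimal sketch in Python with made-up numbers (the forcing ramp, the noise levels, and the value of lambda are all illustrative, not taken from any actual model or dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)

# Hypothetical forcing: a linear ramp plus noise (W/m^2)
forcing = 0.02 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)

# Suppose a model with this sensitivity tracks the temperature record
lambda_true = 0.8  # degC per W/m^2 (illustrative value only)
temperature = lambda_true * forcing + rng.normal(0.0, 0.03, years.size)

# Least-squares trend of each series (units per year)
temp_trend = np.polyfit(years, temperature, 1)[0]
forcing_trend = np.polyfit(years, forcing, 1)[0]

# The trend ratio recovers the sensitivity without running any model
print(temp_trend / forcing_trend)  # close to lambda_true
```

Swap in whatever forcing series and target record you like; the ratio of the fitted trends is the sensitivity a tuned model must end up with.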

This completely explains why, after years of better and better computer models, the models are able to hindcast the past in more detail and complexity … but they still don’t agree any better about the climate sensitivity.

The reason is that the climate sensitivity has nothing to do with the models, and everything to do with the trends of the inputs to the models (forcings) and outputs of the models (emulations of historical temperatures).

So to summarize, as Dr. Kiehl suspected, the variations in the climate sensitivity as reported by the models are due entirely to the differences in the trends of the forcings used by the various models as compared to the trends of their outputs.

Given all of that, I actually laughed out loud when I was perusing the latest United Nations Inter-Governmental Panel on Climate Change’s farrago of science, non-science, anti-science, and pseudo-science called the Fifth Assessment Report (AR5). Bear in mind that as the name implies, this is from a panel of governments, not a panel of scientists:

The model spread in equilibrium climate sensitivity ranges from 2.1°C to 4.7°C and is very similar to the assessment in the AR4. There is very high confidence that the primary factor contributing to the spread in equilibrium climate sensitivity continues to be the cloud feedback. This applies to both the modern climate and the last glacial maximum.

I laughed because crying is too depressing … they truly, truly don’t understand what they are doing. How can they have “very high confidence” (95%) that the cause is “cloud feedback”, when they admit they don’t even understand the effects of the clouds? Here’s what they say about the observations of clouds and their effects, much less the models of those observations:

• There is low confidence in an observed global-scale trend in drought or dryness (lack of rainfall), due to lack of direct observations, methodological uncertainties and choice and geographical inconsistencies in the trends. {2.6.2}

• There is low confidence that any reported long-term (centennial) changes in tropical cyclone characteristics are robust, after accounting for past changes in observing capabilities. {2.6.3}

I’ll tell you, I have “very low” confidence in their analysis of the confidence levels throughout the documents …

But in any case, no, dear Inter-Governmental folks, the spread in model sensitivity is not due to the admittedly poorly modeled effects of the clouds. In fact it has nothing to do with any of the inner workings of the models. Climate sensitivity is a function of the choice of forcings and desired output (historical temperature dataset), and not a lot else.

Given that level of lack of understanding on the part of the Inter-Governments, it’s gonna be a long uphill fight … but I got nothing better to do.

w.

PS—me, I think the whole concept of “climate sensitivity” is meaningless in the context of a naturally thermoregulated system such as the climate. In such a system, an increase in one area is counteracted by a decrease in another area or time frame. See my posts It’s Not About Feedback and Emergent Climate Phenomena for a discussion of these issues.

124 thoughts on “Dr. Kiehl’s Paradox”

Someone once said of Plato’s idea of a Republic, run by an unelected expert panel of philosophers, selected and trained for this panel from birth, who would decide how society was to be run and how everyone else was supposed to live:

‘He never seemed to ask how such a social arrangement would affect the minds of those within it’.

When you quote the IPCC, for example: “There is very high confidence that the primary factor contributing to the spread in equilibrium climate sensitivity continues to be the cloud feedback,” I would suggest that what they are really showing is how the organisational culture they are involved with has affected their minds, i.e. they really mean: ‘pretty much (95%) the only thing the organisation concerns itself with in trying to explain the spread in equilibrium climate sensitivity is cloud feedback’. Note the difference: the organisation they are involved with has become the source of what is true, or likely, or relevant to be studied, not what goes on in the real external world.

Willis, I think you and the IPCC are not really disagreeing on this. They say that net forcing is based on TSI, humidity, greenhouse gases, land use, clouds, etc. Then they assume that they have all the rest correct, so the trend difference in forcing is due to clouds; but that is still a trend difference.

Of course, since the different models get different climate sensitivities, they don’t have the other stuff right.

I think that the bigger point is that since they all get different climate sensitivities yet still match historical temperatures, it is clear that they are “tuned” and are not really derived from physical principles as claimed.

“… were able to do a reasonable job of emulating the historical surface temperature record”
Stating the obvious…
Given various models are so good at emulating the historical surface temperature, the models should be equally good at emulating future surface temperatures.

Why then did the models so badly miss actual global temperatures of past decade?

Possibly because the models were designed to produce future Alarming results needed by Global Warming Scammers, then tweaked to produce results matching historical temperatures to boost credibility.

Jknapp says: “… bigger point is that since they all get different climate sensitivities yet still match historical temperatures it is clear that they are “tuned” and are not really derived from physical principles as claimed.”

Bingo!
The models are incapable of predicting real world values, and instead were/are “tuned” to provide Alarmists values. Models they aren’t.

“[5] The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?”

Conversely to that 2-to-3-times spread: if the sensitivities can range from 4.5 down to 1.5 with no apparent sizeable effect on the outcome, why do so many still not see that it could just as easily go from 1.5 to zero in like manner (properly parameter-“tuned”, that is)? Is that not the real possibility? I think Spencer and Monckton are firmly planted in the camp that CO2 sensitivity from trace amounts is real, though they continue to mess around with just how much lower it is than in various other documents, but I see it as very close to zero, if not zero, at these concentrations, and think time will add credence to that.

They say “We’re sure (95% confident) it’s something we know little about”, and I say “something they’re sure of is wrong”. The problem is that because lambda is what it is, they are already sure what ∆T should be. It is in their charter and Mikey’s hockey stick. It is settled science, then, that ∆F is whatever is required to make lambda match observed. It’s sausage all the way down.

The critical variable here must be something which can, maybe, be affected politically. Tough to tax clouds, so CO2 has to be the driver. Which, in turn, means that sensitivity is the single most important number. So important that the SOP chose not to report it at all rather than dropping the value to less than scary levels.

Let’s face it, the whole paradigm that atmospheric CO2 affects temperature is arse-up. Temperature affects the significant part of the atmospheric CO2. Human-created CO2 (5 GtC/yr) compared to approximately 150 GtC/yr from natural sources has very little effect. The bulk of the CO2 in the atmosphere is related to the integral of temperature.

The net of all of this is that the so-called climate models are nothing but high-order curve-fit exercises. It is well known that high-order curve fits can often predict the reference data set rather well. However, when one starts to extrapolate in order to predict well outside of the reference data set, the “predictions” rather rapidly go off track. It is rather difficult to obtain accuracy of extrapolation beyond even a small fraction of the length of the reference data set. To extrapolate by a major fraction of the length of the reference data set is to guarantee a gross failure in prediction.
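
The point about extrapolation is easy to demonstrate with a toy example (entirely my own construction; the sine curve, noise level, and polynomial degree are arbitrary):

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)
x_fit = np.linspace(0, 10, 50)          # the "reference period"
y_fit = np.sin(x_fit) + rng.normal(0.0, 0.1, x_fit.size)

# Deliberately high-order fit to the reference data
p = Polynomial.fit(x_fit, y_fit, 12)

# In-sample, the fit tracks the underlying curve closely ...
in_sample = np.abs(p(x_fit) - np.sin(x_fit)).max()

# ... but extrapolating just past the reference span, it diverges
x_out = np.linspace(10.5, 12, 20)
out_sample = np.abs(p(x_out) - np.sin(x_out)).max()

print(in_sample, out_sample)
```

The fit hugs the reference data, then departs from the underlying curve almost immediately past the end of it.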

Reality doesn’t pay any attention to what the models say it must do. It simply does its thing in its own way. It always has and always will. This no matter what the size of the consensus is that says otherwise and without any regard to the certifications held by the members of the consensus.

As a layman trying to puzzle his way through this morass, I end up thinking every CO2 molecule is the ‘surface’ of that mass of CO2 in any given volume of the atmosphere. Therefore every CO2 molecule will be at the local air temperature. If the sun is shining the CO2 molecules can absorb energy in the 2.7 and 4.3 micron bands and warm the air a bit. But when there is no sun every CO2 molecule is back to local air temperature and thus, within the troposphere, will be too warm to absorb any energy from the surface in the 15 micron band. By the same argument, every CO2 molecule is warm enough to radiate in the 15 micron band. Some of this radiation is reaching the surface. But how can it warm that surface since it is already radiating in the 15 micron band?
Any tutorials on this subject for baffled laymen?

“PS—me, I think the whole concept of “climate sensitivity” is meaningless in the context of a naturally thermoregulated system such as the climate. In such a system, an increase in one area is counteracted by a decrease in another area or time frame. See my posts It’s Not About Feedback and Emergent Climate Phenomena for a discussion of these issues.”

Sensitivity is not meaningless, Willis. Even accepting your tropical governor hypothesis which, as you know, I am quite supportive of, no regulator is perfect. Every regulation system will maintain the control variable within certain limits, yet it needs an error signal to operate on.

The better (tighter) the regulation, the less sensitive it will be to outside “forcings”.

It’s that simple. Sensitivity is a measure of how good the regulator is. So your emergent phenomena and the rest do not negate the concept of sensitivity; they rely on it.

You made this incorrect relationship the centre of your last post on the subject and then ignored it when I, Paul_K and Frank pointed out it was fundamentally wrong. Not only did you not address those issues, you now repeat the same mistake.

The reason the results were similar when you did a more correct method (which unfortunately you did not report on, preferring to detail the incorrect method) is probably that the diff of an exponential decay is also an exponential decay.

Your truncated ∆T = lambda ∆F does not represent the linear feedback assumption and the lambda it gives will not be the same as climate sensitivity. Though it will probably be of the same order.

“The reason is that the climate sensitivity has nothing to do with the models, and everything to do with the trends of the inputs to the models (forcings) and outputs of the models (emulations of historical temperatures).”

Yes. Does this come as a surprise? Check the relationship between aerosol forcing (a free knob) and the sensitivity of models.

“Climate sensitivity is a function of the choice of forcings and desired output (historical temperature dataset), and not a lot else.”

The above notwithstanding, that is basically true if we understand “choice of forcings” to include any _assumed_ feedbacks that are added to the equations inside the models. The cloud feedback is the key issue, and it is still unknown whether it is even positive or negative as a feedback.

Yes, the historical forcings for aerosols have large uncertainties. Depending on how you set them, you can get higher or lower sensitivity.

The clue is that folks like Hansen and others think that models are a poor source of information about sensitivity. Paleo and observations are better. Models don’t really give you any “new” information about sensitivity.

So, a question. Since you’ve pretty much got this down, can you apply it to the models…

Such as making perfect hindcasts but with, say, a -5 climate BS factor?

This gutter trash loves the models. If you can take one of their models and get an ice age from it, you can force them to dump the models, and then they’ve got nothing.

Since it seems that the models are simple trends and such, can you not find the “inverse” factor, send it into an ice age, and post that Hansen’s model now says we’re all doomed from a future ice age, with .99 R blah blah blah?

Removing the _assumed_ x3 cloud feedback amplification and the exaggerated volcanic (aerosol) sensitivity would get rid of a lot of the divergence problem but would not solve it. Post-2000 would still not be flat enough.

There is obviously at least one key variable missing from the models.

There finally seems to be a grudging acknowledgement in AR5 that the sun may actually affect climate. Though they are still trying to spin it as a post-1998 problem so that they don’t have to admit it was a significant component of the post-1960 warming too.

I thought ‘think of a number, then produce data to support it’ was the standard way to work in climate ‘science’, so I cannot see how there is a problem here, as they are merely following the ‘professional standards’ of their area.

My volcano stack plots show that the tropics are highly _insensitive_ to aerosol-driven changes in radiative input. That would presumably apply to other changes, be they solar, AGW or other, that affect radiation flux in the tropics, assuming that they do not overwhelm the range of tropical regulation (which may be the case during glaciation/deglaciation).

Willis – Thanks for the analysis. Your “at that point the climate sensitivity is baked in. We don’t even need the model to calculate it” is brilliant, and demonstrated by your Figure 2. So … the part of the historical record which is not understood is assigned to CO2 and Climate Sensitivity is the factor used to make it match. In other words, the whole thing is an Argument From Ignorance.

Following on from this, when the models are claimed to be accurate because they match the historical record, that is a circular argument.

It can be seen that the LHS is (almost) the diff of successive temperature diffs, i.e. acceleration, not a simple difference. This is the point Frank was making. There is a scaling factor on the right which is close to unity for long tau delay constants.

This is why the incorrect method was not too far off for dt/tau of around 6 or 7 in the “Eruption” post.

Sensitivity is not meaningless, Willis. Even accepting your tropical governor hypothesis which, as you know, I am quite supportive of, no regulator is perfect. Every regulation system will maintain the control variable within certain limits, yet it needs an error signal to operate on.

Greg, you seem to assume that the chaotic system of chaotic systems has a ‘single linear sensitivity’ to all the varied inputs (forcings) and feedbacks; this appears to be illogical. Can you explain why you think such a nonlinear chaotic system will have only one simple linear sensitivity?

Steven Mosher says: “check the relationship between aerosol forcing (a free knob) and the sensitivity of models.”

Which is precisely why I did the volcano stacks, having pointed out to Willis that inflated volcano forcing was the pillar propping up exaggerated AGW sensitivity. You can’t have one without the other. Even inflating both only works when both are present, which is why the post-Pinatubo period disproves the IPCC paradigm.

My own research in June of this year, reported in the post Climate Sensitivity Deconstructed, has shown that the critical factor is not the total forcing as Kiehl hypothesized. What I found was that the climate sensitivity of the models is emulated very accurately by a simple trend ratio: the trend of the model output divided by the trend of the forcing.

True, you have shown that, and (as e.g. demonstrated by comments from Greg Goodman in this thread) it really annoys some people. But “the trend of the model output” is adjusted to fit by an arbitrary input of negative aerosol forcing.

The fact that – as Kiehl showed – each model is compensated by a different value of negative aerosol forcing demonstrates that at most one of the models emulates the climate of the real Earth. And the fact that they each have too high a trend of model output without that compensation strongly suggests none of the models emulates the climate of the real Earth.

Simply, the climate models are wrong in principle. They need to be scrapped and done over.

I have often explained this on WUWT and most recently a few days ago. But I again copy the explanation to here for ease of any ‘newcomers’ who want to read it.

None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al., http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes) would make every climate model provide a mismatch between the global warming it hindcasts and the observed global warming for the twentieth century.

This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1. the assumed degree of forcings resulting from human activity that produce warming, and
2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.

More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.

The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.

And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS, ‘An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre’, Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).

Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.

He says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.

Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at

Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^-2 to 2.02 W/m^-2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^-2 to -0.60 W/m^-2.

In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.

So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
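
The compensation mechanism described above can be sketched numerically. In this toy example (all numbers are invented for illustration and are not the values from Kiehl’s Figure 2), two “models” whose sensitivities differ by a factor of two hindcast exactly the same century of warming, because each is handed its own offsetting aerosol ramp:

```python
import numpy as np

years = np.arange(1900, 2001)
ghg = 0.025 * (years - years[0])   # assumed GHG forcing ramp, W/m^2
target_warming = 0.7               # degC over the century (illustrative)

results = {}
for sensitivity in (0.4, 0.8):     # degC per W/m^2, a factor of 2 apart
    # Pick the aerosol ramp so sensitivity * (GHG + aerosol) hits the target
    needed_net_end = target_warming / sensitivity
    aerosol = (needed_net_end - ghg[-1]) * (years - years[0]) / 100.0
    net = ghg + aerosol
    results[sensitivity] = sensitivity * net[-1]

print(results)  # both "models" hindcast the same warming
```

The higher-sensitivity “model” simply needs a larger negative aerosol ramp, which is exactly the inverse correlation Kiehl reported.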

richardscourtney says: “True, you have shown that and (as e.g. demonstrated by comments from Greg Goodman in this thread) it really annoys some people.”

Richard, if you read my comments you will see that I agree with a lot of what Willis is saying. It is not what he is suggesting that “annoys” me (your idea, not mine) but the fact that he makes mistakes, fails to correct them, and then repeats them.

I correct Willis where I think he makes technical errors. That is usually in the sense of reinforcing his work, not dismissing it.

If he could understand that you can’t regress dF on dT when the basic assumed relationship is that dT/dt is proportional to F, he would look a lot more credible. That is such a fundamental error that it undermines what he is trying to say, and makes it easy for anyone who is “annoyed” by his observations to ignore them.

One thing that does annoy me a little is people like you and Ian W who can’t be bothered to read what I write before trying to criticise it.

Thanks Willis. CO2 forcing is very likely near zero anyway, because external IR is absorbed but cannot be thermalised in the gas phase. It thermalises at heterogeneities like clouds, surfaces and space. This process may play an important role in your climate governor system.

I could be wrong but are you conflating statistical modelling with “deterministic” modelling?

1) The one you describe, the trend between temperature and forcing sounds like a statistical climate model.

2) But this isn’t what the IPCC models are doing. As far as I know they perturb the system with the expected increased thermal energy from an observed increase in CO2.

– They then run their models with different feedback assumptions in order to create the training set. But this is only one part of it.
– One must also model convective transfer – during the modelling process – which is a function of finite gradients from the model resolutions (this is where they lose the plot, I think) and the assumed feedbacks and their magnitudes.
– But they also have dampeners such as aerosols to play around with and in coupled models (did temperature result in drier regions) atmosphere-ocean transfer.

I guess what I am trying to say is: does Total Forcings = Net Forcing in the above plot?

I never understood the previous post, and I don’t understand this one. The emphasis on trend seems a trivial corollary of what you’d done before.
You’d already established that the following is a pretty accurate black-box representation of model behavior:
\frac{dT}{dt} = \frac{\lambda}{\tau}F - \frac{1}{\tau}T
So you know that the temperature response to a forcing having a trend from time t=0, i.e., to F = rt, is
T = \lambda r \left[ t - \tau(1 - e^{-t/\tau}) \right]
That is, the rate of change of temperature is
r\lambda(1 - e^{-t/\tau}).
If you ignore the transient response, the ratio of the temperature trend, \lambda r, to the forcing trend, r, is the transient climate sensitivity \lambda.
This result does not seem startling in light of what you’d established previously.
What am I missing?
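
For what it’s worth, the closed-form solution in the comment above checks out numerically. A short sketch (the parameters lambda, tau and the ramp rate r are arbitrary) integrating the same equation:

```python
import numpy as np

lam, tau, r = 0.8, 4.0, 0.03   # illustrative parameters only
dt = 0.01
t = np.arange(0, 100, dt)
F = r * t                      # ramp forcing

# Simple Euler integration of dT/dt = (lam/tau)*F - (1/tau)*T
T = np.zeros_like(t)
for i in range(1, t.size):
    T[i] = T[i - 1] + dt * ((lam / tau) * F[i - 1] - T[i - 1] / tau)

# Closed-form solution: T = lam*r*(t - tau*(1 - exp(-t/tau)))
T_exact = lam * r * (t - tau * (1.0 - np.exp(-t / tau)))
print(np.abs(T - T_exact).max())   # small integration error

# After the transient (t >> tau), the trend ratio recovers lambda
late = t > 50
trend_T = np.polyfit(t[late], T[late], 1)[0]
print(trend_T / r)                 # close to lam
```

Once the exponential transient has decayed, the temperature trend divided by the forcing trend is just lambda, which is the trend-ratio result of the post.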

I’d like to start a climate model a thousand years ago, or better yet before the last ice age, and watch as it just runs wildly around, missing paleoclimatology history at every turn. You’d think they’d be excited to do this and prove their long-term forecasts. The idea that these toys can simulate the earth’s climate is so absurd.

Regarding the idea that the climate models simulate past temperatures ‘with a reasonable degree of accuracy’, didn’t someone mention the related idea somewhere that correlation is not causation? I read somewhere that wheat prices correlate with global warming, but nobody suggests wheat supply/demand is a primary cause of global warming.

For a grade 7 project, one of my sons built a maze, drew a learning curve and then borrowed a pet rabbit from a friend because he had to take all his project to school. Unfortunately, the rabbit was way too big for the maze. My son decided to take a sick day off and he paid his friend 50 cents to take the “project” to school on his behalf and to explain that the rabbit had grown since completing the experiment. His friend got a detention for bringing a live animal to school, which was listed as one of the no-nos for the project. He agreed to pay his friend another 50 cents. I, of course, learned all this much later. My son got an ‘A’ for his bogus learning curve because it pretty much matched the psych 101 stuff that school teachers take. He would have been a star on the IPCC team.

Re volcanic aerosols, are they uniform across the globe? Wouldn’t the trade winds tend to block out these aerosols somewhat, at least along the ITCZ? How do, say, southern hemisphere aerosols cross the ITCZ?

“So we see that once we’ve decided what forcing dataset the model will use, and decided what historical dataset the output is supposed to match, at that point the climate sensitivity is baked in. “

So if I understand it well, then basically: suppose that in reality there is no climate sensitivity (say to CO2) whatsoever, but there is still a trend in the data to be emulated, due to some other unconsidered factor[s]. The (CO2) climate sensitivity is then invented, and its function adjusted to fit the trends of the emulated data, standing in for the unconsidered factor[s] in the model. The resulting emulation works as a hindcast, but proves to have no predictive value for the further development of the real data, exactly because the unconsidered factor[s] behave differently than the invented CO2 forcing function?

Richard111 says: @ October 1, 2013 at 11:02 pm
… Any tutorials on this subject for baffled laymen?
>>>>>>>>>>>>>>>>>>
You might try John Kehr’s posts. John “is a Chemical Engineer by schooling and Research and Development Process Engineer by profession.” The Earth’s Energy Balance: Simple Overview

Assume a spherical cow, feed the cow peer reviewed “scientific” articles. Calibrate the simulation using selected data from the real world that has been adjusted to fit the model of the spherical cow. Then compute how much butter can be produced from the milk the cow produces. Force everyone on earth to use the resultant butter on their morning toast. Finally, label anyone who objects to being forced to use the non-existent butter on his non-existent morning toast a “denier”.

What could go wrong?

All the so-called climate modelers have done is hide a tangle of circular reasoning inside a web of complexity of undefined and undisclosed computer code. Now, assuming what you set out to prove is always wrong! From that point on, everything will go wrong, no matter how well you have adjusted your training data set to the assumed behavior of your spherical cow.

Never fear, you are wrong too because the modelers have called you a “denier”. As everyone “knows” to name a thing something changes the thing to mean the name. Oh wait. That went wrong too.

The real question is, did they do anything right? If so, how will we know it? Clearly not by asking the modelers.

I am sure that I am just repeating what Willis has said here, but since I am not as good at the maths I will throw in my 2 cents anyway.

The lack of warming for the past 15/16/17 years is finally being recognized, and at the same time we have estimates of sensitivity to a doubling of CO2 coming down from 3 (+++) to 1.5 (—). It seems to me that all people are doing is continuing to assume that all warming is because of CO2, and since there is less warming, the sensitivity to CO2 (and equivalent greenhouse gases) is lower.

This sounds way too simplistic, but I can’t get it out of my head. Nowhere in here is there room for any other “forcing” than CO2 and the lack of warming means simply that we got the sensitivity wrong. If the models had any other variable forcing, then you would not need to explain everything by the CO2 sensitivity and so all of this is based on the one single assumption. Yes, I know the physics of IR absorption/emission, but to make CO2 the only variable forcing is how we got into this mess in the first place.

WordPress is barely loading with my dial-up connection, I’m not getting full pages. Starting at around 3AM EDT, aka midnight for WordPress and Left Coast times, it went bad. WP and several other sites are only loading in up to ten second spurts, followed by a minute of nothing. Google and Drudge Report load fine.

While loading, the browser flashes from where it’s fetching data. A hangup appears to be akamai-dot-net. For those wondering what Akamai is, it’s a mirror cache service. Rather than directly loading everything for a page off a company’s or group’s servers, content is sent from Akamai’s caches instead, reducing the need for directly hosted bandwidth.

MANY commercial sites use Akamai. When it goes down, much of the internet is broken.

I’ve reconnected several times. Upgraded browser, at usual dial-up blazing speed of about 4.4 KB/s. Switched to different computer, and to its WinXP partition while using a different modem. Same thing, internet still broken.

(Government “shuts down”, the “non-essential” work ceases, and when a NSA internet content monitoring operation would need a human inputting the next-day start-up signal to keep running, a large internet chunk shuts down instead. Curious.)

Some sites like Breitbart-dot-com, no friend of the current occupier of the White House, are inaccessible, taking too long to respond.

I’m dropping this comment here as this page is loaded enough to post a comment, and I effectively won’t be able to reply to comments on my comments here until WordPress is working again.

Gary Pearse says: @ October 2, 2013 at 5:02 am
…. My son got an ‘A’ for his bogus learning curve because it pretty much matched the psych 101 stuff that school teachers take. He would have been a star on the IPCC team.
>>>>>>>>>>>>>>>>>>
He also would be smart enough to desert the sinking CAGW ship at this time, as some of the “Team” seem to be tiptoeing out. More HERE from last year.

(A recent commenter at WUWT noted the controversial IPCC graph had the name Stott attached.)

Willis, they are not saying that the clouds are the cause of the warming, but rather the uncertainty over the clouds is the reason for the spread in the different models. Presumably that would also cause a wide variation in forcing estimates.

Joe Born : “If you ignore the transient response, that is, the ratio of the temperature trend, lambda r, to the forcing trend, r, is the transient climate sensitivity lambda”

Not only is that simplification ignoring the transient, it is assuming zero d2T/dt2, as per your first equation. Is that a reasonable assumption? Having looked at the temp data from many sources in some detail, for both dT/dt and d2T/dt2, there is a definite long-term acceleration over the last century and a half of useful data.

When I say acceleration that is from neg slope to positive slope, which may include other variations than AGW.

Rob Potter says: It seems to me that all people are doing is continuing to assume that all warming is because of CO2 and since there is less warming, the sensitivity to CO2 (and equivalent greenhouse gases) is lower.

They are desperately trying to patch up a sinking ship.

There are clearly many cardinals that are not happy with the current doctrine, but for the moment the conclave that controls the canon texts that get included in the bible is still attached to the political power it derives from CAGW.

I note the feedbacks are a critical component of the theory. If the water vapor, lapse rate and cloud feedbacks are higher than is assumed, we have runaway global warming. If they are less, say in the 50% range of that assumed, we have just 1.5C per doubling.

So far, water vapor is coming in much lower than expected, clouds are completely unknown in reality.

The latest IPCC AR5 report did not really change any of the values; we are still at around 2.3 W/m2/K in total feedbacks. Very little evidence was presented that did not use the ENSO impacts to hype the values.
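For anyone wanting to see how a total feedback figure like 2.3 W/m2/K maps onto degrees per doubling, the standard zero-dimensional feedback arithmetic is a one-liner. The values below (a CO2 doubling forcing of about 3.7 W/m2 and a Planck response of about 3.2 W/m2/K) are approximate textbook numbers, not figures taken from the comment above:

```python
# Zero-dimensional feedback arithmetic; F2X and the Planck response are
# approximate textbook values, used here only for illustration.
F2X = 3.7      # radiative forcing for doubled CO2, W/m2
PLANCK = 3.2   # no-feedback (Planck) restoring response, W/m2 per K

def ecs(net_feedback):
    """Equilibrium warming per CO2 doubling, given net feedbacks in W/m2/K."""
    return F2X / (PLANCK - net_feedback)

print(round(ecs(2.3), 1))    # feedbacks at 2.3 -> about 4.1 K per doubling
print(round(ecs(1.15), 1))   # feedbacks halved -> about 1.8 K
print(round(ecs(0.0), 1))    # no feedbacks -> about 1.2 K
```

With these rough numbers, halving the assumed feedbacks does indeed drop the sensitivity from roughly 4 C to under 2 C per doubling, in line with the commenter’s point.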

Greg Goodman: “Not only is that simplification ignoring the transient, it is assuming zero d2T/dt2 as per your first equation. Is that a reasonable assumption?”

If you’re asking me whether it’s reasonable to assume a zero second time derivative of temperature, the answer in the real world is of course no. But in this parallel universe of a first-order linear system that the models in effect unknowingly simulate, there’s no such assumption; first-order systems don’t care about second derivatives.

But I’m not sure that this is relevant to my question, which is what it is Mr. Eschenbach is saying about trend ratio. I think he means something less trivial than I’m understanding him to say, but I haven’t been able to figure out what it is he does mean.

I noted some of my comments in the last few days have completely disappeared without a ‘Snip’ to indicate they have been received.

You’re an anti-government cons & piracy nutjob with access to agricultural-grade ammonium nitrate and other weaponry. Maybe one of the “non-essential government personnel” did the quick scan and pass-along for your monitored group so they weren’t there to authorize the flagged remarks.

(Internet still broken, I waited 20 minutes for the page to mostly reload.)

More brilliant work from Willis Eschenbach. This essay and your other essays that you link above constitute a brilliant analysis of the follies found in Alarmist modelers’ assumptions. Your version of the paradox is reason enough to junk the climate models.

It always looked to me that the models were trained to match the warming side of the PDO (ignoring that the PDO existed). That was a pretty much continuous increase in temps. That seems easy to do in a model. Just turn a knob so that modeled temps rise! Done! Reality has become much more complex now that temps are flat or falling slightly. Is it even possible to re-train these models to work now?
They have to show: the rise, a leveling off, perhaps a drop, then their hoped-for rise again (sometime after they are due to retire)?

Greg Goodman and Joe Born: I am curious about the climate sensitivity and the absolute temperature of the models compared to each other. It would be more than just interesting if the ratio Willis is commenting on indicated a ranking according to absolute temperature. That would provide a basis for something beyond the sense in which the relationship Willis comments on is mathematically trivial. His use of the equation does not bother me much, since this is one of the simplified equations used to model a linear response system for CS. You can see it in write-ups trying to explain climate sensitivity. Not saying such are particularly good or bad. Their use was for concept.

I expect this has been covered in the past, but something about these confuses me and I rarely see people just dismiss out of hand any “I can prove the past” assertions.

I used to work in pretty deep theoretical aerodynamics. Watching papers presented and the like, heuristic methods were often presented. Some measurements were taken (let’s say with PIV), someone came up with a methodology for predicting the flow velocities based on those measurements, and then a new map was presented based on the equations they had invented. You knew the method was really bad if they couldn’t get the far field correct. But I was rarely impressed with the near field predictions. To give you an idea, the R^2 values were often .3 or lower (this was not easy stuff).

Then I was at a conference where the R^2 values jumped up to about .8 and higher. It took me a little while to figure out what was going on. Basically, to validate their models, they compared to the data used to create the models. I asked about this (and, as a fairly junior guy in the room, I was expecting to get laughed down). Suddenly, a lot of people agreed: You can’t validate your model with the same stuff you used to create your model. It lacks rigor. And, besides, predicting the same data set you used to create the model really shouldn’t be impressive. You need an independent data set (i.e. a different set of trials that should be comparable but imply some tweak to the system, such as different far field velocity or different disturbance size or whatever) to see if you’ve got something worthwhile.

So, if climate modeling can be done and tested against a data set to prove that it works, they would have to use only past data to create it and use future data to test it. Alternately, they could save a subset for use. For instance, if the theory goes that you need 30 years for a trend, you stop using historical data for model generation in about 1980. With the model thus generated, you then “future-cast” to 2010 and compare how you did (trends being the goal here), without any adjustments for reality (unexpected volcanos or whatever; you should have a statistical model that includes a predicted number of those anyway). If you can’t “backcast” you already failed, so you should check that before doing the “future-cast”, but your success at a backcast really shouldn’t impress anyone.
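The hold-out protocol described above (fit on pre-1980 data only, then score the untouched 1980–2010 span) can be sketched in a few lines. The data here are synthetic, a made-up trend plus cycle plus noise, purely to illustrate the split; the “model” is just a straight-line fit, not a climate model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a temperature record: trend + cycle + noise.
# (Made-up numbers for illustration only; this is not real data.)
years = np.arange(1900, 2011)
temp = (0.007 * (years - 1900)
        + 0.1 * np.sin(2 * np.pi * (years - 1900) / 60.0)
        + rng.normal(0.0, 0.05, years.size))

cutoff = 1980                        # the suggested split point
train = years < cutoff
test = ~train

# "Train" a toy model (a straight-line fit) on pre-cutoff data only
coef = np.polyfit(years[train], temp[train], 1)

# Score it on the held-out post-cutoff data, never on the training data
pred = np.polyval(coef, years[test])
rmse_out = np.sqrt(np.mean((pred - temp[test]) ** 2))

# The flattering in-sample number, for comparison
resid_in = np.polyval(coef, years[train]) - temp[train]
rmse_in = np.sqrt(np.mean(resid_in ** 2))
print(rmse_in, rmse_out)
```

Only rmse_out says anything about predictive skill; matching the data you fitted (the hindcast) is the easy part.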

The models will never be correct, due to the fact that they will never have complete, accurate, or comprehensive enough data to begin with, nor the proper state of the climate to begin with, which in the end renders the forcings they use useless.

As the decade goes on the climate forecast that the models have given will be so off, that the IPCC and the models will be obsolete, as the temperature trend will be down in response to very prolonged solar minimum conditions. The exact opposite outcome.

@Joe Born.
You are not missing anything. The only problem is that the climate system does not behave like a simple first order system, or at least its autocorrelation properties suggest that it doesn’t. Also, consideration of the non-linearities in the climate system suggests that it would be very unlikely to behave as a non-linear system.

This post seems to be a trivial, circular argument. It would seem unlikely that the “magic” lambda is a constant, but this would mean integrating a product that is probably some way beyond WE’s capacity.

Climate models cannot “hindcast”. Their vision is not 20/20. It’s more like 30/50. They cannot predict the past because the past is not linear. They cannot predict the future because, the future hasn’t happened yet.

I do like how people are getting along here though. Respect for each other and all that.

“How can that be? The models have widely varying sensitivities … but they all are able to replicate the historical temperatures? How is that possible?”

Easy, they use variables that can’t be measured (Fudge Factors) to match the past and then speculate into the future.

That’s what gets me with that graph showing temperature outputs from models with natural-only forcings versus natural plus anthropogenic, compared to observations: anyone with half a brain knows it could just as easily be natural factors that we know of plus natural factors that we don’t, compared to observed. How “scientists” could ever present that as evidence is beyond me.

This result does not seem startling in light of what you’d established previously.
What am I missing?

Thanks, Joe. As I said, it shouldn’t be surprising. However, Dr. Kiehl was surprised by it, as was I. Why? Because it means that the shape and form of the model are immaterial for climate sensitivity. It is determined only by the ratio of the trends of the output and the forcing. Given that the models are all trying to emulate the same historical surface temperature record, the spread in the reported sensitivities results almost entirely from the spread in the forcings.

Compare and contrast that with the IPCC claim, that they are “very certain” (95%) that the spread in the reported sensitivities is due to differences in “cloud feedback” … you may not be startled by my results, but the IPCC certainly would be …

@Joe Born.
You are not missing anything. The only problem is that the climate system does not behave like a simple first order system, or at least its autocorrelation properties suggest that it doesn’t.

RC, you seem to be under the mistaken impression that we’re discussing “the climate system”, or that my model is a model of “the climate system”. We’re not, and it’s not.

My model is an emulation, and a very accurate one, of the climate models. Not of the climate system. Of the climate models. As a result, the autocorrelation of the climate system says nothing at all about my results. I’m not emulating the climate, I’m emulating the models.

The autocorrelation of the output of my model, as I showed you in another thread, is quite similar to the autocorrelation of the output of the climate models.

Also, consideration of the non-linearities in the climate system suggests that it would be very unlikely to behave as a non-linear system.

Huh? I don’t understand what that means. I’m glad you considered the non-linearities and decided that the climate itself is likely to be linear … you’ll forgive me if I say that your argument lacks force. Not to mention lacking data, examples, logic, and anything other than vague assertion.

This post seems to be a trivial, circular argument. It would seem unlikely that the “magic” lambda is a constant, but this would mean integrating a product that is probably some way beyond WE’s capacity.

So it “seems to be” circular, and it “seems unlikely”, and “consideration of the non-linearities suggest” … and at the end, once again you’ve not identified any specific thing that I’ve done wrong. And of course, you’ve capped it off with some nasty snark at my capacity.

RC, I invited you on the last thread to give us your estimate of the effect of the volcanic forcing on the temperature. I’d calculated it two ways, and each way gave me about 0.2°C for a doubling of CO2. After all your complaints about my methods and my abilities, I said OK, RC, you’re so unhappy with my estimates and so dismissive of my methods, what is your estimate of the effect of volcanoes on the climate?

…

…

You said it would take you two months to get your head around the problem and decide how to attack it.

So I just shook my head, and I invited you to come back in November when you’d gotten it whupped into shape.

It sounds like you could “make” your own model which could compete with the various existing ones. It might be interesting to:
1. Make a model for the known CO2-only sensitivity, and also one for zero sensitivity, generating temperature prognostications.
2. Verify that your model parameters (but not the output) fall within the spread of existing models
3. Publish the results, making those sensitivities eligible to be included into AR6 (if there is one)

Other than climate sensitivity likely varying with climate change, and cloud feedback not being the only poorly understood contributor to climate sensitivity, I think Kiehl was correct. And that IPCC was correct in stating that climate sensitivity to CO2 change is anywhere in a wide range because the cloud feedback is poorly understood.

The insufficiencies of the models appear to me to be the use of wrong values for forcings other than CO2, such as aerosols, the direct effect of solar variation, cloud effects of solar variation, and multidecadal oceanic oscillations. If only all of these and other significant forcings were correctly entered into the models, and the feedbacks then adjusted to achieve hindcasting, the models would be more accurate and come up with more accurate figures for climate sensitivity to CO2. Which can change with climate change, due to change in the strength of feedbacks, such as the lapse rate one, the cloud albedo one, or the surface albedo one.

Other than climate sensitivity likely varying with climate change, and cloud feedback not being the only poorly understood contributor to climate sensitivity, I think Kiehl was correct. And that IPCC was correct in stating that climate sensitivity to CO2 change is anywhere in a wide range because the cloud feedback is poorly understood.

The insufficiencies of the models appear to me to be the use of wrong values for forcings other than CO2, such as aerosols, the direct effect of solar variation, cloud effects of solar variation, and multidecadal oceanic oscillations. If only all of these and other significant forcings were correctly entered into the models, and the feedbacks then adjusted to achieve hindcasting, the models would be more accurate and come up with more accurate figures for climate sensitivity to CO2. Which can change with climate change, due to change in the strength of feedbacks, such as the lapse rate one, the cloud albedo one, or the surface albedo one.

Oh, so you “think Kiehl was correct” except for “climate sensitivity likely varying with climate change”. But Kiehl only considered climate sensitivity in the models (not in reality) and – as he reported – it has a fixed value in each model.

Also, your assertion that the models could “come up with more accurate figures for climate sensitivity to CO2” can only be true if the reason is known for why each model ‘runs hot’, and each ‘runs hot’ by a different degree from each other model. The modelers don’t know why the models ‘run hot’, do you?

Willis: After all, the canonical equation of the prevailing climate paradigm is that forcing is directly related to temperature by the climate sensitivity (lambda). In particular, they say:

Change In Temperature (∆T) = Climate Sensitivity (lambda) times Change In Forcing (∆F), or in short,

∆T = lambda ∆F

====

No, Willis. This is not the canonical relationship; it is what you reduced the canonical relationship to in eqn 7 of your previous thread, under the limit of a physically unreal constant rate of change being reached, i.e. ∆T1 = ∆T2 and constant ∆F. This is equivalent to what Joe posted earlier, with F = r·t and the transient removed. You are now suggesting this is the base relationship; it is not.

(where dt is the discrete data time interval, which is not a dimensionless unity).

So what is your trend ratio telling us? If you draw a straight line through the input time series and a straight line through the output temperature time series, and take the ratio, it gives a constant, by definition. Not a result.

The result is close to the climate sensitivity; eye-balling your graph, the largest deviation looks like about 10%.

Is that surprising?

Most of the low-level relationships are linear. Many are linear negative feedbacks. The most obvious exception is Stefan-Boltzmann T^4, but local variation of the order of 30 degrees C is only about 10% at most. The bits most likely to be non-linear (tropical storms, cloud formation and precipitation) are precisely the bits that are so poorly understood that they are not modelled at all and are replaced by WAG “parameters”.

One of the easiest ways to control a non-linear, unstable system is to add negative feedback. Earth has proved to be long-term stable, so anything that comes close to matching a hindcast almost certainly will be stable too, and dominated by negative feedback.

Is there much scope for something constrained by hind-cast to be much different from what you remarked on?

Perhaps it would be informative to look at the one or two models that lie off the line. What is special about them ? Are they less/more stable?

I agree you do seem to have pinned it down better than Kiehl’s initial observation.

I think Mosh has hit it on the head. There are enough free knobs in all this to make any lambda fit.

The current IPCC position is untenable. Having been unable to put a figure on CS, they are no longer able to say what proportion of recent warming is AGW.

Willis: “Because it means that the shape and form of the model are immaterial for climate sensitivity. It is determined only by the ratio of the trends of the output and the forcing. ”

It makes no sense to say the sensitivity (a parameter) of a model is determined by its output.

“Compare and contrast that with the IPCC claim, that they are “very certain” (95%) that the spread in the reported sensitivities is due to differences in “cloud feedback” … you may not be startled by my results, but the IPCC certainly would be …”

Aren’t the cloud feedback “parameters” also input parameters? I would have thought that what they choose to use as cloud parametrisations almost certainly does determine what they choose to use for CS.

That’s one of the few things they can be 95% sure of since they draw both cards from the bottom of the pack.

Aren’t the cloud feedback “parameters” also input parameters? I would have thought that what they choose to use as cloud parametrisations almost certainly does determine what they choose to use for CS.

That’s one of the few things they can be 95% sure of since they draw both cards from the bottom of the pack.

However, the modelers needed to hindcast the global temperature evolution of the twentieth century. And that evolution was not a linear rise.

Each model ‘runs hot’. That is, each model tends to increase global temperature over time more than was observed over the twentieth century. And each model ‘runs hot’ by a different amount. Each model is tuned to hindcast the twentieth century by adjusting aerosol cooling arbitrarily to constrain the ‘running hot’ while also adjusting CS to obtain the evolution of global temperature (which was not a linear rise).

It is that tuning of two parameters which provides the value of CS. Of course, as you suggest, instead of adjusting aerosol cooling they could have adjusted cloud behaviour or some other assumed parameter. But that would not be likely to much affect the needed value of CS in each model.

Simply, the models are basically curve fitting exercises and, therefore, it is not surprising that Willis can emulate their behaviour(s) with a curve fitted model.
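The two-knob tuning described above can be caricatured in a few lines: let a second toy “model” assume stronger aerosol cooling (hence less net forcing) and a correspondingly higher sensitivity, and the hindcasts coincide. Every number here is invented for illustration; this is Kiehl’s inverse correlation in miniature, not any actual model:

```python
import numpy as np

# Two toy "models" hindcasting the same temperature record. Model B
# assumes stronger aerosol cooling, so its net forcing is half of model
# A's, and it needs double the sensitivity to match the same record.
# All numbers are invented for illustration.
t = np.arange(150)                   # years
ghg = 0.02 * t                       # shared GHG forcing, W/m2
aerosol_A = -0.005 * t               # model A's assumed aerosol cooling
aerosol_B = -0.0125 * t              # model B dials the aerosol knob harder

net_A = ghg + aerosol_A              # 0.015 * t
net_B = ghg + aerosol_B              # 0.0075 * t, half of net_A

lam_A = 0.4                          # sensitivity, K per W/m2
lam_B = 0.8                          # double the sensitivity

temp_A = lam_A * net_A               # equilibrium dT = lambda * dF
temp_B = lam_B * net_B

# Very different sensitivities and net forcings, identical hindcasts
assert np.allclose(temp_A, temp_B)
```

The model with the larger net forcing carries the smaller sensitivity, which is exactly the inverse correlation Kiehl reported: fix the forcing dataset and the hindcast target, and the sensitivity is baked in.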

Atmospheric CO2 also LAGS temperature in the ice core record by ~800 years on a longer time scale.

So atmospheric CO2 LAGS temperature at all measured time scales.*

So “climate sensitivity”, as used in the climate models cited by the IPCC, assumes that atmospheric CO2 primarily drives temperature, and thus assumes that the future is causing the past. I suggest that this assumption is highly improbable.

Regards, Allan

______

Post Script:

* This does not preclude the possibility that humankind is causing much of the observed increase in atmospheric CO2, nor does it preclude the possibility that CO2 is a greenhouse gas that causes some global warming. It does suggest that neither of these phenomena is catastrophic or even problematic for humanity or the environment.

As regards humanity and the environment, the evidence suggests that both increased atmospheric CO2 and slightly warmer temperatures are beneficial.

Finally, the evidence suggests that natural climate variability is far more significant and dwarfs any manmade global warming, real or imaginary. This has been my reasoned conclusion for the ~three decades that I have studied this subject, and it continues to enable a more rational understanding of Earth’s climate than has been exhibited by the global warming alarmists and the IPCC.

To answer your question I once spent some time working through the fortran actually comprising the majority of the model. What I found was a handful of 1960s equations coupled with dozens of functions that didn’t do anything and lots of “parametrization” whose primary consequence was making the thing’s retrocasts come out about right.

I’ve noticed other examples where there seem to be disconnects in the logic of the confidence ratings. For example, somewhere in the document they suggest the pause is caused in equal measure by internal variability and a reduced rate of change of forcings in the past decade or so, and give this high confidence. But then they go on to express much lower confidence in quantifying the forcings.

Assuming these guys aren’t idiots and also aren’t simply dishonest, then the only reason such disconnects exist is because the confidences come from many different sources. Confidence can come from the quality of data sets, from understanding of processes, or from professional opinion (maybe there are more). So when it comes to clouds, it might be that we don’t understand the physics or have good, long data sets, but it might be the professional opinion of many climate scientists that it’s cloud feedback that causes the model spread. Voila! You have high confidence that springs from a low level of understanding.

(Note This is just my attempt to try to understand the reasoning of the IPCC, in no way does it mean I think it’s a reasonable way to proceed.)

@Willis Eschenbach
I think that modelling climate is quite a serious business. I actually think it is incredibly difficult.

If I were to model volcanic activity, I would regard it as a multiplier of the forcing, not an additive effect. That is, if a lot of ash goes into the atmosphere, it would diminish the forcing by 10% or whatever.

In that case, recovering the influence of volcanic ash on temperature would involve a homomorphic deconvolution. Given the fact that the climate response is certainly non-linear, one would be on a hiding to nothing using the awful data that is available.

If I were to try and take this problem on, which I am not because it is not my field, it would require at least 6 months of serious work. However, this does not mean that I, and others who have a mathematical and technical education, cannot comment on the use of basic maths and statistics. We can, because we know a lot more about these methods than your good self. However, you seem to churn out what you believe is profound analysis several times a week. You should ask yourself if what you write makes any sense before posting it, and then give the problem some serious thought.

Your analysis is both circular and naive.

I’m sorry that you are unable to respond to the valid criticism made by educated scientists about your posts. Simply ranting at people is not argument. I am perfectly capable of having a rational discussion about elementary signal processing and statistical methods with anyone and am prepared to be corrected by those who have greater expertise than myself.

Why not start with trying to determine if your “models” encapsulate the non-linear characteristics of the data and see how far you get?

“The clue is that folks like Hansen and others think that models are poor source of information about sensitivity. Paleo and observations are better. Models.. dont really give you any “new” information about sensitivity.”

But isn’t the problem here that the only way to determine sensitivity in the paleo and observational data is to model it? Snake eats tail.

Steven Mosher says:
check the relationship between aerosol forcing ( a free knob) and the sensitivity of models.
The clue is that folks like Hansen and others think that models are poor source of information about sensitivity. Paleo and observations are better.
————————————

Paleo also has a free knob – Albedo.

In my estimation, Albedo on planet Earth can vary between 24% and 50%.

24% as in Pangea / Cretaceous hothouses (zero ice, continents weighted toward the equator, extensive shallow oceans), 33% as in Last Glacial Maximum (glaciers down to Chicago, sea ice down to 50 N/S) to 50% as in Snowball Earth (glaciers and sea ice to 30 N/S).

Hansen and paleo-climate-sensitivity scientists have never been objective in determining these values but have just played around with the numbers / ignored the changes completely so that they can assign more sensitivity to GHGs/CO2.

Technically, the paleoclimate indicates a GHG/CO2 sensitivity somewhere between 0.0C (as in None) to 1.5C per doubling.

Willis: “…in the context of a naturally thermoregulated system such as the climate. In such a system, an increase in one area is counteracted by a decrease in another area or time frame.”

When you write ‘another area or time frame’, what scale do you mean? Do you mean 10s of miles and hours or much larger scales? I can see how your naturally thermoregulated system works in the moist tropics to keep temperatures nearly constant, but how does that regulate most temperate and especially polar zones?

JKnapp: “… it is clear that they are “tuned” and are not really derived from physical principals as claimed.”

Excellent point. With the models all matching history so well, all having different sensitivities, and all failing at predicting the pause over the last 15 years, that’s a rather indisputable point at this juncture.

Willis: “…in the context of a naturally thermoregulated system such as the climate. In such a system, an increase in one area is counteracted by a decrease in another area or time frame.”

When you write ‘another area or time frame’, what scale do you mean? Do you mean 10s of miles and hours or much larger scales? I can see how your naturally thermoregulated system works in the moist tropics to keep temperatures nearly constant, but how does that regulate most temperate and especially polar zones?

===

Willis’ hypothesis seems good for the tropics (it’s a tropical phenomenon). It does not apply to the extra-tropics, though they probably benefit by some ocean mixing through the main ocean gyres which helps stabilise SST. http://climategrog.wordpress.com/?attachment_id=310

Willis: I enjoyed this post and the two posts on climate sensitivity/emergent phenomena linked at the end. A couple of comments on the latter:

1) The onset of convection and cloud formation can only serve as a negative feedback for part of the planet – rising air must come down somewhere. No matter how hot it may become, there won’t be any clouds where the air is descending. I’m sure you are aware of all of the deserts below the descending loop of the Hadley cell. (Roy Spencer once answered a question from me by saying: Look at any satellite picture of the planet. Where there are clouds, the air is ascending. Where it’s clear, the air is descending.) As radiative forcing from GHGs increases, can the fraction of the planet covered by clouds increase enough to compensate? Or is the proportion of cloudy and clear areas determined by the relative rates of ascent and descent, not surface temperature?

2) The daily emergent phenomena you describe in the tropics are driven by massive change in radiation – from a low of about 400 W/m2 of DLR at night to a maximum of about 1500 W/m2 of combined DLR and SWR at noon. It’s not surprising that new phenomena emerge in response to such massive changes in radiation. Knowledge that a daily 1000 W/m2 radiative forcing produces unquantified negative feedbacks associated with clouds and wind (and positive feedback from water vapor) gives me no confidence about how the planet will respond to a forcing from GHGs <1% as big lasting for at least a century.

3) The emergent phenomena you convincingly describe apply mostly to the tropical oceans, and your earlier post has demonstrated that tropical SSTs rarely exceed 30 degC. (You could add hurricanes to the phenomena that emerge – fortunately rarely – when SSTs exceed the 26.5 degC threshold.) In addition to exporting excess heat to the upper atmosphere, however, increased radiative forcing in the tropics will drive poleward transport of heat away from the tropics by convection – which will warm the rest of the planet. A "local climate sensitivity" of 0.5 at the equator due to negative cloud feedback could gradually increase to a "local climate sensitivity" of 5 at higher latitudes. In the temperate zones, thunderstorms are phenomena that occur mostly in the summer, but not daily. Upward convection of heat mostly occurs at "fronts" between moving warmer and colder air masses and at lows that follow Rossby waves in the jet stream. (The most violent thunderstorms that produce tornados are associated with such clashing air masses in the spring in the Central US, not with the warmest temperatures.) These phenomena exist all year in the temperate zone; they don't emerge only when the surface temperature exceeds a certain threshold.

4) The increase in evaporative cooling with wind speed may deserve more attention. When air is saturated with water vapor, water molecules are returning to the ocean as often as they escape the ocean's surface – no matter what the ocean's surface temperature is. Evaporative cooling only occurs when the water molecules that escape the surface move far enough away from the surface that they usually return as precipitation. Molecular diffusion is an extremely slow process compared with convection (wind). The change from laminar flow to turbulent flow produces the planetary boundary layer. The transport of water vapor through the boundary layer precedes the development of thunderstorms and cloud formation at the top of the boundary layer. Warm boundary layer clouds are effective at reflecting SWR with minimal reduction in outgoing LWR.

When answering a question it’s always important to check that you are answering the right question:

In climate science, climate sensitivity is the WRONG question. If one frames the issue of climate, CO2, whether climate change is possible without the effect of humans, the history of climate – has there been any change of climate prior to 1850, etc. – around “climate sensitivity”, then one has failed before one has even started.

“Climate sensitivity” is loaded with unproved assumptions. It is the WRONG QUESTION. Asking the question shows that you are a dishonest person, that you have a political agenda more important than the curiosity to know what is really happening in climate. This is why the question of “climate sensitivity” is completely irrelevant to the true issues of climate science. Climate sensitivity pre-assumes that CO2 changes drive climate temperature changes, which is a weak and unproven hypothesis at odds with the palaeo history of climate taken as a whole. If there is a sensitivity to CO2 then this figure (not reproduced here):

strongly indicates that climate sensitivity (to CO2) is not distinguishable from zero.

Real climate science does not start from inductive hypotheses about backradiation etc. but instead starts with the DATA about how climate has changed in the past, how ALL the possible influences on climate have changed or not changed in parallel with climate temperature, and, from the most complete possible body of data, starts to frame and test hypotheses. Climate science done this way would be unlikely to involve CO2 except as a minor footnote.

Thank you Gail. I have a copy of John Kehr’s book. I very much like his explanation of the hemispherical heating cycles of our planet. Makes more sense than most. Especially the time scales as confirmed by the performance of glaciers.
I like KevinK’s comment. I used to work in electronics and wondered about those points he makes.

—————————————–

kadaka (KD Knoebel) on October 1, 2013 at 11:31 pm:

KD kindly posted a series of links for me to study at my request. Unfortunately I have fallen at the first link.
If I may point out where my confusion starts…

“”Since, at this stage of my physical analogy, there are no GHG in the Atmosphere, the purple balls go off into Space where they are not heard from again. You can assume the balls simply “bounce” off like reflected light in a mirror, but, in the actual case, the energy in the visible and near-visible light from the Sun is absorbed and warms the Earth and then the Earth emits infrared radiation out towards Space. Although “bounce” is different from “absorb and re-emit” the net effect is the same in terms of energy transfer. If we assume the balls and traytop are perfectly elastic, and if the well-damped scale does not move once the springs are compressed and equilibrium is reached, there is no work done to the weight scale. Therefore, Energy IN = Energy OUT. The purple balls going out to Space have the same amount of energy as the yellow balls that impacted the Earth.””

My understanding is the atmosphere in contact with the surface WILL WARM via conduction and convection. Only when ALL the atmosphere reaches thermal equilibrium will the outgoing energy balance incoming energy. The atmosphere is transparent. The atmosphere cannot lose energy to space. The atmosphere without GHGs is still an INSULATOR.

Why is the ‘greenhouse effect’ needed? Our planet’s atmosphere is nothing like a greenhouse.

You have challenged me on several occasions to produce an analysis. You asked if anyone was prepared to help you when you said you wanted to produce a more complex model.

On the first occasion, I wrote a very simple piece on signal processing in response to your article that contained a concept that was completely wrong.

The second occasion you challenged me to put up or shut up, I wrote a piece on some of the pitfalls of modelling and sent it to WUWT. This did not see the light of day.

Your response to my offer to help you was extremely rude and you told me that I was an amateur on the basis of attributing complete rubbish to me, which I certainly had never said.

The problem is that:
1) You have editorial control over what is posted.
2) You are excessively sensitive to criticism, and you attempt to stifle dissenting views from your own.
3) You seem either unwilling or incapable of responding objectively to valid criticisms of your “maths”.
4) When you produce maths it is not set out so that it is easy to understand what you have done.

Since you are for ever challenging me to say what is wrong with your models, I am prepared to do this but I will write it in a way that the majority of people who read this blog can understand. However, I am only prepared to do so on the understanding that my response will be published – I see no reason why I should do a significant amount of work simply to have it binned.

I think the problem here is that everyone is making assumptions about what each model is doing – or rather an assumption that they are all doing the same thing. But even if they deal with one part of the system slightly differently the outputs can be drastically different. For example, the models have a volumetric (three-dimensional) component, therefore differences in warming between parts of the globe will cause convective transfer, even if poorly modelled, which is not likely to be expressed in an initial temperature change. I’m sure each modelling team will have their own unique method of implementing this, and the result must also be a function of resolution. In short, the models have spatial and temporal responses – emergent phenomena during run-time. So unless we know, line for line, each algorithm used during simulation, and what the experimental setup is, I can’t see how one can answer the apparent paradox.

On a general point your posts do seem a little adversarial. Furthermore, you do – only sometimes – appear to dismiss some of Willis’ arguments on weight of qualifications rather than the strength/weakness of the arguments.

Since you are for ever challenging me to say what is wrong with your models, I am prepared to do this but I will write it in a way that the majority of people who read this blog can understand. However, I am only prepared to do so on the understanding that my response will be published – I see no reason why I should do a significant amount of work simply to have it binned.

If you truly do know something “wrong with {Willis’} models then please “say what is wrong”. I and others would like to know of it.

However, you refuse to post such criticism but, instead, you keep writing snark and insults, and you feign offence when Willis replies in kind. That is not helpful to anybody, and it is certainly not informative of Willis’ models.

I can only think of two reasons for your suggestion that your “work” may be “binned”.

You may think the mods. will censor a post or post(s) in this thread. Well, that is implausible, but you could test it by posting a criticism of Willis’ work in this thread and seeing what happens.

Alternatively, you are demanding the right to provide a head post article which our host will guarantee to publish on WUWT in an unabridged form whatever the contents and quality of the article. That demand is unreasonable by any standards. Also, you could submit your article for publication on WUWT and if it is rejected then post it on a blog of your own creation and post a link to it on WUWT.

If you have a criticism of Willis’ work then please provide it. And please stop pretending you could provide such a criticism but you won’t because the “bin” may eat your homework.

I think it’s implicit in what RC has said. I don’t wish to put words into his mouth, but in order for Willis to be correct he has to make a sweeping assumption about how the models work – he could be right. And his “radiation-balance” approach is fine if the models follow the logic expressed above.

But the models, as far as I understand them, are not just expressing a simple physical model where temperature responds directly to forcing and from this we have a series of mathematical constructs. Again, the models are likely to have local physical models that govern how the system responds locally, and these will affect the rate of warming.

Model 1 and Model 2 will use the extra energy (expressed as a net change in forcing if you wish) in the system for different types of work and on different time scales. Has Willis looked into what physical models the different modelling groups try to emulate and how they implement these approaches? I guess that is what RC is asking.

I’m not saying this is what is going on, but I agree with RC that the above post, while good and engaging, needs important caveats and RC shouldn’t be shouted down for merely questioning the simplicity presented – which one could see as a plus.

I’m not saying this is what is going on, but I agree with RC that the above post, while good and engaging, needs important caveats and RC shouldn’t be shouted down for merely questioning the simplicity presented – which one could see as a plus.

Sorry, but “questioning the simplicity presented” is not valid criticism (especially when accompanied by snark and insults as provided by RC Saumarez).

An explanation of why “the simplicity presented” is inadequate or misleading would be valid criticism.

All models are simplified representations of reality. Indeed, they are constructed to provide a simplified framework which aids understanding of some aspect of reality. For example, a cow may be modeled as a sphere with a surface area similar to that of a real cow. This model may be useful for understanding how a cow’s metabolic rate affects its surface temperature. If the ‘spherical cow’ model is only used for metabolism and surface temperature considerations then it is not a valid criticism of the ‘spherical cow’ merely to say it is too simple because it does not have legs. But study of a cow’s movements requires a model which includes legs and, therefore, if the model is to assess the cow’s movements then it would be a valid criticism to say the model does not include legs.

In other words, a good model is as simple as possible for its purpose but not too simple for that purpose. Saying a model is simple is a truism of no value and is not a valid criticism. But it is a valid criticism to say a model is too simple because it omits a factor which is significant to whatever the model attempts to emulate.

Please note that you did not quote a valid criticism from RC Saumarez but said what you think is “implicit in what RC has said”.

If RC Saumarez has an explicit criticism he needs to make it and not provide silly excuses for throwing insults instead of stating the explicit criticism. Nobody has “shouted down” his arguments and explanations. Willis has rightly “shouted down” the insults and disparaging remarks which RC Saumarez has provided instead of arguments and explanations of what he claims would be a valid criticism if he were to explicitly state it.

I think the issue is – and it is only a suggestion as to why Willis’ approach might be wrong – that until you know exactly what each model is doing and how it deals with things like conduction and convection, he’s second-guessing why the paradox exists. In short – and this is why I agree with RC’s point about a “naive” argument, though I prefer “simple/elegant” – you’re starting with the end point and drawing a conclusion without trying to back-engineer what assumptions/processes the models are making, all the while assuming that the additional energy in the models does the same work irrespective of model.

It’s like trying to work out why two cars (of equal weight) travel different distances on the same volume of petrol, without knowing anything about the engine and only having the volume of petrol and distance traveled available.

I have made a number of criticisms of Willis’s models.
1) They do not give the correct autocorrelation functions
2) There is no consideration of non-linearity
3) The statistical characterisations are incorrect.

This is generally met with abuse and is not addressed.

I was asked to put up or shut up on a number of occasions and asked by you and others to justify my criticisms. I am perfectly happy to do so. However, since this involves some graphs and maths it is not easy to do on a simple response here.

I was urged to justify myself. It is a matter of record that I wrote a response on the pitfalls of modelling and sent it to WUWT. This was not published.

If someone posts what he claims to be an intellectual construct on this blog, has editorial control, preferential ability to post answers but does not address the issues made by myself and others, this is not “science” as claimed by WE.

You will note that WE produces models on a weekly basis and claims that they work as well as anything else in the field. Does this not seem to be a rather extreme claim?

Your comment and figure are consistent with my hypo and comment from above:

So atmospheric CO2 LAGS temperature at all measured time scales.*

So “climate sensitivity”, as used in the climate models cited by the IPCC, assumes that atmospheric CO2 primarily drives temperature, and thus assumes that the future is causing the past. I suggest that this assumption is highly improbable.

Regards, Allan

* This does not preclude the possibility that humankind is causing much of the observed increase in atmospheric CO2, nor does it preclude the possibility that CO2 is a greenhouse gas that causes some global warming. It does suggest that neither of these phenomena is catastrophic or even problematic for humanity or the environment.

In this reply I quote and then address each of your points; if I have missed any, that is not intentional.

You say you have made a number of criticisms of Willis’s models which you number 1 to 3.

1) They do not give the correct autocorrelation functions

That is an assertion and not a criticism.
Why is the determination of Willis wrong?
What would be the correct autocorrelation functions and why?

2) There is no consideration of non-linearity

That is also an assertion and not a criticism.
Why not assume linearity?
What form should be considered and why if not linearity?

3) The statistical characterisations are incorrect.

That is merely another assertion and not a criticism.
Why are they incorrect and how?
What would be correct statistical characterisations?

After listing those assertions, which you present as criticisms although they are not, your post says

This is generally met with abuse and is not addressed.

No, you have been repeatedly abusive and demeaning of Willis. Repeatedly, you have made the assertions that you have pretended are criticisms of Willis’s work (n.b. they are assertions and not criticisms), and then claimed your assertions prove Willis is incompetent. Typically you have accompanied that with an appeal to authority and compounded the offense by citing the authority as being yourself!

And you complain when Willis replies to your barrage of abuse with the contemptuous put-down it deserves. Contemptible behaviour deserves to be treated with contempt.

Please note that Willis is not one of your students. He has no need to cower in fear of what grades you award. Hence, he has no reason to soak-up your abuse.

I was asked to put up or shut up on a number of occasions and asked by you and others to justify my criticisms. I am perfectly happy to do so. However, since this involves some graphs and maths it is not easy to do on a simple response here.

I accept your “graphs and maths” difficulty. However, although I am not a computer buff, I can think of ways to overcome that difficulty, so I am surprised it is beyond your capabilities. Perhaps a conversation with one of your students would overcome it?

Anyway, if you are unable to express your point in words then I suggest that inability perhaps indicates you lack sufficient understanding of your point; e.g. Robert Brown provides cogent posts critical of models without using “graphs and maths”.

Simply, if you cannot justify your assertions then you may want to consider the wisdom of making the assertions. In this context, I note your saying

I was urged to justify myself. It is a matter of record that I wrote a response on the pitfalls of modelling and sent it to WUWT. This was not published.

I take your word for that but it is not clear what you are saying.

If you are saying a post was censored then I am extremely surprised. Mods often snip a post but always show where and when it was snipped together with a brief explanation of why. I have been given ‘time out’ from WUWT as punishment (most recently in the last month) for ‘overstepping the mark’, but I have never been censored.

If you are saying you submitted an article that was rejected for publication on WUWT then ‘tough luck’. Try to do better and you may have more success in the competition for publication of an essay on WUWT.

On the basis of your posts on this and a previous thread I suspect your attempt “to justify {your}self” may have been an essay which was an abusive tirade directed at Willis. If my suspicion is anywhere near correct, then your essay did not warrant publication on WUWT.

If someone posts what he claims to be an intellectual construct on this blog, has editorial control, preferential ability to post answers but does not address the issues made by myself and others, this is not “science” as claimed by WE.

Willis does address issues put to him. Indeed, you claim to be offended that he replies to your abuse in the manner it deserves. In light of his replies to substantial points, I am certain he would address substantial points if you were to make any instead of your assertions.

You conclude by asking me

You will note that WE produces models on a weekly basis and claims that they work as well as anything else in the field. Does this not seem to be a rather extreme claim?
No, it is not “a rather extreme claim” if the models used “in the field” are rubbish, and I know they are.

My knowledge of this is far from unique because anybody who undertakes such a study discovers the climate models are rubbish. I found that, Kiehl found that and, e.g., in this thread Engelbean says he found that.

1) Autocorrelation functions. These do not conform to an ARMA(1) model. I have posted this on Judith Curry’s blog following the controversy between Richard Tol and Ludeke et al.: http://judithcurry.com/2012/02/19/autocorrelation-and-trends/
I have pointed this out several times, the last after being told to grab some data and do some calculations.

The model proposed by Eschenbach cannot possibly be right on the basis of the well known auto-correlation properties of the temperature signal.
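To make the ARMA(1) point concrete for readers: an AR(1) process has an autocorrelation function that decays geometrically, rho_k = phi^k, so an observed ACF that decays much more slowly is evidence against a single-lag description. A minimal Python sketch on a synthetic series (NumPy assumed; this is illustrative, not temperature data):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation of a 1-D series at lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:x.size - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

# Simulate an AR(1) process: x[t] = phi * x[t-1] + noise
rng = np.random.default_rng(0)
phi, n = 0.8, 100_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

acf = sample_acf(x, 5)
# Theory for AR(1): acf[k] is close to phi**k, i.e. geometric (fast) decay.
# A Hurst-like long-memory series would decay much more slowly.
```

Comparing a measured ACF against the phi^k curve is the simplest version of the test being argued over here.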

2) Non-linearity.
The structure of the ACF points to either a multicompartment model or a Hurst process which is non-linear. There are many non-linear process in the climate system, water condensation, ice formation for example or turbulent mixing in the atmosphere or oceans. One would expect the system to be non-linear a priori and, if one were going to apply a linear model to data, one has to show that it is indistinguishable from data.

3) Statistics.
I have pointed out on several occasions that R^2 is an insufficient test of a model. This is well known. Although WE claims to have validated his models on this basis, based on my experience I am highly suspicious of whether he has done it correctly. Specimen calculations show that the errors involved are very large, although WE claims that this is not the case. I would agree that the statistical characterisation of model results is a specialised field that requires training and experience.
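The point that R^2 is an insufficient test is easy to demonstrate: a straight line fitted to curved data scores a high R^2 while leaving strongly autocorrelated residuals, structure that R^2 is blind to. A sketch with synthetic data (NumPy assumed; not any of WE’s actual series):

```python
import numpy as np

# Truth is curved; we fit a straight line anyway.
x = np.linspace(0.0, 1.0, 200)
y = x ** 2

# Ordinary least squares for y ~ a*x + b
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef

r2 = 1.0 - resid.var() / y.var()             # high, despite the wrong model
r = resid - resid.mean()
rho1 = np.dot(r[:-1], r[1:]) / np.dot(r, r)  # lag-1 residual autocorrelation
# r2 comes out above 0.9 yet rho1 is close to 1: the residuals are not noise,
# so R^2 alone says nothing about whether the model form is right.
```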

I, and others, have commented that WE’s mathematical development is often unintelligible and you end up trying to guess what he is trying to do. When he publishes the R code, one can read between the lines. This is hardly “science”.

I have made serious, objective criticisms of an intellectual approach. I have made them on a number of occasions but they have never been addressed. This is certainly not being a troll, as you called me. If I were teaching students I would not give them bad grades or abuse them, but would hope that they would approach the problems above rigorously.

I have tried to make my criticisms known after being challenged to write a post, which I did. OK, it wasn’t published; you win some and you lose some. If you read my post in response to WE’s challenge, after I criticised his conclusions on correlation and filtering (https://wattsupwiththat.com/?s=saumarez),
I do not think many people would regard the tone as offensive, and the response that I sent recently was in the same vein. Frankly I don’t really care about this any longer, I have better things to do.

1) Autocorrelation functions. These do not conform to an ARMA(1) model. I have posted this on Judith Curry’s blog following the controversy between Richard Tol and Ludeke et al.: http://judithcurry.com/2012/02/19/autocorrelation-and-trends/
I have pointed this out several times, the last after being told to grab some data and do some calculations.

The model proposed by Eschenbach cannot possibly be right in the basis of the well known auto-correlation properties of the temperature signal.

I say again, Richard, what the temperature signal does is meaningless. My model is not a model of the temperature signal. It is a model of the GCM climate models temperature output, and NOT A MODEL OF THE TEMPERATURE SIGNAL. Sorry to shout, but I’ve said this several times.

Following your raising the issue of the ACF in a previous thread, I showed the ACF of both the model outputs and my emulation of those outputs here. Actual data. As far as I know, you haven’t commented on that data at all. Here it is again (figure not reproduced):

2) Non-linearity.
The structure of the ACF points to either a multicompartment model or a Hurst process which is non-linear. There are many non-linear process in the climate system, water condensation, ice formation for example or turbulent mixing in the atmosphere or oceans. One would expect the system to be non-linear a priori and, if one were going to apply a linear model to data, one has to show that it is indistinguishable from data.

Again, please note that my model is NOT a model of the climate system, so “non-linear processes in the climate system” mean nothing to my model. It is a model of the GCMs, not a model of the climate.

Would a multi-compartment model give a better fit to the GCMs? Yes, it would … just not much better. I’ve run those kinds of models as well, and typically there is a marginal improvement in the emulation. However, the match with a single-compartment model is so good that the increase in complexity is not warranted.
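For concreteness, a generic single-compartment (one-box) response of the kind under discussion looks like the following: the temperature anomaly relaxes toward lambda times the forcing with an e-folding time tau. This is a sketch of the general form with made-up parameter values, not necessarily Willis’s exact equation:

```python
import math

def one_box(forcing, lam=0.5, tau=3.0, t0=0.0):
    """Single-compartment lagged response: at each step the temperature
    anomaly relaxes toward lam * forcing with e-folding time tau (in steps)."""
    a = 1.0 - math.exp(-1.0 / tau)
    t, out = t0, []
    for f in forcing:
        t += (lam * f - t) * a
        out.append(t)
    return out

# Step forcing of 3.7 (roughly a CO2 doubling): the response rises with a lag
# and converges to the equilibrium value lam * 3.7.
resp = one_box([3.7] * 100)
```

A second compartment with a longer time constant can be chained onto this output in the same way; as noted above, the improvement in emulating GCM output is typically marginal.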

I also don’t understand you saying that you “expect the system to be non-linear”, when above you said:

Also, consideration of the non-linearities in the climate system suggest that it would be very unlikely to behave as non-linear system.

In any case, as I said, what the climate system itself does is meaningless to my model.

3) Statistics.
I have pointed out on several occasions that R^2 is an insufficient test of a model. This is well known.

I would agree that the statistical characterisation of model results are a specialised field that requires training and experience.

So … give us the benefit of your training and experience. Whenever I ask you for that, you say it will take two months to get back to me, or six months to understand it …

I, and others, have commented that WE’s mathematical development is often unintelliglible and you end up trying to guess what what he is trying to do. When he publishes the R code, one can read between the lines. This is hardly “science”.

“I, and others,” ??? You have asserted above, and for the first time as far as I can recall, that you can’t understand my mathematical logic. You also assert that others can’t follow my math either. Note that you haven’t provided a single fact to back up your claim.

Throughout all of my work, I have been quite rigorously transparent as to my data and my code. I have explained and laid out my math and my logic. I have provided spreadsheets and programs in R. And I have explained (or done my best to explain) exactly what I’ve done when people have asked specific questions.

So if you don’t understand my math, if you find it unintelligible, you should ask a specific question. Which part of it can’t you follow?

I have made serious, objective criticisms of an intellectual approach. I have made them on a number of occasions but they have never been addressed.

Here’s a typical one of your serious, objective criticisms of an intellectual approach, your claim that my math is wrong, presented in its entirety so folks can see the weight of your keen, penetrating analysis and clear exposition of your findings:

As an aside, if you are dealing with distributions of variables, as you discuss in you “cold Equations”, it seems to me that your mathematics does not capture this.

Sorry, but that says nothing.

RC, I need specifics. Not intellectual handwaving. Not “it seems to me”. If I’ve said something that you disagree with, QUOTE MY WORDS so we can all understand what you are referring to, and then point out exactly what I’ve said that is wrong. If it’s math, quote the equations I used and tell us where I went off the rails.

You repeatedly make vague (and often unpleasant) assertions that I’m wrong … but that is very different from actually demonstrating exactly where and why I’m wrong.

This is certainly not being a troll as you called me, If I were teaching students I would not give them bad grades or abuse them, but would hope that they would approach the problems above rigorously.

I have tried to make my criticisms known after being challenged to write a post, which I did. OK, it wasn’t published; you win some and you lose some. If you read my post in response to WE’s challenge, after I criticised his conclusions on correlation and filtering (https://wattsupwiththat.com/?s=saumarez),
I do not think many people would regard the tone as offensive, and the response that I sent recently was in the same vein.

I read your post cited above at the time, and I commented at the time:

Willis Eschenbach says:
April 9, 2013 at 2:51 pm

My thanks to Richard for a very understandable explanation of the Fourier transform. As a self-taught mathematician, such expository work is very valuable to me.

However, there was nothing in there that was a “criticism” of my work. You never mentioned my work at all, either in the head post or in the comments.

My problem is that you say you wrote it in response to my errors (in April of this year, 2013), to show me where I was wrong … but I can’t find a single comment from you in all of 2012 or in early 2013, so I’m not clear what errors you are referring to. I also don’t know which of my ideas you think was shown wrong by that exposition (and it was a good one) of Fourier Analysis.

As to your allegation above that I censored your recent post, I fear you give me far too much credit. I have absolutely no say as to what Anthony publishes, nor do I want any. He doesn’t discuss his decisions with me either before or after the fact.

Frankly I don’t really care.about this any longer, I have better things to do.

You mean you’re leaving? I won’t stand in your way. To date, your contribution to the discussion has been to re-iterate, over and over, endlessly, that you think I’m wrong. OK, got it. I understand you think I’m wrong.

But until you can point to exactly where I’m wrong about some one thing, and quote the wrong thing I said, and explain to us why I’m wrong about that one thing, what you think is of no interest to me.

You then refute that quoted statement of RC Saumarez (n.b. not a statement of mine), though the heading could suggest you were refuting me, by replying

I say again, Richard, what the temperature signal does is meaningless. My model is not a model of the temperature signal. It is a model of the GCM climate models temperature output, and NOT A MODEL OF THE TEMPERATURE SIGNAL. Sorry to shout, but I’ve said this several times.

Yes, I know. And I have pointed that out myself e.g. in this thread at October 2, 2013 at 1:42 pm where I concluded

Simply, the models are basically curve fitting exercises and, therefore, it is not surprising that Willis can emulate their behaviour(s) with a curve fitted model.

Reply to Greg Goodman: ∆T = lambda ∆F is perfectly fine – if the system has come to equilibrium after a change in forcing. During the modest temperature drop after a volcanic eruption, both forcing and temperature are changing; so this equation isn’t applicable (and one must integrate).
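The need to integrate can be sketched numerically with a one-box relaxation: during a short volcanic-style pulse the temperature lags the forcing and never reaches the equilibrium value lambda times Delta-F, so applying ΔT = lambda ∆F directly to the transient underestimates the equilibrium sensitivity. Illustrative lambda and tau values only:

```python
import math

lam, tau = 0.5, 3.0                # illustrative sensitivity and lag (steps)
a = 1.0 - math.exp(-1.0 / tau)
# Volcanic-style pulse: forcing of -3 units for 2 steps, then back to zero.
forcing = [-3.0, -3.0] + [0.0] * 28
t, temps = 0.0, []
for f in forcing:
    t += (lam * f - t) * a         # integrate the one-box equation step by step
    temps.append(t)
# Equilibrium lam * (-3) would be -1.5, but the transient never gets there,
# and the temperature recovers only gradually after the forcing ends.
```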

In replying previously to Willis, I also realized that ∆T = lambda ∆F won’t be useful if the relationship between (functional relating) forcing and temperature is poorly behaved (chaotic) or the temperature change is too big. I don’t think it is likely that either of these caveats seriously interferes with applying the concept of climate sensitivity to our current concerns about GHGs, but I wouldn’t deem other opinions wrong. These caveats appear to apply to other situations: 1) Warming might eventually reduce the height of the Greenland ice sheet, warming surface temperature (1 degC/166 m – lapse rate) and decreasing albedo. These positive feedbacks don’t go away the as soon as the GHG forcing disappears. 2) Ice ages appear to develop slowly and end relatively suddenly, even though orbital forcing is a composite of several time-symmetric sine curves.