Fake Forcing

Willis Eschenbach has a new post on WUWT. It’s actually a follow-up to this one, which we’ll look at in detail. In it he claims that the temperature result from the GISS modelE is just 0.3 times the forcing, i.e., it’s simply a linear function of the input forcing. That one is a follow-up to this one, in which he discovers that the model output is not simply a linear function of the input forcing. Yet in its follow-up he finds that it is! Problem is, the “forcing” he uses to get this result is a fake.

Eschenbach seems to think that if a model responds roughly linearly to climate forcing, something must be wrong. I expect the opposite. The changes in forcing due to natural and man-made influences are small compared to the total forcing, and the changes in temperature are small compared to the absolute temperature (on the Kelvin scale). Based on fundamental energy balance for the globe as a whole, I’d be quite surprised if we were far enough outside the “linear regime” that the response — both for a good model and for the real climate system — was not linear.

But I would not expect the real climate system to show simple linear response to forcing. I would expect the physics to be well-approximated by linear response, with a zero-dimensional energy-balance model as a first approximation. But that doesn’t mean linear response to the forcing.

You can get the GISS modelE results here. I changed the “mean period” to 1 in order to avoid smoothing, and the “output” to “Formatted page w/ download links” to get a link to actual data, then I took 1-year averages of the model output to get annual values. The forcing data is here, but it’s not what Eschenbach used. He tried, but the fit wasn’t very good. Here’s a simple linear fit of forcing to GISS modelE output:
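A fit like this is just ordinary least squares of temperature against forcing. A minimal sketch, using made-up stand-in numbers (the real series come from the GISS links above):

```python
import numpy as np

# Hypothetical stand-ins for the real series: annual net forcing (W/m^2)
# and modelE global mean temperature anomaly (deg C).
forcing = np.array([0.1, 0.3, -1.2, 0.5, 0.8, 1.0, 1.4, 1.6])
temp    = np.array([0.05, 0.1, -0.15, 0.15, 0.25, 0.3, 0.45, 0.5])

# Ordinary least squares: temp ~ slope * forcing + intercept
slope, intercept = np.polyfit(forcing, temp, 1)
fit = slope * forcing + intercept
```

With the real data, the fitted line is what's plotted above.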

The difficulty is that the forcing from volcanic eruptions, which doesn’t persist for very long, doesn’t have as strong an impact as the other forcings which evolve on a much longer time scale. To prevent the volcanic excursions from straying too far from the temperature, the linear fit suppresses the forcing coefficient. But then the slower forcing is also suppressed, so it fails adequately to model the long-term temperature change. It’s almost as though we need the model to respond on two different time scales, including both a “fast response” and a “slow response.” But wait — if the model does that, it might be following some actual physics.

In the first post, Eschenbach added a linear time trend in addition to linear response to forcing, which allowed his theory to track the long-term temperature change. But such a trend is purely an ad hoc addition, there’s no physical (or mathematical) justification. Then he accused GISS of “excessive use of forcing” and of essentially fabricating forcing data to make their model go. They didn’t — but Eschenbach does.

In the follow-up he tried multiple regression against all the separate forcings that GISS uses. This led him to conclude:

After some reflection and investigation, I realized that the GISSE model treats all of the forcings equally … except volcanoes. For whatever reason, the GISSE climate model only gives the volcanic forcings about 40% of the weight of the rest of the forcings.

This is based on his multiple regression, as he says:

I looked further, and I saw that the total forcing versus temperature match was excellent except for one forcing — the volcanoes. Experimentation showed that the GISSE climate model is underweighting the volcanic forcings by about 60% from the original value, while the rest of the forcings are given full value.

Then I used the total GISS forcing with the appropriately reduced volcanic contribution …

He further adds:

So I took the total forcings, and reduced the volcanic forcing by 60%. Then it was easy, because nothing further was required. It turns out that the GISSE model temperature hindcast is that the temperature change in degrees C will be 30% of the adjusted forcing change in watts per square metre (W/m2).

Let me translate: the actual forcing didn’t fit his preconception, so he changed it to a fake forcing.

What he doesn’t do is make the connection: that the short-lived volcanic impulses have reduced impact, not because the GISS modelE treats them differently from all the others, but because they are short-lived and there’s more than one time scale for the model’s climate system response. There is for the real climate system, too — a potent argument for the fundamental soundness of the GISS modelE.

I also took the GISS forcing data, translated it into a fake forcing by reducing the volcanic component to 40% of its actual value, and fit the modelE temperature data to that:

Compare to Eschenbach’s:

It’s a better fit — not because Eschenbach is right about the GISS modelE treatment of volcanic forcing, but because reducing the volcanic component emulates the reduced impact of short-lived forcing components, i.e., the “fast response” part of the climate system.

In his latest post, he uses a model suggested by Paul_K in a comment on his previous post. It’s actually just a discretized version of the solution to the zero-dimensional 1-box energy balance model used in Schwartz 2007. This model is that the temperature is influenced by time-dependent forcing via

$$\frac{dT}{dt} = \frac{\lambda F(t) - T}{\tau},$$

where $\lambda$ is the climate sensitivity and $\tau$ is a “time constant” for the climate system as a whole. Instead of creating fake forcing by keeping only 40% of the volcanic forcing, Paul_K creates a different fake forcing by keeping only 72.4% of the volcanic forcing, then fitting the “exponentially smoothed” forcing to the GISS modelE data. By exponentially smoothing the forcing with a given time constant, you effectively solve the 1-box energy balance model — but the forcing data used are still a fake. I can do that too:
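The “exponential smoothing” is just the discrete solution of the 1-box model. A minimal sketch — the sensitivity and time constant here are illustrative round numbers, not Paul_K’s fitted values:

```python
import numpy as np

def one_box_response(forcing, lam, tau, dt=1.0):
    """Discretized 1-box energy balance model:
    T[i+1] = T[i] + dt * (lam*F[i] - T[i]) / tau,
    i.e. exponential smoothing of lam*F with time constant tau."""
    T = np.zeros(len(forcing))
    for i in range(len(forcing) - 1):
        T[i + 1] = T[i] + dt * (lam * forcing[i] - T[i]) / tau
    return T

# Illustrative values only: step forcing of 1 W/m^2,
# sensitivity 0.3 deg C per W/m^2, time constant 5 years
F = np.ones(50)
T = one_box_response(F, lam=0.3, tau=5.0)
# T relaxes toward the equilibrium lam*F = 0.3 once t >> tau
```

A short-lived spike in `F` (a volcano) produces only a small, quickly-decaying blip in `T` — which is exactly why the volcanic forcing appears “underweighted” without anyone tampering with it.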

Compare to Paul_K’s:

Bottom line: if you put in enough parameters, and fake the data because otherwise your model isn’t very good, you can get an excellent fit to the GISS modelE output. But it’s nothing but curve-fitting; the work of Willis Eschenbach and Paul_K is an outstanding example of mathturbation.

There’s no justification for them to fake the forcing, physical or mathematical. There’s no investigation of “effective forcing” to see how different forcings might actually have a different impact (in part because of feedbacks). That’s an effort which has been pioneered by James Hansen and colleagues. To contribute meaningfully, you’d have to do some actual science other than make an ad hoc change to the forcing data so you can impugn the results of somebody’s climate model.

What kind of physical theory — even rudimentary — might make just as good a fit? There are two major flaws with Schwartz’s model. One is that his method of diagnosing the time constant is flawed. Seriously flawed — even if his model were correct his method would give the wrong result. The other major flaw is that the real climate system has more than one “time constant.”

Maybe we could do better with a 2-box energy balance model. This has two time constants, $\tau_1$ and $\tau_2$, one of which represents the “fast response” of the part of the climate system with less heat capacity, the other the “slow response” from the more sluggish part with more heat capacity. I too can get an outstandingly good fit to the modelE temperature, using a 2-box model with time constants of about 2 and 45 years:
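A 2-box response is just the sum of two 1-box responses driven by the same forcing. A sketch using the 2- and 45-year time constants above — the per-box sensitivities are illustrative, not fitted values:

```python
import numpy as np

def box_response(forcing, lam, tau):
    # Exponential smoothing of lam*forcing with time constant tau
    T = np.zeros(len(forcing))
    for i in range(len(forcing) - 1):
        T[i + 1] = T[i] + (lam * forcing[i] - T[i]) / tau
    return T

def two_box_response(forcing, lam_fast, lam_slow,
                     tau_fast=2.0, tau_slow=45.0):
    """Sum of a fast box and a slow box, sharing the same forcing."""
    return (box_response(forcing, lam_fast, tau_fast)
            + box_response(forcing, lam_slow, tau_slow))

F = np.ones(200)  # step forcing of 1 W/m^2
T = two_box_response(F, lam_fast=0.15, lam_slow=0.45)
# T jumps quickly toward ~0.15, then creeps toward ~0.6
# on the multi-decade time scale
```

The fast box handles volcanic spikes; the slow box carries the long-term trend. No fake forcing required.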

This fit uses the total net forcing, unmodified. It’s not based on changing the data, it’s based on physics. What a concept.

In my opinion, the fact that the GISS modelE output is so close to that of a 2-box energy balance model is a powerful argument for its correctness. The GISS model is built on the laws of physics, applied to detailed interactions over the entire globe, which enables it to simulate a helluva lot more than just global average temperature — like the geographical distribution of temperature changes, patterns of precipitation, wind, clouds, snow and ice. But global average temperature should be well approximated by fundamental considerations of energy balance, albeit with more than one time constant. If the GISS model did not fit the 2-box (or 3-box or more) energy balance model, then I’d be surprised.

We can even fit the 2-box model to actual (not model output) temperature data, and get a real-world estimate of climate sensitivity as well:

Doing so indicates sensitivity of 0.64 deg.C/(W/m^2), or 2.4 deg.C per doubling of CO2. What a surprise.
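The per-doubling figure is just the W/m² sensitivity multiplied by the canonical ~3.7 W/m² forcing for doubled CO2:

```python
sensitivity = 0.64   # deg C per W/m^2, from the fit above
f_2xco2 = 3.7        # canonical forcing for doubled CO2, W/m^2

per_doubling = sensitivity * f_2xco2
print(round(per_doubling, 1))  # -> 2.4
```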

37 responses to “Fake Forcing”

All of the above simulations seem to fail miserably at reproducing the 1940 to 1978 cooling. Why is that?

[Response: There is no “1940 to 1978 cooling.” Maybe 1945 to 1950, followed by a flat period until about 1975. The models flatten around that time too, but not as much as observations. The main model-observation discrepancy I see is from about 1935 to 1945.]

The ‘discrepancy’ isn’t limited to just this one model. It featured prominently in the IPCC AR4 Summary for Policy Makers. The figure linked just below clearly shows that mid-century hump as outside the model range:

What’s missing is anthropogenic aerosols, which were greatly reduced when clean air legislation came into effect in the 1970s, thus revealing the underlying trend that was always there but had been masked by aerosol cooling.

The late 40s-late 70s were cooler than the early 1940s but it is clear that all the drop (if it really happened) was during 1945-7. From there we see a slight upward trend until we reach the more rapid recent increases.

It must be some kind of ensemble, because there should be more year-to-year temperature variation in the model output if it were a single model run.

As for the importance of multiple time scales, anyone who has done any meaningful modeling in any earth science knows that different processes act at different time scales. This is one of the key problems in modeling nature: you often have to run your model at the time scale of the fastest key process, which can greatly slow down a model simulation.

The idea that the earth’s climate system could have one time scale is laughable. I’m rather surprised that Schwartz managed to get away with a one-box model in the first place. As I recall, one of the first climate model textbooks I ever read (Henderson-Sellers 1987, “A Climate Modelling Primer”) described the multi-box model approach in a chapter about simple climate models.

[Response: I guess you’re not aware that clueless crackpot theories about global warming being due to ocean cycles, without any understanding of the fact that heat is *energy* and it doesn’t come from nowhere, exceed the “stupid threshold.”

Since you believe equally idiotic claims that there’s no evidence of sulfate aerosol concentration, look here. As for the timing of the plateau in sulfate emissions, it’s neither mystery nor unbelievable coincidence — it’s the clean air act.

I made the mistake of reading some of the comments on Willis’ thread. Ugh. The usual “Warmists are now exposed” tripe (if the deniers were correct about what WTFWT and the other denier blogs have “accomplished”, by now climate science would be less than naked – whatever that might mean) and folks demanding to see “the equation” that’s in the model. Plus, one fellow who doesn’t do “homework” (demanding, essentially, that the entire literature on climate models be replicated at WTFWT because he’s lazy). The capper is Willis’ own estimation of his effort:

“What I have shown is that [climate models] are not chaotic or complex in their essence—they are purely and mechanistically linear.”

He clearly understands absolutely nothing about the climate system. He thinks he can replace a GCM with Excel! I’d recommend he read Tamino’s excellent dissection of his simplistic take, but we all know he’s too cowardly to come here – reality is too far from his safe comfort zone over at WTFWT.

Of course, all he has shown is that climate, unlike weather, isn’t chaotic and unpredictable.

He has also shown that simple models can produce the same basic results as much more complex models – something Tamino demonstrated years ago.

It doesn’t surprise me that deniers don’t understand the results of their own work. They can’t see their errors, why should they spot any useful results they come up with? All the “work” is just noise they use as an excuse to repeat their preconceptions.

Like, say, the spatial structure of warming, or the vertical temperature profiles, or the evolution of sea ice cover, or precipitation changes, or–well, you get the picture. Functionally, “Model Es” is to GISS E as a child’s toy spatula is to a fully-equipped kitchen. It’s telling that he doesn’t even consider any other functionality in the course of the three posts: for him, it’s not about understanding the climate, it’s about the ‘bottom line.’

By the way, Eschenbach also slips in a blatant falsehood–the idea that GISS E contains “something that acts just like a built-in inherent linear warming.” Normal model ‘spin-up’ runs out to equilibrium, as I understand it.

(And how does he even reach this conclusion, given that his whole thesis is that the model output ‘slavishly’ follows forcings? Seems like it’s his argument that is “excessively forced.”)

He does raise a question that interests me, though: why do some forcings (land use or black carbon, for instance) “freeze” to a horizontal line at 1990? Are we still awaiting someone else to publish something? Or is there some other reason that that data apparently hasn’t been updated since then?

I thought there was a serious change in the way SSTs were measured during/around WWII, and that this change affects all the measurement data from that period. It is expected that once this measurement bias is properly accounted for, this unexpected cooling period will disappear. AFAIK, no global reanalysis data has been published so far.

[Response: AFAIK, HadCRU is currently preparing a revised SST record. Whether it alters the pre/during WWII era data significantly or not, remains to be seen.]

Would that be because of buckets vs. intakes (May 2008)? It seems to have taken a long time for the SST corrections to be made, but I suppose lots of checking of which ships contributed measurements and which method they used at various times was essential.

Only a few SST measurements were made during wartime, and almost exclusively by US ships. Then, in the summer of 1945, British ships resumed measurements. But whereas US crews had measured the temperature of the intake water used for cooling the ships’ engines, British crews collected water in buckets from the sea for their measurements. When these uninsulated buckets were hauled from the ocean, the temperature probe would get a little colder as a result of the cooling effect of evaporation. US measurements, on the other hand, yielded slightly higher temperatures due to the warm engine-room environment.

…

Climate researchers can now start setting the twentieth-century temperature record straight. The abrupt drop in 1945 will then probably disappear, but what the corrected time series will look like is not yet clear.

Yes, the bucket vs. intake water thing is what needs to be corrected for.

I’ve always assumed that the reason why the Brits stuck to buckets for so long was that back when Britannia ruled the waves, they were taking temperature and other sea data and of course being sailing ships, had no cooling water intakes. They used buckets. Presumably the navy favored continuity of the sampling technique.

Good Tom Fuller story … he insists that the SST data is totally unreliable because he claims, as a petty officer in the US Navy, to have been in charge of his ship’s detail which took bucket temperature samples during the Vietnam War, and that they intentionally screwed up the data by messing around with the thermometers, sample depth, blah blah.

It was pointed out to him that even during WW II, about three decades earlier, the US Navy recorded temps at the cooling water intake port.

In other words, Fuller was being economical with the truth.

He still swears the US Navy was still taking bucket samples and never used intake data as late as the Vietnam War, despite the Navy’s insistence to the contrary.

Is it possible that some US ships used buckets, following the Brit tradition, while other, perhaps newer, ships used intakes? This might explain the delay in updating the temperature record, as you would need to be sure which measurements need adjusting. Of course, if the US Navy says that all its ships have used intakes since Year X, that’s not an issue.

I expect the corrections to reduce the early 40s peak and raise the late 40s trough, but I wonder if they will affect the record into the 60s and 70s?

Well, it was also pointed out by a navy vet that there were other aspects of Fuller’s story of his time in the Navy that shall we say … did not bear scrutiny?

AFAIK the US record comes from logs of intake water temperature, period. And Fuller wasn’t claiming to serve on an antique.

On the other hand, his claim that as a petty officer he helped make any data collected useless would be consistent with the level of integrity he shows today.

Remember, too, that the English Navy, which dwarfed the US Navy in numbers until after WW I, has a long, rich history of gathering hydrological data throughout the world going back to the beginning of the 19th century if not a couple of decades before. A history not shared by the other Navies of the world, at least not as consistently or thoroughly.

Multiple time constant effects are ubiquitous, but they’re not necessarily obvious. In electronics, the rising edge of a digital signal (or high speed analog signal) usually has two dominant time constants — a fast time constant that controls how fast the signal rises to 90% of its “final” value, and a slow time constant that controls how long the signal takes to rise from there to its “final” value.

The fast time constant is usually good enough for digital logic, since digital logic switches between a logic “0” and “1” based on thresholds that are nearly always below the domain of the slow time constant. But analog circuits, especially those where high accuracy is required, have to use the long time constant (which is often 10-100x larger than the fast one) to ensure that the circuit settles to the required accuracy level. It’s a serious constraint on how fast you can make high accuracy analog designs, and it’s a hell of a lot of fun (yay math!) to figure out how to design the circuits so that the long time constant is negligible.
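The settling behavior described above can be sketched as the sum of a fast and a slow exponential — the time constants and the 90/10 split here are illustrative, not from any particular circuit:

```python
import math

def step_response(t, tau_fast=1.0, tau_slow=20.0, frac_fast=0.9):
    """Step response where a fast pole handles 90% of the swing
    and a slow pole handles the last 10%."""
    fast = frac_fast * (1 - math.exp(-t / tau_fast))
    slow = (1 - frac_fast) * (1 - math.exp(-t / tau_slow))
    return fast + slow

# Rises to ~90% within a few fast time constants,
# then crawls toward 1.0 on the slow time scale
```

Swap in forcing for the step and temperature for the signal, and this is structurally the same picture as the 2-box climate response in the post.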

I see where McIntyre has picked up the trail. No matter, R or Excel, it’s still amusing to see these two stumbling around like they’ve discovered something amazing. I’d still like to see either one replicate either the models’ decline in Arctic sea ice volume or the PIOMAS analysis…

Phil Jones gave a lecture at our place last week and my understanding was that the new SST record will indeed be removing the dip in temps for at least the ’40 – ’45 period due to the identification of the persistence of ‘bucket-testing’, but it may have been ’40-’50 [sorry not to be more exact, but it had been a long week]

Great post!
I talked about the multiple-time scale issue, and the traps it lays for naive assessment of climate sensitivity from the climate response time scale, in a paper I worked way too long on: http://www.atmos-chem-phys.net/9/813/2009/
Figure 6 shows how you really need multiple heat capacities to get a global climate variability that looks like the real time series.

Haven’t you heard that the stratospheric cooling is adiabatic?
Well, Arctic Polar stratospheric cloud formation is connected to adiabatic cooling, but from there it is obviously not far to conclude that the entire stratospheric cooling is due to adiabatic cooling if one only is in the right frame of mind.

And with some fascinating free parameters, such as scaling fat-tail exponentials to fit the solar/albedo data. And the inherent assumption(s) that climate sensitivity is extremely low for CO2, while very high for solar/albedo. And still (apparently) using HadCRUT3, when HadCRUT4 is out, 1984-1998 (15 years), when lots of other data is available, etc.

… Mathturbation at its worst.

And once again, Eschenbach uses a single-box model – wherein a two-box model that acknowledges multiple climate compartments, with different response times to forcings, would be a better physical match, and not show wildly unequal sensitivities.

“…the sun plus the albedo were all that were necessary to make these calculations. I did not use aerosols, volcanic forcing, methane, CO2, black carbon, aerosol indirect effect, land use, snow and ice albedo, or any of the other things that the modelers claim to rule the temperature. Sunlight and albedo seem to be necessary and sufficient variables to explain the temperature changes over that time period.”

Multiple regression – while ignoring many of the primary factors. Guaranteed to overfit on the parameters that are used – mathturbation minus physics.
