Model Charged with Excessive Use of Forcing

The GISS Model E is the workhorse of NASA’s climate models. I got interested in the GISSE hindcasts of the 20th century after a posting by Lucia over at the Blackboard. She built a simple model (which she calls “Lumpy”) that does a pretty good job of emulating the GISS model results using only the forcings and a time lag. Stephen Mosher points out how to access the NASA data here (with a good discussion), so I went to the NASA site he indicated and got the GISSE results he points to. I plotted them against the GISS version of the global surface air temperature record in Figure 1.

Figure 1. GISSE Global Circulation Model (GCM or “global climate model”) hindcast 1880-2003, and GISS Global Temperature (GISSTemp) Data. Photo shows the new NASA 15,000-processor “Discover” supercomputer. Top speed is 160 trillion floating point operations per second (a unit known by the lovely name of “teraflops”). What it does in a day would take my desktop computer seventeen years.

Now, that all looks impressive. The model hindcast temperatures are a reasonable match both by eyeball and mathematically to the observed temperature. (R^2 = 0.60). True, it misses the early 20th century warming (1920-1940) entirely, but overall it’s a pretty close fit. And the supercomputer does 160 teraflops. So what could go wrong?

To try to understand the GISSE model, I got the forcings used for the GISSE simulation. I took the total forcings, and I compared them to the GISSE model results. The forcings were yearly averages, so I compared them to the yearly results of the GISSE model. Figure 2 shows a comparison of the GISSE model hindcast temperatures and a linear regression of those temperatures on the total forcings.

Figure 2. A comparison of the GISSE annual model results with a linear regression of those results on the total forcing. (A “linear regression” estimates the best fit of the forcings to the model results). Total forcing is the sum of all forcings used by the GISSE model, including volcanos, solar, GHGs, aerosols, and the like. Deep drops in the forcings (and in the model results) are the result of stratospheric aerosols from volcanic eruptions.

Now to my untutored eye, Fig. 2 has all the hallmarks of a linear model with a missing constant trend of unknown origin. (The hallmarks are the obvious similarity in shape combined with differing trends and a low R^2.) To see if that was the case I redid my analysis, this time including a constant trend. As is my custom, I merely included the years of the observation in the analysis to get that trend. That gave me Figure 3.

Figure 3. A comparison of the GISSE annual model results with a regression of the total forcing on those results, including a constant annual trend. Note the very large increase in R^2 compared to Fig. 2, and the near-perfect match of the two datasets.
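For anyone who wants to check this, the fit is just an ordinary least-squares regression of the model temperatures on the total forcing plus a year term. Here is a minimal sketch in Python; the function and variable names are mine, not GISS’s, and the synthetic data in the test stands in for the actual series:

```python
import numpy as np

def fit_linear_emulator(years, forcing, model_temp):
    """Least-squares fit of model temperature on total forcing plus
    a constant annual trend: T ~ a*F + b*year + c."""
    X = np.column_stack([forcing, years, np.ones_like(forcing)])
    coef, *_ = np.linalg.lstsq(X, model_temp, rcond=None)
    fitted = X @ coef
    ss_res = np.sum((model_temp - fitted) ** 2)
    ss_tot = np.sum((model_temp - model_temp.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return coef, r2  # coef = (sensitivity a, trend b, intercept c)
```

With the actual GISSE series in place of synthetic data, coef[0] is the sensitivity in °C per W/m2 and coef[1] is the inherent annual trend.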

There are several surprising things in Figure 3, and I’m not sure I see all of the implications of those things yet. The first surprise was how close the model results are to a bozo simple linear response to the forcings plus the passage of time (R^2 = 0.91, average error less than a tenth of a degree). Foolish me, I had the idea that somehow the models were producing some kind of more sophisticated, complex, lagged, non-linear response to the forcings than that.

This almost completely linear response of the GISSE model makes it trivially easy to create IPCC style “scenarios” of the next hundred years of the climate. We just use our magic GISSE formula, that future temperature change is equal to 0.13 times the forcing change plus a quarter of a degree per century, and we can forecast the temperature change corresponding to any combination of projected future forcings …
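As a sketch of that “magic formula” (the 0.13 and 0.25 values are the regression results quoted above; the function itself is purely illustrative):

```python
def project_temperature_change(forcing_change_wm2, years_elapsed,
                               sensitivity=0.13, trend_per_year=0.25 / 100):
    """Emulator projection: delta-T = sensitivity * delta-F + inherent trend.
    The 0.13 C/(W/m2) and 0.25 C/century values come from the regression
    described in the text."""
    return sensitivity * forcing_change_wm2 + trend_per_year * years_elapsed
```

For example, a 3.7 W/m2 forcing increase over a century projects to 0.13 × 3.7 + 0.25 ≈ 0.73°C of warming.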

Second, this analysis strongly suggests that in the absence of any change in forcing, the GISSE model still warms. This is in agreement with the results of the control runs of the GISSE and other models that I discussed at the end of my post here. The GISSE control runs also showed warming when there was no change in forcing. This is a most unsettling result, particularly since other models showed similar (and in some cases larger) warming in the control runs.

Third, the climate sensitivity shown by the analysis is only 0.13°C per W/m2 (0.5°C per doubling of CO2). This is far below the official NASA estimate of the response of the GISSE model to the forcings. They put the climate sensitivity from the GISSE model at about 0.7°C per W/m2 (2.7°C per doubling of CO2). I do not know why their official number is so different.

I thought the difference in calculated sensitivities might be because they have not taken account of the underlying warming trend of the model itself. However, when the analysis is done leaving out the warming trend of the model (Fig. 2), I get a sensitivity of 0.34°C per W/m2 (1.3°C per doubling, Fig. 2). So that doesn’t solve the puzzle either. Unless I’ve made a foolish mathematical mistake (always a possibility for anyone, check my work), the sensitivity calculated from the GISSE results is half a degree of warming per doubling of CO2 …
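The conversion between the two ways of quoting sensitivity is just multiplication by the forcing from a doubling of CO2, commonly taken as 3.7 W/m2. (Note that with 3.7 W/m2, the official 0.7°C per W/m2 works out to about 2.6°C per doubling; the quoted 2.7°C implies a slightly larger doubling forcing.)

```python
F_2XCO2 = 3.7  # W/m2 per doubling of CO2 (commonly used value)

def per_doubling(sens_per_wm2):
    """Convert a sensitivity in deg C per W/m2 to deg C per CO2 doubling."""
    return sens_per_wm2 * F_2XCO2

print(per_doubling(0.13))  # ~0.48 C, the "half a degree" figure
print(per_doubling(0.34))  # ~1.26 C, matching the 1.3 C quoted for Fig. 2
```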

Troubled by that analysis, I looked further. The forcing is close to the model results, but not exact. Since I was using the simple sum of the forcings, while in their model some forcings clearly matter more than others, I decided to remove the volcano forcing to get a better idea of what else was in the forcing mix. The volcanos are the only forcing that makes such large changes on a short timescale (months). Removing the volcanos allowed me to regress each of the other forcings against the model results (without volcanos), to see how well each one did. Figure 4 shows that result:

Figure 4. All other forcings regressed against GISSE hindcast temperature results after volcano effect is removed. Forcing abbreviations (used in original dataset): W-M_GHGs = Well Mixed Greenhouse Gases; O3 = Ozone; StratH2O = Stratospheric Water Vapor; Solar = Energy From The Sun; LandUse = Changes in Land Use and Land Cover; SnowAlb = Albedo from Changes in Snow Cover; StratAer = Stratospheric Aerosols from volcanos; BC = Black Carbon; ReflAer = Reflective Aerosols; AIE = Aerosol Indirect Effect. Numbers in parentheses show how well the various forcings explain the remaining model results, with 1.0 being a perfect score. (The number is called R squared, usually written R^2.) Photo Source

Now, this is again interesting. Once the effect of the volcanos is removed, there is very little difference in how well the other forcings explain the remainder. With the obvious exception of solar, the R^2 values of most of the forcings are quite similar. The only two that outperform a simple straight line are stratospheric water vapor and GHGs, and then only by 0.01.
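One plausible way to reproduce that ranking (my reading of the procedure, not the exact code used here): fit and remove the volcanic signal, then score each remaining forcing by the R^2 of a one-variable regression against the residual. The names follow the dataset abbreviations above:

```python
import numpy as np

def r2_single(x, y):
    """R^2 of a one-variable least-squares fit of y on x (with intercept)."""
    X = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def rank_forcings(forcings, model_temp, volcano_key="StratAer"):
    """Fit and remove the volcanic signal, then score each remaining
    forcing by how well it alone explains the residual."""
    volc = forcings[volcano_key]
    X = np.column_stack([volc, np.ones_like(volc)])
    coef, *_ = np.linalg.lstsq(X, model_temp, rcond=None)
    residual = model_temp - X @ coef
    return {name: round(r2_single(series, residual), 2)
            for name, series in forcings.items() if name != volcano_key}
```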

I wanted to look at the shape of the forcings to see if I could understand this better. Figure 5 has NASA GISS’s view of the forcings, shown at their actual sizes:

Figure 5: The radiative forcings used by the GISSE model as shown by GISS. SOURCE

Well, that didn’t tell me a lot (not GISS’s fault, just the wrong chart for my purpose), so I took the forcing data and looked at it in a form in which the shapes could be seen. I found that the reason they all fit so well lies in the shape of the forcings. All of them increase slowly (either negatively or positively) until 1950. After that, they increase more quickly. To see these shapes, it is necessary to standardize the forcings so that they all have the same size. Figure 6 shows what the forcings used by the model look like after standardization:

Figure 6. Forcings for the GISSE model hindcast 1880-2003. Forcings have been “standardized” (set to a standard deviation of 1.0) and set to start at zero as in Figure 4.
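The standardization described in the caption is straightforward; here is a sketch, assuming each forcing is a plain numeric series:

```python
import numpy as np

def standardize(forcing):
    """Scale a forcing series to a standard deviation of 1.0,
    then shift it to start at zero, as in Figure 6."""
    scaled = np.asarray(forcing, dtype=float)
    scaled = scaled / scaled.std()
    return scaled - scaled[0]
```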

There are several oddities about their forcings. First, I had assumed that the forcings used were based at least loosely on reality. To make this true, I need to radically redefine “loosely”. You’ll note that by some strange coincidence, many of the forcings go flat from 1990 onwards … loose. Does anyone believe that all those forcings (O3, Landuse, Aerosol Indirect, Aerosol Reflective, Snow Albedo, Black Carbon) really stopped changing in 1990? (It is possible that this is a typographical or other error in the dataset. This idea is supported by the slight post-1990 divergence of the model results from the forcings as seen in Fig. 3)

Next, take a look at the curves for snow albedo and black carbon. It’s hard to see the snow albedo curve, because it is behind the black carbon curve. Why should the shapes of those two curves be nearly identical? … loose.

Next, in many cases the “curves” for the forcings are made up of a few straight lines. Whatever the forcings might or might not be, they are not straight lines.

Next, with the exception of solar and volcanoes, the shape of all of the remaining forcings is very similar. They are all highly correlated, and none of them (including CO2) is much different from a straight line.

Where did these very strange forcings come from? The answer is neatly encompassed in “Twentieth century climate model response and climate sensitivity”, Kiehl, GRL 2007 (emphasis mine):

A large number of climate modeling groups have carried out simulations of the 20th century. These simulations employed a number of forcing agents in the simulations. Although there are established data for the time evolution of well-mixed greenhouse gases [and solar and volcanos although Kiehl doesn’t mention them], there are no established standard datasets for ozone, aerosols or natural forcing factors.

Lest you think that there is at least some factual basis to the GISSE forcings, let’s look again at black carbon and snow albedo forcing. Black carbon is known to melt snow, and this is an issue in the Arctic, so there is a plausible mechanism to connect the two. This is likely why the shapes of the two are similar in the GISSE forcings. But what about that shape, increasing over the period of analysis? Here’s one of the few actual records of black carbon in the 20th century, from 20th-Century Industrial Black Carbon Emissions Altered Arctic Climate Forcing, Science Magazine (paywall):

Figure 7. An ice core record from the Greenland cap showing the amount of black carbon trapped in the ice, year by year. Spikes in the summer are large forest fires.

Note that rather than increasing over the century as GISSE claims, the observed black carbon levels peaked in about 1910-1920, and have been generally decreasing since then.

So in addition to the dozens of parameters that they can tune in the climate models, the GISS folks and the other modelers got to make up some of their own forcings out of the whole cloth … and then they get to tell us proudly that their model hindcasts do well at fitting the historical record.

To close, Figure 8 shows the best part, the final part of the game:

Figure 8. ORIGINAL IPCC CAPTION (emphasis mine). A climate model can be used to simulate the temperature changes that occur from both natural and anthropogenic causes. The simulations in a) were done with only natural forcings: solar variation and volcanic activity. In b) only anthropogenic forcings are included: greenhouse gases and sulfate aerosols. In c) both natural and anthropogenic forcings are included. The best match is obtained when both forcings are combined, as in c). Natural forcing alone cannot explain the global warming over the last 50 years. Source

Here is the sting in the tale. They have designed the perfect forcings, and adjusted the model parameters carefully, to match the historical observations. Having done so, the modelers then claim that the fact that their model no longer matches historical observations when you take out some of their forcings means that “natural forcing alone cannot explain” recent warming … what, what?

You mean that if you tune a model with certain inputs, then remove one or more of the inputs used in the tuning, your results are not as good as with all of the inputs included? I’m shocked, I tell you. Who would have guessed?

The IPCC actually says that because the tuned models don’t work well with part of their input removed, this shows that humans are the cause of the warming … not sure what I can say about that.

What I Learned

1. To a very close approximation (R^2 = 0.91, average error less than a tenth of a degree C) the GISS model output can be replicated by a simple linear transformation of the total forcing and the elapsed time. Since the climate is known to be a non-linear, chaotic system, this does not bode well for the use of GISSE or other similar models.

2. The GISSE model illustrates that when hindcasting the 20th century, the modelers were free to design their own forcings. This explains why, despite having climate sensitivities ranging from 1.8 to 4.2°C per doubling of CO2, the various climate models all provide hindcasts which are very close to the historical records. The models are tuned, and the forcings are chosen, to do just that.

3. The GISSE model results show a climate sensitivity of half a degree per doubling of CO2, far below the IPCC value.

4. Most of the assumed GISS forcings vary little from a straight line (except for some of them going flat in 1990).

5. The modelers truly must believe that the future evolution of the climate can be calculated using a simple linear function of the forcings. Me, I misdoubts that …

In closing, let me try to anticipate some objections that people will likely have to this analysis.

1. But that’s not what the GISSE computer is actually doing! It’s doing a whole bunch of really really complicated mathematical stuff that represents the real climate and requires 160 teraflops to calculate, not some simple equation. This is true. However, since their model results can be replicated so exactly by this simple linear model, we can say that considered as black boxes the two models are certainly equivalent, and explore the implications of that equivalence.

2. That’s not a new finding, everyone already knew the models were linear. I also thought the models were linear, but I have never been able to establish this mathematically. I also did not realize how rigid the linearity was.

3. Is there really an inherent linear warming trend built into the model? I don’t know … but there is something in the model that acts just like a built-in inherent linear warming. So in practice, whether the linear warming trend is built-in, or the model just acts as though it is built-in, the outcome is the same. (As a side note, although the high R^2 of 0.91 argues against the possibility of things improving a whole lot by including a simple lagging term, Lucia’s model is worth exploring further.)

4. Is this all a result of bad faith or intentional deception on the part of the modelers? I doubt it very much. I suspect that the choice of forcings and the other parts of the model “jes’ growed”, as Topsy said. My best guess is that this is the result of hundreds of small, incremental decisions and changes made over decades in the forcings, the model code, and the parameters.

5. If what you say is true, why has no one been able to successfully model the system without including anthropogenic forcing?

Glad you asked. Since the GISS model can be represented as a simple linear model, we can use the same model with only natural forcings. Here’s a first cut at that:

Figure 9. Model of the climate using only natural forcings (top panel). All forcings model from Figure 3 included in lower panel for comparison. Yes, the R^2 with only natural forcings is smaller, but it is still a pretty reasonable model.

6. But, but … you can’t just include a 0.42 degree warming like that! For all practical purposes, GISSE does the same thing only with different numbers, so you’ll have to take that up with them. See the US Supreme Court ruling in the case of Sauce For The Goose vs. Sauce For The Gander.

7. The model inherent warming trend doesn’t matter, because the final results for the IPCC scenarios show the change from model control runs, not absolute values. As a result, the warming trend cancels out, and we are left with the variation due to forcings. While this sounds eminently reasonable, consider that if you use their recommended procedure (cancel out the 0.25°C constant inherent warming trend) for their 20th century hindcast shown above, it gives an incorrect answer … so that argument doesn’t make sense.
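On the side note in point 3 about a lagged term: Lucia’s actual “Lumpy” code isn’t shown here, but a generic one-box lagged response, the kind of simple lagged model being alluded to, looks like this (all parameter names are illustrative):

```python
import numpy as np

def one_box_response(forcing, sensitivity, tau, dt=1.0):
    """Lagged response via dT/dt = (sensitivity*F - T) / tau,
    stepped forward with simple Euler integration."""
    T = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        T[i] = T[i - 1] + dt * (sensitivity * forcing[i - 1] - T[i - 1]) / tau
    return T
```

A step change in forcing then relaxes toward sensitivity × F with an e-folding time of tau years, which is the kind of behavior a lagging term would add to the straight regression.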

To simplify access to the data, I have put the forcings, the model response, and the GISS temperature datasets online here as an Excel worksheet. The worksheet also contains the calculations used to produce Figure 3.

Well who’d a thunk it… another nail in the coffin of trying to base policy solely on ‘models’, which for all their teraflopiness are really rather simple and, by the looks of it, not very good at all…

This is a classic example of why the data, methodologies, code, etc. require rigorous scrutiny, quality control, version control, archiving AND access. This kind of analysis can only be done if the data, widely defined, are available. Does anyone really wonder why this is proving to be so difficult to obtain?

This is an insufficiently understood characteristic of complex modeling. After a certain point, the only real impact of adding complexity becomes its ability to conceal from the modelers the basic nature of what they have done.

As in so many other areas, the climate modelers seem here to have magnified a common scientific error to the point of absurd self-parody. If they now choose to go with the usual flat denial response, their absurdity will only be more apparent, and the eventual judgment of history will only be more damning.

I think that instead of saying “a linear regression of the total forcings on those temperatures” you meant to say “a linear regression of the temperatures on those total forcings”. You are predicting the temperatures using the forcings, not the other way around.

FTA:” 2. The GISSE model illustrates that when hindcasting the 20th century, the modelers were free to design their own forcings.”

Which means that, essentially, all it is, is a glorified multivariable curve fit, and the CAGWers have convinced themselves that it is somehow miraculous that the curve fit fits the data, and that this therefore confirms their worst fears.

This is the kind of (non) thinking which brought humankind voodoo dolls and leeches. How depressingly… primitive.

Now, that 0.25 degree increase per century in the absence of any change in forcing is interesting, and really makes any action on GHG’s quite irrelevant, since it implies that the oceans will start boiling before we’re even halfway through the next glacial cycle no matter what we do.

Judith Curry has resumed her thread on Climate model verification and validation: Part II at Climate Etc judithcurry.com/2010/12/18/climate-model-verification-and-validation-part-ii/ Her reason is the interest that an invited paper received at AGU last week. The title of the paper is: “Do Over or Make Do? Climate Models as a Software Development Challenge (Invited)” and is found at adsabs.harvard.edu/abs/2010AGUFMIN14B..01E I reproduce the abstract below. Please delete if there are any IP issues. As several of my friends in the legal profession say, res ipsa loquitur.

“We present the results of a comparative study of the software engineering culture and practices at four different earth system modeling centers: the UK Met Office Hadley Centre, the National Center for Atmospheric Research (NCAR), The Max-Planck-Institut für Meteorologie (MPI-M), and the Institut Pierre Simon Laplace (IPSL). The study investigated the software tools and techniques used at each center to assess their effectiveness. We also investigated how differences in the organizational structures, collaborative relationships, and technical infrastructures constrain the software development and affect software quality. Specific questions for the study included 1) Verification and Validation – What techniques are used to ensure that the code matches the scientists’ understanding of what it should do? How effective these are at eliminating errors of correctness and errors of understanding? 2) Coordination – How are the contributions from across the modeling community coordinated? For coupled models, how are the differences in the priorities of different, overlapping communities of users addressed? 3) Division of responsibility – How are the responsibilities for coding, verification, and coordination distributed between different roles (scientific, engineering, support) in the organization? 4) Planning and release processes – How do modelers decide on priorities for model development, how do they decide which changes to tackle in a particular release of the model? 5) Debugging – How do scientists debug the models, what types of bugs do they find in their code, and how do they find them? The results show that each center has evolved a set of model development practices that are tailored to their needs and organizational constraints. These practices emphasize scientific validity, but tend to neglect other software qualities, and all the centers struggle frequently with software problems. 
The testing processes are effective at removing software errors prior to release, but the code is hard to understand and hard to change. Software errors and model configuration problems are common during model development, and appear to have a serious impact on scientific productivity. These problems have grown dramatically in recent years with the growth in size and complexity of earth system models. Much of the success in obtaining valid simulations from the models depends on the scientists developing their own code, experimenting with alternatives, running frequent full system tests, and exploring patterns in the results. Blind application of generic software engineering processes is unlikely to work well. Instead, each center needs to learn how to balance the need for better coordination through a more disciplined approach with the freedom to explore, and the value of having scientists work directly with the code. This suggests that each center can learn a lot from comparing their practices with others, but that each might need to develop a different set of best practices.”

Yes, as I have written about extensively. The state of GISSE, or any of the other models I have inspected, would not come close to passing muster anywhere I have worked (except Gov.). I have been designing and developing software for something closing in on 30 years now, even in the early days we maintained higher levels of controls and scrutiny. Today it is mandatory, or you don’t eat.

Graphs with background photos disrupt and disturb comprehension of the data while adding nothing but prettification. The first photo, with the many colors and sharp contrast, is particularly distracting and is an excellent example of chartjunk.

The temperature impact from GHG forcing in Model E follows 4.053 × ln(CO2) − 23.0, so it is not quite linear (using CO2 as a proxy for all the GHGs). We are just in a particular part of the curve which is close to linear right now.

They are not playing around with the GHG forcings, it is all the other forcings like Aerosols and the unrealistically high Volcano forcings that are being used for the plugs to match the historical record.

The temperature response in °C per W/m2 has always bothered me. One needs to assume all the feedbacks will occur to get to the higher numbers often quoted (1 W/m2 of GHG forcing results in an additional 2 W/m2 of water vapour and albedo feedbacks). Hansen also assumes there is a lag as the oceans absorb some of the forcing, and some of the feedbacks like albedo are more long-term. The response could start out at 0.5°C/W/m2 and rise to 0.81°C/W/m2 after the lags kick in.

But GISS Model E net forcing was +1.9 W/m2 in 2003 and that would only produce 0.34C/W/m2 of response (including all the feedbacks). After 2003, the oceans stopped absorbing some of the forcing so it might even be falling from this low number. It is probably the actual response that the Earth’s climate gives because I have seen this same number in all the historical climate reconstructions I have done.

GHG doubling +3.7W/m2 X 0.34C/W/m2 = +1.26C
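Taking the 4.053 ln(CO2) − 23.0 fit above at face value, it is easy to check numerically how close to linear it is over the recent CO2 range; the tangent-line comparison below is my own illustration:

```python
import math

def ghg_temp(co2_ppm):
    """The commenter's stated Model E fit for the GHG temperature impact,
    taken at face value."""
    return 4.053 * math.log(co2_ppm) - 23.0

def tangent(co2_ppm, c0=350.0):
    """Straight-line (tangent) approximation around c0 ppm."""
    return ghg_temp(c0) + (4.053 / c0) * (co2_ppm - c0)

# Maximum departure from linearity over 300-400 ppm
max_dev = max(abs(ghg_temp(c) - tangent(c)) for c in range(300, 401))
print(max_dev)  # under 0.05, i.e. nearly linear over this range
```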

REPLY: Bill I sent you an email a few days ago, but got no response. Check your spam folder – Anthony

An interesting analysis Willis. Somehow it all looks suspiciously like the process of using a simple mechanical model to fit a known data set. This is the technique used to come up with race horse tipping programs based on historic race results. Unscrupulous scammers continue to sell these race tipping programs to gullible punters.
Doesn’t this seem to have a familiar ring to it?

Thanks Willis! Will need to look at the spreadsheet.
Everyone knows that the GISS temperatures are not correct and have been exaggerated by selection of sites with UHI effects and the spreading of temperatures from these sites to areas where there is no measurement.
If I have your comments correct, then it is possible to take out the supposed GHG effect altogether and still be able to model the actual temperatures. This would add to the findings in ice cores and past experimental data (such as compiled by Beck) that CO2 lags temperature and so has no effect on climate (or weather).

Willis says:
“4. Is this all a result of bad faith or intentional deception on the part of the modelers? I doubt it very much. I suspect that the choice of forcings and the other parts of the model “jes’ growed”, as Topsy said. My best guess is that this is the result of hundreds of small, incremental decisions and changes made over decades in the forcings, the model code, and the parameters.”

IMHO, Hansen and Schneider and the other team leaders wanted to show warming, and their programmers had the task to deliver that warming while doing a good hindcasting. The motivation of everybody in the system was to make this happen, by parameters or by inventing the past history of the forcings. Everybody turned a blind eye on it. It would have been the job of QA to find this. There was no QA. Where there is no QA, anything can happen. Oh, we have peer review, but that was rigged.

That being said – the job of the modelers is even easier than i thought. They are all natural born slackers. We’ve all been taken for a ride.

I am wondering what they do with all of those extra teraflops; sounds to me like I could do the same processing on my Wii at 2.5 MIPS (million instructions per second) with equal results (and a little bit cheaper). Maybe those extra teraflops are contributing to catastrophic warming? Perhaps someone should design another model and look into that…

The use and reference to a 160 teraflops capable machine to add heft to the credibility of models/simulations, calls to mind the tale of a mega-rancher in Texas. Wanting to know why his black cattle ate more grain than his white cattle, he hired a team of experts and leased a couple of Cray computers. After a year of effort, the report concluded that he had more black cattle.

I am not a scientist, but I do read quite a bit, and perhaps understand some of it. A computer model can only attempt to simulate reality (however defined), and then, as I understand it, must be verified by actually measuring the reality that was simulated. The KISS principle seems to tell me that if you must make up fudge factors to get the model to work, then the model itself didn’t simulate this reality at all. We may learn quite a bit about the modelers’ intentions by studying their efforts, but nothing at all about the reality we are studying.

When Lucia was working on Lumpy I suggested that she use the model to do quick and dirty IPCC forecasts. Now, having been to AGU, I sat through a talk where a nice wizard lady took GCM results and created a similar emulation using regressions on the results. This allowed them to do many more hindcasts as part of a paleo recon, where the GCM was used as a prior in a Bayesian approach to proxy recons.

I don’t have time to go over the details of your handling of the forcing (maybe Lucia can chime in). But the point you make about attribution studies bears some looking into. I’ll remind you that in the attribution studies they only used models that have negligible drift (see the Supplemental Material, chapter 9). Also, the comparison against observations is done in a unique way.

David Attenborough did a little YouTube video showing CO2 must be the cause because the models predicted the output so perfectly if CO2 was included. I knew then that they were tuning the models, because the match was too perfect. I think it was Anthony who pointed out that the lines (model and actual) crossed every few months and never diverged by more than a small fraction of a degree.

I am not sure if some of the climate science modelers would recognize “best practices” if they jumped up and bit them.

Graphs with background photos disrupt and disturb comprehension of the data while adding nothing but prettification.

As the Romans used to say, “de gustibus et coloribus non est disputandum”. That means there’s no use arguing about tastes and colors. If you say you like blue, I can’t dispute that.

So while you may dislike graphs with background photos, I like them. I think that they add to the presentation. I don’t mind if people have to study them a bit to figure them out. Plus, I like science to be fun and interesting. Finally, while you seem to think that “prettification” is something to be avoided, I like pretty things. Go figure. De gustibus et coloribus …

One cause for the discrepancy between your low simple regression sensitivity (0.13 degree per watt) and the GISS model sensitivity is the vast assumed accumulation of heat in the oceans in the GISS model (on the order of 0.85 watt per square meter)… which is obviously not correct.

Your analysis of the assumed forcings being a pure kludge to make the different models fit the historical temperature record is absolutely correct. I can imagine no other field where you can make any model fit the data by making up data sets for unknown inputs, and then claim your model is “verified”. As pure a form of intellectual corruption/self-deception as I’ve ever seen in science. The models all disagree about the true sensitivity, yet assume vastly different historical forcings, and still we are told by the IPCC that these models can provide useful information about what will happen in 50 or 100 years, if we form an ‘ensemble’ of models, each of which uses a different set of forcing kludges. All such predictions are pure rubbish, and should be treated as such by the public.

The warmers argue that, whilst their models use different assumptions about sensitivity, aerosols, etc., they all show one thing: only CO2 can explain the 20th century temperature record. They say that if they take out CO2, nothing works. So therefore the observed warming is due to CO2, QED.

Of course it is absolute cobblers. They are in charge of the various other parameters and their values. The models use different assumptions on some of these parameters such as aerosols and that is how they are made to fit a backcast record. Well what a surprise that is!

I always say, send me a random sample of roulette spins and I will send you a model that will win you money. I can do it every time with 100% certainty. Send me 1000 different samples and I will send you a model that proves you can win money. Now a lot of these models will be slightly different, just like the GCMs, but just like the GCMs they all have one thing in common, they all prove you can win money playing roulette.

Therefore, presumably the warmers would have to agree that models have proven you will win money at roulette, QED.

You can have a million GCMs all ‘proving’ that CO2 is the culprit but whilst the designers get to play about with the parameters it proves nothing.

Willis, you attempted to show that solar could be responsible for the 20th century warming and seeing as you could control the parameters you had little difficulty in producing a model that showed this.

The warmers do the same thing with the GCMs but with greater complexity and more obfuscation but the same thing really.

As so many people have said, before these models become the basis for expensive public policy decisions, we need to see audits — reviews by an unaffiliated multi-disciplinary team of experts. Just like testing of new drugs, third party review is essential.

Cementafriend said: “…If I have your comments correct then it is possible to take out the supposed GHG effect altogether and still be able to model the actual temperatures.”

Willis Eschenbach replied: “Only in the simplest sense. The problem is that both my model (and apparently the GISS model) contain a built-in trend. This makes their predictive value something like zero.”

Is it the predictive value or the predictive skill of the models that is zero?

The reason you can’t replicate the IPCC’s estimate of 3C for a doubling of CO2 is that the IPCC’s number is an “equilibrium” sensitivity calculated from temperatures which aren’t reached for at least another 500 years, according to the models. (See Nakashiki 2007 for some illustrative plots.) Table 8.2 of the IPCC AR4 shows roughly a factor of two difference between model-derived transient and equilibrium climate sensitivities, but the transient sensitivities are still higher than any sensitivity you can back out of the IPCC’s models and forcings over the 1900-2100 period.

Incidentally, when I performed the same analysis a couple of years ago I found I was able to reproduce the GISS Ocean-Atmosphere model temperatures between 1900 and 2100 almost exactly by taking the observed and predicted CO2 concentrations, converting them into watts/sq m forcings using 5.35 * ln(C2/C1), multiplying these forcings by 0.55 and adding 13.8. And I did this in about two minutes on a computer that works at maybe a couple of kiloflops on a good day and at a cost of zero.
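For what it’s worth, that two-minute emulation is easy to sketch. A minimal version follows; the 0.55 slope and 13.8 offset are the commenter’s fitted constants, while the 280 ppm baseline and the sample concentrations are my own illustrative choices, not the actual GISS scenario inputs:

```python
import math

def emulated_giss_temp(c_now, c_ref=280.0, slope=0.55, offset=13.8):
    """Two-step emulation: CO2 concentration -> forcing -> temperature.

    Uses the standard simplified CO2 forcing F = 5.35 * ln(C2/C1) W/m2,
    then the commenter's fitted linear map from forcing to temperature.
    """
    forcing = 5.35 * math.log(c_now / c_ref)
    return slope * forcing + offset

# Illustrative concentrations only (not the GISS scenario series).
for ppm in (280, 390, 560):
    print(ppm, round(emulated_giss_temp(ppm), 2))
```

At a doubling (280 → 560 ppm) the forcing is 5.35 ln 2 ≈ 3.71 W/m2, so the emulated temperature rises by 0.55 × 3.71 ≈ 2.0 C above the offset, which is the kind of kiloflop arithmetic the comment is contrasting with the supercomputer.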

Good report and all, Willis, but stand back and look at this, folks… even on those graphs that look to turn up so sharply, we’re still talking about mere tenths of a degree of change here. I mean, really… panic in the streets. As if planet Earth can’t fluctuate by 0.6 C in 150 years. Static she is not!

I’m a bit new to all of this AGW stuff, but I have a question about Figure 7. I look at the part from 1850 to 1875. I see that the natural forcing is relatively high and positive. Then I look at the anthropogenic forcing for the same period, and I see that some part of it is positive too. So then I look at both combined for that period and see that the total forcing is smaller than the natural one. I was under the impression that two positive forcings would kind of “add up” and force even more. What am I missing here?

For those that do not like Willis’ backgrounds, you could seek (for a small fee) his image work-flow files. Then you could vary the opacity of that layer (after promoting it) until it suits your taste. While there is no “arguing about tastes and colors”, Willis probably likes green as well as most of us, so all could be happy. http://www.pixalo.com/community/tutorials-guides/high-key-18631.html

I’m with Rhett Butler on this issue. I don’t give a . . .
—————————————

I found the post and comments interesting and educational. Thanks, Willis and the rest.

My beef is the trend of aerosols assumed in the models. Always getting worse? Excuse me? What about the worsening air quality of the 1940s-70s in the industrial northern hemisphere as compared to now? The modelers erased this fact like they erased the MWP. It’s all rubbish.

Yes, you do have to wonder as to how many more “forcings” would be required to fit the “global average” graph exactly.

The problem with models is that subconsciously you need a “ball park” to play into. By that I simply mean that, whether you have designed it from scratch or modified some readily available code, examining the output forces you to look at what others have published and then decide if your run is in the “ball park”. Supposing that my run suggests twenty years of cooling… do I publish or re-write the code? Twenty years of rapid warming … publish or re-write? Somewhere between all the other models.. publish or re-write? I know which one would be safe.

Given that the “global mean” itself (gridded RSM, FDM) is a model, when designing “V3_mean” what “ball park” am I testing against and how will I know that I’m in it?

Just how much synergy exists between the models and the “global mean” within a particular organisation? GISS “real” temps and GISS modelled temps track each other if you include enough variables… colour me shocked pink.

Perhaps those treenometers and their hidden decline have been right all along … how would I know? Then again we will always have harry..read..me…

Let’s go back and take a look at that Figure 4 for a moment. I think that graph DOES tell us something useful about the assumptions behind their modelling. For example, take a look at the line for “Land Use”. They are essentially saying that land use has had, and will have, no effect on climate whatsoever. Seriously?

The magnitude of the assumed forcing due to GHG dominates their model (and their thinking) and thus inevitably leads them to the conclusion that they pre-assumed. This of course, is well understood by the skeptic, but it’s interesting to see it displayed in their own forcings graph.

Also, wouldn’t a forcing due to the effects of Black Carbon, and a forcing due to lowering the albedo of snow (due to black carbon) essentially be modelling the same thing? Are they double counting there?

The ‘radiative forcing constants’ used in all the climate models have no physical meaning, nor do the temperatures that they ‘predict’. They are just modeling ‘fudge factors’ used to change the surface temperature. The radiative forcing trick is described in Hansen et al., 2005, ‘Efficacy of Climate Forcings’, J. Geophys. Research, 110, D18104, pp. 1-45. [ http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_2.pdf ].

The starting point is the assumption that a 100 ppm increase in atmospheric CO2 concentration has produced a 1 C rise in ‘average surface temperature’. This is the rise in ‘meteorological surface air temperature’ from the ‘hockey stick’. Yes – GISSE is ‘calibrated’ using the ‘hockey stick’! [Reliable] Spectroscopic calculations show that a 100 ppm increase in atmospheric CO2 concentration also produces an increase in ‘clear sky’ downward LWIR flux of 1.7 W.m-2.

Now, there is no physical cause and effect relationship between the 1 C rise in air temperature measured at eye level above the ground and a change in LWIR flux of 1.7 W.m-2 at the surface 5 ft below. (The 1 C rise is from changes in ocean surface temperatures, urban heat islands, and downright data ‘fixing’.) However, the radiative forcing constant for CO2 is defined as 1/1.7 = 0.67 C/(W.m-2).

This is then extended as a ‘calibration constant’ to all other atmospheric species. In other words, it is arbitrarily and empirically assumed that a 1 W.m-2 increase in LWIR flux from any species produces a 0.67 C rise in ‘average equilibrium surface temperature’. The increase in LWIR flux for other greenhouse gases such as CH4, O3, etc. can be calculated from their spectroscopic constants. Aerosols etc. are just used as empirical ‘levers’ to fix the model output. That is how the volcano terms are used. The whole thing is pseudoscience. All we need to add are the signs of the zodiac and it becomes climate astrology.

Reality is that it is impossible for a 1.7 W.m-2 increase in downward ‘clear sky’ LWIR flux to produce any measurable change in surface temperature. The LWIR flux has to be added correctly to the total flux at the surface and used to calculate the surface temperature of a real surface with all of the heat flux terms and proper thermal properties included. This is discussed at:

The fundamental assumption that changes in surface temperature can be simulated using small changes in long term ‘equilibrium’ flux averages is incorrect. There is no climate ‘equilibrium’ on any time scale.

I always enjoy your posts and learn a lot from them. However, I agree with Madman2001 and others who complain about the picture backgrounds of your graphs.

I know you say you like them. Since you are the author, I grant that you can present them however you want. But hopefully your purpose is not just to present something you find pretty, but actually to inform and influence others. Some folks have said that they find the background pictures a hindrance to understanding what you are trying to convey. No one has said that the pictures help them understand the graph.

If it were me, and I was trying to inform and convince people of the correctness of my work, I would go with whatever had the best chance of achieving my goal, regardless of my personal preferences. Just a thought.

Consider yourself a computer-system developer given the well-paid task of developing a system describing the development of world temperatures and calculating the future. What would you do, knowing that further orders would depend on your ability to dance with the high priests of climate alarmism? Why is the building of these models declared a secret? Because there is really something to hide! What happened before Climategate? Rejections by the self-declared scientists of requests to publish the underlying facts. Climategate proved that these rejections were necessary to keep the fraud going. Is there any real difference between the lack of GISS modeling particulars and the lack of information from the UK universities?
We are dealing here with a modern-fashioned priesthood getting tremendously wealthy from their fearmongering theories at the expense of the rest of our society, and utterly showing their disdain for any real intellectual confrontation by telling us the science is settled…… Do we really wish to be ruled by stupidity? Then poverty shall be our fate.

I haven’t read all the comments, someone might have pointed out the same issue:

In Figure 8, I keep wondering why the model simulates significantly warmer temperatures in the 1860-70s and 1910-30 (natural forcing only). It’s almost a 0.5 C difference over several decades. And when they add the anthropogenic input, the temperature goes down.

Some commentators have queried the need to use massive supercomputers with many teraflops to do what in essence is a rather simple task.
Somebody even asked how the excess teraflops were used.
I have the answer.

When, long ago, I was writing simple programs in BASIC on my trusty Tandy TRS-80, I often came upon a problem.
I wanted to see what was happening to a number of variables while the program was running, as well as seeing the end result when it was finished.
My problem was that the numbers just flashed past on the screen too quickly to be taken in.
My solution was quite simple, which I will reveal at the end.

I think the climatologists problem is rather different.
They have been successful in getting so much money in grants that they have far too much computer power.
Use it or lose it is the rule in government and quango circles.

The answer is to add an extra subroutine like the following, where “N” is any number large enough to slow things down a lot.
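A minimal sketch of such a delay subroutine, written here in Python rather than the original TRS-80 BASIC (which would have been something like `FOR I = 1 TO N : NEXT I`); the function name and `n` value are purely illustrative:

```python
def delay_subroutine(n):
    # Burn n iterations of useless arithmetic so the on-screen numbers
    # flash past slowly enough to read -- and, as the joke goes, one way
    # to soak up surplus teraflops.
    total = 0
    for i in range(n):
        total += i
    return total  # returned only so the loop can't be trivially optimised away
```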

Willis on a more serious note:
There is a much more fundamental flaw in the whole concept of modelling the climate, with the hope of predicting outcomes many years into the future.

It is the chaotic nature of the climate.
Chaos means that there are an almost infinite number of overlapping cycles, up to millions of years long or more.
At any moment the next largest (and hence more powerful) cycle will intervene without warning.

Even within the existing cycle, the inter-connections are so complex that things, which seem to be going on in an orderly fashion, suddenly change in unpredictable ways.

That’s why it is impossible to forecast the economy with any accuracy for more than a few months in advance.
The economy has been the subject of longer and much more intense study than the climate.
Yet very little real progress has been made.
Just enough to construct next year’s government budget with reasonably good accuracy in most years, except when the unexpected happens, as in 2007.
At least many of the economic variables are known and quantified.

The climate is both much more complex and much more unknown.

We should not take long term climate forecasting seriously, but should make a concerted effort to educate the public, press, and politicians that you cannot forecast the long term future climate, particularly when the main forces are largely unknown.

Very nice post, entertaining and informative as always. Interesting how little surprise there is at your revelations… it feels more like deja vu. A year ago this would have whipped up WUWT readers into a frenzy. BTW I like your background pictures; I find I can mentally block the images out when concentrating on the trend lines and real information, but they are initially eye-catching and attractive. But then, I’m not an engineer.

Urederra says:
December 19, 2010 at 6:24 pm

Why do they need a 160 teraflops supercomputer?

To make forcings ‘a la carte’ that fit with their desired predictions.

It is ecneics, science made backwards.

……….

How do you pronounce that? With a standard pronunciation, ecneics could become a useful word!

If it were me, and I was trying to inform and convince people of the correctness of my work, I would go with whatever had the best chance of achieving my goal, regardless of my personal preferences. Just a thought.

I agree with you, but I notice the “old”. Maybe it has to do with age. Eyesight is not what it was, and backgrounds have to be masked out in the head.

On the other hand, having to use a slide rule and find logarithmic paper for the plots, and painstakingly transfer the histogram information from the primitive computer output, and taking it to a graphics technician to make a pretty slide for a conference or publication sort of discouraged adventurous backgrounds :). So it could just be a conditioned reflex for us.

It absolutely does not matter how fast your computer flops if the algorithms used are themselves flops–garbage in/garbage out is the time-tested description. Might just as well write on a napkin and throw it away after “climsci” fails yet again.

I am wondering what they do with all of those extra teraflops; sounds to me like I could do the same processing on my Wii at 2.5 MIPS (million instructions per second) with equal results (and a little bit cheaper).

Almost. That MIPS is Mega (as opposed to Tera) instructions (as opposed to Floating Point Operations) per second.

1. I suspect, unless you are really still using a 2.5 M(ega)Hz computer, you mean G(iga)Hz, and therefore Billions of Instructions per second.

2. The floating point operations bit does make a difference since each involves many instructions, but not as much as a factor of 1,000 ;-)

“It’s doing a whole bunch of really really complicated mathematical stuff that represents the real climate and requires 160 teraflops to calculate, not some simple equation.”

This exercise points to an additional finding.

The terabytes and teraflops of capability the modelers have would say that the equations they are solving are incredibly complex. Since the complexity isn’t in the global forcings or the dampenings then it must be in the transfer functions. These transfer functions would depend on cloud type, mountain ranges, the shape of the arctic ice cap and the like. So it’s the interaction of the transfer functions with the forcings and dampenings that require all that processor capability.

Actually, no. Additional computational capability doesn’t lead to markedly better results. This means that the equations have been approximated in a way that they are inherently stable. So stable in fact that they don’t need “teraflops” of capability.

The obvious question is then, why is the capability needed if the equations have been approximated in a way that they are inherently stable?

I always enjoy your posts and learn a lot from them. However, I agree with Madman2001 and others who complain about the picture backgrounds of your graphs.

I know you say you like them. Since you are the author, I grant that you can present them however you want. But hopefully your purpose is not just to present something you find pretty, but actually to inform and influence others. Some folks have said that they find the background pictures a hindrance to understanding what you are trying to convey. No one has said that the pictures help them understand the graph.

If it were me, and I was trying to inform and convince people of the correctness of my work, I would go with whatever had the best chance of achieving my goal, regardless of my personal preferences. Just a thought.

Thanks, o. e. I do it in part to keep myself interested. I’m an artist, and a plain vanilla graph is just boooring to me. In addition, it clearly distinguishes my work from that of others. However, I can tone down the colors and such to make them more readable.

One of the most surprising findings to me, which no one has commented on, is the sensitivity. Depending on whether we include a linear trend term or not, the sensitivity of the GISSE model is either half a degree C or 1.3°C per doubling of CO2. Regardless of the merits of my analysis, that much is indisputable, it’s just simple math.

But both those numbers are way below both the canonical IPCC value (2° – 4.5°C per doubling) and the value given by the GISSE modelers for their model (2.7°C per doubling). The larger value from the analysis is less than half what GISS says the sensitivity of the model is.

Wouldn’t it be nice if someone from the GISSE modeling team would comment on this, or explain to me where I’m wrong? Or say anything?

But I suppose they’re at the AGU conference learning about how to communicate the holy writ of science to us plebeians …

GISS should be able to explain how they calculated their model sensitivity as 0.7°C per W/m2 (2.7°C per doubling of CO2). The difference between Willis’s calculated sensitivity and the 2.7C/doubling is astounding.

Here’s one possible explanation:

The amount of carbon we add to the atmosphere can be estimated with reasonable accuracy, as can the actual increase. There is a discrepancy where about 50% of the added carbon is missing …… absorbed by the biosphere and oceans.

Perhaps we seeing the same sort of thing with forcings and heat content?

Net forcings go up, but only half of that shows up as temperature increase. The other half goes into warming the ocean. So the actual climate sensitivity is twice the 0.34°C per W/m2 (1.3°C per doubling of CO2) that your model shows (using the higher number that includes the 0.025C/decade warming trend)

Blame it all on Trenberth’s missing heat. I assume that the GISSE model predates the full deployment of the Argo network and reasonably accurate measurement of Ocean Heat Content. A decade ago, it was reasonable to assume a large increase in OHC each year as a sink for much of the energy from the increased forcings.
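A sketch of the unit conversion implicit in the exchange above, assuming the standard simplified CO2 forcing F = 5.35 ln(C2/C1), which gives roughly 3.7 W/m2 per doubling:

```python
import math

# Simplified CO2 forcing per doubling of concentration (an assumption,
# not a figure taken from the GISS documentation).
FORCING_PER_DOUBLING = 5.35 * math.log(2)   # ~3.71 W/m2

def sensitivity_per_doubling(sens_per_wm2):
    """Convert a sensitivity in C per W/m2 to C per doubling of CO2."""
    return sens_per_wm2 * FORCING_PER_DOUBLING

print(round(sensitivity_per_doubling(0.34), 2))  # ~1.26 C, the ~1.3 C figure above
print(round(sensitivity_per_doubling(0.70), 2))  # ~2.6 C, near GISS's quoted 2.7 C
```

On these numbers, doubling the 0.34 C/(W/m2) transient value does indeed land close to GISS’s stated sensitivity, which is the commenter’s point about half the forcing going into the ocean.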

Not a mathematical insight, a psychological one. They probably doubled all “forcings” (I hate the terminology) and thought they were doubling only CO2’s. I remember the Harry manipulations (or was it Henry?) in the Climategate papers.

Here is a nice story of how group work can get off the tracks, which might matter not much for everyday business, but can be disastrous for scientific conclusions:
In a physics lab way back then, first year students were divided into groups of five and set to determine the parameters of a pendulum, they were given a stop watch. So, one of them got hold of the watch, another started the pendulum and when the time was up the stop watcher said, “how many oscillations?” . Nobody had been counting, everybody assuming that the others would! Now if there were a Harry among them, they could invent an approximate number :).

Willis, thanks for the data. I love to extract amazing things from datasets such as the GISS one you provided. As you said, the weightings given each forcing are not logically all identical, so I added a weighting factor to each row in your spreadsheet and let Excel find an 11-way minimization of the variances. The weights it came up with are listed below. Apply those weights to the GISS forcing data and you closely replicate (0.833 R^2) the GISS observed temperature data, according to them. The only real thing suppressing the R^2 is that the observed data is much more noisy and volatile, with bigger jumps up and down.

What I found interesting in this is the light weight it applied to GHGs and how massively it weighted SnowAlb and AIE. Some weights are even negative, implying that the sign is wrong in these forcings supplied by GISS.
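The Solver exercise described above is, in effect, a multiple linear regression: one weight per forcing column, chosen to minimize the misfit to the temperature series. A sketch on synthetic data (the real exercise used the GISS forcing table Willis posted; the series and weights below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_forcings = 120, 11
forcings = rng.normal(size=(n_years, n_forcings))  # stand-in forcing columns
true_weights = rng.normal(size=n_forcings)         # weights we will try to recover
temps = forcings @ true_weights                    # noiseless "observed" anomalies

# Ordinary least squares performs the same minimisation as Excel's Solver.
fitted, *_ = np.linalg.lstsq(forcings, temps, rcond=None)
print(np.allclose(fitted, true_weights))           # True on noiseless data
```

With real, noisy temperature data the recovered weights would only approximate any “true” weighting, which is exactly why a high R^2 from such a fit says little about whether the individual forcings are right.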

The use of a pictorial background is excellent for assisting retention of the message.

Here are two fake “graphs” of the century-long global temperature record, reinforced with a pictorial message that expert practitioners can make simple mistakes with models. (I would give thanks and attribution, but the photo cartoon came to me with no author noted.)

Is there any reason why “Land Use” is a negative? I’d expect changes of land use to increase warming. Asphalt warm. Grass cool…

Also, what happens if you replace their “straight line all the same” parameters with some that are more representative of the actual data? For example, that “black carbon” curve… and maybe having a 1/2 C or so UHI correction in the thermometers…

IMHO the reason for the “go flat” at the end of the non-GHG curves is so that the GHG curve can start out lagging; then, as it catches up, the others can be dropped out and leave GHG very dominant, while hiding the fact that it was WAY too fast a rise (nearly exponential?) when it ought to have been a decreasing impact (log). So you hide the needed real “log-like behaviour” in the other curves having a compensating lag, and leave the GHG curve more exponential. That way, runs into the further future show highly divergent heating, but you can say “Look, the model matched in the past!”

All in all, a “neat trick”.

So put in a Log decay curve on the GHGs, un-flatten the other curves, and see if suddenly your ‘future 40 years out’ looks like “not much happens”… Then tell me again why that model they are using has a non-Log GHG curve… (it does look like an exponential in the early stages to me. Would be nice to have a curve fit to it…)

(Fig 5 looks like GHG accelerates in the middle. Fig. 4 GHG looks to have a ‘belly sag’ in the middle and rise at the end. More precise than ‘eyeball’ analysis would be helpful ;-)

Willis.
A thought experiment for you…
It’s the year 1880. Visualize all the locations on earth where accurate temperature measurements were being recorded. Then, after accepting that coverage was rather patchy back then, ask yourself these questions….
Was the method uniform across all of these sites?
How were these instruments calibrated?
Wet bulb dry bulb? Urbanization of surroundings etc etc…?
Was it exclusively maxima and minima being recorded or other day/night temperatures…if so at what times?
Were some being recorded in degrees F and others in degrees C for instance.
Most importantly…were these readings taken against a scale accurate to one tenth of one degree!?
Now visualize the world today and the pandemic spread of electronic meteorological measuring gear: all over the world, tens of thousands of systems that can log a day’s data second by second, from high resolution digital thermometers.

Ask yourself if you think it is fair to put these two types of Data 1880 and 2010 on the same graph.

It’s only my opinion but as someone whose job it is to control temperatures, I can assure you that measuring air temperature in any meaningful way down to 1 tenth of a degree resolution is nigh on impossible in a natural space ( by that I mean a space where convection and general air circulation is possible). You probably could measure a steady stream of air more accurately or a sealed container.
In the real world of rooms, streets, fields, mountains jungles and oceans however, one tenth of a degree is so transient as to be meaningless.
Looking at your brutal ramp-up of 0.4 degrees C over the second half of the twentieth century, I’m beginning to think that the temperature rise that has been ‘observed’ during this period might well be down to the greater number of more accurate observations made as the century progressed.

Those models are garbage. All they do is mimic the Keeling curve, and where it conflicts with reality, they add ad hoc aerosols.

One sign of a good theory is that it predicts things and does not need any “cosmological constants” added here and there. These playstation models need
a) some kind of solar forcing to explain 1910-1940 global warming by 0.7 deg C
b) aerosol plug to explain 1940-1975 cooling, combined with continuous eradication of this cooling trend from global datasets by cherry-picking stations and whatever data manipulation
c) CO2 forcing which finally takes over after 1975, when solar suddenly does not work
d) most of all, those models completely ignore oceans, which are the main climate driver.

Is there any reason why “Land Use” is a negative? I’d expect changes of land use to increase warming. Asphalt warm. Grass cool… ”

Just saw a video someone posted in another thread that talked of that very subject. This scientist was middle-of-the-road on the subject but spoke of asphalt and dark roofs causing UHI as towns and cities developed, and also spoke of wooded areas, great absorbers, being replaced by light colored wheat fields, which caused cooling. But this is a very complicated subject: whether, overall, more dark has been replaced by man’s signature on the land, or the other way around. Cities do occupy a small percentage of land area compared to farming, but many fields are also bare, plowed, and dark brown.

My little box of data above, and it is purely GISS data supplied by Willis (I don’t place much confidence on most of it), seems to agree with land-use having a negative influence but once again, that is by GISS’s forcings and observed temperature anomalies. I’m still looking at those weightings that fit to the temp anomalies and trying to decipher what they may be saying if GISS’s data can be trusted enough to base any confidence on analysis of it. If the forcings are wrong and the GISS temps wrong then that analysis is also wrong.

We humans are brilliant: that we can model such complex systems, pick out a tiny atom from this complex system, and prove how it is affecting the Earth is Nobel Prize stuff.

What’s that? The only reason they match the real world is that they fiddled the figures? Well, no matter; as the bankers have shown with their bonuses, I’m sure 2011 will not see a drop in their taxpayer-funded grants, even if they are lying and cheating.

Modern civilisation? People freezing to death because of winter fuel shortages. 100 years ago we’d have just shovelled some more coal on the fire.

I agree with madman2001 – the background rubbish on the graphs is distracting and poor scientific presentation – get rid of it if you want to be taken seriously. Marketing whiz kids do this sort of thing – serious scientists don’t.

The sensitivity question is just the usual transient vs equilibrium warming issue.

We’ve had a 40% increase in CO2, which is ‘half’ a doubling, so we ought to get about half the sensitivity as a temperature rise. Since we observe around 0.65 C warming, we get 1.3 C sensitivity. Simple as.

No model that purports to reproduce observations can have a transient sensitivity much higher than this. The equilibrium sensitivity is the scarier number.
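The arithmetic behind that 1.3 C figure, using the comment’s round numbers:

```python
import math

co2_ratio = 1.40                # ~40% CO2 rise since pre-industrial
observed_warming = 0.65         # degrees C, the comment's figure

# A 40% rise is ln(1.4)/ln(2) ~ 0.49 of a doubling -- the "'half' a
# doubling" in the comment -- so divide the observed warming by it.
fraction_of_doubling = math.log(co2_ratio) / math.log(2)
transient_sensitivity = observed_warming / fraction_of_doubling
print(round(transient_sensitivity, 1))   # ~1.3 C per doubling
```

This back-of-envelope figure implicitly attributes all of the observed warming to CO2, which is the most generous assumption for the model; any non-CO2 contribution would push the implied transient sensitivity lower still.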

“The IPCC actually says that because the tuned models don’t work well with part of their input removed, this shows that humans are the cause of the warming … not sure what I can say about that.”

I’ve always regarded this ‘proof’ as little more than a confidence trick. Climate models are just computer programs – and you can easily write programs to ‘prove’ anything you want. Here’s how I would reproduce this ‘proof’.

1. I would put in all the required physical laws and initial conditions: honest, but it would fail miserably.
2. I would add a large forcing due to CO2: still honest (assuming I believed AGW to be correct in the first place). It would still fail to accurately reproduce historical climate.
3. I would then add in other arbitrary forcings and adjustments to achieve a good agreement with the historical climate (I would give this process an impressive name such as ‘parameterisation’): completely dishonest, but the model now perfectly reproduces the historical climate.
This would be Exhibit A: it gives a perfect match.
I would then remove the CO2 forcing. By definition, it will no longer match the historical climate. This would be Exhibit B.
For gullible people such as David Attenborough (as shown at the end of his film entitled ‘The Truth About Global Warming’), Exhibits A and B would provide perfect proof for AGW.
But of course it would prove nothing. Because it would be dishonest. In other words, a confidence trick.
Chris

That is interesting Anna and also the post by Willis has been very informative, as always. Thank you.

A similar story, but from the health field. We used to draw blood on a massive scale for testing for Treponema pallidum (syphilis) in very remote areas. A very nasty disease, especially for infants/children born to undiagnosed mothers, and particularly those living in the now ‘eco-regions’ of the world.
The tests were reported within certain sensitivity parameters and treatment ordered within those parameters. We were able to treat well (very basic clinics, no telephones) based on these results (and our diligent recording of past history and treatment on the pathology forms). Then different pathology labs, with different sensitivities directing different treatment regimes, reared up.

The ruler or the instrument (specificity) had changed.

We achieved almost 100% detection and treatment in a small population of tribal peoples. Years later they report rates 30-60x that of the national population.
What happened?

The attitude was changed from sound clinical health practices to sociological understandings. These latter understandings and thus parameters effected massive funding for ‘forcing’ of other regressive policies.

As mentioned in the posts by Urederra and Hartley. Urederra says:
December 19, 2010 at 6:24 pm

Why do they need a 160 teraflops supercomputer?

To make forcings ‘à la carte’ that fit with their desire[d predictions].

The GISSE model results show a climate sensitivity of half a degree per doubling of CO2, far below the IPCC value.

Wow. Sounds like the science is settled. Seems everyone is converging on this value lately, whether they wanted to or not. A compendium of results showing sensitivity < 1°C might be a handy reference tool.

I got an r²=0.92 for a 6-parameter global climate model with CO2, and r²=0.74 for the model without CO2. Parameters (SOI, aa, AMO, NAO, volcano, CO2 vs. GISS yearly temperature data) were chosen for their long-term availability, and not really for their potential impact on climate.

It was during this project that I got confronted for the first time with the GISS data-tinkering, the deviation between surface and satellite measurements (better correlation with no-CO2-models), and that the solar influence might perhaps be less than what I originally had assumed (perhaps I should have taken another parameter than aa).

My main conclusion at the time was that “…The foregoing analysis certainly does not rule out CO2 as a contributor to the observed global warming. Rather, it provides grounds that its contribution is much less than that claimed by the IPCC, and that AMO contributes a significant chunk to the temperature evolution. Also, the obtained results warrant a review of the temperature data, and the way in which they are obtained and handled. …”
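The mechanics of the multi-index regression described above are easy to sketch. A minimal illustration on synthetic stand-in series (the actual SOI/aa/AMO/NAO/volcano/CO2 data are not reproduced here); it also shows why adding a predictor can never lower the in-sample R²:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 130  # roughly one value per year, 1880-2009 (synthetic)

# Synthetic stand-ins for six predictors; the last column plays "CO2".
X = rng.standard_normal((n, 6))
beta = np.array([0.1, 0.05, 0.3, 0.05, -0.2, 0.5])
temp = X @ beta + 0.3 * rng.standard_normal(n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X plus an intercept."""
    A = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

full = r_squared(X, temp)           # all six predictors
no_co2 = r_squared(X[:, :5], temp)  # drop the "CO2" column
print(full > no_co2)  # -> True: in-sample R^2 never drops when adding terms
```

That last point is worth keeping in mind when comparing the 0.92 and 0.74 figures: part of any such gap is guaranteed by the extra degree of freedom.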

Have you tried Roy Spencer’s Excel model? It is simple and not too different from yours.

I wrote some years ago at SteveMc’s site that I was a Physicist and Software Programmer and Project Manager before my retirement and had looked at the models as best I could at that time and thought they were lacking control. Then along came climategate and voilà there was the proof. They have zero version control, zero VV&T, zero intra-module control, zero variable management and so on. I know for certain no matter what anyone would want to tell me that none of these models can be relied on even for basic understanding of climate.

It was mainly a call for better monitoring, but it also contained the following chart, which tells the story better than any other you will see, because it includes, for the very first time, the feedbacks that are occurring/expected.

==> IPCC Anthro forcing to date (lower than GISS Model E) = +1.6 W/m2

==> Feedbacks which are supposed to be occurring (mostly water vapour) = +2.1 W/m2

Some of this negative radiative feedback could be the oceans absorbing some of the forcings, but the most anyone can come up with for this is 0.5 W/m2, and it has gone to zero in the last several years. Some of it could be that the feedbacks just aren’t occurring as expected. Even if that were true, there would still have to be some small negative feedback left anyway.

But Trenberth calculated the negative feedback number based on the 0.75C/W/m2 response that is expected in the theory and in the climate models. The negative feedback, however, wouldn’t exist if the actual climate responds according to the Stefan-Boltzmann equations instead (which is how it should be calculated anyway).

+3.7 W/m2 [of anthro and water-vapour feedback forcing] − 0.5 [ocean absorption] = +3.2 W/m2. Set against the roughly 0.7 °C of warming to date, that is just 0.22 °C per W/m2, which is very close to what the SB equations say it should be.

So, either there is some mysterious really large negative feedback to date that we can’t find or the global warming community got so carried away with their 0.75C/W/m2 response factor and their climate models that they forgot how to do basic math.

GHG doubling with actual climate response to a given forcing to date = +1.5C
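The two response figures in the comment can be checked side by side: the realized response implied by the bookkeeping, and the no-feedback Stefan-Boltzmann (Planck) response. A sketch with assumed values (the ~0.7 °C observed warming and the 255 K effective emission temperature are my assumptions, not stated in the comment):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def planck_response(temp_k):
    """No-feedback climate response, C per W/m2: the inverse of
    dF/dT for F = SIGMA * T^4, evaluated at temperature temp_k."""
    return 1.0 / (4.0 * SIGMA * temp_k ** 3)

# At the ~255 K effective emission temperature (an assumed value):
print(round(planck_response(255.0), 2))  # -> 0.27 C per W/m2

# The comment's bookkeeping, against ~0.7 C of observed warming:
print(round(0.7 / (3.7 - 0.5), 2))       # -> 0.22 C per W/m2
```

The two numbers land within a few hundredths of each other, which is the comment’s point.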

The main problem I have with GCMs like Model E is that in many cases (at least in the case of Model E) they are poorly documented, poorly designed and poorly written. For those who wish to see the Model E source code in all its FORTRAN glory, you can find it here…

For all the money we spend on this “research”, you’d think they could do better, especially (as one poster said above) since we are basing public policy decisions worth billions of dollars on the results of these simulations. Unfortunately, as Gavin Schmidt once replied on another blog, they don’t have time to provide full documentation and testing of their code – they are paid to do “science”!!

Is anyone concerned that the GISSE Climate Model Results are much smoother than the GISS Global Temperature in Figure 1? While year-to-year variations are more weather than climate, and we expect a climate model to reproduce the trend and average measured global temperature rather than exact annual values, why doesn’t the model reproduce the wide variations that occur from year to year?

“Natural forcing alone cannot explain the global warming over the last 50 years.” Figure 8 doesn’t make any sense to me.

Question: looking only at the IPCC “Observations” red lines in figure 8, if “Natural Forcing” and “Anthropogenic Forcing” roughly end in the same °C range in (a) and (b), is the scale incorrect in (c), since a + b ≈ c in the model results?

Thanks, Willis, for another great post that makes me feel vastly more comfortable with labelling GISS’s climate science and their modelling dubious at best. My original suspicions that ‘the science’ was and is being grossly manipulated to fit a particular scenario have been reconfirmed.
As a fellow artist and educator, I enjoy your graphs’ background visuals and believe they generally stimulate thinking and discernment. But we humans are a varied bunch, and those suffering from some forms of dyslexia might find them a bit confusing.
I am thankful that the modellers under discussion are not moonlighting as aircraft designers as any aircraft from such modelling would fail in short order and usually in similar ways!

Here is the sting in the tale. They have designed the perfect forcings, and adjusted the model parameters carefully, to match the historical observations. Having done so, the modelers then claim that the fact that their model no longer matches historical observations when you take out some of their forcings means that “natural forcing alone cannot explain” recent warming … what, what?

I’ve been almost certain they were doing that for quite some time – rigging the system or begging the question by adjusting the other parameters and “forcings” as needed – simply from seeing the way the “Climate Scientists” were doing their “science” otherwise: in effect no falsification possible, attempting to erase the MWP using extremely isolated populations of wild tree rings, insisting that fossil fuel CO2 concentrations must now be controlling an atmospheric “Global Mean Temperature”, refusing to the death to publish the actual “materials and methods” science behind their conclusions, claiming “peer review” by a few select peers would ensure the “given truth” of whatever they had reviewed, and about a million other very telltale practices.

So I always thought it was hilarious when they’d say things like, “We can’t explain the temperature record without using CO2 concentrations.” Well, of course you can’t – because you are either totally inept or else knowingly operating in a completely propagandistic, video game Fantasyland.

It seems they can’t explain the past record without CO2, but they can’t make any successful predictions with CO2.

The amount of carbon we add to the atmosphere can be estimated with reasonable accuracy, as can the actual increase. There is a discrepancy: about 50% of the added carbon is missing … absorbed by the biosphere and oceans.

Alternatively a big chunk of the so-called ‘missing sink’ is merely an artefact of assuming a longer residence time for CO2 than is actually the case.

Bill Illis, I am fairly sure the radiative feedback (labeled as a response) is just the Planck response, which is due to global warming. This is what has to balance the forcing and feedbacks on the long term.

So Willis, this is probably a dumb question; but I’m going to ask it anyway.

Your fig 3 graph of the GISSE model (the grey-blue line): so they take the known laws of physics, programmed into their 160-Terraflop machine; and they take the present global temperature anomaly condition, presumably from the last data point that Dr James Hansen plotted on his GISSTemp graph; and then they hit the RUN (back) button on the Terraflop, and it computes this blue-grey graph all the way back to 1880??

Do I have this correct? Just exactly what are they modelling in GISSE, and why would they not simply graph the actual temperatures themselves, rather than the anomalies? With 160 Terraflops, they should certainly be able to replicate the actual temperatures at each one of their GISSTEMP weather stations; so why the anomalies, rather than real global temperatures, since they do have 160 Terraflops to play with? That’s almost as much climate computing power as Mother Gaia has in my front yard.

“One of the most surprising findings to me, which no one has commented on, is the sensitivity. Depending on whether we include a linear trend term or not, the sensitivity of the GISSE model is either half a degree C or 1.3°C per doubling of CO2. Regardless of the merits of my analysis, that much is indisputable, it’s just simple math.

But both those numbers are way below both the canonical IPCC value (2° – 4.5°C per doubling) and the value given by the GISSE modelers for their model (2.7°C per doubling). The larger value from the analysis is less than half what GISS says the sensitivity of the model is.

Wouldn’t it be nice if someone from the GISSE modeling team would comment on this, or explain to me where I’m wrong? Or say anything?

But I suppose they’re at the AGU conference learning about how to communicate the holy writ of science to us plebeians …

Anyone with any insights on that question about sensitivity?

w.”

I don’t have the time to go through your mathematics in detail, but it seems to me there is a flaw in your logical analysis. The basic physics says that the change in temperature due to a radiative imbalance will proceed until the imbalance is eliminated, at which point there is said to be an equilibrium condition, with emission of radiation balancing absorption of radiation by the earth/atmosphere system. The ultimate temperature change, at which the equilibrium condition is reached, is the climate sensitivity.

The time for this imbalance to be corrected is quite long, because the heat capacity of the earth is large. There are a number of different components in the heat capacity, and vastly different time constants between components, some of which are not well understood. The longest time constant is associated with the transmission of heat from the ocean’s surface into the depths of the ocean. So the ultimate temperature change associated with the forcing will take a long time to develop. It is this ultimate temperature change that is the climate sensitivity.

Because of the time lag, it doesn’t make sense to do a simple linear correlation between the temperature and the instantaneous forcing, and claim the result should equal the climate sensitivity obtained by the climate modelers.

It seems that you have become so enmeshed in the mechanics of the mathematics involved in the linear correlation, that you have lost sight of the important basic ideas involved in the theory of global warming.

In the world of modelling we call this a “fudge”. When made-up data sets are used to create a fit with observations while also accommodating a theory we believe in, it’s nothing but a “look, it could work, assuming this, this, this and this happened like this, this and this”. If there are no records for many of the forcings, then the models are nothing but a fudged model with no skill. But I always knew that ;0)


See also Kerr’s issues with the Idso paper for the other side, where he criticizes Idso for doing physical experiments rather than doing what real scientists would be doing in that case — “constructing a mathematical model whose workings would mimic the physical world” …

Well, that’s kinda bizarre … I decided to look at the cumulative forcing in the dataset used for forcing the GISSE model.

Anyone care to take a guess (no peeking) at the cumulative forcing (running sum of all of the forcings used by GISSE) between 1880 and 1980? Bear in mind that there is a significant temperature difference between the two dates.
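For anyone who wants to check, the quantity in question is just a running sum over the yearly forcing series. A sketch with made-up numbers (the real GISS forcing file would be loaded in place of the synthetic stand-in):

```python
import numpy as np

years = np.arange(1880, 1981)
# Synthetic stand-in series: a slow trend plus deep "volcanic" dips.
forcing = 0.01 * (years - 1880) - 2.0 * (years % 30 == 0)

cumulative = np.cumsum(forcing)  # the running sum Willis is asking about
print(round(float(cumulative[-1]), 1))  # -> 42.5 with these made-up numbers
```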

The real sleight of hand here is convincing people that fitting the “temperature anomaly” is a sign of understanding, rather than producing a temperature map of the earth. A temperature map is a genuine physical quantity that can be directly compared to how the climate behaves. The anomaly is not physical and has no direct meaning for anything that happens in the real world. The plants in your backyard don’t respond to some global average; they only respond to local temperature.

There was a short discussion of this on Lucia’s Blackboard once, and the models produced rather unconvincing maps.

Is anyone concerned that the GISSE Climate Model Results are much smoother than the GISS Global Temperature in Figure 1?

Speed, good question. The variance (a mathematical term for year to year variations) is likely smaller in the GISS model simply because it is probably an average of several model runs, rather than a single run.
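That smoothing effect is easy to demonstrate: averaging several runs that share a trend but have independent “weather” noise cuts the year-to-year variance by roughly the number of runs. A toy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_runs = 120, 5

# Five synthetic "runs": identical slow trend, independent weather noise.
trend = np.linspace(0.0, 0.8, n_years)
runs = trend + 0.2 * rng.standard_normal((n_runs, n_years))

single = runs[0]
ensemble_mean = runs.mean(axis=0)

# Averaging cuts the detrended year-to-year variance by ~n_runs.
print(np.var(single - trend) > np.var(ensemble_mean - trend))  # -> True
```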

See that little dip at 1975? That’s the year HP came out with their first HP-65 programmable calculator for scientists for $800. I bought one immediately but most people in science no longer had to think, the program would do it for you, and the temperature record has done nothing but rise linearly ever since, (per GISS that is).

Hansen’s RPN GCM must still be running, for as they say, NEVER re-write logic in code that works. Save your brain, just use it. ☺

“See that little dip at 1975? That’s the year HP came out with their first HP-65 programmable calculator for scientists for $800. I bought one immediately but most people in science no longer had to think, the program would do it for you, and the temperature record has done nothing but rise linearly ever since, (per GISS that is).

Hansen’s RPN GCM must still be running, for as they say, NEVER re-write logic in code that works. Save your brain, just use it. ☺”

Well actually, that would have been a model 35 Calculator; and they never were anything like $800. I believe that HP employees could have got one for $350 and I think normal retail was around $400.

The much later model 65 had a magnetic card reader; and it was not $800 either. I have a model 65 in mint condition that works perfectly; and an older model 35 that has a shot on/off switch. A cheap slide switch that simply couldn’t take the on/off usage. The Battery packs and chargers were among the weak links in what otherwise was a landmark product line.

It was Litronix with a $60 simple four function plus square root calculator that first introduced a hand held calculator with a key stroke on/off function that solved the on-off switch problem; and vastly improved battery life. (and automatic time out shutoff.)

This was discussed in the early days of RC, before the censor devil made any serious discussion impossible there.

The main point in all current climate models is that they expect one sensitivity for all kinds of forcings: 1 W/m2 increase in insolation has the same effect (+/- 10%) as 1 W/m2 more downward IR from more CO2. Which is quite questionable.

Solar has its main effects in the tropics, as well as in the stratosphere (ozone, poleward shift of jet stream positions, rain patterns) and in the upper few hundred metres of the oceans. And there is an inverse correlation with cloud cover. CO2 has its main effect more widespread over the globe, mainly in the troposphere; IR is captured in the upper fraction of a mm of the oceans (more reflection, more evaporation?) and has no clear effect on ocean heating or cloud cover.
That models don’t get cloud cover right can be seen here: http://www.nerc-essc.ac.uk/~rpa/PAPERS/olr_grl.pdf

The moment you use different sensitivities for different forcings, you can attribute any set of forcing × sensitivity and match the past temperature with better and better R^2, where the (mathematical! not necessarily the real) optimum may show a very low sensitivity for CO2, as Wayne calculated: December 19, 2010 at 11:36 pm

Further, the multi-million dollar GCM’s don’t perform better in hindcasting the temperature trend: Your (and others) simple EBM (energy balance models) only based on the forcings do as well or better than the very expensive GCM’s.
That was discussed by Kaufmann and Stern; their work is no longer online, but it was discussed here: http://climateaudit.org/2005/12/21/kaufmann-and-stern-2005-on-gcms/
From that link:

These results indicate that the GCM temperature reconstruction does not add significantly to the explanatory power provided by the radiative forcing aggregate that is used to simulate the GCM

This story (of the GISSE Terraflop) sounds much like a minor event that occurred sometime in 1961, in an electronics trade journal named Electronic Design, in their column “Ideas For Design.”
Now recall this was in the days of IBM mainframes (and Control Data) and computer timesharing.
So in this “Idea”, the author (a reader) had been donated one hour of computing time on some IBM mainframe timesharing system, and he had been tasked with designing a simple two-transistor amplifier. The requirement was for an amplifier with a voltage gain of 10.0 +/- 1.0; and the designer “claimed” that he had done a “worst-case” design, based on 5% tolerance resistors and some reasonable production spread in transistor beta (common emitter current gain).

So our hero proposed to use his hour of expensive IBM time to do a Monte Carlo analysis of this design and find out what the production yields might turn out to be.

The circuit design consisted of a common emitter gain stage, with the base biased up on a resistive divider across the (ten volt) power supply, with a collector load resistor and a small emitter degeneration resistor. A second identical transistor was connected to the load resistor as an emitter follower (common collector) stage, to provide the final output.

So he runs the MC analysis on his worst case designed circuit, and plotted his results. Holy Cow !!! Howdat happen ?

The IBM MC said that the gain was NOT 10.0 +/- 1.0 but was more like 9.6 +/- 1.2; but all was not lost, because that multiflopping IBM monster (1103 I think) told him that the emitter degeneration resistor was the most critical component in the design, and the collector load resistor was the second most critical; and the transistor beta was the third most critical design parameter.

But the computer said it could fix his circuit, and it recommended changing the collector load resistor to the next highest 5% resistor value; and he would get the right gain of ten pretty much, but he would still have some fallouts, beyond that +/- 1.0 gain spread. Computer couldn’t think of any way round that; just basic laws of Physics; Tough S*** !!

Now this design genius figured that Monte Carlo was a great thing, if you ever got donated a free hour on an IBM 1103.

Did I explain that this chap had already done a “WORST CASE” design, which said his wonder circuit would do the job? So how the hell did MC find some examples that lay outside the worst-case boundaries?

But the neat part was that the computer could tell him that the next resistor value up from his 4.7 kOhm load resistor would be 5.1 kOhm; problem solved.

Well his idea for design got rave reviews; and lots of folks wondered how to scrounge some number crunching time.
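The flavour of that 1961 exercise is easy to reproduce today. A sketch of a Monte Carlo tolerance analysis on the crudest possible gain model (gain ≈ Rc/Re with 5% resistors; this ignores beta and the bias network entirely, so it is illustrative, not a recreation of the original circuit):

```python
import random

random.seed(42)

def with_tol(nominal, tol=0.05):
    """A component value drawn uniformly within +/- tol of nominal."""
    return nominal * random.uniform(1.0 - tol, 1.0 + tol)

def gain():
    """Crude CE-stage gain ~ Rc/Re, nominal 4700/470 = 10.0."""
    return with_tol(4700.0) / with_tol(470.0)

samples = [gain() for _ in range(100_000)]
lo, hi = min(samples), max(samples)

# The tolerance-stack extremes are (1.05/0.95) and (0.95/1.05) times 10,
# i.e. about 9.05 to 11.05 -- already outside a 10 +/- 1.0 spec.
print(lo < 9.5 and hi > 10.5)  # -> True
```

Even this toy version shows the point of the anecdote: stacked 5% tolerances on a ratio spill outside a naive +/- 10% expectation.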

Doesn’t this sound like this NASA GISSE Terraflop situation; it sure does to me.

Here’s some of the things the IBM machine didn’t, and couldn’t have told our hero.

Hey Dummy ! If you are going to do a circuit design; don’t set the load resistor to 5kOhms; unless you pay real money for a precision component you can’t buy such a thing; so you slapped in a 4.7 kOhm, which is actually a 10% tolerance list value; that you probably had on hand when you breadboarded the prototype; and then you simply called out a 5% tolerance for it; when you found that 10% wouldn’t fly with your WC design (and I do mean it was a WC design; and fit for any Loo !)

The real crime was that this designer didn’t realize that this was a totally brain dead circuit architecture to begin with.
He could have used those two transistors, both as CE voltage gain stages, to create a much higher gain than 10.0; and then he could have applied overall negative voltage feedback, which would have let him set his gain quite accurately as the ratio of just two resistors; and the gain would have been largely independent of any ordinary range of transistor beta spread as well.

He wasted a whole hour of valuable flops on what was a shitty circuit to begin with; and if he had used a decent architecture, he could have done the WC design in his head.

Well this is about how I see this GISSE story. What is the good of all that computer power; if the damn model is a WC design to begin with.

Mother Gaia models this problem (planet earth climate) on her “ANALOG COMPUTER” and she does it in real time; and she has more computing power in her little fingernail than NASA has in all its Terraflops.

Is it any wonder, that Mother Gaia’s model always matches the real world climate; while the muscle bound computer geeks, are still playing with their model.
So they produce garbage out, at ever increasing rates.

Hey for the record; I DO BELIEVE that such number crunching power can be usefully utilized in looking at local patterns of WEATHER; so I am not at all unhappy that NASA has spent my tax dollars on this behemoth.

A part of any modelling program should be the optical scattering due to clouds. When we fly over clouds in an aeroplane; we can’t help noticing that those big billowing thunderheads or cumulus clouds look cotton wool white; and we talk about cloud reflectance numbers of 80% or more from such clouds. Thinner laminar cloud layers (izzat stratus) look far more grey on top; somewhat like the standard Kodak 18% reflectance grey card that film photographers all used to own.

Well, actually, water in bulk has quite low reflectance; about 2% for normal incidence over most of the solar spectrum energy range, with maybe an integrated total of about 3% reflectance over a broader angle-of-incidence range. So how can clouds reflect 80% plus?

Well the answer is that they don’t. Mostly it is just scattering over large angles, so most of the light simply gets turned around and sent back out from whence it came; and everywhere else too.

So to get some numbers, I set up a simple rain drop model. I picked a raindrop that is 2.000 mm in diameter made out of ordinary fresh water. Well I could have picked any size but a 1 mm radius seemed a nice number.
So my light source is 432,000 mm radius; and it is located 93 million mm from my rain drop. Well I just used a mm per mile for the sun to establish a rough angular diameter. Well that comes in at 0.5323 deg angular diameter; for the sun average.
So I clipped my sunbeam with an aperture stop in front of the rain drop, with a radius of 0.8 mm or 80% of the rain drop size.
It turns out that at the edge of that aperture, the ray incidence angle on the water drop is 53.1 degrees, and that is quite close to the Brewster angle for water, so the edge reflected sunlight would be almost perfectly plane polarised, and the real reflectance of the droplet would be quite close to the 2% normal incidence value; but would then increase rapidly beyond that.

So my 1 mm radius raindrop becomes a simple biconvex lens, with a front radius of +1.000 mm, a back radius of -1.000 mm and a central thickness of 2.000 mm, making a perfect sphere lens.
Well, such a lens focusses the sunlight into a beam whose extreme marginal rays strike the optical axis at about 32.5 degrees, making for a 65 degree full cone angle of light at the focus region. That near focal point is almost exactly 0.5 mm, or 1/2 the drop radius, from the second surface of the drop. Now the image is beset by a whole lot of spherical aberration, so it is anything but a point image; the point being that an input beam with zero divergence is converted into a 65 degree full angle beam coming out of the raindrop. If you take away the aperture stop, and illuminate the full droplet, then the cone angle goes way up to 82.6 degrees cone HALF angle.

The collimated beam, now is spread over almost a full hemisphere. If I actually take light from the full solar disk, rather than just its axial point; then the light scatters into a full hemisphere; after passing through just one raindrop.

But some cautions. Because of the Fresnel reflection formulae, the reflectance climbs very rapidly beyond the Brewster angle, so less light is transmitted. Note however that the reflected light itself also contributes to the total scattering, including about 2% max coming straight back; so the single drop of water scatters a nice 0.5 degree solar beam into a full spherical output distribution.

But I prefer to stay with the 80% aperture and limit myself to the basic 65 degree full cone angle. It only takes a few rain drops in succession; I’d guess 3-5 and you have a full spherical beam of almost isotropic angular distribution.
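The geometry above is just two refractions at a sphere, so the quoted figures can be checked in a few lines (using n = 1.333 for water; the full-aperture angle shifts slightly with the index chosen, which is presumably why it comes out at ~82.8° here against the 82.6° quoted):

```python
import math

N_WATER = 1.333  # refractive index of water in visible light

def exit_half_angle(h):
    """Angle (degrees) at which a parallel ray entering a unit-radius
    water sphere at normalized height h crosses the optical axis:
    each of the two refractions deviates it by (theta_i - theta_r)."""
    theta_i = math.asin(h)                            # incidence angle
    theta_r = math.asin(math.sin(theta_i) / N_WATER)  # refraction angle
    return math.degrees(2.0 * (theta_i - theta_r))

print(round(exit_half_angle(0.8), 1))              # -> 32.5 (80% aperture)
print(round(exit_half_angle(1.0), 1))              # -> 82.8 (full aperture)
print(round(math.degrees(math.atan(N_WATER)), 1))  # -> 53.1 (Brewster angle)
```

The Brewster-angle coincidence in the comment also checks out: arcsin(0.8) and arctan(1.333) both land at about 53.1°.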

Of course the size is somewhat irrelevant. 1.0 mm radius is huge in visible-light optics. Your optical mouse (especially laser mice) has lenses in it that can have 1 mm radius-of-curvature surfaces and less than 0.5 mm apertures.

So the apparent reflectance of clouds, is actually a fairly efficient scattering that quickly turns a solar collimated beam into an isotropic light distribution; with relatively little actual loss, except at the spectral regions in the 0.7 to 4.0 micron range, where H2O has strong absorption bands; specially at 0.94 and 1.1 microns.

Lucia’s Lumpy model looks like it’s a restatement of Newton’s “Law” of Cooling. In Lumpy, the unrealized temperature change to a given forcing is realized by the formula exp(-t/T). In Newton’s model I’ve seen this expressed as exp(-rt). So it looks like: r = 1/T. Using this model, one can show that the “heat in the pipeline” converges to a maximum value. Given a high enough r value, this convergence happens quickly, resulting in the same rate of heating going into and coming out of the pipeline. This would imply that estimating sensitivity using linear regressions of ln(co2) and temperature is valid.

However, from what I’ve seen, it looks like using a constant “r” is not generally accepted by the thermodynamics experts. I haven’t seen a clear explanation of how the change is realized, but it looks like it’s some sort of “exponential integral” curve. Effectively, this has the realization rate decreasing over time due to ocean conduction and heat building up in the pipeline, with linear regressions no longer being valid. However, since the amount realized is fairly rapid initially, it just looks like there is a linear relationship between the forcing and the modeled temp.

How one would use observations to pick the more suitable model or the parameters for the “exponential integral” model is beyond me. At one time I thought that the zero lag between Milankovich Cycles and temperature might challenge the “exponential integral” model, but that was just a SWAG.
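A minimal sketch of the constant-r (Newton’s-cooling) version of Lumpy described above, with illustrative parameters (the sensitivity of 0.8 °C per W/m² and τ = 10 years are assumptions, not Lucia’s fitted values):

```python
import math

def lumpy_step(temp, forcing, sensitivity=0.8, tau=10.0, dt=1.0):
    """One step of a one-box lag model: temperature relaxes toward the
    equilibrium response (sensitivity * forcing) with time constant tau.
    With r = 1/tau this is exactly Newton's law of cooling."""
    target = sensitivity * forcing
    return target + (temp - target) * math.exp(-dt / tau)

# Step-function forcing of 1 W/m2: after many time constants the
# temperature converges to sensitivity * forcing; the "pipeline" empties.
temp = 0.0
for _ in range(100):
    temp = lumpy_step(temp, 1.0)
print(round(temp, 3))  # -> 0.8
```

The convergence is the behaviour the comment describes: with a constant r, the “heat in the pipeline” saturates rather than growing without bound.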

“The first surprise was how close the model results are to a bozo simple linear response to the forcings plus the passage of time (R^2 = 0.91, average error less than a tenth of a degree). Foolish me, I had the idea that somehow the models were producing some kind of more sophisticated, complex, lagged, non-linear response to the forcings than that.”

They are, what you have done is approximate it.

“This almost completely linear response of the GISSE model makes it trivially easy to create IPCC style “scenarios” of the next hundred years of the climate. We just use our magic GISSE formula, that future temperature change is equal to 0.13 times the forcing change plus a quarter of a degree per century, and we can forecast the temperature change corresponding to any combination of projected future forcings …”

That makes no sense to me. Can’t you just look at the actual ModelE run for an IPCC scenario rather than trying to guess it with linear regression?

“Second, this analysis strongly suggests that in the absence of any change in forcing, the GISSE model still warms.”

No it doesn’t. What you’ve shown is that approximating modelE with a line of best fit doesn’t work and you have to add an extra constant term. That extra constant term is likely needed because the model has lagged response, not because there is an inherent warming trend in GISTEMP.

That is also probably why you find a climate sensitivity so low, because you are excluding the extra constant term as if it has nothing to do with the forcings (only in your regression model does it have nothing to do with the forcing – in the actual modelE it probably does)

“Third, the climate sensitivity shown by the analysis is only 0.13°C per W/m2 (0.5°C per doubling of CO2). This is far below the official NASA estimate of the response of the GISSE model to the forcings. They put the climate sensitivity from the GISSE model at about 0.7°C per W/m2 (2.7°C per doubling of CO2). I do not know why their official number is so different.”

Obviously you’ve made a mistake, because the answer is known. If they run the model with a forcing of 4 W/m2 and get a 3C temperature rise out of it, then ModelE has a sensitivity of about 0.7C per W/m2. If you find a different result for what sensitivity ModelE should show using linear regression, then you have found the wrong answer, which probably implies there is a flaw with the linear regression method (and I bet in this case it has to do with the exclusion of that constant 0.25C/century term)
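For what it’s worth, the disputed “half a degree” is just the regression slope times the standard doubling forcing. A sketch of the two-term emulator quoted above and the sensitivity it implies (the 3.7 W/m² per doubling is the standard figure, an assumption here rather than a number from the post):

```python
F_2XCO2 = 3.7  # W/m2 per CO2 doubling (standard figure, assumed here)

def emulator(delta_forcing, years, slope=0.13, trend=0.25):
    """The quoted two-term emulation of the GISSE output:
    delta_T = slope * delta_forcing + trend * (years / 100)."""
    return slope * delta_forcing + trend * years / 100.0

# Sensitivity implied by the slope alone (trend term held aside):
print(round(emulator(F_2XCO2, 0.0), 2))  # -> 0.48 C per doubling
```

Whether that 0.48 °C or the official 2.7 °C is the “right” reading of the model is exactly the point under dispute in this thread.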

It’s nice to know that by the judicious application of a computer model, naturally chaotic changes can be smoothed away, just like the boom/bust cycles were smoothed out of the computerized economic models.
Unfortunately, computers communicate with the climate about as well as they communicated with the economy.

For those who are surprised that the result of such a complex model can be replicated so simply, consider modelling the following situation.

A beaker of water contains various objects of complicated shape in a known volume of water which is constantly stirred. A known amount of dye is added and the model is intended to predict the resulting concentration of the dye after a short time.

Method 1: Concentration = amount added/known volume.

Method 2: Set up the Navier-Stokes equation for the stirrer; assume a value for the viscosity (etc.) of the water. Determine equations to describe the shapes of the objects (including the stirrer, which will need to be described as a function of time) in the beaker. Use these equations to set up the boundary conditions for the numerical solution of the Navier-Stokes equation. Get a supercomputer. Write the code, run it, and come back tomorrow.

It would probably take quite a few attempts, but I think you could eventually get Method 2 to give the same results as Method 1.

Willis Eschenbach says:
“Second, this analysis strongly suggests that in the absence of any change in forcing, the GISSE model still warms. This is in agreement with the results of the control runs of the GISSE and other models that I discussed at the end of my post here. The GISSE control runs also showed warming when there was no change in forcing. This is a most unsettling result, particularly since other models showed similar (and in some cases larger) warming in the control runs.”

There is no reason to be unsettled by this result, if one understands the basic physics underlying the theory of global warming. The surface temperature can continue to increase even when forcing decreases. Forcing is an imbalance between the rate at which energy from the sun is absorbed by the earth and the rate at which it leaves the earth by radiation. The rate of change of temperature is the forcing integrated over the earth’s surface divided by the effective heat capacity of the earth. Actually, the earth’s heat capacity is not a simple number, because the time constant for heating of the ocean surface is much shorter than the long time constant for heating of the deep oceans, due to the imbalance between surface and deep ocean temperatures. The temperature will keep on increasing until the surface is warm enough that the outgoing radiation flux equals the incoming flux, even while the forcing decreases. So an increase in forcing is not necessary to have an increase in temperature.
Finding that hindcasts exhibit linear correlation between the global temperatures and forcing is not sufficient grounds to conclude that there is some kind of general law relating the two variables.
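The lagged response described here can be sketched with a one-box energy-balance model. The feedback and heat-capacity values below are illustrative assumptions, not GISSE parameters:

```python
# A minimal one-box energy-balance sketch: dT/dt = (F - lam*T) / C.
# Illustrative values only: lam (feedback, W/m2 per K) and C (effective
# heat capacity, W*yr/m2 per K) are assumptions, not GISSE parameters.
lam = 1.4   # W/m2/K; equilibrium warming for forcing F is F/lam
C = 30.0    # W*yr/m2/K, roughly an ocean mixed-layer scale
dt = 0.1    # timestep in years

def run(years, forcing):
    """Integrate the temperature response to a forcing time series F(t)."""
    T, temps, t = 0.0, [], 0.0
    while t < years:
        T += dt * (forcing(t) - lam * T) / C
        temps.append(T)
        t += dt
    return temps

# Forcing ramps up for 50 years, then is held perfectly constant.
ramp_then_flat = lambda t: min(t / 50.0, 1.0) * 3.7
temps = run(100, ramp_then_flat)

# Temperature keeps rising after year 50 even though the forcing has
# stopped changing: the lagged response the comment above describes.
still_warming = temps[-1] > temps[len(temps) // 2]
```

In this toy setup the temperature at year 100 is still below the equilibrium value F/lam, so it continues to climb with no change in forcing at all.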

It’s not surprising that your simple model of near-straight lines can match the teraflop versions. This is because some of the critical assumptions for the teraflop model are extremely simple and no amount of computer power can add valid complexity to them.

For example, aerosols are one of the main negative (cooling) forcings. Stratospheric aerosols are partly described on the GISS page you referenced. Here is an extract from Sato:

“Updates (April 2002)
Data for Krakatoa and Santa Maria were modified. The optical thickness for Krakatoa is 1.1 times that for Pinatubo, based on three-year integration of pyrheliometer data, with the spatial distribution based on Pinatubo but with the two hemispheres switched. The optical thickness for Santa Maria (0.55 times that of Pinatubo) has comparable aerosol amount in both hemispheres based on ice core data.

The effective radius of the aerosol particles is defined for large volcanoes as

reff = 0.20 + τmax(latitude)^0.75 × f(t−t0)   (µm)

where f(t-t0) is a function of time derived from the observed reff for Pinatubo, while keeping the observed values for El Chichon and Pinatubo.”

The optical thickness is measured at 550 nm (green light to the eye), but the absorption of light in stratospheric clouds could well be rather different at different wavelengths. (Dunno, have not measured it personally).

Now, in the period 1880 to 1920 we have a strong cluster of aerosols but no instruments. The “constants” used in equations like the one above seem to come from proxy reconstructed upon proxy, to about one significant figure, which seems a bit cavalier given the dominance of the effect on the model. Elsewhere, it is said that aerosols are kept constant since 1990, which is a bit of a gasp as well, given reports of the increasing haze over China as more coal is used.
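As a side note, the quoted effective-radius relation is simple enough to write out directly. Since the extract says f(t−t0) is derived from observed Pinatubo data but does not give its form, the exponential decay below is purely a hypothetical stand-in:

```python
# The quoted effective-radius relation, written out:
#   reff = 0.20 + taumax(latitude)**0.75 * f(t - t0)   (micrometres)
# The extract does not give the form of f, so this exponential decay
# is purely hypothetical, for illustration only.
import math

def f_hypothetical(years_since_eruption, decay_years=1.0):
    """Stand-in time function; NOT the actual Sato f(t - t0)."""
    return math.exp(-years_since_eruption / decay_years)

def reff(taumax, years_since_eruption):
    """Effective aerosol radius in micrometres (illustrative f only)."""
    return 0.20 + taumax ** 0.75 * f_hypothetical(years_since_eruption)
```

The point stands regardless of the form of f: the whole relation hinges on a handful of one-significant-figure constants.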

To cut a long story short, what is the point of a teraflop model when so many of the major constants or coefficients are just a guess, enlightened as the guess might be?

Willis, just FWIW: according to a lecture I heard at AGU, there are 32 parameters that get fixed for a GCM. This researcher took an approach that was familiar to me: they did a perturbed-physics ensemble. I believe Tim Palmer talked about this in his “grand challenge” presentation. I had lunch with him, Judith, and Peter Webster. I only wish it could have lasted longer. He’s got a book out (he was signing it for a student when we met); you might give it a look.

It seems the volcanic forcing should not be net negative (but it is), because you would need a compensating positive bias just to maintain a steady climate. This might mean that the GISS background warming is the response to an unusually clean stratosphere. The mean volcanic forcing should be subtracted out if it is considered part of the natural forcing.

Sorry George, but that was how I got into computer programming in the first place, and I was not speaking of an HP-35; my roommate in engineering had one during the early 70’s. But after looking up that $800, it was actually $700, from the OSU bookstore, hot off the HP production line in February 1975. The price dropped as the months went by, but that is what I had to fork over for it. It took my brother’s business from nobody to #5 in two years, all hand-written in RPG on three magnetic strips. So please check your facts. That story is a relished part of my life.

Now, if you still have a working one, great. I don’t, but I still have the manual. Don’t you think that was such a classic model if you view its technology for those times? I do.

…
The moment you use different sensitivities for different forcings, you can attribute any set of forcing × sensitivity and match the past temperature with better and better R^2, where the (mathematical! not necessarily the real) optimum may show a very low sensitivity for CO2, as Wayne calculated: December 19, 2010 at 11:36 pm.
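Ferdinand’s point is easy to demonstrate: giving each forcing its own free weight can only raise R^2 relative to a single shared sensitivity, regardless of physical meaning. A sketch with synthetic data (all series below are made up, not the real GISS forcings):

```python
# Ferdinand's point in miniature: fitting one free weight per forcing
# can only improve R^2 over a single shared sensitivity, regardless of
# physical meaning. The "forcings" below are synthetic, not GISS data.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_forcings = 130, 10
forcings = rng.normal(size=(n_years, n_forcings))
true_weights = rng.normal(size=n_forcings)
temps = forcings @ true_weights + rng.normal(scale=0.5, size=n_years)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = ((y - X @ beta) ** 2).sum()
    sst = ((y - y.mean()) ** 2).sum()
    return 1.0 - sse / sst

# One shared sensitivity applied to the summed forcing, versus a free
# weight for every individual forcing:
r2_single = r_squared(forcings.sum(axis=1, keepdims=True), temps)
r2_multi = r_squared(forcings, temps)
```

Because the summed-forcing column lies inside the span of the individual columns, the multi-weight fit can never do worse, which is exactly why a high R^2 from freely tuned weights proves so little.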

Ferdinand, I see your point. That analysis I did above is saying some forcings have more weight, though they are all given as firm W/m2 values. If you fit to the anomalies as I did, that result is really saying, if it is real at all, that the 10 or so forcings in W/m2 are not really correct, and that their actual values may be found in the future to follow the weights laid out above. This could be right or wrong or in between. I do still find those weightings that matched the observations curious: the forcings with high factors are the same ones that have been pointed out here so many times in the past.

I have kept a copy of my weightings above, to see whether in the following years science finds that each forcing is in fact not really correct as assumed in 2010, and whether any corrections move in the direction given by the weightings that a simple Excel fit says they must take, for that is what the temperatures say they must be. That would be great, showing that physics does actually work, even in climate science. (But it really, really also needs a column for the UHI synthetic forcing incorrectly imprinted in the GISS temps.)

Thanks again to Willis for the data and the opportunity to actually learn; without the actual data, it’s a barren desert of conflicting opinions.

Speed says:
>>
Is anyone concerned that the GISSE Climate Model Results are much smoother than the GISS Global Temperature in Figure 1? While year-to-year variations are more weather than climate, and we expect a climate model to reproduce the trend and average measured global temperature rather than exact annual values, why doesn’t the model reproduce the wide variations that occur from year to year?
>>

Probably the main reason is that they do not even attempt to model major variations in ocean currents. It’s a bit like cloud formation: they don’t have any real understanding of the processes so they can’t model them. Instead they IGNORE THEM.

Yes, I was gob-smacked when I found that out. If it had not come from a direct reply from someone at Hadley Met. Office research team I would have doubted it.

Apparently they “think” that what they call “internal variability” is unimportant and should average out over time.

Since ocean currents clearly do have profound effects on climate, without understanding the mechanisms and the timescales, that seems to be a gross assumption.

Climate models are decades away from being any use as predictive tools.

There is NO WAY they should be given the slightest consideration for future global energy policy.

“The IPCC actually says that because the tuned models don’t work well with part of their input removed, this shows that humans are the cause of the warming … not sure what I can say about that.”

Yup, that is pretty much how I understood it. Whenever they claimed that they could not reproduce the modern warming in their models without the anthropogenic inputs, my response has always been, “Your model is wrong, or the global homogenised temperature record is wrong.”

“One of the most surprising findings to me, which no one has commented on, is the sensitivity. Depending on whether we include a linear trend term or not, the sensitivity of the GISSE model is either half a degree C or 1.3°C per doubling of CO2. Regardless of the merits of my analysis, that much is indisputable, it’s just simple math.”

Willis,
Playing devil’s advocate for a minute: are you assuming that the linear relationship continues into the future, whereas the all-powerful GISSE does not limit itself to a linear relationship once the positive feedbacks really get going?
Ed

The variance (a mathematical term for year to year variations) is likely smaller in the GISS model simply because it is probably an average of several model runs, rather than a single run.

One of the arguments against anthropogenic global warming, at least on the scale postulated by warmers, is that climate is quite variable — what some view as “global warming” or “climate change” may simply be natural variability. In the GISS temperature signal I see a lot of noise. Or variability.

If we postulate that individual GISSE runs are equally (or more) noisy but many runs are averaged together to reduce the noise (a type of signal averaging), are they not reducing the reported natural variability of their model output?
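The signal-averaging effect Speed describes is easy to demonstrate with toy data: averaging N independent noisy “runs” shrinks the year-to-year scatter by roughly 1/sqrt(N). A sketch, with all numbers purely illustrative:

```python
# Signal averaging: the mean of N independent noisy "runs" has its
# year-to-year scatter reduced by roughly 1/sqrt(N). All numbers here
# are illustrative, not taken from any actual GISSE ensemble.
import random, statistics

random.seed(42)
n_years, n_runs = 120, 16
trend = [0.005 * yr for yr in range(n_years)]  # shared slow "climate" signal

def one_run():
    # signal plus independent "weather" noise, sd = 0.15
    return [t + random.gauss(0, 0.15) for t in trend]

runs = [one_run() for _ in range(n_runs)]
ensemble_mean = [sum(r[yr] for r in runs) / n_runs
                 for yr in range(n_years)]

def detrended_sd(series):
    """Standard deviation of a series after removing the known trend."""
    return statistics.pstdev(x - t for x, t in zip(series, trend))

sd_single = detrended_sd(runs[0])        # close to the injected 0.15
sd_mean = detrended_sd(ensemble_mean)    # close to 0.15 / sqrt(16)
```

With 16 runs the ensemble mean is roughly four times smoother than any single run, which would be enough to explain the difference in variability seen in Figure 1 if GISS is reporting an ensemble average.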

Perhaps there is a model run that mimics the story of Phaeton, “The running conflagration spreads below. But these are trivial ills: whole cities burn, And peopled kingdoms into ashes turn.” until the operator (echoing Zeus) mercifully terminates the run or averages it into oblivion, returning the GISSE model Earth to a more believable regime.

Sorry George, but that was how I got into computer programming in the first place, and I was not speaking of an HP-35; my roommate in engineering had one during the early 70’s. But after looking up that $800, it was actually $700, from the OSU bookstore, hot off the HP production line in February 1975. The price dropped as the months went by, but that is what I had to fork over for it. It took my brother’s business from nobody to #5 in two years, all hand-written in RPG on three magnetic strips. So please check your facts. That story is a relished part of my life.

Now, if you still have a working one, great. I don’t, but I still have the manual. Don’t you think that was such a classic model if you view its technology for those times? I do. “””””

Wayne,
When I rechecked your date (1975) I realized that was considerably later than the HP-35 introduction, and much more like the HP-65 release date; with the 45 coming in between.

I was using an HP desktop “calculator,” the model 9830, to do lens designs, specifically for the Litronix LED displays that went into their el cheapo calculators, which were also of that 1975 era. We were not in competition with HP’s market. I also wrote the whole optical ray-tracing program for that 9830 and used it to make better LED displays (for calculators) that even HP couldn’t match.

My HP-65 still operates flawlessly; but I most often use newer models like the HP-32 and I think HP-34.

I first ran into those scientific calculators with the Wang machines that multiplied by using logs. It didn’t make any sense to me to calculate a log, instead of doing a multiply or even a divide; and I spent eons trying to understand the algorithm that Wang was using. It was actually a very crude forerunner of the general CORDIC algorithm that was the core of the HP-35.

I tried to make my own scientific calculator and even strung up my own magnetic core memory, using 30 mil ferrite cores. I still have that mag memory plane somewhere.
I actually purchased the HP-9830 off Litronix, when I left the company, and continued to do lens designs for them on it; until some varmints broke into my house and stole the machine.

But as to the HP-35-45-65; I agree with you they were totally game changing products. I believe that the guy who brought the project to HP was named Osborne; maybe Tom Osborne; but I can’t be sure, in any case he made quite a name for himself; and wrote a number of books on computers and programming.

I know I didn’t pay $700 for my HP-65, but I was certainly not one of the early buyers, as I had the 9830.

P. Solar says:
December 21, 2010 at 2:40 am
Speed says:
>>
Is anyone concerned that the GISSE Climate Model Results are much smoother than the GISS Global Temperature in Figure 1? While year-to-year variations are more weather than climate, and we expect a climate model to reproduce the trend and average measured global temperature rather than exact annual values, why doesn’t the model reproduce the wide variations that occur from year to year?
>>

Probably the main reason is that they do not even attempt to model major variations in ocean currents. It’s a bit like cloud formation: they don’t have any real understanding of the processes so they can’t model them. Instead they IGNORE THEM.

##################

this is a common misunderstanding. You do not model EMERGENT phenomena.
Let’s take a simple case of a fluid dynamics simulation: modelling the flow of a fluid over a surface. You do NOT explicitly model a vortex. IF you model the fluid and the surface correctly, THEN you will see these flow structures emerge. Similarly, the atmosphere and the oceans are modelled using the same equations that we use to model fluid flows in, say, aircraft design. If you get the geometry right, and you get a good number of the physical factors (largely) correct, then the circulation patterns emerge. So you will see some of the well-known circulation patterns EMERGE as the simulations run. This is evidence that the models are getting the problem right (after all, it’s just Navier-Stokes). The devil is in the details. Do they get all the circulations? Do they get them with the correct frequency? Matching timing against the historical record would probably require a data assimilation step, a huge computational load. So, for example, a model would only match the 1998 El Nino by chance, not by construction. The situation is very much the same in, say, CFD. We could predict with some accuracy that a vortex would form off the leading edge extension of an aircraft (see the F/A-18), and we could predict that this high-energy flow would be of tremendous help, but unfortunately the models could not predict the kind of buffet the vertical tails would see at high AOA.
Consequently, the production plane had problems with the tails: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.992

Nobody in aerospace concludes from this that models are junk. The physics is known; making it work on a computer is tough. What should concern you about GCMs is perhaps the inability to predict real catastrophic climate change.
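A miniature illustration of emergence, for what it’s worth: the Lorenz system is just three coupled ODEs, yet its famous two-lobed “circulation” appears without ever being explicitly programmed in. (This is an analogy only; real GCMs solve far larger discretised fluid equations.)

```python
# Emergence in miniature: the Lorenz system is three coupled ODEs, yet
# a two-lobed "butterfly" circulation emerges without being explicitly
# programmed. An analogy for Mosher's point, not a GCM.
def lorenz_path(n_steps=20000, dt=0.005,
                sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz equations with a simple Euler step."""
    x, y, z = 1.0, 1.0, 1.0
    path = []
    for _ in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        path.append((x, y, z))
    return path

path = lorenz_path()

# The trajectory visits both "wings" (x well above 0 and well below 0):
# structure that emerges from the equations, not from any wing being
# modelled directly.
visits_both_wings = (any(p[0] > 5 for p in path)
                     and any(p[0] < -5 for p in path))
```

Nowhere in the code is a wing, an oscillation, or a switching time specified; they all fall out of the governing equations, which is Mosher’s point about circulations in GCMs.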

Wayne, indeed it makes no mathematical difference whether you adjust the forcings themselves or the sensitivities for each forcing. The latter is somewhat more correct if the forcing is known (as is the case for CO2, at least theoretically, based on absorption spectra), but in other cases, like the forcing attributed to human aerosols, it is far from certain. What is certain is that the current models with “one sensitivity fits all” forcings are far from the real world and only fit the past temperatures thanks to lots of implied fudge factors (clouds, aerosols, …).

“Similarly, the atmosphere and the oceans are modelled using the same equations that we use to model fluid flows in say aircraft design.”

Errr…not quite. The equations used to model atmospheric flows are a bit different, with many built-in assumptions to filter out undesirable solutions (like acoustic waves). See chapter 1 of the MIT GCM documentation here…

(NOTE: The foregoing manual shows that you can REALLY document a code well if you put your mind to it…right NASA GISS??).

George, now that brings back some old memories: the HP-9830 desktop with its one-line LED display. I only used one for a few months before our Wang came in. Your experiences seem pretty parallel to mine over that period. We used a Wang computer, eventually with MVP/CDC drives, from that period until PCs came out in the early 80’s. Those were fascinating days, and I still cringe at how much was spent on them.

[Willis] “The first surprise was how close the model results are to a bozo simple linear response to the forcings plus the passage of time (R^2 = 0.91, average error less than a tenth of a degree). Foolish me, I had the idea that somehow the models were producing some kind of more sophisticated, complex, lagged, non-linear response to the forcings than that.”

They are; what you have done is approximate it.

My point is that if the GISSE model can be that closely emulated with a linear model, then the GISSE model itself is very close to linear. But climate is very far from linear. Perhaps that doesn’t bother you. I think it should.

[Willis] “This almost completely linear response of the GISSE model makes it trivially easy to create IPCC style “scenarios” of the next hundred years of the climate. We just use our magic GISSE formula, that future temperature change is equal to 0.13 times the forcing change plus a quarter of a degree per century, and we can forecast the temperature change corresponding to any combination of projected future forcings …”

That makes no sense to me. Can’t you just look at the actual ModelE run for an IPCC scenario rather than trying to guess it with linear regression?

You’re missing the point. If I can very closely emulate, with a simple linear model, a program that requires days of time on a supercomputer to run … then the extra complexity of the supercomputer model is only making a trivial improvement. Again, given the huge investment of time, energy, and belief in the models, this should be worrisome.

[Willis] “Second, this analysis strongly suggests that in the absence of any change in forcing, the GISSE model still warms.”

No, it doesn’t. What you’ve shown is that approximating ModelE with a line of best fit doesn’t work unless you add an extra constant term. That extra constant term is likely needed because the model has a lagged response, not because there is an inherent warming trend in the model.

That may be so, and I commented on that in my post. But to make your point real, you need to demonstrate that it can be done, not simply claim it … and Lucia was unable to do it with anything like the fidelity that my model has. So your suggested improvement has to beat both my model and Lucia’s model …

In addition, you seem to forget that we know, not think but know, that the GISSE model warms when there is no change in forcing. In the main post I cited the study above that demonstrates that it warms without forcing change. So the onus is not on me to prove that the GISSE model warms with no change in forcing. The onus is on you to show that it doesn’t.

I await your contribution on both those issues. This is a scientific blog, and I have made, substantiated, and provided data, citations, and other backup for my claims. Time for you to do the same.

That is also probably why you find such a low climate sensitivity: you are excluding the extra constant term as if it has nothing to do with the forcings (only in your regression model does it have nothing to do with the forcing; in the actual ModelE it probably does).

Go back and read the head post. I checked the sensitivity both with and without the extra constant term. Both are way, way below what GISSE modelers claim. Please do us all a favor and read my post very carefully. You are arguing against straw men.

[Willis] “Third, the climate sensitivity shown by the analysis is only 0.13°C per W/m2 (0.5°C per doubling of CO2). This is far below the official NASA estimate of the response of the GISSE model to the forcings. They put the climate sensitivity from the GISSE model at about 0.7°C per W/m2 (2.7°C per doubling of CO2). I do not know why their official number is so different.”

Obviously you’ve made a mistake, because the answer is known. If they run the model with a forcing of 4 W/m2 and get a 3°C temperature rise out of it, then ModelE has a sensitivity of about 0.7°C per W/m2. If you find a different result for the sensitivity ModelE should show using linear regression, then you have found the wrong answer, which probably implies there is a flaw in the linear regression method (and I bet in this case it has to do with the exclusion of that constant 0.25°C/century term).

“The answer is known”??? My friend, your faith is positively heartwarming, but tragically misplaced. Haven’t you noticed by now that this is climate science, and that the amount of misinformation is about equal to the amount of information?

My point is simple. My analysis shows a different answer. Your bet that it has to do with the exclusion of the 0.25° per century is a bet I am happy to take. How much money do you want to put on it? Because I already calculated it without the 0.25°/century … you really should do your homework before you offer to bet.

And yes, certainly there may be a flaw in my method. I’ve been wrong lots of times, haven’t we all … including GISS.

You say that because Gavin Schmidt of GISS or someone else has given us the ‘official answer’ about sensitivity from on high, there must be a mistake in my method. While that might pass in church, it won’t pass here. It is a statement of faith and not a statement of science. Science works by someone building a scientific edifice, and other people using science (math, logic, data, etc.) to tear it down.

Saying “you are wrong, because the answer is known” is not just unscientific. It is anti-scientific. As Richard Feynman commented, “Science is the belief in the ignorance of the experts.” And yes, that includes the experts at GISS who wrote the climate model.

I await your improved model that beats out Lucia’s and mine, and your demonstration that the GISSE model doesn’t warm when the forcings don’t change. That’s how science works, not by you claiming over and over again that I must be wrong. I certainly may be wrong … but you have to show that, not simply claim it.

Willis Eschenbach says:
“Second, this analysis strongly suggests that in the absence of any change in forcing, the GISSE model still warms. This is in agreement with the results of the control runs of the GISSE and other models that I discussed at the end of my post here. The GISSE control runs also showed warming when there was no change in forcing. This is a most unsettling result, particularly since other models showed similar (and in some cases larger) warming in the control runs.”

There is no reason to be unsettled by this result, if one understands the basic physics underlying the theory of global warming. The surface temperature can continue to increase even when forcing decreases. Forcing is an imbalance between the rate at which energy from the sun is absorbed by the earth and the rate at which it leaves the earth by radiation. The rate of change of temperature is the forcing integrated over the earth’s surface divided by the effective heat capacity of the earth. Actually, the earth’s heat capacity is not a simple number, because the time constant for heating of the ocean surface is much shorter than the long time constant for heating of the deep oceans, due to the imbalance between surface and deep ocean temperatures. The temperature will keep on increasing until the surface is warm enough that the outgoing radiation flux equals the incoming flux, even while the forcing decreases. So an increase in forcing is not necessary to have an increase in temperature.

The “basic physics underlying the theory of global warming” means that a model should warm for eighty years in the absence of forcings? Really? Then why do only some of the models show that warming during the same identical control runs? Are some of the modelers too stupid to put in the basic physics? Serious question, and if you can’t answer it, then you need to do some homework.

You’ll have to explain that better. Read up on the control runs done on these models, and come back and give a better explanation. Because in the model world, with unchanging forcings, none of the various processes you give above (forcing imbalance, etc.) are going on … so why is the model warming?

For those who are surprised that the result of such a complex model can be replicated so simply, consider modelling the following situation.

A beaker of water contains various objects of complicated shape in a known volume of water which is constantly stirred. A known amount of dye is added and the model is intended to predict the resulting concentration of the dye after a short time.

Method 1: Concentration = amount added/known volume.

Method 2: Set up the Navier-Stokes equation for the stirrer; assume a value for the viscosity (etc.) of the water. Determine equations to describe the shapes of the objects (including the stirrer, which will need to be described as a function of time) in the beaker. Use these equations to set up the boundary conditions for the numerical solution of the Navier-Stokes equation. Get a supercomputer. Write the code, run it, and come back tomorrow.

It would probably take quite a few attempts, but I think you could eventually get Method 2 to give the same results as Method 1.

davidc, I appreciate your example, but I’m not sure what it means. If by concentration you mean the amount of dye added divided by the amount of water (the usual meaning of concentration), then it has that concentration from the first second it was added. How could a supercomputer help with that? So you can’t mean that.

On the other hand, if you want to know the distribution of dye in the water at time T, you’ll have a very hard time doing that with a simple linear model.

So I’m not clear what you are trying to say in your example.

The problem here is much trickier than in your example, for a couple of reasons. First, unlike in your example, the complex model’s output matches, to a high degree of fidelity, a simple linear equation of the form

Temperature = Forcing * 0.10 + 0.25°/century

The complex model’s estimate of the distribution of dye in the water, on the other hand, is not going to be anything like that simple.

Second, it is trickier because we can’t solve the Navier-Stokes equation for the planet-wide climate. As a result, unlike in your experiment, any model must be tested and refined by checking it against reality. Including the complex models.

In your experiment, you could refine your model by checking it against the supercomputer model. But we can’t do that in climate, as the models to date have performed so poorly. So we end up having to tune both complex and simple models to the historical record.

Now, if the output of the complex model is complex and non-linear as the climate is, we won’t be able to match our linear model against it. The output of the complex model would not be replicable (as above) by a simple model.

But (as in this case) if the output is a simple linear transformation of the inputs, what is the point of the complex model? We can’t test our simple models against it, and it provides no better results than the simple linear model.

But perhaps I misunderstand your example. My point is that if the output of a model can be replicated by a simple linear function, that model is linear in practice. It might be solving complex equations involving strange attractors inside the black box, but they’re not affecting the output. And that is definitely not the situation in your example.

Now, I don’t know about you, but the idea that we can forecast future climate scenarios by using a formula like F*S + 0.25T doesn’t strike me as being at all probable … which means that the GISSE model is useless for the purpose. But YMMV.

(Let me clarify by saying that I am not opposed to computer models, I use them, I write them. I know that all models are wrong, but some models are useful. I am aware of and use a variety of heuristic engineering formulas that give us quick-and-dirty answers to complex questions. And I do think that the climate can be successfully modeled, although I think we’re a long ways from it.

However, for forecasting the future evolution of the climate, F*S + 0.25T strikes me as both wrong and not useful …)
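For concreteness, the “magic formula” emulation under discussion amounts to no more than the following. The coefficients are the 0.13°C per W/m2 and 0.25°C per century quoted in the thread; the scenario forcing series is purely hypothetical:

```python
# The "magic formula" emulation under discussion: the 0.13 C per W/m2
# and 0.25 C per century coefficients are the ones quoted in the
# thread; the scenario forcing series below is purely hypothetical.
def emulate(delta_forcing_wm2, years_elapsed):
    """Temperature change (deg C) per the linear fit to GISSE output."""
    return 0.13 * delta_forcing_wm2 + 0.25 * (years_elapsed / 100.0)

# Hypothetical scenario: forcing grows 0.04 W/m2 per year for a century.
scenario = [emulate(0.04 * yr, yr) for yr in range(101)]
warming_2100 = scenario[-1]   # 0.13 * 4.0 + 0.25 = 0.77 deg C
```

Two coefficients and one line of arithmetic reproduce what the supercomputer spends days computing, which is exactly the point at issue.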

Willis, just FWIW: according to a lecture I heard at AGU, there are 32 parameters that get fixed for a GCM. This researcher took an approach that was familiar to me: they did a perturbed-physics ensemble. I believe Tim Palmer talked about this in his “grand challenge” presentation. I had lunch with him, Judith, and Peter Webster. I only wish it could have lasted longer. He’s got a book out (he was signing it for a student when we met); you might give it a look.

And my model has only two parameters … sounds like a fascinating lunch.

BTW Willis, that small excursion I took above (December 19, 2010 at 11:36 pm) was never meant to compete with your method, but rather to be a variation on it. Your post intrigued me, and the data (thanks, Steven) was the icing on the cake. I would be curious to hear your thoughts.

Your methodology seems to handle all of those forcings on a more or less equal basis, while mine was merely saying that I don’t think we have enough long-term accurate data, or understanding of the influence of any of those forcings on this planet, to place them on an equal footing. I just wanted to see if a combination of the forcings listed from GISS in your spreadsheet, accurate or not, could alone closely explain the small rise we have seen in temperatures over the last decades, without the linear addition. When I saw the results I just thought you and others might be interested, for it draws right through the middle of the GISSTemp graph from bottom to top.

For myself, it was not what I expected, but the forcing weights are bound to be off because of the missing or unrealistic UHI corrections we all know are really there.