Updates to model-data comparisons

It’s worth going back every so often to see how projections made back in the day are shaping up. As we get to the end of another year, we can update all of the graphs of annual means with another single datapoint. Statistically this isn’t hugely important, but people seem interested, so why not?

For example, here is an update of the graph showing the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs. The 2009 number is the Jan-Nov average.

As you can see, now that we have come out of the recent La Niña-induced slump, temperatures are back in the middle of the model estimates. If the current El Niño event continues into the spring, we can expect 2010 to be warmer still. But note, as always, that short-term (15 years or less) trends are not usefully predictable as a function of the forcings. It’s worth pointing out as well that the AR4 model simulations are an ‘ensemble of opportunity’ and vary substantially among themselves in the forcings imposed, the magnitude of the internal variability and, of course, the sensitivity. Thus while they do span a large range of possible situations, the average of these simulations is not ‘truth’.

There is a claim doing the rounds that ‘no model’ can explain the recent variations in global mean temperature (George Will made the claim last month for instance). Of course, taken absolutely literally this must be true. No climate model simulation can match the exact timing of the internal variability in the climate years after the simulation was run. But something more is being implied, specifically, that no model produced any realisation of the internal variability that gave short-term trends similar to what we’ve seen. And that is simply not true.

We can break it down a little more clearly. The trend in the annual mean HadCRUT3v data from 1998-2009 (assuming the year-to-date is a good estimate of the eventual value) is 0.06+/-0.14 ºC/dec (note this is positive!). If you want a negative (albeit non-significant) trend, then you could pick 2002-2009 in the GISTEMP record, which is -0.04+/-0.23 ºC/dec. The range of trends in the model simulations for these two time periods is [-0.08,0.51] and [-0.14,0.55] respectively, and in each case there are multiple model runs that have a lower trend than observed (5 simulations in both cases). Thus ‘a model’ did show a trend consistent with the current ‘pause’. However, that these models showed it is just coincidence, and one shouldn’t assume that these models are better than the others. Had the real world ‘pause’ happened at another time, different models would have had the closest match.
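For anyone who wants to reproduce this kind of number themselves, here is a minimal sketch of the trend-plus-uncertainty calculation (ordinary least squares, approximate 95% range, no autocorrelation correction, as in the estimates above). The anomaly values below are made up for illustration; substitute the real HadCRUT3v or GISTEMP annual means.

```python
import numpy as np

def trend_with_ci(years, anoms):
    """OLS trend in degC/decade with an approximate 95% interval.

    No correction for autocorrelation, matching the simple
    estimates quoted in the post.
    """
    years = np.asarray(years, dtype=float)
    anoms = np.asarray(anoms, dtype=float)
    n = len(years)
    slope, intercept = np.polyfit(years, anoms, 1)
    resid = anoms - (slope * years + intercept)
    # standard error of the slope estimate
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((years - years.mean())**2))
    return 10 * slope, 10 * 1.96 * se  # convert degC/yr -> degC/decade

# Made-up annual anomalies for 1998-2009 (illustrative only, NOT the real data)
years = np.arange(1998, 2010)
anoms = [0.14, -0.05, -0.02, 0.04, 0.08, 0.07, 0.05, 0.11, 0.06, 0.09, 0.03, 0.10]
trend, ci = trend_with_ci(years, anoms)
print(f"{trend:+.2f} +/- {ci:.2f} degC/dec")
```

With only a decade or so of annual points the uncertainty swamps the trend, which is exactly the point being made above.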

Another figure worth updating is the comparison of the ocean heat content (OHC) changes in the models compared to the latest data from NODC. Unfortunately, I don’t have the post-2003 model output handy, but the comparison between the 3-monthly data (to the end of Sep) and annual data versus the model output is still useful.

Update (May 2012): The graph has been corrected for a scaling error in the model output. Unfortunately, I don’t have a copy of the observational data exactly as it was at the time the original figure was made, and so the corrected version uses only the annual data from a slightly earlier point. The original figure is still available here.

(Note that I’m not quite sure how this comparison should be baselined. The models are simply the difference from the control, while the observations are ‘as is’ from NOAA.) I have linearly extended the ensemble mean model values for the post-2003 period (using a regression from 1993-2002) to get a rough sense of where those runs could have gone.

And finally, let’s revisit the oldest GCM projection of all, Hansen et al (1988). The Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%), and the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the current best estimate (~3ºC).

The trends are probably most useful to think about, and for the period 1984 to 2009 (the 1984 date chosen because that is when these projections started), scenario B has a trend of 0.26+/-0.05 ºC/dec (95% uncertainties, no correction for auto-correlation). For the GISTEMP and HadCRUT3 data (assuming that the 2009 estimate is ok), the trends are 0.19+/-0.05 ºC/dec (note that the GISTEMP met-station index has 0.21+/-0.06 ºC/dec). Corrections for auto-correlation would make the uncertainties larger, but as it stands, the difference between the trends is just about significant.

Thus, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world, but assuming (a little recklessly) that the 26 yr trend scales linearly with the sensitivity and the forcing, we could use this mismatch to estimate a sensitivity for the real world. That would give us 4.2/(0.26*0.9) * 0.19 ≈ 3.4 ºC. Of course, the error bars are quite large (I estimate about +/-1ºC due to uncertainty in the true underlying trends and the true forcings), but it’s interesting to note that the best-estimate sensitivity deduced from this projection is very close to what we think in any case. For reference, the trends in the AR4 models for the same period have a range 0.21+/-0.16 ºC/dec (95%). Note too that the Hansen et al projection had very clear skill compared to a null hypothesis of no further warming.
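The back-of-envelope rescaling in that sentence, spelled out with the same numbers as the text (the linear-scaling assumption is the reckless part):

```python
s_model = 4.2        # old GISS model sensitivity, degC per CO2 doubling
trend_b = 0.26       # Scenario B trend, degC/dec
forcing_ratio = 0.9  # Scenario B forcings ran ~10% high
trend_obs = 0.19     # observed 1984-2009 trend, degC/dec

# If the 26-yr trend scales linearly with sensitivity and forcing:
s_real = s_model / (trend_b * forcing_ratio) * trend_obs
print(round(s_real, 1))  # ~3.4 degC
```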

The sharp-eyed among you might notice a couple of differences between the variance in the AR4 models in the first graph, and the Hansen et al model in the last. This is a real feature. The model used in the mid-1980s had a very simple representation of the ocean – it simply allowed the temperatures in the mixed layer to change based on the changing fluxes at the surface. It did not contain any dynamic ocean variability – no El Niño events, no Atlantic multidecadal variability etc. – and thus the variance from year to year was less than one would expect. Models today have dynamic ocean components and more ocean variability of various sorts, and I think that is clearly closer to reality than the 1980s vintage models, but the large variation in simulated variability still implies that there is some way to go.

So to conclude, despite the fact these are relatively crude metrics against which to judge the models, and there is a substantial degree of unforced variability, the matches to observations are still pretty good, and we are getting to the point where a better winnowing of models dependent on their skill may soon be possible. But more on that in the New Year.

906 Responses to “Updates to model-data comparisons”

#129 – Like the man said, 8 years of data has a pretty low signal-to-noise ratio. The trend is clearly upwards, and the 30-year trend is in line with what the models predicted.

In any case, your eyeballing is not correct. I pulled the figure out from page 70 and blew it up. The model ensemble between 2000 and 2010 (scale in 10 yearly intervals) shows a warming of between 0.1 and 0.3 degrees in that 10 year period.

Why not go back further? SAR predicted a mid-range increase of 2 degrees this century, which is, roughly, 0.2 ºC/dec.

Remarkably skillful I’d say, for old fellas.

As a policymaker, I’d say that climate models clearly give a range of outputs, and it is currently not possible to reduce the range of climate sensitivity from the 1.5 to 4.5 95% confidence interval range. However, as a policymaker, that is sufficient certainty to begin to take actions to reduce emissions, especially given that there are numerous actions I can take at low cost, and many of these have significant co-benefits in terms of energy security and improved air quality.

The science isn’t ‘finished’ but anyone who looks at this and says “no problem there” has their head in the sand.

Re# 51
Most of the historical differences between UAH and RSS have been resolved and the data recomputed where there were errors in the previous computations. There are slight differences in how they do their computations, but I think no more than HadCRU and GISS.

I happened to download the monthly corrections for UAH from here: http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt
the other day. I did a linear trend fit through the monthly temperature anomalies from 1984 and ended up with a trend of 0.17 degrees per decade. This seems quite close to the 0.19 that GISS and HADCRU produced, when you consider that UAH is for the mid-troposphere and not the surface temperatures.

[Response: MSU-LT is the lower-troposphere, not the ‘mid’, and should be running slightly warmer than the surface temperatures. RSS is more consistent that way, but there are still structural uncertainties in the MSU-LT trend analyses that are larger than the anomalies. – gavin]

“I am, obviously, a skeptic. I agree that additional CO2 will tend to warm the planet, but I think that climate modelers are incredibly arrogant to think that they can actually model our planet.”

I find it rather arrogant (and I’m not a modeler) to come on here and tell established scientists they don’t know what they’re doing.

You want to be skeptical? Go away and do some paleoclimatology that shows that climate sensitivity to CO2 is lower than 2 degrees. /That/ would be a real scientific discovery, and one that would have a massive impact on scientific knowledge.

“In ten years, I’ll feel comfortable drawing conclusions”

In 10 years we’ll have locked into an emissions trajectory that takes us to 550ppm CO2 or higher.

How about we take a range of sensible actions now to avoid that, and if the models turn out to be over-predicting warming 10 years from now, we can turn our attention to something else. Risk management?

Questions
The second graph is the most interesting. Some questions about the noise in the curves for overall heat content:

1. One of the curves shows a lot of high frequency and large amplitude noise. Is this the one labeled ‘Global OHC’ ? Is it possible to explain what causes this?

[Response: It is 3 monthly data, so there will be more real variation, but also possibly more sampling errors than in the annual mean observations or model output. – gavin]

2. A very different question refers to the red and blue curves obtained from the models. Is it possible to use the models to interpret the causes of the dips which occur? Do these refer to a loss of energy to other energy stores (e.g. deep oceans, atmosphere, land?) or to outer space?

[Response: Good question. Some of the dips are associated with volcanic eruptions (post 1963, post 1991), but I haven’t seen an analysis that answers your question in general. – gavin]

Re Ken W (137). I have read the comments, I would say that if we are looking at how well the PROJECTIONS are shaping up it is disingenuous to include hindcasts within the graphs. An honest method would show the comparison at the point that the prediction was made.

Also I made a mistake in my original post – the TAR projections were for warming of 0.3 to 0.4, with the actual measured warming being between 0 and 0.05 (not 0.5 as I initially said).

[Response: Sorry, but read one of the preceding comments. This is an error in the news piece – they meant to say emissions, not levels – which have not risen 35% more than expected. – gavin]

Sorry, you’re obviously correct that the opening sentence in the BBC article is a mistake. However, even if you assume the sentence from the BBC article was correct, it still would only be saying that CO2 levels are rising 35% faster than expected, which is not at all the same thing as CO2 levels are 35% higher than expected (which is what the original commenter was apparently suggesting).

FWIW, 1980-2009 trends in the AR4 ensemble are 0.20+/-0.13 degC/dec (2 sig) – total envelope [0.08,0.34] degC/dec. This is very clearly inconsistent with no temperature change from 1980. Observations are 0.16+/-0.04 degC/dec. The skill of the ensemble mean relative to the ‘no change’ forecast in predicting that trend is very clearly positive.

Geoff Wexler (154) — For your second question, I’ll suggest variations in tropical cyclone activity which is modulated in part by El Nino. Similarly, I suppose, other means of transporting heat landward and poleward and thence to space.

“I have read the comments, I would say that if we are looking at how well the PROJECTIONS are shaping up it is disingenuous to include hindcasts within the graphs.”

It would be “disingenuous” if the models were calibrated to fit the historical mean temperature data. On the other hand, if the models are not calibrated to fit the observed mean temperature data, then a historical hindcast is perfectly valid.

This is done all the time in the Earth Sciences (for example, my own field of Hydrology). The key rule is: clearly state what the calibration period was (if there is any – not all models need to be calibrated) and don’t use this period as part of your verification period.

From what I understand, GCMs require little to no calibration because the models are physically based and most (if not all) of the parameters are defined a priori. As mentioned earlier, this is an important feature for climate change studies because calibration limits a model’s ability to predict behaviour under conditions different from the calibration period.

I dare say that I would expect the historical fits would be much better if GCMs were calibrated to historical data. It’s really not that hard to make a calibrated model fit historical data. What’s difficult is building a model that can reproduce historical data with virtually no calibration – and that’s what GCMs do.

Geoff #117
Believe it or not, I’m genuinely interested in actual answers. If you look at my postings, I’ve responded in every case! People certainly have great confidence in their answers, but unfortunately they all disagree!
Q: Can we measure the global radiation signature over time? Here are some of the answers I’ve gotten:
1) No, we need to distinguish by hemispheres, too complicated.
2) No, we need a fleet of satellites, too complicated.
3) Yes, we just need one satellite, but it’s in storage due to Bush tax cuts.
4) Yes, it’s already been done, here’s the link: (pointer to 2001 study comparing 1997 to 1970 – but study was site specific and clear skies only, not really a global assessment – then a more recent study comparing 2003 to 1997 shows no changes)
5) Yes, it’s already been done, here’s the link: (pointer to downward radiation – interesting, but it seems like they used a model to get final results)

Q: Can we predict global radiation signature over time? Have we?
1) no one has answered this

Should I give all of these answers equal weight?

Then someone linked me to this: http://airs.jpl.nasa.gov/AIRS_CO2_Data/ which shows CO2 globally using a satellite IR sensor. Clearly they are getting some global IR data? How should I think about this existing data in relation to the various answers above?

I guess I came here for the “consensus science” view on this question, and would still like to get one.

Then the question that is actually more interesting to me! CO2 energy issues in the atmosphere. We know CO2 is a small proportion of greenhouse gases, the largest being water vapor, and others including methane, etc. So it’s certainly likely that a CO2 molecule will absorb a photon at one wavelength, then transfer a portion of that energy to a water molecule via a collision path through non-IR-emitting atmosphere components, and that water molecule may then emit an IR photon at a different wavelength? A very interesting systems question to my mind. And of course the answer will vary with atmospheric pressure and makeup. So feed x photons into the atmosphere that are absorbed by CO2 – what % are re-emitted by CO2, what % are transferred to other molecules, what % of those eventually cause IR emissions by H2O, CH4, vs. transfer to ground via convection, etc.?

Thanks Gavin @109 – I guess this leads me to this question: ultimately, will earth system models replace GCMs in the long run? Or are they too hard to parameterize? Seems like the ocean components are more tractable than the terrestrial components.

[Response: That’s already happening – but it’s not a replacement. It all depends on the application. For our AR5 simulations, we will have fully interactive chemistry and aerosols as well for instance. The ocean and terrestrial carbon components will also be used in some experiments. – gavin]

Ray #95: I’ve mentioned some specific items in a response to another poster. Briefly, there are other greenhouse gases, including H2O at a higher percentage. Photons are absorbed by CO2; on average, what % are re-emitted vs. passed via kinetic collisions to other molecules? Where does that kinetic energy ultimately go? Assume some portion goes to H2O, then is emitted as IR? That’s certainly one pathway. Anyway, this goes to the impact of CO2 on the radiation signature. Again, different than this idea that 100% of the energy that CO2 absorbs in a specific wavelength is emitted in that same wavelength, right?

[Response: Not at all. The emission is related to the local temperature and occurs at all the relevant wavelengths. Absorption is dependent on the incoming flux of IR (from wherever). If there is no flux at the key wavelength, there will be no absorption, but there will still be emission. – gavin]

Just makes a really, really weak claim:
The conclusion in total:
“We now know that stratospheric cooling and tropospheric warming are intimately connected and that carbon dioxide plays a part in both processes. At present, however, our understanding of stratospheric cooling is not complete and further research has to be done. We do, however, already know that observed and predicted cooling in the stratosphere makes the formation of an Arctic ozone hole more likely.”

Scanning through the comments, I noticed a Tom P (don’t know who he is, but Watts identified him as working for NASA) who identified that and some other problems.

OT, but partial or complete “outing” of people who prefer to post anonymously is one of the intimidation techniques used by Watts to limit comments from non-denialists (he doesn’t do it to those in the amen chorus).

Re Jason @139: “In data I trust. If mother earth starts following the models, I will be convinced.”

And in the mean time the area under the curve will continue to grow, said area representing the total amount of carbon added to the atmosphere and to the active carbon cycle.

Do you have a plan for how we will draw down the increase in that reservoir once you are convinced?
Or a plan for how we will cope with the consequences for the next millennium, or will you just cross that bridge once you are convinced?

Re Pat @141, “Since we have been warming from the 1600’s (or cooling since the Holocene Optimum), when did it switch from natural to anthropogenic?”

Since both CO2 and CH4 not only stopped falling but began rising soon after the Holocene Climate Optimum, anthropogenic forcing appears to have begun when humans invented agriculture, particularly the growing of rice in artificial wetlands, allowing human population to grow exponentially.

Again, different than this idea that 100% of the energy that CO2 absorbs in a specific wavelength is emitted in that same wavelength, right?

[Response: Not at all. The emission is related to the local temperature and occurs at all the relevant wavelengths. Absorption is dependent on the incoming flux of IR (from wherever). If there is no flux at the key wavelength, there will be no absorption, but there will still be emission. – gavin]

I don’t see why you say “not at all”. Your statement agrees entirely with mine, doesn’t it? I mean, where does “local temperature” come from anyway? Essentially from radiation absorption and emission across the entire system. CO2 absorbs, converts to kinetic, at some point that kinetic is converted to radiation by another molecule and emitted (yes, not the only pathway). Thought experiment. Assume there’s an IR wavelength that only CO2 absorbs. Create an environment that replicates the ground level tropical atmosphere. Radiate the environment with just that wavelength. You will not see CO2 emitting 100% of that energy in the same wavelength. You will instead see a relatively broad band of emissions from H2O, CH4, etc, etc. As you say, local temperature goes up and the system radiates at all “relevant wavelengths”. I think we agree on this one!

[Response: Sorry, I thought you were claiming that 100% of the energy absorbed in one wavelength is emitted in that same wavelength. My bad. – gavin]

Try @163, 166: First, I think you need to learn to ask your question more clearly. Clearly there are satellites that can measure the outgoing IR and which show the big bite taken out by CO2. The thing is that this is a snapshot. The signature of greenhouse warming would take years, if not decades, of detailed satellite measurements to tease out – just like the signature in terrestrial data.
DISCOVR/Triana would have been an excellent addition to our arsenal of measurement tools, but L1 is a very long way from Earth. Likewise, IR instruments on GEO birds (e.g. GOES, etc.) are good, but look only at one hemisphere and have little visibility at the poles. Low-Earth-Orbit birds in a polar orbit traverse the entire globe, but each orbit is brief (minutes to a few hours), and the next pass over the same spot won’t be for weeks at least.

So, we can make the measurements. We are making the measurements. They look very much like you’d expect for a world warming due to a greenhouse mechanism. It is just that YOU are looking for a single quick and dirty measurement that will remove all doubt, and that doesn’t exist for climate.

As to the tropospheric warming/stratospheric cooling, you really should be more impressed by that. It shows that we have warming and cooling about where we expect AND of about the expected magnitude. THAT is really impressive, because THAT is a signature of a greenhouse mechanism.

The evidence for anthropogenic causation is really overwhelming. However, it is evidence that comes from many, many separate, independent sources and phenomena. The denialists will focus on a single study (e.g. Mann et al. 98), just as the creationists will focus on a single fossil. Removed from its context, they can distort its meaning and make it look more problematic than it is. That simply is not how science works. You have to look at ALL the evidence. You don’t get to cherrypick only the studies that support you.

One quick question: are all models in the CMIP3 archive included in the IPCC ensemble above?

And one more: how are multiple runs for a given model weighted in the ensemble average vs. the weighting for each individual model? For instance, are all runs for a given model first averaged to create an ensemble average that is then used for calculating the inter-model (pseudo) ensemble average? Or is some other weighting used?

[Response: This consists of 55 simulations that were available at some point a couple of years ago. Every simulation is given an equal weight – which is not how IPCC did it or how I would do it in a publication – but it’s probably ok for a blog post. – gavin]
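To make the two weighting choices concrete, a toy sketch with made-up per-run trends (the model names and numbers are hypothetical, not the actual CMIP3 values):

```python
import numpy as np

# Hypothetical per-run trends (degC/dec) for 3 models with unequal run counts
runs = {
    "model_A": [0.18, 0.22, 0.20],
    "model_B": [0.30],
    "model_C": [0.12, 0.16],
}

# Scheme 1: every simulation weighted equally (as in the blog figures);
# models contributing many runs dominate the average
all_runs = [t for trends in runs.values() for t in trends]
per_run_mean = np.mean(all_runs)

# Scheme 2: average each model's runs first, then average across models;
# one vote per model regardless of how many runs it submitted
per_model_mean = np.mean([np.mean(trends) for trends in runs.values()])

print(per_run_mean, per_model_mean)
```

The two schemes differ whenever run counts are unequal, which is why the choice has to be stated when quoting an ensemble mean.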

Dano @144 – that’s my point in a nutshell. I think of the average working stiff with 2 hours of commuting and screaming kids and 8 hours of drudgery at work; they come home and if they go online they just want to look at YouTubes of dancing bananas et al. Perhaps their curiosity’s been piqued by something they heard at work or from a friend about AGW; they conduct a search for whatever issue, and what do they get but WalltowallWatts. You need to nip that in the bud right from the start.

I’ve been discussing energy issues online for years now, have always been on board with the AGW argument, and lately have been wanting to research things in more depth, and was pretty staggered by the amount of frustration you get attempting to find a balanced assessment of anything. Knowing how to find the information you seek should be step A for newcomers to this site. Many people don’t know how to do a site-specific search, even; I bookmarked that for RC years ago, that’s easy, but sometimes there’s a dearth of discussion on whatever topic – last night I was trying to find out more about V.K. Raina’s independent study of Himalayan glaciers, which your opponents were in a veritable froth over. And it was a real chore digging up real info on the total number of glaciers, how many have been under satellite observation, which ones are critical for feeding the major rivers, how long the studies had been conducted, and so forth.

Beyond that, something I’ve been interested in for a while as well is a better way to present the veracity of either side’s arguments. Perhaps giving each researcher, septic or otherwise, a score, in the case of a real researcher, say, assigning negative marks for having multiple studies passed on due to not meeting editorial standards, or having a slew of opposing studies contradicting its conclusions. In the case of bloggers and pundits, well, where do you begin? You did this in the case of Plimer; the RC Wiki is a great start.

Raypierre is commenting there that even assuming Triana goes up it would be reporting only visible light albedo, not infrared, and we’d need another instrument (on the night side, in the infrared) along with it to get the needed data on the total energy flux in and out, all from one fixed location pair of instruments, in a long time series, assuming everything worked and kept working.

What you’re asking for is something that the climate scientists have been trying to get — for a very long time.

Scanning through the comments, I noticed a Tom P (don’t know who he is, but Watts identified him as working for NASA) who identified that and some other problems.

OT, but partial or complete “outing” of people who prefer to post anonymously is one of the intimidation techniques used by Watts to limit comments from non-denialists (he doesn’t do it to those in the amen chorus).

I thought it was sleazy even before he started doing it to me.

I don’t know if there is a Tom P working for NASA. Nor do I know anything about the alleged “outing” other than what was said above. But if a Tom P wants to remain anonymous online, he ought to call himself Peeping Tom. This is not rocket science.

@Mike Cloghessy#121: Is this raw data or has this data been adjusted, homogenized and re-adjusted.

A quick google-search on “Mike Cloghessy” turned up this:
By Mike Cloghessy on Aug 10, 2009 | Reply
Carbon offsets are like global warming…alarmists would have you pay for something that does not exist.

Anybody want to place odds on how likely it is that Cloghessy will actually look at any of the data that he’s been pointed to?

I’m fairly new to this debate, but I have been watching this site and a number of the skeptic sites closely over the past few months.

My background is in statistics and computer science; so I know a little about the techniques used in climate science.

I am here to learn about climate science, not to proffer an opinion. You might consider me a fence sitter whose own view about AGW has switched on more than one occasion.

So, to the subject matter. IMHO, the conclusions re the charts seem to be valid, but if you look at chart 1 and chart 3, the instrumental records from around 1998 onwards (as was mentioned) are trending negative. There doesn’t seem to be any other 10-year period in the instrumental record where the trend has been negative.

[Response: There is a lot of variance in short-term trends in the data, but you have to be careful because the period over which the expected warming is around the same as now is relatively short, and there are volcanoes every now and again which mess up the analogy. If you look at model simulations for the same period, you generally find negative 10-year trends happening at about the 10% level. Thus it isn’t likely to be happening at any one period, but it is likely to happen if you look over a 100 year period. What we have seen in the last ten years doesn’t stand out in that context. – gavin]

Gavin, you mentioned that a trend of 15 years or less is insignificant statistically. I am curious to understand why you say that.

[Response: From model simulations which show that initial condition ensembles need that long to have their trends clearly come out of the weather noise. ’15 years’ is not a hard and fast number though and there is always some uncertainty as to whether the character of the intrinsic variability in the models is close enough to that of the real world. – gavin]

In relation to El Nino; we are in an El Nino pattern at present which is past its peak as I understand. Shouldn’t we therefore have seen a spike in this years data like the one that occurred in 1998?

[Response: The very large El Nino in 1997-1998 gave rise to a record year in 1998. The (smaller) El Nino of 2009-2010 will likely give a spike in 2010. In the GISTEMP and NCDC records, the El Nino of 2004-2005 allowed 2005 to break the 1998 record. – gavin]

Moreover, does that not also mean that temperature trend for the next few years is likely to continue downwards? And I know that this is speculation, but if the trend does continue downwards for say another 3 years, what does that say about the models?

[Response: I very much doubt the next three years will trend downwards. – gavin]

I understand there is another temperature record maintained by the University of Alabama in Huntsville (known as UAH?). I am curious to know how that stacks up to the same kind of analysis.

[Response: It’s a different metric. The MSU-LT series is an average of temperatures from the lower troposphere and is calculated by two groups (RSS and UAH) whose trends differ quite significantly. Which (if any) is more correct is still unclear. – gavin]

As I said already, I am here to try and understand the science, so please don’t jump on me. The instrumental record of the past 10 years is something that skeptics point to as an indicator that AGW theory is incorrect.

[Response: That is because they don’t understand that there is unforced variability in the system. The year to year sigma in surface temperature is about 0.1 to 0.2 degC. The expected trend for this decade is 0.2 degC per decade. The signal to noise ratio implies you need more than ten years to see the trend clearly. There are however other measures that have less noise – stratospheric temperatures, or Arctic sea ice – the signals are stronger there. – gavin]
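The signal-to-noise arithmetic in that response can be illustrated with a toy Monte Carlo (entirely hypothetical setup: a steady 0.2 ºC/dec trend plus independent annual noise with σ = 0.15 ºC, mid-range of the figure quoted; real weather noise is autocorrelated, so this if anything understates the problem). With these assumptions roughly one 10-year window in ten gets even the sign of the trend wrong, while 30-year windows almost never do.

```python
import numpy as np

rng = np.random.default_rng(42)
TRUE_TREND = 0.02  # degC/yr, i.e. 0.2 degC/dec
SIGMA = 0.15       # assumed year-to-year weather noise, degC

def frac_positive_trend(n_years, n_trials=20000):
    """Fraction of simulated n-year records whose fitted OLS trend is positive."""
    t = np.arange(n_years, dtype=float)
    tc = t - t.mean()
    # each row of y is one simulated record: linear trend + white noise
    y = TRUE_TREND * t + rng.normal(0.0, SIGMA, size=(n_trials, n_years))
    slopes = (y @ tc) / (tc @ tc)  # OLS slope for every trial at once
    return float(np.mean(slopes > 0))

for n in (10, 15, 30):
    print(n, frac_positive_trend(n))
```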

I do like Tamino’s quote “All models are wrong but some are useful”. Models are useful; they reduce but do not eliminate surprises. We are facing droughts, reduced snowpack, higher sea levels and a higher incidence of extreme events.

If we use the information provided, by models, preparatory adaption can reduce the untold devastation we are facing. Or do we wait until planes are trying to land on water to build levees around runways.

The biggest impediment to the usefulness of models is our collective unwillingness to prepare in advance.

Nope, each pass over a point on the ground is brief, minutes; each orbit is very close to exactly the same length of time (call it 90 minutes or so, the exact number depending on its altitude).

An engineer explained to me that the reason we can’t predict passes overhead precisely for the ISS more than a week or so in advance, is that the satellite will experience some perturbation on each orbit from mass variations in its ground track. An equatorial orbit passes over exactly the same masses on each orbit; a polar orbit passes over a different slice on each orbit.

Andrew (post #88) is absolutely correct. I am fluent in enough programming languages that I can pick the right tool for the job. Fortran is the correct tool for numerical work. Lamont falls into the hammer trap: “I have a hammer, so everything is a nail.”

Why do people have such a hard time understanding that the climate models in question aren’t forecasting interannual or interdecadal temperature changes? The models are concerned with longer time scales. NO ONE has predicted a linear warming over decadal time scales. These “the models have failed” posts are just signs of ignorance: Arrogant, willful ignorance.

There are people working on shorter-term climate models that factor in shorter-term natural variability. At least a couple of them say that natural forcings may overwhelm AGW forcings and “stall” or even reverse warming for the next decade or two. That does NOT mean that AGW isn’t occurring, even though their work has been misrepresented by denialist conspirators.

Most of the predictions are for well into the future (2050-2080), so I’m not sure how we can assess at this stage how well the predictions are holding up.
That said, some effects of climate change are already obvious – e.g. shifts in species distributions, changes in flowering periods, etc.

I have been reading this site for the last couple of months and have found it to be very helpful. I will be honest, I am disturbed by this post and I have some comments / points to make:

1) The first sentence is: “It’s worth going back every so often to see how projections made back in the day are shaping up.” But the graphs you then discuss aren’t that at all. The graphs start well before when the projections were made which simply shows that the model matches history (not surprising). I know this point has been made in the comments but the post itself remains very misleading. It should be either fixed or clarified.

2) For the first graph, the comments state that the actuals versus the projections are since 2002, and you have provided a graph of just that period. In that period, actual temperature is down versus the projection that it would be up. While I agree with your point that not much can be drawn from this, why show the graph at all then? Since you don’t think the actuals are significant at this point, why write the post at all?

3) I am not sure what the point of the second graph is. It doesn’t have any information on actual versus projection for the last six years.

4) The last graph shows the actuals versus the oldest model. If I am reading this right, the actuals are below all three scenarios. In your commentary, you make the point that the actuals are within the standard error, but the graph doesn’t show that standard error. It would be helpful to see the size of that error and how close the actuals are to violating it (similar to the first graph).

5) Would I be wrong to say that, according to this, the actual temperatures have consistently been below the projections of the GCMs, although within the standard error?

Gavin,
I’ve got a question about the pre-industrial control runs. On the PCMDI website, I found that it says that “the control should allow us to subtract any residual, unforced drift from all perturbation simulations.” So the 20C3M data is basically: drift + forced change + intrinsic variability. By subtracting the control run the data then becomes: forced change + intrinsic variability. Is it appropriate to compare such data to observational data? I wouldn’t think so because the observational data is an anomaly relative to some climatology whereas the model run minus the control run is an ‘anomaly’ relative to an unperturbed climate.

[Response: The drift in surface temperature by the end of the 20th C is very small, so the issue is moot for the SAT projections. For ocean heat content it is more important and I plotted the drift corrected values in the second figure. You still need to baseline things (as I did in figure 1, following IPCC), but I’m still not sure what the OHC data are anomalies with regard to, and so I haven’t done any more processing for that. As it stands the spread in the OHC numbers is related to absolute differences in total heat content over the 20th C – if you just wanted the change in heat content since the 1960s or something, the figure would be a little different. – gavin]
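
For what it’s worth, the drift subtraction described above can be sketched in a few lines. This is a minimal sketch with hypothetical arrays (real CMIP processing also has to match branch times and do the baselining Gavin mentions):

```python
import numpy as np

def drift_corrected(forced_run, control_run, years):
    """Remove residual model drift by subtracting a linear fit to the
    unforced pre-industrial control run from the forced (e.g. 20C3M) run."""
    slope, intercept = np.polyfit(years, control_run, 1)
    drift = slope * years + intercept
    return forced_run - drift
```

After this step the corrected series is still only an anomaly relative to the unperturbed climate, so (as the response notes) it must be re-baselined to a common reference period before being compared with observational anomalies.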

Gavin said: “At what point might you think there is enough information to accept that their projections are pointing in the right direction?”

I actually think I’ve defined some very specific tests which, if the current models are reasonably accurate, will be sufficient to convince me of this fact within a decade.

I’d like to see Real Climate do likewise. Suppose that the observed trends I mentioned are less than half of what the models predict (starting at AR3 for surface, or from the switch to all-Argo data for OHC). For how many years would this have to persist before you concluded that the models substantially exaggerate climate sensitivity?

[Response: For us to be able to constrain sensitivity from transient changes you need to make some pretty reckless assumptions (see the main post). You need accurate estimates of the forcing for a start, and you need to be able to connect the transient sensitivity to the long term sensitivity. As you can see I did that for the Hansen et al runs and got a sensitivity of just above 3 deg C (though rather poorly constrained). That is very close to the mean sensitivity of the AR4 models. So you could do this with 20 odd years of data. But since I can do this already for the Hansen simulation, and also show that the AR4 models do as good a job for the same period (0.21+/-0.16 degC/dec), why isn’t that sufficient for you? – gavin]
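
The transient-to-equilibrium step Gavin alludes to can be illustrated with a back-of-envelope energy-budget calculation. This is NOT the regression he actually performed on the Hansen runs, and every number below is made up for illustration only:

```python
# Energy-budget sketch: equilibrium sensitivity from a transient period,
# scaling observed warming by the radiative imbalance not yet realised.
# All inputs are illustrative assumptions, not published estimates.
F2X = 3.7   # W/m^2, canonical forcing from doubled CO2
dT = 0.6    # degC, assumed surface warming over the analysis period
dF = 1.2    # W/m^2, assumed forcing change over the same period
dQ = 0.5    # W/m^2, assumed ocean heat uptake (unrealised warming)

S = F2X * dT / (dF - dQ)
print(f"Implied equilibrium sensitivity: {S:.1f} degC per doubling")
```

With these made-up inputs the answer comes out near 3 ºC per doubling, which shows why the constraint is so sensitive to the assumed forcing and heat uptake: small errors in dF or dQ move S a lot, hence “rather poorly constrained”.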

#168: “And in the mean time the area under the curve will continue to grow, said area representing the total amount of carbon added to the atmosphere and to the active carbon cycle.

Do you have a plan for how we will draw down the increase in that reservoir once you are convinced?
Or a plan for how we will cope with the consequences for the next millennium, or will you just cross that bridge once you are convinced?”

I do have a plan. Let’s replace all US taxes on income with a tax on carbon. If CO2 is a big problem, then this will fix it (at least as far as the US contribution is concerned). If not, swapping the income tax for a carbon tax is likely to have a strongly positive impact on growth (especially if the carbon embodied in imports and exports is properly accounted for), and certainly not a negative impact.

If Democrats seriously believed Al Gore’s brand of alarmism, they would implement this immediately. Republicans would be happy to end the income tax. And which is worse “frying the planet” or sacrificing income redistribution?

The problem, for those who favor immediate action, is that even the most liberal Democrats on capitol hill don’t believe that climate change is such an immediate problem that they should sacrifice their legislative agenda in order to do something about it.

If I can get back to the runaway feedbacks. This free textbook has a decent introduction to planetary climate; although it is a bit long, it requires roughly undergrad-physics-level understanding: http://geosci.uchicago.edu/~rtp1/ClimateBook/ClimateVol1.pdf
It’s been several months since I read it. The question concerns the fact that water vapor feedback becomes stronger as temperatures rise. IIRC, the following results are seen:
At some level of solar radiation, which I think corresponds to an average surface temperature of 50C, the runaway effect sets in. I think it is also true that if we were able to wall off the equatorial regions, they would undergo the runaway feedback effect. But for the planet as a whole, it is not thought to become runaway until the total solar input is considerably higher – perhaps 500 million to a billion years from now. With today’s solar luminosity, no serious climate scientist thinks we are in danger of setting off the runaway effect.

For those that are interested, the rate of increase of solar luminosity is about 1% per hundred million years. This rate is slowly increasing, but a simple linear rate will be a decent approximation over a period of a few hundred million years.

45, Martin Vermeer: Jaynes (2003, p. 504) quotes Jeffreys: “Jeffreys (1939, p. 321) notes that there has never been a time in the history of gravitational theory when an orthodox significance test, which takes no note of alternatives, would not have rejected Newton’s law and left us with no law at all. …”
;-)

True enough. But if we had decided for some reason to spend a trillion dollars to “stabilize” the precession of the perihelion of Mercury, we’d have wasted our money. Newton’s laws do very well for interplanetary travel, but not for that purpose. Right now, they may fail to account properly for the relationship of mass to gravity or of mass, force and acceleration (or there might really be large amounts of cold dark matter, or enormous masses just beyond the vision of the Hubble telescope, or something).

The possibly oscillatory nature of the warming of the last century and a half may be the functional equivalent of the precession of the perihelion of Mercury: evidence that the theory has a major omission or flaw. That we don’t know its cause (or don’t agree that it is solar cycling) is not evidence that it is negligible.

As you know from debates among Neyman, Pearson, and Fisher, there is an operational distinction between having enough information to substantiate belief and having enough information to support a huge investment.

For Gavin:
Thank you for cleaning up RC. I had totally given up on this site as a source of useful information, but with the new tone I will be back.
As a farmer I watch grow lines (North America), and they have not moved north at all during the time of increased global temperature.

I would make a recommendation that you check in with your source writers at various papers and media outlets. The catastrophic predictions are falling on deaf ears, as there have been numerous droughts/floods etc. throughout the Holocene period.

The actual science, as of yet, seems a bit tenuous but is improving I would hope.

A couple of years ago, the top 1% of U.S. earners paid a total of about 35%-40% of the federal income tax. Do you think they accounted for 35%-40% of the carbon use?

The median income is about $40K; the top 1% of earners make about $1.25M. The tax rates would be 25% and 35% respectively for single filers, which means the rich guy pays about $440K and the median earner pays $10K. Now switch to carbon. The rich guy isn’t using anything like 30 times the median carbon footprint; 30 times the median is a gigantic carbon footprint. So let’s say the highest 1% of earners use four times the carbon of everyone else (which I think is actually somewhat high). That means the high earner’s tax gets cut down to, say, four times the median, and to keep revenue constant everyone else gets something like a 40% tax increase, with lower-income people bearing proportionally more. Taking into account that there are incomes low enough not to pay income tax at all only makes the burden worse on people who do pay taxes.
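
The arithmetic behind that claim can be checked directly. This is a toy revenue-neutral swap for 100 stylized households (all figures are the illustrative ones above, not real tax data):

```python
# Toy revenue-neutral swap: 99 median households, 1 top-1% household.
# Incomes, rates, and the 4x carbon-footprint ratio are illustrative.
n_med, n_top = 99, 1
income_revenue = n_med * 0.25 * 40_000 + n_top * 0.35 * 1_250_000

# Carbon "units": median household = 1, top earner = 4x the median footprint.
carbon_units = n_med * 1 + n_top * 4
carbon_tax_per_unit = income_revenue / carbon_units

# How much more the median household pays under the carbon tax than under
# its old income tax bill of $10K.
median_increase = carbon_tax_per_unit / (0.25 * 40_000) - 1
print(f"Median household's tax change: +{median_increase:.0%}")
```

The median household’s bill rises by roughly 39%, consistent with the “something like a 40% tax increase” figure in the comment.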

Well yes, that would, um, do something.

Not really too hard to figure out why that isn’t the Democrats first guess though, is it?

The biggest terrestrial ecological disruptions are likely to occur through massive, abrupt loss of forest and other habitats. In my opinion, it’s pretty easy to see that forest cover will disappear as quickly and unexpectedly as Arctic sea ice, and that this will take a lot of plant and animal species down. Forest loss is likely to occur from longer drought periods as well as increased pests and fire. This is already happening in the western US and Canada.

Predictions in the literature seem to concentrate on how increased temperatures require biota to migrate and whether or not they are capable of it. But this assumes an orderly march to the poles by all species. Totally unreal assumptions are being made here, in my opinion. I see a world of rapidly shrinking forest cover with biota for the most part standing still and meeting their doom.

ALW (155):
“I would say that if we are looking at how well the PROJECTIONS are shaping up it is disingenuous to include hindcasts within the graphs”

You still don’t get it. If you had read the RealClimate model FAQ articles I linked to, you would understand the difference between a statistical model and a physics-based model. If the plot were based on a statistical model (i.e. one that merely tried to fit itself to the existing temperature data record), then it would be a pretty useless graph. But that’s not the case. Here are the links again: