Temperature Prediction: the next few months

Let’s have a little fun, and predict the global average temperature (land+ocean) for the next few months. We’ll base the prediction on GISS data, so this will be a prediction for the upcoming temperature according to GISS.

For those interested in the gory details (those who just want the numbers can skip this paragraph), I’ll model GISS global land+ocean temperature as a function of MEI (the Multivariate ENSO Index), volcanic forcing according to Ammann et al., solar forcing (represented by monthly sunspot numbers), a linear trend in time, and a “residual annual cycle” which I’ll model as a 2nd-order Fourier series. I’ll allow for a lag in the influence of MEI, volcanic forcing, and solar forcing. I’ll fit the model using observed data from 1975 to the present. I’ll further model the residuals as an AR process.
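To make the structure concrete, here is a minimal sketch of that kind of regression in Python. This is not Tamino’s actual code; the lag values and the synthetic stand-in data are placeholders, not fitted results.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 440                                   # months, roughly 1975 to present
mei, volc, sun = rng.standard_normal((3, n))   # stand-ins for the real series
temp = 0.0172 * np.arange(n) / 12 + 0.1 * mei + rng.normal(0, 0.1, n)  # synthetic

def lagged(x, lag):
    # shift a series back by `lag` months, padding the start
    return np.concatenate([np.full(lag, x[0]), x[:len(x) - lag]])

t = np.arange(n) / 12.0                   # time in years
month = np.arange(n) % 12
X = sm.add_constant(np.column_stack([
    lagged(mei, 4),                       # MEI (lag value is a placeholder)
    lagged(volc, 6),                      # volcanic forcing (placeholder lag)
    lagged(sun, 3),                       # sunspot number, 3-month lag
    t,                                    # linear trend, deg C per year
    np.sin(2 * np.pi * month / 12), np.cos(2 * np.pi * month / 12),   # annual cycle
    np.sin(4 * np.pi * month / 12), np.cos(4 * np.pi * month / 12),   # 2nd-order terms
]))
fit = sm.OLS(temp, X).fit()
print(fit.rsquared)                       # fraction of variance explained
print(fit.params[4])                      # trend coefficient (deg C / yr)
resid_ar = sm.tsa.ARIMA(fit.resid, order=(1, 0, 0)).fit()  # AR residuals (order illustrative)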

The model gives an excellent approximation to the observed temperature data:

The model as a whole explains 76% of the variance in global temperature since 1975. It’s worth noting that the linear time trend in this model (which is an approximation of the influence of man-made global warming) is 0.0172 deg.C/yr.
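For reference, “variance explained” is the usual R² of the fit:

R^2 = 1 - \frac{\sum_t (T_t - \hat{T}_t)^2}{\sum_t (T_t - \bar{T})^2} \approx 0.76,

so the residual variance is about 24% of the total.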

Using the model to forecast the next 3 months gives this:

For those who want numbers, the predictions are (all error ranges are 95% confidence intervals):

Also, I would like your model better if you replaced the linear time component with a component that was proportional to the logarithm of CO2, even if it’s just the log of the average CO2 over the past 12 months.
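That swap would be a one-column change to the design matrix. A hypothetical helper (assuming a monthly CO2 series such as Mauna Loa’s, aligned with the temperature data) might look like:

import numpy as np

def log_trailing_mean_co2(co2, window=12):
    # log of the average CO2 over the past `window` months
    out = np.empty(len(co2))
    for i in range(len(co2)):
        out[i] = np.log(co2[max(0, i - window + 1):i + 1].mean())
    return out

# then use log_trailing_mean_co2(co2) in place of the linear-trend column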

Questions:
– What factor causes the dip in September?
– Since the error bars only grow by about 0.01 degree per month over the period shown, why not carry out predictions farther than 3 months and increase your odds of being *right* (by reducing “noise”)?

[Response: The model has a 3-month lag for sunspot number, so I can’t use it to predict more than 3 months ahead.]

According to Roy Spencer’s site, the August anomaly is +0.33C. That’s only a drop of 0.04C from July. Joe’s really not having a good year for global temperature forecasts. In Bastardi’s “duel” with the UK Met Office (as JB frames it: UK Met at +0.24C for 2011 against the 30-year mean, JB at +0C), the UK Met prediction is looking pretty good so far.

David B. Benson, I think mtobis’ point is that there is a great deal more physics built into the big GCMs, whereas this model is going off of a little statistics with the Fourier analysis and best-fit lags. But to split the difference… essentially what Tamino is going off of is an empirical relationship with a fair amount of data behind it. What it reminds me of in part is some work that Gavin Schmidt was doing a while back.

Gavin was analyzing gases given off by organic material in the ocean, partly as a function of churning, if I remember correctly. In place of the ocean he had a tank where everything was well-measured. Similarly there are representative species of plants that stand in for a wide variety of plants, where we know through observation and measurement how the representative plants respond to light, moisture and temperature.

These are the new Earth System Models, which incorporate elements of the carbon cycle and biology in addition to the more traditional physics of radiative transfer and fluid dynamics. But these new elements are nevertheless empirical relationships, observed and validated with respect to local phenomena, then applied over a three-dimensional latitude-longitude grid with perhaps 40 layers of atmosphere and 40 layers of ocean, column cross-sections of 2° x 2°, and time increments perhaps a quarter of an hour long. Give or take.

What Tamino is doing is essentially a one-dimensional model with almost no physics. A toy model, but not really that different from a zero-dimensional radiative transfer model or a two-box model of the carbon cycle or heat transfer, except insofar as the “physics” is roughly comparable to the empirical relationships observed at a local level involving the carbon cycle or biology. He did say, though, that he wanted us to have a little fun.

The big GCMs are based on physical modelling, which this is not. OTOH, the big GCMs also parametrize a lot of things (owing to limited spatial resolution they cannot, e.g., resolve individual clouds, let alone individual droplets), and parametrization is essentially what Tamino is doing: his model is all parametrization.

So yes, the difference is one of degree, but it’s a huge amount of degree.

No, this is not a physical simulation of the atmosphere – what we’d usually call a “climate model”. These physical simulations come in all shapes and sizes:

– energy balance models;
– 1-dimensional modeling of a vertical slice of the atmosphere (including radiative/convective transfer of energy);
– 3-dimensional general circulation models, which model parcels of the atmosphere (finite element analysis) and attempt to solve numerically the governing equations of fluid motion (the Navier-Stokes equations);
– Atmosphere-Ocean GCMs, which model ocean mass/energy transport as well;
– fully coupled AO-GCMs, which include simulations of the carbon cycle on land and in the oceans (and other chemical/biological cycles).

This most advanced category (“Earth System Models”) is increasingly being used for the CMIP5 archive of simulations for IPCC AR5. All GCMs are used to calculate ‘ensembles’ – many, many runs to investigate the response of the model to slight changes in starting conditions and forcings.

There is nothing wrong with a purely stochastic approach like the one Tamino is using, but it rests on the assumption that past trends and cycles (forcing-response correlations) continue unchanged into the future. For shorter time periods I would most certainly put my money on Tamino’s predictions.

Or a “regression model.” It’s not strictly statistical (or based on mere correlation) either, since we know from physics that volcanic forcing, solar forcing, etc. directly affect the climate. Perhaps it’s not as informative as a GCM, in the sense that it doesn’t reproduce the entire Earth’s climate from scratch. But it definitely shouldn’t be lumped in with correlation studies of dubious causality (“being born in Massachusetts makes you smarter!”). This model provides clarity by breaking physical influences into simple, tractable components: informative, in a different sense.

If Horatio’s calculations are correct, the “stupidity lag” — the time between the posting of a legitimate (though not very meaningful) statistical result and the (really) far-reaching conclusions drawn from it — is only about 3 nanoseconds.

Lately I have noticed that the International Business Times seems to be the first out of the blocks to put bizarre negative spin on science stories. They seem to be very adept at gaming Google searches.

The forecast is simply 10 (hundredths of a degree) added to what happened in ’08, which is the average difference between the two years up until this point. The linear relationship between the two years up to this point can explain 55% of the variance.

The average for 2011 is therefore 54 (i.e., 0.54 deg C above average), which would make it the 9th warmest year on record.
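In code, that analogue-year forecast is just a couple of lines (illustrative only; the array names are made up):

import numpy as np

def analogue_forecast(y2008, y2011_so_far):
    # mean offset between the years over the months observed so far
    k = len(y2011_so_far)
    offset = np.mean(y2011_so_far - y2008[:k])
    # forecast the rest of 2011 as 2008 plus that offset
    return y2008[k:] + offset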

Tamino, you could easily extend the prediction to nine months: MEI could be predicted by persistence (i.e., propagating the last value, or the average of the last few months, into the future), and the sunspot number by extending the linear trend of the last few months. It would still be better than guesswork.
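A rough sketch of those two extensions, with all the window lengths illustrative:

import numpy as np

def extend_persistence(x, horizon, avg_months=3):
    # propagate the mean of the last few values into the future
    return np.concatenate([x, np.full(horizon, x[-avg_months:].mean())])

def extend_linear(x, horizon, fit_months=12):
    # extrapolate the linear trend of the last `fit_months` values
    t = np.arange(fit_months)
    slope, intercept = np.polyfit(t, x[-fit_months:], 1)
    future = intercept + slope * np.arange(fit_months, fit_months + horizon)
    return np.concatenate([x, future])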

I’d guess that a prediction using minimal information, i.e. the current temperature +/- the standard deviation (computed, for instance, over one year), without any climate model, would have pretty much the same predictive power?

In other words, does a climate model significantly reduce the uncertainty, or not?

[Response: The influence of exogenous factors (el Nino, volcanism, solar activity) turns out to be both strong and statistically significant. So, a model based merely on “persistence” won’t perform as well.]

For the fun of it I tried to make my own version of Tamino’s forecast model. Mine is based on the same data, except I replaced his linear time trend with NOAA’s seasonally corrected monthly CO2 data (ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_mlo.txt). I also used a weighted average of the last 7 months of MEI, of volcanic activity (all zero of late, because I don’t have up-to-date data), and of sunspots.

In order to forecast into the future I extrapolated the recent CO2, MEI, and sunspot data.
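The trailing weighted average described here could be built like this (the weights below are made up, since the comment doesn’t give them):

import numpy as np

def weighted_lag_average(x, weights):
    # trailing weighted average; weights ordered oldest to newest
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    k = len(w)
    xp = np.concatenate([np.full(k - 1, x[0]), x])  # pad the start
    return np.convolve(xp, w[::-1], mode="valid")

# e.g. a 7-month average weighted toward recent data:
# mei_smooth = weighted_lag_average(mei, np.arange(1, 8))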

Robert Murphy is correct about NOAA’s latest update, but it does seem like La Nina may be making a return. The latest Nino 3 and Nino 3.4 SSTs are 0.5C or more below normal, and the models seem to be trending colder. Even so, I doubt we’ll see much ENSO impact on global temperatures by October. As isotopious pointed out, we may be seeing an ENSO evolution similar to that of 2008-2009, and global temperatures did not drop in October or November of 2008. Even if a weak or moderate La Nina returns, I doubt that we’ll see the -0.25C global temp anomaly (UAH-based) that Joe Bastardi predicts. Over the next three months, I cannot think of an adequate justification for going against Tamino’s predictions.

As an aside and on the topic of El Nino, I wish that James Hansen had not indicated a high probability that we would see a strong El Nino in 2012: http://thinkprogress.org/romm/2011/03/29/207781/nasa-james-hansen-sure-bet-decade-warmest-in-history/
I did not think it seemed likely at the time, and although it has nothing to do with climate change, it just gives the fake skeptics ammo to talk about how bad Hansen’s predictions have been, even though his most meaningful predictions have not been bad.

A prediction of a -0.25 monthly anomaly (UAH) seems implausible at this point. Unless there is a return to strong La Nina conditions over the next couple of years, I think there is a good chance that sub-zero anomalies will be entirely absent from now on, at least until the next UAH baseline change.

Regarding Hansen’s predictions, I think a little too much time is wasted worrying about how “skeptics” are going to react. If Hansen were to change his behaviour in order to dodge “skeptics” he would be effectively allowing them to dictate his actions.

I’ve just been pondering the observed changes to the incoming and outgoing infrared radiation spectra which, as described at Skeptical Science here, provide empirical proof of the enhanced greenhouse effect and of the amount of forcing it imposes on the climate system. There are about 26 years of satellite measurements of the outgoing spectrum. I’m not sure how long ground measurements of the downward spectrum have been done for. The analyses undertaken so far seem to be basic calculations of the difference between now and when measurements commenced. I wonder if this data is amenable to some form of trend analysis that could be presented as plots, for example plots of the trend in forcing over time. Perhaps the data is too noisy; otherwise it seems an obvious thing for someone to be doing.

“I’m not sure how long ground measurements of the downward spectrum have been done for.”

A matter of definition, I suppose–I’d say, based upon your use of the word “spectrum,” that we should probably go back at least to Samuel Langley’s bolometric observations. For useful data, though? “Several decades.” See the BSRN website for station record information on that.

The latest ENSO monthly update has a “La Nina watch” in effect and it appears that the current trend, as noted, is heading in that direction. Also, as mentioned in their weekly updates, atmospheric circulation anomalies are still in a La Nina state, having never really gone away from earlier this year, even when the MEI went to neutral. I think it is likely that we will see a repeat of 2008-early 2009 for ENSO, with global temperature probably tracking similarly but warmer.

Also, on Hansen’s prediction of a strong El Nino starting this summer: he apparently based it on subsurface anomalies, i.e. the development of a large pool of warm water, which occurs as La Nina weakens. 2008 had a similar progression, and so did 2009, but the outcomes were very different, and unpredictable; NOAA made no mention of an El Nino in May 2009:

I assume those are average temperatures, in degrees Celsius, for a given latitude at a given time.

I decided to convert the 80 deg N data to temperature anomalies and calculate the rate of temperature rise. I got a slope (1979 to 2010) of +0.14 (deg C?) per YEAR, almost all of which is occurring in the winter (monthly slopes):

That’s a really, really alarming rate of warming. So alarming that I want to know just how reliable the data source is before I draw any strong conclusions. I know ERA is generally top class, but the source website is in Polish (I believe) … so I can’t evaluate who they are.
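For anyone who wants to reproduce that calculation, here is a minimal sketch, assuming the monthly means sit in a 2-D array `data[year, month]` covering 1979-2010:

import numpy as np

def monthly_anomalies(data):
    # subtract each calendar month's long-term mean
    return data - data.mean(axis=0)

def warming_trends(data):
    anoms = monthly_anomalies(data)
    years = np.arange(data.shape[0])
    # overall slope (deg C per year) from the annual-mean anomalies
    overall = np.polyfit(years, anoms.mean(axis=1), 1)[0]
    # one slope per calendar month, to see where the warming concentrates
    per_month = [np.polyfit(years, anoms[:, m], 1)[0] for m in range(12)]
    return overall, per_month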

Well, that is a timeseries from the original ERA-Interim data, downloaded from the ERA-Interim site: http://data-portal.ecmwf.int/data/d/interim_daily/
I checked all times, only the first step (0), and 2m temperature. Then I downloaded the GRIB file and wrote GrADS scripts to get values for north of 66N and north of 80N. For some reason the GrADS aave function
aave(no2tsfc,lon=-180,lon=180,lat=66,lat=90)
gives some strange results and must be set to
aave(no2tsfc,lon=0,lon=360,lat=66,lat=90)
In the first case we get the average for lon=0, lon=180, lat=66, lat=90 only.
There are also maps on my webpage: http://gfspl.rootnode.net/klimat/arctic/

Because it was such an alarming rate, I checked my script again and found that I had forgotten to change the aave function for the second and third GRIB files (unfortunately it is impossible to download the whole data series as one file).

The TS file is corrected now. The slope is much lower, equal to 0.064 deg C per year (0.25 for the last decade). My mistake.
The maps are OK and were not affected. Example:

Anyway, I think it would be a great idea if Tamino could check these results (it is possible to download a netCDF file and import it into R, I think).
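For anyone who wants to replicate the check without GrADS, a rough sketch using Python/xarray (the file name here is hypothetical; t2m is ERA-Interim’s 2 m temperature variable):

import numpy as np
import xarray as xr

ds = xr.open_dataset("era_interim_t2m.nc")          # netCDF from ECMWF
cap = ds["t2m"].sel(latitude=slice(90, 66))         # ERA latitudes run north to south
weights = np.cos(np.deg2rad(cap["latitude"]))       # grid cells shrink toward the pole
mean_66n = cap.weighted(weights).mean(dim=("latitude", "longitude"))
print(mean_66n.values[:12])                         # first year of monthly means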

Interesting to ponder those numbers for a bit. I (think I) see two factors:

1) the well-known fact that warming is most marked in winter (which is reflected in the very low numbers for July and August, and the fact that the four highest trends occur from Oct. through February); and

2) the greatest change in albedo, due to the lengthening melt season, occurs in the transitional months of September and May (which is reflected in ‘bumps’ in the trend then–though May is more properly called a ‘bump’ than September, which could be termed a ‘ramp’ instead.)

I’m sure there is more sophisticated analysis that could be done from this point of view. (OK–“actual analysis.”)

I am going to stick my neck out and guess that within 5 to 10 years there’ll be some consistent record minima set for the May-June period, presaging significant new overall summer minima following hot on their heels…

I just read that post of Goddard’s that you link, where he claims that Irene was merely a tropical storm when it went ashore in the Carolinas, and that there must be some kind of conspiracy to inflate the storm, because he found winds of only 30 knots at a surface station there. And the comments, blaming the conspiracy on Obama. Gads.

While I was reading it, I was also watching coverage of Irene in Long Beach, a day later and much weaker than when Goddard claimed it had only 30-knot winds, and watched a reporter hit by a gust of wind that sent him sliding down the boardwalk.

The idiocy, the lack of even basic knowledge, and the cocksureness of their pronouncements are simply astounding. They really are.

Ernst K, those numbers are quite likely correct, as the humidity over there (80N is on top of the Arctic Ocean) is likely very high during late autumn. I think it’s the water vapor feedback you’re seeing there.

I wonder why no-one is betting through betting shops on matters like this – I recall a torrent of global interest in Ladbrokes’ ‘big bird race’ in 2004 – are there no betting shops prepared to take bets on climate change? Is it too dull to bet on? Must we take our guesses/analysis to obscure websites such as Tamino’s…

The sad thing is, I come here for some really meaty analysis, and sometimes I have to wait weeks for it. But when I go to wtfwt and its ilk, they have new stuff every day. It’s so unfair that it’s easy to produce crap in large quantities, but hard to produce good stuff!

The rate of inventing and spreading rubbish is always larger than the rate of gathering data, analyzing it, publishing the science, and debunking the nonsense. The difference between the two rates is proportional to the mean speed of communication.