This is a guest post by Rob Wilson. It addresses some concerns I raised when I spoke in St Andrews the other week. I had discussed Climategate and the lack of trust this engendered and then went on to briefly cover other issues that made me uncomfortable. In particular, I mentioned the tendency of corrections to the temperature records to produce cooling in the nineteenth century and warming in the twentieth, and the recent lack of warming.

In light of Myles’ and other comments w.r.t. instrumental data, I thought this might be a good time to quickly try to address some of the observations Andrew made in his talk at St Andrews in April.

I had hoped to write a guest post along with colleagues from the Met Office showing temperature trends alongside AR4 projections, but I can already see the summer running away, and I would rather spend what spare time I have on a series of later guest posts focussing on dendroclimatology. So below I concentrate only on temperature trends in the HADCRU and HADSST data-sets. Thanks to Ed Hawkins and John Kennedy for providing feedback.

There were two issues that Andrew raised:

That updates of large scale temperature data-sets appear to depress 19th century and raise 21st century temperature values.

That over the last decade or so, there had been a flattening off in temperature trends.

At the time, I could not comment on either as I had not looked at the new data-sets in detail.

So – the following link will take you to a series of figures comparing CRUTEM3 and 4 (land temperatures) and HADSST2 and 3 (SST) for northern and southern extra-tropics (ET) and tropical (TROP) latitudinal bands. References at end.

I have purposely not added trend lines, or smoothing functions and have just plotted the temperature anomalies (w.r.t. 1961-1990).
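For readers unfamiliar with the convention, an anomaly is simply each value's departure from the mean over a fixed base period. A minimal sketch of the calculation, using made-up numbers rather than real station data:

```python
# Anomalies w.r.t. a 1961-1990 base period: each value minus the
# mean over that period. The temperatures here are made up purely
# to illustrate the calculation; they are not real station data.
years = list(range(1950, 2011))
temps = [14.0 + 0.01 * (y - 1950) for y in years]  # synthetic series

# Mean over the base period, 1961-1990 inclusive
base = [t for y, t in zip(years, temps) if 1961 <= y <= 1990]
base_mean = sum(base) / len(base)

# Anomaly = value minus the base-period mean
anomalies = [t - base_mean for t in temps]
```

Note that a constant error in the absolute temperatures would cancel out of the anomalies entirely, which is why anomalies are the usual currency for comparing data-sets.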

I am not going to describe trends in exhaustive detail, but really want to address Andrew’s two main concerns.

Older vs newer temperature data-sets

There has been little change in the NH ET input data-sets. The major changes I am aware of are some early instrumental corrections of temperature data-sets in the Greater Alpine Region. However, this is only a small number of records in the extensive NH data-set so does not impact the large scale mean series. For those interested, the Alpine data-set correction is detailed here:

For TROP and southern hemisphere ET land temperatures, the major changes are in the 19th century, reflecting the addition of newly digitized station records – probably mainly from Australia. Early instrumental temperatures are always going to be less certain, and there is less data.

Changes in the late 20th century appear to be minimal.

W.r.t. SST, there is again little difference between HADSST2 and 3 in the ET NH.

The period of greatest difference in the TROP SST data is around the post-1940s period, related to biases in HADSST2 from an “uncorrected change from engine room intake measurements (US ships) to uninsulated bucket measurements (UK ships) at the end of the Second World War.” These have been adjusted in HADSST3. For those interested, the relevant paper is:

Thompson et al. (2008) A large discontinuity in the mid-twentieth century in observed global-mean surface temperature. Nature 453, 646-649

For the southern hemisphere ET SSTs, we again see similar corrections in the 1940s as in the tropical SSTs, but interestingly, HADSST3 is actually marginally “colder” in the recent period than HADSST2. I only highlight this to show that corrections can go both ways.

As a final note, correction for homogeneity biases in temperature records is very important, and if you want more information on the basic theory, a really good review paper is:

As for the recent flattening. Well, this appears to vary markedly. For NH ET winter temperatures, there is clearly an “eye-ball” flattening in winter temperatures, but likewise, a continued increase in summer temperatures. Tropical land temperatures appear to show continued warming for all seasons, but tropical SST records could be argued to have flattened. SH ET land temperatures are a little mixed – perhaps a flattening in summer, but still increasing in spring and autumn.

Statistically, due to internally forced multi-decadal variability expressed in all of these records and the fact that we are “at the end of the time-series”, I think it is really very difficult to “quantify” a flattening or even a continued increase. Yes, we can fit linear trend lines to the latter end of the time-series, but with the known naturally forced decadal variability expressed in these data-series, I personally think that such exercises are really not very helpful. This issue will simply become clearer over the next 10-20 years... but should we wait until we have statistical certainty?
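The instability of short end-of-series trends is easy to demonstrate with a synthetic toy series (this is a sketch, not HADCRU data): a steady trend plus a multi-decadal oscillation and weather noise gives quite different slopes depending on how much of the end of the record you fit.

```python
# Synthetic illustration: a steady 0.2 deg/decade trend plus a
# multi-decadal oscillation plus weather noise. Fitting a line to
# the full record recovers the trend reasonably well; fitting only
# the last decade can give a very different slope.
import math
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1970, 2012)
temps = (0.02 * (years - 1970)                                 # trend
         + 0.15 * np.sin(2 * math.pi * (years - 1970) / 30.0)  # ~30-yr cycle
         + rng.normal(0.0, 0.1, years.size))                   # noise

# Least-squares slopes, converted to degrees per decade
full_slope = 10 * np.polyfit(years, temps, 1)[0]
last10_slope = 10 * np.polyfit(years[-10:], temps[-10:], 1)[0]
```

With the phase of the oscillation and the noise draw varied, the last-decade slope swings widely around the underlying trend, which is exactly why a ten-year “flattening” is so hard to quantify.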

Final thought

So – my take home message. Let’s not generalise too much. The newer HADCRU4 and HADSST3 data-series are incrementally improved data-sets using new data and utilising corrections related to robust theory and methods. Many on this blog will disagree with this statement, but all I can urge is please read the papers below. Much effort is focussed on the uncertainties and biases in these records. I do not see a systematic change (between old and new) to cooling (warming) of early (late) large scale instrumental series – rather I see improved data-sets (HADCRU4 and HADSST3) with well documented uncertainties.

As for temperature trends, in the same way that it does not really matter if the medieval period was warmer or cooler than today, it does not really matter if a particular seasonal time series shows an increase or flattening in temperatures. What MATTERS is that we need to understand the drivers of these changes. Natural or anthropogenic (or a mix of both). CO2 cannot explain all trends since the 1850s, but likewise internal dynamics (PDO, ENSO, NAO etc) or changes in the sun or large-scale volcanic events cannot alone explain the variability in climate.

Reader Comments (190)

Jun 6, 2012 at 9:52 PM | Dung

"CO2 works by "absorbing" or holding radiation that would otherwise be reflected back into space, there is no time lag in the so called simple physics."

Dung, the time lag refers to the various periods over which different elements of the climate system come to equilibrium with the forcing. So although CO2, and the enhanced water vapour resulting from CO2-induced warming, start "retaining" heat immediately, much of this enhanced thermal energy is absorbed in the oceans. And the oceans have a large effect on surface temperatures. So in El Nino years, when warm surface waters spread across much of the equatorial Pacific, the Earth's surface gets a significant temperature "boost" (as in 1998, which was a super El Nino). During La Nina years (predominating during the last several years), enhanced upwelling of cold waters in the East Pacific has a depressing effect on surface temperatures. The slow rise in temperatures of the upper oceans moderates the rate at which we experience warming from enhanced greenhouse gas emissions, and the El Nino/La Nina oscillation is a major contributor to the fact that the temperature rise resulting from enhanced greenhouse forcing isn't monotonic.

The question of whether it's possible to agree on the existence or value of "global average temp" is, in my view, not especially interesting. What does interest me is assessing climate models.

Specifically, can climate models simultaneously get both absolute temperatures and the temperature "anomaly" right? I wouldn't expect models to precisely forecast the temp of Oxford, or its average value, or the average value of even quite large areas. However, if they're any good, as a basic test I'd expect them to get both the average absolute temp and the anomaly right simultaneously for a large region (where "large" is comparable with the Earth's surface).

If a model is unable to do this, I'd say it isn't "fit for purpose": whatever it models, it isn't the Earth's atmosphere.

As for practicalities, a large enough region would be the sub-polar areas for which satellites provide temp measurements. Average temps for such a region from surface stations or from satellites are known to an accuracy of at least +/- 0.5 degrees. So if the anomaly was about right while the absolute temp was off by, say, 5 degrees, there'd be no doubt that the model was bad.
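The distinction being drawn here can be made precise: anomalies are invariant under a constant offset, so a model can match observed anomalies exactly while its absolute temperatures are wrong everywhere. A toy illustration with made-up numbers:

```python
# A model running a constant 5 degrees too warm reproduces the
# observed anomalies exactly, because subtracting the base-period
# mean removes any constant offset. Values are illustrative only.
obs = [13.8, 13.9, 14.1, 14.3, 14.5]
model = [t + 5.0 for t in obs]  # absolute temps off by 5 degrees

def anomalies(series):
    """Deviations from the series mean (the base-period mean here)."""
    mean = sum(series) / len(series)
    return [x - mean for x in series]

# The anomalies agree to floating-point precision even though the
# absolute temperatures disagree by 5 degrees everywhere.
max_diff = max(abs(a - b) for a, b in zip(anomalies(obs), anomalies(model)))
```

This is why anomaly agreement alone cannot establish that a model's absolute temperatures are right; the two have to be tested separately.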

So I'm interested in finding out what tests of this kind have been done and what the results have been. I asked on an earlier thread but no one seemed to know. I contacted John Christy of UAH, one of the people who manage satellite temp measurements. Unfortunately (although he was currently looking at just such a test for the next IPCC report which - for confidentiality reasons - he wasn't able to tell me about) he couldn't recall an already published source. He thought they might not be so easy to find as such "studies are often buried because they are not flattering to model output".

In any case, if anyone knows where the results of tests of the kind I've asked about have been published, please let me know.

You are telling me that CO2 in the atmosphere and water vapour in the atmosphere absorb infra-red radiation but do not produce warming of that atmosphere? If the oceans absorb the heat then it must be at the surface, so can you point me at a record that shows SST responding to atmospheric levels of CO2?

"So although CO2 and the enhanced water vapour resulting from CO2-induced warming, start "retaining" heat immediately, much of this enhanced thermal energy is absorbed in the oceans."

I keep meaning to get a clarification: the hypothetical 1 degree or so warming from CO2 doubling would be present without the presence of the oceans, is that correct, Chris? (Assuming clouds, water vapour etc. still functioned as before, but just without the huge energy sinks of the oceans themselves present.)

I still struggle to understand why the hottest and coldest places are the driest on Earth – Antarctica, Libya and Iran, for example – whereas high-water-vapour tropical areas such as SE Asia don't get anywhere near the temperatures of the sub-tropical deserts, even with the presence of all the GHG water vapour. Of course, the moist regions don't cool down as much either.

So in my thinking, the dry deserts (say the Libyan Sahara, at around 50 in the day and zero at night) compared with, say, Singapore (33 in the day and 25 at night), with the main difference being water vapour and clouds, seem to show the water in the atmosphere acting as a negative feedback.

So, true to "Climate Scientist" form, Dr. Wilson flounces off, displaying mock anguish at somebody daring to question him on the veracity of the so-called "insulting and threatening" e-mails received from this blog's denizens. If these were real, it wouldn't take him long to produce the mails. Why is he not doing so, and completely avoiding the topic? Why is he running away without addressing that question? Methinks this is one more ANC "death threat" style hoax. Sorry Dr. Wilson: until you produce evidence that you really received such mails, we don't believe you. We've seen too many dishonest acts from the climate community and do not believe what you say unless it is supported by evidence. Your words alone no longer count, as your community has shown itself to be lacking in ethics and truthfulness, and you signed a letter supporting the Climategate clique after the e-mails were released. That also makes you somebody whose words can't be trusted, as you stood up in support of bad practices.

So put up evidence or be prepared to be called out as being unreliable and deceptive.

Chris, just so we're on the same page, are you saying that increased CO2 doesn't raise the temperature of the earth immediately, but takes 30 years because of oceanic absorption? If so, what happened between 1980 and 1998, when we had a steady increase in CO2 accompanied by a steady increase in temperature?

I must say I was taken aback by Martin A's FOI request to get Rob Wilson release the insulting emails he claims he received in recent days. Was that really necessary ... at this stage?

Like many others on this thread, I too wish Rob Wilson had released the emails instead of just talking about them. As things stand, Rob's remark tars the good reputation of the entire Bishop Hill community. Had he actually released those emails, perhaps we could have investigated together whether they were really insulting and who sent them, and then named and shamed the senders.

I believe an FOI request was unnecessary before Rob Wilson had first been pressured/cajoled into releasing them of his own accord.

The first post in this thread (Jun 5, 2012 at 8:23 AM, Streetcred) pointed out that the effect of sun and clouds was sufficient to explain warming from 1990 to 2000: http://wattsupwiththat.com/2012/06/04/sun-and-clouds-are-sufficient/

During this period the IPCC state CO2 was responsible for warming of 0.8 W/square meter; changes in the sun's output and cloud cover were responsible for a warming of at least 6 W/square meter.

The models do not include the effects of the sun and clouds. As these have at least 7 times the effect of CO2, what is the point of discussing models at all?

As the models do not include the effects of sun and clouds, their interpretation of the effect of CO2 has to be amplified.

Since around 2002, temperatures (whatever they are) have fallen; the CO2 models do not predict this.

Once again, what is the point of discussing the models, other than to conclude they don't work?

Finally, the IPCC do not produce predictions or projections; they produce a number of scenarios. Each scenario makes different assumptions about future human activity. This is the really fatal problem. In 1900, or even 1912, who would have predicted the atomic bomb, Lady Gaga, mass jet travel, the rise of China, computer models (or even computers), an electricity grid, or supertankers transporting liquefied gas?

People predict based on what is known at the time, and so can't account for the unknown.

Simon Anthony, it seems you are prone to making sweeping statements without any justification at all.

For example, you wrote of a comment I made (Jun 6, 2012 at 5:33 PM):

Your analogy is false.

Oh really, why? You didn't say, then proceeded to talk about satellite temperatures.

Earlier you wrote (Jun 6, 2012 at 4:48 PM):

_______________________________________

These are all difficulties for ground station measurements. Not insurmountable but tedious and all adding to errors.

So average global temp can be (and has been) calculated from ground station measurements. However, currently the best method of measuring global temps (and hence averages) is via satellite. Such measurements have been made for about 20 years. UAH's website gives globally averaged temps in degrees K to 3 decimal places.

______________________________________

The devil is in the detail, which you dismiss so quickly as "Not insurmountable but tedious and all adding to errors."

I wonder what sort of background you have (mine is physics, maths and computing). Have you ever interpolated anything, for example?

Maybe I'm wrong, but I detect a whiff of belief in modern magic, where instead of saying "abracadabra" we now say "interpolate", "correlate", "model".

I strongly recommend reading the paper posted by Paul_K. A large part of the models' claimed success is the use of averages, which suppresses variability. Just compare the dots with the curves in the figures of the Koutsoyiannis paper.

Models are just computer programs, which require input data to produce their output data. I believe past temperatures are part of the input data. If this is true, it's not saying much that past temperatures can be (badly) reproduced. It reminds me of the salesman who sold instant water to Red Indians (just add water). He did all right until they worked out what was going on and scalped him.

When it is winter (when it tends to get colder) in the northern hemisphere, it is summer in the southern hemisphere. Is it seriously being suggested that colder winters in the north are ALWAYS accompanied by colder summers in the south? If not, what is the meaning of global temperature?

Now time for satellites. Chris de Freitas published a paper in 2002, "Are observed changes in concentration of carbon dioxide...". This contained 14 fallacies. Fallacy 4: global temperature has increased over the past 2 decades.

Chris says that the increase in temperatures is caused by urbanisation around weather stations. When people look at rural stations (e.g. Christy 2002, Michael Palmer http://wattsupwiththat.com/2011/10/24/unadjusted-data-of-long-period-stations-in-giss-show-a-virtually-flat-century-scale-trend/) they find no rise in temperatures over the last century.

So Simon do the satellites agree with the rural stations, or the urban ones?

Chris says in fallacy 5 that satellite temperature measurements do not show an increase in temperature. Hmmm. If there was warming from infrared or LW radiation it should be detectable at the surface and by satellites, but there is no clear signal (Douglas et al. 2004, 2007).

Ross McKitrick has noticed that the number of weather stations roughly halved around 1990, and about the same time temperatures jumped about 1 degree: http://www.uoguelph.ca/~rmckitri/research/nvst.html. Do the satellite temperature data show this?

On 12 April His Grace posted about a heated exchange between Doug Keenan and Scott Denning, a climatologist who has made outreach efforts to sceptics, notably attending the Heartland Conference last year.

His Grace found the whole thing rather exasperating to tell the truth. Keenan's point - that we cannot detect any global warming signal in the temperature records - and Denning's point - that CO2 is a greenhouse gas - both seemed to be substantive, but not decisive. The conversation would be more meaningful if both parties recognised this, and discussed what would be decisive. http://www.bishop-hill.net/blog/2012/4/12/heat-exchange.html

But this IS the substantive issue: CO2 is a greenhouse gas AND its effect is so small it can't be detected.

Simon Anthony: the paper linked to (through a related Climate Audit post) by Paul_K is a good one, but a more recent one from the same group of Koutsoyiannis, titled "A comparison of local and aggregated climate model outputs with observed data" (Hydrological Sciences Journal), is even more valuable, as it discusses objections to the earlier paper and adds some information. You can find this paper at Demetris Koutsoyiannis's webpage, here (currently paper 15). Fig. 12 is nice.

Thanks for the reference, Jeremy. At the end of the paper you point to is a discussion of whether climate is predictable in deterministic terms.

This paper is referenced: http://itia.ntua.gr/getfile/923/15/documents/hess-14-585-2010.pdf, where this toy model is discussed: http://itia.ntua.gr/getfile/923/1/documents/2009_EGU_Toy_Model.xls. The toy model has simple, fully-known, deterministic dynamics, with only two degrees of freedom (i.e. internal state variables or dimensions); but it exhibits extremely uncertain behaviour at all scales, including trends, fluctuations, and other features similar to those displayed by the climate.

There is something extremely ironic about people trying to make deterministic calculations about climate, for it was Edward Lorenz, a meteorologist, who pioneered chaos theory (non-linear dynamical systems) in modern times. (Poincaré had discovered it earlier, but it gave him the collywobbles.)

I think you're still arguing against something that no one has claimed. Yes, the required accuracy depends on the purpose for which you use the measurement (which is why I object to blanket statements like "global temp is meaningless" or its value is "poorly specified"). But neither I (nor I think anyone else on this or the previous thread) said that absolute temps should be used instead of anomalies for assessing temp change. I again explained the context in which I was interested in my comment at 12:12am (note to self: go to bed earlier). Accuracy of large-scale average temp measurements to within 1 degree would be more than adequate for testing whether models are accurate to within 5 degrees.

Your point about the correct temp scale to use to assess the size of errors is interesting. My view is that, if a GCM is based on sound science (ie not specifically tuned to Earth's relatively small temp variation, atmospheric constituents, land and sea distribution, distance from the Sun etc) it will work for any planet's atmosphere, no matter what that planet's distance from the Sun. It should therefore work at extreme distances where temps would be close to absolute zero. I'd therefore say that the appropriate scale is in degrees K. To choose a scale which is related to an Earth-specific measurement would suggest that the models have been tuned to fit.

On the two apparently different versions of "sea surface temp" - I might be wrong on this - isn't the satellite measurement of air temperature while NOAA's is of water temperature? If so, I'm not sure what you're getting at in showing that they have different values.

"Simon Anthony, it seems you are prone to making sweeping statements without any justification at all. For example, you wrote of a comment I made Jun 6, 2012 at 5:33 PM: 'Your analogy is false.' Oh really, why? You didn't say, then proceeded to talk about satellite temperatures."

Sorry, rereading my post, I was a bit terse. What I meant was that the problems you mention in your analogy...

"A better analogy would be to try and estimate the heat content of an unknown number of baths, each filled with an unknown amount of unknown stuff, with unknown specific or latent heats, and to make this estimate from a thermometer measuring the temperature of the air some way from these baths."

are most clearly not problems (or rather are lesser problems) when viewed from satellites. Then the "number of and volume of baths" is known, the amount of atmosphere is known, as is its composition (in an averaged sense). Surface temp readings don't individually do averaging of a similar type, but en masse I think one can convincingly argue that they do.

Then you asked:

"I wonder what sort of background you have (mine is physics, maths and computing)"

My academic background is also in maths and physics (BSc in mathematical physics, D Phil in theoretical particle physics, later MSc in non-linear dynamics). I did some programming also, mostly in Fortran (do people still use it?).

"have you ever interpolated anything for example?"

Well, it's been a while, but after a post-doc I worked for various companies modelling this and that. Now I come to think of it, interpolation was involved quite a lot, for example: filling-in of seismic data, generating artificial synthetic aperture radar images, processing of MRI images, various image and signal processing applications.

On the rest of your points, I'll take a chance and risk a sweeping statement: I almost certainly agree with some of the reservations you have. I'm very dubious about all the main supports of the AGW argument, including the accuracy of temperature measurements, however they're obtained.

I have some further comments on your points regarding the Koutsoyiannis paper, but I'll put them into a separate comment.

My reason for suggesting that "global average temp" (or sub-polar average temp or whatever large area is averaged over) is an interesting thing to think about, is that it is possible to obtain an average temp, whether from surface stations or from satellite measurements. Now the models have values for variables at spatial grid locations and successive time steps. The models can therefore be interpolated (as for example is done in the Koutsoyiannis paper) to give temp readings at the locations of the surface stations. Or to reproduce what the models predict satellites should see.

The point is that it doesn't matter whether the average represents "global average temp". It can be called whatever one likes to call it. The important thing is that it's obtained from a well-defined, reproducible procedure which allows direct comparison between measured and model average values.
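One concrete form such a well-defined, reproducible procedure could take (a generic sketch, not any dataset's actual code) is an area-weighted mean over a latitude-longitude grid, weighting each cell by the cosine of its latitude so that shrinking polar cells are not over-counted:

```python
import math

# Area-weighted mean over a 5-degree lat/lon grid: weight each cell
# by cos(latitude), since grid cells shrink towards the poles. The
# test field below is synthetic, purely for illustration.
lats = [lat + 2.5 for lat in range(-90, 90, 5)]    # cell-centre latitudes
lons = [lon + 2.5 for lon in range(-180, 180, 5)]  # cell-centre longitudes

def area_weighted_mean(field):
    """field[i][j] is the value in the cell centred at (lats[i], lons[j])."""
    total, weight = 0.0, 0.0
    for i, lat in enumerate(lats):
        w = math.cos(math.radians(lat))
        for j in range(len(lons)):
            total += w * field[i][j]
            weight += w
    return total / weight

# Sanity check: a uniform field averages to its value, whatever the weights.
uniform = [[1.0] * len(lons) for _ in lats]
```

Exactly the same procedure can be applied to observations and to model output interpolated onto the same grid, which is what makes the measured and modelled averages directly comparable.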

Now I was surprised that Koutsoyiannis compared individual station measured/model values rather than averaging them. He must have had a great deal more confidence in the regional detail of models than I have. You'd certainly expect a model prediction of a value averaged over a large area to be more reliable than point-values. So I'm not surprised that the models get long-term predictions of regional details wrong; it doesn't in any case preclude the possibility that the large-scale averages do much better. So I wonder why he didn't also do the basic test - is the average absolute temp about right?

"Suddenly, as if by magic". No sooner do I ask why Koutsoyiannis doesn't do a spatial average, than you show me a paper where he does just that.

Very interesting, and very hard to see how the people who make these models can, in good faith, claim that their models "work" because of an apparent anomaly agreement when the actual temperatures are out by several degrees.

I notice that in your reply you were silent about global temperature datasets being altered. I won't repeat the links, but here are two more which cropped up recently: http://stevengoddard.wordpress.com/2012/05/11/hansen-covering-his-tracks-in-iceland/

I am utterly amazed you think that just by repeating a measurement of temperature the process is repeatable. Yes, you can repeat the process of the measurement, but if the wind is blowing clouds containing different amounts of water and/or ice, which will dramatically change the heat capacity of the air around the thermometer, there is an unknown amount of variation.

Cells of 5 degrees longitude and latitude are used. That's hundreds of kilometres. I live within half a mile of the sea and know that if you go only 5 miles inland the temperature will be different.

Given your background (BSc in mathematical physics, D Phil in theoretical particle physics, later MSc in non-linear dynamics) I am astonished you think that GCMs can calculate 100 years into the future. I doubt if they can calculate a week.

Did you read Demetris' Toy Model paper (A Random Walk on Water)? He explains in great detail why it is not possible to model complicated non-linear dynamical (i.e. chaotic) systems.

I have made a small change to Demetris' Toy Model spreadsheet, which you can download here: http://www.jeremyshiers.com/downloads/globalWarming/chaoticModels/jcs2009_EGU_Toy_Model.xls

I've added drop down boxes so you can choose the number of decimal places that are rounded to. Also I have copied the last cell of the predictions to just below the drop down boxes.

Changing the rounding say from 5 to 6 decimal places produces a clear dramatic change in the graph.

Changing from 10 to 11 doesn't change the graph so much, but does change the last predicted result.

How many decimal places are the global temperatures accurate to???

Chaos (or non linear dynamics) was a discovery every bit as profound and unexpected as quantum mechanics or relativity. Nearly a century after it was discovered it is still being ignored.

The results of calculating simple deterministic equations can, in some circumstances, be in effect non-deterministic and essentially random.

If the toy model can change so much just by altering the rounding by 1 decimal place, the idea of calculating climate 100 years in advance is literally hopeless.
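The rounding experiment is easy to reproduce with the logistic map, a standard chaotic toy system (used here as a stand-in; it is not the model in Demetris' spreadsheet): rounding the state to 5 versus 6 decimal places at each step typically produces trajectories that bear no resemblance to each other within a few dozen iterations.

```python
# Iterate the chaotic logistic map x -> 4x(1-x), rounding the state
# to a fixed number of decimal places after every step. Because tiny
# differences roughly double each iteration, a change in the rounding
# precision typically swamps the trajectory within ~50 steps.
def logistic_rounded(x0, places, steps):
    x = x0
    for _ in range(steps):
        x = round(4.0 * x * (1.0 - x), places)
    return x

a = logistic_rounded(0.3, 5, 50)  # state rounded to 5 decimal places
b = logistic_rounded(0.3, 6, 50)  # same start, rounded to 6 places
```

The divergence is not a bug in the arithmetic; it is the defining property of chaotic dynamics that the spreadsheet demonstrates.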

"I notice that in your reply you were silent about global temperature datasets being altered."

when in the post to which you replied I'd written...

"On the rest of your points [precisely regarding global datasets being altered], I'll take a chance and risk a sweeping statement: I almost certainly agree with some of the reservations you have. I'm very dubious about all the main supports of the AGW argument, including the accuracy of temperature measurements, however they're obtained."

You say...

"Given your background (BSc in mathematical physics, D Phil in theoretical particle physics, later MSc in non-linear dynamics) I am astonished you think that GCMs can calculate 100 years into the future. I doubt if they can calculate a week."

when I'd written 3 posts above yours...

"Very interesting, and very hard to see how the people who make these models can, in good faith, claim that their models "work" because of an apparent anomaly agreement when the actual temperatures are out by several degrees."

Now you seem from some of your remarks to be thoughtful and considered. Unfortunately, like other contributors you seem sometimes entirely to misconstrue (this wasn't a subtle distinction: you attributed to me diametrically opposed opinions to those I expressed), ignoring what's actually written in favour of some prejudice or hobby-horse.

As for your other remarks, I'm quite aware of these points. Hence my reservations about climate models that I've mentioned on this and other threads.

As I said, I think you make some good points and I'm quite happy to discuss things with you, but only if you read my posts.

There's an interesting article here on computer modelling by someone who works in the field.

Perhaps it should go as an FoI request to the Met Office and any other UK institutions who do climate modelling. If the code that appeared in Climategate I is any indicator, I'd be surprised if they meet this guy's (eminently sensible) criteria.

I can only respond to the words you write, not the thoughts in your head. Richard Bandler of NLP has a saying: "the meaning of your communication is the response you get". I am going to try to show that the words you wrote do not mean, or at least did not convey to me, what you think they should have.

I said

"Given your background (BSc in mathematical physics, D Phil in theoretical particle physics, later MSc in non-linear dynamics) I am astonished you think that GCMs can calculate 100 years into the future. I doubt if they can calculate a week."

You replied this was unfair as you had written previously

"Very interesting, and very hard to see how the people who make these models can, in good faith, claim that their models "work" because of an apparent anomaly agreement when the actual temperatures are out by several degrees."

But you did not mention chaos at all. Chaos means that, because the climate is chaotic, the entire premise of the models (that they can calculate future climate) is false. If you feel you did say this, I am completely at a loss to see where and how.

If you understand how non linear dynamics makes the models pointless why don't you say so explicitly?

I said

"I notice that in your reply your were silent about global temperature datasets being altered."

you replied

"On the rest of your points [precisely regarding global datasets being altered], I'll take a chance and risk a sweeping statement: I almost certainly agree with some of the reservations you have. I'm very dubious about all the main supports of the AGW argument, including the accuracy of temperature measurements, however they're obtained."

To me there is a very large difference between concerns about the accuracy of data (which suggests measurement issues) and people deliberately altering the data so it falsely shows a rise.

You asked me if I was happy with satellite data, which you said didn't suffer from the problems of land observations, and that since satellite data agreed with land data, the land data must be more or less OK.

I asked if you meant satellite data agreed with land data before, or after the land data had been altered. So far I have not seen a response.

I referred to Chris de Freitas's 2002 paper in an earlier comment, but I forgot to say why, silly me. The reason I referred to that paper is that most of the information showing the IPCC claims are wrong is not new. It needs to be said again and again and again.

This is very tedious and it's a political not scientific problem. The people on the other side of the argument simply ignore any evidence that is inconvenient for them.

And it's not just scientists: there are government departments (Natural England, DEFRA, the Environment Agency), the Climate Change Act, the EU Habitats Directive, EU CO2 emissions targets. So there is a huge body of people and opinions to be changed. And there are any number of delaying tactics.

I have been talking to the EA about their desire to knock down sea walls near where I live (because of a 1.9m sea level rise predicted by the MET OFFICE!!). When I have found errors in documents, I am told that the document has been agreed by representatives of 16 different bodies and can't be changed unless all agree to change it. The next committee meeting is in three months' time.

On top of that, Climate Change is taught as fact in school. I have a child coming up to GCSE; in geography the choice is either to repeat the party line or fail that part of the course.

On a different, brighter note, I emailed Demetris Koutsoyiannis today to tell him he'd been mentioned here. He seemed very pleased and felt it fitting, as it's ITIA's 25th anniversary.

When I said "Very interesting, and very hard to see how the people who make these models can, in good faith, claim that their models "work" because of an apparent anomaly agreement when the actual temperatures are out by several degrees."

you interpreted this as meaning I think that "GCMs can calculate 100 years into the future" just because I fail to mention the word "chaos"?

Perhaps someone else could explain how my statement that suggests the models don't work, can be taken to mean they "can calculate 100 years into the future". It's beyond me.

The reason why I didn't mention "chaos" (which so many people wrongly seem to think clinches the debate) is the same as the reason I didn't mention "missing elements in models", "tuning of parameters to fit", "lack of predictive testing" etc. There are so many reasons for doubting models; the one I've been pursuing is that they don't get absolute temperatures right, hence my mention of that one and omission of the others (incl "chaos").

Incidentally, there are several reasons why "chaos" may be irrelevant to this debate. Just to mention a few: the variations due to chaos may be negligible or suppressed; an otherwise chaotic system may have built-in stabilizers; important quantities may have non-chaotic average values; external drivers overcome chaotic internal dynamics.

Now please don't misinterpret that. I didn't say that chaos is irrelevant, just that in some chaotic systems, for practical purposes, it may be. Just now, nobody knows whether that applies to climate or not.
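The "non-chaotic average values" possibility is easy to demonstrate with a toy system. A minimal sketch, assuming nothing beyond a standard textbook example (the logistic map, chosen purely for illustration and nothing to do with climate itself): individual trajectories are unpredictable, yet the long-run time average comes out essentially the same whatever the starting point.

```python
# Logistic map x -> 4x(1-x): individual trajectories are chaotic
# (sensitive to initial conditions), but the long-run time average
# is close to 0.5 regardless of the starting point. A chaotic
# system can still have a predictable averaged quantity.
def time_average(x0, n=200_000, burn=1_000):
    x = x0
    for _ in range(burn):              # discard the transient
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        total += x
    return total / n

m1 = time_average(0.2)
m2 = time_average(0.7)
print(m1, m2)   # both near 0.5, despite completely different trajectories
```

This does not show that climate behaves this way, only that "the system is chaotic" does not by itself rule out predictable averages.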

As for the rest of your post, well, as I said I'd be happy to discuss other things but only if you're able to see that your initial interpretation of what I said was just wrong. If you're not able to see that, I don't think we're going to be able to understand one another.

The list of criteria seems eminently sensible although I'm sure it could be expanded in various ways.

I do however disagree with the last statement which discusses answers to the last question "How much new information has your computer model given you?"

The author writes: "The last one is a trick question, by the way. Either the answer is "None" or the scientist knows nothing about computers and should be ignored at all costs. No computer has ever given any human being any new information whatsoever, because they are literally incapable of doing so."

Well, in the general case that's just obviously wrong (protein folding, aircraft design...) but it's also wrong in the case of weather (the models do improve at least short term forecasts) and also, perhaps significantly, in the case of genuinely important discoveries. Lorenz wouldn't have found his chaotic "weather" model had he not been given some very useful information by his computer.

Maybe the author's being pedantic and insisting that "information" can only exist as an idea in a human brain, or something of the kind but in the everyday sense of the word, I think he's mistaken.

I've pointed out the IPCC claim that CO2 was responsible for 0.8 watts/square metre of warming from 1990 to 2000, yet over the same period increases in solar output and decreases in cloud cover resulted in an increase of at least 6 watts/square metre.

Cloud cover has recently increased, the sun's output has decreased and temperatures are down.

All of which indicates that warming due to greenhouse gases, if it occurs at all, is not a major driver of climate and probably does not even have a measurable effect.

What I should have stated explicitly is that the models are not programmed to take account of changes in the sun's output or cloud cover. For this reason they are meaningless.

Which corresponds with my approach of best evidence. Models, proxies and fiddled records do not constitute any kind of proof. Albedo and EM budget perhaps would. The ARGO buoys should be giving us a reliable picture of any putative stored heat. I reckon that if they had it they would produce this sort of support for their case. The lack of it in their output makes me think it is not there. And makes me wonder whether they have any data which does not match the narrative and has been suppressed, because we know that is what they do, don't we?

...(...) Lastly, some of you are a prickly bunch and I will try and keep my responses measured. However, some of the personal e-mails sent to me today would be rather embarrassing to some of you if I posted them on BH. So let's please keep this civil. I can accept that some/many of you are rather sceptical, but insults will not help the discourse.

Rob

Jun 5, 2012 at 9:17 PM | Rob Wilson

Several commenters asked to see the insulting/embarrassing emails, but they were not revealed.

I made a Freedom of Information (Scotland) Act ("FOISA") request for the emails. The University of St Andrews replied that:

- They did not hold the emails for FOISA purposes, as Dr Wilson had posted on BH "of his own volition".
- In any case, data protection principles would preclude the emails being provided.

I asked the University of St Andrews to review its decision. I said that a university senior lecturer does most parts of his job "of his own volition", and I pointed to several criteria which showed that Dr Wilson was posting on BH as part of his university work. I said that I did not require personal information in the emails, so data protection did not apply.

The University of St Andrews conducted its review.

It now gave a different reason why it did not hold the emails for FOISA purposes. It referred to Decision 050/2007, in which it was ruled that emails to do with the Dingwall and District Angling Club were nothing to do with the recipient's work for the Scottish Environmental Protection Agency, so the Agency did not hold the emails relating to the Angling Club.

The University said that, even if it did hold the emails, it would not release them because "... there is a real threat that academic exploration which depends upon the exchange of viewpoints and ideas would be inhibited to the detriment of the academic process" if the emails were released. It cited section 30(b)(ii) of FOISA.

So my request for the emails was again refused.

I appealed to the Scottish Information Commissioner. I pointed out that, as Dr Wilson was posting on BH in his role as a Senior Lecturer at St Andrews University, Decision 050/2007 was irrelevant. I also pointed out that St Andrews University's review talked only in vague terms about "public confidence in the academic process" but had not been specific, as required by the Information Commissioner, about when and how such harm would occur.

On 14 March the Scottish Information Commissioner issued her decision. She found that the University had failed to deal with my request in accordance with both FOISA and the EIRs.

The University can appeal (on points of law only) within 42 days of the decision. If they do not appeal, they must reveal the insulting/embarrassing emails by 30 April 2013.

Thanks for the update Martin. Well done for sticking with it. Do you have an online link to the case at all?

Couple of things strike me:

1) is there any penalty for St Andrews' failure to deal with your request according to the relevant laws?
2) if this had been an issue upon which any matter of substance turned, how would that now be impacted?

I'm not aiming these comments at you in particular - they are more to note what I believe are significant shortcomings in the way FOI functions. Apologies if there are differences with the Scottish approach.

Assuming no legal challenge of the decision, by the time this information is released nigh on a year will have passed since your request. Depending on the circumstances, this is sufficient time for decisions of substance to move forward. My understanding of the current situation is that there is no recourse in these circumstances, neither on the status of any decisions taken, nor for non-compliance with a request.

In their review of FOI practice, the Justice Committee did accept that the issue of the non-compliance prosecution window should be changed:

"The Government accepts the conclusion of the Select Committee that the current provisions under section 77 are insufficient to allow the Information Commissioner's Office sufficient time to bring a prosecution where appropriate. However, the Government does not consider it necessary that cases under section 77 are heard by the Crown Court, nor that the existing penalties are insufficient in being an effective deterrent to misconduct. To address the problem, the Government is instead minded to extend the time available to the ICO to bring a prosecution to six months from the point at which it becomes aware of the commission of an offence, rather than six months from the point at which such an offence occurs. This change will address the core problem of insufficient time available to bring a prosecution without an excessive response of making the offence triable either way [i.e. in the Magistrates' courts or the Crown Court]."

From The Bish's post here: http://www.bishop-hill.net/blog/2012/12/3/changes-to-foi-enforcement.html

Just for info - in checking on the JC site now, I see they have just published on "The functions, powers and resources of the Information Commissioner".

I was impressed with the detail and precision of the Commissioner's analysis - in contrast to the St Andrews decision and review.

Here is her decision:

DECISION

The Commissioner finds that the University of St Andrews (the University) failed to comply with both Part 1 of the Freedom of Information (Scotland) Act 2002 (FOISA) and the Environmental Information (Scotland) Regulations 2004 (the EIRs) in responding to the information request made by Mr Ackroyd.

The Commissioner finds that by failing to identify and respond to Mr Ackroyd's information request as one seeking environmental information as defined by regulation 2(1) of the EIRs, the University breached regulations 5(1) and (2)(b) of the EIRs.

The Commissioner finds that the University incorrectly stated that it did not hold the information.

The Commissioner also finds that the University incorrectly applied the exemption in section 30(b)(ii) of FOISA to the withheld information and, in doing so, failed to comply with Part 1 (and in particular section 1(1)). Similarly, the Commissioner finds that the University was not entitled to withhold the information under any of the exceptions in the EIRs and, in doing so, breached regulation 5(1) of the EIRs.

The Commissioner therefore requires the University to provide Mr Ackroyd with the withheld information (subject to the redaction of identifying information) by 30 April 2013.

Couple of things strike me:

1) is there any penalty for St Andrews' failure to deal with your request according to the relevant laws?
2) if this had been an issue upon which any matter of substance turned, how would that now be impacted?

1) I don't think so. But it's not good for a university to have rulings that it broke the law even if there is no explicit penalty. However, if they did not comply with the Scottish Information Commissioner's ruling, that would amount to a contempt of court, with penalties.

2) Dunno. Maybe whatever it was could have been put on hold. At present it has to be accepted that the wheels turn slowly.