New peer reviewed paper shows just how bad the climate models really are

One of the biggest issues in climate science skepticism, if not the biggest, is criticism of over-reliance on computer model projections to suggest future outcomes. In this paper, climate models were hindcast-tested against actual surface observations and found to be seriously lacking. Just have a look at Figure 12 (mean temperature vs. models for the USA) from the paper, shown below:

Fig. 12. Various temperature time series spatially integrated over the USA (mean annual), at annual and 30-year scales.

The graph above shows temperature in the blue lines and model runs in other colors. Not only do the curve shapes fail to match; the temperature offsets are significant as well. The study also looked at precipitation, whose correlation fared even worse. The bottom line: if the models do a poor job of hindcasting, why would they do any better at forecasting? This from the conclusion sums it up pretty well:

…we think that the most important question is not whether GCMs can produce credible estimates of future climate, but whether climate is at all predictable in deterministic terms.

The paper, from the Hydrological Sciences Journal, is available online as HTML and as a ~1.3 MB PDF. Selected sections are given below:

A comparison of local and aggregated climate model outputs with observed data

We compare the output of various climate models to temperature and precipitation observations at 55 points around the globe. We also spatially aggregate model output and observations over the contiguous USA using data from 70 stations, and we perform comparison at several temporal scales, including a climatic (30-year) scale. Besides confirming the findings of a previous assessment study that model projections at point scale are poor, results show that the spatially integrated projections are also poor.

According to the Intergovernmental Panel on Climate Change (IPCC), global circulation models (GCM) are able to “reproduce features of the past climates and climate changes” (Randall et al., 2007, p. 601). Here we test whether this is indeed the case. We examine how well several model outputs fit measured temperature and rainfall in many stations around the globe. We also integrate measurements and model outputs over a large part of a continent, the contiguous USA (the USA excluding islands and Alaska), and examine the extent to which models can reproduce the past climate there. We will be referring to this as “comparison at a large scale”.

This paper is a continuation and expansion of Koutsoyiannis et al. (2008). The differences are that (a) Koutsoyiannis et al. (2008) had tested only eight points, whereas here we test 55 points for each variable; (b) we examine more variables in addition to mean temperature and precipitation; and (c) we compare at a large scale in addition to point scale. The comparison methodology is presented in the next section.

While the study of Koutsoyiannis et al. (2008) was not challenged by any formal discussion papers, or any other peer-reviewed papers, criticism appeared in science blogs (e.g. Schmidt, 2008). Similar criticism has been received by two reviewers of the first draft of this paper, hereinafter referred to as critics. In both cases, it was only our methodology that was challenged and not our results. Therefore, after presenting the methodology below, we include a section “Justification of the methodology”, in which we discuss all the critical comments, and explain why we disagree and why we think that our methodology is appropriate. Following that, we present the results and offer some concluding remarks.

Here are the models they tested:

Comparison at a large scale

We collected long time series of temperature and precipitation for 70 stations in the USA (five were also used in the comparison at the point basis). Again the data were downloaded from the web site of the Royal Netherlands Meteorological Institute (http://climexp.knmi.nl). The stations were selected so that they are geographically distributed throughout the contiguous USA. We selected this region because of the good coverage of data series satisfying the criteria discussed above. The stations selected are shown in Fig. 2 and are listed by Anagnostopoulos (2009, pp. 12-13).

In order to produce an areal time series we used the method of Thiessen polygons (also known as Voronoi cells), which assigns weights to each point measurement that are proportional to the area of influence; the weights are the “Thiessen coefficients”. The Thiessen polygons for the selected stations of the USA are shown in Fig. 2.

The annual average temperature of the contiguous USA was initially computed as the weighted average of the mean annual temperature at each station, using the station’s Thiessen coefficient as weight. The weighted average elevation of the stations (computed by multiplying the elevation of each station with the Thiessen coefficient) is Hm = 668.7 m and the average elevation of the contiguous USA (computed as the weighted average of the elevation of each state, using the area of each state as weight) is H = 746.8 m. By plotting the average temperature of each station against elevation and fitting a straight line, we determined a temperature gradient θ = -0.0038°C/m, which implies a correction of the annual average areal temperature θ(H – Hm) = -0.3°C.
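The areal-averaging arithmetic described above is simple enough to sketch. In the snippet below the station temperatures, elevations and Thiessen weights are hypothetical stand-ins (the study uses 70 stations); only the lapse rate θ = -0.0038°C/m and the mean elevation H = 746.8 m are figures quoted in the text:

```python
# Sketch of the paper's areal-averaging procedure.
# Station values and weights are illustrative, NOT the study's data.
temps = [12.4, 9.8, 15.1]        # mean annual temperature per station (degC)
elevs = [350.0, 900.0, 120.0]    # station elevations (m)
w     = [0.5, 0.3, 0.2]          # Thiessen coefficients (sum to 1)

# Weighted average temperature and weighted mean station elevation
T_areal = sum(wi * ti for wi, ti in zip(w, temps))
H_m     = sum(wi * hi for wi, hi in zip(w, elevs))

theta = -0.0038   # temperature gradient fitted in the paper (degC/m)
H     = 746.8     # average elevation of the contiguous USA (m)

# Elevation correction theta * (H - H_m) applied to the areal average
T_corrected = T_areal + theta * (H - H_m)
```

With the paper's numbers (Hm = 668.7 m, H = 746.8 m) the same correction term works out to the quoted -0.3°C.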

The annual average precipitation of the contiguous USA was calculated simply as the weighted sum of the total annual precipitation at each station, using the station’s Thiessen coefficient as weight, without any other correction, since no significant correlation could be determined between elevation and precipitation for the specific time series examined.

We verified the resulting areal time series using data from other organizations. Two organizations provide areal data for the USA: the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA). Both organizations have modified the original data by making several adjustments and using homogenization methods. The time series of the two organizations have noticeable differences, probably because they used different processing methods. The reason for calculating our own areal time series is that we wanted to avoid any comparisons with modified data. As shown in Fig. 3, the temperature time series we calculated with the method described above are almost identical to the time series of NOAA, whereas in precipitation there is an almost constant difference of 40 mm per year.

Determining the areal time series from the climate model outputs is straightforward: we simply computed a weighted average of the time series of the grid points situated within the geographical boundaries of the contiguous USA. The influence area of each grid point is a rectangle whose “vertical” (perpendicular to the equator) side is (ϕ2 – ϕ1)/2 and its “horizontal” side is proportional to cosϕ, where ϕ is the latitude of each grid point, and ϕ2 and ϕ1 are the latitudes of the adjacent “horizontal” grid lines. The weights used were thus cosϕ(ϕ2 – ϕ1); where grid latitudes are evenly spaced, the weights are simply cosϕ.
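For evenly spaced grid latitudes the weighting reduces to cos ϕ, which can be sketched as follows; the 2.5° grid below is an illustrative assumption, not the grid of any particular GCM:

```python
import math

# Latitude weighting for grid points on an evenly spaced grid:
# each point's influence area shrinks as cos(phi) toward the poles.
lats = [27.5, 30.0, 32.5, 35.0, 37.5, 40.0, 42.5, 45.0, 47.5]  # deg N (hypothetical)

weights = [math.cos(math.radians(phi)) for phi in lats]
total = sum(weights)
norm = [wi / total for wi in weights]   # normalised areal weights

# An areal mean of a model field x_i over these points would then be
# sum(norm_i * x_i), mirroring the weighted average described above.
```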

CONCLUSIONS AND DISCUSSION

It is claimed that GCMs provide credible quantitative estimates of future climate change, particularly at continental scales and above. Examining the local performance of the models at 55 points, we found that local projections do not correlate well with observed measurements. Furthermore, we found that the correlation at a large spatial scale, i.e. the contiguous USA, is worse than at the local scale.

However, we think that the most important question is not whether GCMs can produce credible estimates of future climate, but whether climate is at all predictable in deterministic terms. Several publications, a typical example being Rial et al. (2004), point out the difficulties that the climate system complexity introduces when we attempt to make predictions. “Complexity” in this context usually refers to the fact that there are many parts comprising the system and many interactions among these parts. This observation is correct, but we take it a step further. We think that it is not merely a matter of high dimensionality, and that it can be misleading to assume that the uncertainty can be reduced if we analyse its “sources” as nonlinearities, feedbacks, thresholds, etc., and attempt to establish causality relationships. Koutsoyiannis (2010) created a toy model with simple, fully-known, deterministic dynamics, and with only two degrees of freedom (i.e. internal state variables or dimensions); but it exhibits extremely uncertain behaviour at all scales, including trends, fluctuations, and other features similar to those displayed by the climate. It does so with a constant external forcing, which means that there is no causality relationship between its state and the forcing. The fact that climate has many orders of magnitude more degrees of freedom certainly perplexes the situation further, but in the end it may be irrelevant; for, in the end, we do not have a predictable system hidden behind many layers of uncertainty which could be removed to some extent, but, rather, we have a system that is uncertain at its heart.
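The point about low-dimensional deterministic systems producing climate-like irregularity is easy to demonstrate. The Hénon map below is not the Koutsoyiannis (2010) toy model, just a generic stand-in with two state variables and constant parameters (i.e. constant "forcing"):

```python
# Illustration only: a two-degree-of-freedom deterministic map (Henon map)
# with fixed parameters, showing irregular behaviour even at 30-step
# "climatic" averaging scales, with no change in external forcing.
def henon(n, a=1.4, b=0.3, x=0.1, y=0.1):
    xs = []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs.append(x)
    return xs

series = henon(3000)

# 30-step block averages still wander irregularly, despite fully known
# deterministic dynamics and only two internal state variables.
blocks = [sum(series[i:i + 30]) / 30 for i in range(0, 3000, 30)]
spread = max(blocks) - min(blocks)
```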

Do we have something better than GCMs when it comes to establishing policies for the future? Our answer is yes: we have stochastic approaches, and what is needed is a paradigm shift. We need to recognize the fact that the uncertainty is intrinsic, and shift our attention from reducing the uncertainty towards quantifying the uncertainty (see also Koutsoyiannis et al., 2009a). Obviously, in such a paradigm shift, stochastic descriptions of hydroclimatic processes should incorporate what is known about the driving physical mechanisms of the processes. Despite a common misconception of stochastics as black-box approaches whose blind use of data disregard the system dynamics, several celebrated examples, including statistical thermophysics and the modelling of turbulence, emphasize the opposite, i.e. the fact that stochastics is an indispensable, advanced and powerful part of physics. Other simpler examples (e.g. Koutsoyiannis, 2010) indicate how known deterministic dynamics can be fully incorporated in a stochastic framework and reconciled with the unavoidable emergence of uncertainty in predictions.

Haven’t we learned anything from models that try to forecast weather or the economy … or, even that sub-prime asset backed investments were safe?
Statistics, mathematics, and program code are just the new smoke and mirrors for the illiterate AGW believer.
Dumb, dumber, dumbest.

It is about time we’ve gotten around to comparing these models to actual data. This should have been done for each model for each publication using a model to give the reader an idea of its accuracy. This is the standard for other fields in science.

Great. If I were ten years younger I would have tried to weasel my way into working with their group. They obviously know their physics and statistics.
Lucia has shown the bad fit of model temperature outputs with reality, but I do not know whether she is aiming at publishing in peer review.

This is exactly where the attack needs to concentrate. I have never understood why we keep battling AGW alarmists on territory that is no use to anyone, i.e. raw data (was this year hotter? how much ice melted? what caused that flood? etc.)
The second weakest link in the whole AGW chain is the modelling. It is so clear that the models are just feeble guesses which crumble on first contact with evidence. It is quite likely that modelling is impossible given the chaotic nature of the problem.
(The first weakest link, by the way is the steering mechanism that AGW alarmists think that they have control over. It is the giant thermostat in the sky that they are currently trying to appease by sacrificing dollars in Cancun…)

I can’t think of any field where this kind of inaccuracy in modeling would be OK. No place where real money is at stake, certainly. Have these people no standards? Does nobody think they ought to check anything? Do they honestly think that trying hard is all you need?
The mind boggles.
Thank you for publishing these results. It may be painful, but not as painful as the results of taking models more seriously than they deserve.

It is worth noting that this paper exposes the real reason why “anomalies” have been foisted on us. Anomalies, which are a delta(T), study the shapes of terrains while ignoring the height. Under a gravitational field, for example, one expects the shapes of mountains to be similar, but can one really average the Himalayas anomaly (average height of mountains taken as the base of the anomaly) with a hilly country’s anomaly and come out with an anomaly in height that has any meaning for anything real?

Demetris’ paper showing deterministic chaos in a toy model was published in 2006 and is here, rather than at the “2010” linked above. It’s important to note that the hindcast reconstruction method Demetris and his co-authors used maximized the likelihood that the GCMs would get it right.
In a conversation I had with Demetris by email awhile ago, he agreed that hindcast tests such as he’s done really could have, and should have, been done 20 years ago. That would have spared us all a lot of trouble and expense.
It’s a fair question to ask professionals such as Gavin Schmidt or Kevin Trenberth why they didn’t think to test what they have so long asserted.
Thanks for posting the work, Anthony, and Demetris may you and your excellent colleagues walk in the light of our admiration, and may some well-endowed and altruistic foundation take benevolent notice of your fine work. 🙂

Robert says: (December 5, 2010 at 10:00 pm) I can’t think of any field where this kind of inaccuracy in modeling would be OK. No place where real money is at stake, certainly.
There is real money at stake, Robert. That would seem to be the reason why “this kind of inaccuracy in modeling” is acceptable, or perhaps that should be welcomed, or perhaps even “necessary”…

I don’t think that following the suggestion presented in this latest paper about how to model climate — namely, assuming that the change in climate is a stochastic, random process modulated by the physical processes involved — will affect the basics of the climate argument much. Instead of asserting that there will be, say, a 4C temperature rise in the next century, now alarmists will say something like “there is a 90% chance of a 4C or higher temperature rise in the next century” and the skeptics will retort that, to the contrary, the chance of that happening is much smaller — that it is less than, say, 10%. The essence of the argument will not go away, and the new stochastic climate forecasts will come to resemble current short-range weather forecasts with their predictions that tomorrow there is a certain percentage chance of rain, snow, etc. Another point worth remembering is that we do observe a great deal of non-randomness in how the climate changes over very long times — for example the beginning and end of the ice ages have followed a fairly regular 100,000 year to 130,000 year cycle over the last several million years.

Climate models are an illustration of what the ‘climate scientist’ wants you to think they know.
Comparing a climate model to reality is a simple way of illustrating the ‘climate scientists’’ ignorance, amorality and hubris.

The team will argue that Koutsoyiannis and his little band of deniers in Greece are playing in the wrong sand box again. They are hydrologists, not climatologists, and their article is not published in a team-controlled, approved journal.

Computer models? Something like the models the National Weather Service used to forecast our recent snow event here in Buffalo?
Let’s see how well that one turned out.
Wednesday morning they were still calling for us to get 2 to 4 inches of snow before the band of lake effect snow drifted south to ski country. They were doing pretty good, till that evening, when the band changed direction and drifted back to its start point over us and stayed put and kept on dumping snow so that, instead of 2-4 inches, we got 2-3 feet of heavy wet snow.
The only thing the models got right was there was going to be lake effect snow. They didn’t even get the amount right. Originally they called for 1-2 feet in ski country. My neighboring town got 42″, I only got about 30″. Ski country only got a few inches.
One good thing … I was able to burn hundreds of calories an hour shoveling all that global warming.
An added word about lake effect events (snow and even rain) … they are a bear to nail just right. Wind speed, humidity, direction, temperature all have to be just right. I can sympathize with what local forecasters have to come up with. But I don’t give that kind of understanding to the climate scientists and their models which pretend to encompass the globe and cover all possible variations.

Pat Frank says:
December 5, 2010 at 10:39 pm
In a conversation I had with Demetris by email awhile ago, he agreed that hindcast tests such as he’s done really could have, and should have, been done 20 years ago. That would have spared us all a lot of trouble and expense.
It’s a fair question to ask professionals such as Gavin Schmidt or Kevin Trenberth why they didn’t think to test what they have so long asserted.
But I am sure they did have these studies. That is why they came up with the brilliant idea of anomalies, as I discuss above.
It is true that matter when subjected to forces behaves in a similar manner just because there is a limited way the gravitational and electromagnetic forces can impact it and a limited way that matter can react: elastic, inelastic, etc. That is why when we look at a terrain we need an absolute scale to be able to know whether we are looking at low level rock formations or the Alps. The scale is very important to life and limb. Similarly for waves, we need an absolute scale to know whether it is a storm in a teacup or out in the wild wild world.

ZZZ says:
December 5, 2010 at 10:46 pm
I don’t think that following the suggestion presented in this latest paper about how to model climate — namely, assuming that the change in climate is a stochastic, random process modulated by the physical processes involved — will affect the basics of the climate argument much. Instead of asserting that there will be, say, a 4C temperature rise in the next century, now alarmists will say something like “there is a 90% chance of a 4C or higher temperature rise in the next century”
The models as they are now do not propagate errors and thus cannot give a consistent probability for the output expected. The spaghetti plots are sleight of hand, a “where is the pea” method of hiding this.
Once we get models with error propagation, trust will go up, because they will be truly predicting and not handwaving.
I still believe that an analogue computer specifically designed for climate would be a solution. Or chaotic models along the lines of Tsonis et al.
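For readers wondering what “error propagation” would even mean here, a minimal Monte Carlo sketch follows; the linear toy model and its parameter values are entirely hypothetical and have nothing to do with an actual GCM:

```python
import random

# Toy illustration of propagating parameter uncertainty through a model:
# sample an uncertain "sensitivity" parameter many times and look at the
# resulting spread of outputs, instead of reporting a single trajectory.
random.seed(42)

def toy_response(forcing, sensitivity):
    return sensitivity * forcing          # trivial stand-in for a model

forcing = 3.7                             # hypothetical forcing (W/m^2)
samples = [toy_response(forcing, random.gauss(0.8, 0.2))
           for _ in range(10000)]

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# The output is now a distribution (mean, var), not a bare number.
```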

Thanks very much, Anthony, for this post, and Pat and all for the comments.
You may find it interesting to see the accompanying Editorial by Zbigniew W. Kundzewicz and Eugene Z. Stakhiv in the same issue (go to the official journal page linked above and hit “issue 7” near the top of the page to get to the issue contents). There is also a counter-opinion paper by R. L. Wilby just below the Editorial.

With my pathetic undergraduate mathematics I thought the comments in para 2 of the conclusions were self evident.
I am sure I have read studies and mathematical proofs before (I will have to hunt them down), relating to climate, financial markets and purely mathematical models, that proved conclusively that stochastic systems with many degrees of freedom simply cannot be modelled to predict extreme events. In particular for financial markets extreme events such as bubbles or busts cannot be predicted at all. This would seem analogous to climate warming runaway, or cooling for that matter.
And this doesn’t appear to me to be a difficult concept to grasp. I have never quite understood this faith that AGW proponents place in the GCM models. So I assumed it was my ignorance.

Lew Skannen;
The second weakest link in the whole AGW chain is the modelling….
(The first weakest link, by the way is the steering mechanism that AGW alarmists think that they have control over>>
The weakest link is in fact the physics. The models, the temperature records, the glaciers receding, the tree ring proxies, sea ice extent… these are all distractions from the actual physics. The AGW proponents keep changing the subject from one misleading data set to the next until their argument is so warped that polar bear populations quadrupling becomes proof that they are going extinct due to global warming.
The fact is that they won’t discuss the physics because they can’t win the argument on the fundamentals, so they ignore them. But it won’t change the facts:
CO2 forcing is logarithmic. Most of the warming that CO2 can cause is long behind us, and it would take centuries of fossil fuel consumption at ten to a hundred times current rates to get another degree out of it over what we are already getting.
Almost no warming happens at the equator, the most happens at the poles. Most of what happens at the poles happens during winter. Most of what happens during the winter happens at the night time low.
So a really hot day at the equator goes from a daytime high of +36 to +36.1, and a really cold night, in winter, at the pole, goes from -44 to -36. The lions and tigers aren’t likely to notice, and neither will the polar bears. Well, unless a climatologist shows up to study the polar bears and it warms up enough that they come out of hibernation. On the other hand, WE might notice less in that case due to a sparsity of climatologists.
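For reference, the “logarithmic” claim above refers to the standard simplified forcing expression ΔF = 5.35 ln(C/C0) W/m² (Myhre et al., 1998), which is not a result from the paper under discussion; a quick sketch:

```python
import math

# Simplified CO2 radiative forcing: dF = 5.35 * ln(C / C0) W/m^2
# (Myhre et al. 1998). C0 = 280 ppm is the usual pre-industrial baseline.
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling adds the same forcing, so each extra ppm contributes
# less than the one before it:
f_560 = co2_forcing(560.0)    # one doubling from pre-industrial
f_1120 = co2_forcing(1120.0)  # two doublings: exactly twice the forcing
```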

Now tell me, how do we know what level of CO2 will give us 2degC of warming. An accurate figure would be appreciated.
I’m sure the guys and gals at UKCIP (http://www.ukcip.org.uk/) can tell us very accurately.
At http://ukclimateprojections.defra.gov.uk/content/view/857/500/
they provide projections of climate change, and absolute future climate for:
* Annual, seasonal and monthly climate averages.
* Individual 25 km grid squares, and for pre-defined aggregated areas.
* Seven 30 year time periods.
* Three emissions scenarios.
* Projections are based on change relative to a 1961–1990 baseline.
Now that is what I call good science.

So the models get more complicated and the computers more powerful, and all they are able to produce is just some kind of variation of the old Keeling curve.
Notice that GCMs are not able to model the PDO/AMO cycle at all, since all they are working with is the fictional “radiative forcing” concept. They are tuned to catch the 1975-2005 warming trend, but they wildly diverge from reality before and after.
Polar regions, allegedly the most sensitive areas to “increased greenhouse forcing”, do not show any sign of it: the Arctic shows just the AMO variation and the Antarctic shows even slight cooling. http://i43.tinypic.com/14ihncp.jpg
This alone totally and unequivocally disqualifies the AGW theory. No other scientific hypothesis would survive such a discrepancy between theory and observation. Shame on all scientists who keep their mouths shut.

Demetris’ paper showing deterministic chaos in a toy model was published in 2006 and is here, rather than at the “2010″ linked above.

Actually the 2010 is correct. It is a more recent toy model by Koutsoyiannis, in what we think is his best paper to date.
ZZZ:

we do observe a great deal of non-randomness in how the climate changes over very long times — for example the beginning and end of the ice ages have followed a fairly regular 100,000 year to 130,000 year cycle over the last several million years.

The fact that you have cycles that resemble periodicity does not necessarily mean that they are non-random. (See the 2010 paper by Koutsoyiannis linked above for a definition of randomness.) The toy model of Koutsoyiannis (in the same paper) has unpredictable cycles and trends without any difference in forcings. Having unpredictable cycles and trends can be more random than not having them, because if you have a constant long-term average “signal” plus “noise”, then you know more than if you don’t have even that. I also explain this in the epilogue of my HK Climate web site.

Dear Greeks, Well done! Very well done!
U.S. climate scientists: With more than a billion a year allocated for research … who wants answers?
Anna … You’re smart. Very smart.
Davidmhoffer … You too. [snip]
Our climate may well be too chaotic to model in the fine grain. But even the proverbially “chaotic” r^2 equation is bounded. In fact it’s closely constrained between a max and a min. Make an r^2 plot and it has lots of chaotic ups and downs. But stand far enough back and all you see is a flat line. Our climate is probably like that. Certainly on multi-millennial scales. It’s almost never -20°C in LA, and never plus 35°C at the poles.
dT
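The “chaotic but bounded” point above (the “r^2 equation” presumably alludes to the logistic map) can be checked in a few lines:

```python
# Logistic map x -> r*x*(1-x) at r=4: step-to-step behaviour is chaotic,
# yet every value stays confined to the interval [0, 1].
def logistic(n, r=4.0, x=0.2):
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

orbit = logistic(1000)
lo, hi = min(orbit), max(orbit)   # tightly constrained despite the chaos
```

Stand far enough back (i.e. average over long stretches) and the wild wiggles collapse toward a stable band, which is the commenter's point about multi-millennial scales.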

anna v says: (December 5, 2010 at 10:32 pm) It is worth noting that this paper exposes the real reason why “anomalies” have been foisted on us. Anomalies, which are a delta(T) study the shapes of terrains ignoring the height. (and at December 5, 2010 at 11:00 pm) Similarly for waves, we need an absolute scale to know whether it is a storm in a teacup or out in the wild wild world.
Thank you, Anna. I always read your posts, and always either learn or find myself puzzling — my hope is that our present scientists-in-the-making do likewise.
Better still, perhaps our present crop will take note and learn from your wisdom (and gentle humour).

Very interesting (and very topical) discussions with some real live climate modellers (all of whom are terrified of our esteemed host here and so won’t turn up) at Judith Curry’s blog.
Simply put, they do not see verification and validation as a priority. Their work is ipso facto so brilliant that no checks (by observation or by external scrutiny) are needed.
Here’s the link.

Having failed to make accurate short-term forecasts (the Met Office has an atrocious forecasting record and, e.g., singularly failed to forecast the snow in Glasgow; I know because I started some outdoor work and removed all the gutters because the five-day forecast was “light snow” on a single day followed by clear weather, and I now have 2-foot icicles hanging from the entire roof), the modellers clearly tried to justify the huge amount of money the public spent on their toy computers by claiming that “whilst we can’t predict the weather … we can predict the climate”. HA HA HA HA HA HA HA!
What a joke they are!

‘Duh!’ as Mr Simpson would so cogently express his frustration with and scorn for the dumb ideas that supposedly-intelligent blokes with proper PhDs and everything have persuaded the alarmist and alarmed world to take seriously for far too long.
When I was a boy, I built model aeroplanes badly, being blessed with more than my share of metaphorical thumbs plus a very limited set of construction skills, but the quality of my models never stopped me dreaming they were real and capable of marvellous aeronautical feats. Sometimes they even flew.
When I grew up I realised that childhood dreams should remain in childhood, but, sadly, many clever kids never mature to full and responsible adulthood; a few of them become scientists and go on to scare the world with their clever but childish fantasies. And sadly, many ordinary people seem to like the thrill of being scared, but never willingly put themselves in any kind of actual danger, so the timid fasten on to the scaremongers for our ration of thrills and the Marxists fasten on to the models to satisfy their anti-humanity rage and control freakery. It is very interesting that many (but not all) of the alarmist tribe never go motor-racing, blue-water yachting, mountain-climbing or play vigorous contact sports and so never put themselves in a situation which might supply a good ration of adrenaline or even plain old-fashioned fun, which, apart from the joy of control or exterminating something, is not permitted by the Marxist doctrine.
And finally, we find the alarmist scientists’ juvenile climate models are c**p – wow, who woulda thunk it!!

Demetris,
I note that you did not cite the Wentz publication in the journal Science, which found that the models reproduced less than one-third to one-half of the increase in precipitation observed during the recent warming. Unfortunately, model-based studies projecting increased risk of drought in the future fail to discuss this result and its implications for model credibility in representing future warming-associated increases in precipitation. I have wondered if they’ve been able to ignore the Wentz paper because it doesn’t report the specific model results. Your work may be more difficult to ignore, but I did not see a specific comparison of the increase in precipitation during the warm decades in the models relative to the increase seen in the observations. It would help to know if your results are consistent with those of Wentz. I’d have to read the Wentz paper again, but I assume we’d expect to see this in a comparison of the 80s or 90s with the 70s or 60s. Thanx.

Roger Carr says: “There is real money at stake, Robert.“.
True. But is there a difference between the way in which money is at stake here, compared with how it is normally at stake? When organisations build models, it is usually with the aim of making money or saving money. Their own money. So the models tend to get tested carefully, and are ditched if they aren’t up to standard.
Is it possible that the climate models’ true aim is to lose vast amounts of other people’s money? If so, then accuracy in the models would not help achieve this, would therefore not be an objective, and need not be tested for. A model could then be coded to produce certain required results, and could remain in operation for as long as it can produce those results.
Nah. Absurd. Forget it.

Another classic example of the difference between real science and ‘climate science’ – no different from Mannian Maths versus real statistics.
Any real branch of science would have a prerequisite of testing a model by hindcasting, but with ‘climate science’ this is not deemed desirable or necessary, unless the raw data has been sufficiently manipulated to fit the models.

Apart from the fact that this should have been done, thoroughly, before a single character of any IPCC report was struck, I don’t mind taxes spent on exposing the bleeding obvious if it saves us all money.

How can there be offsets? The absolute temperature is of critical importance to the radiation balance, and it must be an initial condition of the simulation. If they don’t get that right, especially with an error of 2.5-3 degrees C, the error exceeds the so-called CO2 forcing during the 20th century several times over.
Also, I’ve spent many years fighting divergences in numerical simulations. With linear problems, it can be almost done, at least in many cases. I have always assumed that people like Gavin S. with degrees in applied math have these things under control. Now, I’m not all that confident they do.

This confirms what we knew in essence: that it is darn hard to model, over a long period of time, any system which is in a constant state of flux as to what exactly constitutes the system, and hence where the ‘model boundary’ should exist, on so many dimensions. Models work best with well-defined and understood materials within a finite and often small ‘domain’; as soon as you go beyond this, the usefulness of the model quickly tails off and you just end up observing what is, in essence, noise.
Although very nice to have it be written up in the literature in such a precise way I must say – well done!

In this paper climate models were hindcast tested against actual surface observations, and found to be seriously lacking.
Let’s be clear about this. GCM’s do not match the past, which is known. The modelers disdain the past. They don’t care that their models don’t fit known data, i.e. the reality of the known past. They excuse that lack of correlation as too mundane to matter. Their theoretical framework, a gross oversimplification of a complex system, is not based on empirical data. Their model outputs fail when compared to known reality.
It’s laughable to consider whether GCM “projections” into the future are “science”. Science is the study of reality (we could argue about what “reality” is, and get bogged down in epistemology, but there is a common ground we mostly all share). The GCM’s do not comport with reality as is generally agreed upon, and as indeed the modelers themselves agree upon.
If the nut does not fit the bolt today, it is not going to suddenly fit that bolt tomorrow.

Robert says: December 5, 2010 at 10:00 pm
All the Roberts: I can think of where modelling is useful.
Self-interest.
What’s the bet that it is the parameters (ever shape-shifting) framing the financial, gambling, voting, acquiescence, alcohol (or any other addictive substance) enterprises devised FOR and OF human nature which one cares to mention? Neuro-cognitive research is in its infancy.
It was Friedman or von Mises who stated ‘own interest’, not ‘self interest’, I thought. But I will check.
Also, what is a hindcast? I may have missed the original discussion, but this is a new word. A priori and a posteriori are understandable.
I would appreciate an explanation of hindcasting, thanks.

That global climate models are not very good at local and regional levels and on shorter time scales is well known. This Science News article gives an overview of work being done to improve models. Most of the article is about modeling aerosols, but the last section, “Getting Local”, deals with efforts to improve local projections. I did not see anything shockingly new in the Greek research paper. http://www.sciencenews.org/view/feature/id/65734/title/The_final_climate_frontiers
REPLY: It wasn’t meant to be “shockingly new”, that’s your panic descriptor. It is an update to a previous paper. – Anthony

AGW is being demolished from all sides. The above peer-reviewed report demolishes AGW’s GCMs, and the following demolishes the CO2-is-causing-global-warming theory: http://notrickszone.com/2010/12/05/conference-recap-and-my-big-disappointment/
“Nir Shaviv then explained solar activity and climate, showing that the link between solar activity, cosmic rays and cloud formation is pretty much in the bag. CERN will soon be releasing results on this, too.”

Oh, I am so underwhelmed I could just fart.
Global. GLOBAL. G.L.O.B.A.L.
If you wanted to make the GCM’s look really bad, why not get their predictions for Koolyanobbing in Western Australia? I’m sure that one would look bad too.
The idea is that it is a global phenomenon. The whole world gets warmer on average. Any prediction of local climate by a GCM is bound to be far less accurate.
Back to the drawing board guys. Better luck next time.

And anyway, the models DO fit the hindcast data.
It is not the models that are at fault here, it is the faulty data – but with a little homogenisation and rebasing, the data WILL conform to the models.
Simple really.

As has been noted many times, the ONLY way to validate ANY model is to have it predict the future correctly a sufficient number of times that the match could not have happened by chance. Since climate models “tell” us what is supposed to happen 20, 30, 50 or 100 years in the future, they can NEVER be validated.

“we think that the most important question is not whether GCMs can produce credible estimates of future climate, but whether climate is at all predictable in deterministic terms.”
Internal randomness generated within the various climate systems precludes the possibility of predictability, as Stephen Wolfram argues in A New Kind of Science. Systems with internally generated randomness can’t be predicted; to know their future state you must observe them in real time.

Read the editorial that Prof. Koutsoyiannis suggested above.
The editorial is fair, impartial, and stresses the need for useful modeling for applications.
There is a cognitive dissonance in people who can think that billions can be spent on the predictions of models for climate control, when the said models fail miserably when questioned on large-scale details.
Moderator, this is the correct link for 10 above in the post. The one there points to the paper discussed once more.

A very interesting, and very frank work! Congratulations on your paper, Demetris and team!
I’ve come to regard GCMs as invaluable in one particular respect: They are a demonstration of how LITTLE we understand our climate system.
I’ve become increasingly frustrated with the climate science establishment’s over-reliance on climate modelling, and their wilful abuse of the Scientific Method by treating the product of models as if it were experimental data.
A climate model run is at best, manifestly, an exploration of a hypothesis, or an extrapolation of a hypothesis. It is not a TEST of that hypothesis. The only test of a climate model’s veracity with any merit whatsoever is its juxtaposition with good old, real-life observational data.
And, most importantly, that test is itself subject to acute scrutiny, requiring the model’s output to match observation for the right reasons. A model that matches observational data for the wrong reasons is not valid.
The climate science community’s collective failure to recognise that model results are scientific invention rather than scientific discovery is a fundamental flaw in the discipline and may yet prove its undoing.

“David says:
December 6, 2010 at 1:26 am”
No, it is consistent with the political consensus that CO2 is, somehow, carbon pollution. I think a better solution would be what users of ex-IBM hardware did in the UK, with “mainframes” apparently being used in the late 70s and early 80s as anchor points in harbours for boats. (Apparently there are some ex-machines in the Solent, around Portsmouth and Gosford.)

ZZZ Says:
“Another point worth remembering is that we do observe a great deal of non-randomness in how the climate changes over very long times — for example the beginning and end of the ice ages have followed a fairly regular 100,000 year to 130,000 year cycle over the last several million years.”
No, they haven’t. They followed an approximately 40,000-year cycle until about 1,000,000 years ago, then gradually shifted to a 100,000-year cycle over 200,000 years, and have followed an approximately 100,000-year cycle since then (8 cycles up till now).
This shift is obviously fundamental to understanding the causes of ice ages, but unfortunately the reason for it is not well understood (which is the accepted way to say “nobody has a clue” in a scientific context).

@ John Brookes
‘The idea is that it is a global phenomenon. The whole world gets warmer on average. Any predictions of local climate by a GCM is bound to be far less accurate’
But years ago when climate models were first found to be crap, the reasons given were that they were done on too big a scale. They were not precise enough. And that by investing zillions of resources they could do ever more detailed (=more localised = smaller grid cells) predictions that would therefore be much more accurate.
The resources were forthcoming…the improved models were run. And are still crap.
How can we reconcile this bit of history with your remarks that the problem now is ‘too much precision’ please?

anna v says:
December 5, 2010 at 11:06 pm
Anna, the problem with models of chaotic systems (weather models) is that they randomly break down from the moment you start the model rolling. The Met offices have what they call periods of predictive and non-predictive outcomes. That is to say that sometimes they can run the model many times with slightly different starting parameters and the outcome remains APPROXIMATELY the same for each run. Other times, when they perform the same experiment, the outcomes are completely chaotic.
In weather terms you will hear them say that there is “some uncertainty in the forecast”. The significant breakdown can be relatively quick (within 12 hours) or relatively slow (within 3 days). I like the idea of an analogue computer, but the same problems apply. The most important player in our ‘climate’ system, as opposed to weather, is the ENSO, and that cannot be predicted either in time or magnitude. Therefore, a viable climate model will almost certainly remain a mythical beast.

Simon Hopkinson says:
December 6, 2010 at 3:59 am
“A climate model run is at best, manifestly, an exploration of a hypothesis, or an extrapolation of a hypothesis. It is not a TEST of that hypothesis. The only test of a climate model’s veracity with any merit whatsoever is its juxtaposition with good old, real-life observational data.”
The situation is even worse than described. Even if one had “good old, real-life observational data,” the hypotheses that would explain and predict that data are missing. The climate models contain no hypotheses. The proof of that fact is easy. Everyone knows that a run of a climate model cannot falsify some component of that model. All that a run can show is the results that are to be expected given various assumptions. None of the assumptions are even partially confirmed in their own right apart from the model. In other words, none of the assumptions have any empirical meaning whatsoever.

We should not really be surprised by these results because, contrary to what was stated in the paper, the IPCC did test the models. The problem is they also found them wanting. Buried deep in the several hundred pages of the technical section is a series of tests to which they subjected the most important models. It is a long time since I read the report but, from memory, about 11 models were tested against 6 scenarios, like the 20th century cyclical pattern, the annual ice melt and the Roman warm period. None of the models performed well on all tests and all of them had to employ fudge factors in order to make the data fit any of the hindcasts.
To the scientists’ credit none of this was hidden. The problem came in the technical summary and worse still in the summary for policy makers where all these errors miraculously turned into scientific certainty. This was when I stopped being sceptical about the science and became sceptical about the motives of those who were “managing” the science.

I believe “hindcast” is used here in the sense of “backfit”. Indeed, the climatologists’ models do not backfit the available data time series well enough to mean anything as far as I can see. So obviously the models require no further examination. Long since time to throw them in the trash. The absolute last straw, to me, is the “correcting” of the data time series to produce the desired models. This is fraud. A hoax.
This is very sad. A whole “scientific” specialty appears to have gone mad. Crying “Wolf!” so loudly and so long means no relatively sane person will believe the cry of “Wolf!” even if the wolf is real.

We could easily have predicted this outcome simply by looking at the old Wilson Spaghetti graph. These models do a fair job of predicting each other — when zoomed out to the 1000 year scale, anyway — in the time period to which they are all “tuned”. But none of them agree on paleo climate with any reliability.
One can only assume given the paleo-divergence of these models that a future divergence is also quite likely.

Having read through many of their papers, it is my view that the Itia group at NTUA, headed by Prof. Koutsoyiannis, are putting forward the most complete, thoughtful and scientific critique of catastrophic AGW that I am aware of today.
Many congratulations to Anagnostopoulos et al for taking on the many-headed hydra of climate peer review and for publishing another good paper chiselling away at the current stagnant view of climate science.

I would think precipitation would be an even bigger headache to model. It’s dependent on clouds, prevailing winds at all relevant altitudes, temperature at all relevant altitudes AND geography just to name a few. Anyone selling you on funding them for research on predicting precipitation patterns is selling snake oil.
I mean, weathermen today have the advantage of imagery telling them where cells of precipitation are, and what direction they’re headed. Imagine taking all of that away and asking someone to tell you when it’s going to rain. Can’t do it.

Dean McAskil says:
December 5, 2010 at 11:32 pm
“I am sure I have read studies and mathematical proofs before (I will have to hunt them down), relating to climate, financial markets and purely mathematical models, that proved conclusively that stochastic systems with many degrees of freedom simply cannot be modelled to predict extreme events. In particular for financial markets extreme events such as bubbles or busts cannot be predicted at all. ”
I would modify the statement to, “cannot be predicted from the models at all.”
The existence of a bubble may not be obvious to the models, but I think we can all agree that humans who are not directly involved in the bubble, may easily discern their existence. To wit, the Dot Com Bubble, the Housing Bubble, South Sea Bubble, Mississippi Bubble, Tulip Mania, were all clearly visible to external observers.
To this extent, I’d like to rename the entire AGW mess to Climate Bubble. It sure seems to fit, with massive amounts of funds trading hands on little or no reality.

It’s widely acknowledged that regional climate prediction by GCMs is not skillful. The lower 48 states of the US comprises just 2% of the earth’s surface. That sounds pretty “regional” to me. The selection of a specific 2% of the earth’s surface to integrate seems to be the very definition of cherry picking.
I fail to see how this study tells us anything the GCM boffins haven’t already admitted. Coupled ocean-atmosphere GCMs are still under development and it is hoped that these will eventually demonstrate more skill in regional prediction.

Paul Callahan says:
December 6, 2010 at 5:53 am
==============================
My sentiments exactly Paul.
I’ve always been curious as to what kind of person would believe that we know enough about weather to model climate in the first place….

We think that it is not merely a matter of high dimensionality, and that it can be misleading to assume that the uncertainty can be reduced if we analyse its “sources” as nonlinearities, feedbacks, thresholds, etc., and attempt to establish causality relationships.
Money Quote. Can’t say it plainer.

OK. There are some systems which are complex and which we do understand.
If you add a little bit of extra sunshine each day, and if the sun gets a bit higher in the sky each day, then over time you might expect the weather to get warmer. And this happens every year, as we move out of winter into summer. Is the system complex? Well, yes it is. Is it predictable? Well, yes and no. January in Perth Western Australia is hotter than July – every year. Can you say that the maximum on December 8th each year will be 28.7 degrees? No, of course you can’t.
Another interesting thing is that Perth has hot dry summers, while Sydney has warm humid summers, even though they have roughly the same latitude. So in the face of similar forcings, the response is different – but it is warming in both cases.
To summarise, you can reasonably expect to make sensible predictions of trends, even in a complex system. That an attempt to model the whole world doesn’t happen to correctly predict the few percent of the globe which happens to be continental USA is not really a problem.

stephen richards says:
December 6, 2010 at 5:07 am
“In weather terms you will hear them say that there is ‘some uncertainty in the forecast’. The significant breakdown can be relatively quick (within 12 hours) or relatively slow (within 3 days). I like the idea of an analogue computer but the same problems apply. The most important player in our ‘climate’ system, as opposed to weather, is the ENSO and that cannot be predicted either in time or magnitude. Therefore, a viable climate model will almost certainly remain a mythical beast.”
Have a look at the alternative neural net climate model of Tsonis et al., particularly the publication that uses not only ENSO but also the PDO etc. and predicts cooling for the next 30 years.
The problem posed by Prof. Koutsoyiannis, that climate may be deterministic in part and stochastic in another part, is real; but if one could have an analogue computer where all the coupled differential equations were simulated, the deterministic part could be simulated, the way Tsonis does with neural nets and the level of currents. The remaining stochastic part, real Gaussian randomness, could be simulated with the errors of the parameters entering the simulation. Studying the output of such models would indicate if there are attractors, which would help in gauging the probabilities of climate paths.
The GCMs cannot yield probabilities because they do not have propagation of errors; thus there is no gauge of the probability of finding the different paths. Changing parameters by hand does not simulate chaos, as climate modelers sometimes claim. The spaghetti graphs and the ensembles of models demonstrate only the chaos in the thought processes of the modelers’ ensemble.
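The error-propagation point can be made concrete with a toy Monte Carlo (entirely invented, and vastly simpler than any real GCM): if an uncertain parameter is sampled from an assumed distribution rather than set by hand, the model output becomes a distribution, and probabilities can actually be read off it.

```python
import random

# Toy "model" (NOT any real GCM): equilibrium warming = sensitivity * forcing.
# Instead of a handful of hand-picked parameter values, sample the uncertain
# sensitivity from an assumed distribution and propagate it through the model.
random.seed(42)

FORCING = 3.7  # W/m^2, the conventional figure for doubled CO2

def toy_model(sensitivity):
    """Warming (degC) for a given climate sensitivity in degC per (W/m^2)."""
    return sensitivity * FORCING

# Assumed, purely illustrative uncertainty: sensitivity ~ Normal(0.8, 0.2)
samples = [toy_model(random.gauss(0.8, 0.2)) for _ in range(100_000)]

mean_warming = sum(samples) / len(samples)
p_above_4 = sum(1 for s in samples if s > 4.0) / len(samples)

print(f"mean warming: {mean_warming:.2f} degC")
print(f"P(warming > 4 degC): {p_above_4:.3f}")
```

Hand-picking a few parameter values, by contrast, gives a handful of trajectories with no weights attached – which is exactly the spaghetti-graph situation described above.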

GCMs aren’t crap, but it’s painfully obvious they lack the skill reasonably needed to base major policy decisions upon. They are a work in progress. The two major areas in need of more work are adequately modeling the water cycle and figuring out exactly what influence solar magnetic field changes have on high-altitude cloud formation.
The earth’s albedo is represented in these models as a constant with the value for the constant selected to obtain the most consistent hindcast. The selected value varies between models by up to 7% in the range of 30-40%. A 1% difference in albedo is more than the forcing of all anthropogenic greenhouse gases combined which provides some perspective on how important it is to get albedo modeling done right.
It is exceedingly difficult to measure earth’s average albedo. Various recent attempts by different means are not in satisfactory agreement with each other but they are all in agreement on one thing – the earth’s albedo is NOT constant and in just the course of a decade has been seen to vary by one percent.
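The arithmetic behind the albedo claim is easy to check. Assuming a solar constant of about 1361 W/m² and a total anthropogenic greenhouse-gas forcing of roughly 3 W/m² (standard textbook figures, not numbers from the paper or the comment), a one-percentage-point albedo change works out as follows:

```python
# Back-of-envelope check (my numbers, not the paper's): radiative effect of a
# one-percentage-point change in planetary albedo vs. total anthropogenic
# greenhouse-gas forcing.
SOLAR_CONSTANT = 1361.0                  # W/m^2, total solar irradiance at 1 AU
MEAN_INSOLATION = SOLAR_CONSTANT / 4.0   # averaged over the sphere, ~340 W/m^2

delta_albedo = 0.01                      # one percentage point, e.g. 0.30 -> 0.31
albedo_forcing = delta_albedo * MEAN_INSOLATION

GHG_FORCING = 3.0                        # W/m^2, rough total anthropogenic GHG forcing

print(f"1-point albedo change: {albedo_forcing:.2f} W/m^2")
print(f"ratio to GHG forcing:  {albedo_forcing / GHG_FORCING:.2f}")
```

The result is about 3.4 W/m², which does indeed exceed the combined greenhouse-gas forcing, supporting the point about how sensitive the energy balance is to the albedo assumption.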
Another confounding factor is exactly where the “average global temperature” is being measured and where it is predicted. Model outputs are being compared to thermometers approximately 4 feet above the surface. Just the other morning at dawn, when the nightly low temperature was at its lowest, my thermometer at a height of 4 feet indicated 42 degrees F, yet there was frost on my roof where that roof had a clear view of the sky, and no frost where the view directly overhead was obstructed. That’s called hoarfrost and it is the result of radiative cooling – the clear night sky was literally sucking heat from the roof surface where that surface could “see” the sky directly above, and where the view was blocked the roof temperature was the same as the air temperature. The difference was over 10F, simply due to the radiative path being obstructed or not. So if you were a bird in a tree it was a not terribly cold 42F, but if you were a tomato seedling an inch above the ground you would have frozen to death. Altitude makes a big difference.
‘
This raises the point about how evaporation and convection of water vapor swiftly, efficiently, and undetectably (to a thermometer) removes heat from the surface and transports it a thousand meters, more or less, above the surface, where upon condensation it is released as sensible heat that a thermometer can measure. Who really cares if it’s getting warmer a thousand feet above the surface? We don’t live in the cloud layer. We only care about the temperature very near the surface, where we and other living things actually spend our lives.

english2016 says:
December 5, 2010 at 10:44 pm
At what point does Al Gore get charged with perpetrating a fraud, and the nobel prize withdrawn?

Remember, Gore won the Nobel Peace Prize. Unlike Nobel prizes in science or medicine, a Nobel Peace Prize does not have any accomplishments as a prerequisite. How many Nobel Peace Prizes have been given for the Middle East? Do we have peace in the Middle East?
Maybe we can start by requesting Carter, Gore and Obama voluntarily give their Nobel Peace Prizes back?

John Brookes says:
December 6, 2010 at 6:39 am
That an attempt to model the whole world doesn’t happen to correctly predict the few percent of the globe which happens to be continental USA is not really a problem.
The problem is not with the USA area. It is with the whole world. The USA area has enough measurements to give statistically significant estimates of the deviation. Their first publication which has the same results had 8 locations all over the world.
In their conclusion at the time:
“At the annual and the climatic (30-year) scales, GCM interpolated series are irrelevant to reality. GCMs do not reproduce natural over-year fluctuations and, generally, underestimate the variance and the Hurst coefficient of the observed series. Even worse, when the GCM time series imply a Hurst coefficient greater than 0.5, this results from a monotonic trend, whereas in historical data the high values of the Hurst coefficient are a result of large-scale over-year fluctuations (i.e. successions of upward and downward ‘trends’). The huge negative values of coefficients of efficiency show that model predictions are much poorer than an elementary prediction based on the time average.”
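For anyone unfamiliar with the “coefficient of efficiency” in that quoted conclusion: it is the Nash–Sutcliffe efficiency used in hydrology, where 1 is a perfect fit, 0 matches the skill of simply predicting the observed time average, and negative values are worse than the mean. A minimal sketch with invented numbers (not data from the paper) shows how a constant offset alone produces a hugely negative score:

```python
# Nash-Sutcliffe coefficient of efficiency: CE = 1 - SSE_model / SSE_mean.
# CE = 1 is a perfect fit; CE < 0 means the model predicts worse than the
# constant time-average of the observations. Series below are invented.

def coefficient_of_efficiency(observed, modeled):
    mean_obs = sum(observed) / len(observed)
    sse_model = sum((o - m) ** 2 for o, m in zip(observed, modeled))
    sse_mean = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse_model / sse_mean

obs   = [14.2, 14.5, 14.1, 14.8, 14.4]   # "observed" temperatures, degC
model = [16.9, 17.1, 16.8, 17.3, 17.0]   # model run with a ~2.6 degC warm offset

ce = coefficient_of_efficiency(obs, model)
print(f"CE = {ce:.1f}")   # strongly negative: the offset alone ruins the score
```

This is why a model with the right shape but a 2-3 degree offset, like those in Fig. 12, can still score far below an “elementary prediction based on the time average”.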

Trenberth (2010) soberly assesses the transient deficiency related to model improvement: “Adding complexity to a modelled system when the real system is complex is no doubt essential for model development. It may, however, run the risk of turning a useful model into a research tool that is not yet skilful at making predictions.”
Another money quote. To paraphrase: if the model is as complex as the real system, we won’t be able to make our predictions.
Doh!

The thing I could never figure is how one could model something that never exactly repeats. The earth’s climate has never been and will never be exactly the same from the earth’s inception to its eventual destruction. We do see some cyclical patterns in the geological record, but there’s no guarantee that a particular pattern will ever repeat once a new pattern emerges. Perhaps we might be able to get a handle on the climate on multi-millennial scales someday, but even then, what’s to say that a new pattern won’t eventually emerge and we’d have to start over nearly from scratch?

I always thought the definition of a chaotic system was one that could not be modelled.
It seems to me that unchecked computer models have become the de facto way of overcoming common sense. It was pretty obvious a long time ago that the financial models had a problem, and rather than requiring simple checks – house-price-to-earnings ratios, say – the models were accepted at face value. Of itself, a computer model is evidence of nothing but the idealised system it models.
It does make you wonder about the nuclear deterrents. New explosions were banned and replaced by computer simulations. Are those models valid? Good computer models are improving our world – but only when somebody is betting on it with their own money.

It’s like climate scientists forget the three basic rules of computing. These rules are great because you don’t need to know anything about the topic to recognize them when they happen.
1. If the input data is wrong, the output will be wrong. Otherwise known as garbage in, garbage out.
2. If the algorithm (the model) is wrong, the output will be wrong (even with the right input).
3. If you can’t predict the past, you can’t predict the future.
These are universal. So when someone asks if you’re an expert in the field, they have already lost the argument if one of these points applies.

Well, that’s 2 recent papers that make any GCM programmer look entirely silly. For some reason, while reading the study, I had a particular Eagles song blaring in my mind.
She (modelers) did exactly what her daddy (alarmist scientists) had planned.
She was perfect little sister until somebody missed
her and they found her in the bushes with
the boys in the band ..

@dave springer
‘GCMs aren’t crap but it’s painfully obvious they lack the skill reasonably needed upon which to base major policy decisions. They are a work in progress’
H’mmm
Given that there has been thirty years development and zillions of dollars spent, wouldn’t it just be sensible to say ‘enough is enough’, file the project in the bin marked ‘too difficult’ and spend the money on something that does have some practical value to real people living today…not putative people who may or may not be living tomorrow when the climate may or may not be a tad warmer.
There is no point in throwing good money after bad.

Fascinating. And note that the predictions of past temperatures are always higher, as if the models themselves are biased in that direction. And this after the utilization of ‘homogenized’ historical temperatures, which in and of itself is biased.

“However, we think that the most important question is not whether GCMs can produce credible estimates of future climate, but whether climate is at all predictable in deterministic terms.”
This is the crux of the argument for using models as a basis for major policy decisions. A deterministic model is one where all of the relationships and feedback structures between the influencing factors are known, and the target problem is bounded.
If in fact there are only approximations to the rules in the model, or the input data has measurement errors, or state disturbing random one-off events can occur, the model predictions are not likely to match the subsequently observed data. Basing major environmental economic policy on such models would, in a sensible world, be considered too high a level of risk.
If we are not able to model all factors and input all correct state data due to the complexity of the targeted physical processes, we will only have a training device, which is of some value in determining sensitivities and general understanding. However, if it can be shown that this useful training model will not match past known target system performance, we should not be relying to any extent on the model whatsoever for serious policy making.

Lew Skannen says:
It is the giant thermostat in the sky that they are currently trying to appease by sacrificing dollars in Cancun…
———————————————————————————————————–
I really like that image. Yes, has the AGW league progressed much further than native bushmen in the climate wilderness?

Well this may be a dumb question; in fact it is almost certain to be; but forgive me, because I haven’t picked myself up off the floor yet from laughing my A*** off. So the Max Planck chaps, use a 1.9 x 1.9 global degree grid in their models; so from henceforth, I am not going to worry any more about Optical Mouse digital cameras only having 32 x 32 pixel sensors; well some are only 15 x 15; well you have to give a little back if you want 10,000 picture frames per second.
So my dumb question: for those 96 x 192 grids (and the other grids), HOW MANY of those grid points actually have a MEASURING THERMOMETER located there to gather the input data?
Well, it seems like a fourth-grade science question: if you want to draw an accurate map of something, you should actually make measurements of just where places really are. Biking around Central Park will hardly give you the information you need to draw an accurate map of Manhattan Island.
Now just compare models: Max and the boys have 96 x 192 grid points, and they don’t actually have a thermometer at any of them (or do they); but then there is Mother Gaia’s model; and she actually has a thermometer in every single atom or molecule; and somehow, Mother Gaia always gets the weather and climate correct; it always is exactly what her model predicts it will be; specially the Temperature.
See I told you it was a dumb question; maybe MG actually read the book on the Nyquist Sampling Theorem.
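For what it’s worth, the scale of that 1.9 x 1.9 degree grid is easy to work out (the numbers below are my own arithmetic, not from any model documentation):

```python
import math

# Scale of a 1.9-degree grid cell, and the total point count,
# for the roughly 96 x 192 (lat x lon) grid mentioned above.
EARTH_RADIUS_KM = 6371.0
KM_PER_DEG = 2 * math.pi * EARTH_RADIUS_KM / 360.0    # ~111 km per degree of latitude

cell_ns = 1.9 * KM_PER_DEG                            # north-south extent, any latitude
cell_ew_60n = cell_ns * math.cos(math.radians(60.0))  # east-west extent shrinks poleward

total_points = 96 * 192

print(f"grid points: {total_points}")
print(f"cell: ~{cell_ns:.0f} km N-S, ~{cell_ew_60n:.0f} km E-W at 60N")
```

So each cell is on the order of 200 km across – bigger than many countries – and the whole planet is covered by fewer than 20,000 points, most of them over ocean.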
Izzere actually somewhere a brochure or document or whatever on any of these models, that says exactly what physical variables and parameters they use, and what equations they use to calculate all this stuff?
I presume that they at least accurately track the sun’s position on the earth’s surface, where it intersects the line joining the sun and earth centers, continuously as the earth rotates, for at least the whole thirty years of one IPCC-approved climate period.
That would seem to be necessary to figure out just where solar flux falls on the surface, and whether it lands in the water or on the land, because different things would happen depending on that question.
Well, if somebody has a paper that lists ALL of the variables, and equations, and parameters, this place (WUWT) would be a good place to show it; well, are there copyright issues, or is this stuff some sort of state secret? Does Peter Humbug have his own secret formula?

Untenable assumptions cannot be tolerated. The defeatist stochastic approach is dangerous to both society & civilization and will only be advocated by the lazy and those wishing to maintain blinders over the eyes of the masses. There may arise a time in the future when adding stochastic components will be the last remaining sensible thing to do, but sophistication of climate data exploration has [to date] been absolutely inadequate and thus climate science remains mired in the pit of Simpson’s Paradox. The era of meaningful statistical inference & modeling will necessarily follow an era of careful data exploration.

I’m astonished that only one model seems to be presented and then held on to.
From a mathematical and statistical point of view it makes more sense to examine a number of possible models in parallel.
But then, the purpose of a model is to hold up a mirror to the actual events observed and to discuss how accurately it fits the events it describes.
It is in the examination of the differences that we seek to refine our understanding.
So the fact that a model doesn’t fit is not, in itself a failing.
Perhaps hanging on to one and not being willing to explore divergences is.

Darkinbad the Brightdayler says:
December 6, 2010 at 11:43 am
I’m astonished that only one model seems to be presented and then held on to.
What one model? Have a look at the table and see there are six GCMs considered in the comparison.
Maybe you are on a wrong thread?

There were some AGW defenders claiming that the models had to be great because they matched history. (I discounted that, assuming that even an idiot will compare their model output with real data before making wildarse projections, and then, if it doesn’t match, either figure out what’s wrong with the model or begin introducing “parameters” that will make it match history.)
These boys were either incompetent (or blinded by agenda) or REALLY lazy!

John Brookes says:
December 6, 2010 at 6:39 am
“OK. There are some systems which are complex and which we do understand.
If you add a little bit of extra sunshine each day, and if the sun gets a bit higher in the sky each day, then over time you might expect the weather to get warmer.”
Defenders of climate modelling regularly make this point as though it actually supports their case. In fact it just highlights the weaknesses. “The system” can be seen as having
1. Deterministic features which have been well understood for a few centuries and have been subjected to empirical tests countless times (earth’s orbit around the sun, its rotation, precession ..)
2. Everything else, not well understood at all (but apparently a highly complex chaotic system, of the sort that does not allow prediction) and no serious empirical support for any theory.
So when you say that the long term behaviour is predictable, eg hotter in summer than winter, you are referring to 1. which says nothing at all about 2., which is where any man-made effects are to be found.

anna v
Very good point you’ve been making about anomalies rather than actual temperatures: that they can be used to obscure failures of the models (and in climate science, if anything can be used to obscure something, it gets used). If you redraw the plots at the top of the post in terms of anomalies, it would look as though there was good agreement. (This is in addition to the use of anomalies in making very small changes appear large.)

anna v: “But I am sure they did have these studies. That is why they came up with the brilliant idea of anomalies, as I discuss above.”
Many thanks, anna v, I’ve been bothered by the use of “anomalies” right from the start: who really cares about 10ths or 100ths of degrees? Only if they are compared to 0 degrees?

The use of anomalies causes serious problems in understanding north-south (& continental-maritime) terrestrial asymmetry in temperature-precipitation relations. It is the freezing point of water that flips the correlations upside-down over a very large portion of (for the most notable example) the northern hemisphere. Anomalies are useful for some data exploration purposes, but Simpson’s Paradox is guaranteed for researchers ignoring the freezing point of water when spatiotemporally integrating patterns across disparate geographic seasons/elevations/regions.

Relevant to this discussion is the recent Isaac Newton Institute Seminar that looked at Stochastic Methods in Climate Modelling (http://www.newton.ac.uk/programmes/CLP/clpw01p.html). The opening address is interesting (particularly the second half), with the abstract, .PPT and .AVI available at http://www.newton.ac.uk/programmes/CLP/seminars/082310001.html.
Palmer makes three major points for why non-linear stochastic-dynamic methods should be introduced to climate modeling: Firstly (as noted on this thread) “climate model biases are still substantial, and may well be systemically related to the use of deterministic bulk-formula closure. … Secondly, deterministically formulated climate models are incapable of predicting the uncertainty in their predictions; and yet this is a crucially important prognostic variable for societal applications. ….. Finally, the need to maintain worldwide a pool of quasi-independent deterministic models … does not make efficient use of the limited human and computer resources available worldwide for climate model development.” (from abstract)

I appreciate the comments about “anomalies”. As a Greek, I can tell you, in addition, that even the term “anomaly” is illegitimate and suggestive of poor knowledge of Nature (and of language). “Anomaly” is Greek for “abnormality”. It is not abnormal to deviate from the mean. That is why we avoid this term in our paper (except a couple of times in quotation marks).

Juraj V. says:
December 6, 2010 at 3:30 am
Can anyone run the models backwards? I want to see how it matches
a) CET record: http://climate4you.com/CentralEnglandTemperatureSince1659.htm
b) Greenland ice core record: http://www.ncdc.noaa.gov/paleo/pubs/alley2000/alley2000.gif
########
Huh, you don’t run it backwards. You pick a point in time. You set the inputs for historical forcing. You let it run forward. You check.
1. The knowledge of historical forcings is uncertain.
2. The initial “state” of the ocean is going to be wrong. ( ie what water was going where)
So you are never going to get answers that will match a single location. You will, if you run the model enough times or run collections of models, get a spread of data that encompasses the means of large areas. Sorry, that’s the best the science can do.

“”””” Demetris Koutsoyiannis says:
December 6, 2010 at 2:38 pm
I appreciate the comments about “anomalies”. As a Greek, I can tell you, in addition, that even the term “anomaly” is illegitimate and suggestive of poor knowledge of Nature (and of language). “Anomaly” is Greek for “abnormality”. It is not abnormal to deviate from the mean. That is why we avoid this term in our paper (except a couple of times in quotation marks). “””””
Well Greek is; as they say; all Greek to me, but even I know that “anomaly” means something that isn’t supposed to be.
So in that sense, when Mother Gaia plots an “anomaly” function, or plots a global “anomaly” map like SSTs, she gets these boring pictures with absolutely nothing on them in the way of slopes, trends, gradients or the like; everything is flat, because Mother Gaia never gets anything that isn’t supposed to be.
But I do understand why people report them for some data; because it is a bit silly to be plotting 1/100ths of a degree C on a daily Temperature graph that has a full scale range from about -90 C to about + 60 C. Trying to see parts in 15,000 on a graph is not too easy on the eyes. It’s also part of the reason, I think the whole question of global temperature changes is just a storm in a teacup.
It seems that “anomalies” are somewhat akin to first derivatives; and that is always a noise increasing strategy.

If one cannot predict each of the variables, such as ocean oscillations, volcanoes, solar flares, and cloud amounts, just to name a few, that affect the climate, then prediction of the climate itself is impossible.
On another note, do we know whether or not forecast verifications have been attempted using a starting point, say, 50 years ago?

anna v, that’s a very good point about anomalies. I hadn’t thought about them removing the distinction of intensity, but you’re dead-on right. I always took them as the modelers’ way of subtracting away the mistakes. Of course, that rationale is a crock, too, but at least it has a superficial plausibility.
I remember looking at the CMIP GCM control runs that show baseline global air temperature spread across 5 degrees, and wondering how anyone could credit models that have such disparate outputs. And then seeing how the use of anomalies just subtracts all that variation away, as though it never existed.
Antonis, thanks for the new link. The “2010” link just took me back to the same GCM study, and I didn’t realize there was a newer version of the chaotic model paper. Congratulations to you as well on the great work you all are doing.
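A quick synthetic sketch of the point about control runs and anomalies (toy numbers of my own, not real model output): two series with identical shape but a 3 K baseline disagreement become indistinguishable once each is plotted as departures from its own mean.

```python
# Two synthetic "model runs" with identical shape but a 3 K baseline
# offset, loosely analogous to the spread seen in GCM control runs.
run_a = [14.0, 14.2, 13.9, 14.5, 14.8]
run_b = [t + 3.0 for t in run_a]  # same shape, shifted up by 3 K

def anomalies(series):
    """Departures of each value from the series' own mean."""
    mean = sum(series) / len(series)
    return [round(t - mean, 6) for t in series]

# In absolute terms the runs disagree by 3 K at every point...
offsets = [b - a for a, b in zip(run_a, run_b)]

# ...but expressed as anomalies from their own means, the offset
# cancels exactly and the two runs become indistinguishable.
print(anomalies(run_a) == anomalies(run_b))  # True
```

Subtracting each run’s own mean removes its baseline entirely, so a multi-degree disagreement in absolute temperature simply vanishes from an anomaly plot.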

Several commenters have criticized the subject paper because it uses only one region, not the whole earth.
However, it is my understanding that the models don’t treat the whole globe at one time, but rather break it down into a grid. If they can’t get each grid (region) right, how can the results for the whole globe be right? Unless people are suggesting that the grid errors cancel out, so the total is right.
Therefore, it seems to me that using only a region to check the model makes perfect sense. I am relatively new at trying to understand all the arguments about the validity of the global temperature predicting models, so if I have it wrong, someone please correct me.

Anna,
I’d like to add my ‘voice’ to the chorus of appreciation. Like others here I was also always troubled by the use of anomalies, and now I know why. Your explanation of how they have been used was one of those ‘lightbulb’ moments for me. Where are all the defenders of anomalies? Usually they home in on these arguments like a swarm of killer bees.
To Drs. Anagnostopoulos, Koutsoyiannis, Christofides, Efstratiadis, and Mamassis (thank God for cut-and-paste): thank you all for this invaluable addition to the scientific arsenal of peer-reviewed criticism of CAGW. Your paper’s findings are not exactly unexpected, but offer the kind of solid confirmation we need for what skeptics have been arguing for a couple of decades.

James Macdonald says:
December 6, 2010 at 5:03 pm
…”On another note, do we know whether or not forecast verifications have been attempted using a starting point, say 50 years ago?”
I second the question, only I suggest that 60 years (a full ENSO cycle) is a better bracket; but why not 120, as their proxies are so accurate?

It is good that people are starting to notice what nonsense anomalies are.
A derivative gives the rate of change. It would be delta(T)/delta(time) and has that meaning: the rate of change. Taking the numerator only and thinking that it is legal to use it as an energy-correlated variable is nonsense.
I am copying comments I made on another board because I have put some numbers in.
We have been bamboozled with anomalies, and worse, with global average anomalies.
The Power radiated is j=sigma*T^4, not anomaly^4, and it is power/energy that makes or breaks the temperatures.
A 15K anomaly in the antarctic is not in energy content the same as a 15K anomaly in the tropics 🙂 . If one were negative and the other positive the average would be 0, but the tropics would be melting from the amount of energy falling and leaving.
Let’s put in some numbers.
Let the antarctic be at 265 K and give it an anomaly of 15 for the month of November: 280 K.
That gives 348 - 280 = 68 watts/m^2 of energy “anomaly”.
Let the tropics be at 304 K; the same anomaly of 15 gives 319 K.
That gives 587 - 484 = 103 watts/m^2.
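The figures above are easy to reproduce from the Stefan–Boltzmann law; here is a minimal sketch, treating both regions as ideal black bodies (the same simplification used in the numbers above):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(temp_k):
    """Black-body radiated power per unit area: j = sigma * T^4."""
    return SIGMA * temp_k ** 4

# The same +15 K "anomaly" applied at two different baselines:
antarctic = flux(280) - flux(265)  # about 69 W/m^2 (the 348 - 280 above)
tropics = flux(319) - flux(304)    # about 103 W/m^2 (the 587 - 484 above)

print(round(antarctic, 1), round(tropics, 1))
```

The identical 15 K anomaly corresponds to a radiated-power change roughly half again as large in the tropics as in the antarctic, which is exactly why averaging anomalies across regions discards energy information.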
Such large variations in radiated energy happen not only geographically but at the same spot: day and night, shadow and clear.
In addition, the temperatures used are the 2-meter-height atmospheric temperatures, whereas most radiation comes from the solid ground, which can differ from the air by 15 C due to convection effects and shadow effects, from the poles to the deserts.
If one goes into gray body constants,
the fractal nature of the surfaces, the way the programs integrate over the globe, and God only knows what else, the relevance of anomalies to whether the planet is heating or cooling seems to me not at all established.
I think those who keep advocating the use of the sea surface temperatures as the world’s thermometer are right.

davidc says: (December 6, 2010 at 1:03 pm) to anna v: Very good point you’ve been making about anomalies,
Pleased to see you pick this up, David. Anna’s point struck me as a secret waiting to be told, and I hope others will pursue it. Could it be that complexity has obscured simplicity? Perhaps even by intent?

It occurs to me that if computers, even super ones, can’t pick a winner in a horse race or predict the stock market, what chance is there that they can predict the future climate, which is infinitely more complex, methinks. I am reminded of the fools who send money for the sure system to pick the winning horse or stock!

If I were ten years younger I would have tried to weasel my way into working with their group.

We are flattered. Does ten years make such a difference?

The problem set by prof Koutsogiannis , that climate may be deterministic in part and stochastic in another part is real

Professor Koutsoyiannis has not been making this point. On the contrary, he has been calling the distinction between a deterministic and a random part a “false dichotomy” and a “naïve and inconsistent view of randomness”. He has been insisting that randomness and determinism “coexist in the same process, but are not separable or additive components” (A random walk on water, p. 586).
Edward Bancroft:

If we are not able to model all factors and input all correct state data due to the complexity of the targeted physical processes, we will only have a training device

Even if we are able to model all factors and input all correct state data, we will still be unable to predict the future. Let me repeat that: Even if we are able to model all factors and input all correct state data, we will still be unable to predict the future. See “A random walk on water” linked to above for an enlightening in-depth investigation of the issue, or the epilogue of my HK Climate site for an overview.
On the “anomaly” issue:
We don’t say that you should never use departures from the mean. We say you might use them, but with care and only when you really know what you are doing. See subsection “Comparison of actual values rather than departures from the mean” in Anagnostopoulos et al. (p. 1099) on that (our argumentation is similar to Anna V’s above).
We also say: don’t call them “anomalies”. It is extremely confusing to use “anomaly” for something that is perfectly normal.
Old engineer:

If they can’t get each grid (region) right, how can the results for whole globe be right?

We explore this question in subsection “Scale of comparison” in Anagnostopoulos et al. (p. 1097).

It would be interesting to start the models at some reasonably instrumented year (say 1950 or 1960), do a run, and then see what using actual starting conditions from 1951 or 1961 produces: kind of a shot in the dark to give some estimate of how chaotic the models are. That is, do things actually average out if you don’t do an average? Do different initial conditions (on the actual path) give different results for 2009?
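The kind of initial-condition sensitivity being asked about can be illustrated with the logistic map, a standard toy chaotic system (this illustrates chaos in general, not any particular GCM): two runs whose starting points differ by one part in a million diverge completely within a few dozen steps.

```python
def logistic_pair(x0, y0, steps, r=3.9):
    """Iterate two copies of the chaotic logistic map x -> r*x*(1-x)
    from slightly different starting points, tracking their gap."""
    xs, ys = [x0], [y0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
        ys.append(r * ys[-1] * (1 - ys[-1]))
    return [abs(x - y) for x, y in zip(xs, ys)]

# Start two runs differing by one part in a million...
gaps = logistic_pair(0.600000, 0.600001, 60)

# ...and within a few dozen iterations the gap is of order one.
print(gaps[0], max(gaps))
```

Two model runs launched from nearly identical 1950 or 1960 states would diverge the same way, which is why modelers fall back on ensembles and spreads rather than single trajectories.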

Antonis Christofides says:
December 7, 2010 at 2:59 am
Anna V:
” If I were ten years younger I would have tried to weasel my way into working with their group. ”
We are flattered. Does ten years make such a difference?
Ta panta rei, when those ten years are in retirement. By the way, I too am Greek, a retired particle physicist from Demokritos, and I have been following your group’s publications with great interest.
Anna V: ” The problem set by prof Koutsogiannis , that climate may be deterministic in part and stochastic in another part is real “.
Professor Koutsoyiannis has not been making this point. On the contrary, he has been calling the distinction between a deterministic and a random part a “false dichotomy” and a “naïve and inconsistent view of randomness”. He has been insisting that randomness and determinism “coexist in the same process, but are not separable or additive components” (A random walk on water, p. 586).
I stand corrected.
My view is that, given the power to program the nonlinear differential equations directly in an analogue computer, errors due to badly known input parameters would be stochastic. Of course, in the output values the stochastic and deterministic-chaos deviations could not be untangled, so maybe my view is not so different after all.

I agree on the issue of tangling (i.e. that mainstream climate science is silly in believing in rashly considered decompositions).
Climate scientists should be aiming to produce models that can reproduce earth orientation parameters. If they cannot accomplish this, it should help them realize what they are ignoring.
I don’t even get the sense that most climate scientists recognize how SOI “anomalies” alias onto nonstationary spatiotemporal modes (or even ones as simple as the temporal semi-annual mode). There are asymmetries that fall into at least 3 categories: north-south, continental-maritime, & polar-equatorial. Why do so few even bother to condition their spectral coherence studies on these simple spatial variables? And why do so many treat SOI as univariate when it is bivariate? Even with techniques like EEOF (which goes a step beyond EOF), nonrandom coupling switching is missed (probably dismissed as “chaos” by those falling for Simpson’s Paradox). Recently I was delighted to find this paper:
Schwing, F.B.; Jiang, J.; & Mendelssohn, R. (2003). Coherency of multi-scale abrupt changes between the NAO, NPI, and PDO. Geophysical Research Letters 30(7), 1406. doi:10.1029/2002GL016535. http://www.spaceweather.ac.cn/publication/jgrs/2003/Geophysical_Research_Letters/2002GL016535.pdf
These guys are not so blind. (Note their speculation on a change in the number of spatial modes.)
Here’s another paper:
White, W.B.; & Liu, Z. (2008). Non-linear alignment of El Nino to the 11-yr solar cycle. Geophysical Research Letters 35, L19607. doi:10.1029/2008GL034831. https://www.cfa.harvard.edu/~wsoon/RoddamNarasimha-SolarENSOISM-09-d/WhiteLiu08-SolarHarmonics+ENSO.pdf
I suspect many will misunderstand the authors’ simplest point if they do not condition their analyses on coupling switching.
It is only a matter of time (for more carefully conditioned data exploration) before sensible spatiotemporal coupling matrices are designed.
With the pace of recent developments in solar-terrestrial relations, Charles Perry will likely soon realize he doesn’t need 32 year lags (see his 2007 paper) if he figures out the simple aliasing (SOI & NPI semi-annually).

Anna V,
“I think those who keep advocating the use of the sea surface temperatures as the world’s thermometer are right.”
Many (such as Pielke, Sr.) are advocating using ocean heat content rather than sea surface temperatures to determine whether the earth is heating up or cooling down. Some advantages of using OHC are that (by far) most of the climate energy is stored in the ocean, that OHC is an integrated rather than an instantaneous value, that it takes thermal mass into account, that the OHC measuring network is managed better, and that it is less vulnerable to political corruption than the surface temperature sensor network.

anna v says:
December 5, 2010 at 11:00 pm
Pat Frank says:
December 5, 2010 at 10:39 pm
In a conversation I had with Demetris by email awhile ago, he agreed that hindcast tests such as he’s done really could have, and should have, been done 20 years ago. That would have spared us all a lot of trouble and expense.
It’s a fair question to ask professionals such as Gavin Schmidt or Kevin Trenberth why they didn’t think to test what they have so long asserted.
But I am sure they did have these studies. That is why they came up with the brilliant idea of anomalies, as I discuss above.
—…—…—
I fear the problem (of trying to hindcast/backcast/start from a known year’s conditions and work forward 60 years) is even worse than you fear.
The models, as I understand them from their textbook descriptions, begin from certain assumed conditions, usually expressed as a radiative forcing constant plus an imposed yearly change in radiative forcing. Model conditions in each 2 degree x 2 degree “cube” (average wind, average temperature, average humidity, average moisture content, amount of radiation emitted into space, and all the “exchange” of this energy) are not so much preset (before the model begins) but rather are determined from the model after a (very large) number of iterations of the model’s equations are run. The model is then rerun, with the results of the n-1 run forming the n run to create the results for the n+1 run. After many thousands of simulations, the modelers then select the “final average” conditions, which they then filter and select to represent the outcome for the world at large after so many years.
With 2×2 degree cubes being their smallest calculation area, is it any wonder the modelers do NOT want to let anybody directly compare one cube’s results after thirty years to the real world after thirty years?
Worse, the modelers’ 2×2 degree “cubes” are merely large “plates” with very, very thin “walls” around each edge. The atmosphere doesn’t behave like that: energy is exchanged vertically and horizontally, through every side of “cubes” that are 1 km x 1 km x 1 km. The number of “cubes” varies from equator to pole, but the models require rectangles. The real solar input varies from season to season, but the models don’t allow that. The real solar input varies from pole to equator too as the earth rotates, and the emitted radiation barriers change also. The models do not simulate day and night except by averaging conditions. Real cubes rotate as the earth does, so their atmospheres are subject to the Coriolis effect and jet streams. Real cubes are affected by currents and oceans and changing seasons (not just the ENSO and AMO and PDO), but the modelers assume only ice cap melting and glaciers retreating and storm conditions based only on temperatures and conditions of the global air.
Also:
The real “cube” conditions vary from sea to coast to inland mountain to inland icecap to woodland and jungle and forest (low CO2 and high clouds) to plains (medium CO2 and medium albedo) to deserts (very high CO2 and no clouds). There are many dozens of cubes stacked vertically through the atmosphere, too: not just the one, two, or three “slices” from a cube that is assumed to receive an “average” annual sunlight from “average” clouds distributed “averagely” across the world.
Perhaps in a distant future the models will simulate these.
But today’s models do hindcast. The results of those hindcasts, however, are not used to “qualify” or check the models’ accuracy. Instead they are used to CREATE the “corrections” (primarily soot forcings, aerosol loading forcings, and volcano forcings) that are then loaded back INTO the modeled outputs so that the period from 19xx to 1995 DOES fit the GISS/HadCru temperatures generated by the model itself.
Larger problems. (Yes, it is worse than you thought.) There is no real-world, whole-world “soot level” primary data available. There are no real-world “aerosol content” values affecting albedo. Instead, “increased soot levels” are assumed by the modelers to vary over time equally over the whole globe. The results of these “increased soot levels” are then factored into the forcings that generate temperatures to re-create the drop in global temperatures between the 1940 Medium Warming Period and the 1970s low point. After 1970, worldwide soot/aerosol levels are assumed to get cleaned up (with the US EPA rules used as an example), and thus the required reflected energy (from soot/aerosol levels) is allowed to decrease, and absorbed energy allowed to rise enough, to make the model output increase as required to fit world temperatures between 1970 and 1995. Recent soot and aerosol forcings are assumed based on India, China, and Brazil industrialization if models are run past 1995 conditions.
After 2010, soot levels are assumed to be removed completely, depending on whether a geo-engineering “positive” or “negative” result is desired.
What is actually surprising is that, despite the fact that their model conditions are carefully changed to allow “calibration” of the models by hindcasting, the actual results are still so inaccurate.

anna v, I’ve been thinking a little about your comments concerning model tests and anomalies, and if you really are sure that back in AGW year zero, main stream modelers did hindcast tests of GCM reliability, similar to those of Antonis and Demetris, et al., and then deliberately didn’t publish them, they’d be guilty of having lied by omission for 20 years.
If, knowing the unreliability of their models, they went ahead and developed anomalies, as you suggest, also here, in order to disguise the lack of model reliability, they’d have been guilty of lies of commission for 20 years.
Do you really, really think that’s the case?

Pat Frank says:
December 7, 2010 at 5:47 pm
If, knowing the unreliability of their models, they went ahead and developed anomalies, as you suggest, also here, in order to disguise the lack of model reliability, they’d have been guilty of lies of commission for 20 years.
I hope you read the post of racookpe1978 above. It describes what I found when reading up on the models very well, and goes into more depth than I did. My assurance lies in having worked for over 30 years with computer simulations of models, mainly Monte Carlo, in my field of particle physics, and in recognizing that one cannot tune the models without knowing the temperature curves, certainly before anomalies were latched onto. The precipitation curves’ failures are all there in AR4, disguised by spaghetti graphs of ensembles of models.
I do not know if I would call them lies of commission or “self” delusion of the ensemble “self”. High energy physics is a field where large numbers of competent scientists work; group meetings when I started were of 15 people and when I retired of 2000. In addition, the international nature of the projects requires large committees directing resources etc. There is a sociology of groups of scientists that I would describe as “the head scientist is right ab initio” if the group is small, or “the directing group is right ab initio” if the group is large. Like beehives, if too much challenge exists the group splits in two, one leader/leading-group taking its convinced followers to a new project.
Paradigms change, and then the whole group starts revolving around the new paradigm that they were rejecting vehemently before, if the leader/leading-group accepts the new paradigm. An example was the transition from the parton model to QCD, which I lived through in many details. The parton model had Feynman behind it, and the leaders/leading-groups were slow to be convinced that the real data did not vote for Feynman.
This was not bad, because people followed the leaders and worked hard to create complicated experiments, and progress was incrementally made, since fortunately theoreticians are not of a group mentality. The fate of the world economy was not hanging on the flow of research, as it does in climate research.
I think what happened was that the leaders of the pack of climate realized that temperatures could not be hindcast in measure but were OK in shape and had the brilliant idea of using anomalies instead of temperatures, and the pack followed.
I am sure they sincerely believed the blurbs about averaging over details etc. that come out when anomalies are attacked. There is no more sincere believer in a model than the model’s instigator, believe me. In all scientists there hides a perpetual motion machine inventor :).

anna v says: (December 7, 2010 at 9:33 pm) In all scientists there hides a perpetual motion machine inventor:).
Such inventors today, Anna, lack the social conscience of the old-timers. Those from the past agonised over the perfecting of a brake they could use to stop their perpetual motion machines if necessary once they were started — just in case.
Do today’s scientists have that vision?

anna v, I hadn’t considered persistent self-delusion. But that’s what you seem to be suggesting.
I did read racookpe1978’s post above, and thought it was very cogent. I was happy to see you mention that his/her comments matched your own conclusions.
But look, as a particle physicist, you must have used some standard modeling application, such as GEANT, to simulate and predict resonances in particle interactions. I recall reading that you folks didn’t credit an observed resonance unless it passed the 3-sigma test. That means the observation must have been replicated enough to give good error statistics. It also means that you must have paid quantitative attention to the resolution limits of your detectors and the systematic effects of such things as thermal load on detector response, and so forth.
You’d need all that information just in order to calculate the 3-sigma test. You’d also need to know the uncertainty width around your simulated resonance, in order to decide whether an observation some eV away from your prediction could actually be a confirmation, or a different resonance altogether.
I can’t believe that the delusional effect of leader-paradigm loyalty would be enough to subvert that sort of very basic paying of attention to the gritty details of your scientific practice.
But that’s what we see going on in AGW climate physics. Regardless of self-delusion about their beloved paradigm, what we see in AGW climate science practice is disregard of theoretical uncertainty in the models along with neglect of uncertainty in the data due to instrumental resolution and the error from systematic effects.
I just don’t understand how this poor practice could be so amazingly persistent and could be so widespread. For 20 years! Haven’t these people ever taken an instrumental methods lab as undergraduates?

Pat Frank says:
December 8, 2010 at 3:05 pm
I can’t believe that the delusional effect of leader-paradigm loyalty would be enough to subvert that sort of very basic paying of attention to the gritty details of your scientific practice.
But that’s what we see going on in AGW climate physics. Regardless of self-delusion about their beloved paradigm, what we see in AGW climate science practice is disregard of theoretical uncertainty in the models along with neglect of uncertainty in the data due to instrumental resolution and the error from systematic effects.
I just don’t understand how this poor practice could be so amazingly persistent and could be so widespread. For 20 years! Haven’t these people ever taken an instrumental methods lab as undergraduates?
Evidently not? Or were they convinced by arguments that the methods were not applicable where “chaos” reigns?
We have a proverb in Greek ” the fish starts smelling from the head”, to summarize some of what I meant in the post above.
Take James Hansen: After graduate school, Hansen continued his work with radiative transfer models and attempting to understand the Venusian atmosphere. This naturally led to the same computer codes being used to understand the Earth’s atmosphere.
Does that sound as if he has familiarity with instruments and errors?
And a lot of the people involved in creating and working with GCMs must be computer focused, not physics focused, and easily follow the leader.
Also, the last twenty years have seen an inflation in the number of students entering universities and a lowering of standards. At the same time this inflation created many jobs that needed publications, and climate studies were easy (group work) and offered grants and tenure-track posts, creating a positive feedback :). Group work means that often the only one who has a complete picture of the project is the group leader. Members of the group survive by doing their part of the work and trusting that the necessary checks are being done by the others, or are not needed if the group leader thinks not. They are at the level of the GEANT constructors, in your discussion, not the users of GEANT, who will check against data and will demand statistical accuracy of results. GCMs are cumbersome programs that demand large amounts of computer time and are treated like reality generators, not simulation programs, by their creators and users :). The results are called “experiment”.
It is a synergy of all these factors with the pack mentality.
Well, otherwise one must construct a conspiracy theory, along the lines of planned depopulation by Malthusian ecologists, which is very far-fetched. Not that they do not exist on the fringe, taking advantage of the situation, but they were not in a position, twenty years ago, to create such a situation, IMO. Hansen was, and he has the typical ego and mentality of a scientist at the head of a pack. Remember the story of heating up the hall at Congress when the climate was going to be discussed? All means are legal in love and war.

Thanks for your thoughts, anna. I don’t credit AGW conspiracy theories beyond the crass conniving of the climategate miscreants. So, I guess we have to chalk most of it up to ‘go along to get along.’
But it’s still hard to understand the crippled practice being so widespread and so enduring. Maybe the attendant self-righteous moralistic social pressures, brewed up with such frenzy by the environmental NGOs, have given it an especially refractory character.
But as a social phenomenon, it’s probably a safe prediction to suppose that one day social scientists will be minting Ph.D.s studying the phenomenon. And then, 50 years later, maybe neuropsychologists.

As this thread moves into its sunset, I hereby note my disappointment, perhaps despair, that the following statement has not been given consideration and debate here.
There is a ring to it which focuses my attention and sounds an alert; but I do not have the skills to pursue it myself, nor even to investigate it. Nevertheless I feel a strong conviction that it rings true, and is therefore important in the matter of the world at climate war.

We have been bamboozled with anomalies, and worse, with global average anomalies. — Anna V (December 6, 2010 at 10:03 pm)
