Climate Modeling: “not always clear”

The previous story suggested that climate models predict with certainty “permanently hotter summers”. OK then, but in the same Eurekalert stream today we have this release, where it seems they need to “prop up” the modeling, conceding that the roles of models “are not always clear”.

How important are climate models for revealing the causes of environmental change?

The human impact on the environment, especially through the release of greenhouse gases, is an area of controversy in public understanding of climate change, and is important for predicting future changes. Many studies into our collective impact use climate models to understand the causes of observed climate changes, both globally and in specific regions. Writing in WIREs Climate Change, Professors Gabriele Hegerl from the University of Edinburgh and Francis Zwiers from the University of Victoria assess the role of climate models in studies of observed changes and the robustness of their results.

“Since the mid-1990s, a wide range of studies have shown that greenhouse gas increases have influenced the climate, globally and regionally, affecting many variables,” said Hegerl. “However, even to scientists, the roles of observations, physical insight, and climate models in estimates of the human contribution to recent climate change are not always clear.”

In this review paper the authors assess research methods for understanding the causes of observed climate change, ranging from approaches that refrain from using climate models, to different approaches using models including ‘fingerprint’ analysis and large-scale detection and attribution studies.

“Detection and attribution methods attempt to separate observed climate changes into components that can be explained either by the variability of the climate system or external changes, such as human activity,” said Hegerl. “Most detection and attribution studies use climate models to interpret the observations. Models are used both to determine the expected ‘fingerprint’ of climate change and to assess the uncertainty in the estimated magnitude of observations given climate variability.”

Hegerl and Zwiers also explore how some researchers have attempted to identify manmade and externally forced climate change from observations only. However, while methods that do not use climate models avoid assumptions about the expected response, they do use other strong assumptions, such as the response to forcing being instantaneous or that climate change and variability can be separated by timescale.

Another challenge facing observation-based studies is the impact of natural events, such as volcanic eruptions that spew dust and aerosols into the stratosphere, which can have an anomalous cooling effect for a few years.

Professor Hegerl argues that research which does not rely on climate models makes strong assumptions about how the effects of human influence on the climate can be distinguished from the effects of the natural variability of the climate system. This research supports the conclusion, drawn from studies that use models, that human influence has changed recent temperatures. These strong assumptions do not have to be made when using physically based climate models, but because climate models are not perfect their use does introduce other uncertainties. These uncertainties are small for large-scale temperature change, but are larger and less well understood for changes related to impacts, such as regional temperatures, extremes, and precipitation.

“Our review discusses the role climate models play in determining the causes of recent climate change, and shows that results about the causes of recent climate change are firmly based on observations,” concluded Hegerl. “Climate change detection and attribution is first, and foremost, about understanding these observed changes. However, detection and attribution requires a model of why the climate may be changing to be able to draw conclusions from observations.”


27 thoughts on “Climate Modeling: “not always clear””

“Professor Hegerl argues that research which does not rely on climate models makes strong assumptions about how the effects of human influence on the climate can be distinguished from the effects of the natural variability of the climate system. This research supports the conclusion, drawn from studies that use models, that human influence has changed recent temperatures. These strong assumptions do not have to be made when using physically based climate models, but because climate models are not perfect their use does introduce other uncertainties.”

Say what!!

Of course the models use different “strong assumptions” to those of “research which does not rely on climate models”. At issue is which of those assumptions are least erroneous.

Arm waving does NOT make the climate models more reliable than empirical studies.

At issue is whether the results of empirical studies or the outputs of climate models most resemble reality. To date, the climate models have provided no predictions that give any confidence that their ‘projections’ should be trusted. Of course, the empirical studies may also be wrong, but mere assertion that the outputs of climate models are better means nothing.

“These strong assumptions do not have to be made when using physically based climate models, but because climate models are not perfect their use does introduce other uncertainties.”

“Physically based climate models” cannot mean physical hypotheses programmed into a model. Why? Because if you had the physical hypotheses, you would not need a model. The physical hypotheses can be used to predict the phenomena to be explained and offer an explanation in terms of well-understood physical processes for the observed phenomena. Clearly, then, this study is just as confused about the relationship between physical theory and model as any of the other articles that we have seen over the years.

““Detection and attribution methods attempt to separate observed climate changes into components that can be explained either by the variability of the climate system or external changes, such as human activity,” said Hegerl”

instead of carrying out an experiment to test the hypothesis.

It is as if they think the only way of telling if the climate is sensitive to CO2 is by trying to separate the natural and human caused components of the climate.

Normally you would use the hypothesis to be tested to make a prediction about the real world. Next, you should go and look at the real world and see if your prediction is accurate. Then you publish all of your method and results (not just the bits that support your hypothesis) so that others can check to see if you have made any errors.

If the prediction is correct, you can conditionally accept the hypothesis. If not, then the hypothesis is false.

The anthropogenic global warming hypothesis (that the climate is sensitive to changes in greenhouse gases) makes a number of testable predictions.

Here are three.
1. There should be a “hot spot” in the troposphere over the tropics at about 10,000 metres. The GCMs say so.
2. Changes in atmospheric temperature and atmospheric CO2 should be highly correlated on all timescales.
3. Changes in atmospheric CO2 should happen before changes in atmospheric temperature.

If even one of these predictions turns out to be false, then it is safe to conclude that the anthropogenic global warming hypothesis is false.

I wonder if anyone has ever tested these things?

Detection and attribution seems like a way of pretending to try to find out an answer when you don’t really want to know.

“Arm waving does NOT make the climate models more reliable than empirical studies.”

Did you take the time to go to the WIREs Climate Change to read the paper? Although the above is a press statement there are more details on their “arm waving” in the actual publication…….imagine that.

Pardon me for butting in again, but the views expressed in this news release are just more of the deranged belief that models can somehow substitute for physical hypotheses in science. No such substitution is possible.

No component of a model can be falsified. Why? Ask yourself what connects that one component to experience. The answer is: the rest of the model. Remove anything from the remainder of the model and it will not solve, so the component that you want to test cannot be tested separately. The model solves as a whole or not at all. (It is a simulation after all.) It is accepted or rejected as a whole along with its rather elaborate heuristics and the uninvestigated assumptions found in them.

By contrast, the physical hypotheses that make up a physical theory are assembled over time and tested individually or as they are added to the theory. They can be investigated individually and falsified individually, at least until you surpass the level of theory being investigated at CERN.

If you hang around with hard scientists, you will learn that one of their favorite phrases is “harsh assumptions.” It is usually found in the absolutely universal maxim: “If you want to get on with your science, make some harsh assumptions and test them against experience.”

So far as I am aware, there is only one attempt to use a climate model to predict the future on a quantitative basis, in a time frame short enough that we can compare the model predictions with actual data. This is Smith et al., Science, August 2007. The main prediction is what the global temperature anomaly is going to be in 2014, still nearly 4 years away.

But the paper includes the statement that, after 2009, at least half the years would have a temperature anomaly greater than 1998’s. Now we know that 2010 was not greater than 1998. Some databases claim it was equal, but the HAD/CRU data on which the hindcasting of the model was based clearly shows that 1998 was greater than 2010. We now have RSS data up to and including May 2011, and it would seem there is a snowball’s chance in Hades that 2011 can exceed 1998: the January-to-May average is 0.07 C, while 1998 for the full year was 0.54 C. Using the formula that the chance the forecast is wrong is 1-(1-p)^n, where p=0.5 and n=2, the chances are 75% that this forecast is wrong.
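That last step of arithmetic can be checked with a short sketch. Note the per-year probability p=0.5 is the commenter's assumption, not a figure taken from the Smith et al. paper:

```python
def chance_forecast_wrong(p: float, n: int) -> float:
    """Commenter's formula 1 - (1 - p)^n: the chance that a forecast of
    "each year has probability p of exceeding the 1998 anomaly" is wrong,
    given that all n observed years fell short."""
    return 1.0 - (1.0 - p) ** n

print(chance_forecast_wrong(0.5, 2))  # prints 0.75
```

With p=0.5 and n=2 the formula gives 0.75, matching the 75% quoted above.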

For years exponents of climate change theories have used images of deforestation to support their cause.

However, the density of forests and woodland across much of the world is actually increasing, according to a respected scientific study.

The change, which is being dubbed the ‘Great Reversal’, could be crucial in reducing atmospheric carbon, which is linked to climate change.
…
The research, carried out by teams from the University of Helsinki and New York’s Rockefeller University, shows that forests are thickening in 45 of 68 countries, which together account for 72 per cent of global forests. Traditionally, environmentalists have focused their concern solely on the dwindling extent of forested areas, but the authors believe evidence of denser forests could be crucial in reducing the world’s carbon footprint.

Professor Pekka Kauppi of Helsinki University, a co-author of the study, said: ‘People worry about forest area, and that’s quite correct. But if you want to know the carbon budget, it cannot be monitored observing only the changes in area. It is more important to observe this change in forest density.’

If the woodlands are denser, then there is more transpiration, more heat moving upwards carried away by the extra water vapor. We’re familiar with the Urban Heat Island (UHI) effect driving up surface temperatures. Do thicker woodlands lead to an increased Forest Cool Area effect? We already see this in the Amazon rainforest, which generates its own thunderstorms providing cooling during the day. Will these denser woodlands worldwide significantly lower surface air temperatures on land?

The above article “appears” to be so much Kantian mental m@5+urbation. Or at least it “could” be. I mean, really. What, aside from basically nothing, are Hegerl & Zwiers saying? Is this some kind of convoluted apology? Balderdash.

I agree mostly with Theo Goodwin and would add in any model, if one key component is in error, then the entire model fails. At least that is what I learned in FEA. Is it different with climate models? It shouldn’t be.

“However, detection and attribution requires a model of why the climate may be changing to be able to draw conclusions from observations.”

Really? I thought a theory was necessary to be able to draw conclusions from observations. One possible conclusion may be that the observations do not fit the theory and, well … guess what?

As one who has used models successfully in engineering, I think I can safely state that a model is based on a theory and will only work if the theory is supported by facts! Using a model to support a theory is a little like an adage related to me many years ago by someone wiser than me: he said I should spend more time listening than speaking, because when my mouth was open I was not learning!

Is it perhaps significant that scientists of the calibre of Rutherford succeeded in establishing theories without the benefits of models?

During the past 12 years the US has spent $100 billion looking for this human signature in the climate but has failed to find it, i.e., it is so small that it is lost in the noise of measurement. These grant-grabbing scientists have to use models to exaggerate this hidden signal.

Recently we have had a series of trolls coming to WUWT and attempting to disrupt rational discussion by making unjustified assertions which they fail to substantiate when challenged. It seems this may be a coordinated effort to inhibit rational discussion at WUWT.

Your post at June 6, 2011 at 8:01 pm is a classic demonstration of such troll behaviour.

My post (the first post in this thread) quoted the above item and said;
“Of course the models use different “strong assumptions” to those of “research which does not rely on climate models”. At issue is which of those assumptions are least erroneous.
Arm waving does NOT make the climate models more reliable than empirical studies.”

You asserted the full paper contains more than mere arm waving when you replied to me (June 6, 2011 at 12:04 pm ) saying;
“Although the above is a press statement there are more details on their “arm waving” in the actual publication…….imagine that.”

So, at June 6, 2011 at 4:00 pm, I asked you to justify that assertion by quotation from the paper or in your own words. Your proper response to my request would have permitted discussion of the points in the paper which you claim to have worth but you have not stated.

But you did not justify your assertion; instead, you replied:

“My only point was that your comment was no less arm waving then the press release.”

That reply is nonsense (my point was not arm waving) and it is an excuse for your failure to justify your assertion that “there are more details on their “arm waving” in the actual publication”.

The whole family of models is first ‘validated’ by hindcasting, but this in effect is simply adjusting many parameters, which have rather wide ranges, until the models get a reasonable ‘fit’. That done, they are then used for prediction. This pattern of model use is not truly ‘scientific’ methodology. It ought only to be used as a heuristic device, to identify sensitivities in complex multifactorial systems. Even if a model scored well in the first few decades (i.e. matched the observed data), that is still no guarantee the model truly replicates a climate system that is known to have longer multidecadal cycles, which are not incorporated in the model other than with some vague formula for ‘variability’.
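The hindcast-tuning point can be illustrated with a toy example. Everything here is hypothetical (synthetic data, a polynomial standing in for a many-parameter model, nothing from an actual GCM): a model with enough adjustable parameters can be tuned to fit the past record closely while showing no skill on the period that follows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "observed record": a small trend plus noise.
years = np.arange(40)
t = years / 40.0  # rescaled time, to keep the fit well-conditioned
record = 0.5 * t + rng.normal(0.0, 0.1, size=t.size)

# Split into a "hindcast" period used for tuning and a held-out
# "forecast" period the tuned model has never seen.
t_fit, t_out = t[:30], t[30:]
obs_fit, obs_out = record[:30], record[30:]

# "Validate" by adjusting many free parameters (a degree-9 polynomial)
# until the hindcast fits the record closely.
coeffs = np.polyfit(t_fit, obs_fit, deg=9)

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

hindcast_err = rmse(np.polyval(coeffs, t_fit), obs_fit)
forecast_err = rmse(np.polyval(coeffs, t_out), obs_out)

# A close hindcast fit says little about forecast skill: the error on
# the held-out years is far larger than the error on the tuned years.
print(f"hindcast RMSE: {hindcast_err:.3f}")
print(f"forecast RMSE: {forecast_err:.3f}")
```

The sketch only demonstrates over-fitting in general, not anything about a specific climate model; but it shows why a good hindcast 'fit' is not by itself validation.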

As far as I am aware, and I would appreciate any comments, all the current models have been built using assumptions about sulphur aerosol during the global ‘dimming’ years (1945-1980) and also since then (with a global negative term set against the radiative forcing of GHGs). Yet science from 2005 on showed that the sulphur issue was hugely over-estimated: it was a local phenomenon, and the upshot is that the global cooling from 1945-1980 was largely natural (as I believe is the warming that followed). All the available satellite data support this conclusion, yet as far as I am aware there has been no retraction of the model assumptions. I raised this in my book ‘Chill’, but have received no answers yet to this question.

“The whole family of models is first ‘validated’ by hindcasting, but this in effect is simply adjusting many parameters, which have rather wide ranges, until the models get a reasonable ‘fit’. That done, they are then used for prediction. This pattern of model use is not truly ‘scientific’ methodology. It ought only to be used as a heuristic device, to identify sensitivities in complex multifactorial systems.”

Wonderfully well said! My hat is off to you, Sir. Our sophistication in discussion of models and theories is growing.

DR says:
June 6, 2011 at 6:39 pm
“I agree mostly with Theo Goodwin and would add in any model, if one key component is in error, then the entire model fails. At least that is what I learned in FEA. Is it different with climate models? It shouldn’t be.”

Right on the money, DR. Climate models on supercomputers perform in exactly that way. One key component in error and the entire model fails.