
How reliable are climate models?

What the science says...

Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable
"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere." (Freeman Dyson)

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.
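The distinction between weather and climate can be illustrated with a quick numerical sketch. The data here are synthetic and purely illustrative (not real observations): large year-to-year "weather" noise hides a trend that a 30-year average recovers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual temperatures (illustrative only, not real data):
# a slow warming trend plus large year-to-year "weather" noise.
years = np.arange(1900, 2021)
trend = 0.008 * (years - 1900)               # 0.8 degC of warming per century
weather = rng.normal(0.0, 0.15, years.size)  # year-to-year variability
annual = trend + weather

# A 30-year running mean smooths out rare extreme years,
# leaving the underlying climate trend visible.
window = 30
climate = np.convolve(annual, np.ones(window) / window, mode="valid")

# The smoothed "climate" series is far less jumpy than the raw "weather".
print(np.std(np.diff(annual)) > np.std(np.diff(climate)))  # True
```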

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming are run from a starting point in the past to see whether they can accurately map the climate changes that actually occurred. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record showed that CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings are adequate in explaining temperature variations prior to the last thirty years, while none of them are capable of explaining the rise over the past thirty years. CO2 does explain that rise, and explains it completely, without any need for additional, as yet unknown forcings.
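The hindcasting idea can be sketched with a toy calculation. Everything below is synthetic and deliberately simple (a real GCM is a physics simulation, not a curve fit): calibrate on the early part of a record, then check the prediction against decades the fit never saw.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic temperature record: a response to a rising "forcing"
# (an illustrative stand-in for CO2) plus natural variability.
years = np.arange(1900, 2001)
forcing = np.linspace(0.0, 1.0, years.size)
temperature = 0.6 * forcing + rng.normal(0.0, 0.1, years.size)

# Hindcast: calibrate only on the early part of the record...
train = years < 1970
coef = np.polyfit(forcing[train], temperature[train], 1)

# ...then test the prediction against the held-out later decades.
predicted = np.polyval(coef, forcing[~train])
rmse = np.sqrt(np.mean((predicted - temperature[~train]) ** 2))
print(rmse < 0.2)  # skilful on data the fit never saw
```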

Where models have been running for sufficient time, they have also been proved to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. For example, here’s a graph of sea level rise:

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. Climate models form a reliable guide to potential climate change.

Comments

You can't analyze the data in that manner, Mizimi. See post #101 on the climate sensitivity and its relation to the temperature increase at equilibrium, and the contribution of other factors to the temporal temperature evolution.

I am trying to remain objective as I learn more about the Physics of climate change. This has not been easy amid all the opinion and hyperbole surrounding the subject. However, this web site has impressed me with the intelligence shown by the author and the commentators. I have some questions related to the GCM controversy. Perhaps someone can point out primary references where they can be answered.

1. What is the "predictive" variability between climate models?

2. Do they all have the same free model parameters (i.e. fudge factors)?

3. If so, are these parameters set to the same values to accurately fit historical data?

It would raise my "skeptical" level if, in fact, the GCMs contain significant differences in their predictability and technical structure.

"Remember this: a climate model is really nothing more than a scientific hypothesis. If a hypothesis is consistent with observations, then it is standard scientific practice to say that such a hypothesis can continue to be entertained. In this case, that hypothesis can then serve as a basis for other subsidiary models or, in reality, subsidiary hypotheses. If the hypothesis is not consistent with observations, it must be rejected. That does not mean that human-induced climate change may or may not be real, but it does mean that (in this case) the magnitude of prospective change has—with high probability—been overestimated. That means that all subsidiary hypotheses on economic costs, strategic implications, or effects on health are similarly overestimated."
TESTIMONY OF PATRICK J. MICHAELS TO THE SUBCOMMITTEE ON ENERGY AND ENVIRONMENT OF THE COMMITTEE ON ENERGY AND COMMERCE, U.S. HOUSE OF REPRESENTATIVES.

Since 2000, atmospheric carbon dioxide has increased by 18.4% of the total increase from 1800 to 2000. According to the average of the five reporting agencies, the trend of average global temperatures since 1998 shows no increase, and since 2002 the trend shows a DECREASE of 0.8°C/century. This separation shows the lack of connection between atmospheric carbon dioxide increase and average global temperature.

Many Climate Scientists are completely unaware of some relevant science and understand other relevant science poorly (it’s not in their curriculum). The missing science proves that added atmospheric carbon dioxide has no significant influence on average global temperature. See my pdf linked from http://climaterealists.com/index.php?tid=145&linkbox=true for the proof. Or email at danpangburn@roadrunner.com

As the atmospheric carbon dioxide level continues to increase and the average global temperature doesn’t, it is becoming more and more apparent that many climate scientists have made an egregious mistake and a whole lot of people have been misled.

There are a few points I would like to make regarding modelling (as I have worked in modelling for government). Most government agencies use simplified models to describe and plan for the real world, because most see that as their job: to attempt to bring order amongst all the 'noise' that is out there. But most of these government agencies, and most of their models, are actually based on rather socialist-type assumptions, which e.g. reduce 'noise' to irrelevancies, and frequently rely on linear relationships, weakening away from an identified mean or dominant factor. The argument is complicated, but I would say that this is primarily why extreme forms of socialism fail: in the real world there is plenty of 'noise' which isn't 'noise', or irrelevant, or 'linear-weakening-strengthening by a simple factor/set of factors', at all. (One of the best non-linear examples I can think of is the element iron in the periodic table, which causes stars to explode in supernovas, but that is another story.)

The models used in complex systems such as climate can be fundamentally flawed, in exactly the way the bank models were flawed in the financial crisis: modellers simply tell the decision makers/executives what they want to hear, and what the modellers want them to hear (gaining themselves promotion and bonuses, and reducing the need for costly data gathering, etc.). Major assumptions are played down, and data which doesn’t suit is left out or relegated to 'noise'. The real world just doesn’t work like that. Wherever models determine policy these models can be dangerous, i.e. 'weapons of data destruction', especially in any political context.
There is a lot more I could say on modelling, as I have worked in this field, but maybe another day.

A common argument heard is "scientists can't even predict the weather next week, how can they predict the climate years from now?". This betrays a misunderstanding of the difference between weather, which is chaotic and unpredictable, and climate, which is weather averaged out over time. While you can't predict with certainty whether a coin will land heads or tails, you can predict the statistical results of a large number of coin tosses. Or, expressing that in weather terms: you can't predict the exact route a storm will take, but the average temperature and precipitation for the region will turn out much the same over a period of time.
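The coin analogy is easy to check numerically; a minimal sketch:

```python
import random

random.seed(42)

# A single toss (the "weather") is unpredictable, but the average of
# many tosses (the "climate" of the coin) converges toward 0.5.
def heads_fraction(n_tosses: int) -> float:
    return sum(random.random() < 0.5 for _ in range(n_tosses)) / n_tosses

few, many = heads_fraction(10), heads_fraction(100_000)
print(abs(many - 0.5) < 0.01)  # the long-run average is predictable
```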

Using this argument is admitting that there is no such thing as climate science, only climate statistics. It is also a dodgy use of statistics, in that the toss of a coin is fundamentally the result of physical law, with the only random element being how hard the spin is induced.
If you have any doubts on this score try a simple rig which drops a coin horizontally to catch its edge on a bar placed in its path. It is possible to get about 90% selection either way by moving the bar if the coin lands on a relatively bounce free surface. I was told that it is actually possible to get better than 99% with a fancier rig but never tried to do so as I only needed 75% for my free lunch.

Can anyone point to the references for the IPCC's peer-reviewed projections of the effect of changes in cloud type and coverage as a result of temperature and the more erratic weather patterns they claim will occur?

Climate Scientists have adopted the word 'feedback' but apply it completely differently from the way engineers had already successfully used it for decades. Correct use of 'feedback' combined with paleo temperature data proves that added atmospheric carbon dioxide has no significant effect on average global temperature. Any activity to curtail the atmospheric carbon dioxide level puts freedom and prosperity at risk.

I have worked in computer modelling within science and government, and have had some run-ins with those within science who attempt to reduce complex modelling down to one variable: their field of research. I have seen hundreds of millions of dollars of development projects almost shelved because these projects were not supposed to have even been occurring, according to the particular, individual model or dataset of one scientist or faction of scientists (generally those who have spent their entire careers within the public service, outside the real world). Some of these 'only my field/dataset' modellers don't even bother to check all relevant data, and moreover they want policy decisions to be based on simple models, by default, as a way of bringing 'order' to the world. Their 'order'.

This sort of process is the very reason we don't allow governments to control societies; there are always those within government, including within science, who want to impose their particular 'science models' on the world, when in fact it is really about imposing their political philosophy (commonly socialist), and self-interest.

There are other patterns that tend to occur in these sort of modellers, and their cohorts that I have noticed:

-They don't like chaotic systems
-They don't like inbuilt uncertainty
-They don't like changes in uncertainty
-They don't think that the common 10% or so of data that doesn't fit into a dominant model is relevant, or at best think that it can only account for 10% of effect.
-They tend to think all natural systems are smoothly curved.
-They tend to think that fields of research outside the 'dominant' have little relevance.
-They have a common disrespect for market forces in society.
-They think that their field is superior to other fields.
-They don't like being unable to dominate or control human politics
-They get to the point that they believe the issues are settled, and that debating issues and prolonging the political process is a waste of time and taxpayers' money; examining any new data is also a waste of time, and inefficient, since the debate was settled long ago, by their dataset.

They are genuinely astonished when one points out real-world instances which have significant effects (e.g. >10%) which do not fit into their 'dominant' model. They would have bet their house that these wouldn't occur.

The above assumptions are not based on actual data, but on social and political assumptions that those who hold them tend not to be aware they even have, or that they are even questionable; and are inconsistent at best when applied to the real world, or at worst, simply wrong.

A good example is the 'nature is generally smoothly curved' assumption. It is surprising how common this is in 'modellers' (e.g. financial and in climate), how uncommon it is in nature, and moreover what effect the common ~10% of data that doesn't 'fit' can have. Some of the best examples I can think of are the element iron in the periodic table (which causes stars to go supernova; there is nothing 'smooth' in this process, nor in the periodic table in general for that matter), and the process of natural selection itself, where a minority variant can replace an entire pre-existing variety/species.

(In both these cases, according to the assumptions innate in many modellers, we wouldn't even be here, since our solar system formed from a supernova and we ourselves from evolution, neither of which is a 'smoothly curved' process. So much for the ~10% of a dataset having 'low effect'.)

Note also, if you want historical examples of where intellectualism and modelling/ideology can go drastically wrong:

- Richard Pipes of Harvard blames radical academics for providing the foundation, framework and justification for radical Bolshevik communism in the late 19th and early 20th centuries.
- Weikart blames German Social Darwinists and intellectuals of the late 19th and early 20th centuries for providing the foundation, framework and justification for radical Nazism.
- The Social Darwinist/eugenics movement came from within radical academia and intellectual circles, which also attempted to impose their 'science model' on the world in the early 20th century (with Nazism as an offshoot of this).
- The financial crisis of the 2000s, where the 'expert banks' and their modellers got it all wrong.
- Human-induced global warming modellers (>90% sure that there is >90% effect from human activity).

The jury is still out on the last one, but their general manner and methods, in my opinion, are not all that dissimilar to the previous ones.

A lot of people are worried about the motives and science behind 'human induced global warming', because they perceive it as an example of backdoor socialist determinism, the bane of the 20th century: think eugenics, Nazism, and Communist Bolshevism, as examples. These were all 'models', or ideologies, of the way the underlying science and human activities interacted.

One consistent and dangerous theme with these three movements is that they all claimed to be based on science, with direct 'links', but were really political agendas masquerading as science. They were all examples of supposedly irrefutable 'science', where doubts were heavily suppressed. Those who advocated their 'causes' were very, very sure of themselves.

The big question is whether or not 'human induced global warming' is also a form of socialist determinism. Psychologically, the foundations and underlying assumptions are very similar. Human activities are usually elevated above other factors, the future is largely preordained and inevitable, society must be re-ordered according to the 'new science', etc.

In something as big and fundamentally chaotic as the economy, or the climate, it is questionable, at best, whether we can ever be so sure about 'links' as to re-order entire societies.

There are aspects of general determinism in the politics of the human-induced climate change movement-people want to control and re-order society in the manner that 'human induced global warming' dictates. They 'link' human activities to climate, (which is itself a form of determinism).

Their absolute sureness of the pervasiveness and dominance of the link, without mitigating or confounding factors, is very close to a deterministic style of thought. One dataset or factor is raised in importance above all others; to the latter they ascribe simple 'noise'. They are, by default, above the squabbling of the market, or the democratic process. The future is certain, and pre-ordained, and it is CO2. Nothing is more moral or certain than a re-ordering of society according to the fundamental principles of the new idea. Those who can't or won't change will be discarded, in the new world. It is a matter of life and death. And so on. Trouble is, people have heard it all before; it may therefore be entirely psychological and political, related to people's pathological need to order and control society, and nothing at all to do with the 'science'.

Is it really true that there is a direct causal link between human activities and climate? Perhaps one should pause at the previously 'certain' links between, e.g., biology, race and fitness in society; the previously certain links between capitalist class struggle and communist inevitability; the previously certain links between evolution, race, war, and the Aryan racial struggle for Europe.

What was the underlying major problem with these ideas? It was the determinism: the claim of a direct link between the underlying science and human activities. No wonder people are worried about the 'models'. Should give pause for thought.

#114
"Is it really true that there is a direct causal link between human activities and climate? "

Certainly there is. In the same way that vulcanism or other factors have an effect on climate. But the real question is to what extent human activities affect climate, and that we have yet to quantify. And therein lies another problem; those who wish to control us (for whatever reason) will turn tentative indications into cast-iron certainties in order to achieve their purposes, and both sides are equally guilty of this.

"While there are uncertainties with climate models, they successfully reproduce the past and have successfully predicted future climate change".

As someone who has built computer models based on natural data, I can make some comments on this. The models I build are based on spatial data rather than time-based data, but I suppose the methods are similar. It is not surprising that the IPCC's climate models reproduce the past, because I would think that the models are based to a large extent on past data.

When I build a model I start with the basic (past) data values and come up with a mathematical formula which will model how these values change from place to place in 3-dimensional space. Then the formula is used to predict hundreds of "hypothetical", estimated values within the physical model limits. After doing this it will be apparent that in some places within the physical model there exist an original (real) data value and a nearby estimated value in close proximity. The real and estimated values can then be compared to see how well the formula predicted reality. The formula can then be "tweaked" if necessary to give a better fit between real and estimated data values. This process is known as cross-validation.
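That tweak-and-compare loop is the essence of cross-validation. A minimal leave-one-out sketch, using synthetic 1-D data as a stand-in for the spatial case described above (the fitted "formula" here is just a polynomial):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "measured" values along one spatial axis
# (a 1-D stand-in for the 3-D case described above).
x = np.linspace(0.0, 10.0, 20)
y = np.sin(x) + rng.normal(0.0, 0.1, x.size)

def loo_error(degree: int) -> float:
    """Leave-one-out cross-validation error for a polynomial 'formula'."""
    errors = []
    for i in range(x.size):
        keep = np.arange(x.size) != i
        coef = np.polyfit(x[keep], y[keep], degree)      # fit without point i
        errors.append((np.polyval(coef, x[i]) - y[i]) ** 2)
    return float(np.mean(errors))

# The tweaked (higher-degree) formula predicts withheld points better
# than a formula that is too simple for the data.
print(loo_error(1) > loo_error(5))  # True
```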

I assume the IPCC builds its models using historical data and carries out similar cross-validation techniques. Therefore of course the IPCC models can predict the past in a general sense.

With regard to predicting future climate change, only time will tell. I think we would have to wait a minimum of, say, 30 years to see how the IPCC models' future predictions compare to reality. So I think it is premature to say that the IPCC models predict the future accurately. We do not know yet. However, I understand the IPCC produces lots of predictions based on its computer climate models, and these show a large range of possible outcomes (correct me if I am wrong). Therefore, if this is the case, which IPCC model do we take as its prediction of future climate?

The real data up to 2008 show that global temperatures have been tracking below Hansen's (1988) scenario C since about 2005. As I understand the predictions of Hansen's models, scenario C relates to a drastic REDUCTION in the growth of CO2 emissions. Yet in reality CO2 emissions have continued to INCREASE. So real temperatures are tracking even lower than Hansen's most optimistic prediction, which was based on a drastic reduction in CO2 emissions.

If I compare real data to Hansen's scenario B (assumed to represent CO2 emissions frozen at 1988 levels), the real data is now about 0.4 degrees below that predicted by Hansen. This may not seem a lot, but these are the sort of anomalies which the IPCC is describing as catastrophic. I would argue that Hansen's model is not validated by real-world data, and I think that as time passes, it is likely that Hansen's 1988 predictions will diverge even further from real-world temperatures.

Arguably, real temperatures should be compared to Hansen's scenario A (continued growth in CO2 emissions). If such a comparison is made, Hansen's prediction is about 0.6 degrees above reality.

Not really, Neil. You should really familiarize yourself with the data before attempting to trash it! You can read about the Hansen scenarios and the models here (see Figure 2 and accompanying text).

Scenario C is the imaginary situation in which greenhouse gas emissions were stopped in 2000.

Scenario B, described as "the most plausible", is a scenario with moderately increasing greenhouse gas concentrations and some volcanic eruptions, much as we've observed in reality.

Scenario A is a model used to bracket the high end of likelihood, with a rapid exponential increase in anthropogenic emissions and no volcanoes.

If one compares the models with reality based on 2005 data, the results are:

predicted temp rise 1988-2005, relative to the 1951-1980 mean:

model A: 0.59 °C
model B: 0.33 °C
model C: 0.40 °C

real world measurement:

land surface: 0.36 °C
land-ocean surface: 0.32 °C

That seems a pretty good prediction (a 17 year projection into the future). The most plausible scenario has been almost smack on. Of course this is a rather lucky observation, since the models cannot predict the noise in the climate system which is rather large especially on the decadal time scale.

So one can hardly claim a model hasn't been a rather good predictor of future when it's made a prediction that's very close to real world observations!

Of course what happens over very short time periods (a few years) is of little consequence in comparing climate simulations with reality, since, as is very obvious, a climate simulation cannot predict as-yet contingent events like El Niños, La Niñas, volcanic eruptions, changes in solar output and so on. So a successful simulation is expected to produce the broad progression of temperature rise, while the fluctuations around the trend are expected to be completely uncorrelated with real-world observations.
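That point, that short-term fluctuations swamp the trend over a few years but not over decades, is easy to illustrate with synthetic numbers (the trend and noise amplitudes below are illustrative, not fitted to any real record):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic series: a steady 0.02 degC/yr forced trend plus large
# interannual noise (a stand-in for El Ninos, eruptions, etc.).
years = np.arange(1970, 2030)
temps = 0.02 * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

def window_slopes(width: int) -> np.ndarray:
    """OLS trend in every consecutive window of the given width."""
    return np.array([
        np.polyfit(years[i:i + width], temps[i:i + width], 1)[0]
        for i in range(years.size - width + 1)
    ])

short, long_ = window_slopes(8), window_slopes(30)

# Eight-year trends scatter widely around the true 0.02 degC/yr;
# thirty-year trends cluster tightly around it.
print(short.std() > long_.std())  # True
```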

We are talking about basically the same thing in relation to Hansen's scenario C. You are saying that
scenario C is the imaginary situation that greenhouse gas emissions were stopped in 2000. I am saying that it is an imaginary situation in which the growth of CO2 emissions was drastically reduced (as you point out, it represents the case that growth actually stopped) in 2000.

In your comparison above, why did you stop at 2005, when data is available at least up until 2008?

As you say, "Of course what happens over very short time periods (a few years) is of little consequence in comparing climate simulations with reality". Therefore to my way of thinking, we will not know the "truth" until about 2030. But it should be apparent already that to date the real data is diverging from Hansen's more likely scenarios A and B.

Model B is the relevant one, Neil. It corresponds most closely to what has happened emissions-wise in the real world.

The data in the paper I cited goes up to 2005. That's why I stopped at 2005.

Why wait until 2030? The simulations have done a very good job of predicting reality for almost 20 years. So we can say that the real world has "evolved" in a manner that is consistent with our understanding of the greenhouse effect as it stood some 20 years ago. That seems a rather good indication that even 20 years ago we understood the basic elements of the climate system with respect to radiative forcings and heat retention. Obviously we know a whole lot more now, and we expect our current models to be considerably better (not to mention the vast improvements in computational speed, efficiency, and data storage and analysis).

You need to make up your mind about what you think constitutes a long enough period to assess a computational projection into the future! If you consider we won't "know the "truth" until about 2030", how can you possibly say that "it should be apparent already that to date the real data is diverging from Hansen's more likely scenarios A and B"! Those are two mutually exclusive notions.

In reality, as I said in my post #118, a projection cannot simulate contingency (which gives rise to much of the "noise") in the climate system, and therefore we expect considerable short-term divergence of simulated and real-world data. That's obvious, I would hope! We can see that this is the case by inspecting Hansen's simulated projection (Figure 2 here:

http://www.pnas.org/content/103/39/14288.abstract

and observe that despite the overall good correspondence between the scenario B simulation and real-world temperature evolution, there are very large short-term deviations (e.g. 1974-1976; 1992-1994; 2008, etc.). We can understand these in hindsight, since we know what contingent events (volcanic eruptions, El Niños, etc.) caused them. Since these events result in temporary perturbation of the surface temperature evolution, the temperature response recovers, and the long-term temperature evolution remains driven by the anthropogenic increase in radiative forcing despite short-term fluctuations...

Good science predicts the future. We know enough about biology to know what we can do to grow more food, and we bet our lives on that... if that science let us down one year, we'd starve in our modern society that depends on optimum agriculture.

We know enough about cars to know that all our cars will start and get us home tonight when we leave work. We aren't afraid as we approach other cars at alarming rates because science guarantees brakes and tires do predictable things.

We know that if we dump lead into rivers the lead gets into fish and human tissue and causes terrible consequences. We can bank on this being true today as well as 1000 years from now. *That* science is done.

But the science of climate modeling is certainly *not* done, at least in the minds of most people. Let's face it; we are talking very tiny numbers - .038% CO2, < 0.2 degree temperature deltas. We are asked to believe that, while human emissions are small compared to natural ones, the earth absorbs *exactly* the amount it emits, and so any tiny perturbation is a disaster. Hogwash.

There was a time when the orbits of the planets were explained by invisible crystal spheres. And they had models too, ones that would even explain the retrogradation of Mars, all of which functioned using the inviolate assumption that Earth was the center of the visible universe. That premise drove the model, which, while far from elegant, was made to work.

Here we are in the same situation. I have to believe that climate researchers have an agenda. People who don't believe in AGW are certainly not going to devote their lives to studying it. And so, we start with one premise - it's all our fault - and work from there. When an objection comes out - say to the magnitude of CO2 and man's contribution to that - out come the curves. Out come the crystal spheres. Out come the graphs showing tiny variations drawn on an offset scale for emphasis. Out comes a vague paper digestible by 'the community' but no one else.

If you truly want people to believe that "the science is done", do some actual physical research. Create a large enclosed simulated atmosphere and show the effect of doubling CO2. Don't slip away saying it's not that simple. Science is all about reduction to experiments that *are* that comprehensible.

Otherwise, the models are about as believable as those that can model old stock market data but won't make a dime.

The above article attempts to debunk the "skeptic argument." However, the attempted debunking rests upon the abnormal semantics which the article attaches to the word "prediction."

As normally defined, a "prediction" is a logical proposition about the outcome of a specified statistical event that is made at a specified interval in time in advance of the occurrence of the event's outcome. As it is an example of a proposition, a prediction is true or false.

I understand that the climatologist James Hansen once predicted that the highway outside his office in Manhattan would be underwater 20 years later. Hansen had made a prediction. In the event, Hansen's prediction proved false, invalidating Hansen's hypothesis.

All of the article's examples of "predictions" are computed temperatures. They provide the basis for comparison of computed to measured temperatures. However, by itself such a comparison neither validates nor invalidates the associated model, for the events are unspecified. With the events unspecified, the model lacks the property of "falsifiability" that is possessed by every model that is "scientific" in nature.

To render one of the IPCC's models falsifiable, the builders of that model would have to specify the statistical event that is associated with each prediction. According to authorities that include the IPCC itself, this task has not yet been accomplished. In its most recent report, the IPCC states that its models do not make "predictions" but rather that they make "projections." While predictions support the validation of a model, the IPCC's "projections" support only "evaluation."

The distinction is an important one, for to control any sort of system, one must have the capacity for predicting the outcomes from movement of the control system's actuators. Whether the IPCC's models have the capacity for doing so remains unknown pending the definition of the events and conduct of a validation exercise. Thus, whether regulation of carbon dioxide emissions would have the desired effect of controlling global temperatures is also unknown.

Associated with confusion over the differing meanings of "prediction" and "projection" in the language of climatology is a mistake repeatedly made by people who are interested in climatology but unfamiliar with the methodology of science. This mistake is to confuse a model built by scientists with a scientific model. A scientific model makes predictions. A model that makes no predictions is not a scientific model even when built by scientists.

The science of prediction is well studied and not very reliable. A straight-line fit to the available historical data is statistically just as good as a fancy computer model. A stock market model works fine over 100 years as a straight line with unpredictable ups and downs. That's as good as a climate model will ever be over a few hundred million years: a straight line with ups and downs. The argument about weather vs. climate is germane: they say we can't predict weather, but that it's OK because we can predict climate, and then they try to predict the jiggles, which is as good as predicting weather.

So has anyone seen a model run try to account for the paleoclimatology data?

But if you like straight lines, fit one to the last 30 years or so and you'll end up with about 2 °C by 2100. Not comfortable anyway. But you don't need to use "fancy computer models" to do better than a straight-line fit and get a better representation of reality.
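That back-of-the-envelope extrapolation is easy to reproduce. The sketch below uses synthetic data with an assumed trend of about 0.02 °C per year plus noise (hypothetical numbers for illustration, not a real temperature record):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2010)          # a 30-year window
# Synthetic anomalies: an assumed ~0.02 C/yr trend plus weather noise
temps = 0.02 * (years - 1980) + rng.normal(0, 0.1, years.size)

slope, intercept = np.polyfit(years, temps, 1)   # straight-line fit
projection_2100 = slope * 2100 + intercept       # naive extrapolation

print(f"fitted trend: {slope * 10:.2f} C/decade")
print(f"extrapolated anomaly in 2100: {projection_2100:.1f} C")
```

With a trend of that size, carrying the line out to 2100 lands in the ballpark of 2 °C above the start of the window, which is the commenter's point: the straight line is cheap, but it tells you nothing about why the trend exists or whether it will hold.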

By the way, you can also do nice fits to the ice age cycles without "fancy computer models", but guess what? They do a better job.

Hi John,
Great site, the best one I've seen yet, especially as you have links to actual journal articles.

One thing that I've noticed is that some "skeptics" have a "feed junk in, get junk out" mentality when it comes to computer models. I recall that when I used to debate creationists at my university, a very similar argument was made about carbon dating being exactly like that.

Although I don't want to stretch the comparison any further than that, it is an interesting point.

There is an article that discusses this a little more I've linked to below.

I'm a medical doctor with no climatology experience beyond a recent lay interest. I've got some grounding in research and am doing an MD (a UK higher medical research degree) and a Cochrane review of a medical topic. I'm trying to bring these transferable skills to bear in helping me make my own mind up about anthropogenic global warming. I was concerned to hear our Prime Minister (no less) publicly dismiss those who question the scientific orthodoxy as "flat-earthers". My reading of this whole article (and sources referenced in the few balanced websites I can find - apart from this one, climatechangefacts.com and sourcewatch.org are good) is that the modeling can be criticised enough to plant 'reasonable doubt' regarding future projections. I think that this is a strong argument against AGW; correspondingly it requires a strong refutation. I don't think this article and thread has achieved this.

Regarding global temperature data, the very simple point I'd like to make is that the authors of Hansen (2006), whose results you cite as the main defence of model prediction, themselves state that "a 17-year period is too brief for precise assessment of model predictions [because of inherent uncertainty within the model]". They continue, "close agreement of observed temperature changes with simulations [for scenario B] is accidental given the large unforced variability in the real world". I think this is appropriate scientific caution and does not necessarily disprove the model - yet this sense of balance is missing from your headline response to skeptics: "[climate models] have successfully predicted future climate change".

I also note that the point at which scenarios A and B diverge (i.e. begin to discriminate between predictions) has not yet occurred, or is occurring now. Overall I would say the Hansen data is not irrefutable evidence that models work.

Incidentally, Hansen 2006 also suggests that the volcanic eruption estimated for the 1990s (which you single out for special mention) was 'sprinkled' there - my reading of the paper is that the authors simply dispersed three eruptions across a 50-year period. Certainly any suggestion that the eruption was a spectacular success of the general climate model would seem to be misleading. I'm not sure that was your intention.

This is very important because as I understand it modeling is the main evidence cited by the IPCC, which in turn is driving the current political process. If they are inaccurate (as, intuitively, they may well be if they do not include unknown forcing) then predictions are scientifically meaningless. As I say, the fit of the Hansen model is described, at best, as tentative by the authors themselves.

I don't believe it's constructive to label critical questioning and rational scepticism as "denial", being "full of junk" or "spouting rubbish" as one blogger has done in this thread. I would also caution against automatically rejecting any article that is not peer reviewed. Peer review is also flawed; it is often not double-blind and therefore can be biased, and because peer-reviewed journals are extremely competitive, articles in them may tend to be those based on well-funded research; funding often following political agendas (and then there is the separate problem of publication bias). The source is simply something that must be weighed along with everything else.
al

The true root and bulk of the evidence is basic physics, with details added in the form of progressively more advanced physics. But the media and public have gotten the misimpression that (a) scientists are merely guessing that human-produced greenhouse gasses are responsible for the portion of warming that scientists' models can't otherwise predict; and (b) there might be no unusual temperature rise needing to be explained, because the temperature hockey stick graph might be wrong.

You will save yourself a lot of time and frustration if you read a quick overview of the wide range of evidence from cce's The Global Warming Debate. (Be patient, his server is slow, and sometimes gets completely bogged down; try again later). Then get a quick history from Spencer Weart's The Discovery of Global Warming; his summary Introduction is nicely short, but the rest of his site is quite rich.

If you want to continue reading background material after that foundation, look at the Start Here section on RealClimate, which has links to materials categorized by level of technical background required.

But if instead you then want to pursue pointed questions, this SkepticalScience site is a great place to turn next. Note there are two types of posts here: the concise Skeptic Arguments linked at the top left of the page (click "View All Arguments"), and the longer "Posts."

The actual physics of CO2 cannot be questioned. There is a secondary player with the CO2 emissions that I do not see discussed. What I would like to be pointed to is reference information on the actual heat generated by the oxidation of hydrocarbon fuels. Where can I find discussion about the retention, transmission, and conversion behaviors of the ~4 exajoules of infrared radiation released annually by burning fossil fuels?

Human-produced direct heat is trivial compared to human-produced greenhouse gas forcing. For details see (in the post The Albedo Effect) the comment 56 by Steve L, and the subsequent comments 57 and 58 by me.

The claim is made that the climate models "...have made predictions that have been subsequently confirmed by observations." This claim is refuted by the noted climatologist Kevin Trenberth; he states at http://blogs.nature.com/climatefeedback/recent_contributors/kevin_trenberth/ that the models referenced by the United Nations Intergovernmental Panel on Climate Change do not make predictions. It follows that: a) the UN-IPCC models are not falsifiable and b) the IPCC models are not scientific, by the definition of "scientific."

Rather than make predictions, the IPCC models make what the IPCC calls "projections." A "projection" is a mathematical function which maps the time to the global average temperature. A "prediction" is a logical proposition which states the outcome of a statistical event. A "projection" supports comparison of the computed to the measured temperature and computation of the error. However, it does not support falsification of the model, for the apparatus by which the projection might be proved wrong is not present. A "prediction" provides this apparatus.

The IPCC curves that you reproduce in Figure 1 always looked wrong to me. Granted they're hard to read, but they show "observed" temperatures increasing much more than actual measurements from HadCRUT3, GISS, etc., by a factor of 2-3 in some periods, e.g. 1975-2000. I think these were published in Nature years ago and I recall there was controversy about them then.

Response: I'm having trouble determining which is the observed temperature record in Figure 1 as the IPCC TAR doesn't say which explicitly for that particular graph. I'm guessing it's the HadCRUT3 record as that seems to be the favoured record used throughout TAR. If this is the case, then the temperature record shown is a slight underestimate of actual warming as the HadCRUT record excludes some of the regions on earth that are warming the most.

"If this is the case, then the temperature record shown is a slight underestimate of actual warming..."
Are you implying that IPCC uses temperature records that aren't published and we don't have access to?
None of the surface air temperatures or the satellite temperature records that I'm aware of come close to showing the temperature increases in figure 1.
Certainly HadCRUT reflects less than half the increase in figure 1.

Not at all. The IPCC TAR uses the HadCRUT record, NCDC and NASA GISS. They just don't indicate which of these records is used in Figure 1 above. As for the trends in Figure 1, just eyeballing the graph, the trend in the last few decades looks like 0.2°C per decade, which is consistent with all three temperature records.

A bit of nitpicking on my side, John, but in the "Further reading" section you say "Tamino compares IPCC AR4 model results [...] versus observations", whereas the picture is his Fig. 3, which "compare[s] the GISS data to the models listed in IPCC AR4 chapter 8 except for the CCCMA models". The one that reflects all the IPCC models is his Fig. 1. Both graphs are very similar, indeed.

Cheers!

Response: Thanks for the feedback, I've updated the wording to reflect this.

It seems that Hansen's 1988 model is indeed (slightly) overestimating the observed warming trend: "the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the current best estimate (~3ºC) [...] it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world". Hansen's model shows 0.26 +/-0.05 ºC/dec, whereas the real world shows 0.19 +/-0.05 ºC/dec. However, for this comparison, as well as the climate sensitivity, it must be taken into account that "Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%)". AR4 models give 0.21 +/-0.16 ºC/dec.

Anyway, this was already highlighted by Hansen et al 2006: "Close agreement of observed temperature change with simulations for the most realistic climate forcing (scenario B) is accidental, given the large unforced variability in both model and real world. Indeed, moderate overestimate of global warming is likely because the sensitivity of the model used (12), 4.2°C for doubled CO2, is larger than our current estimate for actual climate sensitivity, which is 3 +/- 1°C for doubled CO2, based mainly on paleoclimate data."

IPCC AR4 8.6.4 How to Assess Our Relative Confidence in Feedbacks Simulated by Different Models?

[quote]A number of diagnostic tests have been proposed since the TAR (see Section 8.6.3), but few of them have been applied to a majority of the models currently in use. Moreover, it is not yet clear which tests are critical for constraining future projections. Consequently, a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed.[/quote]

I have been meaning to post here for a while after reading the post from Poptech who claims that "Only Computer Illiterates believe in "Man-Made" Global Warming".

I am a computer scientist with 30 years experience who has no doubt that the theory of AGW is correct.

I want to deal specifically with Poptech's claims about computer science, as he claims to be an "expert". As most of his post consists of unintelligible rant it is difficult to nail down precisely what "straw man" the hapless Poptech is railing against, but he does appear to have an issue with physicists, or in particular climate scientists, who program in FORTRAN.

Computers have been used for solving problems in Physics since the beginning of the computer age. In fact most universities run degree courses which allow you to major in Physics and Computing. I did a variation of that degree in the mid 1970s majoring in Applied Maths, Physics and Computer Science. There is a whole range of computer algorithms designed for solving complex mathematical problems using computers and as any physicist will tell you mathematics is the language of physics.

He also claims that because some climate scientists use the computer language FORTRAN, their code must be full of bugs.

Why? Because Poptech cannot understand FORTRAN code? Because FORTRAN has been around for a while? He does not say. I no longer use FORTRAN, but in my experience ability and training are a much better guide to good programming than choice of language.

The principles of Computer Science are universal and not tied to any specific computer language. In fact computers are language agnostic, as they execute machine code. Many of the changes to programming methodology over the years have addressed the issue of software bugs by promoting the use of tested library components or frameworks, structured coding techniques, design patterns and object-oriented programming. That is, we break our complex code down into smaller testable units and ensure that they work correctly by testing them rigorously before combining them into the whole. This does not guarantee bug-free code, but these approaches have been proven to reduce bugs substantially.
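That "smaller testable units" idea looks, in miniature, like this (a toy Python sketch; the function names are my own invention, not code from any climate model):

```python
import math

def anomaly(temp_c, baseline_c):
    """One small, independently testable unit: anomaly relative to a baseline."""
    return temp_c - baseline_c

def decadal_mean(anomalies):
    """Another small unit: the average of a sequence of annual anomalies."""
    return sum(anomalies) / len(anomalies)

# Each unit is verified in isolation before being combined into anything larger.
assert math.isclose(anomaly(15.3, 14.8), 0.5)
assert math.isclose(decadal_mean([0.1, 0.2, 0.3]), 0.2)
```

The same discipline is available in FORTRAN as in any other language; the asserts here are just the Python spelling of a unit test.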

All these approaches are available to the FORTRAN programmer with the added advantage of having access to a well proven library of scientific and statistical routines.

Does our "expert" check, every time he flies, what programming language the aircraft's control system is written in? Most are written in a specialist programming language called Ada, which is of the same vintage as FORTRAN.

His rant against climate models is really a rant against science of any form.

But there is a built-in uncertainty in nature, so there will always be questions that cannot be answered with absolute precision, whether those questions are answered using computer models or with pen and paper. It is the reason why every scientist needs a good handle on statistics: many questions can only be answered within a range of certainty.

Sometimes a general question can be answered with more certainty than a more specific question. Actuaries working for health insurance companies use statistical computer models to work out the average health costs of a range of population groups so their employers can set insurance premiums. But they cannot tell you precisely how many people will get sick next week or more specifically if you are likely to need medical care.
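The actuarial point can be demonstrated in a few lines; the illness probability and cost below are invented purely for illustration:

```python
import random

random.seed(42)
POPULATION = 100_000
P_SICK = 0.05      # assumed annual probability of needing care (hypothetical)
COST = 10_000      # assumed cost per episode, in dollars (hypothetical)

# Simulate one year: each member independently needs care or not.
sick = [random.random() < P_SICK for _ in range(POPULATION)]

# The aggregate is highly predictable: close to P_SICK * COST = $500 per member.
average_cost = COST * sum(sick) / POPULATION
print(f"average cost per member: ${average_cost:.0f}")

# But any single member's cost this year is either $0 or $10,000,
# and the model cannot say which. Averages are predictable; individuals are not.
```

Run it with different seeds and the average barely moves, while any individual outcome remains a coin toss weighted at 5%.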

So it is with climate and weather.

Contrary to Poptech's assertion, weather forecasts have actually become much more accurate over the last few years. With better computer climate models, use of satellite measurements and faster computers, weather bureaus now offer five day forecasts which were not reliable enough in past decades. Ironically some forecasters complain that climate change is affecting their forecasts as the changing climate is altering many of the assumptions based on the historical experience that is built into the models.

Computer models which deal with climate change have not been designed to forecast the weather over the next century. They cannot tell you the summer temperature in 2050. They are tools for examining climate science, the physics of which, contrary to Poptech's opinion, is well understood. They are able to give a range of projections which examine the effects of CO2 as well as other factors on the long-term climate. In that they have been remarkably successful.

O.k., I'm sorry if my first post sounds aggressive towards one side or another; I just want to get this off my to-do list.

It's 2010 now and, even with El Niño, from what I can see from Climate4You (which I presume is one of the most objective sources there is for climate information), no dataset reaches the 1 degree limit that Hansen's "B" scenario seems to have finally gone over.

While, if I'm not incorrect, that seems to have happened, we can only hope that we have learned through the decades (which Hansen 2006 seems to suggest :) ) and at this day and age have had the resources and the time to create the best damn models we can[/End the dramatic b-grade speech].

cloneof, you'll notice that on a year-by-year basis, model output is noisy. For instance, a few years from now the Model B scenario shows a predicted dip in temperature of some two tenths of a degree, passing below your "1 degree limit", a feature we can probably agree is unlikely to be reproduced exactly by the actual climate. Equally, expecting Earth's annual temperature to faithfully track model output in any given year is bound to lead to disappointment.

Rather than throw up my hands in sorrow over the matter, I think I'll go and try to discover why the model output graphs are not smoothed. It's a choice made by the authors, with good reason I suspect, if nothing else intended to convey that we're not to expect a monotonously predictable rise. I can well imagine the hue and cry over divergence from a smoothed result come to think of it.
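For what it's worth, the smoothing itself is a one-liner. Here's a toy illustration using synthetic "model output" (an assumed 0.02 °C/yr trend plus random interannual noise; none of these numbers come from a real model run):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1988, 2020)
# Toy 'model output': a steady assumed trend plus simulated interannual noise
annual = 0.02 * (years - years[0]) + rng.normal(0, 0.12, years.size)

def running_mean(series, window=11):
    """Centred moving average, the usual way such curves get smoothed."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode='valid')

smooth = running_mean(annual)
# The smoothed series varies far less year-to-year than the raw one.
print(np.std(np.diff(annual)), np.std(np.diff(smooth)))
```

Publishing the raw annual series instead of the smoothed one does exactly what you suggest: it shows readers up front that a monotonic rise is not what the model promises.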

It seems that at least some effects are still not really based upon a fundamental understanding of underlying physics. The effects of clouds are still apparently used as fitting parameters to climate data. The fits to climate data are then used to predict climate over other periods. I don't really have a problem with this in principle, but it does seem that these are not really fully based on fundamental physics and this type of fitting leaves open the possibility of trying to use the fitted parameters outside the region of validity (extrapolation rather than interpolation). Apparently things like clouds are not really understood in enough detail to truly predict climate from fundamental physics.

The answer is in the RealClimate post FAQ on Climate Models, the "Questions" section, "What is the difference between a physics-based model and a statistical model?", "Are climate models just a fit to the trend in global temperature data?", and "What is tuning?" A relevant quote from those: "Thus statistical fits to the observed data are included in the climate model formulation, but these are only used for process-level parameterisations, not for trends in time."

Part II of that post then provides more details on parameterizations, including specifics on clouds.

Tom Dayton quoted from another source: "Thus statistical fits to the observed data are included in the climate model formulation, but these are only used for process-level parameterisations, not for trends in time."

I'm not sure what "process-level parameterisations" means. Presumably one needs a model of cloud properties for a range of atmospheric conditions in order to predict climate trends with time. Either you get the properties from an understanding of the physics of cloud formation and their properties or you infer them from fitting to measured climate data. "Process-level parameterisations" sounds like the fitting. Again, I'm not judging it, I'm just trying to understand it. The language is just not familiar to me. My modeling experience is in a different field.

I understand the idea of modeling at that level and have done things like that myself in my professional life. The issue isn't that you will get something utterly unphysical when parameterizations are used outside the fitted range (though that can happen with a poorly formulated model), but that the accuracy may be lower for predictions than for the fit. For example, if a climate model is parameterized by fitting to measured climate data, then those parameterizations are used to predict climate in conditions that do not include the same levels or rates of changes of variables (e.g. CO2 concentrations), then there is very likely greater uncertainty in the predictions than the errors between the model and the actual climate in the fitted range.
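That concern about extrapolation can be shown with a deliberately crude toy (nothing to do with real cloud schemes): fit a linear "parameterisation" to a mildly nonlinear process over a narrow range, then apply it well outside that range.

```python
import numpy as np

def process(x):
    """Toy 'true physics': mildly nonlinear."""
    return x + 0.05 * x**2

# Fit a linear parameterisation only over the observed range 0..2
x_fit = np.linspace(0, 2, 50)
coeffs = np.polyfit(x_fit, process(x_fit), 1)

def parameterised(x):
    return np.polyval(coeffs, x)

err_inside = abs(parameterised(1.0) - process(1.0))    # within the fitted range
err_outside = abs(parameterised(10.0) - process(10.0))  # far outside it
print(err_inside, err_outside)
# The extrapolation error is far larger than the in-range error.
```

Inside the fitted range the linear stand-in is nearly indistinguishable from the "true" process; at x = 10 it is off by several units. That gap is exactly the extra uncertainty pdt describes when parameterisations are pushed beyond the conditions they were fitted to.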

pdt, you seem to be implying that the models "tune" the parameters to match climate, but the parametrization is done independently of the models, and the values are then used in the model. Also note that it is not a blind fitting of a statistical function, but usually a determination of the empirical values of coefficients in a functional form derived from the physics. Note also that for some processes (like clouds), the parametrization can be checked against the output of a full-physics model for accuracy; it's just not practical to use the full physics in a model run. It is also being improved all the time.

Either way, even the early Hansen models were a far better guide to what the future held than hand-waving about empirical guesses. Of course, there may still be unmodelled physics which is going to save us all, but would you want to bet on that possibility? What the models show is that, with the best physics available to us, our continued emissions of GHGs are going to heat the earth rapidly, and we ignore that physics at our peril.

Yep, pdt, I agree with you that the accuracy of the predictions likely will be lower than for the fit. So researchers keep trying to reduce the numbers of parameters they use, and to improve the estimates of the parameters they must use.

The claim (not yours!) I was initially responding to was the misperception that the climate models' predictions are evaluated against the same data that the models were statistically fit to in the first place.

By the way, there is more discussion of parameterization on Open Mind, especially starting with Ray Ladbury's comment. When you get down to Tim's comment below Ray's, skip it because Tim then posted a correction and then a final correction.

I was talking about the paper released in 2008 by Spencer and Braswell that discussed a potential positive feedback bias caused by cloud variability. The paper makes a strong claim that this bias basically makes the models show too much positive feedback.

The link you gave me talks about one of his un-peer-reviewed blog posts on how the PDO would affect climate. See that post's comment number 171.

To this day I have not seen a debunking article nor any response from the modelling community about this paper. Considering this paper was released in the prestigious Journal of Climate, and even Piers Forster couldn't but give it a green light, I must wonder.