First, there are a bunch of red herrings out there on both sides. One of the common ones is that it has been getting warmer and ALL THE EVIDENCE shows it. The real fight is over CO2-induced large temperature increases. Even all the modeling of the potential effects is a side issue compared to the real question. The real question is whether CO2 is guilty, and how that was proven. WRT the emails, do they give reason to doubt this proof? To answer, we go to IPCC 4AR.

We start with some relevant methodology from Chapter 8.

From Section 8.1.1, WG1 IPCC 4AR (http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8.html): Over time, statistics can be accumulated that give information on the performance of a particular model or forecast system. In climate change simulations, on the other hand, models are used to make projections of possible future changes over time scales of many decades and for which there are no precise past analogues. Confidence in a model can be gained through simulations of the historical record, or of palaeoclimate, but such opportunities are much more limited than are those available through weather prediction.

This is pretty straightforward. If we want to determine, in under 100 to 130 years, whether a model is reasonable for a 100-year prediction, we need historical and paleoclimate simulations, which means a good historical record and good reconstructions. A fuller treatment is in Tebaldi and Knutti (http://rsta.royalsocietypublishing.org/content/365/1857/2053.full), where it is stated:

And even if the scenario were to be followed, waiting decades for a single verification dataset is clearly not an effective verification strategy. This might sound obvious, but it is important to note that climate projections, decades or longer in the future by definition, cannot be validated directly through observed changes. Our confidence in climate models must therefore come from other sources. The judgement of whether a climate model is skilful or not does not come from its prediction of the future, but from its ability to replicate the mean climatic conditions, climate variability and transient changes for which we have observations, and from its ability to simulate well-understood climate processes. For example, climate models are evaluated on how well they simulate the present-day mean climate (e.g. atmospheric temperature, precipitation, pressure, vertical profiles, ocean temperature and salinity, ocean circulation, sea ice distributions, vegetation, etc.), the seasonal cycle and climate variability on various time scales (e.g. the North Atlantic oscillation, ENSO, etc.). Their response to specified forcing is compared to the observed warming over the industrial period. They are evaluated against proxy data from past climate states, e.g. the Last Glacial Maximum, the Mid-Holocene, the last Interglacial period, or even further back in time.

The next quote confirms the need for good observations:

From 8.1.2: A climate model is a very complex system, with many components. The model must of course be tested at the system level, that is, by running the full model and comparing the results with observations.

The next section gives the basic approach. However, the Royal Society publication by Tebaldi and Knutti linked above is a better, more complete read, with history and updates. The important point is that the skill of a projection cannot be tested directly, and neither can how good a test any given metric is of a model's results.

8.1.2.2 What does the accuracy of a climate model’s simulation of past or contemporary climate say about the accuracy of its projections of climate change? This question is just beginning to be addressed, exploiting the newly available ensembles of models. A number of different observationally based metrics have been used to weight the reliability of contributing models when making probabilistic projections (see Section 10.5.4)….. For any given metric, it is important to assess how good a test it is of model results for making projections of future climate change. This cannot be tested directly, since there are no observed periods with forcing changes exactly analogous to those expected over the 21st century. However, relationships between observable metrics and the predicted quantity of interest (e.g., climate sensitivity) can be explored across model ensembles.

Now to a series of quotes that go to the heart of whether the emails matter. In this section, the use of “present climate” is an important part of the evaluation. Note that the control runs use fixed radiative forcing and try to simulate known or assumed conditions, including pre-industrial. Aerosol forcing is allowed to vary within its range of uncertainty.

8.1.2.3 Testing models’ ability to simulate ‘present climate’ (including variability and extremes) is an important part of model evaluation (see Sections 8.3 to 8.5, and Chapter 11 for specific regional evaluations). In doing this, certain practical choices are needed, for example, between a long time series or mean from a ‘control’ run with fixed radiative forcing (often pre-industrial rather than present day), or a shorter, transient time series from a ‘20th-century’ simulation including historical variations in forcing…Models have been extensively used to simulate observed climate change during the 20th century. Since forcing changes are not perfectly known over that period (see Chapter 2), such tests do not fully constrain future response to forcing changes. Knutti et al. (2002) showed that in a perturbed physics ensemble of Earth System Models of Intermediate Complexity (EMICs), simulations from models with a range of climate sensitivities are consistent with the observed surface air temperature and ocean heat content records, if aerosol forcing is allowed to vary within its range of uncertainty. Despite this fundamental limitation, testing of 20th-century simulations against historical observations does place some constraints on future climate response (e.g., Knutti et al., 2002). These topics are discussed in detail in Chapter 9.

Section 8.1.2.4 states why the emails are important in its first sentence. Next follows a discussion of problems, or potential problems, that highlights why the emails are important and why those who claim they can’t or won’t matter are wrong. First, models by themselves, with just the modern period, are not sufficient for determination. Using aerosol forcings, whose full range of uncertainty runs from slightly positive to negative, increases uncertainty rather than decreasing it, since even the sign can vary or is unknown. The next part of the section discusses limitations and uncertainties. But it is important to note that “20th-century climate variations have been small compared with the anticipated future changes under forcing scenarios.” It then states that the issues are discussed in depth in Chapter 6, which is where we want to go. The last part quoted is the use of initial conditions in a test. The inappropriateness of these sorts of tests can be found here: http://climateaudit.org/2007/02/11/exponential-growth-in-physical-systems/ . This is an important consideration when weighing some of the arguments made in support of a “model”-only approach. The work by Dr. Browning indicates that this is a questionable practice with pitfalls undefined (for the modelers). http://climateaudit.org/2006/05/15/gerry-browning-numerical-climate-models/ is a must-read for understanding that relying on the models alone, as presented in the IPCC AR4, is not methodologically sound according to recent publications that have NOT been incorporated into AR4. Using models that include a compromise, such as a hyperviscous layer in an initial-value and boundary-condition model, does not mean the models can’t be useful. It does mean that the math and physics are not correct. Such a layer violates one of the assumptions necessary to pose the PDEs, namely that the fluid is a continuum. But IF and ONLY IF such a model can be shown to be useful can it be used.
The methodology is to use “modern” records and especially the last 1000 years to justify skill.

8.1.2.4 Simulations of climate states from the more distant past allow models to be evaluated in regimes that are significantly different from the present. Such tests complement the ‘present climate’ and ‘instrumental period climate’ evaluations, since 20th-century climate variations have been small compared with the anticipated future changes under forcing scenarios derived from the IPCC Special Report on Emission Scenarios (SRES). The limitations of palaeoclimate tests are that uncertainties in both forcing and actual climate variables (usually derived from proxies) tend to be greater than in the instrumental period, and that the number of climate variables for which there are good palaeo-proxies is limited. Further, climate states may have been so different (e.g., ice sheets at last glacial maximum) that processes determining quantities such as climate sensitivity were different from those likely to operate in the 21st century. Finally, the time scales of change were so long that there are difficulties in experimental design, at least for General Circulation Models (GCMs). These issues are discussed in depth in Chapter 6… Climate models can be tested through forecasts based on initial conditions. Climate models are closely related to the models that are used routinely for numerical weather prediction, and increasingly for extended range forecasting on seasonal to interannual time scales.

The rest of this is a repeat for those familiar with the methodology, so I have tried to limit the discussion to certain sections that highlight why the emails are important. The first section has most, if not all, of the key words as to why the emails are so important: replication, independent, cross-verification, confidence, inferences, and of course the all-time champ, ROBUST. Each of these keywords has been called into question by the emails. Probably the best source, unless you want to do it yourself, is Climategate: The CRUtape Letters by Mosher and Fuller. This is not to say that it has been proven, but the evidence is such that confirmation bias should be assumed to have occurred unless it is shown that it has not. The evidence indicates that until the work is redone in an open environment, it should NOT be trusted. Please note and understand the methodological claim that the field “depends heavily on replication and cross-verification” from independent sources. This is a keystone to understanding whether or not the emails matter. They do.

Section 6.2.1.4 The field of palaeoclimatology depends heavily on replication and cross-verification between palaeoclimate records from independent sources in order to build confidence in inferences about past climate variability and change. In this chapter, the most weight is placed on those inferences that have been made with particularly robust or replicated methodologies.

In case there is any doubt, this section links models and past climate changes, which are stated to be “key to testing physical hypotheses.” Two points: 1) the initial-value and scale problems in Dr. Browning’s work, supported by Sylvie Gravel, show that without some empirical justification, models cannot be supported as developed; 2) Section 8.1.2.4 underscores that the further one goes into the past, the more suspect the utility for explaining current conditions, forcings, or relationships becomes, i.e., their utility becomes progressively limited. This means the best time period to use is the one in question: the medieval warm period to the present. I have noted that in discussions, AGW proponents offer far-past CO2/temperature relationships. A reading of the quoted Chapter 6 sections shows that the conditions may have been so different that such a claim of current applicability is unlikely to hold. If it is unlikely, then these AGW proponents are actually claiming that the certainty indicated in AR4 and AR3 is false, or that the IPCC has it wrong. Take your pick.

6.2.2 Climate models are used to simulate episodes of past climate (e.g., the Last Glacial Maximum, the last interglacial period or abrupt climate events) to help understand the mechanisms of past climate changes. Models are key to testing physical hypotheses, such as the Milankovitch theory (Section 6.4, Box 6.1), quantitatively. Models allow the linkage of cause and effect in past climate change to be investigated…At the same time, palaeoclimate reconstructions offer the possibility of testing climate models, particularly if the climate forcing can be appropriately specified, and the response is sufficiently well constrained. For earlier climates (i.e., before the current ‘Holocene’ interglacial), forcing and responses cover a much larger range, but data are more sparse and uncertain, whereas for recent millennia more records are available, but forcing and response are much smaller. Testing models with palaeoclimatic data is important, as not all aspects of climate models can be tested against instrumental climate data. For example, good performance for present climate is not a conclusive test for a realistic sensitivity to CO2 – to test this, simulation of a climate with a very different CO2 level can be used. In addition, many parameterizations describing sub-grid scale processes (e.g., cloud parameters, turbulent mixing) have been developed using present-day observations; hence climate states not used in model development provide an independent benchmark for testing models. Palaeoclimate data are key to evaluating the ability of climate models to simulate realistic climate change.

For other considerations as to why the emails matter, there have been a series of posts on The Air Vent and other sites examining specific issues and emails. Using the keyword climategate will yield a number of relevant posts on both sides of the issue.

52 Responses to “The Next Battlefield”

” One of the common ones is that it has been getting warmer and ALL THE EVIDENCE shows it. The real fight is over CO2 induced large temperature increases. Even all the modeling done about the potential effects are a side issue compared to the real question. The real question is CO2 guilty, and how was it proven.”

Yes! We must keep our eye on the ball. The ball is not warming that started at the end of the LIA, the ball is not:

The “ball” is whether the cause of the warming, which even Dr. Phil Jones says is not unusual, is anthropogenic CO2 emissions or natural.

When CO2 levels rising or falling have not preceded temperatures rising or falling before, I can see where there would be questions regarding whether it is now, especially since our current rising temperatures fit within natural variation.

This might sound obvious, but it is important to note that climate projections, decades or longer in the future by definition, cannot be validated directly through observed changes.

Why not? If we are going to use these models to guide policy concerned with effects on century time-scales, why can’t we spend a decade or two validating some predictions, and understanding what our predictive capability actually is. If you are really taking a long-term view, this seems like a reasonable course of action.

The judgment of whether a climate model is skillful or not does not come from its prediction of the future, but from its ability to replicate the mean climatic conditions, climate variability and transient changes for which we have observations, and from its ability to simulate well-understood climate processes.

This actually says nothing about predictive capability. The observational data we have is not orthogonal with respect to the inputs the way a designed experiment would be, and it is not independent of the model development process. Successful hind-casts are a nice sanity check, but they are not validation.

Paul Linsay said

Model ensembles are the unscientific heart of AGW. No physicist would take an average of several models and accept agreement of the average with the measurements as true understanding. At best, one model is correct and the rest are wrong. It’s also possible that all the models are wrong. The whole idea of model averaging should make an engineer’s hair stand on end. In model A the roadbed collapses but the cables are OK, while in model B the cables snap but the roadbed is fine. But heh, don’t worry, if we take the average of the models everything is fine.

j ferguson said

I could not understand how a model which was “adjusted” to follow past events could have any credibility or as they say “skill” (which I guess is a term of art in this industry) if the past events were inaccurately described.

In other words, if the model accurately tracks phony data then one might suppose any projections it provides will be just as phony.

Bender, who seems never to have returned, likely still cleaning Mosher’s pool, suggested (at CA) that the models weren’t solely based on these erroneous data sets, tree-rings, pond-sediment, etc.

Hoping I haven’t put false words in his mouth, are we certain that there are no other bases for these models than GHCN, tree-rings, pond sediment, etc?

Tim said

#2 – The problem is no scientist can afford to admit that their research is basically useless, so they are forced to come up with rationalizations for why their research has meaning even if ‘conventional wisdom’ says it does not. That is why they invented the claim that being able to replicate paleoclimate is evidence of a model’s skill.

Obviously the claim is nonsense since we do not have the paleo-climate data required to properly determine whether models can replicate it or not.

John F. Pittman said

Previously on tAV, I posted on “Why Yamal Matters” and other posts about the Bayesian a priori for models in AR4. “In statistics, a priori knowledge is prior knowledge about a population, rather than that estimated by recent observation. It is common in Bayesian inference to make inferences conditional upon this knowledge, and the integration of a priori knowledge is the central difference between the Bayesian and Frequentist approach to statistics.” From Wiki.

That is why Drs. Browning and Gravel are important. It also shows why Lucia’s work is important. There may be a bias. At present, there appears to be a statistical bias. I believe it was Gavin who pointed out that if the historical temperature record were a little flatter (removal of a small UHI artifact), the models would fit better. Visually the statement appears to be true. It is interesting to note that if the UHI adjustment were to take the recent lack of warming down with respect to the mean, the bias would be even more likely. Somewhere, I think Bender or Mosher had a discussion of this, IIRC.
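The Bayesian/frequentist distinction quoted from Wiki above can be made concrete with a toy example. This is my own minimal sketch, not anything from AR4: estimating a proportion with a conjugate Beta prior, where the prior is exactly the “a priori knowledge” the quote describes.

```python
# Toy illustration of a priori knowledge in Bayesian inference:
# estimating a proportion from k successes in n trials.
# The frequentist MLE uses only the data; the Bayesian posterior
# blends a prior (knowledge held before seeing the data) with it.

def frequentist_estimate(k, n):
    """Maximum-likelihood estimate: data only."""
    return k / n

def bayesian_estimate(k, n, alpha, beta):
    """Posterior mean under a conjugate Beta(alpha, beta) prior."""
    return (k + alpha) / (n + alpha + beta)

k, n = 7, 10
print(frequentist_estimate(k, n))        # 0.7 -- data alone
print(bayesian_estimate(k, n, 1, 1))     # flat prior: close to the MLE
print(bayesian_estimate(k, n, 50, 50))   # strong prior at 0.5 pulls the estimate toward it
```

The point of the sketch: with a strong enough prior, the same data yield a very different estimate, which is why the choice (and justification) of a priori assumptions in model evaluation matters.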

actually thoughtful said

First of all I agree with the basic point – the question is about CO2 (and other greenhouse gasses) and temperature.

#1 I would add:
– hockey stick
– emails
– the MSM, LSM, blogosphere, etc.
– whether it was warmer, with more CO2 (or less) in the distant past
– the “predicted” ice age of the 1970s
– add almost everything that EVERYONE is talking about. The discussion of global warming is a demonstration of a massive failure of critical thinking, because people have taken their eye off the ball of “Does increased CO2 cause temperature increases? If so, how much? And is it human caused? And, if so, can it be human solved?”

Given that the CO2 increases are verifiable and seemingly beyond question, some of the focus on modern proxies for warming is somewhat justified – patterns in ice field changes and other things that one would predict based on the basic premise being true.

But it got out of hand about the time that the MSM dubbed Katrina an AGW event and has stayed out of hand.

curious said

7 – I think a comparison with mature FEA software validation and verification is relevant. My understanding is that there has to be an auditable trail of development, testing, calibration and verification. IIRR Bender commented at CA that he had been through every line of code in one of the models (ModelE perhaps?) and he was critical of its coding and algorithms.

John F. Pittman said

Mosher, the book was a good read. Please send my compliments to T Fuller. The hindcast part is the really interesting aspect of Lucia’s work. The other part I thought was good was the natural and model variability posts she did. The Tebaldi and Knutti paper I listed identifies something I think is weak in the older model development. In the paper, they propose that arbitrarily removing data/runs that are outliers may be the wrong thing to do. I hope they explore this further. The idea that drier gets drier and wetter gets wetter has been a glossed-over weakness in the methodology. I really don’t think progress can be made on local/regional projections unless they either answer Browning and Gravel, or the modellers develop something like what T&K propose.

Model ensembles are the unscientific heart of AGW. No physicist would take an average of several models and accept agreement of the average with the measurements as true understanding.

There’s nothing unscientific about ensembles per se; they can be used quite appropriately to quantify our uncertainty in the results based on uncertainty in initial conditions, parameters in constitutive relations (because these are generally established empirically), or even model structure (and Jaynes is an example of a good physicist who would be quite comfortable with Bayes model averaging). There’s plenty of room for honest skepticism; no need to attack an uncertainty quantification method.

At best, one model is correct and the rest are wrong. It’s also possible that all the models are wrong. The whole idea of model averaging should make an engineer’s hair stand on end.

I guess I’m not a good engineer by your criteria; I see nothing wrong with model averaging. No model is 100% correct or 100% wrong; all have varying degrees of usefulness.
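The contrast in this exchange between a simple ensemble mean and the Bayes model averaging mentioned above can be sketched in a few lines. This is a hypothetical illustration (the predictions and evidence values are made up, not from any real GCM ensemble): BMA weights each model by how well it explains the data, rather than by 1/N.

```python
import math

def bayes_model_average(predictions, log_evidences):
    """Weight each model's prediction by its posterior probability,
    proportional to exp(log marginal likelihood), instead of 1/N."""
    m = max(log_evidences)
    w = [math.exp(le - m) for le in log_evidences]  # subtract max for numerical stability
    total = sum(w)
    weights = [x / total for x in w]
    return sum(p * wt for p, wt in zip(predictions, weights)), weights

preds = [1.5, 3.0, 4.5]          # three models' projections (hypothetical)
log_ev = [-10.0, -12.0, -20.0]   # how well each model explains the observations

bma, weights = bayes_model_average(preds, log_ev)
simple = sum(preds) / len(preds)
print(simple)   # 3.0 -- every model counts equally, however poor
print(bma)      # dominated by the best-supported model
```

In this toy case the poorly supported third model contributes almost nothing to the BMA result, whereas the simple mean gives it full weight; that is the distinction the comment is drawing.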

Layman Lurker said

Thanks John. Just a quick read for me at this time. Your essays are best studied alongside your cited references.

It strikes me as ironic that any unrecognized bias in the paleoclimate or the instrumental record will ultimately undermine the skill and validity of the models. The sad thing is that potentially skillful models might be obscured by a stubborn insistence that the paleo and instrumental records contain no bias.

Phillip Bratby said

Just to add a comment re model ensembles based on my experience in the nuclear industry. There are many models used to calculate postulated accident scenarios at a nuclear plant. The models are based on the basic laws of physics (conservation laws) with correlations etc. The models were developed using a variety of experiments at various scales. They are validated against other experiments at various scales, with many of the validation calculations being done blind, i.e. calculate a simulated accident before the experiment is performed and then compare the results. The results of such calculations were highly illuminating. Different users of the same model could get wildly different results (timescales wrong, events wrong etc), and different models could give wildly different results. However, skillful users of well-developed and verified models, who learned from previous calculations, and developed use guides, could get good results. To take the ensemble average of the model results would have been as meaningless as any of the bad models. Only the results from models that were well-validated, well documented, well verified and used by skillful engineers who fully understood the models, had any predictive worth.

Based on my experience, I would suggest the climate models don’t have any of the characteristics I have just listed and ensemble averaging would be worthless.

hjbange said

Working at Hughes Aircraft in the 80’s I managed the systems engineering program for the Phoenix missile, the long range air to air missile carried by the F-14 Tomcat.

We had 2 groups that independently developed six degree of freedom (6 DOF) aerodynamic simulations for the flight characteristics. The Analysis group’s sim always overpredicted the performance (i.e. flight time to intercept) while my Systems group’s sim underpredicted. It turned out that the real performance fell in the envelope between the 2 simulations, typically 2/3 better than ours, 1/3 worse than theirs.

We studied the 48 physical parameters (such as lift, drag) used in the 6 DOF sims which were independently derived and turned out to be similar, but not exactly the same. The programming was entirely different, they used the guidance equations, while we used the missile’s software execution of the implemented guidance design.

We were never able to reconcile the results, but ended up using the 1/3 and 2/3 difference to predict future shot performance for the Navy, which worked out well.

So I believe that different simulations (models) can be averaged successfully, but if there is no way to validate results with actual performance they are just exercises of bit manipulation and have no predictive value.

They are like financial simulations that predict the market will be up in 50 years, but vary wildly on how it gets there, not very useful for retirement investments that you count on in 20 years. And the financial markets are not nearly as complex as our climate.

Excellent discussion. There has been much discussion of temperature records, but little about the climate models. The key is your statement above, “The real question is CO2 guilty, and how was it proven.”
The basic “evidence” of the guilt of CO2 is the fact that the models cannot duplicate the observed warming using natural causes alone; therefore it must be due to greenhouse gasses. This is stated in the IPCC AR4 report and many other places. There are 3 shortcomings with this argument: 1) the ability of the temperature measurements to accurately measure 0.6 C of warming; 2) the ability of the climate models to model with such accuracy that they can say 0.6 C of warming would be significant; 3) the logic that absence of evidence of natural causes is evidence of human influence is a logical fallacy.

I saw somewhere recently the statement that the test of a science being mature is the ability to predict. The climatology science is simply not mature enough to be able to predict even a few weeks into the future.

Ex Northrop Analysis here. That’s so funny you would call out the Phoenix 6 DOFs. When we built displays for the YF23, I inherited legacy code created for F15’s and Aim120 and Aim9 missile launch envelope (MLE) estimations. What a mess. Just kidding. But there was a lesson there as well. The code developed for the pilot ended up using a 3 DOF, which was good enough to get the job done. Fox 3!

Bayes model averaging is a lot different from the simple averages used by the IPCC. I think Paul Linsay was correct to criticize what the IPCC does.

You’re right, but I didn’t see anything in Paul’s comment about the IPCC; maybe I missed something.

Phillip Bratby (#13) said:

Based on my experience, I would suggest the climate models don’t have any of the characteristics I have just listed and ensemble averaging would be worthless.

I’d agree, I’ve been looking for the sorts of V&V process you describe when it comes to GCMs and have come up mostly empty-handed. In fact, I’ve found passing reference to folks out of the national labs giving talks to the climate boys about solid V&V processes, but so far there seems to be little ‘buy-in’. Or at least they aren’t advertising the fact that they are doing V&V in their public-facing reports.

Tim said

You are not missing anything; V&V is an unknown concept to climate modellers.
The claims of accuracy rest entirely on the models’ ability to reproduce coarse metrics like GMST for current and paleo data.
Massive errors in the regional climate approximations that are used to calculate the coarse metrics are ignored.

One other subtle point that I was surprised to find out: the models do not produce large-scale weather features like storm fronts and cyclones.
Yet we are told these things simulate the ‘physics’ of the earth’s climate system.

Tim said

#20 – They use these “projections” to create nonsense like this: http://globalchange.mit.edu/resources/gamble/
Which claims there is a 99% chance of a temperature rise of 3degC or greater.
So the distinction between a prediction and a projection does not mean much.

The term “projection” is used in two senses in the climate change literature. In general usage, a projection can be regarded as any description of the future and the pathway leading to it. However, a more specific interpretation has been attached to the term “climate projection” by the IPCC when referring to model-derived estimates of future climate.

Forecast/Prediction

When a projection is branded “most likely” it becomes a forecast or prediction. A forecast is often obtained using deterministic models, possibly a set of these, outputs of which can enable some level of confidence to be attached to projections.

Scenario

A scenario is a coherent, internally consistent and plausible description of a possible future state of the world. It is not a forecast; rather, each scenario is one alternative image of how the future can unfold. A projection may serve as the raw material for a scenario, but scenarios often require additional information (e.g., about baseline conditions). A set of scenarios is often adopted to reflect, as well as possible, the range of uncertainty in projections. Other terms that have been used as synonyms for scenario are “characterisation”, “storyline” and “construction”.

Spen said:

…state that these are not predictions but modelling scenarios – that is answers to ‘what if’ hypotheses?

If the ‘what if hypothesis’ were represented to the public as including the significant caveat of ‘if this model has anything at all to do with the real world’, then I’d say you are right. They seem to be making at least some sort of claim on the probability of future climate states conditional on the model(s) and emission scenarios.

Peter of Sydney said

Before we can even attempt to try and get back on the right track (namely the truth) with climate research, we have to get rid of many of the leading “scientists” who corrupt, twist, hide and/or distort the data and findings. If this doesn’t happen then there is no hope getting back on the right track. That should be pretty obvious if one thinks about it. Now, the only way to get rid of those people is to charge them with fraud, and if found guilty put them behind bars. If this doesn’t happen then the previous goal will never be reached. This is clear from the avalanche of revelations showing that AGW in its present form is a hoax and a fraud, and that nothing really has changed since the AGW hoax is continuing everywhere, even in some of our schools. This too should be pretty obvious if one really thinks about it.

To my knowledge, there has been no public criticism of Mann et al’s methods or results by any scientist who subscribes to the AGW Consensus.

In my opinion, the studies described in that publication have no merit. They contain no useful information on the Earth’s climate history.

If hindcasting is important to GCMs, and if work of the quality of Mann et al (2008) is used to produce or verify data to be used in such hindcasting, then…
… then there are problems in the assessment of GCM reliability.

John F. Pittman said

Peter of Sydney, it just takes us back to the 2AR. The reasoning is here at tAV in the Why Yamal Matters posts. I do not believe it is fraud or a hoax. I do think the science needs to be updated and the possibility of confirmation bias eliminated as much as possible from both sides of the issue. The obstinacy of refusing to correct known problems is escalating the heat far beyond what is appropriate. Add that trying to keep sceptics out of the peer review system, while claiming it to be the gold standard, was poorly done. The propagandizing has been shameful. Pulling children into an adult disagreement is most shameful.

Pat Frank said

The implied point in Jeff’s post, and the comments by Phillip Bratby and Hjbange are spot on. That is, the parameterized climate models, despite including the best-available physics, are engineering models rather than fully physics-based models.

Fully physics-based models predict results using physical theory only. Engineering models are parameterized where the physics is unavailable or too complex to solve numerically. Parameter sets are empirically adjusted, and so engineering models can be unphysical. This is acceptable, so long as the engineering model is used within its empirically verified bounds. Fully physics-based models are valid over the entire range of the relevant physical phase space. Engineering-based models are valid only within the bounds for which the parameters have been set and verified by experiment.

With respect to climate, that means GCMs that employ parameter sets validated against the known climate of the 20th century are valid only for examining the climate of the 20th century. Everyone who has used empirically adjusted non-physical models knows that they can quickly go wildly wrong outside of their validated bounds.

Climate models with parameter sets that are adjusted against a known climate with a set of known forcings cannot be validated for extrapolation to other climate regimes, by tests against the validation climate.
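The distinction Pat Frank draws can be illustrated with a deliberately simple toy of my own (it is not from the thread, and nothing about it is climate-specific): a linear "engineering" model fitted by least squares to a quadratic "true" process matches its calibration range well and goes wildly wrong outside it.

```python
# Toy illustration (my own, not from the thread): an empirically fitted
# "engineering" model can match its calibration range closely and still
# fail badly when extrapolated outside the bounds it was tuned on.
def true_process(x):          # stand-in for the real (unknown) physics
    return x ** 2

xs = [0.5, 0.75, 1.0, 1.25, 1.5]          # calibration range
ys = [true_process(x) for x in xs]

# ordinary least-squares fit of the linear model y = a + b*x
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def model(x):
    return a + b * x

err_inside = abs(model(1.0) - true_process(1.0))     # within validated bounds
err_outside = abs(model(10.0) - true_process(10.0))  # far outside them
print(err_inside, err_outside)   # 0.125 inside vs 80.875 outside
```

The fitted line is wrong by about 11% at the center of its calibration range, and by a factor of five at ten times that range, which is the general point: agreement inside the validation window says nothing about behavior outside it.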

As Phillip Bratby also pointed out, ensemble averages of imperfect models produce only a different brand of uncertain results when the full error range of the individual models is unknown. The only valid ensembles are repeated runs of the same model, to explore its error range against a set of known observations.

This has been done in a way, with a few climate models, in what are called “perfect model” experiments. In these, a climate model is used to produce a synthetic climate. The same model is then used to simulate the synthetic climate it just produced. Such experiments involve the “perfect model” because any model is a perfect model of itself.

In these experiments, tiny changes in the initial conditions typically result in rapid decoherence of the model output, relative to the synthetic test climate. “Rapid” means that after about four seasons the correlation is about zero. Such tests are relevant because the observational results that are used to spin up GCMs always have non-zero errors, relative to the “true” physical state of the climate.

So, climate models that cannot yet predict past about 1 year of their own synthetic climate are being used to project the physical state of the real climate, 100 years out. And these projections provide the entire basis for AGW alarm.
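The decoherence Pat Frank describes can be reproduced in miniature. The sketch below is my own stand-in, using the Lorenz-63 system rather than anything a GCM group actually runs: the "model" is the same set of equations with the same parameters as the "truth", differing only by a one-part-in-a-billion error in the initial state, and the two trajectories still part company completely.

```python
# Toy "perfect model" run in the spirit of the experiments described above,
# using the Lorenz-63 system (my choice of stand-in, not what GCM groups
# use): identical equations and parameters, with the initial state
# perturbed by one part in a billion to mimic observation error.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

truth = (1.0, 1.0, 1.0)             # the "synthetic climate"
model = (1.0 + 1e-9, 1.0, 1.0)      # tiny error in the initial conditions
for _ in range(3000):               # 30 model time units
    truth = lorenz_step(truth)
    model = lorenz_step(model)

# Despite being a perfect model of itself, the forecast has lost all
# memory of the true trajectory; the difference is of attractor size.
print(abs(truth[0] - model[0]))
```

The exponential growth of the tiny initial error is the mechanism behind the roughly-four-seasons decorrelation time mentioned above; only the rate differs between this toy and a full GCM.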

John F. Pittman said

Well Bad Andrew, that is why I believe, as Mosher and Fuller said: in context, the emails were worse than we thought. At what point do stubbornness and confirmation bias cross into the fraudulent? I wish it were simple. The problem is that the way they were running the peer review system precluded the checks and balances that would have meant one had to knowingly cross that line. This is part of the problem. I believe Mosher and Fuller touched on this briefly. Truthfully, if it were securities, “hide the decline” would result in probable convictions, and an even more probable loss in a lawsuit. But this is science in academia. The real loss and battle will be about trust. Monbiot got it right. I believe we need to as well. The “fraud” is secondary. The, let us call them, email shenanigans are the means to correct the science, not the ends of what we seek, which should be the best answer.

harry said

I am actually going through AR4. I am curious why the IPCC takes more than 200 years as the half-life of atmospheric CO2. The half-life is crucial for the models, ensuring that once CO2 is emitted, it stays in the atmosphere for a prolonged time. The physical basis for this assumption is very well hidden in the report, mentioned in Chapters 2, 8 and 10. The definition is in Table 2.14, page 213, Chapter 2. It mentions Chapter 10, refers to Joos et al. 2001, and gives the decay of a CO2 pulse as a0 + Sum(i=1..3) ai*exp(-t/taui), with a0 = 0.217, a1 = 0.259, a2 = 0.338, a3 = 0.186, tau1 = 172.9, tau2 = 18.51 and tau3 = 1.186 years. In Joos 2001, page 906, this formula is split into two parts, one for t = 0 to 2 years and one for t beyond 2 years, which uses different parameters than the IPCC. Most striking is the parameter a0. This parameter shows no time-dependent decay, resulting in the presence of 21.7% of a single pulse of CO2 in the atmosphere FOREVER in IPCC models. Strange
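The impulse-response function harry quotes from AR4 Table 2.14 can be evaluated directly; the sketch below is my rendering of that formula (parameter values as given above), and it makes his point about a0 explicit: the coefficients sum to one at t = 0, and as t grows only the non-decaying a0 term survives.

```python
# AR4 Table 2.14 impulse-response fit as quoted above: the fraction of a
# unit CO2 pulse remaining in the atmosphere after t years.
import math

A = [0.217, 0.259, 0.338, 0.186]   # a0, a1, a2, a3
TAU = [172.9, 18.51, 1.186]        # tau1, tau2, tau3 (years)

def fraction_remaining(t):
    """Airborne fraction of a unit CO2 pulse after t years."""
    return A[0] + sum(a * math.exp(-t / tau) for a, tau in zip(A[1:], TAU))

# The coefficients sum to 1 at t = 0; as t -> infinity every exponential
# term vanishes and only a0 = 0.217 remains, i.e. 21.7% of the pulse
# never decays in this parameterization.
for t in (0, 10, 100, 1000):
    print(t, fraction_remaining(t))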

Bad Andrew said

Thank you for the response. I know you egghead scientists (and I mean that affectionately, so don’t get mad) have your own culture and concerns. That is fine, but out here in the streets, we have a big problem with pretty much the entire political class making fraudulent claims on pretty much a perpetual basis. We are way beyond subtleties and debating whether or not somebody crossed some line somewhere. If there was a line at one time, it has been crossed, erased and scrubbed from memory. We don’t need any further enabling of criminals.

hjbange said

Unbiased SETH BORENSTEIN, AP Science Writer says
“A special World Meteorological Organization panel of 10 experts in both hurricanes and climate change — including leading scientists from both sides
— came up with a consensus, which is published online Sunday in the journal Nature Geoscience.”

JAE said

I agree with what JohnWho said in No. 1: The whole AGW issue really boils down to his very short comment. Until the “climate scientists” can explain why the MWP was as warm or warmer than the present (as well as why it got so damn cold during the LIA), the AGW hypothesis (which does not yet qualify as an actual theory) should not be directing public policy and the whole economic system of the world. And this is especially true when the supporters of this really questionable hypothesis refer to those who question it as “deniers,” “flat-earthers,” “scum,” etc. Their insecurity is made very clear by their name-calling and unwillingness to fairly debate the issues!

Ausie Dan said

Peter of Sydney
Hi – you keep on advocating prosecution for fraud.
Now I’m not a lawyer but I fear it is not that simple.
Fraud has a precise legal definition; it may be very difficult to prove, and the definition differs in each country.

However, various US state governments are taking the EPA to court in an endeavour to reverse its finding of harm against CO2.

That may be our best legal hope for the moment.

If these cases succeed then this may provide the necessary evidence,
or at least form the basis for Royal Commission-type investigations (however described in different countries), headed by real judges in various countries.

That would require a change in political power.
That would need a change in public perception.

DeWitt Payne said

The assumption seems to be that the combined oceanic/biosphere carbon reservoir is about four times larger than the atmosphere and there is a concentration dependent equilibrium with constant exchange between the reservoirs. So if you add a slug to the atmosphere and allow it to equilibrate, 80% will end up in the oceanic/biosphere reservoir and 20% in the atmosphere. The geologic carbon cycle time constant is so much larger that for the time scale of a thousand years, it can be ignored.
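The two-reservoir picture DeWitt Payne describes can be checked with a back-of-envelope relaxation, my own toy rather than his actual calculation: an ocean/biosphere reservoir four times the size of the atmosphere, with exchange proportional to the concentration imbalance, splits an added pulse roughly 80/20.

```python
# Back-of-envelope sketch of the two-reservoir assumption above (my toy,
# not DeWitt Payne's calculation): exchange driven by the concentration
# imbalance relaxes an injected pulse to a 20/80 atmosphere/ocean split.
atm_size, ocean_size = 1.0, 4.0      # relative reservoir sizes
atm, ocean = 1.0, 4.0                # start at equilibrium
atm += 1.0                           # inject a unit pulse into the atmosphere

k, dt = 0.5, 0.01                    # exchange rate and time step (arbitrary)
for _ in range(2000):
    flux = k * (atm / atm_size - ocean / ocean_size)
    atm -= flux * dt
    ocean += flux * dt

print(atm, ocean)   # converges to ~1.2 and ~4.8: 20% of the pulse airborne
```

That equilibrium airborne fraction of 0.2 is close to the non-decaying a0 = 0.217 harry flagged above, which suggests where the constant term comes from.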

You were lucky to get to do test range stuff. I lived in the sim, which meant we had to work with the crap data that came from test. Just kidding. But I did get 5 years working with a dozen fighter pilots so I know what after hours was like. I had a couple fun trips to Nellis to get range data, platform stuff, and plenty of stuff from Edwards, flight test. Also some esoteric high-AOA stuff on AC and missiles.

Phillip Bratby said

I agree totally. In the nuclear industry, attempts were made by government labs to develop fully physics-based models using physical theory only (called hands-free models). However, it was soon realised that the models didn’t work in the real world when trying to calculate from first principles, for example, the critical flow through a relief valve. Parameterisation was introduced for all sorts of things (flow, heat transfer etc) and then of course uncertainty in the correlations had to be considered and the models could only be used within the limits of validity of the correlations. Large numbers of calculations were necessary investigating the effects of the correlation uncertainty in order to get a bound on the likely range of uncertainty in any given postulated accident scenario.

Of course all that was in a closed system, where there was a vast array of experimental data to develop the models and further vast array of different experimental data to validate the models, and all within the range of applicability of the correlations.

None of the required experimental data (and uncertainty) exists in the far more complex climate system. We would have been laughed at if we had tried to license a nuclear plant using models of the standard used in climate science (well we wouldn’t even have bothered trying). And yet trillions are projected to be spent on the ‘garbage out’ produced by climate models.

harry said

That would only apply near saturation of the ocean. Since the uptake of CO2 by the ocean is partly an inhomogeneous reaction, with CO2 being bound by Ca2+ to form calcium carbonate, the result would be a constant shift of CO2 to the ocean. It is never wise to hardcode a partitioning coefficient into a formula, and adopt the formula for progressing saturation. Joos et al. 2001 uses an a0 of 0.12935 for t less than 2 years, and an a0 of 0.022936 for times beyond 2 years, not the 0.217 the IPCC uses, which is about 10 times higher.

#32, #38 You guys are funny. Hey, I have a neat, heat-sensitive B-2 coffee cup from Edwards. There it is in the center of the enemy’s target display. You pour in the coffee and the B-2 disappears.

Anyway, modeling-wise, I think the future can include grownup versions of the mesoscale weather models being used, applied to larger areas: 1.6 km resolution and assessment of regional impacts predicted. If one looks at what is possible vs. what CRU et al. did by locking people into a 19th-century, plot-the-new-point-by-hand approach (and keep throwing away the plots until they look right), then the contrast is apparent. The ‘young turk’ CRU modelers merely did the desired graphs on a computer. I’m sure Phil and Keith were impressed. Mann’s methods weren’t physics; more akin to a counterfeiter going for good color reproduction. [Which reminds me of the NASA-GISS email to National Geographic Maps: “Sorry, your colors are off. Too late? Oh well…”]

DeWitt Payne said

It is never wise to hardcode a partitioning coefficient into a formula, and adopt the formula for progressing saturation.

I quite agree. The ocean isn’t a beaker of water in the lab. The biologic carbon pump is quite efficient resulting in a pH in the surface layer significantly higher than the equilibrium value for the partial pressure of CO2 in the air. It makes one wonder if a decimal point wasn’t dropped somewhere when the report was assembled. Considering that the deep ocean contains more than 20 times as much inorganic carbon as the atmosphere, the use of a factor of four for the difference in reservoir size seems too small by nearly an order of magnitude.
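The order-of-magnitude point DeWitt Payne makes reduces to one line of arithmetic; this sketch is my own framing of it, under the assumption of a well-mixed two-reservoir equilibrium:

```python
# Arithmetic behind the comment above (my sketch): for a well-mixed
# two-reservoir system, the equilibrium airborne fraction is 1/(1 + r),
# where r is the (ocean + biosphere) / atmosphere reservoir-size ratio.
def airborne_fraction(ratio):
    return 1.0 / (1.0 + ratio)

print(airborne_fraction(4))    # 0.2   -- consistent with a0 ~ 0.217
print(airborne_fraction(20))   # ~0.048 -- roughly 5%, not ~22%
```

A ratio of 4 reproduces the AR4-style constant term of about 0.2, while the 20-fold deep-ocean inventory implies closer to 5% left airborne, which is the near order-of-magnitude discrepancy being questioned.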

DeWitt Payne said

Going to a smaller grid size for climate models is not just a matter of getting a computer with enough teraflops or even petaflops. You get whole new stability problems, which will likely require yet more unphysical kludges in the software to get the models to run more than a few days without blowing up. The Exponential Growth in Physical Systems threads at Climate Audit discuss this in detail. Here’s a quote from the post at the top of part 1:

If current global atmospheric models continue to use the hydrostatic equations and increase their resolution while reducing their dissipation accordingly, the unbounded growth will start to appear. On the other hand, if non-hydrostatic models are used at these resolutions, the growth will be bounded, but extremely fast with the solution deviating very quickly from reality due to any errors in the initial data or numerical method.

Then there’s the vastly different time scale of the ocean system compared to the atmosphere. An ocean model with realistic physical constants would need to be spun up for thousands of years compared to 100 for the atmosphere.
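The resolution/stability coupling DeWitt Payne describes has a classic textbook miniature, which I sketch here as my own toy (a 1-D explicit diffusion scheme, not a GCM): refining the grid without shrinking the time step to match the stability limit dt <= dx^2 / (2*D) makes the scheme blow up.

```python
# Minimal stability illustration (my toy, not a climate model): an
# explicit scheme for 1-D diffusion is stable only when dt <= dx**2/(2*D).
# Refining the grid while keeping dt fixed violates that limit.
def max_after_steps(nx, dt, steps=20, D=1.0):
    dx = 1.0 / nx
    u = [0.0] * nx
    u[nx // 2] = 1.0                     # initial spike
    for _ in range(steps):
        u = [u[i] + D * dt / dx ** 2 *
             (u[(i + 1) % nx] - 2 * u[i] + u[(i - 1) % nx])
             for i in range(nx)]
    return max(abs(v) for v in u)

coarse = max_after_steps(nx=10, dt=1e-3)   # limit dx**2/2 = 5e-3: stable
fine = max_after_steps(nx=100, dt=1e-3)    # limit dx**2/2 = 5e-5: unstable
print(coarse, fine)                        # fine-grid run has exploded
```

The quoted passage describes the fluid-dynamics analogue of the same trade-off: higher resolution demands either more dissipation (unphysical kludges) or vastly smaller time steps.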

#43 Ya, I agree, DeWitt. New areas of research, very complicated. A lot better than Mann’s MATLAB scripts.

I think ya gotta get some clouds in, somehow. Also, I’m not saying one believes the BIG model, for the reasons you describe. I think ‘they’ should go ahead and try and then say “Not ready for prime time”. Just be honest and say, “meh, we can’t predict but we’ll keep watching”. Sorta like SETI, assume the null hypothesis and if ya get a signal…

edward said

There seems to be a lot of talk about the models. I do not understand how they work myself; however, the workings of one are posted at the GISS site. I have yet to hear a critical review of the design and output of this model. What exactly does it do wrong, and how? Maybe the data is a bit off, but what data is not? Perhaps the models are tweaked (parameterized?) incorrectly, but I have yet to hear someone put forward a better model that hindcasts and forecasts. (I’m excluding Spencer’s simple blog model as that is pretty rough.)
Any comments?
Thanks
Ed

DeWitt Payne said

I have yet to hear someone put forward a better model that hindcasts and forecasts.

It is not necessary for a critic to put forward a better model in order to falsify the model in question. The burden of proof is on the results of a given model, not on the results (or lack of them) of some hypothetical better model.

Maurice J said

Even IF the Computer Models could predict Climate accurately (They cannot) it would still not be proof (Empirical Evidence) of the NULL HYPOTHESIS of ANTHROPOGENIC GLOBAL WARMING PERIOD.
However we cannot deny that there is MANN MADE GLOBAL WARMING with the assistance of JONES BRIFFA HANSEN et al, but the fact that it bears little or no resemblance to REALITY (MWP and LIA to wit) WILL NOT DETER AGW CLIMATE CHEATS.

John F. Pittman said

Edward, in the post there are links to articles or blog pages by proponents and sceptics that indicate the challenges. The biggest is as Pat Frank states at 28. One way of looking at it is that the physics for fluids is based on a continuum, so how small do you take your grid? This is where Browning and Gravel come into play. The real problem for the models is that we have one reality with multiple runs from computers. This is the opposite of the models we know work in fluids. The developers of a model for, say, an aircraft wing were able to compare the model output to many real expressions of the physics in different wings, or, by changing an aspect of a wing, re-model and then see if the model’s output matched the new configuration.

February 22, 2010 at 9:10 am
#32, #38 You guys are funny. Hey, I have a neat, heat-sensitive, B-2 coffee cup from Edwards. There it is in the center of the enemy’s target display. You pour in the coffee and B-2 disappears.

HAHAHA. Those are great cups. The B-2 is a beautiful piece of machinery. When it rolled out I cried like a baby.