The IPCC model simulation archive

In the lead up to the 4th Assessment Report, all the main climate modelling groups (17 of them at last count) made a series of coordinated simulations for the 20th Century and various scenarios for the future. All of this output is publicly available in the PCMDI IPCC AR4 archive (now officially called the CMIP3 archive, in recognition of the two previous, though less comprehensive, collections). We’ve mentioned this archive before in passing, but we’ve never really discussed what it is, how it came to be, how it is being used and how it is (or should be) radically transforming the comparisons of model output and observational data.
First off, it’s important to note that this effort was not organised by the IPCC itself. Instead, it was coordinated by the Working Group on Coupled Modelling (WGCM), an unpaid committee that is part of an alphabet soup of committees, nominally run by the WMO, that try to coordinate all aspects of climate-related research. In the lead up to AR4, WGCM took up the task of deciding what the key experiments would be, what would be requested from the modelling groups and how the data archive would be organised. This was highly non-trivial, and adjustments to the data requirements were still being made right up until the last minute. While this may seem arcane, or even boring, the point I’d like to leave you with is that just ‘making data available’ is the least of the problems in making data useful. There was a good summary of the process in the Bulletin of the American Meteorological Society last month.

Previous efforts to coordinate model simulations had come up against two main barriers: getting the modelling groups to participate and making sure enough data was saved that useful work could be done.

Modelling groups tend to work in cycles. That is, there will be a period of a few years of development of a new model, then a year or two of analysis and use of that model, until there is enough momentum and enough new ideas to upgrade the model and start a new round of development. These cycles can be driven by purchasing policies for new computers, staff turnover, general enthusiasm, developmental delays etc. and until recently were unique to each modelling group. When new initiatives are announced (and they come roughly once every six months), the decision of a modelling group to participate depends on where it is in its cycle. If a group is in the middle of the development phase, it will likely not want to use its last model (because the new one will almost certainly be better), but it might not be able to use the new one either because it just isn’t ready. These phasing issues definitely impacted earlier attempts to produce model output archives.

What was different this time round is that the IPCC timetable has, after almost 20 years, managed to synchronise development cycles such that, with only a couple of notable exceptions, most groups were ready with their new models early in 2004 – which is when these simulations needed to start if the analysis was going to be available for the AR4 report being written in 2005/6. (It’s interesting to compare this with nonlinear phase synchronisation in, for instance, fireflies).

The other big change this time around was the amount of data requested. The diagnostics in previous archives had been relatively sparse – the main atmospheric variables (temperature, precipitation, winds etc.) but not huge amounts extra, and generally only at monthly resolution. This had limited the usefulness of the previous archives because if something interesting was seen, it was almost impossible to diagnose why it had happened without having access to more information. This time, the diagnostic requests for the atmosphere, ocean, land and ice components were much more extensive and a significant amount of high-frequency data was asked for as well (i.e. 6 hourly fields). For the first time, this meant that outsiders could really look at the ‘weather’ regimes of the climate models.

The work involved in these experiments was significant and unfunded. At GISS, the simulations took about a year to do. That includes a few partial do-overs to fix small problems (like an inadvertent mis-specification of the ozone depletion trend), the processing of the data, the transfer to PCMDI and the ongoing checking to make sure that the data was what it was supposed to be. The amount of data was so large – about a dozen different experiments, a few ensemble members for most experiments, large amounts of high-frequency data – that transferring it to PCMDI over the internet would have taken years. Thus, all the data was shipped on terabyte hard drives.

Once the data was available from all the modelling groups (all in consistent netcdf files with standardised names and formatting), a few groups were given some seed money from NSF/NOAA/NASA to get cracking on various important comparisons. However, the number of people who have registered to use the data (more than 1000) far exceeded the number of people who were actually being paid to look at it. Although some of the people who were looking at the data were from the modelling groups, the vast majority were from the wider academic community and for many it was the first time that they’d had direct access to raw GCM output.

With that influx of new talent, many innovative diagnostics were examined. Many, indeed, that hadn’t been looked at by the modelling groups themselves, even internally. It is possibly under-appreciated that the number of possible model-data comparisons far exceeds the capacity of any one modelling center to examine them.

The advantage of the database is the ability to address a number of different kinds of uncertainty – not everything, of course, but certainly more than was available before. Specifically, the uncertainty in distinguishing forced and unforced variability and the uncertainty due to model imperfections.

When comparing climate models to reality the first problem to confront is the ‘weather’, defined loosely as the unforced variability (that exists on multiple timescales). Any particular realisation of a climate model simulation, say of the 20th Century, will have a different sequence of weather – that is, the weather pattern on Jan 31, 1967 in one realisation will be uncorrelated to the weather pattern on Jan 31, 1967 in another realisation, even though each run has the same climate forcing (increases in greenhouse gases, volcanoes etc.). There is no expectation that the weather in any one model will be correlated to that in the real world either. So any comparison of climate models and data needs to estimate the amount of change that is due to the weather and the amount related to the forcing. In the real world, that is difficult because there is certainly a degree of unforced variability even at decadal scales (and possibly longer). However, in the model archive it is relatively easy to distinguish.

The standard trick is to look at the ensemble of model runs. If each run has different, uncorrelated weather, then averaging over the different simulations (the ensemble mean) gives an estimate of the underlying forced change. Normally this is done for one single model and for metrics like the global mean temperature, only a few ensemble members are needed to reduce the noise. For other metrics – like regional diagnostics – more ensemble members are required. There is another standard way to reduce weather noise, and that is to average over time, or over specific events. If you are interested in the impact of volcanic eruptions, it is basically equivalent to run the same eruption 20 times with different starting points, or collect together the response of 20 different eruptions. The same can be done with the response to El Niño for instance.
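The noise-damping effect of ensemble averaging is easy to see with a synthetic sketch (invented numbers throughout – a made-up linear forced trend plus Gaussian ‘weather’, not actual archive output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: a linear forced warming trend plus uncorrelated weather noise
years = np.arange(1900, 2001)
forced = 0.007 * (years - 1900)                # underlying forced signal (deg C)
n_members = 5
ensemble = forced + rng.normal(0.0, 0.15, size=(n_members, years.size))

# Averaging over members damps the weather noise by roughly 1/sqrt(N)
ens_mean = ensemble.mean(axis=0)

noise_single = np.std(ensemble[0] - forced)    # residual weather in one run
noise_mean = np.std(ens_mean - forced)         # residual weather in the mean
print(noise_single / noise_mean)               # should be near sqrt(5) ~ 2.2
```

With regional metrics, the per-member noise is larger relative to the forced signal, which is why more ensemble members are needed for those diagnostics.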

With the new archive though, people have tried something new – averaging the results of all the different models. This is termed a meta-ensemble, and at first thought it doesn’t seem very sensible. Unlike the weather noise, the difference between models is not drawn from a nicely behaved distribution, the models are not independent in any solidly statistical sense, and no-one really thinks they are all equally valid. Thus many of the pre-requisites for making this mathematically sound are missing, or at best, unquantified. Expectations from a meta-ensemble are therefore low. But, and this is a curious thing, it turns out that the meta-ensemble of all the IPCC simulations actually outperforms any single model when compared to the real world. That implies that at least some part of the model differences is in fact random and can be cancelled out. Of course, many systematic problems remain even in a meta-ensemble.
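A minimal sketch of why this can work, with invented numbers (each ‘model’ is taken to be the truth plus its own fixed bias plus noise): to the extent the biases are random across models, they partially cancel in the multi-model mean.

```python
import numpy as np

rng = np.random.default_rng(1)

truth = np.sin(np.linspace(0, 4 * np.pi, 200))   # stand-in for the real climate
n_models = 17
biases = rng.normal(0.0, 0.3, size=(n_models, 1))            # systematic error per model
fields = truth + biases + rng.normal(0.0, 0.25, size=(n_models, truth.size))

rmse = lambda f: np.sqrt(np.mean((f - truth) ** 2))
single = np.array([rmse(f) for f in fields])     # each model on its own
meta = rmse(fields.mean(axis=0))                 # the meta-ensemble mean

print(meta, single.min(), single.mean())         # meta beats the typical model
```

If the biases all had the same sign (a fully systematic error), no cancellation would occur – which is why systematic problems survive the averaging.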

There are lots of ongoing attempts to refine this. What happens if you try and exclude some models that don’t pass an initial screening? Can you weight the models in an optimum way to improve forecasts? Unfortunately, there doesn’t seem to be any universal way to do this despite a few successful attempts. More research on this question is definitely needed.

Note however that the ensemble or meta-ensemble only gives a measure of the central tendency or forced component. They do not help answer the question of whether the models are consistent with any observed change. For that, one needs to look at the spread of the model simulations, noting that each simulation is a potential realisation of the underlying assumptions in the models. Do not, for instance, confuse the uncertainty in the estimate of the ensemble mean with the spread!
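In code, with invented trend numbers, the distinction looks like this – the spread stays put as you add members, while the standard error of the mean shrinks:

```python
import numpy as np

# Invented example: five members' 20th-century warming trends (deg C/century)
trends = np.array([0.55, 0.72, 0.48, 0.66, 0.59])
obs = 0.74                                     # a hypothetical observed trend

spread = trends.std(ddof=1)                    # member-to-member spread (~0.09)
sem = spread / np.sqrt(trends.size)            # uncertainty of the ensemble MEAN

# Consistency of the observation is judged against the spread...
consistent = abs(obs - trends.mean()) < 2 * spread
# ...judging it against 2*sem would wrongly reject it here
wrongly_rejected = abs(obs - trends.mean()) > 2 * sem
print(consistent, wrongly_rejected)            # True True
```

The same observation is comfortably inside the ensemble spread but more than two standard errors from the ensemble mean – conflating the two leads to spurious claims of inconsistency.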

Particularly important simulations for model-data comparisons are the forced coupled-model runs for the 20th Century, and ‘AMIP’-style runs for the late 20th Century. ‘AMIP’ runs are atmospheric model runs that impose the observed sea surface temperature conditions instead of calculating them with an ocean model, optionally using other forcings as well and are particularly useful if it matters that you get the timing and amplitude of El Niño correct in a comparison. No more need the question be asked ‘what do the models say?’ – you can ask them directly.

The test of any comparison is whether it really provides a constraint on the models, and there are plenty of good examples of this. What is ideal are diagnostics that are robust in the models, not too affected by weather, and that can be estimated in the real world – e.g. Ben Santer’s paper on tropospheric trends, or the discussion we had on global dimming trends; the AR4 report is full of more examples. What isn’t useful are short-period and/or limited-area diagnostics for which the ensemble spread is enormous.

CMIP3 2.0?

In such a large endeavor, it’s inevitable that not everything is done to everyone’s satisfaction and that in hindsight some opportunities were missed. The following items should therefore be read as suggestions for next time around, and not as criticisms of the organisation this time.

Initially the model output was only accessible to people who had registered and had a specific proposal to study the data. While this makes some sense in discouraging needless duplication of effort, it isn’t necessary and discourages the kind of casual browsing that is useful for getting a feel for the output or spotting something unexpected. However, the archive will soon be available with no restrictions and hopefully that setup can be maintained for other archives in future.

Another issue with access is the sheer amount of data and the relative slowness of downloading it over the internet. Here some lessons could be taken from more popular high-bandwidth applications. Reducing time-to-download for videos or music has relied on distributed access to the data. Applications like BitTorrent manage download speeds that are hugely faster than direct downloads because you end up getting data from dozens of locations at the same time, from people who’d downloaded the same thing as you. Thus the more popular an item, the quicker it is to download. There is much that could be learned from this data model.

The other way to reduce download times is to make sure that you only download what is wanted. If you only want a time series of global mean temperatures, you shouldn’t need to download the two-dimensional field and create your own averages. Thus for many purposes, automatic global, zonal-mean or vertical averaging would have saved an enormous amount of time.
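Such server-side reductions need not be complicated; for a regular lat-lon grid the key ingredient is just cos-latitude area weighting (a sketch with an idealised field, not real archive data):

```python
import numpy as np

lats = np.linspace(-89, 89, 90)                # cell-centre latitudes (degrees)
nlon = 180
# Idealised temperature field: warm equator, cold poles (K)
field = np.tile((288.0 - 30.0 * np.sin(np.radians(lats)) ** 2)[:, None], (1, nlon))

weights = np.cos(np.radians(lats))             # relative area of each latitude band
zonal_mean = field.mean(axis=1)                # one value per latitude
global_mean = np.average(zonal_mean, weights=weights)

print(round(global_mean, 1))                   # ~278.0: no need to ship the 2-D field
```

The same weighting idea generalises to masked regions (land/ocean splits) and to vertical averages with pressure weights.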

Finally, the essence of the Web 2.0 movement is interactivity – consumers can also be producers. In the current CMIP3 setup, the modelling groups are the producers but the return flow of information is rather limited. People who analyse the data have published many interesting papers (over 380 and counting) but their analyses have not been ‘mainstreamed’ into model development efforts. For instance, there is a great paper by Lin et al on tropical intra-seasonal variability (such as the Madden-Julian Oscillation) in the models. Their analysis was quite complex and would be a useful addition to the suite of diagnostics regularly tested in model development, but it is impractical to expect Dr. Lin to just redo his analysis every time the models change. A better model would be for the archive to host the analysis scripts as well, so that they could be accessed as easily as the data. There are of course issues of citation with such an idea, but these needn’t be insuperable. In a similar way, how many times did different people calculate the NAO or Niño 3.4 indices in the models? Having some organised user-generated content could have saved a lot of time there.
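For instance, a shareable Niño 3.4 script is only a few lines – the index is the SST anomaly averaged over 5°S–5°N, 170°W–120°W (grid, resolution and data all invented here for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
lats = np.arange(-89.5, 90, 1.0)               # assumed 1-degree grid
lons = np.arange(0.5, 360, 1.0)
# 10 years of fake monthly sea surface temperatures (K)
sst = 300.0 + rng.normal(0.0, 0.5, (120, lats.size, lons.size))

# Nino 3.4 box: 5S-5N, 170W-120W (i.e. 190E-240E on a 0-360 longitude grid)
lat_mask = (lats >= -5) & (lats <= 5)
lon_mask = (lons >= 190) & (lons <= 240)
regional = sst[:, lat_mask][:, :, lon_mask].mean(axis=(1, 2))

# Anomaly relative to the monthly climatology: one index value per month
climatology = regional.reshape(-1, 12).mean(axis=0)
nino34 = regional - np.tile(climatology, regional.shape[0] // 12)
```

(In real use, the anomaly base period and any area weighting inside the box would of course need to match the published definition of the index.)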

Maybe some of these ideas (and any others readers might care to suggest), could even be tried out relatively soon…

Conclusion

The diagnoses of the archive done so far are really only the tip of the iceberg compared to what could be done and it is very likely that the archive will be providing an invaluable resource for researchers for years. It is beyond question that the organisers deserve a great deal of gratitude from the community for having spearheaded this.

169 Responses to “The IPCC model simulation archive”

If this coming September doesn’t set another record this year it most likely will in 2009. Having a run of consecutive record melts is very rare so I think we are due for an adjustment year where the melt mightn’t be quite as severe as the past few years, but the unmistakable trend is taking on an increasingly exponential appearance. Loss of sea ice begets further loss as the sun warms the growing expanse of ‘dark’ sea. Multi-year ice is also heavily fractured in composition with each season’s layering differing from the last..so it is anything but a homogeneous mass but rather rarefied crystalline with heaps of surface area exposed to the warming rays of spring and summer. I am not a scientist but have a great interest in all things of a scientific bent and by what I can picture the arctic territory will collapse like a pack of cards..very soon. Thanks Alastair and Ray for giving me further insight. So public education is the best way to change governmental views? The media-watching public I’ve found also tends to become increasingly cynical and war-weary about climate change stuff especially in Australia..the echoes of Ho-Hum are getting louder. What does the media do in that regard? Maybe really include everyone in the street in this fight..which is absolutely true. Make everyone feel important for doing their contributing bit..maybe competitions as to which suburb changes over the most lights to CFLs etc..ideas like that?

The American Denial of Global Warming
(#13459; 58 minutes; 12/12/2007)
Polls show that between one-third and one-half of Americans still believe that there is “no solid” evidence of global warming, or that if warming is happening it can be attributed to natural variability. Others believe that scientists are still debating the point. Join scientist and renowned historian Naomi Oreskes as she describes her investigation into the reasons for such widespread mistrust and misunderstanding of scientific consensus and probes the history of organized campaigns designed to create public doubt and confusion about science.

Jim Cripwell, let me introduce you to the concept of the future tense. In English, the use of the word “will” prior to the clause that contains the verb generally conveys a sense that the events referred to take place in the future. For example, JCH says, “Tell me why I’m wrong in believing the reality of what will happen in September is going to be either a new record, significant, or a nonevent, insignificant?” The “will happen” would imply he is talking about next September, not last September. Since last September represented a record low in sea ice, it can hardly be called a nonevent or insignificant.

World supply of oxygen: over every square foot there is a column of oxygen that weighs about 440 lbs. There is additional oxygen dissolved in water.

Large, but finite. As I pointed out.

We will never ever run out of oxygen,

I agree… we’d be very dead long before that. But ignoring that little detail, and assuming fossil fuel runs out “never ever”, it would take only 11 doublings (30 years apiece) of the current CO2 concentration anomaly of 0.01% to completely replace atmospheric oxygen (assumed at 20%; including ocean-dissolved oxygen is left as an exercise for the reader), i.e., we would be there by the year 2338.

Mean temperature by that time — or a few decades later at equilibrium — would be 48 degs C [suppressing vivid imagery of going to sauna with an oxygen mask on :-) ].

That’s for exponential growth. Linear growth gives us more time, but we’ll still run out in the end. The only way to prevent that, is a sufficiently steep exponential decrease. Which amounts to stopping the use of fossil fuels in finite time.
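For what it’s worth, the back-of-envelope arithmetic checks out (a 2008 start year assumed):

```python
# How many 30-year doublings of a 0.01% CO2 anomaly would it take to
# consume a 20% oxygen reservoir?
anomaly, oxygen, start_year = 0.01, 20.0, 2008   # percent, percent, assumed
doublings = 0
while anomaly < oxygen:
    anomaly *= 2
    doublings += 1
print(doublings, start_year + 30 * doublings)    # 11 2338
```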

which by the way is a renewable resource.

Not on the relevant time scale, a few centuries. It took millions of years to form the current oxygen atmosphere by photosynthesis and deposit the produced organic matter underground, and would take a similar time today to happen again. That’s not remotely an interesting time scale for us.

(BTW are you the same Harold Pierce Jr that can be observed foaming at the mouth on CA and some other forums? Google remembers. How does it feel to have to appear minimally civilized on RC?)

88. Gavin: “New ice is always thinner than multi-year ice”. Quite an assertion, I must say. If there is a thin layer of old ice left after a warm summer, and a new freezing season sets in rapidly, surely the new ice may become thicker than the old.

89 Pekka: The Baltic Sea is so small that its ice cover changes are insignificant in global and hemispheric ice area calculations. Whatever extra CO2 is emitted from Finnish soil this winter is surely compensated by the CO2 not emitted from China and other Asian locations covered with snow.

Liquid water with discharge implies a base temperature of 0C. The UM model only offers a visual impression of how large an area is that warm.

In other work, Ted Scambos discusses the importance of melt ponds to ice shelf collapse. It is worth getting on Google Earth or looking at MODIS (http://modis.gsfc.nasa.gov/index.php) and other sites to see just how many melt ponds are forming on various ice structures. Greenland with Google Earth is a particularly target-rich environment.

Re: Dodo @ 107: “New ice is always thinner than multi-year ice”. Quite an assertion, I must say. If there is a thin layer of old ice left after a warm summer, and a new freezing season sets in rapidly, surely the new ice may become thicker than the old.”

Surely not.

The existing “thin layer of old ice left after a warm summer” will indeed get thicker, but it is, by definition, multi-year ice, not “new” ice.

Re # 74 Dodo: “Is there any reference for the assertion that “new ice” melts faster than old? One could also think that, as old ice is dirtier and more porous than new, it would melt faster.”

As any general oceanography textbook will tell you, new sea ice has a high salt content, which keeps its melting temperature low (near the freezing point of seawater, roughly -1.8 degrees C). As the ice ages, the salt migrates out, leaving relatively pure frozen water, with a melting point closer to that of pure water (0 degrees C). Hence, as the temperature warms in the late spring/early summer, the new ice should start melting first. The surface of old sea ice may well be dirtier, but why do you assume old ice is more porous?

High latitude warming is continuing: http://data.giss.nasa.gov/gistemp/2007/
Whether some areas have experienced very cold weather recently is an issue of just that: weather.
The underlying trend is that of marked warming.

And permafrost is responding eg: http://gsc.nrcan.gc.ca/permafrost/climate_e.php
“Not all permafrost in existence today is in equilibrium with the present climate. Offshore permafrost beneath the Beaufort Sea is several hundred metres thick and was formed when the shelf was exposed to cold air temperatures during the last glaciation. This permafrost is presently in disequilibrium with Beaufort Sea water temperatures and has been slowly degrading.”

Canada is the other side of the world from Eurasia.

It seems to me that you are trying to dismiss Pekka Kostamo’s observations by employing the technique of ‘directed attention’, as employed by conjurers.

Looking at the 2007 anomaly and scanning the history back to 1978, one might well believe that something quite extraordinary happened in 2007. The range of “normal” variability was exceeded very substantially indeed. A few future years will tell if we have passed one of the tipping points, causing a profound change in the processes that govern the melting and re-freezing over this ocean.

By the way, I learned somewhere that water entering via the Bering strait has both lower salinity and higher temperature than the water in the Arctic ocean. I could not find a reason for the low salinity, but it was a measurement result from the buoys there. Obviously this helps the summer melting quite substantially (as seen in the satellite imagery) as the Pacific water stays on top. The shallowness of the Bering strait also helps, as the flow is taken from the warmer upper layers.

How much Arctic sea ice melting there will be is of course also dependent on rather random meteorology, like winds and cloudiness.

Sorry if the question is out of place on this thread, but I was hoping someone has worked on studying the relationship of earthquakes and CC.
Having worked in construction and concrete I have come up with the idea of “crust expansion”.
When concreting it is important to leave expansion joints within the slab to allow for heat expansion, and I can’t see how this does not apply to the earth’s crust. Are fault lines essentially natural expansion joints? If so, the heating of the earth’s crust should indeed increase pressure on these faults and therefore have an effect on increasing quake magnitude and frequency.

Can anyone verify this concept or point me in the direction of such research.

How do you arrive at the figures in your post? Are they derived from Table 6 (page 17 in the PDF), or are calculated in some other way?

[Response: They can be calculated from numbers in the paper (with the possible exception of the clouds number), but I first used the values given in a published table whose original source I can’t put my finger on right now. It is widely available though (i.e. here). – gavin]

Misanthropic #115, climate change tends to affect mostly the Earth’s atmosphere and oceans. Have you noticed that when you dig down, even on a hot summer day, the ground stays cool? The excursions from summer to winter and from day to night will be more than those we experience due to climate change.
However, the problem with climate change is we’re moving the whole shebang hotter, and that has huge effects on the biosphere, agriculture, etc. One area where your intuition might have some validity would be with what will happen to permafrost. We’re looking at big changes there. Hope this helps.

Thanks for all the advice about ice, old and new, although I did not get any references to actual scientific literature on ice. But I notice there is a lot of attitude that compensates for any original quotes. “Any textbook…”

You can be increasingly sure that temperature measurements are accurate. For instance, for years there was a disagreement between the MSU satellite data and ground-based temperature measurements. It’s rather famous; an outside group found a math error; the MSU people confirmed it; the disagreement was due to the math error.

#118 Thanks for your help.
Ray said:
“However, the problem with climate change is we’re moving the whole shebang hotter, and that has huge effects on the biosphere, agriculture, etc.”

The whole shebang hotter must have a slow effect on expansion?
If more heat is trapped then the final resting place for that energy must be the earth’s crust?
The oceans and atmosphere must slowly transfer heat to the crust?
Rocks at and near the surface must heat and expand?
I understand that there are more pressing matters concerning CC that will affect us more and sooner, but there must be some effect.

Misanthropic, the change due to anthropogenic CO2 will be less than the change from winter to summer or day to night. If we don’t see big geologic effects there, we probably won’t see them due to climate. A bigger effect might be due to isostatic rebound as glaciers melt.

I have just found the NASA/GISS temperature anomaly for January 2008, and it is 0.31 C; the lowest monthly anomaly in the 21st century (assuming 2001 is the first year). Do you know Gavin, if what looks like a significant drop since December 2007, has anything to do with colder temperatures in the Arctic?

Do you know Gavin, if what looks like a significant drop since December 2007, has anything to do with colder temperatures in the Arctic?

Well, I’m not Gavin, but a one month drop is not “significant” in any climatological sense. And I would suggest that the cool anomaly has a lot more to do with the current intense La Nina than with Arctic temps (see NOAA SSTs – the big blue bit is a very large cool anomaly in the Pacific).

Hello again. Back in July 2007 I posted on the subject of 3 papers I found interesting: Green & Armstrong, Lockwood & Frohlich, and Archibald. One respondent recorded several criticisms of the Archibald paper, so I decided to study it more closely. As a result, I have produced my own article on a temperature model which combines both CO2 and solar effects, which you can find here. I am now in a position to comment on the aforesaid criticisms, as follows.

– Instead of using the worldwide temperature, from say GISS or Hadley, he chooses a total of five stations. Yes, five, all within several hundred kilometres of each other in the South Eastern United States.

Yes, it’s a good point – the only global series he uses is a satellite record for 28 years. If he wanted to use rural temperatures he should have been able to find a larger set.

– The stations chosen buck the trend of increasing temperatures in the latter half of the twentieth century. The stations chosen indicate lower temperatures in the latter part of the twentieth century, which is not reflected in the vast majority of stations worldwide. Since so few met stations were chosen, one has to wonder if they were chosen so that they would fit Archibald’s argument.

Agreed – though as a later comment points out, these are not actually used in the solar cycle analysis, so their relevance is limited regarding the main thrust of the paper.

– In order to predict the temperature response to the changing solar cycle length, he uses a single temperature station, (De Bilt in the Netherlands). Not even 5 stations. A single station.

That is incorrect (in the Lavoisier Society June 29 2007 version I am looking at). Armagh is also used, and indeed the Cycle 22 / Cycle 23 comparison is graphed against the Armagh data. But that is still only a single station, even if it is likely to be a proxy for a good chunk of Atlantic Ocean given its location. However, use of a single station is not, in this case, evidence of cherry-picking, because there are so few stations to choose from with venerable records.
A more serious error, which I note in my paper, is that the graph Archibald quotes is based on a 1+2+2+2+1 year filter applied to the cycle lengths. This means that a new, long, cycle length can only have 1/8 of the effect which he claims (followed by a further 1/8 next cycle when it shifts under the ‘2’ coefficient). So instead of
0.5*(12-9.6) = 1.2
we would get 0.15, and properly the full 5 cycle filter should be used.
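Numerically, the filter point is just this (taking the 1+2+2+2+1 weights, which sum to 8):

```python
import numpy as np

weights = np.array([1, 2, 2, 2, 1]) / 8.0      # the 1+2+2+2+1 smoothing filter
raw_jump = 0.5 * (12 - 9.6)                    # the claimed 1.2 C response
damped_jump = raw_jump * weights[-1]           # newest cycle carries weight 1/8
print(round(damped_jump, 2))                   # 0.15
```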

– He then decides that the correlation between De Bilt and the solar cycle length is good (but fails to mention the R^2 value, because the correlation is poor). He also cherry-picks data from the complete set of temperature records from the station. This is misleading and wrong. When the full data set is used, the R^2 value for the correlation is only 0.0177.

I cannot comment on that, as I have not looked at the De Bilt data. It is possible that the best filter (I favour 1+1+1) has not been applied.

– Archibald then predicts a reduction in temperature of 1.5C over the next solar cycle. He claims he can do this due to correlation with cycle amplitude, but then presents a graph of solar cycle length to illustrate the correlation. He offers no explanation. He also has used very strange predictions of the solar cycle.

The figures I see in Archibald are 1.2C for a 12-year cycle and 1.6C for a 13-year cycle, definitely based on cycle length not amplitude. As before, because of the filter effect, they should be divided by about 8. Now that we have seen the first sunspot of the new cycle, about 2 years later than expected, Archibald’s estimates for the length of Cycle 23 are looking quite credible.

To conclude, despite the criticisms here of Archibald’s choice of datasets, my article shows that the solar cycle length signal does shine through the global HadCRUT3 data too. But to make best sense of it a gradual trend, probably induced by CO2, needs to be included, and it corresponds to a CO2 doubling sensitivity of 1.4C, well below IPCC estimates. The prediction for mean HadCRUT3 for 2008.0-2019.0 is a modest fall, of 0.15C, from the 1995.0-2006.0 value, followed by further CO2-induced rises.

Here is the summary of the paper, in case you prefer not to follow the link.

This article presents new research in which a model for temperature linearly combines an effect from the lengths of the 3 preceding solar cycles and an effect from carbon dioxide atmospheric concentration. It is applied to about 150 years’ worth of “Armagh” data and of “HadCRUT3” data. In the latter case, two variations of the model (one with CO2 effects delayed by one solar cycle) perform similarly on cycles 10 to 22, and yet give quite different CO2 doubling sensitivities – 1.18C and 1.45C respectively. The CO2 effect is statistically confounded with time, so any Long Term Persistence or emergence from the Little Ice Age would imply overestimation of this parameter. The unbiased standard residual errors are about 0.070C, equating to impressive R^2 values of 0.87, and the estimate of solar cycle length sensitivity is 0.05C for each year of variability (with accumulation of this over 3 cycles). The flat period between Cycle 17 (midpoint 1937) and Cycle 20 (midpoint 1968), which can be a difficult feature for climate models to explain, is quite well modelled here. But Cycle 23 (using data 1995.0-2006.0) poses a problem for the model, suggesting that neither CO2 nor solar effects are principally responsible for the surface warming recorded in that period. Some speculations are made about this, endorsing the climateaudit.org drive for auditing of these records, whilst allowing for the possibility of an anthropogenic effect of unknown source. Used predictively, these models suggest some modest imminent cooling given that Cycle 23 is turning out to be significantly longer than average.

Rich,
I’m afraid I don’t understand some of the motivation for parameters in your model. For instance, why would you delay the contributions of CO2 by a solar cycle? Why do you still insist on a limited dataset? Why do you claim that climate science has trouble with cycles 17-20 or 23? I don’t think these claims can be substantiated if aerosols are included for solar cycle 20. Solar cycle 23 is well within IPCC predictions.

Best summary of the current state of aerosols is in the IPCC summaries and references therein. So, your “1 solar cycle” is in fact an adjustable number with no independent support. You do of course know the quote by von Neumann: “Give me 4 adjustable parameters and I will fit an elephant; five and I will make him wiggle his trunk”.

By limited data, I mean that you seem to place a lot of weight on one or a few stations.

As to solar cycle 24, of course a lot depends on what happens. A change in solar output would of course affect temperature, but would not in any way affect the validity of the models. Volcanic eruptions, ENSO, etc. are other factors. There is no single prediction, but rather a range, and again this is summarized by the IPCC. All other things being equal, the prediction is for more warming.

OK, I’ll look for the aerosol stuff in IPCC – but I’m away for a week so it must wait until then.

I think your quote of von Neumann works to my advantage, as my model only has 3 parameters, whereas I feel sure the GCM models have many more – but any information on that would be welcome.

On the data front, HadCRUT3, like GISS, is an average of thousands of stations worldwide, and though I analyzed Armagh, most of my comments and predictions refer to the global HadCRUT3.

Regarding change in solar output and validity of models, I agree that the structure of the models would remain the same. But with more observations to assess the relative sensitivities of climate to solar and CO2 effects (and these do not have to be the same, because of albedo), the parameters in the models may have to change. And as long as sensitivity to CO2 is positive, as most scientists believe, then, yes, all other things being equal the correct prediction would be for more warming.

But how much? And if Cycle 24 is very weak, then all other things are not equal: will that outweigh the CO2 effect, and if so, for how long and by how much? These are the big questions. The current evidence, based on Jan 07 to Jan 08, suggests a big effect of this slow solar minimum, but there is the difficulty of correctly allowing for El Niño/La Niña.

Rich, there is a huge difference between a parameter fixed by independent data and an adjustable parameter determined by optimizing the fit to the data you are assessing. GCMs mainly feature the former, so the agreement they exhibit with the data is highly significant. If solar cycle 24 is weak, then we have to look at how all the forcers change. CO2 forcing is among the best constrained, so it will likely change the least.

#126: Thanks again for your help, Ray.
I do understand the day-night range; we are talking about shifting the whole shebang hotter. The day-night and winter-summer ranges have had time to settle and form faults over thousands of years.
My understanding of the interior of the Earth is that it is heated by radioactive decay and by heat left over from the accretion that formed the Earth; this should give an even heat that is slowly cooling and shrinking.
I found out that thermal stress was the process that formed the plates in the first place; this must still be at play today.
I could not find an explanation for seismic activity, or for what produces the stress that builds up and is released in a quake. Can you point me in the direction of an explanation?
I have always thought that these seismic waves were the result of thermal stress and a struggle between different layers of rock with different thermal expansion rates. Is this wrong?
I found a recent article that claims the probability of earthquakes is significantly lower in areas of higher crust temperature:
“Earth’s temperature linked to earthquakes”: http://www.physorg.com/news121524857.html

Could a cooler crust contain more stress as its temperature has shifted more from its original temperature?
I have had a look at the way the plates are moving, and it looks to me that the plates on land are generally expanding and pushing together, while plates in the sea, or bordering the cooling oceans, seem to be shrinking or moving apart.

Mis, your postings are full of statements of belief, what “should” or “must” be true — you should, in fact you must, look these things up for yourself.

When you come here, proclaim your belief, and ask for support, all we can do is help you learn how to check what you believe by looking at the published science.

Start by questioning what you are sure must be true.
Particularly when you have no source, things like:

“it looks to me that the plates on land generally are expanding” — well, you can look this up. I’d recommend you stay away from the “Expanding Earth” sites (there’s a religious group that believes in that); stay with mapping and geology science sites, and you can check your belief.

Google Scholar, or the reference desk at your public library if you are near one, will help.

Understanding this isn’t going to help you understand climate change over the scale of decades to centuries.

The reference to improved synchronization in the modeling studies brought to mind the work of Stephen Lansing of the University of Arizona and Santa Fe Institute. His research in the 1980s showed that rice farmers in Bali synchronized their planting and harvest cycles to maximize the benefits of water distribution while minimizing crop losses from pests, all coordinated through the religious system of water temples and ceremonies.

While at the UN climate conference in Bali last December, I wrote a piece in the Jakarta Post that fortuitously ran on the dramatic final day of the meetings, summarizing Lansing’s research and suggesting that decentralized, local knowledge is crucial to climate response.

But even with a long article space I couldn’t put in everything, and one part was Lansing’s more recent thinking about how these kinds of human cultural, technological or scientific cycles draw from natural synchrony. Below is an excerpt from his article on “Complex Adaptive Systems,” Annu. Rev. Anthropol. 2003. 32:183–204, where he ties this theme directly back to climate change. My feeling is that climate modeling synchrony and distributing climate data could well be a key driver in a broader adaptive management strategy as our society struggles to reduce our climate forcing and the disruption of natural synchrony that sustains biodiversity, resilience and ecosystem services:

In the Balinese case, global control of terrace ecology emerges as
local actors strike a balance between two opposing constraints:
water stress from inadequate irrigation flow and damage from rice
pests such as rats and insects. In our computer model, the
solution involves finding the right scale of reproductive
synchrony, a solution that emerges from innumerable local
interactions. This system was deliberately disrupted by
agricultural planners during the Green Revolution in the 1970s.
For planners unfamiliar with the notion of self-organizing
systems, the relationship between watershed-scale synchrony, pest
control, and irrigation management was obscure. Our simulation
models helped to clarify the functional role of water temples,
and, partly as a consequence, the Asian Development Bank dropped
its opposition to the bottom-up control methods of the subaks,
noting that “the cost of the lack of appreciation of the merits of
the traditional regime has been high” (Lansing 1991, pp. 124–25).

An intriguing parallel to the Balinese example has recently been
proposed by ecologist Lisa Curran (1999). Forty years ago Borneo
was covered with the world’s oldest and most diverse tropical
forests. Curran observes that during the El Niño Southern
Oscillation (ENSO), the dominant canopy timber trees
(Dipterocarpaceae) of the lowland forests synchronize seed
production and seedling recruitment. As in the Balinese case,
reproductive success involves countless local level trade-offs, in
this case between competition among seedlings versus predator
satiation. The outcome of these trade-offs is global-scale
synchronized reproductive cycles. But forest management policies
have failed to take into account this vast self-organizing system
(Curran et al. 1999). As Curran explains, “With increasing forest
conversion and fragmentation, ENSO, the great forest regenerator,
has become a destructive regional phenomenon, triggering droughts
and wildfires with increasing frequency and intensity, disrupting
dipterocarp fruiting, wildlife and rural livelihoods” (p. 2188).
As a consequence, the lowland tropical forests of Borneo are
threatened with imminent ecological collapse (L.M. Curran,
personal communication).

Re: #87, Bill Nicholls says: 1. Nowhere was the actual size of the data sets specified, except for ‘very large’ and ‘terabytes’. I’d like to know the size of an individual data set, of a group of same-model sets, and of the whole ensemble of open-access sets.

The entire AR4 collection hosted at PCMDI is approximately 30 TB of data. Individual netCDF files are no larger than 2 GB in size, to accommodate those folks stuck with 32-bit netCDF-3 libraries. Given that “data set” is a rather fuzzy term (does it mean a single netCDF file, all the 20C3M atmospheric monthly-mean netCDF files from a single model, a single realization, and so on), and given the different geographic resolutions used by the different modelling groups, among other factors, I can’t really answer your questions.

2. The reason for question one is that a number of people, myself included, have > 1T local storage and non-trivial amounts of compute capacity, and are already involved in climateprediction.net, about 50,000 active hosts currently. What is the possibility that this cross-model analysis could be written to run under BOINC control and distributed to many individual systems for a substantial increase in run capacity?

As mentioned earlier, there may be restrictions imposed by non-US modeling groups regarding distribution of their data by 3rd parties. Additionally, reasonably good metrics on data accesses are needed to report back to funding agencies, to gain more funding, and distribution via 3rd parties makes that more difficult.

3. Have you considered asking Amazon or Google to host the open part of the datasets for no charge to assist in this process?

I have difficulty understanding some of the reasoning about water vapour being a positive feedback and yet there still being an equilibrium in the future.

I understand that higher temperatures mean higher evaporation and higher specific humidity (without higher relative humidity due to the temperature increase). Because water vapour is a greenhouse gas (an important one), this leads to more greenhouse effect and higher temperatures and higher water vapour and on and on.

However I don’t understand the part where it is said “until a new equilibrium is found”. Why would it stop at all? I mean, what will be the cause for, at a certain point, more humidity not to increase the greenhouse effect, or the increase of the greenhouse effect not to increase the temperature, or the increase of temperature not to increase the humidity? How does the cycle break? How is the new equilibrium found?

Some people have written that at some point the increased evaporation won’t mean increased humidity, thanks to increased rain. Then more temperature wouldn’t mean more water vapour in the atmosphere, and the cycle would break. However, rain means clouds, and some other people argue that increased humidity levels won’t lead to increased cloudy areas. Furthermore, it has not happened so far with the increases we have already experienced in both temperature and humidity: there has been no increase in cloudiness.

So, what could be exactly the cause for the would-be new-found equilibrium? If it is because of rains, will it affect the clouds? How? When?

Thanks,
Nylo.

[Response: You get to a new equilibrium because there are bounding effects – principally the long wave radiation out to space goes like T^4, which means that it eventually goes up faster than the increased impact of water vapour. See here for a little more explanation. – gavin]
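The response above can be illustrated with a toy calculation. If each increment of warming induces a further warming of f times that increment (and f < 1, because outgoing longwave radiation grows as T^4 and eventually outpaces the water vapour effect), the total warming is a convergent geometric series rather than a runaway. The feedback factor below is purely illustrative, not a measured climate value.

```python
# Toy illustration of why a positive water-vapour feedback converges
# rather than running away. An initial forcing warms the surface by dT0;
# each round of feedback adds f times the previous increment (f < 1):
#   dT_total = dT0 * (1 + f + f^2 + ...) = dT0 / (1 - f)
f = 0.5    # assumed feedback factor, purely illustrative
dT0 = 1.0  # initial (no-feedback) warming in C

total, increment = 0.0, dT0
for step in range(60):
    total += increment
    increment *= f  # each round of feedback is weaker than the last

print(round(total, 6))  # partial sums approach dT0 / (1 - f) = 2.0
```

The "new equilibrium" is the limit of this series: each successive round of feedback is smaller, so the increments become negligible even though, strictly, the adjustment never fully stops.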

Thanks a lot for your response, gavin; it was very illuminating and helped me understand much better. It is not a complete equilibrium, but the increases in T become too small to be noticed, so it can be called an equilibrium.

However, this has raised some other doubts for me. After realising that emission scales with T^4, I wondered more about it and found a previous text of yours on this website,

when talking about Earth’s emissivity you wrote: “If you want to put some vaguely realistic numbers to it, then with S=240 W/m2 and \lambda=0.769, you get a ground temperature of 288 K – roughly corresponding to Earth. So far, so good”.

The lambda value can easily be measured experimentally, I guess, but to me it seems that the S=240 W/m2 value is only a chosen value that conveniently gives the well-known Earth average temperature of 288 K. Please correct me if I am wrong. If so, S is not a datum but the result of the formula and of the other data, which can be measured.

Why do I think this?

Because it is incorrect to think that the Earth emits as a blackbody with a temperature equal to the Earth’s average temperature, since the Earth’s average temperature is the mean of a very variable range of temperatures. Let’s put it this way: a blackbody whose temperature is 268 K half of the time and 308 K the other half doesn’t emit, on average, the same amount of watts as a blackbody at the average temperature of 288 K. It emits quite a bit more; in fact, it emits like a blackbody at a constant 290 K. This is because of the T^4 dependence, which means that higher temperatures weigh a lot more in the average emission.
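The arithmetic behind this two-temperature example is easy to check with the Stefan-Boltzmann law (flux proportional to T^4): the effective emitting temperature is the fourth root of the mean of T^4, not the mean of T.

```python
# Checking the two-temperature blackbody example: a body at 268 K half
# the time and 308 K the other half emits like a constant-temperature
# body at the fourth root of the mean of T^4 (Stefan-Boltzmann law).
sigma = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

mean_T = (268.0 + 308.0) / 2.0                   # simple average: 288 K
mean_flux = sigma * (268.0**4 + 308.0**4) / 2.0  # average emitted flux
T_effective = (mean_flux / sigma) ** 0.25        # effective temperature

print(round(mean_T, 1), round(T_effective, 1))   # 288.0 290.1
```

So the claimed ~290 K figure does follow from the stated assumptions: the effective temperature exceeds the arithmetic mean by about 2 K in this (deliberately extreme) case.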

What does this mean? It means that if, as I guess, they are using S=240 W/m2 as the result of the formula for a blackbody at a constant temperature equal to the Earth’s average temperature, they are doing it wrong, because the Earth’s temperature is far from constant and varies a lot in both time and space. Averaging it is wrong. You should integrate the blackbody emission formula over time and space, because of the Earth’s temperature variability in those two dimensions, to get the real value of the emission.

I did some rough calculations, and the result I would expect would be at least about 2-3% higher than those 240 W/m2. So it would be very important for me to know whether the value of 240 W/m2 is something that has actually been measured OR whether it has been obtained from the rest of the data (Tav and lambda), so that the whole thing keeps its consistency. 3% of 240 W/m2 is a lot of watts per square metre; bigger than what is attributed to the CO2 increase, for example. It would force a recalculation of much of the Earth’s energy balance.

My apologies. I got it all wrong by confusing S (solar irradiance) and G (the Earth’s emission) in the formula. S can be measured, no doubt. Therefore either there is a mistake in the calculation of lambda, or the Earth emits only a fraction of what a true blackbody would. It is most likely the second.

Gavin wrote:
“The other way to reduce download times is to make sure that you only download what is wanted. If you only want a time series of global mean temperatures, you shouldn’t need to download the two-dimensional field and create your own averages. Thus for many purposes, automatic global, zonal-mean or vertical averaging would have saved an enormous amount of time.”
…

“A better model would be for the archive to host the analysis scripts as well so that they could be accessed as easily as the data. There are of course issues of citation with such an idea, but it needn’t be insuperable. In a similar way, how many times did different people calculate the NAO or Niño 3.4 indices in the models? Having some organised user-generated content could have saved a lot of time there.”

A lot of this was discussed by folks at PCMDI and elsewhere when the archive was being planned. There are practical issues here:
1. As you know, calculating even a simple thing such as a global average can get involved. Do you include land/ocean or any other masking (say, where obs are available)? It gets more complex if you get into anomalies: what period is your climatology based on, and so on. Indices such as the SOI or NAO (where there are multiple methods of computing the index) are more complicated still, with many “choices” to be made.
My point is that it is not just a “write one script, let it run, serve up the data” process that every analyst would be happy with.

2. As for serving up analysis scripts – surely you know there are issues with distributing software. Which language: Fortran/C/C++/R/S/Python/Ferret/GrADS/GMT? Which version of your code? Do you support it? Can you port it to Windows Vista or AIX x.y or Ubuntu? And, more fundamentally, can you guarantee that there is no security threat or virus/trojan horse if I download it and run it?

3. Another fundamental cultural issue is that many scientists would rather have control over the way the analysis is done than download some “unpublished” (in the peer-review sense) analysis. Heaven forbid you had to retract a paper because you used someone else’s data or code and it had a flaw!

An idea that does have legs (though the technology is not quite there yet!) is whether there is a way to let the user browse through and slice the data, and perform some simple analyses themselves (on the server side), before downloading just the data they need. It has not yet been done because (among other reasons) there needs to be user authentication, as well as a way of knowing (and controlling) what load said analysis will put on the data server. This is being addressed by many folks and does not seem very far away, but it will still suffer because of point 3 above.

[Response: Hi Krishna, Thanks for the comments. I think I can add a few lines. Firstly, there is an unlimited number of possible analyses. Just because you could do the global mean in a few different ways, that shouldn’t prevent you from doing one specific way by default. All you are doing is adding something, and if someone wants to do it differently they can – nothing has been prevented. Storing scripts (in whatever language) is similarly additive – you don’t need to use them. And ask yourself whether it is more likely that a problem will be found more quickly if a script is in an archive or not? People are using the data from many sources with the expectation that it is correct, but with the knowledge that it might be flawed (MSU temperatures, ARGO floats etc.) – that is not unique to model analysis and does not undermine anything. I think if such a system were set up, it would function much better than some may expect. – gavin]
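As a concrete example of the "one specific way by default" idea discussed above, here is a minimal sketch of an area-weighted global mean over a lat-lon field, with the two main choices (latitude weighting and masking) made explicit. The grid and field are synthetic stand-ins, not archive data, and a production version would of course read its grid and mask from the netCDF metadata.

```python
import numpy as np

# Synthetic 5-degree by 2.5-degree grid with a made-up temperature field
# that is warm at the equator and cool at the poles.
lats = np.linspace(-87.5, 87.5, 36)   # cell-centre latitudes
lons = np.linspace(0.0, 357.5, 144)
field = 273.0 + 30.0 * np.cos(np.radians(lats))[:, None] * np.ones((1, lons.size))

# Grid cells shrink toward the poles, so weight each row by cos(latitude).
w = np.cos(np.radians(lats))[:, None] * np.ones((1, lons.size))

# The mask is where the "choices" live: land-only, ocean-only, or only
# where observations exist. Here we simply keep every cell.
mask = np.ones_like(field, dtype=bool)

global_mean = (field * w)[mask].sum() / w[mask].sum()
print(round(global_mean, 2))
```

The point is that the weighting and masking conventions only need to be written down once; an archive-side default like this would not stop any analyst from substituting their own mask or climatology.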

Concerning the issue of variance in the weather data used as input to climate models, and the problem of lack of correlation between them, I am wondering if consideration has been given to constructing fixed weather data models similar to what geographers use in constructing maps. Any map starts with a mathematical datum construct called an ellipsoid. An ellipsoid is a smoothed representation of the shape of the earth. All the coordinates of physical objects are projected onto this ellipsoid, and this is then used as the basis for the various projections (Mercator, conic, etc.) that result in a flat map. There are dozens of different standard ellipsoids (NAD27, NAD83…), perhaps hundreds. Each has different strengths and weaknesses, which are known. Some ellipsoids or datums are better depending on the final use or geographical scope of the final map. Once a final map projection is produced, the underlying datum can be mathematically switched and the different results compared. I am wondering if some system of standard weather datums might be developed that could be easily intercorrelated and referenced when running models?

1. As you know, calculating even a simple thing such as global average can get involved.

As a potential data consumer, I’d point out that some useful reductions are not problematic – regional subsets for example would be assumption-free.

3. Another fundamental cultural issue is that many scientists would rather have control over the way the analysis is done than download some “unpublished” (in the peer-review sense) analysis.

Don’t underestimate the appetite for this data among nonscientists. That will create its own set of cultural issues (e.g. emergence of the model equivalent of surfacestations.org), but on the whole should be positive.

We have been asked to join a review panel consultation for the development of a specification for the assessment of the life cycle greenhouse gas emissions of goods and services in the UK. There are some points I wish to raise, which may be helped by Global Climate Models.

Two examples:

1. When 70 tonnes of CO2 are released into the atmosphere during the construction of a new flat, there is a convention that this has a “similar” effect on the atmosphere as releasing one tonne per year for 70 years, the assumed lifetime of the flat. Is this sensible?

2. When assessing the Global Warming Potential of beef should the methane in the life cycle assessment be measured over 20, 100 or 500 years?

A more general question, which might help in understanding these issues, could be: “If one gigatonne of CO2 is released into the atmosphere now, how much CO2 must be extracted in 20 years’ time to counteract the effect of the initial release?”
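One hedged way to make that question quantitative is with an impulse-response approximation of the carbon cycle, which gives the fraction of a CO2 pulse still airborne after t years. The coefficients below are the Bern-carbon-cycle-style fit quoted in IPCC AR4; they are used here purely as an illustration, and the sketch ignores the fact that extracted CO2 would itself re-equilibrate with the ocean and biosphere.

```python
import math

# Bern-style impulse-response fit as quoted in IPCC AR4 (illustrative):
# airborne fraction(t) = A0 + sum_i A_i * exp(-t / tau_i)
A = [0.217, 0.259, 0.338, 0.186]           # amplitudes (A[0] never decays)
TAU = [float('inf'), 172.9, 18.51, 1.186]  # decay timescales in years

def airborne_fraction(t):
    """Fraction of a CO2 pulse still in the atmosphere after t years."""
    return sum(a * math.exp(-t / tau) if math.isfinite(tau) else a
               for a, tau in zip(A, TAU))

# Of 1 Gt released now, roughly this fraction is still airborne at year
# 20, so a naive answer is that about this much would need extracting.
print(round(airborne_fraction(20.0), 2))  # 0.56
```

On this crude reading, removing roughly 0.56 Gt at year 20 would cancel the airborne remainder of a 1 Gt pulse, though a proper answer would have to track the subsequent re-adjustment of the ocean and land sinks as well.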

I suspect this question is not precise enough. Can anybody help with better ones?