FAQ on climate models

We discuss climate models a lot, and from the comments here and in other forums it’s clear that there remains a great deal of confusion about what climate models do and how their results should be interpreted. This post is designed to be a FAQ for climate model questions – of which a few are already given. If you have comments or other questions, ask them as concisely as possible in the comment section and if they are of enough interest, we’ll add them to the post so that we can have a resource for future discussions. (We would ask that you please focus on real questions that have real answers and, as always, avoid rhetorical excesses).

Quick definitions:

GCM – General Circulation Model (sometimes Global Climate Model) which includes the physics of the atmosphere and often the ocean, sea ice and land surface as well.

Simulation – a single experiment with a GCM

Initial Condition Ensemble – a set of simulations using a single GCM but with slight perturbations in the initial conditions. This is an attempt to average over chaotic behaviour in the weather.

Multi-model Ensemble – a set of simulations from multiple models. Surprisingly, an average over these simulations gives a better match to climatological observations than any single model.

Model weather – the path that any individual simulation will take has very different individual storms and wave patterns than any other simulation. The model weather is the part of the solution (usually high frequency and small scale) that is uncorrelated with another simulation in the same ensemble.

Model climate – the part of the simulation that is robust and is the same in different ensemble members (usually these are long-term averages, statistics, and relationships between variables).

Forcings – anything that is imposed from the outside that causes a model’s climate to change.

Feedbacks – changes in the model that occur in response to the initial forcing that end up adding to (for positive feedbacks) or damping (negative feedbacks) the initial response. Classic examples are the amplifying ice-albedo feedback, or the damping long-wave radiative feedback.
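
The ‘Multi-model Ensemble’ definition above notes that the ensemble mean often beats any single model. A toy calculation shows why that can happen, under the (idealised) assumption that each model’s systematic bias is independent of the others; all numbers here are invented, and 100 members is far more than a real multi-model ensemble, chosen only to make the statistics clean:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 15.0  # hypothetical "observed" climatological value

# Give each toy model its own systematic bias plus some noise.
n_models = 100
simulations = (truth
               + rng.normal(0.0, 1.0, n_models)    # per-model bias
               + rng.normal(0.0, 0.3, n_models))   # per-simulation noise

single_model_errors = np.abs(simulations - truth)
ensemble_mean_error = abs(simulations.mean() - truth)

# If biases are roughly independent, they partially cancel in the mean,
# so the multi-model average beats the typical individual model.
print(ensemble_mean_error < single_model_errors.mean())
```

In reality model errors are correlated (models share code and assumptions), so the cancellation is weaker than in this sketch, but the qualitative effect survives.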

Questions:

What is the difference between a physics-based model and a statistical model?

In statistics, and in many colloquial uses of the term, a ‘model’ often means a simple relationship fitted to some observations: a linear regression of temperature against time, or a sinusoidal fit to the seasonal cycle, for instance. More complicated fits are also possible (neural networks, for instance). These statistical models are very efficient at encapsulating existing information concisely, and as long as things don’t change much they can provide reasonable predictions of future behaviour. However, they aren’t much good for prediction if you know the underlying system is changing in ways that might affect how the original variables interact.
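
As a minimal sketch of the statistical kind of model, here is a sinusoidal fit to a synthetic seasonal cycle. All the numbers are invented for illustration: the fit summarises the existing data very efficiently, but it contains no physics, so it has nothing to say if the underlying system changes.

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(24.0)

# Synthetic "observed" monthly temperatures: an annual cycle plus noise.
obs = 10.0 + 8.0 * np.sin(2 * np.pi * months / 12.0) + rng.normal(0.0, 0.5, 24)

# Statistical model: least-squares sinusoidal fit, expressed via its
# sine and cosine components so the fit is linear in the coefficients.
X = np.column_stack([np.ones_like(months),
                     np.sin(2 * np.pi * months / 12.0),
                     np.cos(2 * np.pi * months / 12.0)])
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
fitted = X @ coef

# The fit encapsulates the existing data well (small residuals)...
print(round(float(np.abs(fitted - obs).max()), 2))
# ...but it says nothing about *why* summers are warm, so it cannot
# anticipate a change in the underlying system.
```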

Physics-based models, on the other hand, try to capture the real physical causes of any relationship, which hopefully are understood at a deeper level. Since those fundamentals are not likely to change in the future, the expectation of a successful prediction is higher. A classic example is Newton’s second law of motion, F=ma, which can be applied in multiple contexts to give highly accurate results completely independent of the data Newton himself had on hand.

Climate models are fundamentally physics-based, but some of the small scale physics is only known empirically (for instance, the increase of evaporation as the wind increases). Thus statistical fits to the observed data are included in the climate model formulation, but these are only used for process-level parameterisations, not for trends in time.

Are climate models just a fit to the trend in the global temperature data?

No. Much of the confusion concerning this point comes from a misunderstanding stemming from the point above. Model development actually does not use the trend data in tuning (see below). Instead, modellers work to improve the climatology of the model (the fit to the average conditions), and its intrinsic variability (such as the frequency and amplitude of tropical variability). The resulting model is pretty much used ‘as is’ in hindcast experiments for the 20th Century.

Why are there ‘wiggles’ in the output?

GCMs perform calculations with timesteps of about 20 to 30 minutes so that they can capture the daily cycle and the progression of weather systems. As with weather forecasting models, the weather in a climate model is chaotic. Starting from a very similar (but not identical) state, a different simulation will ensue – with different weather, different storms, different wind patterns – i.e. different wiggles. In control simulations, there are wiggles at almost all timescales – daily, monthly, yearly, decadally and longer – and modellers need to test very carefully how much of any change that happens after a change in forcing is really associated with that forcing and how much might simply be due to the internal wiggles.

What is robust in a climate projection and how can I tell?

Since every wiggle is not necessarily significant, modellers need to assess how robust particular model results are. They do this by seeing whether the same result is seen in other simulations, with other models, whether it makes physical sense and whether there is some evidence of similar things in the observational or paleo record. If that result is seen in multiple models and multiple simulations, it is likely to be a robust consequence of the underlying assumptions, or in other words, it probably isn’t due to any of the relatively arbitrary choices that mark the differences between different models. If the magnitude of the effect makes theoretical sense independent of these kinds of models, then that adds to its credibility, and if in fact this effect matches what is seen in observations, then that adds more. Robust results are therefore those that quantitatively match in all three domains. Examples are the warming of the planet as a function of increasing greenhouse gases, or the change in water vapour with temperature. All models show basically the same behaviour that is in line with basic theory and observations. Examples of non-robust results are the changes in El Niño as a result of climate forcings, or the impact on hurricanes. In both of these cases, models produce very disparate results, the theory is not yet fully developed and observations are ambiguous.

How have models changed over the years?

Initially (ca. 1975), GCMs were based purely on atmospheric processes – the winds, radiation, and with simplified clouds. By the mid-1980s, there were simple treatments of the upper ocean and sea ice, and cloud parameterisations started to get slightly more sophisticated. In the 1990s, fully coupled ocean-atmosphere models started to become available. This is when the first Coupled Model Intercomparison Project (CMIP) was started. This has subsequently seen two further iterations, the latest (CMIP3) being the database used in support of much of the model work in the IPCC AR4. Over that time, model simulations have become demonstrably more realistic (Reichler and Kim, 2008) as resolution has increased and parameterisations have become more sophisticated. Nowadays, models also include dynamic sea ice, aerosols and atmospheric chemistry modules. Issues like excessive ‘climate drift’ (the tendency for a coupled model to move away from a state resembling the actual climate) which were problematic in the early days are now much minimised.

What is tuning?

We are still a long way from being able to simulate the climate with a true first principles calculation. While many basic aspects of physics can be included (conservation of mass, energy etc.), many need to be approximated for reasons of efficiency or resolution (e.g. the equations of motion need estimates of sub-gridscale turbulent effects, and radiative transfer codes approximate the line-by-line calculations using band averaging), and still others are only known empirically (the formula for how fast clouds turn to rain for instance). With these approximations and empirical formulae, there is often a tunable parameter or two that can be varied in order to improve the match to whatever observations exist. Adjusting these values is described as tuning and falls into two categories. First, there is the tuning in a single formula in order for that formula to best match the observed values of that specific relationship. This happens most frequently when new parameterisations are being developed.

Secondly, there are tuning parameters that control aspects of the emergent system. Gravity wave drag parameters are not very constrained by data, and so are often tuned to improve the climatology of stratospheric zonal winds. The threshold relative humidity for making clouds is often tuned to get the most realistic cloud cover and global albedo. Surprisingly, there are very few of these (maybe a half dozen) that are used in adjusting the models to match the data. It is important to note that these exercises are done with the mean climate (including the seasonal cycle and some internal variability) – and once set they are kept fixed for any perturbation experiment.
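
As a cartoon of this second kind of tuning, here is a toy one-parameter ‘parameterisation’ adjusted to match an observed global albedo of about 0.3. The functional form and all the numbers are invented purely for illustration; the point is only the procedure: one free parameter is scanned against one observed mean quantity, and then held fixed in perturbation experiments.

```python
import numpy as np

# Invented toy "parameterisation": planetary albedo as a decreasing
# function of the threshold relative humidity for cloud formation
# (higher threshold -> fewer clouds -> lower albedo).
def toy_albedo(rh_threshold):
    cloud_fraction = np.clip(1.2 - rh_threshold, 0.0, 1.0)
    return 0.1 + 0.4 * cloud_fraction  # clear-sky + cloud contribution

observed_albedo = 0.30

# "Tuning": scan the single free parameter for the best match to the
# observed mean value.
candidates = np.linspace(0.5, 1.1, 601)
best = candidates[np.argmin(np.abs(toy_albedo(candidates) - observed_albedo))]
print(round(float(best), 3), round(float(toy_albedo(best)), 3))
```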

How are models evaluated?

The amount of data that is available for model evaluation is vast, but falls into a few clear categories. First, there is the climatological average (maybe for each month or season) of key observed fields like temperature, rainfall, winds and clouds. This is the zeroth-order comparison to see whether the model is getting the basics reasonably correct. Next comes the variability in these basic fields – does the model have a realistic North Atlantic Oscillation, or ENSO, or MJO? These are harder to match (and indeed many models do not yet have realistic El Niños). More subtle are comparisons of relationships in the model and in the real world. This is useful for short data records (such as those retrieved by satellite) where there is a lot of weather noise one wouldn’t expect the model to capture. In those cases, looking at the relationship between temperatures and humidity, or cloudiness and aerosols, can give insight into whether the model processes are realistic or not.
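
The zeroth-order comparison can be sketched as a root-mean-square error and mean bias between a model’s monthly climatology and the observed one. The twelve monthly values below are invented for illustration:

```python
import numpy as np

# Invented monthly climatologies of some field (e.g. temperature, deg C).
obs_climatology = np.array([2.1, 3.0, 6.5, 10.2, 14.8, 18.1,
                            20.3, 19.9, 16.0, 11.1, 6.2, 3.0])
model_climatology = np.array([1.5, 2.8, 7.0, 10.9, 15.5, 18.0,
                              19.8, 19.2, 15.1, 10.5, 5.8, 2.2])

diff = model_climatology - obs_climatology
rmse = np.sqrt(np.mean(diff ** 2))   # overall size of the mismatch
bias = np.mean(diff)                 # systematic offset (sign matters)

print(round(float(rmse), 2), round(float(bias), 2))
```

Real evaluations do this field by field, region by region, and season by season, but the diagnostics are of this basic kind.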

Then there are the tests of climate changes themselves: how does a model respond to the addition of aerosols in the stratosphere such as was seen in the Mt Pinatubo ‘natural experiment’? How does it respond over the whole of the 20th Century, or at the Maunder Minimum, or the mid-Holocene or the Last Glacial Maximum? In each case, there is usually sufficient data available to evaluate how well the model is doing.

Are the models complete? That is, do they contain all the processes we know about?

No. While models contain a lot of physics, they don’t contain many small-scale processes that more specialised groups (of atmospheric chemists, or coastal oceanographers for instance) might worry about a lot. Mostly this is a question of scale (model grid boxes are too large for the details to be resolved), but sometimes it’s a matter of being uncertain how to include it (for instance, the impact of ocean eddies on tracers).

Additionally, many important bio-physical-chemical cycles (for the carbon fluxes, aerosols, ozone) are only just starting to be incorporated. Ice sheet and vegetation components are very much still under development.

Do models have global warming built in?

No. If left to run on their own, the models will oscillate around a long-term mean that is the same regardless of what the initial conditions were. Given different drivers, volcanoes or CO2 say, they will warm or cool as a function of the basic physics of aerosols or the greenhouse effect.
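
The point can be illustrated with a zero-dimensional energy-balance model (a drastic simplification of a GCM, with an assumed planetary albedo of 0.3 and no feedbacks): the temperature relaxes to the same equilibrium whatever the starting point, and shifts only when a forcing is imposed.

```python
# Zero-dimensional energy-balance sketch (not a GCM).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3

def equilibrate(T0, forcing=0.0, steps=50000, dt=1e5, heat_capacity=1e8):
    # Step absorbed-minus-emitted energy forward until equilibrium.
    T = T0
    for _ in range(steps):
        absorbed = S * (1 - ALBEDO) / 4 + forcing
        emitted = SIGMA * T ** 4
        T += dt * (absorbed - emitted) / heat_capacity
    return T

cold_start = equilibrate(200.0)   # same equilibrium from a cold start...
warm_start = equilibrate(300.0)   # ...as from a warm start
forced = equilibrate(255.0, forcing=4.0)  # roughly a CO2-doubling-sized forcing

print(round(cold_start, 2), round(warm_start, 2), round(forced - cold_start, 2))
```

With no feedbacks the response to ~4 W/m² is only about 1 K (the Planck response); the larger sensitivities in GCMs come from the feedbacks, not from anything ‘built in’.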

How do I write a paper that proves that models are wrong?

Much more easily than you might think since, of course, all models are indeed wrong (though some are useful – George Box). Showing a mismatch between the model output and the observational data is made much easier if you recall the signal-to-noise issue we mentioned above. As you go to smaller spatial and shorter temporal scales the amount of internal variability increases markedly and so the number of diagnostics which will be different to the expected values from the models will increase (in both directions of course). So pick a variable, restrict your analysis to a small part of the planet, and calculate some statistic over a short period of time and you’re done. If the models match through some fluke, make the space smaller, and use a shorter time period and eventually they won’t. Even if models get much better than they are now, this will always work – call it the RealClimate theory of persistence. Now, appropriate statistics can be used to see whether these mismatches are significant and not just the result of chance or cherry-picking, but a surprising number of papers don’t bother to check such things correctly. Getting people outside the, shall we say, more ‘excitable’ parts of the blogosphere to pay any attention is, unfortunately, a lot harder.
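
The signal-to-noise point is easy to demonstrate: averaging the same ‘internal variability’ over fewer points (a smaller region, a shorter period) leaves a much larger spread, so significant-looking mismatches multiply as you shrink the domain. This sketch uses plain white noise, which is a big simplification of real climate variability, but the scaling is the point:

```python
import numpy as np

rng = np.random.default_rng(42)

def spread_of_means(n_points, n_trials=5000):
    # Spread (standard deviation) of averages of n_points noise values.
    samples = rng.normal(0.0, 1.0, (n_trials, n_points))
    return samples.mean(axis=1).std()

global_30yr = spread_of_means(30 * 100)   # big area, long period
local_5yr = spread_of_means(5)            # small area, short period

# The small/short average has a far larger spread of plausible values,
# so "mismatches" with any particular expectation are easy to find.
print(local_5yr > 10 * global_30yr)
```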

Can GCMs predict the temperature and precipitation for my home?

No. There are often large variations in the temperature and precipitation statistics over short distances because the local climatic characteristics are affected by the local geography. The GCMs are designed to describe the most important large-scale features of the climate, such as the energy flow, the circulation, and the temperature in a grid-box volume (through the physical laws of thermodynamics, dynamics, and the ideal gas law). A typical grid-box may have a horizontal area of ~100×100 km², but the size has tended to decrease over the years as computers have increased in speed. The shape of the landscape (the details of mountains, coastline etc.) used in the models reflects the spatial resolution, hence the model will not have sufficient detail to describe local climate variations associated with local geographical features (e.g. mountains, valleys, lakes, etc.). However, it is possible to use a GCM to derive some information about the local climate through downscaling, since the local climate is affected both by the local geography (a more or less fixed constant) and by the large-scale atmospheric conditions. The results derived through downscaling can then be compared with local climate variables, and can be used for further (and more stringent) assessments of the combined model-downscaling technique. This is, however, still an experimental technique.
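
A minimal sketch of the empirical-statistical flavour of downscaling, with an invented local relationship: fit the link between a large-scale ‘grid-box’ variable and a local station variable over a training period, then apply that relationship to the model’s large-scale output. Real downscaling uses richer predictors and careful validation; this shows only the basic idea.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented training data: local station temperature tracks the
# large-scale grid-box temperature, offset by the local geography.
grid_box_T = rng.normal(10.0, 3.0, 200)                        # predictor
local_T = 0.8 * grid_box_T - 2.0 + rng.normal(0.0, 0.5, 200)   # predictand

# Fit the local-to-large-scale relationship (ordinary least squares).
slope, intercept = np.polyfit(grid_box_T, local_T, 1)

# "Downscale" a new large-scale value taken from a model projection.
downscaled = slope * 12.0 + intercept
print(round(float(slope), 2), round(float(downscaled), 2))
```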

Can I use a climate model myself?

Yes! There is a project called EdGCM which has a nice interface and works with Windows and lets you try out a large number of tests. ClimatePrediction.Net has a climate model that runs as a screensaver in a coordinated set of simulations. GISS ModelE is available as a download for Unix-based machines and can be run on a normal desktop. NCAR CCSM is the US community model and is well-documented and freely available.

464 Responses to “FAQ on climate models”

RodB, 393. You ALWAYS wait for someone else to knock the obvious denialist. Ever thought of getting in there first?

And your “pals” are because whenever you’re talking about being a skeptic, you always talk in the plural. Either you know them or you’re using a verbal technique to spread any blame (“There are others, are you going to call them on this???”) that I’ve been on the receiving end of before. There are two ways to deal with it:

a) Ignore the plurality and make it individual (this is also, oddly, the way to win a fight when you’re one against many: make it one to one)
b) Ask who else they are

Mark, skeptics covers a broad populace. I don’t know them all; I don’t agree totally with all; I might not even like some; I can still refer to skeptics as a collective group (2nd or 3rd person — doesn’t matter). Seems neither complicated nor nefarious to me. What’s your beef?

Now, even in #3 you’re better than the very vast majority of denialists.

Your misalignment is that you don’t approach ALL the data with skepticism. Your denial is that you “still want to be convinced”, which will end when you want it to, not when you’ve been convinced. And your denial shows in appropriating the cloak of skepticism (which is so often done: you NEVER hear someone say “Well, I’m a denialist”, do you?).

My beef is that you should show the same level of skepticism (which at least has a reason of some sort toward it) to the posts trying to show something wrong with AGW.

Do that and the beef is gone.

PS you can refer to skeptics as a collective. Just don’t do that in the same sentence that you call yourself one. Separate the two out and then people can argue with *your* skepticism and not have to deal with “hypothetical skeptics” or with denialists who will use (abuse, if you’re not one of them) your posts as extra attack vectors.

I don’t talk of myself as part of ANY other group. I talk about me and I talk about other groups. I don’t “borrow” the weight of others in my beliefs.

Because when I’m wrong, it is *me* that got it wrong. “But that other guy did it” was a common defence tactic at university when the professor marking a paper asked a student “where did you get that from?”. That’s a BIG part of the reason why more than one person “marks” the papers. Of course, they don’t (or maybe only a couple do); they just lend their weight to the marking of the one who did it, so you can’t complain.

Needless to say, it didn’t work with me. I told the individual what I thought and said “I’ll talk with them too”.

As a scientist and as a world citizen, I fully support RealClimate and I am very grateful to my colleagues for the energy they put in persuading people of the reality of climate change and of the care that most scientists take when interpreting their results. Nonetheless, I would have provided a more balanced answer to some of the FAQs, in particular about the tuning and validation of our GCMs. We do know for example that different GCMs capture the late 20th century observed warming despite the use and/or the computation of different radiative forcings. This is the reason why I am sometimes less enthusiastic than my colleagues about the way we build and evaluate our models. I also feel that climate change is progressing faster than the accuracy of our 21st century climate projections and that recent observations have done at least as much as our modelling activities in convincing people (including many scientists) of the human influence on climate.
I know how dangerous such remarks are (especially in this blog) and my idea is not to provide new arguments to those who do not trust climate models. On the contrary, I wonder if it’s not time for RealClimate in particular and climate scientists in general to consider a new target: instead of trying to convince the blind or dishonest players, we could take action to make governments aware of their duties. We already know enough about climate change to inform important and necessary decisions for adaptation and long-term mitigation. We know enough about models’ strengths and limitations not to challenge their global and long-term (late 21st century) projections on the one hand, while on the other, to be very modest about our ability to rapidly improve the regional details and decadal evolution of our climate scenarios.
This leads me to a surprising conclusion: I wonder if the most useful decision that climate modelers should make after the IPCC got the Nobel prize is not simply to postpone the next IPCC report (AR5), but to inform our governments that no report will be delivered until more ambitious and resolute decisions are made about the reduction of GHG emissions (yes we can !).
Such a break would be also very valuable for the climate modeling community. It would give us time to fully draw on the lessons of the AR4 simulations (which are still being analysed) and to discuss new priorities. As an example, do we really all agree with the idea of seamless prediction (see RealClimate article of October 9th), and how can we reconcile the fact that seasonal predictions show more skill in the Tropics where climate scenarios show more spread? Do we all agree that the development of more complex Earth System Models, including new couplings and feedbacks, is the best solution for issuing more reliable projections and, if so, on which timescales?
In my opinion, IPCC simulations should never become a routine activity (as weather prediction has) because every 5 years we basically achieve the same “prediction”, but with a shorter lead-time. Once again, it is not out of the question that climate is evolving faster than our dynamical understanding of regional climate change. While the window for global decisive action is rapidly closing, climate scientists should not make careless promises about their ability to reduce uncertainties in climate scenarios over the next few years, and thereby provide our governments with excuses to shirk their responsibilities until they know in more detail how fast and adverse the regional impacts of global warming will be (compared to those in other countries).
Hoping that my colleagues of RealClimate will find real questions and no rhetorical excess in my proposition and looking forward to reading your comments.

Mark (404), Man! All this convoluted analysis is making my head hurt. 1) As said before, I have in specific circumstances directly criticized and corrected other skeptics. I don’t know if this counts in your book of my being skeptical of other skeptics or not. (Nor do I particularly care.) 2) I defend my own science assertions, and now and then the assertions of other skeptics if they have the same scientific thought. 3) I occasionally take the side of skeptics as a collective group when they are pigeon-holed and attacked with ad hominem or collective derogatory accusations — which is quite often. 4) I do not even play with “denialist” lest the term gets some credence beyond the egregious flagitious nefarious playground smear and guilt by association of its origin.

If I’m skeptical with one specific aspect of AGW science, I have to be skeptical of ALL of the science??!!? That’s nonsense.

Skeptics are not a monolithic homogeneous group.

It is neither my job, responsibility, nor interest to challenge other skeptics (even though I do from time to time). My interest, and hence onus, is only on the science, understanding as best I can and questioning aspects that don’t seem right — almost all of which has to be directed to AGW scientists and proponents. I pay attention to both proponents and skeptics if they are credible; to neither if they are simply strident. (I’m ignoring the AGW tenet that “skeptic” and “credible” are never to appear in the same sentence…)

Herve (405) writes: “I wonder if the most useful decision that climate modelers should make after the IPCC got the Nobel prize is not simply to postpone the next IPCC report (AR5), but to inform our governements that no report will be delivered until more ambitious and resolute decisions are made about the reduction of GHG emissions (yes we can !).”

Dear Herve -and I know how the French treasure their Nobels, the real ones like de Gennes, Charpak or my dear professor Fert-, you surely must be aware that the Nobel Peace Prize is an award given by members of parliaments, not by scientists.

Your candid comment sounds like a tacit acknowledgement that in spite of this Nobel public relation boost, this worldwide “coup” and the infinite mileage the media was willing to borrow against this asset, the fortune of your paradigm still remains in Nature’s hands more than in governments’…

As for the routine of weather predictions, the Monday referendum at the Cafe du Commerce would suggest this activity is still fraught with high risks… “yes, we can!”, hopefully for all of us, one day you may!

Herve, Your points are well taken. We will certainly never convince everyone–not even every honest man–of the reality of climate change. However, I do think we need to remain aware of the potential for “swiftboating” from anti-science types. We need to remain vigilant and counter disinformation.
There is also the fact that while the evidence for anthropogenic climate change is cogent and unambiguous, we still do not fully understand all of the implications thereof. For this reason, improvements in climate models are crucial to mitigation efforts–if we can’t model the risk, we can’t bound it. So, in my opinion, we must advance on both fronts–confront decision makers with the necessity to address climate change now so we can buy time for the future while at the same time advancing our understanding of the implications of climate change. It is really a problem of mitigation in the face of uncertain and unbounded risk.

> ambitious and resolute decisions are made about the
> reduction of GHG emissions

Given that the world seems to be meeting the Kyoto emission reduction levels so far, one approach would be to _replace_ the idled capacity with clean energy, rather than restarting the old dirty technology whenever the economy begins to revive.

That would mean using the “stimulus” money (borrowed from the grandchildren) for building the new clean tech needed.

I could see bailing out Detroit — with money committed to getting the supply chain recreated for new lightweight efficient best-available vehicles, for a decade, to take the dirty tech off the highway, for example.

Bailing out the electric power industry — with money for distributed solar, even knowing solar’s getting better and cheaper as fast as computers, because investing in the _distributed_ part, a smart network, would be the big cost to pay in advance of better installations. It would need a lot of programmers and a lot of electronics.

Bailing out the cities and the housing industry — with money spent on insulation, even for renters, putting people to work doing what’s needed to make the current housing stock last another 50 years while replacements are being figured out, but at much greater efficiency.

Bailing out the fisheries industry because people need to eat — on the assumption we’re not going to destroy the breeding stock.

Bailing out agribusiness — because continuing to put fossil fuel products (toxic waste, melamine, and the like) onto fields and call it “fertilizer” (per today’s NYT) is a dead end, and must stop.

#411 Rod,
It’s a set of maps (monthly/annual) plotting all heat components (solar, sensible, latent, long-wave (thermal) and net (all summed)).

Since these heat components are so fundamental and must be central variables in all GCMs, I would anticipate that most modellers would have similar such GCM output plots as key diagnostic utilities in their model “toolkits”. I am just not aware of any being published/made available; hence my original question #408.

The map (out of ~130 total) that seems to display as default is the (net, annual) flux, which I guess is the one you are referring to, and gives the “biggest picture” so to speak:

Yes, net land flux is (measured, not assumed) near zero, land having little thermal capacity.

What I find most intriguing is the South:North distribution, with broadly speaking, the Southern oceans appearing to act as a heat sink and North Atlantic/Pacific as dispersers of heat.

I was wondering whether this contributed to the North/South global mean temperature difference of 1.2C (earlier question), and how effectively GCMs model that.

Cumfy, thanks. I find this very interesting. I’m a bit confused…, or reading it wrongly. It says positive flux is net downward which I would take as energy/power entering the surface, which shows a bunch of power entering the S. Pacific and a lot leaving the Arctic — seems backwards from what everyone says. Can you clear this up?

So for all practical purposes the net flux entering/leaving solid terra firma, including Antarctica, is zero with only minor exceptions. I find that fascinating as I would have never guessed. Since there is virtually no latent or sensible heat involved here, this means the sum of incoming solar (not including albedo), outgoing longwave, and incoming (returning) longwave is zero. True?

My next question of wonderment: how on earth (pun intended) are these measurements made with such accuracy and granularity??

PCMDI is leading the development of a global data archive federation to support CMIP5. It needs to be global: conservative estimates of the volumes of data to be produced for CMIP5 are that there will be PB produced in the many modelling centres involved in producing conforming simulations….”

Gavin, it might be worth a mention in the FAQ of how climate models overlap with ocean chemistry models. Maybe with a pointer to those doing that kind of modeling; I really don’t know how much overlap there is.

Notes from a recent conference here. Of course there’s much else out there. Just curious where these overlap.

Given the rate and apparent irreversibility of ocean pH change in only a few decades, will that change assumptions used in some climate models that include ocean plankton species, like LeQuere’s work?

Regarding #405 by Hervé. The proposal that the IPCC withhold its valuable services until the world does something serious about the supposed threat of climate change is amusing. But the post does raise an important point, one that the FAQ does not address. This is the distinction between basic and applied modeling. Doing weather and climate forecasting for practical purposes is applied science, it is not scientific research. Weather and climate forecasters are not scientists, they are like engineers, using science.

In science, forecasting is only done to test hypotheses, or at least to play what-if with exploratory hypotheses. What I do not see happening is climate models being used to test or play with hypotheses, that is, for basic climate science. There don’t seem to be any exploratory models, just applied models. This is a serious gap in the climate research program.

David Wojick,
Wow, you actually raised a relevant point. Once a threat has been demonstrated credible, the role of the model is to bound (from above) the risk it poses. This is quite difficult at this point, since some potential risks posed by climate change (e.g. Ocean acidification/anoxia, etc., global failure of agriculture,…) in effect spell the end of human civilization. In effect, we have unbounded risk. Under such circumstances, the proper risk mitigation strategy is to do whatever is possible without bankrupting the system to limit/avoid the risk while work continues to improve models/understand and pose more stringent upper bounds.
David, your point argues far more for rapid action than it does for delay!

The discussion of feedbacks and fallouts is interesting in that it reveals some of the misunderstandings people have. One in particular I have encountered contends that, yes, CO2 is a greenhouse gas, but that the feedbacks have been inflated to produce a high CO2 sensitivity. People do not seem to understand that CO2 does not have its own set of feedbacks. Rather, most feedbacks tend to be driven by temperature, and the system doesn’t care whether the extra watts are coming from greenhouse gases or increased insolation. If you change the feedbacks significantly, you wind up with a model that fails dramatically–not just with respect to CO2 forcing.

Re:#177/178
Barton Paul Levenson, I know, but I think that the factors I wrote above are more important.

Hank Roberts, I assume that the total amount of nitrogen has been almost constant over the last billion years. So a change in oxygen means a change in total pressure.
I doubt that pterosaurs or other flying animals can be used to estimate the total atmospheric pressure directly. Because drag and lift are changed by total pressure in the same way, the direct effect of oxygen on respiration is more important, I think. Note that birds can currently fly at high altitude.

I still hope for at least a partial answer to my questions in #173. Is there really no publication on these subjects yet? Please look at my questions again. Thanks.

Uli, what have you done since you asked, to try to look these questions up? Where have you looked, what did you read to establish the lack of publications? Whose help did you ask?

I doubt we could have reached 10x CFCs before losing the ozone layer; look up Paul Crutzen’s Nobel Prize speech, it’s online.

If you haven’t found anything in the journals on linearity of the effect of those molecules at 10x concentration — assuming you’ve asked a good reference librarian for help and she’s not been able to find anything to answer you — I’d speculate that nobody’s published on that narrow question because all the other feedbacks at that point would be so complex as to muddy the waters, so to speak. If you haven’t asked a reference librarian, try that approach.

As to your assumption about nitrogen and total pressure, I’d check that first. Same advice on how to check, if you haven’t asked for help beyond us amateur readers on blogs.

How much gas is dissolved in the oceans, for example, and would also have to be eliminated somehow before atmospheric levels change? Remember, even a massive CO2 change would be handled by natural biogeochemical cycling over a long time span; it’s only our 100x-natural rate of increase that’s changing ocean pH. I’d bet the same is true for any other gas: the ocean’s a far greater buffer over long time spans than it can be over the short time span of human change.

My question to this distinguished discussion group is whether anyone has ever modeled building a dam across the Strait of Gibraltar to combat rising sea levels in the Mediterranean?

The Strait is only 14.2km across and such a dam is technically feasible. In a previous Ice Age an ice wall across the Strait dammed the Mediterranean.

I’m sure such a dam would have many unfavourable environmental consequences. However, I wondered whether blocking the hotter Mediterranean sea away from the rest of the World’s oceans would have any favourable effect on world ocean temperatures or would the result be negligible?

[Response: Actually, we did think about an online journal club which could be hosted by different people each week and discuss interesting or controversial papers outside of simply the ones that get a lot of press. Comments anyone? – gavin]

Some of the most interesting posts at RC are ones (especially on ice) that involve discussion on “the latest stuff” which are not well known. This would be a good aside to tutorials on stuff which are well known or on spending too much time addressing nonsensical talking points.


I would definitely like to see something like that. One of RC’s great contributions is opening science literature to the public outside the normal scientific circles.

Maybe writing something like the editor’s choice section in the Journal Science where the findings of interesting papers are summarized and then adding some commentary about the paper. This would be very interesting and useful. The commentary could discuss the merits of the paper or how it changes what is known in the area discussed.

Hank – thanks for the response. I did a Google search on damming the Strait of Gibraltar and found a few hits. However, I couldn’t get a sense of whether it is a seriously accepted idea or one on the fringes of science (or whether it has ever been seriously studied or funded)?

My question was prompted as last week I attended Dr James Hansen’s evidence session to the UK Environmental Audit Committee. A friend had told me that Hansen was ‘a legend’ and the meeting was open to the public. It was not well attended (circa 20 people), nor well reported in the media. The general public’s and politicians’ attention is elsewhere, on the recession… I was frustrated in the meeting at how all the politicians’ questions related to measuring CO2 emissions and whether the IPCC process could be improved. Very few questions were about solutions.

People have now generally accepted scientific theories on global warming, and I, for one, would like to see more research/studies on solutions and less research on measuring the problem in greater and greater detail and on IPCC reports. Simply measuring the problem and then proposing increasingly unrealistic targets for globally co-ordinated CO2 reduction doesn’t seem to be enough. Is there a plan B? Perhaps Realclimate.org should open a subgroup on fringe theories to give some small airtime to the other theories/solutions being considered (such as ocean mirrors, etc.).

Thanks, I found that link hilarious! :-) What verse do you think the world is currently up to?

I agree that draining the Great Lakes would be easier than damming the Strait of Gibraltar (as you say, fewer countries are involved). I would put them in the following order of difficulty:

1. draining the Great Lakes
2. convincing the nations bordering the Mediterranean to build a dam in the Strait of Gibraltar
3. achieving a scenario where all the countries in the world go carbon neutral.

“Atmospheric oxygen shows a major broad late Paleozoic peak with a maximum value of about 30% O2 in the Permian, a secondary less-broad peak centered near the Silurian/Devonian boundary, variation between 15% and 20% O2 during the Cambrian and Ordovician, a very sharp drop from 30% to 15% O2 at the Permo-Triassic boundary, and a more-or less continuous rise in O2 from the late Triassic to the present.”

Thanks, I can see why they haven’t made a movie out of Peter Ward’s theories – too depressing!

I’m still not convinced that geo-engineering, in conjunction with doing everything else achievable, won’t help the climate.

For example, it might well be that a properly funded modelling study suggests the dam only needs to stay closed for five years to have a positive effect on ocean temperatures. After that you just open it. That’s the beauty of modern dams – they can be opened or closed at the press of a button. If at any time climate scientists dislike the results the dam is having, they just press the open button. How many disruptive interventions can be switched off that easily?

The models account for the temperature anomalies in a largely convincing way (within 0.1°C or so), and as stated above they are not tuned to achieve this. Yet they can have very different implied climate sensitivities.

The question is do they give the absolute temperatures to the same accuracy?

This is a bit of a tricky question, as it is possible to be confident of the temperature-record anomalies (HadCRUT3, etc.) but not be certain of the precise absolute global-average baseline temperature to the same accuracy.
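A minimal sketch of why anomalies can agree while absolute baselines differ (all the numbers here are made up purely for illustration):

```python
# Two hypothetical models share the same interannual variation but
# differ in absolute global-mean temperature by almost a degree.
signal = [0.0, 0.1, 0.3, 0.2, 0.5]       # common variation, in K
model_a = [14.0 + s for s in signal]     # absolute temps, model A
model_b = [13.2 + s for s in signal]     # absolute temps, model B

def anomalies(series):
    # Anomaly = departure from the series' own climatological mean.
    base = sum(series) / len(series)
    return [round(t - base, 6) for t in series]

# The anomaly series agree exactly even though the absolute values
# never do -- which is why anomalies, not absolute temperatures,
# are the natural quantity for model/observation comparison.
assert anomalies(model_a) == anomalies(model_b)
```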

Thanks for this FAQ, very useful & informative. One naive question I have: How is a “global average temperature” measured/calculated/estimated? Is it a theoretical temp. that the atmosphere would have for a given heat content, assuming thermodynamic equilibrium? A time-integrated statistical mean of weather station measurements? Or … ??

Scott M. (445) — AFAIK the surface temperature products use monthly weather reports from ground stations and something similar from ship reports of SSTs. These are then weighted by area and then averaged.

Determining troposphere temperatures from satellite data is much more complex.
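The ground-station averaging step described above can be sketched as an area-weighted mean. This is a toy illustration, not any product’s actual algorithm: the station values are invented, and the simple cos(latitude) weighting (cell area on a regular lat/lon grid shrinks toward the poles) stands in for the real gridding procedures.

```python
import math

# Toy global-mean temperature: values weighted by the area they
# represent, proportional to cos(latitude) on a regular grid.
def global_mean(temps_by_lat):
    """temps_by_lat: list of (latitude_deg, temperature_C) pairs."""
    weights = [math.cos(math.radians(lat)) for lat, _ in temps_by_lat]
    total = sum(weights)
    return sum(w * t for w, (_, t) in zip(weights, temps_by_lat)) / total

# Warm tropics, cold poles; the unweighted mean over-counts the poles:
obs = [(80.0, -20.0), (40.0, 10.0), (0.0, 27.0),
       (-40.0, 12.0), (-80.0, -15.0)]
unweighted = sum(t for _, t in obs) / len(obs)  # 2.8 C
weighted = global_mean(obs)                     # noticeably warmer, ~13 C
```

The gap between the two numbers shows why the area weighting David mentions is not an optional refinement.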

I wondered whether it had been studied or dismissed as an idea using more up-to-date climate models (or different types of dams). Looking at it very simplistically, you would think that insulating the rest of the world’s oceans from the hotter Mediterranean Sea should have some helpful cooling effect on ocean temperatures. However, I have no idea what the follow-on consequences might be for ocean currents, salt levels, rainfall, etc.

A very simple “climate model” is to divide up the atmosphere per person and look at the effect that burning a ton of carbon has on one share of atmosphere. The atmosphere weighs about 6 million gigatons (billion tons) so that is a million tons per person. If I burn a ton of carbon (MW = 12) I increase the concentration of CO2 in my hypothetical sealed share of air (MW = 29) by 2.4 ppm (by volume), correct?

I think this simplified “climate model” is essential. It illustrates the possibility that all the other factors, like carbon sinks (oceans, algae and plants), are being overwhelmed by our burning of fossil fuels.
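The arithmetic in that back-of-the-envelope model can be checked directly, using the same round numbers quoted above (one million tonnes of air per person, molecular weights 12 for carbon and 29 for air):

```python
# Burn one tonne of carbon into a sealed one-million-tonne share of air.
MW_C = 12.0    # g/mol: each mole of C burned yields one mole of CO2
MW_AIR = 29.0  # g/mol: mean molecular weight of air, as quoted above

TONNE_IN_GRAMS = 1e6
mol_co2 = 1.0 * TONNE_IN_GRAMS / MW_C        # moles of CO2 produced
mol_air = 1e6 * TONNE_IN_GRAMS / MW_AIR      # moles in the air share

# Mole fraction equals volume fraction for (ideal) gases:
ppm_by_volume = mol_co2 / mol_air * 1e6
print(round(ppm_by_volume, 2))   # 2.42 -- so "about 2.4 ppm" checks out
```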

Click under the paper, where it states the number of later papers citing it (don’t take that as the absolute number, various citation tracking services exist and criteria vary, but it’s a good start — a reference librarian can give you a far better answer than I can).

(PS, for going beyond this one example, on how to find subsequent cites generally — I tried various search strings in Scholar before hitting on the one that pulled up the paper; if you are trying to find a paper in Scholar and your first search doesn’t work, usually some fragment of its title, authors, or journal reference will locate it — just experiment.)