Back to the future

A few weeks ago I was at a meeting in Cambridge that discussed how (or whether) paleo-climate information can reduce the known uncertainties in future climate simulations.

The uncertainties in the impacts of rising greenhouse gases on multiple systems are significant: the potential impact on ENSO or the overturning circulation in the North Atlantic, probable feedbacks on atmospheric composition (CO2, CH4, N2O, aerosols), the predictability of decadal climate change, global climate sensitivity itself, and perhaps most importantly, what will happen to ice sheets and regional rainfall in a warming climate.

The reason why paleo-climate information may be key in these cases is because all of these climate components have changed in the past. If we can understand why and how those changes occurred at those times, that might inform our projections of changes in the future. Unfortunately, the simplest use of the record – just going back to a point that had similar conditions to what we expect for the future – doesn’t work very well because there are no good analogs for the perturbations we are making. The world has never before seen such a rapid rise in greenhouse gases with the present-day configuration of the continents and with large amounts of polar ice. So more sophisticated approaches must be developed, and this meeting was devoted to examining them.

The first point that can be made is a simple one. If something happened in the past, that means it’s possible! Thus evidence for past climate changes in ENSO, ice sheets and the carbon cycle (for instance) demonstrate quite clearly that these systems are indeed sensitive to external changes. Therefore, assuming that they can’t change in the future would be foolish. This is basic, but not really useful in a practical sense.

All future projections rely on models of some sort. Dominant in the climate issue are the large scale ocean-atmosphere GCMs that were discussed extensively in the latest IPCC report, but other kinds of simpler or more specialised or more conceptual models can also be used. The reason those other models are still useful is that the GCMs are not complete. That is, they do not contain all the possible interactions that we know from the paleo record and modern observations can occur. This is a second point – interactions seen in the record, say between carbon dioxide levels or dust amounts and Milankovitch forcing imply that there are mechanisms that connect them. Those mechanisms may be only imperfectly known, but the paleo-record does highlight the need to quantify these mechanisms for models to be more complete.

The third point, and possibly the most important, is that the paleo-record is useful for model evaluation. All episodes in climate history should (in principle) allow us to quantify how good the models are and how appropriate our hypotheses for past climate change are. It’s vital to note the connection though – models embody much data and many assumptions about how climate works, but for their climate to change you need a hypothesis – like a change in the Earth’s orbit, or volcanic activity, or solar changes etc. Comparing model simulations to observational data is then a test of the two factors together. Even if the hypothesis is that a change is due to intrinsic variability, running the model to look for the magnitude of intrinsic changes (possibly due to multiple steady states or similar) is still a test of both the model and the hypothesis. If the test fails, it shows that one or other of the elements (or both) must be lacking, or that the data may be incomplete or mis-interpreted. If it passes, then we have a self-consistent explanation of the observed change that may, however, not be unique (but it’s a good start!).

But what is the relevance of these tests? What can a successful model of the impacts of a change in the North Atlantic overturning circulation or a shift in the Earth’s orbit really do for future projections? This is where most of the attention is being directed. The key unknown is whether the skill of a model on a paleo-climate question is correlated to the magnitude of change in a scenario. If there is no correlation – i.e. if the projections of the models that do well on the paleo-climate test span the same range as those of the models that did badly – then nothing much has been gained. If, however, one could show that the models that did best, for instance at mid-Holocene rainfall changes, systematically gave a different projection, for instance, of greater changes in the Indian Monsoon under increasing GHGs, then we would have reason to weight the different model projections to come up with a revised assessment. Similarly, if an ice sheet model can’t match the rapid melt seen during the deglaciation, then its credibility in projecting future melt rates would/should be lessened.

Unfortunately, apart from a few coordinated experiments for the last glacial period and the mid-Holocene (i.e. PMIP) with models that don’t necessarily overlap with those in the AR4 archive, this database of model results and tests just doesn’t exist. Of course, individual models have looked at a wide variety of paleo-climate events ranging from the Little Ice Age to the Cretaceous, but this serves mainly as an advance scouting party to determine the lay of the land rather than a full road map. Thus we are faced with two problems – we do not yet know which paleo-climate events are likely to be most useful (though everyone has their ideas), and we do not have the databases that allow you to match the paleo simulations with the future projections.

In looking at the paleo record for useful model tests, there are two classes of problems: what happened at a specific time, or what the response is to a specific forcing or event. The first requires a full description of the different forcings at one time, the second a collection of data over many time periods associated with one forcing. An example of the first approach would be the last glacial maximum where the changes in orbit, greenhouse gases, dust, ice sheets and vegetation (at least) all need to be included. The second class is typified by looking for the response to volcanoes by lumping together all the years after big eruptions. Similar approaches could be developed in the first class for the mid-Pliocene, the 8.2 kyr event, the Eemian (last inter-glacial), early Holocene, the deglaciation, the early Eocene, the PETM, the Little Ice Age etc. and for the second class, orbital forcing, solar forcing, Dansgaard-Oeschger events, Heinrich events etc.

But there is still one element lacking. For most of these cases, our knowledge of changes at these times is fragmentary, spread over dozens to hundreds of papers and subject to multiple interpretations. In short, it’s a mess. The missing element is the work required to pull all of that together and produce a synthesis that can be easily compared to the models. That this synthesis is only rarely done underlines the difficulties involved. To be sure, there are good examples – CLIMAP (and its recent update, MARGO) for the LGM ocean temperatures, the vegetation and precipitation databases for the mid-Holocene at PMIP, the spatially resolved temperature patterns over the last few hundred years from multiple proxies, etc. Each of these has been used very successfully in model-data comparisons and has been hugely influential inside and outside the paleo-community.

It may seem odd that this kind of study is not undertaken more often, but there are reasons. Most fundamentally it is because the tools and techniques required for doing good synthesis work are not the same as those for making measurements or for developing models. It could in fact be described as a new kind of science (though in essence it is not new at all) requiring, perhaps, a new kind of scientist. One who is at ease in dealing with the disparate sources of paleo-data and aware of the problems, and yet conscious of what is needed (and why) by modellers. Or additionally modellers who understand what the proxy data depends on and who can build that into the models themselves making for more direct model-data comparisons.

Should the paleo-community therefore increase the emphasis on synthesis and allocate more funds and positions accordingly? This is often a contentious issue since whenever people discuss the need for work to be done to integrate existing information, some will question whether the primacy of new data gathering is being threatened. This meeting was no exception. However, I am convinced that this debate isn’t the zero sum game implied by the argument. On the contrary, synthesising the information from a highly technical field and making it useful for others outside is a fundamental part of increasing respect for the field as a whole and actually increases the size of the pot available in the long term. Yet the lack of appropriately skilled people who can gain the respect of the data gatherers and deliver the ‘value added’ products to the modellers remains a serious obstacle.

Despite the problems and the undoubted challenges in bringing paleo-data/model comparisons up to a new level, it was heartening to see these issues tackled head on. The desire to turn throwaway lines in grant applications into real science was actually quite inspiring – so much so that I should probably stop writing blog posts and get on with it.

The above condensed version of the meeting is heavily influenced by conversations and talks there, particularly with Peter Huybers, Paul Valdes, Eric Wolff and Sandy Harrison among others.

Rod,
Temperature predates atomic interpretations of thermodynamics by hundreds of years. It was known to be a valid concept for gases (where degrees of freedom are largely kinetic), liquids (where things are still largely kinetic, but motion is more restricted), and solids (where motion is mostly vibrational). And the temperature was measured in the same way: insert a “thermometer” and allow it to come into thermal equilibrium with the material you want the temperature for, and read how much the material in the thermometer has expanded. Alternatively, you can measure how much the resistance in a thermistor changes. The thing is that it doesn’t matter whether the energy is kinetic, rotational, vibrational, electronic, etc. What matters is that the thermometer and the medium will exchange energy until they are at the same temperature – that is, until they come into equilibrium. Note that the concept of equilibrium is critical here. Two bodies in contact and in equilibrium will have the same temperature. Likewise, if a system is not in equilibrium – e.g. if equipartition does not apply – then you’ll get ambiguous answers for its temperature.
I have emphasized that temperature is defined as the partial derivative of energy wrt entropy, holding all other variables (e.g. volume, numbers of particles, etc.) constant. This definition actually predates the kinetic interpretation of temperature by several decades. In fact, the only reason the kinetic interpretation was introduced was because the physics of billiard balls is easier to interpret than that of particles with internal degrees of freedom. Where you are running into trouble is in trying to apply a kinetic interpretation of temperature to a single particle/molecule. This quite simply is not valid. Statistical mechanics only works in the limit of large numbers of particles – and that includes any definition of temperature. This is one reason why nanoparticles are such a hot topic – how small does the particle have to get before you start seeing significant departures from normal thermodynamic behavior? The other frontier in stat mech is nonequilibrium stat mech. This is very difficult. Fortunately, planetary atmospheres are sufficiently close to equilibrium that the normal rules of stat mech apply.
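The point that temperature only makes sense for large assemblies can be put in numbers. The sketch below (plain Python; the gas, temperature and sample sizes are chosen purely for illustration) draws Maxwell–Boltzmann velocities for an ideal gas at 300 K: the mean kinetic energy of a large sample converges to (3/2)·k_B·T, while single-particle energies scatter wildly, which is why “the temperature of one molecule” is ill-defined.

```python
import random
import statistics

# Physical constants (SI units)
k_B = 1.380649e-23   # Boltzmann constant, J/K
m_N2 = 4.65e-26      # approximate mass of an N2 molecule, kg

def sample_kinetic_energies(n, T, m, seed=0):
    """Sample translational kinetic energies of n ideal-gas particles at
    temperature T: each velocity component is Gaussian with variance
    k_B*T/m (the Maxwell-Boltzmann distribution)."""
    rng = random.Random(seed)
    sigma = (k_B * T / m) ** 0.5
    energies = []
    for _ in range(n):
        vx, vy, vz = (rng.gauss(0.0, sigma) for _ in range(3))
        energies.append(0.5 * m * (vx**2 + vy**2 + vz**2))
    return energies

T = 300.0  # K

# For a large assembly, the mean kinetic energy approaches (3/2) k_B T ...
big = sample_kinetic_energies(100_000, T, m_N2)
mean_ke = statistics.fmean(big)
print(mean_ke / (1.5 * k_B * T))   # ratio very close to 1

# ... but individual particle energies are all over the place, so a
# "kinetic temperature" of one molecule is not a meaningful quantity.
singles = [sample_kinetic_energies(1, T, m_N2, seed=s)[0] for s in range(5)]
print([round(e / (1.5 * k_B * T), 2) for e in singles])  # widely scattered
```

The same code also makes the nanoparticle remark concrete: shrink `n` and the sample mean itself starts fluctuating noticeably from run to run.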

Ray, Hank re angular momentum: First, Hank, we’re discussing a mass with linear velocity and momentum and not physically tied to a disc-blob or its axis but with angular momentum about that axis. Getting angular momentum from a rotating (about an axis) mass relative to another different axis, ala the swinging bucket of water, is beyond my comprehension, so I need to go one step at a time.

I think we concluded and agreed 1) the bullet has angular momentum ( L ) about its target (a fixed rotatable disc). 2) That L is constant through time and distance from where the bullet was shot (100m from the disc blob say) all the way to the disc. 3) When the bullet strikes the disc (off center) and its applied torque spins the disc, the combined L_subT of the spinning disc with its embedded bullet will be the same as the bullet’s initial L. 4) if the bullet is mis-aimed and misses the target disc, it still has L relative to the disc. And, 5) the mis-aimed L is slightly different from the original L by virtue of a slightly different angle between P and R. The latter two might be in contention….

I contend: The bullet does not have to actually strike (and end up with a physical connection with) its rotatable target to have L about it. It can miss it big. If it can miss the original target and still have L about it, it ought to be able to also miss another different fixed rotatable target significantly distant from the first target (just to make it easier to draw), though in the same fixed unmoving coordinate system. The bullet then would have L_2 about the second target, since it is fired only once, at the same instant(ces) and from its specific position at any instant that it has L_1 about the first target, wouldn’t it? This L about the second “target” is numerically different from the first. This tells me that a single bullet with specific, explicit and inherent linear momentum can have two different numerical angular momentums depending on the relative location of two different targets. What is wrong with this? Can’t have L about target #2? Then how can it have L about target #1, at least if the aim is a bit off?

Finally, if it has two Ls, it’s a trivial jump to 3, 4, 20, 87, 1,000,000, etc.
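The bullet example is easy to check numerically. A minimal sketch (all masses, speeds and target positions are invented for illustration): the same bullet has a different angular momentum about each of two different reference points, and L about any fixed point is conserved along the unforced straight-line flight.

```python
# Angular momentum of a point mass about an arbitrary reference point:
# L = r x p, where r runs from the chosen axis to the mass.
# All numbers below are made up purely for illustration.

def angular_momentum_z(pos, ref, mass, vel):
    """z-component of L for a particle at `pos` moving with velocity
    `vel`, taken about the fixed point `ref` (2-D plane, SI units)."""
    rx, ry = pos[0] - ref[0], pos[1] - ref[1]
    px, py = mass * vel[0], mass * vel[1]
    return rx * py - ry * px

m = 0.01                  # 10 g bullet
v = (900.0, 0.0)          # flying along +x at 900 m/s
bullet = (0.0, 0.0)       # current position

target_1 = (100.0, 2.0)   # slightly off the line of flight (a near miss)
target_2 = (100.0, 50.0)  # well off to the side

L1 = angular_momentum_z(bullet, target_1, m, v)
L2 = angular_momentum_z(bullet, target_2, m, v)
print(L1, L2)  # two different values for the same single bullet

# Because the motion is forceless, L about any *fixed* point does not
# change as the bullet flies: recompute after 0.05 s of flight.
later = (bullet[0] + v[0] * 0.05, bullet[1] + v[1] * 0.05)
print(angular_momentum_z(later, target_1, m, v))  # equals L1
```

So a single bullet carries as many distinct numerical Ls as there are reference axes you care to choose, and each one is separately conserved while no force acts.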

Ray (502), I don’t think we are going to resolve this, but let me ask about just one point for now. You said “… Likewise, if a system is not in equilibrium–e.g. if equipartition does not apply–then you’ll get ambiguous answers for its temperature….”

I thought equipartition was an intramolecular (one) property. If equipartition is off kilter, why would one get an ambiguous temperature, as (you claim — though as you know, I disagree) temperature for a single molecule is a no-op and doesn’t exist? Or did I misread your statement?

ps to my #503: Angular momentum characteristics are actually more obvious than the complicated stuff I got bogged down in. My bad. In a forceless constant frame/coordinate system containing bodies and a fixed observer, a body’s numerical linear momentum never varies no matter what. A body’s numerical (and vector) angular momentum can vary infinitely depending on what axis of rotation the observer chooses — my linearly moving bullet relative to any axis of all those other bodies in the example, or, say, a single cylinder rotating about 1) its centerline, or 2) an axis through its cross-section, or 3) an axis anywhere.

Different thought: can I assume the angular momentum of a photon, as small as it is, has to appear only in the rotation of a molecule after absorption; and likewise has to be taken from rotation when a photon is emitted? Does this somehow (or noticeably) alter the rotational energy pickup of a photon’s energy?

In classical physics, linear momentum (as opposed to angular) is m * v, i.e. the product of mass and velocity. As Einstein showed, photons have no mass but they do have momentum, which is equal to h / wavelength, where h is Planck’s constant.

Thus, when photons collide with molecules, momentum is conserved, and the molecules do recoil.
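To put some (approximate, back-of-envelope) numbers on that recoil, the sketch below computes the momentum of a 15 μm photon – roughly CO2’s main infrared absorption band – and the recoil speed it imparts to a CO2 molecule, compared with a typical thermal speed at 300 K. The molecular mass and band wavelength are round figures, not precise values.

```python
# Recoil of a CO2 molecule absorbing a 15-micron IR photon:
# p = h / wavelength, and momentum conservation gives v = p / m.

h = 6.62607015e-34    # Planck's constant, J s
c = 2.99792458e8      # speed of light, m/s
k_B = 1.380649e-23    # Boltzmann constant, J/K

wavelength = 15e-6    # 15 um, roughly CO2's main IR band
m_co2 = 7.31e-26      # approximate mass of a CO2 molecule, kg

p_photon = h / wavelength       # photon momentum, kg m/s
E_photon = h * c / wavelength   # photon energy, J

v_recoil = p_photon / m_co2     # recoil speed from momentum conservation

# Typical thermal speed at 300 K for comparison: sqrt(3 k_B T / m)
v_thermal = (3 * k_B * 300.0 / m_co2) ** 0.5

print(f"recoil  speed ~ {v_recoil:.1e} m/s")   # fractions of a mm/s
print(f"thermal speed ~ {v_thermal:.0f} m/s")  # hundreds of m/s
```

The recoil works out to well under a millimetre per second against a thermal speed of several hundred metres per second, which is why the photon’s momentum barely perturbs the molecule’s translation even though its energy can fully excite a vibrational mode.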

If two monatomic argon atoms collide, then the result is similar to that of billiard balls. However, if two tri-atomic carbon dioxide molecules collide, their shape makes the result much more complicated. For instance, if they meet head on oxygen to oxygen, then the result will be a stretching oscillation, but if they meet head on oxygen atom to middle carbon atom, the result will be a bending oscillation. If it is a glancing blow, then the result will be a rotation, but in general it will be a combination of all three.

However, when CO2 is only 300 parts per million of the atmosphere, CO2–CO2 collisions are unlikely. CO2 molecules are more likely to collide with diatomic molecules such as O2 and N2. The geometry of the collisions with these molecules will also affect the resulting excitation of the CO2 molecule. It is all very complicated. It is all a matter of quantum mechanics :-(

Alastair, thanks for waking me up. I once knew photons have linear but not angular momentum; must be the senility setting in :-P . This means the photon momentum manifests in translational motion of the molecule, while the photon energy appears in vibrational or rotational molecular energy, except for a teeny amount that has to be shared/added to translation to cover the increased momentum. True?

The photon has very little momentum and so has very little direct effect on the translation (kinetic temperature) of the molecule. What it does have lots of is electromagnetic energy, and this goes into the dipole vibrations of the greenhouse gas molecule. It is only molecular vibrations and rotations which produce an oscillating dipole that can absorb photons.

For instance CO2 is a linear molecule, and rotation about that line as an axis does not produce an oscillating field, but because H2O is bent, then that rotation is IR active. Rotation of the CO2 molecule about the central carbon atom also produces a field, so CO2 has two IR active rotation modes, actually one – doubly degenerate.

Rod, remember that the photon has momentum h/lambda and energy hc/lambda. c is a pretty big number. The photon will also have angular momentum, but since it will not interact much beyond its wavelength (small perpendicular distance), the angular momentum isn’t particularly relevant.
As to equipartition–it does not apply to single molecules, but really to large assemblies of molecules, and if you want to get technical to ensembles (many repetitions) of such assemblies. I think somewhere in my collection of technical books, I have a history of statistical mechanics if you are interested. I’m visiting relatives for my niece’s graduation right now, so I don’t have the title, but it might provide you with a little bit of background on why physicists define things as they do. Energy (or kinetic energy) and temperature are not equivalent concepts. Energy is extensive. Temperature is intensive. Temperature tells us about the flow of energy. It is only when you are dealing with equilibrium systems that you can come up with unambiguous relations. Expecting those to hold for all systems is asking for trouble.

Ray, since neither one of us will budge on the technical/physics rationale, let me try the philosophical: Why can a massless and possibly imaginary entity like a photon easily be ascribed physical characteristics like momentum, angular momentum, velocity, energy, “characteristic” temperature (ala Planck), and polarization, while individual molecules (real entities with real mass and structure, you know), you seem to argue, cannot?

Ray, Alastair, et al: A follow-up curious/interest question, re a photon’s angular momentum (L): What axis is its angular momentum about? Would it be like the equivalent of a single rotating sphere with L about its own axis of spin? Second, is it correct to say that an absorbed photon’s L is conserved and has to manifest itself only in molecular rotation? And if about the/a molecule’s central axis of rotation, is that consistent with the photon’s axis? And, since incredibly small, would the added rotational energy come anywhere near the allowable rotational energy levels (though it would seem to fall within the narrow band(s) of the levels..?? — is there a band around the zeroth level?)? Or is the photon’s L more of a virtual, imaginary, or characteristic quality (nonetheless with effects) and not actually “real” (like electron spin??)? Or does anybody really care?

Rod, relativistically, a photon has momentum because it has energy. Look at the relationship of momentum and energy relativistically – E=hc/lambda, p=h/lambda. And, no, a single photon does not have a temperature.
Also, you are getting wrapped around the axle wrt coordinate transformation. It doesn’t matter where you put the axis. The physics (what happens when two bodies interact) won’t change. In your example of the photon and the molecule, the photon won’t interact unless it passes within a wavelength of the molecule–so the change in angular momentum will be within the quantum mechanical uncertainty of the excited state. Again, which axis you take is not the interesting part–the physics is invariant.

To the main theme, read: “Tipping elements in the Earth’s climate system” (Timothy M. Lenton, Hermann Held, Elmar Kriegler, Jim W. Hall, Wolfgang Lucht, Stefan Rahmstorf, and Hans Joachim Schellnhuber): “Society may be lulled into a false sense of security by smooth projections of global change. Our synthesis of present knowledge suggests that a variety of tipping elements could reach their critical point within this century under anthropogenic climate change. The greatest threats are tipping the Arctic sea-ice and the Greenland ice sheet, and at least five other elements could surprise us by exhibiting a nearby tipping point. This knowledge should influence climate policy, but a full assessment of policy relevance would require that, for each potential tipping element, we answer the following questions: Mitigation: Can we stay clear of ρcrit? Adaptation: Can F̂ be tolerated?”

Seems to me that society “may be lulled into a false sense of security” not so much by “smooth projections” (which, frankly, most powerholders don’t even bother to understand beyond the level of politically correct gobbledegook, insofar as they think it will help them catch a few extra voters) as by a very modern human tendency to automatically ignore whatever doesn’t fit the megalomaniac expectations stemming from the dogmatic and scholastic exercise of economic “science” by the real priesthood of our times, ex media cathedra.

Re #1: Certainly mankind will never be able to do the job of “Laplace’s demon”:

“In the history of science, Laplace’s demon is a hypothetical “demon” envisioned in 1814 by Pierre-Simon Laplace such that if it knew the precise location and momentum of every atom in the universe then it could use Newton’s laws to reveal the entire course of cosmic events, past and future.”

But that doesn’t mean we can’t say anything about climate change past and present. There are lots of possible positions between total determinism and total uncertainty/scepticism.

As for “the free market global economy” (#512): around 50 per cent of the global economy is internal trade/transactions within fewer than a hundred transnational corporations. Not very much “free market” there, except in the media mythologies, but certainly lots of oligopoly/monopoly capitalism and centralistic planning (without democracy, as in the Soviet Union).

I have found your site very informative, despite the technical nature of much of the discussion, even for someone like me with no science since high school (I’m not counting math, economics and statistics).

I would like to commend you for your efforts in dealing with the distractions of the so-called “skeptics”.

While you will clearly never convince them (there comes a point when the distinction between ignorance and wilful ignorance can no longer be overlooked), your patient response to their faux-naif questions and repetitive red-herring-dragging is not to no avail.

It has been less than a month since I started trying to educate myself about climate change.

In this short space of time I have reached the point where even I can spot almost all the inconsistencies and fudges in posts from the gusbobbs of this world, without having to wait for your expert replies. I am sure there are many other interested non-experts in my position.

While dealing with such distractions has no doubt interfered with this site’s function as a clearing house for genuine ideas exchanged between experts in your field, it has also been beneficial in highlighting the paucity of many of the wrong-headed arguments being bandied around.

If nothing else, the quality of one point of view can often be judged by the quality of the arguments ranged against it.