Past reconstructions: problems, pitfalls and progress

Many people hold the mistaken belief that reconstructions of past climate are the sole evidence for current and future climate change. They are not. However, they are very interesting and useful for all sorts of reasons: for modellers to test out theories of climate change, for geographers, archaeologists and historians to examine the impact of climate on past civilizations and ecosystems, and for everyone to get a sense of what climate is capable of doing, how fast it does it and why.

As a small part of that enterprise, the climate of the medieval period has received a very high (and sometimes disproportionate) profile in public discourse – due in no small part to the mistaken notion that it is an important factor in the attribution of current climate change. Its existence as a period of generally warmer temperatures (at least in the Northern Hemisphere) than the centuries that followed is generally accepted. But the timing, magnitude and spatial extent are much more uncertain. All previous multiproxy reconstructions indicate a Northern Hemisphere mean temperature below current levels, though possibly on a par with the mid-20th century. But there are only a few tenths of a degree in it, and so the description that it is ‘likely’ to have been warmer now (rather than ‘virtually certain’) is used to express the level of uncertainty.

A confounding factor in discussions of this period is the unfortunate tendency of some authors to label any warm peak prior to the 15th Century as the ‘Medieval Warm Period’ in their record. This leads to vastly different periods being similarly labelled, often giving a misleading impression of coherence. For instance, in a recent paper it was defined as 1200-1425 CE, well outside the ‘standard’ definition of 800-1200 CE espoused by Lamb.

Since a new ‘reconstruction’ of the last 2000 years from Craig Loehle is currently doing the rounds, we thought it might be timely to set out what the actual issues are in making such reconstructions (as opposed to the ones that are more often discussed), and how progress is being made despite the pitfalls.

The Loehle paper was published in Energy and Environment – a journal notable only for its rather dubious track record of publishing contrarian musings. The reconstruction itself is based on a network of 18 records that are purportedly local temperature proxies, and we will use those as examples in the points below. More discussion of this paper is available here (via the wayback machine).

Issue 1: Dating

Nothing is more important than chronology in paleo-climate. If you can’t line up different records accurately, you simply can’t say anything about cause and effect or coherence or spatial patterns. Records where years can be accurately counted are therefore at a premium in most reconstructions. These encompass tree ring width, density and isotopes, some ice cores, corals, and varved lake sediments. The next most useful set of data are sources that have up to decadal resolution but that can still be dated relatively accurately. High-resolution ocean sediment cores can sometimes be found that fit this, as can some cave (speleothem) records and pollen records etc.

There are nonetheless more problems with the decadal data – they may have been smoothed by non-climate processes, and their dating may be off by a decade or two. But there are enough records that are widely enough dispersed to make them a useful adjunct in a reconstruction that hopes to capture decadal to multi-decadal variability.

Using data that has significantly worse resolution than that in reconstructions of recent centuries is asking for trouble. The age models tend to have errors in the 100’s of years, and the density of points rarely allows one to reach the modern instrumental period.

For instance, South-Eastern Atlantic ocean sediment data from Farmer et al (2005) (Loehle data series #17) nominally goes up to the present (0 calendar years BP). This is really 1950 due to the convention that years ‘Before Present’ (BP) almost invariably count back from then (some recent papers use BP(2000) to indicate a different convention, but that is always specifically pointed out). However, the youngest actual date for that core is 1053 BP, with a 2-sigma range of 1303 to 946 BP – almost 400 years! That makes this data completely unsuitable for reconstructions of the last 2000 years – which, in all fairness, was certainly not the focus of the original paper.
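For readers who want to do the bookkeeping themselves, the BP convention is easy to encode; a minimal sketch in Python (the helper name is ours, not from any of the papers discussed):

```python
def bp_to_ce(bp_years, reference=1950):
    """Convert an age on the 'Before Present' (BP) scale to calendar
    years CE. By convention 0 BP = 1950 CE; a few recent papers use
    BP(2000), handled here by changing the reference year."""
    return reference - bp_years

print(bp_to_ce(88))                   # 1862 CE
print(bp_to_ce(0))                    # 1950 CE, the standard 'present'
print(bp_to_ce(50, reference=2000))   # 1950 CE under the BP(2000) convention
```

Mistaking a BP(1950) record for BP(2000) shifts every date by exactly 50 years, which is the error discussed below.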

Similar issues arise with data from DeMenocal et al (2000) (Loehle #10) and SSDP-102 (Kim et al, 2004) (Loehle #18). In the first record, the initial data point nominally comes from 88 BP (i.e. 1862 CE), but the youngest dated sample is around 500 BP. In the second, the initial date is closer to the present (1940), but the age model is constrained by only 3 ages over the whole Holocene (and it’s not clear that any are within the last two millennia). So while both records have more apparent resolution than Farmer et al, their use in a reconstruction of recent paleo-climate is dubious.

It should probably be pointed out that the Loehle reconstruction has mistakenly shifted all three of these records forward by 50 years (due to erroneously assuming a 2000 start date for the ‘BP’ time scale). Additionally, the series used by Loehle for the Farmer et al data is not the SST reconstruction at all, but the raw Mg/Ca measurements! Loehle #12 (Calvo et al, 2002) is also off by 50 years, but since it doesn’t start until 1440 CE, its presence in this collection is surprising in any case. The dates on two other ocean sediment cores (Stott et al 2004 – #14 and #15) are thankfully on the correct scale, but are still marginal in terms of resolution (29 and 44 years respectively, and effectively longer still due to bioturbation of the sediments). Neither of them, however, extends beyond the mid-20th Century (end points of 1936 CE and 1810 CE), and so they aren’t much use for looking at medieval-vs-modern differences.

Other dating issues arise if the age model was tuned for some purpose. For longer time scale records, the dates are often tuned to ‘orbital forcing’ periods based on the understanding that precession and obliquity do have strong imprints in many records. However, in doing so, you remove the ability to assess with that record whether the orbital expression is leading or lagging another record. Since reconstructions of recent centuries are often pored over for signs of solar or volcanic forcing, it is crucial not to use those signals to adjust the age model. Unfortunately, the Mangini et al (2005) speleothem record (Loehle #9) was tuned to a reconstruction of solar activity so that the warm periods lined up with solar peaks. This invalidates its use on that age model for any useful reconstruction, since it would be assuming a relationship one would like to demonstrate. If put on a less biased age model, it could be useful however (but see issue 3 as well).

Issue 2: Fidelity

This issue revolves around what the proxy records are really recording and whether it is constant in time. This is of course a ubiquitous problem with proxies, since it is well known that no ‘perfect’ proxy exists, i.e. there is no real-world process known to lead to proxy records that are controlled by temperature and nothing else. This leads to the problem that it is unclear whether the variability due to temperature has been constant through time, or whether the confounding factors (which may be climatic or not) have changed in importance. In the case where the other factors seem to be climatic (d18O in ice cores for instance), the data can sometimes be related to some other large-scale pattern – such as ENSO – and could thus be an indirect measure of temperature change.

In many cases, proxies such as Mg/Ca ratios in foraminifera have laboratory and in situ calibrations that demonstrate a fidelity to temperature. However, some proxies, like d18O which do have a temperature component, also have other factors that affect them. In forams, the other factors involve changes in water mass d18O (correlated to salinity), or changes in seasonality. In terrestrial d18O records, the precipitation patterns, timing and sources are important – more so in the tropics than at high latitudes though.

A more prosaic, but still important, issue is the nature of what is being recorded. Low-resolution data is often not a snapshot in time, but part of a continuous measurement. Therefore the 100-year spaced pollen reconstruction data from Viau et al (2006) (Loehle #13) are not estimates for the mid-point of each century, but are century averages. Linear interpolation between these points will give a series whose century-long means differ from the original values. The simplest approach is to use a continuous step function with each century given its mean, or a spline fit that preserves the average rather than the mid-point value. It’s not clear whether the low-resolution series in Loehle (#4, #5, #6, #10, #13, #14, #15, #17, #18) were treated correctly (though, to be fair, other reconstructions have made similar errors). It remains unclear how important this is.
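A toy example (invented numbers, using NumPy) shows why linear interpolation between century mid-points does not preserve the century means, while a step function does:

```python
import numpy as np

# Hypothetical century-average anomalies, reported at century mid-points
midpoints = np.array([50.0, 150.0, 250.0])    # mid-point year of each century
century_means = np.array([0.0, 1.0, 0.0])     # degrees C anomaly

years = np.arange(300)

# Step function: every year in a century takes that century's mean,
# so the century averages are preserved exactly.
step = century_means[years // 100]

# Linear interpolation between mid-points treats the averages as
# point values, and smears the middle century's anomaly into its
# neighbours.
linear = np.interp(years, midpoints, century_means)

print(step[100:200].mean())     # 1.0 exactly
print(linear[100:200].mean())   # 0.75: the century mean is not preserved
```

A quarter of the middle century's anomaly has leaked away, which matters if that century is the one being compared against modern temperatures.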

Issue 3: Calibration

Correlation does not equal causation. And so a proxy with a short period calibration to temperature with no validating data cannot be fully trusted to be a temperature proxy. This arises with the Holmgren et al (1999) speleothem grey-scale data (Loehle #11) which is calibrated over a 17 year period to local temperature, but without any ‘out-of-sample’ validation. The problem in that case is exacerbated by the novelty of the proxy. (As an aside, the version used by Loehle is on an out-of-date age model (see here for an up-to-date version of the source grey-scale data – convert to temperature using T=8.66948648-G*0.0378378) and is already smoothed with a backwards running mean implying that the record should be shifted back ~20 years).
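For reference, the linear calibration quoted in the aside above can be applied directly; a minimal sketch (the function name is ours):

```python
def greyscale_to_temp(grey):
    """Convert the Holmgren et al. grey-scale value G to temperature
    using the linear calibration quoted above:
    T = 8.66948648 - G * 0.0378378.
    Note the record should also be shifted back ~20 years to undo
    the backwards running mean mentioned in the text."""
    return 8.66948648 - grey * 0.0378378

print(greyscale_to_temp(100))   # 4.88570648
```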

As mentioned above, there are a priori reasons to assume that terrestrial d18O records have a temperature component. In mid-latitudes, the relationship is positive – higher d18O in precipitation in warmer conditions. This is a function of the increase in fractionation as water vapour is continually removed from the air. Most d18O records – in cave stalagmites, lake sediments or ice cores – are usually interpreted this way since most of their signal is from the rain water d18O. However, only one terrestrial d18O record is used by Loehle (#9 Spannagel), and this has been given a unique negative correlation to temperature. This might be justified if the control on d18O in the calcite was from the local cave temperature impact on fractionation, but the slope used (derived from a 5-point calibration) is more negative even than that. Unfortunately, no validation of this temperature record has been given.

Issue 4: Compositing

Given a series of records with different averaging periods, spatial representation and noise levels, there are a number of problems in constructing a composite. Equal averaging is simple but, for instance, implies giving equal weight to a century-mean North American continental average (Viau et al, Loehle #13) and to a single decadally varying N. American point (Cronin et al, #3), despite the fact that one covers a vast area and time period and the other is much less representative. Unsurprisingly, the larger average has much less variability than the single point. To address this disparity, a common practice is to normalise the records by their standard deviation and to weight records by the area they represent – without that, the more representative sample ends up playing a much smaller role.
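The normalise-and-weight recipe described above can be sketched with made-up records (everything here is illustrative, not the actual Loehle series):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical records on a common annual time axis: a smooth
# continental average (large area, low variance) and a noisy
# single-site record (small area, high variance).
continental = 0.1 * rng.standard_normal(200)
single_site = 1.0 * rng.standard_normal(200)

def composite(records, areas):
    """Normalise each record by its standard deviation, then weight
    by the area each record represents."""
    weights = np.asarray(areas, float)
    weights = weights / weights.sum()
    normed = [r / r.std() for r in records]
    return sum(w * r for w, r in zip(weights, normed))

comp = composite([continental, single_site], areas=[10.0, 1.0])

# After normalising and area-weighting, the widely representative
# record dominates the composite, as it should:
r_cont = np.corrcoef(comp, continental)[0, 1]
r_site = np.corrcoef(comp, single_site)[0, 1]
print(r_cont > r_site)   # True
```

With a naive unweighted average of the raw series, the noisy single site would instead swamp the continental signal, which is the problem described above.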

Another approach, used implicitly in climate field reconstruction methods (like RegEM for instance), is to use current instrumental records to assess the relevance of any particular point, region or time period to the desired target. Another idea would be to estimate the changes in noise characteristics over larger areas and longer times and build that into the normalisation. That might also be useful for records whose resolution decays in time (the GRIP borehole temperature for instance, Loehle #1).

Finally, one needs to be very careful to deal with each series consistently. Treating an interpolated low-resolution record differently to another low-resolution record that wasn’t interpolated seems inconsistent. Keigwin’s Sargasso sea record is very low-resolution (Loehle #4) but Loehle appears to use it as though it was a real high-resolution record, while Kim et al (Loehle #18), which is equally low-res, is only used within 15 years of a datapoint.

Issue 5: Validation

It is inevitable that many seemingly ad-hoc decisions need to be made in building a particular reconstruction. This is not in itself cause for concern – the inhomogeneity and sparsity of the data require that kind of consideration. Given that there is then no mathematically perfect way of doing this, the test of whether any particular approach is worthwhile lies in the validation, i.e. does the reconstruction give a reasonable fit to the target field or index over a period, or with data, that wasn’t used in the calibration? There’s a good discussion of these issues in two recent papers in Climatic Change (Wahl and Ammann, 2007; Ammann and Wahl, 2007) in relation to the original Mann, Bradley & Hughes papers. One would also like to test how sensitive the answers are to other equally sensible choices – a result can be considered robust if it is relatively insensitive to such methodological choices.
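One standard verification score in this literature (central, for instance, to the Wahl and Ammann discussions) is the ‘reduction of error’ (RE) statistic, which asks whether the reconstruction beats a naive prediction of the calibration-period mean; a minimal sketch with invented numbers:

```python
import numpy as np

def reduction_of_error(obs, recon, calibration_mean):
    """RE = 1 - SSE(reconstruction) / SSE(calibration-mean baseline),
    computed over a verification period withheld from calibration.
    RE > 0 indicates skill beyond 'climatology'; RE = 1 is perfect."""
    obs = np.asarray(obs, float)
    recon = np.asarray(recon, float)
    sse = np.sum((obs - recon) ** 2)
    sse_baseline = np.sum((obs - calibration_mean) ** 2)
    return 1.0 - sse / sse_baseline

# A perfect reconstruction scores 1; simply predicting the
# calibration mean everywhere scores 0.
obs = np.array([0.1, -0.2, 0.3, 0.0])
print(reduction_of_error(obs, obs, calibration_mean=0.05))                # 1.0
print(reduction_of_error(obs, np.full(4, 0.05), calibration_mean=0.05))   # 0.0
```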

What does this imply for Loehle’s reconstruction? Unfortunately, the number of unsuitable series, errors in dating and transcription, combined with a mis-interpretation of what was being averaged, and a lack of validation, do not leave very much to discuss. Of the 18 original records, only 5 are potentially useful for comparing late 20th Century temperatures to medieval times, and they don’t have enough coverage to say anything significant about global trends. It’s not clear to me what the impact of fixing the various problems would be, or what that would imply for the error bars, but as it stands, this reconstruction unfortunately does not add anything to the discussion.

So where does this all leave us? Since the early days of multi-proxy reconstructions a decade ago, the amount of suitable data has definitely increased, and so many of the issues related to specific proxies are becoming increasingly unimportant. As the amount of data grows, the picture of climate in medieval times will likely become clearer. What seems even more likely is that the structure of the climate anomalies will start to emerge. The simple question of whether the medieval period was warm or cold is not particularly interesting – given the uncertainty in the forcings (solar and volcanic) and climate sensitivity, any conceivable temperature anomaly (which, remember, is being measured in tenths of a degree) is unlikely to constrain anything.

However, if the tantalising link between medieval American mega-droughts and potential long-term La Nina conditions in the Pacific can be better characterised, that could be very useful at constraining ENSO sensitivity to climate change – something of great interest to many people. That will have to wait for a better next-generation reconstruction though.

Thanks to Eric Swanson for helping find some of the more interesting choices made by Loehle in his reconstruction, and Karin Holmgren for swift responses to my queries about her data.

Update (Jan 22): Loehle has issued a correction that fixes the more obvious dating and data treatment issues, but does not change the inappropriate data selection, or the calibration and validation issues.

thanks gavin!
“Of the 18 original records, only 5 are potentially useful for comparing late 20th Century temperatures to medieval times, and they don’t have enough coverage to say anything significant about global trends.”

Could you list those 5 explicitly?
And are you saying that reconstructions that use the other 13 should be re-examined?

Thanks!

[Response: The Loehle 5 that extend beyond 1970 are #1, #3, #7, #8, #16 – grip, chesapeake, shihua, yang, ge. Reconstructions are done for many purposes – and calibrated in many ways. I laid down some criteria above for what is worth keeping in or not, and it’s up to other people to decide whether that’s appropriate. Low-resolution cores with age errors of 100’s of years are not useful IMO for millennial reconstructions. We discussed Moberg et al when it came out, and the problem there is the same as that highlighted above – how do you calibrate low-resolution data? That is still an open question. – gavin]

This one actually sounds like you could use the old Extended Kalman Filter to assimilate the various series; the individual natural or artificial smoothings peculiar to each series being the nonlinear parameters that the EKF would identify. The point of that is then you could have a single probability model with which all the individual series treatments were consistent.

The reason the EKF could be expected to work well is that the proxy series appear all to have at most weakly nonlinear processing that needs to be “undone”.

I don’t know the paleoclimate reconstruction literature enough to know if this has already been tried. It would be an old idea to meteorologists though.

[Response: Francis Zwiers and a student of his have been looking at this, the tricky part is finding the right model for the temporal dependence. Mark Cane’s group at Lamont Doherty/Columbia has looked at this as well, though the primary applications have involved sparse early historical data rather than proxy data, see e.g. here and here. -mike]

What a great article, I have been looking for information on reconstructions. As for the low-resolution data – I know core reading can sometimes be inaccurate. I work for a core drilling company, and we see it all the time – inconsistencies.

Thank you for the great article. The reason I think the MWP and the LIA are important is the calibration of solar effects. Even if TSI varies only slightly, the climate response to them may be large. If we knew for certain, which we don’t, that the climate varied little when solar varied to its maximum extent possible, we could rule out solar effects. As it stands, we cannot. For instance, one has this study which shows large sensitivity to variations in solar TSI, at least in the sub arctic.

Cyclic Variation and Solar Forcing of Holocene Climate in the Alaskan Subarctic

Could you please clarify this statement: “This is a function of the increase in fractionation as water vapour is continually removed from the air.”

It is my understanding that fractionation decreases with increasing temperature, because the difference in the zero point energies of ‘light’ and ‘heavy’ isotopologues of water becomes less important as thermal energy is added to the system. In other words, both kinetic fractionation processes (i.e. evaporation) and equilibrium fractionation process (i.e. condensation) tend to discriminate against d18O more heavily at lower temperatures than at higher temperatures.

The end result is still the same — in general, there is a positive correlation between d18O and temperature at high latitudes — but the explanation is somewhat different. Perhaps you are thinking of a different mechanism, however.

[Response: I agree I could have been clearer. The change in the fractionation coefficient with temperature is a minor issue. Rather it is simply that the colder/drier air has had more rain out, and so the isotopic composition of the remaining vapour (and next amount of precip) has become more depleted. – gavin]

I realize that Gavin is writing for fellow climate researchers rather than such as I who only have a medical doctorate, but surely there is some language, Gavin, that could more clearly, in plain English, describe what his objections, in the main, are, to those who raise some doubts as to the long-term climate record and what may have caused previous warmings. That previous warmings did occur is not in doubt and that basic fact caused humankind and our hominid ancestors to lose much of their body-hair. This fact is reassuring as Canadians try to accommodate to the current very frigid (unseasonably so) temperatures right across this country. One awakes with frozen water pipes and wonders whether or not one can wait for warming to occur and the sooner the better.

Robert Strom, professor emeritus at Arizona State has a new book out entitled Hot House which seems worth reading. He has a chapter on Holocene climate that addresses this topic – here is a quote:

“The climate reconstructions for the past 2,000 years have led to a simplistic picture of a Medieval Warm Period and a Little Ice Age. Instead, the records of climate variability indicate much more complex patterns of past regional variations that rarely coincide with the actual patterns of hemispheric or global average variations (Mann and Jones 2003, Jones and Mann 2004). They are probably biased due to emphasis on one part of the world such as the North Atlantic/Europe region. . . It is probably better to view the climate changes during the last 2,000 years in terms of cool and warm centuries in various parts of the world. For example, the early 19th century was cool in North America. In Europe the 16th, 17th, and 19th centuries were cool, but the 18th century was warm. Eastern Asia had a cool 19th century, and there was a cool period in the tropics from 1650 to 1750. During the Little Ice Age there was a discernible warm period and during the Medieval Warm Period there was a cool period. In other words, there was considerable climate variability throughout the past 2,000 years, but most of the variability appears to have occurred regionally in the Northern Hemisphere.”

I suppose the reason this came up at all is that Steve McIntyre has been trumpeting it over at ClimateAudit. He also claims that the failure of Loehle to describe the uncertainties laid out in this RC post is not important because, as he says “. . . in my opinion, uncertainties are not appropriately discussed in any proxy reconstruction article.” So much for all of paleoclimatology.

Reading the scientific-sounding but content-free word salad of McIntyre and comparing it to the above post is pretty revealing. For example, Mcintyre: “Loehle’s network is the first network to be constructed using series in which every proxy as input to the network has already been calibrated to temperature in a peer reviewed article. This is pretty amazing when you think about it. It’s actually breathtaking.” Really?

Congress requested that the NRC look into the temperature reconstructions, and they produced this 2006 Report from the National Research Council. As they point out, “Collecting additional proxy data, especially for years before 1600 and for areas where the current data are relatively sparse, would increase our understanding of temperature variations over the last 2,000 years.” Thus, if Loehle actually wanted to make a useful contribution, he should have gone out and collected some new data in the field, rather than reworking a selected subset of proxy records.

Sorry, 9 Vern Johnson, but probability and statistics is a laboratory course that changes you into a different person. Once you are changed, you can’t go back. You had best be a math or physics student before attempting to take it. It is the course that separates those who will be scientists from those who won’t. Most physics majors change majors during their first Prob&Stat course. There is no royal road to mathematics, and the road to Prob&Stat is probably the toughest undergrad course there is. As with all physics/math courses, it only gets far, far harder in graduate school. There is definitely NOT any way to express it in English, plain or otherwise. If there were, you still wouldn’t get it unless you had already taken as many statistics and math courses as Gavin has. Gavin did as well as possible. Just be glad he didn’t give you the real thing. The best you can do is to believe what Gavin tells you.

Humans lost their hair through neoteny. Due to the increasing size of human heads, birth had to take place earlier and earlier in development as time went on, in order to get a baby through the pelvic canal. Thus adult humans retain many juvenile traits, such as hairlessness.

It is impressive that Loehle’s reconstruction is not dependent on any one time series. He does the tests overlaying all the n-1 and 18 of the n-4 subsets of data. Will the hockey stick tree-ring reconstructions pass that test? If not, this reconstruction should be considered more believable.

[Response: This kind of robustness is a standard test – see Osborn and Briffa, or the Wahl and Ammann papers linked above. I would argue that validation is a more stringent test. – gavin]

Vern, Personally, I think Gavin’s post is pretty clear, but to translate a bit for the nonscientist:
1)Dating–The question here is whether the dataset is appropriate for the period you are interested and whether it has enough resolution that you can make meaningful statements.
2)Fidelity–Are you really measuring what you think you are, or are compounding effects producing spurious correlations.
3)Calibration–How do you turn your measurements of your proxies into estimates of what you are interested in?
4)Compositing–How do you combine your data–with different errors, resolution, temporal and spatial coverage, etc.–so that you can make meaningful statements about the questions you’d like to answer–e.g. global temperatures.
5)Validation–How do you demonstrate that what you have done actually works? Usually, you apply it to a dataset where you know the answer already by some accepted, independent methodology.
Hopefully that helps. However, it should have been clear from what Gavin wrote that the objections he raised were to methodology, not to the conclusions. The deficiencies in methodology more than account for the questionable conclusions.

[Response: Thanks! Maybe I should employ you to write abstracts for me… – gavin]

Please consider what you are saying. The “evidence” that you refer to will only be available
and testable in the future. The projections will only ever be projections until tested at a point
in the future – Red.

[Response: No. Projections are always for the future of course, but the confidence placed in those projections is based on evidence that has been accrued to date. Therefore evidence that models and theories have performed well in comparison to observations of the past, and that previous projections were borne out, does affect our confidence in future projections. Only in the case where we know absolutely nothing do all projections have equal standing. -gavin]

“Humans lost their hair through neoteny. Due to the increasing size of human heads, birth had to take place earlier and earlier in development as time went on, in order to get a baby through the pelvic canal. Thus adult humans retain many juvenile traits, such as hairlessness.”

That is not an ‘explanation’ in my book. Surely furs can grow after birth if there is a good enough evolutionary motive. A real explanation identifies the evolutionary pressures that favour hairlessness, as the one presented by me did.

“… probability and statistics is a laboratory course that changes you into a different person.”

Those like me who made it through the basic grad level statistics class decades ago for the biological sciences are perhaps only half-changed — I learned:
— to get help from a statistician at the beginning, and
— to keep on after I found what I wanted, until I’d found everything I’d planned to collect, and
— to do the statistical test planned from the beginning, not to massage the data or change the test, and
— to get help from a statistician at the end.

Read this article, it makes the same point in detail: http://www.amstat.org/publications/jse/v3n1/konold.html
——-excerpt——
“… implications for assessment of intuitive understanding. …
(1) students come into our courses with some strongly-held yet basically incorrect intuitions,
(2) these intuitions prove extremely difficult to alter,
and
(3) altering them is complicated by the fact that a student can hold multiple and often contradictory beliefs about a particular situation. …
… we want to affect how students think (as opposed to how they respond on exams).”
——-end excerpt——–

Statisticians want to affect how people think, as ophthalmologists want to affect how people see; they know we can improve.

An amateur wonders – During and since the International Geophysical Year, there has been an enormous increase in the number and precision of climate data recording devices. How does this change in data collection have an impact upon the way we perceive and interpret the results?

#21
Ray, Nicely put. This is similar to most of the points made on CA by JEG and others. But don’t they all also apply to Moberg? And if “The deficiencies in methodology more than account for the questionable conclusions.” doesn’t that also apply to Moberg?

[Response: These things aren’t equivalent. Moberg et al was an attempt to incorporate lower resolution data as well as high resolution data using a new methodology based on wavelet analysis – that is why it was interesting. The specific records they used were not so much the point, it was more a proof of concept. If they were to update it, they would likely have a larger sample and leave out the least well-dated records. More to the point, they seem to be aware of what they were doing and what they were dealing with. With Loehle, there is no new methodology to speak of, and so everything depends on the records and their treatment – you get that wrong, there’s not much left. – gavin]

As Coby reminds us, there’s no “Wisdom” button — the search does bring up a lot of chaff — but just reading the first few dozen hits will get you several good articles about climate research relevant to your question. And some skeptics and irrelevancies of course.

“… We present a new analysis of millions of ocean temperature profiles designed to filter out local dynamical changes to give a more consistent view of the underlying warming. Time series of temperature anomaly for all waters warmer than 14°C show large reductions in interannual to inter-decadal variability and a more spatially uniform upper ocean warming trend (0.12 Wm−2 on average) than previous results. This new measure of ocean warming is also more robust to some sources of error in the ocean observing system. Our new analysis provides a useful addition for evaluation of coupled climate models …”

gavin> The simple question of whether the medieval period was warm or cold is not particularly interesting – given the uncertainty in the forcings (solar and volcanic) and climate sensitivity, any conceivable temperature anomaly (which remember is being measured in tenths of a degree) is unlikely to constrain anything.

That may be true if you are talking about climate models, but in determining the impact of higher temperatures on ecosystems and agriculture, knowledge about the MWP and other past temperature extremes is likely very interesting.

[Response: That’s more of a regional issue, and of course, the regional patterns of change – whether forced or internal to the system -are of great interest for understanding climate dynamics. -gavin]

Contributing to the hairlessness debate – surely the most off-topic branch in any RC thread – most people I ever examined have hair: who are these hairless humans you write of? There appears to be a significant range of extent of hairiness among individual members of the species, and no obvious reason to assume that that has not always been the case.
As for the bigger-brain-less-hair evolutionary argument, a number of not notably intellectual species – field mice, for example – are born hairless and helpless but manage to become hairy adults.

I have a few questions, some methodological, others paleoclimatological. However, I am not sure to what extent some of them at least are directly relevant to current discussion, so please feel free to skip those which you regard as more tangential.

*

You state that in weighting various lines of evidence (e.g., different proxies for different subregions), you consider both the range of error (with a larger range of error resulting in a given proxy receiving less weight) and the area (with a larger area resulting in the proxy receiving more weight). Likewise, you point out that the larger the area, the lower the range of error — given the law of large numbers. As such these are two different aspects of the same issue and can’t actually be treated separately, I presume.

Furthermore, when one combines multiple lines of evidence, the resulting range of error should be narrower than the range of error associated with any subset of those lines. This wouldn’t necessarily hold, however, if the different lines of evidence diverged to an extent considerably greater than their purported ranges of error; in that case it would be necessary to re-evaluate the error ranges themselves, something which I presume occasionally happens.

But assuming there is no such conflict between the different lines of evidence, the range of error should be narrower. In that case, how often is a Bayesian-type approach employed for combining the different lines of evidence in order to arrive at that narrower range of error? Are the issues typically amenable to that sort of an approach? What alternatives are there to the Bayesian and to what extent are they in use?
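On the question of combining lines of evidence: under independent Gaussian errors, the Bayesian posterior mean reduces to exactly the inverse-variance-weighted average, and the combined uncertainty is narrower than that of any single record, as argued above. A minimal sketch, with entirely made-up numbers (these are not from any actual reconstruction):

```python
# Hypothetical example: three proxy estimates (deg C anomaly) of the same
# regional mean, each with a 1-sigma uncertainty. Numbers are invented
# purely for illustration.
estimates = [0.3, 0.5, 0.1]
sigmas    = [0.2, 0.3, 0.25]

# Inverse-variance weights: lower error -> more weight, i.e. the weighting
# scheme described in the comment above.
weights = [1.0 / s**2 for s in sigmas]
wsum = sum(weights)

combined_mean  = sum(w * x for w, x in zip(weights, estimates)) / wsum
combined_sigma = (1.0 / wsum) ** 0.5   # narrower than any single sigma

# Consistency check: if the estimates scatter far more than their stated
# errors allow (large chi-square), the error bars themselves need
# re-evaluating, as the comment notes.
chi2 = sum(w * (x - combined_mean) ** 2 for w, x in zip(weights, estimates))

print(combined_mean, combined_sigma, chi2)
```

The same machinery flags the divergence case: a chi-square value much larger than the number of records minus one signals that the purported error ranges are too optimistic.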

*

One issue that I have wondered about for some time is to what extent the paleoclimate record supports the distinction between slow-feedback and fast-feedback climate sensitivity. Does it support the view that the slow-feedback sensitivity is double that of the fast feedback? How confident are we with regard to this?

*

It appears that we have been raising the level of certain greenhouse gases for the past five thousand years. To what extent does this appear to have affected the long-term behavior of the Holocene?

*

At a more abstract level, to what extent does the speed with which the climate system responds to a greenhouse gas forcing appear to depend upon the magnitude of the forcing? To what degree does the magnitude of the response appear to depend upon the speed with which the forcing is applied? Or are we even at the point where we can arrive at tentative answers to such questions, or frame them in a form that we may address either analytically or empirically?

More concretely, do we have reason to expect the speed of slow-feedbacks (e.g., the cryosphere) to be greater when the magnitude of the forcing is greater? Do we have reason to expect the magnitude of the feedback from the carbon cycle to be greater when the duration over which a forcing is raised to a higher level is shorter, e.g., that various carbon sinks can be overloaded by the rate of change such that they become less effective not simply in the short-term, but long-term? More prone to become net emitters?

I greatly appreciate the recent critiques that have been posted on RC (in addition to being enlightening about climate, they show what scepticism in science is really about), but I wonder if there’s any chance that some of the contributors can choose a recent paper they like, and perform a similar dissection, showing what’s good, new, and significant about it. It need not be anything truly ground-breaking – it’s just that plenty of positive examples must be out there, and they, too, can be enlightening. Naturally, beggars can’t be choosers and all that; it’s just a thought. If buying gavin lunch is a necessary bribe, I could probably manage it, but I suspect raypierre’s tastes are a bit rich for my wallet.

Hank,
Thanks for the response. I read some of the links served up by Google. Beyond my grasp, but I see how there is some effort to tease apart the possible misconstructions that come from such a dramatic shift in data collection methods.

Gavin:
I have no idea how you can argue that Moberg is somehow exempt from the methodological points that Ray raised. Whether Moberg is illustrating a new approach or an established process is irrelevant to the need to address these 5 points. Loehle has to address them. Moberg should have addressed them. They are too basic to try to rationalize why one guy didn’t incorporate them.

[Response: If Moberg et al didn’t know what ‘BP’ meant, or what the difference between a Mg/Ca ratio and temperature was, I’d criticise them too. Just as I would if they used inappropriately tuned records or inconsistently treated equally low-resolution records or didn’t do any validation. Except that they did none of those things. Loehle did. Fix all that, and then we can talk. They are not equal just because they have 8 series (out of 18) in common. – gavin]

Humans lost their hair through neoteny. Due to the increasing size of human heads, birth had to take place earlier and earlier in development as time went on, in order to get a baby through the pelvic canal. Thus adult humans retain many juvenile traits, such as hairlessness.

That is not an ‘explanation’ in my book. Surely furs can grow after birth if there is a good enough evolutionary motive. A real explanation identifies the evolutionary pressures that favour hairlessness, as the one I presented did.

Your statement would be true only if hyperadaptationism of the sort favored by Richard Dawkins were true, and the vast majority of biologists don’t think it is. A lot of traits have no immediate effect on differential survival, and will survive simply because there is no evolutionary pressure against them. Not every trait is an adaptation.

Gavin, I agree with your points. However, as a paleoclimatologist/paleoceanographer I would hope that you would get away from the ‘high’ and ‘low’ resolution description of climate records. Those of us in the field now describe records as annual, decadal, millennial, and Milankovitch to avoid total confusion. When we work in deep time, anything resolving Milankovitch is considered high-resolution.

The big problem with Loehle’s paper starts at the beginning–he never shares the records with us but only the composite. If he was limited by E&E on publishing his original records and his sorting criteria, he should have published elsewhere.

It only took a quick look at one of the papers (Kim et al., 2004) to see that Loehle was ‘cherry picking’ his data. Of the three records with marginally sufficient age control, two were not added to the composite. These two records, from the California margin, had warming trends.

Furthermore, the data in the composite are the most mixed grab bag of data I have ever seen; one cannot make any climatological sense of the choices, so one must assume that they were based on some other, unstated criterion.

We need honest help to understand how climate works; eventually real analysis will drive out the bad composites.

[Response: Good point. I’ve actually made it myself when trying to translate between paleo people and physical oceanographers – they have very different conceptions of the term ‘high resolution’! So to be specific, I use ‘low resolution’ here to imply multi-decadal and longer. Sufficiently high resolution to be useful in this kind of exercise is decadal (with similar sized age model errors). Finally, I agree, real analyses will win out in the end. I’m hopeful that isn’t too far off. – gavin]

It only goes up to 1980, but it’s interesting to compare it with GISTEMP. According to Loehle, 1966 was the warmest year in the last century – and since then we’ve been cooling at about 0.2 degrees per decade. :)
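For anyone wanting to reproduce that kind of eyeball trend, an ordinary least-squares slope over a chosen window is all it takes. The numbers below are invented purely to show the arithmetic; they are not Loehle’s or GISTEMP’s values:

```python
# Toy illustration: fit an OLS line to the last stretch of a (made-up)
# reconstruction and read off the per-decade slope.
years = list(range(1966, 1981))
# Hypothetical anomalies declining at 0.02 C per year.
anoms = [0.30 - 0.02 * (y - 1966) for y in years]

n = len(years)
mean_y = sum(years) / n
mean_a = sum(anoms) / n

# Least-squares slope: cov(years, anoms) / var(years).
slope = sum((y - mean_y) * (a - mean_a) for y, a in zip(years, anoms)) / \
        sum((y - mean_y) ** 2 for y in years)

trend_per_decade = slope * 10
print(round(trend_per_decade, 2))  # degrees per decade
```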

Re # 43 Martin Vermeer: “Are you telling me that furs have no usefulness to those animals having them?”

Try explaining the adaptive value of the coats of various breeds of dogs, such as a Basenji, a Dalmatian, an Alaskan Malamute, and a collie.

The term “adaptation” is used in various ways by various people. Evolutionary biologists tend to define adaptation as a trait that arose through selection for its current function. Many traits arise as a serendipitous consequence (an “accident”) of some other trait, or traits, that may be adaptive; those serendipitous traits may or may not have adaptive value, but since their function was not a result of selection, they would not be considered adaptations. This concept was articulated very nicely by Stephen J. Gould and Richard Lewontin in their classic “Spandrels of San Marco” essay:
Gould, S.J. & Lewontin, R.C. (1979). The spandrels of San Marco and the Panglossian paradigm: A critique of the adaptationist programme. Proceedings of the Royal Society of London B, 205, 581-598.

[Note that some architects have criticised Gould and Lewontin for their ignorance of the architectural design of cathedrals and the function of spandrels. Thus, while their main point, that not all traits are adaptive, is valid, their example may not be.]

For a more thorough analysis of the concept of “adaptation”, you might consult one of the many books on the subject, such as

Try explaining the adaptive value of the coats of various breeds of dogs, such as a Basenji, a Dalmatian, an Alaskan Malamute, and a collie.

Those were bred by humans, so I don’t see the relevance to my point, which is about the selective advantage (not: adaptive value; an interesting but subtly different subject) of having a (dense, thermally insulating) coat vs. not having one.

Anyway I should apologize for what is arguably the most off-topic thread in ages. I suppose it has to do with a passion for figuring out how things really are — about Canadian water pipes or other things :-)

Re #44 Chuck Booth: “The term “adaptation” is used in various ways by various people. Evolutionary biologists tend to define adaptation as a trait that arose through selection for its current function. Many traits arise as a serendipitous consequence (an “accident”) of some other trait, or traits, that may be adaptive; those serendipitous traits may or may not have adaptive value, but since their function was not a result of selection, they would not be considered adaptations.”

Does this mean that some traits, for instance the coat of a ‘timing of hair growth’ knockout mouse or of cave-dwelling albino insects, should not be considered adaptations, since the ‘lack of a trait’ trait is often a matter of a single accidental mutation, and accidents happen all the time :-)? Whereas a population of mice whose coats change from thick white in winter to normal brown in summer most certainly shows an adaptive feature for cold climates.

Try explaining the adaptive value of the coats of various breeds of dogs, such as a Basenji, a Dalmatian, an Alaskan Malamute, and a collie.

Chuck,

I might agree with the point you are making, but I am not sure that this is a good example. Breeds of domesticated dogs (which are technically still wolves) were subject to a great deal of artificial selection in recent millennia. What they are adapted to are their specific uses by humans, including the aesthetic value of their appearance.

You might like the following.

Just as the DNA transposons known as MITEs have made the domestication and adaptation of rice to our needs possible at a greatly enhanced speed, dogs have been particularly plastic in large part due to the existence of tandem repeats in the regulatory protein-coding areas.

All but the shorter tandem repeats are thought to have originated, in multicellular eukaryotes, by way of the poly-A tails of LINEs and SINEs: retroelements, genetic relics left behind by endogenized retroviruses. And interestingly, today I just ran into something on self-synthesizing transposons, which appear to be descended from plasmids that acquired a retroviral integrase perhaps a billion years ago.

Not sure if any of this might be of interest to you, but if so:

timothy [no spaces] chase [at] g mail [dot] com

It’s been a bit of an obsession for me, the role of viruses and virus-like elements (e.g., phages) in the evolution of life.

Had Martin specified wild animals with fur coats, your questioning of my canine example would be fully justified…but, he didn’t. My real point was that Barton was correct – not every trait is adaptive. While the term “adaptation” is easy to use, trying to figure out which traits are, and which are not, adaptations is difficult (in fact, when you consider that many so-called “traits” involve one or more organs carrying out multiple functions and composed of multiple tissues constructed from the products of multiple genes, it often becomes difficult to identify precisely what is a trait, let alone figure out how that trait evolved.)

I will leave it at that – as Martin has noted, this thread has strayed well off topic (unless one sees a parallel between the difficulty of reconstructing past climates and the difficulty of reconstructing the evolutionary history of current biological traits).