Thompson Gets New NSF Grant

Thompson and Gabrielli

Lonnie Thompson, senior research scientist at Ohio State University’s Byrd Polar Research Center, and his colleague Paolo Gabrielli, have just been awarded a three-year $588,000 grant from the NSF’s Division of Atmospheric and Geophysical Sciences “to assess the human impact on the chemical characteristics of the glaciers in the Himalaya and the Tibetan Plateau from the pre-industrial era to the present time”:

Gabrielli and Thompson will use an existing set of unique ice cores retrieved from Guliya (Western Tibetan plateau), Naimona’nyi and Dasuopu (Central Himalaya), Puruogangri and Dunde (Central and Northern Tibetan Plateau, respectively) to analyze for a large suite of trace elements. These data will allow discrimination of the natural background components (e.g. crustal, volcanic constituents) from the anthropogenic components (e.g. fossil fuel combustion and non-ferrous metal production) of aerosol deposited to these glaciers over time.

The spatial and temporal characterization of atmospheric pollution at high elevations in the Himalaya and the Tibetan Plateau is very much needed because recent studies suggest that atmospheric “brown clouds” deposition to the Himalayan glaciers may affect their energy balance, resulting in an acceleration of ablation. Knowledge of the initial quality of the meltwater, resulting from the ongoing shrinking of the glaciers in the Himalaya, is also important for planning the availability of water resources for millions of people who live downstream from these glaciers.

Ultimately, this study will serve as a source of fundamental information for policy makers trying to mitigate the impact of trace metals in the environment.

Fortunately, since Jan. 2011, all NSF proposals must now include a Data Management Plan detailing how any data collected will be archived for public access, so that we can expect any findings under the new grant to be promptly archived. However, according to an NSF representative who recently spoke at OSU, this requirement merely formalizes a long-standing policy that the results of NSF research, including any “metadata” standing behind the bottom line results, must be made public so that others can use it and/or replicate the final results. Thompson is therefore still obligated to archive the results of his past NSF studies.

(Thompson also reloaded data for Quelccaya Core 1 on 1/13/12. However, it’s not clear whether this is a revision of the original file dating back to 1997, or just a new upload of the original file.)

Update: Steve has observed that the new Quelccaya file states right up front,

Note: This file was reformatted 13 January 2012 to provide column
separation between columns 4 and 5. No data values were changed.
The previous version of this file lacked separation between columns
4 and 5 (at AD 871 and at AD 615 through AD 617), potentially
causing errors reading the data.

Update 4/27
Per the request below by Kenneth Fritsch, here are graphs of the data on the 6 cores back to AD 1000 that was used in Thompson’s 2003 Climatic Change article. The newly archived Puruogangri data was used in the PNAS 2006 7-core index that goes back to AD 0, but not in the CC index.

Update 4/29:
Here are Ken’s plots of the new Puruogangri Core 1 and 2 data:

While Core 2 is rather flat, Core 1 shows a Current Warm Period, but also suggests a Medieval Cool Period, preceded by a Dark Ages Warm Period. It would have been useful if Thompson had provided a concordance of inferred age versus depth so that the dating assumptions could be reviewed.

Update 5/2
Ken has also plotted for comparison the data from Quelccaya Core 1 and Summit Core, as shown below:
The d18O readings from the two Quelccaya cores clearly tell a more coherent story than the two cores from Puruogangri shown above. Their average (as used in MBH99) would presumably have less noise than either series by itself (as in Thompson’s CC03 article).

Both cores also show the attenuation of noise before 1000 AD that characterizes Core 1. This leads me to suspect that the H2O molecules may be able to migrate slowly through the ice. In the later layers this doesn’t make much difference, but in the earlier layers, which are both thinner and have been around longer, it may be causing differences in d18O to average out, creating the appearance of flatter temperatures than really occurred.

Also, if H2O molecules can migrate slowly through ice, it would be interesting to know whether CO2 can also be absorbed from air bubbles into ice, given enough time. This would greatly distort estimates of atmospheric CO2 from ice core records if true.

Re: bladeshearer (Apr 21 11:54),
According to Thompson’s online vita, he was a member of the Science Advisory Board of Gore’s AIT project.

I don’t subscribe to guilt by association, so Thompson’s research should be judged on its own merits, independently of any errors in AIT. Scientists are as entitled as anyone to join causes, religions, etc. that are not scientifically based.

I do however find it reprehensible that he did not come forward and correct the big error in AIT that pertains directly to his own work, namely Gore’s misidentification of what was in fact Mann’s hockey stick graph as “Dr Thompson’s Thermometer.”

It would have been easy enough, and in fact a “teachable moment”, to put out a press release pointing out the error, while showing the two graphs side by side and emphasizing how they looked so similar that someone in production must have just picked up the wrong one.

He should also have mentioned that they are not in fact 100% independent, since both rely heavily on his Quelccaya core, and that he had never statistically calibrated his series to temperature so that it could not validly be termed a “thermometer” as claimed by Gore.

The new Dasuopu file only has annual figures back to 1450 AD, but then also has decadal averages back to 1000.

The decadal averages on d18O are the same ones archived in 2004, and also available in the SI for Thompson’s CC03 paper whose 6-core index was supposed to have been the source for “Dr. Thompson’s Thermometer” in AIT. The new file does add the annual figures plus data on dust and 3 ions that was previously unavailable to the public.

Dasuopu was one of the 7 cores used in Thompson’s 2006 PNAS paper. One of the 7 only extended back to 1000 AD. Which one was not identified, but if it was Dasuopu, then this is the complete data that was used in that paper for Dasuopu.

However, 6 of the 7 PNAS06 cores went back 1550 years, and 4 of them went back at least 2000 years, so the previously archived versions of the 6 CC03 cores, back to 1000 AD, were incomplete relative to PNAS06.

The new Guliya file goes back decadally to 0 AD, so we now, at long last, have its complete record as far as PNAS06 goes.

The new Puruogangri file has decadal averages on Cores 1 and 2 back to 0 AD, plus 5 year averages back to 1600 and centennial averages back to 7000 BP on Core 2. We therefore now also have the complete PNAS06 data on Puruogangri. It’s not clear whether PNAS06 used Core 1 or Core 2 or the average of both, but that could be backed out with a linear regression of the index on the complete data. (Puruogangri was the new 7th core added to the CC03 6 cores, so there had been no data on it previously except for the 5 year averages back to 1600 AD archived at PNAS.)
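As a hypothetical illustration of how the Core 1 vs. Core 2 question could be backed out by regression (made-up numbers, not Thompson’s data): regress the published index on both cores, and the coefficients reveal the weighting. If the index is a simple average, the regression recovers 0.5 and 0.5.

```python
# Illustrative sketch only: fabricated decadal values, so the answer is known.
def ols2(y, x1, x2):
    # Solve the 2x2 normal equations for y ~ b1*x1 + b2*x2 (no intercept;
    # fine for illustration with made-up data).
    s11 = sum(a * a for a in x1); s22 = sum(a * a for a in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * b for a, b in zip(x1, y)); s2y = sum(a * b for a, b in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det)

core1 = [0.3, -1.1, 0.8, 1.9, -0.4, 0.6]          # hypothetical Core 1 decades
core2 = [1.0, 0.2, -0.7, 0.5, -1.2, 0.1]          # hypothetical Core 2 decades
index = [(a + b) / 2 for a, b in zip(core1, core2)]  # pretend published index

b1, b2 = ols2(index, core1, core2)
print(b1, b2)  # recovers ~0.5, 0.5 -> the index averaged both cores
```

With real data the fit would not be exact, but coefficients near (1, 0), (0, 1), or (0.5, 0.5) would settle the question.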

The archived data for Sajama and Huascaran give centennial averages back to 24,950 BP and 19200 BP, resp., plus Huascaran has annual figures back to 1894, but these can’t be used to reconstruct the decadal figures back to 0AD used in PNAS06, so that these sites are still incomplete.

Yang used data on Dunde back to about 800 AD (see https://climateaudit.org/2008/02/03/ipcc-and-the-dunde-variations/ ), so evidently the old 2004 Dunde file, which is decadal back to 1000, is also incomplete. In any event, either Dasuopu or Dunde must have decadal averages back to at least 450 AD, since 6 of the PNAS 7 cores extended back that far.

Steve (who is alive and well but just busy with his day job) has called to my attention that the new Quelccaya file states, right up front,

Note: This file was reformatted 13 January 2012 to provide column
separation between columns 4 and 5. No data values were changed.
The previous version of this file lacked separation between columns
4 and 5 (at AD 871 and at AD 615 through AD 617), potentially
causing errors reading the data.

That file must have got corrupted by opening it in Excel. Happens sometimes, dontchaknow.

I was about to wish Dr. Thompson a safe and successful field venture, but thought I should confirm whether or not field work was involved. It isn’t, as the ice core to be analyzed already exists. The NSF Award Abstract indicates they’ll be running ICP-MS analyses on mineral material entrained in the ice.

I doubt that Drs. Thompson and Gabrielli will be conducting the mass spectrometry in their respective labs. Rather, they probably will collect samples from the ice core and transmit them to a qualified laboratory that will perform the analyses. One lab that I am familiar with is ALS ( http://www.alsglobal.com/minerals/services.aspx#geochemistry ). Their catalog of services lists an ICP-MS analysis for 41 elements, which includes the ones mentioned in the award abstract, at a price of US$22.35. The minimum sample size is one gram.

Let’s run the math on a ballpark estimate. University overhead is going to be in the area of 15% to 25%. Let’s make it simple and call it 17.75% of direct costs, trimming $88,746 from the award to pay for the costs of running the University. There will also be moneys to cover salary for the principal investigators and post-doc, plus wages and benefits (tuition) for the grad student. I’m pulling a number from the air here, but it could reasonably be $150,000 and may very well be more like $300,000. The NSF web page doesn’t provide a copy of the proposal, so we’re left to guess how parsimonious the PIs will be on this project.

Allow for other expenses, such as the necessary video conferencing equipment so that high school teachers can participate in this exciting project, and you’re left with perhaps $300,000 to cover ICP-MS analyses. For additional simplicity I’ll add some shipping costs to the lab price to make it an even $30.00 per sample. With the remaining budget OSU can acquire 10,000 analyses.
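The arithmetic above can be reproduced approximately; a small sketch follows, where the overhead rate, salary figure, and per-sample cost are all the guesses from the text, not numbers from the proposal. (The text rounds the remainder down to ~$300,000 and 10,000 analyses after other expenses.)

```python
# Back-of-the-envelope budget for the $588,000 award, using the
# guessed figures from the comment above (none are from the proposal).
award = 588_000
overhead_rate = 0.1775          # assumed university overhead on direct costs

# Overhead is charged on direct costs, so direct = award / (1 + rate).
direct = award / (1 + overhead_rate)
overhead = award - direct
print(f"overhead: ${overhead:,.0f}")   # ~ $88,6xx, close to the $88,746 quoted

salaries = 150_000              # guessed PI / post-doc / grad-student costs
cost_per_sample = 30.00         # ~$22.35 lab fee plus shipping, rounded up
remaining = direct - salaries
print(f"analyses affordable: {remaining / cost_per_sample:,.0f}")
```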

I wonder what qualifies as unprecedented spatial resolution. The abstract mentions Guliya, Naimona’nyi and Dasuopu, Puruogangri and Dunde. Are we looking at 25 overall core sites? If so, then the per-site analysis budget is reduced to 400 samples. Given the stated desire to analyse from ~1500 A.D. to the present, that may be enough of a budget to analyse down to a one-year resolution (with the caveat that dating may be problematic). If overhead, wages, and other expenses chip away at the analyses budget, the spatial or temporal resolution will suffer.

I’m more concerned with the amount of testable material that can be extracted from the cores. My experience in mineral exploration involves the sampling, preparation, and testing of solid material, not ice. Assuming that the ICP-MS procedure is comparable to that of the mineral industry, a minimum sample size of one gram may result in a significant amount of binning. I confess to being ignorant of exactly how much dust is contained within the cores and exactly how much core material OSU may have at their disposal. But it seems to me that in order to get a minimum sample size they will need to melt a lot of ice, either by having significant volumes for a given year or by grouping samples by decade (or longer) to collect enough particulate material.

Should OSU be planning on performing the analyses themselves, I fear the result will not meet the quality control standards expected from any credible commercial laboratory. Such quality control does not come cheaply, and in my opinion a university laboratory lacks the incentive to implement such a process.

I don’t envy that grad student. Hopefully they’ve got good QC in place. Even with an incredibly accurate and precise spectrometer, contamination in sample prep and lab procedures remains a constant threat.

It would be nice to have some of the details of the subject proposal, but for what is described in the abstract this seems pretty reasonable. Need to do a lot of inferring, but I could easily spend that if I had to cover a few months of PI salary, a post-doc and a MS student each year. Not to mention whatever cost is involved with the chem analyses.

Uni is gonna take an overhead percentage, probably in the ballpark of 15% but that’s +/- 10%

If you have the “unprecedented” temporal scale of 1 sample per year, you have 500 samples per core. If you run those through a commercial lab doing ICP-MS you’re looking at $25 – $30 per sample. So for one core at super-duper sampling you’re looking at upwards of $15,000 per core location. Do that at 20 sites and you have ~ $300,000 in lab costs.

Whether or not you can practically collect particulate samples at that temporal scale is another matter.

Quelccaya in fact has two cores, and most of Thompson’s sites have two or more cores. I haven’t studied the differences to see which he is using and why, since it’s been hard enough to get the data for whichever core he does use.

If one core had technical problems that made it less satisfactory, then one should obviously go with the other core, but then stick with it for all purposes.

If both are equally good, it would make sense to me to average both together after examining their differences, in order to reduce measurement error. If their time periods don’t coincide, one should ordinarily be adjusted so that the d18O means are equal over the common period before averaging, since they might have different altitudes or exposures even if they are from the same site. The portion of the average represented by only one core will then be noisier than the rest, but that’s better than ignoring one of the cores altogether or omitting the one-core observations.
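A minimal sketch of the averaging procedure just described, in Python with made-up d18O values (not actual core data): shift one core so the two means match over the common period, average where both exist, and keep the single (noisier) core elsewhere.

```python
from statistics import mean

# Hypothetical decadal d18O values keyed by year (illustration only).
core1 = {1900: -18.2, 1910: -17.9, 1920: -18.5, 1930: -18.0}
core2 = {1920: -17.1, 1930: -16.8, 1940: -17.4}

# Shift core2 so its mean over the common period equals core1's.
common = sorted(core1.keys() & core2.keys())
offset = mean(core1[y] for y in common) - mean(core2[y] for y in common)
core2_adj = {y: v + offset for y, v in core2.items()}

# Average where both exist; keep the single-core value elsewhere.
combined = {}
for y in sorted(core1.keys() | core2_adj.keys()):
    vals = [d[y] for d in (core1, core2_adj) if y in d]
    combined[y] = mean(vals)
```

The one-core portions of `combined` are noisier than the two-core portions, but, as argued above, that beats discarding a core or dropping the one-core decades.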

One of the Himalayan cores actually has a big lacuna in the middle where one of the yaks carrying a few meters of core fell off a cliff! But there’s no reason not to use the rest of the data.

In any event, one shouldn’t just pick the core that has the best correlation to instrumental temperature, since that would generate a spuriously close fit to temperature in the instrumental period and a relatively flat fit earlier — a Hockey Stick, so to speak… I don’t know if Thompson did this in CC03 or PNAS06, since it’s hard enough to get the data for whatever core or cores he did use.

What I would like to see when a thread is devoted to a particular temperature reconstruction is graphs of the time series – without the distraction of the instrumental record tacked onto the end. Even the raw data in a time series is informative. I could do this myself (and probably will) and comment on what I see. I suspect others here could also, but I would think it would be better for Hu to provide these graphs.

The link below shows the time series for Quelccaya reconstructions with Delta O18 and Standardized Accumulation. I have made graphs of these time series before when reviewing and analyzing Mann(08). I have never read an explanation of why the variation in the O18 series changes with time nor of the unnatural looking and very visible patterns in the Accumulation series or whether those features can affect the validity of the reconstructions. It would appear to me that the authors of these reconstructions merely look for individual series that can be thrown together in order to obtain a final reconstruction result while the details of the individual series are mostly ignored. In fact, authors seldom show the individual proxy series.

Curiously, I was reminded of this in connection with the recent Thompson data archiving discussed in Hu’s post. The Dasuopu accumulation series have the same artifact as Quelccaya.

It’s too bad that there are many interesting questions and issues about Thompson’s ice cores that remain inaccessible because Thompson has failed to provide a complete archive of his data – even data taken nearly 25 years ago.

Hans provided a corrected URL to his excellent study down in the comments on the 2005 CA post Steve mentions. It’s at http://members.casema.nl/errenwijlens/co2/quelccaya.htm
The Quelccaya readme file says that a “printed documentation volume is now [1992] available from NGDC.” Perhaps someone could get ahold of it and put a PDF online? It may have the raw data that Thompson has been too busy to digitize himself.

Thanks, Ken — it makes more sense now. The spurious pattern in the earlier years is very much like the one that appears in annualized monthly CPI inflation computed from an index that is rounded to the nearest 0.1, yet is set to 100.0 in a recent year: when the index is on the order of 200, as now, the rounding error is negligible, but back in the 1950s it was on the order of 30.0 on the same base year. A one-tick monthly change in the index is then a 4% annualized inflation rate, so that when actual annual inflation is 2%, most monthly annualized values are -4, 0, +4, or +8%!
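The rounding artifact is easy to reproduce. A small Python sketch (a smoothly growing hypothetical index, not actual CPI data): round the index to the nearest 0.1 and compute annualized monthly inflation at a low and a high index level; the low-level series is wildly quantized while the high-level one is nearly smooth.

```python
# Grow an index smoothly at 2%/year, round to 0.1, and annualize the
# month-over-month changes computed from the rounded values.
def annualized_monthly(level, years=10, annual=0.02):
    rates = []
    idx = level
    for _ in range(12 * years):
        nxt = idx * (1 + annual) ** (1 / 12)
        r = 12 * (round(nxt, 1) / round(idx, 1) - 1)  # annualized, from rounded index
        rates.append(100 * r)                         # in percent
        idx = nxt
    return rates

low = annualized_monthly(30.0)    # 1950s-style index level: one tick = ~4%/yr
high = annualized_monthly(200.0)  # modern index level: rounding barely matters
print(max(low) - min(low), max(high) - min(high))
```

At level 30 the rates jump between 0% and roughly 4% even though true inflation is a steady 2%; at level 200 they stay near 2%.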

Hans Erren’s study shows how Thompson’s methodology leads to exactly the pattern you show when the data has been over-rounded.

Hans concludes:

Rounding of layer thicknesses to the nearest centimeter leads to accumulation reconstruction artifacts. As the used compaction formula is not mentioned in TMBK1985 nor at the NOAA archive, Lonnie Thompson was asked for the original core log data. After a reminder he responded as follows on 10 March 2004:

Dear Hans: Just returned from China! Unfortunately, those logs are all
hand done. These data where not put on electronic format.
We have just redrilled the Quelccaya ice cap in 2003 and brought back two
frozen ice cores and will be producing a new log based on
this new data. Unfortunately, right we are processing Bona-Churchill
ice cores and the new Quelccaya and Coropuna cores are
in the cue.

Sorry I can not be more helpful on these old data sets.

best wishes,

Lonnie

So we are left with an interpreted accumulation log, without direct access to the core log data. The worrisome side is that this interpreted dataset is a cornerstone of the preferred millennium climate reconstruction of the IPCC.

Note that Thompson was too busy processing Bona-Churchill back in 2004, but still hasn’t archived anything on that site!

Below is a link showing the time series of the Puruogangri Cores 1 and 2 decadal averages of oxygen isotopic ratios. These proxies appear to me not to track one another as might be expected when assuming these cores originate from nearly the same location. In my analysis the next step will be to determine whether I can model these series with an ARIMA model with long term persistence. I have done this modeling before with proxies from Mann (08) and found I can show the same series structure and obtain an upward (or downward) drift at the end of the series.

Very different pictures! I wonder which one he used in Figure 6 of PNAS06… (He doesn’t seem to have used the same data for the 400-year illustration in his Figure 5 that he used in the bottom-line 2000-year reconstruction in his Figure 6.)

Puruogangri is a newer core that was included in PNAS06, but not in CC03. The latter was supposed to have been the source of AIT’s “Dr. Thompson’s Thermometer”.

While I obtained rather different ARFIMA modeling parameters for the Puruogangri Core 1 and 2 proxies, the decadal resolution yields only about 200 points, no doubt too few for a good LTP model fit. I obtained d=0.166 and ma=-0.185 for Core 1 and d=0 and ma=-0.281 for Core 2.

I have noted that the Dasoupa, Tibet ice core has annual resolution from 1450 to 1996 and that series would be more appropriate for a ARFIMA model fit/analysis and thus I’ll do that next.

Note: I murdered the spelling of Dasuopu in my previous post and have corrected it here.

An attempt to model the annually resolved Dasuopu O18 proxy with an ARFIMA model failed because the model could not fit a d value for long term persistence, i.e. the series remained non-stationary. I therefore did a breakpoint analysis (assuming linear segments) using the function breakpoints in the R library strucchange, with the minimum segment length for fitting set at 5% of the length of the series.

I found breakpoints at the years 1772 and 1800. Looking at the Dasuopu series in the link below one can visualize mildly upward trending O18 to around 1772 and then a sharp downward trend to approximately 1800 and then a more or less steady upward trend from 1800 to 1996.

In my previous analyses of long term proxies I have not seen a series trending like the Dasuopu O18 here. A steady upward trend from 1800 to the present is a curious phenomenon, although I suspect Thompson would make no claim that this is a “true” temperature proxy. Other reconstruction authors may, however, toss this one in the hopper with other proxies that may not show strong trends and get a combined result that now shows at least some trending.

I plotted the decadal multi-variable time series for Dasuopu and Guliya ice cores from Thompson with the variables of O18, Dust, Chloride, Sulfate and Nitrate for comparison in the link below. For better comparisons I standardized the series by subtracting the mean from each one and then dividing by the standard deviation of the resulting anomalies. Please note that the y axis is scaled the same for O18 for Dasuopu and Guliya as it is for the other variables. The time series are of different lengths, but the differences between series are so great this does not present a problem in making visual comparisons.

On inspection it can be seen that the variables in the Dasuopu series seem to trend upward together in the end towards modern times while in the Guliya series these trends are absent. There are spikes in both series but the spikes do not correspond in time.

I have a major problem when I see two proxies such as these two from the same region (Tibet) respond so differently across all the variables measured. If one were to assume that these differences were real, a regional or global average calculation with any reasonable confidence intervals would require a huge number of closely spaced but separate proxies.

In order to show that an ARFIMA model with a fractional d can show rather lengthy trends throughout a 1000 year series, I did the following two simulations: I used fracdiff.sim in the R library fracdiff to simulate seven 5000 year long series with either the parameters ma=0.10 and d=0.35 or ma=0.10 and d=0.20. I windowed a 1000 year series randomly from each of the seven long series. The 14 windowed simulations are shown in the link below.

It is easy to visualize decadal and longer trends in these simulated series, with the higher d value series showing longer and more obvious trends. By showing these plots I am not claiming that real proxy series such as those from Thompson are entirely due to long term persistence. Nor do I claim that long term temperature is determined by long term persistence – in fact temperature and proxy responses could be governed by different models. Given that the measurements are reasonably accurate, I would suspect that the measured proxy responses are connected to climate changes, though to what specifically is not known. Knowing that proxy responses can vary widely even when measured within close proximity makes me wonder, however, whether an ARFIMA model with a fractional d might be an appropriate model.
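For readers without R, the same kind of simulation can be sketched in pure Python by truncating the MA(∞) expansion of (1 − B)^(−d). This is a stand-in for fracdiff.sim, not the routine used above, and it omits the ma term for simplicity.

```python
# Simulate ARFIMA(0, d, 0) noise: x_t = sum_k psi_k * e_{t-k}, where
# psi_0 = 1 and psi_k = psi_{k-1} * (k - 1 + d) / k are the MA weights
# of the fractional integration operator (1 - B)^(-d).
import random

def frac_noise(n, d, burn=500, seed=0):
    rng = random.Random(seed)
    psi = [1.0]
    for k in range(1, n + burn):
        psi.append(psi[-1] * (k - 1 + d) / k)
    e = [rng.gauss(0, 1) for _ in range(n + burn)]
    x = [sum(psi[k] * e[t - k] for k in range(t + 1)) for t in range(n + burn)]
    return x[burn:]   # discard the start-up transient

series = frac_noise(1000, d=0.35)
```

Plotting such series shows the same spurious-looking multidecadal "trends" as the windowed fracdiff.sim runs, even though the generating process has no trend at all.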

Adding to my wonder here is the fact that those who choose to publish proxies or use them in a reconstruction would not necessarily show those series that do not fit a modern warming period or show other unexpected features. Without seeing authors use an a priori proxy selection process with criteria that have reasonable physical connections, my wonderment here is not diminished.

The proxies from the Thompson ice cores in Tibet appear very different in their responses, yet are located in close proximity to one another. To show the close locations of these sites I have plotted them on a map of China in the link below.

The earlier annual layers are much thinner, but that in itself shouldn’t have this effect directly. I’m wondering if perhaps the water molecules are able to migrate through the ice at a slow (one might say “glacial”😉 ) pace. In the younger, thicker layers, this wouldn’t affect the annual random fluctuations much. But in the older layers, which are both thinner and have been sitting there longer, the random annual fluctuations may be averaging out. This would tend to make any temperature reconstruction flatter in the earlier years than in later years, creating a HS effect even if there wasn’t one to start with.

Then, if H2O molecules can migrate slowly through ice, will ice absorb CO2 in the entrapped air bubbles? If so, there may have been more CO2 in the air bubbles originally than now. Also, if the CO2 can migrate, there may have been bigger fluctuations in CO2 than have been measured. The ice itself could be tested for CO2 after crushing it to release any air bubbles. I wonder if this was ever done in Law Dome, Vostok, etc.

I am an alumnus of Ohio State (PhD in economics, 1997), and one of Hu McCulloch’s former students.

I am appalled at the academic misconduct of professor Thompson and how he continues to receive financial support from both the university and the federal taxpayers after numerous deliberate errors and also his refusal to share data so that his results could be independently verified.

I don’t know if this will have any effect, but I am going to complain to the administration of OSU. This is ridiculous.

Thompson should be commended for finally archiving long-overdue data for Guliya and Dasuopu in the past year.

However, at a minimum, he also ought to archive all the remaining decadal data that went into the 2000-year d18O index in Figure 6 of his 2006 PNAS paper. This would include decadal averages for Sajama and Huascaran as far back as they were used for that figure (probably 0AD). He has put a spreadsheet online that has the decadal averages for these two sites, but only back to 1000AD as used in his 2003 Climatic Change paper. In any event, the URL for that spreadsheet is not linked anywhere, so there is no way for the public to find it. In addition, he has decadal data for either Dasuopu or Dunde back to 450 AD that was used in PNAS figure 6 but has never been archived.

He should also at a minimum archive his data for Bona-Churchill. There are probably other sites that still need to have data archived, but these are the most important and best known.

Of course, he also should have archived concordances of depth and inferred age for all these cores, since in all but a few cases (notably Quelccaya), age is inferred from depth plus some assumptions about layer compression and accumulation rates. But at this point that is probably like asking for the moon.

Hu, I had only located the Thompson ice caps for which I had presented time series. For completeness I show in the link below all four Thompson ice cap locations in China – including Dunde. Next I need to show all these locations’ time series in one place.

Discussions of the adjustments required for ice cores were presented at CA a while back, as I recall. I think it might have been Hans Erren who had a good overall view of the technology involved. There had been a controversy brought forward by an observer (I do not recall names) who claimed that the cores were not properly adjusted for diffusion, but I think he was discredited. I am not at all sure how Thompson’s cores were adjusted – if at all – but as I recall Mann used at least some of the cores in reconstructions without adjustment.

Per your request, I’ve now posted graphs of the d18O data from CC03 at the end of the main post above. But instead of averaging the d18O values directly, which would at least have some physical interpretation, Thompson first computed Z-scores for each series, and then averaged the Z-scores.

Furthermore, as noted in my indicated paper, the high end of Dasuopu gets undue weight in his final index: it is the only Himalayan series that extends to the 1990s, and he averaged the two regional averages together. As a result, its weight suddenly jumps from 1/6 to 1/2 in the last decade, resulting in a strong HS shape for the final index.
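The weight jump follows directly from averaging the two regional means. A trivial Python illustration, using the 3-cores-per-region CC03 setup described above:

```python
# Weight of a single core under "average the cores within each region,
# then average the two regional means".
def core_weight(cores_in_region, n_regions=2):
    return (1 / cores_in_region) * (1 / n_regions)

print(core_weight(3))  # all 3 Himalayan cores present: 1/6
print(core_weight(1))  # Dasuopu alone in the last decade: 1/2
```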

I’m not so worried that the series for the different sites look so unalike. On your map of China, they don’t look so far apart, but in fact China is huge, as is Tibet for that matter. These sites therefore might have very different climate and precipitation patterns.

I’m more concerned that multiple cores from sites like Puruogangri can look completely different, as in your plot above. Have you tried plotting the two Quelccaya cores together? CC03 used the Summit core by itself, whereas MBH99 averaged the two together.

On further thought, the discussions I referred to above involved CO2 and trapped gaseous bubbles. I am assuming that the O18 reported by Thompson in the cores under discussion here is from the H2O molecule in the snow, firn and ice and can be distinguished from oxygen isotopes in CO2. I would think that the CO2 entrapment as air bubbles in ice is much more problematic with regards to diffusion than would be case with ions and H2O, but I have no good view of the literature in this matter.

I wonder whether it goes like this: the ice cores are obtained with significant amounts of funding and a lot of hard physical effort, and then comes the laborious work of analyzing them. At least some of the results are (eventually) made public, and then someone has to make sense of them in order to avoid disappointing what I would think would be great expectations for the work. I admit I am not currently familiar with any published analyses of these cores, other than to know that Mann was not hesitant to use them, evidently as is, in temperature reconstructions without much detail on what the cores might actually represent. Maybe part of the problem with the dearth of published information is this lack of a coherent model that explains what was found, particularly if the results vary considerably from core to core.

Thanks for the graphs, Hu. I have graphed the ice core O18 series over the total time period from 0 to 1990 where the data exist. My graphs present the same picture as Hu’s. I think presentations like these – without the distraction of the instrumental record tacked onto the end (in the circular reasoning and assumption that the proxies faithfully track temperature) and showing the full range of proxy variations – are informative, and in clear contrast with what too many papers, in my view, present with their spaghetti and instrumental records.

In the link below I have graphed together the O18 measurements from the four Thompson ice core sites in China noted in this thread, along with the Quelccaya site from Peru. The series were all standardized using the means and standard deviations from the time period 470-1980. It is readily observed that these series do not have any overall coherency. If the differences depicted in the graph are real and O18 is a reasonable proxy for temperature, a meaningful regional or global temperature reconstruction from O18 proxies would either have very wide CIs or require a very large number of proxy sites. These variations also make one wonder what the ice cores that were never published would look like.

Which Quelccaya do you use? Q Summit as in CC03, or the average of the two cores as in MBH99? It would be useful to see these side by side as with your Puruogangri graphs (which I’ve now added to the post above). Ordinarily, it would make most sense to average 2 cores from the same site, after shifting one or both to have the same mean over their period of overlap.

IMHO, it’s counterproductive to scale these series to equal sd, however, since they all have meaningful d18O units to start with. Standardizing as in CC03 and PNAS reduces Sajama’s MWP as well as Huascaran’s LIA. (And makes the odd wiggles in Puruogangri 1 look no bigger than the patternless ones in Puruogangri 2.)

Note that in CC03, Thompson adds data for the 90s for Quelccaya from a pit that was dug after the cores themselves. I eyeballed the number for that off their Figure 5, and then included it in my graph and regressions.

Thompson’s new data file for Dasuopu Core 3 raises the question, what happened to Dasuopu Cores 1 and 2? He does now provide recent annual data for Core 2, but not a longer decadal series to fully compare to Core 3. It would still be useful to compare decadal averages over the short overlap period, however, to see if at least that period correlates. And then perhaps to average these together after shifting to a common mean to represent the recent overlap period.
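The shift-then-average step described above (shift one core to the other's mean over their overlap, then average) can be sketched as follows; the two toy series and their overlap window are made up for illustration:

```python
import pandas as pd

def combine_cores(a: pd.Series, b: pd.Series) -> pd.Series:
    """Shift core b to match core a's mean over their overlap, then average."""
    overlap = a.index.intersection(b.index)
    b_shifted = b + (a.loc[overlap].mean() - b.loc[overlap].mean())
    # Align on the year index; mean() skips missing values where only one core exists
    return pd.concat([a, b_shifted], axis=1).mean(axis=1)

# Toy decadal d18O values for two cores with a two-decade overlap
a = pd.Series([-9.0, -8.5, -8.0], index=[1950, 1960, 1970])
b = pd.Series([-11.0, -10.5, -10.0], index=[1960, 1970, 1980])
combined = combine_cores(a, b)
```

Shifting to a common mean before averaging avoids a spurious step in the combined series at the point where one core drops out.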

“I’m not so worried that the series for the different sites look so unalike. On your map of China, they don’t look so far apart, but in fact China is huge, as is Tibet for that matter. These sites therefore might have very different climate and precipitation patterns.”

I do not agree with your lack of concern, for several reasons, among which are:

(1) Large long-term variations over a regional or global area would produce huge CIs when obtaining average temperatures.

(2) Variations like these make one wonder what the unpublished/unmeasured proxies would look like.

(3) Incoherent series like these, absent a priori selection criteria or a peek at all the proxy data, could point to a proper model of proxy response having long-term persistence, with any show of modern-period warming in the proxies being merely a happenstance of the selection process.

(4) We have the divergence problem in dendro and non-dendro proxies, which would agree with the foregoing LTP model.

(5) As you note above, the same sites show differences in proxy response, and those within-site differences combined with the between-site differences make one even more suspicious of the validity of the proxies as thermometers or as indicators of other factors.

I am certainly not denying here that we have had a modern warming period but I am merely pointing to the weaknesses in assuming that the proxy responses are valid thermometers.

It should bother you that the measurements at the Dasuopu site for O18, dust, chloride, nitrate and sulphate all trend upward together and at nearly the same rate.

Hu, the Quelccaya core I was showing in my graphs was Core 1. In the link below I show both Core 1 and Summit for Quelccaya for O18 (not standardized) and for the Accumulation Z-Score. The O18 series are very coherent. The Accumulation series agree well except for the period from around 1000 to 1200.

Thanks, Ken. I’ve displayed this new graph comparing the 2 Quelccaya cores at the end of the post above, with a comment on the possibility of H2O migration and/or CO2 absorption through ice.

If I’m wrong on the migration or absorption issues, I hope someone will correct me. But what other explanation than migration is there for the attenuation of d18O fluctuations before 1000 in these 2 graphs?

Look at the lower left two graphs (Quelccaya and Summit Accumulation Z-Scores) and squint a bit. The first half of each graph (~700 to ~1500) has an obvious induced pattern, an 800-year-long “wave” so to speak. I can’t suggest the source, although I suspect a data-processing anomaly more than a field anomaly.

Can others see this? Any suggestions as to the source and/or implications?

MrPete, I thought you were referring to the approximate 800 year difference between the two cores, where one series is concave and the other convex. That would/should take some explaining in my view, again given the core proximity and the agreement for O18.

I see a number of proxy series where I would swear what I see are artifacts of something (could be measurement) and do not look natural, but that is a gut feeling with no evidence on my part. I see what you see for Quelccaya accumulation. Why should accumulation Z scores be that different given the close proximity of the cores?

Even more troubling in my view are the measurement series for Dasuopu where the O18, dust, chloride, nitrate and sulfate measurements all spike up and down around 1800 and then increase in concert from 1800 to almost 2000. A reasonable and physical explanation is in order and I think required.

Some of these problematic features are difficult to see without doing a moving average or using other smoothing procedures.

OK, Pete, thanks to Ken’s comment I now see what you mean in Ken’s 10-yr moving average accumulation graphs. The two Quelccaya cores do look very different before about 1500 AD. The difference is not so obvious in the unsmoothed annual accumulation data that I was looking at, since that is visually dominated by the rounding error aliasing pattern. Ken’s 10 yr MA makes that pattern disappear, but then leaves very different accumulation rates before 1500 AD.
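For anyone reproducing these plots, a centered 10-year moving average of annual data is one line in pandas. The series here is synthetic noise standing in for an annual accumulation record:

```python
import numpy as np
import pandas as pd

# Synthetic annual accumulation series indexed by year (stand-in for the real data)
years = np.arange(470, 1985)
rng = np.random.default_rng(1)
accum = pd.Series(1.0 + rng.normal(0, 0.2, len(years)), index=years)

# Centered 10-yr moving average; min_periods keeps the endpoints from going NaN
ma10 = accum.rolling(window=10, center=True, min_periods=5).mean()
```

This kind of smoothing suppresses the year-to-year rounding/aliasing pattern while leaving the multi-decadal differences between cores visible.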

For some reason MBH99 included Quelccaya accumulation as a temperature proxy, and there are only 12 such proxies for the 1000-1400AD leg of the hockey stick, so this issue could have had a significant impact on the HS reconstruction. MBH used the average of both cores, I believe, for both d18O and accumulation. Even aside from the issue of why accumulation would be a direct temperature proxy in the first place, it’s not clear why the average of both cores could be meaningful, if the two cores behave so differently.

For anyone still coming to this topic, a question: In addition to all the failures of data documentation (for many other ice cores in varied locations), don’t the differences between the two Quelccaya cores raise questions about the adequacy of assumptions about the physical relationship between d18O and temperature/climate?

It seems to this layman that a great deal of testing and analysis would have to be done before one could think that the two differing Quelccaya cores could either be averaged or used (either) alone with confidence that one had a reliable, calibrated relationship with temperature.

HU (above):
“OK, Pete, thanks to Ken’s comment I now see what you mean in Ken’s 10-yr moving average accumulation graphs. The two Quelccaya cores do look very different before about 1500 AD….”

Here is another comparison. I have linked graphs comparing Dasuopu Cores 2 and 3 for accumulation. The graphs show the Core 2 and 3 series for raw accumulation, Z-scores of accumulation, and 10 year moving averages of the Z-scores. The comparison is best viewed with the MA and does show major differences.

Yes, the two Dasuopu accumulation rates are like night and day before about 1570AD.

Also, the Dasuopu d18O used in CC03 and plotted above in the post marches steadily upward from 1000 to 1300AD and after 1600AD, unlike any of the other 5 CC03 cores (or either of the newer Puruogangri cores for that matter). This largely offsets the pronounced downtrend in raw Huascaran d18O, leading to a flat HS shaft in the CC03 index.

Re: Hu McCulloch (May 2 13:23),
Thanks for the reminder on the rounding issue; I’d forgotten about that, and obviously have not been tracking this thread all that closely. My apologies! (For those who care, here’s a direct link to Hans Erren’s original analysis.)

It’s true, as you both note, that there are a number of other visually identifiable anomalies in this data. And when dealing with long-time-series rounding, I suspect other quantization effects may also become significant.

Hu, you can obtain the Mann (08) proxy data from the link below. Mann (08) used an average, as you noted he did in Mann (99), of Summit and Core 1 for O18 and Accumulation from Quelccaya and O18 from the Thompson cores from Dasuopu, Guliya and Dunde. The cores from Puruogangri as discussed at this thread were not used in Mann (08). Neither were the Core 2 and 3 accumulations at Dasuopu used in Mann (08).

I am of the view that Mann et al use proxies without a lot of detailed analysis or caution to what the proxies might be responding.

Hu, I was trained as a chemist many years ago, so your comments about diffusion in ice made me curious. I did a little research online, and from that I think the problem of trapped gaseous CO2 bubbles in snow, firn and ice is different from the diffusion of H2O (and the O18 in the water molecule) in ice. The CO2 problem involves the formation of ice around the bubble, which prevents it from diffusing. Depending on the snow accumulation, the time to form the hard encasement can vary, and thus the diffusion “mixing” can vary also, as noted in the excerpt from the link here:

“The age of the layers of ice can be fairly easily and accurately determined. The age of the air trapped in the ice is not so easily or accurately determined. Currently the most common method for aging the air is through the use of “firn densification models” (FDM). Firn is more dense than snow; but less dense than ice. As the layers of snow and ice are buried, they are compressed into firn and then ice. The depth at which the pore space in the firn closes off and traps gas can vary greatly… So the delta between the age of the ice and the age of the air can vary from as little as 30 years to more than 2,000 years.

The DE08 core from Law Dome core has a delta of 30 years. When the core was drilled in 1992 pores didn’t close off until a depth of 83 m, in ice that formed in 1939. According to the firn densification model, air from 1969 was trapped at that depth in ice that was deposited in 1939.

It doesn’t seem reasonable to assume that “1969” air was trapped at 83 m in “1939” ice. It seems to me that at depth, there would be a mixture of air permeating downward, in situ air, and older air that had migrated upward before the ice fully “lithified.” The air trapped in the 1939 layer should be a blend of air from 1909 to 1969. At the time that the 1939 layer was deposited, the ice crystals above 1909 would not have “lithified” yet. In 1939, the air within the interstitial pore space would be a mixture of 1909 to 1939 air. By the time the 1969 layer was deposited and the 1939 layer “lithified,” the air at the 1939 layer would have been a blend of 1909 to 1969 air.”

The diffusion problem of O18 in H2O in ice cores is explained in the link and excerpt below and involves H2O vapor diffusion, not perceptible diffusion in liquid or solid ice, at least as I interpret what I read. The article here appears to address the limits of using O18 to determine annual layers, not the smearing of O18 concentrations/ratios we are talking about. The process as described in this article would, however, agree with your thinking about diffusion in deeper ice tending to smear the annual O18 concentrations/ratios.

“Snow is slowly compressed into ice in the upper 80 meters of an ice sheet (read more about the process here). During this process, water vapour can move relative to the ice in the open pores between the snow grains, thereby smoothing the annual δ18O cycles. This diffusion process smoothes the δ18O signal and even erases the annual signal if the annual layers are thinner than 15-20 cm. In ice cores from sites with less than 15 cm of precipitation (measured in equivalents of compacted ice, not snow) per year, the annual cycle in δ18O will be obliterated, and dating based on annual δ18O oscillations is therefore not possible. This is the case for areas in north-eastern Greenland where the annual precipitation rate is significantly lower than 20 cm. For ice cores drilled in areas with about or slightly more than 20 cm of precipitation, diffusion will also blur the annual cycles, but it is possible to retrieve the annual cycle using diffusion correction techniques.

Very slow diffusive processes also take place deeper in the ice sheets. These processes slowly weaken the annual δ18O oscillations as the ice gets older and the layers thin due to the flow of ice.

Due to the diffusion processes, the limit of safe annual layer detection using δ18O / δD measurements is about 8500 years ago in the DYE-3 ice core. More favourable conditions at the summit of the Greenland ice sheet has permitted successful identification of annual layers from δ18O data in more than 14,000 year old ice from the GRIP ice core, while the NGRIP and NEEM ice cores cannot in general be dated using δ18O data alone.”

The diffusion of anions, cations and molecules in ice would appear to be a much slower process, and though I have not made any calculations, I am guessing that those processes are not a problem in ice cores, or at least a much smaller one. The links below gave me some ballpark rates.

“We report molecular simulation studies of the diffusion processes in ice and CO2 clathrate hydrates performed using classical potential models of water (SPC/E) and carbon dioxide (EPM2). The diffusivity of H2O in ice is calculated to be 1.3×10^-18 m^2/s at 200 K using molecular dynamics simulations, a result in good agreement with experimental data.”

“Near ambient pressures, molecular diffusion dominates protonic diffusion in ice. Theoretical studies have predicted that protonic diffusion will dominate at high pressures in ice. We measured the protonic diffusion coefficient for the highest temperature molecular phase of ice VII at 400 kelvin over its entire stable pressure region. The values ranged from 10^−17 to 10^−15 square meters per second at pressures of 10 to 63 gigapascals. The diffusion coefficients extrapolated to high temperatures close to the ice VII melting curve were less by a factor of 10^2 to 10^3 than a superionic criterion of ∼10^−8 square meters per second, at which protons would diffuse freely.”

Thanks, Ken — I had overlooked the fact that sublimation will mix the H2O as well as the CO2 before the bubbles close, adding a smoothing factor to the d18O in addition to the CO2.

The diffusion you cite for H2O in ice itself is too slow to mess things up much by itself — 1.3×10^-18 m^2/s equals 4.1×10^-8 m^2/millennium, in other words 0.2 mm in 1 millennium or 2.0 mm in 100 millennia, not enough to erase the signal even in the Vostok cores.
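This arithmetic is easy to check: the characteristic diffusion length is sqrt(D·t), using the diffusivity quoted above. A quick sketch:

```python
import math

D = 1.3e-18  # m^2/s, H2O self-diffusion in ice at 200 K (from the quoted simulation study)
SEC_PER_MILLENNIUM = 1000 * 365.25 * 24 * 3600  # ~3.16e10 s

def diffusion_length(diffusivity: float, t_seconds: float) -> float:
    """Characteristic distance sqrt(D*t) a molecule migrates by diffusion in time t."""
    return math.sqrt(diffusivity * t_seconds)

one_kyr = diffusion_length(D, SEC_PER_MILLENNIUM)            # ~2e-4 m, i.e. 0.2 mm
hundred_kyr = diffusion_length(D, 100 * SEC_PER_MILLENNIUM)  # ~2e-3 m, i.e. 2 mm
```

Since the migration scales as sqrt(t), a 100-fold longer time gives only a 10-fold longer migration, which is why even very old ice retains the d18O signal if solid-state diffusion is the only process at work.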

However, I am still concerned about the absorption of CO2 into ice. I assume that newly fallen snow, unlike rain, is essentially free of CO2. However, under high pressure and given enough millennia, can ice absorb gaseous (or liquid) CO2, if only near the surface? Since a microscopic air bubble is all surface, this could greatly change its CO2 content.

I have in mind the ancient “cementation” or “calamine” process for making brass: zinc boils below the melting point of copper, so you can’t just put them both in a pot and melt them together. However, hot copper that has not yet melted will absorb zinc vapor into its surface (say the first 1 mm), even though it is still “solid”. Therefore if you hammer copper into thin sheets that are all surface, and place these sheets in a crucible with zinc that is smelting from calamine into the vapor state, the copper sheets will turn to brass with up to 28% zinc content.

Likewise, even though ice is “solid”, it may be able to absorb gasses like CO2, if only very near the surface. Especially at the highish pressures at the bottom of an ice core. (Nothing like the pressures needed for the protonic diffusion you mention, I think.)

It is usually assumed that all the bubbles seal right at 70 or 80 m of overburden, but obviously this is not true: if that’s the average, some must seal at 30 m and some at 100 m, adding a further element of uncertainty to the dating of the air. Back when he was speaking to me, Lonnie Thompson once cautioned me against making too much of the relative timing of the CO2 and d18O in the Vostok cores, because of the great uncertainty in the air/ice lag. Sounds reasonable.

Hu, I need to look further into any differences between CO2 bubble entrapment in ice and O18 in H2O vapor in ice. What I garnered from the links was that atmospheric air, with H2O vapor, CO2 and other gases, is entrained in the snow, and that these gases can diffuse readily in the open pore spaces of the snow, to such an extent that relatively thick layers are required to avoid smearing of the annual gas content before the thickening and pore closure more or less permanently trap the gases.

Here is where I have trouble understanding how the differences between H2O vapor and CO2 capture in ice affect the final results for CO2 levels and O18 ratios. In the case of CO2, the authors always talk about the formation of bubbles in the firn and ice that trap the CO2 from further, or at least more rapid, diffusion, and they appear to indicate that the most recently laid layers of snow cannot be used to determine an annually resolved CO2 level. With O18, we know that the most recent layers of snow can be, or at least are, used to determine annual ratios of O18 in the water precipitated as snow. The obvious difference is that CO2 is first and always a gas, while the O18 in water is in the form of a solid in snow, with some lesser amount entrained as gas depending on the density of the snow. I would suppose that the entrapment of gaseous H2O and CO2 in bubbles would be nearly the same, but the big difference is that CO2 levels are obtained primarily from the gas in the bubbles and secondarily from any CO2 dissolved in the ice, whereas O18 from H2O comes primarily from the ice itself and secondarily from the H2O vapor in the bubbles. I do not know how much the O2 trapped in bubbles and dissolved in ice would interfere with the measurement of O18 in H2O.

I am currently of the opinion that O18 would be affected mainly by molecular diffusion of H2O in ice, given some long residence time and some great compaction of the ice into thin layers, and that CO2 is going to be affected by gaseous diffusion. After looking again at the diffusion constants for molecular H2O (and salts, for that matter) in ice, and realizing that I have to take a square root and divide by pi to convert seconds into meters diffused, I think the slow diffusion process alluded to in the link on O18 was referring to molecular diffusion. I need to investigate this further, but at this point my supposition could agree with your point on the smearing of the annual O18 ratios at depth in an ice core, given enough compaction of the ice and sufficiently long periods of time.

Hu, the links below are to papers that describe outgassing of cores already collected and stored, and, as such, they are not very relevant to what we are attempting to resolve. These articles do, however, give some good information about the ratio of bubble-trapped to dissolved gases (100 to 1) and the mass transfer coefficient for gases in ice (on the order of 2×10^-9 meters per second, in line with the molecular diffusion of water in ice). Secondarily, these articles point to some measurement artifacts that are possible from storage and handling of ice cores.

As an aside here, if I use a molecular diffusivity for H2O (specifically H2O containing O18) in ice of 3×10^-10 m^2/s and look at a time period of 1000 years (3.15×10^10 seconds), I get migrations of O18 in H2O in ice on the order of a meter, providing my assumptions and calculations are reasonable.
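Taking these figures at face value (a diffusivity of 3×10^-10 m^2/s over 1000 years, with the stated square-root-then-divide-by-pi procedure), the arithmetic does land near one meter. Note, though, that this diffusivity is many orders of magnitude larger than the solid-ice value of 1.3×10^-18 m^2/s quoted earlier in the thread, so the result mainly illustrates how sensitive the migration estimate is to the assumed D:

```python
import math

D = 3e-10                      # m^2/s, the assumed diffusivity (much larger than the solid-ice value)
t = 1000 * 365.25 * 24 * 3600  # 1000 years in seconds, ~3.16e10 s

# The stated procedure: take the square root of D*t and divide by pi
distance = math.sqrt(D * t) / math.pi  # ~0.98 m
```

With the solid-ice diffusivity of 1.3×10^-18 m^2/s instead, the same formula gives well under a millimeter, which is the discrepancy the two commenters are circling around.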

“Equation (1) governs solely the part of the gas that is dissolved in the ice which corresponds to less than 1% of the total air content (over 99% are kept in inclusions, either in air bubbles or clathrates). The two reservoirs (gas dissolved in ice and kept in inclusions) are assumed to be locally in equilibrium at all times (i.e., for the numerical case within each layer).”

“Enrichment of nitrogen gas has been found from gas analyses of ice cores retrieved from deep parts of Antarctica. Neither climate change nor gas loss through ice cracks explain the enrichment. In order to investigate the mechanism of the gas composition change, we develop a model of gas loss caused by molecular diffusion from clathrate hydrates toward the ice-core surface through ice crystal. We apply the model to interpret the data on the gas composition change in the Dome Fuji ice core during the storage for 3 years at 248 K. The mass transfer coefficients determined using the model are 1.4×10-9 and 4.3×10-9 m•s-1 at 248 K for N2 and O2, respectively. The difference in the coefficient between N2 and O2 causes the change in the O2/N2 ratio of the trapped gas in the ice core during the storage. During the storage period of 1000 days at 248 K, the O2/N2 ratio changes from -9.9‰ to-20.5‰. The effect of the gas loss decreases as the storage temperature decreases. The results have important implications for the accurate reconstructions of the paleo-atmosphere from polar ice cores.”

Hu, I’ll attempt again to get the complete link. It has a double dot that might be the problem, but that exact link gets to the correct reference, so if the link does not work, try typing in the exact link including the part beyond the hyperlink.

Hu, after my layperson’s research into ice cores as temperature proxies, I am of two minds on the practical considerations of using stable isotope ratios as temperature proxies. Unlike so many other proxies, the stable isotopes provide a strong, physically based and understood response to temperature, given the uncertainties of other effects such as diffusion within the ice core and the origins of the water in the precipitated snow. Extreme temperature differences, such as those between interglacial and glacial maximum periods or those arising from seasonal changes, appear to be captured by O18/O16 ratios. When the method is applied to the last one or two millennia, however, I am very skeptical about its capability to capture these smaller differences, both from what I have read and from personally viewing the differences in O18 proxies over that period.

It would appear that one could put limits on some of these effects, like diffusion, and thus I would expect that uncertainty bounds could be attached to the results.

CO2 and other gases trapped in ice core bubbles are more problematic vis-a-vis smearing of the record than O18 ratios, which reside in the water molecules that are part of the solid ice. The water vapor in the trapped bubbles is, I am assuming, a small fraction of the water in the solid ice. Of course, that situation leaves the water vapor to migrate into the ice and the water molecules to diffuse through the solid ice. Most of the work I have viewed in the literature talks about CO2 and gases trapped in the bubbles and much less about diffusion of O18-bearing water in the ice.

What I have noticed about the reporting of the Thompson ice core results is that while detecting millennial temperature differences seems rather secondary in the published journal material, the press reports zero in on any proxies that show late-series warming, without any cautions about the uncertainties of the results or explanations of why those uncertainties exist. It would also appear that those using these proxies in temperature reconstructions do not bother to obtain any independent view of a proxy’s validity as a temperature responder.

Below is a link to a paper on the Guliya Thompson ice core. By registering you can obtain a free copy of the full article.

“ABSTRACT. One common assumption in interpreting ice-core CO2 records is that diffusion in the ice does not affect the concentration profile. However, this assumption remains untested because the extremely small CO2 diffusion coefficient in ice has not been accurately determined in the laboratory. In this study we take advantage of high levels of CO2 associated with refrozen layers in an ice core from Siple Dome, Antarctica, to study CO2 diffusion rates. We use noble gases (Xe/Ar and Kr/Ar), electrical conductivity and Ca2+ ion concentrations to show that substantial CO2 diffusion may occur in ice on timescales of thousands of years. We estimate the permeation coefficient for CO2 in ice is ~4×10^-21 mol m^-1 s^-1 Pa^-1 at -23°C in the top 287 m (corresponding to 2.74 kyr). Smoothing of the CO2 record by diffusion at this depth/age is one or two orders of magnitude smaller than the smoothing in the firn. However, simulations for depths of ~930–950 m (~60–70 kyr) indicate that smoothing of the CO2 record by diffusion in deep ice is comparable to smoothing in the firn. Other types of diffusion (e.g. via liquid in ice grain boundaries or veins) may also be important but their influence has not been quantified.”

“Gas ratios in air withdrawn from polar firn (snowpack) show systematic enrichments of Ne/N2, O2/N2 and Ar/N2, in the firn–ice transition region where bubbles are closing off. Air from the bubbles in polar ice is correspondingly depleted in these ratios, after accounting for gravitational effects. Gas in the bubbles becomes fractionated during the process of bubble close-off and fractionation may continue as ice cores are stored prior to analysis.”