Svalbard’s Lost Decades

In a 2006 article in JGR, Aslak Grinsted, John Moore, Veijo Pohjola, Tõnu Martma, and Elisabeth Isaksson study several climate indicators from the Lomonosovfonna ice field in Svalbard, shown below with their caption:

In the oldest part of the core (1130-1200), the washout indices are more than 4 times as high as those seen during the last century, indicating a high degree of runoff. Since 1997 we have performed regular snow pit studies [Virkkunen, 2004], and the very warm 2001 summer resulted in similar loss of ions and washout ratios as the earliest part of the core. This suggests that the Medieval Warm Period [Jones and Mann, 2004] in Svalbard summer conditions were as warm (or warmer) as present-day, consistent with the Northern Hemisphere temperature reconstruction of Moberg et al [2005].

Although the Svalbard ice core record extends back to 1130, a 2009 paper in Climate Dynamics, by Grinsted and 3 of the same authors plus M. Macias Fauria, S. Helama, M. Timonen, and M. Eronen, utilizes the same ice core record to infer winter sea ice extent, yet omits the distinctively “warm” first 7 decades of the record. It concludes, “The twentieth century sustained the lowest sea ice extent values since A.D. 1200.”

My question for Dr. Grinsted and any of his co-authors who might drop in is, why did the first 7 decades of the core disappear between 2006 and 2009? Is it because they contradict the IPCC/AIT line that there was no MWP to speak of?

BTW, has the Lomonosovfonna core data ever been archived? I gathered from Steve’s post that it has not.

I might add that Craig Loehle and I (see Loehle 2007, Loehle and McCulloch 2008) have reconfirmed the existence of a MWP, using twice as many proxies as Moberg et al. Craig selected the proxies and did the smoothing, while I contributed standard errors to the 2008 correction, showing that the MWP and LIA were both significant relative to the bimillennial average. We did not use Lomonosovfonna, but it could be a useful addition to future such studies, if calibrated to temperature and archived.

Update: I would like to thank Dr. Grinsted for responding at length in comment #25 below, as follows:

It is curious that we find that it was warm in Svalbard during the MWP but we do not see a low sea ice extent in our sea ice reconstruction. I would have expected it to be lower, even though it does not extend quite as far back. When I was interviewed by the Danish press, I pointed this out as the most surprising result. But I do not see a conflict between those two observations.

We had several considerations that led us to restrict the sea ice reconstruction to 1200.

We knew that the oldest data 1100-1200 was influenced by melt to such a degree that ions were being flushed from the ice (Grinsted et al. 2006 and the figure shown on this blog). That made us cautious of whether the isotope data might be influenced by post-depositional processes as well. 1200 seemed a natural choice for the cut-off (see above fig).

The dating model is expected to perform worse near the bed. We believe the Lomonosovfonna dating to be quite accurate around 1200, since we have identified a sulphuric peak that we believe to be the 1259 eruption (Moore et al., JGR 2006). However, it is very important for the reconstruction procedure that the dating is correct to within 5 years. Otherwise we might try to reconstruct the past ice extent using a lag between ice core and tree ring data that is inconsistent with the one used in the calibration period. The primary reason to do the 5-year smoothing was to make the reconstruction more robust against small dating errors. The dating could still be good prior to 1200 AD; however, we did not have confidence that the errors would be in an acceptable range for the treatment we were planning, and therefore excluded this data. Note that 1200 AD is only 2 m above the max depth of the ice core.

The layers get compressed near the bed and the temporal resolution decreases back in time. For the reconstruction we needed at least 5-year resolution, because that is what we chose in the calibration period. At 1200 AD the d18O temporal resolution is 3-4 years per sample. That is OK, but not very good when we want to resample to 5-year averages.

@Hu (4): it is also asked why I only showed post-1400 d18O in my JGR 2006 paper. The reason is that E. Isaksson wanted to publish this herself before anybody else could get access to it. That simple. This is also where I will redirect all requests for the isotope data.
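To see why the 5-year smoothing buys robustness against small dating errors, here is a toy sketch in Python (all series are simulated; these are not the Lomonosovfonna data, and every number is hypothetical): a 2-year mis-dating destroys the annual correlation between two records sharing a common signal, but non-overlapping 5-year averages recover much of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical proxy records (say, ice core and tree rings) sharing a
# common year-by-year signal, each with independent measurement noise.
n = 800
signal = rng.normal(size=n)                  # common high-frequency signal
core = signal + rng.normal(scale=0.5, size=n)
trees = signal + rng.normal(scale=0.5, size=n)

def smooth5(x):
    # Non-overlapping 5-year block averages (the calibration resolution).
    m = x.size - x.size % 5
    return x[:m].reshape(-1, 5).mean(axis=1)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# A 2-year dating error: compare core[t] against trees[t+2].
r_annual = corr(core[:-2], trees[2:])
r_5yr = corr(smooth5(core[:-2]), smooth5(trees[2:]))

print(f"annual r = {r_annual:.2f}, 5-yr r = {r_5yr:.2f}")
```

With a mis-dating of two years, the annual correlation comes out near zero while the 5-year correlation stays substantially positive, which is the flavor of robustness Dr. Grinsted describes.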

He also thoughtfully replies to several questions posed by readers in comments #25, 27, 30, 49, and 59 below.

Update 2: In a subsequent paper with K. Virkkunen et al., Dr. Grinsted and co-authors report on the washout factor from two pits at the summit of Lomonosovfonna that update the original core, which was drilled in 1997. An announcement, entitled “Present day summers in Svalbard are as warm as those during the medieval warm period”, is on Dr. Grinsted’s website, with a link to the full paper. Note that since the horizontal scale is depth in meters rather than inferred calendar date, the present is at the left, while the 12th century is at the right and highly compressed.

It is not clear, however, that this is the same as either of the washout measures shown in figure 5 from Grinsted 2006 above, since neither of those has conspicuous up-spikes corresponding to the ones this update shows at around 25 and 32 meters.

93 Comments

I don’t view these studies with the same eye as I would a Mann study. But I think the point here may be lost insofar as the bleeding edge of climate concern seems to have forgotten that there was even a climate before 1970…. and after 1998.

Perhaps they could also clarify why the d18O series only goes back to 1400. I don’t understand “A” or “SMI”, but if the core is there, d18O should be obtainable.

One critical factor here is the dating. Grinsted 2006 explains that “Dating of the core was based on a layer-thinning model tied with the known dates of prominent reference horizons (see Kekonen et al [2005] for details).” There could be a lot of uncertainty to both the 1130 and 1200 dates, but I haven’t looked at Kekonen (in JGR) yet.

Re: Hu McCulloch (#4), Hu, some comments. The 2006 paper was examining proxies for 2 different concepts – continentality (1600-1930) for which they used d18O, and summer melt (1130-1990), for which they used the washout proxies. They showed d18O back to about 1400, which amply covered the period they were using it for. Maybe the analysis back to 1200 was incomplete at that stage.
Macias 2009 uses d18O back to 1200, but not the washout proxies. So it’s not quite right to say that 1130-1200 are omitted. No d18O for that period has ever been shown.
Washout and d18O are very different kinds of proxies. d18O gives an indication of the state of the atmosphere at the time of deposition, whereas washout indicates melting subsequent to deposition, which is local to the icefield. That is probably why Macias et al did not use it in a study of sea ice. Melting indicates warmth, but only if it happens; otherwise washout would seem to be a poor temperature proxy.

Although the moment has passed, I’d still like to comment that, while I agree with the snipping of speculation about motives, I also agree that Dr. McCulloch’s somewhat snarky question about why the decades are missing is very much about motivation.

Now it’s Dr. McCulloch’s thread and he can snip who he wants to, but capricious snipping doesn’t exactly boost credibility.


I’d also like to add my thanks to Dr G. for stopping by and engaging, and hope he won’t be put off by the confrontational attitude of a few commenters.


Finally, a (probably dumb) question: Why all the hand-wringing about the apparent conflict between “warm” and not-reduced sea-ice cover? In the context of polar regions, “warm” is a relative term, no? That is, not necessarily “consistently above freezing.” Couldn’t “warm” also mean enough more precipitation (snow) in the winter to compensate for greater ice loss in the summer? Can this be accounted for?

Well the “The twentieth century sustained the lowest sea ice extent values since A.D. 1200” conclusion still technically holds if you add the 1130-1200 period. Of course, leaving out the 1130-1200 period certainly leaves it up to the reader whether or not there were lower sea ice extents prior to 1200.

Of course, leaving out the 1130-1200 period certainly leaves it up to the reader whether or not there were lower sea ice extents prior to 1200.

Indeed. Add to that the headline grabbing title of the paper and I think it’s clear why no mention was made of earlier work or data suggesting as warm (if not warmer) conditions prior to 1200. The message of the paper would have been diluted.

Although the Svalbard ice core record extends back to 1130, a 2009 paper in Climate Dynamics, by Grinsted and 3 of the same authors plus Macias Fauria, S. Helama, M. Timonen, and M. Eronen, utilizes the same ice core record to infer winter sea extent, yet omits the distinctively “warm” first 7 decades of the record. It concludes, “The twentieth century sustained the lowest sea ice extent values since A.D. 1200.”
My question for Dr. Grinsted and any of his co-authors who might drop in is, why did the first 7 decades of the core disappear between 2006 and 2009?

Perhaps it’s because, as shown in Fig. 5, the δ18O only goes back to 1200 AD; that is the data used in their study, not the washout index, which goes back further.

Is it because they contradict the IPCC/AIT line that there was no MWP to speak of?

Rather snarky! I’m sure if I’d made similar comments re Loehle & McCulloch you wouldn’t have appreciated it?

When I click on the link for the 2006 JGR article, I get a “Page Not Found” message.

“… the very warm 2001 summer resulted in similar loss of ions and washout ratios as the earliest part of the core.” Could this be why d18O is unobtainable from 1130-1200, because of the ion loss and physical washout in that section? I don’t know the methodology, but wouldn’t the d18O numbers be corrupted by the ion loss and washout?

No. d18O is the deviation of the 18O/16O isotope ratio in the water molecules that form the ice, relative to a standard. Fractionation of isotopes occurs during both evaporation and condensation because of the difference in molecular weight between water molecules containing different oxygen isotopes. dD, the corresponding deviation of the deuterium-to-hydrogen ratio, is also used as a temperature proxy in ice cores. The dissolved salt concentration in the ice should not affect the isotope ratios of the water molecules AFAIK.
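For reference, the delta notation is just a normalized ratio. A one-line sketch (the VSMOW 18O/16O ratio is the standard literature value; the sample ratio below is hypothetical):

```python
# delta-18O: per-mil deviation of a sample's 18O/16O ratio from a standard.
R_VSMOW = 0.0020052  # 18O/16O of Vienna Standard Mean Ocean Water

def delta18O(r_sample):
    """Return delta-18O in per mil for a sample with 18O/16O ratio r_sample."""
    return (r_sample / R_VSMOW - 1.0) * 1000.0

# Polar precipitation is isotopically "light", so values are strongly negative:
print(round(delta18O(0.0019750), 2))  # about -15.1 per mil
```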

If there was significant run-off then you could imagine that the d18O would be affected. There is probably isotopic fractionation across the ice/water transition (=melting). If the melt water does not refreeze then the isotopic ratio of the remaining ice will have been affected. That said, I think that d18O is not very sensitive to melt. However, given the very intensive melt we previously found evidence for, we were concerned.

Rather snarky! I’m sure if I’d made similar comments re Loehle & McCulloch you wouldn’t have appreciated it?

You are right. Rather snarky. This is not a question that SM would have posed. It’s “speculation about motives,” to quote the snip in 13. But I’m glad to see Hu be more forthright and ask the question that most of us would ask. There may be a valid reason the decades in question were not included. Let’s find out what it is. The only way to find out is to ask.

As for Loehle and McCulloch not appreciating some snark, I suggest you go back and read the threads on their work. You will find that almost without exception, they did not react irrationally to snarky comments. Compare that to Dr. Steig’s performances on RC. Steig invariably starts off calmly, becomes increasingly more agitated, throws out some nasty snark and then disappears. All in the timespan of a few hours.

Grinsted et al. 2006 is not cited in Macias Fauria et al. 2009.
Very peculiar, as Grinsted is a co-author and the same ice core is used…

Clearly a mistake! I am all for shameless self-advertising, and I would definitely have made Marc Macias include it if I had noticed it myself. Seriously: it would have been relevant to the discussion and I am sure we would have included it if we had noticed it was missing.

Lost decades?
I would like to hear the article’s authors’ thoughts about the age of the glacier.
It seems to me that the glacier was not even there before 1130.
How come this glacier established itself and started to grow if the climate had not cooled back then?

It is curious that we find that it was warm in Svalbard during the MWP but we do not see a low sea ice extent in our sea ice reconstruction. I would have expected it to be lower, even though it does not extend quite as far back. When I was interviewed by the Danish press, I pointed this out as the most surprising result. But I do not see a conflict between those two observations.

We had several considerations that led us to restrict the sea ice reconstruction to 1200.

We knew that the oldest data 1100-1200 was influenced by melt to such a degree that ions were being flushed from the ice (Grinsted et al. 2006 and the figure shown on this blog). That made us cautious of whether the isotope data might be influenced by post-depositional processes as well. 1200 seemed a natural choice for the cut-off (see above fig).

The dating model is expected to perform worse near the bed. We believe the Lomonosovfonna dating to be quite accurate around 1200, since we have identified a sulphuric peak that we believe to be the 1259 eruption (Moore et al., JGR 2006). However, it is very important for the reconstruction procedure that the dating is correct to within 5 years. Otherwise we might try to reconstruct the past ice extent using a lag between ice core and tree ring data that is inconsistent with the one used in the calibration period. The primary reason to do the 5-year smoothing was to make the reconstruction more robust against small dating errors. The dating could still be good prior to 1200 AD; however, we did not have confidence that the errors would be in an acceptable range for the treatment we were planning, and therefore excluded this data. Note that 1200 AD is only 2 m above the max depth of the ice core.

The layers get compressed near the bed and the temporal resolution decreases back in time. For the reconstruction we needed at least 5-year resolution, because that is what we chose in the calibration period. At 1200 AD the d18O temporal resolution is 3-4 years per sample. That is OK, but not very good when we want to resample to 5-year averages.

@Hu (4): it is also asked why I only showed post-1400 d18O in my JGR 2006 paper. The reason is that E. Isaksson wanted to publish this herself before anybody else could get access to it. That simple. This is also where I will redirect all requests for the isotope data.

We knew that the oldest data 1100-1200 was influenced by melt to such a degree that ions were being flushed from the ice (Grinsted et al. 2006 and the figure shown on this blog). That made us cautious of whether the isotope data might be influenced by post-depositional processes as well.

Are there no modern studies of these processes in operation that can give more certainty on the question of whether oxygen ratios are disturbed? There are lumps of melting ice in abundance, and surely it is possible to do contemporary measurements that give an answer with greater confidence. Any references?

Re: Aslak Grinsted (#25), I really appreciate the effort by Dr. Grinsted to answer questions here on Climate Audit. I, too, am in the “peanut gallery” along with others who don’t have the expertise to judge what’s correct–but we can judge whether it appears the give-and-take includes information needed by those who can judge what’s correct.

One question I have: If the paper under discussion didn’t give a brief explanation for choosing the starting point at 1200, wouldn’t it be better practice to give an explanation so that people wouldn’t get so imaginative in wondering about “cherry picking”?

We knew that the oldest data 1100-1200 was influenced by melt to such a degree that ions were being flushed from the ice (Grinsted et al. 2006 and the figure shown on this blog). That made us cautious of whether the isotope data might be influenced by post-depositional processes as well. 1200 seemed a natural choice for the cut-off (see above fig).
The dating model is expected to perform worse near the bed. We believe the Lomonosovfonna dating to be quite accurate around 1200, since we have identified a sulphuric peak that we believe to be the 1259 eruption (Moore et al., JGR 2006). However, it is very important for the reconstruction procedure that the dating is correct to within 5 years. Otherwise we might try to reconstruct the past ice extent using a lag between ice core and tree ring data that is inconsistent with the one used in the calibration period. The primary reason to do the 5-year smoothing was to make the reconstruction more robust against small dating errors. The dating could still be good prior to 1200 AD; however, we did not have confidence that the errors would be in an acceptable range for the treatment we were planning, and therefore excluded this data. Note that 1200 AD is only 2 m above the max depth of the ice core.

In a nutshell: “A start point of 1200 rather than 1130 was selected because of concerns about ion flushing and dating resolution near the bed.”

In hindsight that sentence would have provided a clue to anyone questioning the start date. Generally speaking, I think most of us like to see these kinds of decisions supported (perhaps in an SI) with data and methods. Dr. G. exercised his scientific judgment. Perfectly acceptable. If the data were archived, others could of course look into the matter further.

Re: steven mosher (#62), yes, I realize he gave this explanation in the comment to which I replied. My question was whether it would be good practice for everyone to give at least a brief explanation in the paper itself.

I agree. Dr. G’s decision was based on the focus of his study. Still, the unused time frame could make an interesting paper. Not that I put a lot of stock in proxy reconstructions involving tree rings. But if the proper error range is included they can be beneficial.

Aslak Grinsted, I’m afraid you are failing to answer some rather obvious questions.
First on the dating of the ice core. The second paper says clearly: “The period covered by the ice core is A.D. 1200–1997” which contradicts the first paper and your comment above where you talk of a cut-off. So which is it – did you cut off the ice core for the second paper or did you re-assess its start date?

The other obvious question is why the ‘washout indices’ from the first paper were not used as a sea ice proxy in the second paper. Particularly since you say in the first paper that “negative washout indices during the period from 1700 to 1900 are indicative of greater sea ice extent in the Barents Sea”, showing that you believe it is a sea ice proxy. Well, I think we can guess the answer to that one.

It’s interesting that you see no conflict between the statements about the MWP being as warm or warmer than today and there being no ice melt in the ‘reconstruction’. Perhaps therefore you think that in the present warming, there is no need to worry about ice melting? The obvious conclusion is that the ‘reconstruction’ is just wrong.

That sentence is perhaps not so well formulated. It would have been more precise to say “The period covered by the ice core data is A.D. 1200–1997”. So, let me clarify: the period covered by the ice core is modelled to be 1123-1997. I prepared the ice core data for Marc Macias. That means I calculated the moving 5-year averages and I gave him the data since 1200. I have explained the reasons for doing so above. I think this also explains why it was written the way it was.

The other obvious question is why the ‘washout indices’ from the first paper were not used as a sea ice proxy in the second paper. Particularly since you say in the first paper that “negative washout indices during the period from 1700 to 1900 are indicative of greater sea ice extent in the Barents Sea”, showing that you believe it is a sea ice proxy. Well, I think we can guess the answer to that one.

Yes, that is obvious, and we also tried it. But please note that in the Macias et al. 2009 paper we have a very specific target, i.e., the spring maximum ice extent in the western Nordic seas. The washout indices did not correlate well with the target series, and the reduced model using only tree rings and isotopes performed better. Possibly because the washout indices don’t reflect the target season or the target region, or because the noise level is too great on short time scales (again: we use only 5-year averages).

I am curious to know what your guess is. I think that most people can recognize the disdain in your comment and can understand why I find it insulting.

It’s interesting that you see no conflict between the statements about the MWP being as warm or warmer than today and there being no ice melt in the ‘reconstruction’. Perhaps therefore you think that in the present warming, there is no need to worry about ice melting? The obvious conclusion is that the ‘reconstruction’ is just wrong.

I think it is amazing how far you can extrapolate supported only by your preconceived ideas. Here are the observations: 1) There was a high degree of ice melt on Svalbard, 2) Regional spring ice extent was not particularly low. How exactly are they in conflict?

Re: Aslak Grinsted (#30), Kudos to Aslak for coming here to defend his work. I suggest that he has made a very good defense of the reason to cut off the data when he did. Aslak: don’t get offended; there is so much “convenient” use of date ranges that people are suspicious.

Thank you for visiting and contributing to the debate here and providing valuable input. Unfortunately there are many here whose reactions to the inclusion/exclusion of certain data are influenced by the antics of a certain Michael Mann.

If you are not already aware, Aslak: Michael Mann knew about the lack of robustness of his ‘unprecedented warming in the last 1000 years’ claim to the inclusion/exclusion of certain bristlecone pine series from his reconstruction. He also knew that by using decentred PCA in combination with these bristlecone series he could produce a ‘hockey stick’ shaped graph that had the effect of attempting to change history by showing that the MWP and LIA did not exist. This now infamous and completely discredited graph (thanks to the efforts of Steve and Ross) became the ‘poster child’ for the IPCC TAR and was trumpeted (and still is by Al Gore) as the ‘smoking gun’ of AGW by its advocates. Despite a NAS panel review and an independent assessment by an expert team of statisticians led by Edward Wegman, to this day Michael Mann (and his fellow hockey team members) continues to deny that he knew about the complete lack of robustness of his reconstruction to the inclusion/exclusion of the bristlecone pine series.

May I ask you what you personally think about this whole shameful episode in climate science?

Aslak:
Many thanks for your thoughtful, candid and open responses to the questions from Hu et al. MrPete’s comments are on target: PaulM should try to be more courteous and less confrontational. His tone is totally unwarranted. Alas I fear you are the innocent victim of some of the ill-feeling caused by Lonnie Thompson’s continued reluctance to archive his data: Ice cores apparently bring out the worst in some people.

Aslak,
No disrespect intended, but I feel I must ask a somewhat difficult question. While the reasons you give for not using the 1130-1200 data seem perfectly reasonable, I find myself wondering about this sentence:

“The twentieth century sustained the lowest sea ice extent values since A.D. 1200.”

While certainly literally true, it is also true that the washout indices show that it was likely to be as warm or warmer immediately prior to the beginning of your reconstruction – and you agree that this is the case based on your 2006 paper. This particular sentence has no scientific or analytical value, and seems to be present only to impart the impression that the 20th century temperatures are unique.
I understand this may be a difficult question, and I hope you do not take this as disrespectful.
Also, thank you for taking the time to explain your proxy selections. That was much appreciated.

I do regret that we did not discuss the 12th century melt in the sea ice paper. We would probably not have been able to come up with any explanation, but it would have highlighted the issue for further study. Perhaps somebody can come up with an explanation, or just more proxy data that could shed light on what exactly was going on. I’m thinking of what happened to ocean currents, surface temperatures, sea ice cover in other regions, …

The slow, almost linear trend going from the MWP to the LIA is very clear in both d18O and the melt indices. This trend is not nearly as apparent in the sea ice reconstruction. By extrapolation, that makes me believe that the sea ice reconstruction would not have been below the 20th century extent even if it had included the 12th century.

Re: Aslak Grinsted (#49),
I was going to say that I completely agree… but on further reflection, I think this is a good example of how very reasonable statements for those “in the know” can turn into PR hype when communicated without context.

The comparison statement about the economy makes some important assumptions:
– the reader can be confident the economy has been measured prior to the 1930’s
– the reader is probably aware of the Great Depression

If they are NOT aware of that, it may seem silly, but it is quite possible for a reader, or news editor, to imagine that “since the 1930’s” is just picky detail…

THAT is the unfortunate climate-science-communications context we’re dealing with.

There’s nothing you can do about other situations, but I have two recommendations for wording things to make misinterpretation more difficult:
– include context in such statements
– put the most important context first, so that dumb editing won’t too horribly change the meaning

Satellite example:
“Lowest arctic ice recorded since the beginning of satellite measurements in 1978” is not so good… the quick glance gets “since the beginning” and forgets that we’re only talking 30 years.

“Lowest arctic ice recorded since 1978 when satellite measurements began” is better.

For yours, I would suggest something like:
“The twentieth century sustained the lowest sea ice extent values since the MWP in A.D. 1200.”

Three extra words provide explicit acknowledgment and/or education about what came before. Sad that such things are needed, but that’s the world we live in.

Re: Aslak Grinsted (#49), After reading your response, I retract my implication that the statement was misleading. As I said, it is factually accurate, and I do not see how you should be held responsible if others later spin it their own way. I think that providing context as MrPete suggests can help; however, I do understand that the purpose of a journal article is to communicate the science, not to play semantics games.
Thank you for your reply.

I do understand that the purpose of a journal article is to communicate the science, not to play semantics games.

Which sounds rather naive to me. Very few articles are published that don’t push an “angle” ie they don’t purely communicate the science because they have to “justify” publication to journals, the work to grant awarding bodies and increase the “visibility” of the researchers themselves. This is common practice in all areas I know of and it’s why, for example, you’ll get all kinds of strange things attributed to “climate change”. It provides the high profile popularity angle.

Perhaps I am naive, then. I was very careful to avoid any attribution in my contact with the press. I can assure you that some journalists were trying to make me say some interesting stuff. In particular, the Telegraph tried to make me say that continued ice loss is dangerous because it might result in an ice age. Just as in The Day After Tomorrow. 🙂

#34:
Commenting on sea ice extent during the period before the reconstruction begins would have been pure speculation on the part of Professor Grinsted and his collaborators. It hardly seems appropriate for a journal article.

The dominant feature of the reconstruction is a sudden decrease in sea ice extent during the 1920s to historically anomalous levels. I don’t see how this paper could have been written without commenting on this.

There is a fine line between being skeptical of data, and not being open to new data. I think that this thread contains posts which appear to cross that line.

This reconstruction shows a sudden state change in ice extent during the 1920s, interrupted by a brief rebound during the 1960s. At first blush this appears to be inconsistent with global climate models which show the great majority of arctic warming occurring far more recently.

If arctic warming is gradual, and dominated by anthropogenic CO2, it may be a challenge to reconcile these results.

Kudos to Dr. Grinsted for coming here to answer some questions. Dr. Grinsted, as Craig L suggests, don’t take offense at the snarky little comments by some. In particular, people here tend to have a hair-trigger reaction to choices of data inclusion and exclusion, especially at the start and end periods of time series. That doesn’t excuse the boorish behavior, just puts it in context.

PaulM. please treat Steve’s guests ( especially those who have been here before to help people understand things) with respect.

This discussion is on the cusp of a civilized, mutually respectful exchange. There is even a hair-breadth glimmer of an attempt to assume honest intent in Grinsted et al’s thinking and vice versa. Please keep it up until there is complete understanding (if not agreement, which would be awesome).

Professor Grinsted is a good example of how visiting scientists should approach CA. Stick to the science with the confidence that any snarky comments will eventually be called out and/or snipped by the moderators.

RE Ianl #46, Nick Stokes #17,
I think Nick’s valid point is that the washout rate must be a very nonlinear measure of summer (only) temperature — If it’s too cold to melt, there won’t be any ion washout to speak of, and so there is not much of an indication of how much colder it was. It may still be a valid indication of one aspect of temperature, but perhaps linear regression does not capture the full effect without some modification. Tobit censored regression comes to mind, as does piecewise linear regression.

Even then, washout is only an indication of summer temperature. However, summer temperature is surely correlated with summer ice melt as well as with annual temperature, and therefore also (negatively) with winter sea ice extent. So it sounds like it should be a reasonable (if nonlinear) candidate as a proxy for maximal sea ice extent. The 2009 paper lead-authored by Macias Fauria does not even mention the availability of this proxy, let alone test for its relevance. In fact, it does not even cite the 2006 Grinsted et al paper.
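To make the piecewise-linear idea concrete, here is a sketch in Python with entirely made-up numbers (not the Lomonosovfonna data): washout responds to temperature only above a melt threshold, and a simple grid search over the breakpoint recovers both the threshold and the slope.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: washout responds to summer temperature only above a
# melt threshold T0 = 0 degC; below it the index is flat noise.
T = rng.uniform(-4.0, 4.0, size=200)          # summer temperature, degC
washout = 2.0 * np.maximum(T, 0.0) + rng.normal(scale=0.3, size=T.size)

def fit_hinge(T, y, grid):
    """Least-squares fit of y = a + b*max(T - T0, 0), T0 chosen by grid search."""
    best = None
    for T0 in grid:
        X = np.column_stack([np.ones_like(T), np.maximum(T - T0, 0.0)])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        sse = float(np.sum((y - X @ beta) ** 2))
        if best is None or sse < best[0]:
            best = (sse, T0, beta)
    return best[1], best[2]

T0_hat, (a_hat, b_hat) = fit_hinge(T, washout, np.linspace(-2.0, 2.0, 81))
print(f"threshold ~ {T0_hat:.2f}, slope ~ {b_hat:.2f}")
```

A full Tobit fit would instead maximize a censored likelihood, but even this crude breakpoint search shows how the "no melt below freezing" nonlinearity can be folded into a calibration.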

RE Aslak #27, are you sure this omission of the paper you lead-authored was just an oversight?

RE Aslak #27, are you sure this omission of the paper you lead-authored was just an oversight?

100% sure. It would have been good to note the high degree of melt in the 12th century. But we would not have been able to discuss it much, and so I don’t see how it could have been much more than a one-line note. It is hardly central to the paper.

Re: Hu McCulloch (#53), The 2006 paper says (para 26) that the washout index did not correlate significantly with any temperature series, although it did correlate with d18O. There is a related stratigraphic melt index (SMI), and para 25 notes that this also does not correlate significantly with temperature, and cannot be used as a summer temperature proxy.

Where a published climate study starts (or ends) can, of course, change the readers’ perspective on historical trends and changes. I have concluded on reading a number of papers in climate science that the authors’ motives and explanations for where they start and/or finish are really not the important factor operating here. Thinking persons, I believe, must judge for themselves the resulting implications of the start/end points in the larger picture of things.

As a recurring example, some climate scientists recording the TC activity in the NATL like to use the 1970s as a starting point for their published studies, citing better data reliability. As far as that reasoning goes, one cannot argue strongly against the choice. However, the TC activity under study is cyclical, with the start period at a valley and the present time near a peak; and while the 1970s provide incrementally more reliable data than previous times, the reliability of the data has, no doubt, improved from that point forward. Knowing this, one sees the results in a different light than perhaps the authors, who may or may not have neglected to emphasize these contextual points, have described them.

A complete contextual description of these matters, while helpful to the reader, is probably too lengthy to expect from most authors and publication houses. I think that puts the onus back on the reader, as a thinking person, to understand the context. I judge that is the kind of person (who enjoys analyzing papers with the bigger picture in mind) that participates in blogs like CA.

On the other hand, I do think there exists a kind of amen choir for climate scientists, made up of laypersons, who seem to think that we do not have sufficient capabilities to understand the technical issues to make our own judgments. They seem to have an abiding faith in climate scientists – or at least those whose conclusions match their own POVs.

I think you all have raised the bar too high for the good scientist. That’s not to say that scientists don’t understand PR. That’s not to say that they shouldn’t write clearly. But to expect them to write in a way that is “spin free” or “unspinnable” is just asking too much, in my opinion. It’s a science paper directed at people with the requisite knowledge. One can reasonably expect that they understand these critical elements in any analysis: start dates and end dates; inclusion/exclusion criteria; the impact of data selection on calculating trends; and the sensitivity of results to these selections. For my part (since I don’t READ alarmist media), if I have the data and the code and a forum for debate, I think reason will eventually rule. If it doesn’t, then we are fucked.

Somewhat OT but interesting. If there was a lot of summer melt in the twelfth century, but no reduction in the amount of winter ice, this must mean a rather different climatic regime than at present. At the present time Svalbard has a quite maritime arctic climate which is strongly influenced by the relatively warm waters coming up the west coast. The twelfth century would seem to have had a more continental climate, with relatively sunny and warm summers but perhaps less influence from the Gulf Stream.
This is interesting since the Gulf Stream was apparently active, and the amount of open water in summer not much less than now, even at the climax of the LIA in the Seventeenth century (this is proven by the fact that the Dutch whalers put their main base on Amsterdamöya at the north-west tip of Spitzbergen).

May I briefly interrupt this discussion with a small plea on behalf of your behind-the-scenes volunteers:

Please ignore spammers. If you see spam that’s been left in place for more than a couple of days, then feel free to bring it to our attention.

How do you know if something is spam that has managed to elude our spam traps?
1) It will always have a link to a commercial or salacious or all-links website. Usually it’s a link from the poster’s name, more rarely in the comment itself.
2) It will almost always have somewhat bizarre content.
3) It will not come from a familiar poster.

I would also like to express my appreciation to Dr Grinsted for taking the time to talk to us here at CA. There are a huge number of engineers (me included) and geologists here. We tend to be a sceptical lot and have been trained to smell BS from afar. This is a big part of the reason for our requests for logic, precision, honesty and openness in the scientific literature that will inevitably affect us all.

I would also like to express my appreciation to the unsung heroes of the climate wars: the moderators at CA and WUWT.

You did apparently have considered reasons for excluding the washout series and the 12th century d18O data from the 2009 sea ice paper. However, it was a big mistake, IMHO, to omit these reasons from the paper, and to bill the paper as “Unprecedented low twentieth century winter sea ice extent… since AD 1200”, given the evidence of considerably warmer temperatures in the preceding 7 decades (however uncertainly dated) from the same ice core.

In any event, in order to qualify as scientific rather than faith-based, both these papers need to have their data publicly available. I recognize that you have no control over the data yourself, and that it is customary for researchers who collect data to put off releasing it until they can get a publication out of it. But the core was drilled in 1997, and Dr. Isaksson has already gotten at least 4 lead-authored articles from this data, plus innumerable co-authored articles. Did she receive any public support for this work that requires her to release her findings eventually?

If I had my way (not that I have any say in the matter), public policy reports like IPCC AR5 would bar lead authors from citing papers like these that are based on undisclosed data, even if they have been peer reviewed.

After scanning the 2009 article linked in the thread introduction, I see much grist in it for further analysis. I see temperature “indicators” of sea ice extent and EPS “leveling” out before the expected anthropogenic influences took hold and then holding there during the expected anthropogenic warming. I also see these two indicators out of phase timewise.

As for the 7 decades of data prior to A.D. 1200, I judge that the thinking person would inquire, given the overall context of the study, and do so regardless of how the authors framed their limits on the study.

The 2009 paper of interest here gives the process and methods whereby a proxy for sea ice extent back to 1200 is developed, using tree rings and O18 from an ice core, and calibrating against the sea ice extent during the 1864-1997 instrumental period.

On the first read of papers like this one, I often look for those selection processes for data manipulation and use that might be evidence of over fitting a model. In this paper the regression model for the reconstruction uses sea ice extent (SI) as the dependent variable and the tree ring index (TR) lagged one year and the ice core O18 lagged two years as independent variables. While the authors make general references to why the variables can be lagged, the choice was made from what the authors call a selection process of candidate combinations from evidently stepwise trial runs with each of the two independent variables lagged under five conditions: t-2, t-1, t, t+1, and t+2.

I saw no evidence in the main body of the paper of compensating the degrees of freedom for this selection process, which in reality had no a priori reasoning for the particular selection that was used. Is anyone else bothered by this process – or am I missing something here?

The regression residuals had rather large autocorrelations, and I would suspect that the t-1 and t-2 lags for the TR and O18 variables are strongly autocorrelated also. How that complicates a calculation of the reduction of df I do not know.

Correlations are given in the paper for unfiltered, 5 year smoothed and high pass filtered reconstruction statistics and show a dramatically decreasing correlation for the TR and O18 to SI as the frequency increases. I think that the authors use autocorrelation of the smoothed series to compensate for the dfs in smoothing.
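For what it’s worth, the usual Quenouille-style compensation shrinks the effective sample size of a correlation between two autocorrelated series by the factor (1 − r1·r2)/(1 + r1·r2), where r1 and r2 are the two lag-1 autocorrelations. Here is a quick sketch of that generic adjustment (not necessarily the authors’ exact procedure):

```python
import numpy as np

def lag1_autocorr(x):
    x = np.asarray(x, float) - np.mean(x)
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def effective_n(x, y):
    # Quenouille-style adjustment: autocorrelated neighbours carry less
    # independent information, so n is deflated accordingly
    r1, r2 = lag1_autocorr(x), lag1_autocorr(y)
    return len(x) * (1 - r1 * r2) / (1 + r1 * r2)

rng = np.random.default_rng(1)
def ar1(n, phi):
    e = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

x, y = ar1(2000, 0.9), ar1(2000, 0.9)
print(effective_n(x, y))   # far fewer than 2000 effective observations
```

For two series with lag-1 autocorrelation near 0.9, the 2000 nominal observations shrink to only a couple of hundred effective ones, which is why smoothing before correlating is so dangerous.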

RE Ken Fritsch #75,
You’re quite right that autocorrelation may be a problem for the Macias Fauria 2009 paper.

On their unnumbered fifth page, they do describe a procedure that ordinarily should be adequate for generating critical values for their full-period regression R^2: First, they fit an AR process to each series involved, e.g. Sea Ice (SI), d18O, and Tree Rings (TR). Then they simulate these processes independently and run their regression of SI on d18O and TR, tabulating the distribution of R^2, using 10,000 sets of simulated variables. This is an arguably better alternative to using the Quenouille-Bartlett “effective degrees of freedom” adjustment.
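The described procedure can be sketched as follows, using AR(1) fits for brevity (the paper may well fit higher-order AR models, and the series below are stand-ins for SI, d18O and TR, since the real data are unarchived):

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_ar1(x):
    # OLS estimate of the AR(1) coefficient
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

def sim_ar1(phi, n):
    e = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def r_squared(y, X):
    X = np.column_stack([np.ones(len(y)), X])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

# Stand-ins for the observed SI, d18O and TR series (here just AR(1) noise)
n = 134                                  # roughly the 1864-1997 span
si, d18o, tr = (sim_ar1(0.9, n) for _ in range(3))
phis = [fit_ar1(s) for s in (si, d18o, tr)]

# Null distribution of R^2 from mutually independent AR surrogates
null_r2 = [r_squared(sim_ar1(phis[0], n),
                     np.column_stack([sim_ar1(phis[1], n), sim_ar1(phis[2], n)]))
           for _ in range(2000)]
print("95%% critical value for R^2: %.3f" % np.quantile(null_r2, 0.95))
```

The point of tabulating the null distribution this way is that with persistent series, the 95% critical value for R^2 is far above the value an iid t-test would suggest.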

However, their Table 2 gives the estimated AR coefficients, and shows that most if not all of their series, as smoothed before running the regression, are near unit root processes. The sum of the AR lag coefficients, which is the measure of persistence used by the Augmented Dickey-Fuller test and by Andrews and Chen (Econometrica 1994, I believe), is .972, .920, .955, and .845 for TR, d18O, SI, and the regression residuals (“CI”), respectively. As is well known, the OLS estimate of this sum is biased downwards, especially as a unit root is approached. They provide none of the conventional tests for either a unit root or for cointegration.

The serial correlation in their variables and regression residuals is artificially aggravated by their pre-smoothing of the data with a “5-year cubic spline”. I take this to mean a cubic spline (a piecewise cubic function with discontinuities in the third derivative at selected “knot points”) with a knot every 5 years. I’m a big fan of cubic splines, and in fact used them in one of my first papers (J. Finance 1975). However, by using splines to smooth the data, they are creating serial correlation even if none existed in the first place, thus making the task of removing the effect of the serial correlation unnecessarily difficult.

If they had simply taken 5-year averages (to average out noise) as in Grinsted et al 2006, and then used 5 years as their sampling period, there might have been only moderate serial correlation to contend with. However, a 5-year spline is much more complicated than a simple 5-year average, so that just sampling it every 5 years would still leave some induced serial correlation. I plan to do a post in the near future on spline smoothing (since things are slow this month…) to explore this further.
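As a quick check on the claim that spline smoothing manufactures serial correlation, one can smooth pure white noise with a least-squares cubic spline having a knot every 5 years (a stand-in for their “5-year cubic spline”, whose exact definition is unclear) and compare the lag-1 autocorrelation before and after:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(3)
years = np.arange(1200, 2000).astype(float)
noise = rng.normal(size=len(years))      # white noise: no serial correlation at all

# Least-squares cubic spline with an interior knot every 5 years
knots = years[5:-5:5]
smooth = LSQUnivariateSpline(years, noise, knots, k=3)(years)

def lag1(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

print("lag-1 autocorrelation, raw:      %.3f" % lag1(noise))
print("lag-1 autocorrelation, smoothed: %.3f" % lag1(smooth))
```

The raw series has lag-1 autocorrelation near zero; the splined version of the very same noise comes out strongly autocorrelated, which is exactly the artifact that then has to be purged from any significance test.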

Like you, I was troubled by the “data mining” effect of their search over lags -2, -1, 0, +1, and +2 of each explanatory variable for the best fit. Ordinarily, this would be a problem, but after running their data through a 5-year spline, these leads and lags are almost indistinguishable. So it’s not clear what the gain would be from doing it. But on the other hand, there may not be much harm either.

In order to replicate their results and check whether or not there is a unit root or spurious regression problem, it would be necessary to have access to their data. However, Elisabeth Isaksson has not archived the now 12-year-old Svalbard data, so for the time being the results of this paper rest on faith rather than scientific replicability. (See #73)

Grinsted et al (2006) report an R^2 of .22 for their key regression of temperature on d18O that they say is significant at the 95% level. However, they make no mention of any correction for serial correlation at all, and do not indicate whether this regression was run using annual observations on 5-year averages, or non-overlapping 5-year averages. Again, this can’t be checked out without access to the withheld data. Steig, at least, made their TIR reconstruction (if not the AVHRR file) available from the start, which made it possible to determine that they in fact had made no correction for serial correlation.

It is not clear that this is the same as either of the washout measures shown in figure 5 from Grinsted 2006 above

It is not. However, Virkkunen does calculate log(Na/Mg) for the whole core including the pit (figure 7). It is the same as the plot from my Grinsted 2006 paper, except that it differs by a linear transformation, is on a depth scale, has not been smoothed, and has been extended using recent snow-pit data. Log(Na/Mg) is higher in the snow-pit than in the deepest part of the core. That (along with figure 8 ) is the basis for saying that “Present day summers in Svalbard are as warm as those during the medieval warm period”.

Re Aslak Grinsted #77,
Thanks, Aslak, for confirming that this is not the same index. I take it that “0” depth is the surface at the time the ice core was drilled, so that the 2001 and 2002 pits lie at a “negative depth”.

That (along with figure 8 ) is the basis for saying that “Present day summers in Svalbard are as warm as those during the medieval warm period”.

In fact, all it indicates is that the summers of the last decade are comparably warm to those during the 12th c, which may already have been after the MWP itself. See Loehle and McCulloch (2008), which shows only the 9th, 10th, and early 11th c as significantly warmer than the bimillennial average (aside from a brief episode in the mid 13th c).

Recall that Greenland was explored and settled already in the 10th c, and also that the wood from Istorvet Ice Cap in Greenland dated by Tom Lowell et al was mostly in the range AD800-1014, with nothing after 1014. The 12th c may already have just been “back to normal”, and relatively warm only in comparison to the subsequent LIA. The Greenland colony already froze out sometime during the 14c, and by c1427, Clavus was mapping Greenland as connected by an impenetrable shoreline (presumably of perma-ice) to northern Europe. (A bad picture of one of his maps is here. I’m not sure if he included Svalbard in either of his 2 maps.)

I find it amusing that although the title of your Announcement merely claims that “Present day summers in Svalbard are as warm as those during the medieval warm period,” its URL subliminally projects that “Present-day-summers-in-Svalbard-are-warmer-than-the-medieval-warm-period”.

Interesting that Hu M gave a title with “lost decades” to the 2006 Svalbard reconstruction, when a careful reading of the 2009 Svalbard paper noted in the introduction to this thread shows that it also includes some lost decades.

In the 2009 paper the period 1864-1939 was used as the verification period and the period 1960-1997 was used as the calibration period. The “lost decades” of 1940-1959 were attributed in the paper to a lack of complete data in this period due to WWII.

However, the final calibration from which the model for the reconstruction was constructed used the entire period from 1864-1997, i.e. the lost decades are found.

the period 1864-1939 was used as the verification period and the period 1960-1997 was used as the calibration period

We also swapped calibration and verification periods; the stats for those are also presented in the paper.

the final calibration from which the model for the reconstruction was constructed used the entire period from 1864-1997, i.e. the lost decades are found.

You might think so, but actually we did not use the missing WWII data in any calibrations. The April ice extent in the Western Barents comes from Torgny Vinje. He filled the gaps with linear interpolations. We did not fill the gaps, but just left them out of our regressions.

Thank you for your reply. My confusion stemmed from the statement: “We used the whole period 1864-1997 to produce:…” which was followed in the paper by the reconstruction model relating SI(t) to TR(t-1) and O18(t-2).

In the Figure 2 graph of the reconstruction, the black line would represent the model applied to the TR and O18 explanatory variables, and the blue line would include the observed SI data used for the calibration, with the exception of the 1940-1959 period, which is the interpolated data not used for calibration. The grey band running from the top of the graph to the bottom, then, I assume, implies that that period has a very large uncertainty associated with it, with no CI capable of being calculated.

Also interesting for me from the same graph was an observation that the observed (and interpolated) data points during the instrumental period have a very much larger variation than those from the reconstruction. If this difference were extended to the pre-instrumental period, it would seem to me that the historical variations would be underestimated, and as a result the reconstruction and tentative conclusions from it would be misleading. I am assuming from the text that the observed data are presented with the same smooth as are the reconstructed data – although there is a precedent for my confusion in reading the text.

In reviewing the ice extent reconstruction in the Western Nordic Seas over the past 1200 years in the 2009 paper listed by Hu M in his introduction to this thread, I was finally able to locate the annual reconstruction data.

I was curious about how the reconstruction would look after doing a breakpoint analysis using the methods available in R. To that end I have listed a graph of the reconstruction with the breakpoints noted and the R code used to make the calculations.

The breakpoints for the reconstruction using either the normalized or calibrated data columns resulted in the following breakpoint years: 1324; 1443; 1600; 1768.

From the graph of the reconstruction with the breakpoints, one gets a different perspective from the one that might be obtained from the paper. We see repeating cycles of relatively short periods of rapidly expanding ice extent followed by longer periods of slowly diminishing ice extent. The current cycle shows a diminishing trend from 1768 to the present. The current trend has a negative slope nearly the same as that seen from approximately 1450 to 1600, but extended over a longer period of time, and is in fact the longest of the 5 cycles shown.
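For anyone wanting to experiment, the least-squares segmentation underlying breakpoint routines like R’s strucchange::breakpoints can be sketched with a small dynamic program. This Python version handles piecewise-constant means only and runs on synthetic data, so it is an illustration of the idea, not my actual R code or the reconstruction data:

```python
import numpy as np

def breakpoints(x, n_seg):
    """Least-squares segmentation of x into n_seg constant-mean segments,
    the same objective strucchange minimizes in the piecewise-constant case."""
    n = len(x)
    c1 = np.concatenate([[0.0], np.cumsum(x)])
    c2 = np.concatenate([[0.0], np.cumsum(x ** 2)])

    def rss(i, j):                      # RSS of x[i:j] around its own mean
        s, s2, m = c1[j] - c1[i], c2[j] - c2[i], j - i
        return s2 - s * s / m

    best = np.full((n_seg + 1, n + 1), np.inf)
    back = np.zeros((n_seg + 1, n + 1), dtype=int)
    best[0, 0] = 0.0
    for k in range(1, n_seg + 1):       # k segments ending at position j
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = best[k - 1, i] + rss(i, j)
                if c < best[k, j]:
                    best[k, j], back[k, j] = c, i
    cuts, j = [], n                     # backtrack to recover segment starts
    for k in range(n_seg, 0, -1):
        j = back[k, j]
        cuts.append(j)
    return sorted(cuts)[1:]             # drop the leading 0

# Synthetic series with level shifts at indices 100 and 250
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 100),
                    rng.normal(3, 1, 150),
                    rng.normal(1, 1, 150)])
print(breakpoints(x, 3))                # breakpoints should fall near 100 and 250
```

The R routine additionally chooses the number of segments by an information criterion; here the segment count is supplied by hand.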

My depiction of breakpoints, without some reasonable explanation of why the sea ice extent would quickly expand and slowly diminish and further without providing an explanation for the apparent periodicity of the occurrences, becomes simply an unexplained observation and nothing more.

I should have said over the past 800 years and not 1200 years in my post above.

Also an explanation for the breakpoint occurrences might better start with a climate change correspondence than sea ice extent since the explanatory variables of tree rings and O18 are proposed to be temperature related.

RE Ken Fritsch #82,
The indicated file, at http://www.helsinki.fi/science/dendro/data/WNordicSeas_1200-1997_MaciasFauria_et_al_2009.txt, is the output file of the sea ice reconstruction generated as a linear combination of the Fennoscandian treering series and the unarchived Svalbard d18O series, as truncated at 1200. The breakpoints you observe must therefore arise as breakpoints in one or both of these series (for whatever reason), and are only indirectly to be interpreted as breakpoints in the sea ice extent trend.

Hu M, the reconstruction is of sea ice extent and thus if we assume that the reconstruction faithfully reproduces the extent over time, I should be allowed to talk about breakpoints in terms of sea ice extent.

Seriously, I did get carried away with that assumption and that is why in my previous post I alluded to dO18 and tree rings – and thus temperature.

I even went to a paper that used Antarctic ice cores to estimate volcanic eruptions and their associated magnitudes and sulfate production from 905 to 1842, linked here:

I was prepared to do some contorted hand waving along the lines that a major eruption, or a series of closely spaced smaller ones, could reduce the SST in the Western Nordic Seas for a sufficient time to allow the ice to build up rapidly – perhaps by invoking some feedback from albedo changes. When the effects of the eruption washed out, the reversal would occur at a slower rate due to multi-year ice.

Anyway, since I have a habit of posting the last post on some of the aged threads, I thought such a conjecture would really end an otherwise interesting discussion. Also the amount of hand waving that would be required to push my conjecture appeared, from my first glance at the historical eruption estimates, sufficient to cause me to fly away.

The comment by Hu (84) is very interesting. What I presume is that he is saying that the data available at the link (for which many thanks) are not to be trusted as a real indicator of Ice Extent because they are simply based on proxy values – dendrochronology and oxygen again! It is of course very difficult to see how Ice Extent numbers from hundreds of years ago could be arrived at while avoiding the use of proxies. Professor Mann has recently used proxies to estimate hurricane frequencies covering times going back many hundreds of years, but his work is (unsurprisingly and as usual) being called into question.

However, I am fascinated by your plots, Kenneth (#82) and have some comments to make in the light of my own trials on the data. The data loaded into my software instantly and produced exactly the same plot as your diagram, without the fitted values that you show in red, of course.

I am not a user of R (hangs head in shame) because I find that such a dense language is too heavy for my aged brain to handle. I use my own “easy to drive” software that works very nicely indeed on a dataset such as this.

However, my methods for attempting to identify and quantify regime changes are totally unrelated to those provided by R, and as a result give totally unrelated estimates of change points! Unfortunately I can’t provide the diagrams here (how does one upload GIFs to this thread?) so I’ll just mention some prominent ones giving their tentative dates.

But first I need to ask why you decided on five break points. Looking at the diagram I find it difficult to follow how what I regard as a very marked event at 1913 or thereabouts seems to have been by-passed by R. Is it perhaps because you specified only 5 break points?

Based upon a cumulative sum chart followed by dummy variable regression my tentative estimate for the step change at around 1913 is about 210 units and is followed by a possible rising trend (slope 0.6, t=1.6). I also propose five other tentative regime changes, based upon quite simple reasoning. I could supply full details and plots, but preferably by email. Is it “safe” to provide an email address here?

I identify other “break points” (dates are approximate) at 1295 (upward, following a highly significant decline) to a stable regime lasting to about 1500. Then a downward step (80 units?) to another stable state lasting about a hundred years (~1600), an upward step of about 160 units followed by a very significantly declining period, slope -1.5/year, t=5.0, again for about one hundred years, to around 1700. Then an upward step of about 120 units leads to a stable period ending abruptly around 1913 as noted above.
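For concreteness, the cumulative sum chart followed by dummy variable regression that I describe can be sketched like this on synthetic data with a single step. This is a generic illustration, not my actual software, and the step size and change date are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(2.1, 1, 100)])  # step at t=300

# CUSUM of deviations from the overall mean; its extremum marks the shift
cusum = np.cumsum(x - x.mean())
t_hat = int(np.argmax(np.abs(cusum))) + 1

# Dummy variable regression: intercept plus a 0/1 indicator after the shift
dummy = (np.arange(len(x)) >= t_hat).astype(float)
X = np.column_stack([np.ones(len(x)), dummy])
b = np.linalg.lstsq(X, x, rcond=None)[0]
print("estimated change point:", t_hat)
print("estimated step size:   %.2f" % b[1])
```

The CUSUM pinpoints the change date, and the dummy coefficient then estimates the size of the step in the original units.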

But first I need to ask why you decided on five break points. Looking at the diagram I find it difficult to follow how what I regard as a very marked event at 1913 or thereabouts seems to have been by-passed by R. Is it perhaps because you specified only 5 break points?

Robin, there are 4 breakpoints, and it is not I, but rather the program in R, that decides how many breakpoints reach the level of significance by, I think, doing a least squares calculation as would be performed for a single trend, except that here multiple trends are allowed.

Hu M was merely providing me with a reality check – I assume. Actually sea ice extent, tree ring indexes and dO18 are all assumed to be related to SST and, therefore, a look at breakpoints of the tree ring indexes and dO18 would be in order – if the authors had provided that data or if I could find it.

Actually sea ice extent, tree ring indexes and dO18 are all assumed to be related to SST and, therefore, a look at breakpoints of the tree ring indexes and dO18 would be in order – if the authors had provided that data or if I could find it.

I gather from Aslak’s remarks that co-author Elisabeth Isaksson controls the Svalbard ice core data. It sounds like she isn’t about to release it until she has coauthored a separate article based on every century of it. I don’t know who funded her.

The Fennoscandia series may be available somewhere, however, as “Fennoscandia” has been discussed on CA at great length already. I’m not sure what series are involved, or which one is which, but check out “Tornetrask Digital Version — Hooray!” and “New Light on Old Fudge” for starters. The fact that Briffa is not a co-author suggests that this is archived data.

My point was simply that a linear combination of an ice core series and a tree ring series does not constitute sea ice extent, even if it is purported (unverifiably) to correlate with sea ice extent. I see no reason why the ice core data would exhibit the sudden shifts you show, but a tree ring series that splices together data from several trees might show such breaks, entirely without any relation to sea ice. So my first question would not be, why does sea ice exhibit this behavior, but rather, why does the tree ring index (or d18O series) exhibit this behavior?

Or maybe there really were such breaks in climate, affecting d18O, TR, and sea ice together.

So my first question would not be, why does sea ice exhibit this behavior, but rather, why does the tree ring index (or d18O series) exhibit this behavior?

Or maybe there really were such breaks in climate, affecting d18O, TR, and sea ice together.

I agree completely, and realized what you are contending here after I had written most of my post. I posted it as originally worded anyway, with the thought that a reconstruction of sea ice extent, if it has any meaning, should be responding to its explanatory variables, and further that I needed to test the attention span of potential responders out there in CA land.

On thinking about this further, I realize that you and Craig Loehle have published a reconstruction free of tree ring proxies, and here we have in the Fauria 2009 paper data that would allow a rather direct comparison of a tree ring versus a non-tree ring proxy over 800 years. I would suggest that you contact Fauria with an offer to coauthor a paper using the tree ring and dO18 data, not together but in opposition, as a test of one against the other. Just determining the t-x lags to use could fill half the paper.

Re Ken F, #82,
Fig. 2 of Macias Fauria et al shows that there was an abrupt reduction in the number of trees in the Fennoscandian TR index, from over 40 to around 10 (going backwards in time), right around the c. 1780 breakpoint that the R routine finds in the “Sea Ice” series. My first guess would be that this “break” is more closely related to how the TR index dealt with this sudden reduction in data than to actual Sea Ice extent.

One serious shortcoming of the 2009 Macias Fauria sea ice reconstruction in the post and graphed in #82 is that it relies 50% on a TR series as an indicator of temperature, and therefore of sea ice extent, yet does not control for CO2, a primary fertilizer, which has greatly increased during the calibration period. Controlling for CO2 would presumably reduce, and possibly even eliminate, the significance of the TR series as a predictor of sea ice.

But again, this can’t be checked one way or the other until Isaksson releases the long-overdue Svalbard data.

Adam Gallon,
With all due respect, although I share this website’s enthusiasm for auditing the state of current climate research, let’s try to remember that if we are going to make claims such as the falsity of the anthropogenically enhanced greenhouse effect, we had better be prepared to defend those arguments. Harries et al. 2001 indeed show that there is more absorption by greenhouse gases occurring in the late 20th century, and Wang and Liang (2009) do show an increase in the downward LW radiation flux, as would be expected of an enhanced greenhouse. The real question is how much of an effect it would have on the climate, which could be minimal or significant, not whether the phenomenon has any validity. I don’t think that Steve disputes that emissions could have an effect on global temperatures; whether this would be significantly damaging is the source of the debate.

Steve: why would you say that I’ve argued the “falsity” of AGW? I’ve focused on proxy studies and what can be concluded from them. I haven’t attempted to analyze modeling literature as I have only so much time and energy.