Pielke Jr has an excellent post reviewing the original statements by the authors of the Marcott article, with particular attention to their promotion of the uptick, which Real Climate now pretends does not exist. William Connolley responded in a style that is all too popular among RealClimateScientists: by calling Pielke names – RP Jr Is A Tosser. Not exactly Churchillian wit.

Some comments at RC here, but nothing from the original authors, despite requests from Schmidt that they weigh in. No answers to any of the original questions other than Schmidt trying to “imagine” reasons.

New article by Andy Revkin here, including my comment that Tamino’s post, praised by RC and Revkin as “illuminating”, had been plagiarized from an earlier CA post. Although Tamino had previously conceded that he had consulted my blog post and had properly cited and linked my post in a draft, he is now arguing that he was justified in deleting the citations and links, though the rationale appears to be nothing more than pique.

Core Top Redating
Obviously, the main question arising from the sequence of CA posts was the rationale and methodology used in their core top redating. There were two related but separate issues: (1) the justification for re-dating to 0 BP coretops that had been dated much earlier by the specialists; (2) the algorithmic basis for blanking the recent values of several well-dated cores (e.g. the MD03-2421 splice). I had emailed Marcott about the latter question a couple of weeks ago. However, Marcott chose not to acknowledge my email, and the FAQ merely reiterated a false statement in the original SI. Here’s what they said:

In geologic studies it is quite common that the youngest surface of a sediment core is not dated by radiocarbon, either because the top is disturbed by living organisms or during the coring process. Moreover, within the past hundred years before 1950 CE, radiocarbon dates are not very precise chronometers, because changes in radiocarbon production rate have by coincidence roughly compensated for fixed decay rates. For these reasons, and unless otherwise indicated, we followed the common practice of assuming an age of 0 BP for the marine core tops.

The topmost sample for 46 out of 60 cores was dated prior to 1950 (0 BP) in both the published dating and the Marcott dating. The topmost sample for four cores was dated to 0 BP in both. See the contingency table below (published dates horizontal; Marcott dates vertical). Whatever is going on here is not properly described in either the SI or the FAQ.

Marc/Pub     Post-1950   0_BP   Pre-1950   Total
Post-1950            0      0          2       2
0_BP                 0      4          5       9
Pre-1950             1      2         46      49
Total                1      6         53      60
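As a hedged illustration (the per-core ages below are invented, not Marcott’s actual 60 cores), a cross-tabulation like the one above can be computed by bucketing each core’s published and Marcott coretop ages:

```python
from collections import Counter

def categorize(age_bp):
    """Bucket a coretop age (years BP, 1950 CE datum) into the table's classes."""
    if age_bp < 0:
        return "post-1950"   # negative years BP fall after 1950
    if age_bp == 0:
        return "0 BP"
    return "pre-1950"

def crosstab(pairs):
    """Count (published, Marcott) category pairs into a contingency table."""
    return Counter((categorize(pub), categorize(marc)) for pub, marc in pairs)

# Invented (published age, Marcott age) coretop pairs for four hypothetical cores.
pairs = [(-5, -3), (0, 0), (300, 0), (800, 750)]
table = crosstab(pairs)
# table[("pre-1950", "0 BP")] counts cores redated from a pre-1950
# published age to exactly 0 BP, the cell at issue in this post.
```

The real table would be built the same way from the 60 (published, Marcott) coretop ages in the SI.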

I have already discussed some of the redated cores. Of the five cores with the most recent sample dated prior to 1950 that were redated to 0 BP, MD95-2043 and MD95-2011 have already been discussed. Others treated similarly are MD95-2015, PL07-39PC and 17940. One core, represented in the Marcott dataset by two different proxies (GeoB 6518-1, two versions), is re-dated so that its most recent sample is slightly more recent than 0 BP. The issue here is the validity of overriding the specialist dating, given the observed accumulation rates. In my opinion, redating of this magnitude is not something that should have been done without explicit disclosure and a clear description of its impact.

Then there are three cores whose most recent sample was dated after 1950 in the published data but prior to 1950 in the Marcott dating. Two of the three have already been discussed: MD03-2421, where the post-1950 dating is confirmed by a bomb spike; and OCE326-GGC30. The removal of these two cores from the 1940 roster is a material contribution to the uptick and needs to be clearly justified. Thus far there is no explanation. Merely re-iterating “0 BP” doesn’t do it.

The third is ODP-1084B, which was used in the Loehle reconstruction, the dating of which was criticised by Gavin Schmidt. This is a little different: the original article stated that there is a 5 cm discrepancy between the ODP depth and the actual core depth, i.e. 5 cm is actually 0 cm. This wasn’t picked up by Marcott et al.

When I tried to replicate the Marcott dating from the reported radiocarbon dates plus the assumption that 0 cm = 0 BP, I found further discrepancies in a first test, which I’m analysing.
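For concreteness, here is a minimal sketch of the kind of age model being replicated: linear interpolation between radiocarbon tie points, with the core top (0 cm) pinned to 0 BP. The depths and ages below are invented for illustration, not taken from any Marcott core:

```python
import numpy as np

# Radiocarbon tie points (depth in cm, calibrated age in yr BP),
# with the core top forced to 0 BP per the stated assumption.
# These values are hypothetical.
tie_depth_cm = np.array([0.0, 30.0, 110.0])
tie_age_bp   = np.array([0.0, 1200.0, 4800.0])

def sample_ages(sample_depths_cm):
    """Linearly interpolate sample ages (yr BP) between dated tie points."""
    return np.interp(sample_depths_cm, tie_depth_cm, tie_age_bp)

ages = sample_ages(np.array([15.0, 70.0]))
# 15 cm lies halfway between 0 and 30 cm -> 600 BP;
# 70 cm lies halfway between 30 and 110 cm -> 3000 BP.
```

Moving the assumed core-top age shifts every interpolated age above the first dated level, which is why the 0 BP assumption matters for the most recent samples.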

It is evident that the authors’ explanation of their redating both in their SI and in their FAQ does not explain the redating of the MD03-2421 splice and similar cores. I did not originally ask about re-dating as a “trick” question intended to stump the authors. It is an elementary question that should have been reported in the original SI and then should have been answered in response to my original email or in the recent FAQ. But it’s unacceptable that their FAQ, after three weeks of polishing, does not answer these and similar questions.

Would it not be possible to check for when artificial radioactive isotopes first appear, to get a fixed starting point (+/- 5-10 years)?
Given the sensitivity and availability of instruments today, a simple yes/no check should not be a problem.

Only where the sedimentation rate is reasonably high. For high resolution studies of the last century in lakes and bogs, it is common to test for Cs137 (and sometimes Am241), peak concentrations of which correspond to 1963 (and 1986 in Chernobyl-affected areas). Sometimes post-bomb C14 calibration is also used, but Pb210 is more usual. In marine cores, these are used less often, partly because of the resolution, but also because of reservoir effects and mixing time.
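The Cs-137 approach described above can be sketched as a toy example: locate the activity peak in a depth profile and pin that depth to 1963 (the bomb-test fallout maximum). The profile below is invented; real data are noisier, and Chernobyl-affected sites can show a second peak at 1986:

```python
import numpy as np

# Hypothetical Cs-137 activity profile down a lake/bog core (illustrative only).
depth_cm = np.array([0, 2, 4, 6, 8, 10, 12])
cs137_bq = np.array([5, 12, 30, 85, 40, 15, 3])   # activity, Bq/kg

# The activity maximum is assigned to the 1963 fallout peak.
peak_depth = depth_cm[np.argmax(cs137_bq)]

# If the core was collected in (say) 2013, a mean post-1963
# sedimentation rate follows directly.
mean_rate = peak_depth / (2013 - 1963)             # cm/yr
```

As the comment notes, this only constrains the last half-century, and only where sedimentation is fast enough for the peak to be resolved.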

I hope Steve won’t mind if I post these graphics again showing the details of their redating. First the last 1800 years, then the last 500 years.

You can see some of the coretop redating along the bottom line on the graphs. These are coretops that were dated by the specialists to various published dates before 1950, but which were redated to 1950 (“0 BP”) by Marcott et al. It also shows the fate of those coretops with published core top ages post 1950 (left of vertical black line) after redating.

Gavin is claiming that Richard Telford told them to redate MD95-2011. I’m not sure this jibes with the comments he made on the original thread. See Comment #13 at RC:

“For MD95-2011, I understand that Marcott et al were notified by Richard Telford that the coretop was redated since the original data were published and that the impact of this on the stack, and therefore the conclusions, is negligible.”

“They did not discuss or explain why they deleted modern values from the MD01-2421 splice at CA”

[Response: I imagine it’s just a tidying up because the three most recent points are more recent than the last data point in the reconstruction (1940). As stated above, there are not enough cores to reconstruct the global mean robustly in the 20th C. This is obvious in the spread of figure S3. Since this core is quite well dated in the modern period, the impact of this on the uncertainty derived using the randomly perturbed age models will likely be negligible. – gavin]

—-

Whether one agrees or disagrees with his answer, he’s honestly engaging the question.

Steve: Given that Gavin doesn’t know the answer, he should just say that and ask the authors, rather than making stuff up. Whatever they were doing with the cores where they blanked the recent values, they weren’t just “tidying up”. I quite sincerely have no idea what they were doing. Nor does Gavin. Nothing wrong with saying you don’t know when you don’t. Nor is his last sentence any better. Marcott redating affects the absolute level of the reconstruction – a point made in one of my original posts. Does it affect the “uncertainties” as calculated by Marcott? Figure S3 is quite different from thesis Figure C8 and, as far as I know, the only difference is core redating. So core redating affects this figure – a point not mentioned by Schmidt. Unfortunately with Gavin, you have to read everything as though it were written by a Philadelphia lawyer. Since Gavin is hosting this FAQ, he should insist that the authors answer questions. That would be honest engagement. (And BTW that’s what I’d do in his shoes.)

I think Gavin’s perspective (as well as a lot of other folks who are of the same mind when approaching the science), is to first and foremost assume absolutely zero nefarious/untoward/away-from-best-practice-or-intention behavior, and then approach the subject from there.

As Tamino has said (when he felt privileged to ignore proper attribution protocols), it’s a better thing to ‘help out the science’ by giving the original authors a boost in trying to find ways to improve upon their work to show how what they show “can be still informative” without even bothering with what has been declared to be side-show stuff.

Unfortunately, it is well understood that this sort of leniency only occurs within the camp – there’s no way they’d offer the same assistance (or cover) to scientists and/or bloggers who publish findings contrary to the prevailing understanding. Any presumed element of plagiarism is leapt upon; any methodological inconsistency and/or dating error would be grounds by itself for rejection in review. To me, it shouldn’t matter what that kind of history is if the argument is made consistently… but it’s not.

There’s no way that Roger Pielke Jr (for example) would get away with publishing a paper showing a “negative trend” in total global hurricane strength from 1600 to the present, achieved by moving around storms at the early end and then declaring this portion of the graph as not robust. [I understand this is a faulty example; I just can’t think of a better one right now.]

I can understand their scorn about you not applying your ‘auditing’ skills to papers they believe are in serious need of rejection or repair for reasons similar to those for which you fault theirs… but you can’t be everywhere doing everything; you clearly focus on proxy reconstructions, and there are too few in the field doing them (a money-where-your-mouth-is comment I get as well). I also ‘get’ that the idea that ‘nothing’ can be known about paleoclimate temperature through reconstruction is a bad argument. I personally appreciate the rigor you’re demanding of the scientists, and am a little skeptical of those who publish declaring their field deserving of special freedom to fall below what counts as minimally acceptable practice in other, more established sciences. Maybe they are right, but so far they haven’t proven it, because (for example) subsequent iterations of Mann et al inch closer to such thresholds and will probably eventually cross them. But even once such rigor is finally achieved, it is not acceptable to then point back to the initial work as being problem-free or ‘confirming’ in every sense of the word.

Hi, Salamano. You said “I think Gavin’s perspective (as well as a lot of other folks who are of the same mind when approaching the science), is to first and foremost assume absolutely zero nefarious/untoward/away-from-best-practice-or-intention behavior, and then approach the subject from there.”

I think this is the right perspective to take all around, for both “sides.” As you (rightly) point out, some only practice this selectively. But for the most part, this perspective is usually the right one. Michael Mann and others talk a lot about “Climate wars,” and such politicalization is unfortunate. But by and large, most scientists are only interested in doing good science. If one assumes that as a baseline, it usually pans out.

This attitude is in fact what I love about Climate Audit. Steve doesn’t shy away from “away-from-best-practice” investigations, of course; but he doesn’t assume it as a starting point.

I understand that we have the knowledge of the modern temperature record to cover the modern temperature record period. We also have Mann et al and Marcott et al to govern reconstruction eras of other times (with each subject to its own inspections of robustness, just like the modern temperature record has).

One problem we’ve come across is the validation of each proxy in overlap periods. Marcott et al have bitten off more than they can chew by even attempting to drift deeper into Mann et al territory, because they needed first to manually align their reconstruction with his. Right at the exact periods where you’re hoping to find corresponding proxy overlap that could provide mutual validation, we instead have “divergence problems”.

Remember in the days of evolution science prior to the discovery of things like the archaeopteryx? These transitional points between lines of data can prove very necessary. It is a tougher sell to instead have an explanation as to why all proxies are valid for their specific eras, with non-robustness at their transitional end-points. The science has been making arguments for this, yes, but skepticism should be an acceptable ground to stand on here.

So in this regard, there are ways to think that this redating thing for the modern era “doesn’t really matter for the main points of Marcott et al” and “does really matter” for how science is done and/or promoted.

“The climate has been warming since the industrial revolution, but how warm is climate now compared with the rest of the Holocene? Marcott et al. (p. 1198) constructed a record of global mean surface temperature for more than the last 11,000 years, using a variety of land- and marine-based proxy data from all around the world. The pattern of temperatures shows a rise as the world emerged from the last deglaciation, warm conditions until the middle of the Holocene, and a cooling trend over the next 5000 years that culminated around 200 years ago in the Little Ice Age. Temperatures have risen steadily since then, leaving us now with a global temperature higher than those during 90% of the entire Holocene.”

“Second, although Marcott stated that they assumed a 0 BP coretop date “unless otherwise indicated”, they only attributed a 0 BP coretop date to 9 cores out of 60. ”

I’m fairly certain you are wrong here. Few cores have their topmost sample dated as 0 BP, but then few cores have their topmost sample at 0 cm. For example, the top sample at ODP-1019D is dated to 122 yr BP, but it is at 6 cm, not 0 cm. If this wasn’t what you meant, please can you clarify.

Steve: Hmmm. Good point. ODP1019D’s most recent sample was published to 160BP, but was taken from 6 cm, not 0 cm, a point that I hadn’t considered. I’ll have to restate this table and will do so forthwith. This doesn’t affect the issue with the named cores though.

Richard’s review comments are, as usual, helpful. Richard pointed out that the most recent sample in many of these cores was not taken at 0 cm, a point that I hadn’t considered. I’ve edited this post to reflect these comments. In particular, I’ve changed “coretop” to “most recent sample” throughout, as that was what I was actually analysing.

I don’t claim to be perfect. I try to correct mistakes when brought to my attention, as I’ve done here, as authors publishing in academic journals also do, though the turnaround time here is faster.

I now don’t understand what the gripe is. Perhaps I’m not following your contingency table – is it correctly labelled?

But on specifics: core GeoB 6518-1 has a probably post-bomb date at 10 cm, so the top sample ought to be post-1950.

In any event, these dating issues have minimal to no impact on the conclusions of the paper.
Steve: they do for the uptick and the preceding downtick. I know that Schmidt and others are pretending that these were not part of the paper, but they were.

I particularly don’t understand this one:“However, Marcott chose not to acknowledge my email and the FAQ merely reiterated a false statement in the original SI. Here’s what they said:
‘In geologic studies it is quite common that the youngest surface of a sediment core is not dated by radiocarbon, either because the top is disturbed by living organisms or during the coring process. Moreover, within the past hundred years before 1950 CE, radiocarbon dates are not very precise chronometers, because changes in radiocarbon production rate have by coincidence roughly compensated for fixed decay rates. For these reasons, and unless otherwise indicated, we followed the common practice of assuming an age of 0 BP for the marine core tops’. “

Why false? In the original post it was claimed that Marcott et al had not consistently reassigned coretop levels a date of 0BP. But now it’s clear that they did, except where there was a contrary C14 reading.
Steve: Then explain MD03-2421 to me. Or OCE326-GGC30 which is where I started.

“Then explain MD03-2421 to me.”
OK.
There appears to be a column missing from the spreadsheet. If you look at cols P and Q, you’ll see upper and lower bounds to ages, but the central ages are not given.

You will find those in SM table DR1 of this paper by Isono et al. The first thing to note is that they head the “published ages” as “conventional age”. I’m not sure what they mean by that, but they proceed to calculate the C14 ages that Marcott et al used. P and Q are the CIs for those ages. You’ll see that at the 3 cm level (for MC1) and 7 cm (for GC) they write NC – not calculated. It is hard to imagine that Isono et al just forgot – their software would not report an age. That SI does not record the software, but Marcott et al label P and Q as Marine09.

Now in M13 col T they note the dates from the different cores that they actually used to make the chronology. I’m not sure how you can put these together in this way, but it was Isono et al who made the composite, not Marcott. They have taken 50 cm from GC, 27 cm from MC1, and then they are left with the 3 cm level from MC1, which is NC.

So they have interpolated/extrapolated down to 3.5cm, but for the three readings below 3cm, they have a big NC staring them in the face.

And it’s Isono’s NC, not theirs.
Steve: Nope. According to Marcott themselves, there were “published” dates for the top three samples (which include a bomb spike). Radiocarbon obviously isn’t determinative for such modern samples. Tell you what, Racehorse: can you find out from Marcott? I realize that I asked rhetorically, but I’m bored with speculations from you and Gavin on questions that the authors should answer.

“but I’m bored with speculations from you and Gavin”
Well, you asked.
But just to add to it, we have an NC at 3 cm from Isono for one component core, even with a bomb spike, and for the other KR core: “The correlation indicates that the top 3 cm of the GC core are missing (Isono 2009).”

And as for the “conventional dates”, they seem to be, from Isono’s paper: “the CALIB4.3 marine98 program (Stuiver et al., 1998) with a 400-year reservoir correction (Table DR1 in the GSA Data Repository1).”

So Marcott et al are supposed to consistently use Calib 6.1. They have only 4.3 dates in one column, and big NC’s from Isono elsewhere. NA seems the right answer.

Nathan Kurz says: “I think Gavin deserves compliments for his sincere attempt to answer some questions:…” My first thoughts when reading Gavin’s answers were: 1) Why is he so involved? 2) Where is he getting his (inside?) information to answer these questions? 3) Why are the authors not involved?

I understand RC is his blog, but his work is misplaced. He is neither an author nor reviewer of the study.

I’m not sure about providing links to Tamino or Stoat; I had to wash my brain out with alcohol after reading a selection at them. What stands out is how personally unpleasant and scientifically illiterate most of the posters are; that, and the contortions that Tamino, at least, goes through to avoid admitting the truth about Marcott et al.

I have decided, though, that projection is a powerful tool for understanding the “believer” blogs; what they most cavil about is what they themselves most indulge in. That, and the fact that all the ad hominem strongly suggests that they have no other options: they can’t answer the science, so they malign. Nice people.

“First, within Marcott’s own dataset, the so-called “common practice” of assuming an age of 0 BP for core tops was infrequent, rather than common. Out of the 60 ocean core proxies, only six had published coretop dates of 0 BP. The coretops for 53 of 60 were dated prior to 0 BP, often much earlier, while one core (MD03-2421 splice) was dated after 0 BP (it had a bomb spike).

Second, although Marcott stated that they assumed a 0 BP coretop date “unless otherwise indicated”, they only attributed a 0 BP coretop date to 9 cores out of 60.”

This is completely astray. Only 12 of the non-icecore proxies have readings at 0 cm. And Marcott et al were completely consistent. All were assigned zero dating, except for numbers 4 (JR51GC-35) and 37 (MD79-257) which had actual C14 dating at 0 cm.

Is there any followup to Hugo M’s point about spuriously precise data synthesized from the original authors’ graphs? The line connecting points a thousand years apart is read as if it was measured to a precision of 300 microseconds?

This is completely irrelevant and will have zero impact on their analysis. The uncertainty of the dates is given in the “Age model error” column in Marcott’s supplementary material and is often >100 years.

How “irrelevant”? The proxy selection criteria stated in the body of the paper explicitly requires points no more than 300 years apart. The proxies as actually selected include some with points over a thousand years apart. The needed missing data points appear to be inferred from a graph? This is approved practice in this discipline?

” The proxy selection criteria stated in the body of the paper explicitly requires points no more than 300 years apart. The proxies as actually selected include some with points over a thousand years apart.”
I think you’re mixing up proxy data points and C14 dating points.

At BH I suggested that they had chosen Easter Sunday as a good day to try to bury bad news.
Then somebody else pointed out that if you try to bury something at Easter it’s liable to rise again a few days later.

Is the rate of global temperature rise over the last 100 years faster than at any time during the past 11,300 years?

Jeremy Shakun:

We showed that no temperature variability is preserved in our reconstruction at cycles shorter than 300 years, 50% is preserved at 1000-year time scales, and nearly all is preserved at 2000-year periods and longer.
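Shakun’s statement can be illustrated with a toy smoothing experiment. This is not Marcott’s actual Monte Carlo procedure, and the boxcar width below is arbitrary, chosen only to show that averaging damps a 300-year cycle far more than a 2000-year cycle:

```python
import numpy as np

def preserved_fraction(period_yr, window_yr=600, dt=20):
    """Amplitude fraction of a sine of the given period surviving a boxcar average.

    A stand-in for how averaging coarsely dated proxies attenuates
    short-period variability; the 600-yr window is illustrative only.
    """
    t = np.arange(0, 20000, dt)
    x = np.sin(2 * np.pi * t / period_yr)
    w = int(window_yr / dt)
    smoothed = np.convolve(x, np.ones(w) / w, mode="same")
    core = slice(w, len(t) - w)          # trim convolution edge effects
    return smoothed[core].std() / x[core].std()

short = preserved_fraction(300)    # heavily damped
long_ = preserved_fraction(2000)   # mostly preserved
```

The qualitative point survives any reasonable window choice: the shorter the cycle relative to the effective smoothing, the less variability is preserved.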

“Surface temperature reconstructions of the past 1500 years suggest that recent warming is unprecedented in that time. Here we provide a broader perspective by reconstructing regional and global temperature anomalies for the past 11,300 years from 73 globally distributed records. Early Holocene (10,000 to 5000 years ago) warmth is followed by ~0.7°C cooling through the middle to late Holocene (<5000 years ago), culminating in the coolest temperatures of the Holocene during the Little Ice Age, about 200 years ago. This cooling is largely associated with ~2°C change in the North Atlantic. Current global temperatures of the past decade have not yet exceeded peak interglacial values but are warmer than during ~75% of the Holocene temperature history. Intergovernmental Panel on Climate Change model projections for 2100 exceed the full distribution of Holocene temperature under all plausible greenhouse gas emission scenarios."

The first sentence is based on the uptick? This is totally misleading and Steve is correct.

Nick … a read of the authors’ first paragraph does seem to show a very subtle issue many people may be missing. Let’s deconstruct it:

“Surface temperature reconstructions of the past 1500 years suggest that recent warming is unprecedented in that time.

I believe you want folks to note it does not say: “our” surface temp reconstruction – that they are referring to other records, a point Eliza picked up on. …

However, then they say:

Here we provide a broader perspective by reconstructing regional and global temperature anomalies for the past 11,300 years from 73 globally distributed records.

… which sets the belief and expectation, implying their work offers a relevant observation regarding recent and past temperatures, as noted in the first sentence.

They continue:

Early Holocene (10,000 to 5000 years ago) warmth is followed by ~0.7°C cooling through the middle to late Holocene (<5000 years ago), culminating in the coolest temperatures of the Holocene during the Little Ice Age, about 200 years ago. This cooling is largely associated with ~2°C change in the North Atlantic.

The implication again is their reconstruction provides the support for this information. Which is correct – Marcott does provide support for older past temperatures.

But they shift gears again in next part:

Current global temperatures of the past decade have not yet exceeded peak interglacial values but are warmer than during ~75% of the Holocene temperature history.

I believe you say, and I don’t necessarily disagree, they are now back to referring to other contemporary records regarding the current warming, and NOT on Marcott, for this claim. This explanation could have merit, however the last sentence muddies the issue further yet – referencing yet another outside source for projections:

“Intergovernmental Panel on Climate Change model projections for 2100 exceed the full distribution of Holocene temperature under all plausible greenhouse gas emission scenarios.”

Their claim that they were talking about OTHER temperature records, and NOT Marcott’s reconstruction, when they refer to “current” temperatures, although almost impossible to infer from what they wrote, again might well have merit.

If they had simply reported only the relevant part of their work – how their proxies support other reconstructions of past temperatures – they would have had little scrutiny.

But instead, it appears they succumbed to the lure of the magical hockey stick.

Had they reported only the relevant data and conclusions from the Marcott thesis – how their proxies and reconstruction supported other work on past temperatures, without showing the hockey stick in their graph – their work would perhaps have been interesting, and noted by others. Had they overlaid the current record on top of their combined reconstruction from these particular proxies, there would seem to be good support for other, similar work on the past.

However, I suspect they also would likely have received little or no notice for their work had they done so.

But there was a hockey stick there – and gosh dang it, no matter how thinly supported by their own data it was, you just can’t pass up the fame, fortune and attention a good hockey stick provides. With temps flat for going on two decades now, and papers starting to show up in the media about scientists being puzzled by the pause in warming, a good hockey stick story must have seemed just what the Doctor ordered.

And thus, either through the encouragement of the authors or of the NSF, who funded this work, a press release and PR campaign was seemingly born – one the authors seem to have readily jumped on board with.

By posting their own hockey stick, they got the attention they were likely wishing for, but in doing so they solidified in virtually every reader’s mind the impression that their claims comparing current and past temperatures compared the current period of their reconstruction with its past period – not, as they now say, the current instrumental record with their reconstruction of past temperatures.

By failing to resist the urge to include their own hockey stick, when the support for it was, as they acknowledge, “not robust” and should not be used, they seem pretty clearly to have hoisted themselves with their own petard.

No, there isn’t a single reference in that abstract to a recent spike in their proxy record. The last part you refer to, with projections, is directly describing Fig 3, where both the current instrumental and future model projections are clearly marked and compared with the distribution of Holocene temperatures.

Sometimes I think academic jargon was invented for two reasons: to give weight to assertions the authors wish to be true but have not yet proven to be true; and to provide wiggle room if those same assertions are later proven to be false.

Perhaps that’s why the IPCC report, when referring to the two different boundary ends of CO2 sensitivity…

The low end: “very likely cannot be lower than _____ ”
While with the high end: “numbers beyond ______ cannot be excluded”

It’s interesting that using the same language for both ends would never be approved, and yet the wiggle-room is still there on both sides just in case new research refutes it. Though I get that the ‘fat tail’ is what they’re driving at.

Nick – I think you miss my point … I’m saying I generally agree with you, they don’t directly reference their spike in that paragraph from the abstract.

However the way it is worded almost any person is going to read it and think they are talking about their spike.

In the end, if the spike is, as they admit, “not robust” and should NOT be used to make observations about the recent temperature record, then exactly what purpose does it serve to include it at all?

The ONLY thing including it does is to confuse people into thinking it is relevant.

And of course gain them a highly promotable hockey stick they can use to get attention for the paper and to their claims. Their highly promoted claims that this study shows how big, bad and scary the recent/current period warming is.

Tell me honestly – if they had left the hockey stick out of this paper, and only included the instrumental or other records for recent temps, would anyone (other than maybe a few scientists) have looked twice at the paper?

What is the ethics, let alone the purpose, in your opinion – of the authors including the hockey stick, which they admit is poorly supported by the data, which they acknowledge is “not robust” and which they note should not be relied on for any meaningful contribution to recent temperatures?

There is an obvious flow to the language that makes it sound like “current global temperatures” are included in their “broader perspective”. Certainly the intervening sentences continue the direction and context. And there is nothing indicating that their “reconstruct[ion of] regional and global temperature anomalies for the past 11,300 years” does NOT include current temperatures.

But, in the end, it’s all a bunch of vague crap.

And, of course, they themselves did nothing to clarify, quibble, or make caveats originally.

In a discussion at his blog regarding press attention to Marcott et al Nick wrote, “I agree that the paper has had more attention than it deserves, and it may be because of the spike. But it’s still a useful account of Holocene temperatures.”

Call me naive, but I’ll bet that everyone – the authors, Gavin, Steve, Roger, everyone – can agree with Nick on that point.

“What is the ethics, let alone the purpose, in your opinion – of the authors including the hockey stick”

Well, I see DGH has noted my view expressed elsewhere. And yes, I’ve said many times that I don’t think they should have included the recent period, spike included. I don’t think it’s particularly an ethical issue – just for writing a good paper. As to purpose, the spurious spike only distracts from the fact that there is a very real and well-known one.

In fact, my suspicion is that the re-dating was required by a Science referee, presumably in the interest of CALIB 6.1 consistency. I think then the reason the spike gets so little mention in the paper is that most of the text was written before it appeared. Almost all the references to recent warming from the authors relates to Fig 3, which is where they address the now-then comparison.

Nick, you say, “Well, I see DGH has noted my view expressed elsewhere. And yes, I’ve said many times that I don’t think they should have included the recent period, spike included. I don’t think it’s particularly an ethical issue – just for writing a good paper. As to purpose, the spurious spike only distracts from the fact that there is a very real and well-known one.”

Nick, if what you say is what you actually believe, and you don’t have any ethical concerns, then how do you explain that virtually all of the post-publication hype, including Shakun’s video proclaiming a “super hockey stick”, was centered on the stick?

James Smyth
“And, of course, they themselves did nothing to clarify, quibble, or make caveats originally.”

Not so, they did a lot, although I don’t think M&S are yet accomplished presenters. But here’s Marcott laying it out in the first round of interviews with Seth Borenstein: “The same fossil-based data suggest a similar level of warming occurring in just one generation: from the 1920s to the 1940s. Actual thermometer records don’t show the rise from the 1920s to the 1940s was quite that big and Marcott said for such recent time periods it is better to use actual thermometer readings than his proxies.”

Bob, “how do you explain that virtually all of the post-publication hype, including Shakun’s video proclaiming a “super hockey stick”, was centered on the stick”
I don’t think it was. One of the points I was making above with the abstract is that I think you folks are so focused on the proxy spike that you hear that when other people are talking about the instrumental rise.

I didn’t intend that as a “gotcha.” By the tone of your response you seem to think otherwise. Apologies if that’s the case. I noted that the quote came from your site. I didn’t post a link because I was hoping to reduce the discussion, not widen it. But I think it fairly reflected your position.

The point I was attempting to make is that everyone agrees

-the uptick was spurious,
-the research was a Holocene reconstruction, and
-it received more fanfare than it deserved as such.

We’re arguing past each other on those points because some see innocence where others see intent (not necessarily nefarious). It’s clear to me that the authors got the limelight that they desired and encouraged. They just can’t take the heat. You disagree. That gap can’t be bridged no matter how long the details of the abstract and press release are debated.

So we’re left with the technical issues. Many of those issues can be resolved online. And if the paper has any merit the issues might also be discussed in the literature.

It will also be interesting to see how the paper is cited by the IPCC. Figures? Error bars? Comparisons to the temperature record? Do you think they’ll step into this pile?

Nick, you say, “Bob
“how do you explain that virtually all of the post-publication hype, including Shakun’s video proclaiming a “super hockey stick”, was centered on the stick”
I don’t think it was. One of the points I was making above with the abstract is that I think you folks are so focused on the proxy spike that you hear that when other people are talking about the instrumental rise.”

NICK: “One of the points I was making above with the abstract is that I think you folks are so focused on the proxy spike that you hear that when other people are talking about the instrumental rise.”

INTERVIEWER: “Ok so you go from the ice age and the Earth kinda trundles along for ten, eleven thousand years and then you get to the 20th Century and what happened? You basically get this big spike, right?”

SHAKUN: Yeah it goes along for the last ten thousand years and sure enough its been going up and down, up and down, up and down but for the last ten thousand the long-term pattern is kinda just a long gradual cooling if anything and then you come to the 20th and BING it goes up about a degree …

Obviously Shakun was talking about PROXY data when he said “Yeah it goes along for the last ten thousand years and sure enough its been going up and down, up and down, up and down but for the last ten thousand the long-term pattern is kinda just a long gradual cooling if anything …”

But if Nick’s analysis is correct, then somewhere between the word “anything” and the word “you” he switched over to talking about INSTRUMENTAL data and said “and then you come to the 20th and BING it goes up about a degree …”. He made that switch despite the fact that PROXY data in the paper also “comes to the 20th and BING it goes up about a degree …”. But he wasn’t talking about PROXY data going “BING by a degree” (which it does); he was talking about INSTRUMENTAL data going “BING by a degree”?

But perhaps I am “so focused on the proxy spike” that I am (horrors) following the plain meaning of what the author himself says about his own study, instead of accepting the contortions of the folks who want to “help out the science” “by giving the original authors a boost”?

Bob, “You don’t think it was. Did you miss this [Pielke]:”
No, in fact I commented. Never got a response.

seanbrady, “But he wasn’t talking about PROXY data going “BING by a degree” (which it does); he was talking about INSTRUMENTAL data going “BING by a degree”?”
I don’t know. But it’s true, as he said, that after a period of cooling, temperature went up by a degree. It’s still true.

Here’s an example of one sort of problem that I’m encountering. Some authors report the top sample as 0 cm, others as 0.5 cm, though my guess (and I’ll defer to Richard Telford on this) is that this is more a matter of reporting style than a distinction between 0 and 0.5 cm.

For the core MD01-2378 (Xu et al., 2008), there is a radiocarbon sample at 0.5 cm with a mean Mar09 date of 1173.5 BP. The top Mg-Ca sample, also said to be at 0.5 cm, is dated to 597 BP, approximately halfway between 0 and 1173. This turns up as a large discrepancy.

Mr. Stokes, I believe, would make a fine lawyer, but probably would not be too successful before the courts in securities fraud cases. This kind of fine parsing of the abstract, in a journal like Science which is aimed at the general public, and certainly of the accompanying media blitz which was definitely aimed at the general public, would be found to have left a reasonable reader with the impression that the results of the current research supported the statement about unprecedented recent warming – an impression which is misleading, and likely was intentionally so, or if not intentionally, then at least made with reckless disregard for the perception being created.

I am a Science subscriber; I read the abstract and article, and until I saw some of the posts I had the clear impression that the researchers were presenting data which supported the conclusion stated in the first sentence of the abstract. The authors must believe that impression was created, or they would not have felt the need to clarify – really, in essence, to withdraw that impression – in their more recent statements to the effect that the recent uptick was not “robust”.

What is most troubling about what goes on in climate “science” these days is not the specifics of what may or may not be happening to the climate, but what is happening to science, which of all disciplines should most severely guard itself against ideological intrusion, and which has so miserably failed to do so in recent decades. Climate science is beginning to lower itself to the level of psychological research, which now routinely sees retractions of findings from bad design, bad statistics, and outright fraud.

“The same fossil-based data suggest a similar level of warming occurring in just one generation: from the 1920s to the 1940s. Actual thermometer records don’t show the rise from the 1920s to the 1940s was quite that big and Marcott said for such recent time periods it is better to use actual thermometer readings than his proxies.”

In the whole statement, nothing places this quote in context. Does Marcott mean “all of the same fossil-based data”, “one of the fossil-based data sets”, or something in between?

I accept that the graph body uses data of lower resolution. However, this statement implies that when higher resolution is possible in a small window of time, there is probably a significant discrepancy. Is it not this type of discrepancy that throws doubt upon, or even invalidates, a particular proxy reconstruction or the magnitude of its effect through calibration?

Having admitted to a spike 1920-40, why do they not stress that the body of the graph should have such spikes also, thus altering their estimates of the % of time that past temperatures might have exceeded present?

My conclusion would differ from Marcott’s “it is better to use actual thermometer readings than his proxies”. Mine would be “these proxies are too inaccurate to be useful, and you can’t prove otherwise.”

“Having admitted to a spike 1920-40, why do they not stress that the body of the graph should have such spikes also, thus altering their estimates of the % of time that past temperatures might have exceeded present?”
The spike 1920-40 is due to end effects, so isn’t a guide to the rest. But they did do what you suggest, in Fig 3. Their heavy black line (stressing) is a distribution curve with high-frequency noise added (matching the annual tree-ring data). That’s the basis for the quoted fractions.
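As I read that description, the Fig 3 exercise amounts to widening the distribution of reconstructed decadal temperatures with high-frequency noise before computing exceedance fractions. A minimal sketch of the idea (every number here is invented for illustration; none come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a Marcott-style Holocene stack: decadal
# anomalies with only slow, smooth variation (shape invented for
# illustration, not the paper's data).
years = np.arange(0, 11300, 10)
holocene = 0.3 * np.cos(2 * np.pi * years / 11300.0)

# The Fig 3-style step: broaden the distribution of reconstructed
# temperatures with high-frequency noise tuned to match annual
# variability (sigma here is an assumed value).
sigma_hf = 0.15
broadened = holocene[:, None] + rng.normal(0.0, sigma_hf, (holocene.size, 1000))

# Fraction of Holocene decades warmer than a chosen "present" anomaly,
# with and without the added high-frequency noise.
present = 0.6
frac_smooth = float(np.mean(holocene > present))
frac_broad = float(np.mean(broadened > present))
print(frac_smooth, frac_broad)
```

The point of the exercise: a smooth reconstruction never exceeds the modern anomaly, but once plausible high-frequency variability is added back, a nonzero fraction of Holocene decades does, and that fraction is what gets quoted.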

nevilb,
Concede a difference? Yes, Marcott et al are obviously going into more detail. And they attribute the non-robustness to:
“the temporal resolution of our data set and the small number of records that cover this interval”

That’s pretty much my idea – I think there is a specific end effect of the random variation as well, but there’s no need to pile on.

Racehorse Stokes does not disappoint. There is a difference between “not robust” (because of a small data set) and “wrong”. And something that is an artifact of end effects and dropping proxies is just wrong.

“And something that is an artifact of end effects and dropping proxies is just wrong”
It’s not wrong; it’s the product of correct calculation from reasonable choices. It’s not robust because other reasonable choices would give different results.

[blockquote]“And something that is an artifact of end effects and dropping proxies is just wrong”
It’s not wrong; it’s the product of correct calculation from reasonable choices. It’s not robust because other reasonable choices would give different results.[/blockquote]

Scientific statements are neither right nor wrong. They are either useful or not useful. Newton’s laws do not agree with experiment for relativistic speeds. However Newton’s laws are useful.

No, Racehorse. The reconstruction shows an uptick not supported by the data. Hell, even Tamino, in his plagiarized analysis, concedes this. Even if the mechanical calculation is correct, the answer can be wrong if the methodology is inappropriate and the assumptions are not reasonable. The uptick is an artifact. It is not “not robust”. It is wrong.

Thanks, Nick. I was also thinking of a proxy count criterion. I wondered earlier about why Marcott et al. used 510-1450 BP for their alignment with the CRU-EIV recon; there is no justification given for the recent end of that interval in the paper or SI. I (informally) count 52 proxies with data as of that time (510 BP). [The count may be off by a couple; the SI didn’t have the interpolated data and I haven’t tried to generate it.]

In Figure 1C, the RegEM recon begins to diverge from their primary reconstruction (Standard 5×5) around 500 BP which may have been Marcott et al.’s criterion. This makes me suspect that the non-robustness extends further than just the 20th century.

* Are there still no checks and balances in the paleoclimate community (outside of the efforts of Steve McIntyre, JeanS et al.)?. . .
* I see this as a struggle for the souls of two young climate scientists. Will they (i) decide to care primarily about science, and embrace the values of transparency and public accountability, answer questions about their research, and engage with skeptics in the interest of improving their research; or (ii) do they aspire to Mike Mann-style celebrity and plan to join the RealClimate warriors against auditing and skepticism? . . .
* JC advice to the skeptical blogosphere: Let’s get to the bottom of this, but while doing so I remind you that one element of this is the struggle for the scientific souls of two promising young scientists. Please don’t overegg the pudding and inadvertently send them to the RealClimate refugee and training camp. Cordially invite them to engage, and work with them to try to change the culture in the paleoclimate community.

@ David L. Hagen: And then, in a comment when someone responded to her and attacked her tone in that statement vis a vis RealClimate, she responded with this:

I don’t bother engaging with RealClimate et al. However, I think there is some hope for influencing Steve McIntyre in terms of how he engages with these young scientists (McIntyre was mostly the intended audience for that statement).

I did ask the question why this FAQ was placed upon RC, rather than over here at CA, seeing as all the analysis had been done here.
Surprisingly enough, my question appears to have ended up on the cutting room floor!

Suppose the actual mean global temperatures over the past 11,000 years included fluctuations of magnitudes and durations comparable to that seen by standard thermometry in the recent past (the last century and a half). Suppose these fluctuations were both positive and negative, warmer as well as cooler than the slowly varying mean. Suppose these thermal fluctuations happened randomly a few times per millennium. Would the methodology of Marcott et al have been able to detect them? Or would this methodology only see the slow, millennial changes?

Are there any methods that would detect past fluctuations comparable to what we are now experiencing? In other words, are there any methods that have the required temporal resolution of decades over a span of ten millennia, which is of the order of a few tenths of one percent?

Re: Nick Stokes (Apr 3 17:34),
I’ve replied. I believe his analysis has serious flaws and shared some ideas for improving it. (Compare the 1 and 100 perturbation graphs — the spikes should have changed significantly but did not. I have some ideas why…)

Looks like Tamino is not going to allow you to post a comment, Mr. Pete. Typical. He’s allowed a couple of other posts through since you’ve posted your comment here. Or maybe he’s trying to figure out what you are saying in your post?

Naaaaaa. Not likely. I’m sure he considers you one of the Devil’s disciples.

That seems (hindsight being 20-20 and all) like an obvious thing to include in the original paper. Especially considering they did make the comparison w/ the modern recon/record. Is there a general rule in academic statistical work to include that kind of thing? (like it might be considered something of a curiosity.)

How reproducible do you find his description? I’m assuming that you would benefit greatly from having paid attention to that part of the paper. Something that NZ Willy dude probably would know.

Several lines of evidence suggest that Tamino could be at least partially right. Past variations in temperature are modest compared to the increase in instrumental temperatures. What is known about phenology does not fit well with significant variations. This is the case for past millennia, but it is also the case for the twentieth century. An increase in global temperatures of about 1 °C is invalidated by phenological observations. And there are still no proxies that show an evolution consistent with instrumental temperatures.

“Several lines of evidence suggest that Tamino could be at least partially right.” Where I come from, “partially right” tends to be called wrong.

“Past variations in temperature are modest compared to the increase in instrumental temperatures.” Evidence for that? Proxies like Vostok suggest you are completely wrong.

“What is known about phenology does not fit well with significant variations.” We’ve had several Ice Ages. Yet plants have not managed to cope previously? Really?

“This is the case for past millennia, but it is also the case for the twentieth century.” Evidence?

“An increase in global temperatures of about 1 °C is invalidated by phenological observations.” So those Ice Ages killed all the plants?

“And there are still no proxies that show an evolution consistent with instrumental temperatures.” Tend to agree. However, that is not a plus for the warmists. Either the proxies are wrong or the instrumental record is wrong. Which is better for them?

Vostok is insufficient to determine the evolution of global temperatures; I do not think we can directly compare these two quantities.

Tree-ring densities (good high-frequency correlation) suggest limited variability in the last millennium. The stability of crop types by altitude is likewise incompatible with large differences in decadal-average temperature. The reversal in glacier behavior during the slightly cooler 1960s and ’70s also suggests that those conditions would only need to persist for a few decades to send us into a new little ice age.

“Either the proxies are wrong or the instrumental record is wrong.”

Indeed, it’s one or the other. We know the weaknesses of the instrumental data. As for the proxies, they are related to temperature by different physical laws (dendro, snow, glaciers, TLT), and yet they all diverge in the same way. Very strange.

“Vostok is insufficient to determine the evolution of global temperatures; I do not think we can directly compare these two quantities.”

I showed a proxy that indicates much wilder swings than we currently see. Yes, it will be larger for being localised. But it is evidence of wild swings, albeit not overwhelming.

“Tree-ring densities (good high-frequency correlation) suggest limited variability in the last millennium.”

I have serious issues with trees as a direct proxy of temperature movement. A tree on the edge of the growing zone is not dependent on temperature solely. Therefore any changes are likely to be muffled by all the other factors. Do bristlecones suddenly start growing quickly if the temperature is raised by a degree over a short time? Maybe they don’t react quickly, thereby muffling the effect. Maybe some do and some don’t, again muffling the effect. Who knows?

So even if treemometers are decent proxies, I’m only prepared to accept them as indications of general direction.

Mooloo,
Unlike ring widths, latewood density appears to depend essentially on temperature (and not only at the edge of the growing zone). That, at any rate, is what this graph seems to show:

Phi: do you see what you are doing here? First you say things like “tree-ring densities suggest limited variability” and “latewood density appears to depend essentially on temperature …”, and then you leap into certitude with “Either the proxies are wrong or the instrumental record is wrong.”

Indeed it’s one or the other.

This is what I find so annoying about climate science™ : how much of it is based on speculation and high uncertainty, and then how quickly it digests all that and converts it into certainty.

Re the Abstract: I am not an expert, but I have four higher degrees related to statistics, and my father, an atmospheric physicist, told me in 1994 that AGW was just a tax grab, without even bothering to explain the science behind it. Yes, the abstract is misleading. The person on the street would think the uptick is part of what the abstract describes. If these scientists had just presented the truth (graph 2), they would still be highly respected “climate scientists”. Unfortunately I agree with J Curry that they were led astray by the last few remaining ******

I do not believe that Tamino’s exercise shows what it claims, namely that the Marcott et al methodology would be able to detect thermal spikes such as the one we’ve seen in the past century and a half. Tamino’s result is not meaningful because, for such spikes to exist in the Marcott data (where there is intrinsic natural and measurement smearing), the actual non-smeared spike would have had to be an order of magnitude larger, so as to appear smeared and flattened out as in the actual data.

In other words, the proxies are intrinsically smeared by natural processes over the millennia, as well as by the collection procedure, and by the averaging performed in the calculation. For a spike as used in Tamino’s demonstration to exist in the physical data, the actual temperature excursion, over a century, would have had to be one to two orders of magnitude greater than the current one. That is, Tamino may have shown that Marcott et al can rule out spikes of 10 or 100 degrees. But he did not show that spikes of 1 degree would have been detected.
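The smearing argument can be made concrete with a toy calculation: pass a 1 °C, century-long spike through a multi-century moving average, a crude stand-in for bioturbation plus coarse sampling (the 300-year window is an assumption, not a figure from Marcott et al):

```python
import numpy as np

# A hypothetical 1 deg C spike lasting 100 years on an annual time axis.
t = np.arange(2000)
spike = np.where((t >= 950) & (t < 1050), 1.0, 0.0)

# Crude stand-in for proxy smearing (bioturbation plus coarse sampling):
# a 300-year moving average. The window length is assumed for
# illustration only.
window = 300
smeared = np.convolve(spike, np.ones(window) / window, mode="same")

# The century spike survives at only about a third of its height; with
# longer smoothing, the attenuation is stronger still.
print(spike.max(), round(float(smeared.max()), 2))
```

Under this toy model, a spike would have to start out roughly three times larger to appear at 1 °C after smearing, which is the commenter's point in miniature: failing to find a 1 °C spike in smeared data does not show that a 1 °C spike never occurred.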