The Marcott-Shakun Dating Service

Marcott, Shakun, Clark and Mix did not use the published dates for ocean cores, instead substituting their own dates. The validity of Marcott-Shakun re-dating will be discussed below, but first, to show that the re-dating “matters” (TM-climate science), here is a graph showing reconstructions using alkenones (31 of 73 proxies) in Marcott style, comparing the results with published dates (red) to results with Marcott-Shakun dates (black). As you see, there is a persistent decline in the alkenone reconstruction in the 20th century using published dates, but a 20th century increase using Marcott-Shakun dates. (It is taking all my will power not to make an obvious comment at this point.)
Figure 1. Reconstructions from alkenone proxies in Marcott style. Red- using published dates; black- using Marcott-Shakun dates.

Marcott et al archived an alkenone reconstruction. There are discrepancies between the above emulation and the archived reconstruction, a topic that I’ll return to on another occasion. (I’ve tried diligently to reconcile them, but have thus far been unable to, perhaps due to some misunderstanding on my part of the Marcott methodology, some inconsistency between the data as used and the data as archived, or something else.) However, I do not believe that this matters for the purposes of using my emulation methodology to illustrate the effect of Marcott-Shakun re-dating.

Alkenone Core Re-dating

The table below summarizes Marcott-Shakun redating for all alkenone cores with either a published end-date or a Marcott end-date of less than 50 BP (AD1900). I’ve also shown the closing temperature of each series (“close”) after the two Marcott re-centering steps (as I understand them).

The final date of the Marcott reconstruction is AD1940 (10 BP). Only three cores contributed to the final value of the reconstruction with published dates (“pubend” less than 10): the MD01-2421 splice, OCE326-GGC30 and M35004-4. Two of these cores have very negative values. Marcott et al re-dated both of these cores so that neither contributed to the closing period: the MD01-2421 splice was re-dated to a fraction of a year prior to 1940, barely missing eligibility; OCE326-GGC30 was re-dated 191 years earlier – into the 18th century.

Re-populating the closing date are five cores with published coretops earlier than AD1900, in some cases much earlier. The coretop of MD95-2043, for example, was published as 10th century, but was re-dated by Marcott over 1000 years later to “0 BP”. MD95-2011 and MD95-2015 were re-dated by 510 and 690 years respectively. All five re-dated cores contributing to the AD1940 reconstruction had positive values.
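To make the bookkeeping concrete, here is a minimal sketch of how the choice of date column determines which cores reach the closing AD1940 (10 BP) bin. The core names are real but the end dates here are illustrative placeholders, not the actual published or Marcott-Shakun values.

```python
# Illustrative only: end dates (years BP) are hypothetical stand-ins,
# not the actual published or re-dated values from the paper.
published_end = {"MD01-2421 splice": 5, "OCE326-GGC30": 8, "M35004-4": 9, "MD95-2043": 990}
redated_end = {"MD01-2421 splice": 11, "OCE326-GGC30": 199, "M35004-4": 9, "MD95-2043": 0}

def closing_contributors(end_dates_bp, cutoff_bp=10):
    """Cores whose end date falls inside the closing bin (end date < cutoff BP)."""
    return sorted(core for core, bp in end_dates_bp.items() if bp < cutoff_bp)

print(closing_contributors(published_end))  # three cores reach the closing bin
print(closing_contributors(redated_end))    # a different set after re-dating
```

With the published column, three cores (two strongly negative) populate the closing value; with the re-dated column, they drop out and other cores take their place, which is the whole effect shown in Figure 1.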

In a follow-up post, I’ll examine the validity of Marcott-Shakun redating. If the relevant specialists had been aware of or consulted on the Marcott-Shakun redating, I’m sure that they would have contested it.

Jean S had observed that the Marcott thesis had already described a re-dating of the cores using CALIB 6.0.1 as follows:

All radiocarbon based ages were recalibrated with CALIB 6.0.1 using INTCAL09 and its protocol (Reimer, 2009) for the site-specific locations and materials. Marine reservoir ages were taken from the originally published manuscripts.

The SI to Marcott et al made an essentially identical statement (pdf, 8):

The majority of our age-control points are based on radiocarbon dates. In order to compare the records appropriately, we recalibrated all radiocarbon dates with Calib 6.0.1 using INTCAL09 and its protocol (1) for the site-specific locations and materials. Any reservoir ages used in the ocean datasets followed the original authors’ suggested values, and were held constant unless otherwise stated in the original publication.

However, the re-dating described above is SUBSEQUENT to the Marcott thesis. (I’ve confirmed this by examining plots of individual proxies on pages 200-201 of the thesis. End dates illustrated in the thesis correspond more or less to published end dates and do not reflect the wholesale redating of the Science article.)

I was unable to locate any reference to the wholesale re-dating in the text of Marcott et al 2013. The closest thing to a mention is the following statement in the SI:

Core tops are assumed to be 1950 AD unless otherwise indicated in original publication.

However, something more than this is going on. In some cases, Marcott et al have re-dated core tops indicated as 0 BP in the original publication. (Perhaps with justification, but this is not reported.) In other cases, core tops have been assigned to 0 BP even though different dates have been reported in the original publication. In another important case (of YAD061 significance as I will later discuss), Marcott et al ignored a major dating caveat of the original publication.

Examination of the re-dating of individual cores will give an interesting perspective on the cores themselves – an issue that, in my opinion, ought to have been addressed in technical terms by the authors. More on this in a forthcoming post.

The moral of today’s post for ocean cores. Are you an ocean core that is tired of your current date? Does your current date make you feel too old? Or does it make you feel too young? Try the Marcott-Shakun dating service. Ashley Madison for ocean cores. Confidentiality is guaranteed.

Re: Jeff Norman (Mar 16 13:48),
yes, it is so amazing I could not believe I was reading the spreadsheet right, and just had to ask Steve. I wonder how Isabel Cacho from University of Barcelona likes the re-dating… the original paper is here and data here.

Core MD 95-2043 has an accurate chronostratigraphy based on 18 ¹⁴C AMS ages for the last 20 kyr (Table 1). AMS measurements were determined in the University of Utrecht with a precision ranging from ±37 to ±120 years. The older section has been dated by correlation of the alkenone SST profile with the δ¹⁸O record of the Greenland ice core GISP2 (see more details in the work by Cacho et al. [1999a]). The correlation coefficient between MD 95-2043 SST and GISP2 δ¹⁸O over the ¹⁴C dated interval is extremely high (R=0.92).

How many of the proxies that did not end in the last 100 years were “updated”? Are they selective in which ones get updated? It would be telling if selective updating was occurring and proxies that ended early were not touched.

A bit OT and I also asked this question on another thread and I’d be grateful for your opinion Steve.

I don’t understand how the uncertainty can be temperature-independent, i.e. that proxy precision and accuracy are as good for 1000 years back as for 200 years. The proxies respond to things other than temperature, and there surely must be variations in this. Averaging proxies seems an optimistic way to get to the “true” value.
Steve: It’s hard enough to try to figure out the calculations. I haven’t looked at their uncertainties yet. But your intuition seems right to me. Their first centering was on BP4500-5500. If they centered on the modern reference period (their ultimate interest), the spread of values in the Holocene would increase considerably. That may have something to do with it. Topic for another day.

I get the impression from a previous response to a similar query that the presented “uncertainty” is not a function of the uncertainty of the temperature measuring abilities of the individual proxies but simply of the number of proxies available at that moment in time.

I don’t think that any reconstruction that I have ever seen actually presented a combined uncertainty of the individual proxies’ ability to represent actual temperatures at a local, regional or global level.

For that matter I don’t think I have ever seen modern global temperature trends presented with adequate uncertainties.

Jeff
If this is the case then this is worrying. An error band based on an incomplete uncertainty evaluation gives a very misleading impression. One should then make this clear. For example, they could label the error band “errors only due to factors X,Y” (which is what I do in this situation) or at the very least shout very clearly throughout the paper, and in press interviews, that (possibly) incorrect assumptions are made about other sources of errors being negligible.

I realise that this means the significance of their work is reduced but this is science.

Both Pat Frank and I have posted here over the years about the lack of formality used to estimate uncertainty in climate work, by comparison with that used in related work. For example, from analytical chemistry I’ve referenced the historically important paper: http://www.geoffstuff.com/MR%20Analysis%20Eval%20Morrison%201971.pdf
This gives some consequences of optimistic estimates of capability.
It seems there is a need for a textbook on how to estimate uncertainty in climate time series. Is there one already?

My background in this regard is measuring temperatures (pressures, flows, concentrations, etc.) using calibrated instrumentation as defined by the ASME, the ISO and the U.S. EPA for contract and regulatory acceptance tests.

The steam temperature mattered. The gas temperature into and out of the emission control device mattered. The uncertainties mattered.

As mentioned earlier, in the Ph.D. thesis the temperature uncertainties for the individual proxies were simply kept constant throughout the entire up-to-22,000-year period. Entire batches of proxies were assigned a single constant temperature uncertainty (chronomids and pollen were both assigned uncertainties of 1.7°C). Here is the relevant section from the thesis:

4.5.1 Temperature and Chronologic Uncertainties
In order to incorporate the full range of error associated with both the proxy calibrated temperatures and the age control points used to construct the time series, we implemented a Monte Carlo based approach. 10,000 Monte Carlo simulations were performed for each of the datasets that incorporated both the temperature calibration and chronologic uncertainties (Appendix C). The temperature records used in this study were derived from multiple proxy-based methods, including UK’37, TEX86, Mg/Ca, chronomids, pollen, ice cores, and biomass assemblages (e.g., foraminifera, diatoms, radiolaria). The uncertainty associated with each of the proxies was randomly varied following a normal distribution and errors were assumed to not correlate through time in order to maximize the temperature uncertainties. All UK’37 based alkenone records were converted to temperature following the global core top calibration of Müller et al. (1998). TEX86, Mg/Ca, and all biomass assemblage records were converted to temperature following the original publication from where the data were obtained (Apendix C). Chronomid based temperatures errors (±1.7°C) were derived from the average root mean squared error (RMSE) of several studies. Pollen temperature errors followed the RMSE of Seppä et al. (2005) (±1.7°C). Ice core based temperature error was conservatively assumed to be ±30%.
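As a rough sketch of what that description implies (my reading, not the authors’ code), the temperature perturbation step amounts to adding normally distributed, serially uncorrelated noise at each proxy’s assigned 1σ. The proxy values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_temperatures(series, sigma, n_sims=10_000):
    """Perturb a proxy temperature series with normal, serially uncorrelated
    errors (sigma = the proxy's assigned 1-sigma uncertainty, e.g. 1.7 C for
    chironomid- or pollen-based records). Returns an (n_sims, len(series)) array."""
    series = np.asarray(series, dtype=float)
    noise = rng.normal(0.0, sigma, size=(n_sims, len(series)))
    return series + noise

series = [12.1, 12.4, 11.9, 12.0]  # hypothetical proxy temperatures, deg C
sims = perturb_temperatures(series, sigma=1.7)
print(sims.shape)  # (10000, 4)
```

Because the errors are drawn independently at each time step, this choice maximizes the scatter of individual realizations but averages out quickly across 10,000 simulations, which is relevant to the discussion of the width of the reported uncertainty band.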

Is redating of proxies common and accepted? If redated are the creators of the proxies not asked for their opinion as to why it is needed?
If it is an acceptable procedure, I would like to be re-dated from 57 yo with thin hair to 27 with golden locks!

Re: polski (Mar 16 13:52), It seems to me that this very first post neatly nailed all the issues in a couple of lines. It would seem that they adopted a change in dating procedure based, at best, on hand-waving, before the community had established a procedure sound enough to justify an informal explanation for altering the ages. Such optimistic alteration of the reported ages would therefore have no place within the spirit of the dating service, especially in circumstances where accurate reporting of age would be a fundamental indicator of value.

In other words — if one is to use the Ashley Madison “dating service”, and a reputation for premature gesticulation and alteration of reported age precedes you, it may well hamper future efforts to partake of the services of the “dating service” to achieve one’s stated goals. All this assumes that I have correctly grasped the essential issues, of course.

This reminds me of a friend’s relative who was a refugee from Cuba in the 70s. Because she arrived with no papers and was close to 30, she simply added a couple years to her original birthdate (which coincidentally was fairly close to 1940 by the way).

Everything was fine until decades later, when she had to work two more years before she could start collecting Social Security!

There is nothing quite so entertaining as when the team or, in this case, wannabe junior members, toss a slow pitch over the plate and Steve, in full flight with the bit between his teeth, knocks it out of the park. In this case, Marcott et al appear to have provided the gift that keeps on giving.

There is also nothing quite so nauseating as the fact that Steve must provide pro bono a service that one would have thought the vaunted peer review gates of journals such as Science should have executed as a matter of course. Sad.

Marcott was one of the authors, so his co-authors should have caught the obvious errors (or can we say manipulations). Didn’t they? Then there were the peer-reviewers; they should have questioned some of these errors or inconsistencies.
But then there is also the “Team” (Mann, Schmidt, Tamino et al). Aren’t they interested in these studies? Aren’t they able to cast a skeptical eye on such a study, or do they accept just about anything that fits their beliefs?

Finally, it falls to a skeptical non-climatologist, unpaid for his work, to do the quality control. Imagine if Steve hadn’t caught these errors: we (the whole world) would be damned to accept whatever they come up with in their studies. A frightening thought.
Thanks Steve!

All four authors of the article were also involved with Marcott’s PhD thesis. Oddly (at least to me), the dissertation supervisor, Peter Clark, is credited as a co-author of the relevant chapter of the thesis.

As neither reconstruction in your diagram is a reasonable depiction of history over the last couple of hundred years, is not the correct conclusion that the number of proxies (or the method itself) is insufficient over that interval? [Equivalently, the error bars are huge.]

Two things, Bob. First, the objective is to get this paper into AR5; whether Mann et al have the pull to do that, given the current furore in the blogosphere, I don’t know. It depends on the integrity of the various lead authors. Most of them seem to be men/women on a mission, but you never know: there might be a flickering of scientific integrity in the embers of what were once scientists. The public are unaware of the deception because the MSM has shown the hockey stick and certainly won’t report its demolition on this and other blogs, so we are depending on the integrity of scientists who have shown little in the past.

The second thing, as I’ve observed before, is the slipshod methodology of climate science in general. Richard Betts from the Met Office told me his mission was to get respect for scientists. In any other field there would be a hue and cry from the other scientists if such an obviously flawed paper came out. In climate science we get en masse silence, or enthusiastic support.

I read somewhere that only 1 in 70 papers submitted to Science get published. It makes you wonder what sort of rubbish is put forward in papers if this one came first out of seventy. Or you could look at my first point again.

re: AR5, one of the Lead Authors for the Paleo chapter is a co-author with Shakun and Marcott and Clark on Shakun et al. (2013): Bette Otto-Bliesner of NCAR. So there may be some interesting discussions on what gets into that chapter….

re: hype and specialist reactions, consider how fast a substantial number of microbiologists came out against the “arsenic life” paper (also published in “Science”) in Dec. 2010. Within a week or less the paper and authors were already being fairly widely repudiated, with quite a few prominent microbiologists speaking out. What a contrast to how things go with climate science and group solidarity….

Assuming I am not the only one reading this, and assuming the above will be communicated to the authors, I would hope they will respond specifically to the questions being raised here at CA, and explain how, if at all, my understanding of their work should be modified if my sole source of information regarding it were the recent March 9 Atlantic article “We’re Screwed” by Tim McDonnell.

The age-depth model for MD952011 is not closely crossdated with tephra. It contains a single tephra, the Vedde ash, in the Younger Dryas. No Holocene tephras have been identified. Nor is this core near Iceland – it is much closer to Norway.

Cores HM107-04 and HM107-05 are offshore Iceland, and dated with many tephras.

Steve: thanks for the comment. That will teach me not to preview a post that I haven’t written yet. I think that I was thinking of MD99-2275.

Richard, can you comment on the main issue though: – the legitimacy of re-dating the coretop?

In deference to Richard Telford’s correction and to avoid confusion for further readers, I have edited the main post by removing the following sentence:

As a preview, I’ll note that MD95-2011 is a core that I’ve studied. It is a high-resolution core offshore Iceland, that has been carefully studied by competent specialists and closely crossdated by tephra. While the dating of some core tops may be open to question, this is not one of them.

As Richard observed MD95-2011 is a core offshore Norway. I was thinking of offshore Iceland cores. I’ll ensure that this point is properly addressed in the forthcoming post. I’ve documented the change here in comments and corrected the article itself to reflect the review comments.

I will discuss MD95-2011 in a forthcoming post since I do not believe that relevant specialists would support Marcott-Shakun redating. Another M-S core, MD95-2015, which is offshore Iceland, was also one of the re-dated cores. But I’ll have to double check whether it had tephra crossdating.

Steve, I think the sentence following also needs to be removed, pending your new post: “While the dating of some core tops may be open to question, this is not one of them“ (because the “this” referent is to the core MD95-2011 in the sentence removed).
Steve: fixed. I amended my comment to reflect this as well.

It’s nice to see that my misdescription of the location of MD95-2011 was so quickly spotted. Curiously, a similar mis-statement was made in Marchal et al 2002, one of Marcott’s references:

It is noteworthy that apparent cooling is observed in so different oceanographic environments, including the Barents slope (core M23258), off Norway (MD952011), southwest of Iceland (MD952011), the Gulf of Cadiz (M39008), the Alboran Sea (MD952043), and the Tyrrhenian Sea (BS7938 and BS7933).

On the subject of tephra, do we have a resident volcanologist who would opine on the area of influence and detectability of tephra at various distances from its deposition on land and sea; and on its fingerprint, if any, when there might be overlap from two sources of similar age. I can see its use as a marker, but I can see some generalist limitations. Is it typically separated and dated by radioactivity? There would be some problems with this approach also. It keeps coming back, in this discussion, to errors on the time (X) axis, which need to be corrected before the Y axis can be used for anything useful. Australia’s geochemists typically do not have much exposure to geologically recent volcanism, so I apologise for asking instead of reporting.

In the Iceland area this is hardly necessary in recent centuries since we have excellent historical data on eruptions back to the Middle Ages and good geochemical data on the relevant volcanic systems.

Confusion: “… off Norway (MD952011), southwest of Iceland (MD952011)…” It’s the same core number in each case. Is this the “misstatement” you are referring to? If this is a cut-and-paste from Marchal 2002, then it appears to be a typo of some sort instead of a misstatement.
Steve: of course it’s a typo. I was feeling a little annoyed that Richard Telford (whose comments I welcome) tweaked me on mislocating MD95-2011 (which we had discussed in its right geography on an earlier occasion) without also calling out Marcott et al on the substantive issue of unjustified re-dating. I’m less annoyed today: it’s what happens.

I noticed a comment on one of the earlier Marcott posts saying that journals should use Steve to review the statistical element of climate papers. I completely disagree. Far better for every prominent climate scientist who, in future, thinks they can get away with shoddy statistical practices to be exposed to ridicule in public. If I were a prominent, or even not so prominent, climate scientist tempted to offer dubious conclusions based on dubious or downright shoddy practices, I would be more concerned about Mr McIntyre’s eyes alighting on my published paper, and thereby risking public ridicule, than about any review prior to publication.

One well aimed public exposure from the big dog will have far more of a chilling effect on anyone tempted to risk their reputation than a hundred private reviews. Every climate scientist should now realise they have to up their game. Reputations are at stake.

Sorry – maybe a dumb question with simple answer … is there a valid reason for redating these cores, and if so an accepted process to do so? I guess I just don’t get why someone else, or some other process, would be applied … why wouldn’t the original authors want to get the dating correct to start with – seems that’s the whole point of their work?

It’s obvious; anthropogenic warming stimulated tachyon bursts originating from the 20th C. These were retrospectively responsible for interfering with historical alkenone generation in the proxies in question.

Brandon,
Well, as I said, that was my first option. I’ve since run using the Marcott dates, and while overall there is not much change, there is now a big and recent spike. I’ve updated here. I’ll post further on this.

Nick Stokes, if you weren’t using the re-dated data from Marcott et al, what data are you saying you used? Marcott et al only provided their re-dated data. You’d have to go to a different source to get a different version.

Which we can tell you didn’t do since you (commendably) provided your code. Your code shows you used the Marcott et al data. That means you used their re-dated data as that’s the only data they provided. How do you figure you could have used anything else?
Steve: Marcott has two date columns: one showing published dates and one showing their dates.

“The complaints are mainly about some recent spikes. My main criticism of the paper so far is that they do plot data in recent times with few proxies to back it. It shows a “hockey stick” which naturally causes excitement. I think they shouldn’t do this – the effect is fragile, and unnecessary. […] Indeed the spikes shown are not in accord with the thermometer record, and I doubt if anyone thinks they are real.” – Nick Stokes

Without the spike at the end no one would have heard of Marcott et al., and it likely would never have been published in Science, nor so close to the IPCC deadline.

Paul,
I don’t think it is necessarily wrong to re-date. I was just doing a no-frills emulation, and used the published dates to avoid getting my head around the changes.

But the dates aren’t original observations by the authors. They work them out mostly by known formulae, given depths, proxy measurements and carbon isotopes etc, which they report. There’s no reason in principle why someone shouldn’t use a different formula. As Richard Telford says elsewhere – these get updated, and there’s also a good case in this type of analysis for using a consistent calibration.
Steve: there is a good case for consistent calibration. But that’s not what Marcott and Shakun did. Consistent use of CALIB 6.0.1 is fine; Marcott already did that in his thesis. The coretop redating is entirely different. I wish that you would pause every so often from being Racehorse Haynes.
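For readers unfamiliar with the mechanics behind this exchange: a handful of calibrated radiocarbon dates at known depths define a core’s age-depth model, and the ages of all other samples are interpolated from those control points. A minimal sketch, with all numbers hypothetical:

```python
import numpy as np

# Hypothetical age-control points for a core: calibrated 14C ages at a few depths.
control_depths = np.array([0.0, 50.0, 120.0, 300.0])    # cm
control_ages = np.array([0.0, 1200.0, 3100.0, 8000.0])  # cal yr BP

# Ages for intermediate sample depths follow by (here, linear) interpolation,
# so any change to a control point -- e.g. re-dating the coretop -- shifts
# every sample date derived from it.
sample_depths = np.array([10.0, 60.0, 200.0])
sample_ages = np.interp(sample_depths, control_depths, control_ages)
print(sample_ages)
```

This is why the coretop matters so much: it anchors one end of the model, and moving it propagates through the youngest part of the record, exactly the part that determines whether a core contributes to the closing value.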

. . . Mr. Haynes also represented Morganna Rose Roberts, baseball’s “kissing bandit,” charged with trespassing at the Astrodome. Mr. Haynes’ defense? His client, who has a 60-inch bust, had been the victim of Newton’s law, pulled to the field by gravity.

Well, according to Wiki:‘her lawyer used what he called the “gravity defense” to explain her unauthorized presence on the field, arguing: “This woman with a 112-pound body and 15-pound chest leaned over the rail to see a foul ball. Gravity took its toll, she fell out on the field, and the rest is history.” The judge laughed and dismissed the case.’

Expect a pleasant communication thanking you for your efforts but pointing out that one of the authors independently discovered this trifling issue some minutes earlier. A minor rewrite will be undertaken.

But has there been a “mistake” spotted i.e. something which would necessitate a retraction or correction ?

I’ve certainly seen evidence for weak science exposed here which demands (good) answers. If any of my papers contained these flaws I’d crawl into bed for a week and then consider a rewrite/correction. However, no serious “mistake” seems to have been spotted as in the Gergis et al. paper where the work was simply non-reproducible according to their stated methodology**. However, the standards in this field are low and I doubt many in the community really care about (or indeed understand the importance of) the issues raised here.

** Here they changed their minds and decided what they did wasn’t a mistake, rather it was what they should have done in the first place :). Fortunately the journal was tough in this case.

Actually the dating uncertainty of a calibrated radiocarbon date is invariably larger and usually much larger than the original uncalibrated date where the uncertainty is only a matter of measurement error.

There are lots of “plateaus” where samples of different dates contain the same amount of radiocarbon. For example according to INTCAL09 a sample dating 120 BP (1830) could also be from 1710, 1890 or 1910.

“After receiving a communication from S. McIntyre we compared our reconstruction with one using the dating of the cores in the original articles and found prominent differences in the recent period. Due to this inconsistency, we wish to retract the Article, even though the reported reconstruction is consistent with other published reconstructions. We apologize to the readers for any adverse consequences that may have resulted from the paper’s publication.”

I am just starting to read Thinking, Fast and Slow by D. Kahneman. I feel like I have been at the water cooler while Steve has been sharing his thinking via his (and the commenters’) interplay of System 1 and System 2 thinking. It will be interesting to see how the water cooler approach to building knowledge (the “ability to identify and understand errors of judgment and choice, in others and eventually in ourselves” (pg 4)) compares to the peer reviewed method.

Borrowing, if I might, the caption from Steve’s first graph, I’ve now got a mental image of a guy clicking around a spreadsheet fiddling a chart to the tune of ‘boom da boom boom da boom redate Marcott style’.

Steve, I’m glad you put the names of Clark and Mix in the mix, so to speak. Along with Bard, they should have been the experienced hands on the good ship Marcott et al. Either they were on board and complicit or on shore leave as their post-doctoral grads Marcott and Shakun drifted a long way offshore in a leaky vessel.

Peter Clark and Alan Mix have been writing papers together since the 90s and even wrote a comment challenging a Forum piece in Eos written by Fred Singer back in 1997.

Given the changes in results as a function of “dating uncertainty” (to put it most charitably), how can ANY of the authors and advisors OK the published error band? In particular the width of the error band as the proxy population tails off.

This is beyond error. Beyond blunder. At best, it is willful blindness to the introduction and existence of a source of noise and uncertainty not accounted for in the estimate of significance. The author could not be unaware that different shifts change the results, especially at the end points. The author could have produced Figure 1 as part of the paper, but chose instead to leave it as an exercise for the reader.

I’ve been trying to understand the Monte Carlo aspect of the analysis. Was it to randomly search for the best time offsets of the proxies to reduce the statistical error between proxies?

Shawn: Regarding the NH reconstructions, using the same reasoning as above, we do not think this increase in temperature in our Monte-Carlo analysis of the paleo proxies between 1920 − 1940 is robust given the resolution and number of datasets…

Re: Stephen Rasey (Mar 16 16:06),
Steve is using the original dates, as bolded in the beginning of the post, but more importantly you are confusing the dates of the proxies and the dates of the reconstruction(s). There are no values after 1940 in any of the Marcott et al reconstructions, and Steve asked him about the increase between the last two values (1920, 1940; actually corresponding to 20-year intervals) of his NHX reconstruction. So it is no wonder he did not talk about the “robustness” of the reconstruction after 1940, since there is no reconstruction to talk about.

“…To account for age uncertainty, our Monte Carlo procedure perturbed the age-control points within their uncertainties. The uncertainty between the age-control points was modeled as a random walk (76), with a “jitter” value of 150 (77). Chronologic uncertainty was modeled as a first-order autoregressive process with a coefficient of 0.999. For the layer-counted ice-core records, we applied a ±2% uncertainty for the Antarctic sites and a ±1% uncertainty for the Greenland site (1σ)…”

From page 8:

“…3. Monte-Carlo-Based Procedure
We used a Monte-Carlo-based procedure to construct 1000 realizations of our global temperature stack. This procedure was done in several steps:
1) We perturbed the proxy temperatures for each of the 73 datasets 1000 times (see Section 2) (Fig. S2a).
2) We then perturbed the age models for each of the 73 records (see Section 2), also 1000 times (Fig. S2a)…”
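A minimal sketch of the age-perturbation step as I read the SI (not the authors’ code; the sigma value below is a made-up placeholder, and I am approximating only the AR(1) part of the description):

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_perturbation(n_points, sigma, phi=0.999):
    """One realization of a first-order autoregressive perturbation series.
    With phi = 0.999 this is nearly a random walk, so neighbouring ages are
    shifted together rather than independently."""
    pert = np.zeros(n_points)
    eps = rng.normal(0.0, sigma, size=n_points)
    for t in range(1, n_points):
        pert[t] = phi * pert[t - 1] + eps[t]
    return pert

ages = np.linspace(0.0, 10_000.0, 51)  # hypothetical age model, cal yr BP
perturbed = ages + ar1_perturbation(len(ages), sigma=50.0)
```

Note that a perturbation like this jitters dates around whatever age model is fed in; it does nothing to address the prior question of whether that age model (e.g. a re-dated coretop) is right in the first place.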

I thought perturbing data was to protect confidentiality while data mining:

“…Abstract: Data perturbation is a data security technique that adds ‘noise’ to databases to allow individual record confidentiality. This technique allows users to ascertain key summary information about the data while preventing a security breach…”

Apparently all 73 datasets have been perturbed via the ‘Monte Carlo technique’ a thousand times.

From page 10:

“…4. Construction of Stacks
We constructed the temperature stack using several different weighting schemes to test the sensitivity of the temperature reconstruction to spatial biases in the dataset. These include an arithmetic mean of the datasets (Standard method), both an area-weighted 5°x5° and 30°x30° lat-lon gridded average, a 10° latitudinal area-weighted mean, and a calculation of 1000 jackknifed stacks that randomly exclude 30% and 50% of the records in each realization (Fig. S4 and S8). We also used a data infilling method based on a regularized expectation maximization algorithm (RegEM; default settings) (78). The uncertainty envelope we report for RegEM combines the Monte Carlo simulation uncertainty with that provided by the RegEM code (78)…”
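Two of those weighting schemes are easy to sketch (again my reading of the SI, not the authors’ code; the records array is random placeholder data, not the actual 73 proxies):

```python
import numpy as np

rng = np.random.default_rng(2)

def arithmetic_stack(records):
    """'Standard method': plain arithmetic mean across datasets.
    records has shape (n_datasets, n_timesteps)."""
    return records.mean(axis=0)

def jackknife_stacks(records, drop_frac=0.3, n_realizations=1000):
    """Jackknifed stacks: each realization randomly drops drop_frac of the
    records and averages the rest."""
    n = records.shape[0]
    keep = n - int(round(drop_frac * n))
    stacks = np.empty((n_realizations, records.shape[1]))
    for i in range(n_realizations):
        idx = rng.choice(n, size=keep, replace=False)
        stacks[i] = records[idx].mean(axis=0)
    return stacks

records = rng.normal(size=(73, 100))  # placeholder: 73 proxies x 100 time steps
print(arithmetic_stack(records).shape)   # (100,)
print(jackknife_stacks(records).shape)   # (1000, 100)
```

The jackknife tests sensitivity to which proxies are included, but, like the age perturbation, it operates entirely within whatever dating has already been assigned to the records.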

From page 20 (Fig S12 description):

“…Fig. S12: Temperature reconstructions using multiple time-steps. (a) Global temperature envelope (1-σ) (light blue fill) and mean of the standard temperature anomaly using a 20 year interpolated time-step (blue line), 100 year time-step (pink line), and 200 year time-step (green line). Mann et al.’s (2) global temperature CRU-EIV composite (darkest gray) is also plotted. Uncertainty bars in upper left corner reflect the average Monte Carlo based 1σ uncertainty for each reconstruction, and were not overlain on line for clarity. (b) same as (a) for the last 11,300 years. Temperature anomaly is from the 1961-1990 yr B.P. average after mean shifting to Mann et al. (2)…”

From page 21:

“…We next used the NCDC land-ocean data set, which spans a greater period of time than the NCEP-NCAR reanalysis. Comparison of the global temperature history for the last 130 years to the temperature history derived from the 73 locations of our data sites shows agreement within 0.1°C (Fig. S15). Finally, we used the modeled surface-air temperature from ECBilt-CLIO (81) in the same way as the NCDC land-ocean data set, and again find agreement within 0.1°C or less between our distribution and the global average from the model (Fig. S16). These findings provide confidence that our dataset provides a reasonable approximation of global average temperature. Our results are also consistent with the work of Jones et al. (85) who demonstrated that the effective number of independent samples is reduced with timescale…”

Data from the proxies is perturbed, whatever they mean by that.
Data is infilled.
The modern temperature dataset is also given the full Monte Carlo simulation.
Mann et al. is used. Strictly for comparison?
Jones et al. is used. Strictly for consistency?
The modern temperature data set is compared to the 73 locations?
It sure looks like the modern data is processed via Mannian methods and then included with the end of the 73 proxies (as Steve shows above).

I keep wondering if the missing data points on the proxies are infilled against the modern temperature data.

In the manufacturing world I was led to believe that when combining resolutions or error rates, the proper method was to multiply them, not to add them and then divide out the desired resolution scales after linking datasets.

It also looks like the age of the data records is adjusted/manipulated throughout their perturbations.

“Modeled as a random walk”
A random walk implies a sequence of random steps, where the result of step 2 is dependent upon the result of step 1. In how many dimensions?

Take a “jitter” of 150 (years, for each proxy, I assume)

If I am to take seriously the concept of a random walk, then some series can be shifted by much more than 150 years as the prior step randomly walks away from the zero point.

Does he mean “random walk” this way: for each proxy (i) and time step (j), where j = 0 at present and increases into history, the random walk is TimeAdj(n+1) = TimeAdj(n) + TimeInterval + RandValue? The proxy series could then stretch and shrink in time through history as well as bulk shift.
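For what it's worth, that interpretation is easy to sketch (hypothetical jitter and sampling values, not the authors' actual parameters, and without the autocorrelation structure a real age-model perturbation would need):

```python
import random

def perturb_ages(ages, jitter_sd):
    """Random-walk perturbation of an age model: each depth interval gets an
    independent random increment, so errors accumulate downcore and the
    series can stretch, shrink, and bulk-shift in time."""
    perturbed = [ages[0]]
    for prev, curr in zip(ages, ages[1:]):
        interval = curr - prev
        # Cumulative: each new age builds on the already-perturbed prior age
        perturbed.append(perturbed[-1] + interval + random.gauss(0, jitter_sd))
    return perturbed

random.seed(1)
core_ages = list(range(0, 2001, 100))           # hypothetical core, 0-2000 BP
perturbed_ages = perturb_ages(core_ages, 30.0)  # hypothetical 30-yr jitter
```

Under this reading, the deviation of `perturbed_ages[-1]` from 2000 BP can indeed grow well beyond a single step's jitter, which is the point made above.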

“…1000 jackknifed stacks that randomly exclude 30% and 50% of the records in each realization…”

In other words, use Monte Carlo to shift, shrink, stretch and exclude proxies optimizing on… WHAT? How many degrees of freedom are in this analysis? The number of possible combinations probably exceeds the number of atoms in the Sun (2E+57)

If I am to take seriously the concept of random walk, then some series can be shifted by much more 150 years as the prior step randomly walks away from the zero point.

That is not what the jitter value means. If it were, the dating uncertainty of each series would be completely irrelevant. In reality, the jitter value of a random walk should be scale insensitive, and it is effectively multiplied by the dating uncertainty of each series. That means each series will be shifting by different amounts depending on the dating uncertainty for that series.

In other words, use Monte Carlo to shift, shrink, stretch and exclude proxies optimizing on… WHAT? How many degrees of freedom are in this analysis? The number of possible combinations probably exceeds the number of atoms in the Sun (2E+57)

Why would you say the process excludes proxies “optimizing on… WHAT?”? What part of “randomly” excluding series makes you think they are optimizing on anything? Moreover, why would you talk about this immediately following a discussion of the authors’ main Monte Carlo method when this is just a single implementation of it that has no bearing on the others?

I say “optimizing” because of the regularized expectation maximization algorithm.

I also say optimizing because the number of degrees of freedom is large, and without some sort of optimized search, 1000 purely random samples of the domain seems woefully inadequate.

Just take the combination of proxies in or out: 2^73 ≈ 10^22 combinations.

Add on the constraint that we’ll take 100% of the combinations where 50% or more of the records are included (>10^21), none of the combinations where fewer than 30% are excluded (about 10^18), and an uncertain but variable fraction of those where 30-50% are excluded. It is still >10^21. Will 10^3 samples adequately cover this space? I think not. For every one of the 1000 trial samples, there are about 10^18 unsampled combinations.
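The back-of-envelope count can be checked exactly with Python's `math.comb` (a sketch of the commenter's arithmetic, not of the paper's procedure, which draws only 1000 random subsets):

```python
from math import comb

N = 73                  # number of proxy records
total_subsets = 2 ** N  # every in/out combination of the 73 records

# Subsets that exclude between 30% and 50% of the records, i.e. exclude k
# records with 22 <= k <= 36 (since 0.3*73 ~ 21.9 and 0.5*73 ~ 36.5)
jackknife_space = sum(comb(N, k) for k in range(22, 37))
```

Under these assumptions `jackknife_space` is of order 10^21, so 1000 draws do sample only a vanishing fraction of the space; whether that matters statistically is a separate question, since random sampling does not require coverage.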

Your explanation makes less than no sense. The algorithm whose name you highlight isn’t a part of the methodology we’re discussing. It’s a methodology the authors used to infill data in one set of test runs. That has nothing to do with the Monte Carlo process. In fact, it has nothing to do with the jackknifed runs you refer to next.

You do math for the number of combinations possible if series were dropped out altogether. Again, that is nothing like what the authors did in their Monte Carlo process. You are referring to a method used in modifying data for one set of test runs, a different set than those you were just talking about. Moreover, your math is done for a dramatically different set of conditions than those used in the paper, and that greatly increases the numbers you came up with.

Both reasons you’ve given are unconnected to the Monte Carlo process. They are even unconnected to each other as both are connected to different test runs. I have no idea how you think they explain anything.

The occasions I have seen Monte Carlo simulations applied were in the stock and futures markets. After playing around with them for a while, I found them to be not worth the effort in terms of their predictive value.

As I understand it, in this case they have been used to infill data. The problem I have with this is that in a truly random system (which paleoclimatology is not), you will get periods when things will go one way for a while. Toss a thousand coins and you will get “ten heads” sets. Now suppose that one sees one of these sets and then “redates” it so that it becomes significant, what have you got?
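The coin-toss point is easy to verify by simulation (a sketch with hypothetical parameters): in 1000 fair tosses, a run of ten consecutive heads shows up roughly 40% of the time, so such stretches are expected by chance.

```python
import random

def has_head_run(n_tosses, run_len, rng):
    """True if n_tosses fair coin flips contain run_len consecutive heads."""
    streak = 0
    for _ in range(n_tosses):
        if rng.random() < 0.5:  # heads
            streak += 1
            if streak >= run_len:
                return True
        else:
            streak = 0
    return False

rng = random.Random(42)
trials = 2000
# Empirical frequency of at least one ten-heads run in 1000 tosses
freq = sum(has_head_run(1000, 10, rng) for _ in range(trials)) / trials
```

The worry above is that a purely random stretch like this, once re-dated into a period of interest, can masquerade as signal.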

Absolutely nothing. Any findings are based on a totally artificial dataset. It doesn’t matter how many times you redo it.

Paleoclimatology is not “random”. There is a reason why something has happened. Monte Carlo simulations have uses when one is doing what if scenarios, but using them to derive significant insights is no better than reading tea leaves. I stand to be corrected.

“Data perturbation is a data security technique that adds ‘noise’ to databases to allow individual record confidentiality. This technique allows users to ascertain key summary information about the data while preventing a security breach…”

Does anyone see why individual proxy points would need to be “anonymized?”

What else would a Monte Carlo treatment of the data bring to the table?

That is what I think he means but without looking at the data myself, I’m not sure if he means to say that there is a significant effect from this. My guess is that it was just an observation and Steve isn’t sure yet.

For my own interest I wrote the following list of concerns/issues that Steve and co have raised. Pls let me know if this is complete/correct.

(1) The uptick seems to be an artefact of the algorithm. That said, the authors state “clearly” that the 1890-onwards part is not “robust”.
Why bother showing it at all, then?
(2) The uptick anyway starts earlier in the extra-tropical reconstructions, so the “1890-onwards” argument in (1) is dubious.
(3) The re-dating has a large effect on the temps corresponding to the past few hundred years, providing an up-tick rather than a down-tick. Is it standard practice to re-date? Is the re-dating performed here sensible?

I add my pet worry:
(4) Why is the uncertainty for temps of 10,000 years ago the same as for 200 years ago? This is counter-intuitive in view of the variability of the non-temperature quantities affecting proxies.

“Why is the uncertainty for temps of 10,000 years ago the same as for 200 years ago? This is counter-intuitive…”
Interestingly, in Marcott’s thesis, the uncertainty bands increased for the latest ~500 years, but this is not the case in the published paper.

The authors evaluate the range of outcomes which are achieved by Monte Carlo dithering of the chronologic and temperature calibration parameters, and from randomized subsets of the proxies. However, even if the calibrations were ideal, and the proxies provided a fully accurate local temperature, the limited number of samples must place a lower bound on the accuracy of a reconstruction of global temperatures. This lower bound increases as the number (or geographical diversity) of proxies decreases. Failure to include this sampling uncertainty leads to overly tight confidence intervals.
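The point about sampling uncertainty can be illustrated with a toy calculation (hypothetical numbers; it ignores spatial correlation, which would shrink the effective N further): even with perfect proxies, the spread of a global mean estimated from N sites scales like 1/sqrt(N).

```python
import random
import statistics

def estimate_global_mean(n_sites, site_sd, rng):
    """One estimate of a (zero) global-mean anomaly from n_sites locations
    whose local values scatter around it with standard deviation site_sd."""
    return statistics.fmean(rng.gauss(0.0, site_sd) for _ in range(n_sites))

rng = random.Random(0)
# Spread of the estimate across many hypothetical "worlds" for each N:
# the purely-sampling error shrinks like 1/sqrt(N) even with perfect proxies.
spread = {
    n: statistics.stdev(estimate_global_mean(n, 1.0, rng) for _ in range(2000))
    for n in (73, 300, 1000)
}
```

With only 73 sites the sampling floor is roughly 12% of the site-to-site scatter in this toy setup; no amount of calibration dithering removes it.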

Gergis et al. — at least in its first incarnation — also did not quantify uncertainty due to sampling.

This should be in the news. If they want scary headlines, it’s right there. I can’t think of anything more scary than this on-going very deliberate deception.

Yes, I know, no one has to explain to me why it won’t happen, the MSM are puppets, I know they’ve been bought. I just keep thinking that somewhere there’s a paper, a reporter or an editor who is not quite so pink as the others and who will finally see just how hot this story is.

We are in the dying days (months? years?) of the biggest and longest-lasting scam in the history of our world and every single one of the newspapers in existence today will look back in a few years and wonder why the heck they didn’t run with the story first.

Well done, by the way, for uncovering this, Steve, we would be lost without you.

If the reconstruction ends in 1940, how then can the IPCC use it to bolster their political claims (since we know that even according to the IPCC most of the warming before 1950 was natural in origin)? Hence, if Marcott et al are right, that would only prove that the natural forcing in the early 20th century was extremely strong.

“I was unable to locate any reference to the wholesale re-dating in the text of Marcott et al 2013.”
Actually it is rather obvious from the data selection criteria (and elsewhere in the Supplementary Materials) that Marcott et al recreate the age-depth models. They write:

“6. All datasets included the original sampling depth and proxy measurement for complete error analysis and for consistent calibration of age models (Calib 6.0.1 using INTCAL09 (1)).”

There would be little (probably no) utility in calibrating the radiocarbon dates with INTCAL09 (or MARINE09 for the marine dates) if the age-depth models were not updated to the re-calibrated dates. This is a necessary, or at least highly desirable, step in this type of analysis. The older publications in Marcott et al would have used INTCAL98 or an earlier calibration curve, or perhaps not calibrated the dates at all, and really do need moving onto a modern calibration curve. There is also the possibility that the original authors had calibrated their radiocarbon dates using an incorrect protocol – I know of some papers published when calibration was a fairly new step that report strange methodologies.

Treating the coretop as 0BP (1950 CE) is commonly done and is reasonable in the absence of other information. However, I would not have recommended this assumption for MD95-2011.
Steve: the re-dating between the thesis and the Science article is NOT “obvious”. The thesis has the same language about CALIB 6.0.1 but doesn’t play around with coretops.

The re-dating should be obvious to anybody who has worked on proxy chronologies – if the dates are recalibrated, the chronology needs to be recreated. Perhaps Marcott et al could have stated this more explicitly.

I’ve just looked again at the JM96-948/2A data on Pangea.de.
doi:10.1594/PANGAEA.510801 has a reasonable age-depth model (-0.045 to 0.522 ka BP)
doi:10.1594/PANGAEA.510799 has an odd age-depth model (-0.049 to -0.02 ka BP); not sure what has happened here. When I next meet the author, I will ask her.
Steve: Richard, you commented on this error six years ago and it still isn’t fixed. Why don’t you email her while you remember?

Richard:
Many thanks for the response. Does the magnitude of the redating surprise you? Are Marcott et al the first ones to redate these proxies? Wouldn’t each redating exercise need to be documented and validated?

I think the C14 dating is used only to locate the 4500-5500BP baseline for each series. The idea is, get the proxy depths of the baseline from C14, then use those depths to get the baseline temperatures from the main series data — the baseline is then used to calculate the temperature anomalies for the whole series. Apologies if this is obvious or wrong.
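If that reading is right, the anomaly step would look something like this sketch (hypothetical core values; the 4500-5500 BP window is the reference period mentioned above):

```python
def to_anomalies(ages_bp, temps, window=(4500, 5500)):
    """Express a proxy temperature series as anomalies relative to its own
    mean over a reference window of ages (years BP)."""
    in_window = [t for a, t in zip(ages_bp, temps) if window[0] <= a <= window[1]]
    baseline = sum(in_window) / len(in_window)
    return [t - baseline for t in temps]

# Hypothetical core: ages in years BP, alkenone temperatures in deg C
ages_bp = [100, 2000, 4700, 5200, 8000]
temps = [14.1, 14.5, 15.0, 15.2, 15.4]
anomalies = to_anomalies(ages_bp, temps)
```

Note how re-dating would move samples in or out of the reference window, shifting the baseline and hence every anomaly in the series.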

So if I understand correctly: they have taken proxies which show the opposite of what some unnamed climate scientists want (that is, an upside-down hockey stick), re-dated them to remove them from the time period we are all worried about, leaving hockey-stick-like proxies in place, and, voilà, a hockey stick and worldwide fame is the result?

I’ve been in a foul mood on this topic and entirely too snarky. Steve has batted down several of my comments as a result. So let me try to articulate what’s got me in such a snit.

I can fully understand the need to re-date based on new or updated calibration data. But how does one choose? The era of big data has created a situation where scientists have a nearly infinite smorgasbord of data to choose from. I’d love it if someone would actually examine the combinatorics here. I would wager that there are billions (or more) of potential re-dating combinations that could be justified by information available somewhere in the big data soup. And Steve has shown that the reconstruction is hypersensitive to dating. Then there’s the problem of information asymmetry: it’s unlikely that any particular investigator is aware of all of the information available. So how do you avoid cherry picking? How do you avoid talking yourself into a rationale for re-dating in such a way as to produce a tidy result? There’s just too much opportunity for confirmation bias.

It’s a bit analogous (although not perfectly so) to the Drake Equation fallacy. Basically you can get any answer you want because the entire equation is based on a combination of assumed values. Re-dating turns a multi-proxy reconstruction into a mulligan stew of assumed values.

Methodologically, I think this argues that you need a separation of duties. The investigator producing the multi-proxy reconstruction should not be permitted to re-date individual proxies.

I’ve been deleting a number of “piling on” comments. Precisely what constitutes “piling on” is a bit arbitrary. But I dislike comments that editorialize against climate scientists or journals or otherwise moralize. The facts are eloquent enough.

mpaul:
Your comment that “The investigator producing the multi-proxy reconstruction should not be permitted to re-date individual proxies” makes sense to me, though I think it would be OK if they first wrote up their re-dating, received feedback, made any needed adjustments, and then separately wrote up the aggregation.

The moral of today’s post for ocean cores. Are you an ocean core that is tired of your current date? Does your current date make you feel too old? Or does it make you feel too young? Try the Marcott-Shakun dating service. Ashley Madison for ocean cores. Confidentiality is guaranteed.

Thanks, Steve … with your choice of topic and this closing para [not to mention the graph], you’ve given this statistically-challenged person a grand chuckle for the day!

With so many contortions dedicated to preserving the icons of “the cause”, surely there’s enough material that you’ve unearthed for Cirque du Soleil to consider a new production! Or perhaps a spin-off “Cirque du Science”!

I do hope that Josh has read this post; it cries out for one of his inimitable captures ;-)

Steve McIntyre at Climate Audit has been dissecting the Marcott et al. paper and corresponding with lead author Shaun Marcott, raising constructive and important questions.

As a result, I sent a note to Marcott and his co-authors asking for some elaboration on points Marcott made in the exchanges with McIntyre. Peter Clark of Oregon State replied (copying all) on Friday, saying they’re preparing a general list of points about their study:

After further discussion, we’ve decided that the best tack to take now is to prepare a FAQ document that will explain, in some detail but at a level that should be understandable by most, how we derived our conclusions. Once we complete this, we will let you know where it can be accessed, and you (and others) can refer to this in any further discussion. We appreciate your taking the time and interest to try to clarify what has happened in our correspondence with McIntyre.

———————-

Sometimes these “this is all I’m gonna say about it” FAQs may end up moving the pea or not addressing the actual issues, but it’s nice to see that they’re willing to do this. Also nice to see Steve hitting this hard in the past few days– the FAQ is most likely being drawn up now, and will probably only address whatever has been generated up through this weekend or so.

In Andy Revkin’s first video Jeremy Shakun is full of the 4-6C temperature rise that he is sure will come. His confidence in showing (only) a present-day, short-term, very sharp uptick in a low-frequency graph seems to stem from those predictions. It has to happen according to theory, so there it is.

The 20th century may have had uniquely rapid warming, but we would need higher resolution data to draw that conclusion with any certainty. Similarly, one should be careful in comparing recent decades to early parts of their reconstruction, as one can easily fall into the trap of comparing a single year or decade to what is essentially an average of centuries.

Paul
They argue that the time interval of the uptick is unreliable and that they made this clear. Their response to your point would be: “Big deal whether it’s an uptick or downtick here; we’ve said in the paper not to trust it.”

Sure, the “uptick” region of invalidity for this set of measurements is not quite just 1850 onwards (looking at the hemispheric results). However, this could be argued to be not terribly serious in view of the very long period over which they make measurements.

I think their stats are, in general, sloppy in this paper and the PR surrounding this work absurd. However, I don’t think your argument, or indeed the other criticisms made here, is clearly retraction-worthy.

I suspect we are dealing with groupthink here. Marcott’s thesis describes honest and respectable work. The 2013 Science paper, however, gives the impression that (among other things) carbon dating has been subtly manicured to support a predefined narrative. This may be entirely a false impression. However, I have other concerns with the linear interpolation of each proxy’s time resolution to a 20-year interval, and with the method used to calculate regional anomalies.

1. Was interpolation done before deriving anomalies or after? This will make a difference.

2. Were anomalies calculated for each location using the measured data, or only after interpolation?

3. Were the anomalies calculated prior to geographical averaging, or were they calculated afterwards?

4. Were anomalies transformed to the 1961-1990 baseline individually, or just the regional averages? Is the transform a simple linear offset?

Groupthink is also likely happening here. Following such gems as the classic Mann hockey stick, upside-down proxies and Gergis et al., there is an expectation here (IMO) that this paper is fundamentally baloney and that each of Steve’s points is a nail in the paper’s coffin.

I’ve seen lots of interesting points raised by Steve and others which demand serious answers. However, to abuse the term yet further, I’ve seen nothing yet which indicates that the paper’s major results are not robust and that it ought to be withdrawn. Even the re-dating question, perhaps the most serious of the issues, can likely be met with a decent explanation (Richard Telford). If the authors state that 1850 onwards is unreliable, they can just about argue that nobody should worry whether an uptick or downtick occurs.

The PR etc. surrounding this paper is another matter entirely. I recently spent a few hours with a journalist who was writing an article on one of my papers. I avoided overhype and encouraged him to do the same.

Were I one of the authors I would have jumped into this discussion, given the track record of this blog. The tone of comments the authors would have received would have been critical but not vicious. Science should work this way. FAQs and response-posts on other blogs don’t move science forward.

Richard Telford’s comments are always interesting, but in this case, I’m 99.99% sure that he’s holding ranks while privately gnashing his teeth. There’s no possible way that he would endorse Marcott et al’s wholesale redating of specialist cores.

Using CALIB 6.0.1 consistently is one thing, but Marcott et al have done something much different. Richard’s very mild disapproval of the redating of MD95-2011 should speak volumes. But it’s foolish to expect more than that in public discussion. Privately, I’m sure that he would ream Marcott et al out if he had a chance.

Imagine if Craig Loehle or I had produced a reconstruction making a similar re-dating of cores dated by specialists. We’d have had our heads handed to us by the specialist community. Imagine what Gavin Schmidt would have written if Loehle had redated a core by 1000 years. He’d have run Loehle out of town.

Assuming your hypothesis is correct, I still don’t see how this has any material impact on the study. It certainly would be poor practice (I doubt such work would have been shown in the paper had it led to downticks), but the major conclusions of the work would be largely unchanged.

The big issue for me is the size of the uncertainties. It is counter-intuitive that a quantity from 10,000 years ago can be measured to the same accuracy and precision as one from 200 years ago. Having looked at the thesis, it’s very difficult to work out exactly what they did here. I will take a little convincing that their uncertainty evaluation and averaging procedure somehow takes into account all of the (non-temperature) factors which influence a proxy and which aren’t terribly well constrained as one goes back to the very distant past of 10,000 years ago. Their time-independent uncertainties represent an extraordinary claim, and such claims require extraordinary evidence.

This is unfortunate since it is the uncertainty which leads them to draw their conclusions about recent temperatures being higher than those of x% of the past y years.

Assuming your hypothesis is correct, I still don’t see how this has any material impact on the study. It certainly would be poor practice (I doubt such work would have been shown in the paper had it led to downticks), but the major conclusions of the work would be largely unchanged.

Um, what “major conclusions” are you referring to that are left unchanged?

As far as I can tell, this paper made its way into Science not on its other merits, but because of what now appears to be an erroneous last data point.

The reconstructions that don’t use the Marcott-Shakun date realignment don’t show this uptick at the end point… and that is making this paper (and sadly the field of paleoclimate by extension, given how many have climbed on board and endorsed this paper) look really bad.

Instrumental data is used to make the case for the fast rise in recent temperatures.
Marcott’s results are primarily historical measurements stretching back 10,000 years or so. Indeed, in Marcott’s own words, their measurements corresponding to the most recent times aren’t robust and are not to be taken seriously. They’ve written themselves a get-out clause.

Marcott’s conclusions are unchanged whether or not measurements relating to the most recent 150 years are kept in the paper.

Without the uptick, would the major conclusion not be that we are slowly cooling? That’s what their graph shows. Even if there is a recent (post-1950) temperature spike, we are well within the normal bounds for recent millennia. In fact, the CO2 is likely cancelling the slow drift into an ice age that the data shows.

I doubt strongly that Marcott, Shakun, Clark and Mix really want to send a message that we don’t need to worry about warming.

Without the uptick this paper is a disaster for the calamitous warming cause.

The uptick is an irrelevance. It is a useful picture for those putting forward a certain point of view, but it has no scientific merit. Even the authors admit this in the paper. If it were a downtick (and it was shown) it would not be a disaster for the CAGW hypothesis, since it is an artefact of an algorithm and not a temperature measurement.

The paper’s significance is due to the temperature measurements stretching back 10000 years.

As Mooloo notes, there is a grand irony for the discourse and a yawning dilemma for the alarmist narrative. I was amazed to find this sentence in CNN’s first reporting of Doctor Marcott’s article: ‘If not for man-made influences, the Earth would be in a very cold phase right now and getting even colder’.
======================

Instrumental data is used to make the case for the fast rise in recent temperatuers [sic]


Yes, but given the low-frequency nature of Marcott’s reconstruction, you can’t use the lack of data to argue that there weren’t correspondingly fast increases or decreases prior to the temperature record, and the proxy based record tells you nothing about high frequency variability.

This is a case of “absence of evidence is not evidence of absence.”

But I was looking for what “new” conclusions there were to this paper?

It’s plausible, even likely, that if you added high-frequency noise to the Holocene Climate Optimum period, you’d still have periods where historic temperatures were warmer than current. So that’s certainly not one.

I agree with you regarding the problems at the end points of the series, but as you know Marcott doesn’t (or maybe now “didn’t”) share that lack of enthusiasm when he said “We’ve never seen something this rapid. Even in the ice age the global temperature never changed this quickly.”

That certainly is an unfortunate thing to claim. “We’ve never seen … ” may be true, but it doesn’t immediately follow that the temperature “never changed this quickly.”

As you probably are aware the attribution studies suggest that the global warming from 1910-1950 was primarily natural in origin and has a slope that is statistically indistinguishable from that of 1970-2000, so even within the existing temperature record, we have an example where it did “change this quickly.”

Re: Robert (Mar 16 23:59),
The challenge is not just “PR.” Can you find any MSM discussion of the paper that provides an appropriate headline, let alone content? Revkin’s is the most even-handed content I’ve seen so far, and his headline is 100% incorrect.

In fact, is there any non-skeptic discussion of the paper that is appropriately grounded? I’m truly astounded this paper has received any attention at all, let alone been published. Talk about GIGO.

“Science’s Mission: Science seeks to publish those papers that are most influential in their fields or across fields and that will significantly advance scientific understanding. Selected papers should present novel and broadly important data, syntheses, or concepts. They should merit the recognition by the scientific community and general public provided by publication in Science, beyond that provided by specialty journals.”

Carrick,
Sorry about that xls problem. I’ve modified the code so that it writes the data it reads immediately to an R binary (prox.sav), which I’ve put into the zip file. There’s a flag to load this file instead of reading the xls.

Nice timing Nick Stokes. I was just about to post something similar as I made a workaround for my own use, and I thought it might be worth posting with your code.

Carrick, if that doesn’t work, let me know. I had to make some workarounds for two different systems, and I think I figured out all the kinks. If not, I can provide the data in a simple CSV file that R will have no trouble reading.

Just wanted to say a quick thank you to Nick for your continued presence here. Providing code is great! It must take a thick skin, but right or wrong (on either side) it’s a better site for your skepticism toward the skepticism.

Both the proxy values (temperatures) and their dates constitute the data. The dates were changed, which means that the data of the thesis and the paper are different. The only valid reason to change the dates is that they were seriously wrong in the thesis. Is this said and motivated by the paper’s authors?

“The only valid reason to change the dates is that they were seriously wrong in the thesis. Is this said and motivated by the paper’s authors?”

I believe that SteveM wrote this thread with the intent of showing that the changed dates in the paper are different than the original proxy source papers showed, and that the thesis was more in line with the original proxy sources. SteveM also notes that the authors of the Marcott paper had little or nothing in the way of explanation of why and by what rationale the dates were changed.

I am quite sure that the authors are very much aware of this discussion at CA and they could come by and explain this all in a flash.

Thanks for bringing up data – as in “facts.” Work with proxies is so far from fact that the crucial issues in the Marcott controversy do not touch upon fact at all. Even the critics, first rate critics such as McIntyre and Telford, agree that the dates can be changed and legitimately. Clearly, then, the entire discussion is over what is “proper” in the relevant statistical methodology. Whenever proxies for temperature are the topic, scientists believe that they are quite justified in failing to tie their inferences to any factual ground at all. In my humble opinion, the lack of empirical science in the study of proxies is exactly why Warmists love them.

OK, with legitimate change of the dates, the question is whether re-dating was independent of proxy shape. We have a population of 73 proxies. In order to get an up-tick in the twentieth century, we need proxies with an up-tick in their tails. We take a sample of n proxies to be re-dated. Assuming random selection, the hypergeometric distribution gives the probability of getting (at most/at least) k proxies with an up-tick in their tails. Or, for getting the required up-tick, proxies with an up-tick in their tails should be re-dated upwards, and proxies with a down-tick in their tails should be re-dated downwards. Same procedure next.
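The suggested hypergeometric calculation is simple to write down (a sketch; the counts of “uptick” proxies here are hypothetical, since the real classification would have to come from the data):

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(exactly k uptick proxies among n re-dated proxies drawn without
    replacement from N proxies, of which K end in an uptick)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Hypothetical counts: 73 proxies, 20 ending in an uptick, 15 re-dated
N, K, n = 73, 20, 15

# Probability that 10 or more of the re-dated proxies have uptick tails,
# under the null hypothesis that re-dating was blind to proxy shape
p_ten_or_more = sum(hypergeom_pmf(k, N, K, n) for k in range(10, n + 1))
```

A very small tail probability here would be evidence against shape-blind re-dating; with the actual counts unknown, this only illustrates the test being proposed.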

Steve, bravo.
There is a classic smoking-gun ‘proof’ of this time-shifting trick in the Science paper as published. Compare figure 1G (proxy ‘survival’) to figure 4.3C in the thesis. Basically the same graphic, except the Science version shows how many proxies (at least 10) were pulled forward to at least 1850, while at least 9 were provably pulled back from 1950.

I sensed passing the puck to you from Climate Etc would result in a hockey goal. Had no idea how solid the goal would be. A high-sticking penalty would seem to be in order for Marcott et al.
Highest regards
Rid

Peter Clark of the author team told Revkin (on Friday) that there is an FAQ document in preparation, to respond to issues being raised. This was before Saturday’s developments, so it will be interesting to see whether they still try to deal with everything via FAQ, or whether they will have to take new action on the paper itself:

[REVKIN]
As a result, I sent a note to Marcott and his co-authors asking for some elaboration on points Marcott made in the exchanges with McIntyre. Peter Clark of Oregon State replied (copying all) on Friday, saying they’re preparing a general list of points about their study:

[PETER CLARK of Oregon State]
After further discussion, we’ve decided that the best tack to take now is to prepare a FAQ document that will explain, in some detail but at a level that should be understandable by most, how we derived our conclusions. Once we complete this, we will let you know where it can be accessed, and you (and others) can refer to this in any further discussion. We appreciate your taking the time and interest to try to clarify what has happened in our correspondence with McIntyre.

This really is beginning to track like Gergis et al. Remember when Karoly took over for Gergis and became the voice of their writing group?

From a distance and regardless of what is going on behind the scenes, if there are professional reputations on the line here, it’s likely they are Clark’s and Mix’s reputations.

I look at Shakun in the interview with Revkin waxing enthusiastically about the dawn of the Anthropocene and the impression I get is that he and Marcott are victims of indoctrination. Years and years of it.

There is some circularity in the alkenone method. (Alkenones are a hardy class of ketones.) In summary, it relies upon the ratio of lengths of alkenone molecules. The ratio is calibrated against other indices, such as sea-surface temperature derived from oxygen isotopes in foraminifera and the like. The oxygen isotopes are related in turn by quite loose equations that more or less say that evaporation sites leave heavier isotopes behind, and vice versa for precipitation sites. However, the distance relation between evaporation and precipitation sites, and the likelihood of intermediate mixing, contamination and repeated processes, is a great unknown. The mere fact that some people write equations for calibrating temperature against oxygen isotope ratios does not eradicate the possibility of wide variation. This has been known since the 1980s: http://www-odp.tamu.edu/publications/161_SR/VOLUME/CHAPTERS/CHAP_39.PDF

Quote: To determine environmental conditions during sapropel formation requires reconstruction of hydrographic parameters: ambient surface-water paleotemperature and paleosalinity before, during, and after sapropel formation. As demonstrated by Rostek et al. (1993), sea-surface temperature (SST) and paleosalinity can be estimated by using the combined planktonic foraminiferal oxygen isotope and alkenone Uk′37 signals. Reliable application of this strategy, however, requires knowledge about growth season and depth habitats both for the planktonic foraminifers used for isotope analysis and for the phytoplankton species used for alkenone measurements. End quote.

Thanks so much for this post. The information warms the heart of this lover of empirical science. I guess none of this information will make it into the analysis of Marcott’s paper.

I wish that McIntyre had a twin who would attend to the empirical matters as well as McIntyre attends to the statistical matters. Of course that twin would not have so much to write about as the Warmists carefully avoid all matters empirical.

Alkenones are either calibrated against a set of core-top samples or against laboratory cultures grown at different temperatures. Doose et al. use the Prahl and Wakeham (1987) equation – this is explicit in their methods section.

Alkenones are not calibrated against d18O.

The quote you give is emphasising the importance of using multiple proxies that are sensitive to different aspects to best understand the oceanography of a site.

Calibrating alkenones against coretop samples assumes that the coretop reflects current temperatures (which is quite reasonable if the sedimentation rate is not too low and there isn’t any loss of loose sediment when recovering the core).

However it would seem that this assumption also eliminates any need for “recalibrating” the coretop.

Prahl and Wakeham (1987) reported an accuracy of ±0.5 °C for their calibration equation, for E. huxleyi algal cultures grown under laboratory conditions. This lower limit of uncertainty should have been propagated into the Marcott reconstruction. Wild-type conditions will produce a larger scatter of points and lower accuracy.

Further, though, Prahl and Wakeham also noted an inconsistency between their calibration, and prior work. They go on to say that, “The cause for this apparent discrepancy is uncertain and warrants further laboratory investigation. Systematic study of other strains of E. huxleyi will reveal the extent to which [derived temperature depends on the specific algal clone used for study]”

So, the ±0.5 °C is really an upper limit of accuracy, because unknown organismal influences other than temperature may affect the alkenone distribution.

I haven’t evaluated the field generally, but if Prahl and Wakeham are typical, alkenone proxy temperatures, like d-O18 temperatures, do not have the resolution needed to evaluate the modern trend.
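The propagation Pat Frank calls for can be sketched in a few lines, under the assumption (mine, for illustration) that the ±0.5 °C calibration error acts as a systematic shared across the alkenone proxies, while other per-proxy scatter is independent. A shared term does not shrink when proxies are averaged:

```python
from math import sqrt

def stack_uncertainty(sigma_cal, sigma_noise, n):
    """1-sigma uncertainty of an n-proxy mean when sigma_cal is a shared
    (systematic) calibration error and sigma_noise is an independent
    per-proxy error: the shared term survives averaging unchanged,
    while the independent term falls as 1/sqrt(n)."""
    return sqrt(sigma_cal ** 2 + sigma_noise ** 2 / n)

# With the Prahl & Wakeham 0.5 C as a shared floor and an assumed 1.0 C of
# independent per-proxy scatter, averaging 31 alkenone proxies leaves:
u = stack_uncertainty(0.5, 1.0, 31)   # ~0.53 C, dominated by the shared term
```

Whatever the true partition between shared and independent error, the point stands: averaging cannot reduce the stack uncertainty below the shared calibration floor.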

Richard Telford, Thank you, I’ll dig deeper. In either the early data or the late data there has to be a way to calibrate against temperature. I was of the opinion, which might be wrong, that oxygen isotopes were used in the early data.
Life here is quite complicated by illness just now, so if you could do a couple of one-liners on whether or not oxygen isotopes enter the calibration of alkenone methods at any stage, I will be grateful. It’s poor of me if I have posted before being more thorough.

Pat Frank, thank you for the comments. Do you know of a text that is used to teach how to measure, formalise, calculate and express both precision and accuracy in the current context? There is an Australian blog, “The Conversation”, that would benefit from exposing some university people to it. It’s run by a number of universities plus bodies like CSIRO and BoM, but they will not allow me to contribute lead articles because they say I have no current affiliation with a university. I’m challenging the use of public funds to censor writers, but I think they have found some clever way to work around this.

There’s another interesting dating issue. Shakun 12 is co-authored by Marcott, and uses a number of the same proxies. Redating was also done in this paper, this time both Marine04 and Marine09 were compared, with 04 being chosen (discussion in the supplemental). The spreadsheet for Shakun12 lists both Marine04 and 09 ages, and the Marine09 ages differ from Marcott13 for the same proxies, usually in the earlier years. As an example, the ages for the first proxy in Marcott, GeoB5844-2:

“We appreciate your taking the time and interest to try to clarify what has happened in our correspondence with McIntyre.”
Or “We are attempting to determine where shite will land after its contact with fan.”

Oh dear. Did Prof Mann point out that:
(a) the blade is nonsense (as admitted by the authors themselves)?
(b) a growth of temperatures such as we saw in the last ~100 years would not be visible owing to proxy resolution, and that therefore it tells us little as to whether or not we’re experiencing unprecedented conditions?
(c) the temperature errors on this study are 50% of his? Perhaps he could even explain why. This is baffling me (a poor physicist).

Using just the anomaly from the means of each proxy, one can compare the increase over the century from 1850 to 1950 using either the published or Marcott (Marine09) ages for the NHX. The slope is about zero for the published ages, and almost a degree per century for the Marcott ages.

No links because my comments with links seem to disappear so I’ll try adding them in a reply.
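The 1850–1950 slope comparison described above can be sketched with a small ordinary-least-squares helper. The two series below are invented stand-ins, not the NHX anomaly data:

```python
def slope_per_century(years, anomalies):
    """Ordinary-least-squares slope of anomaly vs. year, in degrees/century."""
    n = len(years)
    my = sum(years) / n
    ma = sum(anomalies) / n
    num = sum((y - my) * (a - ma) for y, a in zip(years, anomalies))
    den = sum((y - my) ** 2 for y in years)
    return 100.0 * num / den

# Invented stand-ins: a flat series vs. one rising ~1 degree over 1850-1950.
years = list(range(1850, 1951, 10))
flat = [0.0] * len(years)
rising = [(y - 1850) / 100.0 for y in years]
```

Here `slope_per_century(years, flat)` gives 0 and `slope_per_century(years, rising)` gives 1.0 per century — the same kind of contrast reported between the published and Marcott ages.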

An article was published in Nature in April 2012 called “Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation”. The authors were Shakun, Clark, Marcott, Mix and others.

This used Calib 6.0.1 with IntCal04 to redate the proxies, but the supplementary information also contains re-dating information with IntCal09.

I’ve only looked at MD95-2043, but the first 4 dates look to have been stretched. The table below shows the depth, the original dating, the Shakun dating from the Nature article and the Marcott dating from the Science article.

Good to see you extending attribution to all authors. No reason why Marcott should get all the “credit” for this work. Indeed, as mt points out above, the re-dating trick was also used in Shakun et al 2012, an attempt to reverse CO2–temperature causality. That paper’s somewhat longer author list includes the same authors as this paper.

I don’t wish to detract from the rigour and seriousness of your analysis but I’m again tempted to note the appropriateness of some of the authors’ names: Shakun-Mix re-dating.

Looking at the main plot something troubles me. The main plot is here btw

Why are Marcott’s errors half the size of Mann’s? This strikes me as being not a little daft. Mann wasn’t exactly noted for his conservative treatment of systematic uncertainties. What is the trick (used here not as a snarky term) that Marcott et al. used such that they are able to reconstruct the temperature so much better than a similar analysis? Or have they just neglected a lot of errors? Are the errors being compared like for like, e.g. 1 sigma to 1 sigma, etc.?

Also, is there a systematic uncertainty in this work at all? A systematic error is defined here as the uncertainty that would be dominant should nature have granted the authors enough samples that the random error is negligible. Is the systematic error something sensible, or does it turn out to be absurd, e.g. 0.1 degrees uncertainty for 10,000 years ago, which would imply that the methods and assumptions are incredibly (and implausibly) reliable?

I know I bang on about the error, but the importance of getting the error right is the first thing I teach my undergraduate students.

In my primitive Marcott emulation, I had reached a stage where switching from published dates to Marcott dates introduced a very large spike. I have found a reason. It’s a rather trivial one. The sheet of proxy 65 has some junk many lines down from the data block. My R program read these as data, with spiky effects which I explain in the post. It’s due to the linear interpolation. With this fixed, a modest spike remains. I think I have found some reasons for this, but am still checking.

A more careful (robust?) program would have avoided this problem, but it’s just possible that it has affected others.
Steve: I had noticed the junk sections of several proxies down the page and had excluded that data in my calculations.
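The failure mode Nick describes — junk lines below a data block being read as data and then smeared through the reconstruction by linear interpolation — can be guarded against generically. A minimal sketch in Python (the program discussed above was in R; the two-column age/temperature layout here is an assumption):

```python
def clean_numeric_rows(rows):
    """Keep only leading rows whose (age, temp) fields both parse as
    numbers, stopping at the first row that does not -- so junk appended
    below a data block is never read as data."""
    out = []
    for row in rows:
        try:
            out.append((float(row[0]), float(row[1])))
        except (ValueError, IndexError):
            break   # first non-numeric row ends the data block
    return out

# Hypothetical sheet contents: a data block followed by stray notes.
raw = [["10", "28.5"], ["110", "28.1"], ["", ""], ["notes:", "see tab 2"]]
data = clean_numeric_rows(raw)   # -> [(10.0, 28.5), (110.0, 28.1)]
```

Stopping at the first unparseable row (rather than silently coercing) is the conservative choice: anything after the break is flagged for a human to inspect.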

While we are fixing obvious errors in the data provided by Marcott et al, we should remove the two zeroes from proxy 62: Stott’s MD98-2176. These values create a change of about 0.4 degrees in the reconstruction at around 9000 BC.

All of the other temperatures in the proxy set are of the order of 28 to 30 C. The proxy plot in the thesis does not display such precipitous declines in temperature.
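A crude screen for such physically implausible values can be written in a couple of lines. The 5 °C threshold and the sample series below are my assumptions for illustration, not Stott's actual data:

```python
from statistics import median

def flag_implausible(temps, max_dev=5.0):
    """Return indices of values more than max_dev degrees from the series
    median -- a crude screen for placeholder values like 0.0 in a tropical
    core where everything else sits near 28-30 C."""
    m = median(temps)
    return [i for i, t in enumerate(temps) if abs(t - m) > max_dev]

series = [29.1, 28.7, 0.0, 29.4, 0.0, 28.9]   # hypothetical MD98-2176-like values
bad = flag_implausible(series)                # -> [2, 4]
```

The median (rather than the mean) is used as the reference so that the bad values themselves cannot drag the reference point toward them.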

Being cynical about almost anything to do with official climate science, I would suggest something along the following lines may have happened:

Imagine a small room somewhere in the academic world, probably early last year, where Dr Marcott is sitting in front of a small panel of ‘eminent’ climate scientists.

The conversation eventually comes around to the subject of economic reality in the world of climate science.

“Well, Dr Marcott your PhD thesis is excellent work, but if you want to have a career in climate science, you must realise you have to come to the right conclusions, something which was clearly not achieved in your thesis.”

“Oh,” replies Dr Marcott, contemplating the bleak prospect of having to find a real job in the real world, “I understand, but what do I have to do?”

“Well, we suggest you publish a new research paper along the lines of your thesis, but this time coming to the right conclusions. So no big deal, just an update of some of the graphics, casting a fresh eye over the data, clean up some of the wording, nothing serious. And to make things easier for you, we can suggest some co-authors – people who are known to be sound on the subject of climate science – they will provide you with the statistical methodology and whatever other ‘proof’ you need to reach the right conclusions. These people are masters in the interpretation of raw data and can be relied on to provide you with what you need to become one of us. In addition, in order to demonstrate our sincerity in this, we shall arrange publication of this paper in a prestigious science journal – and don’t worry about having any difficulties in the peer review process, they will all be supporters of the Cause.”

“The Cause?” mutters a baffled Marcott.

“The Cause of all us climate scientists.”

“OK, but how can I justify coming to a totally different conclusion?”

“Trust us, nobody will ever know. And in the unlikely event anyone does, the re-interpretations of the data sets used in your new paper will be so complicated no one, not even Steve McIntyre, will ever be able to figure them out. In an absolute worst case scenario, you can talk about always being concerned about the robustness of data, so if there is something wrong it’s not your fault, it’s the fault of the data.”

It is not certain at this point, whether or not Dr Marcott fell to his bended knees, realising his financial future was suddenly now secure as he clasped his hands together and gratefully bleated: “Thank you, oh thank you so very, very much.”

One ’eminent’ climate scientist rises to his feet, extending his hand: “And as we are now all agreed, I warmly welcome you as the newest member of The Team.”

Steve McIntyre: thank you for all the hard, timely, clearly-explained work you’ve done here and, of course, on the original Mann “hockey stick.”. I have learned a great deal and I admire the civility and patient tone that, somehow, you are able to maintain in the face of such ineptitude and outright imposition. As for your wit: the “Dating Service” is hilarious. I would suggest that somebody might also find a way to work up a good skit, along the lines in Rocky Horror Picture Show, with all of the paper authors singing and dancing to the “Time Warp.”

Now fellas. You must have heard of the spin-ratio statistic. The believability of the statistical analysis is a function of the word count of the preceding paragraph explaining all the adjustments made to the data prior to tests of significance.

Several people have noted that known perturbations of climate over the past 10,000 yrs are missing from the final recon. Throwing a bunch of bad data together would do that, as Willis suggested, but the Monte Carlo perturbation of dates (and maybe the redating) would also effectively apply a time-smoothing (a shift to lower frequency), eliminating details. This would flatten the curve and further enhance the perception that recent instrumental warming is all the more alarming.

My post may be considered off topic for the subject at hand, but I cannot keep myself from giving my perspective on the Marcott paper based on my general view of reconstructions and the analyses of those reconstructions.

Firstly, I think when SteveM does these analyses of papers it is a real learning experience for people on all sides of these issues. In this case I hope it draws meaningful replies by the authors, and, regardless of the outcome, we will all have learned. I also think that using the term audit for these analyses does not do them justice, as the audits I have been connected with were not that imaginative in finding problems but were more or less scripted and thus limited by that script. Finding corrective action might have required more imagination, but sometimes that action was merely tuned to the scripted audit.

The problem I do have with these analyses that focus in detail on a given aspect of a reconstruction is that sometimes that approach, as necessary as it is, gives the impression that the remainder of the rationale and methods used in the reconstruction have been accepted. In turn this allows piecemeal replies that imply that, despite the existence of a single problem, the conclusions continue to hold with perhaps a little less certainty.

I have finished reading through the Marcott paper and SI for the first time and from that read I get the impression that this reconstruction was meant primarily to address temperature trends on the centennial basis as the following excerpt explains:

“Power spectra of the resulting synthetic proxy stacks are red, as expected, indicating that signal amplitude reduction increases with frequency. Dividing the input white noise power spectrum by the output synthetic proxy stack spectrum yields a gain function that shows the fraction of variance preserved by frequency (Fig. S17a). The gain function is near 1 above ~2000-year periods, suggesting that multi-millennial variability in the Holocene stack may be almost fully recorded. Below ~300-year periods, in contrast, the gain is near-zero, implying proxy record uncertainties completely remove centennial variability in the stack. Between these two periods, the gain function exhibits a steady ramp and crosses 0.5 at a period of ~1000 years.”

Taken together with the table in the SI showing the temporal proxy resolution – nothing less than 20 years, and with an eyeball average around 100 years – I do not see how advocates, including scientist/advocates, can really say anything about the comparison to the 40 years of the modern warming period. While the authors have not tacked the instrumental record onto the end of the reconstruction, as a number of reconstruction authors have done in the past (with the implicit meaning that that record and the reconstruction proxy responses can be taken as equally valid), they have evidently somehow been able to show a spike upward at the end, even while showing very large uncertainty intervals and telling us, at least implicitly, in the SI that that spike is not to be taken seriously. Note that the uncertainty limits shaded in blue in Marcott are for ±1 sigma, not ±2 sigma; that last upward spike is rather difficult to read from the graph and is probably better estimated by looking at the −1 sigma bound.

Finally, a good hard look at the individual proxies shows, as is so often the case with these published reconstructions, little coherence between the proxy responses over time, even allowing that the proxies are located around the globe and trends can differ by location. Unfortunately, averaging the responses in the hope that the noise cancels out does not address the key issue: if proxy responses are influenced only a small amount by temperature and a large amount by other effects, there is no reason to expect those other effects to be sufficiently similar in kind and magnitude to cancel and leave a meaningful temperature signal. This area of investigation has been totally neglected, in my view, by the climate scientists, and it would not be surprising to hear, as I paraphrase from a second-hand comment by a Marcott author: we take 80 proxies and average them together.

Maybe I’m missing something obvious, but the alkenones come from marine algae, and are thus proxies for sea-water temperature, not surface air temperature, if I understand this correctly. Doesn’t this make a 1.9 C rise in 20 years all the more … not robust?

Dumb question from a lurker – the extreme drop off shown using the original dates also looks somewhat “alarming” to the casual observer, like something went haywire in the modern era. Does it imply anything of significance?

It could suggest that the proxies in question have been picking up some recent (anthropogenic?) changes unrelated to temperature. This might not be the case, but it seems more likely than the possibility that these proxies have recorded a rapid temperature change.

I’m sure that The Team will now gang up against the editor of “Science”, as we saw against another editor in the Climategate e-mails, and as Gavin has so eloquently explained – it was just because they don’t like bad science to be published. On second thought … I’m not that sure…

Just a stupid question. Has anybody checked, or has anybody the possibility to check, whether this new, inventive proxy re-dating process has ever been used in other work? I remember there was a Shakun et al not so long ago… Maybe there is a consistency somewhere after all. Just asking…

For what it’s worth, Monte Carlo analysis is commonly used in electronic engineering. The idea is that if you have a circuit with multiple components, each of which has a tolerance associated with it, you want to know that the output of the circuit will likely remain within required limits no matter what combination of component tolerances you have. A “worst-case” analysis can do the same thing, but sometimes it is difficult to work out exactly how a given component will affect the output without doing a lot of math for that specific component. What you do in electronics is build a software simulation of the circuit which allows the tolerances to be considered, then the software does a Monte Carlo analysis – basically it tries the components at different values within their tolerance limits and plots the output. Usually you end up with a whole series of curves showing how the output will change with component values, and hopefully this will tell you whether the circuit will still meet its requirements.

In this case the scientist wants to present a single curve showing temperature over time, and this curve is a function of multiple input proxies. Unfortunately there is some tolerance on the dating of those proxies, so it seems they used a Monte Carlo analysis to get an idea of how taking that tolerance into account impacts the end curve.
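A minimal sketch of the electronics version described above: a resistive divider with 5% parts, sampled over the tolerance range. All component values are assumed for illustration:

```python
import random

def divider_out(vin, r1, r2):
    """Output of a resistive divider: Vout = Vin * R2 / (R1 + R2)."""
    return vin * r2 / (r1 + r2)

random.seed(1)
# Nominal 10k/10k divider with 5% tolerance parts, 5 V in.
samples = [divider_out(5.0,
                       random.uniform(9500, 10500),
                       random.uniform(9500, 10500))
           for _ in range(10_000)]
lo, hi = min(samples), max(samples)   # spread of outputs across tolerances
```

The nominal output is 2.5 V; the Monte Carlo sweep shows the output spreading over roughly 2.38–2.62 V across tolerance combinations — the same kind of envelope, transplanted to proxy dates, that the Marcott procedure produces for the temperature curve.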

Dear Steve, I do not doubt you have found once more irritating data manipulations. However, even as somebody fairly familiar with climate matters, I am unable to follow your essay. Could you present your findings in a more didactic manner? What are alkenones? What proxy do they represent? Where and when were the cores drilled? Where was the original dating published, and by whom? What implications does the re-dating have? Furthermore, I do not understand the table you show, nor the explanations you give.

I’d very much like to be able to follow your analysis so that I can reproduce the essentials in any dispute on climate change issues….

“In 100 years, we’ve gone from the cold end of the spectrum to the warm end of the spectrum,” Marcott said. “We’ve never seen something this rapid. Even in the ice age the global temperature never changed this quickly.”

Skiphil,
I noticed this from your link: “The same fossil-based data suggest a similar level of warming occurring in just one generation: from the 1920s to the 1940s. Actual thermometer records don’t show the rise from the 1920s to the 1940s was quite that big and Marcott said for such recent time periods it is better to use actual thermometer readings than his proxies.”

A general question from a non-specialist.
Marcott said:
“In 100 years, we’ve gone from the cold end of the spectrum to the warm end of the spectrum,” Marcott said. “We’ve never seen something this rapid. Even in the ice age the global temperature never changed this quickly.”

If he’s using the weighted average of low frequency proxies, which themselves have a large dispersion wrt each other, how could he possibly draw that conclusion from this study ? Surely any (genuine) 100 year downticks/upticks would be washed out in resolution effects.

“even in the ice age” , why even? Would we expect temperatures to vary ‘even’ more _during_ a period of glaciation (which is presumably what he is referring to by “ice age”)?

Having muddied the historical rate of change by shifting various proxies back and forth, it would not be surprising to see less variation and lesser rates of change. This is similar to Mann’s flattening technique. By inverting Tiljander and averaging he reduced historical variability.

I assume they claim to understand the resolution of their measurement. They should therefore be able to show how a 100-year temp increase, e.g. as seen in the 20th century, would appear if they artificially grafted such a rise into their data. This I would love to see.

My suspicion is that it would be totally invisible. Then again, it would be interesting to be proven wrong.
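The grafting experiment proposed above is easy to mock up: splice a 1-degree, 100-year ramp onto an otherwise flat annual series, then average it into 300-year bins to mimic a proxy's effective resolution. All numbers are illustrative, not the Marcott data:

```python
def bin_average(values, width):
    """Average consecutive values in non-overlapping bins of `width` samples,
    mimicking a proxy whose effective resolution is `width` years."""
    return [sum(values[i:i + width]) / width
            for i in range(0, len(values) - width + 1, width)]

# 10,100 flat years followed by a grafted 1-degree ramp over the final
# 100 years (a 20th-century-style rise).
annual = [0.0] * 10100 + [i / 100.0 for i in range(1, 101)]
proxy = bin_average(annual, 300)   # 300-yr effective resolution

peak_annual = max(annual)          # 1.0 degree
peak_proxy = max(proxy)            # ~0.17 degree: mostly averaged away
```

At 300-year effective resolution, the grafted 1-degree ramp survives only as a bump of roughly 0.17 degrees, supporting the suspicion that a 20th-century-style excursion would be nearly invisible in such a stack.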

“Marcott said for such recent time periods it is better to use actual thermometer readings than his proxies.”

Nick,

Thanks, but doesn’t such a modern “divergence” between proxies and thermometer readings raise issues about the reliability of those proxies? How then do we really know said proxies were any more reliable in the distant past?

I’m not a scientist, but I find it mystifying that Marcott’s statement can be so blithely asserted (echoes of tree ring divergence issues post-1960). Why should such a divergence of proxy and instrumental records be treated as anything but a sign that the reliability of such proxies is cast into serious doubt?

Nick Stokes, that comment means to me that the instrumental record and the proxy responses to temperature are not equally valid measures of temperature – not even close in my estimation – and it also implies that it is very misleading to tack an instrumental series on the end of a reconstruction as is so often done. This practice is particularly misleading when the proxy response does not show the expected ending trend upward.

… or if it is then asserted that the divergence “doesn’t matter” because the proxies don’t have so fine a resolution, now we get to the issue that the Marcott study (as Robert Rohde emphasizes) may not have high resolution below 300 or 400 year intervals due to the types of proxies used.

But if that is the point, then the 20th century spike means nothing significant AND it is impossible to rule out similar spikes in similar periods in the past.

Seems to my “Joe Citizen” mind that the study has no business saying anything about the 20th century, or about temp. variations in the past for less than 3 or 4 centuries.

Skiphil,
Well, the main problem with the recent period is that many proxies just don’t extend into it. That doesn’t make them bad proxies. They aren’t calibrated by reference to modern air temp.

But they are also low resolution (time). That’s partly due to uncertainty about what time they relate to. But also whether there is even a unique time, since they are generally layered sediments and there is mixing.

So it’s hard to get, say, accurate decade measurements, but should be better over a century.

(1) Why did you show the uptick when it has no effective scientific value? Even though it is stated that the recent temps are not robust, the graph itself gives a misleading picture. Indeed, it seems that people are shouting about the “hockey stick” even though no valid measurement of the blade has been made here. I appreciate that statements were made in the text and interviews to correctly describe the weakness of these data points.
(2) If a temperature variation as seen in the 20th century were to have taken place in, e.g., the 6th century, how would it appear in your final distribution? Would it have been washed out?
(3) In view of (2) above, does *this* particular study say anything about the exceptionalism of the 20th century in terms of global temperatures?
(4) What is your systematic uncertainty on the temperature reconstruction, as opposed to random error, i.e. in the limit of infinite proxy statistics, what contribution to the uncertainty comes from other factors? Is it tiny, e.g. 0.1 degrees, or large, and is there a dominant error source accounting for it?
(5) Are there any known sources of uncertainty omitted?
(6) The uncertainty on temperature measurements of 200 years ago is the same as for 10,000 years ago. This is counter-intuitive (to me), as I would expect, e.g., factors other than temperature to influence the proxy response, and such factors would likely be more poorly controlled the further back in time one goes.

Kenneth F,
You’re right that they are not equally valid – thermometer is best. But it’s just one world temperature series we’re trying to find out about, and absent thermometers, we have to make do with proxies as an extension of the measured temp. But of course they should be plotted together – that’s what the whole endeavour is for. To show the best history of temperature we can get.

No, Nick, that is not the best we can do, and in fact it is probably the worst we can do, because it adds validity to the reconstruction where none exists. This is particularly obvious when the proxy responses diverge downward and we get a superimposed instrumental record. It diverts attention from the basic issue, and that is how we can validate proxies and proxy responses a priori.

I might not mind this splicing if the authors went into detail on how to properly interpret the graph and explained what it is not showing. Since the authors do not provide this information, it becomes what Marcott et al appear to have done with the spike: a publicity stunt. I think your defense is more in the spirit of what might be forgiven if this process were carried out by a politician.

Nick… I am curious as to the best way of adding the temperature record to a set of readings derived from proxies. Do you have any thoughts on the best way to do that without obscuring problems such as different data resolutions (temperature daily, proxies ~30 years)?

Temperatures do change much more abruptly during glaciations than during interglacials, at least in the northern hemisphere. As a matter of fact, not even Marcott et al’s manufactured hockey stick can match the rate of change at the end of the Younger Dryas, despite Marcott’s claims.
Incidentally, the only proxies that can record such abrupt changes are ice cores, which have annual layers.

[…] Which makes them useful proxies for temperature. But as well as reflecting temperature a useful proxy must also be accurately dated. If you were to choose your proxies carefully and fiddle with their dates you could get any result you wanted … even a hockey stick. […]


[…] the last decade. This year, he takes aim at the latest nonsense, from Marcott et al. On his blog (Climate Audit), he explains how the timing of the data was manipulated – in one case, a dataset was […]


[…] on the proxy samples were changed for some strange reason. McIntyre’s post on his research is here. This chart shows how critical Marcott’s re-dating was to his conclusion that temperatures spiked […]

[…] it. The day after publishing Marcott’s nonresponse, Steve published his re-dating comment, with McIntyre 2 worth more than a thousand words. Black is Science with Marcott’s re-dating. Red is Marcott’s […]

[…] 2 shows the anomaly data using the modified carbon dating (re-dating). This has been identified by Steve McIntyre and others as the main cause of the up-tick. However I think this is only part of the […]