Pierrehumbert: Reason for Methodology Used by IPCC is "Illegitimate"

Pierrehumbert recently made the following statement about the truncation of data:

Whatever the source of the purported … data, there is no legitimate reason in a paper published in 2007 for truncating the … record … as they did. There is, however, a very good illegitimate reason, in that truncating the curve in this way helps to conceal the strength of the trend from the reader, and shortens the period in which the most glaring mismatch … occurs.

I totally agree with Pierrehumbert’s condemnation of graphics that are truncated to “conceal” mismatches from a reader. This is a matter that I’ve discussed previously in connection with IPCC, and which I would like to review today in light of Pierrehumbert joining climateaudit in condemning this practice. Prior discussions of the topic at CA include here, here and here.

IPCC TAR
Let me refresh the discussion by showing how IPCC TAR concealed the mismatch between the post-1960 decline of the Briffa et al 2001 reconstruction and temperatures, by simply deleting the post-1960 values of the Briffa reconstruction. First here is a graphic from the original Briffa article showing the “divergence problem” – values of the Briffa reconstruction declined sharply in the 2nd half of the 20th century, such that closing values were similar to those in the early 19th century, long before modern warming.

The Briffa et al 2001 reconstruction was one of only three reconstructions in the IPCC TAR spaghetti graph. However, as shown below in the left panel, the graph does not show any mismatch between the Briffa et al reconstruction and the other reconstructions or with temperature, even though late 20th century values of the Briffa recon would be at early 19th century levels – well below the supposed “confidence intervals”. The detail shows why: the Briffa MXD series has been truncated. The Briffa series is in green and ends in 1960, but the truncation is virtually impossible to spot in the spaghetti graph as the green series seems to merge with another dark-colored series. Without the truncation (as you can estimate by examining Figure 1), the late 20th century values of the Briffa series would go to values about equal to early 19th century values, yielding a glaring mismatch.

Figure 2. IPCC TAR Figure 2-21 with blowup.

This unscrupulous truncation was previously reported at CA.
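To make the effect concrete, here is a toy numerical sketch. The series below are entirely synthetic, not the actual Briffa or instrumental data; the point is only to show how stopping a diverging series at the truncation year makes the largest proxy-versus-temperature mismatch vanish from the plotted record:

```python
import numpy as np

years = np.arange(1880, 1995)

# Hypothetical "instrumental" series: flat, then warming after 1960.
temp = np.where(years < 1960, 0.0, (years - 1960) * 0.02)

# Hypothetical "proxy" series: tracks temperature until 1960, then
# declines sharply (a stylized "divergence problem").
proxy = np.where(years < 1960, temp, -(years - 1960) * 0.03)

def max_mismatch(y, a, b, cutoff=None):
    """Largest absolute proxy-vs-temperature gap over the plotted span."""
    mask = np.ones_like(y, dtype=bool) if cutoff is None else (y <= cutoff)
    return float(np.abs(a[mask] - b[mask]).max())

full = max_mismatch(years, proxy, temp)             # plot the whole record
truncated = max_mismatch(years, proxy, temp, 1960)  # stop the proxy at 1960

print(full, truncated)  # the truncated plot shows no mismatch at all
```

The particular numbers are made up; what matters is that a cutoff placed just before the divergence removes the entire mismatch from what the reader sees.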

IPCC AR4

This was bad enough in TAR – what about IPCC AR4? While their spaghetti graph included more series, their truncation of the Briffa et al 2001 series was the same as in IPCC TAR, as shown below – the Briffa et al 2001 recon is in light blue – see the 1960 truncation in the detail at right.

Figure 3. IPCC AR4 Box 6.4 with blowup.

Review Comments
In my capacity as an IPCC AR4 reviewer, I noticed that the Briffa et al 2001 reconstruction had once again been truncated in 1960, which had the effect, in Pierrehumbert’s words, of “concealing” the “mismatch” from the reader. I observed in language not dissimilar to Pierrehumbert:

Show the Briffa et al reconstruction through to its end; don’t stop in 1960. Then comment and deal with the “divergence problem” if you need to. Don’t cover up the divergence by truncating this graphic. This was done in IPCC TAR; this was misleading. (Reviewer comment ID #: 309-18)

The IPCC review comments (unavailable at the time of my original post on this but now online here – Go to chapter 6 – Final review comments and then to comment 6-1122) stated:

Rejected: though note ‘divergence’ issue will be discussed, still considered inappropriate to show recent section of Briffa et al. series

So let’s return to Pierrehumbert’s statement:

Whatever the source of the purported … data, there is no legitimate reason in a paper published in 2007 for truncating the … record … as they did. There is, however, a very good illegitimate reason, in that truncating the curve in this way helps to conceal the strength of the trend from the reader, and shortens the period in which the most glaring mismatch … occurs.

While Pierrehumbert made this statement in respect to Courtillot et al 2007, the same principles obviously apply to IPCC AR4. Indeed, the IPCC circumstances are far worse than Courtillot circumstances. First, Courtillot’s interest was in the earlier period, as he recognized the post-1990 divergence. Second, Courtillot used an obsolete data set resulting in a shortened series, but did not actively truncate the data. Neither of these justifies the use of obsolete data, which, like Pierrehumbert, I have criticized.

In the IPCC case, there was an active truncation of “inconvenient” data which had the effect of concealing a mismatch from the reader. Worse, the matter was clearly and explicitly brought to IPCC’s attention and they refused to address the concealing.

In Pierrehumbert’s words, there was no “legitimate reason” for what IPCC did, but a “very good illegitimate reason”. It’s gratifying that Pierrehumbert and realclimate are lending their authority to the condemnation of such practices.

104 Comments

Between Pierrehumbert’s condemnations of data tricksterism and Schmidt’s (hollow) advocacy of data transparency … why do they keep Mann aboard at RC? RC has outgrown its initial raison d’être. Now that the hockey stick is smashed, there is nothing paleoclimatological to buttress. RealClimate must morph into RealGCM.

I love that Steve keeps posting these “hockey stick” graphs – this is great! Very convincing to see how high temperatures are now compared to hundreds of years ago!

But, if you have a complaint about them omitting Briffa’s more recent data, why not add it on to the plot yourself? Show us what it would look like with the “divergence” included, let’s see it, not just talk about it! Why not? Or maybe it wouldn’t make much difference to the way the graph looks?

Steve: My emulation is shown in prior posts linked above. An emulation was not possible for a very long time as it took me over 3 years to obtain a listing of the sites used in Briffa et al 2001. The divergence is very considerable. I showed the Briffa recon in the first Figure to indicate the size of the discrepancy from an original figure.

I found this a very informative post – but you ought to re-arrange the graphics. The banner on the right side of the post is obscuring the important graphics and parts of the graphics that illustrate the points you are making in the text.

The only way I could get around this problem was to copy the gif images from your post onto my computer and look at them there.

By the way, the Briffa curve is not only truncated; it appears to have been “adjusted” just prior to truncation to indicate that it is leveling off much too soon/high (see blow up of figure 3 and compare with figure 1). Note that the original graph shows the post 1940 minimum is much lower than that which occurred just pre 1940.

Mark Pilon, you just need to make your browser wider. Monitor not big enough? Sounds like a personal problem. No problem on my dual 20 inch monitors. 🙂

Arthur, IMO, it’s incorrect to directly compare proxy data to temperature data. That said, on that graph, only HADCRUT goes way up, which is contradicted by 1) McKitrick’s analysis and 2) the satellite data. IOW, if the hockey stick were real, we’d be in the blade, but the satellite data shows that we’re not. SteveM did update some tree ring proxy data, and I’m not sure about the final conclusions, but I think it showed a downward facing blade.

#7 Arthur,
I’m not sure if you’re joking or not? The original Briffa is shown in this post and it does not follow the instrumental record. That is the problem: either the proxy measurements are wrong or the instrumental measurements are wrong (or both), but they cannot both be correct.
If you have decided that splicing the temperature record onto a proxy graphic and then omitting the proxies beyond that point because they don’t match is legitimate, then perhaps there is nothing that can convince you there is a problem.

Steve, the big question that leapt out at me when reading this article concerned the statement:

Rejected: though note divergence issue will be discussed, still considered inappropriate to show recent section of Briffa et al. series
This begs for explication. I sought it in the link, but your link takes me to a table of more than a dozen downloads; I’d have to do a lot of digging to locate that item. Could you fill me in on whatever explication was provided for that statement? Was there any further explanation of the point at that location or any other location? Was the divergence issue discussed as promised?

Steve: Go to chapter 6 – Final review comments and then to comment 6-1122. There’s nothing in the reply other than what I quoted! There’s a discussion of divergence on page 473 of chapter 6 that I’ve discussed elsewhere, but this paragraph hardly justifies concealing the mismatch.

Steve, I’m rather puzzled by your choice of graphics: the Briffa graphic you posted seems not to be the same reconstruction as the one used in the IPCC TAR.
The Briffa graphic you posted is for summer mean temp. for land north of 20N; while the TAR graphic is also for the extratropical N hemisphere, it is clearly not from the same data. Unless we are shown the same data, we’re unable to judge what has been truncated. Even so, the truncation you refer to, if it had been of the Briffa data you presented, was at the very point that their recon had reached a minimum and was starting to rise – an unusual choice if they were attempting to ‘conceal a mismatch’. In any case, I would submit that there is a distinction between truncation of measured data and truncation of data smoothed over long time periods (24 yrs in Briffa and 30 in Loehle), which have problems due to the treatment of the end condition. (See discussion of the Loehle recon. for example.)
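On the end-condition point raised above: the closing values of a smoothed series do depend on how the ends are padded, though that is a separate issue from deleting the underlying data. A minimal sketch (synthetic series, illustrative window length, not the Briffa or Loehle smoothing) of how two common end treatments give different closing values:

```python
import numpy as np

def smooth(x, window, pad="reflect"):
    """Centered moving average; the choice of end treatment only
    matters within the first and last half-window of the series."""
    half = window // 2
    if pad == "reflect":
        # mirror the data about each endpoint
        xp = np.concatenate([x[half:0:-1], x, x[-2:-half - 2:-1]])
    else:
        # "extend": repeat the boundary value
        xp = np.concatenate([np.full(half, x[0]), x, np.full(half, x[-1])])
    return np.convolve(xp, np.ones(window) / window, mode="valid")

x = np.linspace(0.0, 1.0, 100)        # steadily rising series...
x[-20:] -= np.linspace(0.0, 0.6, 20)  # ...with a sharp decline at the end

s_reflect = smooth(x, 25, "reflect")
s_extend = smooth(x, 25, "extend")
print(s_reflect[-1], s_extend[-1])  # the closing values differ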

Steve, the IPCC authors clearly state their reason for excluding the Briffa data:

This divergence is apparently restricted to some northern, high-
latitude regions, but it is certainly not ubiquitous even there. In
their large-scale reconstructions based on tree ring density data,Briffa et al. (2001) specifically excluded the post-1960 data in
their calibration against instrumental records, to avoid biasing
the estimation of the earlier reconstructions (hence they are not
shown in Figure 6.10), implicitly assuming that the divergence
was a uniquely recent phenomenon, as has also been argued by
Cook et al. (2004a). Others, however, argue for a breakdown
in the assumed linear tree growth response to continued
warming, invoking a possible threshold exceedance beyond
which moisture stress now limits further growth (DArrigo
et al., 2004). If true, this would imply a similar limit on the
potential to reconstruct possible warm periods in earlier times
at such sites. At this time there is no consensus on these issues
(for further references see NRC, 2006) and the possibility of
investigating them further is restricted by the lack of recent tree
ring data at most of the sites from which tree ring data discussed
in this chapter were acquired.

It seems to me that, if the authors themselves felt it appropriate to exclude that data, then we owe them some deference in this matter. They stated their reasons for doing so, and those reasons certainly sound legitimate to me. Moreover, the IPCC authors explicitly declare that “there is no consensus on these issues”. I think that Pierrehumbert has been unfair to them. Please advise if I have misunderstood the issue.

23: opposite. I rather see them as her, in the way they hang onto things even when proven wrong, and go out of their way to not even acknowledge that they were wrong. Plus, I see myself as Worf in that episode (since I used to be one of them and I’m still pretty miffed at being had.)

I think there is more fudging going on than mere truncation. Notice that after 1800 there are five local minima values prior to the peak about 1940 in Figure 4. Figure 2. shows only 2 minima pre 1940. Overlaying the line from Fig. 4. on Fig. 2. looks like a different data set. I have a converted graph, but no website from which to link it.

I’m sorry the #23 you’re responding to no longer exists so I fear that your explanation will only have meaning for me. I took your reference to the ‘Admiral Satie moment’ to refer to her final breakdown. I guess my response was too far off topic.

Follow the Money, what’s the basis for your suspicion? The authors explained why they excluded that data. Do you argue that their reason for excluding the data is invalid or incorrect? I have excluded data from analyses when I felt that the data had been compromised. What’s wrong with that?

#28
Whatever the source of the purported data, there is no legitimate reason in a paper published in 2007 for truncating the record as they did. There is, however, a very good illegitimate reason, in that truncating the curve in this way helps to conceal the strength of the trend from the reader, and shortens the period in which the most glaring mismatch occurs.

They stated their reasons for doing so, and those reasons certainly sound legitimate to me.

Their reasons are hogwash. Of course the divergence will bias the earlier reconstructions and they should. Divergence carries with it a hidden implication that the earlier reconstructions are nonsense, i.e. there’s no way to know when the reconstructions are diverging from what they are attempting to reconstruct. Simply stating there is no “consensus” on the matter is not sufficient, they should have provided the data for readers to assess themselves.

The little piece of the scientific method known as falsification: hiding adverse results, no matter how innocuous, thwarts the entire process of falsification. Divergence falsifies reconstructions by itself, and all the data should be available to rightly bias readers to this point, which is entirely what they have stated they did not want to do.

I hate to ask feedback on what must be an elementary point, but by what standard is a data series “obsolete”? As we have seen many dubious manipulations of data sets under the guise of “updating” why does one presume that Briffa’s later versions are more appropriate?

I have excluded data from analyses when I felt that the data had been compromised. Whats wrong with that?

Well, on what basis did you decide data were compromised? The basis matters. If the only basis is: If I keep them, the results look crappy, that’s a poor reason. If you have a concrete reason that is based on some external factor, then that’s ok.

For example: in two phase flow experiments involving pneumatic conveyance, I have excluded data from tests when we found a big plug of material blocking an elbow and preventing flow. The pluging was noted at the time of the experiment, and those tests were excluded from analyses to determine pressure drop for free flowing materials. (Strangely, I might actually look at the data to try to see how they differ from unplugged experiments — but these are cases with plugged elbows, so they are not the case we want to compare to a theory.)

In contrast, suppose this happens: those running the tests thought, a test went fine, and on initial evaluation of the data, the test looks fine (no sudden pressure drop loss etc.) and test run just before and after a particular test are fine BUT that one test happens to make my theory look crappy. I must keep that data. I don’t exclude.

Of course, I try to understand what went wrong, and if we can, we re-run that case to see if we can replicate the results that seem to indicate an outlier. But I don’t just throw that bit out.

The way the Briffa reasoning reads, it sounds like they excluded precisely because the later data suggest the tree-nometer are sometimes subject to error. The reason for the current divergence is unexplained, and no particular reason is provided to explain why, the same thing couldn’t have happened in the past.

If, in contrast, the author excluded that set of trees because they discovered that herbicides has been applied to the area during some recent war, well… then we would all agree we could stop using the trees. But, that’s not what we are reading. They stopped using the trees at that point because if they used them, it would give readers the impression that sometimes the tree-nometers don’t measure temperature so well.

Mike, the legitimate reasons for excluding the data are presented in the quote I provided. There are several:

1. The “This divergence is apparently restricted to some northern, high-latitude regions, but it is certainly not ubiquitous even there.” Thus, it appears to be outlier data, and is therefore intrinsically suspect.

2. The ” divergence was a uniquely recent phenomenon, as has also been argued by Cook et al. (2004a).

3. The data led to a conflict of the dataset with better-established data. When you have a lot of data that says X and some new data that says “Not X”, then you have to be very careful with that new data. If it’s right, you could have a big discovery on your hands, something that will greatly enhance your reputation. So there’s a big incentive to run with data that conflicts with the established data. But if you’re wrong, then your reputation could suffer. These guys decided to exclude that data. Their reputations were on the line, and they made a judgement call. Now you come in, with nothing to lose, and second-guess them. I’ll put greater credibility in their judgement on this matter than yours.

The This divergence is apparently restricted to some northern, high-latitude regions, but it is certainly not ubiquitous even there. Thus, it appears to be outlier data, and is therefore intrinsically suspect.

Which begs the question. The evidence of them being suspect is that they don’t support the theory. There’s no other evidence that they’re suspect.

1. The This divergence is apparently restricted to some northern, high-latitude regions, but it is certainly not ubiquitous even there. Thus, it appears to be outlier data, and is therefore intrinsically suspect.

If it happens there, how can we ever know it hasn’t historically happened anywhere else? This restriction, btw, includes THE TREES that create the hockey stick shape, namely, bristlecone pines. Without them, the recons become noise-like anyway. Oops.

2. The  divergence was a uniquely recent phenomenon, as has also been argued by Cook et al. (2004a).

I explained this one in my original post. What’s your problem? If it is happening now, there’s no way to know when it happened in the past.

3. The data led to a conflict of the dataset with better-established data.

Favoring the data set that supports your outcome, but dismissing the data set that cast doubt on your outcome? Give me a break. That’s what FALSIFICATION is all about, reporting ALL outcomes, regardless of whether they support your initial theory. Clearly

When you have a lot of data that says X and some new data that says Not X, then you have to be very careful with that new data. If its right, you could have a big discovery on your hands, something that will greatly enhance your reputation. So theres a big incentive to run with data that conflicts with the established data. But if youre wrong, then your reputation could suffer. These guys decided to exclude that data. Their reputations were on the line, and they made a judgement call. Now you come in, with nothing to lose, and second-guess them. Ill put greater credibility in their judgement on this matter than yours.

Um, it’s not that simple. These are time series, not just “some data that says X and some new data that says ‘Not X'”. It’s all the same data, same trees, different times. The assumption is that these tree rings represent past temperature. Recent tree rings do not indicate this, hence the divergence. You can’t just say “well, in the past they represented temperature, but not now” and continue to use the data expecting meaningful results. If they diverge now, there’s no way to know if they diverged in the past.

Their reputations are not on the line. These guys were merely pushing the “consensus view”, so there’s nothing to lose. The authors that rejected the criticisms that Steve M. rightly made are the very authors that wrote the reconstruction theory to begin with.

Now you come in, with nothing to lose, and second-guess them. Ill put greater credibility in their judgement on this matter than yours.

Uh, I’m not second guessing them all by my little lonesome. Read around this site. Recons have been rejected by more than just me, and the fact that the NAS as well as the Wegman report support my position is hardly insignificant. Given the many problems with “these guys,” I find it hard to believe anyone thinks they have any credibility at all, let alone more than simple me. You’re right, I have nothing to lose, at least not directly. I have nothing to gain, either, but the same cannot be said for “these guys”.

1. The This divergence is apparently restricted to some northern, high-latitude regions, but it is certainly not ubiquitous even there. Thus, it appears to be outlier data, and is therefore intrinsically suspect.

So use the other trees from that region. You know.. the ones that don’t show the divergence? And which you used to diagnose that these trees no longer work?

2. The  divergence was a uniquely recent phenomenon, as has also been argued by Cook et al. (2004a).

But what makes us believe the recent problem is unprecendented? Why couldn’t it happen back in 1200?

3. The data led to a conflict of the dataset with better-established data. When you have a lot of data that says X and some new data that says Not X, then you have to be very careful with that new data. If its right, you could have a big discovery on your hands, something that will greatly enhance your reputation. So theres a big incentive to run with data that conflicts with the established data. But if youre wrong, then your reputation could suffer. These guys decided to exclude that data. Their reputations were on the line, and they made a judgement call. Now you come in, with nothing to lose, and second-guess them. Ill put greater credibility in their judgement on this matter than yours.

If the new data look bad because they conflict with better data, why use the suspect new data at all? If you really have evidence these trees sometimes don’t match thermometer records (as in now) why not just delete the entire series? Why extrapolate using these trees at all? Why not use the more reliable trees that you belive prove these trees are faulty now? Why truncate at a certain time?

Caution would dictate trying to understand why the data now diverge but meanwhile not use the suspect tree data to extrapolate back.

When you have a lot of data that says X and some new data that says Not X, then you have to be very careful with that new data. If its right, you could have a big discovery on your hands, something that will greatly enhance your reputation. So theres a big incentive to run with data that conflicts with the established data. But if youre wrong, then your reputation could suffer. These guys decided to exclude that data. Their reputations were on the line, and they made a judgement call

“Judgement call?” That sounds more like giving into peer pressure.

If your measurement system fails for a long period of time when compared to established data, doesn’t that suggest maybe your measurement system is ‘hogwash?’ Wouldn’t it be best to fix it or scrap it altogether?

Pierrehumbert recently made the following statement about the truncation of data:

Whatever the source of the purported data, there is no legitimate reason in a paper published in 2007 for truncating the record as they did. There is, however, a very good illegitimate reason, in that truncating the curve in this way helps to conceal the strength of the trend from the reader, and shortens the period in which the most glaring mismatch occurs.

I am just noting that the purpose for this thread as I see it is if RC wants to use this argument on one paper then they should use on all.
But I still haven’t figured out how one would go about getting average annual temp. from tree rings.

OK, there’s been a lot of response to my comments so I’ll try to handle those responses as completely as I can, but I suspect I’ll fall short.

Before I begin, however, I want to offer an observation on this comment:

This is kinda scary that we have people who dont see the difference between science and sausage making.

Larry, I assume that this comment was directed at me. Correct me if I’m wrong. I did not come here to argue with anybody. I came here to determine if it is possible to carry on a reasoned, adult conversation on climate change issues with skeptics. All of my previous attempts have failed. This is a particularly well-informed group and I am hopeful that I have at long last found some people who can discuss this with me.

Now, on to the issues. Several people have taken me to task for defending the decision to exclude for the reason that the data was inconsistent with other data. This is a tricky point and several people have already been tripped up. There’s a difference between excluding data because it clashes with a theory and excluding it because it clashes with other data. The former is unconscionable; the latter is common. The authors clearly state that they are using the latter reason. The principle is sound: if you have a bunch of independent datasets on the same phenomenon, and one dataset ranges far outside the error bars of all the others, then you have a good reason to suspect one of two things: 1) that you’ve discovered something interesting; or 2) that the dataset has somehow been corrupted. In this case, you go back over the data very carefully. It it stands up to all the objections you can raise, then you publish it. If it can’t stand up to the objections, you exclude the data. This kind of thing is done all over the scientific world all the time. So let’s drop that argument, OK?

There remains the argument that excluding the data in this particular case was unjustified. The authors have provided two independent reasons for questioning the data: first, it is inconsistent with other data taken in the same locations and times that it was; and second, the Cook paper argues that special considerations apply that render that this data cannot be directly compared with the other data. Now, there’s plenty of room to argue over all of this. The IPCC report itself noted that there is no consensus about this issue. So why we chalk this up as a legitimate disagreement among professionals and stop insinuating dark conspiracies?

Lucia writes:

“They stopped using the trees at that point because if they used them, it would give readers the impression that sometimes the tree-nometers dont measure temperature so well.”

Again, there were three declared reasons for the exclusion. Your hypothesis has no evidence to support it.

The point was made that we can’t be certain that the issues raised for the later data do not apply to the earlier data. That, however, is the point of the Cook paper, which, sadly, I cannot locate. Until we read that paper, we simply can’t address this point — either pro or con.

The most sensible suggestion made was that we simply exclude ALL the Briffa data. If we exclude that data, we still have plenty of data supporting the conclusions in Chapter 6. And that fact gives this entire discussion a real tempest-in-a-teapot feel. Yes, there could be an issue here that might need further exploration. But I do not see this issue as in any way bringing into question the conclusions of Chapter 6. There’s so much other data that we don’t need to squeeze this data so hard. And anybody who tries to hang a case against the basic AGW hypothesis based on this argument is really pushing the data too hard.

The authors of the original graph did not truncate the series when they published it.

The IPCC truncated the graph.

If the divergence makes the data “instrinsically suspect,” then delete the whole of the graph.

Simply, the data diverges at some point in time T.

Three approaches:

1. Show all the data and explain the divergence ( full and open disclosure)
2. Show the data that confirms your belief, hide the data that causes you problems, and “explain away”
the hidden data without showing it.
3. Junk it all.

IPCC said “Rejected: though note divergence issue will be discussed, still considered inappropriate to show recent section of Briffa et al. series”
What they are saying is that it is inappropriate to be appropriate.
Let’s face it. Is anyone else other than Steve M finding this? If it was not for Steve M, this bunch would be right up there with the Nobel Prize “winners”.
The “explanation” for truncating holds no water. The least they could do is show both graphs. That was not science.

A lack of proof on X does not constitute proof that the opposite of X is true. It merely makes the data worthless to do anything. Perhaps, do you think??

Moot. As Gunnar said, “its incorrect to directly compare proxy data to temperature data.” Therefore, going back before 1880 is guesswork, although possibly an indication. Or possibly not. In my opinion, anything before the 1940’s (or maybe even later) is suspect regarding instrumentation.

I can probably trust the modern digital thermometers and satellites to give me accurate information about whatever they happening to be measuring.

Then the question becomes; what are they measuring? Or actually, what does what they’re measuring mean?

Ah, just wait until 2009 is done, that’s what I say.

On the other hand, I don’t want to have anything to do with my buddy’s behinds.

Theres a difference between excluding data because it clashes with a theory and excluding it because it clashes with other data. The former is unconscionable; the latter is common. The authors clearly state that they are using the latter reason.

Gaah, you just don’t get it. They’re not excluding data because it “clashes with other data”, they are excluding parts of the same data set that do not agree with their theory. It’s one data set, Chris, if you exclude the endpoints that amounts to cherry-picking the rest. Parts of the same data do not agree, therefore ALL of the data is suspect, not just some that “clashes with other data.” This is a simple concept

1) Hypothesis: tree rings are proxies for temperature, therefore tree rings should correlate with temperature.
2) Test hypothesis: do tree rings correlate with temperature over the known record? From 1900-1960, yes, after that, no.
3) Discard hypothesis and start over.

The latter data does not merely “clash with other data”, it falsifies the original hypothesis.

Wait a minute, are people on here claiming that the Briffa proxy data invalidates the *instrumental temperature record* from 1960 on? And that, in particular, temperatures have been going down rather than up?

Wow, that would be revolutionary. Why all this side-talk around the main point, if that’s what you’re actually believing?

Or if that’s not the point, then what is it, exactly? What in your views would be the consequence in terms of implications for reality of including this data in the curve? It doesn’t seem to add anything to the discussion that I can tell, except to cast in doubt the whole Briffa curve itself.

Wait a minute, are people on here claiming that the Briffa proxy data invalidates the *instrumental temperature record* from 1960 on? And that, in particular, temperatures have been going down rather than up?

Uh, hopefully this was sarcastic, if not, sigh…

NO! This casts doubt on the hypothesis that proxy reconstructions of historical temperature hold some significance, i.e. prior to recorded temperatures. That tree-rings don’t correlate well with BCPs post 1960 is an indication that something else is going on with tree rings, and we can’t draw any historical conclusions without knowing what, why and when in the past similar conditions existed.

Oh, and Chris…

the latter is common.

Only when you have a-priori knowledge of the specifics of such data corruption can you piecewise eliminate data.

“The authors of the original graph did not truncate the series when they published it”

but the IPCC report says:

“Briffa et al. (2001) specifically excluded the post-1960 data”

The only interpretation I can draw from these apparently conflicting statements is that Briffa et al. did NOT exclude the data when they originally published, but later decided to exclude its use in the IPCC report. Is this a correct interpretation?

Sam, I agree that all our temperature estimates prior to 1940 are suspect, and I don’t think that this data is solid enough to be probative. However, I do think that it is useful data and deserves consideration.

In most cases, a major hypothesis is decided not when it is proven (which is formally impossible), but rather when the overall pattern of arguments in favor and opposed to it is clear enough that scientists are willing to hang their hats on it (or against it). My own read of the arguments is enough for me to hang a small hat on. I wouldn’t be willing to bet my entire career on the AGW hypothesis (as many have done), but I feel pretty good about it. If anybody else looks at the same arguments in good faith and comes to a different conclusion, that’s fine with me.

When people say that “the science has been decided”, I think that this is a fair statement in that a solid enough majority has emerged for the civilians to take it seriously. That doesn’t mean that scientists should not continue to poke at the arguments and look for chinks. There remains a small probability that the AGW hypothesis is wrong, and a greater probability that some particular of the hypothesis is wrong. For example, it could well be that the cost of addressing the problem exceeds the cost of ignoring it (although I doubt it). It could well be that we’re screwed anyway because we’ve triggered natural processes that will release a lot more CO2 than we are releasing (again, I’m dubious).

In any case, I think we must differentiate between “sure enough that we should warn the public”; “sure enough that we should urge major sacrifices”; and “sure enough that we needn’t bother questioning the basics any more.” I think that we’re somewhere between the first phrase and the second.

Gunnar, thanks for reassuring me on the infelicitous comment. I’ll try to ignore that stuff and concentrate my attentions only on the good stuff. I’m also relieved to see that Steve is not pushing an explicitly anti-AGW line. That makes him, in my eyes, part of the “Honorable Opposition”. We certainly need people like Steve nipping at the heels of the mainstream, barking and carrying on, trying to find chinks in the logic. And it certainly appears that there might well be some sort of chink in this matter of the Briffa data — I’d like to see the Cook paper before I decide.

I think you need some background on how these graphs are constructed. First you measure the growth rings. Then this data must be translated into a temperature. This is typically done by fitting against the instrumental record which is at best the 20th century. The resulting graph outside of this range is then a prediction of what the temperature was.

In the case of the Briffa series, it was calibrated against the early 20th century. So is the series a good estimator for temperature? Probably not, based on evaluating it against the known instrumental period that was not used for calibration: the second half of the 20th century.

Meanwhile consider the IPCC’s “preferred” series. That series was calibrated against a ~70-year period using ~70 tuning parameters. The fit achieved is very good. So is the preferred series “good data”?

We don’t know, it hasn’t undergone the same test that the Briffa data did.

But even worse, they delete from the spaghetti graph the evidence that the Briffa data fails the test.

So: the ONE series that was tested against verification data fails, the other series were never tested at all, and the evidence of the failure is omitted. Do you think this is an honest presentation?

Here is an example of validly discarding data. Some guys down the hall had a problem with their allometric equation for leaf area vs length (or some such–it was years ago). There was a clump of outliers. I said: “what is different about those points?”. They went and looked and all of those points and only those points had big holes in them from insect damage and they had estimated the missing parts. Valid reason to discard those data. Saying “something is different about post-1960” because you don’t like which direction the curve goes does not quite rise to this standard IMHO.

Mark, you make several arguments I disagree with. The first of these is that the Briffa dataset must be accepted or rejected in its entirety. I argue that, if the authors can demonstrate that some phenomenon contaminated the later data but did not contaminate the earlier data, then I would expect them to throw out the later data. This appears to be the purport of the Cook paper. Does anybody have access to the Cook paper?

Next, you state:

“Only when you have a-priori knowledge of the specifics of such data corruption can you piecewise eliminate data.”

I disagree. For example, many years ago I was doing some statistical work on binary stars. My dataset had several hundred systems in it. Like any good scientist, I went through each and every system, examining it for anything that might compromise the integrity of the analysis. There were some red giants, for example, that required special treatment because the statistical analysis assumed that all the stars were main sequence. But I ran into one system that was just weird. The combination of relative masses, luminances, and spectral types just didn’t add up. I sweated blood over that system for two days before I finally went to my advisor and asked his thoughts on it. He glanced over the data and said, “Throw it out.” Just like that. That binary system probably provided some good papers for future astronomers who had more data. But, given the lousy data available at the time, it was best just to exclude that system from the dataset.

Chris Crawford,
“We certainly need people like Steve nipping at the heels of the mainstream, barking and carrying on, trying to find chinks in the logic.”

While being compared to a noble beast such as man’s best friend is no insult, I believe you have missed the appropriate canine analogy. Steve McIntyre is closer to the prescient Toto, pulling open the curtain revealing the Wizard frantically spinning wheels and opening valves to project his omnipotent image as the all knowing “great and powerful Oz”.

The most sensible suggestion made was that we simply exclude ALL the Briffa data.

Yep. I suggested that. If you conclude that the trees don’t track temperature during part of the calibration period, why not scrap them? Why only show the period in which they track and then use it to extrapolate back, possibly including periods where they may not track?

BTW: I think AGW is probable. So, my intention is not to disprove AGW.

In my opinion, these silly sorts of manipulations and special pleadings make it appear the case for AGW is weaker than it is. Stick with the good data. Drop the Briffa stuff entirely.

Don’t show the Briffa data only during the “good” bit, truncating the bit that might make it appear that sometimes it doesn’t work. (Like now, for example.)

If the case for unprecedented high current temperatures stands without Briffa, why make the case look suspicious by using Briffa? It’s just stupid to keep that data in the chart: drop it.

This divergence is apparently restricted to some northern, high-latitude regions

It is specifically these northern, high-latitude regions which provide the bulk of the evidence for 20th Century warming being outside natural variability and therefore buttressing the AGW hypothesis.

Excluding data which casts doubt on the validity of the proxy reconstructions for these regions, because the data which casts doubt is limited to these regions, is disingenuous, to put it mildly.

The first of these is that the Briffa dataset must be accepted or rejected in its entirety. I argue that, if the authors can demonstrate that some phenomenon contaminated the later data but did not contaminate the earlier data, then I would expect them to throw out the later data. This appears to be the purport of the Cook paper. Does anybody have access to the Cook paper?

It’s been discussed here, Chris. Cook simply used the calibration period strong correlation as evidence, but that says nothing of past history.

I disagree. For example, many years ago I was doing some statistical work on binary stars. My dataset had several hundred systems in it. Like any good scientist, I went through each and every system, examining it for anything that might compromise the integrity of the analysis. There were some red giants, for example, that required special treatment because the statistical analysis assumed that all the stars were main sequence. But I ran into one system that was just weird.

In other words, you had a-priori knowledge of how they were supposed to behave. Even your own example includes the point I made.

The combination of relative masses, luminances, and spectral types just didn’t add up. I sweated blood over that system for two days before I finally went to my advisor and asked his thoughts on it. He glanced over the data and said, “Throw it out.” Just like that.

In other words you threw out a block of data. The point about the tree rings that you repeatedly fail to understand is that he’s not throwing out the suspected tree rings as a whole, he’s only throwing out a few years that are inconvenient. The equivalent analogy to your case would be if the entire Briffa series that generates the divergence were thrown out… but uh oh, that means the BCPs go away and, not surprisingly, so does the hockey stick.

I argue that, if the authors can demonstrate that some phenomenon contaminated the later data but did not contaminate the earlier data, then I would expect them to throw out the later data.

Except, of course, that they haven’t “demonstrated” anything, they’ve merely conjectured that the problem only appeared recently. How do they know there isn’t a problem earlier? To answer this question, we should first ask: how do we know there is a problem now? Because we have data to compare it to. How do we know there wasn’t a problem in the past? We don’t, because we have nothing reliable to compare it to. And don’t say that you can compare it to proxies made with only a handful of trees, many of which were also used in the study in question, and don’t say we can compare it with the Hockey Stick, because the faulty statistical methodologies behind these studies have been exposed. We don’t have enough good data to say whether Briffa or any of the others diverged in the past. So why use such data at all, if you have to throw part of it out? Might not the whole thing be a piece of junk?

Of course, we could compare the data with, say, an unrelated data set, like Craig Loehle’s study using only non-tree-ring proxies. But obviously that would suggest that temperatures did diverge in the past, and that idea makes you uncomfortable. Not to mention the presence of a MWP in that study. Maybe you could find us a study that shows temperatures that agree with Briffa until recently (remember the prohibitions previously specified)? Seems like if you want to prove that divergence is a recent problem, you’re SOL.

The authors of the original graph did not truncate the series when they published it

but the IPCC report says:

Briffa et al. (2001) specifically excluded the post-1960 data

The only interpretation I can draw from these apparently conflicting statements is that Briffa et al. did NOT exclude the data when they originally published, but later decided to exclude its use in the IPCC report. Is this a correct interpretation?

No. That’s not even a reasonable interpretation. The IPCC’s claim is simply not true. A more reasonable explanation is that the IPCC reviewer is blowing smoke out their fundamental orifice.

Briffa et al. (2001) did not exclude the post-1960 data, LOOK AT THE BRIFFA 2001 FIGURE 2 AT THE TOP OF THIS THREAD. It’s there in black and white. So the IPCC reviewer is … well … it’s … I won’t say that they are lying, I’ll just say that they have a very self-serving and advantageously distorted view of reality …

The authors of the original graph did not truncate the series when they published it

but the IPCC report says:

Briffa et al. (2001) specifically excluded the post-1960 data

Read the rest of the sentence:
“Briffa et al. (2001) specifically excluded the post-1960 data in their calibration against instrumental records, to avoid biasing the estimation of the earlier reconstructions (hence they are not shown in Figure 6.10),”

That is, the tree ring data tracked the local temperature record well for most of the record (from the excerpt provided by SteveMcI, perhaps back to 1871?) but started to diverge after 1960. In order to calibrate their proxy they therefore excluded the post-1960 data, since it would bias the reconstruction. Another logical reason for the exclusion would be that, if the instrumental temperature data shown is reasonably correlated with the local temperature, then the temperature regime where divergence occurred was significantly higher than at any other time during the instrumental record. In that case the divergence might not be expected to influence the reconstruction if it doesn’t enter that regime. None of this data is hidden from readers of the paper, nor of the TAR, where the original paper is referenced. The difference in the Courtillot paper is that the authors don’t say that the data has been truncated (unlike the TAR) and the original papers from which the data are drawn are not referenced. (The references in the paper are not the ones from which the data was taken.) The fact that the wrong references were cited only came to light after the comment on the paper by Bard & Delaygue was published and Courtillot et al. contributed a note:

“Note added in proof

In their Response to our Comment, Courtillot et al. state that for the total irradiance curve S(t) they had used the SOLAR2000 model product by Tobiska (2001) instead of the century-long record by Solanki (2002) cited in their original paper (Courtillot et al. 2007). However, the SOLAR2000 model is restricted to the UV component and their total solar irradiance is severely flawed as pointed out by Lean (2002). For the global temperature Tglobe curve cited from Jones et al. (1999) in Courtillot et al. (2007), these authors now state in their response that they had used the following data file: monthly_land_and_ocean_90S_90N_df_1901-2001mean_dat.txt. We were unable to find this file even by contacting its putative author who specifically stated to us that it is not one of his files (Dr. Philip D. Jones, written communication dated Oct. 23, 2007).”

In fact even that note is incorrect, since that was not the file used; instead it appears to be the unsmoothed version of the data shown as the dotted line in Fig. 4 in McI’s original post above.

So the two cases which Steve McI is inviting us to compare are: the composite graph in the TAR, in which the truncation of the original data is explained and the reference to the original literature (where the data is not truncated) is clear; and a paper where truncation and renormalisation of the original data is done without comment, and the reference to the original data is obscured by incorrect references and was only revealed by the ‘auditing’ of B&D. The note added in proof is not now available on the website and instead a lengthy, obscurantist response has been added. The Chevaliers continue to deny that the earth is spherical, thoroughly deserving their sobriquet!

I’m sorry Steve but I think that the equivalency you’re trying to make between these two cases is a stretch.

I think SteveMc’s point in a nutshell is that Ray’s proclamation is a bit broad, and if one applied that proclamation uniformly, then the truncation of Briffa would be frowned upon.

Instead we get special pleadings and medievalist arguments about differences that make no difference.

Thought experiment: if Briffa had diverged in the OTHER direction, showing super warming post 1960, some might speculate that the “divergence” would have been depicted in the IPCC graph and “explained away” in the text. Picture and a 1000 word thingy.

>> They fail to capture the extent of recent warming, so maybe they fail to capture the extent of past warming?

Two more possibilities:

1) the extent of recent warming has been exaggerated in hadcrut. Certainly, it doesn’t look like much on an absolute scale. Certainly, satellite doesn’t confirm that dramatic rising part.

2) the proxies don’t show short-timescale variability. IOW, the alleged blade of the hockey stick is in fact simply short-timescale variability. If one didn’t know about summer, was given this graph, and was told it was the past temp record, one would expect nearly constant temps to continue. Then summer comes, making temps go up dramatically over what the graph says is normal. It would look like the end of the world. One would foolishly cry wolf.

There’s just too much going on here for me to respond to all the comments directed at me. I want to thank Mark for linking me to the earlier article on the Cook paper — although I found that argument less robust than the argument presented here. I sense that we’re reaching the end of productive discussion. My overall impression is that Steve has found a chink in the IPCC report — but that chink is really tiny. I don’t think it in any way undermines the conclusions drawn in Chapter 6. Moreover, the chink is second order in that it isn’t a falsehood, but merely a failure to justify a statement. There certainly is no basis whatsoever for drawing any grand conclusions from Steve’s observation. I hope that Steve’s point induces people to run a tighter ship.

Chris Crawford and Arthur Smith need to read the blog. There’s a fair amount of background one needs to know to understand why this is one step away from junk science. Don’t pretend like you know everything. It’s ok to be skeptical of the skeptics, but if you persist in reading the blog (ignore the noise, seek the signal), I am confident you will come to the correct conclusion: the skill of these “proxies” is WAY overstated.

Gunnar – #71 – in fact I thought all the proxy data was explicitly graphed with a 25-year smoothing, so aside from when the proxies end, they couldn’t even in principle show the most recent decade or so of warming at all.
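Arthur’s point about the smoothing can be sketched quickly; a centered 25-year average needs 12 years on each side of a point, so the smoothed curve necessarily stops a dozen years before the series does. Toy numbers only:

```python
# Sketch of the smoothing point above: a centered 25-year smooth cannot, by
# construction, say anything about the most recent ~decade. Synthetic data.

def centered_smooth(values, window=25):
    """Centered moving average; None where a full window is unavailable."""
    half = window // 2  # 12 years on each side for a 25-year window
    out = [None] * len(values)
    for i in range(half, len(values) - half):
        out[i] = sum(values[i - half:i + half + 1]) / window
    return out

years = list(range(1900, 2001))          # 101 annual values ending in 2000
series = [0.01 * (y - 1900) for y in years]
smooth = centered_smooth(series)

last_defined = max(i for i, v in enumerate(smooth) if v is not None)
print(years[last_defined])   # 1988: twelve years short of the end
```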

@Chris Crawford:
Did anyone ever imply that including the Briffa data would overturn the entire foundations of the theory of AGW or change IPCC’s views on AGW?

I dont think it in any way undermines the conclusions drawn in Chapter 6.

Who said it did? As far as I can see, no-one claimed that.

All SteveM is saying is that this sort of truncation should not be done, but it was. Moreover, this particular truncation wasn’t just a mere oversight. The truncation was pointed out during the process. People somehow decided this sort of truncation was acceptable. Yet it’s quite clear that this sort of truncation is so seriously frowned on that it causes RayPierre to go into a tizzy on his blog.

So, what’s the deal? And why does someone pointing this out cause you to assume anyone is tacking on an implied, “And so, AGW isn’t true!”

Frankly, what puzzles me in the blog debates is this:

I think the case for AGW is pretty good. But does that mean I need to say it’s ok to truncate data to present a case that is stronger than it is? Does that mean I need to not ask questions about models that I would ask in my own field (of models I would, in the end, use)?

I need to ask these questions and get answers to make up my mind about levels of certainty. So, reading people give lame defenses of these odd truncations makes me uneasy.

Chris Crawford, Chapter 6 AR4 is simply not convincing. Without proper error bars on those reconstructions there is no way to compare the past to the present. You will need to learn something about time-series statistics if you want to understand why this is so. It’s not a chink in the report. The statistical basis for use of the word “unprecedented” regarding anything prior to 1600AD is entirely lacking. Your optimism and faith are misplaced.

A couple of points. The Briffa truncation has been discussed previously on many occasions from a variety of perspectives. Look at the Briffa category in the left frame.

Second, the IPCC paragraph on divergence was written by Briffa himself – so there’s the same problem here as IPCC TAR had with Mann being the review author of his own material.

Third, the divergence paragraph was not externally reviewed. It was inserted in the Final Draft and no one commented.

Fourth, the excuse in the divergence paragraph is bogus. They said:

In their large-scale reconstructions based on tree ring density data, Briffa et al. (2001) specifically excluded the post-1960 data in their calibration against instrumental records, to avoid biasing the estimation of the earlier reconstructions …

This is a different issue than truncation. If they wanted to calibrate on 1880-1960, they could have done so and used that calibration to construct post-1960 values. That’s what I did in my emulation. Don’t kid yourself – they didn’t show the post-1960 values because of the mismatch.
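To see why calibration and truncation are separate issues, here is a toy sketch (not the actual emulation, and all numbers synthetic): fit the proxy-to-temperature relation on the pre-1960 window only, then apply that fitted relation to the whole series, post-1960 included. Nothing about the calibration choice forces the post-1960 values off the chart.

```python
# Toy sketch: calibrating on 1880-1960 does not require hiding post-1960
# values; the fitted relation can simply be applied to the later data too.
# All numbers are synthetic, not the actual Briffa MXD or Jones data.

def ols(xs, ys):
    """Ordinary least squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

years = list(range(1880, 2001))
temp = [0.008 * (y - 1880) for y in years]                 # fake instrumental record
density = [t if y <= 1960 else t - 0.02 * (y - 1960)       # fake proxy, diverging after 1960
           for y, t in zip(years, temp)]

# Calibrate on 1880-1960 only...
cal = [(d, t) for y, d, t in zip(years, density, temp) if y <= 1960]
slope, intercept = ols([d for d, _ in cal], [t for _, t in cal])

# ...then apply the calibration to the WHOLE series, post-1960 included.
recon = [slope * d + intercept for d in density]

mismatch_2000 = temp[-1] - recon[-1]   # the gap the truncation would hide
print(round(mismatch_2000, 2))
```

Showing `recon` in full is exactly what reveals the mismatch; truncating at 1960 is what hides it.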

IPCC says:

(hence they are not shown in Figure 6.10), implicitly assuming that the divergence was a uniquely recent phenomenon, as has also been argued by Cook et al. (2004a). …

This is a hypothesis, but one not treated seriously by the young dendros in the AGU Divergence session. Cook was there and didn’t say boo to a goose. Even so, it doesn’t justify the concealing of the divergence.

#65 and others. You say that the post-1960 deletion was disclosed in TAR – this is simply false. There’s not a whisper in TAR about the deletion. Prior to my posting about it, no one knew. Roger Pielke at Prometheus was quite taken aback when I first pointed this out in relation to TAR. Also, the citation for the series in TAR is Briffa (2000) which did not contain the post-1960 deletion.

#65 and others. You say that the post-1960 deletion was disclosed in TAR – this is simply false. There’s not a whisper in TAR about the deletion. Prior to my posting about it, no one knew. Roger Pielke at Prometheus was quite taken aback when I first pointed this out in relation to TAR. Also, the citation for the series in TAR is Briffa (2000), which did not contain the post-1960 deletion.

Apologies, I misunderstood the origin of #23; however, that was hardly the most important point. As you acknowledge, Briffa is correctly referenced, whereas Courtillot took three iterations to provide the correct reference for their Tglobe and still incorrectly characterized it in their response. Regarding their total irradiance curve, it took the second time of asking, but I’m not convinced that we have the final story on that.

Steve: IPCC TAR did not reference the correct paper. At least Courtillot has provided references; show me a reference to Mann’s calculation of MBH99 confidence intervals, or even the reconstruction for the AD1400 step. If people are worried about correct temperature versions, Mann also gave a false version in MBH98 and gave the same false reference in the Corrigendum. If you want to take the position that Courtillot’s disclosure amounts to a serious defect, that’s OK with me – all I ask is that, if similar defects in other studies are presented, the same opprobrium be applied to those other studies, and you know where that’s going.

Chris Crawford, Chapter 6 AR4 is simply not convincing. Without proper error bars on those reconstructions there is no way to compare the past to the present. You will need to learn something about time-series statistics if you want to understand why this is so. It’s not a chink in the report. The statistical basis for use of the word “unprecedented” regarding anything prior to 1600AD is entirely lacking. Your optimism and faith are misplaced.

And as long as we’re insisting on error bars for the reconstructions, we should also be insisting on error bars for the temperature record:

Hansen et al. 2001, J. Geophys. Res., 106, 23947-23963:

There are inherent uncertainties in the long-term temperature change at least of the order of 0.1°C for both the U.S. mean and the global mean.

To me, this means that the temp record can’t be resolved to hundredths or thousandths of a degree as far as the anomalies are concerned.

Or, to use the AGW line, “If you study the yearly global average temperature 1975–now, you’ll find that it shows a statistically significant upward trend, showing that the world is getting warmer by an unprecedented 0.018 deg.C/yr (or .18 deg.C/decade).”
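For what it’s worth, the arithmetic behind a figure like that is just an OLS slope over the annual anomalies, with the per-decade number being ten times the per-year number. A toy sketch (synthetic anomalies, not HadCRUT):

```python
# How a trend figure like "0.018 deg.C/yr (.18 deg.C/decade)" is obtained:
# an OLS slope over annual anomalies, scaled by 10 for the per-decade figure.
# The data below is synthetic with the trend built in, not the real record.

def ols_slope(xs, ys):
    """Ordinary least squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

years = list(range(1975, 2008))
anoms = [0.018 * (y - 1975) for y in years]   # fake anomalies, trend built in

per_year = ols_slope(years, anoms)
per_decade = 10 * per_year
print(round(per_year, 3), round(per_decade, 2))
```

Of course, whether such a slope is meaningful to hundredths of a degree depends on the residual scatter and on the ~0.1°C measurement uncertainty Hansen describes, which is the whole point of the error-bar complaint above.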

Steve, while we are talking about the (lack of) correlation between reconstructions from the past, do you think you could post the IPCC TAR graph from 1000 to say 1900, with the y-axis running from -.7 to 0? Having the 0 to 1 range on the graph makes the historical difference less noticeable.

Sorry bender, it was a rhetorical question. There is probably no defendable criterion to kick out Briffa without losing some or all the rest. If so, Briffa would have to stay in (or be seen to stay in, it seems).

In looking at these graphs, I’ve noticed that the IPCC didn’t appear to show the full variability of Briffa’s data, either. In the graph from Briffa 2001, the heavy line goes down to somewhere between -0.8 and -0.9 in the 17th century. The IPCC graphs don’t show it going below -0.7. That point in the TAR actually appears to be around -0.5. Would there be some scaling or transformation of the data that might cause that?

I’ve been a lurker on this site for quite some time, but I haven’t contributed partly because a lot of the statistics goes over my head. I also had one response kicked back because my ip address wasn’t accepted or something.

There can certainly be valid reasons to exclude an outlier from analysis, but in that case it is usually either one or two individual data points or the entire series if it’s been completely corrupted, not, as in this case, a selected portion of the data with no scientifically supported reason given. Furthermore, the data should still be reported, even if you choose not to include it in a mean or other statistical analyses.

As a case in point, my line of work often involves hydrogeology. While doing a recent groundwater map, two of the points showed up as being about three feet below the surrounding water table. If there had been a system pumping from these wells, such depressions would make sense, but in the absence of that, it simply wasn’t logical. Therefore, we had to assume that the field technician made an error, either in reading the measurement instrument or writing it down. We kept the data on the map but did not include those points in the contours and provided a footnote to that effect. This did not change the overall impression of the map (e.g., groundwater flow generally still to the SW), and we still provided the data for anyone who might want to look at it. This seems like it should be best practice for any area of science.
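That practice (keep every reading in the record, exclude flagged ones from the analysis, footnote the reason) can be sketched in a few lines. Field names, well IDs, and the 2-ft threshold below are all made up for illustration:

```python
# Sketch of the practice described above: flag suspect readings rather than
# silently deleting them. Well IDs, field names, and thresholds are invented.

wells = [
    {"id": "MW-1", "elev_ft": 101.2},
    {"id": "MW-2", "elev_ft": 100.8},
    {"id": "MW-3", "elev_ft": 97.9},   # ~3 ft below its neighbours
    {"id": "MW-4", "elev_ft": 101.0},
    {"id": "MW-5", "elev_ft": 100.5},
]

elevs = sorted(w["elev_ft"] for w in wells)
median = elevs[len(elevs) // 2]

for w in wells:
    # Keep every reading in the record; just mark which ones are excluded
    # from contouring, with the reason footnoted.
    w["use_in_contours"] = abs(w["elev_ft"] - median) < 2.0
    if not w["use_in_contours"]:
        w["note"] = "suspected field-measurement error; shown but not contoured"

excluded = [w["id"] for w in wells if not w["use_in_contours"]]
print(excluded)   # ['MW-3']
```

The suspect point never disappears from the dataset; anyone reading the map can see it and the footnote explaining why it was left out of the contours.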

I am just writing to say how terribly disappointed I am with this argument. The author proposes that the Briffa data was truncated deliberately to avoid revealing some ‘truth’, namely that the current climate is no warmer than the past or that we can’t tell what the current climate is compared to the past, and then carries on to claim that this is deliberate deception and that no clear reason is given as to why they did it. This seems to imply some sort of conspiracy, which is then parroted in other posts below.
It reads like madness, especially when someone points out that the IPCC clearly states that they have done it, why they did it, that there is no consensus about what it means, and that they’ll have to keep looking at it but can’t at the moment because of some problems with data collection. Why do people persist with the conspiracy theory?
The best argument here is that there is some doubt on reconstructions. But that’s hardly new, nor is it particularly helpful.
They haven’t ‘concealed’ the mismatch – quite clearly they truncated the data, then explained why.
I have trouble comparing this to what was discussed about Courtillot et al 2007, as the IPCC seem to have indicated why they did it and that they need to investigate the data further.

@Nathan– But the Courtillot issue is also not a conspiracy. He got an old data set. It didn’t contain the newer data, so the newer data didn’t show.

No conspiracy. It was somewhat sloppy of Courtillot – but on the other hand, the trail to the various data sources is also somewhat mysterious. It’s clear that even the authors who create the data sources make some versions available, then add new ones and delete others from time to time. There doesn’t seem to be a central archive or repository for this stuff.

So….maybe it would be nice if there were a repository? Possibly maintained by the journals that publish the papers?

#85
It would be marvellous. Although that’s marvellous for other folk as I wouldn’t be able to manipulate the data myself.
I am waiting for the day when all journals provide their material free, online, for everyone.

#83. The “explanation” in IPCC AR4 is invalid as I observed above. If you wanted to estimate coefficients prior to 1960, that did not stop them from showing the post-1960 values. The truncation was pure and simple to avoid showing the mismatch.

And IPCC TAR did not report or discuss the truncation. No one noticed it until I pointed it out here.

@Nathan But the Courtilot issues is also not a conspiracy. He got an old data set. It didnt contain the newer data, so the newer data didnt show.

Actually, lucia, that’s not correct. Courtillot claimed to use a certain dataset which he referenced in his paper; Bard & Delaygue in their commentary said that this data didn’t match, being truncated before 1952, which reduced its range and thus significantly changed its appearance when normalised. At that point Courtillot said that they’d actually used a different source which they hadn’t referenced in the paper (and which wasn’t even the same parameter as they claimed to be plotting)! Another source of data they used was misidentified, as you said (twice!).

@Phil– So how does that all make what I said incorrect? It turns out they did, in fact, end up using an old data set that was truncated. That’s why it was truncated.

They didn’t keep track of what they used. Also, the original authors are posting and reposting and re-correcting various different versions. They were very sloppy in their record keeping etc. But this doesn’t mean they, themselves, truncated.

There are two Courtillot data sets in play and I think that there is some confounding in the present discussion. I examined the temperature data set, where Courtillot used the Jones version used in Briffa et al 2001. He did not actively truncate it; he just used an obsolete version. Is this worse than Mann’s use of obsolete data – an issue raised as long ago as MM03? Or with Hughes not using the Ababneh update? I’m OK with a condemnation of the use of obsolete data versions as long as this criterion is applied consistently.

The other issue is the solar data version. I’ve been able to locate digital versions of the SOLAR2000 data for comparison.

Andrey Levin (#203) and Raven (#199) – if the Spencer argument about precipitation were correct, it would apply to the entire greenhouse effect from water vapor. But it’s well established that without water vapor’s greenhouse effect, we would have a planet at least 20 degrees C colder than now. Do you disagree with that? Unless there’s some water vapor level below which the net greenhouse forcing is negative, above which it’s positive?

Unless I misunderstand you, I think it is the opposite. (Perhaps that is what you meant to write.) At a low (natural) level of water vapor, you would have a positive greenhouse effect. As the level of water vapor increases, you eventually reach a level at which the feedback begins to turn slightly negative (the clouds formed cool the surface, the infrared iris effect, etc). This acts as a built-in climate stabilization mechanism.

In comment #23, I provided you with two links to Pielke’s blog (one of the most highly cited climatologists in the field) where he discusses the IPCC and WV feedback issues.

Nathan: “I am just writing to say how terribly disappointed I am with this argument…”

Steve is concerned with setting and applying standards. Consistently and scrupulously. I think that is something worth fighting for.

” … The author proposes that the Briffa data was truncated deliberately to avoid revealing some truth: that the current climate is no warmer than the past, or that we can’t tell what the current climate is compared to the past. And then carries on to claim that this is deliberate deception and that no clear reason is given as to why they did it.”

An important thing about Briffa’s work is that it fails to confirm the recent upswing evident in other series. That alone should send it straight to the front of the queue. It is far more interesting than just another series which shows the same pattern as others.

Despite what you say, the authors appear to have attempted to cover this up by truncating the most interesting part of the Briffa series. A reasonable mind would wish to understand why.

If it was a simple mistake, why not publish an erratum showing the graphic with the Briffa series in full? Easy enough to do on the internet these days.

“IPCC clearly states that … they’ll have to keep looking at it, but they can’t at the moment because of some problems with data collection.”

Are you suggesting that the IPCC has slipped from passively reviewing peer-reviewed material into an active role as a reviewer?

If there were any particular doubts about Briffa, surely they ought to be emerging from the normal peer review procedures. If so, the correct approach would have been to include the full Briffa series in the graphic and add a footnote with respect to Briffa. That would have fairly represented “the state of the science”.

Regarding Courtillot’s use of the Briffa data, it’s not just that he used an obsolete version; it’s that he used data that wasn’t what he says it is, i.e. it’s not global mean temperature, it’s NH extra-tropical summer T. When there are available data sets for Tglobe (as shown in B&D’s recasting of his graph), one wonders why he didn’t use them instead. As I recall, in his response he still doesn’t acknowledge that it’s not a global temperature.
I’d be interested in seeing the SOLAR2000 data, although even there, do we know what data was used?
According to the source for SOLAR2000 the following acknowledgement should have been used:
‘Solar Irradiance Platform historical irradiances are provided courtesy of W. Kent Tobiska and Space Environment Technologies. These historical irradiances have been developed with partial funding from the NASA UARS, TIMED, and SOHO missions.’
Why they used that ~50 yr data instead of the 100 yr data that they referred to in their paper is still not known.

Regarding the Ababneh data, I didn’t follow that, but as I recall it poses a rather tricky situation in academia: using data from a colleague’s grad student’s thesis before it has been formally published. Someone did that to me once and I wasn’t very happy about it, to say the least!

Steve: Courtillot’s data handling is unsatisfactory, as I’ve observed. How is it worse than Mann’s? There’s more going on with the Ababneh data than just another grad student’s thesis. Hughes didn’t mention Ababneh at AGU or her results (which had been published by that point). Given that the Sheep Mountain data is the most important item in the MBH reconstruction, Hughes’ failure to confront the Ababneh results is highly questionable. As is Briffa’s failure to report on the updated Polar Urals data. And what’s happened to the publication of the Grudd update of Tornetrask? These are important data sets that are used in every reconstruction. If people are going to be self-righteous about Courtillot, then be consistent. I’m consistent about this: Courtillot should have used properly documented data, but so should everyone else. You’re being inconsistent in your application of the rules.

Liselle #82, that was what I was trying to point out in #25: the IPCC Fig. 2 presentation looks like a different or smoothed dataset when compared to the TAR.

Likewise, I pointed out in #22 that the Briffa data in the original post obviously wasn’t the data used in the TAR, but got no response. I also pointed out that the truncation was at the point where the data was turning up, with the same result.

The Briffa graphic you posted is for summer mean temperature for land north of 20N; while the TAR graphic is also for the extratropical N hemisphere, it is clearly not from the same data. Unless we are shown the same data, we’re unable to judge what has been truncated.

Make sure that you understand what these folks do. First they create a time series going from 1402-1990, and then they fit it to a “target” series. All that happens in this step is that the series is re-scaled and re-centered. It’s a linear transformation, but the underlying series is the same. I know the data intimately, and we’re talking apples and apples here.
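The re-scaling and re-centering step can be sketched as an ordinary least-squares fit of the proxy to the target; the function name and toy data below are illustrative, not the actual code used in these reconstructions.

```python
import numpy as np

def fit_to_target(proxy, target):
    """Linearly rescale and re-center `proxy` so that a*proxy + b best
    matches `target` in the least-squares sense. The underlying series
    is unchanged apart from this linear map."""
    a, b = np.polyfit(proxy, target, 1)
    return a * proxy + b

# Toy example: the fitted series is a linear transformation of the
# original, so its shape is preserved exactly.
proxy = np.array([0.1, 0.3, 0.2, 0.5, 0.4])
target = np.array([13.9, 14.3, 14.1, 14.7, 14.5])
scaled = fit_to_target(proxy, target)
```

Because the map is linear, any feature of the original series (a minimum, a late decline) appears at the same point in the fitted version, only stretched and shifted.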

Even so, the truncation you refer to, if it had been of the Briffa data you presented, was at the very point that their recon had reached a minimum and was starting to rise – an unusual choice if they were attempting to conceal a mismatch.

Nope. You’re way wrong. The data goes down a lot after the truncation point. That’s why they truncated it. Read the linked posts as well.

In any case, I would submit that there is a distinction between truncation of measured data and truncation of data smoothed over long periods (24 yrs in Briffa, 30 in Loehle), which have problems due to the treatment of the end condition (see the discussion of the Loehle recon, for example).

What is the distinction that you have in mind and how does it justify IPCC concealing the mismatch?

Thanks for responding, Steve, but that is not a linear transformation; look at the two curves between 1640 and 1700, for example.
Your inset for Fig 3 shows the truncation of the light blue curve at a minimum; there’s no minimum in the Briffa curve until the 1970s, and as you say it goes way down.
If those curves started life from the same data, there has been rather more than a linear transformation performed.
Depending on the smoothing algorithm, the ends of a smoothed series can be distorted, which could justify truncation (it should be indicated, of course).
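The end-condition issue can be seen with a simple centered moving average: near the ends there are not enough points to fill the window, so the smoothed values depend entirely on how the boundary is handled (a sketch with made-up data, not either paper’s actual smoother).

```python
import numpy as np

def centred_smooth(x, window):
    """Centered moving average; 'valid' mode drops the half-window at
    each end rather than padding, so the smoothed series is shorter."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

x = np.arange(50, dtype=float)
smoothed = centred_smooth(x, 11)

# Each end loses (window - 1) // 2 = 5 points. Padding schemes
# (reflection, zero-padding, repeating the last value) would fill
# those points in, each giving different endpoint behaviour.
```

Dropping the unreliable end points is one defensible choice among several, which is why the poster argues such a truncation should at least be indicated on the graphic.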

2 Trackbacks

[…] Sounds good, right? So, that is what they did. (By ‘they’, I mean the scientists who promote the ‘Anthropogenic Climate Change’ agenda and on whose scientific work the current political policies are based. I shall refer to them as ‘the IPCC cabal’.) They took core samples of very, very old trees and looked at their rings, counted the years and centuries, compared them, analyzed them, assigned temperature values to various ring thicknesses – and they came up with a nifty little graph. Because it does not measure the temperature directly, but uses a ‘proxy’ (a substitute) – the growth of trees – this nifty little curve was included on the graph they submitted to the IPCC report as one of the ‘proxies’ for actual temperature records from long ago. Except that…. During the time period when we have had the most reliable actual temperature readings, say, from 1960 to now, the tree ring growth did not correspond to the temperatures the scientists measured! To the contrary: while these scientists measured an increase in temperatures, the tree ring ‘record’ from 1960 to now shows a DECLINE in temperatures! The scientists did notice this divergence: one set of readings went up, the other down. That can clearly be seen from the email exchanges between them – and from the graphs they exchanged, which I linked to above. Now, at this point, a real scientist would look at their data and say: ”We have actual, measured temperatures going up, while the temperatures reconstructed from tree rings are going down! Obviously, there are other factors at play here: either some of our measurements are wrong, or the method we are using to figure out temperatures from tree rings is wrong. Therefore, we either have to figure out what we are missing or figure out where we have made a mistake: either way, this data cannot be used as is!” Alas, that is not what happened. 
Instead, they decided that since the first ‘divergent’ year that the ‘common data’ was available for both the actual measured temperatures and the tree-ring proxy temperatures was 1960, they would simply stop showing the tree-ring data from 1960 on!!! Then, nobody could tell that the tree-ring data showed something different than what they were claiming! This is hard to believe. Please, consider the picture below: […]

[…] advised the IPCC not to truncate data but to show and fully discuss the Divergence Problem, but McIntyre’s recommendations were rejected out of hand. McIntyre seems to feel reviewer’s comments are routinely ignored by Coordinating Lead […]