Upside Down Mann and the “peerreviewedliterature”

In Andrew Revkin’s recent blog posting, he made the following observation about blogs, referring in particular to Climate Audit:

What is novel about all of this is how the blog discussions have sidestepped the traditional process of peer review and publication, then review and publication of critiques, and counter-critiques, by which science normally does that herky-jerky thing called knowledge building.

He then asked:

So should this all play out within the journals, or is there merit to arguments of those contending that the process of peer review is too often biased to favor the status quo and, when involving matters of statistics, sometimes not involving the right reviewers?

As CA readers know, I dislike taking generalized approaches to questions like this. Unlike some (in my view) overly patriotic CA readers, I do not regard blogs as a substitute for journals. While (in my opinion) my recent journal submissions have not been fairly reviewed (and there are many points that I take issue with in how academic journals handle reviews), this is not a point that I’ve placed at issue at this blog because, in my opinion, I haven’t yet put the system to a full test (but this issue is definitely on my radar).

Recently the question of the Tiljander sediments and Upside Down Mann arose in a couple of different contexts: at Andy Revkin’s, at realclimate and in a Finnish blog post by Atte Korhola. Mann’s upside down use of the Tiljander proxies was originally reported at CA here in fall 2008 and then reported to PNAS in a published comment by Ross and me.

PNAS limited comments to 250 words and 5 references, and required that they be submitted within 90 days of publication. Unlike the CA post, where I could prove the upside down use of the Tiljander proxies through graphics, within the restrictive PNAS comment format, and given that there were other important points to make, we were able to devote only one sentence to the upside down Tiljander proxies, as follows:

Their non-dendro network uses some data with the axes upside down, e.g. Korttajarvi sediments, which are also compromised by agricultural impact (Tiljander, pers. comm.)

For critics visiting this site, there isn’t a shred of doubt that Mann et al 2008 used these proxies upside down from the Tiljander interpretation. See the original post here for details. My interpretation was triple checked by two Finnish statistics professionals (Jean S and UC) who are intimately familiar with Mannian methods and who confirmed using Mann’s Matlab code that the Tiljander series are used upside down in both the CPS and EIV versions of Mann et al 2008.

Tiljander attributed the very wide varves in recent sediments to non-climatic causes: cultivation, peat ditching and construction.

This recent increase in thickness is due to the clay-rich varves caused by intensive cultivation in the late 20th century… There are two exceptionally thick clay-silt layers caused by man. The thick layer of AD 1930 resulted from peat ditching and forest clearance (information from a local farmer in 1999) and the thick layer of AD 1967 originated due to the rebuilding of the bridge in the vicinity of the lake’s southern corner (information from the Finnish Road Administration).

In earlier periods, prior to this sort of non-climatic impact, she interpreted narrow varves as evidence of warmth and wide varves as evidence of coolness, locating the Little Ice Age as a period of relatively wide varves and the MWP as a period of relatively narrow varves (the existence of both periods is accepted by Finnish paleolimnologists).

The amounts of inorganic and organic matter, form the basis of the climate interpretations. Periods rich in organic matter indicate favourable climate conditions, when less snow accumulates in winter by diminished precipitation and/or increased thawing, causing weaker spring flow and formation of a thin mineral layer. In addition, a long growing season thickens the organic matter. More severe climate conditions occur with higher winter precipitation, a longer cold period and rapid melting at spring, shown as thicker mineral matter within a varve.

Here is a figure from Tiljander et al showing the density graphic, rotated so that up corresponds to warm periods.
Figure 1. Excerpt from Tiljander et al, rotated from vertical in original graphic to show interpreted warm periods as up.

Mann didn’t just use one Tiljander series upside down; he used all four of them upside down, a point illustrated in the graphic below from a Japanese language article that rather appealed to me.
Figure 3. Excerpt from Itoh graphic identifying upside down Tiljander proxies.

In a more mundane version, the figures below (from CA in fall 2008) show the X-ray density series shown above in the upside down Mann orientation together with another upside down Tiljander series.

Figure 2. Two of 4 versions used in Mann et al 2008

The huge HS blade is, as noted above, attributed by Tiljander to “intensive cultivation in the late 20th century… peat ditching and forest clearance … the rebuilding of the bridge.”

The SI to Mann et al 2008 conceded that there were problems with the recent portion of the Tiljander proxies (without mentioning that they were using them upside down from the interpretation of Tiljander and Finnish paleolimnologists), but argued that they could still “get” a Stick without the Tiljander sediments. However, as I observed at the time, this case required the Graybill bristlecone chronology (where they failed to mention or cite Ababneh’s inability to replicate Graybill’s Sheep Mt results, even though Malcolm Hughes, a member of Ababneh’s thesis panel, was a coauthor of Mann et al 2008). Thus their “robustness” analysis used either upside down Tiljander sediments or Graybill bristlecones.

Even though there is no doubt whatever that Mann used the Tiljander proxies upside down, in their reply to our comment, Mann et al flat out denied that they had used them upside down. Mann:

The claim that ‘‘upside down’ data were used is bizarre. Multivariate regression methods are insensitive to the sign of predictors. Screening, when used, employed one-sided tests only when a definite sign could be a priori reasoned on physical grounds. Potential nonclimatic influences on the Tiljander and other proxies were discussed in the SI, which showed that none of our central conclusions relied on their use.

These comments are either unresponsive to the observation that the Tiljander sediments were used upside down or untrue. Multivariate methods are indeed insensitive to the sign of the predictors. However, if there is a spurious correlation between temperature and sediment from bridge building and cultivation, then Mannomatic methods will seize on this spurious relationship and interpret the Tiljander sediments upside down, as we observed. The fact that they can “get” a Stick using Graybill bristlecones is well known, but even the NAS panel said that bristlecones should be “avoided” in temperature reconstructions – and that was before Ababneh’s bombshell about Sheep Mt bristlecones. The claim that upside down data was used may indeed be “bizarre”, but it is true.
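The narrow sense in which “insensitive to the sign of predictors” is true, and why that is unresponsive, can be shown in a few lines. This is a toy sketch with invented data, not Mann’s code: negating a predictor merely negates its coefficient and leaves the fit unchanged, so the algorithm itself cannot distinguish a proxy used right-side up from one used upside down; the physically meaningful orientation has to come from outside the regression.

```python
import numpy as np

rng = np.random.default_rng(0)
temp = np.linspace(0.0, 1.0, 50) + rng.normal(0.0, 0.05, 50)   # invented "instrumental" target
proxy = 2.0 * temp + rng.normal(0.0, 0.1, 50)                  # invented proxy series

def fit(x, y):
    """Simple least squares; returns (slope, fitted values)."""
    X = np.column_stack([x, np.ones_like(x)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0], X @ beta

b_up, fit_up = fit(proxy, temp)     # proxy in one orientation
b_dn, fit_dn = fit(-proxy, temp)    # same proxy, sign flipped

print(np.isclose(b_up, -b_dn))      # the coefficient simply flips sign...
print(np.allclose(fit_up, fit_dn))  # ...and the fitted values are identical
```

The regression is equally happy either way; it is the a priori interpretation of the proxy (here, Tiljander’s) that determines which orientation is physically meaningful, which is exactly the point the Reply evades.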

This wasn’t the only proxy used upside down in Mann et al 2008. In our discussion of Trouet et al 2009 in the spring, Andy Baker commented at CA and it turned out that Mann had used one of Baker’s series upside down – as discussed here.

Mann’s failure to concede that they had used the Tiljander proxies upside down resulted in Kaufman et al 2009 also using them upside down. Kaufman said that he was unaware of our comment on this point, but was sufficiently attuned to the controversy that he truncated the data at 1800. As a result, the big HS blade isn’t used, but the Little Ice Age and MWP are flipped over, a point made at CA in Kaufman and Upside Down Mann. Two other Finnish paleolimnology series also appear to have been used upside down by Kaufman.

Aside from anything else, I’d like readers to consider the total ineffectiveness of the PNAS exchange in resolving something as elementary as whether Mann used the Tiljander proxies upside down or not. We observed that Mann used them upside down; Mann said that he didn’t. Nothing was resolved in the PNAS Comment-Reply exchange.

Viewed objectively, this exchange in the PeerReviewedLitchurchur had less substance than the original blog posting.

Now I don’t want readers to complain about the iniquities of this system. What I want people to think about is this – why did this exchange in the PeerReviewedLitchurchur accomplish so little?

One of the reasons is PNAS’ absurd comment policy limiting commenters to 250 words and 5 citations (leaving aside their equally absurd 90-day limit). Nature and Science also have policies that are inimical to comments, the three journals in effect requiring any dirty laundry to be handled in other journals. I can see a couple of QC issues arising from these export policies. In climate science, some of the worst problems seem to arise at Nature, Science and PNAS – all “high-impact” journals. If the journal had to mop up their own problems, then maybe they’d exercise a bit more care in the first place.

The second issue is that the journals are not really designed to cope with the total combativeness of Mann and coauthors and their seemingly supreme confidence that they can say whatever they want with impunity. One can contemplate circumstances where Mann used the Tiljander proxies upside down without actually turning his mind to whether they were used upside down. Mann et al 2008 used a huge number of proxies, all fed into a new Mannomatic. Mann’s algorithms are poorly designed and poorly coded, and important information about proxy weights and influence is not recovered. Using Tiljander data upside down gave the answer that they expected and thus they probably didn’t question it.

The situation in the Reply seems entirely different to me. Our Comment stated that Mann et al 2008 used the Tiljander proxies upside down. At this point, it seems to me that Mann et al were obligated to double check whether they had used them upside down and, having made such an investigation, they would be obliged to report that they had inadvertently used the Tiljander proxies upside down and provide the corrected results with the proxies used in the Tiljander orientation. It is not open to them to deny that they used the proxies upside down. This becomes especially distasteful when they then cite this Reply to members of the public at realclimate, who then accuse me of being “dishonest”. It becomes even more distasteful when they censor my comment refuting Mann’s claim on this matter.

The evidence that Mann used the Tiljander proxies upside down is easily understood by non climate scientists; it is clearly understood by Korhola (see Google translation here). This is not the sort of issue that Mann should be taking a stand on. Nor is it one where Mann should throw stones or countenance realclimate readers throwing stones.

In Sept 2008, a CA reader drew our attention to the following amusing commercial. The recent commentary at realclimate makes it all the more pointed.

296 Comments

the journals are not really designed to cope with the total combativeness of Mann and coauthors and their seemingly supreme confidence that they can say whatever they want with impunity.

I just wasted an hour looking at posts on dot.earth and Real Climate and confirming what I suspected from history and what you say above. I even went back to the posts Mike linked to from the early days of RC and re-read them. Content is 0 and combativeness is 100%. dot.earth at least allows dissenting views, while there were only a couple of such posts which escaped scrutiny on RC. It was a bit interesting to see how the lead bashers felt no compunction about telling outright lies about what you’ve said and done.

This paragraph I expect you’ll snip, and I have no problem with that, but I did want to say it anyway. [snip] A lie goes round the world before the truth can pull its boots on, as the old saying goes.

The evidence that Mann used the Tiljander proxies upside down is easily understood by non climate scientists;

Ok, now this is an interesting point which was a huge discussion on tAV. Ryan and I got into a big battle at tAV with others over using ACTUAL thermometers upside down. This situation happened in Steig’s paper on Antarctic trends and it was defended by a number of people as reasonable through various odd conceptualizations of inverse covariance and how the upside down thermometer might explain them – completely randomly through fancy math of course. Mann 08 is the exact same situation. If we can’t agree that a thermometer may NEVER be read upside down!!, how can we discuss whether a thermometer PROXY is reasonable UPSIDE DOWN?

It hurts my engineer brain.

It’s like different PCs in a PCA analysis being attached to individual physical processes, i.e. rain, CO2, temperature. Now me just be engineer but me think math don’t care about physical processes. AT ALL, ever!

Mann didn’t answer anything. The reply was out of context just enough that it didn’t make any sense if you’re familiar with the subject, but it sounded scientific, aloof and dismissive if you’re not. Since I’m not unfamiliar with the subject, the reply did nothing but confirm the criticism and my own belief that they understand exactly what the paper was comprised of. Just imagine pretending to not understand the criticism of ‘upside down’ thermometers and responding with ‘multivariate methods don’t care’. There isn’t a science PhD on the planet who couldn’t understand the criticism better than that. NOT ONE!

It almost seems like you would have been better served picking just one of the issues in the PNAS comment and providing enough detail that the response would have to address it. Whether that was the upside down data or something else, by trying to put in too much you allowed room for a non-responsive response.

Unlike some (in my view) overly patriotic CA readers, I do not regard blogs as a substitute for journals. While (in my opinion) my recent journal submissions have not been fairly reviewed (and there are many points that I take issue with in how academic journals handle reviews), this is not a point that I’ve placed at issue at this blog because, in my opinion, I haven’t yet put the system to a full test (but this issue is definitely on my radar).

I don’t think of this blog (or any blog) as a substitute for journals. On the other hand, the way that journals deal with combative and censorious people like Mann is totally inadequate, allowing him to walk over criticism as if it didn’t exist.

Realclimate was set up for the express purpose of refuting McIntyre and McKitrick and defending Mann, so we know and expect that Mann will exercise total control of postings in order to avoid any hits on his massive ego. And if that means censorship of Steve while allowing defamatory comments, then so be it as far as Mann is concerned.

The journals are set up for quick debate and quick commentary because every page of paper that they print on costs money. They have not adjusted to the reality of the Internet. On the other hand, there has been a continuing problem with journals publishing wrong or damaging statements and then refusing time and space to set the record straight, which suggests that the problem is institutional rather than monetary.

Re: John A (#5), This is a good point. I’m sure in all sciences (it’s certainly true in physics) there are often one or two people who regard their own research as having said everything there is to say about a particular area. These people are frequently highly combative and overly assertive (a polite way of saying arrogant) and take great offence at anyone presenting anything new. If you get one of these people as a reviewer then God help you. Appeals to the editorial board of the journal are usually not successful as they invariably side with the reviewer. In physics journals (APS and the UK IoP journals), the solution of allowing the reviewer to publish his comments alongside the submitted (and damned) paper is not an option.

Re: John A (#5),
I don’t really think blogs substitute for journals either. They are different beasts.

In some ideal world, I think the intense dialogue of a properly run blog would provide a great level of multidisciplinary feedback not found anywhere else… and would be a great precursor to “archival” publishing in a journal.

Journals are great for posterity. They seem pretty awful for getting the science right in the first place.

Science is beyond blogs and peer-reviewed literature, I hope.
For a scientist like me, it is sad how science is used as a shield as well as a sword, because science is a matter of facts, not a matter of consensus or predictions. The shield face of science is the response to critics and audits: when the process supersedes the data, and you stand above the data, you miss the science. The sword face of science is when the data are used to prove that you are right rather than to test the question; and when you dismiss the data as useless because they prove you wrong, you miss the science too.
Journals are tools for knowledge, research and truth, not the truth itself. The truth is in the data obtained, not in the fancy words that explain the data; and a point I want to make here is that the data can be right or wrong, but either way they are the truth.
Science can be done by one scientist or many, with or without review by the “experts” (and I emphasize “experts”). The important point is that real science is found in the data and in the scientists, not in the pages of a journal, the posts of a blog, or the words of a reviewer.

“Which data is better, your data or my data? Answer: neither; the data is what it is. The key question is whether science uses the data in the right way.”

I suppose in the ideal world journals are about doing science and blogs are about doing policy, that is, politics. Is that a valid assumption? In the real world the institutional problem seems to be the current overlap between the two. Is this because of rogue behavior violating defined methods to achieve desired policy goals, or are even the methods, and what constitutes the science of climate, the issue? If one cannot agree on what a fact is (the thermometer is upside down), much less its meaning (are an upside down thermometer’s readings significant?), then intelligent conversation is impossible. Needless to say, the lack of personal accountability for bogus meter readings provides the incentive for abuse, especially for egos on a mission to save the planet for a profit, playing the prophet.

Blog and wiki weaknesses: long-term stability, resourcing (though I suspect this is not really a weakness), adherence to a recognised procedure (perhaps).

Journal weaknesses: overhead and cost, leading to difficulties of access. In reality, stability is provided by libraries, not the journal itself. Production pressures lead to short-cuts, e.g. 250-word comments. They also lead to editorial groups developing groupthink.

Blogs and wikis are a better future for science, given a little support and some appropriate formatting and workflow.

The most extraordinary thing about Revkin’s comments is that he thinks there is some kind of public policy issue to be debated. There is not. There is no decision to be made. It does not matter what anyone thinks about blogs, and discussions of all kinds of topics in them; they are here to stay. It is equally pointless to ask whether newspaper reports on new developments are ‘useful’ or whether such reports should be confined to the science press.

Revkin should have asked another, reasonable, and quite legitimate question, something like this: when people without academic affiliations are researching topics with public policy implications, and arriving at conclusions different from the consensus, how is it most effective for them to communicate their results? Should they confine themselves to the peer reviewed journals? Or is it more effective to communicate through the cut and thrust of debate on blogs?

Another legitimate question, for the informed and curious voter: should he confine himself to reading the peer reviewed journals, science mags, and national newspapers, or are the various science blogs now cost-effective for him, in terms of time, effort and reward, to add to this reading? And if so, which blogs on which topics?

The underlying idea that there is any sort of public policy issue in whether things of various sorts are or are not discussed in blogs is positively sinister, and Revkin should know better.

I am sure I am just being dense, but how does your fourth figure (labelled “Figure 2. Two of 4 versions used in Mann et al 2008”) illustrate the data being used upside down? The y-axis (“Relative X-ray density”) is reversed from the earlier figure but a graph is just a picture. If you read the data off the two graphs you get exactly the same data back. What am I missing?

Re: ChrisZ (#22), thanks Chris, maybe I’m starting to get this. The y-axis in figure 1 shows no values at all so I can’t tell which way is up, so I failed to see the point that figure 2 attempts to make. Perhaps I am the only newbie here, but a little more completeness would be welcome sometimes.

Re: Roy (#25), I’ve been around here awhile, Roy, and I fully agree. Ordinate and abscissa parameters and units should be clearly shown every time. Poorly labeled y-axes are especially frustrating when the topic is inversion of data. ChrisZ explained that the data itself was used inappropriately in the statistical analysis, but that wasn’t at all clear based on the graphs alone. Turning a graph upside down in toto is not a big deal. The numbers don’t change. Inverting the data and not the ordinate is a very big deal, and vice versa.

I haven’t followed this issue thoroughly. Is there somewhere where it is spelt out what real consequences would flow from this sign inversion? It seems to be agreed that it doesn’t affect the multivariate analysis.

Re: Nick Stokes (#13), the consequences as I understand it are that the correlation claimed by Mann can have no skill at predicting temperatures since it is variously assumed to have a positive or negative impact in arbitrarily different circumstances.

If the opposite sign were consistently used on ALL proxies, then the analysis would be essentially the same (statistically speaking). However, that isn’t the case here, where the sign was reversed ONLY on the Tiljander 4, but not on tree rings.

Imagine you were trying to predict body mass with %body fat using 100 subjects. You would get (statistically) the same result whether you regress body mass on %body fat or body mass on 100%-%body fat.

But now suppose that when transcribing your data, you use 100%-%body fat when you think you’re using % body fat. The regression will handle the data just fine, but you’re going to interpret it backwards. “Unphysical” is I believe the word the climatologists like to use.

Now suppose that when the data are transcribed, the metric is inverted for 10 of your 100 subjects, and that those 10 just happen to suffer from acromegaly. You now have a mess, even if you use proper methods. Throw Mannian methodology into the mix, and “mess” doesn’t even begin to describe what you have.
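The transcription analogy can be run numerically. A minimal sketch with fabricated numbers (the subjects, slope and noise level are all invented for illustration): reversing the metric for everyone changes nothing, but reversing it for only a subset corrupts the fit itself.

```python
import numpy as np

rng = np.random.default_rng(1)
fat = rng.uniform(10, 40, 100)                  # % body fat, invented
mass = 50 + 1.2 * fat + rng.normal(0, 2, 100)   # body mass, invented relation

def r2(x, y):
    """R-squared of a simple least-squares fit of y on x."""
    X = np.column_stack([x, np.ones_like(x)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

# Consistently reversing the metric for ALL subjects changes nothing:
print(np.isclose(r2(fat, mass), r2(100 - fat, mass)))

# Reversing it for only 10 subjects degrades the fit itself:
mixed = fat.copy()
mixed[:10] = 100 - mixed[:10]
print(r2(mixed, mass) < r2(fat, mass))
```

This is the "mess" of the analogy in miniature: the regression machinery runs happily in both cases; only the second one silently destroys the relationship it is supposed to recover.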

Re: Mike B (#40),
This is also true, but a totally separate issue. Now you have some samples being interpreted opposite to others – which is the ultimate sign of a completely unreliable, non-informative “proxy”. If the damaged bits (post-1957 uptick) were removed, would the individual samples then all take on the same, and correct, interpretation, consistent with Tiljander?

Well, my analogy isn’t perfect, but then analogies never are, else they wouldn’t be analogies.😉

I was trying to simplify things for Nick, in an example where each “proxy” was a single datum instead of a time series, and how metrics for those datums can be “inverted”, and how “sign” does matter.

At the risk of moving off into snippable territory, the more I think about Mann’s dismissive “sign doesn’t matter” admonition, the more he sounds like one of those petulant undergrads I used to deal with who insisted they should receive full credit for calculating the correct magnitude of the correlation coefficient, but with the wrong sign.

Re: Mike B (#56),
Your illustration is good. It is an important point. And it helps to make the overall point, that the sign of the regression coefficient coming out of the analysis does matter, even if the sign on the predictor going in does not.

Perhaps you can help me (and it appears a host of other commentators here) understand where Mann is coming from in claiming the sign doesn’t matter. It seems clear to me that a multivariate analysis that produces a coefficient with a sign opposite to the a priori understood physical meaning has found a spurious correlation. Yet both you (a clearly talented if rather dogmatic mathematician) and Mann seem to claim that the sign doesn’t matter. What am I missing?

Re: myna (#43),
Read my #44. I am trying to explain where Mann might have been coming from. He was basically dismissive because he couldn’t understand the complaint. For the Editor to leave it at that is a failure of process. But it happens. The fact that the PNAS exchange resolved nothing illustrates the inappropriateness of the peer-reviewed literature for resolving “minor” technical problems (that nevertheless may have major consequences).

Re: myna (#43),
The sign “wouldn’t matter” if (1) you were only doing a one-stage regression; (2) there were not a pre-existing interpretation of the proxy that suggested a specific orientation (i.e. interpretation) was the correct one.
.
In reality, the sign DOES matter because (1) there is a critical non-stationarity in the first stage of a two stage analysis (calibration, reconstruction), and it needs to be removed or adjusted; (2) Tiljander has previously interpreted the proxy as having the opposite response to temperature as that suggested in the analysis of Mann et al. (2008).
.
Note I explain this effect without resorting to graphical notions of “inversion”. (But whether one speaks graphically or algebraically, a good scientist will understand the substance of the complaint and should reply equally substantively.)
.
Unresponsive responses are a pattern.

Re: Nick Stokes (#13),
If you’re a mathematician, as myna says, then you know that the sign on a predictor going into an analysis will have no effect other than the sign of the coefficient coming out. Flip one and you flip the other. I assume that that is not what you want demonstrated. (It is so simple that it would be “bizarre” to suppose that that is the only thing that Steve is pointing to.)
.
I assume you are more interested in how the flip happens when you calibrate on a bunky upticking proxy during the instrumental period, and then reconstruct on the historical period. i.e. How the reconstructed temperature is biased downward because of the contradictory interpretation ascribed to the Tiljander proxy. You want to know the magnitude of that bias.
.
Before we go there. You do realize that there is no consensus among the experts that these varves are a valid (i.e. quantitative) temperature proxy? You have read that literature?
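The calibrate-then-reconstruct bias described above can be sketched in a toy example (all series invented; this is not the Mann et al algorithm): a proxy that truly responds negatively to temperature, but which carries a nonclimatic uptick during the calibration period, calibrates with a positive slope, so the reconstructed past comes out inverted.

```python
import numpy as np

rng = np.random.default_rng(4)
t_past = np.sin(np.linspace(0, 2 * np.pi, 150))   # past climate: warm "MWP", cool "LIA"
t_cal = np.linspace(0.0, 1.0, 50)                 # instrumental-era warming
temp = np.concatenate([t_past, t_cal])

proxy = -temp + rng.normal(0, 0.1, 200)           # true response: warm -> LOW values
proxy[150:] += np.linspace(0, 3, 50)              # nonclimatic contamination in modern period

# Calibrate on the contaminated modern period, then reconstruct the past
b, a = np.polyfit(proxy[150:], t_cal, 1)
recon = a + b * proxy[:150]

print(b > 0)                                      # calibration slope comes out positive...
print(np.corrcoef(recon, t_past)[0, 1] < 0)       # ...so the reconstructed past is inverted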

I think it should be added that the interpretation of the Lake Korttajärvi data by the original authors is that winters around Lake Korttajärvi were at least about 0.5°C warmer during the medieval period than currently. The statement of this fact by the third author (Antti Ojala) can be read in this TV program transcript (in Finnish).

BTW, Korttajärvi (also here) is a very small lake only about 10km north from downtown Jyväskylä (current population ~130 000) right next to the road going to the airport.

I find an interesting parallel between the founding of the Royal Society by a bunch of ‘natural philosophers’ who take the intellectual high ground away from religion and the astrologers and the way in which this medium is severely taxing the authority of the journals. The print media – mainly newspapers – seem to be having a particularly hard time of it and new and more effective ways of dealing with information flow are inevitably emerging. I would not expect Revkin to be able to look at this situation very creatively – or can he? As an aside, I think the fact that an architect from Fiji is able to make a comment here on this matter is an extraordinary thing in itself. (Irrelevant note: Christopher Wren was a founding member of the RS).

1. Their publication delays are no longer defensible versus the immediacy of web communications.
2. Their economics are no longer defensible given that most of what they publish is publicly-funded research the product of which is then withheld from public access behind pay-walls.
3. Their editorial control by small cliques is no longer defensible given the democratisation of web communications and the fact that science and knowledge are fundamentally democratic and open to all.

I don’t know what will evolve, but it will certainly be different. In the meantime we have blogs and other online forums.

It is true that Mann et al used a consistently wrong sign for the Korttajärvi series in the CPS method. However, the situation in the EIV method is even more “bizarre”. In the EIV method, the sign is determined independently for each step (century), and signs may change from step to step. This can be seen here (compare for example the “tiljander_2003_xraydenseave” series in the AD500 and AD600 steps; blue indicates positive and red negative weight relative to the orientation in the “raw” data files):

Let me try to elaborate on this a bit. Suppose CRU was flipping the signs of their station data anomalies, say, every decade in their NH temperature average calculations. How many people would defend that? Some people (not me, I hasten to add) might say the analysis would be complete rubbish.
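The thought experiment is easy to make concrete. A toy sketch with fabricated station data (not CRU's): flipping the sign of anomalies in alternating decades before averaging destroys a common warming signal that a straight average recovers easily.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1900, 2000)
signal = 0.01 * (years - 1900)                    # common 1 C/century trend, invented
stations = signal + rng.normal(0, 0.1, (5, 100))  # 5 noisy "station" records

flipped = stations.copy()
for i, yr in enumerate(years):
    if (yr // 10) % 2 == 1:                       # flip sign in alternating decades
        flipped[:, i] *= -1

good = stations.mean(axis=0)
bad = flipped.mean(axis=0)
print(np.corrcoef(good, signal)[0, 1] > 0.9)      # straight average tracks the trend
print(abs(np.corrcoef(bad, signal)[0, 1]) < 0.5)  # sign-flipped average does not
```

Each flipped segment is individually a perfectly good linear transform of the data; it is the inconsistency of orientation across segments that turns the average into noise, which is the analogue of signs changing from EIV step to step.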

I’d suggest that CRU flipping a single station (i.e. treating high temperature readings as indicating a lower temperature) would be sufficiently scandalous; whether or not they did this inconsistently.

Changing the sign between centuries is merely another indication that Mann is using methods that have not and could not survive peer review by an actual statistician.

I’d suggest that CRU flipping a single station (i.e. treating high temperature readings as indicating a lower temperature) would be sufficiently scandalous; whether or not they did this inconsistently.

Actually the MBH98 Mannomatic did this. Some of their “proxies” were actually temperature series. When you worked through the weights from the Mannomatic in the reconstruction step, something that I did a few years ago, some of the weights for instrumental temperature series were negative.

It didn’t “matter” in the sense that the only thing that “mattered” were the bristlecones, but it was amusing in terms of the absurdity of the Mannomatic methodology.

Re: Jean S (#23), I read a lot that says there must be something wrong with changing signs. But I’ve asked for a pointer to where the effects are demonstrated. My hope is that someone will have done calculations with alternate signs to show the different results.

The problem with changing signs is NOT that it changes the results. The problem is that it demonstrates that the fundamental assumptions underlying the statistical methodology are flawed.

The basic assumption behind any multiproxy temperature reconstruction is that the individual proxies contain a signal which has a constant relationship with temperature over time. Thus proxies which are known to be related to temperature during the last 100-200 years (when humans recorded temperature) are assumed to have the same relationship during the period preceding human recorded temperature. It is only on the basis of this assumption that Mann claims that his calculated graph has any relationship to temperature.

To the extent that a proxy is treated as being positively related to temperature at some times, and negatively related to temperature at other times, this assumption is being disregarded.

A scientific argument is logically invalid if it assumes X, and then proceeds based on calculations that clearly violate X.

Mann attempts to resuscitate his effort by calculating how likely it is that random data would have produced similar results. Even if his calculations at this step were correct, they would say nothing about temperature. All this test shows (again, if we act as if Mann’s calculations were correct) is that the input data was not randomly selected. The only basis for attributing this non-randomness to temperature is Mann’s fundamental assumption (which, again, is violated when he calculates the shape of his reconstruction).

There is an alternative explanation for the lack of randomness which is well documented here, and which is not contradicted by Mann’s intermediate calculations. Specifically, the decisions and methods that produced the initial pool of proxies may have artificially selected for proxies containing a hockey stick. The hockey stick signal would then be amplified by Mann’s statistical process, which checks for similarity between the input proxies and the recorded temperature record. Since the recorded temperature record contains a pronounced hockey stick blade, proxies possessing a blade were amplified, while others were minimized (or, in this case, deselected entirely). Consequently, a relatively small number of biased proxies would have a disproportionate impact on the end result, quite possibly enough to pass a test for statistical significance.
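The screening effect described here is easy to demonstrate. Below is a toy sketch (all numbers made up, not Mann’s actual code or data): pure random walks with no climate signal at all are screened against a hypothetical rising “instrumental” blade, and the composite of the survivors acquires a blade anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
n_proxies, n_years, cal = 1000, 600, 100

# Hypothetical "instrumental" record: a rising blade over the calibration period.
instrumental = np.linspace(0.0, 1.5, cal)

# Pure red-noise "proxies" (random walks): no temperature signal whatsoever.
proxies = np.cumsum(rng.standard_normal((n_proxies, n_years)), axis=1)

# Screen: keep only proxies whose calibration-period correlation with the
# instrumental blade passes a one-sided threshold.
r = np.array([np.corrcoef(p[-cal:], instrumental)[0, 1] for p in proxies])
passed = proxies[r > 0.5]

# "Reconstruct" by standardizing the survivors and averaging them.
z = (passed - passed.mean(axis=1, keepdims=True)) / passed.std(axis=1, keepdims=True)
recon = z.mean(axis=0)

# The composite of screened noise has a blade: its modern mean sits well above
# its pre-calibration mean, despite the proxies containing no climate signal.
print(len(passed), recon[:-cal].mean(), recon[-cal:].mean())
```

Many of the random walks spuriously trend upward during the calibration window, pass the screen, and so elevate the composite’s modern segment relative to its history.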

the initial pool of proxies may have artificially selected for proxies containing a hockey stick.

This is indeed the essence of what was done, but of course it isn’t described that way. It’s described as correlating with instrumental temperature records. Since the recent instrumental temperature record is, in fact, a HS, that is what is selected. If, in fact, there is indeed a true signal for temperature in the proxies, all is well. Selecting such records will enhance the signal and allow determining its value. But there are three other possibilities which don’t give such good results:

1. The proxies don’t respond to temperature at all, in which case the whole project is worthless, but the very process of assuming it’s true will result in a homogenization of the early temperature reconstruction; the temperature in the early 1800s will be pinned low since that’s what the instrumental record says and modern temperatures will be pinned high.

2. If the proxies respond somewhat to temperature but somewhat to other things, there will still be a structure to earlier temperatures but it will be reduced in amplitude and thus the modern temperatures will be higher WRT earlier temperatures than the real temperatures would be if they could be independently determined.

3. If some proxies are good temperature responders but most aren’t then there will be an admixture of correct earlier temperatures with incorrect ones resulting in a situation similar to #2. This is because some of the proxies selected will randomly have modern values which look like they’re responding to modern temperatures but aren’t.

I suspect 2 or 3 are more likely than 1, but in all cases comparing, say, MWP to modern temperatures will be bogus. And, IMO, anyone who can read the above and understands it needs to have some sort of star next to their name in posts to indicate they get it.
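Possibility 2 above can be illustrated with a toy calibration exercise (synthetic data, made-up magnitudes): when proxies respond only partly to temperature, the noise in the predictors attenuates the regression slopes, and the reconstructed medieval bump comes out damped relative to the synthetic “truth”.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1000, 2001)

# Hypothetical "true" history: a medieval warm bump plus a modern rise.
true_temp = 0.8 * np.exp(-((years - 1100) / 80.0) ** 2)
true_temp[-150:] += np.linspace(0.0, 1.0, 150)

# Possibility 2: proxies respond to temperature but also to "other things".
n = 50
proxies = 2.0 * true_temp + rng.standard_normal((n, years.size))

cal = 150  # calibrate against the modern "instrumental" segment only
recon = np.zeros(years.size)
for p in proxies:
    # Noise in the predictor biases each fitted slope toward zero (attenuation).
    b, a = np.polyfit(p[-cal:], true_temp[-cal:], 1)
    recon += a + b * p
recon /= n

mask = (years > 1050) & (years < 1150)
mwp_true = true_temp[mask].mean()
mwp_recon = recon[mask].mean()
print(mwp_true, mwp_recon)  # reconstructed MWP is damped relative to truth
```

The reconstruction stays pinned near the calibration-period mean, so the pre-instrumental structure survives only in muted form, exactly the amplitude loss described in possibility 2.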

As I understand his methods, Mann claims to test for spurious signals introduced by his methodology by calculating the probability that random data (examined via the same method) would have caused a signal of equal or greater magnitude.

He claims to show that it is unlikely (to the point of statistical significance) that such a large spurious signal would be detected. Were this calculation correct, then any signals introduced by Mann’s methods alone could be ruled out.

When I say that “the initial pool of proxies may have artificially selected for proxies containing a hockey stick”, I am specifically saying that the set of proxies that Mann chose to input into the first step of his calculation (BEFORE checking for correlation with local temperature) was created and/or selected in such a manner as to create a bias.

The latter calculations then only have to enhance that bias (which, assuming the presence of a bias, they undeniably would) to escape detection via his test for statistical significance.

I read a lot that says there must be something wrong with changing signs.

… but you don’t seem to understand the problem.

It is basically the same flaw as with the earlier Alice-in-Wonderland use of teleconnection in climate reconstructions.

Here, we have a proxy the form of whose response to climate has been identified by the original researchers. Dr. Mann and crew decide to use the proxy upside down with the nebulous justification that it doesn’t matter (“Multivariate regression methods are insensitive to the sign of predictors”).

The missing portion is a proper explanation of the physical reasons for why, in this particular case, the relationship between climate and proxy response has been not only reversed, but with equal magnitude of effect. Unsupported by such physically tenable reasons, the whole exercise is nothing more than opportunistic selection of numbers to produce a previously ordained outcome. The use of teleconnection to include proxies which did not match their local environment, but had the proper hockey stick “global” shape, was likewise never justified by a specific accounting of an actual physical process that could make that a genuine, explainable effect.

This still would not explain the validity of a statistical process which “swings both ways” using the proxy in each orientation (while the other proxies remain stable) at different times as demonstrated by Jean S in comment 23. I think the term “bizarre” might apply to that even more strongly.

Your attempt to divert the thread to a “It doesn’t matter so it’s OK” line is simply disingenuous, but that just seems to be your preferred operating procedure as past experience has demonstrated.

Re: RomanM (#50),
Yes, the sign going in doesn’t matter. But the coefficient coming out sure does. If it’s directly opposite to what’s already been published you kinda sorta have a problem.
And one that is easily diagnosed and rectified, I might add! The lack of attention to such obvious, and easily rectified, contradictions is quite disturbing evidence of confirmation bias.

Indeed, blogs will never be a substitute for good journals. At least I hope not, because that would only ever come about through a complete failure of the peer review system, which despite problems now and again and the need to continue to look for improvements, has served science well.

But blogs have a place. They are only the 21st century water coolers, with complete open access (a double edged sword).

I like to think about the anecdote (possibly mythical) about Fischer Black and Myron Scholes chatting over lunch after randomly bumping into each other in the MIT canteen. The product of those casual discussions led to a revolution in financial economics and Nobel prizes. It did so in a way that allowed a short cut past what would have been a much longer and drawn-out back and forth in journals among a narrower range of disciplines.

So with blogs. If some anonymous blogger can point out flaws in a peer reviewed paper, this can make any peer reviewed response, or subsequent research, more rigorous and more quickly produced.

As long as the peer review system addresses its shortcomings and continues to evolve.

On the one occasion that Einstein was subjected to peer review, he reacted with outrage that a copy of his paper would be shown to another scientist prior to publication.

Only in medical journals was peer review widely adopted before the second world war. Major journals continued publishing papers without peer review well into the 1960s.

Only very recently founded disciplines (like Climate Science) came into being with peer review already in place.

Perhaps the problems in this field were caused by scientists relying on peer review instead of policing themselves.

If the peer review system fails completely, science will be fine. Perhaps it will even be improved, because something else will surely take its place. Something not completely dependent on a handful of unidirectionally anonymous reviewers.

An example. At present, peer reviewed science requires papers founded on novel work and peer citations to other peer reviewed work.

Why can the system not allow some lower form of accreditation for ideas (that still need to be proven) from non-peer-reviewed sources?

Why can Michael Mann not, with Steve’s consent and clear credit for the source, use some of Steve’s ideas, approaches, or data in his own work? Not of the same rank as a proper citation or dual authorship, but some form of credit.

If, for example, Steve doesn’t want to publish some particular idea or analysis, why can’t we promote a peer review system where others, who do want to publish, would be incentivised to spot good ideas and get consent to use them with due credit and advance science?

What is a paper in a journal? A paper in a journal is a blog post. The post button is pressed by an editor after the post is reviewed by a group of people who deem it worthy of posting. The comments on this ‘blog’ are heavily moderated and word-limited. The thread closes fast.

It would be interesting if CA did an experiment.

1. Select a topic to be “blogged” as we know it.
2. Use peer review before it’s posted; get rid of reviewer anonymity.
3. Comments are posted without moderation or word limits.
4. Readers can “digg” comments (basically, vote).
5. Reviewers review the top comments and select those that are most germane and should be addressed by the author.

I am dumbfounded at the fact that Revkin, in his reply to comment #287 at the dot.earth blog post, did not have the candor to admit that McI had sent Revkin a copy of Mann’s response to McI’s PNAS comment in a Sep 29 email.


It seems to me that Mann’s “bizarre” comment stems from his belief that his algorithms can orientate a series automatically better than by any reasoning. He thinks Steve’s comment is bizarre because all he (Mann) does is feed the series into his (sausage!) machine and out pops the result. In such a case, upside-down makes no sense in his mind. Basically, he is ignoring Tiljander’s interpretation as of no consequence.

So, I don’t think that Mann is refuting that he used the series upside down from Tiljander’s interpretation. He is saying that he didn’t need to consider Tiljander’s interpretation at all. Therefore, he can’t have used it upside-down.

Re: Tony Rogers (#32),
there is only one problem with your speculation: Tiljander series have ID 4000 (lake sediments) and those are screened for positive correlation only (see INPUTscreen.m). Notice that Mann et al state in their reply:

Screening, when used, employed one-sided tests only when a definite sign could be a priori reasoned on physical grounds.

I guess I do not need to comment that.

Thus the effect of correcting the orientation of the Tiljander series is the removal of those series from the “screened set”. That’s highly problematic for the Mann et al reconstructions, since a) there are only very few long series that pass the screening and b) the Tiljander series are not tree ring series (they are desperately needed for “proving” the “robustness” of the reconstructions without tree rings).

Re: Nick Stokes (#35),
you will still remain on my “ignore” “list” (the only one there), but see here for some demonstration.

If all (or enough) participants have an interest in quality, then more eyes are better (ie linux). In this regard, blogs have the advantage of bringing more eyes to bear.

When things like egos, agendas and beliefs come into play, it becomes a political matter. The conflict between McIntyre and Mann is, at this point, a political matter. Parties have been formed, lines have been drawn, and battle has been joined.

As best I can tell, when it comes to politics, “Sunlight is the best disinfectant”.

In this context, that would include publication of the underlying data and code, as McIntyre has noted.

Considering that Mann et al believe in “teleconnections” of local sites with hemispheric climate – that is, that it is valid to view a site as a global indicator rather than as reflecting strictly local temperature history – any correlation is a good correlation and there is no such thing as “upside down”; thus SM is speaking nonsense. There is no sense that things like varves reflect a physical process and have a physical meaning. A series is simply a proxy. It only appears that Mann and SM are speaking the same language, but they are not.

Re: Craig Loehle (#37), Re: RomanM (#50),
IMO, Mann et al took this “teleconnection” idea up to a new, borrowing some Team terminology, unprecedented level in their EIV reconstructions.

First, it is required (screening) that proxies correlate with local temperature (or actually, some temperature nearby, as this is done with the “pick the best of two correlations” method). Then, in the actual reconstruction exercise, proxies are calibrated (through RegEM) against hemispheric temperature. Now the determined proxy weights may (and do) have a different sign than indicated by the screening with respect to local temperature. And not only that: the reconstructions are done in a stepwise manner, leading to a situation where the weight signs (and of course magnitudes) may change (and are changing) over time.

In other words, proxies are required to respond to local temperature in a certain way, but their “response” to hemispheric temperature (“teleconnection”) may be of opposing sign, and the sign and magnitude of that response change from century to century! The story is so amazing that I doubt even the best science fiction writers could come up with it!
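The sign-flip Jean S describes can arise from ordinary multivariate regression whenever predictors are collinear. A toy sketch (synthetic data, not the actual RegEM machinery): a proxy that passes a positive-correlation screen against the target nonetheless receives a negative weight in the joint calibration, because another proxy shares its non-climatic variability.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
hemi = rng.standard_normal(n)      # the "hemispheric" target series
common = rng.standard_normal(n)    # shared non-climatic variability

# Both proxies correlate positively with the target on their own...
p1 = hemi + common
p2 = 0.2 * hemi + common + 0.1 * rng.standard_normal(n)

# ...but in the joint least-squares calibration, p2 acts as a "suppressor"
# (it helps cancel the common noise in p1) and gets a negative weight.
X = np.column_stack([p1, p2])
w, *_ = np.linalg.lstsq(X, hemi, rcond=None)

print(np.corrcoef(p2, hemi)[0, 1], w[1])
```

So a series can be screened for a positive local relationship and still enter the reconstruction with the opposite sign, which is the contradiction at issue.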

Not quite sure added space for comments (PNAS comments < 250 words) would make much of a difference, would it? Seems it would still be just as easy to label an argument ‘bizarre’ without further discussion.

The general assumption is that the journals, through the mechanisms of peer review and commentary reply, are somehow neutral and impervious to consequential editorial bias. As illustrated by this example, it’s the temporarily powerful who control history. Blogs are like the little boy in the fable of the Emperor’s New Clothes. It’s just going to take a while for the crowd to catch on. As long as the boy can avoid the emperor’s censorship, the palace guard, and a fatal misstep, the truth should prevail. Emperors are rarely clever enough to triumph over reality in the end. Journalists like Revkin need to be prodded continually to examine contrarian opinions. When one finally gets it, others are sure to follow.

The orientation would not “matter” if all they were doing was a one-stage multivariate regression. In that case they would simply have a proxy whose interpretation is (bizarrely) opposite to that of the scientist who originally collected the data. But they are not only doing a multivariate regression, they are doing (i) a modern calibration and then (ii) a historical reconstruction. They calibrate on the modern uptick portion – the piece that should have been removed (as Kaufman recognized) – and reconstruct on the remainder. Thus the “inversion” doesn’t happen in the calibration stage (which is probably what Mann thought, and why he might have used the term “bizarre”). It happens when the bogus calibration is misapplied in the reconstruction stage. This forces the historical portion of the series to take on the opposite interpretation of what Tiljander herself proposed.
Along the lines that Nick Stokes suggests, it would be interesting to see the difference with the bogus blade included versus removed. The former should force an “inversion” (i.e. proxy interpretation bizarrely opposite to Tiljander’s). The latter not.
IMO it would be tough to explain this effect in 250 words. Also, you would need at least one figure to demonstrate, and that is often not allowed in replies.
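The two-stage mechanism described above (calibrate on the contaminated blade, then reconstruct history with the resulting coefficient) can be sketched with made-up numbers. The hypothetical proxy below responds negatively to warmth, as its original analyst would have reported, but carries a large non-climatic modern uptick; calibrating on the modern portion produces a positive coefficient, so the historical reconstruction reads the proxy upside down.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1000, 2001)

# Hypothetical climate history plus a modern instrumental warming.
temp = 0.6 * np.sin((years - 1000) / 150.0)
temp[-100:] += np.linspace(0.0, 1.0, 100)

# The proxy's true response to warmth is NEGATIVE...
proxy = -temp + 0.3 * rng.standard_normal(years.size)
# ...but a large non-climatic uptick (ditching, farming) contaminates the
# modern portion, overwhelming the climatic response in the calibration window.
proxy[-100:] += np.linspace(0.0, 4.0, 100)

cal = 100
b, a = np.polyfit(proxy[-cal:], temp[-cal:], 1)  # calibrate on the blade
recon = a + b * proxy                            # apply to all of history

# The fitted slope is positive (opposite to the true response), so the
# pre-modern reconstruction is anticorrelated with the climate that made it.
r_hist = np.corrcoef(recon[:-cal], temp[:-cal])[0, 1]
print(b, r_hist)
```

The inversion happens exactly where described: not in the calibration arithmetic, but when the blade-derived coefficient is misapplied to the pre-modern portion.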

One issue here is language and jargon. A fancy word for it is verisimilitude, although “truthiness” is just as good.

Mann et al. write turgid prose that gives the impression of rigor and completeness, until you translate it into plain language. Then you realize the language glosses over important decisions/mistakes and trumps up trivial victories. Reading these climate science discussions is like listening to “Dr. Science: I have a master’s degree in … climate science.”

Steve M. is using the term “upside down”, which is descriptive and plain-spoken (as are “spaghetti graph,” “hockey stick,” etc.). But it is easy for a hostile correspondent to dismiss that characterization. They can pretend to not understand what the criticism means in one sentence (“bizarre”), then implicitly acknowledge the point but explicitly refute it in the next sentence (“multivariate analysis is insensitive to the sign”). This is true only in the sense of prediction and model fit, but not true in being consistent with a theory or hypothesis about the data stated a priori. That rhetorical flourish (deny then implicitly admit while explicitly refute) is actually brilliant given the audience. This style seems to be encouraged by the extremely dense style preferred by physical science journals.

If one were stating this issue for an undergraduate class in, say, econometrics: this crazy proxy literature posits a positive monotonic relationship between local temperature and numerous variables recorded in agreed-upon units. Therefore, there is an ex ante theory that says certain parameters are positive, which can be tested ex post. Finding a negative estimate rather than a positive one does not invalidate the theory per se, because maybe the negative coefficient is not significantly different from zero. And ideally a multivariate F or chi-squared test would test the joint hypothesis, not just isolate one coefficient that turns out negative. But if one imposed the theory on the data (imposed non-negative coefficients), my guess is that the large variation in these “hockey stick” series combined with the negative signs would lead to rejection. The hypothesis would result in throwing out that variation (setting the coefficients to zero), and the result would be a much worse fit that would be rejected against the unrestricted model that does not implement the theory. The ability to substitute one hockey stick series for another as a test of robustness, rather than testing a nested null hypothesis, is yet another glaring piece of evidence that these authors and their reviewers could not pass undergraduate econometrics.
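The undergraduate exercise described above can be sketched in a few lines (synthetic data, hypothetical magnitudes): fit the unrestricted calibration, observe the negative estimate, then impose the a priori theory by setting that coefficient to zero and run the nested F-test.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
temp = np.linspace(0.0, 1.0, n) + 0.1 * rng.standard_normal(n)

# Two "proxies": one genuinely positive responder, and one that in fact moves
# opposite to temperature, despite the ex ante theory that both are positive.
x1 = temp + 0.2 * rng.standard_normal(n)
x2 = -temp + 0.2 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
rss_u = np.sum((temp - X @ beta) ** 2)       # unrestricted fit

# Impose the theory: the offending coefficient is set to zero ex ante.
Xr = X[:, :2]
beta_r, *_ = np.linalg.lstsq(Xr, temp, rcond=None)
rss_r = np.sum((temp - Xr @ beta_r) ** 2)    # restricted fit

# F-test of the restriction (1 restriction; n - 3 df in the unrestricted model)
F = (rss_r - rss_u) / (rss_u / (n - 3))
print(beta[2], F)
```

The negative estimate alone does not settle anything; it is the nested test of the restriction, never reported in this literature, that would formally confront the theory with the data.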

As far as I can tell, science journals rarely require people to write down the null hypothesis behind their analysis (positive coefficients) and then discuss their theory in light of the results. It is all there to see once the language is decoded, but this lack of explicitness gives true believers and hucksters alike room to maneuver.

While plain language like “upside down” is fine, I think it would help if terms like this were actually hot links to a more complete description of what is meant. This makes it harder for people to dismiss terms as “bizarre.”

Steve’s style is appreciated by some, not by others. It’s his blog, but a folksy style is not always helpful rhetorically, at least in the short run. Maybe in the long run it is an antidote to this rampant verisimilitude. But for now it does alienate some people, and it provides ammunition for the “he should write a journal article and get it accepted” criticism.

Re: C. Ferrall (#45),
Thank you for addressing the semantics at play here. Steve M used common vernacular – probably because of the 250-word limit. That appears to have irked the defendant. The editor let it sit as a stalemate when it really ought to have been resolved. This would have been a big help to Kaufman et al (2009), for example.

I wrote the original post on Upside Down in Sept 2008 right in the middle of the financial meltdown.

In the original post, I observed (somewhat archly, I’ll concede) that the sign of the coefficient “mattered” a lot to holders of puts and calls.

It would be nice to be able to hold financial instruments where you could elect after the fact whether you were long or short but, adapting Esper’s comment about dendroclimatology, that seems to be an advantage unique to climate science.

OT: I did not realize this comment of Esper’s was published in *print*. (Thanks, Steve Bloom, for the reference.) It is hard to fathom what on earth he was thinking. Any dendro who is willing to defend or explain or recontextualize this statement, please, your attention is urgently requested.

Thanks for your reply. I was not aware of the implications of the two stage multivariate analysis. However, I am still not completely comfortable with your interpretation of Mann/Nick’s response to the one stage regression. Steve’s post made it very clear that there is an expected a priori physical interpretation of the data – it was the basic premise of the post! Nick responded to Steve’s post. Nick is an experienced mathematician from the CSIRO which I understand is highly regarded Down Under (though I hope Aussie mathematicians are better than their rugby players…). So I don’t think it is reasonable to believe that Nick does not understand this issue (though you may be right that Mann did not understand the complaint). So my question remains for Nick – are we all missing something? How can you justify using a proxy that results in a correlation with an inverse sign to the ex ante physical meaning?

Re: myna (#59),
Nick understands. He just wants a demonstration. I’m trying to outline what I think a proper demonstration should include. Flipping the sign on the predictor going in will have “no effect”, quite obviously. You will simply end up with a proxy that has the opposite interpretation to that proposed by Tiljander. That’s so trivial it’s not worth demonstrating. I’m sure Nick understands why the output wouldn’t change. That’s not the problem. The problem is the contradiction with Tiljander’s own work when the proxy is compared to local paleohistory.
What is less trivial is what would happen if the blade were truncated, as Kaufman did (but the wrong period). Would the proxy retain a contradictory interpretation? That’s far more interesting.

No, it’s the other way around. I’m wondering what I’m missing. bender (#67) says that there would be an error at the calibration stage, biasing the reconstructed temperature downwards, and I’ve been thinking about that. But again, it would be good if someone could show it. Other arguments I’ve seen refer to problems when you go back to interpreting Tiljander’s proxy, but again, it isn’t clear to me whether that isn’t just a matter of interpreting consistently.

I generally agree that if the sign doesn’t matter, it’s still better to have a consistent convention, which probably means following Tiljander. But deviating from a convention is different from getting wrong results.

I generally agree that if the sign doesn’t matter, it’s still better to have a consistent convention, which probably means following Tiljander. But deviating from a convention is different from getting wrong results.

Huh??

You’re completely misunderstanding the issue.

In a CPS multiproxy study (one of the two legs of M08), the sign absolutely does “matter”.

Mann made the following statement:

We selected non-tree-ring proxies with the goal of obtaining data from as wide a geographic area as possible, but at the same time we tried to limit the selection to records that were reasonably well dated and where the original analysts had shown that there was a paleoclimatic signal associated with the proxy.

Any reasonable reader would conclude that, having accepted the original analysts’ demonstration of a “paleoclimatic signal”, Mann was no longer at liberty under his methodology to use the data upside down.

And while the EIV method relies on correlation to establish which way is up, you are ignoring Mann’s use of the contaminated portion of the series. Mechanically following his algorithm with the spurious correlation between temperature and ditching resulted in the EIV version being upside down to the orientation of the original analyst (who did not use the modern portion for calibration.)

I find it hard to believe that such elementary points can seriously be disputed.

Re: RomanM (#62), As a funny coincidence, around the time I saw the phrase for the first time, I was reading your groundbreaking 1977 paper (with A.F.) and couldn’t help thinking what those guys would think if they saw phrases like that! Now I have an idea🙂

Peer review is simply a way to make sure that what gets published in the journal is of interest to the target readers of the journal, which usually means creating an elevated sense of the credibility of the articles. Any “quality control” function is incidental – if the editors believe the readers want quality, they will arrange the review process so that it successfully signals quality to the readers, and as cheaply as possible. That usually means through proxies (quality itself requiring a great deal of effort to ascertain) – things like “well-respected reviewers” who are “published within the field”. It also means that a comment system that makes it difficult to demonstrate the low quality of previously published research is highly desirable.

Peer review will attempt to ensure actual quality (as opposed to proxy quality) only when readers demand it. I hope that blogs like Steve’s will eventually cause a groundswell from journal readers wishing to improve the actual quality of published research.

Let’s not discuss peer review in general terms. The issues are well understood by readers here.

I’m trying to focus here on a very specific case. It seems unsatisfactory from a quality control perspective that such a simple issue as whether Mann used the Tiljander series upside down should remain in dispute after being specifically raised in an exchange in the PeerReviewedLitchurchur.

This is a very elementary point and yet it remains in dispute nearly a year after being raised.

It seems to me that the editor of the journal should have required them to directly deal with the comment, rather than permit them to introduce a lot of irrelevant fog.

But even more, it seems to me that the authors of Mann et al 2008 had a responsibility to concede that they had used the series upside down from the Tiljander orientation once it was brought to the attention of the journal. A question for readers – is there a point at which the failure to concede a point like this creates a situation where the research is not accurately represented in the research record?

Despite my request that readers avoid generalities, too many readers are trying to litigate every issue all at once. There are some important distinctions in this case. As I observed in my post, in my opinion, there is an important difference in the state of mind at the time of the original article and at the time of the Reply. Does this create different obligations?

Re: Steve McIntyre (#65),
Steve, the issue is how diversity of expert opinion is to be summarized when the goal is consensus. The IPCC model is flawed. But they will argue that the flawed process can be defended under the “precautionary principle”: “We don’t have time to cope with uncertainties and caveats, never mind the right wing skeptics. The science is complex enough as it is.”

it seems to me that the authors of Mann et al 2008 had a responsibility to concede that they had used the series upside down from the Tiljander orientation once it was brought to the attention of the journal. A question for readers – is there a point at which the failure to concede a point like this creates a situation where the research is not accurately represented in the research record?

They had the right to complain about your vernacular. If they understood your point, they had a moral obligation to concede it. If they did not understand the point, they might have obtained clarification before commenting further, but exercised their right not to do so. I do not believe the prevailing culture would oblige them to do so. Pleading ignorance is always available as an out. (Only the editor can prevent this outcome. He is not obliged to do so. But one does expect it of a good editor.)
This points to a serious flaw in the peer review process as a mechanism for ensuring the level of due diligence that policymakers need so that they can sell their policies as “science-based consensus”. More eyeballs. More inclusivity. More back-and-forth. When you are talking global meltdown, you want maximum inter-neural connectivity. If the alarmists are right, then we can’t afford to get this wrong.

Re: bender (#73),
… and if they did concede the point – that their interpretation of the proxy is opposite to that of the originator of the data – they would be obliged to explain that fact. They would then be forced to either defend their interpretation, putting them in argument with Tiljander, or accept that it is wrong. If they accept it is wrong, or could be wrong, then the only obvious solution is dropping Tiljander from the network and recalculating the results. If nothing changes, then it greatly diminishes the weight of the original complaint.
But there’s that pea under the thimble. Removal of Tiljander “doesn’t matter” … as long as something else is in there, acting as a backup crutch. Take out both crutches and the patient falls over.

If they accept it is wrong, or could be wrong, then the only obvious solution is dropping Tiljander from the network and recalculating the results.

That seems to be what is done in the thread to which Jean S (#36) directed me. But I can’t see why the more obvious solution isn’t to correct the sign error and recalculate (with Tiljander). And since I believe the calcs have been replicated here, I’ve been asking whether anyone has done that and can report the difference. It seems that according to Mann, there would be none, and that can be checked.

I believe this is missing the point. Changing the sign of the series wouldn’t make any difference to the result, the sign of the coefficient would just invert, and the result would be the same.

The problem is that this would still be the opposite sign to the physical meaning of the series.

Take tree rings as a simple example. Imagine that during a period of measured increasing temperature, a tree ring series got smaller. This type of algorithm would assign a negative coefficient to the tree rings in that series, which would then propagate to the non-instrumental period. If you try to fix it by inverting the sign of the tree ring series, the algorithm would then assign a positive coefficient and the result would be unchanged, but this would then be inverse to the already known physical property of increasing tree rings with increasing temperature. (And please don’t let this set everyone off on whether that relationship is really always true.)
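The “no effect” half of this example is easy to verify in a few lines (synthetic data): flipping the predictor flips the fitted coefficient and leaves the reconstruction unchanged, which is precisely why the flip cannot be fixed mechanically and the contradiction with the known physical relationship survives either way.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 80
temp = np.linspace(0.0, 1.0, n)
rings = -temp + 0.1 * rng.standard_normal(n)  # rings shrink while temp rises

# Calibrate on the series as-is, and on the flipped series.
b1, a1 = np.polyfit(rings, temp, 1)
b2, a2 = np.polyfit(-rings, temp, 1)

fit1 = a1 + b1 * rings
fit2 = a2 + b2 * (-rings)

# The coefficient flips sign but the fitted reconstruction is identical:
# inverting the input cannot restore the expected physical relationship.
print(b1, b2, np.allclose(fit1, fit2))
```

Least squares simply absorbs the flip into the coefficient, so the output is invariant while the physical interpretation remains wrong in one of the two orientations.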

Take tree rings as a simple example. Imagine that during a period of measured increasing temperature a tree ring series got smaller. This type of algorithm would assign a negative coefficient to the tree rings in that series, that would then propagate to the non-instrumental period.

This is the issue of consistency. If you are using tree rings “upside down”, then they will behave similarly in the calibration period and the proxy period, and nothing is lost. But if the tree really responds inversely in the two periods, then there is a real problem whatever convention you use.
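The “the coefficient just inverts” point can be checked numerically. A minimal sketch with synthetic data and ordinary least squares (my own toy setup, not any published code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration-period temperature, and a proxy that responds
# inversely to it (rings get smaller as temperature rises).
temp = np.linspace(0.0, 1.0, 50) + rng.normal(0.0, 0.05, 50)
proxy = -temp + rng.normal(0.0, 0.05, 50)

# Calibrate: ordinary least squares, temp ~ slope * proxy + intercept.
slope1, int1 = np.polyfit(proxy, temp, 1)

# Flip the proxy by hand and recalibrate.
slope2, int2 = np.polyfit(-proxy, temp, 1)

# The fitted coefficient simply changes sign, and the reconstructed
# temperatures come out identical either way.
recon1 = slope1 * proxy + int1
recon2 = slope2 * (-proxy) + int2
```

Flipping the input flips the fitted coefficient, so the calibration-period fit is unchanged; the dispute is about whether the resulting sign makes physical sense, which no amount of refitting settles.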

Huh??
You’re completely misunderstanding the issue.
In a CPS multiproxy study (one of the two legs of M08), the sign absolutely does “matter”.

Well, I’m looking for numbers to help me understand it. It seems reasonable for Mann to look for an indication that someone else had identified a response to a paleoclimatic signal as a proxy guide, and I don’t see why that binds him to use their sign convention (although I would have). And I’m trying to understand the sense in which the portion is “contaminated”. I’m trying to distinguish between using a different convention and actual wrong results, and looking for some quantitative measure. A recalc with the “right” sign would be a good guide.

Well, I’m looking for numbers to help me understand it. It seems reasonable for Mann to look for an indication that someone else had identified a response to a paleoclimatic signal as a proxy guide, and I don’t see why that binds him to use their sign convention (although I would have). And I’m trying to understand the sense in which the portion is “contaminated”.

You don’t need numbers. You need to understand what the CPS procedure does. From an earlier post by Steve:

As a technique, CPS is little more than averaging and shouldn’t take more than a few lines of code. You standardize the network – one line; take an average – one more line; and then re-scale to match the mean and standard deviation of your target – one more line.

In CPS the proxies are averaged. Do you need to see numbers for an explanation of what effect flipping a proxy has? While other proxies are going up, this one is going down and vice versa. The whole thing is “contaminated”!
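Steve’s three-line description of CPS quoted above is easy to make concrete. A toy sketch under that description (synthetic series and made-up noise levels; not Mann et al.’s actual code):

```python
import numpy as np

def cps(proxies, target):
    # CPS per the description quoted above: standardize each series,
    # take a simple average, then rescale the composite to the
    # target's mean and standard deviation.
    z = (proxies - proxies.mean(axis=0)) / proxies.std(axis=0)
    comp = z.mean(axis=1)
    return target.mean() + (comp - comp.mean()) / comp.std() * target.std()

rng = np.random.default_rng(1)
target = np.linspace(0.0, 1.0, 100)              # instrumental target
good = np.column_stack([target + rng.normal(0.0, 0.1, 100)
                        for _ in range(5)])      # five well-behaved proxies

flipped = good.copy()
flipped[:, 0] = -flipped[:, 0]                   # one proxy upside down

recon_good = cps(good, target)
recon_flip = cps(flipped, target)

# The flipped series pulls against the others at every time step, so
# the whole composite degrades, not just the calibration-period fit.
mse_good = np.mean((recon_good - target) ** 2)
mse_flip = np.mean((recon_flip - target) ** 2)
```

Because every proxy enters the simple average, one inverted series degrades the composite at every time step, which is the sense in which the whole reconstruction is “contaminated.”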

You don’t think it is incumbent on someone who decides(?) to use it the opposite way to be aware of how it was originally used AND to give a reason for flipping it? Saying the methodology will flip it back doesn’t cut it.

Thanks, that may help. It seems the key step in CPS as described in Mann’s SI is “All proxies available within a given instrumental surface temperature grid box were then averaged and scaled to the same mean and decadal standard deviation …”. So if at that stage the Tiljander proxy was added in with inverted sign, it would make a difference. In the paper a “potential” weighting factor (which could be negative) is mentioned, but not in the SI. Do we know that the Tiljander proxy had the wrong sign at this averaging stage?

The SI also says “As a cross-check we considered two alternative CPS procedures. The first procedure attempts to preserve signal amplitudes by matching trends rather than variances …”. Unlike variance, the trend is signed, so that could compensate for any inversion. They say this (and another procedure) “generally produced similar results, with isolated exceptions”.

In the specific case being discussed, the proxy is known by the authors of the original paper to behave differently in the calibration period. But since the upward slope in that period correlates with temperature, the algorithm assumes that is the correct direction for the proxy and propagates that back to the earlier periods. This is why Steve describes it as being “upside down.”

Re: Nicolas Nierenberg (#136), I’m sorry I’m catching up here – do you have a reference for that known kinky behaviour of the Tiljander proxy? At the moment, Romanm’s explanation looks more straightforward to me.

Re: Nicolas Nierenberg (#149),
OK, thanks, the “Saturday Night Live” thread seems to have the explanation, and does indeed describe the kinked behaviour. In recent times agriculture etc makes the proxy appear to get colder (more clay), and if you calibrate with instrumental temperature it comes out upside down in the earlier period. On that basis, presumably Tiljander data can’t be reliably calibrated at all with instrumental temperature, and should indeed be omitted.

This seems to relate to the note in the Mann 2008 SI which says

These records include the four Tiljander et al. (12) series used (see Fig. S9) for which the original authors note that human effects over the past few centuries unrelated to climate might impact records (the original paper states “Natural variability in the sediment record was disrupted by increased human impact in the catchment area at A.D. 1720.” and later, “In the case of Lake Korttajarvi it is a demanding task to calibrate the physical varve data we have collected against meteorological data, because human impacts have distorted the natural signal to varying extents”)

Incidentally, although most of these high-latitude sites seem very remote, I have indeed seen Lake Korttajarvi from the airport bus.

Nick, again, you seem to be wilfully obtuse to what is going on here. Mann’s notation in the SI does not quarantine the problem, as I stated in the post. Having noted the problem with the Tiljander sediments, why did he use them at all? What’s so pressing about having a contaminated series? Mann’s sensitivity study without the Tiljander sediments was not a no-dendro study: for that study, he included Graybill bristlecones.

The Tiljander sediments are used in the recon that supposedly shows that they don’t need tree ring proxies – the recon that Gavin Schmidt cites so proudly.

presumably Tiljander data can’t be reliably calibrated at all with instrumental temperature

Did you read Tiljander yet? These problems were known BEFORE Mann et al. 2008. There’s a reason they didn’t do a calibration. There’s a reason why they go into tremendous detail about the paleohistory of the area. So why – despite all the caveats – does the thing get fed into another Mannomatic?
Note further that Tiljander questions Lamoureux & Bradley’s rather bold and questionable assertion that lake sediment varves ought to be considered a temperature proxy until proven otherwise. Why’s everyone so dismissive of Tiljander’s hard work?

Re: Nick Stokes (#119), I don’t believe it would make a difference, and that Mann is indeed correct, because any PCA-like reduction of the data would pick up the largest variance, not the sign. I believe the sign would be transferred to the weight assigned to the series when the PCs are created (this is a bit vague; I know PCA is a linear transformation). I’m trying to think if it would have any impact on the magnitude of output PCs that would be used in the calibration regression? Maybe not.
Someone needs to check. However, although this is a mathematically consistent argument, it is not a physically consistent argument (the series is upside down), and since the outcome of the reconstruction is to produce a physical reconstruction, the Mann answer is, in its full context, incorrect.

Re: Micky C (#123), The coefficients have to be constrained to make physical sense. The math would “work” either way, but you wouldn’t want to use the series with a net sign in the wrong sense to the presumed physical temperature/sediment relationship. I think this example is a low variance signal with a high variance noise (bridges, peat ditching, forest clearing) added at the end with a spurious correlation to global temperature anomalies in the instrument period. The regression (or PCA) will probably minimize the error of the high variance noise – i.e. pick the sign the wrong way relative to the accurate physical temperature/sediment relationship. To the extent there is a significant physical relationship to temperature, this should reduce the accuracy of the overall proxy in the pre-calibration period. However, it’s quite possible that the spurious non-physical correlation may have increased the R2 during the calibration period, without any physical meaning. If you understood that the wide rings in the 20th century were outliers or at least non-temperature related data, you would want to address this issue in some other way, to put it mildly.
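The scenario described in this comment (a weak inverse physical signal overprinted by high-variance modern contamination that spuriously correlates with instrumental temperature) can be simulated in a few lines. All numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pre, n_cal = 150, 50

# Flat pre-instrumental temperature, then warming in the calibration era.
temp = np.concatenate([rng.normal(0.0, 0.1, n_pre),
                       np.linspace(0.0, 1.0, n_cal) + rng.normal(0.0, 0.1, n_cal)])

# Physical relation: the proxy responds *inversely* to temperature
# (small coefficient), but a large non-climatic trend contaminates the
# modern portion only.
proxy = -0.2 * temp + rng.normal(0.0, 0.05, n_pre + n_cal)
proxy[n_pre:] += np.linspace(0.0, 1.0, n_cal)    # the contamination

# Calibrating against the instrumental period fits the contamination:
# the fitted slope comes out positive, opposite to the physical relation.
slope, intercept = np.polyfit(proxy[n_pre:], temp[n_pre:], 1)
recon_pre = slope * proxy[:n_pre] + intercept
```

The calibration happily fits the contamination, so the fitted sign is opposite to the physical relation and the pre-calibration reconstruction is read upside down, which is exactly the failure mode being described.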

If they accept it is wrong, or could be wrong, then the only obvious solution is dropping Tiljander from the network and recalculating the results. If nothing changes then it greatly diminishes the weight of the original complaint.

Oddly enough, I see they have actually done that in the SI. Not because of the sign issue, but because of other quality doubts about the 4 Tiljander series, and three others. So they dropped them, and showed the alternative results in Fig S8. Unfortunately, the colors in my version are unclear (to my eyes), but the difference seems noticeable but not huge.

The SI to Mann et al 2008 conceded that there were problems with the recent portion of the Tiljander proxies (without mentioning that they were using them upside down from the interpretation of Tiljander and Finnish paleolimnologists), but argued that they could still “get” a Stick without the Tiljander sediments. However, as I observed at the time, this case required the Graybill bristlecone chronology (where they failed to mention or cite Ababneh’s inability to replicate Graybill’s Sheep Mt results, even though Malcolm Hughes, a member of Ababneh’s thesis panel, was a coauthor of Mann et al 2008). Thus their “robustness” analysis used either upside down Tiljander sediments or Graybill bristlecones.

You see, Nick, here is where I have problems with your defense. When Mann mislocated the rain in Seine, when he got the coordinates demonstrably WRONG, he wouldn’t correct them. He even repeated the mistake: “it didn’t matter.”

Well, if it didn’t matter, then why not just THANK STEVE and fix it, just for neatness’ sake? There is something similar happening here. If it doesn’t matter, or if it matters just a little, why not admit:

1. The original study had the series oriented one way, a way that made physical sense.
2. Run the analysis with the data oriented this way, since it doesn’t matter! or matters just a little.

What is it with not being able to admit errors, even when they don’t matter or matter just a little?

(ps: I’m not granting this problem doesn’t matter)

Can you SIMPLY and without buts say:

1. Mann inverted the series.
2. Anyone with a math background can see this.
3. I, Nick, can see this.
4. I, Nick, would orient the series the way the original author intended.

Let’s grant it doesn’t matter. That actually works IN favor of orienting it the “right” way.

Do you invert it just because you can? Just for the hell of it? Because you don’t know how to keep it in the right orientation?

Re: steven mosher (#162),
I can’t even grant that much grace. These “small” errors are always in the direction required to elevate the paper from ho-hum Journal of Quaternary Research or Holocene to Nature or Science. These guys are racing each other to sound an ever more alarming alarm.

Run the analysis with the data oriented this way, since it doesn’t matter! or matters just a little.

No, that was my first thought too, from the talk of “upside down”. Just turn it right way up. That’s why I asked initially if anyone could point to where that had been done.

But it won’t work. As NN said, it’s wrong either way. Inverted again, it’s wrong for the instrumental period. As Steve said here, and I said in the previous comment, the Tiljander data should not be used here. Mann’s Fig S8 was more appropriate.

once again, you’re being wilfully obtuse about bristlecones and Mann 2008.

I’m glad that you agree that the hockey stick that Gavin Schmidt displayed so proudly at the Yamal thread at realclimate – the supposed hockey stick without bristlecones – is not appropriate due to its use of upside down Tiljander (and this isn’t the only problem).

But, as I noted before, Fig S8 used bristlecones – a proxy that the NAS panel said should be “avoided”. I presume that you agree that bristlecones should not be used in temperature reconstructions, as the NAS panel recommended (and Mann said that he complied with NAS recommendations but actually flouted them).

Unless you are willing to justify the continued use of Graybill bristlecones (also repudiated by Ababneh), you must also concede that Fig S8 also is inappropriate.

once again, you’re being wilfully obtuse about bristlecones and Mann 2008.

Gosh, there’s always something 😦 I don’t think I’ve mentioned bristlecones, and I wasn’t planning to. I’m glad you folk have given me some pointers to understand the Tiljander issue. Bristlecones might have to wait a while.

No, it is the perfect time to look at bristlecones and PC’s too. If you want to understand how CA views Mann, you need to follow the trail. It’s the pattern that is disconcerting. That’s what you are missing; you are looking at each tree as completely separate. Look at the forest.

Then once you understand how Mann operates, look at the other members of the “Team”.

Pending your consideration of bristlecones, a topic that has been endlessly discussed here, you might post over at realclimate that the non-dendro HS of Mann 2008 shown in their Yamal thread cannot be used because of upside down Tiljander.

I remind you of Atte Korhola’s (an eminent Finnish paleolimnologist) comment (as translated) on the upside down use of Tiljander and other proxies (in this case, by Kaufman, but Mann’s use is far worse because he used the contaminated portion):

It is concluded in the article that the average temperatures in the Arctic region are much higher now than at any time in the past two thousand years. The result may well be true, but the way the researchers ended up with this conclusion raises questions. Proxies have been included selectively, they have been digested, manipulated, filtered, and combined, for example, data collected from Finland in the past by my own colleagues has even been turned upside down such that the warm periods become cold and vice versa. Normally, this would be considered as a scientific forgery [falsification?], which has serious consequences.

I’m not sure why you wouldn’t post it yourself, but I have taken the liberty of doing it for you:

Richard Sycamore says:
Your comment is awaiting moderation.

16 October 2009 at 10:04 AM
There are two graphics in the opening post that contain problems and I ask whether they should be removed or amended.

The first is the one derived from Kaufman et al. (2009). It makes inappropriate use of the Tiljander lake sediment proxy.
On this topic, eminent Finnish paleoclimatologist, Atte Korhola, has stated [translation from Finnish]:

“It is concluded in the article that the average temperatures in the Arctic region are much higher now than at any time in the past two thousand years. The result may well be true, but the way the researchers ended up with this conclusion raises questions. Proxies have been included selectively, they have been digested, manipulated, filtered, and combined, for example, data collected from Finland in the past by my own colleagues has even been turned upside down such that the warm periods become cold and vice versa. Normally, this would be considered as a scientific forgery [falsification?], which has serious consequences.”

The second problematic graph is the one beneath it, derived from Mann et al. (2008), which makes the same inappropriate use of this proxy, attributing to it an interpretation that is directly opposite to that proposed by the original authors. It does not make sense to have proxies being interpreted one way by one group and the other way by another. Given the contradiction, perhaps fix the graph or remove it?

[Feel free to edit this text if any part of it is considered offensive.]

Thank you.

Please let me know if I have mis-represented the case. I rather dislike posting the comments of others, but in this case there is a need for resolution on the matter.
Good day.

Gavin, what is your opinion on the Finnish Lake sediment proxies? They certainly jump out when looking at the series used. Would you consider them quality temperature proxies?

[Response: I have no particular opinion, but leaving them out (and a few other potentially problematic ones) was tested as described in the SI (p2 and fig S8). It makes a little difference, but nothing particularly striking. It’s worth pointing out that in these kinds of projects there are no guarantees – all you can do is attempt to find out how robust answers are to various issues (this is one, but so is the dependence on tree ring data, reconstruction methodology, etc, etc.) and seeing what decisions make a significant difference to the final answer. If they do, then obviously you are constrained about what can be concluded, while if they don’t matter, it’s not worth delving deeper. – gavin]

So Gavin is saying if a reconstruction is filled with nonsense from a variety of sources and you remove one bit of nonsense and the difference is not “particularly striking,” then you can put it back before you test the next bit of nonsense? That hardly seems scientific to me. I would much prefer “delving deeper” to make certain you are looking at data warm side up so to speak. I would also like to see a little more honesty when errors are found. All of this posturing and circling of wagons is a disservice to science.

When the study was published I discussed it with Mia Tiljander, and she stated that human impact, especially land cover change and agriculture, accelerates erosion, so more material flows into lake sediments, and thereafter the lakes are of no use in proxy reconstructions anymore: Korttajärvi from 1720 on, most of the other 180,000 Finnish lakes from the 1850s on.

[Response: That is, I presume, why Mann et al did a test that didn’t include them. – gavin]

Re: Nick Stokes (#218),
You’re “obfuscating” only in the sense that the issue is not whether the question was raised but whether it was resolved. That it wasn’t is evidenced by the two incorrect graphs.

Nick, that is pathetic. You cite an RC question answered by Gavin citing figure S8!! The fact that figure S8 is Mannomatic bristlecone sausage and nothing more has been demonstrated here more than once, and is not in dispute. Please don’t cite S8 as proof of anything other than Mann’s intransigence.

One more time. Tiljander upside down or otherwise is not a valid proxy by Mann’s own admission. The only thing which keeps the stick alive in M08 if you take out Tiljander is to add stripbark bristlecones, and the NAS said no stripbark bristlecones, full stop.

Re: Peter (#227), Again you and bender are missing the point. Steve suggested that I post something at RC, presumably to get their response. I pointed to where someone had done that. I’m not saying RC gave a good response; I’m just saying that the result of posting that query is known.

Nick, your reference is from last year. Maybe if they heard the comment from you (rather than last year’s commenter), it would help them understand. Wouldn’t do you any harm to try. Let us know how you fare.

As to your point that they’ve already responded, I’ve noticed that you don’t immediately refrain from commenting here after I’ve answered a question. You’ve been known to voice your disagreement.

I see no legitimate reason for you to express disagreement here (or at Lucia’s) while not doing so at realclimate, merely because Gavin said something a year ago.

Re: Steve McIntyre (#230), I’m not averse to pressing an argument, but only if I have something new to say. In this case, we have a comment offered in challenging style at RC by Tilo, who seems to be a scientist in the field who has discussed the matter with Tiljander. I could not muster that authority, and I can’t see what content I could add.

Gavin responded by pointing to the Mann SI which flagged the Tiljander issues and did a T-free calculation. He seems to say that is sufficient; others (me too, I hasten to add) think the data should have been omitted entirely. I don’t see how the passage of time would change that difference of views.

Re: Nick Stokes (#229), This is repeatedly the case with any dissenting or difficult questions at Realclimate. They give a poor response which does not answer the question and then their acolytes point to the fact that it has been “discussed” as being evidence that they were right all along. The fact that Gavin moderates (ahem!) dissenting voices out of the blog renders it a useless forum for academic discussion.

Re: EdBhoy (#231), Yes, Nick Stokes has the chance of improving the reputation of both himself and Realclimate in the eyes of many by voicing disagreement there. Do you think you would be censored, Nick?

Re: Peter (#234), Peter, everyone here seems to be an expert on the statistical vagaries of bristlecone pines. But as I said above, I’m not.

I have now said several times that I think the Tiljander series is unsuitable for use as a temperature proxy. Incidentally, being careful with words has some worth. I’m sure Mia agrees with that statement, and disagrees that the series has demonstrated flaws. I believe it is good data for the right uses.

Steve: Again, Nick, that’s not what she said in her article. She said it was suitable for use as a temperature proxy prior to modern disturbance – however, in the opposite orientation to Upside Down Mann.

Re: Nick Stokes (#210),
1. That the question “was raised” doesn’t mean it was answered. (It wasn’t.)
2. Not all the relevant questions were raised. (The bcps.)
Your behavior is now what I would call “willfully obtuse”.

Nick, you think wrong. You offered figure S8 as demonstrating that the reconstruction was not hugely different without the inverted Tiljander series, but since S8’s “hockey stickedness” is flat stuff + bristlecones, you certainly did mention them.

This lawyerly parsing of words is so antithetical to the scientific method and the search for truth that it becomes maddening.

No, I take that back. It wasn’t just temptation, I really do see the quality-signaling function of peer review as relevant to the question at hand. Steve asked “Why did the exchange have so little impact?” I say it’s because the mechanism for the exchange has not evolved in response to incentives for creating that impact; rather, the incentives (from the journal’s perspective) are to minimize the impact of after-the-fact findings of error. That makes it not just a question of “more ink costs more money”. It’s not good for science as a whole, but it is good for maintaining the prestige of the journal, and it’s certainly related to peer review being the primary signal of quality.

So while Mann had an obligation (to his peers in the scientific community as a whole) to admit the problem, and a matching obligation not to smear the person who correctly identified the problem and brought it to light, he didn’t. And in my experience, denying that there is a problem, or that the problem matters, is completely typical of responses to comments, regardless of the journal or the branch of science. And I think it’s typical because it is (or has been) an adaptive attribute for journals.

Re: bender (#90), If you want to know the reason why the editor let it slide, I would offer this. As I contemplated my blog experiment (where a paper is published as a blog without space restrictions on comments) it became evident (duh) that the principal difference between a blog and a journal article is closure. That is, a blog for the most part can theoretically be left open, left open to comment (can you say like the talmud). How does one reach closure in such an open-ended medium? The conversation can go on and on and on. In fact a comment could overtake the whole article and disrupt the whole affair. How would one cite that in a bibliography? Journals like to view what they publish as established science. Citable. There is a certain finality to the publication process. Comments are like expanded footnotes in a journal. In a blog, as I noted, the comment could overtake the article. Further, allowing back and forth, extended back and forth, would undermine the whole peer review edifice.

I was thinking about that yesterday in the context of Steve’s assumed lack of peer reviewed output and I think something like the following would work.

An interesting result appears in a post by Steve M and either he or someone else decides it should be “published.” The interested party produces a mock-up article and posts it as an open thread. After a fixed period the “author” makes a post saying comments for the first round are closed and then posts a new version containing changes, new citations, etc., which is open as well. After two or three iterations, a “closed” version is posted where only specially invited people can post (perhaps after vetting), and when this is done the semi-final version is posted and outside peer reviewers send reviews and, if necessary, more changes are made. Finally the paper is “published” in a special place, perhaps even outside Climate Audit. This would serve as a repository for the paper and would signal “closure,” i.e., complaints and further discussion would go back to the blog level.

Re: Dave Dardinger (#103), While peer review, properly and fairly done by qualified and relatively unbiased reviewers, is something to be desired and aimed at, given the present atmosphere in the climate science journals, the widely touted stricture that Steve must be ignored until he “publishes” is nothing but a delaying tactic, a smokescreen which seeks to take advantage of the status quo. These scientists are reading this site and are well aware of what he is doing and the quality of what is going on here.

The simple answer to Steve’s question in #104…why “they cannot resolve such a simple issue”… is that they don’t want to.

I think it would be an interesting experiment. I don’t know if WordPress can handle all the things I would like to see handled in the review process. Steve hasn’t indicated if he thinks such an experiment is feasible or to his liking.

As for the specific topic here: why won’t people address this problem with Mann? For Christ’s sake, one time over at Tamino’s I tried to get ONE PERSON to admit that Mann got coordinates wrong, and nobody would dare say that he did. I suppose they look at how to this day they triumphantly roll out UAH’s math error, or an error (degrees versus radians, or something like that) that Ross made, and they realize that if they cop to an error (like the Y2K error) people run through the streets with somebody’s head.

Think of it this way: climate science has been highly PERSONALIZED. There are key personae: Gore, Gavin, Mann, Hansen. That personalization of science ends with people not being able to admit errors.

Put it this way: if Mann hadn’t made the error, if the anonymous “science process” had made the error, correcting it would be trivial.

I’m trying to focus here on a very specific case. It seems unsatisfactory from a quality control perspective that such a simple issue as whether Mann used the Tiljander series upside down should remain in dispute after being specifically raised in an exchange in the PeerReviewedLitchurchur.

This is a very elementary point and yet it remains in dispute nearly a year after being raised.

I am generally inclined to favor lack of understanding and knowledge, rather than any motivational or conspiratorial influences, as the root of these types of recognition problems.

The peer review process could handle these types of problems, given general understanding and knowledge on the part of those administering and reviewing the process. Without those attributes, we can have a rather frustrating situation for those who do have the required knowledge to understand. Is not the blog a great outlet for those frustrations, and for pointing them out to the thinking person?

Take the case at hand, with the comment by a mathematician here:
Re: Nick Stokes (#34),
who is apparently a sincere and intelligent mathematician, and who is answered here: Re: RomanM (#50),
by a statistician of university-level standing and reputation.

Could the problem be that these issues are processed in peer review more by the likes of Nick Stokeses than RomanMs? The less innocent alternative would require that the peer review owners in this area of publishing are somehow and at some levels blinded by their advocacies on policy. Without the capability of getting into these people’s heads, I personally can more readily handle the reasoning given in first sentence in this paragraph.

Another question that suggests itself is why the original authors have not raised objections to the incorrect use of the data in Mann et al. and in Kaufman et al. The answer, of course, is that they would pay a heavy political, career-impacting price, which only a few brave academics like Korhola are able/willing to risk.

Andy Revkin – Speaking of bravery, don’t you think this would be a great peg on which to hang your discussion of peer-review in the age of blogs?

We raised that issue in a post entitled The Silence of the Lambs, referring to Peter Brown’s drought-affected dendro series in the Great Plains – another set of data used upside down in Mann et al 2008.

There’s a difference in experience and culture that causes differences in behavior.

The RC folks see their work as publishing through peer-review, and their blog as the “Dummies Guide”. Steve is putting top level effort into CA.

Mann, Steig, etc don’t feel the same need to respond to comments from the “Dummies”, and probably feel betrayed when the blogs have more influence than the peer-reviewed journals. Anything that raises the value of either blog could overshadow their day jobs in academic publishing.

I appreciate all your work exposing the sloppy process of testing conclusions and challenging their validity as it pertains to the global climate debate. I work in the space community, and we could never allow this kind of sloppy process in our work – lives are at stake. Many lives. This is a rolling disaster from where I sit.

When the scientific community decided to go from debating theory to imposing worldwide policy affecting the lives of literally billions of people, their cozy little debate society needed to grow up and perform quality analysis and debate, as is required in any endeavor that can adversely impact human beings through shoddy work.

The journal based peer review process, with its quirks, biases and good ‘ol PhD network, is a complete failure in this matter. What bloggers have done is bring these conclusions and theories into the crucible of serious analysis and debate by a variety of minds and experience. You and others have simply shown the process is too lax, allows too many errors through, and the conclusions based on this process are basically invalid.

If the climate science community is prepared to step out of the theoretical debate society and get into policy and human safety, they need to completely overhaul their peer review process. What I have seen from reviewing this debate is that the ‘science’ here is basically unfounded. You could never design a system like this in the real world. There have been too many instances where the data being used was corrupted, tainted or error filled. In my field, the history you went through above would cause people to be fired for poor work and for hiding their poor work.

I don’t have the answer to fix this, but the climate debate has moved into an arena where folks like me should have a voice in challenging how solid the conclusions and theories are. Our lives and livelihoods are at stake. We who do similar activities using similar methodologies are well versed in carrying out this process in a manner that requires solid, high quality, bullet-proof analysis and results.

I know you and others would prefer to salvage the journal review process. But this is not a debate of theory, but a debate of human safety and trillions of dollars. We need to make the peer review process as broad as possible, so that the results that survive are as solid as possible – before we ask the world to invest at this level.

I see you and I agree on this, more eyes to drive out quality results and conclusions:

This points to a serious flaw in the peer review process as a mechanism for ensuring the level of due diligence that policymakers need so that they can sell their policies as “science-based consensus”. More eyeballs. More inclusivity. More back-and-forth. When you are talking global meltdown, you want maximum inter-neural connectivity. If the alarmists are right, then we can’t afford to get this wrong.

So please enlighten me – what is the answer? How do we get this process into a more rigorous methodology we can all have high confidence in?

Steve’s point seems a simple one: That the ‘system’ has reached the point where it is not possible to acknowledge even a completely unambiguous finding such as the mistaken use of inverted proxies. Not being constrained by Steve’s desire to avoid generalizations, I think it is clear that this is one more piece of evidence of the drift toward ‘infallibility’ of climate science. The field is no longer capable of acknowledging error. This is doubly disturbing because it is after all a field that is subject to much ambiguity in data gathering/analysis, parameterizations in models, etc. yet so much rides on its findings.
snip -OT

Infallibility will certainly not serve the quest for science well, and I suspect that over the long run it will not serve those whose agendas and future well-being are dependent on strong anthropogenic global warming.

Re: Noblesse Oblige (#81),
The difference between model predictions and observations finally grew large enough that something had to be done to reconcile the difference; it was getting too embarrassing. The result was the “mind the gap” post at realclimate, where they pointed to the under-sampled Arctic as the source of the 2001-2008 flatline. They argue that the NCEP reanalysis generates a warm Arctic 2001-08 that restores the upward trendline in GMT.
.
This is a good example why people should stick to the topics in which Steve has interest and expertise. This is not a part of the “denialosphere”. No audit has been conducted yet on the under-sampling effect described above. Wild speculations don’t have much place here.

I briefly argued at Real Climate that weblogs would be an excellent forum for debating issues such as this. I was a bit naive.

However, I do believe that some form of Web 2.0 technology could serve as a basis for evaluating the progress of discussion on scientific issues. Perhaps a moderated wiki with qualifications of commenters clearly delineated, and an associated weblog for narrative commentary. Properly organised, it could define the issue (why the research is being conducted), archive the data, promote discussion and serve as a reference.

Sadly, I don’t see how anybody could commercialise this, and it would gore some people’s oxen, so I don’t have much hope. But I do think it would function to overcome the problems delineated here.

Just to follow on for a bit, if the point of the exercise is to produce robust findings that explain a point of science, wouldn’t it make sense to involve the community at an earlier stage? Why not get peer and public review involved right from the design stage?

Bender, the sign going in doesn’t just change the coefficient coming out. Steve has said the program assumes a higher value is warmer orientation for the proxies. It won’t flip a proxy back if it is inversely correlated with temperature.

In a multivariate regression (e.g., stock prices from various economic indicators) it indeed does not matter what the sign is. In a Mannomatic, this would be ok also except now it means that Tiljander site is assumed to correlate inversely to the rest of the world: a hypothesis that can be tested.
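The point that the regression machinery itself is indifferent to a predictor’s orientation can be illustrated with a toy sketch (entirely hypothetical data, plain numpy, not anyone’s actual reconstruction code): flipping the sign of one predictor flips the sign of its fitted coefficient while leaving the fitted values unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=(n, 2))  # two hypothetical proxies
y = 1.5 * x[:, 0] - 0.7 * x[:, 1] + rng.normal(scale=0.1, size=n)

def ols(X, y):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

b = ols(x, y)

# Feed in the first proxy "upside down"
x_flip = x.copy()
x_flip[:, 0] *= -1
b_flip = ols(x_flip, y)

# The coefficient flips sign; the fitted values are identical
print(np.allclose(b[0], -b_flip[0]))        # True
print(np.allclose(x @ b, x_flip @ b_flip))  # True
```

The statistical fit is the same either way; the dispute is therefore not about the algebra but about whether the resulting orientation contradicts the physical interpretation of the proxy.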

I have never seen a history of the development of peer review, but having just read ‘Boltzmann’s Atom,’ it occurs to me that as recently as 100 years ago, battles over physics theories were carried out pretty much as they are being done over climate at CA, only much more slowly.

Why don’t the journals have (perhaps moderated) blogs tied to print articles that would allow a cheap mechanism for lengthy comments? Those which the editors (perhaps using reader responses as a metric) deemed to have merit could be forwarded to the authors for response.

In doing a regression analysis like this, the coefficients would typically be constrained to all go in one direction (assuming the data was all in one direction) to force the model to be physical. If the unconstrained coefficients came out with different signs (or zeros if constrained), it would be a good bet that the assumptions of the model were wrong in some cases, or at least not supported by some of the data.

For those of us who are mathematically challenged, this sounds like saying that the ice increasing on lake A and the ice decreasing on lake B can both mean it is warming, when they are separate data sets and a regression analysis is conducted without worrying about the physical meaning of the change in ice.
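The constrained-regression idea in the comments above can be sketched with synthetic data and scipy’s non-negative least squares (an illustration of the general technique, not any published reconstruction method): a proxy whose actual relationship is inverse to the assumed “higher = warmer” orientation is forced to a zero coefficient, flagging the violated assumption instead of silently entering with the opposite sign.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n = 200
proxies = rng.normal(size=(n, 3))
# Hypothetical truth: proxies 0 and 1 respond positively to temperature,
# proxy 2 responds negatively (its physical interpretation is inverted).
temp = (2.0 * proxies[:, 0] + 1.0 * proxies[:, 1] - 1.5 * proxies[:, 2]
        + rng.normal(scale=0.1, size=n))

# Unconstrained OLS recovers the mixed signs ...
b_ols = np.linalg.lstsq(proxies, temp, rcond=None)[0]

# ... while constraining all coefficients to be non-negative (the assumed
# "higher = warmer" orientation) zeroes out the inversely related proxy.
b_nnls, _ = nnls(proxies, temp)

print(b_ols.round(2))  # approximately [ 2.   1.  -1.5]
print(b_nnls[2])       # 0.0 -- the orientation assumption is contradicted by the data
```

A zero coefficient under the constraint is a diagnostic: either the physical assumption about that proxy is wrong, or the proxy does not belong in the model.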

The models expect a hockey stick. Those reconstructions that have a high correlatability to the model “stick” out like a sore thumb. Tiljander, Graybill, Yamal … a list that is growing. It might be time to submit the list (and the detailed critique of each) to several publications for simultaneous review, then get them to agree to publish.

It’s about why climate scientists cannot seem to resolve such a simple issue as whether Mann used the Tiljander series upside down relative to the author’s temperature interpretation.

It’s simply because they don’t want to and so far don’t have to. As bender said, the only disincentive is shame. It will take a critical mass of critics with understandable, impeccable, and publicized evidence to drag them up out of their gravity well of “settled science.” There’s not enough competition amongst them to overcome their mutual support system. Maybe a combination of gadflies, jesters, new researchers, etc. will eventually do the job. So far it’s not been enough.

The claim that ‘‘upside down’ data were used is bizarre. Multivariate regression methods are insensitive to the sign of predictors. Screening, when used, employed one-sided tests only when a definite sign could be a priori reasoned on physical grounds. Potential nonclimatic influences on the Tiljander and other proxies were discussed in the SI, which showed that none of our central conclusions relied on their use.

I’ve been assuming that the paleos out there remain silent out of fear. But maybe they actually buy the Mann statement above. Is it possible that their analytical skills are that bad? It would be really great to get a comment from luminous beauty and Jim Bouldin to clarify.

Mr. McIntyre:
As a counterpoise to your point in this posting, you might (for example) resurrect that post/comment in which you or someone–see, I can’t myself remember any of the important specifics–described a “proof” someone had submitted to a mathematics journal, in response to which one reader found an error: The original author then retracted the entire paper. Care to link to that here? It was good.

http://www.climateaudit.org/?p=3249 was about a climate science-style proof of the Riemann Hypothesis. In that post, I noted how using the terms “highly conservative” and “rigorous” more forcefully would have enabled a climate scientist to easily prove the Riemann Hypothesis:

We used the E. Bombieri’s highly conservative refinement of A. Weil’s rigorous positivity condition, which implies the Riemann hypothesis for the Riemann zeta function.

Ah, the Riemann hypothesis paper… a lovely light read😉. I think the point was that a certain reviewer found a fatal flaw and the paper was withdrawn – very much unlike a few cases in the climate-related literature where neither the originating authors nor their peers cared to uphold similar (fatal) criticisms.

Would it be possible and useful (in your view) to have a brief bio attached to each CA poster’s name? Something about the education and experience each brings to the discussion? For example: BS physics, MBA, semiconductor marketing (in my case).

I would really like to know which comments come from those with science/engineering educations and work experience. Also, whether a poster has a particular expertise in a topic under discussion (e.g. statistics, dendro, etc.).

The journal peer review process doesn’t reliably result in full disclosure and reproducibility, the foundation of the scientific process. However, while the reviewers are not public, the authors of the paper are.

The blog process seems to be working for full disclosure and reproducibility, but the credibility of the commentators cannot be easily judged. I think the blog process could be improved if commentators were allowed and encouraged to state their competencies.

I think a blog’s credibility will improve if observers (particularly those without scientific training) could have some idea of the competency of the commentators.

I would think that at some point the observer must judge for themselves, without undue reliance on the credentials of the observed. If one does not have much confidence in one’s capability to comprehend the subject at hand, then one should not partake in the discussions and should instead let the “experts” speak.

We must be ever vigilant to judge the expert as well as the blogger and use all the information available – of which credentials can be a part, but certainly not the major part of the judgment. And, of course, we are all aware of two people of equal credentials arguing nearly diametrically opposed views. What then? A coin toss?

Re: Kenneth Fritsch (#135), I agree with you for those, like myself, who have technical training. However, most of the press, politicians and general public are not capable of discerning scientific truth.

I think it would be very beneficial if the press, for example, could see the excellent qualifications of many of the frequent posters on CA. Thus far, they rest almost totally on the “peer reviewed good housekeeping seal of approval” and do not understand the serious flaws especially as it relates to climate science.

Re: charlesH (#142),
Follow the blog and judge by what people write. “Credentials” mean nothing in this brave new world where “retired mining consultants” can obtain PhD-level understanding just by reading, thinking, doing, writing, publishing.

I suspect you do too. I have a basic technical background. You and I can form our own independent judgement based on fundamentals.

If Steve wants to move CA’s influence beyond the technical crowd then CA needs to publicize the credentials of CA posters. Otherwise “peer review” in mainstream journals, with all its shortcomings, will remain the “Good Housekeeping seal of approval” to the press, politicians and general public.

I’m sure this has been covered (at length), but can a sediment series that does not calibrate against the instrumental period be dismissed as a non-thermometer (or maybe a ‘dirt-ometer’, in this case) and not be retained for further inclusion/analysis?

He added: “Those such as McIntyre who operate almost entirely outside of this system are not to be trusted.”

This comment by Mann at the Revkin blog tells it all. Revkin writes for the main stream media and I doubt that the main stream media – with reporters that are usually in a technical position to do no more than report the “official” line – will take a bloggers analysis over that of the more peer reviewed “experts”.

So be it – and for the person who thinks for herself, who cares?

This is the same for such fields as economics and even within the framework of peer review. Currently held conventional thinking is reported and accepted and whether it be right or wrong is for the thinking person to judge.

Once understood, this crap or jewels of wisdom are much easier to take.

If I turn my thermometer upside down, then the attached scale bar is necessarily turned upside down also. The mercury in the bulb will still expand when the temperature increases and the temperature value for hot is clearly capable of being measured and recorded, irrespective of the orientation of the instrument.
But that is not what was done. The field researcher identified, for good climatic reasons, that thick mineral rich varves occurred when the climate was cold, and that organic rich varves occur when the climate is warm. Modern environmental disturbances, tree-felling, ditch cleaning and road building, created thick mineral rich varves during the modern warm climate, contaminating the record.
To therefore equate all thick mineral rich varves in the sedimentary record as a warm climate signal is clearly wrong. The thermometer is being wrongly read.
Calibration does matter.
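A minimal sketch of the calibration point, with entirely synthetic numbers: if the field interpretation says thick mineral-rich varves mean cold, the proxy’s correlation with instrumental temperature over the overlap period should be negative. Non-climatic contamination in the modern period can flip the apparent sign, and treating that flipped sign as a calibration is exactly the misreading described above.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1850, 2001)
temp = 0.005 * (years - 1850) + rng.normal(scale=0.1, size=years.size)

# Hypothetical varve-thickness proxy: physically it responds *inversely*
# to temperature (thick mineral layers in cold years) ...
varve = -2.0 * temp + rng.normal(scale=0.2, size=years.size)
# ... but suppose post-1950 land disturbance adds thick layers anyway.
contaminated = varve.copy()
contaminated[years >= 1950] += 0.1 * (years[years >= 1950] - 1950)

def calib_sign(proxy, temp):
    """Sign of the proxy-temperature correlation over the overlap period."""
    return np.sign(np.corrcoef(proxy, temp)[0, 1])

print(calib_sign(varve, temp))         # -1.0: matches the physical interpretation
print(calib_sign(contaminated, temp))  # +1.0: contamination flips the apparent sign
```

A purely statistical calibration over the contaminated overlap would cheerfully adopt the positive orientation, which is the opposite of the field interpretation for the pre-disturbance record.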

So, peer review questions aside, how should this data be handled? If there is knowledge of fat tailed outliers in the 20th century, should they be eliminated? Clearly using the data in the “right” sense and constraining the coefficients isn’t necessarily the best solution, if there is specific knowledge about the behavior of certain outlier years, etc.

Nick: I’m sorry, but you seem to be in some weird state of denial about this whole affair. Maybe you don’t have experience in empirical science? You just cannot SELECT series that meet your preconceived concept (theory) of reality and then use those selections to PROVE your idea. Please tell me that you understand that much about science!

An aside: I’ve been observing for a long time and can’t recall one instance where Nick has agreed or even supported ANY comment that just might take some wind out of the sails of the DOOMSDAY SYNDROME. How about it, Nick, can you refute this?

And where are the dendros to comment on this thread?
.
I’ll tell you where. “luminous beauty” is busy hurling jihadist invective over at Cruel Mistress. And Jim Bouldin is practicing his “corridor method” with Deep Climate over at real climate. Unh-hunh. But they’re too busy to comment here? Listen up you lambs: you are cowards sitting in your easy chairs reading about this crap that passes for science. You should be ashamed of your silence.

#162. Actually it’s even worse. Even after the upside down use was pointed out in a PNAS comment, he denied it. The first mistake may have been unintentional, but the second time, in the Reply, was with knowledge. Why wouldn’t that rise to the level of a distortion of the research record?

I find the issue regarding peer review and publishing in esteemed journals very interesting. It is true that historically it has been the goal of every academic to publish, and many of us know that a list of publications as long as your arm was used as a marker of one’s achievement, especially when trying to further one’s career. However, the system was subject to long delays, and many a researcher has had his or her work gazumped by another who published first or turned out to be right. The literature is full of the disputes that arose; one only has to recall the famous Kekule/Brown battle over the structure of benzene, which took forever to sort out.

With the advent of phenomenal internet speed and the ability to share information rapidly, blogs such as this one mark an EXTREMELY important milestone in the development of scientific knowledge. It is great to witness it. It is now possible to test and replicate scientific work almost instantaneously, so that the old system of correcting errors and refining by comment, letters etc. in the established literature is becoming somewhat redundant (not entirely, though).

I think that the naysayers, who we know do actually read this site, would be better served by embracing this new order of intellectual exchange and development rather than trying to preserve the old order. I see it as an exciting time and congratulate Steve on being at the forefront of it.

Re: TAG (#181),
Ad hominems? I’m trying to find good reasons to cut Nick Stokes some slack. I thought I was being generous. If there is something offensive there, then snip away. Yes, #164 was intentionally offensive. Why? I’m disgusted by the failure of Steve’s critics to come here and fight like real men. This is not at all ad hominem. It’s a hearty invitation to come here and get trashed by logic and data.

I’m disgusted by the failure of Steve’s critics to come here and fight like real men.

There are three different and equally dispiriting aspects to the present situation.

As you observe, climate scientists don’t show up here to defend their work.

In my view, the primary reason why they don’t is peer pressure within their community to place sanctions on Climate Audit. (We know of individual scientists who have commented here, got into trouble with their associates, and had to withdraw.) This is particularly problematic for young scientists looking for grants and jobs, who might otherwise be interested in engaging, but for whom it’s not worth the risk of potential sanction.

bender, and I’m not saying anything that you don’t realize just as well as me, the issue is not just the failure of the individual scientists to defend their work, but the policy that has arisen within the Community of a sort of sanctions policy against Climate Audit, discouraging individual scientists from participating here.

The flip side of this is the censorship policy at realclimate, which is well-known to CA readers, but seldom fully understood by third parties.

While both sides get blamed in this cold war, I think that this is unfair both to me and to this site.

The people that suffer in the end are the scientific public who are starving for high-level discussion of the issues. They would like to understand exactly why the criticisms of Mann, Kaufman, Briffa,…etc are wrong.

Blogs are not a substitute for journal publication, but they are a unique and potentially excellent way of establishing the sort of high-level discussion that the public is starving for.

In other words, proxies are required to respond to local temperature in a certain way, but their “response” to hemispheric temperature (“teleconnection”) may be of opposing sign, and the sign and magnitude of the response change from century to century! The story is so amazing that I doubt even the best science fiction writers could come up with it!

OK, I have to make one comment I think is relevant to the subject of the thread, regarding “peerreviewedliterature.” I’ve been on peer review panels for a couple of niche publications related to tech industries I’ve worked in. I know just how political they can be, and just how thorough they usually are, and for me peer review is hardly a UL stamp of approval. I hesitate to say it means nothing at all, but that’s close to my feeling after my experiences, which I freely admit are my experiences only and may not be (but probably are) transferable to other pubs and subjects. The experiences presented here and on other climate-related sites – the kinds of things that get through and get published, the refusal to abide by archiving and openness standards, the shutting out of dissenting views – certainly lead me to believe that my experiences are not unusual.

I’ve seen far more thorough and intensive scrutiny of papers and research on this and other blogs than I have ever seen in real peer review panels. While blogs have their good and bad points, ones that are seriously run, as this one is, allow a large number of eyes and minds to review and comment on an issue, and have proven far more useful at rooting out problems than any publication’s peer review panel (or thesis committee) I’ve ever seen. The complaints against sites like this are remarkably similar to those the news media make about bloggers – that they lack the professionalism of “real” journalists (who have shown a striking lack of honesty and competence at times) – and smack of a jealous attempt to hold off progress and change more than a real interest in accuracy or fairness. Same here. The paradigm is changing, old methods and channels of information are dying off to be replaced by new ones, and I find it a fascinating thing to watch. I see the same thing everywhere, whether in journalism, in scientific publication, or, going further back, in the music and video industries and their attacks on new distribution channels.

Change is never embraced easily, particularly by the moribund institutions who feel threatened by it. It would seem that a more open, logical, and useful approach would be to endeavor to determine a good way to embrace and incorporate the power of blogs and such into the review process, but being human it’s doubtful change will be happily embraced, it will have to be forced. And the egg on faces that has arisen from work like our host’s will undoubtedly be one of the things that causes a change, so naturally it’s feared and attacked. Sadly, but not unexpectedly.

Re: Severian (#184),
Do you ever notice that freshmen ask the greatest questions? The reason is because they have watchful eyes and thoughtful minds and they don’t trust authority. Multiply that watchfulness and thoughtfulness and skepticism a million fold and that is what open blog-based review offers. I’ll take that over two tired, established guys with “credentials” any day. Put the two together and you have a vetting mechanism of unprecedented parallel processing power.

Can anyone point me to a 1-2-3 of how Mann et al. use PCA? I’m a mathematician by profession (speciality logic, but with statistics as a side subject), currently working as a software developer. I used to do some biometrics statistics for a biology professor (as a student many years ago – I applied stepwise multiple regression, which is a somewhat related technique, and I reinvented cluster analysis🙂 which is also loosely related). From what Steve and Jeff Id write, I get the impression that “mannomatics” is misuse of statistical methods at its worst, but I may be missing something (e.g. it seems to me they commit the sin of reusing the data used for model building in the actual analysis – is that really true?). (Btw, trying to stay on topic wrt peer review: I didn’t get a very convincing impression of the statistical competence of biology journal reviewers at that time.)

Steve: Look at our two 2005 articles and the left-frame categories on MBH and Principal Components. Also Wegman. There are layers of issues with MBH, not just PCA, but it was an interesting one.🙂

Re: Espen (#186),
Espen
Open Mind had a series of five posts on PCA a while back. Here is the most recent: http://tamino.wordpress.com/2008/03/19/pca-part-5-non-centered-pca-and-multiple-regressions/
In these posts, Tamino used material from Joliffe (an expert on PCA) to justify the use of ‘non-centred PCA’. A while later Ian Joliffe joined the discussion, pointed out that Mann had not used the techniques he said he had used, and basically called the analysis meaningless. Joliffe begins by asking for an apology from Tamino for misrepresenting his work.
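For readers trying to follow the centering issue, here is a toy illustration (synthetic data and plain numpy, not Mann’s code, and a drastic simplification of the actual MBH procedure): “short-centering” a data matrix on a late sub-period rather than on the full-period mean changes which pattern the first principal component picks out, favoring series with a late excursion.

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, n_series = 600, 50
t = np.arange(n_years)

data = rng.normal(size=(n_years, n_series))
# Series 1..49 share a common oscillatory "climate" signal ...
data[:, 1:] += 0.5 * np.sin(2 * np.pi * t / 100)[:, None]
# ... while series 0 is noise plus a late uptick (a "hockey stick" shape).
data[-100:, 0] += 5.0

def leading_pc_loadings(X):
    """Loadings (first right singular vector) of the leading principal component."""
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[0]

# Conventional PCA: center each series on its full-period mean.
w_full = leading_pc_loadings(data - data.mean(axis=0))
# "Short-centering": center on the mean of the last 100 "years" only.
w_short = leading_pc_loadings(data - data[-100:].mean(axis=0))

print(np.argmax(np.abs(w_full)))   # one of series 1..49: the shared signal dominates
print(np.argmax(np.abs(w_short)))  # 0: the uptick series now dominates PC1
```

Decentering inflates the apparent variance of any series whose late-period mean departs from its long-term mean, which is why the choice of centering period is not a harmless convention.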

Nick Stokes, Kaufman uses Tiljander upside down, and chops off the period after 1800. You can download their data and see that they have it warmer in the 1600s and coldest around 1000 and 1200.
If a proxy were found to go down when the temperature goes up, I would expect the algorithm to invert the proxy and use that. It appears Mann’s CPS algorithm would just drop it, or perhaps it would keep it in inverted orientation, since the correlation is still good.
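The screening point can be made concrete with a toy sketch (hypothetical proxies and an arbitrary threshold, not any published screening rule): a two-sided screen on |r| retains a proxy regardless of its orientation, while a one-sided screen on r admits only the orientation assumed a priori.

```python
import numpy as np

rng = np.random.default_rng(4)
temp = rng.normal(size=150)
good = temp + rng.normal(scale=0.5, size=150)       # responds positively
inverted = -temp + rng.normal(scale=0.5, size=150)  # responds negatively

def r(a, b):
    """Pearson correlation between two series."""
    return np.corrcoef(a, b)[0, 1]

# Two-sided screening (|r| > threshold) admits both orientations ...
passes_two_sided = [abs(r(p, temp)) > 0.3 for p in (good, inverted)]
# ... one-sided screening (r > threshold) admits only the assumed orientation.
passes_one_sided = [r(p, temp) > 0.3 for p in (good, inverted)]

print(passes_two_sided)  # [True, True]
print(passes_one_sided)  # [True, False]
```

Which screen is appropriate depends on whether the sign of the proxy’s response can be reasoned a priori on physical grounds, which is precisely the point in dispute for Tiljander.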

Blogs are not a substitute for journal publication, but they are a unique and potentially excellent way of establishing the sort of high-level discussion that the public is starving for.

Exactly. And they are a corrective for attempts to impose a monopoly on information about climate. Being the focus of a column in the NYT is a stunning victory in the effort to hold world-renowned scientists accountable.

Re: theduke (#192),
Blogs are a shady place where misinformation passes for information and skeptics can spend their days endlessly reaping what they sow in the coprophagic echo chambers of the denialosphere.

Re: Steve McIntyre (#191),
Respectfully, I question your sense of what it means to write “less pointedly”. Some of your communications are unnecessarily provocative. Yet you complain when those provoked are “unresponsive”. Your game is to obtain “unresponsive” responses. Admit that nothing pleases you more.

Re: MikeN (#189),
I’ve not been censored there before, and yet I have challenged some of their assumptions. When the comment is printed you will be asked to eat your cynicism the way you eat up half-truths.

Steve: Sometimes I write pointedly. All I meant in this instance is that my comment at RC did not include the portion of Korhola’s quote where he describes the continued use of upside-down Finnish series as a “scientific forgery”. I merely quoted the sentence where he says that they were used upside-down. I posted it simultaneously at DotEarth.

Re: TerryS (#221),
Whether the comment appears or not is somewhat immaterial. What matters is whether the graphics are corrected – by removing the Tiljander lake sediment series and the defective pine tree series. I will wait the week-end before passing judgement.

Re: William Newman (#194),
Thank you for the clarification. I’m not dismissing anybody’s argument. I’m inviting – nay imploring – engagement. Drive-bys from half-experts is not engagement. Be like JEG. Show us your stuff.

Ad hominems? I’m trying to find good reasons to cut Nick Stokes some slack. I thought I was being generous.

For many of us, this is a learning experience. Having CA challenged is essential for learning. I can understand that being right is important, but good arguments can be presented without snark or ad homs. CA is my favourite site, but I still look the other way when posts or posters go off the hinges.

Richard, looking forward to it. Hope you ask some followups about the lake varve proxies, I think it’s #1,#2 and #4 in the list. There are a number of posts on the subject last month.
Just throwing out Tiljander and Yamal changes things, taking away the claim of it being ‘warmer now than at any point in the last 2000 years.’ The Medieval Warm period is still cooler when you take both of those out, so they have something to cling to.

I looked at Kaufman’s sensitivity to several accounting issues here, showing the effect of a few changes in accounting policy: using the Finnish series upside-up, using three reputable non-Briffa tree ring series (Grudd’s Tornetrask, Polar Urals and Indigirka River) instead of the three Briffa series, and a couple of other things mentioned in the post. I haven’t parsed the precise impact of each accounting decision – my guess is that the Yamal-Polar Urals sensitivity is the largest.

Please be careful about presuming that any of these squiggles means very much.

Re: MikeN (#199)
Something might have gone wrong in the submission process – perhaps my typographical error on the word “interpreted”. Or perhaps it will materialize later. I decided to submit the following:

Richard Sycamore says:
Your comment is awaiting moderation.

16 October 2009 at 1:38 PM
Are you quite sure the graphs posted above are all correct? There are two that make use of a lake sediment proxy that I’d like to discuss.
Thank you.

Re: Richard Sycamore (#201),
Once that’s submitted maybe you can try getting “Deep Climate” and “Delayed Oscillator” over here for a dendro lecture from “Luminous Beauty”. There’s gonna be some learnin done out behind the woodshed tonight.

Let me put a thesis up for public criticism right away: when people later study climate science, historians of science will classify the early 2000s as an embarrassment to the field. They will look back in wonder at our time and use it as a warning of how the core values and criteria of science were, little by little, forgotten as the research area – climate change – turned into a political and social playground.

Truth, universality, objectivity, reproducibility, openness, systematicity, rationality, criticality and self-correction are the marks by which science has traditionally been distinguished from other ways of perceiving the world and from pseudo-science. Revolutionary new research results have always attracted criticism. My own university’s operational strategy states that “the basic attitude of a university engaged in research is critical.”

The progress of science therefore calls for skepticism. The American sociologist Robert K. Merton speaks of organized doubt – the suspension of conclusions and judgments until sufficient empirical data gives them a solid foundation. Without this systematic doubt, research findings threaten to harden into dogmas that imprison the progress of science.

In earlier times such dogmas were upheld by, among others, the church, but nowadays researchers themselves can become dogmatic. Researchers identify readily with their own outputs, and findings produced through long toil are easily elevated into final truths. Climate science offers many examples of this. Some climate modellers, for instance, may come to regard their own constructs as reliable descriptions of the world, when in fact they are crude simplifications and simulations, with statistical error bounds, of a complex and chaotic reality. Climate change is an area where researchers still have much to do before any “final truth” is achieved.

The sharp dichotomy between climate skeptics and researchers emphasizing the human contribution is eating away at critical climate research. Scientists do not wish to be identified with skepticism, and so much entirely natural criticism of materials, methods and conclusions goes unvoiced for fear of stigma. For the same reason researchers do not dare to examine new alternative explanatory models, with the result that oxygen is being lost throughout the scientific field.

It is highly symptomatic that anyone presenting criticism or alternative models must immediately stress that he is far from being a skeptic. Thus Mojib Latif, speaking recently at the UN climate conference in Geneva, rushed to qualify his statement that near-term climate projections may need revising and that the world’s climate may cool over the next 10-20 years. Such caution and walking on tiptoe does not, in the long run, do honor to science.

Self-correction, along with criticality and objectivity, is a key criterion of science, as is autonomy. Scientific progress can come only through the scientific community’s own activity, not under pressure from external parties or in the hope of delivering results of a particular kind. Scientific knowledge is sought and set out regardless of personal interests and prestige.

Because of its political heat, climate research is subject to enormous external pressure of a kind the community of researchers in this field has not previously been used to. In particular, the pressure surrounding the Copenhagen climate conference seems to have energized the research community to a voltage at which the basic principles of normal critical research are easily forgotten.

Ever and anon, reports and studies pop up in public that paint ever wilder horrors of climate change. In these reports genuine scientific criticism, openness and repeatability are often forgotten. For example, the UN Environment Programme has just released a climate report (McMullen, C.P. and Jabbour, J. (2009). Climate Change Science Compendium 2009. United Nations Environment Programme, Nairobi, EarthPrint) which impressively presents a Wikipedia recipe for a “hockey stick” to describe the temperature rise of the last hundred years, citing as its source Hanno (2009) – a contributor who appears on Wikipedia under a pseudonym. In a master’s thesis, such sourcing would be mocked.

Another example: the prestigious journal Science recently published a study finding average temperatures in the arctic regions to be higher now than at any time in the past two thousand years. The result may well be true, but the way the researchers arrived at it raises questions. Proxy material has been selected, pre-processed, manipulated, smoothed, and combined – for example, data that my colleagues and I collected in Finland has at some point even been turned upside down, so that warm periods become cold and vice versa. Normally this would be considered scientific falsification, with serious consequences.

The ultimate goal of science is to seek truth and to broaden our knowledge. Scientific knowledge is never absolutely certain: error is always possible. This does not mean that all knowledge is equally shaky, or that a general skepticism toward all knowledge should be adopted. It does mean that what we now perceive as truth may yet change as new ideas, theories, methods, and better research tools arise. This has always happened in science, and it is happening still.

Such humility and sense of proportion are now needed in climate science as well.

The author is a professor of environmental change, the University of Helsinki

Here is an additional comment by Atte Korhola in reply to comments (from the same thread):-

Atte Korhola: Friends. What would have happened if I had written a blog post on this forum titled, say, "Social-psychological research in the doldrums" or "Brain research in crisis"?

It would likely have sparked an interesting discussion about whether such a risk of crisis actually exists and in what direction the research is heading. Some would have argued against my thesis; others might have shared my concern.

With climate change, the case is otherwise. Within it, it is apparently impossible to conduct a calm and neutral debate; instead, one is immediately pressed with guesses about one's motives and demands to account for one's moral responsibility.

Perhaps I would have earned the legitimacy to speak if I had immediately declared my views: I respect the phenomenon, I believe in climate change and a significant human contribution to its cause, and I take it seriously. But must such a confession of faith really be read out before every speech? I find this symptomatic, and frightening for the advancement of science. It is the walking on tiptoe that I described in my text.

When Swanson & Tsonis recently published in GRL the study "Has the climate recently shifted?", in which they speculate about a possible cooling of the climate over the coming decades, they were all but persecuted within the scientific community. As a result they ended up, among other places, on the RealClimate website, explaining and clarifying their positions to researchers and the general public and assuring everyone that they are still toeing the line.

I find this somehow painful and embarrassing, especially since no similar accountability is imposed on those speaking in the other direction. For example, Jay Zwally and Mark Serreze have both argued that the Arctic ice cover will melt completely by the summer of 2012. In early summer 2008, Serreze even predicted that by the end of that same summer the entire north polar region would be ice-free. I was traveling in England at the time, and the forecast was extensively covered on the front pages of the daily newspapers. Well, it did not melt, and 2012 is not looking likely either, not yet. But where is the accountability? Who demands that they account for what they say?

The commenter 'diff' has understood my point correctly. I am certainly not claiming that climate research is in itself of poor or inferior quality – my competence would not even suffice to judge on that scale. I expressed my concern about the general state of climate research, subject as it is to enormous tension and politicization. The symptoms are in the air. But I may well be wrong.

The commenter "Peter" (Garfield?) suggested that such a discussion should take place only within the scientific community, without confusing the general public. Peter, wake up! We are living in the 2000s, in which researchers can no longer hide in a burrow, emerging only to bring the latest scientific truths to light for the public. Citizens are seeking information, and a good thing too!

I do not, by and large, believe that opposition to climate change stems merely from a lack of information, and that people will come around if only the right information is distributed to them adequately. As Mike Hulme, director of the Tyndall Centre for Climate Change Research, shows in his excellent book "Why We Disagree About Climate Change" (Cambridge University Press, 2009), the causes of disagreement are myriad: rational, cultural, political, psychological, financial, moral, and scientific; the essence and status of these and other issues is for the scientific community to consider together with the citizens.

I was asked to write blog posts for this column. I want to stimulate debate, stir things up, and bring in new ideas and perspectives. Quite natural, right? Or what, according to Peter, should I write? Repetitions of the IPCC summaries and projections? Yawn.

When Paul Krugman, the 2008 Nobel laureate in economics, was asked at the award ceremony what the secret of his research was, he replied: "Listen to the heretics. Question. Dare to be stupid. Simplify, simplify." There it is!

Whatever one thinks about peer-review, this “hockey stick saga” demonstrates clearly and conclusively that it is a totally unreliable and potentially corrupt construct. I don’t want to say it’s worthless, but it is damn close to that. It’s very similar to the American political system: good-ol-boy networks, pay to play, scratch my back and I scratch yours, $$$$, etc.* We need something better. I don’t have a solution and I don’t know what’s “better,” but peer-review means absolutely nothing, relative to the veracity of some printed material. I have just as much faith in “Car and Driver” as I do in “Nature.” My point is that peer-review just doesn’t add any value to a publication. I’ll take a heavily debated blog post, anytime. Peer review may already be an anachronism!

*I know, because I’ve been there, many years ago. Sometimes the damn reviewers don’t even read the publication!

Korhola commented 02/10/2009
12:37
Atte Korhola: The debate has evolved into a very interesting one, so thanks to everyone for participating.

Garfield asked why I write on a science forum like this, when the matters at issue cannot be decided here. My assumption was that a good many researchers also visit this site, and the liveliness of the debate shows that there is sufficient interest.

The most recent debate has concerned the "hockey stick." I know it very well. The significance of the hockey stick within climate science itself is limited (albeit important), but its role in the public image of climate change is unprecedented. It has become a sort of logo of climate change, an iconographic symbol.

I am deeply entangled in the matter, as the International Geosphere-Biosphere Programme's (IGBP) PAGES project has recently launched a new sub-program called Arctic2k. The program focuses on the climate history of the northern polar regions over the last 2000 years, and I am taking part in it as coordinator.

But I know what I am sticking my head into. The hockey stick – or rather its blade – involves a great many open questions concerning the choice of proxy series, the statistical analysis of the data sets, and data archiving and accessibility. I will do everything I can to improve access to data and to increase transparency of every kind. I will also draw on experts in mathematics and statistical time-series analysis.

The criticism from McIntyre and ClimateAudit is to be taken seriously. Mann and his partners at RealClimate essentially mock it with every new mass blog posting. In the long run, however, that may prove self-destructive.

simply remove both Bristlecones and upside down stuff and see what is left.

Or, you could actually read Steve’s posts where he is explicit about what was done and how it affects the reconstructions. Including a reference where the EXPERTS RECOMMEND NOT USING BRISTLECONES. Now, isn’t that a fresh thought??

Of course, you have no interest in determining a realistic resolution of the flawed research. You are only here to try and confuse the issues as you do everywhere you post.

This discussion reminds me of the game you play with kids where you hide a treat in one hand behind your back: you bring one hand to the front, open it, empty, put it back, you bring the other hand to the front, open it, empty, put it back. No treat! Most kids work it out by the time they are four or five …

A question for readers – is there a point at which the failure to concede a point like this creates a situation where the research is not accurately represented in the research record?

In response, let me say that I, as an engineer, view the scientific process as a closed-loop control system in which the error signal is the deviation of hypothesis from reality. The output of the system can be manipulated by breaking the feedback loop, which can be done, and is being done, by evasion and non-response to queries and criticisms.

Speaking outside the domain of science, I think the system is being gamed. There is a pattern of behavior going on of which the Hockey Team plays only one part. Their part is to present a thesis with plausible confirmability, break the feedback loop, and sustain the game long enough for other players – outside of science – to exploit it for political ends. Whether my assessment of the big picture is correct or not is irrelevant to the science (which is why this may get snipped). But your efforts to do the science right are greatly appreciated by many on the outside looking in.

Re: Pops (#224), who wrote: "…I think the system is being gamed. There is a pattern of behavior going on of which the Hockey Team plays only one part. Their part is to present a thesis with plausible confirmability, break the feedback loop, and sustain the game long enough for other players – outside of science – to exploit it for political ends."

Very concisely expressed and, in my considered view as a scientist, you’ve hit the nail squarely on the head. These people have put the end before the means, and in so doing have betrayed their professional probity, have subverted science — especially physics — and have wrecked the integrity of the entire field.

I actually haven’t followed this closely for a reason. I don’t think the paleo evidence is central to the AGW argument. AGW is about what happens if you burn hundreds of Gtons of fossil carbon over a few decades. That has never been done before, so there will only be very indirect guidance at best from whatever we can glean from the older history.

But thanks for the pointer to the report, and Chap 4. It does seem very informative. And to respond to Peter, yes, I do see a sentence that starts “While “strip-bark” samples should be avoided for temperature reconstructions…”.
I’ll need to read more to get a feel for the context.

Steve: “Big picture” questions are important but are not relevant to the questions at hand. Editorially, I don’t permit two sentence exchanges debating the big picture to avoid every thread being exactly the same after 12 comments. So please do not respond on this thread to Nick’s musings here on the big picture.

Re: bender (#240),
Deep Climate – Gavin’s new guru – is trying to make dendroclimatology sound as though it is deeply rooted in hardcore physiology. If that were the case there would be lots of manipulative experiments quantifying the temperature-growth relationship. I know of one. Done in 2006. The fact is it is rooted more in archaeology. Choosing sites is guesswork informed by – you got it – past experience, NOT insight. Figure out where the hockeysticks are and just keep on mining them. Reckoning that treeline tree growth might be temperature-limited is hardly rocket science. Perhaps Deep Climate should consider coming here to ask me questions.

>Whether the comment appears or not is somewhat immaterial. What matters is whether the graphics are corrected – by removing the Tiljander lake sediment series and the defective pine tree series. I will wait the week-end before passing judgement.

They wouldn’t post your comment, but you think they might pull the hockey sticks?

I should stop commenting, as Pocahontas sang:

How high does the sycamore grow? If you cut it down, you’ll never know.

Deep Climate – Gavin’s new guru – is trying to make dendroclimatology sound as though it is deeply rooted in hardcore physiology. If that were the case there would be lots of manipulative experiments quantifying the temperature-growth relationship

Far from physiology and biophysics. These models have no relationship to any physiological properties, and hence to the ontogenetic/metabolic efficiency laws that I am aware of (the Hugershoff growth curve is a fitted curve), and the interactions require significant measurements at the quantum level, i.e. at the molecular interface.

We re-analyze the assumptions underlying two recently proposed ontogenetic growth models [Nature 413 (2001) 628; Nature 417 (2002) 70] to find that the basic relations in which these models are grounded contradict the law of energy conservation. We demonstrate the failure of these models to predict and explain several important lines of empirical evidence, including (a) the organismal energy budget during embryonic development; (b) the human growth curve; (c) patterns of metabolic rate change during transition from embryonic to post-embryonic stages; and (d) differences between parameters of embryonic growth in different taxa. We show how a theoretical approach based on well-established ecological regularities explains the observations where the formal models fail. Within a broader context, we also discuss major principles of ontogenetic growth modeling studies in ecology, emphasizing the necessity of ecological theory to be based on assumptions that are testable and to be formulated in terms of variables and parameters that are measurable.

To return to the topic of this post, I would suggest that the problem with peerreviewedliterature has always been there. In the past, flawed papers that were not that important were simply never corrected. Those more important would be revisited as people tried to include the new results in their own work or to make practical use of them. This would start a cycle of refinement of the results, often over decades. The idea that a result is a flawless gem simply because it appeared in Science is preposterous. Still, a simple matter such as using Tiljander upside down should have been addressed by the editor. In my experience, however, editors take a completely hands-off approach to replies.

Re: Craig Loehle (#248)
The problem is not the scientists but the legions of activists and media who treat the label "peer review" as a stamp of truth. This means that nonsense papers get repeated and their conclusions accepted as general knowledge even if the scientific community collectively ignores the work.

For this reason, editors should be allowing much more detailed comments and ensuring that the author’s replies actually address the criticisms (a comment submitter should automatically be a reviewer on the author’s reply). This would make it much easier to identify papers that were junk in the first place.

It’s a long time since I had anything published, but I don’t recall stringent examinations of results, data, methods or code. That just didn’t seem to be the point of peer review. Which is fine if your work is of interest to a limited number of specialists and did not look like it was going to have practical consequences to the public (mine fell into this category). Looking back, I don’t think I would have welcomed a whole heap of blog feedback a la CA on my research and methods, but it sure would have forced me to sharpen my act (which incidentally is what happened when I went into industry …) It is not healthy to deify “peerreviewedliterature” in this way – it just doesn’t operate in the way that is being assumed.

Would it be oversimplifying to say that the upside-downness of Mann's graphs would result in the Medieval Warm Period showing as a Medieval Cooling Period, and the Little Ice Age showing as a Little Warming Age?

Nick,
Here’s what you have been sidestepping: If you agree that the Tiljander series should not have been used, as you have stated, then the result is that the sensitivity test in which the Graybill chronologies were removed would have failed. Without the Tiljander series to maintain the hockey stick and the MWP/CWP relationship, the results of that test would have been very different. This renders claims of robustness invalid, in my opinion, and would be grounds for retraction if it were mine. Sidestep that.

>What matters is whether the graphics are corrected – by removing the Tiljander lake sediment series and the defective pine tree series. I will wait the week-end before passing judgement.

Richard, you have to give them more time to reprocess those graphics, to get the results without Tiljander.

I’m sure that’s what they are doing, and that’s why your comment is not up yet. Otherwise it would be embarrassing. They’ve had 50 comments in that thread since then, including one that calls into question almost every single hockey stick.
I even got one through under another name!

I notified him of the upside-down problem on Sept 3, pointing to our PNAS Comment. He replied at the time.

I wasn’t aware that Mann et al. had flipped the Korttajarvi series, and I’m afraid that I didn’t see your correction in PNAS. I’ve plotted our original series along with the re-orientation of the Korttajarvi data to get a feel for the effect on the overall result (attached).

I offered him the opportunity to post a thread here without any editorial restrictions – he was indignant at the very idea, he didn’t want to catch cooties/kudies. “Zach has kudies/koodies/coodies. He’s gross, I don’t want to sit next to him! EWWWWWW! Yucky”

I also asked him for some of the “publicly available” data that wasn’t actually publicly available. He refused. Then had a temper tantrum and said not to contact him in the future. I notified Science of the data refusal and asked them to intervene. Nothing from them yet; I should refresh the request.

“I offered him the opportunity to post a thread here without any editorial restrictions – he was indignant at the very idea, he didn’t want to catch cooties/kudies. “Zach has kudies/koodies/coodies. He’s gross, I don’t want to sit next to him! EWWWWWW! Yucky””

“I also asked him for some of the “publicly available” data that wasn’t actually publicly available. He refused. Then had a temper tantrum and said not to contact him in the future. ”

I was here at the time, and that’s not how I would characterize his reactions. Joe Romm has temper tantrums.

…
As you may know, I’ve been discussing Kaufman et al 2009 at http://www.climateaudit.org and would welcome your participation. In addition, if you wish to post a thread setting out the paper in whatever terms you wish, you are more than welcome. Let me know and I’ll provide you a password.

One of the questions that I’ve been discussing at Climate Audit is the apparent disconnect between the sampling and archiving program originally outlined for the 30 NSF sites (the original program description stated that data would be collected from the sites in a consistent format), the limited archive to date and what was used in Kaufman et al 2009, the topic of a current thread. Any light that you can shed on this would be greatly welcome.

In one of your meeting notes, you mention that you intended to “publicly defend” the findings of these studies. Climate Audit represents a relevant forum for such public discussion and I hope that you can participate.

Kaufman

I did log onto the Climateaudit website about a week ago. I have no desire to engage in vicious commentary…

and later…
SM:

As I told you previously, I expect politeness from myself and my readers and have no plans to alter these policies. If there are any comments that, in your opinion, offend against these policies, I would appreciate it if you would draw them to my attention. If you are unable or unwilling to identify such comments, I would appreciate it if you would withdraw your offensive comment that I had engaged in “vicious commentary” at Climate Audit.

In Kaufman et al 2009, you stated: “We compiled available proxy climate records that (i) were located north of 60°N latitude, (ii) extended back at least 1000 years, (iii) were resolved at an annual to decadal level, and (iv) were published with publicly available data”. Given your failure to provide URLs for proxies 6- Lake C2; 19- Lake Nautajarvi; 21- Lake Lehmilampi; 20- Lake Korttajarvi or the annual versions of proxies 7,12,13 and 16, I take it that the claim to have used “publicly available data” is, at least in part, incorrect.

Your email attaching pdfs of various articles is unresponsive to my request, as is your suggestion that I contact the various original authors for the supposedly “publicly available data”.

Kaufman:

I really, really tried.
Please do not write to me again.
Thank you.

There’s a bit more – I think that I posted it before. I think that my characterization of his responses is fair.

I saw what you posted last month.
I don’t think his responses are fairly characterized as
“he was indignant at the very idea, he didn’t want to catch cooties/kudies. “Zach has kudies/koodies/coodies. He’s gross, I don’t want to sit next to him! EWWWWWW! Yucky”

I also asked him for some of the “publicly available” data that wasn’t actually publicly available. He refused. Then had a temper tantrum and said not to contact him in the future. ”

Erm. Newbie here. Can’t remember quite how I ended up here, but it was linked from some mainstream news, so you must be getting somewhere.

Anyway, I have to say that despite being deeply skeptical about the AGW theory/theories, I don’t understand why you lot are so gratuitously rude to Nick Stokes. The questions he asks seem reasonable to me (as a not-very-technical reader) and your responses generally assume bad faith on his part and throw ad hominem instead of answering. He doesn’t appear at any point to ignore any reply. Is there history to this that isn’t obvious from the thread? You seem to come across as just as defensive/offensive as Mann and his cronies with regard to your hypotheses.

Anyway, the question that I still don’t think has been answered is about the specific issue at hand, rather than general issues regarding Mann’s general methodology. I don’t see anywhere that you’ve established that it matters whether the data series is inverted; Mann says it doesn’t matter. You (as a group) contend that it does matter because it makes the research nonsensical, but that doesn’t seem to make a lot of sense in the context. It may well be true, but does it affect Mann’s argument? No, in my view. You would need to show that it invalidates his methodology.

Is this a reductio ad absurdum argument that assumes Mann is correct and then shows that he is correct only from false premises? That seems to be the line you’re taking, but I don’t see anywhere that it’s actually been explained.

FWIW, I think Mann is talking total b*ll*cks, but it doesn’t appear that you’re doing a good job of proving it.

The reason it’s not always easy to “prove” it is what Steve Mc refers to as “the pea under the thimble”. Mann’s response to the upside down curve is to simply hide behind the math, that the sign “doesn’t matter” once you let the magic algorithm go do the voodoo it do so well.

Some seem to want to give a free pass on this, but we are constantly told that you need to be a “climate scientist” to understand the actual physical phenomenon behind climate change. It’s not just the math. It’s not just the statistical analysis. Even “geologists” need not apply, since you have to actually understand the physical phenomenon and only a highly skilled “climate scientist” can understand all the nuances.

Then a simple thing like an upside-down proxy somehow “doesn’t matter” and is held onto by its adherents like a hungry dog hangs onto a bone.

WRT to Nick Stokes, I agree he’s been very forthright and polite on this thread. The history probably relates more to some debates he’s taken part in on other blogs.

Re: John M (#276),
Can someone explain to me why geologists are subjected to such vitriol by the climate-science community?

One of my close friends is a leading geologist (we are talking double-digit publication counts in Nature alone), and he is clearly someone of profound intellect and scientific curiosity. If he is indicative of geologists as a whole, then I would expect input from geologists could only benefit any scientific endeavour.

Can someone clarify for me?

Oh, IIRC, the geologists’ position is that even if the “Hockey Stick” were true (and that seems ever more unlikely as each day passes), on any sort of reasonable geological timescale it is more or less indistinguishable from noise anyway, and not very interesting at all in fact.

Your second part answered the first. There’s no celebrity, cash or cultism if it “isn’t interesting”. Do you know of many celebrity accountants? The skeptics just need a vocal focal point to smack down the AGW mob.
The PRL is now passé, like pen-and-ledger books. CA is a much better way because it brings accountability to the process, rather than the well-documented “done cheap, mates’ rates” we have seen so far. Your science is either robust or shredded. All in one day.
regards

You can see from the data that the Tiljander proxy (#20) is colder in 1025 and 1225 than in 1125, yet in the actual Tiljander proxy it is colder in 1125. Mann has a similar problem, only worse, because he then adds in a hockey-stick blade for the post-1800 period, which Kaufman cut off. Kaufman is issuing a correction for his upside-down use.

You are right with regards to Nick Stokes. From other threads, there appears to be some sort of history, but I’m not sure what it is.

Help me out, do I have this right? If I regress height of son on height of father I find height of father is a very powerful predictor. If I regress height of son on negative height of father, I find the same but the sign of my regression coefficient changes. No problem making future predictions if I remember what my variables mean.

Now suppose I have a father/son height dataset, but it’s corrupted so that some tall fathers have their sons height grossly understated, say they’re all 1 foot tall. Suppose too I’d really be happy if people were getting taller.

I do a regression on my corrupted dataset. I look at what it predicts about future average height – uh oh, we’re going to get shorter. But what if I accidentally flip the data before regressing, say I use 10 minus height instead of height? Now my tall fathers have sons who are 9 foot giants.

Now I take this model and project on a dataset where the corruption is even worse (late 20th century sediment data?)- people are going to be really huge. But if I get a clean dataset (distant past) I have a problem which is the height relationship is upside down – the really tall people look short because of the flipping, and vice versa. Hence the reversal of the MWP and little ice age historically, and the emergence of a hockey stick if the x-ray density is even more inverse to temperature outside the calibration period.
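The height analogy above can be put in code. This is a toy sketch with entirely invented numbers (none of it is the actual Tiljander or Mann et al. data or method); it only illustrates how a contaminated calibration sample can teach a regression the wrong sign, so that clean out-of-sample data come out inverted:

```python
import numpy as np

rng = np.random.default_rng(0)

# True relationship: the proxy DECREASES as temperature rises
# (by analogy: thin varves = warm, thick varves = cold).
temp_past = rng.normal(10.0, 1.0, 200)           # clean "pre-1720" period
proxy_past = 50.0 - 2.0 * temp_past + rng.normal(0, 0.5, 200)

# Calibration period: non-climate contamination pushes the proxy UP
# while temperatures also happen to rise, reversing the apparent sign.
temp_cal = np.linspace(10.0, 12.0, 150)          # "1850-1995" warming
contamination = np.linspace(0.0, 12.0, 150)      # ditches, bridges, farming
proxy_cal = 50.0 - 2.0 * temp_cal + contamination + rng.normal(0, 0.5, 150)

# Calibrate temperature against the proxy over the contaminated window.
slope, intercept = np.polyfit(proxy_cal, temp_cal, 1)
print(f"calibrated slope: {slope:+.3f}")         # positive: the sign is wrong

# Reconstruct the clean past with the miscalibrated model.
recon_past = intercept + slope * proxy_past
corr = np.corrcoef(recon_past, temp_past)[0, 1]
print(f"reconstruction vs. true past temps: r = {corr:+.2f}")  # negative
```

With these invented numbers the calibration learns a positive proxy-temperature slope, so warm periods in the clean past reconstruct as cold and vice versa, which is the MWP/LIA reversal described above.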

“Corrupt” may not be the right word. Tiljander (2003) warned that the climate signal in the varves was accompanied by progressively stronger human-activity signals, from circa 1720 on. Mann et al. (2008) acknowledged this problem… and then seemingly proceeded to ignore it.

Very few people understand what the temperature signal is, that the Tiljander proxies were wrongly calibrated to, 1850-1995. I believe (but am not certain) that it was correlated to a composite, regional temperature grid that was computed by a worldwide model. Jean S, Steve Mosher, Steve McI, bender, and some others know this aspect. It is significant, because Tiljander has a figure in her paper that clearly shows that the local average temperature hasn’t displayed much if any trend since about 1880. This “at-a-distance” influence of temperature upon data series might be a bug or a feature, depending on one’s desires, and one’s sense of rigor.

The Upside-Down episode and the ongoing efforts to justify Mann et al’s procedures often appear to be more in the spirit of Inspector Clouseau than Professor Moriarty.

Thanks a lot this helps. Substitute “signal distorted by non-climate influences” for corrupt, that’s what I really meant.

So from 1720 onward the temperature signal is obscured by human activities, and the impact of these activities was mostly to increase x-ray density, so a literal reading of the proxy would make it appear colder than it was. This proxy was calibrated against 1850-1995 temperature data, but flipped upside down before doing so. If the 1850-1995 temperature data showed an increase, which I’m guessing it did, this would incorrectly show increasing x-ray density to be correlated with rising temperatures. If x-ray density continued to increase post-1995, for non-temperature reasons, the regression would predict increasing temperatures (I’m not 100% clear on whether they used the sediment series to forecast post-1995 temperatures and produce an artificial hockey stick). Using the regression on pre-1720 data would cause an inversion – low x-ray densities (suggestive of warmth) would predict cold, and vice versa, causing the reversal of the MWP and the LIA (if I’m following correctly).

Re: J. King (#288),
The Tiljander varve series and the rest of the long-range proxies in Mann et al (2008) were calibrated to the instrumental temperature record 1850-1995, then used to derive temperatures before 1850. They weren’t employed to make predictions past 1850.
More background in the post Connolley Endorses Upside-Down Mann and other posts referenced in that post & thread.

So I can see why there was confusion. Whether a proxy series grows with temperature or declines with temperature seems irrelevant, this will be reflected in the sign of the regression coefficient and the predictions will work (subject to a lot of caveats) if the relationship of the proxy to temperature is consistent in the calibration period and the prediction period. The real problem seems to be the distortion of the temperature signal in the proxy during the calibration period. The comment below is the crux:

“The Tiljander sediments are the combination of two unrelated processes: a presumably climatically driven process in which narrow sediments are interpreted by the authors as “warm” and thick sediments as “cold” and a nonclimatic process in which sediments are produced by ditches, bridges and farming”

So upside down means something different than what many seemed to think – it’s not the sign of the predictor that matters but the fact that, when the proxy is calibrated as it is, its correlation with the reconstructed temperatures is the reverse of what Tiljander says it is.
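The point that plain regression is insensitive to the sign of a predictor can be checked in a few lines. This is a minimal sketch with made-up data, not anyone’s actual reconstruction code: flipping the predictor’s sign flips the fitted coefficient but leaves the predictions unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
temp = rng.standard_normal(100)
proxy = 2.0 * temp + 0.1 * rng.standard_normal(100)  # proxy rises with temp

def fit_predict(x, y):
    """Ordinary least-squares line fit; return slope and fitted values."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept + slope * x

s1, pred1 = fit_predict(proxy, temp)
s2, pred2 = fit_predict(-proxy, temp)  # same proxy, sign flipped

print(s1, s2)                    # coefficients have opposite signs
print(np.allclose(pred1, pred2)) # but the predictions are identical
```

That is why the mere orientation of the series is not the issue in a blind multivariate regression; the problem only arises when a sign-sensitive step (screening, or a physically interpreted calibration) enters the pipeline.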

Actually this suggests that the whole enterprise of reconstructing 1500 years (!) of temperature history from 150 years of real data by regressing it against various proxies is a hopeless exercise. At least Tiljander had an explanation for the divergence of the proxy from the real temperatures post-1930 (forest clearing, bridge building). No agreed-upon explanation seems to exist for the divergence of the northern forest tree-ring proxies post-1960. The 150 years we have available for calibration are the worst possible ones because the amount of human influence on the proxies is higher than at any previous time.

If high quality proxies without these kinds of problems existed we wouldn’t be talking about tree rings and sediment data. It seems to me the whole exercise is an elephant balanced on a rubber ball.

Re: J. King (#291),
As a latecomer to this story, I have realized that pieces of the story of how Mann et al (2008) employed the Lake Korttajarvi varve proxies are spread out at Climate Audit and beyond. Here is a compilation of some links to primary source material.

>The claim that “upside down” data were used is bizarre. Multivariate regression methods are insensitive to the sign of predictors. Screening, when used, employed one-sided tests only when a definite sign could be a priori reasoned on physical grounds.

Screening IS used with Tiljander, which makes the second sentence irrelevant. The issue is whether it was oriented correctly or incorrectly. Even the calibration items are not an issue here. Mann used the data upside-down.

However, the proper reading of this paragraph is that it implies screening was not used. Regression methods are blind, so the only possible way of ending up with inverted data is if screening is used, and screening is only done when a physical interpretation can be reasoned. We can’t do that with Tiljander, so it was not screened.

Any other reading would require a sentence saying that Tiljander was oriented correctly. If he knows screening was used, then the “regression methods are blind” line should follow the screening line, and the screening line itself should simply say that things are oriented properly.

All of this is even without the calibration/divergence issues.
The algorithm for Mann’s CPS code assigns Tiljander class 4000, which goes into the screening code. There the proxy is tested for correlation above 0.1; an inversely correlated series would be dropped. The code in which Tiljander is processed is not blind to the sign of the predictor.
At this point, Mann can only say McIntyre is correct, or McIntyre is incorrect and the Tiljander proxy is oriented correctly.

Mann does neither, which means he thinks the Tiljander series is not being screened, and the method is blind to the sign of Tiljander.
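The sign-sensitive screen described above can be sketched in a few lines. The 0.1 threshold comes from the comment; the function name and the synthetic data are invented for illustration and are not Mann’s actual CPS code:

```python
import numpy as np

def passes_one_sided_screen(proxy, temp, threshold=0.1):
    """Keep a proxy only if it correlates POSITIVELY with temperature
    above the threshold; an inversely correlated series is dropped."""
    r = np.corrcoef(proxy, temp)[0, 1]
    return r > threshold

rng = np.random.default_rng(2)
temp = np.linspace(0.0, 1.0, 146) + 0.1 * rng.standard_normal(146)

warm_proxy = temp + 0.2 * rng.standard_normal(146)   # correlates positively
cold_proxy = -temp + 0.2 * rng.standard_normal(146)  # correlates negatively

print(passes_one_sided_screen(warm_proxy, temp))  # True: retained
print(passes_one_sided_screen(cold_proxy, temp))  # False: dropped
```

A screen of this form is exactly what makes the pipeline sensitive to the orientation of the series: the same proxy passes or fails depending only on which way up it is fed in.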

Oh the irony – McIntyre writes: “The second issue is the journals are not really designed to cope with the total combativeness of Mann and coauthors and their seemingly supreme confidence that they can say whatever they want with impunity.”

Then he finished his “serious” attempt with that juvenile, antagonistic infomercial of a chimp lighting a cigar.

Incidentally, the journals are not designed to cope with McIntyre’s style of guerrilla warfare either.

He’s a smooth operator, that’s for sure.

Steve: unfortunately, Mann’s use of upside-down data, his refusal to issue a corrigendum and the climate community’s acquiescence in such antics lend themselves to satire. I haven’t yet written about EPA’s reliance on Upside-Down Mann in their response to the Petition for Reconsideration, but it’s an amusing incident as well.

It is interesting that the self-righteous “Citizenschallenge” gives no evidence of having intellectual and scientific standards. He keeps interjecting complaints with no analysis, evidence, or reasoning. How curious….

6 Trackbacks

[…] McIntyre and Ross McKitrick published a peer reviewed critique of the hockey stick in 2005, and in 2008 had a comment published in PNAS. In the intervening years the Wegman and NAS panels accepted […]

[…] when Mann himself has issues with his own papers such as incorrect lat/lon values of proxy samples, upside down Tiljander sediment proxies, and truncated/switched data, is mind boggling. It’s doubly mind boggling when these errors […]

[…] or down but the emails discuss corrections to published papers for the same problems. Below are a couple of graphs taken from climate audit. My blog is acting up so you will see them as I am writing rather than after I’ve […]