Chladni and the Bristlecones

Some of the CA posts that I’ve found most interesting to write have been about identifying Chladni patterns in supposedly “significant” reconstructions when principal component methods have been applied to spatially autocorrelated red noise. This is by no means a new observation: warnings about the risk of building “castles in the air” with principal components are prominent in the older climate literature (from the 1970s), but were seemingly forgotten in the IPCC period. I did several posts on Chladni patterns in the Stahle/SWM network used in MBH99 and on Chladni patterns in the Steig et al 2009 Antarctic network, observing in the latter case that eigenvectors said in the Nature article to have physical significance were, in fact, nothing more than the expected Chladni patterns from spatially autocorrelated data in a region shaped like Antarctica.

The second half of the McShane and Wyner paper applies a sort-of MBH99-style analysis to the Mann 2008 network. In their case, instead of applying PCA to the North American tree ring (and similar) networks and combining the PCs with individual proxies as in MBH, they applied principal components to the entire proxy network. Although principal components were integral to MBH99 (and to the reprise of MBH in Rutherford et al 2005), they are not an important feature of the majority of AR4 studies, which, for the most part, reverted to the simple CPS methods of Bradley and Jones 1993 and Jones et al 1998 or to poorly understood RegEM.

In my earlier discussions of Mannian networks, we observed the concentration of proxies in the bristlecone area, but since then I have developed some tools for analysing spatial autocorrelation that I didn’t have at the time of the original study.

Applying these methods to the Mann et al 2008 network (McShane Wyner 93 proxy selection) yields some provocative results. (Results for the Gavin Schmidt 55 proxy subset are probably similar and I’ll get to them on another occasion.)

I’ve done three plots below showing weights in the style that I’ve been using: the area of each filled circle is proportional to the weight, with red for positive weights and blue for negative weights. (I’ve plotted the gridcell beside Sheep Mountain in orange to avoid it being overprinted by the Sheep Mountain gridcell; there’s no other significance to the orange.) In each case, I’ve added up the weights in a gridcell and plotted values by gridcell. Click on the plot below for a larger version.

The figure at left shows simple counts. In the Mann 2008/MW93 subset, there are 24 bristlecone series, which are reflected in relatively high weights in the bristlecone area. The middle plot is the eigenvector corresponding to a PC1 from spatially autocorrelated red noise located at the locations of the actual proxies. For this calculation, I used a spatial decorrelation of exp(-distance_km/1200), a decorrelation factor more or less equivalent to what we’ve seen in station data. It’s worth experimenting with this: by working directly on the correlation matrix, you can get the eigenvector directly (you could do lots of simulations, but you don’t need to). Notice what taking the principal component does: it focuses the weights on the bristlecone area and makes “peripheral” weights almost negligible. This is a more extreme version of what we saw with the Antarctic Chladni patterns. There the PC1 tended to overweight the center of the region and underweight the boundary (this is the Chladni pattern of a drum). It’s even more extreme here: the Chladni pattern is really focused on the bristlecone area, with negligible weights on the periphery. Take a look at the graph and I’ll discuss the actual proxy network eigenvector afterwards.
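As a minimal sketch of that calculation (the proxy coordinates below are hypothetical stand-ins for the real network; the 1200 km decorrelation scale is the one used in the text), the PC1 can be read directly off the correlation matrix, with no red-noise simulations needed:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical proxy locations (lat, lon in degrees) standing in for the 93-proxy network
lats = rng.uniform(25, 70, 93)
lons = rng.uniform(-130, -60, 93)

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

n = len(lats)
d = np.zeros((n, n))
for i in range(n):
    d[i] = great_circle_km(lats[i], lons[i], lats, lons)

C = np.exp(-d / 1200.0)            # assumed spatial decorrelation exp(-d_km/1200)
evals, evecs = np.linalg.eigh(C)    # eigh: C is symmetric; eigenvalues ascending
pc1 = evecs[:, -1]                  # eigenvector with the largest eigenvalue
weights = pc1 * np.sign(pc1.sum())  # fix the arbitrary sign convention
```

Since every entry of C is positive, the leading eigenvector has entries all of one sign (Perron-Frobenius), so the sign fix above makes all weights positive; clustered locations pick up the largest weights.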

The eigenvector for the actual proxy PC1 is “remarkably similar” (TM – climate science) to the eigenvector for spatially autocorrelated red noise at the locations of the actual proxies. However, there are a few very interesting details. There are two negatively-oriented proxies in the actual PC1 in Central America. The Yucatan proxy in question has a very elevated medieval warm period; because this is antithetical to the bristlecone pattern, it is flipped over in the PC1. This is related to the phenomenon observed in McMc 2005 (EE), where we observed that proxies with a warm early 15th century introduced into an augmented network would be flipped over because of the way that bristlecones imprinted the PC1.

Other noticeable details: the two Tiljander proxies retained in the Mann 2008 (MW93) network fight the tendency to downweight “peripheral” proxies in the PC1 and are noticeably weighted. The “Tornetrask” series also receives a higher weight than in the spatially autocorrelated version – as I noted on another occasion, Briffa’s Yamal series is averaged with Tornetrask (and Taimyr) and used as “Tornetrask”.

Next, here is an interesting comparison of the eigenvalues for spatially autocorrelated red noise at actual proxy locations and for the actual M08 (MW93) proxy network. On the left is a “scree” plot showing the squared eigenvalues (log scale) for the actual proxy network as compared to spatially autocorrelated red noise. Gavin Schmidt has an eigenvalue comparison in the Schmidt et al comment on McShane and Wyner, but does not consider the potential impact of the non-random spatial distribution of proxy locations. On the right are the cumulative eigenvalue weights.

The 2nd and higher eigenvalues for the proxy network are higher than the corresponding eigenvalues for spatially autocorrelated red noise. However, the first eigenvalue for spatially autocorrelated red noise is a LOT higher than that of the actual network, and the cumulative eigenvalue weights for the actual network are lower at all points than those for spatially autocorrelated red noise.
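The scree and cumulative eigenvalue weights in these plots are just normalized eigenvalues of a correlation matrix. A minimal sketch with toy matrices (a strong common-signal network versus near-independent series) shows how a dominant first factor concentrates the cumulative weight early:

```python
import numpy as np

def eigen_weights(C):
    """Return per-PC variance shares (descending) and their cumulative sum."""
    evals = np.linalg.eigvalsh(C)[::-1]   # eigenvalues, descending
    share = evals / evals.sum()           # proportion of variance per PC
    return share, np.cumsum(share)

n = 20
# toy correlation matrices: strong common signal vs. independent series
C_signal = np.full((n, n), 0.6) + 0.4 * np.eye(n)   # all pairs correlated 0.6
C_noise = np.eye(n)                                  # uncorrelated
share_s, cum_s = eigen_weights(C_signal)
share_n, cum_n = eigen_weights(C_noise)
# a dominant first factor front-loads the cumulative weight
assert cum_s[0] > cum_n[0]
```

For the common-signal matrix, PC1 alone carries 62% of the variance (eigenvalue 12.4 of trace 20), whereas for independent series every PC carries an equal 5%.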

I’m mulling over the interpretation of these results, but my first impression is that the results of the actual network are forced by the unique properties of the bristlecones. The bristlecone pattern is very distinct and imprints the PC1 of the actual network; lower eigenvectors are various contrasts with the bristlecones and are more heavily weighted because the bristlecones are somewhat sui generis.

Chladni patterns are pretty interesting in themselves. In the present case, the bristlecones result in a mathematically interesting sort of symmetry-breaking.

As observed on other occasions, the NAS panel said that strip bark should be “avoided” in temperature reconstructions, but, like the dead parrot in Monty Python, they were re-sold in Mann et al 2008 (notwithstanding its claims that it complied with NAS recommendations) and included in the MW analysis. In the Schmidt comment, he says that these have been vindicated by Hughes and Funkhouser 2009 (not available at the time of Mann et al 2008), with Gavin Schmidt pointedly avoiding any discussion of Linah Ababneh’s failure to replicate Graybill’s Sheep Mountain chronology.

98 Comments

From your graph of counts, is it right to say then that the number of bristlecone series and the correlated variance is making it a strong player in the matrix despite its limited location. Am I understanding your point correctly?

Very good post, Steve. May I add a couple of thoughts?
1. Principal Component Analysis is a particular form of the Generalized Linear Model approach, which also encompasses linear regression. These models are based on the assumptions required for least squares to be minimized, and their significance tests are based on the assumption that observations are a random sample. If the sample is not random, and especially if it is not proportional (i.e. sampling ratios vary for different subsets of the data), one has to apply the necessary inverse weights (the reciprocal of the sampling ratio) to avoid giving unduly high weight to areas where samples are relatively more abundant.
2. The original applications of PCA (in psychology, especially the 1904 Spearman study looking for an underlying General Intelligence factor explaining the correlations among various intelligence tests) assumed that ONE underlying factor explained most of the observed variance and covariance of the observations, the rest being peculiarities and quirks of the individual variables. Consequently, the PCA algorithm is designed in such a way as to maximize the amount of variance attributable to the first factor extracted.

But this is not correct for applications where SEVERAL underlying factors are at play. The first factor will then not reflect much of the overall variance, just a general trend; in the case of paleo-reconstructions of climate, if the millennial trend of the existing observations is, for instance, stable (which might be due to the particular set of observations used), the first factor will reflect this trend and give a flat trace similar to the handle of a hockey stick. But other factors (second, third, etc.) would reflect other aspects of the observations’ variance, such as warmer or colder periods.
To account for this, one classic approach is the variant known as Multiple Factor Analysis (after L. Thurstone, 1937). Another approach is to build a composite measure involving several different factors (or at least the first few, as far as they are statistically significant) weighted by their eigenvalues, i.e. by their proportional contribution to the overall variance of the observations. The composite measure might be a simple (weighted) sum of the various component scores, or some other functional form as dictated by theory.
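Point 1 (inverse sampling weights) can be illustrated with made-up numbers: suppose one region covering 10% of the area contributes 24 of 30 series, all reading high, while the other 90% of the area contributes 6 series reading low. Weighting each series by the reciprocal of its region’s sampling ratio restores each region to its area share:

```python
import numpy as np

# hypothetical: region A (10% of area) has 24 series, region B (90%) has 6
values = np.concatenate([np.full(24, 1.0),    # region A series all read 1.0
                         np.full(6, 0.0)])    # region B series all read 0.0
area_share = np.concatenate([np.full(24, 0.10), np.full(6, 0.90)])
count = np.concatenate([np.full(24, 24), np.full(6, 6)])

sampling_ratio = count / (30 * area_share)    # series per unit of area share
w = 1.0 / sampling_ratio                      # inverse weights
w /= w.sum()                                  # normalize to sum to 1

naive = values.mean()       # dominated by the oversampled region A
weighted = values @ w       # region A now counts only its 10% area share
```

Here the naive mean is 0.8 (region A swamps the average), while the inverse-weighted mean is 0.1, i.e. each region contributes in proportion to its area.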

Without being a specialist in dendrochronology, one may easily see that in the field of paleo-climate proxies several underlying factors are surely at play: apart from temperature at the time a tree ring is formed, you have for instance temperatures in later times affecting the width of the previous rings (and requiring “de-trending”), and also humidity, species of tree, location and what not; and then there are other proxies besides trees, like ice cores or sea sediment cores, that add their bit of diversity to the data set.
Taking into account, in one way or another, other underlying factors (which may potentially be as many as the variables – in this case, as many as the number of series) may undoubtedly make for a better reconstruction (in a space of a few dimensions) of the mind-boggling complexity of so many series of so many proxies, each covering different lengths of time.
There is some discussion in MBH and subsequent papers about using one or more principal components, but I have not seen any in-depth discussion of the underlying statistical and mathematical issues involved.

Relevant to the Sheep Mountain bristlecone pine proxies, Matthew Salzer et al. published “Recent unprecedented tree-ring growth in bristlecone pine at the highest elevations and possible causes” in the Dec. 1, 2009 issue of PNAS. Link to full text.

As the title suggests, they find no post-1960 “Divergence Problem” to speak of. High recent temperatures are accompanied by high recent growth rates, for bristlecones at the treeline. On the strip-bark issue, they write:

At both upper-treeline (Sheep Mountain) and non-upper-treeline (Cottonwood Lower) locations in the White Mountains, we found no differences in modern growth between the whole-bark and strip-bark trees that adequately explain the wide modern rings at the upper treeline (Fig. 3, Table S2). Thus, it is unlikely that the modern wide rings are a result of a change to the strip-bark growth form in recent centuries after partial cambial dieback.

A search of the Salzer et al. manuscript for the name “Ababneh” returns no hits. In the Supplementary Information (PDF), Figure S4 discusses Graybill’s and her results. The legend reads (emphasis added),

Graybill and Idso (1) strip-bark (solid line) and whole-bark (dotted line) collections plotted as raw ring widths as in this study (A) and as standardized indices (e.g., figure 5 in reference 1) (B). Mean segment length 1,052 years for strip-bark and 279 years for whole-bark. Note: the divergence in modern period is clearly a result of the standardization technique used by Graybill and Idso (1). A similar result was obtained by Ababneh (2), with little difference between strip-bark and whole-bark raw ring widths… and more substantial differences after data processing…

Hector, while I agree with your caveats in one sense, you have to be careful in what you transpose. Most regression analyses regress an effect against multiple causes, and you don’t want multicollinearity. In reconstructions – and this is a point that 99.9% of all people are insufficiently attentive to – it’s a calibration problem where you have effects, but not the cause. You actually want multicollinearity, i.e. a “signal”. Interpretation is a different thing. In plausible mathematical situations, adding PCs below the PC1 distorts things in a network meeting the usual assumptions. Of course, if the PC1 is taken hostage by something like bristlecones, then it raises a variety of interesting mathematical issues, none of which have been articulated by proponents.

Of course, Steve, I know that calibration (or eliciting underlying trends off a mass of individual series) is different from the usual regression problem. My point is that even if one wants all the variables to be correlated, it usually happens that many correlations are weak, and perhaps some are even negative, and thus a single factor may be unable to explain most of the variance, because other factors (orthogonal to each other, or made to be correlated by rotation) also explain a significant portion. It is quite plausible that in a given situation, even without “PC1 taken hostage by something like bristlecones”, there might be other factors interfering (if PC1 is itself distorted by bristlecones and the like, so much the worse). The “other factors”, IMHO, are not necessarily to be taken as “noise” or distortions insofar as they are not random but show a distinctive pattern. In my field I have often come across datasets where PC1 is a main trend and PC2-PC3 give secondary movements and oscillations about the trend, sometimes large enough to be more than “random fluctuations”. In any case, I just wanted to put out some thoughts on the quirks of PCA, and would not even think of telling you anything about how to proceed in this matter.

My understanding of the issue is this: we are given autocorrelated data and an aperture (a bounded spatial or temporal dataset).

To obtain the significance of a PC, one must project it onto the natural eigenvectors of the “aperture/autocorrelation pairing” and read off the weightings against this basis.

It could turn out that the only significant signal is in a low order PC.

To be clear, what I am saying is that certain signals are amplified (preferentially picked out) by the aperture in question, and one must divide by that amplification to get the correct result.

Viewed another way, one must pose the problem in terms of the proportion of the generating white or Gaussian noise that is explained, not the amount of autocorrelated coloured noise (in the data record itself) that is explained.

If PC1 has no zero crossings, PC2 has one, PC3 has two, etc., then that is what would be expected of just coloured noise.
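That zero-crossing expectation can be checked numerically. For an exponentially autocorrelated (“red” noise) correlation matrix on a 1-D interval (an assumed decorrelation scale of 10 steps below), the eigenvectors are standing-wave-like, and the k-th one has k-1 sign changes:

```python
import numpy as np

n = 100
t = np.arange(n)
# exponential ("red noise") correlation on a 1-D aperture of n points
C = np.exp(-np.abs(t[:, None] - t[None, :]) / 10.0)
evals, evecs = np.linalg.eigh(C)
evecs = evecs[:, ::-1]               # reorder by descending eigenvalue

def zero_crossings(v):
    """Count sign changes along the eigenvector."""
    s = np.sign(v)
    return int(np.sum(s[:-1] * s[1:] < 0))

# PC1 has 0 crossings, PC2 has 1, PC3 has 2, ...
crossings = [zero_crossings(evecs[:, k]) for k in range(4)]
```

This is the 1-D analogue of the drum-like Chladni patterns in the post: each successive eigenvector adds one more node.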

Obviously one is stymied by ignorance of the precise origin of the autocorrelation, so it is not normally possible to know the exact eigenvectors of the aperture, and the procedure is moot.

I was recently surprised to find that the eigenvectors of the aperture (1850-2010) of the global temperature record assuming “red” noise were symmetric. I was surprised as the correlation (only dependent on the past) isn’t generated symmetrically.

I really wanted a result for “pink” noise as it is a much better spectral match and the natural filter specification given a diffusive ocean. Unfortunately the long tailed persistence of “pink” noise meant that the process would not converge in the life-time available.

An approximation would be available on the basis that pink noise is a 1/f filter and PCs have an implied frequency based on the number of zero crossings. In that case, the amount of variance explained should decline as 1/f in this 1-D case.

Great observation! Bristlecone pine data comes from areas that probably comprise less than 2% of the US land area, or .04% of the area of the globe. Yet without them, even using Mannian methods, little if any warming can be seen. Don’t get me started on that single tree in Yamal!

My approach to proxy data is very simplistic indeed. I would really like to be told why it is wrong! Ignoring for the moment the problem of spatial autocorrelation, which greatly exercises the minds of accomplished statisticians, and taking each proxy as an honest attempt to get a meaningful value that represents the state of climate at a particular place and time, we arrive at basic data that ought, if the observations are well scattered over the region of interest (Northern Hemisphere?), to be reasonably representative of the prevailing climatic conditions at every time of observation. However, observations come in an assortment of scales. If you look at the data behind MBH98 you will be convinced of this!

Why is it not reasonable to standardise every proxy to mean zero, variance one, and use only one proxy from any given geographical location, possibly thinking of a “location” as being around the size of the median US state? If using more than one proxy within a particular location – a sort of replication – deweight them appropriately so that the dense region does not tend to dominate the analysis. Now simply average across all proxies, with appropriate weights if necessary (and also real data, normally temperatures), for every time (year, generally) of observation. This elementary process produces an average value and also a standard deviation associated with that average. Anyone could do this with Excel, I’d imagine, though I never use spreadsheets.

The scale of the average values is dimensionless (a sort of average z score). One could attempt to relate it to real-world temperatures by referring to a known climate series of real values – in the case of MBH98 this would be the Central England Temperatures – but this is not really necessary when considering the general pattern of climate over the years.
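A minimal sketch of this recipe, with made-up proxy series and locations (standardize each proxy, deweight replicates at a shared location, then form a weighted average per year):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1500, 1981)
# hypothetical: 6 proxies at 4 locations (location 0 has three replicates)
locations = np.array([0, 0, 0, 1, 2, 3])
proxies = (rng.normal(size=(len(locations), len(years)))
           + np.linspace(0, 1, len(years)))        # made-up common trend + noise

# standardize each proxy to mean 0, variance 1
z = (proxies - proxies.mean(axis=1, keepdims=True)) / proxies.std(axis=1, keepdims=True)

# deweight replicates: each series gets 1 / (number of proxies at its location)
counts = np.array([np.sum(locations == loc) for loc in locations])
w = 1.0 / counts
w /= w.sum()

composite = w @ z            # weighted average z-score per year
spread = z.std(axis=0)       # simple dispersion across proxies per year
```

The composite is dimensionless (an average z-score), exactly as described above; calibrating it to real temperatures would be a separate, optional step.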

Thus is obtained a simplistic but readily understandable series that describes typical (NH) climate over time. The series can be subjected to various types of analyses. The primary one is plotting against the time axis. This can be remarkably informative, and will convince anyone who carries this sort of process through that fitting a linear trend model to a long multiple-source climate time series is misguided, to put it mildly. Grossly non-linear plots cannot be sensibly described, even as a huge approximation, by a linear trend. Anyone can do the arithmetic, but its value is minimal. Sub-sections of the series, however, may appear “by eye” to be roughly linear, and if so it makes sense to me to try fitting a trend line over a section of this type.

My preferred analysis is to regard the series as something that has come off the production line of the climate factory and to use the simple SPC approach of forming the cumulative sum of historical data. This readily reveals periods of “in spec” operation, the presence of trends if there are any, and, importantly, the occurrence of abrupt or step changes.

Applying this sort of approach to single-site data can yield most interesting things. Try it with any NW Atlantic locations – e.g. Greenland and Iceland – and note what happened in late 1922.

Data that have been pre-processed – perhaps by averaging as I’ve outlined above, or by being put through a smoothing algorithm – will be unlikely to reveal any startling step changes, but the patterns of the cusum plots form an immediate source of insight into the nature of the series. The plot also readily reveals whether smoothing has been applied to the data.
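A minimal sketch of the cusum idea, with a made-up series containing a step change (cf. the “late 1922” example): the cumulative sum of deviations from the series mean turns an abrupt step into a clear kink in the cusum trace.

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical series: white noise with an abrupt +1.5 step at index 300
x = rng.normal(size=600)
x[300:] += 1.5

# cusum: cumulative sum of deviations from the overall mean
cusum = np.cumsum(x - x.mean())

# before the step the trace drifts down, after it drifts up;
# the extreme of the cusum marks the change point
change_point = int(np.argmin(cusum))
```

The cusum slopes are roughly -0.75 before the step and +0.75 after it, so the minimum lands very close to the true change point even with unit-variance noise; by construction the trace returns to zero at the end.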

Robin, there was a great discussion at tAV on this thread a week or so ago. Our discussion followed your line of logic very closely from what I can see. In comparing the Loehle and Ljungqvist reconstructions, we got into discussions about how the scaling and offsetting of proxies or reconstructions by calibrating to instrumental data could result in spurious distortions. We mused about scaling and (relative) alignment through standardized comparisons of common overlapping (long time scale) periods between series.

So some of the weights are negative for PC1 – I don’t think I had appreciated that before.
At first glance that sounds really bad.
But MBH used more than one PC (MW2010 were wrong about this, I think) – either 3 (MBH99) or 5 (MBH98).
So the question is, when more PCs are included, do the overall weights for each site become positive?

Before people complain too much about PCs in this context, keep in mind, as I observed in the post, that M08 did not use PCs, but forms of CPS and RegEM. M08 has different issues – including ex post correlation screening, the problems of which are known in various blogs, but not understood by IPCC scientists.

As to lower order PCs – lower order PCs are typically contrasts and their addition will increase the prevalence of negative coefficients overall.

I think that you might be giving the EIV method a bit of an early pass here. From your analysis, retaining a few PCs in the regression looks to me like it will weight the bristlecones pretty automatically. Am I misunderstanding something?

I’ve been doing almost exactly this since Steve first sent me the Mann98 data, all 112 columns and 583 rows, many years ago! I made comprehensive analyses, arranged them in what I deemed to be reasonable group types, compared the groups, etc., as well as examining all 112 proxies individually. This convinced me completely that the HS was nonsense, to put it politely, and I’ve stuck with this view ever since. I circulated my analyses to various friends, and have given short informal talks on my findings. I’ve also looked at other compilations but the details have slipped my (slightly failing) memory :-(( Just now I’m looking at what I call “The Briffa Reconstructions”, which include stuff like Tornetrask and Yamal – a spectacular HS, of course.

Fredrik (Lj.) does something that is very similar I think, though not exactly the same as far as I can tell. The reason is that the method for forming standardised values may well be different. Not that I think that this would make the slightest /practical/ difference to the outcome, but the details might differ, though I’d not expect to meet any devils.

What I would really like opinions on is whether the technique that I (and presumably Fredrik) use is sensible. Of course I can’t see any serious holes in it, otherwise I’d do something different!

I see its merit as being its simplicity and transparency. I’d guess that even some people in the government and MSM might understand it, which is presumably not the case for the thoroughgoing analyses that you and Steve are able to do. If that were so the whole climate scene would have changed by now!

My huge and major concern is that I still really do not understand how the proxies are transmuted into the gold standard of temperatures. Mann’s data contained one set of temperatures, which he called Central England, but which are actually summer temperatures. The other 12 so-called temperatures were transforms, approximately “standardised” to mean zero, variance one. I have to guess that, like Fredrik’s analyses, these used a subset of the data for deciding what to use as the standardisation norm. I’ve not come across precise descriptions of the process, though I guess that they may be available somewhere. In pre-broadband days I could not possibly search for this sort of stuff.

I have therefore presumed that all the “calibration” stuff has had to rely on CET Summer values. What if these by some mischance turned out to be nonsense, or unreliable? I don’t think they are, though. I’ve made really detailed studies of CETs, and continue to do so on a monthly basis. There is a great deal of interesting stuff in them that I have not seen reported anywhere. Should I try to publish? Joke!

One other thing that always bothers me: what happens to all the data that should be taken into account but occur during the rest of the year? Tree rings are not changing their parameters during autumn, winter and spring, but temperatures might be low or high in the months that are ignored. This can’t be “scientific”, can it? Thermometers can be examined at any period or date, although I have noticed that in records from Arctic regions deep winter temperatures sometimes never go below -20C; the data slots just record missing values. I can appreciate the problem – too cold to risk going out to the screen – but it means that we are getting a false impression of what it was really like in Churchill, or wherever.

SM–These Chladni patterns are, thanks to you, fascinating to me as a chemist. I say “thanks to you” because it was in your Antarctica posts where you diagrammed them that I saw that these strange-sounding “Chladni patterns” are but 2D projections of 3D atomic orbitals, where we chemists are mapping electron density and you are mapping red noise density. And why not? Same eigenvectors after all. Our 1S is your PC1, our 2S is your PC2, etc. Makes me wonder: Is PCA really a legit technique for analysis of ANY 2D data set? And another question: Why are you so far ahead of the curve in climate science? It seems as though you are trying to pull climate science along, trying to get it up to speed, and it pretty much refuses to go. (Well, there I sound like I am indicting all climate scientists–bad form on my part.)

“As an aside- many of the champions of the Wegman Report (e.g Steve McIntyre) took up Wegman’s claim “Method Wrong + Answer Correct = Bad Science” as a sort of incantation, chanting it as though it might somehow dispel the fact that reality appears to have a hockey-stick-shaped bias. I am sure that these same people will maintain their integrity and immediately disavow the Wegman Report and its conclusions.”

So if you write 2+2=4, and I copy you word for word, people should disavow my writing. Seems logical. Wegman’s writing hardly makes Mann right or wrong. The math, kiddo – take the personalities out of it and deal with the math. But if you insist: Wegman copied and Mann was wrong. Spot the logical contradiction. Can’t? That should tell you all you need to know about the lack of any connection between the two parts of that sentence. They have cures for knight-move thinking; you should look into them.

“I say “thanks to you” because it was in your Antarctica posts where you diagrammed them that I saw that these strange-sounding “Chladni patterns” are but 2D projections of 3D atomic orbitals, where we chemists are mapping electron density and you are mapping red noise density. And why not? Same eigenvectors after all. Our 1S is your PC1, our 2S is your PC2, etc.”

Shallow Climate, as a chemist from many years ago, I was also struck by the similarities.

In my case, I suspect that I should not have been, had I understood my linear algebra and principal components better. Kind of an “aha” moment 45 years after the fact.

Answer is No! I would however consider putting my work on a blog, but it is heavily graphics dependent. Don’t yet know how to post GIFs – the most convenient graphic format for me, who normally uses an entirely different operating system (RISC OS).

Jeff Id wrote “I would but there isn’t any quality moderation so it’s important to be careful.”

Indeed, absolutely right.

In order for you to judge I shall need to send you something to audit. How can I do this? I can email, with attached graphics, but how else? What I’d propose is sending a slightly modified version of something I wrote several years ago regarding Mann’s 1998 paper – at least the data it is based on. Let me know what to do. I’m happy to release my email address if necessary, or to get Steve to give it to you so you can email me if you like.

Sorry to go off topic here, but I am writing a guest post for Anthony’s site on John Mashey’s ‘report’ on Wegman, which I find as bizarre as anything I have read recently. I notice he accuses you and Mr. McKitrick of being ‘recruited, coached and promoted’ by the George Marshall institute. Would you care to comment?

I’m actually surprised that Steve McIntyre didn’t carefully audit the Wegman report… Of course, his failure to audit Soon and Baliunas, Loehle, Lindzen and Choi etc. makes it seem that he’s only interested in picking holes in climate science. So much for being an unbiased observer in this climate change debate!

He also does not audit the many comments here. Doesn’t audit Santa Claus’s naughty and nice list. He also doesn’t audit sites to see which of his commenters have pictures posted of themselves in tighty whities.

Re: concerned scientist (Oct 10 11:16). This is an incredibly dumb comment. It would not make a lot of sense for Steve to ‘audit’ Wegman, since Wegman is basically an audit of Steve – auditors need to be independent. Regarding Loehle, Steve has in fact done several posts auditing his paper, as you can see for yourself if you bother to look.

**I’m actually surprised that Steve McIntyre didn’t carefully audit the Wegman report…**
Why should he? Wegman audited Steve who audited Mann.
Wegman found nothing wrong with Steve’s work – all points Steve made were verified.
Are you capable of auditing Wegman???
Steve does not have to audit everybody – only the ones with problems and when he has time.

I guess I shouldn’t have expected a sensible answer. Surprising, though, that Steve doesn’t audit some of the rubbish put out by Monckton, Plimer etc. This would at least show that he was even-handed and objective about climate change. Since he clearly isn’t, this rather reduces his credibility, doesn’t it?

It does not surprise me that you did not receive a positive reception, for several reasons. Your overly critical comment contains particularly snide overtones and betrays a lack of understanding of the roles of both Steve and the Climate Audit web site. Combined with your use of the name “Concerned Scientist” – a name previously used by a well-known troll also known as TCO – this was guaranteed to elicit the abrupt answers which you saw.

Your comment is at best naive, and more likely disingenuous. What exactly would you have him audit? Is he supposed to waste time checking to see if references were properly made? Maybe he should have checked Wegman’s mathematical analysis, which substantiated Steve’s initial analysis showing that the methodology used to generate the Mann hockey stick was ill-founded and could turn random noise into identical structures. Get a grip on reality!

Neither Steve nor the rest of us are self-appointed Climate Police. Among the rules of this site is that the scientific material considered in detail is usually limited to technical papers in the mainstream literature. You might not realize it, but readers here ARE critical of nonsense statements and are quick to criticize such when it is proposed, regardless of the climate viewpoint of the proposer.

I won’t speak for the others, but I personally am interested in examining the statistical basis of exaggerated doom-and-gloom claims which the authors of papers spread to the mainstream media before the papers even reach the journals. Steve is also likely to focus his audits on those places where his interests lie. If you feel audits of other topics are needed, then I suggest that you are entitled to do them yourself. Just don’t require others to do it for you.

“I guess I shouldn’t have expected a sensible answer. Surprising though that Steve doesn’t audit some of the rubbish put out by Monckton, Plimer etc. This would at least show that he was even-handed and objective about climate change. Since he clearly isn’t, this rather reduces his credibility doesn’t it.”

You are being illogical. The problem is that you conflate two ideas: “even-handed” and “objective.” You think that to be credible one must be both. Let’s consider some examples to illustrate the kind of pathological thinking you exhibit:

1. A vice cop arrests a pimp. The pimp complains that the vice cop is not being even-handed because there are other types of criminals the cop never arrests, and so the charges should be dropped.
2. A scientist specializes in debunking creationist arguments. Creationists complain that he never criticizes evolutionary science, is therefore not even-handed and lacks credibility, and therefore God created the universe.
3. The AG investigates corporate fraud. A CEO complains that he never looks at small companies and therefore the charges are baseless.
4. Climate scientists focus their debunking on Monckton and never directly address, say, Jeff Id or Roman. Therefore they are not even-handed and objective, and CO2 does not warm the world.

In short, the focus of one’s inquiry has nothing to do with the credibility or objectivity or truth of one’s findings.

Steve’s chosen purview is peer-reviewed climate reconstructions – in particular, those reconstructions which use suspect data and suspect methods. The focus is on the peer-reviewed literature because it is his contention that the field does not perform peer review adequately. One can only demonstrate this by limiting the scope of what one looks at.

I don’t expect him to be even-handed. I don’t want him wasting his time on Monckton’s junk; if he did, I would question his intelligence, the same way I question the intelligence of anybody who wonders why he doesn’t. I do expect him to be objective:

A. Show his data.
B. Show his code.
C. Construe his opponents’ positions fairly.

I am a concerned scientist, and my interests do not lie with the personalities in the game. I take what all the participants say and write, and judge the evidence presented and the cases they attempt to make.

I could hardly get excited about potential personal problems of Wegman, Mann, Monckton, Jones etc. that do not bear on the important issues at hand, and I certainly do not need somebody telling me who the bogey men might be.

OK…I’ve changed my name from ‘Concerned Scientist’. I’m a scientist working on geology and climate change (a lot of paleoclimate) and I wasn’t being flippant about Steve’s seeming unwillingness to audit some of the unbelievably egregious papers published by sceptics. Soon and Baliunas is so full of holes it’s incredible that it got published. Same with the rubbish put out by Ian Plimer and his ilk. I would have thought that the way to get a reasonable dialog between scientists and sceptics is to show a willingness to be objective. Steve isn’t…he’s partisan (and I’ve been reading Climate Audit from the very beginning, so this is not just a snapshot view).

Clearly there were things that MBH could (and should) have done with regard to proxy selection, verification procedures, use of PCA etc. And Steve has drawn attention to these (although it’s a moot point whether this really changes any of the reconstructions as you get hockey sticks from glacier length records, boreholes, sediment supply to valley bottoms, ice loss from ice caps, forams etc etc).

By only EVER attacking mainstream science, Steve is showing that he’s not really interested in auditing. He’s just another sceptic.

I love these antagonists that wander by here every once in a while, all claiming to be long time readers yet each writing the same tripe as if it were novel. The complaints are always the same, and are always along the lines of…”Well, Steve really isn’t doing science unless he does it the way I think he should…”

Monty, you should rename yourself again to ‘Three Card Monty’ because you’re just as transparent.

This theme is now completely OT in this thread, but I think it needs a response. You first say

I would have thought that the way to get a reasonable dialog between scientists and sceptics is to show a willingness to be objective.

and then follow it up with (my bold)

By only EVER attacking mainstream science, Steve is showing that he’s not really interested in auditing. He’s just another sceptic.

Attacking? Somehow, objectivity seems to be in short supply here.

Which do you think it is more important to get right? The main show around which much of the “consensus” is built, or a side show that seems to be a thorn in the side of some people? I for one do not think that the former is a “moot” point, and I think there is room for improvement in what is being done.

You only need to see Steve’s language to realise that he spends his time attacking mainstream science. Don’t you think that auditing papers in Energy and Environment once in a while might be a good idea? Or are you happy for sceptic rubbish masquerading as science to get through without any checks at all (let’s be honest E&E is hardly refereed)? I’m afraid he will be judged by the bedfellows he keeps.

SMc has indicated that he is interested in papers that are or will be found in the IPCC reports. This seems like a reasonable way to apportion the limited amount of attention that he, working alone, can give to this work.

As well, it would also seem reasonable for the reviewers selected by the most prestigious journals to do their job. Perhaps, in that way, the work of a solitary individual, working from his home in Toronto, would not be needed to point out glaring errors, including incorrect undergraduate statistics, published in these journals and referenced in UN reports.

I don’t think any “apology” is necessary for Steve’s language. One only needs to read some e-mails written by some of the “bedfellows” you appear to be protecting to understand why he might occasionally be acerbic in his presentations.

I repeat what I said before: it is more important to do what peer review in the climate journals does not seem to have done properly – make sure that what is being presented as a basis for major changes to our societal structure rests on a solid foundation and is represented honestly. So far, we have been short on both of these aspects. The rest is less important.

This conversation is off-topic for this thread and if all you are going to do (as you have done so far) is repetitively bash Steve, his work and journals which you do not like, I see no valid reason for its continuation.

An important thing to notice is that commenters like Monty carefully ignore mentioning the work that Steve actually has done and instead attempt to excuse that oversight and diminish Steve’s work by criticizing him for not auditing certain other authors and papers that they consider to be on ‘his’ side of the argument.
Papers and authors that, in any other context, they themselves would dismiss as ‘unimportant’, ‘having no impact on climate science or policy’, ‘non-peer-reviewed’, etc.

Notice how Monty specifically does not mention anything that Steve either got right or wrong in his audits?
That’s because he doesn’t dare.

Notice also how Monty gives no specific reason why Steve should audit these papers?
That’s because getting Steve to audit them is not his goal. His goal is to sow doubt on Steve’s integrity.

I was careful to say that, in my view, Steve has shone light on some statistical issues re MBH work….which paleo-scientists have largely taken on board and have begun to reflect on. This is laudable. I also argued that hockey sticks exist whether you use tree rings or not, and whether the ‘hockey team’ is involved or not. But it is also telling that he ignores some of the utter rubbish put out by the sceptic community. In my view, he would have more credibility if he was even-handed…which he clearly isn’t.

To EJ D – Monty clearly stated that if Steve were to look at papers by Soon and Baliunas or Lindzen and Choi then he/she has answered your “How many audits, exactly, and of whom, exactly, should Steve conduct in order to satisfy your requirements for even-handedness?”

I really don’t see how you can then accuse Monty of “you wont state exactly because there’s nothing he could do to convince you that he’s ‘even-handed,’”

Monty clearly DID state what he/she needed to satisfy his/her requirements for even-handedness.

Monty, you really are behaving like TCO. As I told you above, Steve has written several posts on Loehle. To find them, you need to type the word Loehle into the search box above. Can you manage to do that for yourself?

Which paper are you interested in having disassembled? I was critical of the variance loss in MW10 as soon as I read it, because it is just a hockeystickinator repackaged. They worked harder on the CIs, though.

We all just had a big discussion about panel regression and statistical nuances of MM10 at tAV too, nobody was particularly gentle in that case either.

Monty, I concur with Paul, Kenneth, Ed and likely a bunch of others. If you have followed the blog for many years, you will understand why so many thought of TCO – initiating vague, OT discussions of Steve’s credibility or motives. Is it too much to ask that it continue on unthreaded? You won’t have any problem finding people to take up a discussion with. In the meantime, I for one would welcome your contributions on this or any other thread on methodological issues. I referred Robin Edwards above to a thread comparing Loehle and Ljungqvist. Why not enlighten us there on your specific criticisms of Loehle07?

Soon and Baliunas is so full of holes it’s incredible that it got published.

I decided I might as well see what Steve has said about that paper, so I did a search on this site and then found all references to “baliunas”. Steve hasn’t written tons on it, possibly because it was written a couple of years before he started this blog and by the time he started the paper was generally discounted. Indeed, reading his earlier references, he seemed to discount it himself. But later he got bugged that S&B was attacked for taking some of their temperature proxies from precipitation proxies despite the fact that Mann had done the same. Further along, he dissed Black for complaining that S&B used his monsoon wind proxy for a temperature proxy, even though other team members were doing the same. Perhaps you might want to respond to those points and/or provide us with other holes we could try to fill. Oh, you might not want to dredge up the “failure to correlate to grid cell temperatures” complaint given the Mannian reliance on teleconnections whenever his pet proxies fail to do so.

While I find it regrettable that this thread got off the track of interpretation of PCs, I think it would be best to simply answer a ConcernedScientist/Monty/TCO by stating that anyone who enters the discussion of climate science at the level that SteveM does can decide which works they choose to critique and which they do not. I can easily learn from critiques or counter-critiques from the participants in these discussions regardless of any known stance they take on climate science. It is rather obvious that climate scientists of the consensus bent do not go out of their way to critique members of the team.

If Concerned Scientist is not TCO, I do not know what motivates his comments, but in the case of TCO it is rather apparent that he has a personal vendetta against SteveM. My point is that we waste a lot of time with these discussions that could be better spent talking about the science.
Concerned Scientist/Monty what did you learn about PCs in this discussion or what might you add to it?

Hi Kenneth
I’ll tell you what motivates this discussion. There is a general assumption on sceptic blogs like this that (maybe) MBH got some stats wrong…and that this then completely destroys the whole basis of AGW (although, let’s be honest, it’s now 12 years old and has been superseded by loads more, and better, work). Arguing whether RE is a better verification statistic than r2 is of interest, but doesn’t alter one bit the radiative forcing from GHG…nor does it alter likely climate sensitivity…nor does it alter the fact that hockey sticks exist. This seems to be an intellectually incredible position.

If Steve doesn’t believe that humans are responsible for much of the recent warmth, or thinks that sensitivity is very low, or that there is no such thing as ‘global warming’, then he should say so. At least if he did that, we could just dismiss him as another nutty sceptic. His coyness in this regard seems strange. Is he really convinced by the bulk of the science and just wants to attack papers like Mann’s?

Perhaps you should ask him: What does he think would be the equilibrium T response to a doubling of CO2? I’d sure like to know!

Steve: On many occasions, I’ve requested climate scientists to recommend a derivation of 3 deg C from doubled CO2 that describes all the relevant aspects of the calculation and is not simply a report of the output of a model (which is more or less what IPCC does). Ideally, I’d like what I refer to as an “engineering quality” derivation i.e. one in which all the relevant aspects of the calculation are “stamped” in the document. To date, I’ve been unsuccessful in such requests despite asking many climate scientists. Gerry North actually mocked me for making the request though it seems like something that should be right at the fingertips of a concerned climate scientist. Perhaps you can provide both me and other CA readers with such a reference.

Hi Clown Car. Thanks for your considered reply. For further, non-tree, hockey sticks have a look at the glacier length records, borehole T records, Atlantic foram curves, ice loss calculations from ice caps, tropical ice cores. I’ve also published climate reconstructions using trees and these show hockey sticks. I didn’t use bristlecones and I’m not part of the ‘hockey team’.

Hockey sticks exist and show that current global temps are likely higher than for perhaps 2000 years (and the GHG forcings are increasing and will drive higher temps). Guess what….AGW existed before Mike Mann. Get over it.

And the multi-proxy reconstruction which uses these to produce a temperature reconstruction?

BTW, Steve has posted on most if not all of these proxies and found problems with all of them. You might want to start with the ice core guy from Ohio State whose name escapes me at the moment. He refuses to archive (at least in sufficient detail to let his work be checked). Glacier lengths are prone to selection bias, and need a lot of work to figure out what’s rebound from the LIA and what’s possible AGW. Borehole T records are just plain not good. I’m not sure about Atlantic forams, as opposed to ones like the mideast upwelling ones, but I don’t know that they can be used for the past century because of turbidity problems. Ice cap losses (at least via satellite) have the problem that it’s too new a process to produce good data yet.

Now perhaps you have some magic bullet reconstruction which we should be looking at here, but quit being coy and act like a real scientist. Produce a link to an actual paper which can be examined and refuted or verified. We’re always anxious for fresh meat… Well actually I’m on a vegetarian diet at the moment, but I’m anxious for fresh veggies at any rate.

Of course there’s no magic bullet to accurately reconstruct T in the past and of course all proxy reconstructions have their problems! Why do you think paleoclimate is so hard!! Don’t you think it’s significant that multiple lines of evidence from independent proxies researched by independent teams all show hockey sticks (Oerlemanns, Huang, Thompson et al)? Especially as this is EXACTLY what we would expect given the huge increase in GHG forcing over the past few thousand years? In fact if there were NO hockey sticks then our assessments of climate forcing, sensitivity and attribution would be way out. And how likely is that? Talk about Occam’s Razor!

However, instead of people like you and Clown Car whinging that all climate science is crap, why don’t you start behaving like real scientists? Write reasoned critiques of the proxy reconstructions you don’t like and submit them to the peer-reviewed scientific literature? You could even try open-access journals like CPD where all reviews are open. Even better….develop your own reconstructions (make sure that your proxies don’t end in the 1930s though).

And don’t try to use the old canard that “scientists won’t allow us to get published”. Lindzen manages to publish papers, as does Roy Spencer, John Christy etc.

For someone claiming to be having a technical discussion, you are somewhat careless with your references.

I presume that Oerlemanns is the 2005 glacier paper – which goes back to 1600 and looks like a hockey stick. Quelle surprise: start at the LIA and it goes up from there.

Huang (2004?) boreholes. Similar time frame (starts in 1500) and increases. No surprise there either. However, don’t the unrealistic assumptions in extracting “temperatures” from boreholes bother you? When you look at different holes, the results are all over the map.

Thompson (Lonnie?) has been written about numerous times here. He apparently STILL has not archived much of his data multiple years later…

You said that you have done reconstructions yourself (presumably going back before the LIA). Quite seriously, we would be pleased to have you discuss them with us, either here or at some other site (for example, TAV belonging to Jeff ID).

Thanks RomanM for your constructive comments. Yes, many of the reconstructions don’t go as far back as we would want. However, there is lots of evidence that glaciers now are more recessed than they’ve been for thousands of years (work by Koch, Luckman, Clague and loads of others)…since the Holocene Climatic Optimum….don’t just rely on Oerlemanns. Sure there are lots of problems with boreholes (as with all reconstructions) but I just have to reiterate my previous comment:

“Don’t you think it’s significant that multiple lines of evidence from independent proxies researched by independent teams all show hockey sticks?….. Especially as this is EXACTLY what we would expect given the huge increase in GHG forcing over the past few thousand years? In fact if there were NO hockey sticks then our assessments of climate forcing, sensitivity and attribution would be way out. And how likely is that?”

I’m obviously not going to post my reconstructions (there’d be no point in calling myself ‘Monty’ if you knew my name!). Judging by the tone of some posters I also wouldn’t want the vitriol that would follow. However, my reconstructions go back ‘only’ to the 16th century…but that is likely before the regional LIA in my study area. The paper was also fully peer-reviewed.

As I said before, people like Steve should publish their own reconstructions. While some in the paleo community might want to give these a rough ride, truth will out and the rest of us would be happy to see alternative reconstructions. Only by this way can we get the science right. And that’s the main point isn’t it?

Lol, more hand waving, no specifics, when specifics are cited and refuted, more hand waving.

Of course all the proxies have problems, BUT THERE ARE SO MANY OF THEM WE MUST BELIEVE!!!

Classic.

I’m surprised you display your kool-aid drinking so unashamedly.

Also, if you had read this blog for such a long time, as you had claimed before, you would know that Steve has no interest in creating his own reconstructions, and your desire for him to do so doesn’t carry any weight. His contributions to this debate are enormous, more than yours I bet, by a country mile.

You shouldn’t come back until you can answer Clown and put up your single best hockey stick for debate. Until then you’re just a hand waver.

By the way Kenneth…I’m not TCO and don’t have a vendetta against Steve. He’s clearly very bright and industrious and has done excellent work which has made many people in my community think carefully about a number of issues. He needs to publish more though!

As I said before, people like Steve should publish their own reconstructions. While some in the paleo community might want to give these a rough ride, truth will out and the rest of us would be happy to see alternative reconstructions. Only by this way can we get the science right. And that’s the main point isn’t it?

I have to disagree. On the contrary, Steve should feel no compunction to put forward a reconstruction of his own. His criticism is innately valuable, as is all valid criticism.

One would hope that the paleo community would feel grateful for the valuable insights that Steve has provided. His contributions constitute progress for their field.

“As I said before, people like Steve should publish their own reconstructions. While some in the paleo community might want to give these a rough ride, truth will out and the rest of us would be happy to see alternative reconstructions. Only by this way can we get the science right. And that’s the main point isn’t it?”

I sincerely have to judge that he has not paid much attention to what SteveM (and others) have posted here about the problems of finding temperature proxies and methods that pass statistical muster: using a priori established selection criteria, using methods that do not deflate the historical variance or artificially produce a HS, and avoiding questionable proxies that have a very great influence on the overall result. Why, under these conditions, would one want to do a temperature reconstruction? And of course MM have produced critiques of climate reconstructions, as have others. It should not be misconstrued that doing a sensitivity test, like removing certain proxies or adding others, implies that the one doing it endorses the final result – they are simply showing that the result changes, and significantly. Even the Loehle and McCulloch publication of a non-tree-ring reconstruction was done, in my view anyway, primarily to show the effect on a reconstruction of omitting tree rings – and no matter that it ended in the early 20th century, as the point was made.

I do not think that Monty or people like him “get it”, and perhaps they never will. I find it a major waste of time for them to come on here and make generalized statements and challenges that simply do not make sense in light of what has transpired here previously. It would perhaps be an overwhelming task for them to pry open their minds sufficiently to see the state of climate science and reconstructions in the light in which it is generally seen at this blog – and then proceed to ask appropriate questions.
