The Loehle Network plus Moberg Trees

Loehle’s introduction emphasized the absence of tree ring chronologies as an important feature of his network. I think that he’s placed too much emphasis on this issue, as I’ll show below. I previously noted the substantial overlap between the Loehle network and the Moberg low-frequency network.

I thought that it would be an interesting exercise to consider Loehle’s network as a variation of the Moberg network as follows:

Use my emulation of Moberg’s wavelet methodology, implemented with the discrete wavelet transform. This is not exactly the same as Moberg’s method, for which there is no source code, making exact emulation very difficult.
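The idea of the emulation can be sketched in a few lines. This is only a minimal illustration, assuming calibrated proxies on a common annual grid, and substituting a crude centered moving average for the wavelet low-pass step (the actual emulation uses a discrete wavelet transform):

```python
def lowpass(series, window):
    """Crude low-pass filter: centered moving average, shrinking
    the window near the ends of the series."""
    n = len(series)
    half = window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        chunk = series[lo:hi]
        out.append(sum(chunk) / len(chunk))
    return out

def moberg_style_composite(proxies, window=41):
    """Average the low-frequency components of several proxies.

    Each proxy is assumed already calibrated to temperature and on
    a common annual time axis. The moving average stands in for
    Moberg's wavelet decomposition, which keeps only the
    low-frequency scales from the low-resolution proxies.
    """
    smoothed = [lowpass(p, window) for p in proxies]
    n = len(smoothed[0])
    return [sum(s[i] for s in smoothed) / len(smoothed)
            for i in range(n)]
```

The real method decomposes each proxy into wavelet scales and recombines selected scales across the network; the composite-of-smoothed-series shown here only captures the low-frequency spirit of that procedure.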

The results are shown in the Figure below. Obviously this method maintains the general “topography” of the Loehle network as to the medieval-modern relationship, while using a Team method (or at least a plausible emulation of the Moberg method.)

The difference between Moberg’s results (in which the modern warm period was a knife-edge warmer than the medieval) and these results rests entirely with proxy selection. The 11 series in the Moberg low-frequency network are increased to 18, primarily through the addition of ocean SST reconstructions (two Stott recons in the Pacific Warm Pool, Kim in the north Pacific, Calvo in the Norwegian Sea, de Menocal offshore Mauritania and Farmer in the Benguela upwelling), while the uncalibrated Arabian Sea G. bulloides series – a proxy much criticized at CA – is excluded. The other additions are the Mangini and Holmgren speleothems (the Lauritzen speleothem in Norway was excluded for reasons that I’m not clear about, but its re-insertion would not change things much), plus the Ge phenology reconstruction and the Viau pollen reconstruction.

So the underlying issue accounting for the difference is not the inclusion or exclusion of tree ring series (in a Moberg style reconstruction) but mainly the inclusion of several new ocean SST and speleothem temperature reconstructions, combined with the removal of uncalibrated proxies.

With this latest result, it seems there is far more uncertainty than the warmist camp acknowledges, given that different proxy selections end up with much different reconstructions.

It would be interesting (I don’t know if it is possible) to see what a reconstruction using all calibrated proxies would look like.

I have a few questions:

How confident can we be that scientists don’t select proxies until they reach the expected result?

What do the differing standards shown in the selection of what is published mean?

Do these publishers hold too much power over the outcome of science?

Why aren’t there more scientists with the level of professionalism shown by Mr. Loehle?

I have been following this debate for a little more than a year and a half. I have been really turned off by how the science is conducted, surprised to learn how little published work gets audited, and even more surprised by how much obstruction exists to keep it from being audited.

Congratulations, Mr. McIntyre! I have the impression that many people see you as an IRS auditor; that is a good thing, since auditors catch a lot of cheaters. Too bad you don’t have their power to acquire data the way they can acquire our transaction records.

Bear in mind that paleoclimate studies are not really the core of what the theory of AGW is based on. Computer modeling of future climate is, in fact, what the Kyoto Protocol was primarily driven by. Worth bearing in mind also that Craig Loehle used data from peer reviewed scientific papers. What I think is slowly emerging now is that properly conducted scientific research is already readily available in the literature which refutes Mann’s hockey stick, so people who have been making sweeping comments about scientists on here have been throwing the baby out with the bathwater.

Can I ask what is being done to get some of this extensive work into peer-reviewed publication? While the fallibility of ‘peer review’ is understood, this is the constant mantra of the hockey team and promoters of IPCC conclusions. Any viewpoints that are not ‘peer reviewed’ are too easily dismissed.


But that’s the beauty of this study – ALL the proxies he’s using are taken from peer-reviewed (and published) articles.

It will be hard for critics to refute information they’ve already accepted.

They might critique his methods – or his choice of proxies – but the science has been accepted.

#7 You’re conflating the science of the canonical/source studies, A, with the science of the derivative/synthetic study, B. Acceptance of A does not imply acceptability of B. In fact, the statistics in B in this case (Loehle 2007) are sketchy. That is apparently being fixed. But it will be some time before an improved version is available. At that time – not sooner – we will see whether Loehle’s conclusions are robust. If the post above is any indication, that may be the case. OTOH I’m not sure Moberg adequately dealt with the problem of confidence estimation either.

#9: “How do they test climate computer models that predict the future climate?”

I’m not sure how they test those models, but if the methods are similar to how GISS produces surface air temperature (SAT) maps, it’s an art form, not science:

Q. If SATs cannot be measured, how are SAT maps created ?
A. This can only be done with the help of computer models, the same models that are used to create the daily weather forecasts. We may start out the model with the few observed data that are available and fill in the rest with guesses (also called extrapolations) and then let the model run long enough so that the initial guesses no longer matter, but not too long in order to avoid that the inaccuracies of the model become relevant. This may be done starting from conditions from many years, so that the average (called a ‘climatology’) hopefully represents a typical map for the particular month or day of the year.

How do they test climate computer models that predict the future climate?

They generate simulated climate data. In simple terms, they make up data that fits their predictions (preconceptions) of future climate and test the models against that data. If the model gives the result the modelers expect, then that is a successful test.

How hard would it be to work out what would happen to the graph if the Ababneh trees were included?

Currently impossible because the individual tree data are not available. All we have is Erren’s digitization of the mean chronology. Hence Steve M’s efforts to have researchers archive their data promptly.

Not really. In appendix II of her thesis she attempted to correlate the four tree ring series to local temperature and precipitation. The best correlation she got was with the Sheep Mountain whole bark trees, although it was not a strong correlation.

#17 Exactly why JEG said that the proxies should always be weighted according to the amount of instrumental variation they explain. If Ababneh bcp explains zero, its weight in the recon would be zero. If it’s 0.1, its weighting would be 0.1. Better proxies get more weight because they’re more credible.
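The weighting scheme described here comes down to a weighted average. A minimal sketch, assuming the weights stand for the fraction of instrumental variance each proxy explains (e.g. an r² against local temperature; nothing here is JEG’s actual code):

```python
def weighted_recon(proxies, weights):
    """Combine calibrated proxy series into one reconstruction,
    weighting each proxy by how much instrumental variance it
    explains. A proxy with weight 0 drops out entirely;
    better-correlated proxies pull the composite toward themselves.
    """
    total = sum(weights)
    if total == 0:
        raise ValueError("all proxies have zero weight")
    n = len(proxies[0])
    return [sum(w * p[i] for w, p in zip(weights, proxies)) / total
            for i in range(n)]
```

Note that with equal weights this reduces to the plain average, so the scheme only changes things to the extent the explained-variance estimates differ between proxies.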

So the underlying issue accounting for the difference is not the inclusion or exclusion of tree ring series (in a Moberg style reconstruction) but mainly the inclusion of several new ocean SST and speleothem temperature reconstructions, combined with the removal of uncalibrated proxies.

Just from an eyeball approach, none of the Moberg proxies seems to end in 1810.

Perhaps the simplest explanation is that it uses data reaching (sort of) up to 2000?

Steve: While the point is worth raising, practically, I don’t think that this is relevant to the difference. (And BTW the Moberg Arabian Sea series is a splice of one series ending in 1500 with another series, and the splice is very hairy, as I observed elsewhere.) My own sense is that the difference comes from the addition of SST estimates and the removal of the two uncalibrated series (which were very non-normal and perhaps overly influential).

Exactly why JEG said that the proxies should always be weighted according to the amount of instrumental variation they explain. If Ababneh bcp explains zero, its weight in the recon would be zero. If it’s 0.1, its weighting would be 0.1. Better proxies get more weight because they’re more credible.

You have to be careful here with what you’re saying. You don’t know in advance what proportion is explained and have to estimate this. If JEG is proposing that proxies be weighted according to their correlation to the MBH PC1 or NH temperature or something like that, then you’re doing a (Partial Least Squares) inverse regression. The coefficients are proportional to the covariance of each proxy with the target (X’y). In OLS multiple regression, these coefficients are rotated by an orthogonal matrix to yield new coefficients. If you have a very noisy network, then the rotation matrix (in coefficient space) is “near” orthogonal in some sense, so that the PLS regression (about which we don’t know very much) is approximated to some degree by OLS regression, about which we know a lot and have applicable instincts.

One applicable instinct is that if you do an OLS regression of a series that is 79 years long on 72 poorly correlated series (near orthogonal), you will get a sensational fit in the calibration period regardless of what you’re working with. That’s why Mannian methods work just as well when the networks are mostly white noise as they do with actual proxies – if you have one active ingredient.
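This instinct is easy to demonstrate numerically. In the sketch below (pure noise, no real proxies, all numbers invented for illustration), a 79-“year” target is regressed on 72 independent white-noise series; the calibration-period fit looks sensational even though every regressor is garbage:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies = 79, 72

# Target "temperature" and proxies are all independent white noise:
# none of the proxies contains any real signal.
y = rng.standard_normal(n_years)
X = rng.standard_normal((n_years, n_proxies))

# OLS fit of the target on all 72 noise series at once.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta

# Calibration-period R^2 is very high despite zero real skill.
ss_res = np.sum((y - fitted) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"calibration R^2 = {r2:.2f}")
```

With 72 free coefficients and only 79 observations, the expected calibration R² for pure noise is on the order of 72/79, which is the point: an excellent in-sample fit tells you almost nothing in this regime.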

My own instinct in this is that there is a real case for very simple averaging methods if you have poor knowledge of which proxies are good and which ones aren’t. I think that there is even a mathematical basis for this although I’m not sure that I can demonstrate it.

#20 Steve, Bender and JEG, I believe, were arguing for weighting based on the amount of “local” instrumental temperature variation they explain. A reasonable refinement. Another very useful calculation would be to attempt to create a global temperature reconstruction by separately (or further) calibrating and weighting the proxies prior to averaging based on the covariance between the local temperatures at the proxy sites (not the calibrated proxy data themselves) and the global temperature over the instrument record. This would push the Schweingruber vs. Fritsch methodology dichotomy to its limit: a legitimate (testable and defensible) reconstruction of global temperature using direct temperature relationships as much as possible.
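A rough sketch of the covariance-weighting refinement suggested here. The function name is invented for illustration; the key feature is that the weights come from the covariance of each site’s local instrumental temperature with the global mean, not from the calibrated proxy data themselves:

```python
def global_weights(local_temps, global_temp):
    """Weight each proxy site by the covariance between its local
    instrumental temperature and the global mean temperature over
    the instrument record. Sites whose local climate tracks the
    global mean get more say in the global reconstruction."""
    n = len(global_temp)
    gbar = sum(global_temp) / n
    weights = []
    for lt in local_temps:
        lbar = sum(lt) / n
        cov = sum((l - lbar) * (g - gbar)
                  for l, g in zip(lt, global_temp)) / n
        # Clamp anti-correlated or unrelated sites to zero weight.
        weights.append(max(cov, 0.0))
    return weights
```

The weights would then feed a weighted average of the calibrated proxies; how to handle sites whose local temperature is anti-correlated with the global mean is a design choice (here they are simply dropped).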

My own instinct in this is that there is a real case for very simple averaging methods if you have poor knowledge of which proxies are good and which ones aren’t. I think that there is even a mathematical basis for this although I’m not sure that I can demonstrate it.

I’ve been fiddling a bit with the Loehle proxies, and I believe the best framework for evaluating multiproxy reconstructions is via robustness (i.e. how does the composite change when individual proxies or groups of proxies are removed).

As a first test, I looked at the simple average of each of the 18 possible combinations of 17 proxies. The composite is remarkably robust to the removal of any individual proxy.

As a tougher test, I looked at composites of 11 proxies. There are about 32,000 possible combinations, so I obviously haven’t looked at all of them. But I have randomly selected 25 composites of 11 proxies (in a bootstrap-ish fashion), and again am amazed at the robustness of this network. Every one of the composites shows a MWP and a LIA.
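The subset test described above is easy to script. A bootstrap-ish sketch (the summary statistic and draw count are arbitrary choices for illustration, not anything from the comment):

```python
import random

def composite(proxies, keep):
    """Simple mean of the selected proxy series at each time step."""
    n = len(proxies[0])
    return [sum(proxies[k][i] for k in keep) / len(keep)
            for i in range(n)]

def robustness_check(proxies, subset_size, n_draws=25, seed=1):
    """Build composites from random proxy subsets and report how far
    the worst one strays from the full-network mean, as a
    root-mean-square deviation. Small values mean the composite is
    robust to proxy selection."""
    rng = random.Random(seed)
    full = composite(proxies, range(len(proxies)))
    worst = 0.0
    for _ in range(n_draws):
        keep = rng.sample(range(len(proxies)), subset_size)
        sub = composite(proxies, keep)
        rmsd = (sum((a - b) ** 2 for a, b in zip(sub, full))
                / len(full)) ** 0.5
        worst = max(worst, rmsd)
    return worst
```

A leave-one-out jackknife is the same idea with subset_size set to one less than the number of proxies, looping over all subsets instead of sampling.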

Given my utter failures at posting graphs and data previously, if anyone else is interested in pursuing this approach, they could probably get something for others to look at more quickly than I could.

Steve: You upload the image somewhere and then use the IMG button to insert the url. This results in the WordPress image code.

#17 Exactly why JEG said that the proxies should always be weighted according to the amount of instrumental variation they explain. If Ababneh bcp explains zero, its weight in the recon would be zero. If it’s 0.1, its weighting would be 0.1. Better proxies get more weight because they’re more credible.

The question is why are measurements of xyz a proxy for temperature–pick your favorite xyz. Your statement does not directly address the importance of distinguishing local variance from global variance.

A “teleconnection” to global variance requires intermediate physical processes, which draws into question the linearity assumptions of the technique. Conversely, a correlation with local variance is meaningful insofar as your knowledge of the local temperature is accurate.

Loehle claims no specialized knowledge of whether timeseries xyz is a proxy for temperature. He draws from peer-reviewed studies where others have claimed such specialized knowledge to identify and to calibrate the timeseries.

Consequently the variance-weighting approach JEG mentions has no direct meaning within the logic of Loehle’s approach, which takes the calibrated timeseries as an a priori given. Thus, JEG’s suggestion reflects a gross misunderstanding both of what is taking place in the Loehle approach and of what is taking place in the MBH approach.

Steve, not to nag, but it would make the blog more intelligible for your readers if you preserved the original post numbers when you moderate…
Steve: I’m sorry about that, but the mechanics of snipping and leaving is sufficiently more time-consuming than deleting and I’m already swamped that I’m going to do the quickest thing on many occasions. Sorry about that. If you suggest to WordPress that they add a Clear button beside their Delete button, I’d use it and preserve the order.

Re 26 (Peter D. Tillman). This point keeps coming up. Just make sure you include the no. AND the name. Then if the no. slips, we can still work out who you’re having a go at. As for no. 28 (PHE), you don’t know what you’re saying – just talking in circles.

#29: Jason L, how dare you post that “cartoon.” It was just a “schematic” of popular opinion in the olden days and not based on any real proxies that were actually published or anything. :)

Steve M or anybody: what causes the Moberg-style graph in this post to have a huge uptick at the end and Loehle 2007 to have a downtick? Loehle 2007 seems to better represent the 1970s cooling, though the warm 1930s don’t seem to make an appearance.

Joel #31
“Steve M or anybody: what causes the Moberg-style graph in this post to have a huge uptick at the end and Loehle 2007 to have a downtick? Loehle 2007 seems to better represent the 1970s cooling, though the warm 1930s don’t seem to make an appearance.”

The way he treats the endpoint of his dataset; the mean of his proxies peaks in 1966 and drops rapidly to the end in 1980 (see below):

One Trackback

[…] While the Arctic ice cover may be shrinking, the Antarctic cover is rapidly increasing. It is becoming apparent that the Medieval Warm Period was considerably warmer than the temperatures we are currently experiencing, even the IPCC has abandoned the Mannian […]