Mann 2008 Non-Dendro MWP Proxies

Once again, Mann has claimed that he can obtain a “reliable long-term record” without tree ring data, a claim that, as previously noted, is eerily reminiscent of a similar untrue claim made 10 years ago.

“Ten years ago, we could not simply eliminate all the tree-ring data from our network because we did not have enough other proxy climate records to piece together a reliable global record,” said Michael Mann, associate professor of meteorology and geosciences and director of Penn State’s Earth System Science Center. “With the considerably expanded networks of data now available, we can indeed obtain a reliable long-term record without using tree rings.”

The first thing to look at is always the data. Basketball scouts look at dozens of prospects, so climate scientists should be able to look at 33 proxies. I’ve plotted all 33 proxies in a consistent format below. I’ve highlighted the 950-1100 MWP period and the modern period in the graphics below and standardized all proxies on 1400-1800, choosing this period to avoid incorporating the two extreme periods: I think that this highlights the proxies of interest quite well. As an experiment, I’ve made the proxies into a flash-gif (Thanks to Anthony for advising me on software.) I’ll make a few comments below and plan to continue examining this data.
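For readers who want to reproduce the scaling, here is a minimal sketch of the standardization step described above (Python rather than my working code, and with a hypothetical noise series standing in for an archived proxy):

```python
import numpy as np

def standardize_on_window(series, years, lo=1400, hi=1800):
    """Convert a proxy to SD units using only the lo-hi window,
    so that neither the MWP nor the modern period influences the scaling."""
    base = series[(years >= lo) & (years <= hi)]
    return (series - base.mean()) / base.std(ddof=1)

# hypothetical proxy: pure noise over AD 1000-1980
years = np.arange(1000, 1981)
proxy = np.random.default_rng(0).normal(size=years.size)
z = standardize_on_window(proxy, years)
```

The standardized series then has mean 0 and SD 1 over 1400-1800, so excursions in the MWP and modern windows can be compared in common units.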

Mann et al 2008 Non-Dendro Proxies

Figure 1. These are arranged more or less by longitude going east around the world. x-axis dates are Years AD with 950-1100 and 1850-1980 highlighted. y-axis SD units are based on native proxy units standardized on 1400-1980.

Press Release vs Article
The above comments were from the press release. Now look at the article (not available at the time of the press release). Using CPS methods (which should be sufficient to recover any actual signal), Mann says that a “skilful reconstruction” is possible only back to AD1500 – not even the AD1400 of MBH98, conceding the 15th century period in dispute in our articles. However, Mann then triumphantly announces that he can get a “skilful reconstruction” using a new and improved Mannomatic – in this case, an “EIV method” together with other opaque Mannian multivariate operations, a method that you can’t read about in Draper and Smith or other statistics texts. The statistical authority for the method is, needless to say, another Mannian article. Mann et al:

The skill diagnostics (Fig. 2; see also Dataset S4) for the validation experiments indicate that both the CPS reconstructions (with the screened network) and EIV reconstruction (with the full network) produce skillful NH land reconstructions back to A.D. 400. When tree-ring data are eliminated from the proxy data network, a skillful reconstruction is possible only back to A.D. 1500 by using the CPS approach but is possible considerably further back, to A.D. 1000, by using the EIV approach. We interpret this result as a limitation of the CPS method in requiring local proxy temperature information, which becomes quite sparse in earlier centuries. This situation poses less of a challenge to the EIV approach, which makes use of nonlocal statistical relationships, allowing temperature changes over distant regions to be effectively represented through their covariance with climatic changes recorded by the network.

A skillful EIV reconstruction without tree-ring data is possible even further back, over at least the past 1,300 years, for NH combined land plus ocean temperature (see SI Text). This achievement represents a significant development relative to earlier studies with sparser proxy networks (4) where it was not possible to obtain skillful long-term reconstructions without tree-ring data.

This is eerily similar to an observation in MBH98, where Mann noted in passing that conventional methods had been “ineffective”, but the Mannomatic was just the ticket. Of course, no one then knew exactly what a Mannomatic was (no mention of Mannian principal components or bristlecones), but climate scientists and IPCC loved the answer.

Largely because of the inhomogeneity of the information represented by different types of indicators in a true ‘multiproxy’ network, we found conventional approaches (for example, canonical correlation analysis, CCA, of the proxy and instrumental data sets) to be relatively ineffective. Our approach to climate pattern reconstruction relates closely to statistical approaches which have recently been applied to the problem of filling-in sparse early instrumental climate fields, based on calibration of the sparse sub-networks against the more widespread patterns of variability that can be resolved in shorter data sets. We first decompose the twentieth-century instrumental data into its dominant patterns of variability, and subsequently calibrate the individual climate proxy indicators against the time histories of these distinct patterns during their mutual interval of overlap.

Are we seeing the same thing once again? A new Mannomatic?

A Few Comments on the “Proxies”
I identified 33 non-tree ring proxies that started on or before AD 1000 – many, perhaps even most, of these proxies are new to the recon world. How were these particular proxies selected? How many proxies were screened prior to establishing this network? Mann didn’t say. (Mann SI Figure S8 plots 18 of these series (ones going back to 818) – readers should consult this as well.)

One’s first impression is that there isn’t a common signal in this data.

The proxies with the loudest modern warm period “signal” – the Finnish lake sediments – are said by the authors to have been contaminated by non-climatic modern disturbance. Mann notes in the SI, referring specifically to these 4 series:

we also examined whether or not potential problems noted for several records (see Dataset S1 for details) might compromise the reconstructions.

This smacks all too much of his attempt to “adjust” the bristlecone data, concocting an “adjustment” that didn’t affect the results. The logical course of action when an author notes such disturbance is simply not to use the data. There are dozens of other unused series. In Mann’s SI Figure 7, he argues that the presence/absence of 7 problematic series doesn’t “matter”. So why use them? And why use 4 of them? It’s definitely fishy.

One of the Socotra Island series has a huge modern increase, not present in another Socotra series – this data needs to be examined. The Agassiz ice core series also has a big difference: it also had a lot of leverage in Moberg.

I’ve already examined the Curtis Punta Laguna data and Mann’s version does not match the public archive. I’ll discuss this separately.

I wonder what would be the best way to graphically illustrate how adding more noise, I mean random proxies, will average out to the ‘mostly stable for over 1000 years’ if you add enough of it, and splice some US surface stations onto the end to give you the hockey stick.

Perhaps a little Flash app, where you have a slider that adds more and more proxies in a random order as you move it, and a dynamically updating graph with the average of all those proxies, and a toggle where you can let it do it with the actual proxy data, or with randomly generated graphs, or exclude certain types of proxies. The same kind of animation can be done with Javascript, if only I had a few days of free time.
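The averaging effect the commenter describes is easy to sketch numerically (synthetic noise only, not the actual proxy data – the slider would just sweep the number of series included in the mean):

```python
import numpy as np

# Toy illustration: the average of many uncorrelated "proxies" flattens out
# (its spread shrinks roughly as 1/sqrt(n)), so a spliced modern instrumental
# segment would dominate the visual impression of the composite.
rng = np.random.default_rng(42)
n_years, n_proxies = 1000, 33
proxies = rng.normal(size=(n_proxies, n_years))

avg_few = proxies[:3].mean(axis=0)   # average of 3 series: still wiggly
avg_all = proxies.mean(axis=0)       # average of all 33: much flatter
```

With real data, one would substitute the archived series for `proxies` and redraw the composite as the slider adds them one at a time.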

#1. I don’t know about flash files – I’d be happy to post it up. Right now, it’s a gif file. I can generate the frames very quickly, but making the moving gif is fairly manual. I used Benetton software.

The moving gif files are great Steve. These kinds of animations present much more information to the reader (certainly more than any kind of fancy Mannomatic).

It already looks like you are on the right track here in dissecting the reconstruction. Don’t present a detailed analysis until you are truly ready, because the Team will be quick to jump on any problems like they have in the past. They will keep falsely treating any initial analysis as your final analysis, like they did before.

“…These reconstructions show the warming from the last glacial maximum, the occurrence of a mid-Holocene warm episode, a Medieval Warm Period (MWP), a Little Ice Age (LIA), and the rapid warming of the 20th century. The reconstructions show the temperatures of the mid-Holocene warm episode some 1–2 K above the reference level, the maximum of the MWP at or slightly below the reference level, the minimum of the LIA about 1 K below the reference level, and end-of-20th century temperatures about 0.5 K above the reference level.”

“…Results suggest that there could be an interesting issue in the effects of noise, which could in extreme cases lead to an under-estimation rather than an over-estimation of past climate variability as it has been suggested in the past literature. Future efforts could consider a more realistic involvement of noise, both in borehole profiles and also with a regional perspective incorporating non-climate perturbations like horizontal heat advection, changes in vegetation, etc. The effect of multi-centennial changes in land surface cover has not been considered so far…”

Re #5
I thought the term “MWP” was no longer accepted as global consensus. For example, I notice IPCC now uses “MCA” (“Medieval Climate Anomaly”) on their graphics. One wonders: what was the process for deciding this linguistic shift? Is this some camp’s pet hypothesis? What IS the consensus on this terminology? Is there now no consensus?

One’s first impression is that there isn’t a common signal in this data.

No kidding. I drew the same conclusion from a manual gif animation of the individual frames you posted yesterday. The animated gif makes it clear that their supposedly robust conclusions are nothing more than wishful thinking.

If a graduate student came to me with that data and said they could draw robust conclusions about the temperature a millennium ago, I’d try and persuade them to find another advisor…

I beg your pardon for my stupid question, and for my stupid observation.

The years written below the graphs – are they years AD (so 2,000 is the year 2000) or BP (so 2,000 is when Julius Caesar ruled)?

In the case of years AD… it seems to me that the only proxies that should agree with Mann are the ones where we are sure from direct evidence that there was a Medieval Warm Period (Scotland, Finland and Ellesmere) – yet these proxies are the only ones to show a great warming from 1800-1900 onward and not during the Middle Ages? Or am I totally wrong in reading the graphs?

#6 Bender, it is a general consensus in science that “Linguistic Shifts” are human-induced, and further, that the change from “MWP” to “MCA” was already predicted by scientists, who first began to discover this inconvenient truth when they noticed that “Global Warming” (GW) was altered to “Climate Change” (CC). There is a pattern and a growing theory that acronyms are increasing in the amount of “C” words, and that it may have something to do with CO2, obviously because it also starts with a “C”. There is also a growing concern that the entire alphabet will be substituted by the letter “C”, but some skeptics claim that such an event is outside the 95% confidence intervals of the models.

Of course, there are people that claim that this is all bollocks and insane, but are dismissed for being out of scope with the consensus and should be regarded as fringe lunatic oil-bought opinion makers.

“[18] We have advanced a technique for producing a Northern Hemisphere reconstruction consistent with the seasonality of tree ring derived temperature and temperature-depth profiles. This technique also yields a cold season temperature reconstruction that suggests Northern Hemispheric temperatures are more sensitive to external forcing in the cold season. We estimate annual and cold season warming of 0.2 ± 0.1°C and 0.4 ± 0.3°C, respectively, between 1500 and 1856. Subsequent to 1856 the SAT record indicates annual warming and cold season warming of 0.82°C and 1.13°C, respectively [Jones and Moberg, 2003]. This reconstruction indicates that the late 20th century is warmer relative to the past 500 years than previously thought and suggests that the climate has greater sensitivity to external forcings. These results are consistent with previous findings of recent global warming and its causes in that since the onset of the industrial revolution, temperatures appear to be rising at an unprecedented rate [Briffa et al., 2004; Jones and Mann, 2004].”

The Finnish lake was polluted in the 20th century, and land usage changed the sediments already after 1720. I don’t know what the proxies represent, probably some mineral ratios etc. The article itself states that the MWP is clearly visible in the records.

“…The temperature difference between the LIA and the MWP is about 1.7 °C on average. This difference is in good agreement with those derived from sediment cores from the Bermuda Rise but is larger than the reconstruction of temperature for the Northern Hemisphere from low frequency stacks and significantly larger than that in the IPCC report…”

Tiljander et al. discovered “an organic rich period from AD 980 to 1250” that they say “is chronologically comparable with the well-known ‘Medieval Warm Period’.” During this time interval, they report that “the sediment structure changes” and “less mineral material accumulates on the lake bottom than at any other time [our italics] in the 3000 years sequence analyzed and the sediment is quite organic rich (LOI ~20%).” From these observations they conclude that “the winter snow cover must have been negligible, if it existed at all [our italics], and spring floods must have been of considerably lower magnitude than during the instrumental period (since AD 1881),” which conditions they equate with a winter temperature approximately 2°C warmer than at present.

In support of this conclusion, Tiljander et al. cite much corroborative evidence. They note, for example, that “the relative lack of mineral matter accumulation and high proportion of organic material between AD 950 and 1200 was also noticed in two varved lakes in eastern Finland (Saarinen et al. 2001) as well as in varves of Lake Nautajarvi in central Finland c. AD 1000-1200 (Ojala, 2001).” They also note that “a study based on oak barrels, which were used to pay taxes in AD 1250-1300, indicates that oak forests grew 150 km north of their present distribution in SW Finland and this latitudinal extension implies a summer temperature 1-2°C higher than today (Hulden, 2001).” And they report that “a pollen reconstruction from northern Finland suggests that the July mean temperature was c. 0.8°C warmer than today during the Medieval Climate Anomaly (Seppa, 2001).”

The Finnish lake sediments cannot be used for temperature interpretations in the 18th to 20th century unless you know exactly the history of the regional lake environment conditions. We have 180,000 lakes in Finland. It is very easy to cherry pick among them and say that it is a random sample. Of all lakes, 1,500 lakes are affected by lowering of water levels. These must be omitted. Many others are affected by agriculture, including forestry. This affects the relative components of the sediments. This is well known, although somebody can by chance use them for climate trends. Finnish prof. Matti Saarnisto has shown me graphs of lake sediments from Finland which can be used for temperature trends but still show strong deviations in the recent 200 years because of agriculture. These lakes are not always very close to agricultural sites.

In addition, we must remember that the fauna or flora in the sediments do not represent the temperature of the air because long term trends in water temperatures do not correlate with long term trends in air temperatures. I had a poster on this phenomenon in Italy in 2001 (or 2002?).

This is very well done. Other than the four Finnish sites and the northern Canadian site (which clearly stand out from the pack), the time series as a whole appear to be random noise. Some of the South American and African samples seem to be correlated within their region, but there seems to be nothing striking at first view about these correlations. I’ve done a bit of time series simulation work where I’ve looked at hundreds of random simulation results just to get a feel for how “real” randomly generated samples can look, and the results of these 33 proxies don’t look any different from what can be generated by a completely random process.

Are these 33 proxies a representative sample of the data used to make the conclusion? There is certainly no evidence here of a remarkable 20th century warming compared with past centuries.

Yes, the lack of any coherent signal was the very first thing that I noticed a few years ago. In the MBH98 network, there were a few series that stood out the way the Finnish sites do here (the bristlecones), and the net result of Mannian methods was to increase the weight on these sites.

I showed that reconstructions using bristlecones plus white noise were as good as recons with actual proxies. It takes a while for people to get their eye in for what is going on – it’s a unique style of spurious regression, which is so bizarre that people miss it. Consider a classic univariate spurious regression and then mix in 21 white noise series. Apply the Mannomatic and measure the RE stat. Works like a champ. Stock prices will work too.
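A toy version of this experiment can be sketched as follows (my own simplified illustration, not Mann’s actual algorithm: a trending target, one independently trending series, 21 white-noise series, and the RE statistic computed on a hold-out period):

```python
import numpy as np

# Two independently generated trending series plus 21 white-noise regressors.
# There is no causal relation between "target" and any regressor.
rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
target = 0.01 * t + rng.normal(scale=0.5, size=n)     # "temperature"
trending = 0.01 * t + rng.normal(scale=0.5, size=n)   # unrelated trending series
noise = rng.normal(size=(n, 21))                      # 21 white-noise "proxies"
X = np.column_stack([np.ones(n), trending, noise])

cal, ver = slice(100, 200), slice(0, 100)             # calibrate late, verify early
beta, *_ = np.linalg.lstsq(X[cal], target[cal], rcond=None)
pred = X @ beta

# RE: 1 - SSE / (squared verification departures from the calibration mean).
# Typically well above zero here despite the absence of any causal link,
# because RE rewards merely tracking the calibration-period level and trend.
re = 1 - np.sum((target[ver] - pred[ver]) ** 2) / np.sum(
    (target[ver] - target[cal].mean()) ** 2
)
```

Swapping the white noise for other persistent series (stock prices, say) changes nothing essential: the RE score is driven by the shared trend, not by any signal.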

Underneath the opaque methodology of EIV etc, there’s undoubtedly a similar trick.

Note that I’ve added in 8 series starting in 1010, increasing the show to 41 series. This includes the “independent” Thompson tropical data, of which the Dasuopu series carries the water.

I’ve done similar plots for all 1209 series now and, if I can automatically insert delay times, will post that up as well.

Steve, I would hope that since you already have identified several problems with the data and suspect more of the same methodology problems, that you would publish and crush this study in short time. Obviously all of the facts are not in yet, but there’s clearly an expectation of fatal flaws in Mann’s methodology yet again… if such flaws are indeed verified, it looks like you’d be able to very quickly (relative to MM03) figure out what is wrong and publish corrections/refutations.

If it were only that easy. I would suggest that the same crowd that gives these papers a pass in peer review would either not be qualified or not have the wherewithal to understand an analysis of the errors in a critique (or why would they miss them time and again in peer review?), or would not be predisposed to accept a critique. I agree that all these apparent errors add up to a fatal flaw, but I think the response(s), if there were any, would address each error separately as minor and over a long period of time.

What I see in this paper is a definite response to the questions that M&M have posed over time and some pretty significant (and very indirect) admissions of problems in the previous publications. That is not to say that significant problems do not remain for further admissions at a future time. At the rate the HS is being worn down, as evidenced by this paper, I could see the next IPCC AR5 leaving temperature reconstructions off the agenda – unless we see a paradigm shift in methodology and proxy science and it can be used as evidence for AGW mitigation.

Notice that MM was not referenced once in this paper.

In the meantime we get to enjoy the analyses and perhaps find out how Mann et al. handled the data.

Abstract
We present a suite of new 20,000 year reconstructions that integrate three types of geothermal information: a global database of terrestrial heat flux measurements, another database of temperature versus depth observations, and the 20th century instrumental record of temperature, all referenced to the 1961–1990 mean of the instrumental record. These reconstructions show the warming from the last glacial maximum, the occurrence of a mid-Holocene warm episode, a Medieval Warm Period (MWP), a Little Ice Age (LIA), and the rapid warming of the 20th century. The reconstructions show the temperatures of the mid-Holocene warm episode some 1–2 K above the reference level, the maximum of the MWP at or slightly below the reference level, the minimum of the LIA about 1 K below the reference level, and end-of-20th century temperatures about 0.5 K above the reference level.

Inversion is often a problem because it is frequently ill-posed. Even something as simple as inverting a logarithm has no solution for an exponent of zero. That’s also why satellite MSU temperatures can never be the gold standard. X-ray crystallography is an example of how an initially ill-posed problem can be solved if there is other data to constrain the solution. The Wikipedia article on the Inverse Problem is not a bad place to start.

33 (DeWitt): Inversion is mostly successful. It is used to great effect in seismology, both the terrestrial kind and helioseismology. The technique is not bad in itself, so what is it specifically that makes it bad for borehole data?

“In particular, the largest geothermal anomaly at 0–2 km depth is caused by a downward movement of meteoric waters in the zone of active water exchange. The experimental results can be understood quantitatively if a permeability of (1 to 3) × 10⁻¹³ m² over a 1–2 km zone of exogenic fracturing is assumed. An abrupt change of conductive HFD in the depth interval 1.7–2.2 km is attributed to a downward movement of fluids of 2–3 cm/yr along inclined zones of fracturing at the boundary between igneous and sedimentary sequences.”

Later, there was a study of geothermal gradients in Germany (the KTB project), one brief summary of which is

Finally, the potential and the limitation of the analysis of heat flows and temperature gradients are demonstrated. Heat-flow interpretations are conclusive only for nearly horizontally layered, isotropic geological units. In steeply dipping and anisotropic formations the heat-flow field is perturbed over a large distance (>1 km) around the point of interest. In such geological units only, the temperature gradient interpretation can provide reliable information on the surrounding material.

While these 2 quotes are not killers for shallow geothermal gradient inversion methods to study palaeoclimates, they do indicate that much care has to be taken to avoid confounding effects. Also, it is important to drill deep holes in the study area to assess the outward heat flow, since it sets the baseline beneath the temperatures under study.

The rocks in Australia where I live are mostly deeply weathered and so covered with soil with different hydrologic properties to the hard rock below. It would not be wise to use the inversion method in such terrain. Indeed, it is hard to imagine a suitable place globally to conduct such studies, confident that perturbing effects were known to be negligible. There is a lot of anisotropy out there.

However, Mann then triumphantly announces that he can get a “skilful reconstruction” using a new and improved Mannomatic – in this case, an “EIV method” together with other opaque Mannian multivariate operations, a method that you can’t read about in Draper and Smith or other statistics texts.

As I haven’t seen any actual discussion of Errors-in-Variables statistical model here, I thought the EIV Wikipedia reference might be useful. The lone citation is from section 3.4 in Draper N.R., Smith, H. (1998) Applied Regression Analysis (3rd Edition), Wiley.

Steve: Mann does not use standard statistical routines, but weird homemade methods whose properties are unknown; just because he uses a label that can be identified elsewhere doesn’t mean that his method is valid – for example, MBH98 refers to “conventional” principal components. Principal components is a known technique, but the properties of mannomatic PCA are different and required careful parsing.
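For readers unfamiliar with the PCA point, here is a toy sketch of the short-centering effect described in the M&M critique (synthetic random walks, no climate data; “short-centering” here means subtracting the mean of a late sub-period only, rather than the full-period mean that conventional PCA uses):

```python
import numpy as np

# 50 random walks with no common signal, standing in for a proxy network.
rng = np.random.default_rng(3)
X = np.cumsum(rng.normal(size=(581, 50)), axis=0) * 0.05

full_centered = X - X.mean(axis=0)          # conventional centering
short_centered = X - X[-79:].mean(axis=0)   # center on the last 79 "years" only

def pc1(Y):
    """First principal component time series of Y (rows = years)."""
    _, _, vt = np.linalg.svd(Y, full_matrices=False)
    return Y @ vt[0]

# Short centering forces each column to average zero over the late window,
# so PC1 is pinned near zero there while its earlier portion wanders away --
# a hockey-stick shape manufactured from trendless noise.
p_full, p_short = pc1(full_centered), pc1(short_centered)
```

The effect is mechanical: series whose late-window mean differs most from their long-term level acquire large apparent variance under short centering and dominate PC1, regardless of whether they carry any common signal.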

Steve,
I am having trouble following your point about supposedly exaggerated claims of non-dendro reconstruction skill.

As I understand it, Mann asserted non-dendro recon skill back to 1760 in MBH98. Now he is claiming skill back to 1500 under CPS or 400 under EIV.

So I’m still at a loss to see how you can say:

Once again, Mann has claimed that he can obtain a “reliable long-term record” without tree ring data.

[Emphasis added]

None of the material you have cited in MBH98 supports your accusation of past exaggerated claims in this area, or that there was a claim of non-dendro recon skill beyond 1760, as far as I can see. And yes I have chased down the other passages you cited.

One of the passages you quoted did say that “the long-term trend [i.e. at the century level] in NH is relatively robust to the inclusion of dendroclimatic indicators in the network.” However, the full statement in context makes clear that the dendro indicators are “especially important” in the reconstruction, and the statement is a much more limited one addressing possible bias in the dendro indicators.

Here is the full quote from MBH98:

But certain sub-components of the proxy dataset (for example, the dendroclimatic indicators) appear to be especially important in resolving the large-scale temperature patterns, with notable decreases in the scores reported for the proxy data set if all dendroclimatic indicators are withheld from the multiproxy network. On the other hand, the long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network, suggesting that potential tree growth trend biases are not influential in the multiproxy climate reconstructions. [Emphasis added]

Steve: How about the others that I mentioned in a recent post

We have also verified that possible low-frequency bias due to non-climatic influences on dendroclimatic (tree-ring) indicators is not problematic in our temperature reconstructions.

Or

Whether we use all data, exclude tree rings, or base a reconstruction only on tree rings, has no significant effect on the form of the reconstruction for the period in question. This is most probably a result of the combination of our unique reconstruction strategy with the careful selection of the natural archives according to clear a priori criteria.

Obviously, this is not a statement about non-dendro proxies nor removal of dendro proxies, and so is not relevant to the discussion.

Other statement 2:

Whether we use all data, exclude tree rings, or base a reconstruction only on tree rings, has no significant effect on the form of the reconstruction for the period in question.

The “period in question” was post-1760, as I pointed out previously.

Steve: So what. Mann’s statement is not a plain disclosure of the adverse results; it’s a cunning presentation of a case that went his way. It’s exactly the same as presenting the successful verification r2 results for the AD1820 step in MBH98 Figure 3 without reporting the failure in the AD1400 step. If you tried that sort of stunt with a securities commission, you’d be prosecuted. I do not understand why you support this sort of thing.

“EIV method” together with other opaque Mannian multivariate operations, a method that you can’t read about in Draper and Smith or other statistics texts.

The original statement could easily give the misleading impression that the “EIV method” was a “method that you can’t read about in Draper and Smith or other statistics texts.” That needed to be corrected.

Mann does not use standard statistical routines, but weird homemade methods whose properties are unknown; just because he uses a label that can be identified elsewhere doesn’t mean that his method is valid.

Of course, we would all be interested in an actual reasoned critique of Mann’s methodology, when you can provide one. A good place to start would be a full review of the two references provided. Perhaps you could devote a thread to this once you’ve had time to absorb all the material, references, source code etc.

for example, MBH98 refers to “conventional” principal components. Principal components is a known technique, but the properties of mannomatic PCA are different and required careful parsing.

I think a rehash of the PCA issue would be OT, but I will say the situation is not nearly so clear-cut as you assert. I tend to agree with von Storch’s assessment of the relatively minor significance of the M&M findings on Mann’s use of PCAs. Perhaps this time will be different, perhaps not.

Steve: So what. Mann’s statement is not a plain disclosure of the adverse results; it’s a cunning presentation of a case that went his way.

To me, a reading of the entire text made it “plain” that MBH98 claimed skill for the complete recon back to 1400, but only back to 1760 (and not before) for the non-dendro subset. If that’s an “adverse result”, so be it, but it should have been clear to anyone who read the study.

Of course, your highly selective quotes might lead to a different impression, but that can hardly be the fault of Mann’s supposed “cunning presentation.”

Steve: Nope. It most certainly does not. Under proper disclosure, when he claimed that it didn’t matter for the period in question, he was obliged to disclose explicitly and in the same paragraph or section that this result only applied for a cherry-picked period and that it broke down for the earlier period. If you don’t understand this, I’m not going to discuss it further. Neither did he disclose the adverse verification r2 results. You show me a statement of the adverse verification r2 results. It took an academic misconduct complaint to get Ammann to report the adverse results. At the NAS panel hearings, he denied even calculating a verification r2 result, making the provenance of MBH98 Figure 3 an unsolved mystery.

Re #31, 32, & 35: Sorry it has taken me so long to respond, but my day job has kept me busy. My day job does include parameter and state estimation using Least Squares and Kalman filtering.

I have replicated several of the non-ice borehole temperature reconstructions and I’ll “share” my observations. The inversion problem boils down to finding the solution to the inconsistent set of linear equations Ax~b, where A is a skinny matrix (m x n) whose columns are generated from the solution of the heat conduction equation. x is an n x 1 solution vector where the first two elements are a slope and intercept and the remaining elements are the temperature reconstruction. The b m x 1 vector is the borehole temperature profile. The solution is calculated using the pseudoinverse based on the singular value decomposition (SVD), where A = U*S*V’ and where U (m x m) and V (n x n) are complete orthonormal basis sets. The A matrix is ill-conditioned, with a ratio of max to min singular value on the order of 1e6 to 1e7 in the boreholes I replicated. In the older literature only singular values greater than 0.3 are used in the pseudoinverse, with recent literature using a ridge regression that optimizes the norm of the residual versus the norm of the solution. If you keep singular values that are less than 0.3, the reconstruction is physically unreasonable, i.e. pulses on the order of 20-40 deg K. For a 500-year reconstruction with 100-year steps, the 3 smallest singular values out of the seven total aren’t used in the pseudoinverse.

So what is the problem??? The ill-conditioning of A!! For instance, if A is rank deficient, i.e. rank = n-1, then one has a single null vector z such that A*z = 0_mx1 and an infinite number of solutions for x. For our A, there are 3 “almost” null vectors, which are the last three columns of V. So A*v(5), A*v(6), A*v(7) ~ 0_mx1. Let x_est be the pseudoinverse solution using only singular values greater than 0.3. Let’s create a new solution x_new = x_est + 2*v(7). The individual residual changes are on the order of millikelvins. x_new “looks” substantially different from x_est, but the values are reasonable. The point is that there are many x estimates and many reasonable temperature reconstructions that have residuals that are almost identical, with differences that are less than the temperature sensor noise level.

Summarizing, the columns of the ill-conditioned A matrix are created using solutions to the heat conducting equation. x_est is one of many possible temperature histories.
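The near-null-vector point can be sketched numerically (the A matrix below is a made-up ill-conditioned matrix with a condition number of about 1e6, not an actual borehole forward model):

```python
import numpy as np

# Build a synthetic ill-conditioned A = U S V' with three tiny singular values.
rng = np.random.default_rng(7)
m, n = 50, 7
U, _ = np.linalg.qr(rng.normal(size=(m, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.array([10.0, 5.0, 1.0, 0.4, 1e-3, 1e-4, 1e-5])
A = (U * s) @ V.T
b = A @ rng.normal(size=n) + rng.normal(scale=0.01, size=m)  # noisy "profile"

# Truncated pseudoinverse: keep only singular values above the 0.3 cutoff.
keep = s > 0.3
x_est = (V[:, keep] / s[keep]) @ (U[:, keep].T @ b)

# Perturb along the smallest right-singular vector: the solution moves by a
# vector of norm 2, yet the residual changes by at most 2 * s[-1] = 2e-5.
x_new = x_est + 2.0 * V[:, -1]
r_est = np.linalg.norm(A @ x_est - b)
r_new = np.linalg.norm(A @ x_new - b)
```

So `x_new` and `x_est` are visibly different “reconstructions” whose fits to the data are indistinguishable at the sensor noise level, which is exactly the non-uniqueness the commenter describes.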

As a fairly antique natural philosopher (as I grandly style myself) trained in physics, astronomy, mathematics and even engineering, to say that I regard both Mann’s data and his statistical analysis with a very jaundiced eye is a serious understatement.

In my time I have seen far too many studies with cherry picked data and inappropriate statistical analysis not to spot a bad ‘un miles away. And this one stinks. Almost as bad as Hansen’s paper which suggested you can infer an entire planet’s climate from one single piece of data gathered at one point on the globe: well, the other point did not agree and so had to be discounted, of course.

Since Doom makes good press and politically driven agendas can be boosted by spurious scientific studies, well, I may not have seen it all, but I have seen an awful lot of it in my lifetime.

As for the poisonous academic hostility between rival schools of thought, I have seen quite enough of that too. Serious debate is one thing, fitting or inventing data and analysis to support a political agenda quite another.

Of course I may be wrong but I suspect the current quiescent period in solar activity is just about to teach us a very chilly lesson in the simple fact that great natural forces far exceed anything that humankind can manage with its puny technology. I only hope I live long enough to see it. And if so that I can afford the heating bills.

Apologies for daring to comment given my abysmally deficient mathematical and scientific skills, but can someone tell me if I’m really missing the point here?
The “anomaly” that I see in the paper is that the rise in the “CRU” data in Mann’s graph no 2 for the mid 70’s to 2000 period looks like 0.6, when the comparable “CRU” rise in figure 3 is clearly about 1 degree. Graph 2 seems to be intended to suggest that his reconstructions are consistent with the modern actually measured data – graph 3 is where he claims to demonstrate an anomaly. If his methodology as shown in graph 2 gets a 0.6 result from the underlying “CRU” data (“decadally smoothed”, he calls it), why does it suddenly jump by 0.4 degrees in the comparison with his methodology as applied to past proxies in graph 3?

One degree is consistent with the CRUTEM3 northern hemisphere data on its own as published by Hadley without Mann’s decadal smoothing, but if that’s what he’s doing, why is that used in graph 3 as a comparison for calculations using his own methodology? Also, if he is using CRUTEM3 data only, why is that a valid comparator to results calculated from other proxy data including (according to the paper itself) coral and marine sediment? Why not just use the full northern hemisphere HadCRUT3, or apply his graph 2 methodology to the data for the purposes of graph 3 as well, or both? (The alleged anomaly would scarcely be noticeable in graph 3 if either of those were done, and would presumably completely vanish if both were done.)
