Bring the Proxies Up to Date!!

I will make here a very simple suggestion: if IPCC or others want to use “multiproxy” reconstructions of world temperature for policy purposes, stop using data ending in 1980 and bring the proxies up-to-date. I would appreciate comments on this note as I think that I will pursue the matter with policymakers.

Let’s see how they perform in the warm 1990s – which should be an ideal period to show the merit of the proxies. I do not believe that any responsible policy-maker can base policy, even in part, on the continued use of obsolete data ending in 1980, when the cost of bringing the data up-to-date is inconsequential compared to Kyoto costs.

For example, in Mann’s famous hockey stick graph, as presented to policymakers and to the public, the graph used Mann’s reconstruction from proxies up to 1980 and instrumental temperatures thereafter (here, as in other similar studies, using Jones’ more lurid CRU surface history rather than the more moderate increases shown by satellite measurements). Usually (but not always), a different color is used for the instrumental portion, but the juxtaposition of the two series achieves the desired promotional effect. (In mining promotions, where there is considerable community experience with promotional graphics and statistics, securities commissions prohibit the adding together of proven ore reserves and inferred ore reserves – a policy which deserves a little reflection in the context of IPCC studies.)

Last week, a brand new multiproxy study by European scientists [Moberg et al., 2005] was published in Nature. On the very day of publication, I received an email from a prominent scientist telling me that Mann’s hockeystick was yesterday’s news, that the “community” had now “moved on” and so should I. That the “community” had had no opportunity to verify Moberg’s results, however meritorious they may finally appear, seemed to matter not at all.

If you look at the proxy portion of the new Moberg graphic, you see nothing that would be problematic for opponents of the hockey stick: it shows a striking Medieval Warm Period (MWP), a cold Little Ice Age and 20th century warming not quite reaching MWP levels by 1979, when the proxy portion of the study ends. (I’m in the process of examining the individual proxies, and the Moberg reconstruction is not without its own imperfections.) In the presentation to the public – see the figure in the Nature article itself – there is, once again, the infamous splice between the reconstruction by proxy (up to 1980) and the instrumental record thereafter (once again Jones’ CRU record, rather than the satellite record).

One of the first questions that occurs to any civilian becoming familiar with these studies (and it was one of my first questions) is: what happens to the proxies after 1980? Given the presumed warmth of the 1990s, and especially 1998 (the “warmest year in the millennium”), you’d think that the proxy values would be off the chart. In effect, the last 25 years have provided an ideal opportunity to validate the usefulness of proxies and, especially, an opportunity to test the confidence intervals of these studies, put forward with such assurance by the multiproxy proponents. What happens to the proxies used in MBH99 or Moberg et al. [2005] or Crowley and Lowery [2000] in the 1990s and, especially, in 1998?

This question about proxies after 1980 was posed by a civilian to Mann in December at realclimate. Mann replied:

Most reconstructions only extend through about 1980 because the vast majority of tree-ring, coral, and ice core records currently available in the public domain do not extend into the most recent decades. While paleoclimatologists are attempting to update many important proxy records to the present, this is a costly, and labor-intensive activity, often requiring expensive field campaigns that involve traveling with heavy equipment to difficult-to-reach locations (such as high-elevation or remote polar sites). For historical reasons, many of the important records were obtained in the 1970s and 1980s and have yet to be updated. [my bold]

Pause and think about this response. Think about the costs of Kyoto and then think again about this answer. Think about the billions spent on climate research and then try to explain to me why we need to rely on “important records” obtained in the 1970s. Far more money has been spent on climate research in the last decade than in the 1970s. Why are we still relying on obsolete proxy data?

As someone with actual experience in the mineral exploration business, which also involves “expensive field campaigns that involve traveling with heavy equipment to difficult-to-reach locations”, I can assure readers that Mann’s response cannot be justified and is an embarrassment to the paleoclimate community. The more I think about it, the more outrageous the comment itself seems, as does the fact that no one appears to have picked up on it.

It is even more outrageous when you look in detail at what is actually involved in collecting the proxy data used in the medieval period in the key multiproxy studies. The proxies used in MBH99 come from fewer than 40 sites (28 of them U.S. tree ring sites, represented in 3 principal component series).

As to the time needed to update some of these tree ring sites, here is an excerpt from Lamarche et al. [1984] on the collection of key tree ring cores from Sheep Mountain and Campito Mountain, which are the most important indicators in the MBH reconstruction:

Now to get to Campito Mountain and Sheep Mountain, they had to get to Bishop, California, which is hardly “remote” even by Paris Hilton standards, and then proceed by road to within a few hundred meters of the site, perhaps proceeding for some portion of the journey on unpaved roads.

The picture below illustrates the taking of a tree ring core. While the equipment may seem “heavy” to someone used only to desk work using computers, people in the mineral exploration business would not regard this drill as being especially “heavy”, and I believe that people capable of operating such heavy equipment can be found, even in out-of-the-way places like Bishop, California. I apologize for the tone here, but it is impossible for me not to be facetious.

There is only one relatively remote site in the entire MBH99 roster – the Quelccaya glacier in Peru. Here, fortunately, the work is already done (although, needless to say, it is not published): this information was updated in 2003 by Lonnie Thompson and should be adequate to update these series. With sufficient pressure from the U.S. National Science Foundation, the data should be available expeditiously. (Given that Thompson has not archived data from Dunde, drilled in 1987, the need for pressure should not be under-estimated.)

I realize that the rings need to be measured and that the field work is only a portion of the effort involved. But updating 28 tree ring sites in the United States is not a monumental enterprise nor would updating any of the other sites.

I’ve looked through the lists of proxies used in Jones et al. [1998], MBH99, Crowley and Lowery [2000], Mann and Jones [2003] and Moberg et al. [2005], and see no obstacles to bringing all these proxies up to date. The only sites that might take a little extra time are the Himalayan ice cores. Even here, it’s possible that taking very short cores or even pits would prove adequate for an update, and this might prove easier than one might think. Be that as it may, any delays in updating the most complicated location should not deter updating all the other locations.

As far as I’m concerned, this should be the first order of business for multiproxy studies.

Whose responsibility is this? While the costs are trivial in the scheme of Kyoto, they would still be a significant line item in the budget of a university department. I think that the responsibility here lies with the U.S. National Science Foundation and its equivalents in Canada and Europe. The responsibilities for collecting the proxy updates could be divided up in a couple of emails and budgets established.

One other important aspect: right now the funding agencies fund academics to do the work and are completely ineffective in ensuring prompt reporting. At best, academic practice will tie up reporting of results until the publication of articles in academic journals, creating a delay right at the start. Even then, in cases like Thompson or Jacoby, to whom I’ve referred elsewhere, the data may never be archived, or only after decades in the hands of the originator.

So here I would propose something more like what happens in a mineral exploration program. When a company has drill results, it has to publish them through a press release. It can’t wait for academic reports or for its geologists to spin the results. There’s lots of time to spin afterwards. Good or bad – the results have to be made public. The company has a little discretion, so that it can release drill holes in bunches rather than every single drill hole, but the backlog can’t build up too much during an important program. Here I would insist that the proxy results be archived as soon as they are produced – the academic reports and spin can come later. Since all these sites have already been published, people are used to the proxies, and the updates will to a considerable extent speak for themselves.

What would I expect from such studies? Drill programs are usually a surprise, and maybe there’s one here. My hunch is that the classic proxies will not show anywhere near as “loud” a signal in the 1990s as is needed to make statements comparing the 1990s to the Medieval Warm Period with any confidence at all. I’ve not surveyed proxies in the 1990s (nor, to my knowledge, has anyone else), but I’ve started to look, and many do not show the expected “loud” signal – e.g., some of the proxies posted up on this site, such as Alaskan tree rings and TTHH ring widths – and theories to explain this are starting to develop. But the discussions so far do not explicitly point out the effect of signal failure on the multiproxy reconstruction project.

But this is only a hunch and the evidence could be otherwise. The point is this: there’s no need to speculate any further. It’s time to bring the classic proxies up to date.

Well fine, but has anybody done it by tree ring measurements all the way to the present? It would be interesting to see how it stacks up to the weather station data, or to the satellite data of the past 30 years, for example.

Regards,
Jaime

“John L. Daly” wrote:

Apparently, tree ring data up to the present shows no warming during the 20th century. Curious.

Given the consistent deviation between the proxies and the “real” temperatures after they parallel each other for a few decades, the correlation with the ratio of rural weather stations that you detected yourself, and the discrepancy between surface station data and lower atmosphere data, there seems to be only one suspect remaining: the measured data of the weather stations that show that spike in the 1990s.

That could be the real subject of a scrutinizing audit.

There is a good case for 1540 being the hottest year of the previous millennium.

Here is a comment from Mike Baillie, a senior dendrochronologist, sent to me via Benny Peiser:

Benny, here is a thought for Steve McIntyre. He has put his finger on a long-running problem in dendrochronology for sure.

"On updating the proxies".

Steve, in an ideal world dendrochronologists would nip out regularly, every few years, and update their tree-ring records – but the world is less than ideal. Val LaMarche and Don Graybill who built the California/Nevada bristlecone pine chronologies are both dead, as is Wes Ferguson who did similar work earlier. Here in Ireland we once drove round Ireland jumping over walls and coring stands of ten trees wherever we could find them. That was in 1979, when we were young and irresponsible; there just never seems to have been a day free since then, because, of course, it isn’t a day you are talking about. It is the ‘getting permission’, the ‘collecting’ and the ‘processing’ of the samples from say six or eight sites on our small island – a month’s work maybe? Maybe two? (We also collected a series of English and Scottish chronologies around 1980. Re-building those would make it up to six months, maybe a year of work!) It is just enough work to stop it getting done on a whim. So in the 1990s we tried asking grant giving bodies to fund us. Such work is not regarded as ‘cutting edge’ so it doesn’t get funded. Note that if it had been funded in 1995 it would need doing again now!

People working in universities in this country (UK at least) are now "busy to within an inch of their lives" doing administration and trying to keep weak students in the Thatcher/Blair-revisited system while doing cutting edge research. There is no longer time for doing stuff on whims, least of all stuff that is poorly regarded by research councils.

You also need to know that very comprehensive suites of high latitude/high altitude tree-ring chronologies (from essentially hundreds of sites) were produced across northern Eurasia and Canada/Western America by one (almost lone) Swiss wood-man, namely Fritz Schweingruber, back in the 1970s and 1980s. The records are world class but they all end pre-1990. The effort was super-human at the time. To do it all again now…essentially impossible! I often wonder just what the trees are now recording.

There is a good case for 1540 being the hottest year of the previous millennium.

Frankly that is not what I concluded. I’d phrase it like this:
There is a good case for 1540 being the hottest year of the last 500 years in western Europe.
So I shorten the time span and decrease the area. The problem with the Luterbacher hockey stick is again the comparison of a thermometer result with a proxy result. The data of Luterbacher do not allow you to conclude that 2003 was hotter than 1540.

Now if you look at the Chuine Pinot Noir harvest proxy of Burgundy, you can conclude that the 2003 harvest year was unprecedented (for Burgundy). To extrapolate this to Europe, however, is stretching the data.

Sure thing. All published papers should go to 2004. I suspect that some of the research may cover the present era but because the resulting graphs are probably not spectacular, these files may be in some CENSORED directories. 😉

Steve’s comment: Unfortunately, there’s some evidence of this. See my posts on Gaspe and Jacoby.

In view of the economics involved in global mandates, and the controversy surrounding the hockeystick graph and its data, it should be priority one to get the data right.

This would be a great internship program for many types of students.

As a geology undergraduate student myself, I wouldn’t mind being on the cutting edge collecting new data on Sheep Mountain or elsewhere. It would add to my field experience. I’ll have some time in July…

If professors are swamped with admin duties, perhaps foresters could spearhead this with high school or college students assisting…

Maybe I am missing something, but just for general understanding,
it would be helpful to know where these places are – the closest town, perhaps, and a geographic map. Maybe that has been done and I am overlooking it… http://www.climate2003.com/pdfs/2004GL012750.pdf p. 17

Steve: If you look at the Graybill and Idso paper which I’ve put online, you’ll see locations in Figure 1 and coordinates in Table 1. Information is also at http://www.ngdc.noaa.gov/paleo/ if you go to Tree Rings and then look up codes in individual states, but the first is easier.

Steve’s comparison with mineral exploration is spot on, but my experience with another branch of science (plasma-related) indicates that academic funding gets directed to establishment views – so that if Mann et al. don’t feel it necessary to obtain up-to-date proxies, funding is denied.

Part of the problem is associated with the current state of the universities and the administrative roles researchers find themselves in; here in Australia we seem to suffer from the same problem. Current geology graduates from our universities can’t read maps, nor figure out what air-photos are, and are expertly trained in using applied maths programs without understanding the underlying theory.

I suspect Steve that most of the younger academics in climate science don’t know how to collect tree-ring samples, read maps, sat images, and hence like most new geology graduates, think that orebodies are found using remote sensing and GIS packages.

There is an incredible shortage of field geologists in Australia at the moment, but heaps of computer geologists or ones who feel disobliged to go bush.

Therein, I think, lies the problem behind the lack of modern proxies, though Mann’s reason that it is too expensive is total bulldust. Mind you, the millions spent on ocean-going ships measuring ocean temperatures etc. suggest one funding black hole which might be more easily diverted to important data collection.

All you need is a helo, a GPS, a digital camera, a PDA to record the data, and some funding to collect tree rings – plus the coring device, of course.

I don’t think it is a simple case of re-prioritising funds to proxy collection – it is the system itself which needs revamping, and I suspect that the climate science we are forced to suffer is the product of this systemic illness in academe.

Abstract

This paper is concerned with dendroclimatic research aimed at representing the history of very large-scale temperature changes. It describes recent analyses of the data from a widespread network of tree-ring chronologies, made up of ring width and densitometric measurement data spanning three to six centuries. The network was built over many years from trees selected to maximise their sensitivity to changing temperature. This strategy was adopted so that temperature reconstructions might be achieved at both regional and very large spatial scales. The focus here is on the use of one growth parameter: maximum latewood density (MXD). The detailed nature of the temperature sensitivity of MXD across the whole network has been explored and the dominant common influence of mean April–September temperature on MXD variability is demonstrated. Different approaches to reconstructing past temperature for this season include the production of detailed year-by-year gridded maps and wider regional integrations in the form of subcontinental and quasi-hemispheric-scale histories of temperature variability spanning some six centuries.

These “hemispheric” summer series can be compared with other reconstructions of temperature changes for the Northern Hemisphere over the last millennium. The tree-ring-based temperature reconstructions show the clear cooling effect of large explosive volcanic eruptions. They also exhibit greater century-timescale variability than is apparent in the other hemispheric series and suggest that the late 15th and the 16th centuries were cooler than indicated by some other data.

However, in many tree-ring chronologies, we do not observe the expected rate of ring density increases that would be compatible with observed late 20th century warming. This changing climate sensitivity may be the result of other environmental factors that have, since the 1950s, increasingly acted to reduce tree-ring density below the level expected on the basis of summer temperature changes. This prevents us from claiming unprecedented hemispheric warming during recent decades on the basis of these tree-ring density data alone. Here we show very preliminary results of an investigation of the links between recent changes in MXD and ozone (the latter assumed to be associated with the incidence of UV radiation at the ground).

…

Figures (tree ring proxies) show no warming from 1950 to 2000.
If you want I can send you the pdf by email.

The statement "However, in many tree-ring chronologies, we do not observe the expected rate of ring density increases that would be compatible with observed late 20th century warming." would seem to be a reasonable conclusion of the gathered data.

However, "This changing climate sensitivity may be the result of other environmental factors that have, since the 1950s, increasingly acted to reduce tree-ring density below the level expected on the basis of summer temperature changes." is NOT a reasonable conclusion based on the evidence. Unless independent data are available on these "external factors" before and after 1950, the conclusion is similar to removing the legs from a flea and concluding it is deaf. Why not the equally valid conclusion that the instrumental record, post 1950, is corrupted and shows a rapid increase that isn’t really there? Why not gather data on the instruments, such as calibration history, local developments, number of significant digits recorded…?

Steve: The other possibility (which I think is equally likely) is that the proxies are not linear. Plant growth in relation to temperature has an upside-down U shape: there is an optimum temperature. That tree ring growth is supposed to have a linear relationship to temperature is not proven, simply arm-waved. See the Twisted Tree post on this. Briffa’s hypothesis of a deus ex machina unproven anthropogenic influence is completely unsupported and ridiculous on its face. If you can’t prove the proxy in the modern period, how can you rely on it in the medieval period? In fact, I had read this article of Briffa’s (and it was on my mind) just before I asked Mann for his data in 2003.
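A toy simulation can illustrate why an inverted-U growth response defeats linear calibration. Everything here is hypothetical – the quadratic response, the 12 C optimum and the noise levels are invented for illustration, not taken from any actual chronology:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inverted-U growth response, with maximum growth at 12 C.
def ring_width(temp):
    return 1.0 - 0.05 * (temp - 12.0) ** 2

# Calibration period: temperatures mostly below the optimum, so the
# response looks roughly linear and positive over this window.
calib_temp = rng.uniform(8.0, 11.0, 50)
calib_ring = ring_width(calib_temp) + rng.normal(0, 0.01, 50)

# Fit the usual linear proxy model (temperature as a linear
# function of ring width) on the calibration window.
slope, intercept = np.polyfit(calib_ring, calib_temp, 1)

# A year warmer than the optimum: growth declines, so the linear
# model reconstructs a temperature that is far too cold.
warm_temp = 15.0
reconstructed = slope * ring_width(warm_temp) + intercept
print(f"actual {warm_temp:.1f} C, reconstructed {reconstructed:.1f} C")
```

Because the calibration window sits on the rising limb of the curve, the fitted model looks fine in-sample, yet a year warmer than the optimum is reconstructed as a cold year – exactly the kind of signal failure that updated proxies would expose.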

That temperature reconstructions from proxies are highly uncertain and should contain large error bars is a given. But wouldn’t your suggested updates for proxy reconstructions simply highlight the difficulties in using such proxies to indicate temperature? Also, isn’t the instrumental temperature record used for recent years because it is more reliable? It’s unclear why you would inherently distrust the instrumental record. Generally your idea of "climate auditing" is a good one in principle, but at times you seem smug and self-assured, and this post is a good example… I’d like to see a lot more skepticism directed towards your own ideas as well as to the other side; otherwise you lose some credibility, at least to critical minds.

Steve: I don’t think that there is any sort of sound statistical basis for the error bars in the multiproxy reconstructions. My guess is that the only result of updating the proxies would be to highlight the difficulties: what’s wrong with that? If the proxies can’t pick up 1998, why do we feel that they could pick up a warm 1012 or 1405 or some year like that?
This is a completely different point from arguing that the proxies are more accurate than instrumental temperature. I’m not arguing that at all. I’d just like to see how they turn out. I think that I apologized slightly for the tone of this post, saying that, as someone with mineral exploration experience, I couldn’t help it. I discussed this issue at the Toronto Geological Discussion Group, which was attended by many exploration geologists, and they had total scorn for Mann’s position on “heavy equipment” – which it deserves.

I’m a retired seaman [engine room]. From the mid 1960s until the end of the millennium, part of my job was to record the sea water temperature in the log book. We did this every hour. The ship’s position was logged every watch [4 hours]. There might be some eye-opening things to be realized if one peruses these old [and newer] records. I seem to remember that the hottest tropical water [outside of the Red Sea] occasionally reached 87 degrees F. Usually it ranged between 85 and 86. That was in the 60s and 70s. In the 80s the sea temp rose to 90 degrees F [in most tropical waters that I traversed – generally the Pacific]. Jes my 2 bits.

Proxies cannot be more accurate than the instrumental measurements, since they are derived from them. It is one thing to calibrate proxies in controlled experiments to obtain mappings; it is another to then use real data for extrapolation, whether back or forward in time.

But as you write, this is not the issue.

So why don’t the geologists you addressed say something!

Steve: they wanted to know what they could do. I’m not sure which they found funnier: Mann’s heavy equipment excuse, Mann’s excuse for not providing source code, Jacoby’s “a few good series” or Jones’ excuse as to why he won’t archive station data.

“This question about proxies after 1980 was posed by a civilian to Mann”

I think the "civilian" might be me. If it is, it’s encouraging, because I’m not always sure if I’m writing rubbish or not. In fact, I’m encouraged enough to ask readers of this site for comments on some thoughts I have regarding the ‘hockey-stick’. I’ve posted a related query to the realclimate site on more than one occasion – but it has not been published. I’m not saying there’s anything sinister in this – it’s almost certainly just coincidence – but I am still curious about their comments on what I see as an inconsistency in some of their graphical data.

I’ve never been convinced by the MBH h-s reconstruction. Unfortunately, unlike others, I’m not sufficiently able to investigate the mathematical reasons why the reconstruction might be inaccurate, but I’ve noticed a couple of small things which don’t quite add up.

which includes 3 graphs. The first of these shows a comparison between temperature variation due to natural forcings ONLY (modelled) and actual observations since 1850. Models and observations are clearly in broad agreement up until around the 1970s. In other words – according to the models – early 20th century warming is entirely natural.

Note the large upturn JUST AFTER 1900. Why? This is completely out of character with the previous 900 years and yet all fluctuations – at least up until the mid 20th century – are supposedly due to natural variability. Not proof – I know but doesn’t it suggest just a hint of doubt?

Of course, it’s always possible that the h-s is right and it’s the models that are wrong! But, in this case at least, the models would appear to be right. There are other sources which support the view that enhanced greenhouse warming was negligible before the 1970s.

Steve: Nice of you to drop by. Your question to Mann was entirely appropriate. If you like, I’ll set up a post to host rejected realclimate questions. On the models, take a look at my post on Hegerl. The variance explained in the graph doesn’t look nearly as good as what is claimed. See if that helps and maybe check back in.

The three graphs you link from the realclimate blog may also be found two pages after the IPCC link you provide. In the Summary for Policy Makers they are part of, “Figure 4: Simulating the Earth’s temperature variations, and comparing the results to measured changes, can provide insight into the underlying causes of the major changes.”

If you look closely at these graphs they reveal other irregularities spun out of the computer models. Examples,

Between ~1860 and ~1890, what was the negative anthropogenic forcing that reduced the effect of the large positive natural forcing and caused the “model” results to line up so well with the “observations” in the “(c) All forcings” graph, and why don’t these large negative forcings show up in the “(b) Anthropogenic” graph?

What are the large positive anthropogenic forcings in ~1895, ~1920 and ~1945?

What are the large negative anthropogenic forcings in ~1915, ~1935, ~1955, ~1960, and ~1970?

Of course, the source code that was used to derive these results is the personal and private property of the researchers who created it, and therefore is not open to scrutiny.

I must say that I side with Mann about it being too costly to bring the proxies up to date. You cannot compare the cost to that of implementing Kyoto; it is not as if the money would otherwise have gone into research budgets. And can we afford to wait for the proxies to be brought up to date? The anthropogenic global warming argument does not hinge on the paleo reconstructions but on model predictions. I understand that you are not convinced that anthropogenic global warming is significant, but the majority is. I assume that the people behind Kyoto have evaluated the risks involved to the best of their ability and decided that now is the time to act.

The preindustrial level of CO_2 seems to be nearly without variation (according to values used by Mann). I have seen theories that the unvarying CO_2 levels measured from ice cores are actually caused by high pressure under about 180 meters of the ice surface. Is anybody familiar with this problem? Another question: has anybody measured tree rings from parks in a big city? The recent "urban" warming could possibly be traced there, or it could be verified that tree rings do not grow beyond a certain upper level.

John replies: I have a few new pieces of information on carbon dioxide and ice core reconstructions which I’ll try to publish soon. Suffice it to say, that carbon dioxide levels, as actually measured in the ice cores, bear only a faint resemblance to the subsequent graphs of carbon dioxide vs time produced.

While I agree that updating all tree ring proxy data bases would be an unnecessary cost, I think it would be prudent to update a few of the more controversial data sets like the Quinte set. It would also be prudent to complete some of the experiments suggested in these pages, if we truly wanted to test to see if Mann et al’s hypothesis was robust.

I also agree that they do not measure current climate or test climate models using proxy data. They use weather data collected into databases like those at the CRU and the GISS.

If you were to go to the GISS Station Data Page and select any location in the world you would find that very few of the thousands of weather station data sets actually extend into the 21st century. Those that do are usually located in large, growing urban centres.

In Canada most of the data sets end in 1989/1990.

I would like to see some of the monies earmarked for global climate change put back into collecting climate data, so that the models could actually be checked against more globally distributed data.

Re: Comment 21.
I find your reasoning confusing and/or extremely naive. I will stop short of accusing you of shilling for Mann and the IPCC.

First, saying that it is too costly to update the proxies is pure FUD. Are you saying that we spent hundreds of millions to gather the initial data? Of course not. Do you really want to spend hundreds of billions without spending 1 or 2 million on due diligence checking? If you tried to do that in any other area of corporate or personal finance, it would get you declared incompetent to handle your own money.

Of course we can wait for the proxies to be brought up to date. If Kyoto is really that urgent, then the proxy gathering and analysis could be completed in under a year by a lot of people working in parallel. Of course, that would mean that lots of people would know the results up front, so embarrassing facts would be impossible to suppress.

“The anthropogenic global warming argument does not hinge on the paleo reconstructions but on model predictions.” That is the most suspicious of your statements. Can you point me to a model that has been properly verified? By verified, I mean that for a 50 year prediction, you move the start and end dates back in time at 10 or 20 year intervals until you run out of good data to compare against. When you can show that the 50 year prediction is accurate for any start date in the last 100 to 150 years then you can claim that the predictions of the model may be reasonably accurate for the next 50. If the model cannot pass that test then it would be criminal to spend billions in reaction to it.
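The rolling hindcast test proposed above can be sketched in a few lines. The “model” and the series below are synthetic stand-ins (a linear-trend extrapolator and trend-plus-noise data), intended only to show the mechanics of moving the start date back and scoring each 50-year prediction:

```python
import numpy as np

def hindcast_skill(series, model, window=50, step=10):
    """Rolling-origin validation: for each start date, predict the mean
    of the next `window` years from the preceding data only, then
    compare with what actually happened."""
    errors = []
    for start in range(window, len(series) - window, step):
        history = series[:start]
        prediction = model(history, window)
        actual = series[start:start + window].mean()
        errors.append(abs(prediction - actual))
    return float(np.mean(errors))

# Stand-in "model": extrapolate the linear trend of the history.
def trend_model(history, window):
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    future = np.arange(len(history), len(history) + window)
    return float(np.mean(slope * future + intercept))

# Synthetic 150-year temperature-like series (trend plus noise).
rng = np.random.default_rng(1)
series = 0.005 * np.arange(150) + rng.normal(0, 0.1, 150)
print(f"mean 50-year hindcast error: {hindcast_skill(series, trend_model):.3f} C")
```

A real climate model would be far more expensive to re-run per start date, but the scoring logic – only data before the start date may inform the prediction – is the whole of the test being demanded.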

And finally, to paraphrase a famous saying, “In a hotly contested scientific debate, the majority has always been wrong!” If you need examples of the majority being wrong, look up “the earth is flat”, “the earth is the center of the universe”, “if your train goes faster than 60mph it will burn up”, “the sound barrier cannot be broken by a flying machine”. The list goes on and on.

Briffa states “in many tree-ring chronologies, we do not observe the expected rate of ring density increases that would be compatible with observed late 20th century warming.” He wants to attribute the discrepancy to a change in tree-ring sensitivity to temperature. An alternative explanation is that the surface temperatures are not being measured correctly. Fortunately, the surface network can be checked using pressure transducers on balloons, measuring the thickness of the atmosphere, which is proportional to air temperature. Chase et al. (2000) performed this analysis and found that the temperature rise in the 1000 to 925 mb surface layer is much less than that reported by the surface thermometers. The trends are close to those derived from the MSU satellite, which measures temperatures higher in the atmosphere.

From this analysis, it would seem the tree ring data are still responding to temperature as before. The odd man out in the analysis is the surface temperature trend which is much higher than the trends derived by other techniques.

“The NCEP/NCAR Reanalysis lower tropospheric layer-averaged temperature trend (1000-500 mb) has an average temperature increase between 1979 and 2001 of +0.05 C/decade (and +0.08 C/decade between 1979-2002), although with considerable interannual variability (and which is not statistically different from a zero trend). The Hadley Center, using radiosonde data, has computed a lower tropospheric trend (corresponding to the layer viewed by the UAH lower tropospheric data) for the period 1979-2002 of +0.05 C/decade (their data is described in Parker et al.6 and Folland et al.7), while the UAH lower tropospheric trend for 1979-2002 is +0.07 C/decade (J. Christy 2004, personal communication).”

These trends (0.05 to 0.08 C/decade) are considerably less than the roughly 0.2 C/decade trend reported by the surface network. It suggests the surface network should be viewed with suspicion.
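For concreteness, a decadal trend like those quoted is just an ordinary least-squares slope of temperature anomaly against time, scaled to degrees per decade. A minimal sketch with synthetic series (the slopes are chosen to mimic the quoted figures; this is not real data):

```python
# OLS trend in C/decade for a set of annual anomalies.

def trend_per_decade(years, anomalies):
    """Ordinary least-squares slope of anomaly vs. year, in C per decade."""
    n = len(years)
    my = sum(years) / n
    ma = sum(anomalies) / n
    cov = sum((y - my) * (a - ma) for y, a in zip(years, anomalies))
    var = sum((y - my) ** 2 for y in years)
    return 10.0 * cov / var  # per-year slope -> per-decade

years = list(range(1979, 2003))
surface = [0.02 * (y - 1979) for y in years]        # ~0.2 C/decade
troposphere = [0.006 * (y - 1979) for y in years]   # ~0.06 C/decade
print(round(trend_per_decade(years, surface), 2))       # 0.2
print(round(trend_per_decade(years, troposphere), 2))   # 0.06
```

With real anomalies, the interannual variability mentioned in the quote shows up as a wide confidence interval on the slope, which is why a 0.05 C/decade trend can be statistically indistinguishable from zero.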

I don’t know much about climate science, but I do know something about ozone and plants. Ozone has a rather unpleasant effect, from the plant’s point of view, and even at relatively low concentrations can cause reductions in growth. Most of the research has been done on crop plants, but the effect is the same with trees. As low-level ozone is highest in hot clear weather, and the effect is made worse by drought, it is a quite reasonable suggestion that low-level ozone has interfered with climate proxies in the last 50 years.

Further to Douglas Hoyt’s point about possible errors in the surface temperature record. I wonder if Gavin Schmidt (on realclimate) inadvertently provided an example of the urban heat effect. In order to emphasise a point, Gavin cited the temperature record at Santa Barbara when he actually meant Santa Cruz in South America. Because I lead a fairly sad, lonely life and have little better to do, I thought I’d check out Santa Barbara. I actually found 2 Santa Barbara stations on the NASA site, i.e. Santa Barbara and SB Airport.

This is not intended to be taken too seriously but I thought it was quite interesting all the same

This first link is a graph of the Santa Barbara temperature data (1880-2003)

Unfortunately the data is incomplete, but it’s hard to see the trend that exists in the first graph. The more surprising thing was the difference in actual values. This is a comparison of annual mean temperatures for the 3 most recent available years

Year   Santa Barbara   SB Airport

2001       16.10          14.47
2002       15.78          13.83
2003       16.73          14.82

i.e. Almost 2 deg difference. Now I’m sure such a discrepancy is possible due to all sorts of geographical factors, but this discrepancy was far less pronounced 50 years earlier as shown below

Year   Santa Barbara   SB Airport

1951       15.19          14.68
1952       14.68          14.03
1953       15.33          14.78

Here there’s only about 0.5 degree difference. We can’t compare earlier periods because there’s no earlier data for the airport, but the mean values for Santa Barbara in the period 1901-03 (100 years ago) are 14.79, 14.58 and 14.72 respectively, i.e. very close to today’s airport values.

Apart from UHIE, I can’t think of a good reason why this should happen — can anyone else? If that is the reason, it appears to be responsible for more than the few tenths of a degree that CRU in the UK allows for in their global temperature record.

In most fields of science, there are different ways to measure things and the different measurement methods can be ranked in order of best to worst, generally based upon which method has the least systematic and random errors. The same thing can be done for temperature measurements and here is how I would rank them:

1. Balloon pressure transducers measuring the thickness of layers which is then converted to a temperature. Basically it involves distance and mass measurements, so it is really hard for me to think of something that could go wrong with these measurements.

2. Balloon thermistors measuring temperature in situ. Sure thermistors can make erroneous measurements if not properly calibrated, but it seems a fair amount of effort goes into doing so. It seems to be a fairly professional operation.

3. MSU measurements of mid-tropospheric temperature, as interpreted by Christy and Spencer. This method gets rated fairly highly primarily because it agrees with the first two methods. It also relies on a single sensor to measure the entire Earth and so is good for studying year to year global changes and regional changes.

4. Surface observations using mercury thermometers in screens. John Finn points out problems with Santa Barbara, and if you look at any station, there are a host of problems (roughly 40 to 50 different types) involving instrument changes, observation time changes, micro-climate changes, and so forth. Each station is a problem in itself, and combining them to get regional trends, let alone global trends, presents additional problems. In fact, quality control on 1000 to 2000 different sensors is difficult, to say the least.

5. Surface observations using automated thermistors. These are replacing the older measurement techniques. They have some advantages and disadvantages. Since it is still new, it ranks a little lower than the older measurement methods.

6. Surface observations using proxies. There are a number of proxies, but I would rank them roughly as follows:
6a. Boreholes (a physical measurement capturing the entire year).
6b. Ice boreholes (again a physical measurement, but with limited geographical distribution).
6c. Tree rings (responds to summer temperature, precipitation, ozone, and micro-climates, so it is often hard to get the temperature record out).
6d. Oceanic proxies (respond to water temperature rather than air temperature).
6e. Glaciers (they respond to temperature, precipitation, wind, cloud cover, and so forth, so it is not always clear what attribution should be made to their changes).

I know I am forgetting a lot of proxies and maybe even some other methods such as soil temperature, so a more thorough study could be done. The problem, as I see it, is that people make no effort to distinguish between the quality of the measurements, instead treating them all as equal, or else elevating surface observations to the top because they like the results.
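As an aside on method 1 in the ranking above, the thickness-to-temperature conversion follows the standard hypsometric relation: the thickness of a layer between two pressure levels is proportional to the layer-mean (virtual) temperature, dz = (R·T/g)·ln(p_lower/p_upper), so inverting it recovers temperature from a measured thickness. A minimal sketch (the constants are standard; the profile values are illustrative):

```python
import math

R_DRY = 287.05   # J/(kg K), gas constant for dry air
G = 9.80665      # m/s^2, standard gravity

def thickness(t_mean, p_lower, p_upper):
    """Hypsometric thickness (m) of a layer at layer-mean temperature t_mean (K)."""
    return (R_DRY * t_mean / G) * math.log(p_lower / p_upper)

def layer_mean_temp(dz, p_lower, p_upper):
    """Invert the hypsometric equation: layer-mean temperature (K) from thickness."""
    return G * dz / (R_DRY * math.log(p_lower / p_upper))

# Round trip for the 1000-925 mb surface layer at 280 K:
dz = thickness(280.0, 1000.0, 925.0)
print(round(layer_mean_temp(dz, 1000.0, 925.0), 1))   # 280.0
```

This is why the method reduces to "distance and mass measurements" as described: only the geopotential heights of the pressure surfaces need to be measured.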

I am fairly new to this, so apologies if I am being stupid, but surely balloon measurements are highly dubious because they never measure the same place twice, and in the case of pressure transducers, is the atmosphere uniform enough to get sensible readings? I had always thought the turbulence on planes is caused by small-scale changes in pressure. I thought the satellite measurements were far from clear as well, because no one can agree how to calibrate them or the influence that the stratospheric temperature has on the readings.

The balloons are launched from the same place every day, so near the lower part of their flight they will be close to their point of origin. They will be getting a measure of a region rather than a point, and this should average out in deriving long-term trends. As for pressure transducers, the balloons are riding with the wind, not cutting through it like airplanes, so they will ride with the turbulence.

The fact is that the pressure transducers, thermistors and MSU satellite measurements as interpreted by Christy and Spencer all give nearly identical results. Since they are independent and quite different measurements, shouldn’t this increase the confidence in them? It seems the odds of three methods all being identically wrong are low.

Do you know of any studies which compare, say, MSU satellite measurement trends with surface temperature trends? I’ve read a couple of articles which suggest that there is reasonable agreement over some land regions (where stations are well maintained, perhaps) but large disagreement over oceans. I don’t know, though, if this is speculation or actual fact.

This is a rather selective list of the opinions regarding the correlation between satellite and surface temperatures. As far as I can see there are lots of studies which compare surface and satellite records; none of them are in agreement. But a couple which provide an alternative viewpoint are

Do any of the papers actually provide a comparison between satellite and surface records? As far as I am aware, all the studies attempt to scrutinise and discredit the satellite records. This has been going on for some time. Roy Spencer (UAH) has already responded to the Fu study. I’m not sure about the others, but that’s not really the point.

Douglas Hoyt has provided a list of different measurement methods. The satellite record is in agreement with other independent weather balloon records. In other words, it has been validated. The surface temperature record, on the other hand, has not been validated. The original article which started this thread, i.e. “Bring the proxies up to date” was prompted by a known discrepancy between the surface temperature record and available tree ring data in the last few decades of the 20th century. Basically, proxy data cannot reproduce the recent “unprecedented” warming which is measured by the satellite record.

I gave one, slightly trivial, example (Santa Barbara) of the type of problems which occur with the thermometer readings. There are other much more serious potential problems, e.g. large areas of the world where coverage is poor or non-existent.

I have the impression that the satellite data, as interpreted by Spencer and Christy (S&C), must be wrong because they don’t fit the surface temperature trends (nor the climate models). Of course every method needed to interpret data has its own problems. But the other methods used for satellite data either use theoretical corrections for the temperature of one of the references, instead of the real measured one (Vinnikov & Grody), or use an overlap of two measurements which is so large that the “corrected” temperature is questionable (Fu et al.). And the third also used some theoretical adjustment over the different satellites (RSS). While the last may have some merit (there was too small an overlap in some subsequent satellites for an efficient calibration), the difference in temperature is larger when compared to radiosonde data. See: http://www.ncdc.noaa.gov/oa/NonlinearTalk.ppt slides 10-12. For Christy’s comment on Vinnikov and Grody, see the same slide show, slide 13. See also http://www.ncdc.noaa.gov/oa/ncdc_vtt_pwt.ppt where the UK Met Office reanalysed the radiosonde data and compared that to their own climate model (see also slides 15 and 25 for model problems!). The satellite data interpretation by Fu et al. was commented on by Tett & Thorne (Hadley Centre):
“Fu et al. linearly combine time series from two MSU channels to estimate vertically integrated 850–300-hPa temperatures and claim consistency between surface and free-troposphere warming for one MSU record. We believe that their approach overfits the data, produces trends that overestimate warming and gives overly optimistic uncertainty estimates.”
But in a newer (corrected) attempt by Fu et al., the difference with S&C becomes much smaller. See: http://www.techcentralstation.com/120304F.html

But what about the surface record? Nobody in the (official) climate world seems to question the surface record. But simply look at the surface data of GISS and try to find something reliable in the tropics:
Look e.g. at the data for Salvador, a town of 1.5 million inhabitants. That should be compared with rural stations to correct for the urban heat island effect. But the nearest rural stations are 458-542 km away from Salvador (Caetite, Caravela, Remanso). And their data are so spurious that it is impossible to deduce any trend from them. Quixeramobin is the nearest rural station with more or less reliable data over a longer time span, and it shows very different trends from Salvador. Or look at Kinshasa (what a mess!), 1.3 million inhabitants, Brazzaville (across the Congo River), and something rural in the neighborhood (Mouyondzi – 173 km, M’Pouya – 215 km, Djambala – 219 km,…). East Africa is no better: compare the “trends” of Nairobi with those of Narok, Makindu, Kisumu, Garissa,… Rural data trends with some reliability over a longer time span are very rare in the whole tropics. Only expanding towns have (sometimes) longer data sets, which are hardly correctable. The unreliability of the data in the tropics is so obvious that one can wonder how a “global” surface temperature trend can be calculated to any accuracy… But temperate or polar Russia is no better. All but one rural station ceased operation in 1980. What is left are large cities like Moscow and St. Petersburg…

I don’t have a problem with doubting the surface record. My point was that all records can be questioned and interpreted differently. The arguments surrounding global warming have become so polarised that, in my opinion, there is no longer a genuine attempt to get to the truth through original research, but simply a process of point scoring by either side. So a piece of research is produced saying X; rather than trying to disprove X by doing more research, there is simply an attempt to discredit X directly, by questioning methods, data selection, or even the source of funding of the researcher involved. All this does is further polarise the situation and gets us nowhere.

I have just had a look at the Tett and Thorne paper, and they do indeed say that both Fu et al.’s and Vinnikov & Grody’s methods overestimate tropospheric warming. However, Tett and Thorne’s method comes out with a tropospheric warming close to the surface warming, 0.17 C/decade, which is much more than Christy and Spencer’s, etc. As I have said in an earlier post, I am new to this stuff, having changed jobs, allowing me more access to relevant journals. However, the more I read, the more I am convinced that you can show whatever you want from the temperature records by selectively quoting papers. It makes me wonder, if we can’t even agree what the global temperature is now, whether we will even be able to agree in 50 years if there has been warming or not (assuming no dramatic change in the climate system). Perhaps we should give up trying to predict climate and just cross our fingers?

I fully agree with your comment #39, that the whole climate debate is highly polarised, at the cost of better science. Part of the problem is that, more and more, the “models” are replacing real-life data. And the predictive value of such models is given as much weight as a modern version of the Delphi Oracle in ancient times, while the results are as ambiguous…

It was while searching for how the data in developing countries were corrected for the UHI (urban heat island) effect that I looked at the stations around the equator… And I needed the Russian data to look at the influence of sulphate aerosols, which must be highest near the Finnish-Russian border, according to the Hadcm3 model (Hadley Centre – UK). According to the model, the reduction of SO2 emissions in Europe in the past 10 years must give a 5-6 K increase in temperature at some places there. Which is not observed. What is visible is the influence of the NAO… See: aerosols: model and stations compared.

About the trends: the models (including the Hadcm3 model used by T&T) all calculate that tropospheric air temperatures should warm in lockstep with or faster than the surface air temperatures, because air with increased GHGs must warm first (by trapped IR) before more heat can be re-emitted/convected to the surface. But in the tropics, the trend is reversed: the surface warms while the troposphere cools (in both sonde and satellite data). That is a reversal of the pattern before 1975. Probably a question of faster convection. But the new Fu satellite data version looks far more like the S&C version (0.9 K/decade vs. 0.6 K/decade), both less than half the surface trends…

Thus indeed, this is a controversy which may last for decades. Maybe by then it will be clear what is going on with the climate…

While we do not have the proxy data up to the present, the distortion resulting from the combining of proxy data with actual temps after 1980 could be illustrated by doing variations of what Mann, et al. did. For example, the temp record shows a cooling from 1940 to 1975. If you combined the proxy data only up to 1940 and then grafted on the actual record from 1940 to 1975, the hockey stick method would likely show that the 1970s had the coldest temperatures in the last 1000 years!!
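The thought experiment above can be sketched directly: truncate a flat synthetic "proxy" reconstruction at 1940 and graft a synthetic 1940-1975 cooling onto it. All values here are made up purely to illustrate the splice distortion, not taken from any real reconstruction:

```python
# Splice a proxy reconstruction with an instrumental record at a cutover year.

def splice(proxy, instrumental, cutover):
    """Proxy values before `cutover`, instrumental values from it onward."""
    out = {y: v for y, v in proxy.items() if y < cutover}
    out.update({y: v for y, v in instrumental.items() if y >= cutover})
    return out

proxy = {y: 0.0 for y in range(1000, 1941)}                 # flat "shaft"
instrumental = {y: -0.01 * (y - 1940)                       # 1940-75 cooling
                for y in range(1940, 1976)}

spliced = splice(proxy, instrumental, 1940)
coldest = min(spliced, key=spliced.get)
print(coldest)   # 1975 -- the grafted cooling makes the 1970s the "coldest"
```

The same mechanics that make 1998 the "warmest year of the millennium" in a proxy-plus-instrumental graphic would, with this cutover, make 1975 the coldest year of the millennium.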

I’ve been thinking about what you say about tree-ring growth and, in particular, the relationship between tree growth and temperature. You’re right – why should a linear relationship be assumed. Presumably, growth is governed by factors other than temperature and, even if a linear relationship does exist, it’s likely that it only exists over a narrow temperature range.

I just wondered if the data is available to do a simple “pilot” study which could justify using tree-ring data as a temperature proxy. If sufficient data were available from the UK, it might be possible to calibrate and compare it with the Central England Temperature (CET) record. The CET is the oldest and longest thermometer record in existence, going back to 1659. Not only that, but the UK has experienced “global warming”. Well, we get more warm years than we used to — whether or not that’s the same thing I’m not sure.

But there is another interesting feature of the CET index. That is, in 1698 the mean annual temperature was 7.63 deg C, following which, apart from a few exceptions, the temperature increased year on year until 1733, when a mean of 10.47 deg C was recorded. This is a rise of almost 3 deg C in 35 years. Fortunately for the GW alarmists, this can be dismissed as a “local” event which, to be fair, it probably is, but would this not provide a validation test for the tree-ring data? If the tree rings can pick up the sharp rise in the early 18th century as well as the later 20th century warming, it ought to provide more confidence in their use in other reconstructions. On the other hand I could be talking rubbish, but I’d be interested in any comments. PS the following is a link to the CET data.
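A pilot calibration of the kind suggested could be as simple as correlating ring widths against the instrumental series over a calibration period, then checking that the correlation holds in a withheld verification period. A minimal sketch with made-up, perfectly linear data (a synthetic stand-in for CET; real tree-ring series would of course be far noisier, and the verification correlation is the number to watch):

```python
# Split-sample calibration/verification of a candidate proxy.

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

temps = [9.0 + 0.1 * i for i in range(40)]      # synthetic "CET" series
rings = [1.0 + 0.05 * t for t in temps]         # perfectly linear toy proxy

cal_r = pearson_r(temps[:20], rings[:20])       # calibration period
ver_r = pearson_r(temps[20:], rings[20:])       # withheld verification period
print(round(cal_r, 2), round(ver_r, 2))         # 1.0 1.0
```

Applied to the early 18th century rise described above, a proxy that calibrates well on late-period data but fails verification on the 1698-1733 ramp would be exactly the kind of negative result the comment is asking about.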

I spent some time back in the late 1980’s studying oceanic climate data for a totally different reason — trying to determine lower tropospheric enhanced radar propagation.

As a reasonably experienced experimental physicist, I was “appalled” by the quality of both the ocean surface water temperature and near-surface air temperature data.

Most of my study — performed for a customer with real life-and-death problems {e.g. tracking sea-skimming cruise missiles like the Exocet} — was devoted to the analysis of sources of error in the climate data that I was using to “model” enhanced radar propagation {e.g. “seeing” tankers over-the-horizon in the Persian Gulf}.

I suspect that if one carefully studied the land surface record, it would be consigned to what it was designed to do — tell local farmers when to plant to avoid the last frost and when to harvest, with a reliable lifetime of a decade or two. For instance, the use of urban data is too problematical to correct by simple algorithms — Boston’s temperature has been measured at Logan Airport since 1924. The good news is that there have been only three main sites for taking the data. The bad news is that in 1924 East Boston Airfield was a gravel strip in a small grass field surrounded by Boston Harbor. Since then 2 square miles of water have been filled in and mostly covered with asphalt, concrete or the myriad structures of a modern international airport. How do we compare the temperature record for Boston in 1924 {gravel surrounded by water} with 1977 {next to the control tower, surrounded by asphalt and concrete} and with 2003 {back next to the much narrower harbor, with asphalt and concrete only on one side}?

Ultimately, if we are really interested in comparing surface temperatures with balloons and satellites — we need an extensive and expensive carefully designed and maintained integrated network of automated land and sea surface stations.

Finally — it’s not “rocket science” or nuclear physics. Outside of a return of pole-to-pole BIG ICE, the human race can adapt. Climate is all about perception and expectation — what’s cold in January in Boston is considered pleasant in International Falls, Minnesota; what’s hot in July in Boston is just routine in Dallas, etc.

Ted, have you ever looked at Folland’s procedure for adjusting SST measurements? (Folland is a huge IPCC bigshot.) A lot has been written on the UHI effect, but little on ocean adjustments. The issue is that ocean temperatures appear to have originally been measured in wooden buckets, then in canvas buckets and then in engine inlets. You get different temperatures from all three methods – engine inlets run warmer than canvas buckets. Folland assumes that all measurements were instantaneously switched from canvas buckets to engine inlets in December 1941 for war reasons, allocating the entire effect (about 0.4 deg C as I recall) to this month. Thus, if some portion of the conversion took place after this, the adjustment effect would appear as an increase in ocean temperatures. Prior to 1941, he assumes that wooden buckets measure warmer than canvas buckets and postulates a gradual change from wooden to canvas buckets. The size of the adjustments is obviously as big as the effect being measured.
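The adjustment scheme as described amounts to a single step correction at a fixed cutover date. A minimal sketch taking the comment's figures at face value (the 0.4 deg C step and December 1941 date come from the comment above; the readings are purely illustrative):

```python
# Step adjustment: add a fixed offset to every SST reading before the cutover.

def adjust_sst(readings, step=0.4, cutover=(1941, 12)):
    """Add `step` to each (year, month, value) reading dated before `cutover`."""
    return [(y, m, v + step if (y, m) < cutover else v) for y, m, v in readings]

raw = [(1941, 11, 18.0), (1941, 12, 18.0), (1942, 1, 18.0)]
adjusted = adjust_sst(raw)
print(adjusted)   # only the November 1941 reading is bumped to 18.4
```

The point of the sketch is how blunt the instrument is: a fleet that actually converted gradually over several years would have its residual bucket-vs-inlet difference show up as spurious warming after the step.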

I just have one question – why? Why do the proxies have to be up to date? I was under the impression that the proxies were just that – proxies for a time when hard measured data is not available. With the amount of climate measuring that goes on today, I would have thought that we do not need the proxies anymore?

I for one have many concerns about whether the “proxies” are any good for estimating temperature. If they are good and the 1990s are off the charts warm, then the proxies should be off the charts as well. All the information that I’ve seen on “proxies” suggests that the proxies are not at unusual levels in the 1990s. If so, if (say) the 1180s were a candidate warm period, perhaps the “proxies” then did no better than the 1990s proxies, and thus there is no basis for grandiose claims about one period versus the other.

As an observer from outside all of these fields, I find it stunning that there does not seem to be (correct me if it exists) a comprehensive effort by dendros to (1) bring many classic proxies up to date before AR5, and (2) rigorously analyze whether various proxies do or do not closely track relevant instrumental and satellite records of temperatures.

Surely by 2010-2012 a lot of proxy records could have been analyzed for the past 30+ years. Is this happening?

There can no longer be any excuse for reliance upon vague climatologist hand-waving about any “divergence” problems.

It seems like a pretty obvious test of the joint hypotheses that the hockey stick is correct and the surface temperature record is accurate. Since the hockey stick says that recent temperatures are far above any seen for 1,000 years, the proxies should be completely off the charts. You wouldn’t even need statistics to see it, just graph it. We should be seeing every year proxy observations NEVER seen before.

Terry,
If you will spend some time reading through this site you will find:
– This site strongly advocates bringing the proxies up to date.
– Trees are not a good proxy for temperature. They have an optimum temperature range for best growth and warmer or cooler decreases growth with no indication which side of the curve you are on.
– Trees in the 20th century are no longer showing a temperature sensitivity as the instrumental temperatures indicate we are warmer. It is suggested there is an anthropogenic cause, that we are doing something to change their sensitivity. The obvious is overlooked: above their optimum temperature, trees don’t respond to increased temperature. The corollary is that in the past, the trees simply did not respond to the Medieval Warm Period, instead of the MWP not existing.
– Trees are a lousy indicator of temperature as they respond strongly to other variables such as available water, sunlight and CO2 fertilization, and there are no independent proxies or records of these other elements local to the sample sites. Therefore, there is no way to isolate the temperature portion.
– Trees have been used as temperature proxies and moisture proxies in various studies. The samples are supposedly picked from locations that strongly favour one response or another. Fair enough except that at least 2 of the major “accepted” studies used, for the most part, the exact same samples to indicate moisture in one study and temperature in the other. One, or both, of them is seriously wrong in their choice of samples. Discarding the overlapping samples probably leaves both studies too gutted to publish.
– Greenhouse gas models for trapped radiation show a surplus of about 3 W/m2. Too bad the fudge factor for unexplained effects is about 30 W/m2. (m2 = square meter)
To anyone reading this, I did this summary off the top of my head. If I have made any substantial errors in the summary, please feel free to point them out along with a link to the correct information.
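The inverted-U growth response in the summary above can be sketched with a simple quadratic curve: two temperatures on opposite sides of the optimum yield identical ring widths, so width alone cannot tell you which side of the curve you are on. The curve shape and all parameters here are hypothetical, chosen only to illustrate the non-identifiability:

```python
# Hypothetical inverted-U growth response: maximal at `optimum`,
# falling off quadratically on either side.

def ring_width(temp, optimum=15.0, peak=2.0, curvature=0.05):
    return peak - curvature * (temp - optimum) ** 2

cool, warm = 11.0, 19.0   # 4 degrees below and above the optimum
print(ring_width(cool))   # 1.2
print(ring_width(warm))   # 1.2 -- identical width, very different climate
```

This is also why a linear calibration fitted on one side of the optimum would truncate reconstructed temperatures at the top end: widths near the peak map to a whole range of possible temperatures.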

Particularly interesting is the idea that higher or lower temperatures both reduce growth. Does this apply to ring density too? I think I have heard Mann say that density is the best variable.

Also, does this imply that the output from the proxy models will be truncated on the top end, i.e., that they will not be able to detect temperatures above a certain level? If so, it would be devastating to the hockey stick since it would mean that it is incapable of detecting higher temperatures in the past — exactly what it is being used for. Also, it seems like this might explain why the recent proxy record does not detect the recent warming.

It had always struck me as a little weird to expect tree rings to be a good indicator of past temperature. It seems like there should be some proof of this claim before they are used to reconstruct temperatures 1,000 years ago.

This ties in with my chief suspicion about the hockey stick, that it is a NEGATIVE result — it fails to find temperatures in the past 1,000 years comparable to surface temperatures measured by thermometers. To make this case, you have to show that your test has the power to detect what you are looking for. Is there any evidence that the proxies can detect significant temperature changes?

What about other proxies such as cores? Someone over at RealClimate said they were able to pick up the recent surge in temperature.

Let me lift a quote from the above, on which I would like to comment. “Think about the costs of Kyoto and then think again about this answer”.
Before getting into my comment, let me say that I applaud mightily the work that M&M have done. I have disagreed with the TAR from the moment it was published, for quite a few reasons, including the famous MBH curve now called the hockey stick.
That said, I am continually amazed (even offended) at the way critics mention the high cost of Kyoto. They really should apply the same demands of rigor to economic analysis that they expect of climate analysis. I can’t speak for work done in Canada, but in the USA the high cost of Kyoto is as much an unsupportable fabrication as is the hockey stick. The primary report referenced, in the rare case when there is any reference, is “Global Warming: The High Cost of the Kyoto Protocol: National and State Impacts” (1998), published by WEFA Inc. and paid for by the API (the American Petroleum Institute). Shortly after this report was published, one of the primary authors testified before a Congressional committee, and the mythology was launched. I had the opportunity to exchange some correspondence with that author, until she stopped responding, during which she confirmed that a major underlying (but unstated) assumption of the analysis was the standard (but usually unstated) economists’ assumption that any improvements (e.g. efficiency) that are possible have already been implemented. The study therefore implicitly models destruction of economic activity through extremely high prices ($265/ton C) in order to reduce energy use. The possibility of reducing energy use through efficiency and non-punitive conservation is never considered. In fact, the USA is so energy profligate that, given a serious start in 1998, the Kyoto targets could have easily been met, with only economic benefits, particularly increased jobs that could not be outsourced and a much reduced payments imbalance due to reduced fuel imports. While I don’t see any need for Kyoto driven by AGW, I would still like to have seen it ratified due to the economic and energetic benefits it would entail. I doubt that things are much different in Canada.
Why do you reject spurious climate modelling while unquestioningly accepting equally spurious economic modelling? Murray Duffin

Murray, in terms of cost here, I’m only talking relative to the cost of tree-ring samples, which is surely unarguable and does not depend on the acceptance or non-acceptance of any economic model. We’ve already spent hundreds of millions in Canada on Kyoto without accomplishing anything.

I’m against spurious modeling of all kinds; I have never endorsed any economic models, spurious or otherwise. There’s only so much that I can do; it’s a big job doing what I’m doing with multiproxy studies already.
Steve

Steve, I’m not suggesting you do anything, except to stop propagating the myth that serious implementation of programs to achieve Kyoto targets would be economically costly. M&M have included such suggestions, perhaps unconsciously, in at least 3 postings I have read, so it seems they have accepted the mythology without question. Murray

Murray
The hypothesis that markets are sufficiently efficient that reducing carbon emissions will impose new costs seems very reasonable to me.
Your assertion that “Kyoto targets could have easily been met, with only economic benefits” sounds highly implausible. After all, why hasn’t the market already captured those benefits ?
Can you provide any background to your assertion ?

Your question and my response aren’t really germane to Steve’s work, but I’ll provide a short reply.
There are a lot of reasons why the market hasn’t captured benefits, ranging from ignorance of what is possible to simple lack of motivation. There are very few people who have any real expertise in energy efficiency. For example, as of 1998 (I don’t have more recent data), for 70% of American manufacturers energy cost was less than 2% of sales, and for another 20% it was less than 3%. It is simply not a subject that got attention in the councils of management. Utility programs to promote efficiency as cheaper than building new power plants got almost completely wiped out by deregulation. Auto manufacturers decided that they could make more money exploiting the light truck loophole in CAFE standards, and as a result spent billions on market research and carefully designed advertising and promotion to create demand for SUVs. (The original Cadillac Escalade was an $18,000 Jimmy dressed up with some nice body panelling and interior trim that could be sold for $56,000.) A similar effort to promote elegant and efficient cars would probably have been as successful in unit sales and job retention. Insulation standards for residences are inadequate, and incentives to improve home insulation, install double glazing etc. would create many jobs. The list goes on and on. Have you made any effort to make your life more energy efficient? No. Why? That answer is also part of the reason the market hasn’t captured those benefits. Let me refer you to RMI at http://www.rmi.org, and to the book “Factor Four” by Lovins, Lovins and Weizsäcker. You can find some more brief material from me at http://www.energypulse.net. Murray

This post is getting a lot of hits today out of the blue. Anyone feel like checking in?

BTW if I were re-writing this, I wouldn’t take Mann’s statement that there was a need to rely on old proxies as much at face value as I did here. There are many proxies taken in the 1990s and 2000s, but few of them are incorporated into multiproxy studies (including Moberg). There are separate issues relating both to archiving/original publication and to multiproxy selection.

For example, Hughes updated the Sheep Mountain site in 2002, but has not reported it. Does anyone seriously think that if Sheep Mountain ring widths were off the charts in the 1990s, Hughes would not have rushed to print with this information? The silence on this topic is deafening. In mineral exploration, no news nearly always means bad holes, and the promoters are hoping for a few good holes before they are obliged to announce (although their flex period for delaying results is at most measured in weeks). Other examples have been discussed passim (see Jacoby).

Maybe we should shift from complaining that this work is not being done and shift a bit more towards trying to get it done. How many grants have you had rejected for this work? Seems that one could hire a postdoc and buy that boring tool and give him some travel money and send him off.

The point for most of us has always been MBH’s lack of volatility before the blade, and the subsequent trumpeting of “the warmest decade of the millennium!” If the proxies don’t reflect “the warmest decade”, then Mann and other reconstructionists have likely underestimated the MWP — and out goes “the warmest decade” claim. Even worse for the hockey team, what if the paleo indicators from the MWP calibrate as stronger (temp-wise) than the 1990s? Now that would be a hot discussion.

Perhaps a bit of the increased hits are due to my post 113, alluding to your call to update the proxies? But if, as you say, some have already been updated, why would Esper, Moberg and others also suggest a recalibration/update of the proxies (as published in Science Direct 10-AUG-05, Quaternary Science Reviews: “Climate: past ranges and future changes”)? Their reasoning:

1) to both improve the training period of the instrument record and

2) to resolve the “amplitude puzzle” between the low amplitude reconstructions of Mann’99 and Jones’98 vs. the high amplitude reconstructions of Esper’02, Moberg’05 and to a lesser extent, Briffa’00.

I’ve occasionally posted up comments on post-1980 proxies. Off the top of my head, I can recall posting up notes on Alaska tree rings and TTHH (Twisted Tree), probably some more. If you google the term plus climateaudit, you should locate the post. D’Arrigo of Jacoby and D’Arrigo ponders the upside-down quadratic for TTHH (updated to 2002 but not archived). But the moral drawn is not that the proxies may be no good, but that northern forests may not be as big a carbon sink. When I read Zhu [1973] and his obsequious genuflection to the Great Helmsman, it reminded me for all the world of genuflections to global warming by people like D’Arrigo – the analogy isn’t exact, I know. I just wonder whether 30 years from now, all the references to global warming, regardless of context, will read as strangely as Zhu’s praise of Chairman Mao in a science article.

I’ve discovered your weblog today. I’ve followed your critique of the Hockey Stick diagram of Mann, Bradley and Hughes (MBH) for several years now ( at John Daly’s web page). I don’t think I would have had your stamina. Anyway, I haven’t had time to read all the comments published here, but there are a few interesting things I’ve come across.

I’m a Dr in dendrochronology. I also have a physics degree and extensive mathematical knowledge. How I got to do dendro is irrelevant; I wanted to do science and I like trees. They also needed somebody who knew FORTRAN and maths. So that’s part of the story.

In my second year of PhD I was reading papers and books trying to understand the methods before implementing my code. Dendro has very good statistical books, by the way, but I later discovered that even the authors would only use the statistics if they felt like it… let’s say, if they had the ‘feeling’. Coming from a physics background, I couldn’t understand subjective arguments when I was looking for quantitative constraints for my results (signal to noise, number of trees necessary, how to evaluate the empirical models, how to choose the best method to remove age trends). I discovered that you can do it (justified and quantitatively). Your signal-to-noise depends on your location, species, and what climate signal the trees show. For example, if you have young or anomalous trees, you tend to remove them from the chronology, but you should always justify and note it.

However, what worries me about the dendroclimatology community is the circular arguments, the predisposition and ‘tuning’ towards whatever they think is right, and the lack of qualified people to do the maths or to write code (it is always the same people). I still think there is a good statistical background in the science, if only people didn’t use the codes as black boxes… boxes kindly provided by the ‘same people’… or if they did things properly, taking their time (what’s the rush anyway?). For instance, I remember feeling appalled by the use of a very short period to calculate an empirical regression… and I noted that the same authors stressed, in another publication, the problem of inflated explained variance if the ratio of the number of predictors to the number of years was close to 1…
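The quantitative constraints mentioned above (signal-to-noise, number of trees necessary) do have standard formulations in dendrochronology. As an illustration only (my sketch, not code from any commenter), the Expressed Population Signal of Wigley et al. [1984] relates chronology reliability to the mean inter-series correlation and sample depth:

```python
import numpy as np

def eps(series):
    """Expressed Population Signal (Wigley et al. 1984) for a set of
    detrended ring-width series: EPS = n*rbar / (1 + (n-1)*rbar),
    where rbar is the mean inter-series correlation and n is the
    number of trees. EPS >= 0.85 is the conventional (if arbitrary)
    threshold for treating a chronology as a reliable population signal."""
    series = np.asarray(series, dtype=float)   # shape (n_trees, n_years)
    n = series.shape[0]
    corr = np.corrcoef(series)                 # n x n correlation matrix
    rbar = (corr.sum() - n) / (n * (n - 1))    # mean off-diagonal correlation
    return n * rbar / (1 + (n - 1) * rbar)

def trees_needed(rbar, target=0.85):
    """Smallest sample depth n with EPS >= target, from solving the
    EPS formula for n: n >= target*(1-rbar) / (rbar*(1-target))."""
    return int(np.ceil(target * (1 - rbar) / (rbar * (1 - target))))
```

For example, with a mean inter-series correlation of 0.3 (not unusual for climate-sensitive sites), `trees_needed(0.3)` indicates on the order of a dozen or more trees per year to hold the chronology above the 0.85 threshold, which is exactly the kind of objective replication constraint the comment argues for.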

Regarding the extension of the proxy series: there is simply no funding available. I’ve been one year out of a job; I’ve applied for grants, sent emails to people in the field… no money. Some dendro people (and there are fewer than a thousand of us) are giving up, retiring or going off to do something else. You only get funding if you are involved in a multiproxy macro-project. Small projects are disappearing. Updating chronologies would take a lot of time with very little output (publications??). So, although good and necessary, there is no chance…

I have recently got a job. I have to pay the bills, but I’ll now be a modeller. What worries me is that the department can have a very strong view against skeptics. I console myself thinking that different views make science possible (and advance it). Maybe they need somebody to nag on a regular basis.

Hi Ana, I know this was addressed to Steve, but hey, it’s a blog. Let’s assume from your name that you are a woman. This blog has been accused of being ‘testosterone charged’ in the past. I’d be interested in knowing how many women are in dendroclimatology, and what would get women interested in the approaches here: especially audit, questioning assumptions, quasi-litigation? Maybe this is a gender-neutral issue, but if you have any relevant thoughts I would be interested.

#73 “What worries me is that the department can have a very strong view against skeptics.” Climatology departments have now become like cultural anthropology departments, which became like literary criticism departments. Politics now directs the “science” and the “science” done there deserves the quotation marks.

What’s the point of knowing math and physics, Ana, if they are bent to someone’s politics? I see no difference between tuning data and inventing data. Why not just do the latter? The outcome is the same, and it’s ever so much easier. Just be sure to include the properly auto-correlated noise in your “data” so that no one will find out it’s fake.

“Double entry book-keeping” will then have its manifestation in “science.” You’ll have the code, which no one gets to see, but that skillfully produces your “data.” And you’ll have your public code that produces your “results,” and all with a properly rigorous methodology, which all the regulators can see, and which shows everything is on the up-and-up. Everyone is happy. You get tenure, AGW is “confirmed” in spades, Al Gore becomes president, lots of people get rich trading carbon credits, the expiationally needy weep with relief, economies hemorrhage their prosperity, and Earth climate goes merrily and indifferently on its way.

David, I am a woman. I don’t know yet if this blog is testosterone charged ‘cos I haven’t read much of it. Number of women? I guess dendro is male dominated; I would say 1/10 or maybe 2/10 are women. However, I only know of one woman in a relevant position (R. D’Arrigo at Lamont, NY). Also I know of lecturers who teach or do some dendro as part of their research, but mainly we (women in dendro) are either students or post-docs. I don’t know why this ratio is smaller than in other sciences. Maybe the fact that gathering the samples requires physical strength, or spending long periods in remote places, puts some women/employers off. I don’t know.

Regarding the interest in the temperature reconstructions and the Hockey Stick, I haven’t really thought that it had anything to do with gender. (Can you give me some examples?) My interest is Science (with a capital S) and defending what I believe is right. Maybe I’m a bit naive, but this is a vocation; I want to learn, I want answers to my questions (maybe my science school teacher was too enthusiastic). Science misconduct is not included in my definition of Science. If I think about it, I’m used to being extra careful… this is ‘the’ gender issue: women have to prove they can do research, not only do it. I’d like to think that the low number of women in Science has to do with the fact that we got to the job later, any job… and not with the idea that we would be less competitive if we had a family (as somebody recently told me). Litigation? I don’t know if I would have gone to the extreme that Steve has, but I can certainly fight a battle. Besides, I got interested in all this for more practical reasons: I wanted to know where my tax money was better spent… Kyoto, or fighting hunger/disease/supplying clean water? Any other thoughts?

Hi, Ana, thanks for checking in. I think that there are some very interesting statistical issues in dendrochronology which would be well worth the attention of applied statisticians to the benefit of both sides – the dendro data is a large and rich data set with interesting issues of autocorrelation, mixed effects, nonlinearity, nonnormality, crossed effects.

As to updating the proxies, I would probably emphasize different points today. The cost of updating classic proxy sites is not large in the scheme of the total climate budget. There’s enough money to update them, and I suspect that the funding agencies would be happy to fund it. Has anyone specifically proposed this?

Also, I’m pretty sure that some sites have been updated but not reported. For example, the key bristlecone site of Sheep Mountain was updated in 2002 by Hughes but has not been reported so far. Every instinct in my body tells me that the results weren’t “good” or we’d have heard about them – it’s no fun reporting bad results.

#77. 10:1? I am guessing but my impression is the ratio would be much lower than that on this blog. I am interested from the point of view of factors that encourage female readers of blogs, and how women can be encouraged in science. In my area biodiversity modelling there are a lot of women in management roles, and they also have been at the forefront of critical evaluation of models too. I think women are attuned to responsible management, and this is about that too.

Good on you for defending Science. If you cut through the conflict, a skeptic is one whose primary values lie in the science, and not in a particular political world view. Skeptics answer to a greater value by expecting scientists to prove their case with bold and definitive experiments, well documented and executed studies, and persuasive educational materials, instead of treating the public like nongs because they don’t trust concepts like the consensus of experts. Now ends the sermon.

I personally don’t have any problem playing with statistics and the dendro data set. However, I don’t think you’ll find raw chronologies in data banks, at least for early collections. While doing my PhD I chatted with a statistician about my data and results. He gave me a few tips regarding relying on R^2, RE, etc.

Yes, I agree with you. There are some updates in the chronologies; I don’t think it’s anything systematic though. BTW, I’ve proposed bringing some chronologies up to date in a proposal. But I suspect it won’t be funded (I’ll know by the end of the month): it’s difficult to convince a panel that ‘old’ data can provide you with useful information even if you re-do previous calculations and analysis. Why should they do that if ‘new’ data provides more publications/press releases, etc.? I know that, compared to other climate projects, it doesn’t require a lot of money, but remember, there is no money in data gathering.

David, to be honest with you, this is the first time I’ve contributed to a blog. I’ve read some before, but I was never active. I’ll ask my female colleagues the questions you are asking me and I’ll let you know. It won’t be a proper survey, but at least it won’t be my sole opinion. As for why I am contributing now? I guess I thought I had something to say, and because my husband has been considering starting his own. He wants to do it in Spanish though. Apparently there are around 20,000 weblogs in Spanish compared to more than a million in English.

I can also tell you that males and females can have very different interests when browsing the net (is there a proper study on that?); maybe that contributes towards the low 10:1 ratio. Also more practical reasons: time? When I told you yesterday about having to do research and prove I can do it, I wasn’t only talking about me. We (women) also had to go to intimidating lectures at college with only 1-2 females in the classroom, with professors and family telling you that you would be better off doing something else. (My mum:) With the hours you do, how can you even consider having a family? My first guess is that, if you want to encourage science among females, whoever is in charge should start making science and the science environment more female/family-friendly (like in corporate companies): more flexible hours, facilities for child care and, more importantly, employer outreach (hey, having a child is not the end of the world; men researchers have children too, and the number of publications doesn’t drop, does it?). Tell me it is not true that a woman cannot get tenure with the same possibilities as a man, being married and with/without young children? Please make my day… So much for no gender issues, huh? (I feel so much better now)

Tell me it is not true that a woman cannot get tenure with the same possibilities as a man, being married and with/without young children? Please make my day…

Well I think women in the UC system are actively encouraged to pursue careers, and at the risk of generalization have a natural sense of responsibility that serves them well in high positions. Particularly in biological and computer science the ratio seems higher.

I can see a huge need at the elementary level. This is way off topic though, and perhaps I will do a post on that on my blog. Regards

This post was in response to one of Mann’s excuses. If I were writing this again today, I’d spend more time on the “divergence factor” and why some updated proxies haven’t been reported. Hughes updated Sheep Mountain, a key bristlecone site, in 2002; Thompson drilled Puruogangri in 2000. In the mining business, when drill results from little companies are delayed, you can be almost certain that they’re bad and the promoters are delaying in the hope that maybe one of the holes in progress will hit and they can soften the bad news. There are laws preventing mining promoters from withholding results, but climate scientists do not seem to feel the need to adhere to even such minimum standards.

As a complete dendro illiterate, I did a bit of searching on dendrology done in my country, and a dendrologist and climatologist, Rudolf Brazdil, commented in one of his Czech papers:
“The discussed problem of the climatic paradigm of the last 1000 years is documented by a series of yearly temperature anomalies of the Northern Hemisphere, based on various proxies (tree rings, ice cores, documents, coral reefs). The series shows a gradual decrease without a sudden LIA onset and a dramatic increase in the 20th century, with the warmest year in 1998 (Mann et al.). The presented series, though, does not correlate with an analogous temperature series constructed for the years 1068-1979 in Europe from biological and documentary data (Guiot, J.: The combination of historical documents and biological data in the reconstruction of climate variations in space and time. In: Frenzel, B. — Pfister, Ch. — Gläser, B., eds.: European Climate Reconstructed from Documentary Data: Methods and Results. Stuttgart — Jena — New York 1992, pp. 93-104.)”

It’s kind of amusing that you state that some people compare the GCM models with the Oracle at Delphi. My recollection of the Oracular predictions is that most of them were impossible to interpret correctly until after the fact, and that people who tried to base their behavior on those predictions usually came to bad ends.

So maybe the link between GCM models and the Oracle at Delphi is stronger than you believe?

East Sierra and White Mountain sites are near Mammoth Mtn and there are really nice hot springs along the East Sierra foothills. Great hiking, camping, fishing, packing, etc all around. Drive 4 hours and you are in LA. 3 hours north Reno, 2-1/2 hours to Vegas, during summer with high passes open 4 – 5 hours to SF.

I think they might be scared to get new proxy data because they think it will invalidate the theories. Normally, you might expect them to jump at the chance to prove things once and for all if they were really sure they’re right, except for one problem: they already think it’s been proven and that we’re all mistaken, annoying idiots.

One interesting thing is what we do have that’s clear and recent. We have two data sets for the same period from 1959 to 1978: one is CO2 levels in the atmosphere, and the other is the same levels in ice cores. If you graph both and fit a linear trend to each, you’ll see the trends are divergent at the start and continue to separate as time goes on. Then they stopped measuring both and just used the atmosphere. I wonder why…
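A minimal sketch of the trend comparison the comment describes — fit a straight line to each series and compare slopes. The numbers below are made-up illustrative values, not real Mauna Loa or ice-core measurements; only the method (least-squares linear fits over a common period) is the point:

```python
import numpy as np

# Hypothetical stand-ins for two overlapping CO2 records, 1959-1978.
# Slopes (0.9 vs 0.7 ppm/yr) and noise levels are assumptions for the sketch.
years = np.arange(1959, 1979)
rng_a, rng_i = np.random.default_rng(0), np.random.default_rng(1)
atmos = 316.0 + 0.9 * (years - 1959) + rng_a.normal(0, 0.3, years.size)
icecore = 316.0 + 0.7 * (years - 1959) + rng_i.normal(0, 0.5, years.size)

# Least-squares linear trend for each series
slope_a, intercept_a = np.polyfit(years, atmos, 1)
slope_i, intercept_i = np.polyfit(years, icecore, 1)

# If the slopes differ, the gap between the fitted lines grows by
# (slope_a - slope_i) ppm per year over the overlap period.
print(f"atmospheric trend: {slope_a:.2f} ppm/yr")
print(f"ice-core trend:    {slope_i:.2f} ppm/yr")
print(f"divergence rate:   {slope_a - slope_i:.2f} ppm/yr")
```

Whether the two real records actually diverge like this is exactly the kind of question an update and side-by-side recalibration would settle.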

Divergence

I say until the proxies are newer, they’re hiding something. I can hear it now. ‘That’s nonsense, you’re paranoid.’ Then the prompt failure to go get updated proxies. Just like always, throw out the conspiracy theory card and deflect the issue as just silliness from some crackpot.

While I agree that updating all tree ring proxy data bases would be an unnecessary cost, I think it would be prudent to update a few of the more controversial data sets like the Quinte set. It would also be prudent to complete some of the experiments suggested in these pages, if we truly wanted to test to see if Mann et al’s hypothesis was robust.

I also agree that they do not measure current climate and test climate models using proxy data. They use weather data collected into data bases like those at the CRU and the GISS.

I’m not sure how many publications will arise directly from this funding, or whether this group is just bringing together existing work in the field, but certainly the Millennium project looks like it’s asking exactly the right question – “Does the magnitude and rate of 20th Century climate change exceed the natural variability of European climate over the last millennium?”

A new study at the University of Arizona’s Laboratory of Tree-Ring Research has revealed a previously unknown multi-decade drought period in the second century A.D.

Almost nine hundred years ago, in the mid-12th century, the southwestern U.S. was in the middle of a multi-decade megadrought. It was the most recent extended period of severe drought known for this region. But it was not the first.

The second century A.D. saw an extended dry period of more than 100 years characterized by a multi-decade drought lasting nearly 50 years, says a new study from scientists at the University of Arizona…”

It doesn’t really say if there are new cores, but they apparently have data going back into the A.D. 124 to A.D. 210 time period.

If there are newer cores, then a real scientist could use that data to see if the “treemometers” follow the current rise in temps.

Just getting into all this. I read Andrew Montford on the hockey stick, following the BBC non-disclosure of the January 2006 meeting, and read that the IPCC’s next major outing is due in 2014.

Given your wish to update proxies back in 2005 – and the very sensible notion that the cost is immaterial compared with the cost of the expected energy bill price rises alone to fund policies generated by apparently dubious, out-of-date tree-ring proxies – where has this got to?

Will there be new data for IPCC in 2014 – presumably the next big event in the public’s consciousness?

As a newcomer this all seems extraordinarily like Alice in Wonderland – but in reality.

By the way, thanks to you, Steve, for your extraordinary efforts – I can’t believe the work you’ve put in and the knocks you’ve had to take; it certainly strikes me that the scientists you encountered were as determined to play the man as the ball – and that always presages a rat, as it were.

A quick question, should Steve be reading.
Is this post still valid? If not, it would be interesting to have an update on the lack of updates.
Note, this isn’t a request for “room service”; rather it’s a suggestion for a useful future post.

5 Trackbacks

[…] the proxies up-to-date. I wrote an Op Ed in February 2005 for the National Post entitled “Bring the Proxies Up to Date”, where I expressed the view that this was really the first order of business in Team world. While […]

[…] One of the earliest CA posts (published as a National Post op ed) called on climate scientists to Bring the Proxies Up to Date!, observing that the last 30 years presented an opportunity for an out-of-sample test of the […]

[…] on these matters in a couple of op-eds at the National Post – one on due diligence, one on bringing the proxies up to date This entry was written by Steve McIntyre, posted on Jul 2, 2005 at 8:49 AM, filed under […]