Urgh.. I watched this video. Elsewhere in his presentation he described the mechanics of hiding the decline – cut, splice, smooth etc – as a “no-no”. I’m wracking my brain to remember where I found it. In case it sparks anyone else’s memory, I watched this the same day I watched an MIT video with Lindzen and 3 or 4 others discussing the meanings and implications of Climategate.

Hi Steve! This is not related to your comment but it is related to your work and I thought you might find it interesting. I came across this little blurb in a very old Canadian magazine called Chatelaine – November 1972.
“We are told that automobiles are raising the levels of carbon monoxide in the air to dangerous levels, but scientists from the Stanford Research Institute who have studied ice drilling cores taken from Greenland and Antarctica, say that there was almost as much carbon monoxide in the air 2,500 years ago as there is today, and that most of it comes from natural, not manmade causes. The ice samples from Greenland were from 115 to 185 years ago, while those from Antarctica were 700 to 2,500 years old. The ice was flown to the institute’s headquarters at Menlo Park, California, where they were melted and the trapped air analyzed. The main source of carbon monoxide in the atmosphere today, as 2,500 years ago, comes from methane, a gaseous hydro-carbon produced by the decomposition of vegetation and other organic substances.”
My husband came across a whole pile of this magazine’s issues and Life Magazines that date back to the 50s, 60s and 70s. It’s amazing what kind of information we’ve been finding in them.

Interesting article in the NYTimes today about astronomers fussing over data. http://www.nytimes.com/2010/06/15/science/space/15kepler.html?ref=science It provides interesting contrasts and comparisons. Data collectors do seem to expect, and to be given, the right to a first crack at the data. On the other hand, if the data is made permanently unavailable or is lost, the conclusions can hardly be said to have been demonstrated.

Steve, are you going to add the book “The Hockey Stick Illusion” to a sidebar on this site? It is the most comprehensive and informative summary of your work so far. It should be recommended as an introductory text. Since Climategate this sort of text is gravely needed to make sense of the last seven years.

I met David Rutledge in the early 1990s. If I remember correctly, he wrote an article around this time in the IEEE Spectrum that was skeptical of the claims that EM fields from power lines were causing detrimental health effects. He did some simple calculations to show that the fields from the EM lines were many orders of magnitude weaker than the fields that are naturally within the cell membranes.

He ignores what I believe to be the primary source of disparity in estimates relating to reserve and “peak production” data. The majority of estimates are based on government controls in place (and/or projected), while other estimates are based on known and likely physical reserves, independent of energy policy.

The gulf between those is:
a) Large
b) Political
c) Growing rapidly

Bottom line: what few of my friends seem to understand is that much of the “peak oil” question is far more about politics than science or even economics.

My real point: got to be careful here; this may well be yet another arena where politics manages to hide in the middle of science.

Thank you for the comments on my talk. I have been surprised how few former editors have discussed the Climategate emails. Coaching reviewers, stacking panels, and litmus testing for associate editors are wrong, and in the journal I was the editor for (IEEE Transactions on Microwave Theory and Techniques) these things were not done. In contrast, my experience was that it was important to choose associate editors with a range of perspectives. Editors talk to each other a lot about their job, and having varying views is helpful in being fair to the authors and readers. This is particularly the case for the authors exploring unconventional ideas. Some of these papers turn out to be important in the long run.

In the IPCC reports, future fossil fuel supplies are covered by Working Group III. Climate Audit readers may not be aware that the uncertainty expressed by the range in the scenarios is extremely large. For example, the range for future oil production is about 10:1. This would appear to be a good place to try to reduce the uncertainty. It is particularly promising for coal, because of its long production history. We have major producers, like the UK and Japan, that are very far along in the production cycle, with current production at a few percent of the peak value. It is not that these countries stopped consuming coal; they are now major importers. It is not that the price has collapsed; inflation-adjusted prices for coal are higher than they were 100 years ago. What is interesting is that these countries only produced a small fraction of the minable coal that their governments reported 100 years ago. It is a great puzzle.

To answer David Rutledge’s question about low coal production vs consumption in the UK:

the reason for this is that the Conservative Party under Margaret Thatcher, in the late 1970s and early 1980s, decided to destroy the political power of the trade unions, which they viewed as being largely responsible for the economic problems in the UK.

One of the most powerful unions was the National Union of Mineworkers, which was led by Arthur Scargill. At the time the UK was benefitting from the oil and gas fields discovered in the North Sea, so basically Thatcher killed off the mining industry and relied on the North Sea reserves.

As these dry up, it seems to be cheaper to import coal than to re-open the still-existing coal fields.

Thank you for your comments. I was an undergraduate in England in 1974 when the National Union of Mineworkers brought down Edward Heath’s government, and I remember being cold in the dorms and lecture halls. I think a fairer reading of the history is that the NUM went gunning for Tory governments, and Margaret Thatcher turned out to be a formidable opponent. In any event, British coal peaked in 1913, before Thatcher was born. And production has halved during the last ten years, when Labour has been in power and while British coal prices have been rising.

It is not just Britain that underproduces its coal reserves. I mentioned Japan, but France, the eastern Pennsylvania anthracite fields, and the Ruhr hard coal mines also show the same pattern.

David, I spent a considerable amount of time looking at commodity markets in a much younger body. Before WW1, Britain was a very large exporter of coal – sending coal all around the world to fuel steam vessels. The relative decline of the UK coal industry long preceded the Thatcher government.

The Pennsylvania anthracite fields declined for a very different reason – anthracite was mainly used as “smokeless” fuel for home heating and has been almost entirely displaced by natural gas and oil. Power utilities are designed for bituminous coal. In addition, anthracite veins tend to be steeply dipping and do not lend themselves to the style of mining generally used in the industry.

My guess is that the UK, France and Ruhr reserves tend to be rather deep – thus more difficult to mine for a variety of reasons e.g. ground support, perhaps methane, … In any event, the mines tend to be expensive. With the development of very large ocean going vessels, it’s become possible to move coal from South Africa and Australia to Europe at a lower cost than the European mines.

Thank you for your comments. You make valid points about the production difficulties for each country. I do not think it lets the people who calculate national coal reserves off the hook, because the definition of reserves includes a component of economic feasibility. The prices for these producers have not collapsed. The inflation adjusted price of coal in all of the markets mentioned, UK, PA anthracite, Japan, and continental Europe has risen rather than fallen. However, production has collapsed, and on average, these countries have only produced about a quarter of their early reserves.

To start edging back to climate, at the time the IPCC developed its scenarios in 1998, the Ruhr hard coal reserves were about 20 billion tons. The Ruhr mines are an interesting case, because the German government has historically subsidized the producers at several times the price of imported coal. It is as close as we get in the real world to “technically recoverable” coal. It now appears that the production from the Ruhr mines from 1998 on will be less than a billion tons. And this is not a new problem. The Ruhr reserves in 1913 were about 200 billion tons. The actual production will be about 10 billion tons.

One additional factor that separates coal from the other fossil fuels is that the reserves often drop as more careful surveys are done and the reserves criteria are tightened up. The most spectacular example is Canada. In 1913, Canada’s reserves were 1,234 billion tons. After production of 3 billion tons in the last century, the current reserves are 7 billion tons.

The IPCC assumes that a multiple of the world coal reserves is available for its scenarios. The IPCC would have a stronger case if they could actually find a country that has produced a multiple of its reserves. The predicted future temperature rise is much more manageable if the eventual production is less than reserves.

Thanks for your comments. World coal reserves are handled by Alan Clarke and Judy Trinnaman for the World Energy Council. They do an excellent job, and everyone uses their numbers. Nevertheless, it is just a national survey and it has limits. For example, China, which produced 44% of the world’s coal last year, has not responded since 1992, and Clarke and Trinnaman use the 1992 numbers.

However, the IPCC does not use coal reserves for its scenarios, but rather coal resources. At the level of an individual coal mine, the word resources has a precise meaning. It is the coal that is not in an n-year plan, and that would need additional drilling cores and infrastructure to classify it as reserves and get it into an n-year plan. However, at the world level, coal resources gets to be a bit woolly. The IPCC’s 4th Assessment Report speaks of a “possible resource of 100,000 EJ.” This is about 5 trillion tons at current production energy densities. For comparison, world reserves are about 800 billion tons. There is no reference given for this resource number, and the fact that it is a round number sets off the BS alarms.
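As a rough sanity check on that conversion (my own back-of-envelope arithmetic, assuming an average coal energy density of roughly 20 GJ per ton, which is an assumed figure and not a number taken from the report):

```python
# Convert the IPCC AR4 "possible resource of 100,000 EJ" into tons of coal,
# assuming an average energy density of ~20 GJ per ton (assumed, not sourced).
EJ = 1e18                        # joules per exajoule
GJ = 1e9                         # joules per gigajoule
resource_joules = 100_000 * EJ   # the AR4 "possible resource"
energy_density = 20 * GJ         # joules per ton, assumed
tons = resource_joules / energy_density
print(tons / 1e12)               # about 5 (trillion tons), matching the text
```

That order-of-magnitude agreement is all the check shows; actual energy density varies considerably by coal rank.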

I have a question with regards to your analysis (based on “feelings”, not researched facts – and I am fully incompetent in this field):

The decline of coal production has happened as a new fuel became available at lower or equal price (or more convenient). Coal was reduced as oil and gas became available. Even US oil has arguably dried up as foreign fuel became cheaper.
I was wondering if this is not the primary driver for the production collapse.

Now imagining that oil reserves dry up and no other cheaper option for energy is available – old mines may certainly become profitable again, i.e. may be re-opened.

This would result in the “available resources” rising again, as availability is mostly a question of cost. And hence it may explain why the IPCC considers higher reserves. Your estimate would then be on the lower end.

Thank you for your comments. I am assuming that they are in the context of world coal production, rather than a single producer. World coal production is not falling; it has been rising at a 4% annual rate over the last 20 years. It even rose to a record level in 2009, when oil and gas production both fell. And it has a growing market: generating steam to produce electricity. However, production for some producers has fallen dramatically. For example, France produced 60 million tons in 1958, but only 200 thousand tons in 2009.

It is common in hard-rock mining to keep a mine open only a fraction of the time, when prices are high. For example, my wife and I take vacations in New Mexico near the Questa molybdenum mine. It appears to only be open about 10% of the time, when the company accountants tell them they can make money. However, this generally does not happen with coal. Coal is classified as a commodity, but in many ways it is a different animal. The electricity producers sign multi-year purchase agreements with individual mines, and the boiler engineers tune the burn for that mine’s coal.

Re-opening an underground coal mine is really only feasible if the company has been paying to maintain the miles of roadway and to pump out the water. Otherwise the floors come up and the roofs come down. This maintenance is expensive, so it is done only rarely. To illustrate, the UK had more than 400 producing coal faces in the 60s. With the closing of Welbeck Colliery last month, there are now five. There is only one additional colliery (Harworth) where the proper maintenance is being done so that the colliery could be brought back to production. This costs UK Coal about 3 million pounds per year, and I noticed that in the last annual report the company was starting to talk about the development potential of the Harworth site for office buildings. I think the odds of Harworth Colliery ever producing a ton of coal again are slim.

I guess then (and hope) that energy will never get expensive enough for those mines to be re-opened. The limit is probably the cost of solar electricity. If coal (or oil) gets more expensive per kWh, then nobody will mine them.

The bright side is that this coal (even if it is not worth mining for energy production because of prohibitive costs) is actually still available. As a professional in the chemical industry, it is comforting to know that there is a reserve of coal, seen as a valuable source of organic chemicals and therefore of high value.

If you don’t mind me complaining a bit, whatever happened to Steve’s intention to tell us what happened at Erice? The post about Choi’s cloud work was fascinating. He has since done some more work which is very interesting, regarding supercooled clouds and aerosols:

There seems to be a lot we still don’t understand about the role of clouds in the climate system.
Steve: it is too bad that I didn’t write that up. It got overtaken by Yamal, then Climategate, then the inquiries. Unfortunately I’ve only got so much time and energy and I sometimes get tired as well.

Earlier this year Biology Letters published an article claiming early emergence for the common brown butterfly around Melbourne, Australia, due to increased temperatures fueled by increased CO2 emissions. An e-letter has now appeared on the Biology Letters website that pours cold water on the methodology and the results. Pity it had to come from outside the biology community. Here’s the link…

@Steve.
I’ve been saying for months now, like D. Rutledge, that the SRES scenarios are unrealistic due to massive overestimation of energy resources and growth projections.

All SRES scenario energy reserves are above the ultimate reserves; it just doesn’t make sense. Even if you assume massive technological advances, shouldn’t you have a few scenarios based on real-world data?
(Needless to say, the distribution between coal, gas and oil was done totally randomly; it’s easy to see if you look at the inputs to the SRES scenarios in detail.)

To give you an example:
EIA 2010 projection for 2020 oil production: 33.6 Gb/yr (that can actually be considered an upper limit; I can explain why if necessary).

67 Gb/yr is just total nonsense (it means 180 million barrels per day; current production has stalled for the last 5 years at 85 million barrels per day).
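For what it’s worth, the yearly-to-daily conversion behind that figure is easy to verify (a simple unit conversion, assuming a 365-day year):

```python
# Convert 67 Gb/yr (the scenario figure quoted above) to barrels per day.
barrels_per_year = 67e9
barrels_per_day = barrels_per_year / 365   # assuming a 365-day year
print(barrels_per_day / 1e6)               # roughly 184 million barrels/day
```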

Massive overestimations are mainly for coal though.

I actually contacted the French National Academy of Sciences, and they agree that “the hypotheses used in the definition of the SRES scenarios should be better kept in mind, and they should also be put into perspective with respect to the physical limits on fossil resources, while keeping a good margin of uncertainty (flexibility) on those limits”.

When I first started looking at all this stuff back in 2007 the SRES was my primary focus, but nobody wanted to listen to any criticism of the SRES because they are just “storylines”.. fairy tales more like it

Bishop Hill has just received further material under FOI that has this astounding phrase from the beginnings of the Oxburgh Inquiry, after the list of eleven CRU papers:

These key publications have been selected because of their pertinence to the specific criticisms which have been levelled against CRU’s research findings as a result of the theft of emails.

That’s in a draft of the invitation to take part from Lord Oxburgh to David Hand, but sent by Trevor Davies to explain the situation to John Beddington. “Pertinence to the specific criticisms?” Has that claim been made elsewhere?

Just for the record, I like Steve’s sense of humor and think that his particular variety of snark, which I would call wit, is gentlemanly and in perfectly good taste. It is also rather tame in comparison with some other academic fields, like, say, philosophy. I attribute it to his being in business all those years as opposed to academia.

Also, my respect for Judith Curry has increased in proportion to her engaging with Willis Eschenbach and others. She comes across as a first class academic department chair. I think she has an exaggerated sense of professional courtesy, which I have not seen in other fields, but that is no doubt due to the peculiarly insular nature of climate science.

Agricultural scientists say a genetically engineered clover may be able to reduce the methane emissions from livestock such as cows and sheep by 10 percent.

Scientists from AgResearch and one of its subsidiaries, Grasslanz Technology Ltd, said today they can “switch on” a gene in white clover to give cows and sheep extra protein, reduce emissions of methane and nitrogen waste, and improve animal health.
…
The AgResearch announcement was criticised by the Green Party, which said the nation’s farmers did not have to resort to using genetic engineering to reduce greenhouse gas emissions.

“The agricultural sector already has many options to reduce greenhouse gases, which are not a gamble like GE.

“We know lower stocking rates, limiting fertilisers and possibly selective breeding (of livestock) and nitrogen inhibiters are ways to reduce greenhouse emissions,” said Green Party co-leader Russel Norman.
Dr Norman said there was no need to use GE organisms in the field and widespread use of GE in New Zealand would damage the “clean, green” image which underpinned agricultural exports.

Are we allowed to ask random, dumb questions in an “unthreaded” topic?

If so, here’s one that’s been nagging at me. (If not, you can delete this. 🙂 )

Not really a scientist, so bear with me. Anyway, as I understand it, the temperature rise from CO2 alone is 1.2C per doubling of CO2. After that, the resultant water vapour, having a much stronger greenhouse effect, is what will cause the temp to rise to dangerous levels.

But it doesn’t seem that it would need to be CO2 to cause this “runaway” effect – any temperature rise would lead to increased humidity, leading to even higher temperatures.

So what I don’t get is this: throughout the earth’s history, the temperature has gone up and down. Possibly as warm as it is now during medieval times, and even warmer at times before that. Why during those times did it not cause a runaway heating effect?

I know, this sounds more like a question for realclimate, but somehow I find this site more approachable. 🙂

So what I don’t get is this: throughout the earth’s history, the temperature has gone up and down. Possibly as warm as it is now during medieval times, and even warmer at times before that. Why during those times did it not cause a runaway heating effect?

Based on what he said at the recent Chicago conference, Professor Richard Lindzen thinks that this is a very good question. Why over billions of years have mean temperatures on earth not gone the way either of Venus or Mars? (Especially with the increasing evidence of massive oceans on Mars at one point.) All the evidence is that on earth they’ve stayed within a relatively narrow band: plus or minus 10 K at around 280 K, has it been? Something very narrow like that over such a long time means, according to Lindzen, that ‘sceptics’ may have emphasized the chaotic nature of the climate system too much. It ain’t been that chaotic – or at least there seems to be something in place that constrains any chaotic effects to stay within relatively narrow bounds.

I think it’s fair to say that nobody knows how this has happened. It has to be considered part of the ‘Goldilocks Effect’ for life on earth but the mechanisms are a mystery.

Mike L – that was the very first question I had (years ago) regarding the climate system. Seems there is some degree of built in stability. There are certainly perturbations (on numerous time frequencies) in temperature…however we have remained at least in a biologically acceptable envelope long enough to evolve. I think the general argument is that temps can sway quite a bit but then (other) mechanisms kick in to bring things back within the ‘normal’ range over long time frames.

Amazing! That was the first question I asked when I became interested in climate change a couple of years ago. I posted it on Real Climate and received a smug response which wasn’t an answer. They don’t do themselves any favours, and the experience steered me more in the direction of the sceptics.

If you attempt to model a regulated system, but leave out the regulating effects, it will appear to be unregulated. Within a narrow band, it can still simulate accurately, but with a tendency to fall off a cliff outside the narrow band.

The evidence we have is that our climate is regulated.

The models capture the behavior reasonably well within a narrow band.

The models are known to have serious defects in at least one regulating effect.

It really doesn’t seem that difficult to me. I’d have more to say here, but I’m not ready for the full statement, and a partial statement would be misconstrued and snipped. [selfsnip]

However, there are two interesting papers at this site which attempt to analyze Ontario Temperatures and look for time relationships.

Since I cannot seem to post a link without the spam filter eating my post…
cdnsurfacetemps DOT blogspot DOT com

For people here the papers under the first video are likely to be of interest and the movies not likely to have enough detail. It paints quite a different picture than the Ministry of Environment official position.

The relative constancy of Earth’s temperatures over eons is the direct product of the relative constancy of solar irradiance and orbital parameters. There’s a closely fixed amount of power being supplied to warm the different components in the passive system, and nothing can severely change that. The problem is that the AGW crowd has sold an ill-founded notion of “feedback” (it would make IEEE editors LOL) to an unsuspecting audience that does not realize that feedback systems cannot operate without auxiliary power. GHGs produce no power whatsoever; they merely redistribute energy produced by the Sun.

I’m accustomed to distinguishing mere recirculation of a fraction of the output (what many process engineers call “feedback”) from true feedback that puts no load on the output and to which the analytic Laplace-transform transfer functions apply. In such systems, where sensors are used to pick up the output signal, which is then replicated and perhaps filtered, auxiliary power from op amps is required no matter whether the signal in the feedback loop is added to or subtracted from the input. Nothing in the climate system resembles that.

To the gist of my earlier comment, I would add that chaos comes into play not as a fundamental matter of heating but as a consequence of the nonlinear dynamics of fluid flow in the oceans and atmosphere, i.e., of redistribution of thermalized matter. The fact that most of the Earth’s surface is composed of the substance with the highest known specific heat–water–adds greatly to temperature stability.

Perhaps this is just an argument about semantics, but it would seem to me that this planet is currently radiating as much heat as it takes on from the sun, and will be at any time when it’s in equilibrium. (Ignoring energy taken in and stored, that is.) There is, therefore, a great deal of energy – nearly the total amount provided by the sun to earth – that could be trapped and increase our temperature. If the trapping of some of that heat causes further heat to be trapped, it would constitute behaviour enough like a feedback system for it to be a decent analogy. But, as always, when you make an analogy, people debate it instead of the real point.

It’s not a matter of semantics, but of physics. All substances with a temperature above zero K radiate in the electromagnetic spectrum. That radiation comes from kinetic energy stored internally in all matter. As a planet, the Earth radiates away nearly as much energy in the thermal range as it receives from the Sun, with the remainder going into plant growth and mechanical work. The heat transfer is invariably from warm source (Earth’s surface) to cold sink (space). There is no physical means of “trapping some of that heat that causes more heat to be trapped.” Only as much stored energy can be sustained on a long-term basis as is replenished by the Sun. That’s the fundamental power constraint. The daisy-chained reasoning that heat causes more heat through “feedback” is aphysical. It ignores the basics.

There is a real back radiation in the IR wavelengths for any GHG: H2O, CO2, CH4, … They are not supposed to trap the outgoing radiation but to reflect it.
The question is not whether it is true or not … it is pure physics … but how much:
– net flow with incoming radiation in the same wavelength,
– vapour and cloud GCM for the energy emitted between surface and atmosphere,
– intensity and direction of the residual back radiation.

Atmospheric backradiation is of profound significance to surface temperatures. But it is not reflection, as such, by the atmosphere (it has no diurnal cycle). Nor is it an auxiliary source of power on a planetary basis. It is simply the re-emission of thermal energy that originated from the thermalized surface. And it comes not just from GHGs, but also from “inert” constituents that radiate in the far infrared and get thermalized via conduction, convection and molecular collisions. From the standpoint of energy transfer, it is part of a largely null-net radiative exchange between the surface and the co-thermal base of the atmosphere.

Post-modern climate science has infused a lot of physical confusion about the operation of the climate system. Hope I’ve cleared up some of these.

Thank you for your answer, Sky. I am really confused by your answer. I don’t want to open any controversy, fight or trolling on this blog… I am really looking for understanding.

First of all because I don’t see any demonstration or any links to scientific documentation.
Thermal energy is energy, and energy has a clear spectrum of wavelengths. And even if I don’t share at all the stupid theory of a “thicker blanket” due to CO2 blocking energy in our atmosphere… I have a lot of difficulty dismissing, in two words, the idea that this energy can be re-radiated, reducing natural cooling.

Actually, Mars is the poster-child example of the ineffectuality of CO2 as a means of retaining heat and the patent silliness of thermal “runaway.” With >95% CO2 in its atmosphere, the Viking lander in the Martian “tropics” recorded early summer temperatures between -83C and -33C. Clearly, distance from the Sun dictates available power and atmospheric pressure dictates the concentration of molecules per unit volume. These are the primary factors that set the temperature. The really inconvenient truth is that CO2 has a very low specific heat. It is not a simple matter of radiative intensity, which cuts both ways.

So what I don’t get is this: throughout the earth’s history, the temperature has gone up and down. Possibly as warm as it is now during medieval times, and even warmer at times before that. Why during those times did it not cause a runaway heating effect?

And whatever answer you get from Real Climate, if you follow it up with this next question…
“So what stops the runaway effect, and when does it stop?”
You get some rude comments waving you off with accusations that you just don’t understand coz you’re not as smart as them.

The point about the “runaway effect” is that feedbacks that are treated as positive by the IPCC, i.e. clouds, WV (and even GHG’s above a threshold), are actually negative feedbacks.

As Trenberth states, 70% of the globe’s cooling is by way of GHG’s radiating out to space. More GHG’s high up in the atmosphere equals more efficient cooling.

I will add my name to the list of those who began to question the consensus when I found out that the GCMs all include significant positive feedbacks. I have a working knowledge of linear control systems and know what happens when a linear system has significant positive feedbacks. Recognizing that the climate is not, of course, a linear system – the effects of positive feedback still cause instability – in extreme cases runaway to a physical limit. Reading some opinions from Doctors Lindzen and Happer convinced me that this was a legitimate line of questioning. A few months ago I read a presentation at WUWT by Burt Rutan that also questioned the CAGW consensus based, in part, on the feedback issue.

As you can see from this thread, your question was definitely not dumb.

Not all feedback processes produce runaway. I think it is safe to say that non-runaway feedbacks are more common in the real world.

As a contrived example: Suppose 1 degree of increased temperature leads to enough extra water vapour for 0.6 degrees of further warming as a direct consequence. But then this 0.6 degree increase leads to more water vapour, which leads to a further 0.36 degree increase. Do temperatures “run away”? No, the overall limit of this process is a total rise of 2.5 degrees as a result of the initial 1 degree rise (2.5 = 1 + 0.6 + 0.6^2 + 0.6^3 + …). Or a total rise of 3 degrees for an initial rise of 1.2 degrees.

The 0.6 figure was contrived to show how 1.2 degrees could turn into 3 degrees for CO2 doubling after a feedback effect. I am not claiming that water vapour feedback is strictly linear nor that the feedback factor is actually 0.6.
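The convergent series in the contrived example above can be sketched numerically (the 0.6 feedback fraction is the example's illustrative value, not a physical estimate):

```python
# Total warming from an initial rise dT0 with a feedback fraction f:
# each increment of warming triggers f times as much further warming, so
# total = dT0 * (1 + f + f**2 + ...) = dT0 / (1 - f), valid for |f| < 1.

def total_warming(dT0, f, terms=None):
    """Closed-form limit of the feedback series, or a truncated partial sum."""
    if terms is None:
        return dT0 / (1 - f)
    return dT0 * sum(f**k for k in range(terms))

print(total_warming(1.0, 0.6))  # 2.5 degrees, as in the example
print(total_warming(1.2, 0.6))  # 3.0 degrees for a 1.2 degree initial rise
```

For f >= 1 the series diverges, which is the distinction between ordinary amplification and true runaway drawn in the comments that follow.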

But maybe you didn’t mean runaway in that sense? True runaway is not something suggested by climate scientists. I think “runaway” is used in an informal way to mean “a lot more warming than you expected”.

This is the difference between gain and runaway feedback. Runaway is what happens when you put the mic in front of the speakers. Indeed people like Hansen ARE talking about runaway, and do not, I think, understand this difference.

Agreed, likewise water vapor feedback would probably stop (no later than) after all water was gaseous. Positive feedbacks that create a net feedback coefficient less than one would just produce instability – a system susceptible to wild changes in output for small changes in forcings.

Mike L … So what I don’t get is this: throughout the earth’s history, the temperature has gone up and down. Possibly as warm as it is now during medieval times, and even warmer at times before that. Why during those times did it not cause a runaway heating effect?

There were no computers available in those times to model the theoretical impacts of the warming, and so Mother Nature had no information available on which to design a fully-enabled positive feedback process.

The thing to remember is that ‘radiative feedback’ only serves to increase Earth’s surface temperature and that this ‘feedback’ begins to reduce above a few kilometres of atmospheric altitude. As most of the land surface and all of ocean surface areas are ‘wet’, a warmer surface temperature only results in greater water evaporation rates. This results in an ‘over-humidity’ of the atmosphere and clouds form.

The greater the surface temperature, the greater the water evaporation rate, the greater the cloud cover. I think that this is the stability factor that you’re looking for, as the cloud tops reflect much of the short-wave component from incoming insolation. See Dr Roy W Spencer’s post on “The Missing Heat”.

Sorry to those who’ve seen the question elsewhere – just can’t resist asking again since Steve’s offered an Unthreaded.

A book called The Long Thaw by David Archer is being used in a college (maybe graduate level) course on “Environmental Policy (and ….” – don’t remember). It was recommended that I read it.

However, having researched its premises online to the extent I can find anything, I’m really not inclined to pay Amazon.de 30 bucks for it. Part of my resistance is that I doubt I will agree with it (although for free I’d check it out just to see what the points are and what “evidence” is provided). Not only have I seen some claims in a summary that I don’t consider “settled” but, in any case, I am more interested in studies that are currently coming out (peer reviewed or not) and going into past studies in relation to them (I think the book was published in 2008 so there’s a lot missing!) That it’s recommended by members of the team also doesn’t cut much ice! My understanding is that it differs from others of its ilk in that it projects much farther into the future.

I’m gathering that it’s not seen as a particularly important book from any POV – one person did tell me he heard it was “dull.”

The few libraries here with an English language section (Switzerland) tend to have the classics, etc., so I don’t think it’s worth my while to tromp around to them.

I would welcome any information any of you have (I’m rather upset that it’s the main text for a college course).

Different topic. With the emphasis on siting weather stations at airports, we need a better estimate of temperature changes produced by aircraft activity. I do not mean the occasional jet wash from 150 yards away. Can someone with aviation interests please take a moderately busy airport, get the number of aircraft movements, estimate the average weight of a/c on takeoff and the energy needed to take this weight that many times a day to an altitude of, say, 500 feet. Then assume an airport area of say 2 miles x 3 miles and spread that energy over the ground, to get a unit like watts per sq m, and then convert it to a temperature change.

Alternatively, calculate the actual avgas use in a typical taxi and climb to 500 feet, then convert to heat of combustion and hence to temperature. Then I suppose you’d need to double it to cope with landings.

I think that simply arguing that airports are subject to siting errors is too broad a generalisation, and that it needs better quantification, especially from an aeronautical engineer. The upper estimate of temperature change might turn out to be insignificant.
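A rough sketch of the requested calculation, with assumed (not sourced) numbers: 500 movements a day, a 70-tonne average takeoff weight, a climb to 150 m (about 500 feet), spread over a 2 × 3 mile area. Only the potential-energy term is computed; fuel burn would be several times larger, but even a 10x factor leaves the figure far below the ~240 W/m² of absorbed solar radiation:

```python
g = 9.81                 # m/s^2
movements = 500          # assumed takeoffs per day
mass = 70_000.0          # kg, assumed average takeoff weight
height = 150.0           # m, roughly 500 feet
mile = 1609.34           # m
area = (2 * mile) * (3 * mile)     # assumed airport area in m^2
seconds_per_day = 86400.0

energy_per_day = movements * mass * g * height    # J/day, potential energy only
flux = energy_per_day / (area * seconds_per_day)  # average W/m^2 over the site
print(round(flux, 3))    # ~0.04 W/m^2
```

On these assumptions the aircraft contribution is on the order of a few hundredths of a watt per square metre, supporting the suggestion that the upper estimate may be insignificant.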

CO2 scam? Oliver – I can see being very sceptical of certain aspects of the AGW/IPCC ‘consensus’, such as aspects of dendroclimatology and paleoclimatology, and perhaps an underlying sense of ‘confirmation bias’ in attribution. And certainly there is much room for argument on the overall CO2 doubling effects (sign and magnitude of feedbacks) and the risk to societies. However, I’m puzzled by folks that seem able to dismiss the ‘possibility’ that CO2-induced warming is real and occurring. I’m not sure if you fit in this category, but with terms like “CO2-induced global warming scam” it sounds like maybe you do (?). I certainly don’t think Mr. McIntyre falls into this category at all (do you?). Just curious and no offense/disrespect intended.

I will add my 50+ years of experience to that of Oliver K. Manuel and sadly and reluctantly agree. I am very, very sorry that earlier in my career I did not heed (in my hubris) the similar sentiments from the 50 year experience professionals at that time and speak out much earlier.

I posted this on the “such as” thread, but maybe I should have put it here.

Did anyone besides me try to listen to the IAC audio webcast yesterday? The link just gave a recorded message. No explanation was given at the Review website. I received no reply to an email to the Secretariat asking if it would appear.

Graduate student Utane Sawangwit and Professor Tom Shanks looked at observations from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite to study the remnant heat from the Big Bang. The two scientists find evidence that the errors in its data may be much larger than previously thought, which in turn makes the standard model of the Universe open to question.

I just received a courtesy copy in the mail of a new book, “Coming Climate Crisis? Consider the Past, Beware the Big Fix”, by Claire Parkinson, a distinguished NASA climate scientist. This is the closest thing I’ve seen to a balanced exposition and discussion from a “mainstreamer.” Check it out.

Too bad Dr. Parkinson apparently knows little about the energy sector, since her book recommends fantasy solutions like wind and solar. She should read books on this subject before making silly recommendations.

JAN ESPER, DAVID FRANK, ULF BÜNTGEN, ANNE VERSTEGE, RASHIT M. HANTEMIROV and ALEXANDER V. KIRDYANOV. 2010. Trends and uncertainties in Siberian indicators of 20th century warming. Global Change Biology (2010) 16, 386–398, doi: 10.1111/j.1365-2486.2009.01913.x

Abstract
Estimates of past climate and future forest biomass dynamics are constrained by uncertainties in the relationships between growth and climatic variability and uncertainties in the instrumental data themselves. Of particular interest in this regard is the boreal forest zone, where radial growth has historically been closely connected with temperature variability, but various lines of evidence have indicated a decoupling since about the 1960s. We here address this growth-vs.-temperature divergence by analyzing tree-ring width and density data from across Siberia, and comparing 20th century proxy trends with those derived from instrumental stations. We test the influence of approaches considered in the recent literature on the divergence phenomenon (DP), including effects of tree-ring standardization and calibration period, and explore instrumental uncertainties by employing both adjusted and nonadjusted temperature data to assess growth-climate agreement. Results indicate that common methodological and data usage decisions alter 20th century growth and temperature trends in a way that can easily explain the post-1960 DP. We show that (i) Siberian station temperature adjustments were up to 1.3 °C for decadal means before 1940, (ii) tree-ring detrending effects in the order of 0.6–0.8 °C, and (iii) calibration uncertainties up to about 0.4 °C over the past 110 years. Despite these large uncertainties, instrumental and tree growth estimates for the entire 20th century warming interval match each other, to a degree previously not recognized, when care is taken to preserve long-term trends in the tree-ring data. We further show that careful examination of early temperature data and calibration of proxy timeseries over the full period of overlap with instrumental data are both necessary to properly estimate 20th century long-term changes and to avoid erroneous detection of post-1960 divergence.

“There are several ways to handle a problem like this. If tree rings and temperatures diverge, one can question the accuracy of the temperature records….The precedent for questioning the temperature record is established in the literature…as noted on CA, Wilson … had compiled [his] own temperature records, specifically in Canada…Briffa does not even explore this possibility..In his mind Jones’ record is a fact”

On that point. As I understand it, tree ring data is calibrated against one section of a temperature record, validated against another, and then projected back into the past. Given the temperature record(s) keep being adjusted and corrected, doesn’t this mean tree proxies need to be re-run against the new temperature record to remain valid?

I’ve read the paper a bit, and apparently they compare raw and adjusted GHCN values because that obviously messes with the tree/temp correlation. They write: “A corresponding calculation of the residuals between raw and adjusted temperature data showed that these changes increased from about 0 °C to >1 °C back in time, and were largest before the 1940s. The association of all these influences and uncertainties suggests that more attention needs be paid to the (i) consequences of tree-ring detrending on the low frequency signal of mean chronologies, (ii) effect of calibrating proxy data over different time periods, and (iii) number of instrumental temperature readings as well as the size and temporally varying adjustments of these data (Frank et al., 2007a).”
Actually, it looks like the tree proxies will need to be re-run against the temperature record every time the adjustments change…

Speaking of data mohelim and such … did you know that the “overwhelming scientific consensus” – attested to by “thousands of scientists” – has been shrunk, and that it’s now only a few dozen here and a few dozen there?!

When did they graft surface temperature onto tree data? Mosher claimed that they grafted, smoothed, and truncated in the WMO chart by Phil Jones, the TAR, and AR4. I believe at least 2 of those claims to be wrong.

Here’s the whole replication of TAR Briffa http://www.climateaudit.info/data/uc/TAR_pad0.png , now I’m having trouble with interpreting “boundary constraints imposed by padding the series with its mean values during the first and last 25 years”. Does not fit with the beginning of Briffa series.

Interesting title. What is true in “All series were smoothed with a 40-year Hamming-weights lowpass filter, with boundary constraints imposed by padding the series with its mean values during the first and last 25 years.” ?

Yes UC, this is my question exactly. It appears to me that each series in the TAR was smoothed / padded with a different process – unlike the caption you quoted. You have already shown this in part with your replications of MBH and Briffa. WRT Jones I am not sure about the smoothing, but the possibility that the Jones series is padded with zeros (like you suggest with Briffa) appears to be ruled out because the start of the series does not drop down from zero.

DC is attempting to explain this inconsistency. Focusing on the Briffa series in the TAR, he claims that padding with 25-year series means is virtually indistinguishable from 0 padding (or instrumental padding) – therefore the endpoint *could* be padded as per the TAR caption. It is only the start of the series that is certain to be padded with 0’s. In the comments he offers the theory that the 0 padding at the start was a mistake: a failure to differentiate in the code that Briffa starts at 1400 rather than 1000 like the other series, so 0’s fill in the voids for the 400 years with no data.
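DC’s “virtually indistinguishable” claim is easy to sanity-check on synthetic data: for an anomaly series whose end decades average near zero, smoothing after zero padding versus mean padding gives nearly the same endpoint behaviour, and identical values away from the ends. (The 40-point Hamming filter and 25-point pads follow the TAR caption; the series itself is made up, not any actual reconstruction.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.1, 200)       # toy anomaly series, end means near zero

w = np.hamming(40)
w /= w.sum()                        # 40-point Hamming lowpass, unit gain

def smooth(series, pad_start, pad_end):
    # pad 25 values on each end, filter, then trim the pads back off
    padded = np.concatenate([np.full(25, pad_start), series, np.full(25, pad_end)])
    return np.convolve(padded, w, mode="same")[25:-25]

zero_pad = smooth(x, 0.0, 0.0)
mean_pad = smooth(x, x[:25].mean(), x[-25:].mean())
print(np.abs(zero_pad - mean_pad).max())   # small: the pads differ little from 0
```

The two choices only separate visibly when the padded-with mean sits well away from zero, which is exactly why the distinction matters for a series that ends in a pronounced decline.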

DC has not offered an explanation for the re-alignment of the Briffa series by your .0645 factor. That explanation is supposedly coming in “part 2”.

While DC’s post is about “Another false claim from Steve McIntyre”, he is obviously aware that it is difficult to post pointing fingers at Steve if Mann’s TAR graph stated vs. actual processes can’t be explained or rationalized.

UC, any thoughts which might explain the +0.0645 offset for the Briffa series? I thought maybe Mann might have realigned according to how the series would have lined up using Briffa’s base period. However, unless I am missing something, when I checked the numbers the alignment for Jones and Mann would not change by using Briffa’s base period.

Arthur Smith says:
“The first discussion point in Angliss’ review of the claims and in the ClimateAudit back and forth is the meaning of the “trick” to “hide the decline” phrase found in the stolen emails. This has been adversely interpreted in a couple of different ways but the actual meaning has been clearly identified as the process of creating graphs that do include tree-ring-based temperature “proxy” data only up to 1960, or 1980, a point where they start to diverge from temperatures measured by instrumental thermometers. There is nothing scientifically nefarious or “wrong” about this – the “divergence problem” has been extensively discussed in the scientific literature including in the text of the most recent IPCC report. If you have reason to believe a particular collection of tree ring data is a good measure of temperature before 1960 but for some still uncertain reason not after that point, then it’s perfectly legitimate to create a graph using the data you think is reliable, particularly if these choices are all clearly explained in the surrounding text or caption.”

It is actually a matter of opinion whether it is “perfectly legitimate” to ignore inconvenient data. Arthur Smith wants to play authority at this one? He should come here and we can discuss.

Surely it follows inevitably, if you accept the abstract as correct, that the reconstruction of past temperatures also has errors as great as those needed for “proper” adjustment of divergence. Therefore, reconstructed temperatures are virtually useless because they would be so prone to large errors.

One wonders why it took 50 years, from 1960 to now, to find the cause of divergence, given its central importance to global warming.

At the risk of a snip, some Russian workers write several times in the Climategate emails that they are starved for funds. Keith Briffa seemed to dole it out in reward to the researchers writing the “correct” conclusions into reports. Has that stopped?

I don’t think the Russians were rewarded for conclusions – they were funded primarily for data collections. At least this is obvious when looking at the papers.

When they wrote their own papers or theses in Russian, their formulations were very careful. They always wrote about their trees as being proxies for summer temperatures. No ambition to comment on global temperatures whatsoever.

EW – There are suspicions raised if you search the emails with the keyword “money”, e.g. 881356379.txt from Keith Briffa to Eugene and Stepan (do read the whole email):

Finally, I have got permission (provided I can find the money to pay for it) to have a special issue of The Holocene dedicated to the results (todate) of the ADVANCE-10K project. It will contain a series of major articles describing each piece of the work and I wish these to include large ,detailed papers on the Yamal and Taimyr chronologies , and perhaps a separate paper on the Northern Urals work. I hope to get a firm committment now from Both of you that you will be prepared to do this. I would be happy to help with specific ideas and some analysis and plotting of all Figures and retyping if you wish. The provisional deadline for the production of the papers would be late summer or autumn at the earliest.

There are more “puppet on the string” exercises – email 907339897.txt for example

Dear Stepan and Eugene ( and Fritz),
I have now receivd contracts from The EC for the INTAS work.
I have received the real signed Power Of Attorney form from Stepan , but not from Eugene.
It seems I must have both . I am a bit reluctant to forge Eugene’s signature! We will need to think about how the money should be handled . Also please all go back and look at the document I wrote and be sure you are happy with the committment. The most important new aspect is the biomass work and I think new , or additional collections need to be taken to look at the growth of young , medium and old trees separately through time. We have very few recent young and middle age trees in recent years. We could consider using data along north/south transects (how goes the status of the Siberian Transect?).

Also, I must go to Vienna in 2 weeks to present the results of ADVANCE10K . We have a meeting of this group here in Norwich in November but I am very sorry that I have no funds to invite you to attend this. Could you afford a meeting some time , perhaps in a neutral spot where we all (including Fritz) might get together to talk about the INTAS work and future EC work? A state of the art report of progress of the Taimyr and Yamal work is needed very soon ( by email),also so that I can report on it in Vienna and Norwich. I am also writing a paper for PAGES for the book of the conference in London that Rashit attended. I will include a report of both projects , hopefully with some Figures of the data distribution or plots of the some version of the curves themselves ( along with others at high latitudes) . I would appreciate new copies of the full dated raw data sets , in Tucson compact format, to produce some curves in a standard style. I would like to compare changing variance through time at different wave lengths and perhaps co spectra.

As for money on ADVANCE10K, I initially was awarded 50,000ECU to be split between Krasnoyarsk and Ekaterinburg. Because of exchange rate changes , which have gone against us continually since the start of the project, this is now worth between 0.2 and 0.25 LESS than it did then. I have looked at the remaining money and I think I can give you each a final payment of between 4000 and 4500 US dollars. This is not definate – but it is pretty definate! I hope this means you may be able to do this year’s fieldwork. We need to think also about how and if this should be coordinted with the INTAS work – but maybe not? How about some discussion by email regarding these points. I look forward to a quick reply.

I already know these mails concerning funding of Siberian researchers. Even from the ones you quoted it is obvious that the interest of CRU was primarily in the data collections.

“The most important new aspect is the biomass work and I think new , or additional collections need to be taken to look at the growth of young , medium and old trees separately through time. We have very few recent young and middle age trees in recent years. We could consider using data along north/south transects (how goes the status of the Siberian Transect?).”
“I hope this means you may be able to do this year’s fieldwork.”

I can attest that it was very difficult for science in the post-commie states in the nineties. Not only was state funding very limited, but there were also various obstacles to getting money from abroad. Therefore I understand quite well the, let’s say, “unconventional” methods of getting the funding to the Siberians to enable their collection trips. Collections were what the Russians were good at. They leave the broad interpretations to others.

It is now 2 weeks past the due date for the findings and recommendations of allegation #4 of Michael Mann’s research conduct at Penn State. I am not aware of a report of the findings and/or recommendations. Does anyone care, or am I the only one who does?

If the paradigm here is that the judge is acting as Mann’s defense attorney, then the development you cite is analogous to the defense missing by two weeks the deadline for presenting its case, without even ever beginning — without calling a witness, without entering an item into evidence.

If the “prosecution”, in effect, finished up on time and with a strong case … well, as a jury member, I know what I’d be thinking.

Doubtless the “judge” would climb up to his bench to issue me a stern warning against that sort of reasoning. But in this case, that just makes it worse, doesn’t it?

Apologies for the redundant post – I thought the first had got lost.
I ran a crude unix script to retrieve the country & no. of contributing authors info. The top 10 is below, with the script at the end for a full list:

Perhaps I should not comment on the abstract until seeing the whole paper, but the abstract contains such remarkable comments that discussion seems warranted.

“when care is taken to preserve long-term trends in the tree-ring data”. Is it just me, or is this statement in the abstract of a scientific paper troubling? Standard practice is to start with a null hypothesis that there has been no significant change and determine whether that is inconsistent with the data or not. Here we are told we must “take care to preserve long term trends”, which must necessarily start with the assumption that there are long term trends. Isn’t it alarming when “methodological and data usage decisions” can make the outcomes so different that it is deemed necessary (à la Jones) or deemed not necessary (present abstract) to hide the decline?

This stuff just makes me shake my head in wonder. I am not aware of any other field of research in which statements like this could be published, much less taken seriously.

“Respondent avers that neither academic freedom nor the First Amendment have ever been held to immunize a person, whether academic or not, from civil or criminal actions for fraud, let along immunized them from an otherwise authorized investigation,”

Ouch.

And, “…and he indicated to a research colleague in Eagland ‘[a]s we all know, this isn’t about the truth at all, it’s about plausibly deniable accusations,'”

I don’t remember seeing this particular email, but it’s quite damning when you are talking about research using public funds. I need to look it up to see it in full context.

I too have what may be interpreted as a stupid question. I have been following climate science over the last two years, mostly through reading CA and various other web sites and blogs. I have been very skeptical about climate alarmism because of the heavy reliance on predictions made by GCMs (and because the empirical temperature data does not seem to suggest we are outside of any range the planet has experienced in the last couple of millennia). My question is: are there any GCMs (or research groups) out there that predict no runaway warming due to increased CO2 by theoretical methods? It appears to me that all GCMs only point in one direction, based on the way skeptically minded people present them with such disdain. Surely there must be a few groups out there that are not in agreement? Or is it as simple as: if you don’t model positive feedbacks you don’t get anything interesting and your funding dries up?

I’ve been pondering the “heat in the pipeline” question and need someone to check my math.

The assumption I’ve made is that any unrealized gain in SST in response to a given forcing can be modeled as per Newton’s Law of Cooling. This “law” calculates the unrealized temperature gain using the factor e^(-rt), where e ≈ 2.718…, r is the heat transfer rate (r > 0), and t is time. So at t=0, 100% of the heat gain is unrealized, and as t goes to infinity, 0% remains.

Many moons ago, I took an actuarial course and the “heat in the pipeline” calculation reminds me of the future value (FV) calculation for an annuity with continuous interest and continuous payments. In this case, however, our interest rate will be negative and our payments will be the increases in ln(co2). Looking up in Wikipedia, the present value (PV) of this annuity is:

PV=(1-e^(-rt))/r [http://en.wikipedia.org/wiki/Time_value_of_money]

To convert this to future value, multiply by e^(rt):

FV=(e^(rt)-1)/r

Then, to make the rate negative and multiply by an assumed constant payment (a):

FV=a*(e^(-rt)-1)/-r

So, given the assumptions, as t->infinity, FV converges on the value a/r. The amount of time that it takes to get to 99% of the convergence value is about ln(.01)/-r. So if r=.5 then we get to 99% in 9.21 years (with t measured in years).
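The constant-payment case can be checked numerically (a is arbitrary here; r = 0.5 as above): the closed form a(1 − e^(-rt))/r should match a brute-force sum of payments, each decayed per Newton’s law, and should reach 99% of a/r at t = ln(.01)/−r ≈ 9.21.

```python
import math

a, r = 1.0, 0.5          # a is an arbitrary payment rate; r from above

def fv_closed(t):
    # FV = a*(e^(-rt) - 1)/(-r), rearranged
    return a * (1 - math.exp(-r*t)) / r

def fv_numeric(t, n=100_000):
    # midpoint Riemann sum: payments a*dx, each decayed by e^(-r*(t - x))
    dx = t / n
    return sum(a * math.exp(-r * (t - (i + 0.5)*dx)) * dx for i in range(n))

t99 = math.log(0.01) / -r            # ~9.21 years
print(fv_closed(t99) / (a / r))      # ~0.99 of the limiting value a/r
print(abs(fv_closed(20.0) - fv_numeric(20.0)))   # ~0: closed form agrees
```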

But if we regress ln(accumulated co2) over time, then we find that a second order polynomial is a better fit than a linear one. This would mean that our payment is not constant but rather increases linearly (ax + b). So to derive the FV formula, I will attempt some calculus (help!). It looks like the derivation of the above FV formula was:

Integral from 0 to t of a*e^(-rx) dx

So if the payment increases linearly and is not constant, then:

FV = Integral from 0 to t of (ax+b)*e^(-rx) dx

So plugging this formula into Wolfram’s Online Integrator and evaluating from 0 to t, I get:

FV=-(e^(-r*t)*(a + b*r + a*r*t)/(r^2)) +((a + b*r)/(r^2))

which converges to: (a + b*r)/(r^2) and we get to 99% of this value at about the same time as calculated above.

Anyway, if any of you are mathematically inclined, I’d appreciate it if you can check my calculations.
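For what it’s worth, the linear-payment result above checks out numerically: Simpson’s rule on the integral of (ax + b)e^(-rx) over [0, t] reproduces the closed form, which converges to (a + br)/r² (the a, b, r values below are arbitrary test values, not estimates of anything):

```python
import math

a, b, r, t = 0.02, 1.0, 0.5, 30.0    # arbitrary test values

def closed_form(t):
    # FV = (a + b*r)/r^2 - e^(-r*t)*(a + b*r + a*r*t)/r^2, as derived above
    return (a + b*r)/r**2 - math.exp(-r*t) * (a + b*r + a*r*t) / r**2

def simpson(t, n=20_000):
    # Simpson's rule for the integral of (a*x + b) * e^(-r*x) over [0, t]
    f = lambda x: (a*x + b) * math.exp(-r*x)
    h = t / n
    s = f(0.0) + f(t)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i*h)
    return s * h / 3

print(abs(closed_form(t) - simpson(t)))     # ~0: the integration checks out
print(closed_form(1000.0), (a + b*r)/r**2)  # both ~ the convergence value
```

A quick sanity check on the formula itself: it gives 0 at t = 0, and its derivative is (b + at)e^(-rt), i.e. the current payment decayed at rate r, as intended.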

I should note Wikipedia states the following about Newton’s model:

“This form of heat loss principle is sometimes not very precise; an accurate formulation may require analysis of heat flow, based on the (transient) heat transfer equation in a nonhomogeneous, or else poorly conductive, medium.”

So this might not be a good model for calculating SST heat in the pipeline. Does anyone know of any study that refutes this model?

The lesson for climate research is clear. “There are so many weather stations in Europe that, if we are not careful, these solar effects could influence our global averages,” says Lockwood.

Is that true? I thought gridding came before averaging, so doesn’t that mean that having large numbers of weather stations in one area doesn’t matter (provided you haven’t got zero stations in an area, and hence have to pinch your data from up to 1200 km away, à la GISS)? I’m quite new to this and it’s all very confusing…
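A toy illustration (entirely made-up numbers) of why gridding matters here: a naive mean of all stations is dominated by a dense regional cluster, while averaging grid-cell means first gives each region equal weight.

```python
# 10 clustered "European" stations reading +1.0, plus one Arctic and one
# Pacific station reading 0.0 -- three grid cells in total.
stations = [("europe", 1.0)] * 10 + [("arctic", 0.0), ("pacific", 0.0)]

naive = sum(v for _, v in stations) / len(stations)   # 10/12: cluster dominates

cells = {}
for cell, v in stations:
    cells.setdefault(cell, []).append(v)
gridded = sum(sum(vs)/len(vs) for vs in cells.values()) / len(cells)  # 1/3

print(naive, gridded)
```

So with gridding, piling more stations into Europe doesn’t change the global average; Lockwood’s worry would apply to an unweighted station mean, or to any quantity computed before the gridding step.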

Having lived in Edmonton for nine very cold years in the 1970s (-30c to -45c for weeks to months at a time), you get to know a lot about snow and cold weather. Sometimes I was playing outside in a t-shirt on a day that was sunny and around -20c! It would not get above zero celsius for weeks. Yup, and the snow would be melting. Strange, was what I thought. Maybe it was sunshine warming the snow? Never did get the answer to that one… so I’m asking if anyone knows. It could be related to the arctic ice issues.

I found this comment and wonder if it’s correct?

“CAN ICE MELT WHEN THE AIR TEMPERATURE IS SUB-FREEZING?
METEOROLOGIST JEFF HABY
I have made this observation on days in the winter in which there was snow cover: The air temperature was below freezing yet ice and snow was melting off house roofs, cars and other objects. Why would this occur?

1. Air is a very poor absorber of solar radiation while objects on the earth’s surface are much better at absorbing. Even snow with its very high albedo and reflective ability is more absorbing of solar radiation than the air. While a temperature sensor exposed to the air may detect temperatures below freezing, the sun’s radiation can warm individual objects above freezing (especially objects with a low albedo).

2. The temperature of the earth’s ground surface and/or objects on the surface may be above freezing. The air temperature at the observing level may be below freezing while the temperature of air immediately surrounding certain objects may be above freezing. This can occur when the soil temperature has not yet adjusted to or modified the air temperature at the observing level.” http://www.theweatherprediction.com/habyhints/230/

There is always geothermal heat coming out of the ground. If you push a temperature probe about 3m into the ground, it will record a temperature relatively constant throughout the year of ~10°C. Similarly, when the ground is covered with several metres of snow, the snow acts to insulate the earth such that the temperature at the earth’s surface will rise to this level – hence causing the snow to melt from the bottom-up.

While some of this heat may be absorbed from solar radiation, with snow the reflection dominates so the heat that melts the snow derives mainly from within the earth itself.

By the way, geothermal heat derives from the earth’s core which is molten as well as from emissions from a variety of radioactively-decaying elements.

Hence, one could claim that geothermal energy is actually a form of nuclear energy.

This is an interesting site: http://nsidc.org/data/ggd621.html
“The summary ground temperature database for northern Canada includes publicly available information from published and unpublished sources for 656 sites, 526 of which are in the permafrost region. The majority of the sites are currently abandoned, with only about 17% active. Measurements at the inactive sites were generally recorded between 1960 and the mid 1980s. Although ground temperatures were measured over a number of years at many sites, the database compilation contains mainly summary information. Information on site characteristics such as air temperature, snow cover and vegetation which influence the ground temperature regime has also been compiled. The database has been published as a Geological Survey of Canada Open File Report which also contains a series of maps and graphs illustrating site distribution, near-surface ground temperatures, and other attributes of the database.”

This is also interesting: http://gsc.nrcan.gc.ca/permafrost/database_e.php#database1
“Knowledge of baseline permafrost thermal conditions is critical to monitoring and assessing changes in permafrost. This is important not only for climate change science, impact and adaptation studies but also critical for land use planning and infrastructure design, construction, operation and maintenance. To this end, the Geological Survey of Canada (GSC) has compiled national databases of permafrost thickness and ground temperature. The data are compiled from observations of over 500 government, university and industry boreholes. Information on site characteristics such as air temperature, snow cover and vegetation which influence the ground thermal regime and the permafrost distribution has also been compiled. The reference for each site is also provided and may be consulted for further information. Selected fields from the database may be viewed here. The entire database and complete text are published in the following GSC open file reports…”

Yes, at times there was water on the ground. Later it would ice over when the sun went down. This was a rare occurrence, usually after weeks of -40c. We’d play outside for a few hours in t-shirts in the minus 20c. As long as the sun was out we’d be fine mid day. It’s not like there was water gushing from the ice as it does when it’s above zero in the spring. It was a lot of moisture with some water. Very good snowball weather, since it would stick together due to the moisture present. Of course this is all anecdotal – I’m just wondering if there is such a phenomenon.

I’m curious if some melting occurs in snow as the temperature (air) warms up from say -40c to -20c?

Why would the snow, a powder one day, become sticky the next?

I wonder how much we really know about how snow and ice behave? Who has done experiments or observations on snow/ice that might show what if anything happens under these conditions?

Maybe it’s just the sunshine warming the snow since it’s not a perfect reflector and it warms enough to melt some, even a tiny portion, but enough to change the texture of the snow from powder to sticky perfect snow ball throwing snow?

The USDA Forest Service “Avalanche Handbook” – chapters 2 and 3 cover the formation and changes in the snowpack, and include further references. See http://tinyurl.com/3x8wty9 – link to the USDA Digital Repository.

Or would there simply be more moist air that imparts some of that moisture to the snow?

I’ve noticed that the whole battle on climate change seems to revolve around temperatures and CO2 concentrations while it’s rare to see any mention of the humidity of the atmosphere or other aspects of the air (such as barometric pressure). Why is that?

Surely water content in the air makes a difference to climate and thus to the alleged AGW hypothesis?

Mike L asks an interesting question that has relevance to past climates of the planets and the present and future climates of Earth. “Why hasn’t Earth had runaway climates like Venus and Mars”? I would like to concentrate on Earth and Mars.

Turns out I am in Thailand with a little time to spare, so here is a partial answer to your question.

What are the differences? Perhaps the distance from the sun and the absence of a magnetic field on Mars are important. But even more so is the existence of a background greenhouse gas. Earth is fortunate that we are close enough to the sun that CO2 stays in its gaseous form and provides a basic downwelling infrared radiation to the surface. The resulting surface temperature, higher than it would be without CO2, is sufficiently high to allow water to stay in its gaseous form (i.e., water vapor). Furthermore, the magnetic field surrounding Earth diverts the solar wind, reducing the shedding of the atmosphere.

Mars, on the other hand, has a much smaller or non-existent magnetic field. Probably shedding has taken place, reducing the mass of the atmosphere. But Mars is also much further from the sun, so the CO2 in its atmosphere goes through phase changes and much of it is collected at the poles through deposition. During the spring/summer, the CO2 ice sublimates, only to finish up at the winter pole through deposition. In summary, the solar wind has probably reduced the atmospheric mass of Mars. There is no background greenhouse gas to modulate the surface temperatures, and the surface temperature goes through wild diurnal and seasonal changes.

I think that the concentration of a basic or background greenhouse gas is the backbone of Earth’s climate. Largely, the balance between source and sink of CO2 determines the background concentration: the carbon cycle. The source is volcanism and the sinks are the weathering of rocks (rain being slightly acidic due to dissolved CO2) and the uptake of CO2 by the oceans. Absorption takes place because of the formation of carbonates by the biosphere and the precipitation of these carbonates to the bottom of the ocean once the beastie dies. The carbon then enters very long geological cycles and is eventually expelled through volcanoes, etc.

Earth, though, undergoes modulations. It has probably undergone at least two snowball earths, with mass extinctions of 95-99% of life. But these did not last long because of the vigor of the carbon cycle. The snowball state was probably caused by a meteorite which, on impacting the planet, reduced solar radiation long enough for the oceans to freeze. This, of course, cut off the sink of CO2, and rather rapidly the CO2 built up to many times the pre-collision level, as the volcanic source did not stop. Increased downwelling radiation rapidly (geological evidence suggests 1000’s-10,000’s yrs) allowed the surface to heat up, ridding the ocean of ice and restoring a sink for CO2, with a subsequent reduction in its concentration. There are much more rapid changes in Earth’s history, as the solar radiation at a point on the surface of the Earth changes with the Milankovitch orbital cycles. Two things need to be noted: the total radiation over an entire year does not change (assuming no change in solar output), but the planet has a complex distribution of land and ocean. Thus, through regional changes in solar radiation, the temperatures of land and sea change. As the amount of CO2 absorbed or released by the oceans depends on ocean temperature, the concentration of atmospheric CO2 has changed over time scales of 1000’s of years.

Mars too has its own orbital variations, and I believe the local changes are more dramatic than Earth’s. But there is an interesting curiosity. Noting that I am not a planetary atmospheric scientist, it has always intrigued me that the observed partial pressure of CO2 in the Mars atmosphere is 4 hPa (mb). This turns out to be the saturation vapor pressure of CO2 over ice. So one may hypothesize that Mars also underwent a catastrophe, as Earth did, and solar radiation was reduced. Let’s assume that there was a shallow ocean on Mars and (say) a 100 hPa atmosphere (compared to Earth’s 1000 hPa atmosphere). With reduced sunlight, the ocean cooled. Noting that ingassing into the ocean goes as the inverse of the water temperature, the amount of atmospheric CO2 was reduced, lowering the surface temperature even more. All this continues until Mars’s oceans are frozen over, leaving just the amount of gas that would be in balance with the ice. So why didn’t Mars rebound in the same manner as Earth? Simply, Mars has little volcanic activity and does not have the virile carbon cycle that exists on our planet.

What has this to do with the debate over global warming? Irrespective of one’s viewpoint, it has to be acknowledged that (a) CO2 is important and it has to have a stable phase, and (b) there are water vapor feedbacks. I think it necessary to have this basic science understood before we man our various barricades. Personally I am somewhere between the barricades, as I am trying to sort out how much of the warming during the last 100 years is due to natural or internal variability and how much to the perturbation of GHG concentrations. Of course, this determination is complicated by the complexity of the surface temperature record.

So not a silly question at all! If the answer is silly I will hide behind jet lag.

Thanks, but I don’t understand. Clearly I was unclear! CO2 is the stable (no phase change) gas. H2O requires the CO2 greenhouse effect for there to be a water vapor feedback; otherwise H2O stays in three phases.

Have you seen Lindzen tell us that he can use cirrus clouds to explain the faint young sun paradox, while it’s been tried many times with CO2 and it isn’t possible? This seems to put the argument out there that in the past, H2O, using all of its phases (which can provide both positive and negative feedbacks), is rather more adaptable and powerful as a climate change agent than CO2, the single-phase pure heating amplifier.

That CO2 is only a heating amplifier is surely actually a problem when you want to explain ice-age reversal because that requires either an opposing cooling amplifier or a huge and sudden carbon sink. Seemingly this fatal flaw in the theory was just ignored until Richard Alley came along and suggested rock weathering might be that carbon sink. Clouds still seem more plausible though don’t they?

“The Allan Hills are located on the flanks of the TransAntarctic Mountains. Ice upwells onto the hills where combinations of winds and solar insolation cause the ice to quickly ablate. Meteorites that once fell over a large region of East Antarctica have been carried by glacier motion into this small locality.” (It looks like a hole sited at Allan Hills might not be so good). What happens to the ice that ablates? Does it freeze and fall to the ground again, with an oxygen isotope signature incapable of interpretation?

We don’t have snow here so I’m out of my league. For the experts – is there a possibility that places like the South Pole (where there are prominent sastrugi, rather than snow drifts at surface) and Vostok, have layers formed by variation in temperature, but material in those layers that has been blown to and fro as small pieces of ice, each with its own isotopic history? If this is so, I guess that one should not expect too much resolution out of ice core. Major events every 100 years seem to show through from one site to another, but in between, who knows where the oxygen isotopes came from? Is there some help from other isotopes like those of chlorine?

Geoff,
All I know is that it is complicated and that the water that ends up as snow in different parts of the continent is different. For example, O18 ratios in the ice for the coastal regions indicate that the source of the water is local, from a cold ocean. But ice on the plateau has a ratio indicative of a subtropical/tropical water source. I do believe that there is an annual cycle in each of these regions, but I am not sure of the magnitude, although it is very small on the plateau. I tried a few years ago to determine the pathway of the tropical water but failed. But if the O18/O16 ratio is so different, as mentioned above, I would guess that ice cores would not be contaminated by drift.

When extrapolating into the past, how exactly does one get around the confounding influences on the ratio? In particular the amount of water in ice sheets, and salinity, to isolate a purely temperature signal?

It makes for an interesting morning read over coffee when you realize that you are now likely the target of background searches and criminal investigations for a crime that you might contemplate in the future. I have variously seen this in fantasy and science fiction described as “Future Crime” and “Thought Crime”. What more could I say?

I don’t know if CA readers have come across this paper by Esper/Frank 2009 titled ‘Divergence pitfalls in tree-ring research’. It discusses the difficulties of tree-ring reconstructions due to what they term as ‘divergence phenomenon’ or DP (something Phil Jones is apparently familiar with). Here’s the link:

“…inter-annual tree-ring variation may be predominantly controlled by temperatures, but the long-term warming trend is not (fully) retained in the tree-ring time series. Such a situation is of importance, as it limits the suitability of tree-ring data to reconstruct long-term climate fluctuations, particularly during periods that might have been as warm or even warmer than the late twentieth century.”

So it seems they are saying that the tree-ring proxy data is probably biased towards cooler temps which of course if true would make present day temps appear all the more ‘unprecedented’.

We need to grow thicker critical skin. Why? Because critical behavior that always results in a chorus of affirmation is nothing more than conformity; because allowing views to persist that need to be challenged is nothing less than critical mediocrity; and because failure to tell our colleagues what we truly think about their work is simple dishonesty. A reshaped critical culture will help build a more robust, honest, and transparent academy.

Hi Steve, I have just finished reading “A Cross Examination” by Jason Scott Johnston of the University of Pennsylvania and I found it riveting. I would like to know if you have read it and what your thoughts are.

“Stable isotope ratios of oxygen and hydrogen in the Antarctic ice core record have revolutionized our understanding of Pleistocene climate variations and have allowed reconstructions of Antarctic temperature over the past 800,000 years (800 kyr; refs 1, 2). The relationship between the D/H ratio of mean annual precipitation and mean annual surface air temperature is said to be uniform (±10%) over East Antarctica (ref. 3) and constant with time (±20%) (refs 3–5). In the absence of strong independent temperature proxy evidence allowing us to calibrate individual ice cores, prior general circulation model (GCM) studies have supported the assumption of a constant uniform conversion for climates cooler than that of the present day (refs 3, 5). Here we analyse the three available 340 kyr East Antarctic ice core records alongside input from GCM modelling. We show that for warmer interglacial periods the relationship between temperature and the isotopic signature varies among ice core sites, and that therefore the conversions must be nonlinear for at least some sites. Model results indicate that the isotopic composition of East Antarctic ice is less sensitive to temperature changes during warmer climates. We conclude that previous temperature estimates from interglacial climates are likely to be too low. The available evidence is consistent with a peak Antarctic interglacial temperature that was at least 6 K higher than that of the present day, approximately double the widely quoted 3 ± 1.5 K (refs 5, 6).”

Don’t worry – these people must be scientists – and therefore are not qualified to discuss climatological matters. And no doubt, two or three fossilized tree trunks (from a database of thousands) will prove that this _must_ have been a localized anomaly, and therefore unworthy of inclusion in IPCC reports.

However, one can see the temptation for multinational companies to get into this line of work. In fact, how do I apply to receive large sums of money for _not_ generating electricity, in a green sort of way?

“The following papers support skepticism of “man-made” global warming or the environmental or economic effects of. Addendums, comments, corrections, erratum, replies, responses and submitted papers are not included in the peer-reviewed paper count. These are included as references in defense of various papers. There are many more listings than just the 750 papers.”

A question for CA’s radiative physics junkies: I’ve been looking into the Faint Young Sun paradox and trying to figure out what sort of CO2 concentrations would be necessary for temperatures to be like today’s. The problem is that I don’t know exactly what happens to CO2’s forcing as concentrations get very high. Right now I get an atmosphere of ~9900% CO2 for 2.5 billion years ago as necessary to compensate for the negative forcing of a dimmer sun, approximately 68.4 W/m2. I get this by assuming that the formula 5.35 ln(CO2/CO2_0) holds for such high concentrations. Now I know that CO2 alone is incapable of solving the FYSP, but this difference is larger than I thought it would be. Anyone have a more precise calculation?
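For what it’s worth, the back-of-envelope arithmetic in the question above can be checked directly. This is a minimal sketch using the standard simplified forcing fit dF = 5.35 ln(C/C0); the 280 ppm pre-industrial baseline is my assumption, and, as the commenter notes, extrapolating this logarithmic fit so far beyond modern concentrations is of doubtful validity.

```python
import math

def co2_for_forcing(delta_f_wm2, c0_ppm=280.0):
    """Invert the simplified forcing fit dF = 5.35 * ln(C/C0).

    Returns the CO2 concentration (ppm) that would supply delta_f_wm2
    W/m^2 of forcing over the baseline c0_ppm. The 5.35 coefficient is
    the commonly used fit; it is not meant to hold at such extremes.
    """
    return c0_ppm * math.exp(delta_f_wm2 / 5.35)

# Offset the ~68.4 W/m^2 deficit of a dimmer young sun:
needed_ppm = co2_for_forcing(68.4)
print(needed_ppm)          # ~1.0e8 ppm
print(needed_ppm / 1e4)    # ~9990 "percent of an atmosphere"
```

Inverting the formula gives C = C0 * exp(68.4/5.35), roughly 3.6e5 times the baseline, which with 280 ppm is on the order of 1e8 ppm: consistent with the commenter’s ~9900% figure, and a sign that the fit is being pushed far outside its range of validity rather than a physically meaningful answer.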

I don’t think a CO2 model can determine this. However, the atmospheric end of the hydrocycle probably can accommodate the ‘young sun hypothesis’. At current insolation levels ‘cloud top’ reflection of incoming SW keeps the surface cool by many w/m^2 and a reset period of ~9 days lifetime offers a rapid regional response. A ‘young sun’ just wouldn’t provide the same intensity of ‘cloud cover’.

Just as a heads up and perhaps this isn’t the right place to put it. I don’t know how in depth you plan on looking into the US Government malfeasance, but one of the major players in the incident, the Minerals Management Service has undergone a name change to the Bureau of Ocean Energy Management, Regulation and Enforcement.

Seeing as the last thread I posted in was closed without me responding to a number of things, I feel I should offer a chance to resolve any outstanding issues. If there are any questions, comments or concerns I can address, I will do my best to do so (keeping in mind a food fight is unacceptable). That said, there are a couple points I noticed which I wanted to address. The first comes from Richard T. Fowler:

“It is now 45 minutes since I demanded that Brandon Schollenberger quote from the supposed response that he has alleged to exist, and over 2 and 1/2 hours since he first made the false claim that I had made up the claim that McIntyre’s claims were not responded to.”

The first thing I want to note is my name does not have a “c” in it. More importantly, this comment shows why I said I would not jump around a thread to respond to people. I have things I want to do with my time other than post on this site. It takes too much time and effort to have exchanges like the ones going on in the other thread, and nothing seems to get resolved. I asked to try to organize things, but nobody seemed interested. It seems unreasonable to expect someone to post the way people expected me to post.

“Of course you don´t get it, you wrote it and you know the meanings behind it. I just don’t understand why you get your knickers in a twist by this misinterpretation. Otherwise I agree with your message(s), so let’s call it a day. Beer anyone?”

This comment from Hoi Polloi demonstrates a common problem on the internet. He says he doesn’t understand why I got upset about people misunderstanding my comment. The problem is I didn’t get upset about it. I was merely expressing genuine confusion, as I didn’t think the misinterpretation made sense. Tone isn’t conveyed well over the internet, and it is easy to read the worst possible tone. My personal approach is to try to take the best interpretation when I am uncertain.

In any event, I hope I can resolve some issues without causing unnecessary strife.

Everybody comes out of a food fight looking bad. Misunderstandings abound; there is sometimes some dark amusement in spotting them before the combatants do. But most readers don’t care about what’s being vehemently disputed. Of those readers that do, most will have forgotten why they seemed to matter, within a few days.

On the issues raised by Arthur Smith’s and Steve McIntyre’s posts, there are some good comments following Arthur’s post, here (note that there’s a “Page 2” of comments).

I really did not intend to cause a food fight. The situation was simple, and I expected it to be resolved rather quickly once things were laid out clearly. I mistakenly thought if the misunderstandings were exposed, everyone would “back off.” Instead, it seems my comments just added fuel to the fire.

In my past readings of ClimateAudit, I always got the impression of fair and level-headedness. Because of that, the sheer amount of vitriol in that topic was baffling. There are so many comments in it which I feel merit a response, but I’m afraid to say anything given what happened the last time I posted.

That said, it is obvious to me Arthur Smith has a flawed view of the “big picture.” Reading some of his comments, I can see why he is wrong. Quite frankly, the situation is not easy to understand. There is no “beginner’s guide” to all the material ClimateAudit has covered. There is no way people can be expected to know all the issues.

And given the way he was treated, can you really expect him to want to listen to them?

I’m sorry about the repeated error with your name. It was entirely accidental. (Though, after I was advised the first time, perhaps it was subconscious? It certainly does underscore the point I found myself repeatedly making.)

In light of your most recent comment (the one to which I am now replying), I can say that I have only one outstanding matter of contention with you. This can be illustrated as follows.

You quote me as saying, “[. . .] since he first made the false claim”. Hopefully you can see that it would have changed the meaning somewhat (that is to say, dramatically) if I had said “since he first made UP the false claim”.

I can assure you that the claim I referred to in the statement you have just quoted is false.

In order for it to be true, I would have to have believed that there existed a response, as I define the term and as I have defined it on the page where we communicated.

If that were the case, that would mean that all my expressed offense or outrage about Smith’s reply to McIntyre was phony.

I can assure you that I am, was, and have been, nothing BUT offended and outraged about Smith’s reply to McIntyre.

If you have any doubt about that, well, you have no need to reply to the present comment.

If however you believe that I was offended by Smith’s reply to McIntyre, I ask that you admit that, at the very least, you cannot be certain that I made up the claim. Better than that would be a statement that the claims that I made it up are not supported by the preponderance of the evidence.

Even better than that would be to say that your statements that I made it up were clearly unwarranted by the facts that were in hand, and are unwarranted by the facts that are in hand.

If you would make any of these statements or substantially the same, I agree that I will permanently forget about the above-referenced matter of contention.

I can’t agree with you. The first point I want to make is non-essential, but I think it merits attention due to its absurdity. At one point you said:

“No it isn’t. Let’s be clear. ‘Reply’ means any statement said or written as a result of a previous one. ‘Response’ means a statement that is actually responsive to some other specific statement.”

As far as I can see, this is completely arbitrary. I have never encountered the definitions used here, and I find no evidence for them in any dictionary. This seems to be some sort of post-hoc attempt to avoid admitting a mistake. It is made even worse when one realizes the after-the-fact justification in no way covers another comment of yours (which I quoted just above where you offered your semantic defense):

“McIntyre made claims that you made false statements. You ignored them. And now you want to get credit for having ‘answered’ McIntyre’s claims. You answered only one of them — the claim that false statements were made after the intro.”

Even if one were to accept your definition of “response,” something I see no reason to do, you said Arthur Smith ignored McIntyre’s claims. There is no wordplay which could even pretend to explain that away.

I don’t know what exactly to say. The idea was not made up. I believed it and I continue to believe it.

The definition you discuss, my definition, is really the definition that I always try to use, and have tried to stick to for quite some time. I was very careful in my language in these posts to try to stick to it, and as a result I was careful to use words such as “post”, “answer”, and “reply” instead of “response” or “respond” when I did not feel that a response had happened.

Even if you use a different definition of the word “response”, that does not have to mean that I use the same definition in my writing.

If the question were what is a reasonable interpretation of the meaning of my words, then one could consider the question of “What is the most reasonable definition of the word ‘response’ for the given context?”

But since the question here is only what I believed, it is at best of marginal relevance what definition you yourself use or used. Since I had never before corresponded with or spoken to you, or read or heard your words, I had no way of knowing what definition you would consider appropriate, and even if I did, that does not necessarily mean that I considered myself bound by it.

As for your last three paragraphs, Smith’s reply to McIntyre speaks for itself:

“That paragraph was part of my intro. The main post doesn’t start until I quote Steven Mosher.”

That’s all Smith wrote.

The only way it could conceivably be argued that I made up my claim is if there is some hint in that quote of a reference to the truth or falsehood of the statements in question on the blog post of Arthur Smith. That might, if I really didn’t believe what I was saying, be used as a means to infer what thoughts I had in my mind.

But even in that case, I just really fail to see what my motive would be. It’s not as if Smith hadn’t taken a verbal lashing on other grounds. If it’s so transparently obvious that the claim that Smith didn’t respond is untrue, why would I not just pick something more credible to rail about? And that assumes that I wanted to rail against Smith, which I didn’t want to do until I saw his reply to McIntyre, and was incensed by it.

You don’t think it’s the slightest bit possible that I was offended by the Smith statement I quoted here?

———-
Now then, I’ll admit I messed up in my response to you. I should have caught your mistake right away.
———-

The previous paragraph says:

———-
So your complaint is Arthur Smith did not respond to certain claims, but did respond to a minor claim instead. The reality is he responded to the “minor” claim here, and edited the post on his website to respond to the rest.
———-

If I have correctly identified the mistake you were referring to, then this means that you were implicitly stating at that time that I actually believed that there was no response, and that such belief was a mistake. While I obviously disagree that it was a mistake, I do agree that I did (and do) believe that there was no response to the claims of false statements.

If you were telling the truth about your view at that time, then that must mean you only subsequently came to the view that I had made it up.
The apparent suggestion that my claim began as genuine (which alone proves my assertion that one cannot be certain that I made anything up) and then turned into a fabrication is in stark contrast to your later statement,

Statement 1.

———-
Why did I “PO RTF”? I’d imagine the reason is because Richard T. Fowler made something up. He claimed Arthur Smith didn’t respond to something, but Arthur Smith had responded.
———-

and also to your use of the word “obviously” here:

Statement 2.

———-
Regardless of why it happened, he obviously made it up.
———-

and also to this:

Statement 3.

———-
There is no wordplay which could even pretend to explain that away.
———-

In the case of Statement 1, particularly, it is noteworthy that that statement was made during your last comment to me before I declared that I wasn’t going to continue. Since that declaration was obviously the basis of Bender’s assertion that I had been “PO’d” by you, and since (referring again to Statement 1) you apparently agreed that you had “PO’d” me on that occasion, that must mean that you were arguing in statement 1 that you had already, at that point, concluded that I had made my claim up. Yet your testimony at the time was that it was a “mistake”.

These two statements (that it was a mistake and that I made it up) cannot both be true, and thus it follows that at least one of them must be false. I of course attest that they are BOTH false, but at the very least, one of them MUST be.

I reiterate that at no time in the conversation did I try (nor did I have any motive) to “avoid admitting a mistake” by fabricating a definition (or any other information), because at no time have I ever believed that I made the alleged mistake. I did make a mistake in my assessment of Smith’s words on one occasion, saying Smith had “responded” to McIntyre’s comment when I clearly believed that he hadn’t. I was typing quickly, and apparently it was my half-conscious desire to put the word “response” in quotes. But apparently I also was considering using another word such as “reply”, and in my haste I failed to put the quotes around “response”. BUT THIS ERROR WAS ADMITTED IMMEDIATELY AFTER I STATED MY DEFINITION OF THE WORD RESPONSE. Therefore, I obviously was not using the definition to try to hide that mistake.

In order for you to be right that I made up my claim, this one mistake (calling Smith’s obvious non-response to certain claims a response) would have to have been my real belief, and all the other instances of my using the word “reply” or “post” for what I obviously believed to be non-responses would have to have been the setup for an elaborate ruse.

The real reason that I made the mistake was, in fact, because when I am not discussing communication-related or linguistic issues, I relax a little and sometimes accidentally use a more colloquial definition (since that’s the one I grew up with). But there exists a formal definition in my vocabulary (the definition I cited) and when I have a communication-related issue arise, I try to consciously switch into that formal mode. But I am imperfect with this, because I don’t have to do it too often. In the post in question, I immediately realized my mistake after posting, and this caused me to be more vigilant going forward. (I made one other slip-up, but it was not in the context of assessing Smith’s words, rather in the hiding-behind-a-tree analogy.)

Thus, it should be clear that, rather than it being the case that I revealed my true belief about Smith’s words on that one occasion and fabricated a nonexistent belief on all the other occasions, instead I accidentally used my “colloquial” definition on the one occasion, even though my own standard rules would have dictated switching to the formal definition.

On the matter of why I use the formal definition I do for the word “response”, it is because this definition alone provides for the fact that not all statements made as a result of another statement are responsive to the other statement. A less rigorous definition than mine allows people to say things such as “Your response is not responsive”, which deconstructs to “Your response is not a response” or “You responded but you didn’t respond.” Since both of these statements are tautologies, any definition which allows them is also a tautology and is thus impossible in any logically valid vocabulary.

Returning to the matter of the OTHER mistake, the one you had alleged, it should also be noted that the contrast between your earlier statement that I made a mistake and the later statements I have labeled 1 – 3 here shows that, even if you honestly changed your mind, the issue is not cut-and-dried as you have lately suggested it to be. Thus, this fact alone proves (in yet another way) that one cannot in any way be certain that I made anything up. The evidence simply falls far short of demonstrating this, and in fact if it weren’t for my two mistakes in usage of the word “respond”, the case that I could not POSSIBLY have made up my claim would be prima facie, as would the case that you made a false statement against me.

Again, for the record: I DID NOT AT ANY TIME, IN ANY WAY SHAPE OR FORM, MAKE UP ANY OF MY STATEMENTS MADE IN ANY COMMENT ON THIS BLOG. And in discussing a matter of science (which this is), I have never in my life made ANYTHING up and have no intention of ever doing so. That would be repugnant, and (though I don’t currently have a position in research) could obviously be professionally damaging to me. If the implication here is that the motive was nothing more than to score points in a rhetorical roast, how dumb would I have to be to think that such a reason is worth the risk of getting exposed? That is intended as a rhetorical question. I don’t happen to think I am anywhere near dumb enough. But if there happens to be a reader of this post with a different opinion, I suggest that they keep it to themselves.

The alternative results presented by MM as well as by Soon/Balliunas were shown to be biased by omitting relevant data and application.[…] the results of MM show a warm period in the 14/15th century, ie during the beginning of the Little Ice Age. This is in contrast to all other independent reconstructions.

I do not say anything about them being right or wrong. I just wanted to share the link with all of you, and especially Steve as his work is criticized…

One thing I must say however – even if I am not convinced by their document – is that Swiss Re puts its money where its mouth is.

Let me explain: one could claim that scientists have an incentive to support the “consensus” in order to get another grant or a better position. But the insurance company is basing its policy, i.e. its forecasts and its premium calculations, on the reality of global warming. They have no specific interest in that, and one could argue that they could lose money if they bet wrong. For me that’s a slightly more “neutral” opinion, and if they are convinced enough to publish a (decidedly not neutral) refutation of skepticism, that’s a strong statement.

Yes, it is more meaningful for an insurance company to put money behind something. However, that is not the whole point. Just the excerpt you pasted shows they are not spending their money wisely. Steve McIntyre was pointing out errors in a reconstruction, not building an alternate reconstruction. That a corrected version shows warming in the 15th century just means there are bigger problems with the original study. One possibility is that the proxies in question are not accurate. If, after a review, this is what they came up with, then they are doing a horrible review.
Indeed, if I had a company with billions of dollars at stake, I would certainly be running things by a proper peer review.

To use one example, the AGW side claims it will increase the number of hurricanes. If that were so, claims for hurricanes would rise, and therefore insurance companies would raise rates and/or offset some of their potential claims by buying reinsurance from a reinsurance company.

If fearmongering leads to more business but no more claims, then it’s a big financial win for a reinsurance company.

Ok, I did not think of it that way. But they are not alone in the business. I think Munich Re is even bigger than Swiss Re. So if because of wrong beliefs, Swiss Re increases its rates but Munich Re does not, guess where the customers will be going…

For most insurance companies, assets maintained for ‘potential payouts’ are non-taxable.

E.g., an insurance company takes in $100 in premiums and pays out $50 in claims. In simple accounting it would then have to pay taxes on the $50 in profit.

In insurance accounting, the insurance company tells the tax man that it needs to set aside $45 for future claims, so it’s not actually profit and shouldn’t be taxed. Of course the taxman asks for evidence of the potential for these future claims, and out comes the IPCC report. So the insurance company puts $45 into a special ‘reserve’ for global warming damage and pays taxes on $5.

Swiss Re, Munich Re, every other insurance company in the world: if they have an opportunity to increase their ‘non-taxable reserves’ rather than pay taxes, they will.
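The arithmetic is simple enough to sketch (toy numbers from the comment above; real insurance tax accounting is of course far more involved than this):

```python
# Toy illustration of the reserve-accounting arithmetic described above.
premiums = 100.0
claims_paid = 50.0
reserve_for_future_claims = 45.0   # justified to the taxman via the IPCC report

# Simple accounting: tax is owed on the whole margin.
simple_taxable_profit = premiums - claims_paid                       # 50

# Reserve accounting: the reserve is set aside first, shrinking the taxable base.
reserve_taxable_profit = simple_taxable_profit - reserve_for_future_claims  # 5

print(simple_taxable_profit, reserve_taxable_profit)  # 50.0 5.0
```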

I live 27 feet above sea level, according to Al Gore my house will be under water soon, so I think the taxman should start taxing me now on the ‘Al Gore’ value of my house which is zero. I think if the taxman would go for it, I’d believe in global warming too.

if they are convinced enough to publish a (definitely not neutral) refutation of skepticism, that’s a strong statement.

I went and looked at the “refutation” and was greatly unimpressed. It’s all the same rot that’s been pushed by the AGW crowd for years now, and lacks any substantive content. I believe that the

biased by omitting relevant data and application

canard has actually undergone two lives, neither of which is correct. The first time, Mann complained that Steve used wrong data, even though it’s data Mann supplied Steve via one of his underlings (I forget at the moment just who). Once Steve had the correct data and showed Mann’s method to be [pick one] lacking in robustness WRT bristlecones or capable of producing hockey sticks from white noise, the claim was made that Steve’s “reconstruction” was not significant because it didn’t use important data. This was silly because Steve wasn’t producing a reconstruction of his own but merely showing that Mann’s reconstruction relied on bad data. Another claim that Mann & allies made was that Mann’s reconstruction was OK as long as he could go down to PC5 instead of just PC2 as Steve did. But Mann has never been able to justify (with a source) the use of a PC5. All the rest of the stuff is of a similar ilk. Many regulars here could just about name the typical “refutations” of Steve in their sleep and often wouldn’t have to wake up to show what’s wrong with them. The reinsurers should demand their money back from whoever supplied their info.

With regards to the content, I guess it is more of the same. Thanks for clarifying the part about Steve’s work.

Can anybody comment about A4 (“fingerprint”) – for me this is a key argument against a dominant effect of CO2, but they say the data are getting corrected and corrected again until they fit the models. Can somebody judge whether the data are indeed getting corrected or whether the issue is still critical?

Are you referring to temperature adjustments? The status of various adjustments varies. Most of the main players here have been convinced that the time of day correction is needed, and of course changes due to station move are needed, but often not available. The adjustment for UHI is a big problem as IMO the AGW crowd are just plain cheating on this subject. Night lights and wind are indirect measures of something which can be done much more directly, and some simple back of the envelope calculations can show that the figures the AGW people push must be wrong.

OTOH, those (skeptics) who have audited the actual ‘corrected’ data going into the publicly available datasets (GISS, NASA, etc.) have been finding that the calculations of the final results seem to be correct. IOW, assuming you can verify the actual corrections made (which hasn’t been done yet AFAIK), then the temperatures follow. And this is good because it lets people move on to trying to get the actual processes used to make the corrections. Some of these are available and some aren’t. It’d be a good project for someone to try to find out just where this process stands at the moment.

Well, the whole theory of Global Warming is that IR emitted by the surface is absorbed by GHGs in the troposphere, which is then thermalized (since almost all of the GHG molecules [at least of CO2 and I believe H2O as well], have many collisions before their typical emission time) and thus the troposphere is heated and will then increase back radiation to the surface. This ipso facto will heat the surface relative to the situation with less CO2 in the atmosphere. This is readily accepted by any skeptic who knows what he or she is talking about. The point of contention is what happens next.

What the models show is moot as what they show, as far as the presently existing models go, is based on the assumptions used to build them and all of them have a built-in assumption of a positive H2O feedback. If a similar model were tested with a negative feedback or even no feedback, then it might be worth looking for a fingerprint for warming.

By the way Dave, I was wondering why some people still doubt the global average temperatures (blaming UHI or other effects) when satellite and weather station records fit so well.
I mean such a fit cannot be chance…

They don’t fit quite as well as you think, but in any case look at the actual temperature rise; it’s 0.3-0.4 deg C in 30 years, meaning a temperature rise of about 1-1.5 deg C per century, in accord with the doubling of CO2 with no feedback. IOW, there’s no sign of CAGW.
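For what it’s worth, the no-feedback figure can be reproduced with textbook approximations — the simplified CO2 forcing formula ΔF = 5.35·ln(C/C0) and a Planck-only response of roughly 3.2 W/m² per kelvin. Both are standard rules of thumb, not anything derived in this thread:

```python
import math

# Simplified CO2 radiative forcing: dF = 5.35 * ln(C / C0) W/m^2
dF_doubling = 5.35 * math.log(2.0)   # ~3.7 W/m^2 for a doubling of CO2

# No-feedback (Planck) response, roughly 3.2 W/m^2 per kelvin
planck_lambda = 3.2

dT_no_feedback = dF_doubling / planck_lambda
print(round(dT_no_feedback, 2))  # 1.16 (K per doubling, no feedbacks)
```

Which is consistent with the ~1 deg C per century figure above.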

On this point, I fully agree. I think it’s indeed difficult to theorize a large feedback and still fit the data, unless you assume that conveniently the natural system would otherwise have cooled down.
This is what the IPCC is assuming, and the coincidence stretches the imagination!

FWIW lucia’s Blackboard also does not strike me as “disingenuous”. Neither does Pielke Jr, for that matter. But where are the self-described neutral parties, clamoring to accuse eric of “tribalism”? Cause that’s what it is.

My last comment seems to have disappeared into moderation, though I don’t know why it would have been caught by the filter. Other than what I covered in it, I don’t think there was anything left I needed to respond to.

That said, I would like to point out Steve McIntyre hasn’t offered any sort of retraction or correction to his post.

I have no intention of having any exchange in this thread like in the last one. That said, I realize there is at least some obligation on my part to point out what I am referring to. To demonstrate the post in question merits correction, I will provide an example:

“In contrast, Arthur Smith makes strong and untrue allegations against Climate Audit here without providing any citations from Climate Audit to support his allegations.”

Whatever people may have said about Arthur Smith trying to smear Climate Audit, nobody has provided any examples of him making “strong and untrue allegations against Climate Audit.” Indeed, nobody even raised the point in response to me.

With that said, and assuming the post of mine which got caught in the filter can be recovered, I believe there is nothing more for me to respond to.

There is a late entry to the ‘Independent’ Climate Change Email Review.
It is described as being from ‘various’, but is in fact from ‘The team’, including MBH, Santer and Schmidt.
Despite being submitted three months late, and bearing almost no relation to the remit, and being almost entirely content-free, it has been put up on the CCE review website.
Their main purpose seems to be, as usual, to smear Steve, commenting on his submission and going on to whinge about accusations of fraud and make the desperate link with tobacco. They then continue with the helpful suggestion that their data ought to be exempted from FOI. This is followed by the usual team tactic of alleging that some submissions are not in good faith, without saying what they are referring to.
Perhaps the only interesting point in the letter is that apparently some of their submissions have been held up, as David Holland’s has been.

Much discussion at Bishop Hill. A number of comments on Gavin’s signature, which seems to omit some of the consonants. See also James Delingpole’s blog.

Bender,
Not my field, and I didn’t know anything about it until I attended the Royal Society of Edinburgh discussion earlier this week when I heard Professor John Haslett say he had been talking with Michael Mann over the last couple of days. That’s when I dug it out. The RSE discussion was planned to coincide with the conference.

Global signatures of the “Little Ice Age” and “Medieval Climate Anomaly” and plausible dynamical origins
Tuesday – Plenary Session 3
Michael E. Mann
Pennsylvania State University, University Park, USA
I will review recent work aimed at establishing the nature of, and factors underlying, patterns of large-scale climate variability in past centuries. Evidence is compared from (1) recent proxy-based reconstructions of climate indices and spatial patterns of past surface temperature variability, (2) ensemble experiments in which proxy evidence is assimilated into coupled ocean-atmosphere model simulations to constrain the observed realization of internal variability, and (3) ensemble coupled model simulations of the response to changes in natural external radiative forcing. Implications for the roles of internal variability, external forcing, and specific climate modes such as ENSO and the NAO will be discussed. Implications for long-term variations in Atlantic tropical cyclone activity will also be discussed.

A lot in the programme about Bayesian statistics. A lot about climate modelling and external forcings. Little sign that CO2 forcings have been properly devalued or H2O / solar forcings properly elevated, or that the MWP has been returned to its proper position of exceeding current warmth (to say nothing of the Roman warm period when good grapes were grown in Yorkshire UK)

It’s very difficult for people to believe that there could possibly be so many sheep as you find here, still being led astray by fundamentally second-rate science, despite the impressive sound of scienciness. Or am I mistaken?

You can’t judge the content of the talks and posters by the titles and abstracts. Given the conference theme is “uncertainty”, you can’t really expect a lot of dazzling black-and-white statements of the sort you seek. But those kinds of insights might be buried deep in the body of the articles. Which is why I ask Gabi Hegerl if there is a coherent publication strategy for all this wonderful new content. Will it get out in time for AR5?

By the way, Lucy, “plausible dynamical origins” is exactly the kind of thing we’ve been asking about here at Climate Audit for years, in discussions with Koutsoyiannis, for example. Internal variability as a competitor in the game of anthropogenic signal extraction.

These experiments reveal a range of results: (i) climate reconstructions for the preinstrumental period have the potential to provide additional constraints upon climate model parameters (and hence future climate predictions) but this potential is limited by the need for accurate estimates of past forcing factors; (ii) the response to past volcanic eruptions appears to provide some constraint upon the lower values of climate sensitivity, but less so for the higher values; (iii) observing the response to forcings that operate across multiple time scales (e.g., volcanic and solar variations together) is necessary if both the climate sensitivity and ocean heat uptake need to be estimated, especially if the strengths of the forcings are also uncertain.

A pause for reflection might also be useful. I’d been thinking I might put something up on my blog about Schneider after about a week. But I’d like to think about it. Especially as it seems from the appreciation in TIME that he was bad-mouthing Judy Curry privately to journalists even in the last week of his life. So my thoughts aren’t straightforward. Nobel Peace prizewinner. Plaudits from all the best people like Al Gore and Ben Santer. Feted for his humanitarianism in death just as he was in life.

I might choose to say something rather different to all that. Or I might say nothing. But Steve has the responsibility of a much greater readership. I don’t know what personal contact, if any, he had with Schneider. But it certainly seemed remarkable how both men reached for almost exactly the same analogy a week ago – and used it in what you might call polar opposite ways. But I’ll leave all that with Steve. Silence is sometimes the best option.

From time to time I look for interesting features in the Central England Temperature (CET) daily maximum records. I had noticed that in January and February 2010, the temperature never reached 10degC, so I wrote an R program to search for long runs of temperatures above or below a threshold. (The good old CA Forum used to have a place to archive such codes.)

When I set 10degC as the threshold, 2009/10 turned up as the Everest of all runs! Here are the top 20, with 1962/3 beaten into 4th place and 1947 into 14th.
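The search itself is simple; here is a sketch of the same idea in Python (the original R code isn’t reproduced here, and the function name and toy data are mine, not from the actual CET analysis):

```python
from itertools import groupby

def longest_runs(dates, temps, threshold, below=True, top=5):
    """Return the `top` longest consecutive runs of daily values
    below (or above) `threshold`, as (run_length, start_date) pairs."""
    flags = [(t < threshold) if below else (t > threshold) for t in temps]
    runs = []
    i = 0
    for flag, grp in groupby(flags):
        n = len(list(grp))
        if flag:
            runs.append((n, dates[i]))  # record length and where the run starts
        i += n
    return sorted(runs, reverse=True)[:top]

# Toy data: a 4-day cold spell starting on day 2, a 2-day one on day 8
dates = list(range(1, 11))
temps = [12, 8, 7, 9, 6, 11, 13, 9, 8, 12]
print(longest_runs(dates, temps, 10))  # [(4, 2), (2, 8)]
```

Run against the CET daily maxima with a 10 °C threshold, this kind of function is what would rank 2009/10 against 1962/3 and 1947.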

Here is a thought on clouds and global cooling/warming – has anyone else suggested this?

It is that clouds over oceans should be more important than clouds over land, so anyone measuring cloud cover would do well to discriminate. My reasoning is that solar radiation reaching water can penetrate, heat, and mix convectively, making it hard for that extra energy to be released, whereas solar radiation reaching land heats a small depth of it to a high temperature, which can then radiate back effectively at night (if there isn’t too much H2O and CO2 in the way!). So the overall energy gained by a sunny day on land would be less than on the sea.

Sitting in the environs of San Diego with a persistent marine layer has helped me to think of this!

Conversion of threads to readable text for use in Ereaders is something I’d like to find an easier way to do. I readily admit that this shouldn’t be your problem but maybe someone else has found a better method.

The problem with a straight print-to-pdf is that the graphics and side-columns are recorded and the subsequent conversion to MOBI, an Ebook format, becomes problematic.

What I’ve been doing is selecting, copying, and pasting the thread to MS Word in XP. Then switching to Linux, I load the Word file in OpenOffice word processor and export to pdf. This file is loaded in Calibre (excellent shareware ebook manager) where it can be converted to MOBI which my Kindle can understand.

To anyone who asks why not do the whole thing in Linux: the OpenOffice word processor doesn’t seem to be able to handle files the size of the “paste” of the current No-Dendro thread. Could be that I need to create more swap space.

You might also ask why I don’t simply read the threads on the computer.

I’ve found that the quality of thought expressed in many of your threads is so good that it requires careful reading. At best, I “get” maybe 75% of what I see in some of the more abstruse writing, but I do want to get at least that.

Being able to page back and forth, read and re-read, helps a lot. Kindle is superb for this – and also much easier on eyes ruined by years of drawing, then CAD.

It may be that the best solution would be to write a script which processes an html listing of the pages and removes the things I don’t want. I could do that, but it’s been over 20 years since I did any scripting, and then it was to spruce up data-sets for further massaging.

I sure hope someone else has already done this or can suggest some other ideas.
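For what it’s worth, a minimal sketch of such a script using only the Python standard library — the set of tags to strip is an assumption about typical blog-page structure, not taken from this site’s actual HTML:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping scripts, styles and side-column chrome."""
    SKIP = {"script", "style", "nav", "aside"}  # assumed unwanted elements

    def __init__(self):
        super().__init__()
        self.depth = 0      # >0 while inside a skipped element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

page = "<html><body><p>A comment.</p><script>var x=1;</script><p>Another.</p></body></html>"
p = TextExtractor()
p.feed(page)
print("\n".join(p.chunks))  # prints the two comments, not the script
```

The resulting plain text can then go straight into Calibre for MOBI conversion, skipping the Word/pdf round trip.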

Wow, Sinan, it works wonderfully well. It took seconds to process the no-dendro thread and then exported to pdf produced just what I wanted. Now to see how well Calibre turns it into MOBI, although it may be that I can skip the pdf stage and go from your output to MOBI. I’ll try that next.

It’s a really big deal to me to get Kindle input easily because it makes it so much easier for me to read these complex threads.

I like the logic of the nested discussions, but it makes keeping up with the simultaneous discussions contained within the nesting a bit difficult.

The timestamps on comments seem to be all over the place as well. I posted yesterday at about 8am US Pacific time on another thread, and the timestamp said Aug 4, which was correct, but the time was 1am. I just posted again in another thread, almost 8am, and the time said 9:56am. I understand the server time would be the time used, but the server can’t both be 7 hours behind me and 2 hours ahead…

So, in the future, how can you decide whether a claimed P≠NP proof is worth reading? I’ll now let you in on my magic secrets (which turn out not to be magic or secret at all).

The thing not to do is to worry about the author’s credentials or background. I say that not only for ethical reasons, but also because there are too many cases in the history of mathematics where doing so led to catastrophic mistakes. Fortunately, there’s something else you can do that’s almost as lazy: scan the manuscript, keeping a mental checklist for the eight warning signs below.

The key to the Alzheimer’s project was an agreement as ambitious as its goal: not just to raise money, not just to do research on a vast scale, but also to share all the data, making every single finding public immediately, available to anyone with a computer anywhere in the world.

No one would own the data. No one could submit patent applications, though private companies would ultimately profit from any drugs or imaging tests developed as a result of the effort.

“It was unbelievable,” said Dr. John Q. Trojanowski, an Alzheimer’s researcher at the University of Pennsylvania. “It’s not science the way most of us have practiced it in our careers. But we all realized that we would never get biomarkers unless all of us parked our egos and intellectual-property noses outside the door and agreed that all of our data would be public immediately.”

Imagine that…
One can speculate that it works fairly well for Alzheimer’s research partly because there is actual agreement that there is a disease, what the disease is, and the end goal – fighting it.

In the section “Correspondence” under the topic “Hockey Stick Studies” there are a couple of dead links.
AFAIK there were a couple of documents which must have been quite painful to produce (all that obstruction), but they are very illuminating to read; you really behave like a saint in all that frustrating correspondence!
Could you update these links?

(I am looking for a good example of long delay from the editors and reviewers to show to a friend)

Journals have been exercised about competing interests of authors and reviewers. Yet journals have a strong conflict of interest regarding letters to the editor because publishing criticisms of journal articles suggests that the editors are not doing their job and may lower the prestige of the journal, and handling correspondence requires journal resources. There is a case for an independent letters editor.

The inadequacy of authors’ responses to criticism suggests that authors feel no obligation to respond to reasoned criticism, and letter writers may fear that they will be perceived as picky or anti-collegial for pointing out flaws. Such a culture impedes the progress of science by stifling the open communication that makes the literature self correcting and is essential to the scientific process.

John M – yes, sad that the worries are about “prestige” and “reputations” etc. Where are the concerns about truth and science? (As an aside: today on BBC R4 there was a piece on Richard Feynman that was a breath of fresh air.)

The whole publishing scene (not just science) is struggling to cope with the Internet age and with how blogs etc. are beyond its editorial control. I don’t know what the business model of journals is – I guess that subscriptions are part and paper fees another – but IMO the savvy ones will be looking at how they can embrace the Internet to actually raise the quality of their papers and publications, instead of retreating further and further behind paywalls and word limits. People will always pay for quality is my guess, and if harnessing blogs raises the quality of publications then that will mean happy subscribers.

The comments about the number of papers and the desensitising of critics are significant – maybe the journals could have a two-tier publishing system: open blog review prior to an editorial decision to elevate a manuscript to the status of an accepted paper. Journals could host and run the blogs, open to all, with light-touch moderation rules – evidence-backed comments only and no ad homs. Also, in the past I think SM floated the idea of journals being paid for supplemental data archiving and dissemination, which seems reasonable to me.

I personally see Monckton as a loose cannon, with just enough knowledge and intelligence to be dangerous to either side of the climate debate, but this response (embedded in the url) by Mann et al. makes me think that the Team doth protest too much. Particularly Gavin, who is full of sound and fury but signifying nothing. As I read his “arguments,” today’s “warming” is NOT exceptional, but because we have a lot more people than in 1400 CE, for example, the effects of any warming need to be dealt with immediately, whether or not we know how they arise or how they might be mitigated or even whether any mitigation efforts would enhance global human welfare.

Hopefully others with greater specific expertise in this area can chime in, but use this as you wish.

Richard Goodale

PS–have you noticed that the US government (and others) are now changing the terms of reference?

Verity and I will shortly be publishing Part 2 and Part 3 in a series of threads on the subject of how the GHCN V3 dataset differs from the previous GHCN v2 dataset.

If you are interested in an ‘advanced’ look at the V3 dataset (in a much more user friendly normalised database format than the usual text files), why not pop over to Climate Applications and have a look at the TEKTemp implementation of the NCDC GHCN v3 beta dataset by clicking on the following link.

The NSF produced a set of slick videos on climate change, including appearances by Michael Mann.
He shows quite the hockey stick there (see the video link on the left side entitled “What is unusual about the earth’s warming during the past century or so?”):
http://www.nsf.gov/news/special_reports/degree/index.jsp

It looks as if this hockey stick is very pronounced. Can anyone tell if he is using his old hockey stick – one that no longer appears even in his current publications? Or did he make up a new one for this video? I can’t find any reference links for the specific material in the videos.

Re: Dan Hughes (Oct 17 05:55),
Thanks for finding that, Dan! I added a comment. This is a topic that could be picked up here at CA and other related blogs. I particularly appreciated a link provided by another commenter… although outdated (due to Oracle’s purchase of Sun), I was able to find the paper. My comment is below:

—————–

I’d like to further emphasize what Don Fay noted. First, here is an updated link to the paper he references: http://dlc.sun.com/pdf/800-7895/800-7895.pdf — What every (computer) scientist should know about floating point arithmetic.

A number of scientific computations involve highly repetitive calculations of very small differences, particularly in climate science. I have never seen a discussion of the catastrophic computational errors of such calculations when performed using typical computer formulae. The above paper is a good introduction to the subject, yet there are further insights being developed.

A simple example from the paper: an interest calculation [ (1+i)^n ] of $100 deposited daily at 6% annual interest will be off by a significant amount after only a year if done using a straightforward calculation, because the important bits are lost over time.

The same thing happens in many other ways. Some assume all such errors relate to “rounding” and that the errors always cancel out. This is true only some of the time. Significant systemic bias can easily creep into scientific computations if not carefully accounted for.
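A small illustration of the effect described above — naively compounding the paper’s daily-deposit example in single precision versus double precision. (The exact size of the drift depends on the rounding of each operation, so treat the numbers as indicative, not as the paper’s figures.)

```python
import numpy as np

def compound(deposit, annual_rate, days, dtype):
    """Naively compound `deposit` added each day at annual_rate, in dtype."""
    i = dtype(annual_rate) / dtype(days)
    bal = dtype(0.0)
    for _ in range(days):
        bal = bal * (dtype(1.0) + i) + dtype(deposit)  # every op rounds to dtype
    return float(bal)

double = compound(100.0, 0.06, 365, np.float64)  # roughly $37,600 after one year
single = compound(100.0, 0.06, 365, np.float32)
print(double, single, abs(double - single))      # the two balances disagree
```

The disagreement is tiny per operation but accumulates over 730 roundings, and nothing guarantees it cancels — which is the point about systemic bias above.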

These are serious issues, not addressed by simple code documentation or choice of language.

My bottom line: computer calculations are every bit as important to the scientific endeavor as any other aspect of modern science. They require the same care, the same level of transparency and reproducibility.

Hiding code and/or requiring black-box re-creation of codes by other labs will not improve the situation. We need to take scientific computation practices to a whole new level.

Until we do that, we are fooling ourselves when we imagine that our computer models are telling us something valuable about this planet.

There’s a new paper by Li, Nychka and Ammann, “The Value of Multiproxy Reconstruction of Past Climate”, in the Journal of the American Statistical Association, building a complicated hierarchical Bayesian model. However, more interesting (IMO) is the discussion by Richard Smith, who talks about principal components based methods. He acknowledges the criticisms of M&M and the Wegman Report, but seems to think principal component based methods can work (although he does acknowledge that he’s ignoring the issue of which proxies to include). In the conclusion he says “critics of the hockey stick are also partially vindicated by these results” because the hockey stick he produces is not as sharp as MBH’s.
The paper and the discussion are available at http://pubs.amstat.org/toc/jasa/105/491

Anthony Watts has a posting on a big-name conference that just took place on the existence or not of the MWP. The following is from the abstract of Bradley’s contribution. To me it looks like a complete reversal of these people’s position: proxy reconstructions are now no longer useful.

As a result, there is little utility in picking over definitions of the geographic and temporal extent of putative epochs, especially in the Late Holocene. The pressing questions concern the dynamics of the climate system, and the relative roles of free and forced variations, whether the forcings are anthropogenic or not.

This appears to be a statement that combing through yet more proxy records and yet more statistical techniques is pointless. The statistical hockey stick is being thrown under the bus. This is an announcement of complete capitulation to McIntyre. The “proxy” record is not adequate to do what these people wanted it to do.

I hope that I have supplied an adequate citation and attribution of this to Prof. Bradley. We know how important that is to him.

And substantively, the point that Prof Hughes (not Prof Bradley, as you correctly point out) was making is

The pressing questions concern the dynamics of the climate system, and the relative roles of free and forced variations, whether the forcings are anthropogenic or not.

The pressing questions are not concerned with a description of past temperatures. That is, the creation of yet more paleo-temperature graphs is not of pressing concern. Rather, the creation of useful models of the climate response to forcing is.

I have read very similar statements by Steve McIntyre on this blog many times. The text does not show why Prof. Hughes is advocating this position, but the continued failure of years of paleo-climate work to create a useful result in this area would be of interest to some at least.

I do hope the discussion of Prof Hughes’s interesting new proposal did cite and attribute the seminal work of Steve McIntyre in this area. We know how important proper attribution and citation are.

I should also point out that Prof Wegman did make an important contribution to the assessment of the paleo-temperature work in validating and extending the contribution of McIntyre and McKitrick. A proper bibliography of this field would include his report, and any paper surveying the results in this field would necessarily cite his report.

I understand this post is off topic. However, after following Climate Audit for several years, I suspect some readers will find it interesting.

A paper was published in 2006 entitled “Solar Resonant Diffusion Waves as a Driver of Terrestrial Climate Change”. The author derives the theoretical natural frequencies of these waves and compares them to the results of a Fourier analysis performed on a proxy record of paleotemperatures dating back more than five million years. The Fourier spectrum has peaks at 40K and 100K year periods, aligning with the predicted results.
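The kind of spectral check described can be sketched as follows, with synthetic data standing in for the actual proxy record (which I don’t have); the two stated periods are recovered from the spectrum’s peaks:

```python
import numpy as np

# Synthetic "proxy" series: 5 Myr at 1 kyr resolution, with cycles at
# the two periods the paper reports (40 kyr and 100 kyr).
dt = 1000.0                          # years per sample
t = np.arange(5000) * dt             # 5,000,000 years total
x = np.sin(2 * np.pi * t / 40000.0) + 0.7 * np.sin(2 * np.pi * t / 100000.0)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=dt)

# Two largest peaks (skip the zero-frequency bin)
top = np.argsort(spectrum[1:])[-2:] + 1
periods = sorted(1.0 / freqs[top])
print([round(p) for p in periods])   # [40000, 100000]
```

A real proxy record would of course be unevenly sampled and noisy, which is where the analysis gets harder than this toy version.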

“We have little or no evidence that peer review ‘works,’ but we have lots of evidence of its downside.”

“Doug Altman, perhaps the leading expert on statistics in medical journals, sums it up thus: ‘What should we think about researchers who use the wrong techniques (either wilfully or in ignorance), use the right techniques wrongly, misinterpret their results, report their results selectively, cite the literature selectively, and draw unjustified conclusions? We should be appalled. Yet numerous studies of the medical literature have shown that all of the above phenomena are common. This is surely a scandal’ [9].

While Drummond Rennie writes in what might be the greatest sentence ever published in a medical journal: ‘There seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.'”

Comments are closed on the Tom Crowley Apology post so I’ll comment here.

I was waiting for someone else to recall a post at Bishop Hill in March 2010 about Big Oil, involving Hans von Storch and Ross McKitrick, where Tom Crowley came to Ross’ defence. No one did, so here’s Ross’ comment about the incident –

“…I would like to discourage any notion that Tom Crowley’s email to me was meant unkindly. Just the opposite. He saw the allegation rolling around an internet discussion and to his credit he took the initiative to check the facts. And apparently he was the only one who did so, as his was the only email I got, though I am told there were 50-60 people in on the discussion list. Once he had my reply he not only transmitted it to the discussion group, but followed up with me to let me know he was telling his colleagues to drop the subject (or words to that effect). So I am quite grateful for his intervention on my behalf, and I think it was very decent of him.”

I wonder if you had seen this – from searching Climateaudit and Wattsupwiththat I couldn’t find an update that reflects it. This is the Lampasas temperature record, which you have written about in the past.

As you can see, NASA have now dramatically adjusted the recent history, compared with the situation when you and Anthony Watts were mentioning this in ~2008:

It’s not clear whether he has actual estimates of past temperatures, or if he is just extrapolating them from estimates of past CO2 concentrations (e.g. those surveyed by Dana Royer in two recent articles, typically 1000–3000 ppm over the last 550 million years).

I wanted to bring to attention the fact that Wikipedia has banished William Connolley (and company), clearing the way for the various climate change articles to be revised to give beginners a fair primer, rather than the RC party line.

Unfortunately, after years of Connolley’s control, he’s managed to drive off everyone who disagrees with him, and no one has stepped into the breach to revise the articles. It seems to me that many of the people who comment here would be excellent candidates.

People still read Wikipedia for AGW information!??? Strange, that. I might have, until I commented on Connolley’s blog about Tiljander and he abridged and distorted my posting. It was goodbye to Mr. Connolley and his ideas for me after that.

Yes–Wikipedia remains an incredibly influential source, because many undergraduate students (and other 20-somethings) use it as a primer on essentially everything. Professors generally forbid citing to it, but it’s still a useful way to get the lay of the land in a new subject. I find it very helpful, as long as the subject isn’t politically charged.

That’s exactly why it’s an appealing target for propaganda campaigns, I guess. Odds are some other Team players will step into the breach to protect the party line. But I guess I still hold out hope that the free market of ideas will still win this one, in the end.

There is great concern at CA and elsewhere at overhyping of AGW. Amazingly, Garnaut says “This raises a question about whether something in the environment for scientific research on climate change introduces a systematic tendency to understatement.”

Garnaut starts by claiming that “The Climate Change Review’s acceptance in 2008 “on the balance of probabilities” of the overwhelming majority of opinion of the Australian and international science communities has not been challenged by developments in the genuine science during the past three years.

“The most important of the quantitatively testable propositions have been confirmed or shown to be understated by the passing of time: the upward trend in average temperatures; the rate of increase in sea level.

“Some important parameters have been subject to better testing as measurement techniques have improved and numbers of observations increased. On these, too, the mainstream science’s hypotheses have been confirmed: the warming of the troposphere and the cooling of the stratosphere, and the long-term shift towards wet extremes and hot extremes.

“The science’s forecast of greater frequency of some extreme events and greater intensity of a wider range of extreme events is looking uncomfortably robust.”

This differs from my understanding of developments in recent years. I wonder if someone here who is better versed in the data than I am might submit a counter-article to The Australian.

I’m sure that there will be several letters to the paper on this issue. The address is letters@theaustralian.com.au . No attachments are accepted, and you need to provide name, address and phone number.

Garnaut, who I know, seems a bit like Krugman – a very good economist who has drifted into partisan support in areas outside his expertise.

It is, of course, an alarming report. The reason I thought you might be interested is the quote I picked up from the story:

“It turns out that a purely mathematical analysis, “Evidence for super-exponentially accelerating atmospheric carbon dioxide growth,” comes to the same conclusion.

The paper itself is mostly mathematical and essentially agnostic on climate science. But the conclusions are as stark as any in the climate literature:

Overall, the evidence presented here does not augur well for the future.”

While I can often follow particular mathematical acrobatic maneuvers, my experience doesn’t lend itself very well to making judgments about the appropriateness of such maneuvers. In other words, this seems to be “right up your alley”.

“UPDATE: I had a good conversation with the co-author Didier Sornette. They made a numerical mistake in one of the footnotes and used some inapt wording in a couple of places, none of which changes the main conclusion about CO2 concentrations. They will be revising the paper and I will make some changes below”

The most amusing comment on that site was by “Jon Jermey” – apparently a skeptic – who made a comment to which someone had replied. Then Jon Jermey’s earlier comment got deleted, leaving the reply referring to a non-existent post. He replied:

“Jon Jermey says:
March 17, 2011 at 4:03 pm

Zot! Suddenly I’m a non-person! But I KNOW I was here because somebody replied to me.”

Apparently a number of science journals are beginning to restrict the amount of supplementary materials that may be submitted, in the interest of protecting the peer-review process. I suppose it depends on what the meaning of “is”, I mean, “supplementary”, is, whether that’s a problem. There’s no clear indication in the article how this might affect access to code and raw data. http://www.the-scientist.com/news/display/58027/

“Investigators have determined that Knut the polar bear died of drowning after suffering a brain disorder. Fans are set to protest on Saturday against plans to display his remains in a climate change exhibit.”

Here’s something fun. Google have a new tool for finding correlations between any time series and the popularity of search terms. So of course the first thing to do is put in the NOAA global temperature anomaly and see what’s controlling the world’s temperature! Unfortunately the search data only goes back to 2003, so no matching against the hockey stick. Still, the results are spooky:

The best correlated search term for global temperature anomaly is “elvis was born” and elvis turns up three times in the top ten. R2 for “elvis was born” is 0.9043, a lot higher than the usual benchmark in climate science.
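Spooky matches like this are easy to reproduce: any two persistent, wandering series will often correlate well by pure chance. A quick illustrative sketch (pure random walks, nothing to do with Google’s actual data; all parameter choices below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n, trials = 480, 500  # ~weekly samples over nine years; both choices arbitrary

r2s = np.empty(trials)
for i in range(trials):
    # Two independent random walks: by construction there is no causal link.
    x = np.cumsum(rng.normal(size=n))
    y = np.cumsum(rng.normal(size=n))
    r2s[i] = np.corrcoef(x, y)[0, 1] ** 2

print(f"mean R^2: {r2s.mean():.2f}")
print(f"share of pairs with R^2 > 0.5: {(r2s > 0.5).mean():.0%}")
print(f"best match: R^2 = {r2s.max():.2f}")
```

Searching many candidate series and keeping the best match, which is what a correlation-mining tool does, makes a high R2 near-inevitable even when nothing is related.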

I’m not having much luck accessing ftp://holocene.evsc.virginia.edu/pub/MBH98 – getting “connection timed out” errors, and I haven’t been able to find the directory elsewhere on the net. Do you happen to have or know of a copy of this directory? ( I can see a BACK_TO_1400_CENSORED directory at climateaudit.info/data/mbh98/UVA/TREE/ITRDB/NOAMER/ but not the BACK_TO_1400 directory, or any other files.)

Also, ftp dot ngdc.noaa.gov/paleo/paleocean/by_contributor/mann1998/ , referred to in the script, is no longer available – the entire paleo/ dir is empty. Those files can now be found at ftp dot ncdc.noaa.gov/pub/data/paleo/contributions_by_author/mann1998/

the fact is, if you go to blogs like WattsUpWithThat or Climate Audit, you certainly don’t find scientific and mathematical illiterates doubting climate change. Rather, you find scientific and mathematical sophisticates itching to blow holes in each new study.

““We don’t advertise a lot of the things we do,” says Edwards, who was called in by the University of East Anglia when Climategate blew up. “That was really interesting. It’s very high level, and you’re very much in the background on that sort of thing.”

The university’s Climatic Research Unit wanted Outside to fire back some shots on the scientists’ behalf after leaked emails from the unit gave climate change skeptics ammunition and led to an avalanche of negative press about whether global warming was a real possibility.

“They came to us and said, `We have a huge problem – we are being completely knocked apart in the press,’” says Sam Bowen. “They needed someone with heavyweight contacts who could come in…”

Until this morning, the Managing Director of the Outside Organisation was former News of the World executive editor Neil Wallis (he’s not MD any longer). Wallis was deputy to Andy Coulson at the News of the World; Coulson until recently worked for Prime Minister David Cameron.

Andy Coulson was arrested a few days ago, Neil Wallis was arrested this morning.

There is a word used in Buddhism that refers to the eighth stage of enlightenment achieved by a devotee, whereupon all is revealed, everything is clear and there are no questions of substance that remain as mysteries. The word is ‘Hasso’. Its corruption has Asians saying ‘ah so’.

I just posted this in unthreaded at BH and thought that CA devotees would also appreciate the irony.

“Shub N. has been hiding something from us on his blog that, thanks to Donna Laframboise’s diligent digging, is now coming to our attention. McKinsey, the management consulting firm, produced a study that examines the cost of reducing our carbon footprint. Behold, they reached the conclusion that it costs real money to do this and the farther you go, the more it costs. They even demonstrate this with a chart that depicts a, dare I say it, hockey stick. What is even more egregious is that McKinsey refuses to release their data, methods, code, etc., leading to an anguished Greenpeace demand that McKinsey release their data. This quote from the Greenpeace press release sums things up nicely: “McKinsey must: 1. Immediately publish all the data, assumptions and analysis underlying the international and national versions of its cost curve and include such disclosures in all future publications.” Horrors, are they aware that they are channeling Steve M. et al?”

Steve and others, are you aware that Kevin Trenberth also thinks that there are large disparities between different analyses (sea levels, total sea ice area, heat content anomalies)? He said so in his PowerPoint presentation (pages 9/10, 2010) http://www.joss.ucar.edu/events/2010/acre/thursday_talks/trenberth_WOAP.pdf which I found on the website http://reanalyses.org/
One objective of this WCRP is to help improve and promote sound data stewardship, including data archiving, management, and access. This includes making sure that climate-related data variables are reaching data archives, and that standards are set for archiving new types of data. Help make data accessible and available e.g., through the internet. Promote shared efforts for data quality control.

The Climate establishment knows all is not well and this looks like their new project. A confession?

As ” … a wonderful opportunity to people like me (a retired scientist) to get involved in an ongoing debate … ” I would like to ask a naive question:

A simple back-of-the-envelope calculation suggests to me that the Earth’s total vegetation absorbs atmospheric CO2 at an average rate that would – in the absence of all other CO2 sources and sinks – completely deplete atmospheric CO2 in four months, though I have never seen this result discussed anywhere. Is this calculation (a) correct, (b) incorrect, (c) irrelevant or (d) just uninteresting?
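The arithmetic behind such an estimate is simple to lay out explicitly. With the round numbers below (both are assumptions for illustration, not sourced figures), the depletion time comes out in years rather than months, which suggests the answer is quite sensitive to which inventory and uptake figures one plugs in:

```python
# Back-of-the-envelope depletion-time check.
# Both figures below are round-number assumptions, not sourced values.
atmospheric_carbon_gt = 830.0   # carbon in the atmosphere, GtC (assumed)
gross_uptake_gt_per_yr = 120.0  # gross photosynthetic uptake, GtC/yr (assumed)

years = atmospheric_carbon_gt / gross_uptake_gt_per_yr
print(f"depletion time, with no other sources or sinks: "
      f"{years:.1f} years ({years * 12:.0f} months)")
```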

Godfrey, you sometimes have to go with the flow a little. There is ongoing work on the carbon cycle. While it is an important topic, it isn’t one that I’ve been involved with, and it hasn’t been discussed here much. You’ll probably have to do your own research on the topic, or you may have more luck at a blog where the carbon cycle has been discussed in detail.

“Severe testing? Why not take the test of time? Forecasts have to be (1) specific, at least in the easy part, the first 2 decades. (2) Forecasts are then set on ‘Hold’ for a decade and (3) at the end of each decade, validation studies are made of whether the forecast materialized or not and if they correspond to measurements a decade later. Very simple, we then keep the grain and throw out the shells….. just have to wait one decade to know….. now it’s time to assess IPCC-AR3 from 2001…. and we know now.”

Siefert is correct in that it is now possible to test AR3 against the last 10 years. This strikes me as the sort of technical subject that would fit comfortably within the boundary conditions that you use at CA. I hope that you will ponder this suggestion and take up the challenge.

I took a basic statistics class as an undergraduate too many years ago to claim to remember anything but the broadest strokes. That said, I saw a link to an intriguing article in Unthreaded at BH. Unfortunately, the article from Nature Neuroscience, titled “Erroneous analyses of interactions in neuroscience: a problem of significance”, which discusses “differences in differences”, is behind a paywall. The authors note that a large number of papers in their field fail to take this into account, and thus their claims of statistical significance are not correct. Given this demonstrated ability to misuse even basic statistical techniques, I wonder if the same artifact is also present in some of the cli-sci papers emanating from the Team, but I lack the tools to make this determination. Perhaps a reader or two will give it at least a cursory look to see if further investigation is warranted.

Investigations into a case of alleged scientific misconduct have revealed numerous holes in the oversight of science and scientific publishing
…
The papers drew adulation from other workers in the field, and many newspapers, including this one (see article), wrote about them.
…
Unbeknown to most people in the field, however, within a few weeks of the publication of the Nature Medicine paper a group of biostatisticians at the MD Anderson Cancer Centre in Houston, led by Keith Baggerly and Kevin Coombes, had begun to find serious flaws in the work.
When they first encountered problems, they followed normal procedures by asking Dr Potti, who had been in charge of the day-to-day research, and Dr Nevins, who was Dr Potti’s supervisor, for the raw data on which the published analysis was based—and also for further details about the team’s methods, so that they could try to replicate the original findings.

…
A can of worms

Dr Potti and Dr Nevins answered the queries and publicly corrected several errors, but Dr Baggerly and Dr Coombes still found the methods’ predictions were little better than chance. Furthermore, the list of problems they uncovered continued to grow. For example, they saw that in one of their papers Dr Potti and his colleagues had mislabelled the cell lines they used to derive their chemotherapy prediction model, describing those that were sensitive as resistant, and vice versa.

Another alleged error the researchers at the Anderson centre discovered was a mismatch in a table that compared genes to gene-expression data. The list of genes was shifted with respect to the expression data, so that the one did not correspond with the other. On top of that, the numbers and names of cell lines used to generate the data were not consistent. In one instance, the researchers at Duke even claimed that their work made biological sense based on the presence of a gene, called ERCC1, that is not represented on the expression array used in the team’s experiments.
…

He noted that in addition to a lack of unfettered access to the computer code and consistent raw data on which the work was based, journals that had readily published Dr Potti’s papers were reluctant to publish his letters critical of the work. Nature Medicine published one letter, with a rebuttal from the team at Duke, but rejected further comments when problems continued. Other journals that had carried subsequent high-profile papers from Dr Potti behaved in similar ways. (Dr Baggerly and Dr Coombes did not approach the New England Journal because, they say, they “never could sort that work enough to make critical comments to the journal”.) Eventually, the two researchers resorted to publishing their criticisms in a statistical journal, which would be unlikely to reach the same audience as a medical journal.

I’ll stop here since I don’t know how much fair use I can get away with but I think that should inspire enough interest (and amusement). The article is here at The Economist

By the way, for Steve and anyone else interested in criticism of the CPI measure here in the U.S., here is another link to some of it. This is from a guy and a source that I respect a great deal – Chuck Butler and his DailyPfennig website.

The above link will open the Friday issue until Monday, when the Friday issue will be in the archives. See paragraphs 11 and 12 for Chuck’s opinion of the CPI, but the whole thing, as usual, is good reading.

—

I have another question/concept, courtesy of my friend Tom Lawrence:

Climate/Temperature Optimum

Has anyone done any work, or in any way attempted to quantify what would be an optimum global temperature, or more broadly, an optimum global climate?

I could even break it down a little bit more into

1. Optimum for people? or
2. Optimum for every living thing on earth other than people? or
3. Optimum for both?

Oh, and I almost forgot, does anyone even try to define “optimum” at all, or otherwise characterize what an optimum climate would be like? I’m assuming that this concept of “optimumness” would be expressed in global average terms, but maybe it wouldn’t.

Tom is wondering why the AGW people are so focused on there not being any climate change, when no one has explained if the exact climate we have now is optimum or not. He wonders if maybe we should warm it (the planet of course, by simply twisting whatever knobs we need to – carbon taxes, regulations, total slavery, whatever) up a few degrees, or cool it down a few degrees, and THEN we can freeze things right at the optimum.

It sure seems to him that it is an awfully strange coincidence that this exact climate we have right now seems to be so imperative to maintain. Are we at any sort of optimum climate stage or state right now? If so, Why?

Are there any papers, discussions or even sophomoric musings on record that address the climate change issue in these types of terms?

Stephen: noting that a few weeks ago you mentioned feeling tired, I’ll stick my nose in and urge you to take care of yourself. That’s a selfish recommendation on my part, because your work is valuable to freedom and thus to my life.

While I don’t see temperature records being as important in rebutting AGW theories as CO2 mechanisms, past history, and fundamental mismatches between theory (models) and reality, it is quite worthwhile – you’ve achieved wide recognition of serious problems with analysis. More broadly, you are deep into the critical matters of scientific method, especially the arcane field of statistics heavily used in “scientific” studies, and ethics of scientists.

And I need to put higher priority on personal administrative matters than writing in blogs. 😉

Scientific American has published a review of a paper by Jonathan Carter demonstrating why economic models are always wrong. Carter was studying geophysical models and realised, when applying calibrations, that more than one calibration fits existing perfect data from the model, and therefore no single calibration is valid for projecting the data into the future. He’s written about how this applies to economic models, but of course it applies to any extrapolated model. I know, Steve, you’ve made this point before in different ways, but it’s nice to see it validated.

Selective data refusal is illustrated in #4532, in which Tom Crowley of the University of Edinburgh complains in Nov. 2008 of not having a tree chronology of Gordon Jacoby, after “TEN YEARS” (emphasis in original). The response is “it’s all archived”; Crowley responds that the individual tree data were deposited, but not Crowley’s published distillation into a single site composite.

However, some people (Crowley mentions Ed Cook) are favored and get the data.

In #5106, it transpires that the site chronology was in fact archived (at the end of August 2008), and Crowley has his data, but remains frustrated:

I want this business to end just like you – but there is one very puzzling element of this business that leads me to ask further questions:

“if this information was available on the web”, why didn’t you just say so?
“why didn’t Gordon just say so”?
after getting a runaround for years, the community deserves to hear the answer to this.

I have just perfected a fail-safe method to resolve the entire global warming argument. The answer has been staring us in the face all along of course. It’s been prototyped on numerous occasions, and has become the industry standard for testing any hypothesis with financial implications. What’s more, every modern society has adopted a similar mechanism for settling legal disputes. The solution? Simply require that all participants in the debate gamble to stay in the game.

Dr David Whitehouse recently won a bet with climatologist Dr James Annan over a global warming prediction. The important point is not the prediction, or even who wins it. The key ingredient is that both protagonists should have a nontrivial – preferably a substantial – monetary stake in GETTING IT RIGHT. In today’s clamorous world, where he who shouts louder shouts further, only money is potent enough to buy honesty. A fact not lost on the financial world – or the judiciary.

In finance it’s called investment hedging of course, not gambling. In the legal world it’s known as punitive costing (making the judicial process so expensive, only fools – or the desperate – ever resort to it). The mechanisms are unimportant for the essential point is – as everyone agrees – it works. The clever bit is to set the market price on honesty, which is what betting does so brilliantly.

A global warming enthusiast (if that’s the right word) who believes dire climatic change is nigh should be willing to wager a sensible fraction of his research funding on GETTING IT RIGHT. So should a global warming sceptic. If they can’t or won’t, their beliefs are literally worthless. Put up or shut up.

Once a testable wager is formulated and agreed, the government could take on responsibility for running the wager (at an agreed rate of taxation, of course). Betting is then opened up to both sides, further financing and sharpening scientific enquiry. The more successful/contentious bets would attract public support, like a national lottery perhaps. The latest odds might be quoted and traded on the world’s stock exchanges, dedicated on-line websites and major news and media channels. With bets continually striking and maturing, the national gaming industry would happily finance the day to day running of the enterprise at street level. The revenue raised would be used to finance the winning teams, though with honest scientific enquiry at stake, everyone becomes a winner.

I’m promoting this idea to both sides of the debate. What’s the betting neither side shows the slightest enthusiasm?

2012 is expected to be around 0.48 °C warmer than the long-term (1961-1990) global average of 14.0 °C, with a predicted likely range of between 0.34 °C and 0.62 °C, according to the Met Office annual global temperature forecast.

Highly autocorrelated time series show an increased occurrence of relatively high but spurious RE and R2 values. CE is a more robust statistic, but shows the same limitations at higher significances (i.e. p < 0.01). Threshold values of 0 for RE and CE, traditionally used to distinguish between successful and unsuccessful reconstructions, are not necessarily valid and depend on the temporal structure of the time series analysed.
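The effect described in that abstract can be illustrated with a toy Monte Carlo. The sketch below is my own construction, not the paper’s method: it calibrates an unrelated pseudoproxy against a target by OLS and scores RE on a holdout period, so any RE above the threshold is spurious by construction.

```python
import numpy as np

def ar1(n, phi, rng):
    """AR(1) series: x[t] = phi * x[t-1] + noise (white noise when phi = 0)."""
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def re_stat(proxy, target, n_cal):
    """OLS calibration on the first n_cal points; RE on the remainder,
    scored against the calibration-period mean as the null forecast."""
    b, a = np.polyfit(proxy[:n_cal], target[:n_cal], 1)
    obs = target[n_cal:]
    pred = a + b * proxy[n_cal:]
    clim = target[:n_cal].mean()
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - clim) ** 2)

rng = np.random.default_rng(1)
trials, n, n_cal, threshold = 2000, 100, 50, 0.25

def spurious_share(phi):
    # Proxy and target are generated independently, so any apparent
    # skill (RE above the threshold) is spurious.
    hits = sum(
        re_stat(ar1(n, phi, rng), ar1(n, phi, rng), n_cal) > threshold
        for _ in range(trials)
    )
    return hits / trials

share_wn = spurious_share(0.0)   # white noise
share_ar = spurious_share(0.95)  # strongly persistent series
print(f"share of trials with spurious RE > {threshold}: "
      f"white noise {share_wn:.1%}, AR(1) {share_ar:.1%}")
```

With persistent series the share of spuriously “skillful” RE values rises well above the white-noise case, which is why a fixed RE > 0 threshold is a weak test for autocorrelated data.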

Among other things, he hits the blog theme of full true and plain disclosure not being required or even suggested in academia:

“For results that could not be reproduced, however, data were not routinely analysed by investigators blinded to the experimental versus control groups. Investigators frequently presented the results of one experiment, such as a single Western-blot analysis. They sometimes said they presented specific experiments that supported their underlying hypothesis, but that were not reflective of the entire data set. There are no guidelines that require all data sets to be reported in a paper; often, original data are removed during the peer review and publication process.”

Also interesting is that the only people doing this kind of due diligence are working at companies, not universities.

“Unfortunately, Amgen’s findings are consistent with those of others in industry. A team at Bayer HealthCare in Germany last year reported that only about 25% of published preclinical studies could be validated to the point at which projects could continue…. [Meanwhile in academia,] some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis.”

(The converse is not true: It seems not every company does good due diligence. Some of the studies that Amgen failed to reproduce went into expensive clinical trials.)

You asked me to stop posting off-topic comments regarding policy, and I have obliged. However, given the crusade that you and Ross McKitrick have been on ever since you decided to buy in to Fred Singer’s UN/WMO/IPCC conspiracy theory* – and given that this conspiracy theory is now being overtaken by events – I believe you owe the world an explanation for continuing to insist that anthropogenic climate disruption (ACD) is not happening.

I say this because, despite not wishing to massage your ego, your work has proven incredibly influential – not least upon the likes of Andrew Montford#; a large number of non-climate specialists; and a vast array of scientifically-illiterate journalists.

I hope you will not take offence at these remarks; as I am not attacking you personally. On the contrary, I am hoping to engage you in rational debate. I would also be very interested to know what you think of Jared Diamond’s book, Collapse: How Societies Choose to Fail or Succeed, the message of which I have summarised on my blog here (i.e. I would feel greatly honoured if you would post a comment)?

Yours hopefully,

Martin.

* This post links through to my old Earthy Issues (EI) blog – but please do not post comments on EI because…
# I am going to re-post the old EI item about Andrew Montford on Lack of Environment tomorrow.
Steve: Martin, you seem to have a fertile imagination. Can you find a single quotation to support your allegation that I “decided to buy-in to Fred Singer’s UN/WMO/IPCC conspiracy theory“.

I’ve scarcely, if ever, mentioned Fred Singer at this blog. Nor do I think in terms of “conspiracies” and indeed don’t use the word “conspiracy” and don’t allow others to use it. I’ve made an exception in your case (probably unwisely). I think that your concerns should be more properly addressed to the various members of the Team, who have unwisely decided to argue their case on Graybill’s bristlecone chronologies, upside-down Tiljander, Briffa’s Yamal and the like. In the scoping of AR4, I suggested that they simply walk away from these analyses in order to save space and focus on the issues that they believe to be the most important. IPCC authors (Briffa, Osborn etc) decided otherwise. I think that they’re the ones that you should be blaming.

My point was that Dr S. Fred Singer was (to the best of my knowledge) the first to posit the existence of such a conspiracy [oops] as an explanation for the scientific consensus that anthropogenic enhancement of the so-called greenhouse effect will be a problem.

In your previous responses (or those of others – forgive me – I am not going to go and find who said what), it has been made clear that you became concerned that Mann et al had made mistakes and/or were trying to cover them up. That being the case, my point is that continuing to insist today that any of this matters, despite all the evidence that validates the basic conclusion of MBH98 (that current warming is unprecedented), is a pedantic sideshow and a distraction.

People need to see that ACD was and is an inevitable consequence of pumping fossilised carbon into the atmosphere faster than the Earth can turn it back into fossil fuel. It really is that simple.

Once again, I am not attacking you for what you have done; I just don’t understand why you continue to argue that any of it matters. If you still don’t understand my motivation for pursuing this line of argument, please read the post on my blog today and that which will come out at midnight London time tonight (i.e. 2300 hrs GMT).
Steve: what is “all the evidence that validates the basic conclusion of MBH98 (that current warming is unprecedented)”? I’d be interested in reading it. I’ve reviewed many studies purporting to support the Hockey Stick and, unfortunately, they are not “independent”. They rely on questionable data like Graybill bristlecones and Briffa Yamal – so, in my opinion, this line of reasoning does not provide solid ground. This is not to say that other lines of reasoning may not do so, but I can only examine a limited number of things at a time in the depth that I like to examine things.

Is the Earth’s climate changing? Is human activity the primary cause? Is the change likely to be significant? Is this likely to be detrimental to life on Earth? Is there anything we can do to stop it? Is the cost of mitigating or adapting to this change going to increase if we delay acting?

Most relevantly-qualified, active researchers with a track record of well-respected articles in peer-reviewed scientific journals now say “Yes” in answer to all six of these questions. Therefore, even if there were some significant doubt left regarding any of them, the prudent thing to do would still be to take action.

The first email in this FOI is from Gergis to a full list and covers (well, not covers actually) the CA involvement in the pulling of the paper; it’s not what it says that is interesting but what it does not say.

She expects the new paper to have very similar results to the old paper.

Michael Mann has sued National Review, Competitive Enterprise Institute, Rand Simberg, and Mark Steyn for “accusing him of academic fraud and comparing him to a convicted child molester, Jerry Sandusky …” in the District of Columbia Superior Court, case number 2012-CA-008263. A copy of the complaint is posted at http://ge.tt/3NBGeSQ/v/0?c .

Bouldin: I do have a number of cards I’ve not played yet, but they don’t guarantee anything. Fights are always learning experiences, and this one surely will be. I do believe that there will be, sooner or later, some objective dendro people who get what I’m saying and see its importance. There are still a bunch of good people in this field, as there are in all fields. That’s where my faith ultimately lies.

I do however have one thing on my side that many do not: I’m not afraid of the repercussions of this to my career.

John Bills:
Good find. This deserves to be highlighted. Jim Bouldin is taking a 2×4 to the dendroclimatology issue. Basically he is saying crap in, crap out. Mann is going to be royally po’d. Our host will be vindicated big time.

The issues that I’m addressing are different from the ones Steve M has addressed, in many ways, which are different again from Craig Loehle’s (2009), and different still again from ones addressed by Burger and others, and again from others who I’m probably forgetting right now.

So, I’m not out to validate, or invalidate, anyone else’s criticisms on other topics, nor to single out any particular researcher’s work as bad. I’m out to comment on, and explain, a serious analytical problem that I see which underlies and affects many studies. People will do with it as they will.

On your other comment, the statistics are just not really all that complicated; very little math is required to explain it, and that required to demonstrate the effect is included in the computer code. It’s more important to explain the underlying problem in simple language.

Hopefully, you read this thread occasionally. I’m not a statistician, nor am I a climate scientist, although I do have a BS in Chemistry and worked 15 years in the chemical industry.

That being said, a poster at Climate Etc. keeps going on about another paper with another hockey stick. The link is below. Reading the methods summary, it appears the proxies were selected for modern sea ice extent. If I am right, wouldn’t that select proxies that conform to expectations, as with Mann’s hockey stick?

“E.O. Wilson is an eminent Harvard biologist and best-selling author. I salute him for his accomplishments. But he couldn’t be more wrong in his recent piece in the Wall Street Journal (adapted from his new book Letters to a Young Scientist), in which he tells aspiring scientists that they don’t need mathematics to thrive. He starts out by saying: “Many of the most successful scientists in the world today are mathematically no more than semiliterate … I speak as an authority on this subject because I myself am an extreme case.”

“A number of the sceptics are saying there’s no warming because they look at the temperature record and see a peak in 1998 and cooler years after that. But we know the peak was because of an El Niño event and that comes out in this forecast.”

The increasing use of line-by-line radiative transfer codes has been made possible by
the availability of comprehensive molecular spectral line data compilations. These draw on
the laboratory and atmospheric measurements and calculations of many workers and are
periodically updated.

There are currently four widely used spectroscopic data bases for use with line-by-line
radiative transfer codes. These are the:

AFGL/HITRAN (Air Force Geophysical Laboratory / High Resolution Transmission) Molecular
Absorption Data base (Rothman et al., 1987). The latest edition of the HITRAN data base
was released in early 1991, and will be described in a paper by Rothman et al. in a special
edition of J. Quant. Spectrosc. Radiat. Transfer to be published in 1992.

There has been considerable interaction between the data base managers in recent years
and much of the data is common to the various editions. A description of the status of the
different compilations can be found in Husson (1986).

So there’s even more convergence at this point. In short, it seems that the mainstream modelers and the IPCC all use the HITRAN database, or one of a few other databases that should mostly share the same data.
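For concreteness, here is a rough sketch of what “using the HITRAN db” looks like at the code level: a line-by-line program reads fixed-width line records and pulls out the spectroscopic parameters per transition. The column positions below assume the modern 160-character .par record layout (the 1991 edition discussed above used an earlier format), and the sample record is made up for illustration, not taken from an actual database file:

```python
# Hedged sketch: parsing a HITRAN-style fixed-width line record, the kind of
# input a line-by-line radiative transfer code reads. Column positions follow
# the (assumed) modern 160-character .par layout; the sample record is made up.

def parse_hitran_record(line):
    """Pull out the fields a line-by-line code needs most."""
    return {
        "molecule_id": int(line[0:2]),             # HITRAN molecule number (e.g. 2 = CO2)
        "isotopologue": int(line[2:3]),
        "wavenumber_cm1": float(line[3:15]),       # line position (cm^-1)
        "intensity": float(line[15:25]),           # line strength at 296 K
        "gamma_air": float(line[35:40]),           # air-broadened half-width (cm^-1/atm)
        "lower_state_energy": float(line[45:55]),  # lower-state energy E'' (cm^-1)
    }

# Made-up example record laid out to the column positions above
sample = " 21 2347.123456 1.234E-19 1.000E+00.0700.0900  123.4567"
rec = parse_hitran_record(sample)
print(rec["wavenumber_cm1"], rec["gamma_air"])
```

The point is only that the “lab results” end up as a long table of per-line parameters, so any two codes drawing on the same compilation inherit the same underlying measurements.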

So I’d say you now mainly need to check Edwards’s (1992) list of references (and paper and code) for what takes us from lab results to atmosphere/climate modeling, and, for the lab results themselves, mainly the references for the HITRAN database. I have not checked Husson (1986).

Note: sorry, I had to jump around a bit after the AR4 references. In fact it was a bit hard to follow the track there. But I got better help from the TAR, which points you in particular to Myhre et al. (1998). See:
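And for what a “line-by-line” code actually does with those compiled line parameters: a deliberately over-simplified sketch, summing pressure-broadened Lorentz profiles only. A real code would use Voigt profiles, temperature scaling of line strengths, continua, and layer-by-layer radiative transfer; all function and variable names here are my own:

```python
import math

def lorentz(dnu, gamma):
    # Pressure-broadened Lorentz line shape, normalized to unit area,
    # evaluated at distance dnu (cm^-1) from line center.
    return (gamma / math.pi) / (dnu * dnu + gamma * gamma)

def absorption_coefficient(nu, lines):
    # k(nu) = sum over lines of strength_i * shape(nu - center_i).
    # 'lines' is a list of (center_cm1, strength, halfwidth_cm1) tuples,
    # e.g. as read from a HITRAN-style compilation.
    return sum(s * lorentz(nu - nu0, g) for (nu0, s, g) in lines)

# Two made-up CO2-band-like lines, evaluated at the first line's center
lines = [(667.4, 1.0e-19, 0.07), (668.1, 5.0e-20, 0.07)]
k = absorption_coefficient(667.4, lines)
```

This is why the quality of the spectral compilation matters so much: the model’s absorption at every wavenumber is built directly from those tabulated line positions, strengths, and widths.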

I didn’t know where to put this, but I thought this Twitter conversation was amusing, especially the comment by Boothe. Will Mann be labelled a denier because he does not agree with the ‘consensus’ of the ‘dendro community’? Or is it suddenly “this is how science is done”, as per Laden? Quite a conundrum for Mann and his followers, I would think. Notice how Dessler doesn’t respond to, or even ‘like’, Kevin’s responses backed up with peer-reviewed consensus papers.