Sciencemag on House E&C Hearing

Richard Kerr of Science has reported on the House Energy and Commerce Committee hearings. Having lived through the hearings, it’s interesting to see how they get characterized. For example, Kerr says:

He [North] said he doesn’t disagree with Wegman’s main finding that a single year or a single decade cannot be shown to be the warmest of the millennium. But that’s only part of the story, he added. Finding flaws “doesn’t mean Mann et al.’s claims are wrong,” he told Barton. The recent warming may well be unprecedented, he noted, and therefore more likely to be human-induced. The claims “are just not convincing by themselves,” he said. “We bring in other evidence.” The additional data include a half-dozen other reconstructions of temperatures during the past millennium. None is convincing on its own, North testified, but “our reservations should not undermine the fact that the climate is warming and will continue to warm under human influence.”

One thing that I’ve noticed in climate science is how seldom people use the exact words of their opposite party, preferring to re-describe their “main finding” in alternative words. Nine times out of ten, they mis-describe the finding – or, if the describer is on the Team, 100% of the time. How can anyone say that Wegman’s “main finding” is “that a single year or a single decade cannot be shown to be the warmest of the millennium”? Not only is this not Wegman’s “main finding”, it’s not a finding of his at all. He didn’t say that the enterprise is impossible; he did say that the Mannian methods that he studied (which were not all of the Mannian methods) were flawed in the ways that we described them to be flawed and that the flawed method had the properties that we described. He did not purport to say that conclusions about the warmest year or warmest decade could not be reached by some other method, although he raised doubts about whether the methodologies used by the Team in other reconstructions are likely to provide such a method.

I’ve said over and over how frustrated I am that the due diligence of the NAS panel was so negligible and slight and that they relied on mere literature review for so much of their study. It’s ludicrous for them to say that bristlecones should be "avoided" in temperature reconstructions and then to "bring in other evidence" – a "half-dozen other reconstructions" that use bristlecones – without testing for the impact of bristlecones on these reconstructions. I’ll do the testing of the impact of bristlecones on the other reconstructions, but the NAS panel should have done it themselves.

Their report is most usable on specifics, where they make many useful comments; their poor performance in one area does not make the report unusable in total and, to date, I’ve tried to focus mainly on the positive aspects of the report.

But when I see how Science describes North’s testimony, I realize that the failure to assess the impact of bristlecones is not just silly, it’s negligent. Let’s say that the NAS study had been done by an engineering firm – large or small, Bechtel or your local P.Eng. I don’t think that you could find an engineer who could say on the one hand that you can’t use bristlecones (“strip-bark”) and then, on the other hand, produce drawings using bristlecones. They’re trained not to do stuff like that.

My guess is that, under the circumstances described here – that bristlecones should be “avoided” in temperature reconstructions – it would be professional misconduct for an engineer to produce drawings which used bristlecones. If an engineer said that certain materials should not be used in a bridge and then produced drawings for a bridge that used these low-quality materials, it would be punishable if it were the subject of a professional misconduct complaint. Why should a NAS panel conduct itself with lower standards than engineers?

The bridge metaphor may be a handy one for people that think that we “confound” issues of methodology and data. Maybe I’ve not expressed things clearly enough, but I think that the interaction is important.

I’m working on the following analogy so bear with me.

Let’s say that a consulting engineer reviewing a design for a suspension bridge says that the loads should have been calculated centered on each pillar instead of with decentered load calculations; that the design placed all the stress on ungalvanized steel; that ungalvanized steel should not be used in bridge construction; that, if you don’t use ungalvanized steel, the design ends up being very different; and that the design, when replicated, failed standard stress measurements, which were not reported by the original engineers.

In response, the original engineers argue that pattern selection techniques that are standard practice in highway construction show that ungalvanized steel MUST be used.

The dispute is sent to an eminent panel who say that ungalvanized steel should not be used, but then recommend alternate designs using ungalvanized steel.

If an engineer said that certain materials should not be used in a bridge and then produced drawings for a bridge that used these low-quality materials, it would be punishable if it were the subject of a professional misconduct complaint. Why should a NAS panel conduct itself with lower standards than engineers?

Obviously, NAS panels and scientists are not punishable in the same way engineers are. There aren’t the same professional licensing and formalized codes of ethics in place for academic and government researchers. Plus the lawyers haven’t figured out how to sue them when they mess up.

It should be noted that Kerr has been ‘reporting’ on climate research for many years and probably knows the major players quite well. Even if he strives to be as intellectually honest as possible, it’s ‘plausible’, maybe even ‘likely’, that he’ll slip over into the speculative realm and report bias as fact.

Re #1, the Big Dig is more a failure of Massachusetts politics/government than engineering. Two engineering safety reports in 1999 and 2000 explicitly warned of the recent failure and were ignored by the administrator-politicians in charge.

i.e. The more we learn, the less certain we become about these “trends”.

Notable in that article:

“‘The most dramatic difference since ’95 is the DECREASE in the uncertainty'” associated with recent warming, says statistical climatologist Michael Mann of the University of Virginia”

Kerr included that observation for a reason: it is pure counterspin. Focus on the things you’re certain of (modern instrumental record) to avoid talking about the things you’re not certain of (the historical record & GCMs).

Also, note the description of Mann as a “statistical climatologist”. That adjective was thoughtfully inserted for a reason.

As for the failure to do a proper sensitivity analysis – there’s no doubt that’s negligent. If it’s criminally negligent then the academic world is in some trouble, because it is a universal problem.

Manuscript reviewers should have access to the turnkey scripts that were used to produce the articles they review. The reason no one is clamoring for that is because it’s a lot of work, and scientists are not paid for the hard work of review. They’re paid to publish. I would suggest that needs to change.

Not to digress, but I was in Boston recently and, because of the Big Dig, we went through the city center part of Boston in no time. I think most of the traffic problems come on the surrounding highways because the population of the Boston area is so spread out. Even though the Big Dig only makes getting through the city faster, as a visitor that is very nice.

“our reservations should not undermine the fact that the climate is warming and will continue to warm under human influence.”

In the North quote above, notice how he starts with a red herring, namely that the climate is warming. We know that, and nobody is seriously arguing that it has not warmed up since the LIA. There’s some disagreement on the amount, where it will peak (if it has not already), and whether some 10- or 20-year trend is meaningful in the context of long-term climatic variation. What’s causing it is the big dispute, and this brings us to the second part of North’s quote. This is where he states that it’s a “fact” that (recent) warming has been caused by humans. At best AGW is a theory, and the data supporting it to date is, at best, inconclusive.

My reaction to North’s quote is this: Of course our “reservations” on the validity of the studies supporting AGW should create serious doubt in our minds about the theory. That’s the scientific process, man!

Re: bristlecones. It seems to me that, given the non-linear response of tree rings to temperature or any other climatic variable, tree rings in general should not be included in any reconstruction whatsoever. As one of the commenters here has explained to us, trees even respond to insect poo following an infestation. How is anyone ever going to untangle all the variables that make a tree grow and extract the temperature signal? Not likely.

I suppose that the negligence of a bridge designer using low-quality materials might rise to criminal negligence, but that’s not the nuance that I have in mind and let’s leave that firmly out of any discussion. My point is more civil – the engineer has a "duty of care" in the sense of tort law (which I know a bit about). That duty of care is undoubtedly included in engineering codes of conduct, but it exists independently.

As a thought experiment, let’s suppose that a NAS panel said that ungalvanized steel should be avoided, then recommended designs using ungalvanized steel, and NAS were then sued. Who would argue what?

The NAS lawyer would absolutely argue that a NAS panel has no duty of care. This defence would not be available to an engineer. Maybe we’re getting somewhere here – maybe that’s the problem. At some point, one needs a report from somebody that has a duty of care that academics don’t seem to have. In the financial world, auditors and accountants have a duty of care.

Zidek in an editorial complained about "post-normal science". But to coin an epigram by changing the position of the hyphen, in the real world, post "normal-science" comes engineering, when someone arrives on the scene with a duty of care. Maybe the professional duty of care is part of what makes an audit an audit.

Steve, we (I) am perfectly able to comprehend the effect of x1, the effect of x2 and the joint effect of x1 and x2. That is what BC does in their full factorial. When you reply to an in-depth discussion of x1 by drawing in x2, you are not communicating poorly; you are either showing poor logical analysis or using misdirection in argument. If you want to buy me another steak dinner, I’ll bet that Wegman would back up this view.

Thanks for the post Steve. Another analogy is that a civil engineering standard requires aggregate of certain strength in road building, and the engineers use sub-strength aggregate. The road lasts a few years then falls apart under load, but the engineers have ‘moved on’. The civil standard is an acceptable basis for selecting proxies: i.e. objective, significant response to local temperature.

Climate scientists would presumably argue that even though the proxies did not meet an engineering standard, they met a "teleconnection standard". It was essential to use substandard aggregate because the use of substandard aggregate in California highways kept elephants in Africa through teleconnection. It sounds ridiculous when we put it that way, but there’s not much difference between that and Mann’s methodology.

Whereas engineering science has evolved to cope with the hard reality that it can influence people’s lives (in good & bad ways), Big Science is not in the habit of meeting up with Big Policy in a way that it matters to anyone. The culture of ivory tower Big Science is to be content with peer review and to shun inspection from outside. Big Scientists need to be periodically reminded that if they want to inform Big Policy and get the Big Bucks Grants, they will likely face a Big Audit.

That’s where the analogy fails. Big Science is a minor, and you’re trying it as an adult.

CA could be one of Big Science’s valuable life lessons … if only they would swallow their medicine.

I think that appealing to teleconnection is likely to be a bad practice and even more prone to data mining than regular proxy usage. But it’s not inconceivable that teleconnection proxies may exist. Really the issue becomes qualification of proxies in general (so that we can say whether any proxies are ok, and whether teleconnection proxies are as good as others). The issue is how we avoid the “test 20 models to get one with 95% significance” fallacy. I think someone like Wegman could be helpful in this type of work. Or he could suggest someone who would be.

Paul, I disagree with this. A “theory” is a tested hypothesis that passes said tests. Relativity is a “theory” that has probably been tested more than any other, for example. The first tests ever conducted on AGW are Steve and Ross’ work. The tests showed flaws. At best, AGW is a hypothesis in need of revision.

how we avoid the “test 20 models to get one with 95% significance” fallacy

It’s called the “Bonferroni correction” and has been discussed elsewhere I believe.

A priori hypothesis testing vs. post-hoc data dredging is a separate, but not unrelated, issue. And there’s no mathematical cure for post-hoc analysis. It is simply a starting point in the cycle of science, not an endpoint. Treating it as an endpoint is fraudulent.
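The “test 20 models to get one with 95% significance” fallacy, and the Bonferroni fix mentioned above, can be sketched in a few lines. This is a toy simulation under invented numbers, not anyone’s actual analysis: every “model” here is pure null, yet testing enough of them at the 5% level almost guarantees a “significant” result.

```python
import random

random.seed(0)

def spurious_hit(n_tests, alpha=0.05, trials=2000):
    """Estimate the chance that at least one of n_tests null models
    'passes' at significance level alpha when no real effect exists."""
    hits = 0
    for _ in range(trials):
        # Each null test yields a uniform p-value; count a family-wise
        # false positive if any single p-value falls below alpha.
        if any(random.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / trials

# Testing 20 null models at alpha = 0.05: false positives are the norm.
print(spurious_hit(20, 0.05))       # roughly 1 - 0.95**20, i.e. about 0.64
# Bonferroni: divide alpha by the number of tests to restore ~5% overall.
print(spurious_hit(20, 0.05 / 20))
```

The correction is crude (it ignores dependence between tests), but it makes the point: significance thresholds must account for how many things were tried.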

Maybe the professional duty of care is part of what makes an audit an audit.

I think this is what bothers skeptics about the IPCC. It looks like it should be an organization with a “duty of care”, but when pushed they just say, “No, we have to rely on the scientific journals and their peer review doing their duty.” And the science journals, etc. say, “No, it’s the individual scientists’ responsibility”, and the scientists say, “No, we’re doing what our science advisors at NAS or NASA or wherever say we can do”, and the organisations say, “We’re just doing what Congress told us to do”, and Congress says, “Let’s gather data and hold a committee meeting!” and everyone else says, “You can’t do that! It’s scientific McCarthyism.” And so when we’re done with the CJ, nobody has stepped up to the plate and taken responsibility.

The large 387-site Schweingruber data set discussed by Briffa is the closest thing in proxy world to an ex-ante selection of proxies. I’d love to know what sites are in it, but that’s a secret. It leads to the Divergence Problem – ring widths in the large sample go down. Briffa has proposed cargo cult explanations to maintain the fiction of linear proxies, but the most plausible explanation is that ring widths decline above an optimum temperature and that simple ring width chronologies cannot be used for comparing MWP and modern temperatures. It’s very simple really. The NAS panel botched the Divergence Problem BTW.

If the overall large population has declining ring width and the average of your selection of 10 sites goes up, what are the odds of that without cherrypicking? Non-existent. The Team has already “data snooped” – that’s why Yamal is in study after study rather than the Polar Urals update.
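The odds claim above can be illustrated with a toy simulation (the series model and all numbers are invented for illustration, not drawn from any real proxy set): generate a population of noisy series with a slight downward drift, then compare a random pick of 10 sites against a screened pick of the 10 most upward-trending ones.

```python
import random

random.seed(1)

def make_series(n_years=100, drift=-0.03):
    """A noisy random-walk series with a slight downward drift,
    standing in for a population whose mean ring width declines."""
    x, out = 0.0, []
    for _ in range(n_years):
        x += drift + random.gauss(0, 0.2)
        out.append(x)
    return out

def trend(series):
    # Crude trend measure: second-half mean minus first-half mean.
    h = len(series) // 2
    return sum(series[h:]) / h - sum(series[:h]) / h

population = [make_series() for _ in range(387)]  # cf. the 387-site sample

# Random selection of 10 sites: the average trend tracks the population.
random_pick = random.sample(population, 10)
print(sum(trend(s) for s in random_pick) / 10)    # negative, like the population

# Screened selection: keep the 10 most upward-trending sites.
screened = sorted(population, key=trend, reverse=True)[:10]
print(sum(trend(s) for s in screened) / 10)       # positive, despite the population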

I used to think that the importance of this blog was that it dealt with issues that affected the environment and involved high political office and billions of dollars. I was wrong. It seems it’s more important than that.

My model of the world held science to be a central activity practiced by people who were smart and filled with integrity. I expected business people and government employees to require careful scrutiny. I expected mere engineers to be held to codes of conduct lest they cut corners for low motives.

In popular discourse and these congressional hearings it is assumed that anyone with a business interest of any kind is a poltroon and not to be believed. The debate – such as it is – isn’t over facts or logic but as to whether one may be characterized as a “scientist” in this case a climate scientist. Scientists are the recognized “truth tellers”.

Well, that’s how it used to be. A new viewpoint seems to be arising out of this dispute. If an opinion is expressed by a scientist, we will henceforth, until proven otherwise, assume that the purported scientist is a self-interested, self-promoting scoundrel trying to deceive us.

Pity. I enjoyed believing in scientists. It made me feel secure. I guess it’s time for me to grow up.

On this contentious issue of “cherry picking” – just a minor clarification. Choosing particular sites for the expected tree ring response they will provide is not so bad (assuming many things …). But sifting through thousands of time series after the fact to pick out those that support the hypothesis – that’s bad. Both could be described as “cherry-picking”. But one is a priori and the other a posteriori. Big difference. A credible dendroclimatologist would not engage in the latter.

Corrupt is the word that best describes the current state of climate science. Corrupt is a very strong word and I understand the reluctance of Steve and many scientists who frequent this site to use the word.

Can someone find a better word?

Re #16: “‘Theory’ is a tested hypothesis that passes said tests.”

A generally accepted hypothesis can rise to the level of theory without being proven. AGW is generally accepted therefore calling it a theory is correct. That doesn’t mean the theory is correct.

Pat, you can keep believing in REAL scientists. There are plenty out there … like Bürger & Cubasch (2005). The problem is that (1) it’s getting harder to distinguish the real from the fake; (2) the real are being told to stay out of the public debate, which serves to further reinforce (1). Only the old guys are willing to share their skepticism. Problem is, they often don’t have a grasp of the newer statistical methodologies. So it falls to outside parties to take on the responsibility that the ‘authorities’ have largely abrogated.

“Conjecture” is probably the most descriptive term. If, after repeated attempts at REFUTATION – not CONFIRMATION – a conjecture survives, it may eventually be elevated by the scientific community to the status of a theory. Laws are merely conjectures or theories that have proven remarkably resistant to refutation, to the point where they make usefully correct predictions. All scientific knowledge is conjectural, yet most individual conjectures never make usefully correct predictions.

Given all the uncertainty (i.e. σ >> 0), I think that is where proposition AGW (i.e. A ± σ >> 0) is at.

Read Popper. Short, digestible essays that can be finished in a single night before you go to bed.

If policy makers want to take strong action on the basis of conjecture, that is their decision to make. That’s what they are paid to do. Scientists are paid to estimate A ± σ to the point where σ is small. They are not paid to pretend σ = 0.

-picking sites based on some physical rationale or botanical basis of knowledge that says that they should be proxies.
-picking sites that show good correlation in the instrument period
-picking sites that show good correlation in the instrument period and have no MWP

Obviously the last one is just plain evil. The first and second are more arguable and bring up other issues:
a. What is good correlation (especially given autocorr)? Maybe we need year to year proof of correlation?
b. What does it mean to have a rationale (per first category)? Does it come down to first principle physics or is it in some sense circular (but maybe not entirely so) in that establishing the basis of a useful proxy requires some observation and correlation to instrument. Perhaps a general one over geographic space.
c. What is the range of variability of the proxies and can they be demonstrated to “register” large temp movements.
d. what is the quality of foundational studies and assumptions on low and high treeline?
e. How can low and high treelines and the like be combined to more effectively remove non-temp components?
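On point (a): one common rule of thumb is to shrink the sample size by the lag-1 autocorrelations of both series before judging a correlation significant (the Bretherton et al. effective-sample-size adjustment; used here purely as an illustrative sketch with invented AR(1) data, not as the field’s settled practice).

```python
import random

random.seed(2)

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def effective_n(x, y):
    """Effective sample size: persistence in both series shrinks the
    number of independent observations available for a t-test."""
    r1, r2 = lag1_autocorr(x), lag1_autocorr(y)
    return len(x) * (1 - r1 * r2) / (1 + r1 * r2)

def ar1(n, phi=0.8):
    """A persistent AR(1) series, a stand-in for smoothed proxy data."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0, 1)
        out.append(x)
    return out

proxy, temp = ar1(100), ar1(100)
print(effective_n(proxy, temp))   # far fewer than 100 independent points
```

With a few dozen effective observations instead of 100, a correlation that looks “95% significant” against the nominal sample size often isn’t.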

#28. You have to pick the sites in advance and report what you collect. If white spruce ring widths in treeline sites are supposed to be temperature proxies and you go out and collect 40 sites, then you report all of them. You can’t decide after the fact that some are proxies and some aren’t, a la Jacoby. If some sites behave differently than others, then explain the differences. How hard is that?

First, I find it very frustrating to see the efficacy of a method evaluated by contemporary political criteria, a red herring that many, including your supporters, chase. A 1000-year temperature reconstruction, shorn of AGW concerns, purports to be an accurate historical model. If 85% of the model (pre-1850) is in substantial error, then it’s a deeply flawed historical retrospective regardless of its “conclusions”. It may be true that a single data point in the last two decades is higher than the highest single data point of prior times – so what? Two data points do not make a model, and a model cannot be justified by the small tail end of the hockey stick (15% of the model).

In any other discipline this “measure” of model “conclusions” would be laughable. Suppose a model of stock market cycles for the last 100 years were junk for all but the last 15 years – would anyone (other than Mann) have the chutzpah to claim “its conclusions” are supported by others?

Second, don’t get caught up in analogies (as I have). You’re simply frustrated that what is self-evident is irrelevant to those caught up in political concerns. Look at the bright side, there is plenty of cleanup work left – I mean, what would you do if the NAS had resolved all your concerns?

Re #9
Steve, you make a good point about ‘duty of care.’ Perhaps the culture of Science minimizes this in order to advance knowledge with the greatest possible speed (albeit through trial-and-error methods) and leaves it up to Technology (i.e., the implementation of knowledge) to be diligent about caring. Policy-makers are not seeing the distinction and assume both spheres are behaving according to the same standards.

I think there are probably 3 levels of selection:
-picking sites based on some botanical basis of knowledge that says that they should be proxies
-picking sites that show good correlation in the instrument period
-picking sites that show good correlation in the instrument period and have no MWP

TCO, I was going to say that most of the BP sites were sampled using rationale #1 long before there was a 1970s+ instrumental record, which would rule out #2 and #3.

But on second thought, you may be right. In chicken-egg scenarios it is hard to say whether the botanical insight preceded site selection. Most likely there was interplay. e.g. Someone took a sample in the 1950s, someone noticed a pattern, someone speculated a relationship in the 1960s, more sampling was done in the 1970s to reduce the chronology error, someone did experimental work on temperature as a limiting factor in the 1980s, someone took this as proof of a link function, more sampling was done in nearby areas in the 1990s, and so on. Thus is woven a tangled web of a hypothesis that would be resistant to Bonferroni’s simple method. [NB: I’m not saying this is how events developed. I’m just guessing at a plausible scenario.]

Typically, independent investigation (different research groups different systems) will help to break this vicious cycle of chicken-and-egg groupthink. It is interesting, and perhaps relevant, that a “20th century loss of climatic sensitivity” is being noted more and more frequently in dendroclimatological studies. Buckley et al (2004, CJFR 34: p. 2549) are at a complete loss to explain it in T. occidentalis. Maybe the alleged degree of sensitivity was never there to begin with? Maybe it is purely a result of sampling prejudice?

I’m sure Steve will get to you, but as I recall there are a couple of attempts to do reconstructions without bristlecones per se. Trouble is that they each have other problematic series in them. I suspect Steve will be coming up with some compilation of all of them, or if you’d care to list all the reconstructions you know of, I suspect that each of them has been discussed here at one time or another and as a group the people here could go back to the old threads and see what they contained and whether or not there are “bad” series in them.

Bristlecones have a substantial impact on all but 3 studies. The Yamal substitution is a second important source of non-robustness and affects 2 of the remaining 3 and is its own separate can of worms, discussed at length in many posts earlier in the year. Moberg is not substantially affected by either of these 2 issues, but has its own defects, some of which I’ve alluded to previously.

All of this is without dealing with important bias issues relating to changing altitude of tree ring samples, “modern sample bias”.

Thermometers render proxy temperature data. For example, a thermometer measures the amount of expansion experienced by a quantity of mercury. This is a proxy for temperature, which is deduced by applying an appropriate calibration to the actual measurement taken (i.e. the amount of expansion).

What would be the response to a study that collected such thermometer data and discarded that which did not give the expected/desired implied temperatures following calibration? That would be a clear case of data mining.

So what exactly is different in principle in those studies that do the same thing with other forms of proxies, for example tree ring widths?
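The thermometer analogy can be made concrete with a toy experiment (all numbers invented: pure white-noise “proxies” and a hypothetical 30-year “instrumental” target). Screening noise series by calibration-period correlation – exactly the after-the-fact sifting described above – still yields a composite that “calibrates” impressively.

```python
import random

random.seed(3)

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((a[i] - ma) * (b[i] - mb) for i in range(n))
    da = sum((v - ma) ** 2 for v in a) ** 0.5
    db = sum((v - mb) ** 2 for v in b) ** 0.5
    return num / (da * db)

n_years, cal = 130, 30               # last 30 "years" have instrumental data
target = [0.05 * t + random.gauss(0, 0.3) for t in range(cal)]  # warming trend

# 1000 "proxies" that are pure noise: no temperature signal whatsoever.
proxies = [[random.gauss(0, 1) for _ in range(n_years)] for _ in range(1000)]

# Screen: keep the noise series that happen to correlate with the target.
kept = [p for p in proxies if corr(p[-cal:], target) > 0.4]
composite = [sum(p[t] for p in kept) / len(kept) for t in range(n_years)]

print(len(kept))                          # a few percent pass by chance alone
print(corr(composite[-cal:], target))     # an impressive-looking "calibration"
```

The composite’s pre-calibration portion is still pure noise, so all the apparent skill comes from the screening step itself; this is the bias the a priori / a posteriori distinction guards against.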

Steve (29): I think there are probably egregious examples of bias in the field. I also think that doing the work thoughtfully is not “simple”, though. Bender alluded to some of the difficulties.

On your later point, I think that looking at sensitivity to bcp is interesting and useful (but not definitive). I would recommend thinking about this holistically and considering other common factors which could be removed from the studies and how they would be influenced by the change. You do not want to get into the trap of just going after high outliers because they are high (not saying you are, saying you should be wary to make sure you don’t). I hope you will also be a little bit less dogmatic and a little more nuanced in your discussion of bcps themselves. You are making too much of the NAS statement in judgement of them (“they shouldn’t be used”) given the overall lit-review nature of that work, that the remark was not supported thoroughly, that it was not consistent with the rest of their document, etc.

#4 Gary, it is way more complicated than that. There is plenty of blame to go around. Yes, there were engineering studies showing concerns. Then the installation was examined, and accepted, by engineering staff.

#6 What time and what day was this? Pre-Big Dig you could do the same thing; during the Big Dig it was damn near impossible; now it is the same as it was before. Because after more than a decade and almost 15 billion dollars we have two extra lanes, one in each direction. From the Callahan you have an extra lane for about 1 1/2 miles north, and about 1 mile of extra lane to the south. After that it is the same. One other benefit is that coming out of the Callahan you don’t have to go through a stop sign.

But I can say that if your plane landed at 8 AM, I reckon you would be stuck in traffic for a long time. Note that the majority of the time, John, you were in the Callahan, which is longer, and about 40 years old.

Some airport traffic has been shunted away from 93 because of the connection to 90 for people going west. That, however, changed (I don’t know when you came into Boston) two weeks ago, since the connector has been closed.

There were plenty of engineering successes (freezing of the South Boston rail yards during the Ted Williams Tunnel / Fort Point Channel construction), but also plenty of failures. There were also many political failures: for one, the failure to secure federal funding (and with it federal oversight), but most importantly, capping the contractor’s liability for damages at 150 million dollars, or about 1% of the total cost of the project.

I would say that the Big Dig is at least as complicated as the climate change issue, which makes it a good analogy, but a bad one because you can have the same never-ending semantic arguments. Though you can say, without a doubt, that the Big Dig was an unmitigated disaster.

“One thing that I’ve noticed in climate science is how seldom people use the exact words of their opposite party, preferring to re-describe their “main finding” in alternative words.”

One thing I have noticed is how slick these climate scientists are. Not slick like a lawyer, but like a public relations pro for celebrities and politicos caught in scandal. So demure, so deceitful; when they meet a wall they talk like the trend is in their direction. I suspect they are getting training and polishing from PR professionals such as Environmental Media Services or other front organizations that may be fronting for BP and carbon credit financiers.

The additional data include a half-dozen other reconstructions of temperatures during the past millennium. None is convincing on its own, North testified, but “our reservations should not undermine the fact that the climate is warming and will continue to warm under human influence.”

None is convincing but the trend continues. Specious, but delivered perfectly. These are pros, or are getting polishing as I suggested. Heck, the NAS Report reads like it had the hand of a lobbyist involved in it. Why not? We’re talking about 10’s of billions of dollars in carbon credit exchanges and money laundering. Why not invest a few million for influence, or helping those who “just happen” to support your financial interests?

A “theory” is any conjecture intended for use as a simplification of reality in order to explain and predict reality. A theory is judged in three parts:

A. how well does it explain reality?
B. how well does it predict reality yet to happen?
C. is it the simplest possible theory that properly accomplishes parts A&B? Since simplifying the complexities of reality is the very goal of theories, complexity should only be added to any theory if the additional complexity improves the theory’s ability to explain and predict.

A&B are always both required. Put another way: If the theory cannot explain reality, the theory is a failure no matter how well it predicts. If the theory cannot predict reality, it does not matter how well it explains. This rigor is required to prevent the adoption of theories that later prove to be false once all the data is in.

Whenever one jumps to believing a theory’s predictions before that theory has been proven to correctly explain, the person employing that theory is skating on really thin ice.

My guess is that, under the circumstances described here – that bristlecones should be “avoided” in temperature reconstructions – it would be professional misconduct for an engineer to produce drawings which used bristlecones. If an engineer said that certain materials should not be used in a bridge and then produced drawings for a bridge that used these low-quality materials, it would be punishable if it were the subject of a professional misconduct complaint. Why should a NAS panel conduct itself with lower standards than engineers?

As another poster noted, it can follow the sequence of “peer reviewed makes it correct,” evidently even when other peer-reviewed work shows a major reservation about the basis of the work in question. Once published, the standards for refutation become extremely stringent. NAS does literature reviews, as noted here previously, and, if that is accomplished in an unquestioning manner, it can allow for some cherry-picking of those articles that might support whatever case they are attempting to make. With Mann and the HS, NAS had to look at the specifics that were called to its attention, and on those specifics they had no choice but to uphold the MM criticism, but beyond that the peer-reviewed articles of their selection were back in force.

It would appear to me that the circumstantial evidence for AGW is thought at first glance to be so powerful and overwhelming that single bits of evidence are gathered with the thought that, of course, this evidence “must” support that proposition or it is probably wrong, and therefore we judge it not on its own merits but in the AGW context. Peer-reviewed MBH stood for many years with little or no questioning, and that in my mind was no “accident”. If the MBH HS is wrong, you will be told not to worry because we have other temperature proxies to fall back on. If you expose problems with them, e.g. bristlecones (and it appears that you, Steve M, must look, because no one else is stepping forward for that job) and other potential data mining, again it will be: not to worry, because we have the computer climate models. If they failed, it would be back to: fossil fuel use caused increasing carbon dioxide levels, and temperatures have generally trended upward during that time period, and that evidence alone should suffice.

That the remedies for mitigating the supposed AGW are so vague and superficial seems to me to be right in line with the climatology approach to AGW, or GW, if you will.

Your article here hammers the NAS for referring to subsequent studies after saying that bristlecones are problematic, and leaves the very clear impression that they had no grounds to be doing so. You now say, when asked, that there are subsequent studies not subject to this criticism. Your article leaves (at the very least) a false impression. And it is an important, central distinction. If you had said that not all subsequent studies are free of that issue, but some are, it would make a HUGE difference to your argument.

The fact that the other studies may have issues is interesting and important, but you didn’t mention that, and it is not what you were hammering NAS for. You were hammering them for relying on studies that have a problem they had identified – when in fact at least some of those studies do NOT have that problem, and apparently render a conclusion consistent with that of the other papers. It means that your very strong criticism of that NAS committee, in this instance, is null.

This, by the way, is a general criticism. You in general do not place your arguments here in context, so it is very difficult for anyone to come here and figure out the impact of your criticisms on the overall field. The context is critical to understanding the impact (cf. this post).

This is yet another reason you should be publishing. Expose your arguments, IN CONTEXT, to criticism and rebuttal. If you are right and this means the entire field is suspect, then publish the 3-4 key papers for your key criticisms and then do a comprehensive literature review to tie it all together. And no, you DON’T need the full details from the extant papers; you merely need to do a large-scale valid analysis, and show that your comprehensive valid analysis either gives a different result, or results in too much uncertainty to be useful, or fails critical tests in ways that are uncorrectable, or so on. Do it de novo, and show the issues that way.

As is, it is nearly impossible for anyone to tie what you are saying into any kind of coherent picture of its impacts on the claims of the field, without putting in an individual effort equivalent to doing a major literature review in a novel field.

I get a picture from reading around your blog here, but I HAVE NO WAY OF KNOWING WHAT YOU AREN’T SAYING (cf. this thread), and neither does anyone else, without major effort. And that leads to suspicions that either you don’t much care whether people get the big picture with embedded supporting details (if so, why?), or that you are ducking or hiding something (and if so, why?). Harsh, perhaps, but it’s an issue that arises from the scattershot way you do your work here – and the fact that the scattershot, incomplete work is having a potential policy impact. Precisely the criticism you levy at others, in fact.

Chill out. Do you seriously expect Steve to give the full context of everything in every post?

Do some of the hard work yourself. Read through the “Other Multiproxy Studies” category on the sidebar. Don’t expect Steve to explain everything over and over again just for your benefit. He’s got enough to do, and more than should be expected of one person working alone.

That said, like many others here I hope Steve publishes more, and more often, and that he does in good time produce a document(s) that examines the entire corpus of work. And yes, he DOES need the full details from the other papers in order to replicate them. That’s the function he’s undertaken.

Ken, I expect Steve not to leave out things he knows when making an argument that would be affected by the thing he knows.

I HAVE read much of that other stuff. I SAID that. I’ve put in a lot of work here. That is precisely WHY I get so frustrated here, as I said above. The overwhelming majority of the articles are piece-by-piece works that niggle at details out of context – and what is needed FOR THE BROADER CLAIMS STEVE IS IMPLICITLY OR EXPLICITLY MAKING is precisely the part that either doesn’t exist or is very hard to ferret out, which is the reviews and summarizations. I’m here (as a “warming denialist skeptic”, as it were) to give him a chance to convince me that he is right on this issue, and he makes it really hard to be convinced, and that is frustrating. At best.

Steve’s claim is broader now than “they did some things wrong.” It amounts, explicitly because he said it above, to “the entire field is a can of worms.” He does NOT need the details to support that statement – he knows the kinds of things he is claiming they get wrong. So do it right, and show what difference it makes in a de novo study. That doesn’t mean he can’t continue looking at details – but it is the kind of work necessary to support his broader claims.

From Drudge:
July Heat Wave Could Set National Record
104 Temps Possible On Wednesday
Heat Taxes Utilities, Human Endurance
New York declares emergency as heat wave bakes East Coast
Cities Conserve Power Ahead of Heat Wave

People are dying of heat complications and farmers and ranchers are losing their livelihood to drought.

It’s obvious that all the evidence, however imperfect, points toward little late-Holocene climate variability.

I think the ultimate liability on the issue of climate change will lie with the politicians and obfuscators who decided they knew better than our best scientists and have delayed action on the issue.

Ultimately this discussion of the statistical nuances of proxy reconstructions will be a very, very, very small footnote in the history of climate change, its only significance being one more piece of the puzzle that explains how we took so long to take action on this issue… The degree to which the issue was neglected will seem inconceivable to our children and their children.

So what exactly is different in principle in those studies that do the same thing with other forms of proxies, for example tree ring widths?

In addition to bender’s comments, unlike tree-ring widths, thermometers also measure temperature OVER THE ENTIRE YEAR, not just the 4-month growing season. Furthermore, thermometers, if used correctly, are not susceptible to confounding factors such as solar irradiance, soil quality, moisture and even, gasp, CO2 fertilization.
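To make the seasonal-window point above concrete, here is a toy Python sketch (all monthly temperatures are invented for illustration, not taken from any dataset): if winters warm while summers stay the same, the annual mean rises but the growing-season mean that a tree would record is unchanged.

```python
# Toy illustration (hypothetical monthly temperatures, deg C):
# winters warm between year1 and year2, summers do not.
year1 = [-8, -5, 1, 8, 14, 18, 20, 19, 13, 6, -1, -6]
year2 = [-4, -1, 4, 8, 14, 18, 20, 19, 13, 6, 2, -2]

mean = lambda xs: sum(xs) / len(xs)

annual_change = mean(year2) - mean(year1)             # annual mean rose 1.5 deg C
growing_change = mean(year2[4:8]) - mean(year1[4:8])  # May-Aug window: 0.0

print(annual_change, growing_change)
```

A ring-width proxy calibrated against only the May–August window would report “no change” for this pair of years, while a thermometer record would show 1.5 °C of annual-mean warming.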

re: #49: Errr Lee. Don’t you think that if the Hockey Team were REAL scientists that they themselves would take on board the numerous criticisms of their work, in part summarised by the NAS Panel and the Wegman review? It is THEIR credibility that has been destroyed, not that of Steve McIntyre. Au contraire. Steve has been found to have located some very problematic work on the part of the Hockey Team.

Some of that poor work is even evident to the laymen like me, such as the assumption of linearity between tree ring thickness and temperature. Any gardener knows that plants grow best under optimal conditions of temperature, moisture, soil condition, absence of pests etc. If cooler, they grow less, thus thinner tree rings. If hotter, they are stressed, and they grow less.
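A toy sketch of that inverted-U point (the response curve and all numbers are invented for illustration): if growth peaks at some optimal temperature, a single ring width is consistent with both a cooler and a warmer year, so a simple linear calibration cannot invert ring width back to temperature uniquely.

```python
# Hypothetical inverted-U growth response (arbitrary units, made up purely
# for illustration): rings are widest at an optimum temperature and
# narrower on either side of it.
def ring_width(temp_c, optimum=15.0, peak_width=2.0):
    return max(0.0, peak_width - 0.02 * (temp_c - optimum) ** 2)

# Two very different temperatures produce the same ring width:
print(ring_width(10.0), ring_width(20.0))  # -> 1.5 1.5
```

By symmetry, any width below the peak corresponds to two candidate temperatures; only outside information can say which branch of the curve a given ring sits on.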

So far as I have seen, none of the HS supporters – not you, Steve Bloom, Dano, Peter Hearnden, Carl Christensen, John Hunter et al – have ever attempted to address this single simple point. You just say “the real climate scientists know what they are doing. Trust them.” Well Lee, on their performance to date, I don’t trust them, and I don’t think that you should either.

Arguably Steve Mc has done enough in showing the gaping holes in the HS corpus. They have lost cred. Their practices are not consistent with the expectations of science, demonstrably in archiving and in the disclosure of data and methods to allow replication/confirmation of their work.

As well as that, I get no response when I ask the HS supporters to explain the extraordinary differences between the Summary for Policy Makers of the last TAR and the body of the work. As the published edits show, somebody allowed the message of science to be grossly distorted.

Is it any wonder that the army of sceptics is growing daily in the face of such shoddy practice?

I fully second fFreddy in that post, but further I just happen to think that Lee kind of illustrates how badly these two recent hearings have hurt Mann and the AGW zealots. I wonder if they get any sleep at all for the time being. Well, I certainly sleep very, very well, so that’s exactly what I’m gonna do right away!

Goodnight folks, and to Lee – calm down; of course you’ve lost it anyway and there is nothing you can do about it!

“From Drudge:
July Heat Wave Could Set National Record
104 Temps Possible On Wednesday
Heat Taxes Utilities, Human Endurance
New York declares emergency as heat wave bakes East Coast
Cities Conserve Power Ahead of Heat Wave

People are dying of heat complications and farmers and ranchers are losing their livelihood to drought.

It’s obvious that all the evidence, however imperfect, points toward little late-Holocene climate variability.”

More aptly, it points toward July/August weather variability.

“Ultimately this discussion of the statistical nuances of proxy reconstructions will be a very, very, very small footnote in the history of climate change, its only significance being one more piece of the puzzle that explains how we took so long to take action on this issue… The degree to which the issue was neglected will seem inconceivable to our children and their children.”

Then shame on our children and their children for neglecting the issue. Thanks to people like Steve, they may have the facts upon which to base decisions.

fFreddy, Steve leveled harsh criticism at the NAS board for pointing to subsequent corroborating studies after saying that bristlecones can’t be relied on, and goes on to detail that as a particular kind of error – he gives details of how people would be laughed off the stage if they relied on data that they had already said can’t be used.

He did NOT say that they point in part to a class of corroborating studies that ARE NOT SUBJECT TO THAT ISSUE. Or that the two classes of studies give similar results. The truth is that there is a class of studies that have a problem that may or may not matter to the overall conclusion in the broad field, and a different class that does not contain that error but gives similar results. In context, given the corroboration by papers without that error, it makes sense to refer to those subsequent studies as supporting. In the engineering cases Steve proffers as an analogy, this would make a difference. The analogy becomes: “here is a bridge built based on structural analyses with a particular kind of mistake. Here are some analyses without the flaws, and they give strength results consistent with the flawed analyses.” That is a qualitatively different kind of issue than the one Steve paints.

Steve is not writing to a closed audience (is he?). This is a public blog, and he is making VERY public arguments and claims. He is making those very serious claims in many cases about the entire field – he does so in his response to me, where he talks about “the entire field.” His criticisms are broad, and in that context, when he levels criticisms against a subset, he DOES need to specify that subset. Especially since people go on as if they DO apply to everything – as do some responses in this thread.

Yes, my post was a bit of a rant – this gets irritating for the reasons I explained in that bit of a rant. I get tired of it. Steve is making serious claims on a very serious issue, getting national and international play – and simply failing on some very basic levels at communicating why the issues matter to the broader conclusions. I’m here honestly to see if and how those issues matter; Steve keeps saying it matters; Steve gets policy-input play – and that comes with an obligation, IMO.

Re#53 gbalella,
I wouldn’t worry too much about our children and our children’s children; they will do what humans have always done in the face of climate change: they will ADAPT or die. Too many people here on earth anyway; maybe it’s a good thing all this gloom and doom will befall us, punishment from the gods. Or you could invest heavily in space technology to get us off this rock. I suppose, on the other hand, I bet land is still cheap in northern Canada and other permafrost areas. Maybe investing in farm equipment to feed all the starving people? Or beach resorts in Vancouver, BC and Alaska. Or……

Can you name just one other well-known published paper in a significant field of science other than climate change in which the data and methodology are kept secret or just discarded, and fellow researchers rally round to defend it and the authors when it is challenged? We have more transparency in our local jazz club!

In what way have those who question the science of Climate Change impeded those who believe that global warming is largely man-made?

Here in the UK all major parties say they believe human emissions are causing global warming and we have a government with a majority large enough to force through any law they want. Yet they have taken no substantive action to actually curb (as opposed to trading) GHG emissions, which are rising. The reality is that the commitment to action amongst the general public is not there. We nearly had a general strike when the government tried to increase tax on fuel. Canada, Greece, Spain and most other Kyoto signatories have no chance of meeting their commitments.

Since we are showing no likelihood of doing what the “science” says we must what harm can be done by taking a good look at the “science”?

Hans, drop the tendentious straw-man posturing. I have said several times here that I am, at least for now, treating the dendro reconstructions as if they have no weight. This is an important issue – if we cannot in principle derive useful data from such reconstructions, we need to know that. If we can, we need to get it right. My frustration with Steve’s continued detail-picking stems from my view that it isn’t getting us closer to EITHER of those answers. And my insistence on holding Steve to high levels of care in his work and words derives from the importance of one of those two answers, not from some kind of zealous adherence to a preconceived point of view that you so readily attribute to anyone who brings a criticism to bear here.

And re 55 – bruce, please. Where have I EVER said “the real climate scientists know what they are doing. Trust them.”? I HAVE acknowledged sources of noise in the dendro data (which is what you are essentially listing) and acknowledged the question of whether that noise can be dealt with adequately. Your listing of some sources of noise does NOT address that question; it simply poses it. Again, as I said to Hans – please drop the misrepresentations of my position and the straw-man posturing.

People are dying of heat complications and farmers and ranchers are losing their livelihood to drought.

Do you have any specific ideas on what it would take to avoid these situations, gbalella? Any concerns with cold complications if warming could be stopped eventually or in its tracks? It is good that you are concerned but that in itself will not fix the problem.

You can settle the issue any time by showing all those goofy, scoundrelly, unethical, biased climate scientists how to publish a proper proxy study. That you avoid doing this and persist in ankle-biting all their hard work tells me all I need to know.

Lee has a good point and the poster who came after and chided him (but said that he wants to see more Stevian papers) is implicitly endorsing him as well. I hope Steve is not slighting real academic paper publishing to work on a book.

You can settle the issue any time by showing all those goofy, scoundrelly, unethical, biased climate scientists how to publish a proper proxy study. That you avoid doing this and persist in ankle-biting all their hard work tells me all I need to know.

Yeah, it tells you that there’s enough work in just keeping those biased climate scientists in check.

There’s a simple fact that you obviously just don’t get: these proxies are not sufficient to reconstruct past climate. His so-called “ankle biting” is the only reason we know this.

If Steve is “doing vital work in showing that tree ring proxies are not valid”, then he should write a paper and get it published. So far I am not even convinced that Graybill and Idso were correct when they hypothesized that BCs were CO2-fertilized; there are other papers out there that indicate otherwise, including work done by Graybill himself. I am, however, willing to accept the hypothesis that they are proxies for moisture, which is heavily influenced by ENSO in that part of the country.

As far as I am aware, Steve has not done ANY work “showing that tree ring proxies are not valid.” He has made arguments about the selection of tree ring data series, and about the statistical analysis of proxy data – but neither of those addresses in any way the underlying validity of tree ring proxies per se.

Your article here hammers the NAS for referring to subsequent studies after saying that bristlecones are problematic, and leaves the very clear impression that they had no grounds to be doing so. You now say, when asked, that there are subsequent studies not subject to this criticism. Your article leaves (at the very least) a false impression. And it is an important, central distinction. If you had said that not all subsequent studies are free of that issue, but some are, it would make a HUGE difference to your argument.

Lee, your super sensitivity to these issues and, Heaven forbid, discussions in a public blog, I think allows you to view this article out of context.

I will wing my post much as I assume you did yours – and in public. As I recall, NAS commented that there were 13 proxy temperature reconstructions going back more than 400 years. They also noted that reconstructions (not just those of Mann and MBH, but all reconstructions) are just not sufficiently precise to say much about temperature beyond 400 years ago. They then somehow seem to conclude that there is a consensus among the reconstructions going back 1000 years and longer that allows them to say that the late 20th century could have been (or some other modifying phrase much more qualitative than quantitative) the warmest in 1000 years. They warn of problems with all reconstructions and then note that taking all these problematic reconstructions together somehow gives (my word, not theirs) a consensus.

They go on to specifically warn against the use of bristlecones, and as Steve M has noted, 10 of the 13 reconstructions use bristlecones. If the NAS was looking to a consensus of proxies and yet warning against what 10 of 13 used in their proxies, I would think that would cast some doubt on NAS’s consistency.

Steve M. informed you of the other 3 proxy reconstructions and that problems he had found with them can be found reported and discussed at this blog. Methinks you do protest too much, Lee and to the wrong source. Certainly NAS is more public than this blog.

Heat waves happen. July 1936 was the hottest July on record in the US. Maybe this July will equal or pass it; it was bound to happen eventually. Was the 1936 heat caused by manmade Global Warming?

Comment by JoeBoo

So no matter how bad the heat wave or how strong the hurricane or how severe the drought or how hard the rain or how high the sea rises or how big the forest fires… it’s all normal because it’s “all happened before”.

So no matter how bad the heat wave or how strong the hurricane or how severe the drought or how hard the rain or how high the sea rises or how big the forest fires… it’s all normal because it’s “all happened before”.

Lacking direct cause and effect, yes.

You think that being objective?

No, not just objective, that’s being a scientist. Running to unsubstantiated conclusions based on flawed science is actually the antithesis of “objective,” in case you never learned.

Lee, the studies illustrated by NAS in their spaghetti graph ALL use bristlecones/foxtails: Mann and Jones 2003; Esper et al 2002; Hegerl et al 2006 and Moberg et al 2005; as well as Osborn and Briffa 2006 which was illustrated. So my comment about the NAS illustrations was correct. The two reconstructions that do not use bristlecones – Briffa 2000, D’Arrigo et al 2006 (which are almost identical networks in the MWP) – were not illustrated by NAS.

Had NAS used the other two studies in their spaghetti graph, I would have expressed the point in different terms. BTW I’ve shown the problems with Briffa 2000 on this blog and submitted them to the NAS panel. They expressly noted that small subset non-robustness must be allowed for, which can be applied for these studies. I distinguished Moberg because I don’t think that the bristlecones are what specifically compromises it. However in order to prove that, you have to do the calculations, which I haven’t done yet. I’m only human.

So, Lee, I think that you owe me an apology since my remarks as I expressed them in terms of the NAS panel are valid.

Ken, this also leaves out important parts of the report:
“They also noted that reconstructions (not just those of Mann and MBH, but all reconstructions) are just not sufficiently precise to say much about temperature beyond 400 years ago. They then somehow seem to conclude that there is a consensus among the reconstructions going back 1000 years and longer that allows them to say that the late 20th century could have been (or some other modifying phrase much more qualitative than quantitative) the warmest in 1000 years. They warn of problems with all reconstructions and then note that taking all these problematic reconstructions together somehow gives (my word, not theirs) a consensus.”

If you read even the summary, they point to supporting qualitative, and non-dendro quantitative, evidence (in language they do not qualify) from multiple parts of the world that supports the idea that late 20th century warming is unique on millennial time scales, and in the context of that additional NON-DENDRO supporting evidence, they say the conclusion of unique millennial-scale temperatures is plausible, while explicitly ruling out the dendro reconstructions as the source for that conclusion. Hell, that has been a theme here of late – that the NAS said that the dendro reconstructions don’t tell us anything on millennial time scales. Given that, how can you also argue that the NAS report relies too much on the reconstructions for millennial conclusions – they simply do not.

George, I’ve already known all I need to know about you for years past. You can only quote others and have no clue about the actual science. And just as those you quote have taught you to say it, you have no idea whether there IS a way to do a “proper” proxy study. Just as showing how a psychic cheats doesn’t prove that there IS a way to teleport matter or call back the dead. Get a clue before you mutter more inanity like this!

Steve, I just went back and looked at that chapter. They present that spaghetti graph, with only four studies, IN THE MIDDLE OF THE TEXT WHERE THEY ARE DISCUSSING THE PROBLEMS WITH RECONSTRUCTIONS. In your article you say they “bring in other evidence” – a “half-dozen other reconstructions” that use bristlecones – without testing for the impact of bristlecones – obviously not referring to that graph, which has only four reconstructions.

Among the problems with the reconstructions that they list on the same page as the graph, they explicitly list your criticism of the choice of bristlecone proxies (page 106), and they discuss that issue elsewhere as well. They go on to discuss other, more recent studies, including several not in that graph, which include some of those you distinguish in your answer to me. And they specifically say that the early Mann paper was dependent on western Great Basin series – I will agree that here they could have specified that this included the bristlecones. They pointed to problems in the data, and concerns that must be taken into consideration – IN THE CONTEXT OF EXPLAINING THAT THESE STUDIES AREN’T ADEQUATE FOR MAKING MILLENNIAL CLAIMS. They say that the agreement makes the millennial claims plausible – which, in this chapter and context, I read as saying that the identified errors don’t demonstrate that the millennial claims are false and that it is NOT the warmest decade, but only that those studies aren’t adequate support for that claim.

IOW, you are blasting them for failing to further analyze how this affects the millennial claims of a set of results THAT THEY ALREADY SAID CAN’T BE RELIED ON for millennial claims.

I stand by my argument that your article, as written, leaves a misleading implication that all subsequent studies include problematic bristlecone data series – your article was not restricted entirely to that spaghetti graph, and you explicitly mention other data and studies.

I also stand by the remainder of my post, about the importance of publishing, describing your ideas completely, in context, and exposing them to criticism and rebuttal in the formal literature.

#88. Lee, one of the non-dendro things that the NAS panel relies on is Antarctica. But their citation completely fails to support their point. I’ve posted up on this – do you have any response on this?

I’ve got some notes on Quelccaya organics which I’ll post up some time. It’s not black and white by any means.

As I’ve said repeatedly, I intend to submit some articles for publication. However, remember that it’s taken me over 2 years to get any of Esper’s data, which I got after submitting to the NAS panel; it is essential for what I need to do, and I’m still missing other data.

Some of the proxies have been relied on to conclude that there was a LIA, MWP, Dark Ages climate minimum, Holocene maximum etc.

The problem is there are some proxies that are more reliable than others. I don’t think tree rings are good proxies at all because they can be affected by so many other variables other than temperature.

But you can use tree rings when you have really good evidence that a particular species of tree only grows in specific temperature conditions. You can use tree rings that emerge from underneath glaciers to determine that at a specific time, trees were growing in the location where the glacier now is (more common than you might think). Some trees no longer grow in Sweden because it is too cold, but their fossilized remnants can be used to see how much warmer it used to be in that part of Sweden than today (assuming you can date the sample properly and don’t use data selection again).

Many of the ice core samples are based on solid science of atomic isotopes. The shell data and the diatom data from lakebed cores etc are based on solid science.

The anecdotal reports (and actual paintings) of people skating on the Thames river are fairly solid proxies. It has not frozen over since 1751 even though for several decades it froze over regularly for most of the winter.

So we should continue using proxies wherever possible (they are, after all, the only data we have got). But every proxy data series must carry warnings about how reliably the series reflects temperatures. It must also contain estimates of variability caused by factors other than temperature.

Using tree rings to say today’s climate is warmer than at any time in the past 1,000 years is pure lying. To take that a step farther and say 1998 is the warmest YEAR in 1,000 years is absolute hogwash. Yet these conclusions made it into the IPCC Third Assessment Report and were featured prominently in it (even though the IPCC is supposed to be the ultimate form of peer review).

“You can settle the issue any time by showing all those goofy, scoundrelly, unethical, biased climate scientists how to publish a proper proxy study. That you avoid doing this and persist in ankle-biting all their hard work tells me all I need to know.”

This is eerily similar to the “Chicken Hawk” argument, and just as specious.

Perhaps we should coin a name for it. How about the “Woodpecker” argument?

Lee, I excerpted the following from the NAS report. Please excerpt what you were referencing. The wordsmithing of NAS is perhaps what leaves you with your interpretation and me with mine but that does not change my point.

Less confidence can be placed in large-scale surface temperature reconstructions for the period from A.D. 900 to 1600. Presently available proxy evidence indicates that temperatures at many, but not all, individual locations were higher during the last 25 years than during any period of comparable length since A.D. 900. The uncertainties associated with reconstructing hemispheric mean or global mean temperatures from these data increase substantially backward in time through this period and are not yet fully quantified…

…The basic conclusion of Mann et al. (1998, 1999) was that the late 20th century warmth in the Northern Hemisphere was unprecedented during at least the last 1,000 years. This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature proxies and pronounced changes in a variety of local proxy indicators, such as melting on icecaps and the retreat of glaciers around the world, which in many cases appear to be unprecedented during at least the last 2,000 years. Not all individual proxy records indicate that the recent warmth is unprecedented, although a larger fraction of the geographically diverse sites experienced exceptional warmth during the late 20th century than during any other extended period from A.D. 900 onward.

Look at the graphs on page 2 of the NAS report and contrast the Mann HS with the other reconstructions, particularly Moberg’s and Esper’s. The HS shows little early variation in temperatures (pre-anthropogenic) while those of Moberg and Esper show much variation. The case for natural variation is evident in Moberg and Esper while nearly absent from the HS. It is more critical to the argument for AGW to explain this large natural variation than to argue the case that we have unprecedented warming over the past 25 years.

re 92: No, Steve. If those drawings were already out there (these are), and the engineers included the drawings that they were arguing should not be relied on (and the NAS said that the millennial conclusions of those reconstructions cannot be relied on), then it would clearly not be a breach.

My father was an engineer. I’ve seen an investigative report he was involved in, that included in an appendix all the construction drawings, clearly substandard (it failed), for the subject levee system and canal. Not only was that not a breach of professional conduct – it was required as part of his proper professional conduct.

Once again, that spaghetti graph was of reconstructions that the NAS said in that very chapter COULD NOT BE RELIED ON TO SUPPORT THE CLAIMS BEING MADE. They weren’t supporting the graph they showed – they were saying the graph could not support the claims being made for it.

California AG Puts Climate Skeptics on Trial
By Steven Milloy
August 1, 2006
California Attorney General Bill Lockyer is apparently trying to position California as a leader in the movement to silence scientific debate.

The State of California has filed a request in federal court to force auto makers to disclose all documents and communications between the companies and the so-called “climate skeptics.” California accuses the climate skeptics of playing a “major role in spreading disinformation about global warming.”

The underlying litigation is a lawsuit by General Motors, DaimlerChrysler Corp., and the Association of Automobile Manufacturers against the state of California challenging the state’s greenhouse gas emissions limits for new cars, light-duty trucks and sports utility vehicles (Central Valley Chrysler-Jeep Inc. v. Catherine Witherspoon, No. 04-6663).

California has been joined in the lawsuit by environmental activist groups including the Sierra Club, Natural Resources Defense Council and Environmental Defense.

In a pre-trial discovery motion, California and the environmental groups asked for:

All DOCUMENTS relating to any communications between YOU and these individuals, and
All DOCUMENTS relating to YOUR relationship (or the relationship of any automobile manufacturer or association of automobile manufacturers) with any of them, including but not limited to payments directly or indirectly from YOU or any other automobile manufacturer or association of automobile manufacturer to any of them.
The state then goes on to quote from Ross Gelbspan’s book entitled, “The Heat Is On”:

Ever since climate change took center stage at the 1992 UN Conference on Environment and Development (UNCED) in Rio de Janeiro, Pat Michaels and Robert Balling, together with Sherwood Idso, S. Fred Singer, Richard S. Lindzen, and a few other high-profile greenhouse skeptics have proven extraordinarily adept at draining the issue of all sense of crisis. They have made frequent pronouncements on radio and television programs, including a number of appearances by some of them on the Rush Limbaugh show; their interviews, columns, and letters have appeared in newspapers ranging from local weeklies to The Washington Post and The Wall Street Journal. In the process they have helped create a broad public belief that the question of climate change is hopelessly mired in unknowns….

The tiny group of dissenting scientists have been given prominent public visibility and congressional influence out of all proportion to their standing in the scientific community on the issue of global warming. They have used this platform to pound widely amplified drumbeats of doubt about climate change. These doubts are repeated by virtually every climate-related story in every newspaper and every TV and radio news outlet in the country.

By keeping the discussion focused on whether there really is a problem, these dozen or so dissidents, contradicting the consensus view held by 2,500 of the world’s top climate scientists, have until now prevented discussion about how to address the problem.

California then asserts that:

As set forth above, Defendants are entitled to review the documents most likely to contain internal dissent at the manufacturers and the most likely such documents are those dealing with the tactics of entities like the GCC and individuals like the “climate skeptics.”

The automakers responded by stating that:

The so-called “climate skeptics” are not on trial in this case, and the court should resist defendants’ attempt to put them on trial. Nor does this case require the court definitively to resolve questions regarding “GLOBAL WARMING” writ large. At most, as Plaintiffs have stated before and will state again at the risk of redundancy, the only relevant issue in this case with respect to global warming is the much narrower issue of what impact, if any, the A.B. 1493 Regulations will have on global warming. To adjudicate this issue, the court will need to assess the greenhouse gas reductions that the A.B. 1493 Regulations will cause and then compare these reductions to the proffered experts’ view about how much this level of reduction will affect the global climate. In the context of this battle-of-experts, Defendants’ attempt to plumb the plaintiffs’ files for documents regarding Defendants’ hit-list of “climate skeptics” is beside the point.

There are at least three points to make here.

First, California and the global warming lobby don’t like what the skeptics have to say and, by virtue of this sort of intimidation, are apparently out not only to silence the skeptics but to make sure that no one dares support the skeptics lest supporters be implicated as aiding and abetting thought-crimes against California-approved, politically correct global warming science.

Next, I wonder whether Attorney General Lockyer disclosed to the judge that Gelbspan is a rather dubious character — for example, he misrepresented himself as a Pulitzer Prize winner on the jacket of his book, entitled “The Heat Is On.” Gelbspan never won a Pulitzer, nor was he ever even nominated. Click for more on Gelbspan

Finally, AG Lockyer has a track record of trying to silence scientific debate. In 2001, for example, the pro-gun control Lockyer gagged California state experts who opposed Lockyer’s dubious plans for pre-sale ballistics fingerprinting.

The so-called “climate skeptics” are all that stand between junk science-based global warming alarmism and higher energy prices, reduced economic growth and increased Green political power.

Steve, I just went back and looked at that chapter. They present that spaghetti graph, with only four studies, IN THE MIDDLE OF THE TEXT WHERE THEY ARE DISCUSSING THE PROBLEMS WITH RECONSTRUCTIONS. In your article you say they – “bring in other evidence” – a “half-dozen other reconstructions” that use bristlecones – without testing for the impact of bristlecones – obviously not referring to that graph, which has only four reconstructions.

First of all, the quotes “bring in other evidence” and “half-dozen other reconstructions” are from the Sciencemag article, presumably quoting or paraphrasing North.

The spaghetti graph (S-1) on Page 2 of the “Report in Brief” has, wait for it, SIX reconstructions. Of these only four go back further than 1500, and they all use bristlecones.

Steve, are you saying that the NAS told the committee that those graphs were accurate enough going back 1000 years that millennial interpretations could be made?

James, the paragraph that Steve wrote attributed those quotes, at least implicitly, to “the NAS panel.”
–
…the due diligence of the NAS panel was so negligible and slight and that they relied on mere literature review for so much of their study. It’s ludicrous for them to say that bristlecones should be “avoided” in temperature reconstructions and then to “bring in other evidence” – a “half-dozen other reconstructions” that use bristlecones – without testing for the impact of bristlecones on these reconstructions.
–

Not only that, Steve was referring to publication in the NAS report of that graph, and hammering them for publishing it at that place. In context, in the chapter on the reconstructions, that graph is showing the reconstructions that they are criticising.

DAMMIT, folks. You CAN NOT average the data from multi-proxy studies, because the dating errors in the individual studies tend to smooth out the average and make it look like a hockey stick shaft. It is really very simple. Some folks are trying to make this too complicated. See my favorite reconstruction.
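The smoothing effect described in this comment can be sketched numerically. The following is a purely hypothetical illustration (synthetic data, not any actual proxy series): several copies of a common signal are shifted by independent random dating errors, and the average of the misdated copies comes out flatter than the signal itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration (not real proxy data): a shared "true"
# temperature signal with a pronounced warm anomaly around AD 1100.
years = np.arange(1000, 2000)
true_signal = 0.5 * np.exp(-((years - 1100) / 60.0) ** 2)

# Each simulated proxy series records the same signal, but with an
# independent dating error of ~80 years (standard deviation, illustrative).
n_series = 10
shifts = rng.normal(0, 80, size=n_series).astype(int)
series = [np.roll(true_signal, s) for s in shifts]

average = np.mean(series, axis=0)

# Averaging the misdated series smears the anomaly across years:
# the peak of the average sits well below the peak of the true signal.
print(f"true peak {true_signal.max():.2f}, averaged peak {average.max():.2f}")
```

The numbers (anomaly size, dating-error spread, series count) are arbitrary; the point is only that averaging series with uncorrelated dating errors attenuates any shared excursion, producing a flatter "shaft" than any individual series contains.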

Not only that, Steve was referring to publication in the NAS report of that graph, and hammering them for publishing it at that place. In context, in the chapter on the reconstructions, that graph is showing the reconstructions that they are criticising.

The spaghetti graph on page 2 of the Report in Brief is not presented in the context of criticism of the reconstructions. In fact, the Report in Brief is quite approving of them and does not mention any caveats regarding the bristlecones. Try reading the first two complete paragraphs on page 3.

BTW, I thought you claimed that the “half-dozen” reference didn’t refer to the spaghetti graph in the first place.

re 97: Ken, the copy of the report that I have been accessing does not allow me to copy from it. I really would rather not retype – I gave the page cite.

That was my test of your seriousness on this issue, knowing full well that one has to retype it to do excerpts. Cite a page and a paragraph and I’ll retype it and then discuss it. By the way, you often write in a style not unlike that used in the NAS report.

I hope Steve is not slighting real academic paper publishing to work on a book.

Lordy, Lordy, TCO, the problem with the internet is that I cannot see whether you are really serious with these comments. I am guessing that Steve M is having fun doing what he is doing, but is not out to save the world by way of your advice. If he does write a book and then makes a movie from it and chooses to have you portrayed, I am betting your character will be based on Dana Carvey’s Church Lady.

Gawd, I notice that nobody had looked at the Graybill report on carbon fertilization experiments which he did. (Sorry, I don’t have a cite but you can all use Google scholar, can’t you?). Essentially what was found were CO2 fertilization effects at 650ppm, but only for young trees (these were some species of orange tree if I recall correctly).

The BC samples were taken from old, strip-bark specimens according to Steve (I don’t have access to Graybill and Idso 1993). The NAS said that strip bark samples should be avoided; however, Steve also lumps in all specimens related to bristlecones, including relatives which do not grow in a strip-bark form except in response to natural perturbations, such as the foxtails, which only grow in a strip-bark form in response to lightning strikes and then only in the damaged area. This can be said of many other long-lived species which grow in lightning-prone areas. In fact, all of the foxtails grow in a completely different climate regime from bristlecones. The two populations are found at the upper treeline on the west-facing slopes of the Sierra Nevada in and around Sequoia NP and near treeline in the much better-watered Trinity Alps. Steve has never bothered to break out these populations and do an analysis on them.

AFAIK, Moberg tried to look at long-term signals, depended on many fewer tree ring samples, and used a completely different type of statistical analysis (from this site and discussions on whether the wavelet analysis was correct or not…). He came up with a variability that was greater than most of the recons and which was definitely outside of the 2-sigma error bars which Mann put on his original study, both on the upside and on the downside.

Finally I found it interesting that Steve was the only witness at the 7/27 hearing who felt that the current warming was not unprecedented, even Christy — who I believe is an honest skeptic — did not agree with him. This does seem to put him out on the fringe…

Finally I found it interesting that Steve was the only witness at the 7/27 hearing who felt that the current warming was not unprecedented

Sorry JMS, your recollection is faulty. The question, IIRC, was whether anyone *disagreed* that {current warming is unprecedented at a “level of confidence” of “confident”}.
Believing that the statement is “not proven” or even that the level of confidence is instead “plausible” means you disagree with the Congresswoman’s statement.

JMS, Steve did not say that he felt that the current warming was not unprecedented. The question put was (paraphrase) “do you disagree that it is unequivocal that current warming is unprecedented?”. He made his disagreement quite clear – (paraphrase) “I don’t know if it’s unprecedented”.

IMO all of the participants should have disagreed with the statement. There is no unequivocal evidence that current warming is unprecedented.

(Note, I have paraphrased as the transcript is not yet available. I’m pretty sure I have the sense of it right, including the use of the term “unequivocal”.)

This is a good example of Steve’s post at the top of the thread about people misrepresenting what was actually said.

Gawd, I notice that nobody had looked at the Graybill report on carbon fertilization experiments which he did. (Sorry, I don’t have a cite but you can all use Google scholar, can’t you?). Essentially what was found were CO2 fertilization effects at 650ppm, but only for young trees (these were some species of orange tree if I recall correctly).

Without a cite, I think you’re probably mixing some things together. Idso, who published with Graybill about the BCPs, has done lots of experiments on various trees about the CO2 fertilization effect. His big study was on a mock orange of some sort, I think. Anyway, while the highest effect was with young trees, it also continues on older trees.

Steve has talked a lot about the situation with the BCPs used in the climate reconstructions and I believe he’s stated that the Hockey Stick is only found in the strip bark trees. I.e. the regular trees either don’t have it or at least on average don’t.

The BC samples were taken from old, strip-bark specimens according to Steve (I don’t have access to Graybill and Idso 1993). The NAS said that strip bark samples should be avoided; however, Steve also lumps in all specimens related to bristlecones, including relatives which do not grow in a strip-bark form except in response to natural perturbations, such as the foxtails, which only grow in a strip-bark form in response to lightning strikes and then only in the damaged area. This can be said of many other long-lived species which grow in lightning-prone areas. In fact, all of the foxtails grow in a completely different climate regime from bristlecones. The two populations are found at the upper treeline on the west-facing slopes of the Sierra Nevada in and around Sequoia NP and near treeline in the much better-watered Trinity Alps. Steve has never bothered to break out these populations and do an analysis on them.

Actually you do have access to Graybill and Idso, as it is posted up on this site in a pdf that I obtained: http://data.climateaudit.org/pdf/graybill.idso.1993.pdf. Graybill’s collection, all selected for strip-bark, includes foxtails. He specifically mentions collecting strip-bark foxtails from the Sierra Nevada. He also mentions strip-bark limber pine sites.

BTW an additional issue is that the problematic sites were all collected by Graybill. In one case, as I’ve posted up, Woodhouse collected the same species within a few km of a Graybill sample and got completely different results – this site is within a 45 min drive of UCAR world headquarters, but nobody’s bothered updating the information in 20 years.

How on earth would any of the people at the table be qualified to “know” if the current warming was unprecedented? There’s no evidence that Gulledge or Cicerone have studied the proxy data and Mann is Mann.

re 109: the graph on page 2, IMO, is showing WHAT THE REPORT IS EXAMINING. The data on that graph includes the four series in the later graph that Steve describes and we have been discussing, plus borehole, glacier and instrumental data. That latter data is not from dendro reconstructions, and has nothing to do with bristlecone pines.

And on page 3, the three bullets on the top of the page, it explicitly limits what interpretations can be made from that data, in the ways that I have indicated. The details of the reasons why are contained in the remainder of the report.

They go on to say that this data is supported by additional large-scale reconstructions – they do not limit to the series with bristlecones – and by other kinds of non-dendro data.

Again, given that studies using other kinds of methods are consistent, and studies not using bristlecones are consistent, and the fact that the NAS is already limiting the claims made from the dendro studies, and non-dendro results also support the basic contention, I find Steve’s outrage that they didn’t redo each study to see what happens if bristlecones are removed rather over the top.

Reid, you may be right in that this could backfire, provided the climate skeptics get a full public hearing, but this seems unlikely to me at this time. It would be great if this public hearing could be made to happen.

The AG has asked for all correspondence between climate skeptics and the automakers – he may be on a fishing expedition to see if he can unearth some funding ties. However, it is equally likely that he is trying to smear and intimidate these highly capable scientists. He is also grandstanding for those who see a corporate conspiracy under every Bush.

Lee, I did not write the article in post #99, except for the introductory ~30 words. If I had written that article, it would have been much more scathing in its criticism of Lockyer.

Of course he is, Lee. But what does getting information from the automakers about correspondence to/from skeptics have to do with a case where the State of California is being sued for imposing laws that are left to the Federal government?

He should be defending his client, not turning it into a political fiasco on a separate issue.

I’ve got some notes on Quelccaya organics which I’ll post up some time. It’s not black and white by any means.

Comment by Steve McIntyre

Steve,

This IS NOT an exact science. This is not engineering. That you can nit-pick each and every publication on some fine detail and assume the whole results are invalid is disingenuous and completely unimpressive.

If we were playing darts you’d be right most of the time to bet on any given throw that I wouldn’t hit the bullseye, even though the space containing the bullseye is more likely to be hit than any other similar-sized area on the board.

All you are doing is pointing out that this is not an exact science. But none of your results changes the fact that the best evidence we DO have suggests minimal late Holocene climate variability when compared to the present and projected warming.

As I’ve said before, there is NO evidence to suggest any more variability in the late Holocene climate than what we see in the spaghetti graphs. Thompson’s latest piece adds a huge, independent, and impressive confirmation to this evidence. Your initial critique is unimpressive and simply borders on cynicism. I’m looking forward to your explaining away the rooted 5,000-year-old Quelccaya plants.

Steve, in what way is the fact that you think “the antarctic claims” (which in fact are only a subset of the points about Antarctica) aren’t supported, or that you’re going to write an article on Quelccaya organics, in ANY way responsive to the point under discussion?

The HS shows little early variations in temperatures (pre-anthropogenic) while those of Moberg and Esper show much variation. The case for natural variation is evident in Moberg and Esper while nearly absent from the HS. It is more critical to the argument for AGW to explain this large natural variation than arguing the case that we have unprecedented warming over the past 25 years.

Comment by Ken Fritsch

Moberg DOES NOT show natural variability comparable to what is happening and expected to happen with AGW. We’re talking a net difference of 0.8 C over 600 or 700 years from the peak warmth of the MWP to the maximal cooling of the LIA as seen in Moberg. How does that compare to a 2.0–3.0 C increase in just 100 years?

I suggest you look at the Moberg graph, add 100 years to the x-axis and 3 C to the y-axis, and get an idea of what we are in for. It’s a hockey stick alright, but the blade will be the x-axis.

re 109: the graph on page 2, IMO, is showing WHAT THE REPORT IS EXAMINING. The data on that graph includes the four series in the later graph that Steve describes and we have been discussing, plus borehole, glacier and instrumental data. That latter data is not from dendro reconstructions, and has nothing to do with bristlecone pines.

Lee, you seem to have lost the plot. The report in brief has exactly one chart, and this is the spaghetti chart that includes six reconstructions, of which only four go earlier than 1500. These remaining reconstructions all include the bristlecones. If, as you contend, this was simply an illustration of criticisms of these reconstructions, why is it the only chart presented in the “report in brief”?

Quite clearly, these are the “half a dozen” reconstructions referred to by North in the top posting (which you seem to have conceded). Of the four pre-1500, all include the bristlecones.

The borehole and glacier reconstructions do not extend beyond AD 1500.

re #123 gbalella, have you followed the thread on the Thompson et al. 2006 PNAS article on the Andean glaciers? There’s a discussion there of the organic matter evidence. There is also a discussion there of the oxygen isotope data and its interpretation. I’d be interested in your views on this.

The radiocarbon dates are not conclusive. All they say is that 5000 years ago the glaciers had retreated sufficiently for plant growth. The data say nothing about the possibility of retreats in the intervening 5000 years. There is a nice discussion between Lee and myself on this matter.

There is also another interesting article in the PNAS, June 2006, dealing with Andean glacier advance and retreat over the past several millennia. Unfortunately I don’t have a subscription, or easy library access to this journal. However, the abstract indicates that the Andean glacier system in Venezuela has certainly been dynamic in terms of retreat and advance over the past few millennia. I’ll post a full citation on the Thompson PNAS article thread.

Re #123 gbalella. No, that is exactly the problem. It is made out to be an exact science. The increase in temperature over the twentieth century is stated as being 0.6 C. No confidence levels are ever stated. Somehow we are to believe that, with tools never designed to measure GLOBAL warming, we know it to within 0.6 C over 100 years.

Exactly. And what is more, its consequences are VERY long term, possibly VERY serious, and its costs ASTRONOMIC. This makes it a unique field of human activity and would suggest to me that we should take much more care over the maths and science than we would over, say, mobile phones, nuclear power or medicine. It also makes Dr Wegman’s observations about social networks very pertinent.

Steve is doing what he can do. Others are doing what they can do.

For what it’s worth, I think Steve would be far better off writing a book than worrying about getting more peer-reviewed papers out. For a start he would make money, and he deserves to. He will reach a much wider readership and get more citations. Also he is far more likely to be a new Lamb than a new Mann.

The radiocarbon dates are in no doubt whatsoever. I’m not sure what makes you claim they are. The plant material would not have been preserved had there been intervening retreats, and likely younger plants would have been found. The evidence strongly suggests the glacier has NOT receded to its current extent for up to 5,000 years. This, combined with the melting of the 11,000-year-old Mt Kilimanjaro ice caps (yes, I understand the complexities of this) and a documented break-up of a 3,000-year-old Arctic ice dam, all suggests we are in an anomalous climate for the Holocene. And it should be noted that we’ve just begun.

The 18O data as well as the net mass balance histories also support each other and the dating of the 5,000-year-old plant material. Finally, the observations are confirmed to have similar trends in the Himalaya. The article is devastating for those who want dearly to believe in a variable late Holocene. If you think the late Holocene was variable, hold onto your hat, cause you ain’t seen nothing yet.

In my opinion, the complexities of Kilimanjaro are different than you think. The evidence that the Kilimanjaro glacier is 11000 years old is very fragile. You can probably locate my posts on that by googling climateaudit Kilimanjaro. Again this is another Thompson publication which is abysmally published in Reader’s Digest (oops, Science).

It’s not the radiocarbon dates that are in doubt in Quelccaya but how the evidence is construed. I’ll try to get to my post on Hormes et al 2001 in the Alps to show what I mean.

I happen to think the costs to respond are only great to the oil, gas and coal industries, who currently are sitting on trillions of dollars in future profits and have absolutely no reason to support a response. The idea that the costs are astronomical is hogwash perpetuated by the fossil fuel industry. Call me conspiratorial, but the evidence of what’s going on in our world to support oil interests is very obvious, and I would claim that rather than me being off-base, those who deny it are guilty of failing to do their due diligence with regards to their civic duty of ensuring democracy and true capitalism and a better future for those who follow us.

#130, 133. Let’s try to stick to scientific issues. In my opinion, if we need to do something, we should do it. That’s why it’s important that all these studies be carefully reported and analysed. If it were trivial, it wouldn’t be worth doing.

Globally, it is very likely7 that the 1990s was the warmest decade and 1998 the warmest year in the instrumental record, since 1861

That “7” is a reference defining what is meant by very likely. There are obvious error bars in the surface trend and the multi-proxy reconstruction graphs. The entire document is rife with explanations of the vast uncertainties. So I’m not sure what you mean. The 0.6 C of warming and the subsequent warmth is so well supported by multiple other lines of evidence that it really takes a cynical view to assume it’s far off from the truth. Have you looked at the weather map of the US of A recently? Just another coincidental heat wave with 60% of the nation in drought conditions???

re #131 gbalella, clearly you haven’t had the courtesy to read my discussion. Otherwise you would have seen that I clearly said there was no doubt about the 5000 year data! I simply question the interpretation. Had there been subsequent retreats, it is still entirely possible for organic material to survive, especially in a desiccated environment. Similarly, the date applies to a single plant site. No others were found. We are given no details of whether or not a systematic study was carried out to find other material… perhaps pollen, seeds, woody material in the sediment. The interpretation of a single result is naive. All that can be said is the glacier had retreated to this point 5000 years BP.

There is plenty of evidence of significant climate variability over the past 1000 years in the Venezuelan Andes (Polissar et al. 2006, PNAS, 8937-8942), with mean temperature fluctuations of up to 3.4 degrees C. I have on my desk an isotope record from Berkner Island in the Weddell Sea which shows significant Holocene climate variability, including the very late Holocene.

Your assertions are not backed by any credible evidence in the form of data, together with a rational interpretation.

Fair enough, but I’m not sure how you go from endless debate about minute statistical details to deciding we need to do something. The politicians are finding this “uncertainty” a reason to further delay a response when the overwhelming evidence of the big picture is quite clear. Ultimately, Steve, you are simply supplying fodder for the Robber Baron oil industry to continue its current course of astronomical profits at the expense of the future of the rest of the world.

OK, science then; well, conjecture based on science. Steve, picture things 100 years hence. Imagine the Moberg plot with a hundred more years of data and a rise in global temperature of 2.5 C. Looking at such a plot, how concerned are you about the possibility of that plot being accurate… of the global consequences if it becomes a reality?

Have you looked at the weather map of the US of A recently? Just another coincidental heat wave with 60% of the nation in drought conditions???

If it’s not a coincidence, then surely you can tie these events directly into greenhouse theory and the top-of-the-line climate models, can’t you? Let’s see… the Canadian model predicts worsening drought conditions across the plains and midwest, maybe similar to what we’re seeing now. Bingo. But the Hadley model says basically no net change over the entire 21st century. Oops. And then when you look at “summer moisture change,” things look a lot different. The Canadian model suggests some of the current drought areas should be much drier due to AGW, but the rest wetter. And the Hadley model suggests widespread and significant increases in soil moisture. So it’s kind of tough to reconcile those predictions with what’s going on.

I’m not sure what historical link you can find between global warming and drought here, but give it a shot. Maybe it’s just a coincidence that there’s no apparent historical link between global warming and droughts in the US.

#135 So then it is an exact science? You don’t have to be very cynical to assume that if a careful statistical analysis were done of the underlying data, à la McKitrick, McIntyre and Wegman, the variance might be increased from ±0.2 to ±0.6. BTW, thank you for pointing out the stated variance. I was wrong on that.

That ± error term is very significant. First, not only is it large (one third of the mean!), it may be larger still than what the “authorities” claim. That’s why you need an independent audit. Second, the true mean is not necessarily contained within that [0.4, 0.8] confidence interval. Third, without a clarification of the confidence level (99%, 95%, 90%, 66%?) the ± itself is uninterpretable. Fourth, if the 20th century confidence interval is this large, you can well imagine how much real uncertainty there is in the historical record. Again, that’s why you need an independent audit.
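The point about the unstated confidence level can be made concrete with a small sketch. Assuming, purely for illustration, a normal sampling distribution, the same ±0.2 half-width implies quite different standard errors depending on which confidence level was meant:

```python
from statistics import NormalDist

# Illustrative only: a half-width of +/-0.2 read at different
# (unstated) confidence levels implies different standard errors.
half_width = 0.2
for level in (0.66, 0.90, 0.95, 0.99):
    z = NormalDist().inv_cdf(0.5 + level / 2)  # two-sided critical value
    print(f"{level:.0%} confidence -> implied standard error {half_width / z:.3f}")
```

If the ±0.2 were a 99% interval, the implied standard error is far smaller than if it were a 66% interval, so an unqualified ± genuinely cannot be interpreted.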

There is no evidence for an “unprecedented 20th century warming trend”. Not unprecedented in 2000 years. Not unprecedented in 1000 years. Not unprecedented in 600 years. Not unprecedented in 400 years. Where will it end? The paleoclimatologists are continually being forced to back-pedal and revise their confidence statements as they are confronted with increasingly accurate estimates of error and uncertainty. The uncertainty is a reality they have sought to avoid, and even suppress. That’s why you need an audit.

I don’t want to get a yellow card from Steve, but you can’t complain here about the debate being endless. The HS debate would have lasted two weeks and ended three years ago in any other discipline. Correct me if I am wrong, but this is the first time any government has held hearings under oath to establish a comparatively straightforward mathematical principle.

What I feel disappointed by is that, at least in UK Climate Research circles, non-centred PCA was known to be flawed soon after M&M first published. I have just had a surf around our DEFRA website and have not seen a single hockey stick. They have moved on.

So these are YOUR proxies? You think these are more rigorous than proxies used in the published studies? I think you have double standards.

The difference is that I’m not trying to use these as proxies. I have no idea what the temperatures were back then, nor does anyone else. The fact that the Hudson had ice in it during many summers is obvious evidence it was MUCH colder than now, and that growing conditions globally were MUCH better than now is pretty good evidence of extreme conditions one way or the other. To speculate as to how extreme, however, is left to people such as yourself who seem to think bad evidence (proxies) is better than no evidence. There is no double standard.

#133 the computer you are using right now was developed, created and produced by fossil fuels. You should turn it off, then, if you really believe what you think.

Or how about instead looking at the reality of our modern world in a practical way, without emotional attachments and with a positive outlook on the future? Or do you like to operate in a state of mind clouded with fear? Or perhaps I just misunderstood you, because I thought you just said: forget the science, I know what to think is real about AGW, because I believe oil companies are rich and they shouldn’t be?

This “heat wave” is baloney. For instance, I grew up in Southern California, with smog alerts and temps of 100 degrees or more, and days we weren’t allowed to play outside. I was in NYC, Mass., Penn., Virginia and all the way down the east coast in 1976 for a bicentennial historical tour of the USA with my GS Troop. I was in DC on July 4th. I thought I was going to die, it was so hot and humid the whole month-long trip. We traveled all the way down the east coast and then all the way back to Los Angeles via the southern route… HOT HOT HOT. It was July, and it was summer!
Sheesh.

I’m not sure what historical link you can find between global warming and drought here, but give it a shot. Maybe it’s just a coincidence that there’s no apparent historical link between global warming and droughts in the US.

Actually, 20th century evidence suggests that droughts in the US are becoming less severe, fewer in number and less widespread in spite of global warming. Oh, and anecdotally, while Colorado was hit hard with a warm spring and very little moisture, we just got done with the wettest July in recent history. Given that it rained again the past two nights (contrary to forecast), August is looking to be more of the same. Northern/Central Colorado also had the snowiest winter since ski resorts have been accurately tracking (about 25 years). Breck and Copper both had 400 inches, about 130% of normal. The drought has moved to the southern areas of Colorado, which also recently had huge rainfalls.

Wlr, why do you have to take things to extremes? I know the reality of how and why I’m here sitting in front of this PC. I’m well aware of how it was made. BUT, I’m also still free (perhaps not if you were running things?) to wonder whether there is another way the human world might run.

What’s your problem? Why the snarky ‘like to operate in a state of mind clouded with fear’ lines (no, we don’t, any more than you do), the ‘feel good if they see anything that supports the belief in AGW’ (likewise), the ‘fantasy’ (no more than you?) stuff?

#145, I am not taking anything to extremes, and I was replying to another comment, not yours. Of course there are plenty of ways the world might run, but the reality is ONLY right here, right now, and what options we have and what we are up against in the year 2006.

We were also able to use technology to improve life spans, birth rates, childhoods, agriculture, stop diseases, improve motherhood, communication, travel and medicine… an endless list, and guess what? To err is human. I am tired of my children having to grow and thrive right now in this guilt-fest of AGW. My daughter was born in 1981. Why now is this such an issue, when her future and thoughts of a family of her own should be bright? I bet there are people like me and not like me spanning the globe who are just as sick of the guilt trips people like you seem to rely on for your reasoning about "life" and what it means. Why don’t you just yell "It’s Bush’s fault!" and get it over with? I apologise in advance if I get snipped.

Science doesn’t work via audits. Although I have no problem with an audit, the alternative, the historical method, is for some other researcher to publish their own multiproxy study that shows something significantly different from the existing data (i.e. the spaghetti graphs). Likewise someone could publish their own construction of the surface temperature anomalies and maybe find a trend significantly different from the GISS/CRU/NOAA trends. In both cases no contrary study has EVER been published. The standard in the past was to give credence to such a uniformity of studies as being the most likely factual scenario or the “state of the science”. This was the case until the political ramifications of the data came to be understood by the fossil fuel interests, who have since done a great job of swift-boating the science of climatology.

Turn off your computer. Unplug yourself from the grid, turn your back on gasoline and hoe that field young man.

Show us how it can be done.

Comment by ET SidViscous – 2 August 2006 @ 10:52 am

No leave the computer on…stay on the grid but supply it with your own solar powered panels that will likewise charge your electric or hydrogen car….yeah and hoe that organic fields….and bring the boys home and let the Middle East and the oil companies fend for themselves…….or we could just follow along with the authoritarian powers that be…I guess that’s your unimaginative road to be taken?…..Sid?

What???? This is the most naive statement I’ve read in a while. Maybe I should get out of the world of science more often.

In both cases no contrary study has EVER been published.

Because the concept of using tree-rings as proxies for temperature is bunk. The methods and data used to reconstruct past temperature have been falsified. That is all science is required to do. Saying that there needs to be “another reconstruction” presents a very narrow view.

Moberg DOES NOT show natural variability comparable to what is happening and expected to happen with AGW. We’re talking a net difference of 0.8 C over 600 or 700 years from the peak warmth of the MWP to the maximal cooling of the LIA as seen in Moberg. How does that compare to a 2.0–3.0 C increase in just 100 years?

There are some relatively “fast” cooling and warming cycles in Moberg and certainly these cycles are even faster (by 2X) in Esper. Remember also that the NAS report points out that using regression in the calibration step tends to reduce the variation in the proxies from the actual variation. If one foregoes regression they comment that calibration loses precision in estimating past temperatures.
Where do you obtain the 2.0 to 3.0 degree C increases in 100 years? Even if the variations occurred on a longer time scale and they occurred before the time that man was putting GHG from fossil fuel into the atmosphere someone would have to attempt to explain how and why they occurred.
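The variance-reduction point the NAS report makes is the standard attenuation of least-squares calibration: the noisier the proxy, the more the regression shrinks the reconstruction toward the calibration mean. A toy sketch with invented numbers (not any actual proxy data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
true_temp = rng.standard_normal(n)            # "true" temperatures, variance ~1
proxy = true_temp + rng.standard_normal(n)    # proxy = signal + equally large noise

# Calibrate by ordinary least squares: temp_hat = a * proxy + b
a, b = np.polyfit(proxy, true_temp, 1)
recon = a * proxy + b

# With a signal-to-noise ratio of 1, the slope is ~0.5 and the
# reconstructed variance is roughly half the true variance.
print(f"variance of true temps:     {true_temp.var():.2f}")
print(f"variance of reconstruction: {recon.var():.2f}")
```

The reconstruction systematically understates past swings — exactly the sense in which regression in the calibration step “tends to reduce the variation in the proxies from the actual variation”.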

Mark,
If I’m not mistaken, one of the longest, most severe droughts to hit North America in the last 1200 years occurred during the LIA. It was situated in the eastern third of the US, primarily the southeast US. It occurred during the initial colonization of Virginia, the Carolinas and Florida (15th–16th centuries). The drought was so severe that the initial Spanish settlers in the Carolinas abandoned their villages, and the settlers in Roanoke died.

One of the hallmarks of the LIA was its large variability in both surface temps and precipitation distribution. Droughts can occur in both periods of warmth and cold. Many people forget this point. Many people forget that the synoptic weather patterns that lead to extreme cold polar conditions in winter can be the same ones that lead to extreme dry, hot conditions in summer.

Re #148: “This was the case until the political ramifications of the data came to be understood by the fossil fuel interest who have since done a great job of swift-boating the science of climatology.”

The swift-boating of Kerry worked because he never released his military records. Those military records would have cleared Kerry if he was telling the truth. He chose not to release his records thereby allowing himself to be “swift-boated”. It is a close analogy to Mann and other AGW scientists who don’t share their data and methods. You don’t need the wisdom of Solomon to know that Mann isn’t cooperating with M&M because his science is junk.

Did you guys actually read that article in the OC Register? The episodic cooler ocean SURFACE temperatures are from upwelling – the variability between cold and warm water appears to be caused by alternating offshore winds and upwelling, and then onshore winds blowing warm offshore surface waters inshore to replace the cold water from the earlier upwelling.

None of this is directly related to the heat content of the oceans, which is what matters for AGW issues. It is a result of moving warm and cold masses of water around, not heating or cooling in situ.

Lee, I know, I just thought it tied in with the article at Roger Pielke Sr.’s site about upper ocean temps cooling between 2003 and 2005, which is an interesting thing because ocean temps had warmed from 1993 to 2003.

Lee,
You are correct. Upwelling is a normal oceanic phenomenon that occurs on the west coasts of most continents. I think what was surprising to many is how quickly the offshore currents warmed or cooled. I don’t live out there, so I was a little surprised that this could occur. I know that off the coast of Chile during El Niño years, the currents warm considerably, but that change takes weeks to occur.

“…no contrary study has EVER been published. The standard in the past was to give credence to such a uniformity of studies as being the most likely factual scenario or the “state of the science”. This was the case until the political ramifications of the data came to be understood by the fossil fuel interests who have since done a great job of swift-boating the science of climatology.”

And look at the history of the search for the cause of Ulcers and the recent Nobel Prize awarded to the Australian scientist that had to fight and risk his own health to get the correct answer heard. Those pesky oil companies!

The last time the Quelccaya ice cap had receded to its present position appears to have been over 5,000 years ago. The last time Mt Kilimanjaro melted completely was at least 7,000 and maybe 11,000 years ago. The last time Glacier National Park’s glaciers disappeared was… well, I’m not sure, but I’m guessing it was several thousand years ago. Now if you could find evidence that its glaciers were completely gone during the MWP you might have a point, but I’m betting that evidence doesn’t exist.

You’ve balked at all the evidence that suggests a stable last half of the Holocene and an anomalous contemporary warming, but provided ABSOLUTELY NO EVIDENCE TO THE CONTRARY.

re #168: gbalella, you are boring. Instead of parroting the same things over and over again, I suggest you actually read some literature. There you’ll find several studies indicating warmer-than-present conditions during the Holocene. You can start, e.g., from the places I linked here.

Coincidence, or global cooling? link No wait – because these are cold related events, they are simply "weather" and not indicative of "climate change." No wait – must be colder because of global warming. Global warming is teleporting heat from South Africa to the US, and exchanging moisture back from the drought areas of the US to South Africa. Or maybe what we’re seeing in the US and South Africa has nothing to do with anthropogenic climate change and that leaping to any conclusions is absurd.

re 165 – the two are not mutually exclusive. If anthropogenic (or otherwise) global warming is happening, it is not going to be doing so by some kind of novel and brand-new mechanism – it is going to increase the frequency of extremes in existing mechanisms, or push toward higher averages and extremes resulting from existing mechanisms.

Re #174/175, That’s a graph representing the Southern Hemisphere, for which data is too scarce to come up with any conclusions. And this “Mann” fellow has had his temperature reconstructions seriously discredited, so you can’t put any faith in his work.
(cough, cough) 🙂

181: So Gerald – a graph of Southern Hemisphere reconstruction data published by Mann in:
Global Surface Temperatures over the Past Two Millennia
Geophysical Research Letters
Vol. 30, No. 15, 1820, August 2003
is evidence of the reasons that Mann doesn’t publish on the Southern Hemisphere?

Lee, since you failed to pinpoint your references to the NAS report, I’ll give you my research on the matter and let the report speak for itself. I see the NAS report much as I view the AGW arguments in general: much circumstantial evidence that, viewed individually, is not very convincing, but that, viewed together (at least from the viewpoint of a proponent of AGW), is supposed to be seen as overwhelming in making the case for AGW.

The NAS report appears to me to be that of a lawyer making a case based on circumstantial evidence without the audience (jurors/voters) hearing the opposition lawyers make their case. As a public forum, that makes the jurors’ (voters’) decision that much more difficult. The opposition was allowed to make its case on only one part of the evidence (the HS), and since the circumstantial evidence rests on many pieces, the case could withstand conceding (sort of) that one bit of evidence. As a matter of fact, the case against the HS was presented simply as that, and the “lawyers” for that case were specifically interested only in prosecuting the HS.

First the sort-of concession to the HS evidence:

Both the number and the quality of the proxy records available for surface temperature reconstructions decrease dramatically moving backward in time. At present fewer than 30 annually resolved proxy time series are available for A.D. 1000; relatively few of these are from the Southern Hemisphere and even fewer from the tropics (Figure O-2). Although it is true that fewer sites are required for defining long-term (e.g., century-to-century) variations in hemispheric mean temperature than for short-term (e.g., year-to-year) variations, the coarse spatial sampling limits our confidence in hemispheric mean or global mean temperature estimates prior to about 1600 A.D., and makes it difficult to generate meaningful quantitative estimates of global temperature variations prior to A.D. 900. Moreover, the instrumental record is shorter than some of the features of interest in the preindustrial period, so there are very few statistically independent pieces of information in the instrumental record for calibrating long-term temperature reconstructions.

Reconstructions of temperatures and external forcings during the 2,000 years preceding the start of the Industrial revolution are not yet sufficiently accurate to provide a definitive test of the climate sensitivities derived from climate models, mostly because the external forcings on this timescale (mainly solar variability and variations in volcanic activity) are not very well known. Climate model simulations forced with estimates of how solar emission, volcanic activity, and other natural forcings might have varied over this time period, however, are broadly consistent with surface temperature reconstructions (see panel D of Figure O-5).

Then the circumstantial evidence without the HS or other reconstructions:

Surface temperature reconstructions have the potential to provide independent information about climate sensitivity and about the natural variability of the climate system that can be compared with estimates based on theoretical calculations and climate models, as well as other empirical data. However, large-scale surface temperature reconstructions for the last 2,000 years are not the primary evidence for the widely accepted views that global warming is occurring, that human activities are contributing, at least in part, to this warming, and that the Earth will continue to warm over the next century. The primary evidence for these conclusions (see, e.g., NRC 2001) includes:

• Measurements showing large increases in carbon dioxide and other greenhouse gases beginning in the middle of the 19th century,
• Instrumental measurements of upward temperature trends and concomitant changes in a host of proxy indicators over the last century,
• Simple radiative transfer calculations of the forcing associated with increasing greenhouse gas concentrations together with reasonable assumptions about the sign and magnitude of the climate change, and
• Numerical experiments performed with state-of-the-art climate models,

Supporting evidence includes:

• The observed global cooling in response to volcanic eruptions is consistent with sensitivity estimates based on climate models,
• Proxy evidence concerning the atmospheric cooling in response to the increased ice cover and decreased atmospheric carbon dioxide concentrations at the time of the last glacial maximum is consistent with sensitivity estimates based on climate models,
• Documentation that the recent warming has been a nearly worldwide phenomenon,
• The stratosphere has cooled and the oceans have warmed in a manner that is consistent with the predicted spatial and temporal pattern of greenhouse warming

Ken, I cited a page number and referred to the graph on that page, and listed the information contained there and, as I said, on the next page, which referred largely to that graph, the studies illustrated there, or additional similar studies.

I have previously detailed my analysis of the NAS summary – it was posted the day after Steve removed the weekend ban that JohnA placed on me, a Monday, but I don’t remember the date.

When you asked me to retype paragraphs from the report manually, as some kind of absurd test of how serious I was, I lost interest. Homie ain’t playin’ that game.

So this is your favorite reconstruction? It’s based on the Keigwin Sargasso Sea sediment study and a single Stalagmite from South Africa. Have you asked yourself why this is your favorite reconstruction?

Have you looked at the two temperature reconstructions? The Sargasso Sea study shows at most 1.7 C of net temperature variability over about 300 years. The South African stalagmite reconstruction shows as much as 4.0 C of net temperature change in just 100 years. Sounds to me like they contradict each other; BOTH can’t be right. Further, the Sargasso Sea trend is thought to reflect changes in ocean currents more than atmospheric temperature.

From the IPCC TAR “Keigwin and Pickart (1999) suggest that these temperature contrasts were associated with changes in ocean currents in the North Atlantic. They argue that the “Little Ice Age” and “Medieval Warm Period” in the Atlantic region may in large measure reflect century-scale changes in the North Atlantic Oscillation (see Section 2.6). Such regional changes in oceanic and atmospheric processes, which are also relevant to the natural variability of the climate on millennial and longer time-scales (see Section 2.4.2), are greatly diminished or absent in their influence on hemispheric or global mean temperatures.”

Anyway, here is an apparently new study using MULTIPLE stalagmites from different sites. Guess what? MORE SPAGHETTI!

Once again… there exist NO/NONE/NADA multiproxy studies that show anything significantly different from the Mann reconstructions, and certainly no evidence of Holocene natural climate variability on the order of what we are seeing today and what your grandkids WILL see with AGW.

It seems to me that this thread has been invaded by folks who are arguing a completely different topic than what this board is about. As I’ve read the various posts, this entire site is not dedicated to the existence or lack thereof of global warming, or even the existence of the LIA or the MWP. It is dedicated to good scientific method. Statistics are a mainstay in science regardless of discipline. Further, it is pretty much a huge ethical crime, so to speak, for any discipline to misuse statistics. I do research in pharmacology, and if we were to misuse statistics as has apparently been done in climate science, we’d be dragged into a courtroom and fall victim to a nice multi-billion-dollar class action suit.

It saddens me to see some posters dragging up the anecdotal crap to support some position on the “global warming” issue. Apparently they are lacking in scientific aptitude, as science does not engage in anecdotal evidence; that is why we design studies and test hypotheses with statistics.

Steve… all I have to say is, regardless of what the alarmists who have invaded your site say, I consider your explorations in verifying the scientific methods associated with climate science a huge service to science and humanity. It is pretty sad that the climate scientists themselves have not joined you in your effort to evaluate their methods and make corrections to achieve more sound, scientific, and accurate results. But then, it seems they have become more political than scientific.

Re#182, looking here at a website for the paper, the Northern Hemisphere is proudly displayed among the “comparison of published reconstructions” (you need to read the fine print in the figure caption to realize it’s actually a comparison of published NH reconstructions – funny how that didn’t make it into the title heading), while the Southern Hemisphere is tucked away in a tiny figure at the top of the page. The so-called “comparison of published reconstructions” isn’t even from the paper, so I don’t know why it appears on the page, especially so prominently. Curious, isn’t it, that it’s not only there but misleadingly titled?

That graph shows that there was VERY little variability in the SH for the last two millennia. A magnified y-axis does not make lots of variability. At most we see 0.6 C of variability over 1000 years long term and 0.3 C of variability over 50 years short term. Now extend the plot 100 years forward and 2–3 C up; THAT’s how AGW will compare with your natural variability.

From 1993 to 2003, the heat content of the upper ocean increased by 7.0 (±1.4) × 10^22 J. This increase was followed by a decrease of 3.0 (±1.1) × 10^22 J between 2003 and 2005. The decrease represents a substantial loss of heat over a 2-year period, amounting to about 27% of the long-term upper-ocean heat gain reported between 1955 and 2003 [Levitus et al., 2005].

Nearly five decades of heat gain, and more than a quarter of that gone in two years flat. The paper comes to the conclusion that the oceans are having a net heat loss to space.
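As a quick arithmetic check on the quoted figures (the 1955–2003 total is only implied by the 27% statement, not stated directly):

```python
gain_1993_2003 = 7.0e22  # J, quoted 1993-2003 upper-ocean heat gain
loss_2003_2005 = 3.0e22  # J, quoted 2003-2005 decrease
quoted_fraction = 0.27   # quoted share of the 1955-2003 gain

# The 27% figure implies a 1955-2003 gain of about 1.1e23 J:
implied_gain_1955_2003 = loss_2003_2005 / quoted_fraction
print(f"implied 1955-2003 gain: {implied_gain_1955_2003:.2e} J")

# The two-year loss also undoes about 43% of the 1993-2003 gain:
print(f"share of 1993-2003 gain lost: {loss_2003_2005 / gain_1993_2003:.0%}")
```

So the quoted numbers are internally consistent: a 3.0 × 10^22 J loss against an implied long-term gain of roughly 1.1 × 10^23 J does come out near 27%.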

It also looks like we are in for an extended solar minimum. The low could be as late as late 2008. Brace yourselves for some 19th century-type winters.

Ken, I cited a page number and referred to the graph on that page, and listed the information contained there and, as I said, on the next page, which referred largely to that graph, the studies illustrated there, or additional similar studies.

Lee, in regards to your #88 post in reply to my #80 you made the following comment:

If you read even the summary, they point to supporting qualitative, and non-dendro quantitative, evidence (in language they do not qualify) from multiple parts of the world that supports the idea that late 20th century warming is unique on millennial time scales, and in the context of that additional NON-DENDRO supporting evidence, they say the conclusion of unique millennial-scale temperatures is plausible, while explicitly ruling out the dendro reconstructions as the source for that conclusion.

The NAS report Summary, contained in the first four pages of the report, always refers to the reconstructions as large-scale surface temperature reconstructions and makes no differentiation between dendro and non-dendro. I see no evidence in the NAS report Summary that they say what you have written above. That is why I need the page and paragraph(s), which you continue not to provide. I do understand your typing problem, as you have provided ample evidence of that.

Page 3, first paragraph after the bullets.
Page 4, the first partial (continuing) paragraph.

Page 16, paragraph 2, last line.

These, and a lot of other places, outline the uncertainties and limitations of the interpretations.
—
There is also a line in there somewhere about how these particular curves are “representative” of the larger number of reconstructions – that would include those not containing bristlecones – which give similar results.

To return to the subject of Steve’s original post, the NAS report says bristlecones may have problems with CO2 fertilization, and should be avoided – not that they are necessarily wrong, but IMO because they may be problematic. But the reconstructions that do not have bristlecones give results similar to those that do, and it is that coherence they are citing.

Re #146. What, so there is your way and ‘back to the stone age’ and no other? Don’t be so daft, Sid. The logic of your view is that when the oil age runs its course we’ll end up back in the stone age. Err, I’ve news for you: no, we won’t…

RE #187 Again, Gbalella, you simply take a graph and construe it as indicating that we are in for a serious bout of AGW. Let’s look at the speleothem data plotted in the link you gave. It is a record going back from the latter part of the 20th century to about 1500. What does it show? Well, it shows the period that we all know as the LIA having a mean annual temperature 0.6 degrees below the recent average. The speleothems were from Scotland, Italy and China. Hmm… conclusion: the LIA was certainly a NH-wide event. What have we seen since? Well, temperatures rising towards the present day. Now just what does this tell us about AGW? I’d hazard a guess at about zero.

Really, you must learn to look at data with an objective eye and open mind and not come to the table trying to force data to fit with some preconceived conclusions.

Now look at the data from about 1970 onwards to 1987. There’s a very interesting cooling trend. As far as I’m aware this is not in the instrumental record. Another case of divergence? Perhaps we have the same problem here as with the tree ring data. i.e. the choice of suitable calibration periods etc.

Re #199, Paul, but have the speleothems been audited? ;) (tongue-in-cheek mode) Why not raise a few quibbles like was done with the bristles? What about the effect of enhanced rainfall, a quibbler might say? Or drought? Might acid rain affect things? Might deforestation in the past have had an effect? Might (heck, let’s go there) biased, perhaps even fraudulent, researchers politically motivated to look for the right data play a role? Get where I’m going? Or will you react like MM might react to such suggestions? (end tongue-in-cheek mode)

It’s interesting that after the statistics producing the hockey stick were shown to be a falsification, it is claimed the reconstructions are still supported by other evidence. The hockey stick reconstruction in the IPCC report was considered so important that the graph was shown at least seven times. I think that the IPCC considered tree rings the best evidence for AGW and any other supporting evidence of less importance. So now it would be interesting to check the next-best evidence in the same way as Mann’s reconstruction. Which is next best? Here is a challenge for Lee and Hearnden.

Another big problem is that the surface record data is not very easy to get. You need a lot of money to get the best data, or the data that has been used in previous reconstructions for comparison. So you need funds to make an independent study of reconstructions. But if you get funds, only mainstream results are considered independent; any deviating results can be rejected because they must be wrong, or if not, they must still be rejected because you will always find a connection between the funds and the oil industry or car industry.

You also should have a publication that meets the criteria of Mann or the environmental organisations. So results (like Steve’s results) can be discredited arbitrarily if they don’t meet the consensus criteria.

Re #199 Very funny, Peter. In all seriousness, I’ve worked on similar material myself (viz. Dennis, Rowe and Atkinson, 2001, Geochimica et Cosmochimica Acta), and I know and am working with the authors of this paper on other speleothem projects. Cave deposits are not easy to work with, nor are they easy to interpret, but I think they do offer great potential as terrestrial palaeoclimate indicators. The key word here is potential. As you’ve pointed out, the chemistry of carbonate growth in caves is not simple. There has been some very good work done on Soreq Cave by Miriam Bar-Matthews and co-workers; Ian Fairchild, Andy Baker and others have made some great contributions. What distinguishes these researchers is their belief in understanding the fundamentals, characterising their sites with respect to modern responses to precipitation, temperature etc.

As for this study, it’s neat. I don’t think it is telling us too much we don’t already know about the LIA, and I’m certain it has nothing to say about AGW. I’m not sure the calibration with respect to temperature anomalies is robust. But no matter, the paper is a nice bit of science and it’s out there for public discussion.

…As you’ve pointed out the chemistry of carbonate growth in caves is not simple….

And that’s the rub. Because it’s not simple to find a clear signal (in the noise?), you must know, if you’ve read enough of this site, what will happen if you find the wrong kind of signal. Ok, if you find the right kind of signal (as it seems has happened) it’ll be fine, but the wrong one, oh no, can’t have that without what’s in #200 happening…

Whatever, good luck with your research and I hope, for your reputation’s sake, you find the ‘right’ signal 🙂

Gbalella
More spaghetti is the result of dating errors. Some time ago I demonstrated that there were dating errors in the GISS data within the last 150 years.
Plotting GISS against a local weather station gives spaghetti, but small sections of the graph matched perfectly along the whole length of the graph.
If they cannot get the date right for recent years, how can it be correct for 1000 years ago?
All proxies other than tree rings exhibit a MWP at least 1 degree C higher than today.
Averaging them with their dating errors reproduces the Hockey Stick.
However, if we do as logic dictates and plot them allowing for obvious dating errors, a graph can be obtained that shows a MWP ~2 degrees warmer than today, as well as the LIA.
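The smoothing effect of dating errors is easy to illustrate: average several copies of the same warm peak, each offset by a random dating error, and the peak flattens. A minimal synthetic sketch (an invented 2-degree “MWP” bump, not real proxy data):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1000)
mwp = 2.0 * np.exp(-((years - 300) ** 2) / (2 * 40.0 ** 2))  # a 2 C Gaussian "MWP" bump

# Ten proxies record the same event, each mis-dated by up to +/- 100 years
shifts = rng.integers(-100, 101, size=10)
proxies = np.stack([np.roll(mwp, int(s)) for s in shifts])
average = proxies.mean(axis=0)

print(f"peak of each individual proxy: {proxies.max():.2f}")  # shifting preserves the 2.00 peak
print(f"peak of the naive average:     {average.max():.2f}")  # misalignment smears it lower
```

Whether real multiproxy averages suffer this to the degree claimed is of course the disputed point; the sketch only shows the mechanism by which unmodelled dating errors suppress a common peak.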

Come on, it takes both data & methods to get conclusions. Would additional field work have saved MBH98? Or would they have been better off spending a little more time at the desk? Those advocating a shift of emphasis toward field work: when you get back to the office are you going to use RegEM to analyze your data, or not? The methods are not easy for an outsider to decode. That’s why they’re discussed so much here. Not disagreeing with your point, just explaining why there SEEMS to be an emphasis on methods here.

Re #181 and #182 – Lee, further to “publish”:
Where is the “hockey stick” in the IPCC and other locations from?
A) Southern Hemisphere reconstructions
B) Northern Hemisphere reconstructions
C) An average of ALL representative Northern and Southern Hemisphere reconstructions
D) Some picked reconstructions and patches, not including the Southern Hemisphere

Re #209 bender, even Steve has posed the question: why haven’t the proxies been updated? I doubt additional field work would have saved MBH98, but that is not the question. A little more time at the desk would probably have told them the available proxies are not up to answering the question they have asked.

Many palaeoclimate scientists are not asking the question about globally averaged climate. They are more concerned with characterising either the climate variability at a single location, or areally mapping climate for limited regions at time slices in the past.

The statistical methods used for the multi-proxy studies might not be easy for an outsider, but neither are the nuances of isotope geochemistry, or many of the other proxies.

Lee, the obligation of the NAS Panel with respect to the bristlecone-using studies was to clearly state which other studies also used bristlecones and, if they themselves were not in a position to replicate the results without bristlecones, they had an obligation to state that their results could not be relied upon until the impact of not using bristlecones had been assessed.

That’s how any engineering firm with a “duty of care” (using this in a tort law sense) would have dealt with the issue. As soon as you think about the matter from the perspective of an engineering firm, it’s obvious. The interesting question – and one that I will ask Cicerone, by the way – is why NAS panels operate with lower standards of due diligence and care.

#211. There’s a couple of issues in the non-updating.
Paul, Hughes collected new bristlecone samples from Sheep Mountain – the MOST important site in MBH and Mann and Jones 2003 – in 2002. These are still unpublished. I’ve been around the block enough in mineral exploration to know that good results somehow get public faster than bad results (although mining promoters can’t sit on bad results for very long, certainly not 4 years). If Sheep Mountain ring widths continued off the chart through the late 1990s and early 00s, then we’d have heard about it. I’ll bet they are looking for some other results that show increases to blend out the bad results – just like a mining promoter would.

There’s another egregious example of failure to report adverse updated information that I’ve discussed here. (Paul, I don’t expect you to weigh in on this one for obvious reasons.) Briffa made his name to some degree with his Nature 1995 publication on the Polar Urals, supposedly showing that 1032 was the coldest year of the millennium. The dating of the 11th century cores was questionable, as I wrote about last year. In 1998, new samples were obtained which reversed this conclusion. The results of the new samples were never reported, nor was the earlier study withdrawn. Instead Briffa switched to another site, Yamal, about 70 miles away, which had a very pronounced HS shape and is now a mainstay of multiproxy studies. I showed that the results of Briffa 2000 are reversed if the Polar Urals update is used. This sort of non-reporting and switching would not be allowed in a mining promotion. A mining promoter would have had to report the bad results clearly to the market and then try to convince the market that the new "good" results were more reliable than the old "good" results.

So Steve and yourself have collected your own data? I must have missed it, as I thought this site was only about pointing out all the faults of every single piece of existing data. No data, no debate, right? So yeah, where’d you drill your ice cores?

I’m simply trying to point out that the projections are for 2-3 C increase in global temperature over the next 100 years. The trends as measured over the last 25 years support this projection. And finally if you plot that projection out on a graph showing the natural variability of the last millennia or two you will be able to see the magnitude of the projections compared to past variability.

Now it’s possible that past variability has been on the order of 2C/century, but I’ve not seen ANY evidence to strongly support this conclusion. On the contrary, almost all existing evidence contradicts it. Some want to call into question the surface trends, but they are supported by multiple other lines of solid evidence. And some want to criticize the models and their projections, but again the current trends are supportive. So where am I lacking objectivity?

Can you admit that if “main stream” climate science is right then we are truly in for a very anomalous climate in the coming decade?

Plot the graph, look at it, and at least say to yourself, “What if they are right?”

A little more time at the desk would probably have told them the available proxies are not up to answering the question they have asked

But they are STILL in denial. (They are still citing MBH98 as though it were authoritative & worthy of citation. Then people read that paper, figure the Mannomatic looks pretty good, figure they’ll try that …) Therefore the desk work to date is insufficient. Data collection goes on, yet published reconstructions will continue to rely on questionable data & methods. i.e. Better data do not solve the serious methodological problems.

Again, not disagreeing with your point. Field work is valuable. The question is: since you can’t do it all, what kind of new data do you want to collect? What are the priority areas?

e.g. You say “update the proxies”. But if they’re not ‘proxies’, as some people here argue, then what’s the point of updating them? What I would like to see is a true test of the proxy hypothesis. e.g. In the case of tree-rings, sample trees along a temp/RH gradient and prove that the sensitivity coefficients change as predicted. (THEN do the ‘cherry picking’.) Do controlled manipulations (temp/RH/CO2) with randomized experimental designs, in order to properly calibrate tree responses to KNOWN treatments (as opposed to the paleo approach, where HYPOTHETICAL annual treatments are ‘replicated’ non-independently as repeated measures over time (or space in the case of the gradient approach)).
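The gradient-sampling test described above amounts to a regression problem: impose (or observe) a known temperature treatment and check whether the fitted sensitivity coefficient recovers it. A minimal sketch, with an invented linear response and invented parameter values, purely to show the shape of the test:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical gradient: 50 sites spanning a 10 degree C range
temps = np.linspace(5.0, 15.0, 50)
true_sensitivity = 0.12   # invented: mm of ring width per degree C
baseline = 0.8            # invented baseline ring width (mm)
widths = baseline + true_sensitivity * temps + rng.normal(0.0, 0.05, temps.size)

# Ordinary least squares: does the fitted slope recover the known treatment?
slope, intercept = np.polyfit(temps, widths, 1)
print(round(slope, 3))
```

In a real calibration one would also test whether the coefficient stays stable out of sample – the part that is hard to do when hypothetical annual treatments are replicated non-independently as repeated measures over time.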

I agree with all your points, and always enjoy your posts.

In fact, looking back at the comment I took exception to:

There’s far too many “desk top’ studies using old, unreliable and in many cases discredited data.

I would actually agree 100% … depending on where the emphasis was placed. We need more desk-top studies. There are (gulp) not enough of them. But, yes, it would be nice if these studies were not always based on ‘discredited’ data.

The question I want to draw attention to is: how do you determine whether or not a given datum should be credited or ‘discredited’? You suggest we get more data? I say you’ll have to pass the credibility test before you will get them published. Hence our mutual interest in developing robust methods.

Re #217 bender, we’re in complete 100% agreement. I’m not a tree ring person but everything you say about testing the proxy hypothesis and experiment design is absolutely correct.

Unfortunately there are too few studies where this is happening. For myself I’m trying to develop a new palaeothermometer based on 18-O partitioning between 13-C and 12-C in the carbonate anion of carbonate minerals (speleothems, shells, tufas, travertines etc.). This should give an unambiguous temperature signal but the isotope analytical procedure is difficult. We’ve just invested a half million pounds in a purpose designed and built instrument to make these measurements. Progress is slow….fight to get the funds, then build and test the instrument…fight to get more funds and the round-a-bout continues!

I didn’t mean to imply that there were too many desk top studies, rather too many desk top studies by ill informed palaeoclimatologists. It was a dig at those that gain kudos and funding for some not very clever work!

I actually think it would be great if all of us physical scientists actually consulted statisticians before we even get to the design stage of our experiments.

What was done
Temperature measurements from two Greenland Ice Sheet boreholes were used to reconstruct the temperature history of the Greenland Ice Sheet over the past 50,000 years.
What was learned
The data revealed that temperatures on the Greenland Ice Sheet during the Last Glacial Maximum (approximately 25,000 years ago) were 23 ± 2 °C colder than at present. After the termination of the glacial period, temperatures increased steadily to a maximum of 2.5 °C warmer than at present during the Climatic Optimum (4,000 to 7,000 years ago). The Medieval Warm Period and the Little Ice Age were also documented in the record, with temperatures 1 °C warmer and 0.5-0.7 °C cooler than at present, respectively. After the Little Ice Age, the authors report that “temperatures reached a maximum around 1930 A.D.” and that “temperatures have decreased during the last decades.”

People imploring CA to do “their own” reconstructions do not understand how audits work. Auditors are always “behind the curve” of innovation. Their job is to validate a claim, not innovate a process.

If CA contributors want to get ahead of that curve, they will have to become (or partner with) researchers who are at the leading edge. It is as simple, and as difficult, as that.

Look at the climatology social network (only part of which was mapped by Wegman). Choose as partners those that are least connected with the hub of the problem. Climate science is a highly competitive field. It should take very little effort to get some real competition happening. Unlike monopolistic corporations, research communities are so unstable in the face of money, they love to split up. That is when they are happiest: when they are fighting amongst themselves.

A zeroth-order Verification of GISS ModelE coding for one routine. The function tfrez, given below, calculates the freezing temperature for sea water. Note the following characteristics:

(1) The routine contains dead coding.
(2) Additionally, if the coding were activated, it would not compile, because the variable ‘mu’ seems to be undefined.
(3) The calculation in the dead coding was replaced by the ‘UNESCO formula (1983)’
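For context, the ‘UNESCO formula (1983)’ referred to is, as far as I can tell, the standard Fofonoff & Millard expression for the freezing point of seawater as a function of salinity S (psu) and pressure p (decibars). Here is a sketch of it in Python; to be clear, this is my transcription of the published formula for comparison purposes, not the actual ModelE tfrez routine:

```python
def tfrez(s: float, p: float = 0.0) -> float:
    """Freezing point of seawater in deg C, UNESCO (1983).

    s: salinity in practical salinity units (psu), roughly valid for 4-40 psu
    p: pressure in decibars (0 at the surface)
    """
    return (-0.0575 * s
            + 1.710523e-3 * s ** 1.5
            - 2.154996e-4 * s ** 2
            - 7.53e-4 * p)

# Standard seawater (35 psu) at the surface freezes near -1.92 deg C
print(round(tfrez(35.0), 3))  # -1.922
```

If the live branch of the GISS routine matches this to within rounding, the dead-code question is numerically moot, but the documentation question raised below still stands.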

I have looked at this paper: http://pubs.giss.nasa.gov/docs/2006/2006_Schmidt_etal_1.pdf,
and cannot find a reference for that equation. The abstract states: ‘A full description of the ModelE version of the Goddard Institute for Space Studies (GISS) Atmospheric General Circulation Model (GCM) and results are presented for present-day climate simulations (c. 1979). ‘
I have also looked at: http://www.giss.nasa.gov/tools/modelE/modelE.html, and cannot find the equation. The first paragraph on that page states: ‘This document is a short description of what GISS ModelE does and gives some references and descriptions of how it does it. Hopefully this will eventually morph into a full technical paper given enough time and resources!’

The coding cannot be Verified given the level of documentation that I have found so far. This was only a first attempt, and by someone not familiar with the code. However I think it does give a first glimpse into the lack of sufficient documentation for verifying the coding.

The coding cannot be Verified if there are no code specification documents available. For large, complex codes that have evolved over decades, the code manuals generally provide the specifications. The manuals that should be available include: (1) a models and methods theory manual in which every equation used in the coding is given; (2) a users manual which describes how to use the code in its intended areas of application; (3) a programmers manual in which details of the structure and coding are described; (4) a V&V manual in which the Verification and Validation processes, procedures, and results are given; (5) other manuals and reports in which the results of analyses with the software in its intended application areas are described; and (6) a software QA plan that describes the procedures in place for maintaining the quality status of the software.

If my understanding of the available documentation is correct, the GISS ModelE coding cannot be independently Verified.

#223. One of the nice things about the bristlecones is that they seem to be in really scenic places. My sister who lives in Colorado Springs was visiting Toronto a few weeks ago; some of Graybill’s sites are near Colorado Springs on beautiful trails with pretty good road access (one of my pet theories is that most of the access to sampled sites was on roads developed in the 19th century for long-forgotten little mines). There are some more bristlecones at Niwot Ridge near Boulder; your Starbucks would still be warm when you got to the site.

#224. At the House hearings, someone mentioned to me (perhaps Cicerone, Gulledge??) that one of the problems with paleoclimate research is that the field is not big enough to have fostered enough competing work.

This paper (found when I posted to the stalagmite in the Alps above):
“Reconstructing hemispheric-scale climates from multiple stalagmite records”
DESCRIPTION:
Reconstructed northern hemispheric temperature for the period 1500-2000 AD from
three stalagmite layer thickness records from Scotland, Italy, and China.
“Presented here is an initial attempt
to demonstrate the applicability of annually laminated stalagmite series to a
large-scale climate reconstruction, by producing a 500-year Northern Hemisphere temperature reconstruction. The reconstruction shows an overall warming trend
with a magnitude of 0.65 K and several other low-frequency characteristics
consistent with other independent Northern Hemisphere archives. The result is
sufficiently encouraging to warrant significant future effort in characterising
annual growth rate records from laminated speleothems.”

FUNDING SOURCES: This work was funded by NERC, UK as part of the ASCRIBE project
in the RAPID climate change programme, Grant No: NER/T/S/2002/00448.
AB was funded by a Philip Leverhulme prize.

#226. Dan, that’s a very interesting post. It’s always nice to see what happens when one applies objective standards to verification. The quotation is interesting: they blame the lack of documentation on inadequate “resources” rather than taking responsibility for it themselves. In business software, I presume that documentation is part of the job and the job simply isn’t done until it’s documented according to standards that make it possible for others to re-trace the procedure.

In business software, I presume that documentation is part of the job and the job simply isn’t done until it’s documented according to standards that make it possible for others to re-trace the procedure.

In banking, a condition precedent for first drawdown on a loan will be that any financial models will be signed off on by one of the big audit firms, working for the lenders.
If you are the investment banker making such models, you get in the habit of producing good and comprehensive documentation at an early stage in order to reduce the amount of time and faffing spent during the audit process.

Re: #228: NaN = “not a number”. Memory hazy on this … but NaN is a symbol often used to describe an overflow/underflow calculation error, typically the result of a computation involving numbers that are so small or large they can’t be represented using the specified type (float, integer, etc.) of the datum. e.g. I believe division by zero often yields NaN. Not sure it always means the same thing in all contexts.
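To firm up the hazy memory: under the IEEE 754 floating-point standard, NaN results from indeterminate operations such as 0/0 or inf - inf, while 1/0 actually produces infinity rather than NaN, and any comparison with NaN (even to itself) is false. A quick demonstration in Python, using numpy to get IEEE semantics instead of Python’s ZeroDivisionError:

```python
import numpy as np

with np.errstate(divide="ignore", invalid="ignore"):
    a = np.float64(0.0) / np.float64(0.0)  # indeterminate form -> NaN
    b = np.float64(1.0) / np.float64(0.0)  # division by zero -> +inf, not NaN

print(np.isnan(a), np.isinf(b))  # True True
print(a == a)                    # False: NaN compares unequal even to itself
print(np.isnan(b - b))           # True: inf - inf is another NaN source
```

So “division by zero yields NaN” is true only for the 0/0 case; a nonzero numerator overflows to a signed infinity instead.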

#235 and 236,
Thanks! This graph sure looks smooth for such tiny fractions of a degree C over 50-year spurts for the whole planet, and I don’t see the instrumental record although it’s in the key – am I blind? … just a wife of a geologist thinking out loud drinking coffee here!!

Paul#235,
do tell if you care to. We like stalagmites! 😉 and we sure appreciate your experience and time you spend here!!

Thank you too SteveM for the space for the conversations, and putting up with folks like me!

#230 fFreddy, Many years of experience in models and methods development, and putting these into computer software, all within a regulated, at the national level, industry. The models and methods were concerned with inherently complex physical phenomena and processes, as is the case of AOLBGCM, and the resulting codes are naturally somewhat complex to apply and the results somewhat difficult to fully understand. In my case we dealt with transient, compressible flows of multi-phase fluids in geometrically complex systems and engineering equipment.

The results of the calculations were used as one of the bases for decisions that could potentially affect the health and safety of the public. I am certain that independent Verification and Validation will always be a part of all such codes and decision making. All that notwithstanding, fulfilling the documentation requirements is always good practice.

This project will test the influence of thermohaline circulation changes versus other controlling factors on Holocene palaeoclimates. The tests rely on climatic reconstructions of the last 1000 years and the 8.2 ka event from the study of speleothems at sites along the Atlantic seaboard, dated by U-series and annual layer counting. Oxygen isotope compositions of palaeoprecipitation will be determined from fluid inclusions at high resolution. Palaeotemperatures will be derived from these data combined with delta 18 O analysis of speleothems, and climatic data also extracted using annual layer studies, using an enhanced time series statistical methodology. Modelling delta 18 O fractionations in atmospheric moisture will test the convergence of GCM predictions of the magnitude and spatial distribution of palaeoclimates with the evidence from the palaeoclimate record.

I’m sorry it is so condensed, but it was done so to fit a grant application form. There is some more information at the following web address:

Lee, below I have identified and retyped the references to NAS that you have made. These references in my judgment do not back up your statements, and therefore one, or both, of us has to be confused. The NAS committee is making general statements about large-scale surface temperature reconstructions and is not referencing specific dendro and non-dendro studies or making any other differentiation, as you appeared to see in the report. They do talk about reconstructions going back to 1600 A.D. being the ones in which they have more confidence in the claim that the last 25 years have been warmer than any in the last 400 years. Please specifically point to how the retyped references support your original view.

Figure O-5 on pages 18 and 19 of the NAS report, if I cannot copy it here, shows: the instrumental records; borehole and glacier length proxies back to 1500 A.D.; large-scale surface temperature reconstructions by Mann et al. 2003, Moberg et al. 2005, tree rings by Esper et al. 2002 going back to 900 A.D. and Hegerl et al. 2006 going back to 1300 A.D.; and an NCAR climate model back to 900 A.D. and an Energy Balance Model (Crowley, 2000) going back to 1000 A.D.

Page 3 paragraph after bullet points:

The main reason that our confidence in large-scale surface reconstructions is lower before A.D. 1600 and especially before A.D. 900 is the relative scarcity of precisely dated proxy evidence. Other factors limiting our confidence in surface temperature reconstructions include the relatively short length of the instrumental record (which is used to calibrate and validate the reconstructions); the fact that all proxies are influenced by a variety of climate variables; the possibility that the relationship between proxy data and local surface temperatures may have varied over time; the lack of agreement as to which methods are most appropriate for calibrating and validating large-scale reconstructions and for selecting the proxy data to include; and the difficulties associated with constructing a global or hemispheric mean temperature estimate using data from a limited number of sites and with varying chronological precision. All of these considerations introduce uncertainties that are difficult to quantify.

Paragraph from page 3 continuing to top of page 4:

Based on the analyses presented in the original papers by Mann et al. and this newer supporting evidence, the committee finds it plausible that the Northern Hemisphere was warmer during the last few decades of the 20th century than during any comparable period over the millennium. The substantial uncertainties currently present in the quantitative assessment of large-scale surface temperature changes prior to about A.D. 1600 lower our confidence in this conclusion compared to the high level of confidence we place in the Little Ice Age cooling and 20th century warming. Even less confidence can be placed in the original conclusions by Mann et al. (1999) that “the 1990s are likely the warmest decade, and 1998 the warmest year, in at least a millennium” because the uncertainties inherent in temperature reconstructions for individual years and decades are larger than those for longer time periods, and because not all of the available proxies record temperature information on such short timescales.

Paragraph 2 from page 16:

A more recent and complete description of what we know about the climate of the last two millennia can be gleaned from an inspection of Figure O-5, which was prepared by this committee to show the instrumental record compiled from the traditional thermometer readings, several large-scale surface temperature reconstructions based on different kinds of proxy evidence, and results from a few paleoclimate model simulations. Figure O-5 is intended only to provide an illustration of the current state of science, not a comprehensive review of all currently available large-scale surface temperature estimates.

Unlike monopolistic corporations, research communities are so unstable in the face of money, they love to split up. That is when they are happiest: when they are fighting amongst themselves.

Did I miss the Ring Girl walking around with the “Round 1” placard??? 🙂

Louis (I think it was Louis), thanks for defending the mining / mineral industry and their adherence to scientific standards. During my time as a geology student, my eyes were opened to the extent of scientific rigor in the field. I cringe when AGW nuts spew their slanderous remarks implying that scientists who do, or have, worked for the mining industry are somehow all bought dogs. Yet they never see the error of that line of logic when applied to their own funding providers.

PS. Anyone who can make a living looking at slivers of rock under a microscope (optical mineralogy) is a braver man than I!

PPS. Forgive any misspells. Spell check in browser not work for some reason and I must go to work.

I’m sorry Peter. You must be reading with some text substitution filter, because I never said that.

YOU said we should be using alternatives and I quote “I’m also still free (perhaps not if you were running things?) to wonder whether there is another way the human world might run. ”

So long as you’re using the current way there will be no change. So unplug from the current way and show us the other way the human world might run. I said nothing about the Stone Age. I don’t care if you use Gbalella’s solar panels that you conjure out of thin air, along with the enormous reserves of free hydrogen. Or shipstones. Cold fusion, whatever.

First line: did you? Where?
I can tell you what data I used and what I did, so that you can replicate it. (Do you really think a lowly retired technician could get anything published that argues against AGW?)

gbalella re 215
The said GISP 2 data exhibits higher temperatures during the MWP than it does for the late 20th century.

Unfortunately, there is a truly regrettable editorial in the Washington Post this morning.
It states that, over the past two weeks a House Energy and Commerce subcommittee has held a pair of truly senseless hearings on global climate change…… The purpose was to pick at a single study of global temperature patterns, the so-called hockey stick. …the subcommittee has investigated the scientists who produced it and hounded them for information etc etc etc.
There is no mention of the substantial investigations carried out by Wegman, McIntyre and others……
Roger Bell

Look, this is simple. The NAS report says we cannot rely on the quantitative claims from before 400 years ago to support the relative millennial temperature claims that were being made from them. It is a major result of the analysis – if you will recall, y’all were crowing about this at the time.

They show several studies on those graphs – the ones they show happen to all include bristlecones – and call them representative of the whole. If they had included a non-bristlecone reconstruction from the literature, would those graphs have looked qualitatively different? If not, then their statement of representativeness is correct, and those reconstructions are representative.

They spend several pages pointing out issues – non-simultaneity of temperature changes among them, and sampling and analysis issues as well, including strip-bark issues. They identify that potential issue.

They point to the reconstruction studies – in toto, not just the samples they showed – and recognize that the general agreement among them strengthens the argument. Now, the reason they can say this is precisely that if some of the analyses are incorrect and thrown out, others remain. Steve, you earlier listed studies that don’t use bristlecones, so the literature of which these are representative does include non-bristlecone studies – and remember, the basis of my criticism of your post is that you didn’t do so in your OP, and to me the post, intentionally or not, implied that such studies do not exist. They spend several pages listing the possible sources of error, including the bristlecone issue, along with many other possible issues.

And then THEY POINT OUT THAT THE QUANTITATIVE CLAIMS FROM THE RECONSTRUCTIONS ARE NOT ADEQUATE TO SUPPORT THE MILLENNIAL CLAIMS. What they are saying is: ‘here are a lot of possibly flawed, certainly insufficiently precise studies, which are qualitatively consistent among themselves, but do not give the quantitative results that are being claimed.’ They also point out additional non-dendro qualitative and quantitative supporting evidence.

From an engineering perspective, this is like someone reporting: ‘we have these several analyses (showing the analyses) which use similar or differing techniques, and all make the claim that the bridge design is adequate – but now I’ve shown that the precision of those analyses is not adequate, so we don’t know from these analyses if the design is strong enough. The fact that all of them give similar results means the design may be strong enough – but don’t build the bridge yet based on these analyses.’ Hell, he may even be saying, ‘I think the design is likely OK, let’s not throw it out – but we need more analysis because these numbers aren’t yet adequate for construction. And also, here are additional potential issues I’ve identified that need to be examined in greater depth as we go forward with the engineering.’

It seems you would accuse such an engineering analysis of being a dereliction of duty, because he didn’t examine the identified potential additional issues in the depth you would like – after already pointing out that the existing design work isn’t sufficient for the purpose.

—

Now, if you want to criticise the use of the NAS report for making policy decisions in the face of the data and uncertainty issues identified, go ahead – but the analogy with engineering breaks down. Engineers can get a crisp yes-or-no answer to the adequacy of the design, and must do so before construction. Policy makers (corporate or political) almost always have to make decisions in the face of incomplete data – that is why policy decisions are harder to reach than engineering decisions. The two disciplines are simply not analogous.

Re:#228 and the excellent responses.
Another aspect of “NaN” can be found here: http://www.mathworks.com/access/helpdesk/help/techdoc/ref/nan.html
In particular, they point out that two different NaN entries are *not* equal. In wlr’s example of a data table, it’s most useful as a placeholder for missing data that won’t mess up/affect later calculations (as might 0, another arbitrary value, or a character).
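The placeholder behaviour is worth spelling out: an ordinary sum or mean over data containing NaN propagates the NaN (so a gap can never be silently mistaken for a zero), while the NaN-aware reductions skip the gaps. A small numpy sketch with invented proxy values:

```python
import numpy as np

# A short series with two missing years flagged as NaN
series = np.array([0.8, 1.1, np.nan, 0.9, np.nan, 1.2])

plain = series.mean()         # NaN propagates: the gaps poison the result
skipped = np.nanmean(series)  # NaN-aware: averages only the four real values

print(np.isnan(plain))    # True
print(round(skipped, 6))  # 1.0
```

This is the sense in which NaN is safer than 0 or -999 as a missing-data flag: forget to handle it and the answer is conspicuously NaN, not subtly wrong.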

…the basis of my criticism of your post is that you didn’t do so in your OP, and to me the post, intentionally or not, implied that such studies do not exist.

And there we have it again. Steve writes one thing, and you read something different that only exists in your head. You then start shouting, being offensive, and wasting time and energy on attempts at self-justification.
Try apologising, for a change. It’s good for the soul.

Ffreddy, my criticism was for what Steve failed to say – which is that there are studies that do not include bristlecones. This is important – Steve is being asked to testify before Congress, among other things, and this blog is not just an obscure out-of-the-way corner. I personally find his implication that the NAS panel was negligent, with the embedded implication that without bristlecones there is nothing, to be offensive and damaging to the discussion – and I’m damn well going to say so.

Re 252,
Lee- Are you an engineer? Do you work with engineers? How do you get your information as to this statement;

“Engineers can get a crisp yes or no answer to the adequacy of the design, and must do so before construction. Policy makers (corporate or political) almost always have to make decisions in the face of incomplete data – that is why policy decisions are harder to reach than engineering decisions. The two disciplines are simply not analagous.”

I am not an engineer, but I work with them all the time and have done so for the last 7 years. My father was an engineer, as was my grandfather. I don’t think you know what you’re talking about in relation to the above statement and engineers.

#255
Lee, you don’t understand a damn thing you’re talking about. To make inferences regarding time series and estimated sensitivity coefficients, you need to know something about the statistics of time series. Sit back, shut up, and learn. Or forever drown in your own ignorance.

Just above it is my post of July 28/06. It was suggested that I was being a bit paranoid when I posted on July 28, but I now submit that my prediction was right on! The delay in reporting in the mainstream media was many days later than the norm, and the spin was as predicted – poor, poor persecuted little Mann! Shame on the evil Republicans ! Shame! And one more thing: The Mann hockey stick “is hardly central to the modern debate over climate change” – let’s just ignore that the hockey stick only appeared about seven times in the IPCC 2001 TAR reports, and was used by many governments as the primary marketing tool to sell the bogus Kyoto Protocol to their gullible voters.

This prediction stuff is getting much too easy – my in-depth analysis and Ouija board (using top-secret, proprietary Fortran code) says you can count on warming to ~2015, and then plan for significant cooling.
Better bundle up, and buy some property in Costa Rica!

For a more detailed analysis of global threats to humanity and how to survive them, just send money! Lots of money!

Apart from one article in the Wall Street Journal (July 16/06) and another in The Australian (July 19/06), the mainstream media has been remarkably silent on the July 2006 Whitfield hearings, during which Michael Mann’s famous “hockey stick” was irrevocably broken by the Wegman report and the US NAS study. [Note: I missed one article in Der Spiegel.]

I am reminded of last year, when Tony Blair’s comments (at the Clinton summit) that the Kyoto Protocol was nearly dead were given no play in the press until 10 days after the event. Major new stories should be reported immediately, but when a leftist sacred cow (like Kyoto) is skewered, there is a shocked silence until a proper spin can be prepared.

The pro-Mann spin seems to be developing, as reported in Realclimate: the lack of centering does not matter to the result.

Next fearless prediction: When the MSM finally gets around to publishing this story, most papers (Canada’s National Post and Calgary Herald will be exceptions) will spin the story into a tale of persecution of poor Michael Mann, and how the Wegman and NAS reports do not change Mann’s conclusions.

As I did with last year’s Tony Blair Kyoto remarks, I’ve posted the Wegman report on Samizdata in the UK.

OVER THE PAST two weeks, a House Energy and Commerce subcommittee has held a pair of truly senseless hearings on global climate change. The purpose was not to figure out how to cut carbon emissions. It wasn’t even to discuss the science of global climate change in general. Instead, the purpose was to pick at a single study of global temperature patterns, the so-called “hockey stick” graph — a trend line that purports to show a sudden and dramatic increase in global temperatures in the 1990s and therefore looks like a hockey stick. The graph is hardly central to the modern debate over climate change. Yet the subcommittee has investigated the scientists who dared produce it and hounded them for information. Now that a study of the graph by the National Academy of Sciences has largely backed up the hockey stick findings, the committee has been holding hearings to attack it some more.

A more responsible House hearing on climate change, held by the Government Reform Committee, revealed the utter frivolity of investigating the hockey stick. Even the Bush administration — which is actively avoiding regulation of carbon emissions — took pains to acknowledge the science of climate change. Speaking on behalf of the White House, James L. Connaughton made clear that global warming is real and that human causes are at least partly to blame.

In fact, the broad contours of climate science are a matter of considerable consensus. Increasing atmospheric concentration of greenhouse gases traps additional energy, which tends to cause warming of the Earth’s surface. The actual concentration of carbon in the atmosphere has increased enormously since the advent of the Industrial Revolution. And average global temperatures have risen in recent decades, an effect that is amplified significantly in the polar regions. The major outstanding question about global warming is not whether adding large amounts of new carbon to the atmosphere will tend to increase temperatures further. It is how sensitive the climate will be to what mass of additional carbon over time — and how bad the practical consequences of that sensitivity will be. On this point, there exists vigorous scientific debate. But it’s a debate to which congressional committees are laughably ill-suited to contribute.

The reality is that nobody knows how bad global warming will be; responsible estimates vary from manageable to catastrophic. So the prudent move is to take action now as a kind of insurance policy. Yes, reducing carbon emissions substantially is a daunting prospect given American and world dependence on fossil fuels — so daunting that it induces a kind of denial in many people. But it is a particularly ugly kind of denial that leads a congressional committee to spend this kind of energy attacking scientists, instead of confronting the problems their data suggest.

Re 261, Right on. The entire AGW debate seems to me to be partly an invention of the left in order to give them something they can call their own, claiming doom and gloom using fear (and fear of the unknown) as the main tactic. The morons who currently call themselves Republicans will easily be led astray from any inkling of truth or scientific reality in order to try to capture lost or losing votes.

If enough people of either political persuasion become convinced of the “Inconvenient Truth”, they eventually will vote that way. I think most Republicans here in the US and other conservative groups overseas have already thrown in the towel on this issue. Since the MSM’s mantra of AGW theory and “do something now before it’s too late” (to heck with what the science says) is constantly bombarding us, eventually most will believe simply because they have heard it enough – a tactic often used by propagandists. The eventual costs to the individual will come later, after the battle has been won in the media and the minds of the masses. The truth doesn’t matter at all.

I wonder if someone could do a statistical analysis of AGW alarmist stories in the MSM and correlate the findings to the season – say, more stories in summer, fewer in winter?

Is this what passes for editorial comment these days at WaPo? I swear times must be tougher than reported as it would appear the staffing cuts have forced them to make use of their summer interns to write their editorials.

#226. I have tracked down sources for the UNESCO equation for the freezing temperature as used in function tfrez discussed in #226 above. The coefficients given in the routine agree with the sources that I found, one of which is here. I have also found an online calculator, here. I evaluated the equation from function tfrez and compared the results with those from the online calculator; they are the same. Note that function tfrez sets the pressure to 0.0. The function definition allows for pressure as an optional input, but none of the six uses of the function supplies the second, optional argument.

The UNESCO equation returns the freezing temperature in Celsius, as shown in the function listing. There is at least one place in the GISS ModelE coding that compares the ocean temperature with the result of the function, which implies that the code is calculating the ocean temperature in Celsius. I think the code is not solving an energy conservation equation for the ocean temperature. And I would say that it is not good practice to produce code that does not strictly conform to SI units: at some point someone could put additions and modifications into the code and assume that the temperature is in Kelvin.
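For reference, the published UNESCO (EOS-80) freezing-point formula that tfrez appears to implement can be sketched in a few lines. This is an illustrative Python version, not the Fortran routine itself; the function name and argument names here are mine, but the coefficients are the standard published EOS-80 values:

```python
def freezing_point(salinity_psu, pressure_dbar=0.0):
    """Freezing temperature of seawater in degrees Celsius (EOS-80):
        Tf = a0*S + a1*S**1.5 + a2*S**2 + b*p
    with S in practical salinity units and p in decibars.
    Note the result is Celsius, not Kelvin -- the point raised above."""
    a0 = -0.0575
    a1 = 1.710523e-3
    a2 = -2.154996e-4
    b = -7.53e-4   # pressure term; tfrez reportedly passes p = 0 everywhere
    s = salinity_psu
    return a0 * s + a1 * s ** 1.5 + a2 * s ** 2 + b * pressure_dbar

# Standard seawater (S = 35) at the surface freezes near -1.92 C.
print(round(freezing_point(35.0), 3))  # prints -1.922
```

The pressure term only matters at depth; with the optional argument always defaulted to zero, the routine gives the surface freezing point.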

In addition to a degree of ignorance in the editorial, I thought that there was a degree of nastiness as well as some stupidity.
In fact, the House Committee was confronting the problems in the data and coming to different conclusions from those found by the “scientists”.
Roger Bell

RE #262 by Dane !!!!
… “The entire AGW debate seems to me to be partly an invention of the left in order to give them something they can call their own”…
Please don’t use “the left” in this connection. I am “the left” and it has nothing at all to do with the current problems. I do think the Bush administration is supporting ‘democratic development’ in the world to get most of the oil to the US, but that has nothing to do with the scientific problems concerning AGW.
We have a lot of environmentalists who fit your description, and they are the real concern in current politics.

Bender, if I were arguing time series and estimated sensitivity coefficients, you might have a point. But since you appear to be slamming me for not knowing what I’m talking about on something I haven’t talked about, you don’t.

And everyone, I have thanked Steve, criticized him, agreed with him, disagreed with him, accepted his argument to the point where I am now evaluating the AGW argument without giving more than very tenuous weight (at least for now) to the dendro evidence – and all of this is in the record here – and now I think he made a bad argument here, and especially that the implication of negligence was way out of line. I’ve explained why. The argument does not hinge just on the failure to mention studies that IMO undercut his point – it is the extension to the implication of negligence especially that irked me, as I’ve now explained at least a couple of times. Some of you disagree – fine. But I see very few of you taking on my actual argument, just slamming me for reading it wrong without addressing my explanation of what I read and why I read it that way.

And yes, although I would never claim to be competent to do engineering, I know something about engineers – I grew up the son of an engineer who, among other career accomplishments, was the planning and supervising engineer for the last largish dam project the state of California built in the ’60s. My point is that before starting construction, engineers can be and ARE sure that what they are building is going to stand up in the face of its design conditions. They have defined design criteria, they either meet those criteria or they don’t, and if they haven’t or aren’t sure they’ve met those criteria, they don’t start construction – are you guys disagreeing with that? Are you disagreeing that policy is different? Or are you just disagreeing?

Engineers can get a crisp yes or no answer to the adequacy of the design, and must do so before construction.

From who? In seven years as an engineer, I’ve never received a “crisp yes or no.” It’s my stamp, my signature, and my firm’s butt on the line when it comes to “the adequacy of design.”

Policy makers (corporate or political) almost always have to make decisions in the face of incomplete data – that is why policy decisions are harder to reach than engineering decisions.

If policy decisions are so difficult, how come policy-makers aren’t required to go to school for 4-5 years in a dedicated program, pass a preliminary exam, spend 3-4 years in training on the job under a professional, and then pass a licensing exam like engineers are in the US?

Engineers “almost always have to make decisions in the face of incomplete data,” too. Maybe the difference between us and the policy makers in this regard is that we have to make justifiable assumptions about the incomplete data and hope we’re right (with a “safety factor,” etc.).

Lee, I am guessing that your #252 post is as close as I am going to get to an answer to my queries of you.

I had stated in my original post in our exchange that NAS made specific criticisms of large-scale surface temperature reconstructions, including a warning about the use of bristlecones as temperature proxies, and then seemingly turned around and made the statement excerpted from the NAS report below, which says that Mann’s HS claims have been supported by additional large-scale surface temperature proxies and a variety of local proxies. They go on in the following paragraph to state that, with this new evidence (i.e., the very large-scale surface temperature reconstructions on which they have just delivered, in my judgment, severe criticism and warnings about limitations), in conjunction with local proxies, they find plausible Mann’s claim that the Northern Hemisphere was warmer in the last few decades of the 20th century than in any comparable period over the millennium.

It is that kind of analysis and language that leads people of your viewpoint to see one thing and others like myself to see another. I believe Steve M is making a similar point in his criticism of NAS for neglecting to explicitly warn that the other temperature reconstructions agreeing with Mann’s HS also used BCPs. Instead of being indignant at Steve M’s criticism, I think you should be indignant at the vague and sloppy NAS analysis as evidenced in their report. I do understand that coming at it from the same general viewpoint as that of the NAS report writers makes that more difficult for you to see.

It also is apparent that some posters come to this blog in anticipation of becoming indignant and doing so in creative ways.

The basic conclusion of Mann et al. (1998, 1999) was that the late 20th century warmth in the Northern Hemisphere was unprecedented during at least the last 1,000 years. This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature proxies and pronounced changes in a variety of local proxy indicators, such as melting on icecaps and the retreat of glaciers around the world, which in many cases appear to be unprecedented during at least the last 2,000 years. Not all individual proxy records indicate that the recent warmth is unprecedented, although a larger fraction of the geographically diverse sites experienced exceptional warmth during the late 20th century than during any other extended period from A.D. 900 onward.

Based on the analyses presented in the original papers by Mann et al. and this newer supporting evidence, the committee finds it plausible that the Northern Hemisphere was warmer during the last few decades of the 20th century than during any comparable period over the millennium.

I risk preaching to the choir here I suppose, but let’s have a go. Contrary to media silence and other muted reactions by vested interests, the Wegman Report does not merely amount to some statisticians who don’t know anything about climate science having some technical issues with one study. This one study is the single most important study put forth in support of the AGW hypothesis and the issue is wholesale technical incompetence. The IPCC and Nature are directly implicated. I for one would not be able to rationalize this away.

Re the WaPo editorial, I especially like the use of the word “bad” in referring to AGW. This is an extremely common way to introduce bias into a discussion. The fact is that a continued rise in temperature would have both good and bad effects (at least up to a point), and it’s not at all clear that the bad consequences of, say, a doubling of CO2 would outweigh the good ones. On the plus side we’d have:

1. Higher crop production via CO2 fertilization
2. Increased crop production in large areas of the NH which now have very short growing seasons.
3. Reduced deaths from cold weather (which outweigh deaths from hot weather anyway)
4. Less energy needed for warming in winter and at night.
5. Probable increases in rainfall in most areas.

I won’t list the negatives since they’re so often reported. The ones that usually cause such a fuss, like glaciers disappearing or rising seas, are mostly strictly adaptation issues which may have almost no cost when considered in the long run. The positives I’ve listed are generally continuing ones and pay dividends every year.

You said “engineers can be and ARE sure that what they are building is going to stand up in the face of its design conditions. They have defined design criteria, they either meet those criteria or they don’t, and if they haven’t or aren’t sure they’ve met those criteria, they don’t start construction – are you guys disagreeing with that?”

Yes, I am disagreeing with that. As with MJ in post 270, my firm is ultimately responsible for the decisions of the engineers, as is the engineer himself. We usually go ahead with construction with many unknowns and assumptions made. If the assumptions are wrong, the system won’t work and we will get fired, but by no means do the engineers have all the details totally worked out with strict confidence intervals etc. It just doesn’t work that way. They do the best they can with what they have; we have a budget, which is usually how we win the bid, and the engineers and geologists must get as much useful data out of that budget as is possible, then decide on how to proceed or build. The firm’s reputation and legal responsibilities are part of that process.

Dan, it sounds like we have very similar backgrounds and I’d like to help out wherever possible with any code reviews/verification you are doing of any GCMs. I cut my teeth on Fortran IV/77 back in the early 80s in the nuclear industry, and now develop software apps in Java, C++, VB etc. in the railway industry.

Steve, I know we are supposed to be patient with those who don’t have the same background as us, but Lee, you really haven’t ever done any engineering, have you? Have you ever done any science for that matter? There are lots of genuine experts on this blog from a variety of different backgrounds. Listen to them; they know what they are talking about much more so than the claimed experts on RC.

Regards

KevinUK

PS following our record heatwave last week in the UK, the weather has now returned to the kind of summers that we have now become used to in the UK i.e. it’s relatively cold and pouring with rain – more like March than August.

Dane, when your soils report (I’m assuming this is in part what you produce) is requested by an engineer who needs to know if the soils can support a given load, your answer can fall into three categories. Yes, based on the set of criteria our company uses to answer that question, it can. Or no, based on our criteria, it can not. Or, we do not have sufficient information to decide.

In an engineering context, the last two answers are effectively equivalent – don’t build as designed. Further investigation may gain more info and give a yes or no answer, or design changes may reduce loads to within what the soils will bear or take the answer outside the uncertainty, but you still end up with a “yes” answer before construction proceeds. There may be mistakes, there may be iterations of design and discovery during construction – but the bottom line is you don’t build unless the engineering firm, using solid design criteria, says “this thing will stand up.”

In policy decisions, answers of that third type are treated differently than in engineering. Policy decisions often must be made on the basis of data that is only sufficient for ‘based on our decision criteria, we aren’t sure.’ THAT is the distinction I’m making.

Kevin, I’m not an engineer and I just said I’m not an engineer. I am, as I’ve said several times here, a biologist. I’m not doing science any more, but during the decade or so that I was, I published as first author in Genetics, PNAS, J Neuroscience, J Neurogenetics, and several others, with some of those papers now having more than 50 cites.

What the hell makes you think I’m not listening here? I just frickin’ SAID I have modified my interpretation of the dendro work based in part on what I’ve read and learned here. I think that Steve’s implication that the NAS committee was negligent was way out of line, for the reasons I’ve stated, and I said so. That doesn’t mean or even imply that I’m dismissing out of hand everything else that is said here, and the massive “he’s criticizing Steve (or others, et al.), he must be wrong on everything” response that I get is frankly becoming pretty funny. Tiresomely funny, but still funny.

It is disappointing that the Washington Post could not get the story right, but they’ll get a second chance: This week the American Statistical Association decided to add a special session to its annual meeting (starts in a few days in Seattle):

354 Late-Breaking Session #2: What is the Role of Statistics in Public Policy Debates about Climate Change?
08/09/06 8:30 AM – 10:20 AM

The first speaker is, of course, Wegman.

In addition, as SteveM has already noted here, AGU is conducting a special session on this topic at its fall meeting, and — if I understand correctly — SteveM has an invitation to speak.

#278,
Lee,
You said “but the bottom line is you don’t build unless the engineering firm, using solid design criteria, says ‘this thing will stand up.’”

That’s not how it works. Engineers design a lot more things than buildings – all sorts of systems of every type imaginable – so the statement is not “real world”. We go with what we bid, plus a few changes along the way within budget, and are usually pretty sure the system will work. But mistakes are made and clients are lost.

You also said “In policy decisions, answers of that third type are treated differently than in engineering. Policy decisions often must be made on the basis of data that is only sufficient for ‘based on our decision criteria, we aren’t sure.’ THAT is the distinction I’m making.”

That too is how real-world engineering often works. Engineers are not always nearly as sure of things as you might want or think. Only the biggest-budget projects with public health and safety as factors may operate the way you suggest.

I’m KevinUK, not Kevin. I apologise for my ad hom about you not having done any science. I won’t do that again. Please allow for the timing of posts on this blog: often a subsequent post can be added while you are still composing a reply, and so sometimes a reply doesn’t make sense.

I’m relatively new to this blog, but before I started to post I spent a lot of time researching AGW and the evidence for and against it on the internet. I’ve read a lot of papers (IPCC 2001 TAR, MBH98/99, M&M etc.) but obviously not all of them, and I’ve also followed the recent congressional hearings (first time I’ve ever done that). It takes a lot to catch up on this blog (as it covers so much), and unfortunately, because it’s not structured, it’s difficult to cross-reference and find stuff easily that you’ve previously read and absorbed. I started as a ‘not sure’, but after all I’ve researched (and continue to research) I have become a definite ‘contrarian’. Steve (and others on this blog) are doing a great job in slowly unpicking the poor science that underpins, IMO, the whole AGW myth. Keep reading, keep researching (the truth is out there and doesn’t involve extra-terrestrials) and before too long you will, I hope, also reach the point at which you will at least consider the possibility that AGW is a myth. Some of us are already well past that point.

Ummm – can anyone point to one false statement of fact in the WP editorial?

Several… the first:

The graph is hardly central to the modern debate over climate change.

Sorry, but this is false.

Yet the subcommittee has investigated the scientists who dared produce it and hounded them for information.

Not necessarily false, but certainly a false characterization of the hearings. Michael Mann published a blatant lie/incompetent statement, and the result is widespread use of similar, flawed methods. This wasn’t a “hounding” as the op-ed would have you believe. It was simply the exact sort of thing science does when it uncovers fraud.

Now that a study of the graph by the National Academy of Sciences has largely backed up the hockey stick findings, the committee has been holding hearings to attack it some more.

Hardly true. The NAS panel basically discredited the HS and, therefore, its findings.

” Two engineering safety reports in 1999 and 2000 explicitly warned of the recent failure and were ignored by the administrator-politicians in charge.”

Gary, are you talking about the memo reported in the Boston Globe last week? If so, it turns out that was a fake. They did a little bit of auditing on the memo and found discrepancies on every point, and not minor ones. Take the complaint about rust on the supports as they were waiting to be installed: the memo was written two months before the equipment in question was even delivered to the site. The memo was also dated two weeks before the writer was assigned to the project.

Not to say concern about the supports was not raised earlier, as I mentioned above, in one case the question was asked by a little girl on a school field trip.

Mark: “The graph is hardly central to the modern debate over climate change” is a statement of opinion, and an opinion that was shared by Gerry North, the head of the NAS panel… certainly not a false statement.

I don’t believe that there is a single false statement of fact in that piece, with the possible exception of KevinUK’s point, although it’s pretty obvious they were referring to CO2, not black carbon.

RE: # 148 – You are seemingly personally threatened by the idea of an audit. You must have something to hide. That’s why you resort to ad hom comments like “swift boating” and implied fossil-fuel-industry conspiratorial ties. What a joke. You are pathetic.

Yet somehow you fail to directly address all three points I made. Pretty telling.

The HS is central to showing a statistical anomaly, therefore it is central. As such, the NAS panel was very clear that the HS is meaningless beyond about 400 years ago (and not worth much for the past 400 years anyway).

this is a statement of opinion and an opinion that was shared by Gerry North the head of the NAS panel…certainly not a false statement.

Good for Gerry. He is incorrect. That climate scientists continue to attempt to show HS behavior in the climate is evidence enough to disprove that opinion – an opinion, I might add, that was not in the NAS report itself. If it were not central to the debate, they would have stopped long ago.

The reason many in the warming camp continually choose to claim the HS is not important (yet continue to publish on the matter) is because this is the primary area in which they fail. Marginalize its importance, and they don’t look so bad. It’s a good bit of spin, and even entertaining if you can get over the hypocrisy, but spin nonetheless.

Since you don’t like what was given in response to your request, why don’t you tell us what the “facts” are in that editorial. I just re-read it and darned if I can find any. I might complain about their claim that warming is augmented in the polar regions, since there’s no real evidence of it in the south polar regions (except the much ballyhooed Antarctic peninsula). So what do you have that we COULD claim was a wrong fact?

I meant it as a joke before Mark T pointed out the real ‘non-facts’ but thank you for conceding my joke anyway.

Have you read much of this blog? Have you followed any of the posts Steve M has made describing the analysis he has been doing on the other reconstructions referred to in the NAS report? The hockey stick IS central to the AGW debate, just as Hansen’s and others’ flawed modelling IS central to the debate (but, like Mann, he kicked it all off and is the ‘poster boy’). IMO these two are the ENTIRE debate. The model predictions stand (and IMO therefore fall) on their demonstrated agreement with the reconstructions – reconstructions which have now been shown to be ‘unsound’.

Steve M is systematically dealing with the reconstructions and it won’t be long now I hope before he will summarise and conclude his findings. He has been shown by Wegman to have been right on MBH 98/99 and I am very confident that he will be shown to be right on the other reconstructions. Others I hope are about (have probably already started) to do the same on the modelling.

Re 226, 265
To Dan Hughes and anyone else who knows the answers
Have you seen any details on how the general circulation climate models include the effects of the spectral lines of water vapour, carbon dioxide and other relevant atoms/molecules?
Ideally, I think that a list, in wavelength order, of H2O, CO2 and other relevant lines should be created, and the calculations should work systematically through this list. Proceeding in this way allows the calculations to take account of overlapping lines. Has this been done for GCMs?
Such lists have been used in stellar spectra modelling.
However, it is also possible to combine all the lines in given wavelength intervals to form opacity distribution functions, which give cruder spectra.
Roger Bell

“the Bush administration is supporting ‘democratic development’ in the world to get most of the oil to the US”

If the US wanted to “get” most of the oil, they could just take it.

In fact, as far as I can see, the US only wants to be able to BUY the oil on the open market.

Considering that most oil producing countries produce nothing else of value, I think they are very fortunate to have a resource that was given by nature to them for nothing, and discovered, and developed by others.

Based on the analyses presented in the original papers by Mann et al. and this newer supporting evidence, the committee finds it plausible that the Northern Hemisphere was warmer during the last few decades of the 20th century than during any comparable period over the millennium

engineers can be and ARE sure that what they are building is going to stand up in the face of its design conditions

No. Buildings, bridges, levees, dams, etc, can and do fail under “design conditions” – sometimes it’s an issue of construction/materials, sometimes it’s an issue of engineering/design, and sometimes it’s a combination. As an example of such a situation, see the Citigroup Tower in NY here, which was caught in time.

And the “design conditions” themselves are based on a number of assumptions and incomplete data. For example, one can have soil boring tests performed in an effort to get a representative idea of geotechnical site conditions, but this does not give certainty about the conditions across the entire site.

Design flaws/issues are found all the time in other engineering areas, too. TWA Flight 800 and other incidents are examples of this.

This is gbalella. Apparently my posts aren’t getting through the spam filter.

Before the HS there was the Lamb graph. If one looks at that, the best I can discern is that it shows relatively little variability, with a net difference of maybe 1.2 C from the MWP to the LIA.

Note this graph was constructed prior to all the politicization of climate science.

My main point is that if the variability of the late Holocene is only of the order of 1.5 C, or even if it was 2.0 C over 500 years, that will be dwarfed if the projection of 2-3 C of warming over the next 100 years comes true. And note that the recent data supports this projection. Further, I’d claim the best evidence suggests closer to 1.0 C of late Holocene variability… maybe less, considering the glacier evidence.

So I challenge any of you to make a reasonable graph of what you think the last 2,000 years might have looked like (maybe use Moberg) and then add 100 years to it with a 2-3 C increase in temperature. LOOK AT THE GRAPH and just try to imagine its significance if they are actually right. Granted, I understand that most of you don’t want to give them their projections, but just look at it and imagine if it is right. Can you look at that graph and not be concerned?

Bottom line is that their projected warming dwarfs almost any known reconstruction of late Holocene variability.

Additional information about the correlated-k method is available in:
Robert H. Essenhigh, “Prediction of the Standard Atmosphere Profiles of Temperature, Pressure, and Density with Height for the Lower Atmosphere by Solution of the (S-S) Integral Equations of Transfer and Evaluation of the Potential for Profile Perturbation by Combustion Emissions,” Energy & Fuels, Vol. 20, pp. 1057-1067, 2006, DOI: 10.1021/ef050276y; and R. H. Essenhigh, “On Radiative Heat Transfer in Solids,” AIAA Paper Number 67-287, AIAA Thermophysics Specialist Conference, New Orleans, April 17-20, 1967. Additionally, Chapters 11 and 12 in this book address radiative transport in the atmosphere directly: Gary E. Thomas and Knut Stamnes, Radiative Transfer in the Atmosphere and Ocean, Cambridge University Press, Cambridge, 1999. Endnote #16 in Chapter 11 cites a reference to a discussion of the correlated-k method as: A. A. Lacis and V. Oinas, Journal of Geophysical Research, Vol. 96, 1991. The page numbers are given as 9,027-63, but I’m not sure what that means. Maybe 9027-9063?

I do not have online access to these latter references.

The modeling and calculations in the ModelE code seem to be based on the correlated-k approach. It is my understanding that line-by-line radiative transport calculations are considered to be a benchmark-grade approach that can be used to validate other approaches.
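The idea behind the correlated-k approach can be illustrated with a toy sketch (this is not ModelE's actual radiation code; the spectral grid, line shapes, and quadrature count below are invented for illustration). Over a homogeneous path, the band-averaged transmittance depends only on the distribution of absorption coefficients within the band, not on their spectral arrangement, so the jagged line-by-line spectrum can be re-sorted into a smooth monotone function k(g) of cumulative probability g and integrated with a handful of quadrature points instead of tens of thousands of spectral points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "line-by-line" absorption spectrum within one band: sharp Gaussian
# lines superimposed on a weak continuum (arbitrary units throughout).
nu = np.linspace(0.0, 1.0, 20000)          # fine wavenumber grid
k = np.full_like(nu, 0.01)                 # continuum absorption
for center in rng.uniform(0.0, 1.0, 60):   # 60 random line centers
    k += 5.0 * np.exp(-((nu - center) / 0.002) ** 2)

u = 1.0  # absorber amount along a homogeneous path

# Band-averaged transmittance, line by line: mean over the spectral grid.
t_lbl = np.mean(np.exp(-k * u))

# Key identity: sorting k into a monotone k(g) leaves the band mean
# unchanged, because only the distribution of k values matters here.
k_g = np.sort(k)
t_sorted = np.mean(np.exp(-k_g * u))

# Because k(g) is smooth, a coarse quadrature in g replaces the fine
# spectral integration; practical codes use of order 10-20 points per band.
g = (np.arange(k.size) + 0.5) / k.size
g_quad = (np.arange(20) + 0.5) / 20.0
t_ck = np.mean(np.exp(-np.interp(g_quad, g, k_g) * u))

print(t_lbl, t_sorted, t_ck)
```

The overlapping-lines issue Roger Bell raises in #296 is exactly where the "correlated" caveat enters: the re-sorting is only exact for a homogeneous layer, and applying the same g-ordering across layers with different temperature and pressure assumes the spectral ranking of k is preserved, which is an approximation that line-by-line benchmarks are used to check.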

muirgeo, what makes you think the CO2 sensitivity coefficient used to make that projection was robustly estimated? That’s what this is about. (That, and to a lesser degree the false claim, repeated again and again, that the “20th century trend is unprecedented”.) In case you’ve missed it, there’s quite an argument going on among climatologists about what that value is. Would you care to summarize that argument for us?

muirgeo, what makes you think the CO2 sensitivity coefficient used to make that projection was robustly estimated?

bender

Um, common agreement among the best models? Basic principles of physics and climate, on which those models are based? Oh, and the current warming trend of about 0.2 C/decade over the last 30 years… pretty much what the coefficient predicts.

“In fact, the broad contours of climate science are a matter of considerable consensus. Increasing atmospheric concentration of greenhouse gases traps additional energy, which tends to cause warming of the Earth’s surface. The actual concentration of carbon in the atmosphere has increased enormously since the advent of the Industrial Revolution. And average global temperatures have risen in recent decades, an effect that is amplified significantly in the polar regions. The major outstanding question about global warming is not whether adding large amounts of new carbon to the atmosphere will tend to increase temperatures further. It is how sensitive the climate will be to what mass of additional carbon over time — and how bad the practical consequences of that sensitivity will be. On this point, there exists vigorous scientific debate. But it’s a debate to which congressional committees are laughably ill-suited to contribute.

The reality is that nobody knows how bad global warming will be”

Notwithstanding KevinUK’s joke I find no errors here.

On the “central to the debate” question, there really is a lot of evidence independent of both models and millennial-scale reconstructions. If Mann had never been born, had never published MBH98, if there had been no hockey stick, would we still have a debate? This was the question asked of the head of the NAS panel and others, and the answer they all gave was yes. How can something be central if the debate still remains when it is gone? There is no central component here; we have a preponderance of evidence that, taken together, leads me (and the majority of climate scientists) to the conclusion that CO2 is responsible for a large, or at least significant, portion of the warming in recent decades.

Now, on the models: of course they are far from perfect, but can we disregard them completely? I say no – they represent the sum of our current state of knowledge of the climate system, and without greenhouse forcing we can’t account for the warming of recent decades. Could they be missing an important forcing, or its magnitude? Could they be treating clouds wrongly? Of course it’s possible, but it is just wrong to assume that this is so. The great benefit of the models is to help us understand the physics – if we observe something that a model can’t explain, then the physics in the model is likely wrong. There are certainly aspects of the GCMs where this is obviously still the case, and as they improve, our understanding of the physics improves as well.

Further, I take exception to the inference (seemingly held by many here) that the failure of peer review, and the belligerence of one or a few climate scientists in one case, somehow taints the entire body of knowledge of the climate system across a great many disciplines. What follows quickly seems to be that all climate scientists are corrupt, the models are all worthless because they are tuned to give an output desired by the modeller (see the “all climate scientists are corrupt” part), and that most scientists are just falsifying results to keep getting funding. Now, peer review is of course not perfect, but it is a system that works to move science forward over time. Should we have independent audits of all papers on which policy decisions are based? Hey, I say that would be great – if someone wants to pay for it. But in reality, is any policy decision based on just one, or a few, papers?

I am happy to engage you guys on the facts, but let’s be honest. I thought Steve came across very well at the hearings; he seems like an honest guy to me, and good on him for showing the fault with MBH98. Mann came across well too, but not honest – he had the air of someone who knows he is wrong and just won’t admit to fault…

I won’t stay for long if we can’t get past the “all models are completely worthless, and all climate scientists falsify their results to match their beliefs and ensure continued funding” stage…

The actual concentration of carbon in the atmosphere has increased enormously since the advent of the Industrial Revolution.

How can this be a fact when it contains the weasel-word “enormously” instead of any actual figure? The actual figure is a 30% increase in what is a small fraction of the total atmospheric pressure. I wouldn’t call this enormous, but how do I know what YOU call enormous?

I already responded to the polar part.

it’s a debate to which congressional committees are laughably ill-suited to contribute.

Why would this be true? They can have people from all sides discuss their best analysis, as happened in the recent committees. In fact such a statement isn’t a fact at all but an assertion contrary to fact and reason.

The reality is that nobody knows how bad global warming will be

As I pointed out in another response on this thread, the use of “bad” is simply poisoning the well. It tries to assert that global warming must, obviously, be bad, something which is not at all obvious.

This was the question asked of the head of the NAS panel and others and the answer they all gave was yes.

Not true and your analysis is misleading anyway.

Read the Lindzen paper linked in response 306. It does a good job of explaining the problems with models. If you have specific problems with what Lindzen wrote let’s have ’em.

Dave – I agree in part with your polar point, although the peninsula warming is not trivial, and the Arctic is warming. “Enormously” is opinion, obviously, but I won’t argue semantics and tone; my point still stands about the factual accuracy of the piece – I mean, it is an opinion piece! You can argue with the author’s opinions, but you can’t point to any factual inaccuracy, beyond the marginal ones I’ve conceded to you and KUK. “Ill-suited” is obviously opinion, but really, you saw the recent hearing – they don’t have a clue about the scientific arguments.

Now what was not true and misleading?

I do have some problems with some (not all) of Lindzen’s points, but they are well documented by others – and what reason do we have to believe Lindzen over, say, Hansen or another climate modeller? What problems do you have with Hansen’s analysis?

It is true that no one knows for sure what will happen in the next hundreds of years – I have no frigging clue. My concern is based on the non-zero probabilities of abrupt negative effects, and the combined effect of observed warming and acidification of the oceans. You can make serious arguments for large negative economic impacts here…

Cameron, I think one of the problems here is that you have a different interpretation of the word “fact” from most fact-loving people. When you’re given a number, it’s only half the fact. The other half is the uncertainty surrounding the method by which the number was calculated or measured. If you present a “factual” story with only half the facts, what does that make it? Half a story. Read the blog and get the whole story. Take your time; there’s a lot to learn.

The actual concentration of carbon in the atmosphere has increased enormously since the advent of the Industrial Revolution.

…so has the actual concentration of particulates, which supposedly mask global warming. Somehow, particulates caused “global cooling” for a few decades starting around the middle of the 20th century, but up to that point, somehow they weren’t doing much at all.

Cameron, I did not notice errors in fact, but the editorial is still flawed. It’s sorta like a CO remarking that an indicator light is out on the conn while the boat is trying to recover from a jam dive.

Bender – not sure what you’re referring to specifically, but I am fully aware how important uncertainty is, and I am very careful with the facts. We seem to be getting off on the wrong foot here already – why did you take such a patronizing tone? Now, I have read some of the blog. I am not qualified to comment on the stats details, but I have seen comments from Dave and Willis regarding ocean mixing and chemistry that were flat-out wrong but said with some authority. Shall we audit the blog? Thanks for the tip on links.

Further, I take exception to the inference (seemingly held by many here) that the failure of peer review, and the belligerence of one or a few climate scientists in one case, somehow taints the entire body of knowledge of the climate system across a great many disciplines. What follows quickly seems to be that all climate scientists are corrupt, the models are all worthless because they are tuned to give an output desired by the modeller (see the “all climate scientists are corrupt” part), and that most scientists are just falsifying results to keep getting funding.

Cameron, you seem a bit paranoid. Reality is close to the opposite. Pielke Sr on his website has been presenting seemingly more inclusive and comprehensive improvements for the GCMs. Apparently, though, possible improvements in the models are being ignored, at least those from outside the GCM clique.

Further, I take exception to the inference (seemingly held by many here) that the failure of peer review, and the belligerence of one or a few climate scientists in one case, somehow taints the entire body of knowledge of the climate system across a great many disciplines. What follows quickly seems to be that all climate scientists are corrupt, the models are all worthless because they are tuned to give an output desired by the modeller (see the “all climate scientists are corrupt” part), and that most scientists are just falsifying results to keep getting funding. Now, peer review is of course not perfect, but it is a system that works to move science forward over time. Should we have independent audits of all papers on which policy decisions are based? Hey, I say that would be great – if someone wants to pay for it. But in reality, is any policy decision based on just one, or a few, papers?

Cameron, the issue here is that certain climate scientists have been found out using inappropriate assumptions (there is a linear relationship between tree ring widths and temperature for example), poor quality statistics, faulty algorithms, poor selection of proxies etc, and the quality of foundation papers has been found wanting.

A corpus of papers by closely associated climate scientists that rely on these foundation papers have also been found wanting. Oh, and not only that, but the climate scientists clearly refuse to adhere to accepted standards of science, for example, requiring proper archiving of data (and methods), and encouraging replication. Instead we have had an extraordinary display of poor practice, obstructionism and obfuscation. We have seen those asking questions abused, their motives impugned, and ad hominems flung about in an extraordinary way.

None of that would have mattered perhaps if the IPCC hadn’t made such a fuss of the Hockey Stick in TAR. However, the Hockey Stick was used, and in fact is still being used, by the alarmists as if it were sound science. Fact is, it has been shown not to be.

While this state of affairs is unfortunate for the Hockey Team, it simply doesn’t affect those climate scientists who HAVE done the right thing in following sound scientific practice, used sound methodology, archived their data, encouraged independent replication etc.

It is really simple. Follow sound scientific practice, and you will be acknowledged and appreciated. Your points will be taken seriously. But follow practices such as the Hockey Team has now been shown to follow, and you will become the subject of intense scrutiny, as those people have found out.

There is a lot of paranoia in this debate 🙂 Not from me, though – I have seen those types of inferences drawn many times in blogs and in the media. Beng, are you implying that the GCM clique don’t want their models improved? This would go to my point – obviously this is false. Maybe there are technical difficulties or disagreements about how to improve them, but there is no lack of desire, even if, shock horror, they output less predicted warming after the improvements. Scientists on the whole want to understand the system better; they/we don’t care if our results prove the skeptics or the alarmists right, so long as our understanding is improved. Of course they make mistakes, but are they corrupt?

Well, Dave – it was an editorial. It’s allowed to have tone. I was trying to get at why it was lambasted so in this thread, even with unsubstantiated accusations regarding the quality of the author’s journalistic skills. Clearly it was because the author holds different opinions from the popular view on this blog, rather than for any factual inadequacies.

Cameron: I agree with your point on the editorial being (presumably) factually correct. I use a different meaning of flawed than “factually incorrect”. I thought my pithy submarine analogy showed how it was flawed. And of course, this is a debatable area. I’m not saying that I proved or even made a substantial argument that it was flawed. I’m just differentiating the concepts. A story about 7DEC41 in Hawaii that noted that it was a poor surf day, might be factually correct, but would (I hope we all agree) be flawed.

Maybe the editorial is right in its focus and insights, but perhaps it is wrong. Things that one could engage on would be the “pointlessness” argument, etc.

Bender – it’s an editorial, not an article!!! Giving half the story is no valid reason to criticize it – it is trying to make the author’s point! Which is:

“The major outstanding question about global warming is not whether adding large amounts of new carbon to the atmosphere will tend to increase temperatures further. It is how sensitive the climate will be to what mass of additional carbon over time – and how bad the practical consequences of that sensitivity will be. On this point, there exists vigorous scientific debate. But it’s a debate to which congressional committees are laughably ill-suited to contribute.”

If sensitivity includes feedback effects then would you disagree with this statement?

Actually, Cameron, I have no business trying to explain to you why it was “lambasted”, as I was not among the critics. So I’ll have to step back from this question. Do read the blog though. It might give you some helpful background information.

It is how sensitive the climate will be to what mass of additional carbon over time – and how bad the practical consequences of that sensitivity will be. On this point, there exists vigorous scientific debate. But it’s a debate to which congressional committees are laughably ill-suited to contribute.

Let’s grant the premise that the big question is what the impact of increased CO2 will be – big or small? manageable or unmanageable?

Congressional committees are presumably also ill-suited to engage in speculation as to whether Saddam had WMD, but, if intelligence is shown to be flawed, they are quite within their rights to inquire as to why the intelligence estimate was flawed and to think about how to improve the process.

The hockey stick was a form of flawed intelligence that was heavily promoted. Why shouldn’t a congressional committee inquire into it?

For policy purposes, I think that congress should, at the end of the day, be guided by scientific consensus on the impact of increased CO2 as expressed by leading scientific institutions properly examined, but that they should also be far less passive and far more questioning of the institutions. They should demand better performance than they are getting. They should do things like Barton is proposing – do independent due diligence on climate models. They are entitled to demand open practices.

They are entitled to know whether there are problems with data and code access – that’s where this hearing got started. If there are problems in the disciplines, they should know about it and see if there’s something they should do about it.

At the end of the day, they’re going to have to make some decisions and they need to feel comfortable that breakdowns in scientific practices, such as in the case of the Team, have been eliminated.

re “NaN”… one tidbit: NaN is quite the handy value in various circumstances. It is usable as an “unknown” or “missing” value that (carefully handled) can propagate correctly through a variety of calculations.

Many systems are poisoned by use of zero in place of “missing”. I always get suspicious about real-world data collection that has no differentiation of zero and unknown.

Not sure this has much if any significance for climate research, but there it is.

For example, if you use a processor’s floating point unit to divide zero by zero, and you have floating point interrupts turned off (or ignore them), then the result you get is a NaN. (Dividing a nonzero number by zero gives you an infinity instead.) That way, if you continue to do calculations with it, you don’t accidentally get a valid-looking number which is meaningless. Certain other operations also generate a NaN: subtracting infinity from infinity, multiplying zero by infinity, calling an inverse trigonometric function with an out-of-range input, or taking the square root of a negative number.

See here (look down the page for NaN) for an explanation from a third party.
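The propagation behaviour is easy to demonstrate in Python (the “readings” below are invented purely for illustration):

```python
import math

# NaN propagates through arithmetic, so a "missing" value cannot
# silently turn into a plausible-looking number.
missing = float("nan")
result = (missing * 3.0 + 7.0) / 2.0
print(math.isnan(result))        # True: still NaN after the calculation

# NaN is the only float that is not equal to itself, which gives a
# quick (if cryptic) test for missingness.
print(missing == missing)        # False

# Contrast with using 0 for "missing": the error disappears into the data.
readings = [12.0, 0.0, 15.0]     # was that 0.0 a real reading or a gap?
print(sum(readings) / len(readings))  # 9.0 -- silently biased low
```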

Dammit, why won’t someone read and comment on the articles I linked to in #154. They look very reasonable to me and explain virtually all of the variation shown in numerous studies. Come on, folks, show me where they are wrong…

I tried your links when you posted them, but could never get to any “articles”. Maybe it’s me, but could you recheck the references? Should #154 be 153? And in 153, if you are linking me to “6”, the link sent me to a web page, but I don’t see what articles you might be referring to.

JAE,
Read your papers. There are two major problems I can see, which are fatal to the claims made there:
1. All the talk about dating errors in two of the papers (EE & MG). The effects of these are so obvious as to be trivial, hardly worth publishing. But the critical question is: where do all these hypothesized dating errors come from? (Low SNR in both papers, which is why they were published where they were.)
2. Regarding the Ecological Modelling paper. Ever heard of Taylor’s theorem? Or, put differently: how much of the variation in a time-series is “explained” by a Fourier transform? If you can answer these two questions, then you can tell me where Loehle’s “explanatory power” is coming from in this paper. Again, that’s why it was published where it was published.

338: Bender: Thanks. I wanted feedback, and I got it, finally. I don’t have the background to answer all your questions, but I am intrigued with this method of looking at the data. I don’t agree with your statement in #1. Dating errors are not a “trivial” matter; can you point me to something that shows this is not an issue? BTW, I understand the author is now adding many other proxies and is still finding the same clear cyclic patterns.

No doubt he will continue to do so. And if you look up Taylor’s theorem, or investigate how Fourier transforms work, you’ll have the answer to your questions, and you’ll know why it’s a trivial result. Give it a try and get back to me.

I did not say dating errors were not a problem. I merely asked where they are supposed to come from. I have no idea how frequent or serious they are. What makes you think they are an issue?

Bender: Did you really read the articles? They explain why the dating errors are an issue. If you try to combine proxies with dating errors, the signals tend to “cancel” each other, resulting in a straight line. The publications also explain some of the sources of dating errors.

Regarding Fourier transforms and Taylor’s theorem: I don’t care what mathematical methods are used – if the model fits the data (which is shown for two completely independent reconstructions), then the model makes sense to me. If it turns out that the model fits the data in numerous cases, then it has great explanatory power. Most natural time series are composed of sine waves, after all.

JAE,
Of course I read them. You’re not answering the question. The *effect* of these putative dating errors is, as I said, obvious. The question is how common they are, and what their magnitude is. So, tell me how these alleged dating errors arise.

The reason you don’t understand what’s wrong with these papers is your attitude: “I don’t care what mathematical methods are used”. You say that “most natural time series are composed of sine waves, after all”. And there is the problem. You’re getting warm. What is Taylor’s theorem?

jae, I thought you cared to know what’s wrong with those papers. I’ve clearly explained what’s wrong with two of them: much ado about nothing. The third – well, if you are determined to like it no matter what I say, then go ahead and like it. I’m just not sure you understand the implications of a theorem like Taylor’s. The problem, in a nutshell, is the same thing that’s wrong with the Mannomatic: unpunished ad hoc overfitting. Curve fitting is not “explaining”; it’s accounting. And if the accounting is done unfairly, then the conclusions will not be robust. If you are OK with that gamble, then fine. But that is the same risk that MBH98 took. If you can’t see the parallels, then you have some reading to do. Good luck in your reading.

So, if I have 10 proxies which show exactly the same trends, and I “overfit” a curve that summarizes these trends, I am not “explaining” anything? If I plot viscosity of a liquid against temperature and fit a nice curve to it, am I “overfitting?” Sorry, but I don’t understand. Thanks for the comments, though. I’ll keep reading…

If you’re plotting viscosity vs temperature then you have a nice clean theory to work from and nice precise measurements of both viscosity (as with a falling bead or some sort of moving paddle attached to a voltmeter where you can mathematically relate what’s measured to the property) and of the temperature of what you’re actually measuring the viscosity of.

With tree rings, you may have a wonderful protocol for getting precise ring thicknesses or densities, even allowing for off-center cores or dry wood, etc. But relating this to temperature requires many assumptions and approximations. And fitting the rings to an exact year may even be a bit dicey as you go back in time, and it’s even more of a problem with things like coral or ice cores and the like.

However, I also don’t know what Bender is getting at exactly, but I’m interested in hearing. Taylor series I’m quite familiar with from calculus, but Taylor’s theorem, if I studied it (and I probably did), I’ve forgotten over the years.

jae, If you don’t understand what “overfitting” is then you don’t really understand what’s wrong with the Mannomatic. That’s why I’m pushing you to think about the implications of Taylor and Fourier. But since you (sort of) gave it a try, here’s my 2c:

(1) Taylor’s theorem says that any continuous function can be approximated to an arbitrary degree by fitting a polynomial of arbitrary order. Curve fitting, with nothing actually explained. Taylor’s theorem is not directly applicable to this case, but it serves as a warning: if you’re going to go fitting curves with large numbers of parameters, you had better be careful about how you account for your degrees of freedom.
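To make that warning concrete (a generic sketch – nothing here is taken from the papers under discussion, and the points are invented noise): with as many free parameters as data points, a polynomial “fit” is always perfect and explains nothing.

```python
# Any n points (distinct x) can be reproduced *exactly* by a degree n-1
# polynomial -- a perfect "fit" with zero explanatory content. Lagrange form:
def lagrange(points, x):
    """Evaluate the unique interpolating polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Five points of pure made-up noise:
pts = [(0, 3.1), (1, -2.4), (2, 0.7), (3, 5.9), (4, -1.2)]
# The degree-4 polynomial reproduces every "observation" perfectly...
print(all(abs(lagrange(pts, x) - y) < 1e-9 for x, y in pts))  # True
# ...an rsq of 1.0, with zero degrees of freedom left and no insight gained.
```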

(2) A Fourier transform preserves 100% of the variation in a time-series, whether it’s “composed” of independent sinusoidal process effects or not.
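That Fourier point is an accounting identity, which a naive O(n²) DFT sketch (the series is arbitrary made-up data) makes plain:

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    # Inverse transform; real part only, since the input was real.
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

# An arbitrary series with no sinusoidal structure assumed:
series = [2.0, -1.0, 3.5, 0.0, -2.5, 1.0, 4.0, -3.0]
back = idft(dft(series))
# The transform is lossless: "explaining" 100% of the variance this way
# is an identity, not evidence that sinusoidal processes generated the data.
print(max(abs(a - b) for a, b in zip(series, back)) < 1e-9)  # True
```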

Relevance: if you do a spectral analysis of a time-series, find out what the dominant frequencies are, estimate the magnitudes of those peaks, and then fit a statistical model after the fact, then the significance of that fit has to be assessed in light of its being a post-hoc analysis (a model built by hindsight, not insight). Because of this, you do not have as many degrees of freedom as you think you do; you are estimating more parameters than you have degrees of freedom to spare, and your model may therefore be overfit. (What’s the cure for an overfit model? Cautious interpretation. And I did not find your ringing endorsement to be very cautious.)

This modeling approach is fine for generating ideas as to what might be responsible for these independent sinusoids (solar activity cycles, earth-orbit cycles, etc.). But it’s not a solid basis for a conclusion: it’s a starting point, not an end point. The real problem with purely descriptive models that have no mechanistic basis, however, is that they lead to no new lines of inquiry. They are not very interesting to a scientist interested in answers. (I would guess that this is why the paper was published in Ecological Modelling, not a more appropriate specialist forum such as GRL.) Point is: if you want to believe that a process is all natural background variability, then this method is biased toward concluding precisely that. Question: do you want a model that addresses your assumptions about the driving processes, or one that gives you a serotonin rush?

This, by the way, is precisely the problem with (a) the Mannomatic (overfitting to artifact-ridden samples), and (b) dendroclimatological response function analysis in general (where you’re trolling a posteriori for correlations, and not discounting the calculated significance using a Bonferroni correction).

If you still like the Loehle papers after reading this, then I encourage you to ask for a second opinion, and report back.

Now that I have done this for you, what you can do for me is rank all these alleged sources of dating error according to their perceived relative importance.

#338. bender, I think that dating errors are a real and serious issue in (say) the Moberg low-frequency proxies. I think that they are a potential problem in sediments and ice core proxies. That the impact of an error may be obvious doesn’t preclude paleoclimatologists from ignoring the problem, as we’ve seen in other situations.

jae – you can’t assume that proxies have a sinusoidal signal, or any signal at all. A much more sensible model is Koutsoyiannis’ scheme of noise on multiple scales.

Steve M: I think there are many good reasons for assuming a sinusoidal signal. Many things in the natural world are sinusoidal. I don’t know anything about the Koutsoyiannis scheme, but will look at it.

Why would dating errors be more of a problem for a low-frequency proxy than for a high-frequency proxy? I’d think that dating errors would be more likely to wash out high-frequency variability while preserving low-frequency variance. Thus if you’re likely to have 5-10 year errors at year 1000, you won’t see peaks and valleys a decade long, but you will see variations 50 or 100 years long.
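That intuition checks out in a toy simulation (all numbers invented; jittering each observation independently is a crude stand-in for real dating error, which would drift smoothly along a core):

```python
import math
import random

random.seed(0)

def signal(t):
    # A fast 10-year cycle plus a slow 200-year cycle, equal amplitudes.
    return math.sin(2 * math.pi * t / 10) + math.sin(2 * math.pi * t / 200)

# Stack 50 "proxies"; every dated observation carries an independent
# dating error of up to +/- 5 years.
stack = [sum(signal(t + random.uniform(-5, 5)) for _ in range(50)) / 50
         for t in range(400)]

def amplitude(series, period):
    # Crude amplitude estimate at one period: correlate with a sinusoid.
    n = len(series)
    c = sum(series[t] * math.sin(2 * math.pi * t / period) for t in range(n))
    s = sum(series[t] * math.cos(2 * math.pi * t / period) for t in range(n))
    return 2 * math.hypot(c, s) / n

print(amplitude(stack, 10))   # near zero: the decadal cycle is washed out
print(amplitude(stack, 200))  # near 1: the centennial cycle survives
```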

Personally, I found Bender’s analysis to be clear and convincing. It’s even more important to question results if they match your expectations, since you are then clearly biased to accept them. Think of it this way, jae: how do you know that the results are not spurious? If you have no theory driving a priori predictions, how can you tell? Keep in mind the rule with modeling: all models are wrong, but some are useful. Figuring out which are useful is the tough part.

Think of it this way Jae, how do you know that the results are not spurious? If you have no theory driving a priori predictions how can you tell? Keep in mind the rule with modeling: All models are wrong, but some are useful. Figuring out which are useful is the tough part.

OK, but one does have to have a null hypothesis, right? There are very good theories behind sinusoidal temperature swings, like sunspots, Milankovitch cycles, wobbles of the Earth’s axis, etc.

Jae, you also have to look at some of the limitations of Fourier analysis. Most people know about the Shannon-Nyquist limit, where your highest reconstructible frequency is set by your sampling rate.

However, in this case we are looking for low-frequency responses – extremely low frequencies, with periods on the order of hundreds and possibly thousands of years. The problem you have here is that you don’t have enough samples going back far enough to detect frequencies with periods longer than about 400 years. The cycle represented by the MWP, LIA and the current warm period, if they form a sinusoidal process, looks to have a period of 800-1000 years, which is not detectable using a Fourier transform on the data that Mann used.
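To put rough numbers on that (the record length is hypothetical, chosen to echo the ~400-year figure above):

```python
# For an evenly sampled record of length N years (annual data), a DFT only
# has bins at periods N/1, N/2, N/3, ... -- nothing longer than the record
# itself -- while the Nyquist limit caps the short end at 2 years.
N = 400  # hypothetical usable record length in years
resolvable = [N / k for k in range(1, 5)]
print(resolvable)  # [400.0, 200.0, 133.33..., 100.0]

# A hypothetical 900-year MWP/LIA/modern cycle completes under half a
# cycle in such a record, so it cannot be separated from a slow trend.
print(N / 900 < 0.5)  # True
```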

Even if you could model a proxy as being composed of nothing but independent sinusoidal effects, you would be left with a serious extrapolation problem, which goes like this. Suppose your sinusoidal model “explains” (i.e. accounts for) a large proportion of the variation in the proxy, say 80% in a millennial-scale time-series. Small changes in the model parameters are going to result in small changes in the overall fit of the model (over the full length of the time-series), but they will result in major changes in the “trend” exhibited in the last century of the fit. Why? Because the few percentage points of variability contained in that short time-frame will not have much influence on the fitting process. This means that a model that does a good job mimicking fluctuations in the middle of the time-series will not perform nearly as well at the ends. At the back end, well, who cares. At the front end, however, this matters, because this is where extrapolation will yield the much sought-after forecast.

I would expect that Loehle did a fair amount of undocumented post-hoc experimentation to make sure that the 20th-century trends in his simulations pointed in the right direction: increasing. If your underlying assumption is that “what goes up must come down”, then your sinusoidal model has a limited range of descriptive ability. [The only way a trend can be sustained for some time is by superposition of two or more independent sinusoidal effects!] It is by these two effects, then, that the inevitable forecast is made: temperatures must soon come back down. Note: this isn’t your data telling you this; it’s the assumptions built into your model, in combination with your ad hoc experimentation, in which you burned up degrees of freedom penalty-free. That’s called a shoehorn, and it’s the sign of a very weak model.
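The bracketed claim – that only a superposition of sinusoids can sustain a trend for long – is easy to illustrate (periods and amplitudes invented for the sketch):

```python
import math

# Two slow sinusoids, nothing else in the model:
def recon(t):
    return (0.6 * math.sin(2 * math.pi * t / 1000)
            + 0.4 * math.sin(2 * math.pi * t / 800))

# While both components rise together, the sum shows a sustained
# two-century "trend"...
print([round(recon(t), 2) for t in range(0, 201, 50)])
# [0.0, 0.34, 0.64, 0.85, 0.97]
# ...but the structure of the model then *forces* a reversal later on,
# regardless of what any data might say:
print(recon(400) < recon(200))  # True
```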

If, as you point out, you actually have a solid mechanistic interpretation for the fitted sinusoidal components, then it’s a different story. But as long as these are hypothetical, or as long as the amount of variation “explained” is quite low (40% in the case of Loehle (2004)), the sinusoidal model is not a good model. Its primary flaw is that it is incapable of the sort of surprise that AGWers are positing exists: non-stationarities resulting from non-stationary forcing processes such as CO2. You want to refute a hypothesis by addressing it squarely, not dispense with it by skirting around the issues, or hiding statistical shortcomings under the rug.

What goes up may actually keep going up. Let the data decide. Leave the shoehorn at home.

Bender: OK, I think I now understand what you are saying. But I am puzzled: is any model even possible, given such “surprises”? If Loehle is successful in obtaining the same curves with 50 proxies, doesn’t that mean something?

Re #363
1. 50 proxies? I don’t see 50 proxies being modeled in the 3 Loehle papers you mentioned. Are we talking about a published work, something in progress, or just hypothetically speaking? Because the answer to your question “does that mean something?” would depend rather critically on the details. It might mean something.

2. I’m not sure I understand your question “Is any model possible, because of ‘surprises?'” Maybe it is my choice of the word “surprise” that is the problem. (It was a backhanded reference to Ludwig et al. 1997 (search for “surprise”), and perhaps unhelpful).

All I meant to say is that models ought to be capable of refuting our pet hypotheses. If you include a nonstationary forcing element, and it turns out non-significant in the estimation process, that would lead to a much more convincing argument than if you didn’t allow for that possibility.

Not really. Just finding a “curve” isn’t doing much. The whole purpose of deriving a curve is to develop a function f(x), or alternatively f(x,y,z,t,q,…). Curves are only useful for seeing whether variables are related to the output and determining whether there is a visually recognizable pattern. Statistical processing methods which correlate variables to the function are what is really needed.

364. I believe the author has added 23 additional proxies and plans to publish his results (I don’t know what the results are). The “surprise” I was talking about refers to non-stationarity. Of course, if there really is non-stationarity, the model will fail. That’s a good test of it. As the author notes, if the weather does not start cooling within a decade, the model fails. Therefore, it is capable of refuting the current pet hypothesis.

365. Don’t understand your point. ALL of science is about developing relationships between variables. The author provides the correlation statistics, and they are quite good, IMHO.

I’m sorry I’m so dense here. If I plot the viscosity of a liquid against temperature, I can fit a nice curve to the data (can’t remember whether it’s polynomial or logarithmic…). If I do my measurements very accurately, I get an rsq close to 1.0. Is this “overfitting” and therefore somehow statistically incorrect?

You’re not dense. You just don’t know what overfitting is because you haven’t done your homework.

Your example is not a case of overfitting because you presumably have many data points on your curve (dozens if not more), and you are only estimating 2-4 parameters. Therefore you have the degrees of freedom to estimate the things you want to estimate (intercept, slope, quadratic, etc). An rsq of 1.0 is fantastic. Especially because you have an a priori hypothesis: the cause of a change in viscosity is temperature, and the nature and strength of the relationship has some basis in molecular physics & fluid dynamics. Now what would you say about a regression where you increased the temperature in vessel A but recorded viscosity in vessel B in another room? All of a sudden your causality model is a little suspect. In which case what do your parameters mean? Now you’re getting a little closer to the Loehle model, although you’ve a ways to go yet.
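To make the degrees-of-freedom point concrete, here is a minimal sketch of a well-posed fit (hypothetical Arrhenius-style viscosity data, not jae’s actual measurements): thirty observations, two estimated parameters, so plenty of residual degrees of freedom, and an rsq near 1 that actually means something.

```python
import numpy as np

# Hypothetical viscosity-vs-temperature data: an Arrhenius-type law,
# ln(viscosity) ~ a + b/T, plus small measurement noise.
rng = np.random.default_rng(0)
T = np.linspace(280.0, 360.0, 30)          # kelvin, 30 observations
log_visc = -6.0 + 1800.0 / T + rng.normal(0, 0.01, T.size)

# Two parameters estimated from 30 points: 28 residual degrees of freedom.
X = np.column_stack([np.ones_like(T), 1.0 / T])
beta = np.linalg.lstsq(X, log_visc, rcond=None)[0]

fitted = X @ beta
ss_res = np.sum((log_visc - fitted) ** 2)
ss_tot = np.sum((log_visc - log_visc.mean()) ** 2)
r_sq = 1.0 - ss_res / ss_tot

dof = T.size - X.shape[1]                  # n - p = 28
print(f"rsq = {r_sq:.4f}, residual dof = {dof}")
```

With far more observations than parameters, the high rsq reflects a real relationship rather than a model tuned to the noise.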

Really, last post. (Can’t help replying when you use those imploring tones.)

Re #370, jae
Perhaps a better example would be tossing a coin three times, and getting heads each time.
You could then propose a model of a coin toss as an event that always gives you a heads, and never a tails.
You get an rsq of 1, but the model is still probably wrong.

Re #372
But if you replicated that experiment a thousand times and found 2/3 heads, you might start to suspect a loaded coin. In fact, you can do a statistical test to show it’s very likely the coin is loaded. The test is valid because you’ve got so many independent observations that your estimate of p(heads) is very precise. Your model is not overfit because you’ve got far more coin tosses (1000), and therefore degrees of freedom, than you do parameters (1) in the model.
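The loaded-coin test just described can be sketched with the standard normal approximation to the binomial (the numbers below are illustrative):

```python
import math

# 1000 tosses, 667 heads: is the coin fair (p = 0.5)?
n, heads = 1000, 667
p0 = 0.5
p_hat = heads / n

se = math.sqrt(p0 * (1 - p0) / n)     # standard error under the null
z = (p_hat - p0) / se                 # about 10.6: wildly improbable if fair
# Two-sided p-value from the normal tail.
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"z = {z:.1f}, p = {p_value:.2e}")
```

The huge z-score is exactly why 1000 independent tosses settle the question while 3 tosses settle nothing.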

When you have more parameters in your model than cycles in your time-series, you’re on very, very shaky ground. Especially given the observations are non-independent on multiple time-scales. You run out of degrees of freedom.

The temperature-viscosity analysis, by the way, also has non-independent time-series observations. The other way to build that curve is to heat up a dozen different vessels and measure the viscosity in each. Now you have independent measures that would be free from lag effects, sampling artifacts (e.g. where the thermometer was placed in the liquid) and so on. So there are many ways in which your analogy is a stretch.
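On the non-independence point: the textbook first-order adjustment (not something from this thread, just the standard AR(1) correction) shrinks the effective sample size to N(1 - r)/(1 + r), where r is the lag-1 autocorrelation. A quick simulated illustration:

```python
import numpy as np

# Sketch: how lag-1 autocorrelation shrinks the effective sample size,
# using the standard AR(1) adjustment N_eff = N * (1 - r) / (1 + r).
rng = np.random.default_rng(1)

# Simulate a strongly autocorrelated AR(1) series of length 500.
n, phi = 500, 0.8
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Lag-1 sample autocorrelation.
xc = x - x.mean()
r1 = np.sum(xc[1:] * xc[:-1]) / np.sum(xc ** 2)

n_eff = n * (1 - r1) / (1 + r1)
print(f"lag-1 r = {r1:.2f}, N = {n}, effective N = {n_eff:.0f}")
```

With r around 0.8, five hundred observations carry roughly the information of only a few dozen independent ones, which is why "you run out of degrees of freedom" so quickly with autocorrelated series.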

That is a tough one to answer. In theory, if you have a (largely) one-to-one relationship (temperature->proxy) that is proven to be causal, where the two variables span the same range of variability in the calibration as in the reconstruction period, then the uniformitarian principle holds, and it should be possible to overcome problems of autocorrelation, non-stationarity, complex noise structure, etc. If that could be done, then you would not have an overfitting problem: you would have many more independent observations than model parameters & regulatory processes.

That is the hope.

But I think it is good to have audits because there appear to be strong pressures (social, institutional, cultural, budgetary) to cut corners in the science.

377: Let me try. It is a statistical sin to fit a curve to data post hoc, without an a priori hypothesis or a sound theoretical basis, especially when numerous variables can affect the supposed relationship. Did I get it? Do I get an A?

Mmm, close. Rather, it is a sin not to recognise that any great mess of data, which is known to contain a significant random fudge factor, can be “explained” by any number of sufficiently complicated models. This sin can be expiated by constraining our freedom to make any model we want. Such constraints could be linkage back to known physical processes, or statistical tests.

The reason this doesn’t really apply to your viscosity experiments above is that the only random element there is the accuracy of your measurement apparatus for all the relevant input variables. I assume this limit of accuracy is quite small relative to the size of the effect you are measuring. If so, your experiments are not really statistical in nature – you can be confident that if you repeat the experiment tomorrow, you will get the same result.
In addition, I assume that you will measure the viscosity at 15 degrees C, then – holding all else the same – again at 16C, 17C, and so on. You end up with a nice little hypercube of data, where the axes are the input variables and the data points are the observed viscosity.
This makes it very easy to spot where you have a nice linear relationship (at least within your experimental range), and where things are more complicated. In the extreme, for example, you might notice that viscosity goes to zero once you go above 25C. When you check your experiment, you notice that the test liquid has just evaporated at 24.5C. You can then quite reasonably exclude these data on the grounds that you are only interested in viscosity in the liquid phase.

Returning to climatological data: we can measure the width of tree rings of a 500-year old tree with reasonable accuracy. We have no hope whatever of measuring the rings in the tree that germinated at the same time twenty yards away, but which was chopped down 100 years ago for firewood. (Or its neighbour, which didn’t survive the drought of 1723.) Even worse, we have no chance of measuring all the input data that we know must have contributed to the width of any particular ring. We don’t even know what all the data are (how significant is genetic variation within a particular species?). And, of course, we can’t wait hundreds of years to do proper controlled experiments.
Accordingly, our hypercube of data is very sparsely populated, and we don’t even know what all the axes are. The climatological data available to us is only a very small subset of the total climatological data that we know exists, but which we will never be able to measure.
All these unknowns are lumped together in a very large random fudge factor. Our only means for getting a handle on this data is to apply statistical tests.

To get back to “overfitting”: there are lots of possible models that can “explain” our messy data set. If we are only looking at one statistical measure (say, rsq) to tell us if our model is correct, then we run the risk of just selecting one among many spurious models. We can tune this model to improve our rsq to perfection – but it doesn’t necessarily mean that it is right.
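A toy illustration of that trap (simulated noise, purely for the sake of argument): a model with as many parameters as data points “explains” pure noise perfectly in-sample, then fails on new draws from the very same process.

```python
import numpy as np

# A 9th-degree polynomial fit to 10 points of pure noise has rsq = 1
# in-sample, but predicts new draws from the same process badly.
rng = np.random.default_rng(42)

x_fit = np.linspace(0.0, 1.0, 10)
y_fit = rng.normal(size=10)                 # pure noise: nothing to explain

coeffs = np.polyfit(x_fit, y_fit, deg=9)    # 10 parameters for 10 points
r2_in = 1.0 - np.sum((y_fit - np.polyval(coeffs, x_fit)) ** 2) \
            / np.sum((y_fit - y_fit.mean()) ** 2)

# New data from the identical process: the "perfect" model falls apart.
x_new = np.linspace(0.05, 0.95, 10)
y_new = rng.normal(size=10)
r2_out = 1.0 - np.sum((y_new - np.polyval(coeffs, x_new)) ** 2) \
             / np.sum((y_new - y_new.mean()) ** 2)

print(f"in-sample rsq = {r2_in:.3f}, out-of-sample rsq = {r2_out:.1f}")
```

An in-sample rsq of 1.0 here is the statistical equivalent of the perfectly tailored suit: it fits the data it was cut for and nobody else.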

Analogy time, with all the usual caveats.
All the climatological data available to us is a small subset of all the climatological data that could exist. Similarly, I am a small subset of the group of adult human beings.
I could go to the best tailor in the world, and get him to make me a really good suit. But it is only good for me, it is not good for all adult human beings. If you are six inches shorter or taller than me, then it will look ridiculous on you.
I can go back to the tailor, and have another fitting session. He can pull the suit in here, and let it out there, so that it fits me absolutely perfectly. But it is still no good for you, or the vast majority of humanity who are not the same size and shape as me.

As a different analogy, think of a statistical measure like rsq as being like a shadow cast by the data. If you can only see the one shadow, it gives you a good idea of what cast it, but you get a far better idea if you can see several shadows from several different angles.

I think Bender told me that overfitting was a gamble. Being sort of a gambling man, I might gamble on this. Is this statistical sin sufficiently egregious to send me to Hell? I guess I don’t care, since if it is, I will finally get to meet Doktor Mann and the boys.

All the climatological data available to us is a small subset of all the climatological data that could exist. Similarly, I am a small subset of the group of adult human beings.
I could go to the best tailor in the world, and get him to make me a really good suit. But it is only good for me, it is not good for all adult human beings. If you are six inches shorter or taller than me, then it will look ridiculous on you.
I can go back to the tailor, and have another fitting session. He can pull the suit in here, and let it out there, so that it fits me absolutely perfectly. But it is still no good for you, or the vast majority of humanity who are not the same size and shape as me.

As a different analogy, think of a statistical measure like rsq as being like a shadow cast by the data. If you can only see the one shadow, it gives you a good idea of what cast it, but you get a far better idea if you can see several shadows from several different angles.

So, you’re saying we can’t tell if the MWP was or was not warmer than now – presumably… No, you’re probably not, since people here, despite what you say, KNOW the MWP was warmer than now (yup, given the above, search me how they do).

But, it’s an interesting comment. What do you tell us about humanity? Well, clearly you will (unless you’re exceptional or unlucky) have two legs, be a biped, have a large brain, binocular vision, and probably be at least 1.5m tall, but probably not 2.5m tall. So, in fact, just looking at you, one six-billionth of humanity, tells us a lot about humanity. But, of course, we know what humanity is. Suppose a future creature had just you to draw conclusions about humanity: how far wrong would it be? Well, it would know a damn sight more about us by examining you than nothing. And suppose they found another human? Well, they’d know more. Bit like proxies, eh…

Yes Peter. But Mann et al. aren’t trying to say that climate has two legs and binocular vision.

They are trying to say that climate has two legs, ten toes, stands at precisely 6 ft 1 inch, has sandy brown hair, blue eyes, a penchant for olives, likes to holiday in the South of France but more often ends up in Hungary, is lactose intolerant, and has the phone number 555-267-8953.

It’s not what they are saying, it’s the precision with which they are saying it, and more importantly the conclusions that they draw from it.

His point is a very good one and you could learn something from it if you chose. But since you’re not interested in making correct inferences about stochastic dynamic systems, and are only interested in advancing your narrow political agenda, your choices are easy to predict.

Re #386, yup, this is where we part, I think. The thing is, some people here say the MWP WAS warmer than now and others say what fF is saying (we can’t know) – it can’t be both! I think the evidence is that now is warmer than the MWP (or soon will be) – not, as you seek to imply, that we/I know precisely, but that we know to a confidence level. That’s my view of the evidence. Am I sure the MWP was cooler than now? No. Do I think it likely it was? Yes.

bender, would you care to address what I wrote rather than make silly political jibes? No, probably not. You mentioned politics. I haven’t mentioned politics nor was I thinking about it. I reply to posts I want to, because I want to say something about them. What’s wrong with that? You want it stopped, ask Steve (or better John, he’s more censorious). Imo, if science is about anything it’s about asking questions!

Am I sure the MWP was cooler than now? No. Do I think it likely it was? Yes

What you think is irrelevant. It’s the facts and uncertainties, e.g. as represented in your first statement, that matter to the folks high up that make the big decisions. And that is what science is about: getting answers to questions.

The AGW campaign of disinformation on the uncertainty issue has been effective so far. It will be interesting to see how far that wagon can continue to roll before its wheels come off.

Re #389, because, sid, I accept this
and the (sorry) scientific consensus view best seen in the IPCC TAR – of course soon to be updated.

bender “The AGW campaign of disinformation…” sounds like a conspiracy… Yeah, well, if that’s what you think then you must think anyone who doesn’t think like you is in on it. Again, I just happen to think the AGW evidence is sound – sorry, just my opinion.

Re #393, thanks. That’ll be the fact as judged by you, presumably? Worthless imo (hey, two can play at that game! Only I’ll be deleted, I expect…). That’ll be the ‘facts’ that can’t be converted into a recon? Yeah right.

Re #392:
You accept that? Those curves haven’t been audited, and they have no confidence bounds on them! If you accept that, you’ll accept almost anything. I thought you said you were interested in science?

Peter, the difference between us is that you are highly selective about the facts you choose to consider, whereas I look at all of them. One of the biggest mistakes you can make in selecting facts is to ignore the measurable fact of uncertainty. Uncertainty in the proxies and uncertainty in the GCMs.

And THAT, Peter, is what Steve M has been looking at, since no one else has. And that is what the discussion here is about. If you’re going to accept it without any examination, that’s fine. You can run along; there is nothing for you to see here.

Steve M though has brought up some points, and convinced others of their validity. But none of this concerns you, as you have already accepted the data in the Spaghetti graph. So you are essentially butting into a conversation, where you don’t belong.

So run along now.

Should you want to discuss some of the assumptions that went into building those graphs, and the repercussions if those assumptions are incorrect, then join in the discussion and back up your contributions. But if you don’t want to examine them, I don’t quite see your reason for being here other than obstructionism.

No. He did not dismiss all of those reconstructions. He said they haven’t been examined, and they have little in the way of error bars. As such they tell us little. Once they are examined and their assumptions codified (or not), there is a lot of work that needs to go into those before they become valuable information.

Again: irrelevant to you. You accept them as is, so there is nothing for you to discuss. Again, I don’t see your place in the discussions here. You can head over to RealClimate and cheer for the Hockey Team with the rest of them.

No, of course I don’t mistrust every climatologist. (In fact I don’t mistrust any of them.) What has me concerned is that they are all using, more-or-less, a single method. We all know now about the non-independence among the multi-proxy studies, which rely on a single approach, now proven to be statistically invalid. And look at the inheritance pattern among GCMs – it’s the same method in each case, just replicated regionally. The uncertainties in both approaches are overwhelming.

No Peter, you like to twist words, don’t you? It seems to be your forte. I said those graphs are being examined here. If you accept them as is, there is nothing for you to examine, hence I don’t see why you’re here. You rarely, if ever, contribute any meaningful information to the discussion here. So it seems you’re nothing but an obstructionist.

You can be interested in what people say, for sure. But that’s not what you do. You come into threads and berate people, including Steve M. You continually fail to see the point, whatever the point may be, yet make comments that are irrelevant, as in your 385 here.

There are plenty of statistical posts here that I “listen to” (read) but can in no way contribute to. So do you know what I do, Peter? I say nothing.

You might want to try it.

Of course you’re welcome to say whatever, but try to make a contribution rather than just trying to be an annoyance.

Peter H, if you don’t think that analysis of the validity of the proxies and the statistical methods used to analyze them “matters” and the REAL issue is something else, then that’s fine. But that’s what I’m trying to discuss here.

If you want to discuss your beliefs about temperature projection in the next century, there are many venues where you can discuss this. It’s an important issue; you’ve made your point maybe 400 times; it’s got nothing to do with any proxy or statistical issue here. Unless you’ve got something new to say, why don’t you pause and think about whether you need to say it at this venue for the 401st time?

Re #381, jae
Apology 1 : I haven’t actually read the Loehle papers. I’ll get around to it some time, honest.

Apology 2 : I need to improve that phrase you quote.
The thing is, the answer to your question is no, because climate is so massively non-linear. Even if you assume that the solar cycles are the only input variable that is changing, you can’t be sure of a continuous response – you might get a sudden step change, like the viscosity test liquid evaporating.

Regarding gambling – anyone is welcome to gamble with their own money. But the warmers and their political masters want to gamble with my money via the tax system. This is not acceptable.

Re #401,
Peter, honestly, I just don’t think you see where you’re being selective. At the risk of repeating myself from posts in other threads… you’re looking at these curves without considering the confidence envelopes that ought to be around them. And I’m telling you: they’re huge. And you, my friend, are making the same error that policy people make when they look at these kinds of graphs and think that they are accurate. They are very, very imprecise approximations.

Now what do you call it when these imprecisions are left out of the glossy pamphlets used to lobby policy-makers? Perhaps “campaign of misinformation” is a bit strong. But I do believe there is a deliberate attempt to remove uncertainty for the purposes of removing doubt for the purpose of creating policy momentum. Call it what you like; it’s there, Peter.

“So, to be welcome here one has to think about climate in one certain way?”

Please, so open and aware, and so not, at the same time? There’s been a whole topic dedicated to Lee’s questions and his/her comments made directly to Steve M on his opinion piece and the hearings. This topic is part of those for “other evidence”, ongoing.
Am I the only one to wonder where Lee is now (hopefully well and just at a loss for words), or do you guys all like to take shifts? LOL

“But I do believe there is a deliberate attempt to remove uncertainty for the purposes of removing doubt for the purpose of creating policy momentum.”

I believe that too, that’s how we got into reading here, because it is affecting our children in school and my husband at work!

Re #407, Steve, why do I say something for ‘the 400th time’? Because someone else says something I think wrong for the 400th time… Again, if you just want to hear one kind of comment, censor me, or perhaps ALL comments not about proxies or statistics? Whatever; I feel a jumped-up excuse for a ban coming…

Re #409, and neither do you? I know there are confidence limits (indeed I’ve just been skimming Moberg 05 and they’re there plain to see) and that paleo climatology is an imprecise science. I just don’t accept it can’t say what it does with the confidence (or lack of if you like) that it does. I like recons and I’d like to see any new recon – especially one by Steve. Either we can say the MWP was warmer than now (as many here STATE), and show it graphically, or we (you) can’t.

Peter, other people who wander into other topics – TCO, rocks, bender – at least talk about the proxies sometimes or perhaps even most of the time. Could I invite you to try to do so occasionally or at least once? Who knows, you might find it interesting.

408. fFreddy, please read them; they are very easy and quick. I’m not assuming that the ONLY variable is solar cycles, just that climate changes probably follow relatively smooth curvilinear patterns over the long haul. There may be sudden step changes/noise (volcanoes, etc.), but it seems to me that the Milankovitch cycles, earth wobbles, etc. would tend to even out these sudden step changes over the long haul.

Re #411
Well, I’m glad you have such an open mind, Peter. But audits and innovations take time. Ask the gatekeeping reconstructionists and they’ll tell you the exact same thing. Oh, and for the 401st time, Peter, Moberg et al. (2005) has not passed audit yet. Be patient. And be prepared. You might not like what you see.

Re #412
Sorry, Steve. The diversity of material on proxies is interesting, and I read it all. But I am only really qualified to comment on tree ring responses and time-series analysis. Will try harder to not be baited off-topic by trolls.

some people here say the MWP WAS warmer than now and other what Ff is saying (we can’t know)

Not what I said, Peter. I find it very unlikely that tree rings will ever give us reliable information about the MWP, which is why I am bored of you telling Steve to do his own reconstruction. But, in their absence, I will fall back on Lamb and his anecdotal evidence, which hasn’t gone away.

Re #414, bender
Peter H is a funny one. He’s not really political, and he freely admits to not understanding the issues, which is why it’s not really worth trying to engage him on them (notwithstanding my post above …).
I’m not sure why he keeps coming here. I think he just likes to splutter.

Hey guys. I reckon you should all lay off Peter, Lee, Dano, Steve Bloom et al. They actually play a valuable role here in making such a mess of defending the AGW position, and giving the sceptics so many opportunities to explain in detail why the AGW position is not yet demonstrated.

If Peter et al didn’t exist, I think that you would have to invent them. They demonstrate the issues involved in “belief” (in the sense of us not really knowing from our own experience, but trusting what someone has told us). Bit of a problem when that someone is Michael Mann and the Hockey Team, who seem to have lost a bit of cred lately!

I think that the average lurker here (and at RC for that matter) can see which way is up.

Like bender, I’m not qualified to comment on many of the discussions here. My field is electrical engineering (signal processing) with some work in the astrophysics field. However, since many of the statistical processes are the same, the discussions on the errors in MBH98 have been very eye-opening.

I know this is a bit OT for this thread, but has anyone looked at any of the solar forcing work? The Scafetta & West paper on solar forcing was very interesting, as was Dr. Benestad’s attempt to refute it over on RC. A number of other papers on this topic are likewise starting to surface on Cornell’s arxiv.org.

Again, I apologize for the OT post. Steve, if you feel this was inappropriate, please delete.