Whitfield SubCommittee: Witnesses to be questioned

Dr Mann was not available to come today but another invitation has been extended for Dr Mann to appear "next week"


This entry was written by John A, posted on Jul 19, 2006 at 8:16 AM, filed under General. Bookmark the permalink. Follow any comments here with the RSS feed for this post.
Both comments and trackbacks are currently closed.

Re: #4
Looks like she’s a Democrat from Illinois. The subcommittee chairman (Republican) sits in the middle, with the other Republicans to his right (our left). The ranking (senior) Democrat sits on the other side of the chairman, with the other Democrats to his left (our right). The Republicans and Democrats alternate in speaking.

The hearing after the NAS report was only an hour, included a question and answer session with the press, and did not include speeches from politicians to get their prepared statements on the record. So far, this has gone on for over an hour without any comments from scientists.

#6 “Dr Mann has hired a lawyer to represent his interests before the Committee – that’s not a good sign.”

I don’t see it that way, certainly not given all that’s transpired over time!

Mann even spread the rumor that this report wasn’t peer reviewed.
Mann only discusses his science with people who totally agree with him: in print, on blogs, in MSM interviews, all of which are edited.

What scientist wouldn’t want to talk to anyone about their work or even defend it in detail? I’ve never ever met one like that.
[unless they weren’t confident about it or finished with it, and they admit this without any pause]

It’s a sign to me that the truth, if it were laid out on the table, can’t speak for itself [he must know this].

I’d not wish this on any person. “Sucks to be you” moment to be sure.

I’m going to remember this. Even if it does nothing but remind these people that they know the truth as well as we do, it’s still a good thing.

I don’t get it. If this is about the science of paleoclimatology, and these statisticians are just parroting the CANADIANS over non-valid issues, and these industry-bought shills are wrong about the science since they have, how did he put it, “no apparent background at all in the relevant areas”, why then does MM not send another “qualified” scientist to fill in for him and help clarify, instead of “gasp” a lawyer who has “no apparent background at all in the relevant areas”???

This hearing is confirming my suspicion that this debate will “move on” from the Hockey Stick to chapter 10 in the NAS report. Lindzen seems suspicious of the confidence levels of the data in the forcing models.

Very interesting that the Demos are not trying to validate Mann’s crap. They are trying to establish other evidence for AGW. Hope someone points out how equivocal the science is in these other areas. We simply do not know whether the current warming is a natural phenomenon or not.

Thanks # 26 and # 28. No-one ever gets my sarcasm. And I know why – it’s because you people obviously have “no apparent background at all in the relevant areas” concerning the science of sarcasm!!!! :-)

I swear I am going to use this line next time someone questions my authority on a subject of conversation where my grasp is obviously shaky!

If you’re interested Henry, I can confirm that the recent peer-reviewed study of “whose country has the greatest soccer team” also did not depend on the Mann study.

The point being Henry, is that just because a study concurs with the Mann study as far as its conclusions go DOES NOT MAKE THAT STUDY VALID.

The fallacy is that if a study is peer reviewed then it must be true. Just because it comes from scientists does not make it true – it might make it likely to be true, but that comes when the studies are tested and replicated. Just because it’s published and just because it has a PhD on it does not make it valid to base public policy on.

Just because the scientific academies of the G8 agree that global warming is a significant problem does not mean that it is.

If I understand the Storch defense of AGW correctly, his thrust is simply that the rate of change of global warming has accelerated in a fashion that is anomalous relative to any previous temperature history. This of course begs the questions:
a) What previous proxy history gives the level of sampling such that Storch can claim this rate of change is an anomaly? None, correct? His basis of reliable information with enough data resolution relies exclusively on an instrumental period of 30 years(?).
b) Given that proxy data is suspect at best and will not yield the rate-of-change information or precision required to see high-frequency (30-year window) temperature anomalies, is it fair to say that this 30-year window cannot encompass every conceivable natural occurrence that may impact GW? For example, are solar events or axis tilt considered in such a snapshot look?

I regret my admittedly limited math and stat skills but have been lurking and following these latest developments as closely as my 20-year-old undergraduate engineering degree will allow.

A characteristic of post-normal science is that the boundaries between science and value-driven agendas get blurred; that representatives of NGOs are considered to know better about the functioning and dynamics of systems than scientists; that parliamentarian committees delve into the technicalities of science; that amateurs engage in the technical debate; and that some scientists try to force “solutions” upon policymakers and the public. In such a situation it becomes entirely possible that individual scientists emphasize those insights which are assumed to influence certain policy decisions more forcefully, while downplaying others. Typical for such a post-normal situation is the flooding of the media with books and movies which dramatize the issue. Recent examples include: The Day After Tomorrow, State of Fear, Satanic Gases, The Revenge of Gaia, and An Inconvenient Truth.

Von Storch should not forget his interview in the German “DER SPIEGEL”, No. 41, October 2004:

Here the whole Spiegel interview as follows:

——————————————————
Der Spiegel No 41-2004 page 158, October 4, 2004

Climate: The graph is nonsense

The German climate researcher Hans von Storch comments on the dispute
between scientists concerning the temperature curve of the last thousand
years and the greenhouse effect.

Spiegel :

You claim that the reconstruction of past temperatures by the US researcher
Michael Mann is wrong. What gives you this idea?

Storch:

The Mann graph indicates that it was never warmer during the last ten
thousand years than it is today. In a near perfect slope the curve declines
from the Middle Ages up to 1800, only to shoot up sharply with the beginning
of fossil burning. Mann’s calculations rest, inter alia, on analyses of tree
rings and corals. We were able to show in a publication in “Science” that
this graph contains assumptions that are not permissible. Methodologically
it is wrong: rubbish.

Spiegel:

How did climate change instead?

Storch:

According to our computer model, temperature fluctuations were significantly
larger and took place faster. The temperatures 900 years ago were also once
approximately as warm as today. On the other hand, between 1400 and 1800 we
have essentially lower readings than Mann.

Spiegel :

Are you therefore claiming that the greenhouse effect does not exist?

Storch:

Definitely not. Our data show a distinct warming trend during the last 150
years. Yet it remains important for science to point out the erroneous
nature of the Mann curve. In recent years it has been elevated to the status
of truth by the UN-appointed science body, the Intergovernmental Panel on
Climate Change (IPCC). This handicapped all that research which strives to
make a realistic distinction between human influences on climate and natural
variability.

Spiegel:

New curves have been around for some time. Why were Mann’s critics unable to
get a hearing?

Storch:

His influence in the community of climate researchers is great. And Mann
rejects any reproach most forcefully. His defensiveness is understandable.
Nobody likes to see his own child die. But we must respect our credibility
as research scientists. Otherwise we play into the hands of those sceptics
of global climate change who imagine a conspiracy between science and
politics.

According to our computer model, temperature fluctuations were significantly
larger and took place faster. The temperatures 900 years ago were also once
approximately as warm as today. On the other hand, between 1400 and 1800 we
have essentially lower readings than Mann.

Is this indication that historical rates of change might indeed approach that of the instrumental time period?

“Are you therefore claiming that the greenhouse effect does not exist?

Storch:

Definitely not. Our data show a distinct warming trend during the last 150
years…”

He really answered that one wrong. The greenhouse effect is separate from AGW. Not accepting a certain proxy reconstruction does not disprove the greenhouse effect that gives us our wonderful climate, and that has kept the Earth many degrees warmer than it would be without it for many millennia, long before man came about.

I believe that illustrates one of the basic flaws in the AGW discussion. Specific terms are interchanged with others, blurring their meanings and muddling debate.

Dr. Von Storch may have assumed the questioner was referring to Anthropogenic Global Warming when he asked about the Greenhouse effect. He should have answered the question as asked or restated the question before answering.

#20
Paleoclimatologists rely totally on stats to study variations in past global climate. Without statistical analysis, paleoclimatology would lack any kind of precision. MBH98 is nothing more than statistical analysis of global surface temperatures (via temp proxies) over the last 1000 years. MBH98 differs from previous climate assessments in that 2 recognized major global climate events were wiped off the books (MWP, LIA), and a new event (the famous 20th Century temp spike) appeared out of nowhere.

Yes, it does seem strange that 8 years ago the entire world of Climate Research accepted MBH98 without question, and that it took 2 Canadians who were not climate specialists to audit Mann’s now infamous study. The idea of Peer Review is now also put to scrutiny. I don’t think anyone will now accept the phrase “Peer Reviewed Study” with as much confidence as in the past. Mann’s defensiveness is also strange. He is obviously a leader in his field with nothing to be ashamed of as far as ability is concerned. Why he doesn’t release his study in its entirety only he knows. If he is confident of his methods, he should have nothing to fear. An audit of MBH9X is something he should welcome.

What is ironic is the fact that AGW induced by GHG isn’t being debated directly by McIntyre. As far as I know he may very well believe it is occurring. But the biggest voices of the AGW/GHG clan have staked everything on the Hockey Stick. Why? Sceptics on the other side of the debate were essentially silenced as Heretics. Science isn’t supposed to work this way in a free society. If the IPCC had done the proper due diligence we wouldn’t be where we are today.

RE: #54 – such a thesis would rely on blindly accepting the surface record as an unbiased reflection of the “innate” air temperature, with no experimental error and no anthropogenic local dissipation and albedo modification impacts.

He is obviously a leader in his field with nothing to be ashamed of as far as ability is concerned.

Given his methodology (including his extensive ad-hom defense), I think he has plenty to be ashamed of.

Why he doesn’t release his study in its entirety only he knows. If he is confident of his methods, he should have nothing to fear. An audit of MBH9X is something he should welcome.

Probably because he’s at least smart enough to know that he screwed up, intentionally or unintentionally. If it were intentional, then he obviously does not have the ethical character required of true scientists, and he’s merely pushing an agenda. If it is unintentional, then he is in CYA mode trying to figure out how to get out of this unscathed.

What is ironic is the fact that AGW induced by GHG isn’t being debated directly by McIntyre. As far as I know he may very well believe it is occurring.

I think, if you read enough of his subtle comments on the blog, taking them in context, he refers to the “A” in AGW as “plausible,” though he fully understands the true meaning of that word (read the link to the “Adjective Creep” article over at junkscience). But he’s certainly no alarmist and he certainly realizes that debunking the HS does not disprove “A.”

Steve M., if you feel my comments are out of line or incorrect, I won’t complain if you delete/modify them.

66 Fair enough, but then how does Storch make the claim that this 30-year-window rate of change in global temp is anomalous, if one cannot provably look at previous temperature histories in a reliable fashion?

I agree, Ogie, that making such a claim with such a short window of observation is specious at best. Perhaps he is merely extending the observed record with the rest of the anecdotal evidence of rapid change. But, still, referring to the change as “anomalous” requires an understanding of the true statistics, which requires some sort of certainty on past climate variability, which requires proxies, which are shown to be unreliable. The claim is untenable with our current understanding of past climate.

IMO, even if we could know with certainty what the climate was the past few millennia, determining “anomaly” is very difficult given the 4 billion year history of climate on earth.

#68: I strongly suspect recent and future claims (including Von Storch’s) will be based on forcing modeling (as illustrated in NAS report Chapter 10). I’m referring to the simulated temperature plots as predicted by various forcings (solar, CO2, aerosols, etc…). As reliance on the Hockey Stick appears to be diminishing, the argument for AGW seems to be focusing more on modeling of these forcing components. Lindzen has been critical of this analysis and refers to it as “curve fitting” based on data with extremely low levels of confidence. I suspect that future audits may look at this underlying data and the confidence levels. I believe that it was Cuffey at the NAS panel question and answer session who made a statement that the confidence of the hockey stick was “neutral” to the argument for AGW. In fact, he stated (paraphrasing) that if the climate was more variable over the past 1000 years (no hockey stick), then this might mean that the effects of CO2 emissions might even be worse.

Followup to #71: In other words, expect to hear reports that the scientific basis for AGW only depends on thermometer data from the past 100-150 years (and forcing data: CO2, solar, aerosols, etc…) and will not depend on temperature reconstructions spanning several hundred years.

If Lindzen is correct, then an audit of the data used in the curve fitting would show low confidence levels. This would only involve the few forcings that they use (solar, CO2, aerosols, etc…) and would not require a discussion of near-surface thermal dissipation and land use modification impacts. It might include an audit of Hansen’s corrections for Urban Heat Island (UHI) effects.
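The “curve fitting” being described can be illustrated with a toy least-squares fit of a temperature series to a handful of forcing series. This is a minimal sketch with entirely synthetic data and hypothetical weights, not any real climate record:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(150)

# Synthetic forcing series (illustrative only, not real climate data)
co2 = 0.01 * years                                # slow monotonic ramp
solar = np.sin(2 * np.pi * years / 11.0)          # ~11-year solar cycle
aerosol = -np.exp(-((years - 80.0) / 10.0) ** 2)  # a volcanic-style cooling pulse

# "True" temperature: a weighted sum of the forcings plus observational noise
temp = 1.5 * co2 + 0.1 * solar + 0.5 * aerosol + rng.normal(0.0, 0.1, 150)

# Curve fitting: ordinary least squares for the forcing weights
A = np.column_stack([co2, solar, aerosol])
weights, residuals, rank, _ = np.linalg.lstsq(A, temp, rcond=None)
print(weights)  # recovers something near the true [1.5, 0.1, 0.5]
```

Lindzen’s criticism, as relayed above, is that when the forcing inputs themselves carry large uncertainties, a fit like this can match the instrumental record without establishing the weights with any real confidence.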

Re: 71 Again, forgive my ‘newbieness’, but isn’t that contradictory to providing a more substantial ‘proof’ of the A in AGW? The shorter the time period of review for anthro effects, the less likely the surety of the answer. It strikes me that the AGW crowd is using the argument “It (rate of temp rise) has to be anomalous (without necessarily provably demonstrating so), and if so it has to be tied to X forcing”, hence the curve-fit data comment you alluded to.

That’s not science, that’s empiricism. Heck, you might as well say GW is tied to the amount of cell phone conversations, as certainly those are on the increase as well.

Ogie, I would say that these are two completely separate lines of thinking. The forcing argument focuses on a shortened time span because CO2 forcing has only been significant in recent years. Having said that, I wish that someone at today’s hearing had defined the context of the hockey stick as something like:

The existence of a hockey stick would strongly suggest that the current state of climate change is beyond what would be expected from natural variation, and would be moderately strong circumstantial “proof” of AGW. The absence of a hockey stick certainly weakens, but does not destroy the argument for significant AGW.

Another possible conclusion of Wegman’s report is that other climate data (such as the forcing curve fitting) should be audited.

jesus, Barton (looks like him) is trying to make out that the satellite data were adjusted to match the models. That’s so amazingly wrong as to be almost unbelievable!

The satellite data was being wrongly processed; that error was found and rectified. Just that one question (within minutes of my starting to watch it all) shows me what Barton is up to – obfuscation.

The forcing argument focuses on a shortened time span because CO2 forcing has only been significant in recent years.

Perhaps true, but making a forcing argument in light of past CO2 concentrations will be difficult due to known ice age concentrations that were an order of magnitude higher than now. Certainly there were other factors, then, as now, which makes it that much more difficult to assess what is “normal” vs. what is “anomalous.”

Humm, some intelligent questions from the next chap (Washington, bit blurry?). Now, oh dear, they’re wheeling out Chris Landsea – but VS is quite good, my regard for him increases.

I’ll also say, as a UK observer, that good old US political openness is on view. Ho hum, now the chairman can’t remember Ross McKitrick’s name, someone nudges him (somewhat of a gaffe, that). Back to Steve, bit of history. Sounds honest, but it’s a quiet attack on Mann. VS is good again, disagrees with Steve about hockey sticks out of trendless data.

Just that one question (within minutes of my starting to watch it all) shows to me what Barton is up to – obfuscation.

And just that one question is enough to determine his obvious (to you) ulterior motive. By that logic, I can assume the same from “just that one statement” made by you.

Basically, you have no real substantive scientific argument so you have to lodge personal attacks as a defense of the holy grail that has been oh so independently shot down by real science (er, math). Good show.

Re:#100
Yes, that was telling. Inslee was clearly the strongest questioner for the warmer side. Sometimes a bit misinformed and misleading, but the Democrats would have had a pretty tough time w/o him. Maybe Waxman would have been stronger if he had stayed.

It did come to a rather flat end.
You would have thought the obvious next issue would be to get more details from Wegman and Steve about the status of the other multi-proxy studies.
Hmmm. So what’s next ?

Thanks for the tip, but Media Player Classic will play RealPlayer media only if RealPlayer is otherwise installed. There are freeware RealPlayer-to-MP3 converters, but none that convert in real time. There are freeware codec packs which purport to allow Windows Media Player to play RealPlayer media, but I have had trouble implementing the required codecs without spyware and virus alerts.

That’s a foolish argument given that Mann is on record saying he is not, either.

FWIW, PCA is actually a signal processing tool and last time I checked, none of these guys are engineers. This is a classic credentialism problem, btw. Sort of the opposite of an appeal to authority. From fallacyfiles.org:

If a question can be answered by observation or calculation, an argument from authority is not needed. Since arguments from authority are weaker than more direct evidence, go look or figure it out for yourself.

Steve’s done the work, and if they have a legitimate gripe, his errors should be easy to find.
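As a rough illustration of the short-centering dispute (the claim, echoed later in this thread, that the MBH-style PC step can pull hockey-stick shapes out of trendless data), here is a minimal sketch. The AR(1) red-noise proxies and the deliberately simplified PC1 are my own toy construction, not anyone’s actual methodology:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies = 100, 50

# AR(1) "red noise" pseudo-proxies containing no actual climate signal
X = np.zeros((n_years, n_proxies))
for t in range(1, n_years):
    X[t] = 0.7 * X[t - 1] + rng.normal(size=n_proxies)

def pc1(data, mean_rows):
    """Leading principal component after subtracting the mean of `mean_rows` only."""
    centered = data - data[mean_rows].mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return u[:, 0] * s[0]

full_pc = pc1(X, slice(None))        # conventional PCA: center over all 100 "years"
short_pc = pc1(X, slice(-20, None))  # short-centered: mean over the last 20 only
```

The mechanism critics point to is that short-centering weights toward series whose recent values depart from their long-run mean, which is how hockey-stick-shaped PC1s can emerge from trendless noise.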

Thanks for the tip but Media Player Classic will play RealPlayer media only if RealPlayer is otherwise installed

Don’t think this is true. I use it and don’t have RealPlayer installed. Website says you have to have QuickTime and/or RealPlayer installed. I have QuickTime installed – haven’t had any trouble playing QuickTime or RealPlayer files.

Media Player Classic is an audio/DVD player that looks like WMP 6.0, but it is much more lightweight. Also, this program does not need to be installed. It can play QuickTime and RealPlayer files (you must have QuickTime and/or RealPlayer installed to do so). Enjoy!

You may want to check and see if RealPlayer is hiding on your computer. It is a program that is very hard to eliminate completely.

The issue of Michael Mann (Piltdown/meltdown Mann) is now water under the bridge, I guess. Everybody starts with “even if Mann had never been born…” So without admitting anything, the world has moved on. What is the provenance behind the error bars on the stick graph (which didn’t appear in the WSJ)? If they are valid, do they really allow for some rational possibility of a Medieval Warm Period inside a 95 percent confidence band (wherever that comes from), or is that a lie? Is the Medieval Warm Period real and global (H. Goosse, hgs@astr.ucl.ac.be) or not? What’s the evidence that we are not just in a catch-up in an 800-year cycle? The hearing said the “signature” of the warming now somehow precludes this grand 800-year-cycle theory.
We need a heavy physics guy to address the unproved assertion that large linear increases in CO2 will cause unlimited AGW. Can the effect be “saturated” with enough CO2 so it won’t matter after that? This went unchallenged because the focus was on Mann; they spotted us the “Mann’s methodology is defective” point, and spent time in unchallenged assertions that the spike in CO2 will definitely cause runaway global warming. The huge uncertainty in feedback between water vapor, CO2, and clouds was ignored. We came prepared for somebody to defend Mann and we didn’t get that; we got something else, so when Mann comes before the committee, they will accept the defense then, unopposed. Somehow I felt that we got rolled today. Our pocket was picked. Wegman insisted on narrowly directed testimony, and the thrust of the examiner was: do any of your assertions about Mann contradict this consensus on AGW? And the answer was, of course, no. But witnesses on the other side were allowed to, and did, engage in wide speculation about CO2 levels and AGW that makes anything said about Mann obsolete. I see now the nature of this beast. Come to fight one icon, and they change targets.
Nobody denies the recent CO2 spike, but what does that mean for global warming, and what are some confidence estimates for what you say, based on something other than naked assertions? That’s the front now, I guess. That Prof. North was a very effective witness. He said that there was evidence that the warming in the MWP was a) not the same in all regions of the earth, and b) that the signature of the recent warming somehow proves that the recent warming is not due to whatever caused the MWP. He should be challenged about how he knows these things and what confidence level he has in them. Who was ready for that? I feel sort of bad about the testimony. At least we could have had somebody say that it is the sense of Congress that data should all be revealed, and Steve (in the very brief time he had) could have mentioned he was called, what was it, a data parasite, because he wanted the raw data from somebody. Could Barton not at least have given us that!

It consists primarily of the “standard” open source codecs (libavcodec, libmpeg2, etc.) which, being open source, never contain any spyware. As a bonus, libavcodec decoding is generally very fast. As a further bonus, this package gives you a VfW interface to several ENcoders, but I doubt anyone here cares.

Install this, and any DirectShow player, such as the crappy Windows Media Player or Media Player Classic, will have access to the decoders. In other words, you can watch most Real and QuickTime formats without having to use RealPlayer or the QuickTime player.

Re:122
Seems we are, once again, into unproven, and even unreasonable, territory WRT feedbacks. Even over at RC, Gavin admits that overall climate feedbacks needs must be negative (to give us short term variability and long term stability), yet we still have people suggesting that we can somehow get “runaway” greenhouse effects – aaaarg!

I was unable to take in the whole hearing, but judging from what I have seen and read so far, there now is a “consensus” that the Hokey Stick Team’s work is junk science. This is a good step forward. Now the proponents of AGW have to prove that the MWP was not at least as warm and that modern warming is anomalous because of its rapid rise. I fail to see how anyone can prove these points. There are scores of studies that, taken together, demonstrate that the MWP was a global phenomenon; whereas there are very few studies, now that the HS has been splintered, that demonstrate the opposite. Of course, the political/media/environmentalist snowball is still rolling, and that is scary. But maybe the snowball is now a little smaller in size, thanks to MM.

I installed it specially for the hearing. It works fine, I watched about 6 hours of the hearing in real time.

I don’t have RealPlayer on my system, and you don’t need to if you choose Media Player Classic and not the “Lite” version.

“And with Real Alternative (which Media Player Classic is bundled with), you can add RealMedia files to that list, negating the need for RealOne Player, which has to be enough reason to download it in itself.”

To my knowledge, the positive feedback for GW involves CO2 warming the earth, which causes water to vaporize, which further enhances the greenhouse effect, which warms the earth further, which vaporizes even more water, and so forth.

Venus has no oceans so I don’t see how there could be a positive feedback mechanism there.
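The loop described above (warming begets water vapor begets more warming) behaves like a geometric series. As a sketch, with a hypothetical dimensionless feedback gain, total warming converges to direct/(1 − gain) when |gain| < 1 and only “runs away” when gain ≥ 1, which is the distinction raised in the earlier comment about stable vs. runaway feedbacks:

```python
def equilibrium_warming(direct, gain, steps=1000):
    """Iterate dT <- direct + gain * dT.

    Converges to direct / (1 - gain) when |gain| < 1 (stable feedback);
    grows without bound when gain >= 1 (the "runaway" case).
    """
    dT = 0.0
    for _ in range(steps):
        dT = direct + gain * dT
    return dT

# A hypothetical 1 degree of direct warming with feedback gain 0.5
print(equilibrium_warming(1.0, 0.5))  # converges toward 2.0
```

The numbers here are invented for illustration; the point is only that positive feedback below unity gain amplifies warming without being “runaway.”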

jae, did you read the NAS report? The additional evidence is NOT tied to the rate of rise. There is substantial evidence for late 20th-century temperature/climate anomalies, and even more for very recent anomalies, some quantitative and some qualitative, that does not rely on the dendro-based reconstructions, and not on rate of change.

Mathematicians supposedly prefer to have as low an “Erdos number” as possible. (Erdos himself had number zero; someone who coauthored a paper with him has number one, and the number increases by an obvious recursion.) After today’s hearing, will climatologists brag about having as *high* a Mann number as possible?
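The recursion above is just shortest-path distance in a coauthorship graph, computable by breadth-first search. A minimal sketch follows; the graph below is entirely hypothetical (the names are real people from the thread, but the coauthorship links are invented for illustration):

```python
from collections import deque

# Toy coauthorship graph: each key lists that author's coauthors (links invented)
coauthors = {
    "Mann": ["Bradley", "Hughes"],
    "Bradley": ["Mann", "Jones"],
    "Hughes": ["Mann"],
    "Jones": ["Bradley", "Briffa"],
    "Briffa": ["Jones"],
    "Wegman": [],  # no path to Mann: undefined (infinite) Mann number
}

def mann_number(graph, target, root="Mann"):
    """Breadth-first search from `root`; graph distance is the 'Mann number'."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        author = queue.popleft()
        for coauthor in graph.get(author, []):
            if coauthor not in dist:
                dist[coauthor] = dist[author] + 1
                queue.append(coauthor)
    return dist.get(target)  # None if the author is not connected

print(mann_number(coauthors, "Briffa"))  # 3 in this toy graph
print(mann_number(coauthors, "Wegman"))  # None: unconnected
```

This is essentially the social-network analysis the Wegman Report performed on the paleoclimate literature, reduced to its simplest form.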

How do you know if something is anomalous if you don’t know what “normal” is? (Keep in mind that “normal” climate can’t be measured on short time-scales like decades) Without long-term (centuries) high-quality data on temperature, CO2, methane, water vapor, and a few other things, I don’t think we can know what “normal” is to a degree of accuracy that would let us determine if current climate trends are abnormal.

None of the other following studies has had a similar audit, and until such time as they do, in the light of the many flaws uncovered by Steve McIntyre and the Wegman Report, they can no longer be relied upon.

Global science requires global-scale collaboration and the energy & enthusiasm of someone who is willing to make all the necessary connections. Mann’s connectivity is indicative of precisely this. To compare Mann’s & Wegman’s connectivity numbers as was done in the hearings is apples & oranges. Statisticians don’t collaborate at a global scale because they don’t need to. They don’t do planetary science. Same thing with computer based GCMs – there’s no need to work with everybody when just a few will do.

The level of scientific competition within a domain is proportional to the degree of funding relative to its policy implications. Paleoclimatologists are spread thinly across the globe relative to what is required to answer the questions of the day, and statisticians are not. It’s the curse of planetary-scale “middle number systems”. You are scapegoating the personality, when it’s the research domain and role he has chosen that mostly determines his network connectivity.

The trend, by the way, is toward INCREASED connectivity, not decreased. And not because scientists like it that way, but because policy makers and granting agencies demand it. Scientists generally prefer to work alone so that they can maintain control of their pet programs. THAT is what makes Mann somewhat unique: he is willing to bridge where others will not. And it is no coincidence that it should be a young person caught in this role. The older generation (no offense) is somewhat impervious to the new message coming out of graduate schools that science has to bridge gaps to be socially and economically relevant.

mark, read the NAS report. Your list is seriously deficient. I’ve read Wegman, I’ve read the NAS report several times. NAS refers to quantitative and qualitative data supporting the recent-anomaly claim over millennial time scales. My previous posts, and again especially my first post back on Monday this week, reviewed what they said. I didn’t see the hearing – I’ve been in client meetings for 7 hours each of the last two days.

And I’m sorry, but problems with a founding paper, for problems which are not directly attributable in later papers using different methodologies, do NOT render all of it ‘discredited history.’ Nonetheless, I buy the NAS limitations on the quantitative claims from the dendro-based reconstructions, and I find the remaining evidence to be quite strong. And I don’t buy a blanket unsupported claim that an entirely different line of ice core evidence, from different teams in different hemispheres and locations, is ‘very dodgy.’ Especially if the claim requires a broad concerted conspiracy to actually cook the data, as you imply.

Oh yes, and usually when someone hires a lawyer, it’s to defend themselves against criminal charges, or some other impending legal jeopardy.

The fact that Mann couldn’t make the hearing, when von Storch flew from Germany, and SteveM from Canada, shows, in my opinion, that Mann was avoiding having to testify under oath. A reasonable inference is that he had no good answers to the questions (charges) being levelled against him.

a lawyer… me? NO!… but I have had to hire one… I think it was around 1999. My neighbor parked a dumpster on my property (on my flowers no less!!!). It hadn’t occurred to me when I hired her that *I* might be facing criminal charges! It just seemed prudent to help me navigate a tricky situation in a way that let me preserve my dignity and got on with my own business.

Hmmm, now that I think about it, … I guess my great-grandfather was a lawyer… but I think he died in the 1940’s… so that hardly counts as relevant (a lot of degrees of separation).

If I were licensed to practice law I’d tell him to stay home unless he received a subpoena. If we accept the premises that Mann believes he’s in the right but realizes: this field is fully politicized, and that ‘the other side’ runs the committee, and that testimony could potentially result in a perjury charge [or just a pain in the *ss investigation into his use of federal grant funds] – I see a big downside. Keep in mind when Congress ‘invites’ people to appear, they don’t pick up the tab for travel and hotel.

Where’s the upside? The House Committee is going to go with the unbiased Wegman report, not Mann’s self-serving defense of his own paper. Mann’s forum is amongst his peers: Nature, RealClimate, Al Gore, et al.

There were about 7 hours’ worth of testimony and questioning.
It’s not that Wegman said anything too different from his report. It’s just that a lot of other witnesses agreed with him, especially about the central charge that MBH98 could not prove the hottest temperatures of the millennium, and was deeply, fatally flawed.

I do sympathise with your situation, as I know you would like to comment from an informed position.

Let’s hope someone archived the RealMedia file and puts it on the web.

no… that never crossed my mind. Of course, I never planned to sue someone when I hired a lawyer either… nor did I…. I just wanted a nice firm letter written. Such matters were not within my area of expertise.

Mind-reading isn’t my area either. I have NO idea why Mann wasn’t there or why he hired a lawyer. I’d hazard a guess that few people know the complexity of thought that went into those decisions of his.

I will say that I don’t find conjectures of conspiracy either entertaining or informative…

you know… conjectures regarding someone’s motives are just not very productive. Every single individual has a complex life that drives their actions and sets their priorities. I personally can easily think of a dozen reasons why I might not head over to the Hill. Fortunately, we are governed by laws and not speculation. There’s nothing to deduce, infer or derive on someone’s absence or presence.

#152
How many people do you think watched that hearing ? I’d be very surprised if the number was as high as 500 [I’m one of them]. There aren’t too many people who are going to watch while 20,000 Americans are stuck in a war zone [Lebanon]. Mann can much more effectively defend himself in the New York Times through surrogates, or in person at the next Climates R Us conference.

He was represented at the Committee hearing, anyway. Did you hear the clueless Congresswoman talking about his switch to a ‘new methodology’? Who fed her that argument? Nobody at the hearing accused the absent Mann of serious impropriety, only of making math errors. Mann was criticized for not sharing data, but evidence was given that this appears to be widespread behavior. Mann’s claim that ‘we’ve moved on’ was introduced. [Wegman disputed it.] What more could Mann hope for? Mann saved himself travel expenses, hassle, and time while he got everything he could have hoped for from this committee. Mann can tell his buddies that he blew off those ‘right-wing, bible-thumping, political hacks from oil states.’

If I were under attack by a congressional committee, with people invited to testify who had published the kind of absurd attack piece that the social network analysis was, and whose terms of reference stopped right at the point of deciding whether the statistical issues actually mattered to the reconstruction, I would hire an attorney, too. Doing the intelligent thing is not evidence of having done something wrong.

re 160 Lee
This marks any number of occasions on which you have called the WSS report a
‘hit piece’. Assigning motive to the networking theory is not very becoming and smacks of ad hominem. I think it more likely (or at least more in tone with the rest of the WSS report) that Wegman advanced the networking theory as a buttressing argument for independent statistical review, more so than for cheap hit value against Mann, whom he likely has no apparent motive to dislike.

I was unable to take in the whole hearing, but judging from what I have seen and read so far, there now is a “consensus” that the Hokey Stick Team’s work is junk science.

Which is why AGW advocates want to move on as quickly as possible.

Which is also why intelligent and honest AGW advocates will be mad at Mann (granted, there is considerable uncertainty as to whether this is a significant fraction of AGW advocates … Lee is the only one I can think of off the top of my head). This will be thrown in their faces for years to come.

#162: but merely human. Frailties are common and few are perfect. I’m just thankful that my nearly decade-old weaknesses aren’t being paraded before “god and country.”

The real question has to be… what does it matter? I’m new to this debacle but hardly new to issues regarding climate. How come the interview with Jim Hansen in Technology Review (an MIT alum mag) isn’t receiving more attention? It’s informative.

#165 The graph you showed shows temperature variations of 5 degrees, while temperatures have only risen one degree since the industrial revolution, yet somehow these are anomalies.

And it says:
“Earth is now passing upward through the highest temperatures of the past 12,000 years, and the half a degree that is already in the pipeline will bring temperatures within half a degree of the high points they have reached only a few times in the past two million years. During a warm period about 120,000 years ago, for example, sea levels were probably five or six meters higher than they are today.”

I thought the NAS panel put to rest the idea that there was enough evidence to support 1998 being the warmest year in a millennium. This sounds much more like politics to me than science. Also, the whole feedback thing drives me crazy. What is the basis in physics for positive feedback? The guy even tries to use computer simulations, which are known to overestimate the warming. As for the ocean as the “smoking gun,” I’d like to put some numbers to that to see if it makes sense. Black body radiation is such that the heat given off by the earth is proportional to the temperature to the fourth power. People claim that satellites show that the earth absorbs more heat than it gives off. Of course this is true because it is warming, but if it is a significant difference, it suggests a temperature lag, like, say, the ocean. Okay, so fitting the temperature of the earth to
F = k*T^4, where F is the flux radiated from the earth, then adding on the difference between the flux emitted and the flux absorbed, we can calculate approximately how much warmer the earth would be if the oceans weren’t absorbing any heat, that is, when the oceans reach equilibrium.
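The flux-balance arithmetic above can be sketched numerically. This is a back-of-envelope linearization, and the numbers are my assumptions, not from the comment: Stefan-Boltzmann emission, an effective emission temperature near 255 K, and an illustrative top-of-atmosphere imbalance of 0.85 W/m².

```python
# Back-of-envelope check of the ocean heat-lag argument above.
# Assumptions (mine, not from the comment): Stefan-Boltzmann emission
# F = sigma * T^4, an effective emission temperature near 255 K, and an
# illustrative top-of-atmosphere flux imbalance of 0.85 W/m^2.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_warming(imbalance_w_m2, t_emit_k=255.0):
    """Warming needed for outgoing flux to match absorbed flux.

    Linearize F = sigma*T^4: dF/dT = 4*sigma*T^3, so
    dT ~= dF / (4*sigma*T^3).
    """
    return imbalance_w_m2 / (4.0 * SIGMA * t_emit_k ** 3)

dT = equilibrium_warming(0.85)
print(f"approx. warming still in the pipeline: {dT:.2f} K")  # about 0.23 K
```

Since the estimate is first-order in the flux difference, doubling the assumed imbalance roughly doubles the answer.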

Ah, but Miss J, if you had followed the debate you would know precisely why “it mattered”. Surely you are not trying, oh so carefully, to go along with the “we’ve moved on” argument?

Perhaps it is important for the same reason that Korean cloning research has been in a bit of ferment of late? I am sure you are perspicacious enough to imagine why replication of results is actually a valuable process. Try this link as well for a short but salutary tale about the replication of results: Then and Now. Of course, people wouldn’t do that now, would they?

If I were Mann (and I’m only another lobotomy away from that), then hiring a lawyer, and letting it be known there’s one there representing him, is a half-smart thing to do. It would make people watch what they say and limit the amount of damage to him. Thus, potentially, conjecture on Mann’s motives for pushing such demonstrably shoddy work went unchallenged.

The graph shows nothing more than that CO2 *follows* temperature changes, with a lag of some 600 years during the ice-age/interglacial transitions and of several thousand years at the onset of a new ice age. Thus temperature changes *cause* changes in CO2 level. That doesn’t say anything about the possibility that CO2 has an influence on temperature. Of course, as there is a huge overlap in nearly all cases of warming/CO2 rise and vice versa, climate modelers like Hansen can include a huge feedback of CO2 on temperature. But there is one interesting period at the onset of the last ice age (at the end of the Eemian), where temperature (and methane) levels were already near their minimum, and ice sheets near their maximum, before CO2 levels started to decline. The subsequent fall of ~40 ppmv CO2 didn’t cause a measurable drop in temperature. That points to a low influence of CO2 on temperature…
See here for the graphs, based on the Vostok data.
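To make the lag claim concrete, here is a minimal sketch of how such a lead/lag is usually quantified: slide one series against the other and take the lag that maximizes the cross-correlation. The two series below are synthetic stand-ins for the Vostok temperature and CO2 records, with a 6-sample lag built in; nothing here is the actual ice core data.

```python
# Sketch: estimate the lag by which CO2 trails temperature via
# cross-correlation. Both series are synthetic (a smooth cycle with a
# built-in 6-sample lag), NOT the actual Vostok data.
import math

n, true_lag = 400, 6
temp = [math.sin(2 * math.pi * t / 100) for t in range(n)]
co2 = [temp[max(t - true_lag, 0)] for t in range(n)]  # CO2 trails temp

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def best_lag(a, b, max_lag=20):
    """Lag (in samples) by which b trails a, via max cross-correlation."""
    return max(range(max_lag + 1), key=lambda k: corr(a[:len(a) - k], b[k:]))

print(best_lag(temp, co2))  # recovers the built-in lag: 6
```

With real, noisy proxy data the maximum is broader and the uncertainty on the lag correspondingly larger, but the procedure is the same.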

Further, Hansen was a little economical with the truth in another case: based on the rapid rise of ocean heat content in the period 1990–2000, he calculated that this was confirmed by models, and due to the increase in GHGs. But he didn’t mention the period 1980–1990, when there was a *decrease* in ocean heat content (see Fig. 1 in Levitus), despite increasing GHGs and leveled emissions of aerosols. In both cases, changes in cloud cover (with an order of magnitude larger effect than the increase in GHGs over the same period) may be the origin…

Keep in mind Miss J’s comments on another blog about Climate Audit July 17:

“my own experience over there was comical to me. I went to confirm that it was the site being discussed on this post and I learned that climateaudit.org was “a blog that is specifically about M&M’s criticisms of MBH’s papers on AGW”
okey-dokey… I won’t spend ANY more time there than I would spend time on a “knitting blog.”

…I will be at the hearing on Wednesday. This business with the “knitting” website has left me more than amused.”

Using a traditionally female activity [knitting], to describe a blog you think is “comical” ? Shame on you.

Well, at least I know how to pluralise it! Clearly, your particular brain-scraping event removed that skill along with a section that controls sense of humour. And given your completely puerile previous non-arguments I am amused by your accusation of it being the best argument I’ve got.

I’ve commonly made reference on other blogs to the Sewing Circle at the local nursing home, which is a dozen old women parallel processing a millennium of experience. Wait’ll you hear what they say about clouds.
========================================

#184-185 kim,
LOL that would have been very deep on her part then :)
I can’t wait to hear! ;)

I tried to point out gender referencing because the word “cheerleader” was used to discredit commentators all the time, and I got ripped by everyone. I can’t even pretend or be convincing [in type or elsewhere] that I am upset about those things. LOL!

There are boy cheerleaders, and men who knit. Knitting, like Tetris, does fit a feminine mind, just why, I don’t know.

I’m intrigued by a parallel illustrated by the phrase ‘Piltdown Mann’. Then, just as now, a large segment of legitimate scientists and intellectual and political leaders were taken in by the hoax. There is a good book in the parallel.

Miss J: Any reconstruction of temperatures that does not show a highly significant LIA and MWP is junk, since we have overwhelming evidence, historically and proxy-wise, that these temperature swings occurred. Perhaps the MWP was not as warm as it is now (and maybe it was warmer), but any reconstruction HAS to deal with the fact that it occurred.

Interesting, warmer than average in some places, colder in others. Very short time interval to really say anything about anomalies. I would like to see what it would look like if the time intervals were changed. Why did they choose 1960 to start with?

Interesting too how in Jan 06 through May 06, when So Cal was freezing, well below what “felt” average, I didn’t see any of these types of graphs. Do they have them for the winter periods as well?

“We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it?”

Did Jones really say this? If so, why is this not used to support the assertion that the “Team” have not been, and are still not being, honest and open about their research, which, last I checked, is somewhat NOT in compliance with scientific methodologies?

RC is using Rutherford to bolster the results of the flawed Schtick methodologies. Does it also suffer from process deficiencies?

What is Mann if not a statistician? is a question John A. wanted answered. I thought it was a cheap shot at first, but the comparison to the Piltdown hoax is becoming more and more apt, as time goes by.
================================

The wave of Arctic air from Siberia hit Russia on 16 January and has not relented since, with temperatures plummeting to minus 31 degrees Celsius (minus 24 Fahrenheit) last night. According to Moscow’s weather forecasting service, this is the lowest recorded temperature on this date since 1927.

The painfully cold temperatures have reportedly killed at least 31 people in European Russia since 16 January. Most of the casualties were among the homeless, despite an order by authorities to allow homeless people to sleep in train and metro stations.

Animals in zoos across Russia are being given shots, or in some cases buckets, of vodka to keep them warm.

But even in homes, heat is not always guaranteed. Thousands of people were temporarily left without heat in several Russian regions after central heating pipes burst.

Did Jones really say this? If so, why is this not used to support the assertion that the “Team” have not been, and are still not being, honest and open about their research, which, last I checked, is somewhat NOT in compliance with scientific methodologies?

McIntyre quoted this in his written presentation to the committee here

It might well be a cultural response, too. When the last Czar banned vodka sales in 1914 in order to make Russians capable of self-defense, it cut internal revenue by a quarter, hampering the effort to recruit and supply an army.
===================================

China and Japan experience massive amounts of snowfall as a result of these cold fronts, in excess of 10 feet in some cases, if I remember correctly.

It led to collapses of housing in China and Japan. I’m actually tasked to come up with a monitoring system that could be low enough cost to implement in homes to warn of these types of collapses (which we then can scale up for commercial structures).

I do recall similar cold issues in India where it was a particular concern as many people do not have the ability to get out of the cold.

Yeah, ET, I also linked to the same graphic in another thread responding to Peter’s panic about temperature anomalies. I should have noted that he did not post anything that showed the winter anomaly in Europe, which was upwards of 10C below normal from what I’ve read. Of course, he couldn’t, because that would force one to consider alternative hypotheses that don’t fit with pre-conceived notions.

Just another case of cherry-picking on temps. This winter was one of the coldest and longest in recent times, yet a couple of weeks after summer started we are already at “the warmest year ever.” Damn sun didn’t even come out here until about 2 weeks ago; I know the same is true of Engurland. And we’ve plenty of evidence for the long and severe cold spell across Eurasia.

RE: #140 – I have quite a bit of experience with these diagrams. Mann’s is a small, highly interlinked one: not a global collaboration signature but an exclusive, secretive, cabal signature. A truly global person (such as me! LOL!), on the other hand, has a diagram that looks more like the topology of the internet: a huge network with lots of nodes, less tightly coupled, etc.

I think that evaluation of the diagram needs to be done with some quantitative rigor (numbers describing the order and amount of connectivity) as well as in comparison to other networks of researchers. Anyone know of a good review article on networks of scientist co-authorship?

A few things that jump out though:
a. Lack of connections to mainstream statisticians
b. Lots of statistics going on in the work (Mann’s contribution being almost entirely statistical work)
c. Bizarre statements about statistics (“I am not a statistician”)
d. New, untested methods being used
e. Said methods not being validated on known problems or vetted by hard-core statisticians, in hard-core statistics journals.

Re #190: I think Miss J hasn’t been around here long enough to recall this herself, and I may not have all these details right, but I seem to remember that jae personally discovered the MWP temp records in an old Norse farmhouse that recently melted out from beneath a Greenland glacier. This was a most impressive proof to back up his claims about the MWP and LIA. So there.

Re #191, of course I accept it – btw, it’s only a monthly temperature anomaly map. My point is more about whether John can bring himself to accept the figures than anything else. Oh, and if the whole map was covered in huge red warm anomaly blobs and there was one small cold anomaly dot over the SE Pacific, you’d be banging on about that one dot, I suppose?

Re #194, Dane, the US isn’t just California (well, not last time I looked).

“Oh, and if the whole map was covered in huge red warm anomaly blobs and there was one small cold anomaly dot over the SE pacific you’d be banging on about that one dot I suppose?”

No, Peter, I’m dealing with one issue and one issue only. There is plenty of verbiage about rising sea levels; the only way melting ice can do this is landborne ice, of which Greenland is the primary repository in the Northern Hemisphere. I’m dealing with this issue, and this issue alone, for the moment. Based upon this piece of information that you have presented, what is your opinion on rising sea levels?

And to paraphrase you.

Oh, and if the whole map was covered in huge blue cold anomaly blobs and there was one small warm anomaly dot over the SE pacific you’d be banging on about that one dot I suppose?

The pressure of carbon two oxygen rises,
Thickering the rings of Bristlecone Pines.
We’ll lose trillions to listens of Kyoto whines,
To thinner the ringers of pine trees aginners.
=============================

Re #225, nope, it has snowed in the mountains and hills of southern Australia and NZ in its current winter ;) – it often does. But I think it has been cold, though not ‘extreme’, down under in places – the maps in the link show it.

Really, you had an issue with no rain in Engurland this spring/early summer?

Not only would it be a first, but from what I’ve seen you’ve had quite a bit of rain so far this year. Most of it came via us here at New and Improved England; we’ve had loads, and the last we saw of it, it was headed right for you.

Re #214 measuring linkage
Wegman himself will know how to quantify network linkage and compare between networks. The methods, as he says in his deposition, are from “Graph Theory,” and they are applied to many kinds of networked systems, not just social networks. These would include applications as diverse as computing networks and animal and plant community ecology. Using an appropriate null model, you could say with some statistical confidence how anomalously interconnected a network is.

But the real issue here is the *quality* of the linkages in the network. Submitting articles to a journal that you edit, for example, is somewhat frowned upon. Very much so in some fields of science. Do a random survey of scientists and see what they think of that practice. Compare the survey response between fields.

It would be real interesting to see an index on each published paper as to the “inbreeding coefficient” resulting from the social chain reaction of writing, internal review, external review, peer review, associate editor, editor. Everyone who handles the paper will have some relationship with the author. The inbreeding coefficient is the product of the degree of separation amongst all those who handled the paper. The higher the number, the more independent the oversight.
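The “inbreeding coefficient” proposed above is easy to prototype. A minimal sketch, under my own assumptions (a toy co-authorship graph with hypothetical names): compute pairwise degrees of separation by breadth-first search and multiply them, so a tight clique of handlers scores low and independent handlers score high.

```python
# Toy sketch of the proposed "inbreeding coefficient": product of the
# pairwise degrees of separation among everyone who handled a paper.
# Graphs and names below are hypothetical illustrations.
from itertools import combinations

def degrees_of_separation(graph, a, b):
    """Shortest-path length between two people, via breadth-first search."""
    if a == b:
        return 0
    frontier, seen, dist = [a], {a}, 0
    while frontier:
        dist += 1
        nxt = []
        for node in frontier:
            for nb in graph.get(node, ()):
                if nb == b:
                    return dist
                if nb not in seen:
                    seen.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return None  # unconnected

def inbreeding_coefficient(graph, handlers):
    """Product of pairwise separations; higher = more independent oversight."""
    product = 1
    for a, b in combinations(handlers, 2):
        d = degrees_of_separation(graph, a, b)
        if d in (0, None):
            continue  # skip self-pairs / disconnected pairs in this toy version
        product *= d
    return product

# Hypothetical co-authorship graphs: a tight clique vs. a looser chain.
clique = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
loose = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(inbreeding_coefficient(clique, ["A", "B", "C"]))  # every pair 1 apart: 1
print(inbreeding_coefficient(loose, ["A", "B", "D"]))   # 1 * 3 * 2 = 6
```

A product is only one possible aggregation; a mean or minimum separation would serve the same screening purpose.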

While Rep. Barton’s request specifically targeted the results of the so-called “hockey stick” study (a 2,000-year record of Northern Hemisphere temperature), it also demanded a significant amount of data irrelevant to that set of peer-reviewed studies.

I purposely didn’t read VS or SM’s testimony PDF files until now, after watching the reactions on the blog. Very generally, VS’s is more entertaining and well-written (his sociological observations are fascinating and spot-on). However, even if a little dry, SM takes the big step to truly begin breaching the wall of blasphemy: the MWP might have been warmer than today! VS suggests this as well, but SM provides some real evidence. That takes courage, but SM is battle-hardened already and may take the heat off other scientists like VS, who are paid by universities, to broach the subject.

Very important — SM never says there’s no “A” in GW. He states the opposite here on the blog — a warmer MWP than today does not discount “A”. VS repeats the same thing.

Re:129
The difference between Earth and all the other planets in our solar system is that earth has a biosphere and the others do not. It is the biosphere, IMO, that helps stabilise the climate. Of course, no other planet is in the same orbit, or has the same mass, or any of many other reasons why we can’t make direct comparisons.

Be that as it may, I think it’s important to realise that even the staunchest of AGW supporters believes long term climate has been stable, and that we experience short-term climate variability (on the scale of decades, not to mention day/night and summer/winter!). That implicitly recognises that there *must* be overall negative climate feedbacks. After all, if there were net positive feedbacks, there has been enough natural variability (from volcanoes and other natural forcings) that we would already have seen a runaway in one direction or the other (cold or heat). Of course, that doesn’t rule out the possibility that the “A” part may be more than the system can handle, but it seems unlikely to me that simple CO2 could do this – combined with land use change, destruction of species and habitat, and many other “nasties” that we humans place on the biosphere, it may spell “the end”. But I just don’t think that CO2 on its own is sufficient.

Re 235:
When a tree ring-temperature calibration is performed, why is ring width (or whatever) regressed on temperature as though ring width is not subject to sampling error? Would it not be more correct to bootstrap sample from the entire distribution of possible ring widths (which is fairly wide, especially in the parts of a chronology where replication is low), and repeatedly regress *those* on temperature? (Are dendroclimatologists ignoring the uncertainty in the mean chronology? I.e., are they assuming that the sample mean chronology EQUALS the expected population chronology?) If you did this (included error in the tree ring data), the confidence envelope on the temperature reconstruction would, I am pretty sure, balloon to the point where the highest possible MWP could easily be warmer than the present. Is it possible the dendroclimatologists haven’t thought of this?
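The bootstrap idea in the question can be sketched directly. Everything below is synthetic and mine (8 trees, a true slope of 0.2, made-up noise levels); it illustrates the statistics, not actual dendro practice: resample the trees with replacement, rebuild the mean chronology each time, and refit the calibration, so the interval reflects sampling error in the chronology itself.

```python
# Bootstrap sketch: resample trees with replacement, rebuild the mean
# chronology, refit the calibration regression, and report the spread.
# All data are synthetic; purely illustrative.
import random
random.seed(0)

n_years, n_trees = 50, 8
temps = [10 + 0.05 * yr + random.gauss(0, 0.3) for yr in range(n_years)]
# Each synthetic "tree" tracks temperature noisily (its ring widths).
trees = [[1.0 + 0.2 * (t - 10) + random.gauss(0, 0.4) for t in temps]
         for _ in range(n_trees)]

def fit_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

slopes = []
for _ in range(500):
    sample = [random.choice(trees) for _ in range(n_trees)]  # resample trees
    chron = [sum(col) / n_trees for col in zip(*sample)]     # mean chronology
    slopes.append(fit_slope(temps, chron))
slopes.sort()
lo, hi = slopes[12], slopes[487]  # rough 95% bootstrap interval
print(f"calibration-slope 95% interval: [{lo:.3f}, {hi:.3f}]")
```

Widening the per-tree noise or shrinking the number of trees fattens the interval, which is exactly the effect the question predicts for poorly replicated parts of a chronology.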

On the meaning of PCs derived from PCA. I’m not even sure how a PC got elevated to the level of a policy tool. PCs don’t mean anything until you “interpret” them. i.e. They are subjective. Scientists argue all the time about what a given set of PCs may or may not represent. PCA is a starting point for hypothesis generation, not an end-point for conclusion. Read any textbook on plant community ecology and you will see how mushy a method PCA is. It’s an arithmetical data reduction technique, not an inferential tool. Strictly speaking, it’s not even a statistical method because it does not involve inferences about populations based on error-prone samples.

Yeah, unless you know a-priori what all of the sources are, and have explicit knowledge of their waveform, you cannot determine which is which. That’s the ultimate failing of this methodology as applied here. IMO, the PC Mann, et. al. select as a temperature seems more indicative of CO2 fertilization than temperature.

Neil, the biosphere certainly sequesters carbon, and probably faster when it is warmer. I presume there is some sort of steadiness to the amount of carbon entering vulcanically, and the amount sequestered biologically would temper and buffer the temperature rise that otherwise might go with increasing carbon dioxide. The action of the biosphere then keeps the earth on the cusp of glaciation, and kept from plunging possibly into irreversible freezing, by the gradually increasing insolation as the sun becomes slowly hotter.
============================

Anyone have a good rationale or valid explanation for why statistical estimates can be made using PCA the way Mann did, where he takes data, does PCA, and retains some small number of PCs? (Something better than “people do it.”) How does he know that the data in the retained PCs is more relevant than in the non-retained ones? For doing pattern recognition or the like, fine, use the PCs, because that allows boiling down the data to extract the parts that are most useful. But for estimation and regression? I don’t see it. Seems like that is a different kettle of fish. Would you do that in market research or polling or something like that, or use the regular data?

231. Yeah, I am familiar in a popular way with graph theory. Even for a non-technical audience, to be relevant, the network needs to be quantified and compared. This is not a technical issue, but a “how does one evaluate its import” issue.

239. I was wondering about that. You ask it much more technically than I do, though. FYI, there are often several cores per tree and several trees in the reconstruction. So one has a pretty good idea of the error bar on the RW in a population sense, at least for that study.

#247 I don’t think you can do it the way Mann did, but I think principal component analysis could be useful for reducing a data set for climate reconstructions, provided you have a good understanding of what data is retained. I suggested in my post http://www.climateaudit.org/?p=756#comment-36922
how you might be able to pre-weight the proxies so that the principal components you get are useful in representing the various factors that affect climate.

Why can statistical estimates be made using PCA the way Mann did, where he takes data, does PCA, and retains some small number of PCs?

Strictly speaking, they’re not “statistical” estimates. You have to understand that all eigenanalysis (of which PCA is just one flavor) does is decompose a variance matrix into a different variance matrix, where a large number of non-independent variables (in this case temp proxies) are “re-configured,” if you will, to generate a different matrix where the eigenvectors are now statistically independent, or “orthogonal.” The first hope is that what you are effectively doing is finding common patterns in a set of non-independent proxies and re-writing them in the form of vectors where variation in each is attributable to a physically distinct process. If the processes are distinct, then the extracted variance components should be statistically independent and therefore interpretable as unconfounded effects (e.g., climate forcings in this case). The second hope, to address your question, is that the common variance in the large number of proxies can be attributed to a very small number of components. If it only takes 2–4 independent PCs to explain 90% of the original variation, well, that’s ecologically a heck of a lot easier to “interpret” than trying to explain the patterns in several dozen non-independent proxies governed by at least 2–4 processes each. Right?

How does he know that the data in the retained PCs is more relevant than in the non-retained ones?

He doesn’t. The decomposition gives you a set of eigenvectors, each of which has a weight (its eigenvalue, expressed as a share of the total variance) from 0 to 1. This weight represents its contribution to the variance in the original dataset. 0.7 would be considered a pretty heavy weight, and would be typical for PC1 for a lot of ecological data. A weight of 0.3 would be considered relatively small, but still noteworthy, as it accounts for 30% of the variation in the original data. If you graph these weights for each of the PCs, then you arbitrarily decide a cutoff where you don’t think the variance is worth speculating on. E.g., most biologists are pretty happy with explaining 50–70% of the data in a large multivariate dataset, and at the same time they typically can’t think of more than 2–4 good reasons why a given eigenvector (or PC) might vary in the way that it does. So they will typically spend a great deal of effort trying to “interpret” PC1, and progressively less effort “explaining” PCs 2, 3, 4, etc.
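The weight-and-cutoff procedure described above can be sketched with a synthetic example. The six “proxies,” the single shared signal, and the 90% cutoff are all my assumptions for illustration; none of this is the MBH dataset.

```python
# Sketch of eigenvalue weights and a retention cutoff. Six synthetic
# "proxies" share one common signal plus independent noise; the 90%
# threshold is arbitrary, as the comment above says.
import numpy as np

rng = np.random.default_rng(42)
n = 200
signal = rng.normal(size=n)
proxies = np.column_stack([signal + rng.normal(scale=s, size=n)
                           for s in (0.3, 0.4, 0.5, 0.6, 0.8, 1.0)])

cov = np.cov(proxies, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)          # ascending, all >= 0
weights = eigvals[::-1] / eigvals.sum()    # descending shares of variance
print("variance shares:", np.round(weights, 3))
retained = int(np.flatnonzero(np.cumsum(weights) >= 0.9)[0]) + 1
print(f"PCs needed for 90% of the variance: {retained}")
```

Because one common signal drives all six series, PC1 takes the lion’s share and only a few components are needed to clear the cutoff, which is exactly the “second hope” described above.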

Some “interpretations” are more obvious and more plausible than others. In plant community data, for example, they’ve done enough of this kind of “analysis” to figure out that PCs 1 and 2 typically represent a moisture gradient and a nutrient gradient respectively (pines on dry, poor sites, spruce on moist, rich sites, etc). That’s just an example, but you get the idea.

Problem with PCA: a sort of “groupthink” can emerge if these kinds of studies are done without independent experimentation. e.g. Potted plant experiments to PROVE that pine and spruce really do perform better on distinctly different substrates.

MBH98 interpreted PC1 as an AGW effect. SM showed that proper centering (on the entire proxy period, and not just the calibration period – honest mistake? DUH – NOT if you use commercially available software, and NOT if you don’t go tinkering with raw Fortran code – yikes!) changes the shape of PC1. Probably what happens with proper centering and scaling is that the HS-shaped eigenvector drops down to a lower eigenvalue, like PC6–8. I.e., it’s still there, but just not worth interpreting. I don’t know this for a fact. You tell me. You guys did a proper PCA on the MBH dataset, so you should have the eigenvectors (PCs) and eigenvalues (PC loadings) handy.
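As a rough illustration of the centering point (my own synthetic setup, not a reproduction of MBH98 or of SM’s analysis): generate independent red-noise “proxies,” then compare PC1’s share of variance when the series are centered over the full period versus only over a late calibration window. With short-centering, series whose calibration-window mean happens to differ from their long-run mean acquire an offset, which PCA tends to load onto the leading component.

```python
# Sketch of the centering issue: PCA on independent red-noise "proxies,"
# centered either over the full period or only over a late calibration
# window (short-centering). Synthetic data; NOT the MBH98 data or code.
import numpy as np

rng = np.random.default_rng(7)

def red_noise(n, phi=0.9):
    """AR(1) series: persistent noise with no underlying trend."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

n_years, n_series, calib = 600, 20, 100
proxies = np.column_stack([red_noise(n_years) for _ in range(n_series)])

def pc1_share(data, center_rows):
    """Share of total variance captured by PC1 for a given centering."""
    centered = data - data[center_rows].mean(axis=0)
    eigvals = np.linalg.eigvalsh(centered.T @ centered)
    return eigvals[-1] / eigvals.sum()

full = pc1_share(proxies, slice(None))                    # full-period mean
short = pc1_share(proxies, slice(n_years - calib, None))  # calibration only
print(f"PC1 share, full centering:  {full:.3f}")
print(f"PC1 share, short centering: {short:.3f}")
```

The two shares differ because the choice of centering window changes the matrix being decomposed; whether, and by how much, short-centering inflates PC1 depends on the persistence of the noise and the window length.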

For doing pattern recognition or the like, fine, use the PCs, because that allows boiling down the data to extract the parts that are most useful.

Someone correct me if I’m misunderstanding what MBH98 claim to have done, but that is in fact it, isn’t it? I mean, I’m only telling you what PCA is. When you get the raw Fortran code, you tell me if MBH98 did this or not.

But for estimation and regression? I don’t see it. Seems like that is a different kettle of fish.

As I said, PCA is not a regression method at all. That’s why you don’t “see it”.

There are variance-covariance methods that could be used maybe to do what you describe. I’ve heard of PC Regression, for example, though I’ve never done it myself. But I don’t see how that would apply with paleo data where the goal is to extract components AFTER the regression calibration has been done.

Would you do that in market research or polling or something like that or use the regular data?

You do it wherever you suspect there may be “hidden factors” underlying variance patterns in multivariate data. Examples include psychology, medicine, economics, finance, you name it.

All this is in any standard multivariate stats textbook (e.g. Applied Multivariate Statistical Analysis, Johnson and Wichern, 2001).

To reduce something so that it can be expressed with minimal components is very different from saying that using those minimal components is more effective than using regular regression. For instance, even if temp is the major variable in tree rings and PC1 is the first component, this does NOT mean that temp is concentrated in PC1. It could be, might not be, or might be spread equally.

#231 suggested one thing, while #250 suggested that moisture and nutrients represent the first principal components. Of course it really depends on what the dominant stresses on the tree are. Clearly for bristlecone pines the dominant stress is carbon dioxide.

There were inadequate questions by the government reps, who seemed more interested in grandstanding than questioning the witnesses. I thought that the Republicans showed significantly more intelligence and integrity in their questions than the Democrats.

NAS Panel Chairman Dr. North was willingly “led around by the nose” by the climate-alarmist Democrats.

I was disappointed in the NAS/North report, which unsuccessfully attempted to whitewash Mann’s work. When the NAS panel said that Mann was partly correct, in that the world had definitely warmed in the past 400 years (as Earth exited the Little Ice Age), it was as disingenuous as saying that this morning was a lot brighter than last night.

I was further disappointed that North repeated his opinion that most of the current warming is due to human activities – if you read the NAS/North report, you will not find sufficient evidence to demonstrate that this is true – this conclusion is, in this case, mere unsubstantiated opinion by some members of the NAS committee.

Kudos to Wegman et al – Wegman adhered to his group’s mandate, refused to be led around by the nose or submit to bullying, and was neither obtuse nor politically-correct – as the NAS panel had been.

Re #251,252
Not sure I can decode what you are getting at. The goal of PCA in the case of multiproxy data is to rewrite the non-independent proxies as independent forcings. Can I be any plainer? PCA ASSUMES there is an interpretable causal relationship. It doesn’t estimate anything.
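A minimal numerical check of this rewriting, using plain fully-centered PCA via SVD on made-up data: the original columns are strongly inter-correlated, but the component scores come out exactly uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(4)
# Five strongly inter-correlated columns (a random linear mix).
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))

Xc = X - X.mean(axis=0)              # center, then decompose
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                        # PC scores, one column per component

C = np.cov(scores, rowvar=False)
off_diag = np.max(np.abs(C - np.diag(np.diag(C))))
print(off_diag)                       # essentially zero: components are uncorrelated
```

That is all PCA guarantees; which real-world forcing each uncorrelated component corresponds to is a separate interpretive step.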

1. Do any of you know about the “Divergence Problem”? It is described as follows: the post-1975 tree ring data show cooling, not warming.

2. Rep. Inslee gets the prize (so far) for worst questions – his false analogies to gravity, smoking, etc. are misleading and stupid. Inslee’s reference to the Naomi Oreskes study, which was thoroughly discredited by Benny Peiser, is at best naive and more likely just plain dishonest.

Explanations of the basics of PCA being a mechanism for sorting out variance into variables that maximize variance is not helpful for the discussion of whether a collection of higher order PCs is a better predictor than the data itself.

PCA is used quite extensively in the social sciences generally and market research specifically (that’s where I developed my understanding of it). Most often, it’s used as a data reduction technique, most commonly to reduce a large set of attitudinal items to a set of basic underlying dimensions. Typically one has some a priori hypotheses about what the underlying dimensions are (which is how you develop the individual attitude statements in the first place).

If you’ve ever undergone psychometric evaluation (e.g. in an employment profile), the basic dimensions (like introversion-extroversion) were originally developed using PCA.

Bender, under MBH, the bristlecones reside in PC1. Using proper centering, they are relegated to PC4, explaining 8% of the variance. I don’t think Steve has ever published all the eigenvalues, although I’ve asked him. It would be interesting to see how they stack up against the eigenvalue rule or scree test.

As I’ve mentioned elsewhere, this is at the heart of Mann’s defence that the centering issue “doesn’t matter” for the reconstruction. It doesn’t matter as long as you get the bristlecones into the regression stage (as PC1 or PC4), because the regression stage doesn’t “care” about the ordering of the PCs or the proportion of variance they explain.

So if you don’t mind that your reconstruction of the climate history of the planet is based on a handful of high-altitude NA trees that explain a mere 8% of the variance of the proxy network, then it indeed “doesn’t matter”.
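For readers who want to see the centering effect in miniature, here is a toy sketch (synthetic white-noise “proxies”, not the actual network, and not Mann’s code): two series carry a late-period excursion, and centering on the late “calibration” window alone inflates the share of variance PC1 assigns to them.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 300, 20
cal = slice(240, 300)                  # hypothetical "calibration" window

noise = rng.normal(size=(n, m))        # 20 plain white-noise series
step = np.zeros(n)
step[cal] = 3.0                        # late-period excursion
hockey = rng.normal(size=(n, 2)) + step[:, None]
X = np.hstack([noise, hockey])         # 22 synthetic "proxies"

def pc1_fraction(Y):
    """Share of total variance captured by the first principal component."""
    s = np.linalg.svd(Y, compute_uv=False)
    return s[0]**2 / (s**2).sum()

full = X - X.mean(axis=0)              # conventional full centering
short = X - X[cal].mean(axis=0)        # short (calibration-period) centering

print(pc1_fraction(full), pc1_fraction(short))
```

Under short centering the excursion series acquire a large offset over the pre-calibration period and dominate PC1; under full centering they do not.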

250 is ok. I just get sick of the posts that explain the basics about the data reduction. Learned that from Nate Lewis’s chemical nose already. It is a good technique for pattern recognition, but absent some reason for thinking that x will vary more with PC1 (and why should it) than with Y, why would you do the whole PCA and throw away the lower ones for ESTIMATIONS?

Explanations of the basics of PCA being a mechanism for sorting out variance into variables that maximize variance is not helpful

I was going back to basics because I didn’t understand what you were saying, and figured some clarity on terms could improve the precision of your statements so that I could be helpful. You can’t exchange ideas when the language is imprecise, because the message too often gets misinterpreted.

[To summarize my text this way (PCA as “a mechanism for sorting out variance into variables that maximize variance” … [shudder]) shows that we could be in for a rough ride if we try to continue this exchange. So maybe I better just back off and try to intuit what “the discussion” is about.]

“the discussion of whether a collection of higher order PCs is a better predictor than the data itself”

If the discussion is ‘how can a model be better than the data on which the model is based?’ then count me out, because it can’t. I can only assume that the discussion is about something else that IS logical and that you are working at getting the right words to express what that is.

I thought I answered the question. PCA is a way of trading off identifiability of non-independent variables (you know for certain what you measured in all these numerous proxies, but can’t explain the complex patterns of variation, which are more or less shared among proxies, but are not quite equal) for parsimony of interpretation of independent causal processes (where you have fewer objects to interpret, but where you have to guess at their interpretation).

i.e. You don’t get something for nothing, if that’s what you are asking. You make a deal with the devil and trade identity for parsimony. Sometimes the PC interpretations are so obvious they will have clear implications for further experimentation, and are thus considered valuable or interesting. Often they are not so easy to interpret, and the paper won’t even pass review.

On the topic of imprecision … what’s a “higher order” PC? You mean a plain, ordinary PC, a derived component? What do you mean here by “predictor”? A predictor typically refers to an independent variable used in a regression. Are you using the term in some other specialized sense? Uninterpretable questions can’t be answered squarely.

Sorry if I’m just not getting it. Don’t get frustrated. I’ll sit back and listen.

I disagree with Crowley’s comments (~5:00) about Lower Troposphere temperatures now agreeing with the surface record. I am aware of the study that makes this claim, but the data does not support it, imo. Anyone can refer to the satellite data and reach their own conclusion.

No warming trend in LT from 12/1978 to 04/1997, just oscillation around zero; then the huge 1997-98 El Nino spike peaking in 04/1998, which quickly reversed itself; possibly 0.2 degree C warming from 2000 to 2005, but note the complete lack of correlation with atmospheric CO2 levels, which have been rising consistently, at least since measurements began in 1958 (see http://cdiac.esd.ornl.gov/trends/co2/graphics/mlo145e_thrudc04.pdf)

Note the “possible” warming from 2000-2005 may still reverse itself as past upward oscillations have done – also note there is no convincing evidence (that I have seen) that the possible 2000-2005 warming was caused by increased atmospheric CO2 levels.

Also, even this alleged warming, if caused by greenhouse gases, is not linear; it is logarithmic (warming flattens with increased atmospheric CO2 concentration) – so linear extrapolations of 0.0x degrees C per decade greatly exaggerate future warming (unless we assume incredibly high increases in future CO2 levels).
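The logarithmic shape is usually quoted via the simplified forcing expression of Myhre et al. (1998), ΔF = 5.35 ln(C/C0) W/m²; the temperature response then depends on a sensitivity factor, which is the contested quantity and is deliberately left out of this sketch.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al., 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each additional 100 ppm adds less forcing than the previous 100 ppm:
f_first = co2_forcing(380) - co2_forcing(280)
f_next = co2_forcing(480) - co2_forcing(380)
print(f_first, f_next)

# A doubling always adds the same forcing, regardless of the starting level:
print(co2_forcing(560) - co2_forcing(280))   # 5.35 * ln(2), about 3.7 W/m^2
```

This is why forcing per ppm flattens even as concentrations rise steadily.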

Correlation of past temperatures with CO2 concentrations is poor to non-existent, indicating CO2 is a weak driver of warming. Also CO2 lags temperature, does not lead it.

Correlation of temperature with solar activity is much stronger than with CO2 concentration. Could the Sun be much more important than CO2? See Jan Veizer’s latest paper (Geoscience Canada, March 2005). Veizer proposes that the primary driver of Earth’s climate is solar and celestial, which drives the water cycle, which in turn drives the CO2 cycle. In summary and in the big picture, CO2 is not the driver, but the result.

why would you do the whole PCA and throw away the lower ones for ESTIMATIONS?

Are you asking a general question about PCA or do you have something more specific re MBH98 in mind?

The general answer is this: no one in their right mind throws ANYTHING away if it’s interpretable (i.e. correlates highly with some independent predictor). Interpretation is what you’re after in the fishing expedition of multivariate statistics.

Of course, it looks bad if you think you can interpret PCs 4-8, which account for a measly 10% of the variation, but can’t say a thing about what the 90% of the variation in PCs 1-3 might represent.

One last comment for tonight. Re-reading MBH98 just now for the first time in 3 years, and what they do is very weird. And I’m not talking about centering, scaling, data pre-processing, and the like. They start by doing PCA on the INSTRUMENTAL data, not the proxies … hmmm … and they don’t show the eigenvectors (PCs) for that anywhere, although they plot the corresponding spatial EOFs in Fig. 2 …

I will load that paper back up in my head in case you have a specific question. These methods are queer, quite aside from all the oft-cited problems.

Regarding allegations that hurricanes are getting stronger and more frequent due to humanmade global warming:

The US National Oceanographic and Atmospheric Administration (NOAA) is an authority on this subject. See http://www.noaa.gov/

From page 11 of the NOAA report, “THE DEADLIEST, COSTLIEST, AND MOST INTENSE UNITED STATES TROPICAL CYCLONES FROM 1851 TO 2004″:
“Table 6, which lists hurricanes by decades since 1851, shows that during the forty year period 1961-2000 both the number and intensity of landfalling U.S. hurricanes decreased sharply!”
The report is at: http://www.nhc.noaa.gov/pdf/NWS-TPC-4.pdf

The report was updated in August 2005. I’ve copied Tables 4 and 6 into a spreadsheet and plotted them (sorry I can’t reproduce them here).

The NOAA report is co-authored by Chris Landsea.

There is no data for 2005 when frequency of hurricanes was high, but one year does not make a trend.

Analysis:

1. Since the world cooled from 1940 to 1975, and humanmade warming allegedly has occurred since 1975, let’s examine the data before and after 1975.
Note that the 1990’s (according to Mann et al’s famous hockey stick article, MBH98) was allegedly the warmest decade in the past millennium and 1998 was the warmest year.

2. The most intense hurricane to hit the USA was in 1935 (minimum pressure 892 mb) and the second most intense was in 1969 (909 mb). Only the third most intense was after 1975, and that was in 1992 (922 mb).
It could be argued that the intensity of Category 5 hurricanes has decreased since 1935 (see my first 2 charts).
It could further be argued that the intensity of Category 4 hurricanes has decreased since 1886.

3. The number of hurricanes per decade is a measure of frequency.
Again, it could be argued that there has been a decrease in the frequency of hurricanes since the 1940’s, both for Category 3-5 hurricanes and for Category 1-5 (my 3rd chart, in which I have not included the partial decade from 2001-2004 nor anything from year 2005).
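The decade tabulation itself is trivial to redo once the landfall years are in hand; the years below are invented placeholders for illustration, NOT values from the NOAA tables.

```python
from collections import Counter

# Hypothetical landfall years for illustration (NOT the NOAA data):
years = [1935, 1938, 1944, 1947, 1954, 1955, 1960, 1969, 1972, 1983, 1992, 1995]

# Bucket each year into its decade and count.
per_decade = Counter(10 * (y // 10) for y in years)
print(sorted(per_decade.items()))
```

Substituting the actual Table 6 years reproduces the per-decade counts discussed above.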

Solar Cycle 25 peaking around 2022 could be one of the weakest in centuries.

May 10, 2006: The Sun’s Great Conveyor Belt has slowed to a record-low crawl, according to research by NASA solar physicist David Hathaway. “It’s off the bottom of the charts,” he says. “This has important repercussions for future solar activity.”

The Great Conveyor Belt is a massive circulating current of fire (hot plasma) within the Sun. It has two branches, north and south, each taking about 40 years to perform one complete circuit. Researchers believe the turning of the belt controls the sunspot cycle, and that’s why the slowdown is important.

“Normally, the conveyor belt moves about 1 meter per second, walking pace,” says Hathaway. “That’s how it has been since the late 19th century.” In recent years, however, the belt has decelerated to 0.75 m/s in the north and 0.35 m/s in the south. “We’ve never seen speeds so low.”

According to theory and observation, the speed of the belt foretells the intensity of sunspot activity ~20 years in the future. A slow belt means lower solar activity; a fast belt means stronger activity. The reasons for this are explained in the Science@NASA story Solar Storm Warning.

“The slowdown we see now means that Solar Cycle 25, peaking around the year 2022, could be one of the weakest in centuries,” says Hathaway.

This is interesting news for astronauts. Solar Cycle 25 is when the Vision for Space Exploration should be in full flower, with men and women back on the Moon preparing to go to Mars. A weak solar cycle means they won’t have to worry so much about solar flares and radiation storms.

On the other hand, they will have to worry more about cosmic rays. Cosmic rays are high-energy particles from deep space; they penetrate metal, plastic, flesh and bone. Astronauts exposed to cosmic rays develop an increased risk of cancer, cataracts and other maladies. Ironically, solar explosions, which produce their own deadly radiation, sweep away the even deadlier cosmic rays. As flares subside, cosmic rays intensify: yin, yang.

Hathaway’s prediction should not be confused with another recent forecast: A team led by physicist Mausumi Dikpati of NCAR has predicted that Cycle 24, peaking in 2011 or 2012, will be intense. Hathaway agrees: “Cycle 24 will be strong. Cycle 25 will be weak. Both of these predictions are based on the observed behavior of the conveyor belt.”

How do you observe a belt that plunges 200,000 km below the surface of the sun?

“We do it using sunspots,” Hathaway explains. Sunspots are magnetic knots that bubble up from the base of the conveyor belt, eventually popping through the surface of the sun. Astronomers have long known that sunspots have a tendency to drift from mid solar latitudes toward the sun’s equator. According to current thinking, this drift is caused by the motion of the conveyor belt. “By measuring the drift of sunspot groups,” says Hathaway, “we indirectly measure the speed of the belt.”

Using historical sunspot records, Hathaway has succeeded in clocking the conveyor belt as far back as 1890. The numbers are compelling: For more than a century, “the speed of the belt has been a good predictor of future solar activity.”

If the trend holds, Solar Cycle 25 in 2022 could be, like the belt itself, “off the bottom of the charts”.

Bender, if you are trying to DESCRIBE a system, then PCA is fine. And you get something from the transform and from the lost PCs (a simpler system).

But I don’t understand why it should be expected to be a good estimation technique for proxies to use the first PC, or first few, rather than the data itself. Why should the temperature response (or any variable) be expected to be concentrated preferentially in the PCs?

#267 TCO, I think my answer is only going to frustrate you because I really don’t understand the question. It sounds to me like you are asking how PCA accomplishes what it does. It sounds to me like you are mystified how it is possible to decompose a nonorthogonal matrix into a smaller number of orthogonal components.

Maybe your question is not about PCA per se, but about the monstrous procedure employed by MBH98? Which could not even be described as Mannian “PCA” … because that “overdetermined optimization procedure to determine the best combination of eigenvectors represented by the multiproxy network” (p. 781) sure as heck isn’t PCA.

I have never seen that particular method before, they don’t actually give it a name, and it probably would take a real serious statistician like Wegman to put a name on this highly unorthodox procedure. This is probably where the secret Fortran code comes in. [Incidentally, you can sense from the defensive language in the detailed Methods section (p. 786) that the reviewers were probably highly critical of certain aspects of this method. I suspect MBH had to rebut these criticisms fiercely to get the Editor’s reluctant approval.]

When authors get this creative (and it’s creative alright) in their Methods, it is standard practice to publish the new method FIRST in the statistical literature, and THEN publish the data paper that relies on the method. It is extremely unusual to have that degree of methodological innovation in a data paper in Nature.

You guys have been at this awhile. Surely you have a shorthand name for this step of the analysis? “Neofs eigenvectors were trained against the Nproxy indicators …” (Bottom of first column to top of second column, p. 786.) “Training” maybe? “Training” is not PCA.

TCO, even in this case, you are trying to describe the system. From the data itself, you cannot determine what the individual forcings are. Remember, each tree-ring is, presumably, a linear combination of various forcings, the most likely would be: temperature, CO2 fertilization, ground fertilization/soil quality, moisture (rain) and solar.

When you take the tree ring observations, the output would be some function of these factors. Perhaps you could write it as width/density = (a11 * CO2) + (a12 * GF) + (a13 * M) + (a14 * S) + (a15 * T) + NOISE. Now, these forcings can be written in matrix form for multiple observations. PCA simply attempts to identify what these coefficients are (sort of) by assuming that since the axx coefficients are uncorrelated (independence is not required, though it is for ICA), if we generate an orthogonal matrix of eigenvectors and their associated eigenvalues, we have “found” the mixing matrix A. In other words, we have described the whole system. This system, however, is still large since we still have all the observations.

Now, the reason that “data reduction” is used to “identify the system” is because we assume that the number of observations is MUCH larger than the number of valid forcings (only 5 in the above example). Therefore, if we do PCA on the data, we “reduce” the system down from the many tree-ring observations, to a simple little set of the forcings by throwing out all the insignificant PCs (the smallest, and it is normally obvious where the cutoff resides). By eliminating the small eigenvalues, you now have a new “system description,” which is much easier to deal with (the correlation matrix, Rxx would be 5×5 in our example).

Unfortunately, by doing things the way Mann did, what would otherwise be considered noise has been elevated to PC1. Not only that, Mann merely ASSUMES that this shape is temperature because it kinda-sorta matches with the instrumental record. It could just as easily be CO2, or moisture, or solar or soil quality for all we know. PCA does not tell you which is which unless you have a-priori information. Mann uses a (weak) correlation with temp to determine that this PC is indeed a temp signal, but it also correlates well with CO2 fertilization and solar activity. Oops.

Just an aside. A story on solar attribution of AGW by NASA from 2003 submitted by someone a day ago to digg has hit the front page of the Science section. Its getting a lot of comments (84) and diggs so far. http://www.digg.com/view/science.

Certainly it is at least reasonable to assume we can, and do, influence the climate. The problem, however, is that this influence is overwhelmed by natural processes to the point that a) it is likely impossible to determine what that influence is, i.e. it is in the noise and b) because of a), it isn’t worth getting our panties in a bind over something we can’t even detect.

Personally, I’m more worried about the decreasing solar activity that is pending, since our climate is still considered “sub-optimal” in terms of many countries’ ability to support their population (it is safe to say that a warmer Russia would be extremely economically beneficial to their people).

When authors get this creative (and it’s creative alright) in their Methods, it is standard practice to publish the new method FIRST in the statistical literature, and THEN publish the data paper that relies on the method.

Yep, but climate science doesn’t seem to be that way. They just stuff it all in the methods section, and their results somehow justify the method.

You guys have been at this awhile. Surely you have a shorthand name for this step of the analysis?

I think the term “Mannian PCA” fits (I believe coined by Steve M long, long ago), even if it’s not PCA (kind of like “Emperor’s New Clothes,” even though there are no clothes).

#275. Yes. On the one hand we have NASA predictions of temperature increases of about 0.5C by 2050 and 1C by 2100 due to GHG. On the other we have a sun that can go into a funk like it did in the Maunder minimum for no reason we know yet and drastically reduce temperatures and productivity. Moreover it could do it again any time. I think there are risks in both directions, but it would be a good idea to attempt to quantify and weigh both possibilities.

North’s NAS report on “Surface Temperature Reconstructions for the Last 2000 Years”

and

NASA’s Prediction for Solar Cycle 25.

Note that in the first ~hour of the Whitfield Committee hearing of July 19/06, it was stated that Mann, through two lawyer’s letters, refused to attend on July 19 and again on July 27. Mann asked Crowley to attend in his place. Mann’s lawyer’s letters were entered into evidence. Note also that all witnesses were “sworn in” when they testified to the Committee.

Previous Comments by Allan M.R. MacRae, 25 Jun 2006

The June 22, 2006 NAS report on climate reconstruction is a fairly reasonable technical document, although it is less than up-to-date and less than competent in some subjects. Regrettably, the Summary and Press Release are somewhat inconsistent with the Report and exhibit some bias, and the June 22 verbal comments of the panel exhibited strong pro-AGW bias by some members and should be ignored as not representative of the committee Report.

For example, the Report upheld virtually all the technical criticisms of Mann’s hockey stick (MBH 98 and related papers) by McIntyre and McKitrick (M&M), but some of the panel members went so far in their verbal comments as saying the M&M criticisms were not material – in this regard the committee’s verbal comments were hogwash, or more accurately, whitewash.

In truth, Mann’s hockey stick eliminated both the Little Ice Age and the Medieval Warm Period from the historic climate record. The NAS committee report confirmed the existence of both these climatic periods, but somehow managed to ignore this important point with respect to Mann’s conclusions.

Furthermore, the committee blamed the IPCC and the press for overemphasizing the impact of Mann’s hockey stick on the global warming debate, ignoring the fact that Mann was a lead author in the IPCC report who promulgated such overemphasis, and also that Mann’s supporting website “realclimate” further promoted such overemphasis of his flawed conclusions.

Finally, the committee failed to comment adequately on Mann’s reluctance to provide his data for confirmation by others and Mann’s refusal to provide full disclosure of his analytical methods. These acts have been condemned by other scientists in the strongest possible terms.

Some panel members also felt the need to say that the modern warming was clearly human-made, but the committee report provided no evidence to back up this claim. Such statements should be deemed unsupported unless evidence is provided.

Some of my criticisms of the NAS Report are its failure to address the following important issues:

There are legitimate questions about the accuracy of the surface temperature database. Much of the current alleged warming is based on few thermometric measurements in the polar regions from Russia and Canada. However, the USA’s NOAA dataset, which is likely the very best quality database in the world, shows slight Summer and Fall cooling in the USA from 1930 to 2005, and about 0.5 C warming only in Winter and Spring seasons. http://www.ncdc.noaa.gov/oa/climate/research/cag3/na.html

The committee failed to recognize that the alleged rise in surface temperatures as measured by thermometry is inconsistent with the satellite/balloon records, which showed little or no net warming in the Lower Troposphere (LT) from 1980 to 2000 (including the 1998 El Nino spike which quickly reversed itself). http://vortex.nsstc.uah.edu/data/msu/t2lt/tltglhmam_5.2

Attribution of recent warming to human-made CO2 ignores the amplifying effect of solar variation caused by cosmic rays. The net result is that the warming effect of a stronger sun is amplified by the cosmic ray effect, and the impact of a weaker sun is similarly amplified. See Veizer and Shaviv (2003) and Veizer (2005) .

I believe that this cooling will be much larger than the warming trends observed to date and could have significant negative impacts on Canada, the northern USA and Europe, particularly our agricultural sectors. Please note that we predicted that global cooling would start about 2020-2030, in my September 1, 2002 article in the Calgary Herald (based on discussions with Dr. Tim Patterson, paleoclimatologist, of Carleton University).

The issue I’m most concerned with is actually global cooling. While it is technically gratifying to have predicted in print >three years before NASA that we are going to enter a global cooling phase by about 2020, this is not good news for the planet. During cooling periods, civilizations do much worse than during warming cycles.

Here are some of the historical cycles of warming and cooling:

…over the past 3000 years there was “an alternation of three relatively cold periods with three relatively warm episodes.” In order of their occurrence, these periods are described by Desprat et al. as the “first cold phase of the Subatlantic Period (975-250 BC),” which was “followed by the Roman Warm Period (250 BC-450 AD),” which was followed by “the Dark Ages Cold Period (450-950 AD)” which “was terminated by the onset of the Medieval Warm Period (950-1400 AD),” which was followed by “the Little Ice Age (1400-1850 AD), including the Maunder Minimum (at around 1700 AD),” which “was succeeded by the recent warming (1850 AD to the present).”

They didn’t call it the “Dark Ages” for nothing. Cold periods have typically been characterized by famine, plague and war. However during the Little Ice Age there were clear indications that civilization was starting to adapt to natural climate change. Let’s all hope we do better during the next cooling phase.

The big question remains: How much of the current warming is natural – I would guess over 80% – and the coming cooling will probably overwhelm the current warming.

Regards, Allan

P.S. Re our prediction of cooling starting in ~~2020: In the spirit of data archiving, I will hint that Tim Patterson is an expert researcher who focuses, among other subjects, on the (~80-90 year) Gleissberg Cycle – I further manipulated the data (using secret, proprietary statistical methods) on my Fortran-programmed Ouija board. :-)

Excellent description (#271), Mark! I suggest people struggling to understand what this PCA is doing in the reconstruction should carefully read & understand #271. IMHO, you can’t explain the thing much better than that.

Note that in the first ~hour of the Whitfield Committee hearing of July 19/06, it was stated that Mann, through two lawyer’s letters, refused to attend on July 19 and again on July 27. Mann asked Crowley to attend in his place. Mann’s lawyer’s letters were entered into evidence. Note also that all witnesses were “sworn in” when they testified to the Committee.

Based on this, either the committee does not have subpoena power, or it chose not to issue subpoenas. However, the fact that the “witnesses” were sworn in does indicate they can be held accountable for perjurious statements. Even foreign interests could be held accountable. I’m thinking Mann saw that this was a trap, and bowed out because, in the absence of subpoena, he has no legal obligation to put himself in a situation where he either lies, or refutes his own science.

Bender: my issue is not with the decomposition into vectors that most efficiently describe the parameter space. My issue is with the decision to eliminate the lower order ones and then compare temp (X) versus PC1 or some number of PCs, versus comparing it to the Y itself. I think this is not just a Mannian practice, but that others use PCA to make estimates in this manner as well.

RE: #191 – “With respect to 1961 – 1990 base period” – it’s a moving average, not an absolute baseline. What can one conclude? June 2006 was warm relative to Junes during that 30 year period. So what, what does that tell me, in the big picture. Do you understand the issue here?

RE #286: TCO, I see your point, finally. And it is a good one. (Especially when, as in MBH98, the first five PCs account for little of the total variation: 27% in the case of the p. 781 temperature PCs, not the proxy RPCs.)

The reason the lower ranking PCs are typically dismissed is because they are either small (low eigenvalue) or uninterpretable (low correlation with independent predictors). If you dismiss purely on the basis of eigenvalue (variance accounted for), you risk missing a potentially highly interpretable effect. And you are right – this is a problem because your independent predictor might correlate better with, say, PC8 (low eigenvalue) than with PC1 (high eigenvalue). In which case you’d have an awful hard time arguing for your favored interpretation of PC1 – because that interpretation would be better attributed to PC8.

Has that got it?

In a fishing expedition you should never dismiss PCs out of hand. Especially when the top five explain so little of the variation in the original data matrix. (There is no real rule as to how many PCs to retain for interpretation. “Rules of thumb” are not rules, they are conventions.)
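A sketch of this screening in code (synthetic data; `predictor` is a stand-in for an independent series such as an instrumental record): rank the PCs by eigenvalue, then check each one’s correlation with the predictor, and note the two rankings need not agree.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
big = rng.normal(size=n)            # dominant but uninteresting shared factor
predictor = rng.normal(size=n)      # the independent series we care about

# 12 synthetic series: all load strongly on `big`, weakly (with alternating
# sign) on the predictor-related factor.
signs = np.where(np.arange(12) % 2 == 0, 1.0, -1.0)
X = (np.outer(big, 2.0 * np.ones(12))
     + np.outer(predictor, 0.6 * signs)
     + 0.3 * rng.normal(size=(n, 12)))

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
frac = s**2 / (s**2).sum()          # eigenvalue ranking
corr = [abs(np.corrcoef(U[:, k], predictor)[0, 1]) for k in range(4)]

# PC1 wins on eigenvalue, but the predictor is interpretable only via PC2.
print(frac[:2], corr[:2])
```

Dismissing everything below PC1 on eigenvalue grounds alone would discard the only interpretable component.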

I think this is not just a Mannian practice

Correct. People stop interpreting lower ranking PCs simply because it would highlight your ignorance about a system to say you can interpret PCs 6-8, accounting for 10% of the variation, but you haven’t a clue what PCs 1-5 (with 80% of the variation) represent. You can do that in a PhD dissertation. But you typically can’t publish it. Because what it really suggests is you’ve only got the start of a half-answer.


The AGW crowd, when you bring up possible solar influence to climate change, all say the same thing. “It’s been calculated and there isn’t enough change in solar output to account for the climate change.”

All well and good; it creates an unfalsifiable argument, something they can stand by, and no amount of evidence will change their minds. Oh well.

But then a thought this morning. The change in CO2 also cannot directly account for the change we’ve seen. Willis is good at doing those calculations and has shown it here numerous times. CO2 warming is insufficient to account for current warming trends, and also insufficient to account for the warming output by models. In both cases the balance is attributed to positive feedback. One example, in fact the primary example, is that CO2 causes warming, allowing the air to hold more water vapor, which causes more warming, allowing the air to hold more water vapor, etc.

Well, if this positive feedback can come into play because CO2 warms the atmosphere, amplifying small amounts of warming into larger warming, why can’t the same feedback be in play for solar warming? Small changes that are “insufficient to account for the warming we have seen” would start the amplification cycle. These feedbacks, if in place (and I have no reason to doubt that they are), will work on any warmth, whether solar-induced or GHG-induced.
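A minimal sketch of that feedback algebra (all numbers hypothetical): if each degree of warming induces a further f degrees via, say, water vapor, the series dT + f*dT + f^2*dT + ... converges to dT / (1 - f) for f < 1, and nothing in the algebra cares whether the initial dT is solar or CO2-driven.

```python
# Toy feedback amplification (illustrative numbers only, not model output).
def amplified(direct_warming: float, feedback: float) -> float:
    """Total warming after an infinite geometric feedback series, feedback < 1."""
    if not 0 <= feedback < 1:
        raise ValueError("feedback factor must be in [0, 1) for the series to converge")
    return direct_warming / (1.0 - feedback)

# The same feedback factor amplifies any initial warming, whatever its source.
for source, dt in [("CO2 alone", 0.4), ("solar alone", 0.2)]:
    print(f"{source}: direct {dt:.1f} C -> amplified {amplified(dt, 0.6):.1f} C")
```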

My issue is with the decision to eliminate the lower-order ones and then compare temperature (X) against PC1 or some number of PCs, versus comparing it to the Y itself.

TCO, I think you are correct that this is poor practice, though comparing to Y itself (I’m assuming you mean Y as in the observation vectors?) is not possible, because the observations contain linear mixtures of the other signals as well. In other scenarios it may be a valid technique to compare a specific PC to some instrumental quantity, BUT it requires that you know at least something about the signal you are estimating in the first place. For example, in a radar or comm application, you know a priori what specific frequency you’re looking for, and you have a rough estimate of when that signal will be present. In comm, you even know the modulation (AM, FM, QPSK, etc.) of the waveform, so when you estimate the signal you can immediately deduce whether you have the right answer (i.e., if your estimate of an AM signal turned out to be a simple sine wave, you may have picked the wrong PC).
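The comm-style scenario can be sketched like this (a synthetic example, not any real application): knowing a priori that the signal of interest is, say, a 50 Hz tone, you pick the PC whose spectrum peaks at that known frequency rather than defaulting to PC1, which here is dominated by a stronger interferer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a weak 50 Hz tone of interest plus a stronger 120 Hz
# interferer, mixed into several "antenna" channels with orthonormal gains.
fs, n_samp, n_chan = 1000, 2000, 6
t = np.arange(n_samp) / fs
target = np.sin(2 * np.pi * 50 * t)             # known 50 Hz tone (weak)
interferer = 3.0 * np.sin(2 * np.pi * 120 * t)  # stronger 120 Hz tone

Q, _ = np.linalg.qr(rng.normal(size=(n_chan, 2)))  # orthonormal mixing vectors
X = np.outer(target, Q[:, 0]) + np.outer(interferer, Q[:, 1])
X += 0.05 * rng.normal(size=(n_samp, n_chan))      # sensor noise

# PCA via SVD; PCs come out ordered by variance, strongest first
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T

# Use the a priori knowledge: select the PC whose spectrum peaks at 50 Hz
freqs = np.fft.rfftfreq(n_samp, 1 / fs)

def peak_freq(x):
    return freqs[np.argmax(np.abs(np.fft.rfft(x)))]

chosen = next(i for i in range(n_chan) if abs(peak_freq(scores[:, i]) - 50.0) < 1.0)
print(f"PC1 peaks at {peak_freq(scores[:, 0]):.0f} Hz; "
      f"the known 50 Hz signal lives in PC{chosen + 1}")
```

The point is that the selection criterion is the known waveform property, not the eigenvalue ranking.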

RE 288, interesting, but I assume it’s accounted for. The way I understand it, the modeled positive feedback is based on the calculated temperature change due to all forcings, not simply the change in CO2.

The AGW crowd, when you bring up solar variation (or natural variation in general), is also prone to respond with, “Well, if historical solar influence is greater than we thought, then we’re really going to fry when the sun kicks in again, so we’d really better cut back on GHGs because the two together would be disastrous!!!” They seem to see an increase in realized natural variation as another dreaded factor to add to gloom-and-doom scenarios, as opposed to evidence that GHG forcings and/or positive feedbacks are overestimated (and/or negative feedbacks are underestimated).

I think Mann should have tried to fit his principal component set against each of the driving parameters: for example, solar, volcanism, and CO2. If and only if the principal components explain each of the drivers well is it reasonable to use them for temperature reconstruction.

Re #271,
Yes Mark, after a year of trying to understand PCA, that’s how I’ve come to understand it as well. It’s good to see that my brain is not too muddled from age yet! All of this brings up the importance of the off-centering. If proper centering in the calibration phase brings the key hockey-stick-shaped PCs down to PC4 (as in the North American series), then I think one would be hard pressed to explain why we should keep it (PC4) when the components in PC1-PC3 have not been identified. In fact, I think this tells us that temperature is NOT a big component of tree-ring variation, and thus that tree rings are a lousy proxy. To me this says that they need to go back to the drawing board. This experiment has failed. It also says that any other study that directly uses Mann’s method (or even a similar one), or the NOAMER PC1 directly, should be reexamined, as it will be affected.

Re #294
PCA is a *covariance method*. (That’s what it means to say it is a data reduction method.) There are, however, well-established, valid *covariance-covariance* methods already in existence which could be used to answer the question you pose: how does this set of X forcings relate to this other set of Y proxies? Whereas PCA is not an estimation procedure, this would be.

It is not clear to me why MBH98 did not do that. Really. There may be a good reason. (But plant community ecologists do this all the time, through canonical correspondence analysis: how does this set of X plant species abundances relate to this set of Y soil and site characteristics? Surely geologists do that kind of thing too, in order to predict the kinds of places where precious deposits are likely to occur.)

I think the answer may lie in the software used to implement training algorithms like the p. 786 Fortran Mannomatic or RegEM, which are used to come up with the link function. Maybe these packages are configured in a way that makes it easy to go from valid interpolation to questionable extrapolation? Purely, purely a guess. I’m not at all suggesting it is time to open up the hood of RegEM.

p.s. #296: You don’t need to include ALL the known forcings. So don’t let that stop you. But of course the more known forcings you include, the more accurate the model estimates. (An incomplete model may be an approximation, but possibly not a bad approximation.)

Scientists told the House committee that humans are causing most of the earth’s warming and the planet is 8 degrees to 10 degrees hotter than it was thousands of years ago. Some voiced concern with the pace of U.S. efforts.

Re: #277. All Congressional committees have subpoena power. If you are subpoenaed your choices are to show up and talk, show up and take the Fifth, or refuse to show and hope the Committee in question does not send U.S. Marshals to find you. That they chose not to use it (or so we assume) vis-a-vis Mann is interesting. They may have concluded enough light could be generated without him. An argument could be made that more light was generated without him than would have been with him, since those who did testify had less (or no) reason to be defensive, as Mann would have.

The fact that Mann had lawyer(s) involved (or so it has been reported; I have not independently verified this) makes the entire question more intriguing. The lawyer was there for some reason, and that reason likely is, or was, that he was negotiating with the Committee. Perhaps Mann tried to negotiate control over a panel in exchange for his appearance? Or attempted to testify without being on a panel that could contradict him? Or insisted that he not be forced to appear on the same panel as Steve McIntyre? Mann could have held out for favorable terms, betting the Committee would not subpoena. The Committee may have acceded to his terms, possibly after concluding his appearance on that date was unnecessary. The Committee knows it can change its mind at any time.

There is an untold story hinted at by the (alleged) presence of a lawyer. Lawyers are not cheap. It would be interesting to know whom Mann (or someone on his behalf) hired, and their specialty. One thing is very likely: if Mann had a lawyer there, it is because there was work for his lawyer to do there.

A chronic illness only partly explains why James Hansen decided to skip the House Government Reform Committee’s first hearing on global warming in seven years. The embattled NASA scientist also passed on yesterday’s event because lawmakers are “still in denial” about the reasons for dramatic changes in the Earth’s climate, he said last night in an e-mail.

In the message Hansen sent to reporters to explain his absence from yesterday’s hearing, the director of the Goddard Institute for Space Studies said he had a conflicting doctor appointment to deal with a cold that interacts with his asthma to create a drip in his lungs. But he also indicated he would have adjusted his schedule if the witness list did not also include skeptical points of view.

“I would get out of my sickbed to testify to Congress on global warming, if they were ready to deal responsibly with the matter,” Hansen wrote. “But obviously they are still in denial, inviting contrarians to ‘balance’ the science of global warming.”

Hansen apparently was objecting to the House panel’s late addition of John Christy, a professor and director of the Earth System Science Center at the University of Alabama in Huntsville. In his testimony yesterday, Christy told lawmakers that scientists “cannot reliably project the trajectory of the climate” for large regions of the United States.

Christy also said it would be a “far more difficult task” to predict the effects should the United States adopt a mandatory greenhouse gas policy.

Hansen’s e-mail said skeptical points of view cloud the climate debate rather than enlighten it. “The function of the contrarians is to obfuscate what is known, so as to keep the public confused and allow special interests to continue to reap short-term profits, to the detriment of the long-term economic well-being of the nation,” he said.

Hansen said Congress should direct the National Academy of Sciences to update its 2001 report to President Bush on the state of the science surrounding global warming. “Until then, it is just a charade,” he wrote.

Consistent statements?
During the hearing, the first on global warming in the panel since 1999, both Democratic and Republican lawmakers pressed top Bush administration officials to explain the White House stance on the science behind global warming.

Tom Karl, director of the National Climatic Data Center, said there is “considerable confidence” that global warming is mostly attributable to increased greenhouse gas concentrations, especially since the 1970s.

And White House Council on Environmental Quality Chairman Jim Connaughton testified that a 2001 National Academy of Sciences’ report led Bush to move beyond the scientific debate to focus on developing low and zero-carbon technologies. “It goes well beyond recognition,” Connaughton said.

But Rep. Chris Van Hollen (D-Md.) said Connaughton’s testimony is not consistent with the president’s recent public statements. The Maryland Democrat asked Connaughton to explain how his comments yesterday compared with a recent interview Bush gave to People magazine.

In the People interview, Bush said, “I think we have a problem on global warming. I think there is a debate about whether it’s caused by mankind or whether it’s caused naturally, but it’s a worthy debate. It’s a debate, actually, that I’m in the process of solving by advancing new technologies.”

Van Hollen complained that Bush’s published comments are more widely read than anything Connaughton said during the hearing. “That kind of statement, read by millions of people, gives the impression we’ve not reached a consensus,” the lawmaker said of Bush.

Connaughton disputed Van Hollen’s characterization of the Bush interview. He later told reporters part of the problem was that People magazine only publishes short sound bites. “The president has said much more than that,” Connaughton said.

House panel opens inquiry
In an effort to more closely scrutinize the White House’s formation of federal climate policy, the House Government Reform Committee yesterday also launched an inquiry into media reports that the White House edited scientific documents on global warming to emphasize uncertainty.

Chairman Tom Davis (R-Va.) and ranking member Henry Waxman (D-Calif.) requested the White House materials on global warming, including all of former CEQ chief of staff Philip Cooney’s climate-related efforts.

Cooney resigned last spring after news reports that he edited reports to soften the link between global warming and industrial emissions of greenhouse gases. In their letter to Connaughton, the lawmakers note that Cooney is not a scientist.

The lawmakers gave Connaughton’s office until Aug. 11 to produce the documents, which include any materials related to “CEQ’s reviews of and suggested edits to materials produced by other federal agencies regarding climate change.”

Picking up on accusations published in several press reports, the lawmakers also requested documents tied to “efforts by CEQ to manage or influence statements made by government scientists or experts to representatives of media regarding climate change.”

CEQ also was called on to produce materials tied to its communications with other federal agencies and nongovernment parties regarding climate change science.

The panel’s inquiry marks a shift in Congress’ oversight of the Bush administration’s environmental policy. The House has focused largely on energy policy while holding only a handful of hearings on Bush’s climate-change agenda.

While Waxman described the letter as the beginning of an investigation, Davis said it was only part of his panel’s normal functions. “I wouldn’t call it an investigation,” Davis said. “I’d call it oversight.”

Where’s Gore?
Hansen was not the only invited witness who did not accept the panel’s invitation to testify. Former Vice President Al Gore also turned down a request to appear at the House hearing, Davis said.

Picking up on remarks Gore made to the Los Angeles Times that he is willing to go anywhere to talk about climate change, Davis said he asked the star of the documentary “An Inconvenient Truth” to pick any date in June or July to appear as a witness.

“But apparently ours was not one of the ‘audiences’ he had in mind,” Davis said. “While Mr. Waxman and I are disappointed, we understand that movie screenings and book signings are time consuming, and we hope his book signing in Northern Virginia went well yesterday.”

Coming soon … “The War on Global Warming”
Hollywood also has more plans in store to convince the public about global warming’s effects.

Movie producer Marshall Herskovitz — whose screen credits include “I Am Sam” and “The Last Samurai” — testified before the House panel that Americans need to be convinced they can do something to stop rising greenhouse gas concentrations. He likened his plan to what the government did to kickstart the economy and public opinion during World War II.

In an interview, Herskovitz said he would soon team up with Paul Haggis, director of last year’s Academy Award best picture film “Crash,” on a new marketing blitz. He said it would be called “The War on Global Warming.”

Herskovitz said he was not ready to reveal additional details about the campaign except to say it would include television advertisements and other forms of public outreach. He said his efforts were not affiliated with another Hollywood-based environmental activist, Laurie David.

re 294: Yes, I assumed any congressional committee would have subpoena power, but I am not an expert. That said, I would also assume that this power does not extend to foreign nationals not living in the US, i.e. Steve M. My guess is that Steve M. was under no legal obligation to attend. Pretty telling that he did and Mann did not (well, he did via surrogate). Of course, once you’re under oath in the US, I’m sure perjury applies even to foreign nationals.

RE: #279 – Global cooling in this day and age could be a significant challenge, given the population growth over the previous 200 years and our overreliance on technology. Imagine solely the impacts on energy prices – it’s sobering. Add in issues with food shortages – downright ugly. If asked to choose my poison, warming would always win hands down. Why is this so hard for the masses (and the alarmist pied pipers who lead them) to grasp?

Steve (McIntyre), I’d also like to thank you for making the trek South and putting up with our incessantly bloviating political hacks. These guys are public SERVANTS, but they behave as though they sit at the right hand of God. Your presentation was calm, polite, direct and concise. Hopefully, the tide is turning towards better science on climate issues, and in large part we have you to thank for this.

“Not all senators were uniformly impressed. Hillary Clinton was the first to try to cut him down to size. “His views on climate change are at odds with the vast majority of climate scientists; it also appears in a work of fiction,” the senator for New York said dismissively. “I think that the topic of this hearing is very important but organised in a way to muddy sound science rather than clarify it,” she added, before thanking the other four witnesses who attended, but not Crichton.”

“By the time Crichton and the other four panellists had finished their opening statements, most of the senators, including Mrs Clinton, had left to attend another Senate hearing on the ramifications of Hurricane Katrina.”

I watched the replay of the hearing this afternoon. How did you manage to keep your cool in the presence of our political hacks? Thank you for hanging in there for five hours. Thanks for being such a professional!

#309 — “In an effort to more closely scrutinize the White House’s formation of federal climate policy, the House Government Reform Committee yesterday also launched an inquiry into media reports that the White House edited scientific documents on global warming to emphasize uncertainty.”

One would also like to see this, ‘In an effort to more closely scrutinize the IPCC’s encouragement of federal climate policy, the House Government Reform Committee yesterday also launched an inquiry into media reports that the IPCC SPM group edited scientific documents on global warming to fabricate certainty.‘

One wonders why one sort of purported editing gets frantic attention while a free pass is given to the well-publicized obfuscation (suppression?) by the SPM authors of the deep scientific uncertainties written into the IPCC TAR by the scientific staff.

During this hearing, the Democrats often made scary, unsubstantiated claims about the horrors of global warming, and further bogus claims about the state of consensus in climate science. See MIT’s Richard Lindzen on the consensus issue – I’ll post one of Lindzen’s articles if I can locate it.

Several Republicans did ask the key question:
How much future warming will occur and how much of that is due to human activities?

It is interesting that the climate alarmists have gradually moderated their claims – I recall warming projections of ~5 to 11 degrees C from the computer modelers just a few years ago – now they appear to be claiming much lower numbers as they try to history-match their models.

I suspect that both the climate computer models and the input assumptions are not only inadequate, but in some cases key data is completely fabricated – for example, the alleged aerosol data that forces models to show cooling from ~1940 to ~1975. Isn’t it true that there was little or no quality aerosol data collected during 1940-1975, and the modelers simply invented data to force their models to history-match; then they claimed that their models actually reproduced past climate change quite well; and then they claimed they could therefore understand climate systems well enough to confidently predict future catastrophic warming?

Using the above methodology, one could create a climate model that would history-match past temperatures to any randomly-chosen parameter, and then argue to impose legislation to minimize that parameter. Based on the tone of the hearings thus far, I would choose to correlate global warming with Democrats. :-)

Note that as the level of CO2 increases in the atmosphere, the resulting warming is logarithmic, not linear or exponential – each ppm of increased atmospheric CO2 has less warming effect than the previous one.

Closed Form Solution to Bound the Warming Question

Using the USA NOAA annual data from 1930 to 2005, I calculate ~0.3 degrees C of additional warming from today for a hypothetical doubling of CO2 from pre-industrial levels. I’m sure others can improve upon this crude analysis, but I doubt that anyone can demonstrate serious future warming unless you assume huge amplifiers, for which there is no evidence.

Basis: 0.38 degrees F of warming in average annual temperature from 1930 to 2005 – note that all of this warming occurred in winter and spring, and slight cooling occurred in summer and fall.

Obviously, this analysis makes certain unconservative assumptions about the relationship of atmospheric temperature and CO2 (by ascribing all current warming to atmospheric CO2) – and still there is no problem.

Actually, if I were inclined to worry about climate change, it would not be about warming – it would be about the slight cooling during the USA’s growing and harvest seasons. Could evil CO2 actually be preventing even greater summer cooling, which could threaten the growing season of the world’s greatest food producer?

k = deltaT/ln(CO2b/CO2a)

deltaT = k*ln(CO2b/CO2a)

Run using USA NOAA annual data, converted to degrees C (1930 to 2005):
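A sketch of the closed-form bound above, using the stated model deltaT = k * ln(CO2b/CO2a). The CO2 concentrations are round numbers I am assuming for illustration (roughly 1930 and 2005 levels, and a 280 ppm pre-industrial baseline); the comment itself gives only the temperature figure, so the result is ballpark only.

```python
import math

# Assumed concentrations (illustrative, not from the comment):
co2_1930, co2_2005, co2_pre = 306.0, 379.0, 280.0

# Observed warming quoted in the comment: 0.38 F, converted to Celsius
dT_1930_2005 = 0.38 * 5.0 / 9.0  # ~0.21 C

# Calibrate k by ascribing ALL observed warming to CO2 (unconservative)
k = dT_1930_2005 / math.log(co2_2005 / co2_1930)

# Additional warming from today to a doubling of pre-industrial CO2
remaining = k * math.log(2 * co2_pre / co2_2005)
print(f"k = {k:.2f} C per ln(CO2 ratio); additional warming ~ {remaining:.1f} C")
```

With these assumed concentrations the toy run lands at a few tenths of a degree, the same ballpark as the ~0.3 C figure quoted above; different assumed concentrations shift it somewhat but not by an order of magnitude.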

The issue of man-induced climate change involves not the likelihood of dangerous consequences, but rather their remote possibility. The main areas of widespread agreement (namely that global mean temperature has risen rather irregularly about 0.6C over the past century, that atmospheric levels of carbon dioxide have increased about 30% over the past century, and that carbon dioxide by virtue of its infrared absorption bands should contribute to warming) do not imply dangerous warming. Indeed, we know that doubling carbon dioxide should lead to a heating of about 3.7 watts per square meter, and that man-made greenhouse heating is already about 2.7 watts per square meter. Thus, we have seen less warming than would be predicted by any model showing more than about 0.8 degrees C warming for a doubling of carbon dioxide. This is consistent with independent identifications of negative feedbacks.

Alarming scenarios, on the other hand, are typically produced by models predicting 4 degrees C. After the fact, such models can only be made to simulate the observed warming by including numerous unknown factors which are chosen to cancel most of the warming to the present, while assuming that such cancellation will soon disappear.

Alarm is further promoted by such things as claiming that a warmer world will be stormier even though basic theory, observations, and even model outputs point to the opposite.

With respect to Kyoto, it is generally agreed that Kyoto will do virtually nothing about climate no matter what is assumed. Given that projected increases in carbon dioxide will only add incrementally to the greenhouse warming already present, it seems foolish to speak of avoiding dangerous thresholds. If one is concerned, the approach almost certainly is to maximize adaptability.
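The arithmetic implied in the summary above can be made explicit, using only the numbers it quotes (and, like the summary’s equilibrium framing, ignoring any lag in the response):

```python
# Numbers quoted in the passage above:
forcing_doubling = 3.7   # W/m^2 for a doubling of CO2
forcing_realized = 2.7   # W/m^2 of man-made greenhouse heating already present
observed_warming = 0.6   # C of warming over the past century

# If warming scales with forcing, the observed response implies a
# per-doubling sensitivity of roughly:
implied_sensitivity = observed_warming * forcing_doubling / forcing_realized
print(f"implied sensitivity: ~{implied_sensitivity:.1f} C per doubling")
```

That is the source of the ~0.8 C figure: models predicting more than that per doubling should already have produced more warming than has been observed, absent offsetting factors.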

1. INTRODUCTION

After spending years describing the physics of climate to audiences concerned with global warming, I came to the realization that I was speaking to people who were not aware of the basic premises of the issue. The listeners were typically under the impression that the case for climate alarm was self-evident and strong, and that concern for the underlying physics constituted simply nit-picking in order to see if there were any remotely possible chinks in the otherwise solid case. Given that most people (including scientists) can rarely follow 15-minute discussions of somewhat complex science, the conclusion of the listeners is that the objections are too obscure to challenge their basic prejudice.

I decided, therefore, to examine why people believed what they believed. What I found was that they had been presented with mainly three claims for which widespread scientific agreement existed. While these claims may be contested, they are indeed widely accepted.

The only problem is that these claims do not suggest alarm. Rather, upon careful analysis, they make clear that catastrophic implications are grossly unlikely, but cannot be rigorously disproved. Thus, the real situation is that the supporters of alarm are the real skeptics who cling to alarm against widely accepted findings. The profound confusion pertaining to this situation is only reinforced by quibbling over the basic points of agreement. Such quibbling merely convinces the public that the basic points of agreement must be tantamount to support for alarm. We will begin by analyzing the popular consensus.

2. THE POPULAR CONSENSUS

In a recent set of articles in the New Yorker, which defend climate alarmism, Elizabeth Kolbert1 presented a fairly good summary of the popular consensus:

All that the theory of global warming says is that if you increase the concentration of greenhouse gases in the atmosphere, you will also increase the earth’s average temperature. It’s indisputable that we have increased greenhouse-gas concentrations in the air as a result of human activity, and it’s also indisputable that over the last few decades average global temperatures have gone up.

To be sure, this statement makes the logical error of ignoring other sources of climate change or the ubiquitously changing nature of climate. However, strictly speaking, the statement is not wrong. A briefer summary was provided by Tony Blair: “The overwhelming view of experts is that climate change, to a greater or lesser extent, is man-made, and, without action, will get worse.”

Of course, this statement is too brief to actually mean much, but, given that climate change is always occurring, it is implausible to argue that all change is for the worse. Certainly, North America and northern Europe are much more pleasant without 2 km of ice cover.

How have such anodyne statements become the mantra for alarmism? Let us break up these points of agreement so as to be able to better examine this question. Let us also begin introducing all-important numbers into the claims.

1. The global mean surface temperature is always changing. Over the past 60 years, it has both decreased and increased. For the past century, it has probably increased by about 0.6 ±0.15 degrees Centigrade (C). That is to say, we have had some global mean warming.

2. CO2 is a greenhouse gas and its increase should contribute to warming. It is, in fact, increasing, and a doubling would increase the greenhouse effect (mainly due to water vapor and clouds) by about 2%.

3. There is good evidence that man has been responsible for the recent increase in CO2, though climate itself (as well as other natural phenomena) can also cause changes in CO2. Let us refer to the above as the basic agreement. Consensus generally refers to these three relatively trivial points.

[…]

10. CONCLUSION AND SUMMARY

So where does all this leave us? First, I would emphasize that the basic agreement frequently described as representing scientific unanimity concerning global warming is entirely consistent with there being virtually no problem at all. Indeed, the observations most simply suggest that the sensitivity of the real climate is much less than found in models whose sensitivity depends on processes which are clearly misrepresented (through both ignorance and computational limitations). Attempts to assess climate sensitivity by direct observation of cloud processes, and other means, which avoid dependence on models, support the conclusion that the sensitivity is low. More precisely, what is known points to the conclusion that a doubling of CO2 would lead to about 0.5C warming or less, and a quadrupling (should it ever occur) to no more than about 1C. Neither would constitute a particular societal challenge. Nor would such (or even greater) warming likely be associated with discernibly more storminess, a greater range of extremes, etc.

Second, a significant part of the scientific community appears committed to the maintenance of the notion that alarm may be warranted. Alarm is felt to be essential to the maintenance of funding. The argument is no longer over whether the models are correct (they are not), but rather whether their results are at all possible. Alas, it is impossible to prove something is impossible.

As you can see, the global warming issue parts company with normative science at a pretty early stage. A very good indicator of this disconnect is the fact that there is widespread and even rigorous scientific agreement that complete adherence to the Kyoto Agreement would have no discernible impact on climate. This clearly is of no importance to the thousands of negotiators, diplomats, regulators, general purpose bureaucrats and advocates attached to this issue.

At the heart of this issue there is one last matter: namely, the misuse of language. George Orwell wrote that language “becomes ugly and inaccurate because our thoughts are foolish, but the slovenliness of our language makes it easier for us to have foolish thoughts.” There can be little doubt that the language used to convey alarm has been sloppy at best. Unfortunately, much of the sloppiness seems to be intentional.

A question rarely asked, but nonetheless important, is whether the promotion of alarmism is really good for science. The situation may not be so remote from the impact of Lysenkoism on Soviet genetics. However, personally, I think the future will view the response of contemporary society to ‘global warming’ as simply another example of the appropriateness of the fable of the Emperor’s New Clothes. For the sake of the science, I hope that future arrives soon.

This may not be the final answer, but it looks like mid-1970’s for USA’s NOAA – see below.

Where did the data come from that modelers used to force their climate computer models to show cooling from ~1940 to 1975, in order to history-match? Did they invent the data (in order to force the history-match)?

“Aerosol measurements began at the GMD baseline observatories in the mid-1970’s as part of the Geophysical Monitoring for Climate Change (GMCC) program. Since the inception of the program, scientific understanding of the behavior of atmospheric aerosols has improved considerably. One lesson learned is that human activities primarily influence aerosols on regional/continental scales rather than global scales. The goals of this regional-scale monitoring program are to characterize means, variability, and trends of climate-forcing properties of different types of aerosols, and to understand the factors that control these properties. GMD’s measurements also provide ground-truth for satellite measurements and global models, as well as key aerosol parameters for global-scale models.”

Measurements of aerosols did not begin in the 1970s. There were measurements before then, but not so well organized. However, there were a number of pyrheliometric measurements made and it is possible to extract aerosol information from them by the method described in: Hoyt, D. V., 1979. The apparent atmospheric transmission using the pyrheliometric ratioing techniques. Appl. Optics, 18, 2530-2531.

The pyrheliometric ratioing technique is very insensitive to any changes in calibration of the instruments and very sensitive to aerosol changes.

In none of these studies were any long-term trends found in aerosols, although volcanic events show up quite clearly. There are other studies from Belgium, Ireland, and Hawaii that reach the same conclusions. It is significant that Davos shows no trend, whereas the IPCC models show one in the area where the greatest changes in aerosols were occurring.

There are earlier aerosol studies by Hand and others in Monthly Weather Review going back to the 1880s, and these studies also show no trends.

So when McRae (#321) says: “I suspect that both the climate computer models and the input assumptions are not only inadequate, but in some cases key data is completely fabricated – for example, the alleged aerosol data that forces models to show cooling from ~1940 to ~1975. Isn’t it true that there was little or no quality aerosol data collected during 1940-1975, and the modelers simply invented data to force their models to history-match; then they claimed that their models actually reproduced past climate change quite well; and then they claimed they could therefore understand climate systems well enough to confidently predict future catastrophic warming?”, he is close to the truth.

Re post #160 and Lee’s allegation that Wegman wrote an “absurd attack piece”:

Did anyone else hear during the July 19/06 Whitfield hearing, the admission by Dr. Wegman that he had voted for Al Gore in the last federal election?

Wegman is clearly a person of intelligence and integrity (notwithstanding his voting record), who handled himself very well under the sometimes highly manipulative, leading questions posed by committee reps.

I was less impressed with the grandfatherly Dr. North, who repeatedly allowed himself to be manipulated and led by the Democrats on the committee. North is apparently a strong advocate of AGW/Kyoto, and a poor choice to head the recent US-NAS committee, imo.

Can you please briefly describe the pyrheliometric technique, and how the historic data samples are obtained?

It is very interesting that you say:
“In none of these studies were any long-term trends found in aerosols, although volcanic events show up quite clearly. There are other studies from Belgium, Ireland, and Hawaii that reach the same conclusions. It is significant that Davos shows no trend, whereas the IPCC models show one in the area where the greatest changes in aerosols were occurring.

There are earlier aerosol studies by Hand and others in Monthly Weather Review going back to the 1880s, and these studies also show no trends.”

Pyrheliometric measurements at Davos, Switzerland from 1909 to 1979 are used to reconstruct the time history of atmospheric transmission. Measurements were numerous enough to allow yearly and seasonal values of atmospheric transmission to be determined. Other than the eruptions of Katmai in 1912 and Agung in 1963, there are no significant long-term changes in atmospheric transmission observed at this central European site. The implications of this result upon those theories of climatic change which depend upon changes in atmospheric transmission are briefly considered.

The whole “aerosol” issue suspiciously popped up out of nowhere when the climate-science clique had to come up with an explanation for the 1940-1975 global drop in temperature (don’t consider solar — that can’t be regulated. Hey, let’s try sulfur emissions, that can be regulated). The clique talks about how “the MWP was just a localized event”, and then says sulfate emissions from very localized areas (mostly the eastern US & Europe) cooled the whole globe significantly?? Even though those sources have since cut emissions somewhat, China, India, SE Asia, etc., have increased emissions since 1975, especially recently. Are China, SE Asia, etc., cooling significantly (or to any degree) now? Don’t think so.

If CO2 is discredited as a significant greenhouse gas, ozone will be the next villain (think EPA).

John, it was discussed here awhile back that if you assume the “increase in growth” curve for increased CO2 is a logarithmic response, high-altitude trees are lower down the curve due to lower CO2 partial pressure & respond more quickly than low-altitude trees to ambient increases in CO2. It may have something to do with what some observe (S_M, I believe) — that many or most of the classical hockeystick proxies come from high altitudes.

Many or most of the proxy studies don’t even adequately document elevations/locations of the treering proxies.
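The partial-pressure argument above can be sketched numerically. This is a minimal illustration of the commenter's reasoning under an assumed logarithmic growth model (growth = k·ln(pCO2), with k set to 1); the function name and the ~0.65 pressure factor for roughly 3500 m elevation are my illustrative assumptions, not values from any proxy study.

```python
def marginal_growth_sensitivity(p_co2):
    """Marginal growth response per unit of CO2 partial pressure,
    under an assumed logarithmic model growth = k * ln(pCO2) with k = 1.
    The derivative d(ln p)/dp = 1/p is larger where pCO2 is smaller."""
    return 1.0 / p_co2

# Sea-level pCO2 proportional to ~380 (units arbitrary) vs. a high-altitude
# site where total pressure, and hence CO2 partial pressure, is ~65% of
# the sea-level value.
sea_level = marginal_growth_sensitivity(380.0)
high_altitude = marginal_growth_sensitivity(0.65 * 380.0)

# The high-altitude tree sits lower on the log curve, so its marginal
# response to a unit change in pCO2 is larger.
print(high_altitude > sea_level)  # True
```

Note the caveat: under a strictly logarithmic model, an equal *fractional* rise in CO2 gives the same growth increment at every altitude (ln(aC) - ln(C) = ln(a)); the altitude effect sketched here concerns sensitivity per unit *absolute* change in partial pressure.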

“Are you the same D.V. Hoyt who wrote the three referenced papers?” Yes.

“Can you please briefly describe the pyrheliometric technique, and how the historic data samples are obtained?”
The technique uses pyrheliometers to look at the sun on clear days. Measurements are made at air mass 5, 4, 3, and 2. The ratios 4/5, 3/4, and 2/3 are found and averaged. The number gives a relative measure of atmospheric transmission and is insensitive to water vapor amount, ozone, solar extraterrestrial irradiance changes, etc. It is also insensitive to any changes in the calibration of the instruments. The ratioing minimizes the spurious responses leaving only the responses to aerosols.
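The calibration-insensitivity Hoyt describes can be demonstrated with a toy calculation. This is a sketch assuming a simple Beer-Lambert atmosphere with a single aerosol optical depth; the function names and the arbitrary calibration factor are mine, not from the 1979 paper.

```python
import math

def direct_irradiance(air_mass, tau, calibration=1.0, i0=1361.0):
    """Direct-beam signal from a hypothetical pyrheliometer, assuming
    Beer-Lambert extinction: signal = calibration * I0 * exp(-tau * m)."""
    return calibration * i0 * math.exp(-tau * air_mass)

def ratio_index(tau, calibration=1.0):
    """Average of the air-mass ratios 4/5, 3/4, and 2/3 described above.
    Each ratio equals exp(tau), so the index tracks aerosol optical depth
    while the calibration factor and I0 cancel out."""
    pairs = [(4, 5), (3, 4), (2, 3)]
    ratios = [direct_irradiance(a, tau, calibration) /
              direct_irradiance(b, tau, calibration) for a, b in pairs]
    return sum(ratios) / len(ratios)

# A drifted calibration (0.8 instead of 1.0) gives the identical index,
# which is why the technique is insensitive to instrument calibration.
print(ratio_index(0.10, calibration=1.0))  # exp(0.10), about 1.105
print(ratio_index(0.10, calibration=0.8))  # same value
```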

I have data for about 30 locations worldwide going back to the turn of the century. Preliminary analysis shows no trend anywhere, except maybe Japan. There is no funding to do complete checks.

Re #334 Once the policy people have their envelope-free hockey stick, there’s no need for hard science any more. “Science is so costly, so confusing … and sooooo boring. Decision-making would be so much easier if we just sidelined them.”

By air mass do you mean different angles so that the column mass of air through which the sun is shining has those relative masses? And what exact frequencies does a pyrheliometer measure? Or is that what it does? BTW, I realize that I could undoubtedly google for the answers, but sometimes it’s better to have the answer here from someone who knows the answer intimately so that it’s on record and to save all future readers here from having to do so.

“By air mass do you mean different angles so that the column mass of air through which the sun is shining has those relative masses?”
Yes, fixed elevation angles, or fixed zenith angles, depending on which way you prefer to measure.

“And what exact frequencies does a pyrheliometer measure?”
All the radiation from 0.3 to 2.5 microns where the quartz glass cuts off the solar radiation. This is about 98-99% of the total radiation.
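For readers wanting to connect air mass to viewing geometry: in the simple plane-parallel approximation, relative air mass is the secant of the solar zenith angle. The sketch below inverts that to find the zenith angles corresponding to air masses 2 through 5; real observing programs apply refraction and curvature corrections near the horizon, which are ignored here.

```python
import math

def zenith_for_airmass(m):
    """Solar zenith angle (degrees) that gives relative air mass m,
    using the plane-parallel approximation m = sec(z) = 1/cos(z)."""
    return math.degrees(math.acos(1.0 / m))

# Air mass 2 corresponds to the sun 60 degrees from the zenith;
# air mass 5 puts it about 78.5 degrees from the zenith (low in the sky).
for m in (2, 3, 4, 5):
    print(m, round(zenith_for_airmass(m), 1))  # 60.0, 70.5, 75.5, 78.5
```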

Thanks again to Douglas Hoyt for his valuable comments. I hope he will stay on this site and continue his contributions.

It is regrettable and indeed reprehensible that atmospheric aerosol data has not been fully analyzed, when billions have been spent elsewhere on bogus “climate research”, much of it little more than alarmist propaganda – intended to raise the level of fear rather than help understand this complex issue.

For example, I recently came across an article (no doubt well-funded) that claimed that poison ivy was growing faster and more virulent because of increased atmospheric CO2 levels. It is well-established that increased atmospheric CO2 is a very effective fertilizer of most/all plants, but apparently poison ivy is fundable, but similarly increased yields of wheat, corn and soybeans are of less interest. I have also seen numerous studies by biologists and geographers which assume an alarming level of warming (e.g. greater than 4-5 degrees C) and then predict the resulting reduction or extinction of a plant or animal species within a particular geographic area – these well-funded studies are, in general, just more examples of “garbage in, garbage out”.

Conclusions (Primary, subject to revision):

A. The climate computer models that claim history-matching, including the 1940-1975 cooling period, used fabricated aerosol data and are therefore rejected as unsound.

B. Adequate research funding should immediately be allocated to analyze the aerosol data, as far back as such data is available.

C. History-matching of climate computer models should be re-run using the actual aerosol data and the results compared to the previous runs using the fabricated data.

Predictions (Fearless) :

1. Using actual rather than fabricated aerosol data, properly history-matched climate computer models will illustrate that more than 80% of the current warming trend is due to natural factors such as solar irradiance, and less than 20% is due to human-made causes. Using such corrected models, projections of future warming due to human activity (assuming a doubling of atmospheric CO2 to 560 ppm) will be less than 0.3 degrees C.

2. Natural solar-driven cooling, which will begin prior to ~2020 during Solar Cycle 25, will overwhelm the current warming trend.

“John, it was discussed here awhile back that if you assume the “increase in growth” curve for increased CO2 is a logarithmic response, high-altitude trees are lower down the curve due to lower CO2 partial pressure & respond more quickly than low-altitude trees to ambient increases in CO2. It may have something to do with what some observe (S_M, I believe) — that many or most of the classical hockeystick proxies come from high altitudes.

Many or most of the proxy studies don’t even adequately document elevations/locations of the treering proxies.
”

Thanks for the CO2 relation model. It is very unfortunate that most proxy studies don’t include elevations and locations. It is also unfortunate that they don’t include other factors such as typical rainfall and cloud cover. Speaking of trees, I am wondering about the whole growth cycle of a tree. Clearly trees tall enough to reach the light should respond similarly with age, but I wonder how shorter trees respond. Clearly, until they reach a certain height they cannot be a climate indicator.

It is well-established that increased atmospheric CO2 is a very effective fertilizer of most/all plants, but apparently poison ivy is fundable

I doubt they would have gotten funded if the FACE experiments were based on the premise of studying poison ivy response to CO2. I believe the goal of this aspect of the study was to measure ALL plants’ responses (since they were measuring neighbouring plants, the marginal cost of the additional measurements was incredibly low), and it was just that the most surprising outcome was the QUALITY of the response from poison ivy. It didn’t just grow more, I believe it also contained new chemical constituents never before observed.

The FACE experiments are not just about the effects of CO2 enrichment. They are used as a platform by other researchers, such as plant physiologists, to figure out the underlying basics of things like how plants work.

You’re going to see all kinds of seemingly quirky results coming out of the FACE studies because of the fact that they are so interdisciplinary. Nature LOVES to publish those kinds of “gee-whiz” papers that make the rest of us sometimes wonder where the academic world’s priorities are.

#342 I find it odd the praise Nature gets here. I have previously read physics newsgroups and found the opinion of Nature as a reputable journal to be pretty low. My uninformed impression of Nature is that it is more like a popular science magazine or a newspaper than a journal. I am sure a lot of the research is high quality, but I think the focus is more on what more people will read than on what contributes the most to science.

It wasn’t praise. It was a statement of fact that it is the journal with the highest “impact factor” in terms of the size of audience your paper will reach. I did not mean to imply that it was authoritative — just that it is widely considered the most prestigious journal for ecosystem scientists, and many other domains. Newsgroups and blogs yield biased samples, so your impression is not surprising. The academics who are most critical of Nature are likely the ones who have tried and failed to get published there.