The decline in the quality and accuracy of climate change coverage over the years is quite a paradox. Surely, now that this issue has been in the public sphere for over twenty years, journalists and media outlets should be able to get it right. You would expect that their reporting would get better over time, not worse. That’s not so, says Dr. Santer.

“One would hope that in journalism it was similar [to science]”, he continues, “that in the midst of complex issues there would be some attempt to really get to the bottom of them. I’ve seen little of that search for understanding in the journalism on climate change.”

Coverage of ClimateGate, the scandal that wasn’t, gets Santer particularly riled up. He describes it as “reflexive, knee-jerk, reactive, not thoughtful, and rather asymmetric too: devoting a lot of publicity to the stolen emails without really trying to understand context or trying to understand issues.”

As if it wasn’t enough for the media to treat information vital to our future so lightly, they have also helped to spread unfounded accusations of fraud against climate researchers. Scientists are people just like anyone else, and should not be subject to such harassment. “These attacks on people like Phil Jones,” Dr. Santer agrees, “had tremendous personal cost. He was nearly driven to suicide by the hatred that he encountered.”

Indeed, Dr. Phil Jones, the director of CRU – the British research group that had its security system hacked and its private correspondence stolen – suffered from depression and suicidal ideation due to the barrage of hate mail and death threats he received following the media’s hostile coverage of the incident.

Who goes into scientific research expecting death threats? “[Jones] has done more than almost anyone in the world to improve our knowledge of observed changes in the temperature of planet Earth,” says Santer. “He was not deserving of this kind of treatment.

“So much attention was devoted to some incautious phrases in these emails, rather than to ask, ‘What kind of pressure has this guy been labouring under and operating under for years now? What sort of systematic attack by Freedom of Information Act has he been trying to deal with?’

“Was Phil Jones angry and frustrated? You bet.”

Another long-standing aspect of climate change journalism that puzzles Dr. Santer is artificial balance – when neutrality is prized above all else, even above objectivity and truth. Sometimes the two sides of an issue, especially one of a scientific nature, aren’t equal, and shouldn’t be treated as such. Doing so, says Santer, “reinforces in [people’s] minds the opinion that the science is not settled, that experts are split 50-50 on human effects on climate, and that’s fundamentally wrong. That’s not the way things are. We have a few vocal individuals, who, for whatever reason, have very powerful voices in the media, and that have received attention out of all proportion to the scientific quality of their work.

“These fringe voices now have megaphones,” he continues, “and have means of amplifying their voices and trumpeting shoddy, incorrect science. We’ve seen the rise of the blogs, we’ve seen the rise of these “independent public auditors” who believe that they have carte blanche to investigate anyone who produces results they don’t agree with, and if that individual doesn’t comply with their every request, they indulge in this persecution campaign on their blogs and make your life very uncomfortable. I’ve had direct personal experience with that.

“The irony is that at a time when the public, more than ever, needs sound information on the science of climate change, needs plain English accounts of what we know and what we don’t know, there’s this cacophony, there’s this huge sea of noise – and, unfortunately, the people who shout loudest and contribute to this sea of noise are those who are often least informed.”

So where do we go from here? How do we repair public understanding of a scientific issue that many perceive as a purely political one? How will the media move past polarized reporting that misses the mark more often than not? Dr. Santer offers his two cents.

“I think that the media have to decide, ultimately, whether their goal is making money and satisfying their shareholders, or whether it’s reporting in the public interest, on issues that are of overwhelming importance to our generation and to future generations.

“I would argue that climate change is one of those issues, and the media have a civic responsibility to get it right, to get the reporting right, to get the science right, to devote resources to these issues… and they’re failing. They’re not living up to that responsibility.

“I don’t see an easy way of changing it; I do think that something has to change.”

One strategy could be to build the dwindling pool of science journalists back up. Santer stresses the importance of having such specialized reporters, rather than sending out general reporters to cover complex scientific issues. “Just like you can’t build a computer model of the climate system overnight from scratch, you can’t create a science reporter overnight from scratch either,” he says. “That familiarity with the issues and with the people, and with the right questions to ask. That takes time.”

Our future hangs on information and understanding, as it has ever since our species gained the ability to destroy what supports us. The only thing that can save us from ourselves is ourselves. “If people are to do the right thing about climate change,” says Santer, “then they need good information, not wishful thinking, not disinformation.

“The sad thing is that many folks don’t want to know about the science at all. They just want to have business as usual and really not consider even the possibility that we might be changing the climate of planet Earth, that they might be culpable in that, and that they might need to think about the future.

“Lots of folks really don’t want to be confronted by the future,” he concludes. “It’s scary.”

104 Responses

Excellent interview. I agree with Dr. Santer – the media is the biggest reason for the proliferation of global warming misinformation and denial, but I don’t know how you solve the problem. Like any other business, media outlets are in it for the profit. More readers and better ratings mean higher profits, and sensationalism/controversy means better ratings. And if global warming is a huge scam, or if there’s a big argument among climate scientists, that creates the controversy that gets the ratings they want.

So we also need the public to demand accurate reporting rather than infotainment. But I don’t know how you make that happen.

Ben Santer’s perspective and comments are interesting and scary. I can’t claim to have had anything like the horrible experiences that he reports, and nothing like Phil Jones (suicidal! geez. Poor fellow.) – my role in this is small and on the margins. But I have had some hate mail in response to letters and opinion pieces on climate change or public talks I have given, and a blogger chased after my research on climates of the past, seeking access to our data and offering unspecified criticisms of our statistics, for what purpose I can only imagine. It is intimidation, nothing less. They are bullies.

The hate that talking about AGW seems to generate from some of those that reject a human fingerprint, or just plain reject that climate is changing (these are the worst) baffles me. I just don’t understand it. But I offer this observation; as someone who teaches evolution I occasionally attract hate-mail on that topic (“You will burn in hell”; “You’re doing the devil’s bidding”, stuff like that). It seems to me there is a parallel psychology at work, but I hasten to note that these are not necessarily the same people or even people who share the same beliefs. But what do they share? They aren’t really interested in the science, just defending their world-view.

I also think, as Ben notes in the interview, that the rise of blogs, and the anonymity they offer, feeds the vehemence of opinion on the internet. There are few, if any, repercussions when no-one knows your name. The shock-jocks on TV, radio and internet sites cater to that audience. For this reason I use my real name on ClimateSight. I have nothing to hide – I am a climate scientist and that is public knowledge. I can understand why some posters here may choose to use a screen tag. But I note that some of those most likely to have their post removed for breach of ClimateSight’s Comment Policy do not. Why is that?

Kudos to getting this interview and doing such a wonderful job. Keep your heads up, everybody, because there are quite a few people working on getting the right message out. Scientists are getting energized to do more public outreach so it will get better.

Having worked in journalism, I think it’s less devotion to corporate profits and more that reporters want to avoid getting yelled at by the same folk who harass the scientists. Actually, reporters on any beat often want to cover their asses by being able to say, “Well, I was just reporting what he/she said” when someone on any side of any issue calls to complain. If they pointed out that what Person A said is untrue, they might get phone calls accusing them of bias. This laziness or unwillingness to defend informing the public is just more tragic when it comes to this issue.

If you want to do something about it, I can provide one small avenue: I started a Climate Rapid Response group on Facebook: http://www.facebook.com/group.php?gid=351728604633
I hope to get members to write letters to the editor but also to reporters to help them understand the importance of what they write. Most of them aren’t evil; they just don’t quite get it, and we need to be the ones they’re afraid to offend by not printing the truth. We need to make those phone calls. We don’t need to be bullies; we just need to get active. I hope to also get folks to start posting when online stories about climate go up so we can be the first few comments below the story.

For the general quality of science reporting, I’d like to see a standard science section on television news services. It needn’t be as long as the financial or sport reports. And it certainly shouldn’t be the usual diet of spectacular/ astounding/ groundbreaking reports of medical or astronomy topics.

A daily dose – such-and-such a report came out today; check out this cute, or huge, or ugly, or tiny animal or plant; a graph of something; a technology update – just like the stock market or currency reports. And climate issues would get standard coverage, not the “here’s a report and here’s someone who doesn’t agree with it” rubbish we’re constantly served up.

I like to think we agree that climate change science has become the most important subject on the global agenda. Whichever side you fall on, sceptic or ahem, climate hawk, the repercussions are immense. Such a pivotal subject should be managed by the giants of the scientific world, the brightest and the best.

So why does it come as a surprise, that their quiet, genteel little scientific backwater should suddenly become a shark infested maelstrom? Public scrutiny and anger was inevitable. Climate science should have pre-empted the hostile attention by ensuring that everything was as watertight as possible. They should have been scrupulous that no exaggeration was published in their name. Why didn’t they do that? Or worse, why do they think what we have now is good enough?

The media don’t have to do anything to help the situation. They latched onto AGW because it presents catastrophe and their business is mayhem. They are as happy to rip climate scientists to pieces as they are to make them celebrities. Whatever sells.

Irrespective of the truth about CO2 and temperature, the furore will only get worse in the near future and it has little to do with the nefarious activities of oil or coal companies. The costs of CO2 reduction are only just beginning to bite, and individuals are barely starting to grasp what it means to them personally. Few people, believer or disinterested bystander alike, have begun to change their lives to reflect the sorts of changes wide-scale reduction would demand. I’d guess that even climate scientists are yet to imagine a world with less than a quarter of the energy they currently use. Sceptics may be trail blazers in giving climate scientists grief, but it will seem like a walk in the park when the rest of the public turn their attention on them.

So please climate science, for everyone’s sake, get your house in order. Introduce the sort of checks and balances that are required for highly hazardous industries. Document everything as if you were expecting to be in court explaining it tomorrow and under no circumstances let advocacy tempt you to claim more than you can prove. In the long run it will serve you better than futilely trying to avoid the hostile attentions of an ever growing opposition.

Public scrutiny and anger was inevitable. Climate science should have pre-empted the hostile attention by ensuring that everything was as watertight as possible.

That’s bullcrap, and I suspect you know it, given that you try so hard to represent your opinion as the ‘public will’.

The ‘skeptic’ ‘arguments’ against climate science which you are spreading keep shifting: it’s either ‘The models don’t fit the observations!’ or ‘The models fit the observations, but they’ve been rigged!’ or ‘The models fit the observations, the models haven’t been rigged, but the observations are fake!’ or ‘The models fit the observations, the models haven’t been rigged, the observations are fake, but … the US Constitution!’ Ad infinitum.

Frank, you are wrong; Tiny Co2 is correct. When the taxpayer is confronted with bills he cannot pay for an energy increase designed to make it prohibitive, you will have to defend your science in the most open and truthful way. The consequences of this will come home to us all in the long run. Why not be as open and up front as you can now, for any obtuseness will be ruthlessly pulled apart when the money worries bite?

Who is saying that scientists are not already open and truthful enough already? With the exception of CRU, which was a small group with no administrative staff, I can think of nowhere that is overprotective of raw data. NASA, for example, goes above and beyond – they publish all their data online as well as every line of code. Also, why do you refer to it as Frank’s science? What does Frank have to do with any of it? Even if he was a researcher (which he isn’t, to my knowledge – correct me if I’m wrong, Frank), the entire field would not belong to him alone. -Kate

As Kate says, climate science already has been defended and presented openly and truthfully. Anyone can read the peer-reviewed research anytime they would like. Most of it is available for free and easily accessible on sites like Google Scholar.

I’m not sure where the statement “when the tax payer is confronted with bills he cannot pay for an energy increase designed to make it prohibitive” comes from. Every independent economic assessment of the proposed (failed) climate legislation in the USA found it would cost an average of approximately 75 cents per person per week, and less for lower income households, who might even have come out ahead. Of course now that the EPA is forced to regulate greenhouse gas emissions instead, that might change, but it still shouldn’t increase energy bills dramatically.

But regardless, climate science has been presented openly, truthfully, and transparently. Anybody who wants to learn about it has many avenues available to do so.

Frank, I don’t have to represent myself as the voice of public will, I just have to observe. In any area you care to mention the public are backing away from what it takes to reduce CO2. There’s a good saying ‘watch what they do, not what they say’.

This is not about the science, this is about the reaction to what the science demands of us. At the moment, most people react to climate change as an abstract concept. They believe or disbelieve with no attention to the detail. Once you start to demand money and reduced lifestyles, they very quickly get interested. If there are holes in the science they’ll want to know why. If they’re not satisfied with the answers, they’ll rebel against the cuts and against the science. Sceptics don’t drive doubt; they just put a voice to it.

What won’t impress the public is the claim that scientists shouldn’t have to put up with that level of scrutiny and interrogation. Being a scientist isn’t a free pass to credibility. That doesn’t mean they have to put up with abuse but it does mean they have to respond to the questions, yes, even the silly ones.

You think the science is all it should be? Fine, you are at liberty to change your lifestyle to fit the worst case. Just don’t expect the rest of the world to follow.

Is it just me, or is it bizarre that there are several ‘skeptics’ commenting on this interview saying basically the same thing about lack of transparency and costs, even after being informed that there is no lack of transparency, and that costs associated with proposed solutions are minimal? Almost as if they’re being instructed to comment here and told what to say.

Actually, it reminds me of a story I was reading about complaints to National Public Radio after they fired Juan Williams. NPR stations would get numerous callers complaining about the firing using the same key words like “totalitarianism”, and they would give hints that they weren’t actual NPR listeners by saying things like “I will no longer watch your station”.

The way I see it, the problems introduced by the media are simply symptomatic of the troubles we face. The reason we’ve got into the mess we’re in is due to this incessant focus on growth – and this applies to the media just as much as it does to any other business. It’s this insane* fixation on continual growth that’s the main driver for all the bad stuff we’re doing to the planet.

Cue the flak from the free-market fundamentalists.

* No, I don’t think that’s going too far. Ask me again whether I still think our way of life was insane in a century’s time. I bet you’ll agree with me then that it was. Errr… is.

I know nowt about science but I do know when you want to find out about things then you have got to study it ruthlessly and make sure everything you discover is recorded and commented, and if you use software then you’ve got to use comments so people can see what you’ve done. Otherwise it’s all bollocks and who can piss highest up the wall.

With the exception of CRU … I can think of nowhere that is overprotective of raw data.

The complaints about the CRU withholding data were generally levied at data that the CRU legally could not release, as they didn’t own it. I wouldn’t call respecting intellectual property overprotective. If they’d released it, you know the inactivists would have tried to spin it as a violation of copyright or something.

@Mac:
The name of an act is often more political spin than the act itself (the most obvious recent example is the Patriot Act, but it’s not a uniquely Republican ideal: every politician does this to some extent).
A more relevant case-in-point: The Data Quality Act. It has a good name. In a nutshell, it says that all research that receives federal funding must disclose all of its data under FOIA. Sounds like it was written to promote scientific integrity, doesn’t it?
Sadly, it’s far more of a chilling effect. See, this does not apply to private research – big surprise, since the DQA was written by a lobbyist. Imagine you’re an industry that works in an area with little to no regulation, and evidence is building up that your product is dangerous and needs regulation. You’re free to contest this *as soon as the study comes out* (or earlier if you hear about it, say by having a lawyer read over federal research grants). You’re free to spin the data through any form of statistics you want to produce an “opposite” or “unclear” result. And you’re completely protected from any public inquiry into your data or methods. The net result is that regulation – which is already slow due to extreme lobbying requiring 110% proof before regulating anything – is delayed or defeated, and free science is restricted.

Classic example of how FOIA can be abused. Another example is discussed at Kate’s earlier link – since each FOIA request requires a fixed amount of time under consideration before being rejected (even if it’s nearly identical to another rejected request), and of course time is required to retrieve data if the request is successful, it’s pretty easy to coordinate a large number of FOIA requests to essentially act as a Denial of Service Attack. (The comment thread on that discussion provides evidence that this was exactly what was going on, by the way. Start here and see if there’s another explanation.) It’s also worth noting that DoS attacks in their traditional internet-styled form are a vector for cyber-terrorism.

@Shub_Niggurath: I would also like to see some citations from David Greenwood for his parallels between evolution science and climate science.
Reading comprehension fail. He wasn’t comparing the sciences, merely the reactions to the science. Evolution, as it stands, is a unifying mechanism behind biology. Climate change is a necessary consequence of physical laws. See the difference?

That said, you’re right that it’s uncited. However, it’s kind of unnecessary, as it’s common knowledge that evolution instructors often get letters of protest from creationists. This was one of the key points behind the landmark Kitzmiller v. Dover Area School District court case, which is even more interesting because the antiscience movement was dressed up as science. (If you’ve never heard of this, a good intro overview was actually put together by NOVA here.)

“If you are bringing bad news, don’t come knocking on this door for any reason. You got that, sweetheart?”

Now let’s get real for a minute. The average American consumes more fossil fuel than the average citizen of any other country. We don’t consume most of that fuel because we want to, but because we have to. Experts say that we must cut fuel consumption by 85%. But if we did that, most of us couldn’t heat or light our homes, many of us couldn’t get to work, farmers couldn’t raise crops, produce couldn’t get to the stores, schools couldn’t bus our children, we couldn’t drive them to activities, we couldn’t shop or visit friends or travel, we couldn’t … The list goes on and on and on.

Reducing fuel consumption by 85% is impossible. Experts say that we have to cut back, but they don’t tell us how. They warn us of future human suffering if CO2 is allowed to exceed 500+ ppmv and methane (CH4) is allowed to reach 6 ppmv. They say that disaster could strike in fifty years or a hundred years or they just aren’t sure when it will happen. But they know it will come. Most of us realize that we will probably not live to see the disaster, but we fear that our children and grandchildren and their children and grandchildren will. Most of us probably worry a lot about how bad it will be for them and what they will think of us for doing nothing. But what can we do?

We are living in the present, and we have to function and survive now; we can’t be constantly worrying about how people will survive fifty or a hundred years from now. To do so would make us ill. So our only recourse is to put those bad thoughts out of our mind. It is likely that many more of us than will admit believe that if 97% of scientists say increasing levels of CO2 and CH4 are manmade and could be very dangerous, then increasing levels of CO2 and CH4 are manmade and could be very dangerous.

But our lives must go on; we can’t constantly live in the shadow of fear. In the 1950s during the height of the Duck and Cover Cold War, annihilation by nuclear weapons was a constant threat and fear. But we couldn’t do anything about it, so most of us gradually drove those terrifying thoughts out of our consciousness. In time we forgot about mushroom clouds or imagined that the missiles and bombs didn’t exist. And whenever someone reminded us that the evil missiles were still pointing at us, we would get depressed or really mad. We didn’t want those awful images in our heads; we didn’t want those missiles and mushroom clouds to exist.

So here we go again, a few do-gooders are getting pleasure out of constantly reminding us that rising levels of CO2 and CH4 are going to destroy the world and everyone in it, and we can’t do anything about it. They want to keep us living in constant fear of future events over which we have no control. So when newspapers and TVs and magazines and radios and skeptics and deniers and … tell us that Global Warming and Climate Change are hoaxes or bad science or conspiracies or commie plots or … it makes us feel much better, because then we don’t feel so guilty for putting those bad, bad, bad thoughts and worries and fears out of our mind. We must do that for the sake of our sanity.

So, “If you are bringing bad news, don’t come knocking on this door for any reason. You got that, sweetheart?”

George, nobody is saying we need to cut fuel consumption 85% by tomorrow. It will be a slow, gradual process over a number of decades. It’s entirely within the realm of possibility – transition to electric vehicles and toward renewable energy in the power grid.

It will take time and it won’t be easy, but the longer we wait to start, the harder it will be.

I see my earlier comment has been deleted as inflammatory. I raised a scientific issue (a scientific publication showing that the climate model created by Dr. Santer and colleagues produces increases in mid- and lower-troposphere temperatures that are more than double observations). How is this scientific issue inflammatory?

The part when you started insulting Dr. Santer and questioning his integrity without evidence was what got your comment deleted. That shouldn’t come as a surprise. I do not intend to allow my site to become a dumping ground for baseless accusations. -Kate

Dana says “Is it just me, or is it bizarre that there are several ‘skeptics’ commenting on this interview saying basically the same thing about lack of transparency and costs”

There’s no conspiracy: when the article has been flagged at a sceptic site, people turn up – but why can’t what they say be their own opinion?

Santer complains about journalists failing to get to the bottom of complex issues since Climategate; I’d say they’ve never got to the bottom of complex issues – the only thing that has changed is that they’re no longer giving climate science a free pass. Do you really doubt that the public will ask harder (or at least more frequent) questions when they realise what it’s going to cost? If they feel that their concerns are not being addressed, do you doubt they will become angry and some of them will voice that in unpleasant ways? Transparency is a matter of opinion but the issue of costs is self-evident.

Frank, I don’t have to represent myself as the voice of public will […] Once you start to demand money and reduced lifestyles, they very quickly get interested.

And guess what, you just represented your own personal opinion as the ‘public will’ — again. Given that there are lots of people who are now already jobless even without any climate regulation, I’m sure they’ll happily sacrifice the prospect of a career in alternative energy in order to preserve their current lifestyle. If they still have a lifestyle, that is.

If there are holes in the science they’ll [i.e. I’ll] want to know why. If they’re [i.e. I’m] not satisfied with the answers, they’ll [i.e. I’ll] rebel against the cuts and against the science. Sceptics don’t drive doubt they just put a voice to it.

There are no holes in the science — no major showstoppers, at any rate. However, there are plenty of huge holes in so-called ‘sceptic’ arguments — unless you think “man-made global warming is a hoax because it violates the US Constitution!” is a logical argument.

If there are major ‘doubts’ or ‘questions’ about climate science, it’s not because the science is unsound; it’s because the questions are stupid.

I recommend visiting http://www.communitysolution.org/ and reading Pat Murphy’s books PLAN C and Spinning Our Wheels to understand the monumental challenges facing the world. We have pushed CO2 and CH4 into uncharted territory, and scientists can only guess what the consequences will be. Pushing up CO2 concentration is like closing a locked door behind you with no key; there is no turning back. Reducing fuel consumption by 85% is considered necessary to stop CO2 growth and hopefully bring CO2 concentration back down to 350 ppmv, considered by some to be a safe upper limit. http://www.350.org/about/science

Do we have decades? It would be wonderful if scientists could discover an anti-CO2 elixir, but that isn’t going to happen. Forests are our best anti-CO2 remedy, but we have been destroying them at unprecedented rates. Now some hopefuls are advocating geoengineering http://en.wikipedia.org/wiki/Geoengineering as the miracle savior. That is also wishful star-wars type thinking, with almost no chance of accomplishing anything useful.

Do we have decades? To answer that question we must look at the monumental amount of fossil fuel the world is consuming today and the alarming rate world consumption is increasing, not decreasing. Green solutions are so miniscule they aren’t even the tip of the iceberg. Green solutions are many decades away, if ever. Cutting fossil fuel consumption on a monumental scale is the only solution, and the world hasn’t cut anything yet, as consumption is still growing.

Do we have decades? Scientists say that if we cut back consumption by 85% now, we may be able to halt CO2 buildup in the atmosphere and reduce it from 390 ppmv to 350 ppmv. But what if we continue on our destructive path for decades more and push CO2 to 450 ppmv or 500 ppmv or 550 ppmv? As we push levels higher and higher, the CO2 door continually locks behind us, making reductions to 350 ppmv ever more distant or impossible.

Do we have decades? No! We have to start now.

It doesn’t mean we have to finish now. That’s the distinction Dana was making, I believe. -Kate

Dear Kate
I am a researcher starting off in an area of science that has been around for a while. We’ve seen our fair share of nastiness relating to research, public policy implementation and ethical questions. Indeed we have about x times more of that stuff going on, in the area where I work, for a far longer period of time.

I read about your obvious achievements to date here and there, after I visited your site and I felt a twinge of happiness. I am not making this up. Sure, I am skeptical, I am a denier, in denial etc, etc but I was glad that you’ve put in so much work at this stage and I like your style of thinking.

But, … and I am pretty sure of this – in contrast to the comment-deleting apparently closed-minded phenotype that is evident on your blog – you come across as an open-minded person in your writings and videos.

Give the skeptics and deniers a chance to say what they have to say. It might change your perspective. Why delete anything but outright abuse?

You can read more about my decision to moderate comments here. I acknowledge that my moderation is imperfect – sometimes I over-moderate and sometimes I under-moderate. On the whole, though, I try to prevent the discussion from spiraling into a food fight. This means flagging not just downright abuse but also personal attacks. When a respected scientist takes the trouble to be interviewed for some nondescript blog such as this, I will not subsequently provide it as a dumping ground for people accusing him of fraud and bad faith. Similarly, I feel that comments accusing me of censorship and attempted control of information do not add anything to the discussion – which is why I deleted your last paragraph.

As Tamino says, “Freedom of speech is the freedom to say what you want. It is not the freedom to say it in someone else’s house.” I am in no way preventing people from getting their opinions out. You can all go to your own online spaces and discuss how much you hate my comment policy, for all I care. But why should I be expected to publish them?

In my opinion, the decision to moderate comments has little to do with being open- or closed-minded. I am still reading and thinking about everything you all say. I feel it is necessary, however, to take actions to keep the discussion civil and scientifically valid. Thanks for your input. -Kate

Sorry, Frank, you can’t really put me in the position of rebelling against the costs of CO2 reduction, since my CO2 footprint is about 3 tonnes. There are few CO2 reduction techniques I don’t follow. I had CFL lightbulbs about 15 years ago. I recycle, buy second-hand, grow my own food, and car-share in a vehicle that gets over 50 mpg. I haven’t flown in over 3 years. OMG, I’m a greenie ;-) And yet, I still question the quality of the science.

Did I say the public had heard of Climategate? Why should they? People are only just beginning to ask questions. Green taxes, even here in the UK, are relatively new and hidden amongst other forms of taxation. Resistance so far is minimal. It won’t stay that way.

On second thoughts, Frank, I believe you: the public are sceptical because they’re stupid and always will be. So give up now, Frank, because if that’s true then you’ll never convince them until it’s too late.

In the above talk, Heinberg mentions Transition Towns (http://www.transitiontowns.org/, http://www.transitionnetwork.org/) as one way to ease the pain of adjusting to reductions in fossil fuel consumption. Also see Murphy’s and Heinberg’s books for other viable ways to reduce fuel consumption related to Peak Oil and climate change.

@Frank Bi:
It wasn’t hosted on WordPress, it was hosted by a friend of mine, and his server is down pending… something. I haven’t had anything new to say for a long time (at least not climate-relevant; there were a few bike tour posts and a couple guest essays from a friend of mine). When that changes I’ll hound him, assuming it’s still down.

…I suppose I could try to write a bit about terror management theory. There are some fun findings there that might be relevant to messaging…
To put it briefly, TMT proposes that large elements of each person’s worldview (particularly religious worldviews, though secular culture and ideology also count) are constructed as means of allowing us to offset the unsettling emotions that stir when we’re aware that we will, one day, die. This may explain the prevalence of the life-after-death/transcendence message in many religions, and the need to cling to national symbols and titles like “patriot”. While I generally question the generalizability of this theory, it does put forward two interesting hypotheses which have empirical support – and one of them, mortality salience, suggests that if cultural worldview provides a “security blanket” against fear of death, then reminding people of their inevitable demise will increase the need for and importance of cultural symbols.
Translation: “We’re all gonna die!” messaging, i.e. climate catastrophe, may lead people to double down on their ideology, which – if that ideology is something opposed to climate action – works at cross-purposes to the original message.
I’ll be happy to back this up with citations and more detail in a later essay, which I’ll start as soon as I can.

It reminds me somewhat of this earlier study, which doesn’t even reference TMT but makes a similar suggestion.

“When a respected scientist takes the trouble to be interviewed for some nondescript blog such as this, I will not subsequently provide it as a dumping ground for people accusing him of fraud and bad faith.”

Kate, I agree. Critics should attack the science, not the scientist.

Are you sure you won’t reconsider and run for president in 2012? You would be a much better president than any potential candidate I’ve seen so far.

Seeing as I’m not a US citizen, that wouldn’t be possible. Besides, I’d much rather be a scientist. -Kate

“To put it briefly, TMT [terror management theory] proposes that large elements of each person’s worldview (particularly religious worldviews, though secular culture and ideology also count) are constructed as means of allowing us to offset the unsettling emotions that stir when we’re aware that we will, one day, die.”

I wonder if anyone has done a survey of when a child or adolescent first contemplates dying. At some point each person comes to know that they will die and that everyone must die, but I believe the real issue is death’s degree of imminence.

The human mind is strange indeed; it represents a continuity of self that must go on forever. The passage of time for self exists only when awake, dreaming, or conscious. We know that every atom of our body will return to Earth, but self cannot end. The need for self to continue after death is what makes religious indoctrination so successful.

The theory of evolution would imply that every living creature would have inherited a natural survival instinct, otherwise it would have become extinct. What motivates prey to flee from predators or fight for survival? Is fight or flight the fear of death or just an inherited survival instinct? How long can a creature or human endure a constant state of fight or flight before succumbing to exhaustion? What happens when the threat is perpetually imminent but never materializes, like the mushroom cloud or peak oil or climate change?

“…devoting a lot of publicity to the stolen emails without really trying to understand context or trying to understand issues.”
This passage in your interesting article is attributed to Dr Santer. I would be interested to know what grounds Dr Santer has for stating that the e-mails were ‘stolen’. I ask this in the knowledge that there is an ongoing police investigation into the release of the e-mails, which has, as far as I am aware (please correct me if I am wrong on this), not yet come to any conclusions as to the origin or circumstances of the release.

We discussed this in an earlier comment thread. Frank provides some links, if you scroll down to his comment. In short, it’s pretty obvious that the emails were stolen (as opposed to hacked by some kid who likes to hack things, or leaked by a member of CRU), and there is information as to where the attacks originated, but it’s pretty limited. -Kate

We explain panel and multivariate regressions for comparing trends in climate data sets. They impose minimal restrictions on the covariance matrix and can embed multiple linear comparisons, which is a convenience in applied work. We present applications comparing post-1979 modeled and observed temperature trends in the tropical lower- and mid-troposphere. Results are sensitive to the sample length. In data spanning 1979–1999, observed trends are not significantly different from zero or from model projections. In data spanning 1979–2009, the observed trends are significant in some cases but tend to differ significantly from modeled trends.
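For readers unfamiliar with the mechanics behind that abstract, the core of any such comparison is fitting a least-squares trend to each series and then asking whether the slopes differ by more than their sampling uncertainty. The sketch below is a toy illustration using synthetic placeholder data, not the paper’s panel estimator, which additionally handles autocorrelation and cross-series covariance.

```python
# Toy illustration (not the paper's panel estimator): fit ordinary
# least-squares trends to two series and compare the slopes.
# The series below are synthetic placeholders, not real temperature data.
import numpy as np

def ols_trend(y):
    """Least-squares slope of y against a 0..n-1 time index."""
    t = np.arange(len(y))
    slope, _intercept = np.polyfit(t, y, 1)  # highest-degree coeff first
    return slope

rng = np.random.default_rng(0)
t = np.arange(360)                                # e.g. 30 years of months
obs = 0.010 * t + rng.normal(0.0, 0.2, t.size)    # hypothetical "observed"
mod = 0.020 * t + rng.normal(0.0, 0.2, t.size)    # hypothetical "modeled"

print(ols_trend(obs), ols_trend(mod))
```

A real comparison must also attach standard errors that are robust to autocorrelation in the residuals; the point of the abstract above is precisely that the significance verdict can change with the sample length and the covariance assumptions.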

Did I say the public had heard of Climategate? Why should they? People are only just beginning to ask questions. […] Resistance so far is minimal. It won’t stay that way.

Earlier you said:

So why does it come as a surprise that [climate scientists’] quiet, genteel little scientific backwater should suddenly become a shark-infested maelstrom? Public scrutiny and anger was inevitable.

So you were saying that there’s ‘public anger’ over climate science due to “Climategate”, but when I point out that only 9% of Americans have even heard of “Climategate”, you now say that there’s no ‘public anger’ yet but there will be anger ‘soon’.

You know what I think? I think you’re just making stuff up.

* * *

Hengist McStone:

I am genuinely puzzled by the arguments put forward by TinyCO2 above. Since when have public perception, communication and the media been a part of the scientific method?

Well said, Sir.

* * *

Brian D:

Thanks for the information. The stuff about Terror Management Theory seems intriguing — I hope you get to write about it soon. :)

A more relevant case-in-point: The Data Quality Act. It has a good name. In a nutshell, it says that all research that receives federal funding must disclose all of its data under FOIA. Sounds like it was written to promote scientific integrity, doesn’t it?

Sadly, it’s far more of a chilling effect. See, this does not apply to private research – big surprise, since the DQA was written by a lobbyist. Imagine you’re an industry that works in an area with little to no regulation, and evidence is building up that your product is dangerous and needs regulation. You’re free to contest this *as soon as the study comes out* (or earlier if you hear about it, say by having a lawyer read over federal research grants). You’re free to spin the data through any form of statistics you want to produce an “opposite” or “unclear” result. And you’re completely protected from any public inquiry into your data or methods. The net result is that regulation – which is already slow due to extreme lobbying requiring 110% proof before regulating anything – is delayed or defeated, and free science is restricted.

Reminds me of a passage from one of Gavin Schmidt’s private letters:

The contrarians have found that there is actually no limit to what you can ask people for (raw data, intermediate steps, additional calculations, sensitivity calculations, all the code, a workable version of the code on any platform, etc) and like Somali pirates they have found that once someone has paid up, they can always shake them down again.

You have to believe that Gavin was being sincere when he wrote that — it wasn’t meant to be public. And he’s right: there are always more details that people could demand — particularly if they have no understanding of a subject.

Climatology is a branch of physics — and the people who are experts in the field have spent a decade or more becoming experts in it. They can generally assume a great deal of background knowledge on the part of their audiences, as these audiences consist primarily of other experts. They don’t have to repeat the obvious every time. They don’t have to walk people through the basics of calculus, thermodynamics or quantum mechanics, or through how data is acquired, or how the methods have been tested by years of use and experience.

As I have stated elsewhere:

The demand for “100% openness and transparency” sounds reasonable on the face of it — that is until one attempts to spell out precisely what it means and how it is to be applied. In “Knowledge and Decisions”, Thomas Sowell elaborates upon an insight by Friedrich A. von Hayek using the example of a restaurant.
*
In a well-run restaurant which serves the finest cuisine, many of the details that go into how the restaurant is run and how the food is prepared are left unarticulated, or are stated or thought of only in fairly vague terms. A pinch of this, a handful of that, and how to tell when a customer is about to request something before they actually consciously signal it. How the floor is to be mopped, when to vacuum, all of the expectations of the owner, the manager, the cook and the maître d’.

People learn first by watching others then by doing, much like one learns how to play a piano, ride a bicycle or type on a keyboard. (These latter examples were used by Ayn Rand to illustrate essentially the same principle.) There isn’t any one time or place in which all of the details that are involved get spelled out.
*
And what does it take to spell them out specifically? Setting aside the high-end restaurant for the time being at least, you might want to consider McDonald’s. They actually try to spell out in precise detail how a given McDonald’s restaurant is to be run. The result? Volumes. (I know because I’ve worked behind the counter and have seen the books for myself.)

It is about the same size as a set of encyclopedias. In this way they are able to standardize practices throughout the entire restaurant chain. They are able to achieve uniformity — and replicability — defining methods, procedures and specifications in such a way that no detail depends upon the tacit expertise of any member of the team in order to duplicate what is done the same way everywhere else.
*
But that generally isn’t how it works. Not at most restaurants, nor in most human endeavors. There are those who have expertise — who have automatized a body of knowledge that was only in part articulated while they were in school acquiring their degrees, and the rest of which was acquired in a form scattered throughout the experience of the earlier part of their career.

Experts have a body of knowledge — tacit and automatized — which distinguishes them from the novice and the man on the street. It is what results in a division of cognitive labor — and distinguishes between the expert and the novice in any line of endeavor.

*
Now admittedly, for an individual scientist with respect to a single study, what he does may be less complicated than what it takes to run a restaurant in its every detail. However, what stands behind and is presupposed by the study in terms of its literature, physical principles, methods, significance and implications in all likelihood dwarfs that which serves as the foundation for the good majority of human endeavors.

And the scientist doesn’t stop there, honing what he has done to the point that anyone can easily replicate it. He assumes a certain level of expertise on the part of those who read the study, that much of what he has done can be taken for granted as that which is generally assumed, tacit and forms the background as part of the foundation for much of the activity in his line of study. And then after the writeup he moves on to the next study.

What you are dealing with in science is essentially the same sort of division of cognitive labor that Hayek found in the free market. You would think libertarians and “Texas Tea” partiers might realize as much: that those who demand “complete openness and transparency” are in essence making the same mistake as those who would have a centralized committee substitute its decisions for those of each and every participant in a decentralized free-market economy.

But how exactly did we find out about Gavin’s letter? It was one of the many emails that were stolen in “Climategate.”

I have always thought that “Climategate” was appropriately named — as there are very real parallels to be drawn in terms of the theft of material intended for use in an extensive smear campaign…

But there are also some essential differences. One of the biggest is that in the case of Watergate the material that was being stolen was meant to be used against political opponents in a smear campaign. In Climategate the opponents weren’t political opponents but scientists engaged in science.

And in Watergate those who broke in were caught and tried as criminals. In Climategate those who broke in weren’t caught, the press had little or no interest in them or their motives, and the press instead largely became deeply involved in the smear campaign, often going to great lengths to paint the victims as the criminals.

Timothy, you touch on some important points and an important debate about what should reasonably be expected to be supplied to replicate research.

Please consider the American Physical Society position. Their online statement reads (in part):

“The success and credibility of science are anchored in the willingness of scientists to:

1. Expose their ideas and results to independent testing and replication by others. This requires the open exchange of data, procedures and materials.”

What responsibility do you think this entails? No one has asked Dr. Santer to instruct them on first principles of physics, but rather he was asked to supply intermediate information to be able to analyze and replicate his results. [citations needed – journals and governments require intermediate calculations]

The minds you have to change are not those of “deniers” (ugly term) but the editors of Science, Nature, and Geophysical Research Letters, and the policy makers at the National Science Foundation.

Your familiarity with the writings of Sowell and Hayek on dispersed knowledge is laudable, and you correctly point out their important insights. You are also correct that knowledge is not free, and that it takes considerable effort to be able to understand available information. (I suggest you read the important book by Terence Kealey, “Sex, Science and Profits”, which covers exactly this point.)

Timothy, I like your McDonald’s example. What do the manuals have to say about flies in the workplace? I haven’t worked in a fast food place, but I’ve known someone who worked at a 24-hour joint. Around 3 a.m. or so, when the place is empty, it can become quite boring.

So what do you do to break the boredom? How about getting rid of those pesky flies? As far as this guy knew, the manuals didn’t say anything about flyswatters, or if they did, he hadn’t read them anyway or seen a flyswatter lying around. His solution, chasing those critters down and clapping them between two halves of hamburger buns, with meat and condiments still intact, not only did the trick but was quite challenging and entertaining as well. He hadn’t heard of any customer who had complained about tainted hamburgers, and he suspected that they were happier to enter a fly-free restaurant for their McMorning Muffins.

Now I could go on about sanitation in the workplace, but deniers don’t seem overly concerned about what goes on behind those swinging doors anyway. What is it they say – “What you can’t see won’t hurt you”? That is, not until your stomach has been pumped out or you’ve sat upon the throne for a couple of days.

Wired.com: People who are coming at the issue in good faith, you mean. What’s their response?

Norgaard: Climate change is disturbing. It’s something we don’t want to think about. So what we do in our everyday lives is create a world where it’s not there, and keep it distant.

For relatively privileged people like myself, we don’t have to see the impact in everyday life. I can read about different flood regimes in Bangladesh, or people in the Maldives losing their islands to sea level rise, or highways in Alaska that are altered as permafrost changes. But that’s not my life. We have a vast capacity for this.

This is starting to make sense; since I eat only at five star restaurants, a fly in somebody’s burger is not my life.

Wired.com: How is this bubble maintained?

Norgaard: In order to have a positive sense of self-identity and get through the day, we’re constantly being selective about what we think about and pay attention to. To create a sense of a good, safe world for ourselves, we screen out all kinds of information, from where food comes from to how our clothes are made. When we talk with our friends, we talk about something pleasant.

Then I suppose “fly in a burger” would be off limits at the dinner table.

Wired.com: How does this translate into skepticism about climate change?

Norgaard: It’s a paradox. Awareness has increased. There’s been a lot more information available. This is much more in our face. And this is where the psychological defense mechanisms are relevant, especially when coupled with the fact that other people, as we’ve lately seen with the e-mail attacks, are systematically trying to create the sense that there’s doubt.

If I don’t want to believe that climate change is real – that my lifestyle and high carbon emissions are causing devastation – then it’s convenient to say that it isn’t.

That’s exactly what my friends and relatives said when I told them about fly enriched burgers. “You’re making that stuff up.” Now, as far as I know, my comments haven’t stopped a single one of them from their daily trek to their favorite fast-food hangouts. “And even if it’s true, there’s nothing we can do about it anyway. So shut up!”

I told them that they could open up their burgers first and check for flies. But have you ever tried finding a fly amongst all that goo they put on those things? Then I said maybe a magnifying glass and some toothpicks? In the end, all my solutions were too complex and didn’t seem do-able, so they just said, “why bother?”

“Kids, wanna go get a Smiley Meal?”

“Yeah!!!”

Wired.com: Is that what this comes down to — not wanting to confront our own roles?

Norgaard: I think so. And the reason is that we don’t have a clear sense of what we can do. Any community organizer knows that if you want people to respond to something, you need to tell them what to do, and make it seem do-able. Stanford University psychologist Jon Krosnick has studied this, and showed that people stop paying attention to climate change when they realize there’s no easy solution. People judge as serious only those problems for which actions can be taken.

Another factor is that we no longer have a sense of permanence. Another psychologist, Robert Lifton, wrote about what the existence of atomic bombs did to our psyche. There was a sense that the world could end at any moment.

Global warming is the same in that it threatens the survival of our species. Psychologists tell us that it’s very important to have a sense of the continuity of life. That’s why we invest in big monuments and want our work to stand after we die and have our family name go on.

That sense of continuity is being ruptured. But climate change has an added aspect that is very important. The scientists who built nuclear bombs felt guilt about what they did. Now the guilt is real for the broader public.

What responsibility do you think this entails? No one has asked Dr. Santer to instruct them on first principles of physics, but rather he was asked to supply intermediate information to be able to analyze and replicate his results. [citations needed – journals and governments require intermediate calculations]

The minds you have to change are not those of “deniers” (ugly term) but the editors of Science, Nature, and Geophysical Research Letters, and the policy makers at the National Science Foundation.

Here’s a quick question: have you verified for yourself that the work of McKitrick, McIntyre, and Herman (2010), which you just promoted, is actually good? I tried to download the paper and the supplementary material, only to be faced with a paywall. What did you do?

I’m all for openness of data and methods, and I’m sure the editors of GRL, the APS and so on are all for it too. What I’m against is anti-regulationists using a demand for ‘openness’ as an excuse to shut down scientific work they don’t like, while not demanding the same degree of openness for ‘scientific work’ they do like. As Brian D points out in his comments on the Data ‘Quality’ Act, here the demand for ‘openness’ is clearly just a pretext.

Norgaard may have gotten a few things mixed up about deniers in my previous post.

She states that: Robert Lifton, wrote about what the existence of atomic bombs did to our psyche. There was a sense that the world could end at any moment.

Then she states in the next paragraph: Global warming is the same in that it threatens the survival of our species.

I disagree that the atomic bomb and global warming had/have the same psychological effects on the populace. Back in the 1950s and 1960s, the atomic bomb was In the Now, which means that it could happen to me at any instant. The mushroom clouds that I’d seen on black and white TV and on the movie screen were in my mental vision every waking minute.

On the other hand, global warming is In the Future, which means that it poses no immediate threat to me. Worry about imminent death is a lot different than worry about future death. My reaction to someone pointing a gun at me would be a lot different than my reaction to the knowledge that I or someone else will someday die.

Norgaard also stated that: The scientists who built nuclear bombs felt guilt about what they did. Now the guilt is real for the broader public.

I disagree with this comparison. It is likely that most scientists didn’t feel guilt about their contributions to nuclear weapons until months, years, or decades after they’d witnessed the horrific aftereffects. Ties between the A-bomb and melted bodies were real, shown clearly in thousands of photos. I saw some of those photos before I was ten years old, and those images are seared into my mind.

Just as ties between the “American wars in Iraq and Afghanistan” and the disputed “horrific death toll and suffering in those countries” are not real to most of the American populace, ties between “heating and cooling our houses, owning and driving three vehicles when and wherever we wish, and finding abundant food on the store shelves” and …flood regimes in Bangladesh, or people in the Maldives losing their islands to sea level rise, or highways in Alaska that are altered as permafrost changes… are not real.

Scientists could do nothing once the A-bomb genie had been let out of the bottle, so they had good reason to feel guilt. The stopper is still halfway in the Global Warming genie bottle, so the populace has much less reason to feel guilt now. And I suppose that real guilt will not materialize until after the Global Warming genie has been freed and reports of global weather catastrophes become ten times worse.

Timothy, you touch on some important points and an important debate about what should be reasonably expected to be supplied to replicate research.

Please consider the American Physical Society position. Their online statement reads (in part): …

I take it you are quoting from:

Science extends and enriches our lives, expands our imagination and liberates us from the bonds of ignorance and superstition. The American Physical Society affirms the precepts of modern science that are responsible for its success.

Science is the systematic enterprise of gathering knowledge about the universe and organizing and condensing that knowledge into testable laws and theories.

The success and credibility of science are anchored in the willingness of scientists to:

1. Expose their ideas and results to independent testing and replication by others. This requires the open exchange of data, procedures and materials.
2. Abandon or modify previously accepted conclusions when confronted with more complete or reliable experimental or observational evidence.

Adherence to these principles provides a mechanism for self-correction that is the foundation of the credibility of science.

There is a difference between “replicate” and “duplicate.” Don’t confuse the former with the latter. “Duplicate” means to make an exact copy. Scientists have little interest in making exact copies of earlier research. They are generally interested in performing new research that extends the boundaries of human knowledge, not merely repeating what has already been done before. The level of detail required to replicate a piece of research is not the same level of detail as that which is required to make an exact duplicate of the research.

And this is in no small part what my earlier comment was concerned with: the level of detail. To what extent one would have to articulate the exact data, materials and methods that were involved in arriving at a given result.

You will notice that the statement is a short declaration of principle. It does not spell out the level of detail to be used. Nor does it spell out who should be able to find the information available sufficient to “replicate” the results.

McIntyre is the fellow who was originally criticizing Ben Santer for not making his results “replicable,” and as Santer states:

Our results were published in a peer-reviewed publication (the International Journal of Climatology). These results were fully available for “independent testing and replication by others”. Indeed, I note that David Douglass et al. performed such independent testing and replication in their 2007 International Journal of Climatology paper.

Douglass et al. used the same primary climate model data that we employed. They did what Mr. McIntyre was unwilling to do – they independently calculated estimates of “synthetic” Microwave Sounding Unit (MSU) temperatures from climate model data. The Douglass et al. “synthetic” MSU temperatures are very similar to our own. The scientific differences between the Douglass et al. and Santer et al. results are primarily related to the different statistical tests that the two groups employed in their comparisons of models and observations. Demonstrably, the Douglass et al. statistical test contains several serious flaws, which led them to reach incorrect inferences regarding the level of agreement between modeled and observed temperature trends.

Mr. McIntyre could easily have examined the appropriateness of the Douglass et al. statistical test and our statistical test with randomly-generated data (as we did in our paper). Mr. McIntyre chose not to do that. He preferred to portray himself as a victim of evil Government-funded scientists. A good conspiracy theory always sells well.

McIntyre is, according to the Wall Street Journal, a “semiretired Toronto minerals consultant” who has spent “two years and about $5,000 of his own money trying to double-check the influential graphic” known as the “hockey stick” that illustrates a reconstruction of average surface temperatures in the Northern hemisphere, created by University of Virginia climatologist Michael Mann. He does not have an advanced degree and has published two articles in the journal Energy and Environment, which has become a venue for skeptics and is not carried in the ISI listing of peer-reviewed journals.

McIntyre was also exposed for having unreported ties to CGX Energy, Inc., an oil and gas exploration company, which listed McIntyre as a “strategic advisor.” He is the former President of Dumont Nickel Inc., and was President of Northwest Exploration Company Limited, the predecessor company to CGX Energy Inc. As of 2003, he was the strategic advisor of CGX Energy Inc. He has also been a policy analyst at both the governments of Ontario and of Canada.

At the 2007 Fall meeting of the American Geophysical Union, McIntyre gave a joint presentation on hurricanes and climate change with Roger Pielke Jr.

McIntyre has no expertise in climatology. The closest he’s come is that joint presentation at the AGU with Pielke Jr., which was given years after he requested Mann’s data.
*
A somewhat more extreme example of replication as opposed to duplication than Santer and Douglass is that involving the Atmospheric InfraRed Sounder aboard NASA’s Aqua satellite, which is able to “see” concentrations of carbon dioxide at different levels in the atmosphere. Now the sounder is a very complicated instrument, capable of seeing things at 2,378 channels. It also has a great deal of internal diagnostics — including the use of a tungsten lamp to check the operation of its sensors. Its results also get tested against those of other instruments onboard the same satellite and against the results of other satellites. And as I note elsewhere, they even get tested against airborne flask measurements:

Satellite measurements differ considerably in how they are done from the more traditional airborne flask measurements. Among other things, a flask is only able to sample air from a specific location, but the analysis can be done in the lab; whereas a satellite has to peer through the upper layers of the atmosphere if it is to view a specific layer deep within, but it is capable of simultaneously observing a large part of the Earth. Nevertheless, these satellite measurements show close agreement with the airborne measurements, better than 2 ppm. (ibid., Chahine, et al. (2008))

The minds you have to change are not those of “deniers” (ugly term) but the editors of Science, Nature, Geophysical Research Letters, and the policy makers at the National Science Foundation.

As far as I know Santer has no problem getting published in the journals. Me? I am a former philosophy major. I don’t try to get published in the journals on climatology.
*
Geoff wrote:

Your familiarity with the writings of Sowell and Hayek on dispersed knowledge is laudable, and you correctly point out their important insights. You are also correct that knowledge is not free, and that it takes considerable effort to be able to understand available information.

I was a member of the Objectivist movement for thirteen years. I was the ringleader of “The Objectivist Ring” and the webmaster for “The Free Radical,” a semi-monthly glossy libertarian magazine out of New Zealand. Hayek? His work is voluminous — so I can’t say that I have read more than a fragment of it. But I am familiar with his ideas, I have read a few books by Thomas Sowell, and I read Ludwig von Mises’ “Human Action” cover to cover.
*
Now you wrote, “… not those of ‘deniers’ (ugly term)…”

I didn’t use the term “deniers” above, so why did you bring it up? I do, however, believe it is accurate. What climatologists are dealing with is an orchestrated disinformation campaign — and many of the organizations involved in this campaign were likewise involved in the campaign to attack the science showing the connection between tobacco and a large variety of health problems, including lung cancer. I have a list of thirty-two of these organizations here:

These are industry-funded organizations that have been involved in a variety of disinformation campaigns. And they often use the ideology of libertarianism. As I state, “Given the ideology of libertarianism, whenever industry has faced scientific facts that it deemed inconvenient it has had a willing army of believers who are ready to regard such facts as nonexistent or at worst irrelevant.”

Libertarianism is a form of ideological extremism — something that I am somewhat familiar with, its views and psychology. Inside and out. Literally.

“Next time I see Pat Michaels at a scientific meeting, I’ll be tempted to beat
the crap out of him. Very tempted.”

“I looked at some of the stuff on the Climate Audit web site. I’d really
like to talk to a few of these “Auditors” in a dark alley.”

True, and not commendable, but it does not shed any light on Dr. Santer’s actions or intentions. Take this angry statement in context – as Peter Sinclair says, “A privately expressed desire to punch someone at some unspecified future time, in an email that the supposed target was not even aware of? That’s not a threat.” See the first two minutes of this video. Highly recommended. -Kate

On November 02, 2010, Americans may be signing death warrants for the world’s grandchildren.

This sobering article, “Death Denial,” written by George Monbiot and published in the Guardian on 2nd November 2009, paints a dismal picture for the future of mankind.

There is no point in denying it: we’re losing. Climate change denial is spreading like a contagious disease. It exists in a sphere which cannot be reached by evidence or reasoned argument; any attempt to draw attention to scientific findings is greeted with furious invective. This sphere is expanding with astonishing speed.

Just when draconian actions must be taken to avert a potential world disaster, the United States may be on the verge of “outlawing” climate change. Monbiot’s article explains why the Republican Party and the Tea Party, comprised of a much higher percentage of people over 60 than in other parties and likely to take over the House and Senate, may kill any proposed actions that would help mitigate future climate changes.

Such beliefs seem to be strongly influenced by age. The Pew report found that people over 65 are much more likely than the rest of the population to deny that there is solid evidence that the earth is warming, that it’s caused by humans or that it’s a serious problem(9). This chimes with my own experience. Almost all my fiercest arguments over climate change, both in print and in person, have been with people in their 60s or 70s. Why might this be?

See Monbiot’s article for several answers to his question and references therein.

Monbiot is a fantastic journalist. Thanks for pointing out this article.

I will point you to the open thread for this month, which is an ideal place to share articles you find interesting. -Kate

There are a couple of different issues you may have conflated. It is an interesting topic of discussion whether science reports of studies funded by government agencies should be given to private companies to sell. You may know this issue is a very heated debate between funding agencies and private scientific publishers. Dr. Santer’s studies (mainly funded by the U.S. Department of Energy) were published in Science and the International Journal of Climatology. To obtain these papers you would have to pay for them (as I have done). But this is not the point at issue; the point is rather the requirement of providing documentation and datasets to ensure accurate replication.

As to the McKitrick et al. study, I referred you to the journal’s abstract. You can buy the “in press” version there or wait for the final version and purchase that one. However, if you would like to replicate the work, you can find the submitted final version, the code, and the data at http://rossmckitrick.weebly.com/model-testing.html (for free). I have satisfied myself that McKitrick et al. have made a strong case that the models in question appear to significantly overstate warming in the mid and lower troposphere over the period 1979-2009 compared to observations.

If I paraphrase your second paragraph, you seem to be claiming that asking for datasets is only a pretext for attempting to shut down research. It’s not at all clear to me how you come to this conclusion. As far as I can judge, the provision of the requested dataset, which Dr. Santer did eventually provide, required little more from him than a few mouse-clicks (plus some paperwork to get it formally released). He used the same dataset in 2005 and 2008, and presumably would have had it handy immediately after online publication of his 2008 paper (which is when the request was made). I am not aware of any argument by Dr. Santer that this task was difficult. How does this shut down research?

(I’ll discuss the objections Dr. Santer actually made in my reply to Timothy).

1) National Science Foundation – “NSF advocates and encourages open scientific communication. NSF expects significant findings from supported research and educational activities to be promptly submitted for publication with authorship that accurately reflects the contributions of those involved. It expects PIs to share with other researchers, at no more than incremental cost and within a reasonable time, the data, samples, physical collections and other supporting materials created or gathered in the course of the work. It also encourages grantees to share software and inventions, once appropriate protection for them has been secured, and otherwise act to make the innovations they embody widely useful and usable”.
Link http://www.nsf.gov/pubs/gpg/nsf04_23/6.jsp
2) Science Magazine policy -“Data and materials availability: All data necessary to understand, assess, and extend the conclusions of the manuscript must be available to any reader of Science. After publication, all reasonable requests for materials must be fulfilled”. “Climate data. Data should be archived in the NOAA climate repository or other public databases”.
Link – http://www.sciencemag.org/about/authors/prep/gen_info.dtl
3) Nature – “An inherent principle of publication is that others should be able to replicate and build upon the authors’ published claims. Therefore, a condition of publication in a Nature journal is that authors are required to make materials, data and associated protocols promptly available to readers without undue qualifications in material transfer agreements”. “After publication, readers who encounter refusal by the authors to comply with these policies should contact the chief editor of the journal (or the chief biology/chief physical sciences editors in the case of Nature)”.
Link – http://www.nature.com/authors/editorial_policies/availability.html
4) American Geophysical Union (publisher of Geophysical Research Letters and Journal of Geophysical Research)- Referencing and Archiving Data ; “data cited in AGU publications must be permanently archived in a data center or centers that meet the following conditions:
a) are open to scientists throughout the world.
b) are committed to archiving data sets indefinitely.
c) provide services at reasonable costs”.
Link – http://www.agu.org/pubs/authors/policies/data_policy.shtml

These days many of the critical issues in climate science are issues of statistics. Economic journals, which regularly deal with statistics, often require intermediate data sets or code, for example:
A. American Economic Review – “For econometric and simulation papers, the minimum requirement should include the data set(s) and programs used to run the final models, plus a description of how previous intermediate data sets and programs were employed to create the final data set(s). Authors are invited to submit these intermediate data files and programs as an option; if they are not provided, authors must fully cooperate with investigators seeking to conduct a replication who request them”. Link http://www.aeaweb.org/aer/data.php
Another example:
B. Journal of Political Economy (link http://www.journals.uchicago.edu/page/jpe/datapolicy.html )
“For econometric and simulation papers, the minimum requirement should include the data set(s) and programs used to run the final models, plus a description of how previous intermediate data sets and programs were employed to create the final data set(s). Authors are invited to submit these intermediate data files and programs as an option; if they are not provided, authors must fully cooperate with investigators seeking to conduct a replication who request them. The data files and programs can be provided in any format using any statistical package or software, but a Readme PDF file documenting the purpose and format of each file provided, and instructing a user on how replication can be conducted, should also be provided”.

Thanks for your note.
You have given a fuller quote from the APS position, thanks.
I understand the difference between “replicate” and “duplicate.”
You may not be aware that what was requested in this case was a dataset consisting of extractions from climate models which were then subjected to statistical analysis. Since the reason for the inquiry was to perform an alternative statistical analysis, the requestor believed there was no reason to re-build the dataset from scratch, as that process might have introduced unrelated errors. (Link http://climateaudit.org/2008/11/10/santer-refuses-data-request/#comment-167390 )
As far as I am aware, the dataset requested is the one used by Dr. Santer and colleagues for papers in 2005 and 2008. There has never been any suggestion (as far as I know) that providing the dataset took more effort than a few mouse clicks (and some paperwork to clear release). If it only took this level of effort, would you then withdraw your support for withholding the dataset?
Dr. Santer did not cite an excessive workload as a reason when he refused to provide the dataset. He seems to make two arguments:
1. It is possible to create a plausibly similar data set (this is the reason in the quote you provided).
2. Dr. Santer had not completed full scientific analysis of the data sets and they are integral components of both the overall lab’s (PCMDI’s) ongoing research, and of proposals he has submitted to funding agencies.

The premise of the first argument can hardly be denied, since, as Dr. Santer notes, Dr. Douglass et al. (abstract at http://onlinelibrary.wiley.com/doi/10.1002/joc.1651/abstract ) did recreate the dataset in their paper. In his reply to Steve McIntyre, Dr. Santer suggests that Steve is too lazy to do this work. However, Steve has said the reason he was reluctant to prepare the dataset himself was (as mentioned) the potential introduction of errors not relevant to the statistical analysis (see link above). Given Steve’s indefatigable efforts on analysis and blogging, the laziness argument does not seem plausible.
Regarding the second argument, Dr. Santer seems to be asserting some intellectual property right. I don’t want to get into long discussions of IP, but the case can be made that government funded science does not include such rights unless specifically agreed by the funding agency.

At any rate, my point is that these arguments ultimately did not convince the lab or the funding agency (the US Department of Energy in this case), which required Dr. Santer to make the requested data available. I doubt that his argument will prevail in future. You may be aware that the National Science Foundation (sorry for all the US references) has announced a new grant policy, and one of the important changes is strengthened policies for data sharing (Links – story at http://pubs.acs.org/cen/email/html/8839gov2.html ; new proposed NSF policy at http://www.nsf.gov/pubs/policydocs/pappguide/nsf11001/index.jsp )

At the time Dr. Santer advised his colleagues that he planned to release the dataset, one of his colleagues (Dr. Thomas Wigley) commented “This is a good idea. … To have these numbers on line would be of great benefit to the [scientific] community. In other words, although prompted by McIntyre’s request, you will actually be giving something to everyone”.
As noted, Dr. Douglass and colleagues also requested the dataset from Dr. Santer but were refused, so they made their own. Steve McIntyre seems to have requested the dataset independently.

[Citations needed – “Steve McIntyre’s results have been verified by credible sources.” You cited the NAS report (which, although it agreed that some of the methodological criticisms were valid, concluded that MBH98 was generally correct: right answer despite the wrong method). This was definitely not McIntyre’s conclusion. You also cited the Wegman report, which has since faced charges of plagiarism and is awaiting investigation, as well as an informal endorsement by one researcher on Climate Audit. Forgive me, but I don’t see how these sources credibly support your assertion. -Kate]

I think the proposition that government-funded scientists (absent a relevant IP agreement) can withhold their data because they want to work on it further will prove no more tenable in the future than it has in Dr. Santer’s recent case. What matters is not that Dr. Santer failed to convince me with his arguments, but that he has not convinced the decision makers. All major journals support providing data as a condition of publication, and funding agencies are strengthening their data-sharing policies.

Thanks for your reference to Chahine (GRL-2008); it’s a fascinating paper. One quick question – in the photo on your avatar page, the CO2 levels are much higher in the developed northern hemisphere than in the southern hemisphere. We often hear references to CO2 being “well-mixed”. Do you have data or a reference paper on the rate of mixing?
Please forgive me for not commenting in detail on your “orchestrated disinformation” comments. I don’t agree with your analysis, but this comment is too long already. If you think libertarians are so powerful, why is it that no libertarian policies are gaining ground (non-interventionist foreign policy, sound money, freedom of drug choice, freedom of association, no taxes, property rights, free trade, etc)? Politics will take us too far away from climate, but I note that Steve McIntyre has said his politics are left of center (whatever that means).

Geoff, you can keep dumping quotes, or you can answer my simple question:

have you verified for yourself that the work of McKitrick, McIntyre, and Herman (2010), which you just promoted, is actually good? I tried to download the paper and the supplementary material, only to be faced with a paywall. What did you do?

There are a couple of different issues you may have conflated. It is an interesting topic of discussion whether science reports of studies funded by government agencies should be given to private companies to sell.

Wrong, wrong, wrong, wrong, wrong. The issue is why anti-regulationists keep insisting that only publicly-funded scientific work should be held to high standards of ‘openness’, yet privately-funded work (e.g. work by people connected to the oil industry) somehow shouldn’t get the same scrutiny.

What can this be, but a handy excuse by anti-regulationists to shut down scientific work they don’t like?

This is a very simple question, and I’m surprised that you’d misunderstand it.

Geoff:

However, if you would like to replicate the work, you can find the submitted final version, the code, and the data at http://rossmckitrick.weebly.com/model-testing.html (for free). I have satisfied myself that McKitrick et al. have made a strong case that the models in question appear to significantly overstate warming in the mid and lower troposphere over the period 1979-2009 compared to observations.

Interesting, thanks. But my understanding is that models already come with uncertainty bounds, so why do McKitrick et al. need to do further massaging of the models to compare them against observations? Are McKitrick et al. trying to create another ‘model’ from the original model — in which case, they won’t really be comparing the original model to the observations? Since you say you’ve satisfied yourself that their work is good, maybe you can explain to me your understanding of what McKitrick et al. are doing.

There is a difference between “replicate” and “duplicate.” Don’t confuse the former with the latter. “Duplicate” means to make an exact copy. Scientists have little interest in making exact copies of earlier research. They are generally interested in performing new research that extends the boundaries of human knowledge, not merely repeating what has already been done before. The level of detail required to replicate a piece of research is not the same level of detail as that which is required to make an exact duplicate of the research.

And this is in no small part what my earlier comment was concerned with: the level of detail. To what extent would one have to articulate the exact data, materials and methods that were involved in arriving at a given result?

You will notice that the statement is a short declaration of principle. It does not spell out the level of detail to be used. Nor does it spell out who should be able to find the information available sufficient to “replicate” the results.

You write:

1) National Science Foundation – “NSF advocates and encourages open scientific communication. NSF expects significant findings from supported research and educational activities to be promptly submitted for publication with authorship that accurately reflects the contributions of those involved. It expects PIs to share with other researchers, ….

I notice that it says “expects significant findings”. This does not imply all of the material from intermediate steps.

I also notice that it says “share with other researchers”. This does not imply that some Canadian mining executive can encourage his entire readership to request materials simply in order to pester researchers.

“… It also encourages grantees to share software and inventions,…

I notice that it says “encourages”. When they say “encourages” this leaves it up to the authors to determine to what extent it is reasonable to share material and under what conditions.

I note that this says “reasonable requests”. This may very well involve sharing requested material only with people who are qualified to use the material — rather than those who are intent simply upon pestering and wasting a researcher’s time — particularly with a flood of requests where each request takes several hours to fill.

As with East Anglia. Nearly sixty different Freedom of Information Act requests in the span of six days — where all readers of a certain Canadian’s blog were encouraged to duplicate a request letter — only with certain blanks being filled in by the requester, e.g., the five countries whose data was being requested — when the researchers at East Anglia would have to make requests of those nations for permission to share the data.

I hereby make a EIR/FOI request in respect of any confidentiality agreements restricting transmission of CRUTEM data to non-academics involving the following countries: [insert 5 or so countries that are different from ones already requested]…

Between 24 July and 29 July of this year, CRU received 58 freedom of information act requests from McIntyre and people affiliated with Climate Audit. In the past month, the UK Met Office, which receives a cleaned-up version of the raw data from CRU, has received ten requests of its own.

The premise of the first argument can hardly be denied, since, as Dr. Santer notes, Dr. Douglass et al. (abstract at http://onlinelibrary.wiley.com/doi/10.1002/joc.1651/abstract ) did recreate the dataset in their paper. In his reply to Steve McIntyre, Dr. Santer suggests that Steve is too lazy to do this work. However, Steve has said the reason he was reluctant to prepare the dataset himself was (as mentioned) the potential introduction of errors not relevant to the statistical analysis (see link above).

Douglass et al. did in fact introduce errors when performing their own analysis. But imagine for the moment that it had been Santer who had made the errors in his analysis or even his code, and that rather than describing his approach and having Douglass work out the details independently, he had simply handed over the analysis and code. There would have been no double-check performed — particularly when what is being handed over is code.

But when you have to code your own procedures independently of other investigators in order to test their results, and your results differ, then you know enough to investigate the differences.

Furthermore, as noted by Santer in relation to Douglass, McIntyre’s skepticism always seems to be with respect to conclusions that he doesn’t like.

Please see:

Mr. McIntyre could easily have examined the appropriateness of the Douglass et al. statistical test and our statistical test with randomly-generated data (as we did in our paper). Mr. McIntyre chose not to do that. He preferred to portray himself as a victim of evil Government-funded scientists. A good conspiracy theory always sells well.
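The check Santer describes — running a statistical test against randomly-generated data — can be sketched in a few lines. This is my own toy construction, not the actual tests from either paper: both the “model” trends and the “observed” trend are drawn from the same population, so there is no real discrepancy and a sound test should wrongly reject only about 5% of the time at the 2-sigma level. The “Douglass-style” rule compares the observation to the standard error of the model mean alone; the “Santer-style” rule also carries the observation’s own uncertainty (known to be 1 here by construction).

```python
import random
import statistics

def false_rejection_rates(n_models=19, trials=4000, seed=0):
    """Monte Carlo check: draw 'model' trends and one 'observed' trend
    from the SAME population (so there is no real discrepancy), then
    count how often each test wrongly declares a significant difference."""
    rng = random.Random(seed)
    douglass_hits = santer_hits = 0
    for _ in range(trials):
        models = [rng.gauss(0.0, 1.0) for _ in range(n_models)]
        obs = rng.gauss(0.0, 1.0)
        mean = statistics.fmean(models)
        s = statistics.stdev(models)
        # Douglass-style: test obs against the standard error of the
        # multi-model mean alone, ignoring the uncertainty in obs itself.
        if abs(obs - mean) > 2.0 * s / (n_models - 1) ** 0.5:
            douglass_hits += 1
        # Santer-style: carry the obs uncertainty (1.0 by construction)
        # alongside the model-mean uncertainty in the denominator.
        if abs(obs - mean) > 2.0 * (1.0 + s * s / n_models) ** 0.5:
            santer_hits += 1
    return douglass_hits / trials, santer_hits / trials

d_rate, s_rate = false_rejection_rates()
print(d_rate, s_rate)  # the first rate lands far above the nominal ~5%
```

The Douglass-style rule rejects most of the time even though nothing real is being detected, because the standard error of the model mean shrinks as more models are added while the observation’s own uncertainty does not.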

These papers have been reviewed by a group convened by the US National Academy of Sciences, and they did not disagree with any of the major criticisms in the papers.

What the National Academy of Sciences found was that the basic conclusion of Mann, Bradley and Hughes (1998, 1999) — namely, that warming during the late 20th century is unprecedented over the past 1000 years — is well supported by the initial study and subsequent evidence.

Please see:

The basic conclusion of Mann et al. (1998, 1999) was that the late 20th century warmth in the Northern Hemisphere was unprecedented during at least the last 1,000 years. This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature reconstructions and pronounced changes in a variety of local proxy indicators, such as melting on ice caps and the retreat of glaciers around the world, which in many cases appear to be unprecedented during at least the last 2,000 years….

Based on the analyses presented in the original papers by Mann et al. and this newer supporting evidence, the committee finds it plausible that the Northern Hemisphere was warmer during the last few decades of the 20th century than during any comparable period over the preceding millennium…

The substantial uncertainties currently present in the quantitative assessment of large-scale surface temperature changes prior to about A.D. 1600 lower our confidence in this conclusion compared to the high level of confidence we place in the Little Ice Age cooling and 20th century warming. Even less confidence can be placed in the original conclusions by Mann et al. (1999) that “the 1990s are likely the warmest decade, and 1998 the warmest year, in at least a millennium” because the uncertainties inherent in temperature reconstructions for individual years and decades are larger than those for longer time periods and because not all of the available proxies record temperature information on such short timescales.

Now regarding Wegman, there would appear to be a strong basis for claiming that the report itself involved considerable dubious scholarship — for example, in failing to acknowledge the need for a set of rules by which to objectively determine the appropriate number of principal components to retain, an error that was present in the original McIntyre and McKitrick paper.

Please see:

Here Wegman is attempting to claim that Wahl and Ammann acknowledge that the differing numbers of principal components is itself a “change in strategy”. But this is a gross misrepresentation of Wahl and Ammann’s point, which was that an objective criterion is required to determine the number of PCs to be retained and that number will vary from sub-network and period, as well as centering convention. M&M arbitrarily selected only two because that’s what Mann had done at that particular step and network. They failed to implement Mann’s criterion (as noted previously), or indeed any criterion, and thus produced a deeply flawed reconstruction.

Officials at George Mason University confirmed Thursday that they are investigating plagiarism and misconduct charges made against a noted climate science critic.

In 2006, GMU statistics professor, Edward Wegman, spearheaded a Congressional committee report critical of scientists’ reconstructions of past climate conditions — notably the 1999 “hockey stick” paper in Nature, which concluded that the 20th Century was the warmest one in a millennium. A National Research Council report later that year largely validated the 1999 paper’s research, but the “Wegman” report has knocked around in public debate over climate ever since.

GMU spokesman Daniel Walsch confirms that the university, located in Fairfax, Va., is now investigating allegations that the Wegman report was partly plagiarized and contains fabrications[emphasis added]. Last month, a 250-page report on the Deep Climate website written by computer scientist John Mashey of Portola Valley, Calif., raised some of these concerns. Mashey says his analysis shows that 35 of the 91 pages in the 2006 Wegman report are plagiarized (with some of the text taken from a book, Paleoclimatology: Reconstructing Climates of the Quaternary, by Raymond Bradley of the University of Massachusetts) and contain erroneous citations of data, as well.

It appears that not only did 35 different pages in the Wegman Report involve plagiarism, but also the changing of key words that drastically altered the meaning of the text. And much of the material appears to have been taken from a work by Bradley — one of the three authors of MBH 1998 and 1999.

Thanks for your reference to Chahine (GRL-2008); it’s a fascinating paper. One quick question – in the photo on your avatar page, the CO2 levels are much higher in the developed northern hemisphere than in the southern hemisphere. We often hear references to CO2 being “well-mixed”.

“… in the photo on your avatar page, the CO2 levels are much higher in the developed northern hemisphere than in the southern hemisphere…”

Not that great. As I stated in the webpage:

The image is based off of monthly data from July 2003. Deep blue represents about 360 ppm by volume — but you will see that only around Antarctica, cyan is about 375 ppm, yellow 377, and deep red about 380 ppm.

Water vapor falls out as precipitation in the Earth’s atmosphere (e.g., it rains) and it has a short residence time — roughly 10 days. The half-life of methane is in the neighborhood of 10 years. Carbon dioxide? Depending upon the size of the slug that is added to the atmosphere you may easily have as much as 25% remain in the atmosphere after a thousand years.

Consequently water vapor has a shallow atmospheric distribution. If it followed the same rule as the non-precipitable gases its scale height (or “e-folding distance” as it is sometimes called) would be roughly 13 km. As it is we are looking at something more along the lines of 1.5-2 km.

Please see for example:

Unfortunately, it turns out that the above formal expression for the scale height is not correct for water vapor. Given a typical surface temperature (C), that expression would give a scale height of about 13 km. Observationally, the scale height of water vapor in the Earth’s atmosphere is between 1.5 and 2 km (e.g. Ulich 1980). Therefore, the slightly more complicated expression (with scale height and surface temperature explicitly included) must be used. In this memo, a scale height of 1.5 km will be assumed. Since the derived precipitable water is linearly proportional to the assumed scale height, the results can be scaled as desired with little effort.
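The memo’s 13 km figure can be checked directly from the textbook scale-height expression. This is a sketch using assumed round values (the universal gas constant, water vapor’s molar mass of 18 g/mol, and a surface temperature of about 288 K):

```python
# Isothermal scale height H = R*T/(M*g): the altitude over which the
# partial pressure of a non-precipitable gas falls off by a factor of e.
R = 8.314      # J/(mol*K), universal gas constant
g = 9.81       # m/s^2, surface gravity
T = 288.0      # K, typical surface temperature (assumed)
M_h2o = 0.018  # kg/mol, molar mass of water vapor

H = R * T / (M_h2o * g)
print(round(H / 1000, 1))  # ~13.6 km, versus the observed 1.5-2 km
```

Water vapor falls far short of the formula’s answer precisely because it condenses and precipitates out; the non-precipitable gases follow the expression reasonably well.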

In contrast, the distribution of carbon dioxide is much more uniform: it represents a variation of less than 5.6%. And the maximum difference in CO2 concentration between the actual atmosphere and the modelled distribution?

As I stated at the webpage:

… while the variation is slight, AIRS has shown that there is greater variation in CO2 concentration than one would expect simply on the basis of model calculations.

… you are looking at an estimated 374 and actual 381. Less than 2%.
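For what it’s worth, both percentages follow directly from the ppm figures quoted above (a quick arithmetic check, nothing more):

```python
# Spatial spread across the AIRS image: ~360 ppm (over Antarctica)
# up to ~380 ppm (deep red), relative to the low end.
low, high = 360.0, 380.0
spatial_variation = (high - low) / low
print(round(spatial_variation * 100, 1))  # 5.6 (percent)

# Model-vs-observation mismatch: estimated 374 ppm, actual 381 ppm.
modelled, actual = 374.0, 381.0
mismatch = (actual - modelled) / modelled
print(round(mismatch * 100, 1))  # 1.9, i.e. "less than 2%"
```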
*
Geoff wrote:

Do you have data or a reference paper on the rate of mixing?

Not yet. The climate website that I am putting together is quite new — I only started putting it up earlier this week. And at this point the mixing rate is not that high on my list of priorities, insofar as carbon dioxide varies at most by about 5.6%.

There are 5 other organizations that he belongs to that have also received money from Exxon:

The Advancement of Sound Science Coalition
American Spectator Foundation
American Council on Science and Health
CFact
Tech Central Science Foundation

… and then there are the 7 other organizations that he belongs to that are part of the disinformation network but which haven’t received money from Exxon — at least according to Exxon’s tax returns.

Now I took the organizations that Michaels belongs to and expanded the diagram to include all key people. Then I removed all people that weren’t associated with two or more organizations already included on the diagram. On finding the Koch brothers, I included three organizations that each of the two brothers is involved with that weren’t already on the diagram.

With each person and each organization you should be able to bring up more information by hovering your mouse over them and left-clicking items in the shortcut menu.

For example, with Pat Michaels we find:

Dr. Michaels has acknowledged that 20% of his funding comes from fossil fuel sources ( http://www.mtn.org/~nescncl/complaints/determinations/det_118.html ). Known funding includes $49,000 from the German Coal Mining Association, $15,000 from the Edison Electric Institute and $40,000 from Cyprus Minerals Company, an early supporter of People for the West, a “wise use” group. He received $63,000 for research on global climate change from Western Fuels Association, above and beyond the undisclosed amount he is paid for the World Climate Report/Review. According to Harper’s magazine, Michaels has received over $115,000 over the past four years from coal and oil interests.

Likewise, by comparing the list of organizations given on this page (which contains over 150):

… I have found that just three families (the Kochs, Scaifes and Bradley) are responsible for over $250,000,000 worth of funding to the same network.
*
Geoff wrote:

If you think libertarians are so powerful, why is it that no libertarian policies are gaining ground (non-interventionist foreign policy, sound money, freedom of drug choice, freedom of association, no taxes, property rights, free trade, etc)?

“If you think libertarians are so powerful…” I don’t think the libertarians per se are that powerful — except insofar as they affect the Republican Party. But I assume you have heard of the Tea Party movement, haven’t you? It appears to have more or less taken over the Republican Party.

“why is it that no libertarian policies are gaining ground (non-interventionist foreign policy… freedom of association, no taxes, property rights, free trade…)” I have no problems with property rights or even free trade. But I do have a problem with trying to gut all regulation of pollution — whether it’s cancer-causing asbestos insulating schools, lead additives to gasoline, dioxins or what have you.

And no taxes? How on Eaarth do you plan on paying for government? Or is that another one of those things you would privatize, like the state and interstate highway system, public schools, prisons, public parks, the sewage system, garbage collection and so on? What of the police force?

What would a privatized police force and legal system look like? Well I suppose the mafia gives us something to work from for that. Yet cutting and eliminating taxes seems to be the policy that has most taken hold of the Tea Party movement, and by extension the Republican party.

But I am not primarily concerned with libertarianism or the Republican Party as with the assault by industry on science that they find inconvenient — first and foremost that being the science that is our understanding of anthropogenic global warming.

Sorry, but you’re wrong again and seem to be mixing up a few different issues. I’m in favor of journals enforcing (and strengthening) their data sharing policies (or establishing them if they don’t have them in place). These naturally apply to the data for any journal article published, whether by private scientists or public. In the case of the 2008 article by Dr. Santer and colleagues, the journal did not have a data sharing policy established (unlike Science or Nature as mentioned).

Regarding the data sharing of publicly funded science, this policy is adopted by most government funding agencies globally. Freedom of information laws apply to governments, not to private individuals or organizations. Exxon Mobil is not subject to FOI laws, but of course neither is Source Watch.
As a practical matter 95+% of all science related to climate is funded by governments. The reason you don’t hear anyone asking for Exxon Mobil’s data is that they have little or no published research on climate, and no one relies on data or scientific papers from Exxon Mobil to make policy decisions.

Your first claim that journal openness standards should not apply to all scientific articles is not correct. As for the second claim (of wanting to “shut down” certain science) you have not demonstrated any way in which this is supposed to happen. Why would providing data (which even Dr. Wigley supported, and which was determined to be the law by the lawyers at the DOE and Lawrence Livermore Lab) shut down science? The amount being spent annually in the US on climate-related science is [citations needed]. Please show where this “shut down” is occurring?

There may be some practical limitations, but as far as I can tell, the data used by Dr. Santer et al. took only a few mouse clicks to make available. If you are going to claim that providing this data (which again was determined to be required by law) can potentially shut down science, please provide an example (actual or even theoretical).

As to the McKitrick paper, your comments suggest you haven’t read it. Your understanding may increase if you read the paper and look at the data. I strongly suggest you read the papers by Dr. Santer and colleagues of 2005 and 2008, and the paper by Dr. Douglass and colleagues of 2008 as referenced in earlier posts. At any rate, McKitrick et al. did not “create another ‘model’ from the original model”, but used the output of the model provided by Dr. Santer to run a different (and arguably better) statistical test.

Why are these articles important? To understand that you need some context. The issue is that most climate models (and mainstream theory) predict more warming in the mid and lower troposphere than at the earth’s surface as part of the “global warming” process. This has been an issue for more than a decade, since satellite data did not show greater troposphere warming than the surface. In 2006, a group of experts was brought together (by the US government) to review the issues (including Dr. Santer, and satellite experts Dr. John Christy and Dr. Roy Spencer). The problem was outlined as follows:

“The issue of changes at the surface relative to those in the troposphere is important because larger surface warming (at least in the tropics) would be inconsistent with our physical understanding of the climate system, and with the results from climate models. The concept here is referred to as “vertical amplification” (or, for brevity, simply “amplification”): greater changes in the troposphere would mean that changes there are “amplified” relative to those at the surface.
For global averages, observed changes from 1958 through 2004 exhibit amplification: i.e., they show greater warming trends in the troposphere compared with the surface. Since 1979 [i.e., the satellite era], however, the situation is different: most data sets show slightly greater warming at the surface” (Karl et al., 2006, page 3) (see the whole report at http://www.climatescience.gov/Library/sap/sap1-1/finalreport/sap1-1-final-all.pdf , a 9.1MB pdf file).

The Karl report of 2006 concluded that most of the differences could be reconciled, but not the tropical troposphere discrepancy, which was blamed on measurement issues. The 2008 Douglass study came to a different conclusion. This study (done by three university scientists plus a semi-retired geophysicist who is a fellow of the American Physical Society and the American Geophysical Union) determined that the rates of warming in the models (including error bars) were outside the limits of observations (including error bars). The later 2008 paper by Dr. Santer disputed these findings, claimed that the Douglass paper made a statistical error in estimating model mean uncertainty, and proposed a new calculation. Indeed, with this new calculation, the models and observations do overlap for January 1979 through December 1999.
The new McKitrick paper (published in one of the journals of the Royal Meteorological Society of the UK) agrees with Dr. Santer that the Douglass calculation method for model mean uncertainty is not correct, but also does not accept the method proposed by Dr. Santer et al. (For one thing, it has been demonstrated that while the Santer 2008 method does indeed show model and observation overlap for 1979-1999, it does not show overlap if the period is extended to 1979-2008.) They proposed a new method for comparing trends, and conclude that these models exhibit mid and lower troposphere warming that is 2-4 times that of observations over the period 1979-2009.
So that’s where we stand. It is “established” science that “global warming theory” predicts higher mid and lower troposphere warming compared to surface warming. It is a matter of scientific debate whether models and observations overlap. The latest scientific paper (McKitrick et al.) concludes they do not over the period 1979-2009. The ball is in Dr. Santer’s court to agree or demonstrate why not. The key issues are statistical (plus more observations).
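The shape of the statistical dispute can be sketched in a few lines of Python. Everything below is invented for illustration (synthetic series, 0.10 vs 0.25 C/decade trends, a simple difference-of-trends statistic); it is not the actual MSU data or the exact test used by any of these papers, and it ignores the autocorrelation corrections that the papers spend much of their effort on:

```python
import numpy as np

def ols_trend(y):
    """Least-squares trend of a monthly series and the standard error of the slope."""
    t = np.arange(len(y), dtype=float)
    t -= t.mean()
    slope = (t @ (y - y.mean())) / (t @ t)
    resid = (y - y.mean()) - slope * t
    se = np.sqrt((resid @ resid) / (len(y) - 2) / (t @ t))
    return slope, se

rng = np.random.default_rng(0)
months = np.arange(360)  # 30 years of monthly anomalies

# Synthetic "observations": 0.10 C/decade trend plus noise.
obs = (0.10 / 120) * months + rng.normal(0, 0.2, 360)

# Synthetic 20-run "ensemble" with a 0.25 C/decade trend.
run_trends = np.array([ols_trend((0.25 / 120) * months +
                                 rng.normal(0, 0.2, 360))[0]
                       for _ in range(20)])

obs_trend, obs_se = ols_trend(obs)
ens_mean = run_trends.mean()
# Standard error of the ensemble-mean trend shrinks with sqrt(n_runs);
# how to build this term is the crux of the statistical disagreement.
ens_se = run_trends.std(ddof=1) / np.sqrt(len(run_trends))

# Difference-of-trends statistic: large values mean models and
# observations do not overlap under this particular test.
d = (ens_mean - obs_trend) / np.sqrt(ens_se**2 + obs_se**2)
print(f"obs {obs_trend*120:.2f} C/decade, ensemble {ens_mean*120:.2f} C/decade, d = {d:.1f}")
```

Whether the denominator uses the spread of individual run trends or the much smaller standard error of the ensemble mean is precisely the kind of choice that separates the competing tests.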

Hi Timothy,
As I said in my earlier comment, as far as I can tell the amount of “detail” requested (and ultimately provided by Dr. Santer) took only a few mouse clicks. If that is the case, do you support this release?
You are correct that the data sharing policies of the journals cited and NSF are not “water-tight”. That is why I have proposed a blue-ribbon panel be established to review and propose clearer rules for establishing exactly how to define “recorded factual material commonly accepted in the scientific community as necessary to validate research findings”. The econometric journals I cited, and now also many medical journals are requiring intermediate data.
Once again I remind you, it was determined by the lawyers at DOE and LLL that the data requested was required by law to be provided.

Next you have raised the issue of multiple requests for data sent to the Climate Research Unit (CRU) in the UK. If you have studied this matter, you will know that the reason for this “coordinated” effort was that CRU had rejected valid requests for data over the previous few years. As you probably are aware, it has been determined that CRU broke the relevant UK law when they did not provide the requested data (but they cannot be prosecuted due to a statute of limitations). They also broke university policy.
As you’ve indicated, there seem to have been 58 FOI requests in one week. While this may seem like a lot, you may not realize that the University of East Anglia (where CRU is located) has a full time Information Policy Officer whose job it is to respond to FOI requests. If he had responded in accordance with the UK FOI law to the original requests (about 10 over 5 years), these additional requests would not have been sent.

Regarding McIntyre, Santer and Wegman
1. You say “Douglass did in fact introduce errors when performing their own analysis”. The “error” (which most people now acknowledge) was not in their recreation of the Santer model output, but in the statistical analysis. So there was no “value-added” in having to re-create the Santer model output.
2. You (and Dr. Santer) don’t seem to like Steve McIntyre’s choice of what he decides to work on, but it seems he works on studies that he finds interesting. I suppose Dr. Santer also chose to study the Douglass et al. paper because he found it interesting (and he disagreed with it).
3. Thanks for repeating the citation of the McIntyre paper which was deleted from the online version of my remarks. The paper can be read at http://climateaudit.files.wordpress.com/2009/12/mcintyre-grl-2005.pdf . The two replies were written in response to comments, one from a professor at MIT and the other from a well-known professor of statistics in climate science. Most people think that the arguments of McIntyre are the stronger ones, but you can read the papers and judge for yourself.
4. I don’t agree with your characterization of the NAS 2006 report. They did not say “warming during the late 20th century is unprecedented”. They said it is plausible. Well, I agree it’s plausible. They said the evidence is stronger that the warming of the late 20th century is greater than in the last 400 years. It does not surprise me at all that it has gotten warmer since the Little Ice Age. What did they say about earlier time periods? You’ve quoted one section and below I quote from the summary:
“Less confidence can be placed in large-scale surface temperature reconstructions for the period from A.D. 900 to 1600. Presently available proxy evidence indicates that temperatures at many, but not all, individual locations were higher during the past 25 years than during any period of comparable length since A.D. 900. The uncertainties associated with reconstructing hemispheric mean or global mean temperatures from these data increase substantially backward in time through this period and are not yet fully quantified.
• Very little confidence can be assigned to statements concerning the hemispheric mean or global mean surface temperature prior to about A.D. 900 because of sparse data coverage and because the uncertainties associated with proxy data and the methods used to analyze and combine them are larger than during more recent time periods”.

So there is a question mark as to whether the late 20th century warming was greater than during the Medieval Warm Period. I hope you will agree your comment overstates the confidence in the view that the current period was warmer.
5. [citations needed – the NAS report did not have rigorous methodology]

6. You should be aware of further scientific developments. In a study published in the Journal of Climate, scientists from the Danish Meteorological Institute tested seven methods of paleoclimate reconstruction (including most of the studies considered by the NAS panel in 2006). They found “All methods systematically underestimate the amplitude of the low-frequency variability…. All three relevant diagnostics—the relative bias, the low frequency amplitude, and the trend—unanimously demonstrate this considerable underestimation. The low temperatures in the preindustrial period is typically underestimated with 20%–50%”. This would indicate that the increases in temperature in periods in the past 500 years have been significantly larger than shown in the studies. They conclude “The underestimation of the amplitude of the low frequency variability demonstrated for all of the seven methods discourage the use of reconstructions to estimate the rareness of the recent warming. That this underestimation is found for all the reconstruction methods is rather depressing and strongly suggests that this point should be investigated further before any real improvements in the reconstruction methods can be made” (abstract at http://journals.ametsoc.org/doi/abs/10.1175/2008JCLI2301.1 ).

8. [citations needed – Phil Jones agrees with McIntyre – do you have a quote or something?]

9. [citations needed – Deep Climate is the only one leveling criticisms against the Wegman report. Why, then, has it come under official investigation?]

10. It is worthwhile to note that Dr. North testified under oath before Congress that he did not disagree with the Wegman report.

11. As to the charge of plagiarism, I would make two points. First of all, while it is clear that Dr. Wegman used the Bradley study as a template to discuss how tree-rings have been used to try to gauge temperatures before thermometers, this was clearly just a matter of laying out an area of background and methods. Charges of plagiarism in simple descriptions of background and method are rarely thought of as serious. That’s not just my opinion but that of the Office of Research Integrity of the US Department of Health & Human Services:
ORI’s definition of plagiarism provides the following caveat:
“ORI generally does not pursue the limited use of identical or nearly identical phrases which describe a commonly-used methodology or previous research because ORI does not consider such use as substantially misleading to the reader or of great significance.” (see http://ori.hhs.gov/education/products/plagiarism/).
12. Suppose that Dr. Wegman (who in his report did extensively reference the Bradley book) had simply copied the Bradley explanation of the background. Would that have changed his analysis in any way? The clear answer is no. That’s probably why Dr. Bradley has made the offer to withdraw his charge of plagiarism if Dr. Wegman withdraws his report. It’s clear that Dr. Bradley has no response to the clear statistical problems spelled out in detail by Prof. Wegman and his colleagues.
13. The McIntyre analysis, supported by the Wegman report, the Christiansen paper and the Loehle paper, shows clearly that it is not possible to determine that the late 20th century decades were warmer than the Medieval Warm Period.

What about all the other paleo papers that generally support MBH98’s conclusion? Mann08, Kaufman09 off the top of my head; all the teams from the spaghetti graph; anyone else want to chime in with more citations? -Kate

Dr. Patrick Michaels

I have the greatest respect and admiration for Dr. Michaels. You may be aware that he was a professor of climate at a major university (University of Virginia) and the State Climatologist for Virginia. He was elected the president of the association of state climatologists. It is clear that he lost his positions due to his outspoken insistence on sound science which contradicted the claims of more politically powerful scientists and administrators. Are you familiar with his extensive body of peer reviewed papers in the scientific literature? Is there any one of them you would like to dispute?

I believe Dr. Michaels will become recognized in history as one of the courageous scientists who fought against poor science, in the same way that a few scientists stood up against Lysenkoism and eugenics.

You claim to find Exxon Mobil connections, but as I said in my reply to Frank, these small amounts pale in comparison to the billions being spent annually by governments on climate science. You mention a few millions but suppose it was tens of millions, it would still be paltry compared to the government spending.

It’s not the absolute amount of funding that matters, but the credibility of the source and whether they may favour a particular conclusion that funding rides on. -Kate

Try a thought exercise – suppose Dr. Michaels was a VP for Exxon Mobil. Would that change the scientific evaluation of his papers in any way? The whole point of the scientific method is to work to eliminate potential sources of bias (including sharing of data and methods). I completely reject the insinuation that scientists who work for companies or who get grants from companies are untrustworthy. Of course there may be bias and so such support should be disclosed. But to claim that privately funded scientists are parrots who distort their science to favor their companies is an insult without foundation. There are of course other types of bias (for example White Hat bias which I’m sure you are familiar with). That’s why support for scientific methods is so important, and why the actions of some scientists to undermine these established scientific methods is so detrimental.

I don’t think it’s polite to our host to get into long political discussions, so I’ll simply note that your suggestion that scientific argumentation that contradicts or undermines science supported by the IPCC is a libertarian conspiracy doesn’t seem very plausible in view of the fact that other libertarian platforms are not being widely accepted (and you have provided no evidence that it’s true).

I’m on a tight schedule here and can’t provide a lot of details, but generally industry seems to know that it can get more bang for its buck not by actually funding science, but by funding public perception of science.

For instance, what two unusual things do the Smithsonian Institution’s recent exhibit on human evolution and last year’s Nova documentary Becoming Human have in common?
1) Both posit an unfounded claim that rapid climate changes, in many directions, drove human evolution (i.e. that we are adapted for change, and that this led to us being clever, etc). The underlying message is that A) climate changed quickly in the past, and B) we evolved to deal with it before, therefore C) we shouldn’t fear it now.
2) Both received major funding by Koch, who has a vested interest in promoting climate inactivism.

Likewise, rather than funding expensive satellite or field work to actually create scientific debate (which would only play out in the peer-reviewed literature), it’s so much easier to pay a PR company or think tank to get the public to think there’s a debate.

The reason you don’t hear anyone asking for Exxon Mobil’s data is that they have little or no published research on climate, and no one relies on data or scientific papers from Exxon Mobil to make policy decisions.

Nonsense. Republican party members routinely rely on ‘scientific studies’ coming from groups with shady funding sources to defend their policy proposals (here’s a quick example). Any lack of peer review is simply attributed to some nebulous left-wing conspiracy.

So people do rely on data from private groups to make policy decisions, yet you claim that research from private-funded groups shouldn’t get the same scrutiny as research from publicly-funded groups. Why?

But my understanding is that models already come with uncertainty bounds, so why do McKitrick et al. need to do further massaging of the models to compare them against observations?

The new McKitrick paper […] agrees with Dr. Santer that the Douglass calculation method for model mean uncertainty is not correct, but also does not accept the method proposed by Dr. Santer et al. […] They proposed a new method for comparing trends,

In other words, you haven’t the slightest idea what McKitrick et al. (2010) did, other than ‘well, they use a new method’. Yet you’re perfectly willing to accept that their method is sound.

So that’s where we stand. It is “established” science that “global warming theory” predicts higher mid and lower troposphere warming compared to surface warming.

It’s not “global warming theory.” It’s climate models that are based on physics. And it has next to nothing to do with the forcing that is causing global warming. If warming is due to an enhanced greenhouse effect, the upper tropical troposphere is supposed to warm more quickly than the lower troposphere, and the lower troposphere more quickly than the surface. Same thing if it is due to increased solar insolation.

The reason being? In the tropics moist air convection will play a stronger role, leading to heat being transported to the mid-troposphere more efficiently, resulting in a decrease in lapse rate.

Now think about this for a moment… The mid-troposphere is closer to space, and global warming will be less severe the more efficiently heat is transported to space. So if the tropical mid-troposphere is warming more quickly than the surface this implies that the heat is getting transported from the surface to space more efficiently — so like a fan pushing heat out of the kitchen, this means that the warming at the surface will be less severe.

In contrast, if moist air convection isn’t transporting heat as efficiently as models say it should, this will be like a weaker fan pushing heat less efficiently out of the kitchen. And as such global warming — where it really matters to us, living at the surface — will be more severe, not less.

Whether they realize it or not, McKitrick, McIntyre and Herman are arguing for a weaker negative feedback, and therefore a stronger net positive feedback, and thus that global warming will be more severe.

It is a matter of scientific debate whether models and observations overlap. The latest scientific paper (McKitrick et al.) concludes they do not over the period 1979-2009. The ball is in Dr. Santer’s court to agree or demonstrate why not. The key issues are statistical (plus more observations).

Not necessarily Santer — there are questions being raised by others. Nevertheless it may take a year before a technical response makes its way into the journals. However, there has been some discussion of this paper on the blogs since mid-August of this year. McIntyre actually showed up for the one at Annan’s.

I’d be more interested in a detailed explanation of the narrow “models” confidence interval in MMH2010. As far as I can tell it is based on the estimate of the mean trend (the C.I. of which narrows with the number of runs), not on the spread of trends per se. Even then, it still seems tight, probably because of MMH’s peculiar model-run averaging and weighting scheme.

There are a lot of other issues in MMH 2010. There are discrepancies between values in various tables, and between tables and figures. The stated model trends do not match linear trends calculated from the MMH archive. Some of the A1B model runs used are not fully documented in the CMIP-3 archive.
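The distinction between the spread of trends per se and the confidence interval of the mean trend is easy to illustrate with a toy calculation (the 57 runs and all the numbers below are invented, not taken from MMH 2010):

```python
import numpy as np

rng = np.random.default_rng(1)
# Pretend 57 model runs whose individual trends scatter around 0.25 C/decade.
trends = rng.normal(0.25, 0.10, 57)

spread_sd = trends.std(ddof=1)              # spread of trends per se
mean_se = spread_sd / np.sqrt(len(trends))  # uncertainty of the ensemble-mean trend

# By construction the mean-trend interval is sqrt(57) ~ 7.5 times narrower,
# so a test built on the mean trend can reject observations that sit
# comfortably inside the spread of the individual runs.
print(f"spread: +/-{2 * spread_sd:.3f}   mean trend: +/-{2 * mean_se:.3f}")
```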

I’m still planning a post on these issues, once the Wegman report quiets down.

This is the report to Congress by the US Global Change Research Program, signed off by Dr. John Holdren. On page 146 you can see that spending in FY ’08 was US$1.8 billion, for FY ’09 $2.4 billion, and the estimate for FY ’10 is $2 billion.

My comment that “Even Phil Jones agrees at present we cannot determine with accuracy if the Medieval Warm Period was warmer than the last decades of the 20th century or not” was deleted from my earlier post. Please read the BBC interview with Phil Jones dated 13 February 2010. The relevant quote is:

Q – “There is a debate over whether the Medieval Warm Period (MWP) was global or not. If it were to be conclusively shown that it was a global phenomenon, would you accept that this would undermine the premise that mean surface atmospheric temperatures during the latter part of the 20th Century were unprecedented?”

A. “There is much debate over whether the Medieval Warm Period was global in extent or not. The MWP is most clearly expressed in parts of North America, the North Atlantic and Europe and parts of Asia. For it to be global in extent the MWP would need to be seen clearly in more records from the tropical regions and the Southern Hemisphere. There are very few palaeoclimatic records for these latter two regions.

Of course, if the MWP was shown to be global in extent and as warm or warmer than today (based on an equivalent coverage over the NH and SH) then obviously the late-20th century warmth would not be unprecedented. On the other hand, if the MWP was global, but was less warm than today, then current warmth would be unprecedented”.

My comment on a scientific article demonstrating problems with most paleoclimate reconstructions based on tree-rings was deleted with the comment that citations of articles from the journal Energy and Environment are not accepted. This type of censorship is not productive in my view, and note that E&E is a peer-reviewed journal that often publishes work from highly respected scientists, although to be sure it is not as prestigious as Nature and Science.

However, my comment did not quote an article from E&E. Rather, it quoted and referenced an article from the journal Climatic Change, which was until his recent passing edited by Dr. Stephen Schneider (a very strong proponent of CAGW). This journal is in the top tier of scientific climate journals.

I post my comment again:

There is also evidence that the studies depending on tree-rings very likely underestimate the upper temperatures in their reconstructions. In “A mathematical analysis of the divergence problem in dendroclimatology”, Dr. Loehle shows that since tree growth is non-linear, the use of tree-rings will limit the amplitude of reconstructions. As a consequence: “The nonlinear response of trees to temperature, which can produce divergence, makes it difficult to detect past climate episodes warmer than those occurring during the calibration period. If all trees grow at temperatures far below the inflection point of the growth curve … during the calibration period, then it may be possible to detect warmer past temperatures, but this will not be known a priori from the data or the fit statistics. That is, one can not tell if a given tree or composite proxy is able to detect past temperatures warmer than those present in the calibration data, though it might. Thus it is fundamentally impossible using tree ring data to say that recent decades are warmer than any time in the past n years because temperatures warmer than those of the first half of the twentieth century are likely to be suppressed by the method used for reconstruction”. (abstract at http://www.springerlink.com/content/45u6287u37x5566n/ ).
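Loehle’s argument can be made concrete with a toy growth curve. The inverted parabola below is an invented response function, not his actual model, but it shows the key effect: two temperatures on opposite sides of the growth optimum produce the same ring width, so a reconstruction calibrated on the cool side of the curve can never report a temperature above the optimum:

```python
import numpy as np

T_OPT = 15.0  # hypothetical growth optimum, degrees C

def ring_width(temp):
    """Toy inverted-parabola growth response: widest rings at T_OPT."""
    return 1.0 - 0.01 * (temp - T_OPT) ** 2

# A cool year and a warm year equidistant from the optimum are indistinguishable.
assert np.isclose(ring_width(12.0), ring_width(18.0))

# Inverting ring width back to temperature, assuming calibration on the
# cool side of the curve, folds every warm year back below T_OPT:
temps = np.array([10.0, 14.0, 15.0, 17.0, 20.0])
widths = ring_width(temps)
recon = T_OPT - np.sqrt(np.maximum(0.0, (1.0 - widths) / 0.01))
print(recon)  # the 17 C and 20 C years come back as 13 C and 10 C
```

No reconstructed value can exceed T_OPT, which is the sense in which warm episodes are “suppressed by the method used for reconstruction”.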

Sorry Geoff, I thought Loehle et al was in E&E; I must have been thinking of a different paper of his. We have discussed E&E in the comments here before and have concluded that it simply is not at the same standard as other journals. -Kate

In my earlier comment I said “The reason you don’t hear anyone asking for Exxon Mobil’s data is that they have little or no published research on climate, and no one relies on data or scientific papers from Exxon Mobil to make policy decisions”.

And you replied:

“Nonsense. Republican party members routinely rely on ‘scientific studies’ coming from groups with shady funding sources to defend their policy proposals (here’s a quick example). Any lack of peer review is simply attributed to some nebulous left-wing conspiracy”.

I agree with you that politicians of all stripes cite all kinds of non-scientific studies (some of which are nonsensical). I’m not talking about politics, I’m talking about science (which I think is the main topic of this blog).

Scientists are people and they have all kinds of interests. The reason that the scientific process has been adopted is to be able to compare results and share data, which helps to detect and eliminate bias. That’s why I support data sharing (as do most funding agencies).

If you have an idea how to get politicians to stop using unscientific reports please let us know. I prefer to leave the politics to others (but maybe you’d like to comment on how much money GE has spent promoting windmills?).

It still sounds like you haven’t read any of the papers I linked for you. If you have any specific questions about the issues or the statistics, please let me know.

All the evidence you provided was regarding WG2, which deals with impacts and adaptations. In the context of your previous comment, it was obvious that you were referring to WG1 topics – paleo, etc. WG2 had some citation errors and a typo on the Himalayan glacier date, but WG1 is independent, much more rigorous and confident, and does not have such citation errors. In fact, in many instances, it has underestimated the scale of the problem – citations and further discussion here. -Kate

As I said in my earlier comment, as far as I can tell the amount of “detail” requested (and ultimately provided by Dr. Santer) took only a few mouse clicks. If that is the case, do you support this release?

It depends on the “detail” being requested, who is doing the requesting and why they are doing the requesting.

If the detail being requested happens to be the raw data I might provide them with the data or point them to where they can get it themselves. If it is the results of my own intermediate calculations based upon the raw data I would prefer to have them do their own calculations — since then they are checking my results.
*
Geoff wrote:

1. You say “Douglass did in fact introduce errors when performing their own analysis”. The “error” (which most people now acknowledge) was not in their recreation of the Santer model output, but in the statistical analysis. So there was no “value-added” in having to re-create the Santer model output.

Let’s be clear about what you are referring to as “Santer model output.” We aren’t talking about raw data or the output from some climate model. We are talking about “independently calculated estimates of ‘synthetic’ temperatures from climate model data.”

Please see:

Douglass et al. used the same primary climate model data that we employed. They did what Mr. McIntyre was unwilling to do – they independently calculated estimates of “synthetic” Microwave Sounding Unit (MSU) temperatures from climate model data. The Douglass et al. “synthetic” MSU temperatures are very similar to our own. The scientific differences between the Douglass et al. and Santer et al. results are primarily related to the different statistical tests that the two groups employed in their comparisons of models and observations.

And there is value added insofar as by independently calculating estimates from the climate model data, Douglass et al. were checking the results — the calculations — of Santer et al. As I previously pointed out, if Douglass et al. had simply gone with the results of Santer et al.’s calculations there would have been no independent verification. Had Santer et al. made a mistake in their calculations, it would simply have made its way into the work of Douglass et al.

Independent checks have value. That is why the double bookkeeping of banks involves adding up columns then rows and adding up rows then columns. If the totals don’t match you know to re-check your calculations. And if the results of two different, independent sets of calculations do match — as in the case of the intermediate results of Santer et al. and Douglass et al. — that too is value added.
*
Geoff wrote:

Once again I remind you, it was determined by the lawyers at DOE and LLL that the data requested was required by law to be provided.

Was it? According to Santer:

A little over a month after receiving Mr. McIntyre’s Freedom of Information Act requests, I decided to release all of the intermediate calculations I had performed for our International Journal of Climatology paper. I made these datasets available to the entire scientific community. I did this because I wanted to continue with my scientific research. I did not want to spend all of my available time and energy responding to harassment incited by Mr. McIntyre’s blog.

In fact, earlier you seemed to realize that it was Santer’s decision that led to the release of the data, not something that was imposed upon him by lawyers. Let me remind you that you said:

At the time Dr. Santer advised his colleagues that he planned to release the dataset, one of his colleagues (Dr. Thomas Wigley) commented “This is a good idea…. To have these numbers on line would be of great benefit to the [scientific] community. In other words, although prompted by McIntyre’s request, you will actually be giving something to everyone”.

… although if I may add my own value, here is the complete paragraph:

This is a good idea. However, will you give only tropical (20N-20S) results? I urge you to give data for other zones as well, viz, SH, NH, GL, 0-20N, 20-60N, 60-90N, 0-20S, 20-60S, 60-90S (plus 20N-20S). To have these numbers on line would be of great benefit to the community. In other words, although prompted by McIntyre’s request, you will actually be giving something to everyone.

So it wasn’t having online the subset of numbers that McIntyre requested which would be of great benefit to the community (scientific or otherwise) but the entire set. And it was in this context that putting the numbers online was, in Wigley’s view, a good idea.
*
Geoff wrote:

As you’ve indicated, there seems to have been 58 FOI requests in one week. While this may seem like a lot, you may not realize that the University of East Anglia (where CRU is located) has a full time Information Policy Officer whose job it is to respond to FOI requests.

As the Information Policy Officer, David Palmer’s job would not be filling the requests but receiving them (sometimes) and identifying how the policies regarding information and the availability of information are to be applied. The task of filling the requests would still fall on CRU, and as Phil Jones makes clear, even those requests that get turned down cost his team time.

Please see:

But he pleads provocation. Last year in July alone the unit received 60 FoI requests from across the world. With a staff of only 13 to cope with them, the demands were accumulating faster than they could be dealt with. “According to the rules,” says Jones, “you have to do 18 hours’ work on each one before you’re allowed to turn it down.” It meant that the scientists would have had a lot of their time diverted from research.

58 requests at 18 hours each, assuming a 40-hour work week… Well, you have the same numbers I do, so I’ll let you do the math.
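For anyone who would rather not do the math by hand, here is a quick back-of-the-envelope sketch. All the figures (58 requests, 18 hours each, a staff of 13, a 40-hour week) come from the passages quoted above:

```python
# Back-of-the-envelope estimate of the FoI workload described above.
requests = 58        # FoI requests received in one week
hours_each = 18      # hours of work per request before it may be turned down
staff = 13           # CRU staff available to deal with them
week = 40            # hours in a standard work week

total_hours = requests * hours_each              # 1044 hours of work
person_weeks = total_hours / week                # ~26 forty-hour weeks
weeks_per_person = total_hours / (staff * week)  # ~2 weeks of everyone's time

print(total_hours, round(person_weeks, 1), round(weeks_per_person, 1))
```

In other words, one week's worth of requests would consume roughly half a person-year of labour, or about two solid weeks of the entire unit's time — with no research getting done in the meanwhile.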
*
Geoff wrote:

If he had responded in accordance with the UK FOI law to the original request (about 10 over 5 years), these additional requests would not have been sent.

And the death threats? I suppose those could have been foregone as well.

Actually McIntyre has been pestering Phil Jones and CRU since 2002.

Please see:

Since 2002, McIntyre has repeatedly asked Phil Jones, director of CRU, for access to the HadCRU data. Although the data are made available in a processed gridded format that shows the global temperature trend, the raw station data are currently restricted to academics. While Jones has made data available to some academics, he has refused to supply McIntyre with the data.

… and the initial draft of “Code of Practice for Responding to Requests for Information Under the Freedom of Information Act 2000” authored by David Palmer is dated 15 Sept 2004, so while the law was on the books, the guidance on how to apply that law didn’t exist until two years after McIntyre initially started making requests.

In the absence of such guidance compliance may not have seemed so necessary, particularly as so much of the data was already online:

A further irritation was that most of the data was available online, making the FoI requests, in Jones’s view, needless and a vexatious waste of his time. In the circumstances, he says, he thought it reasonable to refer the applicants to the website of the Historical Climatology Network in the US.

2. You (and Dr. Santer) don’t seem to like Steve McIntyre’s choice of what he decides to work on, but it seems he works on studies that he finds interesting. I suppose Dr. Santer also chose to study the Douglass et al. paper because he found it interesting (and he disagreed with it).

As near as I can tell he pesters real scientists, wastes their time, goes on fishing expeditions to find something that he can blow out of proportion or otherwise misrepresent — and he stokes paranoid delusions on the part of rightwing fanatics that — judging from the death threats and at least one dead rat that someone found on their doorstep just after the doorbell had been rung one night — may end up getting someone killed some day.

They did not say “warming during the late 20th century is unprecedented”. They said it is plausible. Well, I agree it’s plausible.

And what was it that I said?

Oh yes:

What the National Academy of Sciences found was that the basic conclusion of Mann Bradley and Hughes 1998 and 1999, namely, that warming during the late 20th century is unprecedented over the past 1000 years, is well supported, by the initial study and subsequent evidence.

Not that the warming during the “late 20th century is unprecedented” but that the conclusion that it is unprecedented over the past 1000 years is well-supported.

And what do they specifically state? That while the evidence is not unanimous, there is a large body of evidence in favor of this conclusion, and that evidence at least suggests the warming is unprecedented for at least the last 2000 years.

Please see:

The basic conclusion of Mann et al. (1998, 1999) was that the late 20th century warmth in the Northern Hemisphere was unprecedented during at least the last 1,000 years. This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature reconstructions and pronounced changes in a variety of local proxy indicators, such as melting on ice caps and the retreat of glaciers around the world, which in many cases appear to be unprecedented during at least the last 2,000 years. Not all individual proxy records indicate that the recent warmth is unprecedented, although a larger fraction of geographically diverse sites experienced exceptional warmth during the late 20th century than during any other extended period from A.D. 900 onward.

Based on the analyses presented in the original papers by Mann et al. and this newer supporting evidence, the committee finds it plausible that the Northern Hemisphere was warmer during the last few decades of the 20th century than during any comparable period over the preceding millennium.

So by “plausible” it would appear they mean “after consideration of all the available evidence it would be reasonable to conclude”.

Nevertheless, they did find some overreaching — and I quoted from that as well.
*
Geoff wrote:

They said the evidence is stronger that the warming of the late 20th century is greater than in the last 400 years. It does not surprise me at all that it has gotten warmer since the Little Ice Age.

It would seem they think there is some rather suggestive evidence that late 20th century warming has been unprecedented for at least the past 2000 years. But at the time they didn’t seem to think the case was yet strong enough for them to be comfortable with concluding as much.
*
Geoff wrote:

10. It is worthwhile to note that Dr. North testified under oath before Congress that he did not disagree with the Wegman report.

Did not disagree… Now that sounds like a ringing endorsement! Are those his exact words?

Here are the words of the report itself:

The basic conclusion of Mann et al. (1998, 1999) was that the late 20th century warmth in the Northern Hemisphere was unprecedented during at least the last 1,000 years. This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature reconstructions and pronounced changes in a variety of local proxy indicators, such as melting on ice caps and the retreat of glaciers around the world, which in many cases appear to be unprecedented during at least the last 2,000 years.

I would like to correct some potential misunderstanding about the conclusions of the 2006 National Research Council report to which Mr. Barton referred. Quoting from the report’s summary: “Based on the analyses presented in the original papers by Mann et al. and this newer supporting evidence, the committee finds it plausible that the Northern Hemisphere was warmer during the last few decades of the 20th century than during any comparable period over the preceding millennium.”

While we did find some of the methods used in Michael E. Mann’s original papers to be less cautious than some of our members might have used, we have not found any evidence that his results were incorrect or even out of line with other works published since his original papers.

Mr. Barton’s reference to “Mr. Mann’s global warming projections” is incorrect and quite misleading. Mr. Mann’s work does not make projections about global warming. His work, and that of our committee, was concerned with the reconstruction of temperatures in the past. As stated in the report, this area of research does not attempt to make any inference about future temperatures. While knowledge of past climates fills in context, the arguments for anthropogenic global warming are mainly based upon the past 50 years of data, including temperatures, model simulations and numerous other indicators.

As North makes clear, it is all too easy to overemphasize Mann, Bradley and Hughes 1998 and 1999. Even if McIntyre might wish to avoid the subject there are other more recent studies — and they pretty much all say the same thing.

But even if they didn’t, it isn’t how much things have warmed so far that should be worrying us: it is how much they will warm over the next century or so — and MBH 98/99 actually has very little to say about that.
*
We know the basic physical principles. We also know that, given the thermal inertia of the oceans, even if we were to put a cap on carbon dioxide emissions today the world would continue to warm for decades to come. And the global average temperature wouldn’t come down much anytime soon after that. 8000 years ago it may have been warmer than it is now. Perhaps.

However, it now seems likely that the Arctic Ocean will be ice free for at least a few weeks each year before mid-century. The last time the Arctic Ocean was seasonally ice free appears to have been several million years ago. For example, genetic studies suggest that North Atlantic and North Pacific populations of right whales have been separated for roughly 6 million years.

From the Abstract:

North Atlantic and Southern Ocean populations of all three species are reciprocally monophyletic, and North Pacific C. erraticus is well separated from North Atlantic and southern C. erraticus. Mitochondrial clock calibrations suggest that these divergences occurred around 6 million years ago (Ma), and that the Eubalaena mitochondrial clock is very slow.

… and bringing together the results of nearly 300 studies a more recent paper shows that sea ice appears to have been a constant feature of the Arctic for more than twice that:

Although existing records are far from complete, they indicate that sea ice became a feature of the Arctic by 47 Ma, following a pronounced decline in atmospheric pCO2 after the Paleocene-Eocene Thermal Optimum, and consistently covered at least part of the Arctic Ocean for no less than the last 13-14 million years. Ice was apparently most widespread during the last 2-3 million years, in accordance with Earth’s overall cooler climate.

Furthermore, it has been argued that it is likely that a seasonally ice-free Arctic Ocean isn’t stable.

Please see:

Because a similar amount of solar radiation is incident at the surface during the first months to become ice free in a warming climate as during the final months to lose their ice in a further warmed climate, the ice-albedo feedback is similarly strong during both transitions. The asymmetry between these two transitions is associated with the fundamental nonlinearities of sea-ice thermodynamic effects, which make the Arctic climate more stable when sea ice is present than when the open ocean is exposed. Hence, when sea ice covers the Arctic Ocean during fewer months of the year, the state of the Arctic becomes less stable and more susceptible to destabilization by the ice-albedo feedback. In a warming climate, as discussed above, this causes irreversible threshold behavior during the potential distant loss of winter ice, but not during the more imminent possible loss of summer (September) ice.

So once we begin to lose the Arctic Sea ice on a seasonal basis it is likely that as temperatures continue to rise we will lose it for longer and longer parts of the year until it is gone more or less for good. And at that point the fate of perhaps the majority of Greenland’s ice will likewise be set.

Timothy Chase
[1] [citations needed – 18 hours is the maximum time to respond to FoIA in England, not the minimum]

[2] Secondly, it is abundantly clear that Santer did not for one moment feel happy about complying with McIntyre’s requests for data – that much is abundantly clear from the emails and his comments above, to date.

[3] Your citation of the Gerald North NAS report is incomplete and subject to less-than-careful parsing of the text of the report.

North’s report found that:

(a) temperatures of the last few decades were higher than any during the “preceding four centuries”.

(b) “Very little confidence can be assigned to statements concerning the hemispheric mean or global mean surface temperature prior to about A.D. 900 because of sparse data coverage …”

(c) The committee found it “plausible” that the “Northern Hemisphere was warmer during the last few decades of the 20th century than during any comparable period over the preceding millennium”… but continues to say that “substantial uncertainties currently present in the quantitative assessment of large-scale surface temperature changes prior to about A.D. 1600 lower our confidence in this conclusion…”

(d) The committee concluded that “even less confidence can be placed in the original conclusions by Mann et al. (1999) that “the 1990s are likely the warmest decade, and 1998 the warmest year, in at least a millennium…” because of the above mentioned uncertainties.

Citations:
Same sources, provided by Timothy

There were issues with MBH98 – however you interpret the NAS report. Nobody is denying that. However, it was the first of its kind, and many other paleo reconstructions have come out since. Even Mann et al. has a newer paper out (2008). None of these reconstructions are perfect either – none will ever be. But why are we still talking about a 12-year-old paper, when there are stronger, more up-to-date studies in the literature? -Kate

Geoff, in defence of demanding transparency only from publicly-funded scientists, you said

no one relies on data or scientific papers from Exxon Mobil to make policy decisions.

and I replied,

Nonsense. Republican party members routinely rely on ‘scientific studies’ coming from groups with shady funding sources to defend their policy proposals (here’s a quick example).

to which you now reply,

I agree with you that politicians of all stripes cite all kinds of non-scientific studies (some of which are nonsensical). I’m not talking about politics, I’m talking about science

Duh. Do you expect people not to see you dodging and weaving as you defend the indefensible? Do you have any non-nonsensical arguments for why we should demand transparency only from publicly-funded scientists, but not privately-funded scientists?

It still sounds like you haven’t read any of the papers I linked for you. If you have any specific questions about the issues or the statistics, please let me know.

Well, I admit that I’ve yet to read McKitrick et al. (2010) since it’s behind a paywall, but I have read Douglass et al. (2007), and I spotted their error.

My questions to you about your understanding of McKitrick et al. (2010) are these:

(1) In a nutshell, what is your understanding of the method that McKitrick et al. (2010) use to compare climate models to observations? Yes, I already know it’s a ‘new method’. I’d like to know what the method is (in your understanding).

(2) Why is such a ‘new method’ even needed, given that climate models already come with their own uncertainty bars and thus one can simply directly match models with observations?
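For what it’s worth, the “direct match” I have in mind is nothing exotic. A minimal sketch is below; the trend values are invented purely for illustration, and real comparisons (like Santer et al.’s) use more careful statistics, but the basic idea is just this:

```python
import statistics

# Hypothetical tropical tropospheric temperature trends (degrees C per
# decade) from an ensemble of model runs; the numbers are made up.
model_trends = [0.18, 0.22, 0.25, 0.15, 0.30, 0.20, 0.27, 0.19]
observed_trend = 0.21  # likewise invented

mean = statistics.mean(model_trends)
sd = statistics.stdev(model_trends)

# The simplest possible "direct match": does the observed trend fall
# within roughly two standard deviations of the ensemble mean?
consistent = abs(observed_trend - mean) <= 2 * sd
print(consistent)
```

The models’ own spread supplies the uncertainty bars, so no “new method” is obviously required just to ask whether observations fall inside them.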

Shub, I quoted what Phil Jones had to say regarding the time required to work on an FOIA before you could turn it down.

Please see:

But he pleads provocation. Last year in July alone the unit received 60 FoI requests from across the world. With a staff of only 13 to cope with them, the demands were accumulating faster than they could be dealt with. “According to the rules,” says Jones, “you have to do 18 hours’ work on each one before you’re allowed to turn it down.” It meant that the scientists would have had a lot of their time diverted from research.

[2] Secondly, it is abundantly clear that Santer did not for one moment feel happy about complying with McIntyre’s requests for data – that much is abundantly clear from the emails and his comments above, to date.

I made it abundantly clear as well.

Please see:

A little over a month after receiving Mr. McIntyre’s Freedom of Information Act requests, I decided to release all of the intermediate calculations I had performed for our International Journal of Climatology paper. I made these datasets available to the entire scientific community. I did this because I wanted to continue with my scientific research. I did not want to spend all of my available time and energy responding to harassment incited by Mr. McIntyre’s blog.

Furthermore I believe I made it quite clear that climatologists generally don’t find dealing with McIntyre a pleasant task:

As near as I can tell he pesters real scientists, wastes their time, goes on fishing expeditions to find something that he can blow out of proportion or otherwise misrepresent — and he stokes paranoid delusions on the part of rightwing fanatics that — judging from the death threats and at least one dead rat that someone found on their doorstep just after the doorbell had been rung one night — may end up getting someone killed some day.

… and Ben Santer is certainly no exception to this.

[3] Your citation of the Gerald North NAS report is incomplete and subject to less-than-careful parsing of the text of the report.

The parsing was quite careful. And while I agree that the NAS report had several criticisms regarding Mann et al. (1999) I included the central criticisms in an earlier comment.

Specifically, I quoted:

The substantial uncertainties currently present in the quantitative assessment of large-scale surface temperature changes prior to about A.D. 1600 lower our confidence in this conclusion compared to the high level of confidence we place in the Little Ice Age cooling and 20th century warming. Even less confidence can be placed in the original conclusions by Mann et al. (1999) that “the 1990s are likely the warmest decade, and 1998 the warmest year, in at least a millennium” because the uncertainties inherent in temperature reconstructions for individual years and decades are larger than those for longer time periods and because not all of the available proxies record temperature information on such short timescales.

The basic conclusion of Mann et al. (1998, 1999) was that the late 20th century warmth in the Northern Hemisphere was unprecedented during at least the last 1,000 years. This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature reconstructions and pronounced changes in a variety of local proxy indicators, such as melting on ice caps and the retreat of glaciers around the world, which in many cases appear to be unprecedented during at least the last 2,000 years….

And lest we forget, earlier you asked regarding the charges that the Wegman report involves extensive instances of plagiarism:

If you copy a block of text, but change the wording so that the meaning of the original passage is not intact, is that plagiarism?

The extreme selectivity with which you apply standards in your criticism is almost comparable to that of McIntyre.

Regardless, at this point Mann, Bradley, Hughes (1998/1999) is a subject for the history of science. There have been other, more recent studies. When you argue against it you are doing battle with ghosts. Science itself moves on.

I can’t tell if that was Kate editing your comment for citations or not due to the lack of italicizing, so I’ll provide that citation.

The 18 hours claim was sourced to Phil Jones in Timothy’s own link. He said: “According to the rules,” says Jones, “you have to do 18 hours’ work on each one before you’re allowed to turn it down.”

If a scientist isn’t credible to you, consider that Steven Mosher (co-author of Climategate: The CRUtape Letters) confirms the requirement here, where he refers to it as a “constraint,” specifically saying that Jones “spent less than the 18 regulated hours on the request” and that his (Mosher’s) request “was denied because CRU determined that responding to my request would take more than 18 hours.”

Now, assuming you aren’t familiar with UK FOIA law (and let’s face it, most of us aren’t), it’s easy to think of these as being consistent with a minimum. I know, because I made the same mistake a while back on this very blog (in fact, that post is remarkably high on the Google search for “FOIA 18 hours”).

But, I was wrong. It stems from an exemption under Section 12, put in plain English here:

Section 12 of the Freedom of Information Act allows public authorities to refuse to answer requests for information if the cost of complying would exceed the ‘appropriate limit’ prescribed in the Fees Regulations (SI 2004/3244).

Those limits are described here, but in a nutshell, for authorities central to the government, they can refuse to comply if the cost of doing so exceeds £600. For non-central agencies, like the CRU, the limit is £450. This total cost includes all fees associated with retrieving the information, including manpower (for some cases, this is all it requires). Although some agencies retain the ability to figure out the cost per hour of complying with a FOIA request, the CRU does not appear to have such power, and is thus considered to be equivalent to £25 / hour – thus, the limit is reached after 18 hours.
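The arithmetic behind those hour caps is straightforward. Here it is spelled out, using the figures from the Fees Regulations quoted above:

```python
# Section 12 'appropriate limit' converted into an hours cap, using the
# flat staff-time rate prescribed by the Fees Regulations (SI 2004/3244).
central_limit = 600   # pounds, for authorities central to government
other_limit = 450     # pounds, for other public authorities (e.g. a university)
flat_rate = 25        # pounds per person-hour of staff time

hours_cap_other = other_limit / flat_rate      # 18.0 hours (CRU's case)
hours_cap_central = central_limit / flat_rate  # 24.0 hours

print(hours_cap_other, hours_cap_central)
```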

Now, it seems that among some government agencies, these two limits are colloquially equivalent, even if they aren’t exactly the same, meaning you’d express the limit either as “18 hours” or “£450”. (source, which also claims that the administrative process of determining how long a request would take is itself around 3 hours long.) Thus, even for requests that have extra costs for data retrieval (thus hitting the limit before 18 actual hours have passed), it may be appropriate conversationally to refer to the limit as “18 hours”. (Note that McIntyre’s ClimateAudit fishing expedition was for data that was held under copyright by entities other than the CRU. Thus, even if the CRU were legally entitled to release the data, there would be extra costs involved. And for all inactivists still on this talking point, the CRU being licensed to use the data is not the same as being licensed to distribute it, any more than having a Windows license from Microsoft gives you permission to freely distribute it to others.)

Thus, the 18 hours / £450 is a maximum, not a minimum. Oddly, Jones’ statement is consistent with this too – “you have to do 18 hours’ work on each one before you’re allowed to turn it down”. It just isn’t clear which he’s referring to in his own words – and it implies a minimum if you aren’t familiar with the process. (Unsurprising, actually. He isn’t very media-savvy. When Andrew Weaver spoke at my university the week after SwiftHack, he had some harsh criticism of Jones sticking to scientific rather than public/media expectations, so it isn’t exactly surprising that Jones would be less than clear. Scientists generally are trained only to communicate with other scientists. And the journalist interviewing him for the Guardian didn’t bother to follow it up or put it in clearer terms either.)

I was wrong on this point earlier. It’s easy to see where the mistake stems from, though.

That said, the point Timothy was making holds. There’s no reason to send that many requests – in a coordinated fashion, by the way! – unless you want to bog them down with numbers. This is especially true for FOIA requests that are known, in advance, to be impossible to carry out. Mosher’s own post from above (his FOIA was part of the bundle coordinated at ClimateAudit; Mosher admits as much there) complains about Jones combining all the requests into a single request and processing it as a bundle. However, this is certainly allowed by the rules (called out in plain English here, as those requests weren’t just “similar” but word-for-word identical except for the countries involved; one of the official submissions (number FOI 09-97) even includes ClimateAudit’s “[insert 5 or so countries that are different from ones already requested]” form letter!), and it’s bloody obvious to anyone with a pulse that the requests are essentially the same thing (different parts of “send us all your data”). Why would Mosher be complaining about Jones following the rules and saving time on legal issues (thus allowing more time for research) unless part of his goal was to bog down the CRU?

The most common and obvious type of DoS attack occurs when an attacker “floods” a network with information. When you type a URL for a particular website into your browser, you are sending a request to that site’s computer server to view the page. The server can only process a certain number of requests at once, so if an attacker overloads the server with requests, it can’t process your request. This is a “denial of service” because you can’t access that site.
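The mechanics of that kind of flood are easy to sketch. In the toy model below (the capacity and request counts are invented for illustration), a server that can only handle a fixed number of requests per interval simply drops everything beyond that capacity — including the one legitimate request stuck behind the flood:

```python
# Toy model of a flooding denial-of-service attack. The server can
# process only `capacity` requests per interval; the rest are dropped.
capacity = 10

flood = ["attacker"] * 50       # an attacker's burst of requests
queue = flood + ["legitimate"]  # your request arrives behind the flood

served = queue[:capacity]   # only the first `capacity` requests get through
dropped = queue[capacity:]  # everything else is denied service

print("legitimate" in served)
```

The legitimate request never gets served, even though the server itself is working exactly as designed — which is the point of the analogy to a flood of FOI requests.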

It seems the analogy still holds – even if the 18 hours is a maximum, not a minimum. All that changes is how dramatic the time requirement is – the intention, made clear by Mosher’s complaint, remains unchanged. Certainly Jones could have been more clear – and I, among others, could have investigated this sooner – but our defense remains.

Thanks for looking that up, Brian. I didn’t know about that either. -Kate

About

Kaitlin Alexander is a PhD student in climate science at the University of New South Wales in Sydney, Australia. She became interested in climate science as a teenager on the Canadian Prairies, and increasingly began to notice the discrepancies between scientific and public knowledge on climate change. She started writing this blog at age sixteen to help address this gap in public understanding, and it slowly evolved into a record of her research as a young climate scientist. Read more
