New independent surface temperature record in the works

Good news travels fast. I’m a bit surprised to see this get some early coverage, as the project isn’t ready yet. However, since it has been announced in the press, I can tell you that this project is partly a result of what we’ve learned in the surfacestations project. Mostly, though, it is a reaction to the things we have been saying time and again, only to have NOAA and NASA ignore our concerns, or craft responses designed to protect their ideas rather than consider whether those ideas were valid in the first place. I have been corresponding with Dr. Muller and have been invited to participate with my data; when I am able, I will say more about it. In the meantime, you can visit the newly minted web page here. I highly recommend reading the section on methodology there. Longtime students of the surface temperature record will recognize some of the issues being addressed. I urge readers not to bombard these guys with questions. Let’s “git ‘er done” first.

Note: since there’s been some concern in comments, I’m adding this: Here’s the thing, the final output isn’t known yet. There’s been no “peeking” at the answer, mainly due to a desire not to let preliminary results bias the method. It may very well turn out to agree with the NOAA surface temperature record, or it may diverge positive or negative. We just don’t know yet.

Global warming is the favored scapegoat for any seemingly strange occurrence in nature, from dying frogs to hurricanes to drowning polar bears. But according to a Berkeley group of scientists, global warming does not deserve all these attributions. Rather, they say global warming is responsible for one thing: the rising temperature.

However, global warming has become a politicized issue, largely becoming disconnected from science in favor of inflammatory headlines and heated debates that are rarely based on any science at all, according to Richard Muller, a UC Berkeley physics professor and member of the team.

“There is so much politics involved, more so than in any other field I’ve been in,” Muller said. “People would write their articles with a spin on them. The people in this field were obviously very genuinely concerned about what was happening … But it made it difficult for a scientist to go in and figure out that what they were saying was solid science.”

Muller came to the conclusion that temperature data – which, in the United States, began in the late 18th century when Thomas Jefferson and Benjamin Franklin made the first thermometer measurements – was the only truly scientifically accurate way of studying global warming.

Without the thermometer and the temperature data that it provides, Muller said it was probable that no one would have noticed global warming yet. In fact, in the period where rising temperatures can be attributed to human activity, the temperature has only risen a little more than half a degree Celsius, and sea levels, which are directly affected by the temperature, have increased by eight inches.

Richard Muller, a UC Berkeley physics professor, started the Berkeley Earth group, which tries to use scientific data to address the doubts that global warming skeptics have raised. Javier Panzar/Staff

To that end, he formed the Berkeley Earth group with 10 other highly acclaimed scientists, including physicists, climatologists and statisticians. Before the group joined in the study of the warming world, three major groups had released analyses of historical temperature data. But each has come under attack from climate skeptics, Muller said.

In the group’s new study, which will be released in about a month, the scientists hope to address the doubts that skeptics have raised. They are using data from all 39,390 available temperature stations around the world – more than five times the number of stations that the next most thorough group, the Global Historical Climatology Network, used in its data set.

Other groups were concerned with the quality of the stations’ data, which becomes less reliable the earlier it was measured. Another decision to be made was whether to include data from cities, which are known to be warmer than suburbs and rural areas, said team member Art Rosenfeld, a professor emeritus of physics at UC Berkeley and former California Energy Commissioner.

“One of the problems in sorting out lots of weather stations is do you drop the data from urban centers, or do you down-weight the data,” he said. “That’s sort of the main physical question.”

Global warming is real, Muller said, but both its deniers and exaggerators ignore the science in order to make their point.

“There are the skeptics – they’re not the consensus,” Muller explained. “There are the exaggerators, like Al Gore and Tom Friedman who tell you things that are not part of the consensus … (which) goes largely off of thermometer records.”

Some scientists – fearing that their results will be misinterpreted as proof that global warming is not urgent, as in the case of Climategate – fall into a similar trap of exaggeration.

The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.

“We believed that if we brought in the best of the best in terms of statistics, we could use methods that would be easier to understand and not as open to actual manipulation,” said Elizabeth Muller, Richard Muller’s daughter and project manager of the study. “We just create a methodology that will then have no human interaction to pick or choose data.”


205 thoughts on “New independent surface temperature record in the works”

Oh please!
“We intend to provide an open platform for further analysis by publishing our complete data and software code. We hope to have an initial data release available on this website in early 2011.”
What sort of science is this? (Big winky!)

“We just create a methodology that will then have no human interaction to pick or choose data.”

In creating a methodology, human interaction to pick or choose data is inevitable. One can only hope that the BEST Study methodology is more transparent than the current ones. And that its authors are more open to constructive criticism.

“but mostly, this project is a reaction to many of the things we have been saying time and again, only to have NOAA and NASA ignore our concerns”

Aren’t your concerns supposed to have been published as a proper analysis a long time ago? It’s now two years since you published your conclusions in “Is the U.S. Surface Temperature Record Reliable?” and one year since the Menne analysis.

Where is the actual analysis that demonstrates the concerns that NASA and NOAA should be paying attention to?

REPLY: We have a paper in peer review, note the difficulties encountered by O’Donnell et al with a hostile reviewer, Steig, and perhaps then you’ll understand why skeptical papers can take much longer to run the gauntlet. Besides, it took us three years with volunteers to get a large enough sample. Menne used preliminary data (mine against my protests), and a sample that was not spatially representative nor contained enough class1-2 stations. That paper was pure politics.

If you can do a better job with zero budget, herding volunteers, in your spare time, for no pay, against a well funded government sponsored consensus, by all means do it. Otherwise wait for our paper. – Anthony

Finally! With all the talk of dropped stations, uncorrected UHIE, inappropriate and biased corrections to rural stations and older stations, the substitution of urban for rural, and the loss of high altitude and high latitude data – finally a group with no (apparent) financial connection to the IPCC or Chevron (/sarc!!) will create a database. I think.

If the group started first with New Zealand as a “test”, we would know the way of the future. NIWA and BOM (Australia) have a discarded dataset followed by a result that looks just the same, and not at all like the raw datasets that we were shown by the NZScCoalition. Hmmm.

I’ll sure look to the New Zealand subset with interest. If NIWA is supported, then we’ll have to wonder if the GISTemp is all that bad ….

On the face of it this is not what I expect from the denizens of my old home town. Be prepared to learn that any 1000 randomly chosen thermometers selected from the full set and calibrated over time tell the same story as all 39,000 thermometers similarly calibrated. The alarmist science will, at the end of the day, stand. ±0.1 ºC
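The commenter’s prediction is easy to simulate. Here is a toy sketch (all numbers invented, and nothing the project has published): give every synthetic station a shared trend plus a station-level offset and noise, and a random subsample recovers nearly the same trend as the full set. Station counts are scaled down from 39,000 and 1,000 to keep it quick.

```python
# Synthetic sketch of the subsampling claim above (all data invented).
import random

random.seed(0)
YEARS = list(range(100))
TRUE_TREND = 0.007  # deg C per year, illustrative

def station_series():
    """One station: common trend + fixed station bias + yearly noise."""
    offset = random.gauss(0.0, 0.5)
    return [TRUE_TREND * y + offset + random.gauss(0.0, 0.3) for y in YEARS]

def mean_trend(series_list):
    """OLS slope of the all-station annual mean."""
    means = [sum(s[y] for s in series_list) / len(series_list) for y in YEARS]
    n = len(YEARS)
    xbar, ybar = sum(YEARS) / n, sum(means) / n
    num = sum((x - xbar) * (m - ybar) for x, m in zip(YEARS, means))
    den = sum((x - xbar) ** 2 for x in YEARS)
    return num / den

stations = [station_series() for _ in range(2000)]   # stand-in for 39,000
subsample = random.sample(stations, 200)             # stand-in for 1,000

full, sub = mean_trend(stations), mean_trend(subsample)
print(full, sub)  # both land close to the shared trend
```

Station biases cancel out of the slope, which is why the subsample agrees so closely; systematic errors that drift over time, the kind discussed elsewhere on this page, would not cancel this way.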

Being as how we are in an interglacial warming period, the earth SHOULD be getting warmer. As it has in earlier interglacial periods.
Sometimes I think a PhD replaces common sense. A number of years ago, a VP of our company brought me a memo prepared by one of his PhD team. It was an analysis of a proposal we had received from an outside expert. The analysis was cogent, complete and … negative. The PhD clearly showed, through analysis, that the proposal could not work. Then he concluded with his recommendation: that we put the guy on a consulting contract to work on it.
The VP and I had a good laugh over it. I told him I thought the PhD was doing good until he got to the end…where he had to exercise good judgment … and common sense.

Anthony, I look forward to reading your impending public vindication. Many thanks for this site.

REPLY: Here’s the thing, the final output isn’t known yet. There’s been no “peeking” at the answer, mainly due to a desire not to let preliminary results bias the method. It may very well turn out to agree with the NOAA surface temperature record, or it may diverge positive or negative. We just don’t know yet. – Anthony

“The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.”

Is this just US based? Or global? How is this possible, considering the cat ate the raw, real, unadjusted data? Also, note the use of the word “deniers” alongside “exaggerators”. To me it seems their “program” is set firmly in the AGW, alarmist camp.

What happens if the results they get don’t fall somewhere between the “deniers” and “exaggerators” as expected?

They’re already making value judgments on others’ positions, and it sounds like they are starting off with an expected conclusion before they’ve even begun. I’m betting they will find exactly what they are looking for, and that it will fall right in line with this “consensus” view they mention … and we will be nowhere closer to the truth.

The comments in the article give me little hope this will be an unbiased approach.

Of all places in the world, and of all universities in the world, how likely is it that a policy-free and objective temperature record would come out of Berkeley, in the San Francisco Bay Area, the perennial hotspot of left-wing activism?

Of course nothing wrong with left-wing politics. Being Scandinavian, I share many of their ideas, but not their ideas on AGW.

Could this be the outcome of the soul-searching of the warmists in the wake of Climategate: let’s make another temperature record, this time seemingly objective, seemingly open and cooperative towards the deniers, but ultimately confirming the good old story?

I don’t know. I have seen too much to be gullible.

After all – being a skeptic means not believing until there is a very good reason to.

Sorry, but I think we’re off on the wrong foot already, if the article itself isn’t biased towards AGW.

The statements “the skeptics – they’re not the consensus”, “… intention of becoming the new, irrefutable consensus”, and “not as open to actual manipulation” give me pause. Not AS open?? So still open to manipulation – and by whom?

I love the stated aim of the project, but it smacks of a built-in bias, and has all the colors of a new premise for a grant proposal machine.

Now if there were a consortium of scientists from both sides of the discussion, I’d be less concerned.

Well yes that was my point! I have been waiting for the paper for a long time! The only updates I get on its status are when you occasionally make reference to it in comments here.

If the paper is done and submitted (to where?) then that’s great news and I look forward to reading it.

To go back to your initial complaint about NASA and NOAA – the publication of your analysis showing there’s a problem for them to be concerned about is the starting point to them addressing it. It seems unreasonable to me to criticise them for not fixing a problem you (or anyone else) have yet to demonstrate.

Congratulations Anthony! A good dataset is a requirement to understand our globe.

There are a number of things in the description of their methodology which caused me an involuntary flinch, but I’ll withhold judgment until I see the results. One particular item was treating local datasets with differences as lower weighted outliers after the areas of agreement were removed. After the examples which we’ve seen on this site, I’m not sure that lower weighting is appropriate. Still, this is a major step toward forcing a real focus on the physical basis of climate, and the data which documents it.

So long as the data is gathered and applied consistently, this can only be a good thing. More accurate information is always better than less. I look forward to seeing the data. It is interesting to note that Dr. Muller is not denying the existence of global warming, but wants to counter the extremes at both ends of the AGW debate with more accurate data.

Are they going to publish yet another “global mean temperature”? I’m surprised that physicists are willing to work with that kind of metric: since the actual heat content (the enthalpy) of the air depends on its water vapor content, which is highly temperature dependent, a world that has a large region of +10 anomaly in the Canadian Arctic has been heated much less than a world that has an equally large +10 anomaly in the tropics or subtropics. But both come out the same if you calculate the mean temperature.
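Espen’s point can be made concrete with a back-of-the-envelope calculation. This sketch uses the standard approximation h ≈ cp·T + Lv·q for moist enthalpy, with illustrative (invented) temperatures and mixing ratios; it is not from any of the groups discussed here.

```python
# Sketch: equal temperature anomalies can carry very different heat.
# Moist enthalpy per kg of air: h = cp*T + Lv*q (values illustrative).

CP = 1005.0   # specific heat of dry air, J/(kg K)
LV = 2.5e6    # latent heat of vaporization of water, J/kg

def moist_enthalpy(temp_c, mixing_ratio):
    """Approximate moist enthalpy in J/kg (temp in deg C, q in kg/kg)."""
    return CP * temp_c + LV * mixing_ratio

# A +10 C anomaly in dry Arctic air: water vapor barely changes.
arctic = moist_enthalpy(-20.0, 0.0005) - moist_enthalpy(-30.0, 0.0003)

# A +10 C anomaly in moist tropical air: warmer air holds far more vapor.
tropics = moist_enthalpy(35.0, 0.030) - moist_enthalpy(25.0, 0.020)

print(arctic, tropics)  # the tropical case carries several times more heat
```

Both anomalies contribute identically to a mean temperature, yet the energy gained differs by a factor of three or more under these assumed humidities, which is exactly the objection to mean temperature as a heat metric.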

This is going to get exciting – there’s no chance that everyone will simply say “well, now we know” and act rationally on the information.

If the data confirms Hansenesque warming, most sceptics simply won’t believe it. And if it shows negligible warming, and in particular shows no accelerated warming in recent decades, then the hockey team et al won’t believe it.

I have my doubts too. With words like ‘consensus’ and ‘denier’, along with phrases like “global warming is real”, it’s hard for me to be optimistic, even if they are offering you a seat at the table, so to speak.

As Steve McIntyre would say; Watch the pea under the thimble (very – VERY closely!!!!)

It could simply be that language like this was used to get the attention from a biased media and scientific community but whatever you do, don’t let your guard down.

REPLY: The word “deniers” was added by the reporter. And global warming “is” real. We expect some warming; my view is that it is exaggerated for political purposes. The key is to find out what the true signal is. – Anthony

In reading the methodology material I was unable to determine how the Urban Heat Island (UHI) impacts are going to be addressed in this study. I would hope that this critical issue is to be evaluated in this independent temperature data study.
Is that the case? If so can someone please explain where this issue is addressed in the methodology. Thanks.

No conclusions about bias should be drawn until the report is published. This article was published in The Daily Californian, not written by the researchers. The fact that these researchers will publish all of their data and code and, as well, attempt to resolve issues uncovered in the surfacestations project is a good start. Rather than criticize this attempt at objective scientific inquiry, the sceptics’ job is to critique the methodology afterwards.

That one jumped out at me as well. I was feeling really good about the prospects for “truth”, whatever it may be, coming out of this exercise until the professor made his “warming is a fact” statement. Hopefully, the mechanisms promised for removing the biases of the observer from the results will serve to make the professor’s stated bias irrelevant.

I think this is fantastic. The data, we hope, will be collected openly, and anyone who wants access will have it, allowing them to spin it any way they want (as expected). At least it will be available. The problem here, though, is the same problem that has always existed: thermometers will provide evidence that the climate changes; they will not be able to show that CO2 is the cause. It’s good though.

So what’s new here? Apparently the globe is warming and the seas have risen and neither of those is what the fight is about anyway.

“There are the skeptics – they’re not the consensus,” Muller explained. “There are the exaggerators, like Al Gore and Tom Friedman who tell you things that are not part of the consensus … (which) goes largely off of thermometer records.”

What science is going to be done when the top dog is already using “consensus” twice in one paragraph?

I suggest that there will be at least 4 factors that will determine whether or not this new effort will have any merit:

1/ How will the UHI effect be handled for cities? UHI “corrections” present a prime opportunity to introduce fudge factors to get the results one would like to see.

2/ Will so-called homogenization be used, allowing temperatures to differ from the actual measurements made at specific sites? The correct approach would be to constrain any fit to reproduce the temperatures actually measured at all input sites.

3/ Will the data be interpolated over vast regions for which there is no site data? Integrating over such regions to calculate a “global temperature” can result in large systematic biases.

4/ Will the global heat content [the moist enthalpy, as Espen pointed out above, quoting Dr. Pielke] be reported along with the global temperature?
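To illustrate the concern in point 3, here is a minimal inverse-distance-weighting sketch (invented coordinates and values; IDW is just one common interpolation scheme, not necessarily what the Berkeley group uses): a region with no stations simply inherits whatever the surrounding stations report, shared biases included.

```python
# Sketch of interpolation over an empty region (illustrative data only).

def idw(target, stations, power=2.0):
    """Inverse-distance-weighted estimate at `target` from (x, y, temp) tuples."""
    num = den = 0.0
    for x, y, t in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return t                      # exactly on a station
        w = 1.0 / d2 ** (power / 2.0)     # weight = 1 / distance^power
        num += w * t
        den += w
    return num / den

# Three coastal stations, all reading warm anomalies; none in the interior.
stations = [(0.0, 0.0, 1.2), (10.0, 0.0, 1.1), (0.0, 10.0, 1.3)]
interior = idw((5.0, 5.0), stations)
print(interior)  # the station-free interior simply inherits the coastal readings
```

The interpolated value is necessarily a weighted average of the inputs, so if all the contributing stations share a warm bias, the “data-free” region acquires that bias too, and integrating over it inflates the global figure.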

I’ve read Muller’s book “Physics for Future Presidents” and I enjoyed it quite a bit – that is, until I got to the chapter on global warming. Up to that point the book had been a fine presentation on our abilities and limitations in energy, space, terrorism, etc., based upon actual physics, but when I got to the global warming chapter it was “AGW is real, trust me, I know what I’m talking about”.

Color me skeptical on Muller’s ability to remain impartial and not be swayed toward the AGW corner.

But there is a youtube video showing a lecture from Muller where he talks about how the Hockey Team lied and he seemed to be quite disgusted with them and that he’d relied upon what they said to form his opinions on AGW so maybe he has turned a corner. But he also starts the lecture with a bunch of AGW stuff so who knows…

Personally, I think we should also be setting up new automated stations to obtain more uniform coverage, which we can use to build 30-60 years worth of high quality new data.

I agree completely, Rick. Individuals today can have a home weather station feeding data into their home computers, which could in turn feed a commingled database elsewhere, freely accessible by all. Ten years from now and beyond, we would have some serious raw data that would be very useful in establishing trends and verifying current projections.

Your comments to Anthony Watts at February 11, 2011 at 8:15 am and February 11, 2011 at 8:39 am are offensive in the extreme. You owe him an apology.

Your first post questioned why Anthony Watts’ paper on the NASA and NOAA global temperature data sets had yet to be published. And he answered that completely. I copy that answer here in full to save others the task of finding it. His reply said:

“We have a paper in peer review, note the difficulties encountered by O’Donnell et al with a hostile reviewer, Steig, and perhaps then you’ll understand why skeptical papers can take much longer to run the gauntlet. Besides, it took us three years with volunteers to get a large enough sample. Menne used preliminary data (mine against my protests), and a sample that was not spatially representative nor contained enough class1-2 stations. That paper was pure politics.
If you can do a better job with zero budget, herding volunteers, in your spare time, for no pay, against a well funded government sponsored consensus, by all means do it. Otherwise wait for our paper. – Anthony”

Your second post ignored that and complained at Anthony Watts by saying:

“To go back to your initial complaint about NASA and NOAA – the publication of your analysis showing there’s a problem for them to be concerned about is the starting point to them addressing it. It seems unreasonable to me to criticise them for not fixing a problem you (or anyone else) have yet to demonstrate.”

Say what!?

Anthony Watts had told you that “We have a paper in peer review”. In other words, his demonstration of the “problem” is submitted for publication, so he HAS demonstrated a “problem”. Furthermore, he has repeatedly presented aspects of the “problem” on this blog and a few minutes search would have shown them to you. And if you were to accept his words you would wait for the submitted paper to be published and then – in the unlikely event that you were capable – you could dispute it.

Put another way, you are calling Anthony Watts a liar by claiming he has “yet to demonstrate” a “problem” when he told you he has demonstrated it.

To use your words, it seems unreasonable to me for you to criticise him on the basis of your unjustified and unjustifiable refusal to believe him, and your public statement that you do not believe him casts aspersions on his veracity. To my mind that means your behaviour is despicable.

A cautious welcome, but I still have my doubts. Garbage in, garbage out: if the global temperature station network is still full of unknown errors, the result will not be reliable. And there is no sign of anyone validating the effects of, e.g., the manual-to-automated transition, when a lot of stations were moved closer to a power source and therefore closer to heat.

On the good side: An institution like Berkeley can’t afford to get it wrong (unlike the UEA)

On the bad side: so many other institutions who ought to have known better have just fluffed the science, so what’s to stop this being another one?

“We believed that if we brought in the best of the best in terms of statistics, we could use methods that would be easier to understand and not as open to actual manipulation,” said Elizabeth Muller, Richard Muller’s daughter and project manager of the study. “We just create a methodology that will then have no human interaction to pick or choose data.”

Lofty goals — I hope they try to fulfill them. The devil will be in the details.

One wonders, though, if the cultural-conformity conditions at Berkeley (or most any other university) would permit this. Any results other than the party line would stir up a green-hornet’s nest.

I’m sorry, but why is NASA doing climate change measurements and Muslim outreach? Looks to me like an expensive duplication of efforts that already occur at NOAA and the State Department: let me guess, Bizerkley is now getting in on the climate change gravy train too? No wonder we’re not getting more bang for the buck and going to the moon where the climate never measurably changes.

Whether they are biased or not, as long as their data, code and methodology are made public, it will be possible to determine the validity of their results, and even (for those who have the time and will) to come up with alternatives that address concerns that might have been missed. That would be a big step forward indeed.

“There are the skeptics – they’re not the consensus,” Muller explained. “There are the exaggerators, like Al Gore and Tom Friedman who tell you things that are not part of the consensus … (which) goes largely off of thermometer records.”

OK, if this is an accurate quote then I’m also very concerned.

“There are the skeptics – they’re not the consensus,” ???

Huh? Is this implying that “the skeptics”, of whom I believe I am one, do not accept the “consensus” that the globe has been warming since the end of the LIA?

“There is no convincing scientific evidence that human release of carbon dioxide, methane, or other greenhouse gases is causing or will, in the foreseeable future, cause catastrophic heating of the Earth’s atmosphere and disruption of the Earth’s climate. Moreover, there is substantial scientific evidence that increases in atmospheric carbon dioxide produce many beneficial effects upon the natural plant and animal environments of the Earth.”

We’ve run into one of those define-“skeptic”-and-“warmist” problems again. In my opinion, only a non-skeptic states or implies that skeptics do not believe that the planet has been warming. What we deny is that anthropogenic CO2 emissions are the primary cause; they may not even be a minor contributor.

Good, meaningful global temperature data will show what we skeptics have more or less agreed among ourselves: that the planet is warming as it recovers from the LIA, and has been doing so in a very natural manner.

Of course global warming is real. We know this from historical evidence: there are no more frost fairs on the Thames. The question is, ‘how much, and what caused it?’ An improved set of data may answer the first part, but that absolutely depends on how much we can trust the data. It will do little to bring us to the cause. It may tame the exaggerators, and that will be good. In the end, real science will give us the answer. That the Supreme Court of the US ruled that CO2 is a harmful gas is just unbelievable. How many science degrees do these judges have?
This debate cannot be settled until we have a clear understanding of how climate works and, I suspect, that is many years in the future. Meanwhile an improved dataset can only assist us. (Good luck, Anthony.) It might even give a pointer to the next ice age, surely just around the corner. (Keep pumping the CO2, folks – the alarmists might be right, and I’m too darned old to cope with an ice age.)

It’s coming out of Berkeley so one has to be cautious about any expectation that the folks concerned are going into this unbiased, but if they’re willing to fully document and honestly support what they do, then they’ll be making a worthwhile contribution.

I’ve been mulling over for a while just how one would go about creating a database of temperatures, where each entry was not just a value but a complete biography of the data point including time, location, any related imagery, qualitative metrics including confidence intervals, annotations, etc.

Where it gets sticky is being able to assign essentially a GUID to each data point and its associated metadata, so as to be able to track its use through all subsequent aggregations and analyses that rely on that value, and to ensure that any qualitative and annotative metadata automatically propagates through those aggregations and analyses and cannot be short-circuited by, say, someone whose statistical and data-warehousing expertise may be, say, a bit short.

The basic data structures involved are easy, but the analytical and processing side stumbles – if you (for instance) sum a range of values you also have to generate a GUID, data structure, and biographical metadata for the sum as well as the one-to-many relationship to the GUIDs of all the values you just summed.

Obviously, maintenance of all this overhead would be necessary only as a repository for ‘published’ results, but it still strikes me as some distance beyond where we are now.
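A minimal sketch of the commenter’s idea (my own illustrative design, not an existing system): each value carries a GUID plus biographical metadata, and any derived value gets a fresh GUID that links back to every input, so qualitative flags propagate through aggregations and cannot be silently dropped.

```python
# Provenance-tracking sketch: every value has a GUID; derived values
# record a one-to-many link to their inputs and inherit their flags.

import uuid
from dataclasses import dataclass, field

@dataclass
class DataPoint:
    value: float
    metadata: dict = field(default_factory=dict)   # time, location, flags, ...
    parents: tuple = ()                            # GUIDs of inputs; empty for raw data
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))

def aggregate_sum(points):
    """Sum a set of points, recording the one-to-many parent relationship."""
    total = sum(p.value for p in points)
    # Propagate qualitative flags: the sum inherits every input's flags.
    flags = sorted({f for p in points for f in p.metadata.get("flags", [])})
    return DataPoint(value=total,
                     metadata={"op": "sum", "flags": flags},
                     parents=tuple(p.guid for p in points))

raw = [DataPoint(12.3, {"flags": ["urban"]}), DataPoint(11.8, {"flags": []})]
s = aggregate_sum(raw)
print(s.value, s.metadata["flags"], len(s.parents))
```

Every further aggregation repeats the same pattern, which is exactly the overhead the comment describes: each derived value needs its own record and its own parent links, so the bookkeeping grows with the depth of the analysis.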

I can’t see how using thousands of stations with all the variables involved will give us an accurate indication of where global Ts are going.
Adjustments will need to be made for station moves, UHI effects etc etc.
This will become just another point of back-and-forth arguments.

I would have preferred just using stations with very long unbroken records, such as the Central England record, regardless of how few of these stations there are.
After all, CO2 should be doing its “work” without fear nor favour all over the world, in all seasons and during all natural cycles such as ENSO, the PDO and the AMO. If we have just a handful of reliable records of 60 years or longer, that should be enough to give us a good indication of what’s happening to global Ts.

An indication is the best we can hope for with the current measuring systems in place.

“Your comments to Anthony Watts at February 11, 2011 at 8:15 am and February 11, 2011 at 8:39 am are offensive in the extreme. You owe him an apology.”

I don’t believe Anthony needs you to be offended on his behalf. No offence was intended, and none was apparently taken.

“Anthony Watts had told you that “We have a paper in peer review”. In other words, his demonstration of the “problem” is submitted for publication, so he HAS demonstrated a “problem”. “

When the paper is published we’ll see what’s up with that (a little joke there, hope you enjoyed it).

REPLY: Oh, I’m plenty offended, but I tried to be polite. I’ve stuck my neck out, done the work, recruited co-authors, argued the science in reviews, and put my name to my words. You on the other hand, snipe from the shadows, contributing nothing. That’s the real joke, and it’s on you. – Anthony

Scan the page and you see, “Despite efforts to stabilize CO2 concentrations, it is possible that the climate system could respond abruptly with catastrophic consequences” followed by extensive discussion on climate engineering projects.

The integrity of the research team notwithstanding, further inquiry into the nature of Novim Group’s mandate is warranted.

From this layman’s point of view, it only needs one thermometer to prove the global temperature is rising. That thermometer must be placed in a desert, far from human habitation and in an open environment. Then simply record the MINIMUM temperature. After a few years of readings it should become apparent whether the minimums are increasing, however slowly.

If I had to do an analysis of surface temperature, I would use the information about wind. If there is a very local UHI, it should be washed away by wind. Also, if the wind is strong enough, the air will have passed over a larger region in a given time, so the temperature should be more representative. I wonder how much effort has been made to understand how to deal with the effect of wind on temperature measurement.

It is great to do the best statistical analyses, but just like computer models, statistics alone don’t go anywhere. I guess for now we have to wait and see what they have done exactly.

They’re going to use 39,390 temperature stations!! Anthony already had enough trouble compiling and analyzing data on the 1221 U.S. stations out of the 3,000 world-wide stations, detecting where the potential biases are on data collection and analysis, and showing they were low quality stations. Now they’re adding 36,000 new stations?! From flippin’ where? And what’s the quality of the databases they’ll receive? Handwritten records? Electronic (yeah, right!)? Going back how many years? How many station changes, and how well documented are these?

Anthony, don’t get suckered like Delingpole and Monckton did recently by trusting these guys (they are from Berkeley, after all. Can you imagine what would happen to them on campus if they found global warming wasn’t as bad as thought? They’d be drawn-and-quartered and the survivors of the purge hounded off campus. Careers over!). My suggestion: grab from their database the data for the stations you’ve inspected during surfacestations and compare them to your database. Maybe it’ll give you info you didn’t have yet. Although I would guess that if they were going to tweak data, they wouldn’t be dumb enough to do it on the most-scrutinized 1,221-station U.S. subset of the 39,390-station dataset.

Personally, I think this whole Berkeley effort is going to degenerate into farce. If the last 30 years of surface temp records don’t follow the satellite pattern, does that nullify the surface or the satellite data? If the surface temperature record climbs faster than the satellites, are we to “re-calibrate” the satellites to higher levels? If the surface temperatures are lower than the satellites, do we tweak up the surface temperatures to match the “gold standard” satellites? Or the reverse: admit major biases in the surface temperature records and adjust those back 150 years to show lower global warming during the surface thermometer record, but accept the boost in temperature during the satellite era of the last 30 years?

One thing for sure – if this study shows a big boost in temperature over the last 150 years or simply “confirms” the AGW global warming rise, it’ll be front page in the MSM for weeks.

Looking at the methodology described at the Berkeley site, they do not even mention systematic error. Systematic error inevitably contaminates the surface temperature record. If they neglect to discuss or evaluate that error, they’ll end up producing yet another centennial global temperature anomaly trend with plenty of statistical moxie but with no physical meaning.

There is no way to remove systematic error from an old data set. One can only estimate how large it might have been, and put conservative uncertainty bars on the result.
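A conventional way to carry such an estimate, for what it is worth, is to add the statistical and systematic terms in quadrature. A minimal Python sketch (the 0.05 °C and 0.2 °C figures below are purely illustrative, not numbers from any study):

```python
import math

def total_uncertainty(sigma_stat, sigma_sys):
    """Combine independent statistical and systematic uncertainties in quadrature."""
    return math.sqrt(sigma_stat ** 2 + sigma_sys ** 2)

# A small statistical error is quickly swamped by an estimated systematic error:
# 0.05 C statistical with 0.2 C systematic gives about 0.21 C total.
sigma = total_uncertainty(0.05, 0.2)
```

Note that the total can never shrink below the systematic term, which is the commenter's point: no amount of extra data averages it away.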

In either case, whether ignoring the systematic error in the 20th century record or adding an estimated uncertainty width, there is no doubt that the global (or any local, for that matter) centennial temperature anomaly trend will be no better than almost entirely spurious.

If the Berkeley group ignores that empirical truth, their product will only kick up more controversy and be yet one more entry in the obscure the issue sweepstakes.

UC Berkeley is well known for their politically unbiased faculty rather than the decidedly libtard demolib bias amongst the faculty of most other institutions of higher learning. Finally we can be assured of getting the truth. /sarc

My initial reaction was that this is a good development, but it appears likely that this will be a missed opportunity to examine matters from scratch.

Personally, I consider the idea of a global average temperature set to be absurd. It would be more sensible to have a data set dealing with temperatures individually, on a country-by-country basis. After all, every country will be affected differently by rising temperatures, and the global distribution of rising temperatures may point to a cause behind the temperature rise, a cause which may be lost or not be apparent when looking at the data globally.

Further, it would be sensible to compile such a data set based only upon good-quality raw data that requires no obvious adjustment, i.e., only Class 1 station data, preferably only Class 1 rural data. This might mean fewer stations, but sometimes less is more. A few accurate and uncorrupted stations may better tell what is truly going on.

Of course, however they compile the data set, it should be compiled in such a way that one can do an analysis on subsets of the data, i.e., select only the rural data, or only the urban data, or a combination of both. Similarly, only Class 1 station data, only Class 2 station data, only Class 3 station data, etc., and a combination of all of these.
Compiling the data set in this manner will help analyse what is going on.

Would someone please explain to me where he found the 39,000 stations? Sure, having more stations has got to be much better than the shoddy and manipulated GHCN data we have now, but we still need quality control. What percentage of all these stations are properly sited, etc.?

“Note: since there’s been some concern in comments, I’m adding this: Here’s the thing, the final output isn’t known yet. There’s been no “peeking” at the answer, mainly due to a desire not to let preliminary results bias the method. It may very well turn out to agree with the NOAA surface temperature record, or it may diverge positive or negative. We just don’t know yet.”

This is science! This is one of the most BASIC tenets of statistics! I hope people will pay greater attention to this seemingly innocuous statement, for it is part of the very core of all valid data gathering, processing, and interpretation.

Bravo for realizing and upholding this staple of the scientific method.

“Alan Clark says:
February 11, 2011 at 9:17 am
Personally, I think we should also be setting up new automated stations to obtain more uniform coverage, which we can use to build 30-60 years worth of high quality new data.”

“RickA says:
February 11, 2011 at 8:09 am”

I agree completely, Rick. Individuals today can have a home weather station feeding data into their home computers, which could in turn feed a commingled database elsewhere, freely accessible by all. Ten years from now and beyond, we would have some serious raw data that would be very useful in establishing trends and verifying current projections.

NOAA has already done that. Over the past few years they’ve finished installing about 114 stations around the U.S. (a few doubled up for quality control), spaced roughly equidistantly around the country, fully automated and in areas not subject to UHI, trees, whatever. Class 1 quality at each site (though their fencing looks a little weird, but that’s just me). Their website says, IIRC, that they want to collect about 30-50 years of data so they can detect reliable long-term trends in the U.S. I suspect that initial contacts by Anthony were a motivating factor.

To go back to your initial complaint about NASA and NOAA – the publication of your analysis showing there’s a problem for them to be concerned about is the starting point to them addressing it. It seems unreasonable to me to criticise them for not fixing a problem you (or anyone else) have yet to demonstrate.

Anthony has very clearly demonstrated huge problems in the sitings, spacing from buildings, nature of the surroundings, and other important issues affecting many, many, many temperature stations. Nor was he the first, Roger Pielke (IIRC) demonstrated the same thing for Colorado stations a few years before. Note that these are problems according to the official guidelines for surface stations, not just some issues that Anthony or Roger made up.

Now, if NOAA and NASA could pull their heads out of their fundamental orifices and put down their models and look out the window at the real world, surely those well-documented problems with their data collection apparatus would be a, what did you call it …. oh, yes, a “starting point to them addressing it”.

And in fact, in any well-run operation, the starting point would have been the NOAA/NASA internal evaluation of the siting of their ground stations. For Anthony to have to document the ground stations is a ringing indictment of the people that you so vigorously defend. They have obviously not done their jobs. Why do you defend that?

And for you to attack him with the fatuous claim that he has not provided anything for NOAA/NASA to use as a starting point is, to put it mildly, evidence of a serious misunderstanding on your part … Anthony has done their job for them, and deserves your thanks, not your opprobrium.

There still remains the problem that one cannot average intensive variables!
I have 200 ml of water at 80°C and a liter of water at 10°C. What is the average volume? First I add to get the total: 200 + 1000 = 1200 ml. This is okay; it is the total volume. Then I divide by two to get the average of 600 ml.

What is the average temperature? First I add to get the total. 80 + 10 = 90°C. This is the total temperature????? WattsUpWithThat!?! This is meaningless.
Now if both samples were exactly the same size and composition, then the total heat in each one could be measured by knowing the temperature. One could calculate the average heat in each one and compute the temperature if they were mixed together. Mathematically it would look like averaging the temperatures. This only works for exactly the same size and composition and no phase changes, volume changes, etc.
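The water example works out as follows in a short Python sketch: the naive average of the two temperatures is 45 °C, while the equilibrium temperature of the actual mixture (weighting by the amount of water, and assuming constant heat capacity and no phase changes, as the comment stipulates) is only about 21.7 °C:

```python
def naive_average(temps_c):
    """Plain arithmetic mean of temperatures: not the temperature of anything physical."""
    return sum(temps_c) / len(temps_c)

def mixing_temperature(volumes_ml, temps_c):
    """Equilibrium temperature when water samples are mixed.

    Valid only for identical composition and constant heat capacity, so the
    amount of water is the correct weight in the energy balance.
    """
    total_heat = sum(v * t for v, t in zip(volumes_ml, temps_c))
    return total_heat / sum(volumes_ml)

# 200 ml at 80 C mixed with 1000 ml at 10 C:
naive = naive_average([80, 10])                      # 45.0
actual = mixing_temperature([200, 1000], [80, 10])   # about 21.7
```

The gap between the two numbers is exactly the commenter's point: averaging an intensive variable without the right weights produces a figure with no physical meaning.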

One cannot claim a 5°C change in bone-dry Arctic air with a dew point of -50°C is the same as a 5°C change in tropical air with a dew point of +25°C. The energy change per unit mass is different, and averaging the temperature changes is invalid. Don’t even think about the energy change per unit volume. Mixing identical volumes of incompressible fluids is one thing; mixing volumes of air at different densities is different. Do you mix them reversibly or irreversibly? It makes a difference.

On the face of it this is not what I expect from the denizens of my old home town. Be prepared to learn that any 1000 randomly chosen thermometers selected from the full set and calibrated over time tell the same story as all 39,000 thermometers similarly calibrated.

#####

Be prepared to learn that any 100 randomly chosen thermometers tell the same story.
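That claim is easy to test on synthetic data. In the Python sketch below (station count, noise level and trend are all invented for illustration), every station records the same underlying trend plus independent noise, and a random subsample of 100 stations recovers essentially the same trend as the whole network:

```python
import random

def trend_per_year(series):
    """Ordinary least-squares slope of an annual series."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = sum(series) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

random.seed(0)
years = 50
true_trend = 0.02  # degrees per year, the shared signal

# 1,000 synthetic stations stand in for the full network: common signal
# plus independent per-station noise.
stations = [[true_trend * t + random.gauss(0, 0.5) for t in range(years)]
            for _ in range(1000)]

def network_mean(stns):
    return [sum(s[t] for s in stns) / len(stns) for t in range(years)]

full_trend = trend_per_year(network_mean(stations))
sub_trend = trend_per_year(network_mean(random.sample(stations, 100)))
# Both land very close to 0.02: the independent noise averages out,
# the shared signal does not.
```

Of course this only shows that subsampling preserves a signal common to all stations; it says nothing about a bias shared across the whole network, which is the complaint many commenters here are actually making.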

I think she (JC) is genuinely trying to balance things. Of course she doesn’t subscribe to the deception and corruption of Mann and Schmidt, but still, she does not oppose their view of the coming catastrophe.

For whatever reason. Keep in mind that JC’s rise to fame came with Hurricane Katrina.

She has a chance now to be a genuine broker between the two factions, or to disappear as just another policy-driven advocate.

“In fact, in the period where rising temperatures can be attributed to human activity, the temperature has only risen a little more than half a degree Celsius, and sea levels, which are directly affected by the temperature, have increased by eight inches.”

What is the basis of that claim? I thought tide markers established in the early 1800s still held true.

Would someone please explain to me where he found the 39,000 stations? Sure, having more stations has got to be much better than the shoddy and manipulated GHCN data we have now, but we still need quality control. What percentage of all these stations are properly sited, etc.?

##########

Search back through comments I’ve made over the years and you’ll find the links
to many of the sources. The nice thing about most of the data is that it is raw and unhomogenized. No adjustments.

You get the same results using this expanded data set as you do with GHCN.

As for siting bias? Well, we have one field study showing the magnitude.

“REPLY: The word “deniers” was added by the reporter. And global warming “is” real. We expect some warming; my view is that it is exaggerated for political purposes. The key is to find out what the true signal is. – Anthony”

Ah, ok then. I understood that the opinion came from the scientist. At any rate, I am glad to see you were offered the chance to contribute.

I tried mentioning your “surfacestations.org” website in an editorial I submitted a while ago, and it never saw the light of day. My article “a response from a climate skeptic” did, however, make it through a few months earlier (which is what made me think the paper might be open to more alternative views). It’s too bad that people are giving you crap over the surface stations project, because I feel it’s much closer to true science than most of the big-budget productions out there today.

As for global warming being real, I agree with that too. However, the meaning behind the words can be very different depending on the context, which is what made me wary.

Muller: ““There are the skeptics – they’re not the consensus,” Muller explained. “There are the exaggerators, like Al Gore and Tom Friedman who tell you things that are not part of the consensus … (which) goes largely off of thermometer records.””

Muller is likely a good physicist, but he may have his own political issues. No one is restricted to only repeat what there is a consensus on. It is reasonable to suggest that the flooding in Australia may have been related to AGW. Obviously there is not a consensus on an event that just happened. It is not reasonable to say the flooding there is definitely linked to AGW. Sometimes people only hear in black and white and just ignore the caveats.

“Muller came to the conclusion that temperature data … was the only truly scientifically accurate way of studying global warming. … Without the thermometer and the temperature data that it provides, Muller said it was probable that no one would have noticed global warming yet.”

If this is what Muller thinks, he is wrong. The sea ice and glacier changes would be noticed even if the thermometer had never been invented. So too with the many ecological changes. It is hard to ignore the crabs in Antarctica. As for extreme weather, if concern for AGW did not exist, it is unlikely the recent extreme weather events would cause us to suspect AGW, so I would agree with him there. Perhaps that was all he intended. But since we do know AGW exists and will likely impact weather at some point, you are not going to stop people from looking for a connection.

I have been lurking here and a few other sites for several weeks now, trying to get my arms around this AGW issue, and like 95% (my supposition) of the people without any science background, I remain profoundly confused. I read here regularly, and elsewhere, that the earth is warming. I also read here, and elsewhere, that for the last 12 years the earth’s temperature has been stable or cooling. It seems to me that these do not have to be essay questions so I will phrase them simply:
Is the earth warming?
Is the earth cooling?
Is the earth currently warming but at a decreasing rate from before 1998?

I am impressed by the breadth and depth of knowledge I find here, and feel somewhat intimidated by it when I am posting such simple questions. Can anyone enlighten me? Is it remotely possible that this Berkeley study could?

“Jack Maloney says:
February 11, 2011 at 8:12 am
‘We just create a methodology that will then have no human interaction to pick or choose data.’

In creating a methodology, human interaction to pick or choose data is inevitable. One can only hope that the BEST Study methodology is more transparent than the current ones. And that its authors are more open to constructive criticism.”

No need; Mother Gaia already took care of that; and both the weather and climate are now exactly the way she said they should be. Problem solved.

Coming after Demetris Koutsoyiannis’ work, among others, showing that GCMs are completely unreliable, the fact that people can still write something like that shows a complete lack of understanding of the source of meaning in science.

Here’s the strictly scientific view on the cause of recent climate warming: no one knows.
Here’s the strictly scientific view of the effect on climate of recent rise in atmospheric CO2: no one knows.

In all the hoopla about AGW, no one knows what they’re talking about. No one.

Built in bias, at least by the Daily Cal Senior Staff Writer Claire Perlman.
“The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.”

Ms. Perlman,

Let’s have a look at the data, see the various analyses that derive from it, and have a good old fashioned debate about all of it, before we speak of “irrefutable consensus” and start calling people “deniers”.

Anthony,

Thank You and Many Thanks to your collaborators! We all look forward to a more reliable data base. Is there anything an average Joe or Josephine might do to assist this?

The answer to your questions depends completely on the time scale you use. There is no one answer.

If you talk about the past 3 months, it’s cooling. The past 3 years, it’s warming. The past 12 years it’s cooling. The past 30 years it’s warming. The past 1,000 years, it’s slightly cooling, or rather heading back up to “normal” on that baseline.

That’s the problem with the whole thing: the sense of scale. If you look back 10,000 years, we are in a warm, interglacial period coming out of an ice age, and nothing about this period is warmer or more unusual than any other interglacial period. We’re about average in both the temperature and the length of this period so far. So it could get warmer. It could last longer. Or things could suddenly get a whole lot colder for a few thousand years.

Saying Man has anything to do with the signal is a difficult assertion. But that is what they are trying to do by looking at the sudden upward spike that was 1998; that is, the warming of the past 30 years, which coincides with the fact that we now have satellite data and global temperature coverage reaching back only about that far. We have a very short memory of and experience with climate, so sudden changes spook us, and we think maybe we are at fault, maybe we did something wrong.

That’s what the scientific discussion is about: is mankind having an effect, and to what degree, by adding 3% more CO2 to the air per year than purely natural sources would have, so it is said?
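The point above about time scale can be demonstrated with a toy series in Python (the 0.02 C/yr trend and the 10-year cycle are invented numbers, not real data): the sign of a fitted trend flips with the window you choose:

```python
import math

def slope(series):
    """OLS slope per step of an evenly spaced series."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = sum(series) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# A century of fake data: slow warming plus a 10-year natural cycle.
record = [0.02 * t + math.sin(2 * math.pi * t / 10) for t in range(100)]

century_trend = slope(record)     # positive: the long-term trend dominates
short_trend = slope(record[3:8])  # negative: the cycle dominates a 5-year window
```

Neither number is "wrong"; they simply answer different questions, which is why "is the earth warming?" has no single answer without a stated time scale.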

I wish those who would like to tread the middle zone would not use statements like:

“There are the skeptics – they’re not the consensus,” Muller explained. “There are the exaggerators, like Al Gore and Tom Friedman who tell you things that are not part of the consensus … (which) goes largely off of thermometer records.”

First science is not about consensus. Copernicus was a skeptic and not part of the 1000 year “consensus” of geocentricity. It was those who were part of the “consensus” who turned out to be totally wrong.

Why is having 39,000 stations with bad data an improvement? I accept as a given that physical location, instrumentation and time of observation have introduced errors into the existing data. I do not accept the premise that more observations statistically reduce the error. I would much rather see random validation of existing stations by putting up three stations near an existing station, but in locations that meet the NOAA siting criteria. Data could be taken for a few months to a year and compared against the existing station. For just over $100 I got a low-end weather station that measures inside and outside temperature, wind speed and direction, dew point and rainfall, and has a wireless console that stores data for several weeks. I would hope that for a few hundred dollars one could get a highly accurate instrument that would record and store temperature only. In all of the discussions of the problems with siting of the stations, I have not seen any description of the collection of empirical data.
Anthony: Looking forward to your paper. Why is it taking so long, when Menne was able to take some of your preliminary data and get his paper out so quickly?
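The side-by-side validation proposed above could be scored with very little code. A hedged Python sketch (all readings invented for illustration): compare the existing station against the mean of three hypothetical well-sited neighbours recording over the same period:

```python
def station_bias(candidate, references):
    """Mean offset of a candidate station from the average of reference
    stations, all recording over the same dates."""
    ref_mean = [sum(vals) / len(vals) for vals in zip(*references)]
    diffs = [c - r for c, r in zip(candidate, ref_mean)]
    return sum(diffs) / len(diffs)

# Hypothetical daily means (C): the existing station reads about 0.8 C warm.
existing = [15.8, 16.9, 14.7, 15.9, 16.8]
well_sited = [
    [15.0, 16.1, 13.9, 15.0, 16.0],
    [15.1, 16.0, 14.0, 15.2, 16.1],
    [14.9, 16.2, 13.8, 15.1, 15.9],
]
bias = station_bias(existing, well_sited)  # roughly +0.8
```

A real comparison would of course run over months of matched observations, but the bookkeeping is no more complicated than this.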

I have been familiar with Muller’s web sites for a couple of years. He publishes the ice core data that screams “no unusual warming”, and yet he toes the global warming party line. He does not fudge the data, and he has my respect.

I’m a bit concerned about the handling of UHI, but I’ll reserve judgement for now, at least (especially since you said you had a good deal of input).

On a similar note, though, I would be quite interested in seeing a series of comparative graphs for each station, with annotations as to their location (urban, rural) and notes on when changes were made to said stations (i.e., replaced equipment, moves, etc.).

Is there anything of this nature available, or in the works?
Or, is it something that could possibly be worked on in conjunction with your surfacestations project? (In which case I would be interested in helping)

Hey Sharperoo,
Do yourself a favor.
Find graphs that plot loss of thermometers to rise in temperatures.
Find satellite “data loss” areas on Earth that correspond to surface temp data loss.
Find record low temps in heat-island cities.
Most of that has been posted on this site in the last month.
The more data points the better.
If temp history is your thing then, record data from the same locations for a long long time, don’t move the thermometers, and don’t remove them.

Another decision to be made was whether to include data from cities, which are known to be warmer than suburbs and rural areas, said team member Art Rosenfeld, a professor emeritus of physics at UC Berkeley and former California Energy Commissioner.

“One of the problems in sorting out lots of weather stations is do you drop the data from urban centers, or do you down-weight the data,” he said. “That’s sort of the main physical question.”

Drop the urban stations; that is the obvious choice if you want a clean, unbiased study. And how will they down-weight them? Deduct 0.05 C from the result?
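For what it is worth, "dropping" and "down-weighting" are both just choices of weights in a weighted mean, which is presumably why Rosenfeld calls it the main physical question. A toy Python sketch with invented anomaly values:

```python
def weighted_mean(values, weights):
    """Weighted average: a weight of 0 drops a value outright; equal
    weights reduce to the plain mean."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

rural = [0.40, 0.45, 0.42]   # hypothetical anomalies in C
urban = [0.80, 0.75]         # hypothetical UHI-inflated anomalies in C

# Drop the urban stations outright...
dropped = weighted_mean(rural + urban, [1, 1, 1, 0, 0])
# ...or down-weight each urban station to a quarter of a rural one.
down_weighted = weighted_mean(rural + urban, [1, 1, 1, 0.25, 0.25])
# With these numbers, any nonzero urban weight pulls the result warmer.
```

The substantive question is how the weights are chosen and justified, which is exactly what the methodology blurb leaves unspecified.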

A wise person once said, “Trust, but verify!”
The idea sounds great, but from what Dr. Muller has revealed so far, I am not optimistic. Perhaps the best news is the promise that the code and data sources will be open, so that we can see what is done about UHI concerns, siting issues, TOB, missing data, station moves, homogenization, and extrapolation.

“so deniers and exaggerators alike can see the numbers.”
This is from Berkeley, so there should be no surprise at the slant. But letting them constantly get away with using the pejorative “denier” label is just like letting them get away with the “tea bagger” label. They’ll keep using it until it sticks. Hey? If we use the “N” word enough, will that make it meaningless?

Most of us started as ‘lurkers’ at WUWT and other sites. The debate is more civil and reasoned here, so we ‘come out of the AGW closet’ here. Welcome, seeker of more knowledge!

1. Is the planet warming? Yes, and it has been (with fits and starts) since the end of the last glacial epoch about 10,000 years ago. The warming did not progress uniformly. There were shorter periods of cooling and warming within that 10,000-year period.

2. Is the planet cooling? Within the general warming trend of the last 10,000 years, there have been notable cooling periods. The most often named is “The Little Ice Age”, which started somewhere around 1400 AD and may have ended as late as 1915 AD. Most of the Anthropogenic Global Warming debate centers on the shorter warming trend since then.

3. Is the earth currently warming but at a decreasing rate from before 1998? I’ll have to let others respond. My lunch break is inadequate!

As Ken G (and others) have said, this does not feel right. “They are already making value judgments on others’ positions, and it sounds like they are starting off with an expected conclusion before they’ve even begun. I’m betting they will find exactly what they are looking for, and that it will fall right in line with this ‘consensus’ view they mention… and we will be nowhere closer to the truth.
The comments in the article give me little hope this will be an unbiased approach.”

“There’s been no “peeking” at the answer, mainly due to a desire not to let preliminary results bias the method.”
“in the period where rising temperatures can be attributed to human activity, the temperature has only risen a little more than half a degree Celsius”
“Global warming is real”
Hmmmmm…

Anthony – you describe it as “Good News”. I truly hope that you are right, but belief is pending.

This new enterprise looks good and I wish them well, BUT (there is always a but): when is the network of surface stations going to be built, or existing stations upgraded, that conforms to the standards already laid down? To my admittedly simple and unscientific thinking, I have always been a great believer in starting at the beginning; therefore a high-quality surface stations network should be the first priority in such an enterprise. Messing about with statistics again, but with more existing stations included, seems a very complicated way of achieving an unbiased measurement on an ongoing basis. To simplify the matter: a broken ruler is damned hard to get meaningful measurements from.

I’m sorry, I have not read all the comments, so someone may have pointed this out. Has anyone looked into the organization behind this project, http://www.novim.org/ ? I have not had time to study up on them, but it seems from a somewhat quick review of their web site and their participants that they are not exactly neutral observers. I do not want to falsely cast aspersions on what could be a very worthwhile project, but on the other hand I am a bit leery of this based upon what I have found so far… just saying

As Diane’s relationships with her anorexic Greenpeace daughter Phoebe and her tweedily corrupt professor deteriorate, her scepticism turns to hectoring. “Green is proxy for anything. Class war. Hate your dad. Hate America. It’s the perfect religion for the narcissistic age.”

Her public profile grows and she tells Jeremy Paxman on Newsnight that “the real global warming disaster is that a small cohort of hippies who went into climate science because they could get paid for spending all day on the beach smoking joints have suddenly become the most powerful people in the world”.

———————-

I think we’re beginning to win this argument – as ‘art’ often leads the way in detecting new trends.

The warmist camp says that the current datasets of GISTemp and HadCruT are good enough. Hansen says that satellite data does not need to be nor should it be added into the land temperature data as it does not measure the same thing (nor, I suppose, show the same trends that exist in the land data). ARGO data is not used for … I’m not sure why, but it is also not good enough to be considered. In other words, all is good and there is no reason to re-do the historical data.

This 39,000 station data review will take a long time and, as far as the warmist camp is concerned, is unneeded and irrelevant. Right now New Zealand has had its NIWA data more-or-less confirmed by the Australian BOM. So, the reasoning would go, why are you doing this?

If you/we want the new analysis to be considered worth paying attention to, something small and clear must be done that can incontrovertibly be held up as a challenge in court, in Congress, or on the cover of Time magazine. If, as the New Zealand Science Coalition says, the NIWA data is as bad as it appears, and New Zealand is not warming at 0.9K since 1988 (or whenever) and is not the fastest warming country on the planet, then New Zealand is the place for a first attempt at record repair. A first-area comparison that is devastating to the prior New Zealand history will make further work relevant. (Australia and Canada would be second and third on the list, I’d suggest, followed by the continental USA with its UHIE problem.)

We have been led to believe that skeptics have solid data that conflicts with the Hansen-Gore CAGW meme. Can we not focus somewhere and bring that out now, before the EPA and others get further into our homes? Can we not say, “This is wrong! and tomorrow I’ll be showing you how the rest of it isn’t right, either.”

If the data is bad globally, it must be bad regionally. An initial small region showing how manipulated the public has been would be more than a shot across the bows, for both sides.

Anthony – like you, I am happy to judge the project on its own merits when it meets the light of day.

The NOVIM group, private sponsors of this study, seem (to me anyway) to be rather odd. They use Alarmist Warmist language, and seem to be interested mainly in things like rapid geo-engineering solutions to “climate change”. On the face of it, this suggests that they may have a financial interest in the outcome of related research. Their people seem young (e.g. PhD in 2005).

May I suggest it would be wise to find out who are the private funders of NOVIM? Other readers may be able to help.

“Prejudice” a. An adverse judgement or opinion formed beforehand or without knowledge or examination of the facts.
b. A preconceived preference or idea.
People, let’s see what this program produces, let’s see the raw data, the methodologies of data interpretation etc. before we rush into giving judgemental opinions.
Is this not what Anthony, McIntyre et al have been working for all this time? Let the group publish their findings, along with all of the associated data, and then let us form some sort of conclusion.
Remember, what is being published in this post is written by Claire Perlman
Daily Cal Senior Staff Writer, not anyone from the group involved in this project.
The proof of the pudding is in the eating. Shall we wait for the pudding to be served up before giving our opinions as to its quality?

“The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.”

Is this just US-based? Or global? How is this possible, considering the cat ate the raw, real, unadjusted data? Also the use of the word “deniers” alongside “exaggerators”. To me this suggests their “program” is set firmly in the AGW, alarmist camp.
=============

Hmm, yes I caught the deniers language give away too. Neither did I like the “irrefutable consensus” bit.

As has been said many times, science does not work on consensus, that’s a political term. Neither can any such global average from a huge stack of crap quality data, never intended for climate, be considered irrefutable.

When crap from tens of thousands of homes gets churned up and homogenised at the local sewage treatment utility, you still end up with a tank full of shit. The only consensus is that it’s irrefutably smelly.

I do not find anything in their methodology blurb that says how they will deal with UHI, airport temperatures and sub-standard weather stations, but the comment in this article about “down weighting” is both vague and worrying. Down-weighted bad data is still bad data.

So it seems the object is to water down the UHI by some unspecified factor and hope we’ll accept the ensuing warming as “irrefutable”. Does not sound too promising.

If they want to call it BEST, I hope they can live up to it. Though “best” is only relative; seeing the opposition, they are not setting the bar too high in trying to do better.

Are you really saying that 10 stations with long histories can adequately tell you how the temperature of the whole earth has progressed, rather than just point to the fact that there has been some warming, which of course is to be expected as we come out of the LIA?

First, it is good that a group of competent scientists — including statisticians –are attempting to be honest brokers in retailing ground station measurements. Second, the process involved is entirely transparent. There is absolutely nothing to whine about here. Please move along and find something else to bitch about.

Why not do a positive control? Deliberately pick sites that fail the criteria, inside growing cities, at airports and the like, and see what contamination looks like with respect to Tmax/Tmin and changes in the yearly warming/cooling rates.
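Such a positive control could be scored very simply: fit a trend to each group's Tmin series and difference the slopes. A Python sketch with fabricated, perfectly linear series just to show the bookkeeping:

```python
def slope(series):
    """OLS slope per step of an evenly spaced series."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = sum(series) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# Hypothetical 30-year mean Tmin series (C): a deliberately bad site inside
# a growing city versus a nearby compliant rural site.
urban_tmin = [8.0 + 0.05 * t for t in range(30)]   # 0.05 C/yr
rural_tmin = [7.5 + 0.02 * t for t in range(30)]   # 0.02 C/yr

# The slope difference estimates the siting contamination.
contamination = slope(urban_tmin) - slope(rural_tmin)  # about 0.03 C/yr
```

Real station series would be noisy, so the differencing would need uncertainty estimates, but the principle of the control is exactly this.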

DJ says:
February 11, 2011 at 8:34 am
Sorry, but I think we’re off on the wrong foot already, if the article itself isn’t biased towards AGW.

The statements “the skeptics – they’re not the consensus”, “..intention of becoming the new, irrefutable consensus”, and “not as open to actual manipulation..” Not AS open?? But still open to manipulation, and by whom?

My thoughts: when I read the word “denier” and the “new consensus”, I have to ask just when did science become consensus? And when did skepticism become not science-based? It is a bit like the “null hypothesis” advanced by Mr Trenberth; it doesn’t make sense.

Either the good gentleman has a bit of a problem with language and communication, or we are on a hiding to nothing. No can trust. Be careful, Anthony, that you are not dragged by implication of association into some quagmire of, maybe not deceit, but distortion.

One reporting station at San Francisco airport is used to characterize all weather in the Bay Area for the GISTEMP data set. Anyone who has lived for any length of time in that area knows that this is preposterous. That station cannot reliably measure conditions anywhere in San Francisco. It can be raining in the Avenues and remain sunny in the Mission District. In just 25 square miles you can choose your climate by choosing your neighborhood. As GISS uses only four thermometers to deduce weather everywhere in California, this problem scales up. Forget statistical sampling. Some thermometers have good predictive value for large areas while others can’t be relied upon for small areas. The more thermometers the better.

Scan the page and you see, “Despite efforts to stabilize CO2 concentrations, it is possible that the climate system could respond abruptly with catastrophic consequences” followed by extensive discussion on climate engineering projects.

The integrity of the research team notwithstanding, further inquiry into the nature of Novim Group’s mandate is warranted.

You are skeptical because of the participation of the Novim Group. You don’t need to be concerned. Their influence will be cancelled by the participation of the Charles H Koch Foundation, a creature of the polluting Koch Industries. They have funded anti-AGW conservative think tanks like Heritage, Cato, AEI, etc.: http://en.wikipedia.org/wiki/Political_activities_of_the_Koch_family
You can now relax.

It seems that many of the same methods used by GISS to correct bad data will be used by this new effort. They are however hoping to get away without gridding because they are using more stations.

It is doubtful that there will be a large effect on the recent temperature record. The 2 satellite records, and the three leading thermometer records show the same behavior for the recent record.

REPLY: eadler I swear you are some sort of “polluting creature” too. Mind pollution maybe? Koch sponsors the PBS NOVA TV program, it’s right there at the bottom of the web page, soooo…..be sure to close your mind to that too. They can’t possibly have a decent science program if Koch is involved now, can they? – Anthony

Looking at the methodology described at the Berkeley site, they do not even mention systematic error. Systematic error inevitably contaminates the surface temperature record. If they neglect to discuss or evaluate that error, they’ll end up producing yet another centennial global temperature anomaly trend with plenty of statistical moxie but with no physical meaning.

There is no way to remove systematic error from an old data set. One can only estimate how large it might have been, and put conservative uncertainty bars on the result.

In either case — ignoring the systematic error in the 20th century record, or adding an estimated uncertainty width — there’s no doubt but that the global (or any local, for that matter) centennial temperature anomaly trend will be no better than almost entirely spurious.

If the Berkeley group ignores that empirical truth, their product will only kick up more controversy and be yet one more entry in the obscure the issue sweepstakes.

Please define what you mean by systematic error. Looking at their web page, they allow for station moves, the UHI, and equipment changes. These are sources of systematic error. They also check adjacent stations to look for discrepancies, in a similar manner to GISS and CRU. This is a means of catching systematic errors.

Actually I beg to differ with Anthony on this one; “There is warming etc.” UAH satellite data shows cooling for the last two months (below anomaly). The Argo data shows no warming for all years, and SSTs are much more meaningful in my view. The SH shows no warming, there is no tropospheric hot spot, definitely no “Global” effect here. Of course I am a blatant denier by now… All that said, if the study uses only raw data and no cities I would believe the outcome, which will show no change (flatliner). Temps are going to go up and down for the rest of everyone’s life here, and your children’s children, until everybody gets incredibly bored, like measuring respiratory rate in a normal patient LOL.

Why not do a positive control? Deliberately pick sites that fail the criteria (inside growing cities, at airports, and the like) and see what contamination looks like, with respect to Tmax/Tmin and changes in the yearly warming/cooling rates.
The same thing could be accomplished by doing the reverse: keep only the good sites and see what the trends look like.
In fact, this has been done. In the US, only the stations acceptable by Anthony’s criteria were examined, versus the full set of stations. The result for the US temperature trend did not change significantly.

In addition, when urban stations were dropped from the global data set used by GISS, it made no difference in the trend. This result was reported in the peer reviewed literature.

I am glad that this larger database is being examined. It is encouraging that it is being funded by one of the Koch brothers, who are opposed to the idea that global warming is a problem. It ensures that an objective study will be done, and makes it likely that it will be accepted by “skeptics”.

Are you really saying that 10 stations with long histories can adequately tell you how the temperature of the whole earth has progressed, rather than just point to the fact that there has been some warming, which of course is to be expected as we come out of the LIA?

#######
And what would be the difference? ‘Coming out of the LIA’ explains nothing.
Here is the point. Regardless of the cause the temperature of the earth has gone up over the past 150 years.

If you sample 40,000 sites you will get one estimate of the trend.
If you randomly select 10,000, 5,000, 3,000, 1,000, 500, or 100, you will get similar trends. That’s because the distribution of trends is fairly normal (kinda spiky).
In AR4 I believe they looked at the 4 longest records. Same general answer.

That’s because over century scales you don’t have areas of persistent cooling while the rest of the globe warms.
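The subsampling argument above can be sketched with a quick simulation. All numbers here are invented for illustration: station trends are drawn from an assumed roughly normal distribution around a common signal, then randomly subsampled at the sizes the comment lists.

```python
import random
import statistics

random.seed(0)

# Hypothetical population of 40,000 station trends (deg C / decade),
# drawn from an assumed normal distribution -- values are invented.
trends = [random.gauss(0.15, 0.10) for _ in range(40000)]

full_mean = statistics.mean(trends)

# Random subsamples of decreasing size give very similar mean trends,
# because the underlying distribution is roughly normal.
sample_means = {n: statistics.mean(random.sample(trends, n))
                for n in (10000, 5000, 3000, 1000, 500, 100)}

for n, m in sample_means.items():
    print(f"n={n:>6}: mean trend = {m:.3f}")
```

The spread of the subsample means shrinks roughly as 1/sqrt(n), which is why even a few hundred stations recover a mean trend close to the full-network value in this toy setup.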

Global warming is real, Muller said, but both its deniers and exaggerators ignore the science in order to make their point. “There are the skeptics – they’re not the consensus,” Muller explained.

Oh please …. these muppets appear to have completely missed the point …… as usual. I don’t know anyone (at all) who says that there hasn’t been some global warming over the last century or so. How can these guys come out with a statement like the one above? Honestly, what’s the point?

Why not do a positive control? Deliberately pick sites that fail the criteria (inside growing cities, at airports, and the like) and see what contamination looks like, with respect to Tmax/Tmin and changes in the yearly warming/cooling rates.

######
That’s been done over and over again.

The biases don’t rise above the noise floor. That doesn’t mean they are not real. They are just small.

The presumption that using ALL 39 thousand station records will somehow improve estimates of temperatures world-wide precipitates a sense of uneasiness that the essential nature of the problem is being missed. It can be summarized in two sentences:

1) Records of adequate length are available ONLY from population centers of one size or another–not from locales untouched by civilization.

2) There are large regions throughout the globe where there are NO credible records, including most of the oceans.

Without proven techniques for identifying LOCAL man-made effects upon station temperatures–and removing them–merely using more records will NOT solve the intrinsic problem of data CORRUPTION. Nor, barring the discovery of a treasure trove of previously unknown records in the most unlikely places on the globe, will the GEOGRAPHIC COVERAGE be materially improved. Thus, despite a more punctilious minding of largely academic P’s and Q’s, I do NOT expect the findings of this panel to differ substantially from the indiscriminate data products of the “major” index manufacturers.

P. Solar and others: Dr. Curry put me in contact with the researchers long ago. My interaction with them has been favorable. Issues that I raised about metadata are on the table. Their approach, or method, is mathematically sound and in line with the methods of RomanM and JeffId. It’s the best method.

Sadly I don’t think Muller’s project will resolve a single solitary thing, because it addresses the least relevant and least interesting part of the controversy – adding needless precision to the argument about exactly how much the difficult-to-define “average” temperature has changed in the extremely recent (last 150 years or so) past.

Actual questions that are controversial:

1. Whether the change is a threat.
2. Whether and to what extent human activity affects the climate.
3. What can be done to ameliorate or adapt to that change.
4. Whether anything should be done to ameliorate or adapt to that change.

So far #1 and #2 are the big fighting points, and #3 is filled with pie-in-the-sky flying-windmills kinds of things at the moment, some of which is unfortunately costing money and resources.

I conclude that, as well-intentioned as it may be, Muller’s efforts are almost entirely pointless.

I say “almost” because, hey, a major database of raw temperature data combining existing data could be useful. It just won’t resolve the debate at all.

The satellite data do show some warming over the last 30 years, and both RSS and UAH show that 1998 was the hottest year so far, although I have to admit that it could be argued that 2010 was a statistical tie with 1998. But with the Met Office also having 1998 warmer than 2010, even though the race was close, the first thing that I will be watching for is whether or not 1998 still beats 2010 and retains its rank as the hottest year of the last 50 according to their results. With the great increase in the number of thermometers being used, this would naturally mean more thermometers in the northern Arctic, about which there has been a lot of discussion. I do not dispute that the northern Arctic has warmed; however, there is no way I believe it has warmed as much as GISS would have us believe.
The area in the north polar region above 82.5 degrees is 2.2 x 10^6 km squared. This is where satellites apparently cannot get readings. The ratio of the area between the whole earth and the north polar region above 82.5 degrees is 5.1 x 10^8 km squared/2.2 x 10^6 km squared = 230. So that area above 82.5 degrees is only 1/230 or 0.43% of Earth. This is not enough to allow GISS to give 1998 as low a ranking as it does.
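The arithmetic in the comment above checks out, and is easy to verify directly (the two areas are the figures the comment quotes):

```python
# Verifying the area arithmetic quoted in the comment above.
earth_area = 5.1e8   # km^2, Earth's total surface area
polar_cap = 2.2e6    # km^2, area north of 82.5 degrees latitude

ratio = earth_area / polar_cap      # about 232, i.e. roughly the "230" quoted
fraction = polar_cap / earth_area   # about 0.43% of the globe

print(round(ratio), f"{fraction:.2%}")
```

So the region where satellites cannot read is indeed under half a percent of the Earth's surface, which is the comment's point about how much weight it can carry in a global ranking.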

Anthony,
Since most of the comments seem to be posted by people who have no knowledge of Richard Muller, I would like to see his credentials before the comments.

Professor in the Department of Physics at the University of California at Berkeley, and Faculty Senior Scientist at the Lawrence Berkeley Laboratory, where I am also associated with the Institute for Nuclear and Particle Astrophysics.

Named by students as Best Class at Berkeley! (It has won this honor for the last two years in a row.)

Like most here, I look forward to a more accurate “global temperature anomaly” trend. However, they cannot settle the issue of CAGW with a temperature anomaly trend. I think most skeptics acknowledge that the earth has warmed since Franklin and Jefferson began measuring temperatures. There are the matters of attribution and sensitivity to solve.

I don’t think that is accurate. He says they have an argument that must be considered, and time will tell. He is open minded on the question.

Last month’s article by McIntyre and McKitrick raised pertinent questions. They had been given access (by Mann) to details of the work that were not publicly available. Independent analysis and (when possible) independent data sets are ultimately the arbiter of truth. This is precisely the way that science should, and usually does, proceed. That’s why Nobel Prizes are often awarded one to three decades after the work was completed – to avoid mistakes. Truth is not easy to find, but a slow process is the only one that works reliably.

That is from 2004, before the NAS report, which weakened the conclusions of the Hockey Stick. Subsequently it was shown that M&M’s claim that the HS was invalid because it was an artifact of non-centered PCA is incorrect. It was shown that centered PCA also gives a hockey stick if the correct procedure is used.

Since then many papers have been published which confirm the hockey stick shape of the NH temperature anomaly. Muller subsequently changed his mind.

Errors have been discovered in the “Hockey Stick” statistical model that shows the potential for dramatic future increases in global temperatures. This claim arises from a February 2005 article in Geophysical Research Letters by Stephen McIntyre and Ross McKitrick claiming various errors in the methodology of Mann et al. (1998) in the principal component analysis they used to generate the “Hockey Stick.” McIntyre and McKitrick first made this claim in 2003 in the social science journal Energy and Environment. The same paper was later unanimously rejected by the editors and reviewers of the journal Nature before being printed in Geophysical Research Letters at the urging of physicist and MacArthur Fellow Dr. Richard Muller. In response to the 2005 article, scores of scientists worldwide verified the statistical methods used by Mann in the generation of the “Hockey Stick”, and no peer-reviewed claims doubting its veracity have arisen since.

In fact, Professor Muller has changed his mind since then, based on what has been written since then. In his 2009 PowerPoint, he shows the Hockey Stick graph as something political decision makers need to know. Check out page 67 of the following presentation that he gave:

1. Is the planet warming? Yes, and it has been (with fits and starts) since the end of the last glacial epoch about 10,000 years ago. The warming did not progress uniformly. There were shorter periods of cooling and warming within that 10,000-year period.

Actually, no. The planet warmed from the approximate end of the last major glacial epoch, which was around 20 to 17 kya (thousand years ago), until about 8 kya. Since then the overall trend has been cooling. Warm episodes have been trending both shorter and cooler over the Holocene, and cool episodes both cooler and longer during the same span. The planet recently – geologically, that is – seems to have been warming over the last 200 years or so, but you should read Pat Frank’s (2010) paper on the real measurement error of the available global surface temperature record, which IIRC is, at two sigma, about +/- 0.92 degrees C (one sigma is +/- 0.46 degrees C). Frank concludes that the trend over that span is statistically indistinguishable from a 0-degree trend. We do know that geographic plant distributions and thermometers all seem to show evidence of warming, but the instrumental record is simply too uncertain to offer any really good estimate of the magnitude of the change.

Sorry, this looks tainted already. The statement: “Global warming is real” is itself suggestive that man is causing it. And if your period looks back 1000 years, it is likely false, not true. Global warmth variability is real. If the 1930s were warmer than today, and so understood, no one would say warming is real. It may not be. Warmer today than 1977? Sure. So what?

And without solid ocean and humidity data over the last few hundred years, we really don’t know anything. So is there really any point to a global mean temperature comparison? Not much.

“The word “deniers” was added by the reporter. And global warming “is” real. We expect some warming, my view is that it is exaggerated for political purposes. The key is find out what the true signal is. – Anthony”

Ah, this makes more sense now, a clear indication of bias in the media (What a surprise!). Still, would Muller have not been given the chance by the reporter to “eyeball” the article first before publishing?

I’m talking of systematic error in the temperature record due to problems at the instrumental level. Inaccuracies enter field-measured temperatures because of solar loading on sensor shields and wind speed effects, which cause the sensor to record something other than the true air temperature.

There are empirical ways to account for this and remove the error from each measured temperature, but they require independent precision monitoring of insolation intensity and wind speed (and variations in albedo, too, actually). None of that was ever done at USHCN climate stations during the 20th century, nor likely anywhere else. Some of the new CRN stations may include that capacity, but monitoring now won’t do anything for systematic inaccuracies in the prior 150 years of the instrumental record.

I didn’t see any recognition at the Berkeley site of the systematic effects that cause sensor error in the field. Systematic error won’t show up in any statistical test of the raw data, or in cross-comparisons of regional temperature-time series.

Because systematic effects at the sensor are caused by the same processes that govern air temperatures (insolation, wind, albedo), the resulting systematic errors will be pretty much as regionally correlated as the air temperatures themselves.

I will be interested to hear how they handle the incredibly vexing question of station classification, i.e., rural or urban.

I’ve seen plenty of evidence of stations still classified as “rural” while in reality having already transitioned into what should be an “urban” classification. This is where adjustments for UHI can be done incorrectly and skew your whole data set.

Sometimes things are so badly broken that they cannot be reassembled. I suspect this is the case when you’re looking at reconstructing a temperature record from surface station measurements.

Larry Hamlin wrote on February 11, 2011 at 9:00 am:
QUOTE
In reading the methodology material I was unable to determine how the Urban Heat Island (UHI) impacts are going to be addressed in this study. I would hope that this critical issue is to be evaluated in this independent temperature data study.
Is that the case? If so can someone please explain where this issue is addressed in the methodology. Thanks.
UNQUOTE

Thanks Larry.
I agree completely.
From my observations of various Australian individual locations, UHI explains ALL the long term trend.
And there are all the problems of unsuitable sites, faulty instruments and housing, etc that Anthony has written about.

It seems to me that expanding from a few thousand data points to over 39,000 makes the task of getting the answer right, in terms of truly reflecting the average global temperature, much, much, much more difficult.

Open source data and programs would be a real advance.
But I am still very doubtful about programs that attempt to spread the data over the surface of the globe, beyond the actual spots where the temperatures are taken.
Forget the possibility of manipulation or bias.
Just what does the gridded output mean?

We know, thanks to very close attention to data by ancients long ago, that moving the thermometer at Sydney Observatory Hill in 1917 – about 150 metres down hill to the south east, made a significant difference to the measured temperature – max & min, month by month and annually.

Those gridded outputs are just a grizzled mish-mash, in my opinion.
And downgrading some of the data relative to others, seems to be unacceptable as well.

With 39,000-plus data points, why not just add them up and divide by the number?
The answer may not be the mean temperature of the earth, but it would sure tell us more as it changes over the years than any complex output from programs that, even if open source, the man in the street cannot understand.
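The trade-off raised here can be seen in a toy example (all numbers invented): a raw average is dominated by wherever stations happen to cluster, while a gridded (area-weighted) average gives each region of equal area equal say.

```python
# Toy illustration (invented values): 90 stations cluster in one warm
# region, 10 sit in a cool region of equal area.
warm_region = [15.0] * 90
cool_region = [5.0] * 10

# Raw average: every station counts equally, so the cluster dominates.
naive_mean = sum(warm_region + cool_region) / 100

# Gridded average: average each region first, then average the regions.
grid_mean = (sum(warm_region) / 90 + sum(cool_region) / 10) / 2

print(naive_mean, grid_mean)  # 14.0 vs 10.0
```

The two answers differ by 4 degrees from the same data, which is why the "just add them up" approach tracks station distribution as much as climate; gridding is an attempt to remove that dependence.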

Going into this, I am going to presume “innocence” with an expectation of honest effort. Further, I am prepared to tentatively applaud what appears to be a well-intentioned and refreshing effort to conduct climate science in a transparent and open manner.

That said, I must echo many of the comments above in that I am far more interested in the future collection of long-term surface temperature data using instrumentation that is properly located, is properly and consistently QA/QC’d, is routinely checked using independent instrumentation, and is inherently accurate enough to measure ambient surface air temperatures within the range of interest (say +/- 0.001 degrees F).

This leaves reasonable doubt as to whether any analysis of the historical instrument record truly has scientific value. In the face of this uncertainty, we are relying on the “magic” of statistics. While I value the views of statisticians and firmly believe statistics is a valuable tool, no amount of statistical analysis can make up for inadequate data. Absent reliable and accurate readings, we must recognize we are simply “making do”.

Sadly, in my view, the historical surface temperature record is simply too crude to have conclusive scientific value. I remain inclined to place far greater weight on satellite data – the UAH data in particular – and very little weight on even properly analyzed historical surface temperature data.

On balance I’m willing to give “Berkeley Earth” the benefit of a doubt. But, like most science, the resulting product will only be a part of the picture. We will still be left to judge the “preponderance of the evidence”.

As a Stanford man, I can almost guarantee that these doofi from Berkeley can and will find that, in fact, the globe has warmed 35 degrees C. in the last 10 years, and we have until tomorrow morning to raise the hammer and sickle flag and retake our world from the evil, carbon dioxide-spewing capitalists or suffer catastrophic…whatever.

Mike Haseler says:
February 11, 2011 at 9:24 am “…An institution like Berkeley can’t afford to get it wrong (unlike the UEA)…” Well as a matter of fact Berkeley can hardly ever get it right on anything these last 40 years or so.

And finally, @sharper00. There’s nothing sharp about you. The double zeroes at the end of your monicker say it all. I would favor Anthony simply scrubbing all you have to say all the time, but he won’t do that, and I understand why. He’s just a nice man.

Let me add, by the way, that the assigned uncertainties in the surface station CRN rating key, that Anthony is assessing for his paper, represent guesstimated systematic errors and are statistically analogous to the (+/-)0.2 C “reading error” guesstimate of Folland, et al., 2001.

The CRN keys likewise fall under Case 3b in my paper. That means the CRN key uncertainties propagate into an anomaly average as s = sqrt{[sum over N of (CRN key)^2]/(N-1)}, and will end up producing a large uncertainty in any average air temperature anomaly time series.

When all is said and done, there will almost certainly be no way to avoid the conclusion that the current instrumental surface air temperature record is pretty much climatologically useless; likely any trend less than at least (+/-)1 C will be lost under the uncertainty bars.
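A minimal sketch of the propagation described above, using the formula as the comment states it. The per-class uncertainty values below are illustrative placeholders, not the official CRN rating key, and `anomaly_uncertainty` is a name invented for this sketch:

```python
import math

# Placeholder per-class systematic uncertainties in deg C -- invented
# for illustration, NOT the official CRN rating key.
crn_uncertainty = {1: 0.1, 2: 0.5, 3: 1.0, 4: 2.0, 5: 5.0}

def anomaly_uncertainty(station_classes):
    """Propagate per-station uncertainties as sqrt(sum(s_i^2) / (N-1))."""
    n = len(station_classes)
    total = sum(crn_uncertainty[c] ** 2 for c in station_classes)
    return math.sqrt(total / (n - 1))

# A small network dominated by class 3 and 4 sites easily exceeds +/-1 C.
print(round(anomaly_uncertainty([3, 3, 4, 4, 4, 2, 3, 4, 5, 3]), 2))
```

Under these assumed numbers the propagated uncertainty comes out above 2 C, which illustrates the comment's point that any trend smaller than the uncertainty bars would be indistinguishable from noise.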

This is what you get when people try to do ‘science’ while playing with a computer. 5 x garbage is still garbage. They can only process this much data with no quality control or calibration. As most thermometers are in built up areas (where people can get to them) the larger data set will be WORSE. It’s a con job.

“Sadly, in my view, the historical surface temperature record is simply too crude to have conclusive scientific value. I remain inclined to place far greater weight on satellite data – the UAH data in particular – and very little weight on even properly analyzed historical surface temperature data.”

So when the satellite measure matches the land surface record what do you conclude about the land surface record?

And when 10 years of CRN data ( all pristine sites) match the “old” sites that they are paired with, what do you conclude?

Do you think the LIA existed? Why? On what evidence? Is that “evidence” as accurate or as highly sampled as the evidence from 1900-2010?

“Some of the new CRN stations may include that capacity, but monitoring now won’t do anything for systematic inaccuracies in the prior 150 years of the instrumental record.”

Well, that’s not actually the case. The CRN stations are set up in a “paired” configuration for a large part of the network. That means old stations are paired with new stations, which will allow for the creation of transfer functions from the new network to the old.

Look at it this way. You accept the conclusions of O’Donnell 2010. I do. In that paper the new satellite data was used to calibrate and infill the old land data. You’ll have the same kind of procedure with CRN and the old network. Already we know that CRN does not deviate from the old network.

If computational horsepower is needed consider using the same shared engine as SETI@Home – distributed computers. Of course there is at least one climate model running (since around 2000) called Climate Prediction…

What is the point? Even if you use every temperature measuring device in the world, there will still be large areas of the Earth’s surface that are not monitored. So until we have a thermometer on every square metre of Earth, there will still be room for a fiddle factor.
And what does it mean anyway? I think the current average is somewhere around 14.5 C. Is that the average where I live? No. Is it the average where you live? Probably not.
If I took all the stations inside the Arctic Circle and averaged them, would that be the world temperature? No. If I took all the stations within 1 degree north or south of the Equator and averaged them, would that be the world temperature? No.
So why would a whole heap of stations placed randomly around the world be any different?

But global warming – i.e the fact that our planet warmed in the 20th century – is not at dispute.

The point is we just don’t know by how much, because of the politicisation of the existing datasets. And we have no idea what caused it; the measured increase may be attributable to natural variation, the UHIE, changes in land usage, solar variation, a combination of the above, or something else entirely.

We don’t even really know if warming has ceased in the last 15 years, because of “adjustments” made to the existing datasets to make every successive year “the hottest eva!!!”.

A documented, open-source temperature record – one that is based on the numbers and not the politics, and one where the math can be independently verified by both sides – is a vital starting point in _scientifically_ answering these questions, and I see this as a very positive development.

If we cannot even properly quantify the temperature change we have no business trying to attribute that change to human activity, or to anything else. Attributing changes in temperature to any one factor in a vast and complex climatic system of which we currently have only limited understanding – carbon for example – before we even have agreement on how much the temperature changed is junk science conducted largely by Malthusian activists.

I’m encouraged by this effort and can’t wait until they release the data as I have a methodology using absolute temperatures that is different to the normal way of measuring temps, and more of this sort of data will suit it perfectly.

The way I look at the temperature recordings is that they are all in error, so the more data that can be collected, the more the errors are reduced.

We see from the brouhaha over Steig that two stations 2.5km apart have a 2deg difference in recorded temps. It could be thermometer error, it could be that it really is different by that amount, who knows.

500metres from a temp station, the temperature will be different, half an hour after the recording is made, the temperature will be different. Yet all we have are these snapshots in time and place of the temperature.

The trick is to find the things that follow a normal distribution of error, and those that don’t. For instance, instrument accuracy is said to be +/-2deg. I think it would be right to expect that there are just as many errors upwards as those downwards, so they follow a normal distribution.

In summer, it would be expected that the temperature would be above the average of the max and min for longer than it would be below. However, it is balanced by longer lower temperatures in winter, so it could be said to follow a normal distribution over the course of the seasons.

UHI does not follow a normal distribution, neither does the March of the Thermometers, or the lowering in average elevation of the temperature stations.

These are three of the major ways, and no doubt there are others, that the data has become less useful than it could be, and they need to be quantified and adjusted for, and the more data we have, the better we can test those things and generate accurate results. Also the use of anomalies results in a huge loss of information, in my view.
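The distinction the comments above draw between errors that average away and those that don't can be sketched in a small simulation. All values are invented: a zero-mean noise term stands in for the +/-2 deg instrument error, and a one-sided offset stands in for UHI-like contamination.

```python
import random

random.seed(1)

true_temp = 10.0
n = 10000

# Zero-mean instrument noise (the +/-2 deg reading-error case) shrinks
# roughly as 1/sqrt(N) when many readings are averaged...
noisy = [true_temp + random.uniform(-2.0, 2.0) for _ in range(n)]
random_error = abs(sum(noisy) / n - true_temp)

# ...but a one-sided offset (a crude, invented stand-in for UHI)
# survives any amount of averaging.
biased = [true_temp + random.uniform(0.0, 1.0) for _ in range(n)]
bias_error = abs(sum(biased) / n - true_temp)

print(f"zero-mean noise residual: {random_error:.3f}")
print(f"one-sided bias residual:  {bias_error:.3f}")
```

With 10,000 readings the symmetric noise nearly cancels, while the one-sided offset settles at its mean value no matter how much data is added, which is exactly why non-normal contamination like UHI has to be quantified and adjusted for rather than averaged out.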

Remember, Anthony has been involved in this, and if for no other reason than as a mark of respect to him, we should await the releasing of the data before saying things we may regret later.

Well, this is a rather exciting development! But what a lot of work it will be. Temperature records, like any records, have mistakes in them. There will be so many subtle problems to fix. For example, how do you handle the case where a weather station is moved? A move of a few kilometres from the coast can dramatically affect the temperatures. How do you take this into account?

I’m going to assume that we have skillful people doing this, and that they will manage to make sense out of the vast amounts of temperature data, and get meaningful results out. I have a reasonable faith that clever people can do miracles!

Attitudes to this new study seem to be splitting skeptics into those who think the science is tractable and worth pursuing, and those who think its all too hard and we should just make our minds up without any evidence.

“And when 10 years of CRN data ( all pristine sites) match the “old” sites that they are paired with, what do you conclude?”

Assuming the Climate Reference Network (CRN) sites in question were co-located with the older surface temperature stations, I would conclude the “pristine” sites were accurate to within their calibration range at the time the data were compared, based on independent verification.

Note however that my understanding is that the CRN data is obtained from land-based “automated instrument package[s], transmitted to a GOES satellite which in turn transmits the data to Wallops Island, VA.” (See http://www.data.gov/geodata/E5110AB7-6A2A-7705-9B63-CBDEDA02DFA5) If I am in error, and the CRN data represents temperature data collected directly by satellite and not land-based data collected via satellite, please let me know; your knowledge in this area is better than mine.

Even without the land-based CRN or “pure” satellite data, I would normally be inclined to “believe it likely” that scientific readings at “pristine” sites were “likely” accurate during periods prior to the independent verification – under a blanket assumption that the site’s QA/QC procedures were followed…Unfortunately I have lost all confidence in NOAA’s QA/QC program for surface temperature stations.

That said, I would not/could not “conclusively” state the stations pre-comparison data was accurate …unless I had access to reliable QA/QC data showing consistent and independent verification of the source instrument readings.

Further, if the inherent accuracy of the source instrument was beyond the range required to answer the climate change question at hand, then I would be forced to conclude that I could not discern a usable result for that purpose.

My comment was not intended to suggest that all surface temperature station readings have no scientific value. Rather, that there is not a sufficient number of reliable “pristine” stations available throughout the world from which one can draw a reliable conclusion about the world “surface” temperature, or even to assume a difference from an arbitrarily set “normal” temperature.

Consequently, in my view, while the historical record may provide an “indicator” to “suggest” past events, the data is simply not reliable enough, nor available in sufficient quantity, to draw a firm conclusion about the “world temperature”. (Assuming, as a side issue, that said number has any meaning.)

In conclusion, where surface temperature data can be verified as reliable and is of sufficient quantity to draw specific conclusions, I have no problem accepting the results. I am simply not convinced this is the case.

Between gentlemen, recognizing you appear to have a divergent view, do you come to different conclusions from the same facts? Or do you differ with my view of the reliability, quality, and quantity of the data available? What reasoning divides us?

I’m talking of systematic error in the temperature record due to problems at the instrumental level. Inaccuracies enter field-measured temperatures because of solar loading on sensor shields and wind speed effects, which cause the sensor to record something other than the true air temperature.

I think you are making a logical error in this paper, in case 2, sec. 2.2. You claim that for a given station, the variation in the actual temperature, s, contributes to uncertainty in the monthly average, in addition to the measurement noise. This is incorrect. The average of the real temperature has no uncertainty as a result of the real variation. If you were choosing N temperature samples at random from an infinite population, then the average you get would have the statistical uncertainty that you state, even if the measurements were perfectly accurate. But this is not what applies to the monthly average at a given station. You are not choosing N values from a random sample of temperatures. The N temperature measurements at a given station are all that there is. The average is the average, with no uncertainty due to sampling.
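The distinction being drawn here can be sketched numerically. This is a toy example with invented numbers, not any station's data: the mean of a fixed set of N readings is one deterministic number, while the σ/√N sampling spread only appears when N values are drawn from a larger population.

```python
import random
import statistics

random.seed(0)

# A toy "month": 30 fixed daily readings (invented values, deg C).
days = [10 + 5 * random.random() for _ in range(30)]

# These 30 readings are all there is: their average is a single
# deterministic number, with no sampling uncertainty attached.
monthly_mean = statistics.mean(days)

# Sampling uncertainty arises only when N values are drawn from a
# larger population: repeated draws give different means, spread
# out by roughly sigma / sqrt(N).
population = [10 + 5 * random.random() for _ in range(100_000)]
sample_means = [statistics.mean(random.sample(population, 30))
                for _ in range(2000)]
spread = statistics.stdev(sample_means)
expected = statistics.stdev(population) / 30 ** 0.5
print(round(spread, 3), round(expected, 3))
```

The two printed numbers agree closely, while `monthly_mean` has no analogous spread at all.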

I don’t have access to the references you cite, and am not acquainted with the terminology you use, so I can’t comment on the details of the other aspects of your analysis of temperature uncertainty. I will have to wait and see what the climate science community makes of it.

The actual global temperature is not what we are calculating, but rather the change in temperature over time. It seems to me that the sensor errors you mention will cancel when the temperature anomaly is calculated, unless there is a systematic drift over time.

“It seems unreasonable to me to criticise them for not fixing a problem you (or anyone else) have yet to demonstrate.” Sharper00

It seems reasonable to ME to criticize them (NOAA NASA CRU etc. etc.) for pushing a draconian re-ordering of the world’s economy, one that without question would result in gargantuan increases in human misery, as a “solution” to a “problem” that they (or anyone else) have yet to demonstrate.

Thank you for asking the question BD and for the responses. I too am a “lurker” and the bee in my bonnet has always been about the integrity of the scientific inquiry that led to this fading AGW alarmism. I really took an interest when Climategate broke, when emails seemed to indicate that the peer-review process was being actively corrupted by a small number of scientists. More recently, as you might have seen here at WUWT, there is more evidence of peer-review corruption with the Steig/O’Donnell affair.

In summary, the atmosphere has been warming since the last ice age. But some would have us believe, based on a corrupted scientific inquiry process, that our consumption of fossil fuels and the consequent emission of CO2 is catastrophically exacerbating the warming trend. It follows that we have it in our power to reverse the trend by reducing our consumption of fossil fuels.

If the scientific inquiry process IS corrupt, then how can we know the true causes behind the warming trend? For me, the critique starts there, with an inquiry about the scientific process itself.

The lead scientist of this new group, Robert Rohde, has, I believe, been an administrator of Wikipedia since 2005. Using the name Dragons Flight, he has been pretty ‘active’, mainly in climate-related topics.

Look him up. His style, while not as obvious as one William M Connolley, is nonetheless… well, make your own mind up.

BTW I predict this team will find even more warming than had been previously found

“In addition, when urban stations were dropped from the global data set used by GISS, it made no difference in the trend. This result was reported in the peer reviewed literature.”

Oh really?

You can Peer review this one;

I don’t know the nature of the data that the family in your video was accessing. If the data was not homogenized, a UHI effect will be detected. It is a real effect. Before climate scientists use the data, it is homogenized to account for station moves, equipment changes and abrupt temperature changes due to environment. The result is that once this is done, no difference between urban and rural databases can be detected.

All analyses of the impact of urban heat islands (UHIs) on in situ temperature observations suffer from inhomogeneities or biases in the data. These inhomogeneities make urban heat island analyses difficult and can lead to erroneous conclusions. To remove the biases caused by differences in elevation, latitude, time of observation, instrumentation, and nonstandard siting, a variety of adjustments were applied to the data. The resultant data were the most thoroughly homogenized and the homogeneity adjustments were the most rigorously evaluated and thoroughly documented of any large-scale UHI analysis to date. Using satellite night-lights–derived urban/rural metadata, urban and rural temperatures from 289 stations in 40 clusters were compared using data from 1989 to 1991. Contrary to generally accepted wisdom, no statistically significant impact of urbanization could be found in annual temperatures. It is postulated that this is due to micro- and local-scale impacts dominating over the mesoscale urban heat island. Industrial sections of towns may well be significantly warmer than rural sites, but urban meteorological observations are more likely to be made within park cool islands than industrial regions.

“Be prepared to learn that any 100 randomly chosen tell the same story.

heck, pick the 10 longest records and you get the same story.”

That’s what happens when people won’t let facts get in the way of a story.

What do you think the story would say if we took the 10 longest rural records and used only raw data with no adjustments?

The problem is that the instrument record isn’t accurate enough, long enough, or global enough to pull such a small signal out of the noise of the past 130 years. And because the record can’t tell a credible story on its own, they try to manipulate, extrapolate, interpolate, adjust, and otherwise massage the poor data to make it better. Once you commit to massaging poor data like that with statistical techniques and unverifiable quality assumptions, you can make it say whatever you want it to say – which is why we have the trite expression “Lies, Damned Lies, and Statistics”, also the title of a popular book.

Anthony Watts added in this thread “global warming is real” and only the magnitude is in question.

Not quite, Anthony. You rely on the satellite record for that, which is short (32 years) and not without its own problems, questionable assumptions, and assorted other artifacts – to say nothing of the fact that it does not measure the air temperature directly with a thermometer 4 feet off the ground inside a Stevenson screen, but rather makes an indirect measurement of radiation that has travelled through kilometers of atmosphere and has to be adjusted and transformed with mad skillz to get an actual temperature out of it. The number of revisions over the past 32 years to how the satellite data is and was massaged is legion, and the sad fact is the satellite record is still the best temperature we have despite all the problems with it.

So when you say “global warming is real” you’re manufacturing a factual statement. If you said “global warming appears to be real over the past few decades” I would have no argument with it but you didn’t – you stated it is a fact when it is no such thing.

I, for one, am very interested in seeing the raw data from all 39,000 sites across the world.

As long as they outline how many are in cities, UHI will not be a problem since we can deduce how much of whatever increase is just UHI.

It doesn’t matter what the results are.

We have the right to have access to all the raw data in this very important issue (and I prefer to see all of it – not just the ones NCDC or GISS or the Hadley Centre have picked out for me and made all kinds of unknown adjustments to).

OK, the ‘law of large numbers’ says there is warming. No one disagrees with that – it’s what coming out of the LIA also implies.

But the ‘law’ says nothing about whether the warming is unprecedented (the records being too short), or whether there is a significant AGW component. The MWP occurred, with variation, over approximately 300 years. Similarly, the LIA, again with variation during the period, lasted over a similar span. There was precious little AGW involved in these events.

Why should we assume human influence is now the dominant factor in climate change?

So now, having read every post on this thread (and thank you for the attempt at direct answers, Allen, Duster, Ged and MtK), I can safely conclude that there are no answers, only more questions. As an elected official in a small town who must make reasoned judgments in the allocation of tax dollars, it leaves me, well, cold. :) As a tidbit for all you analytical mathematicians out there, our community recently discovered (somewhat rudely) that the price of wind is directly proportional to the price of fossil fuels, at least as it relates to the generation of electricity. Someday, on an appropriate thread, I’ll reveal some of the dirty little secrets regarding recycling. As Kermit the Frog once said, “It’s not easy being green”.

Steven Mosher, “So when the satellite measure matches the land surface record what do you conclude about the land surface record?”

The systematic error of a PRT temperature sensor, e.g., inside a CRS shelter, has been measured, Steve. It’s about (+/-)0.5 C, and about half that for an MMTS sensor. Why does comparison with satellites tell anyone anything about the reality of empirically determined systematic errors in a surface air temperature sensor?

Apart from that, satellite temperatures from IR sensors are calibrated against buoy SST measurements, and so are no more accurate than the buoys are. Since buoy SSTs are also used to deduce or calibrate marine air temperatures, it’s not so surprising that satellite and surface temperature trends should match.

“And when 10 years of CRN data ( all pristine sites) match the “old” sites that they are paired with, what do you conclude?”

Was the CRN data corrected for the systematic error impacting its own sensors? If you look at Hubbard and Lin’s 2002 paper (reference 13 in the E&E paper), you’ll see that the precision sensors in all tested shelters were systematically biased in the same direction.

In their 2004 paper, “Sensor and Electronic Biases/Errors in Air Temperature Measurements in Common Weather Station Networks”, J. Atmos. Ocean. Tech. 21, 1025–1032, H&L, among other things, examined errors in USCRN sensors. They found that, “For the USCRN PRT sensor in the USCRN network, the RSS errors can reach 0.2 – 0.34 C due to the inaccuracy of CR23X datalogger…”, where “RSS” is root-sum-of-squares. These are systematic errors, not random, and do not decrement as 1/√N.
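The random-vs-systematic distinction can be illustrated with a toy simulation (all numbers invented for illustration, not H&L’s measurements): zero-mean noise averages down as 1/√N, but a constant bias of the kind described here survives any amount of averaging.

```python
import random
import statistics

random.seed(1)
N = 10_000
true_temp = 15.0   # hypothetical true air temperature (deg C)
bias = 0.25        # constant systematic error, e.g. a datalogger offset

# Each reading = truth + systematic bias + random noise (sigma = 0.5 C).
readings = [true_temp + bias + random.gauss(0, 0.5) for _ in range(N)]
mean_error = statistics.mean(readings) - true_temp

# The random component has shrunk toward 0.5 / sqrt(N) ~ 0.005 C,
# but the 0.25 C systematic bias remains in the mean undiminished.
print(round(mean_error, 3))
```

No matter how large N grows, `mean_error` stays pinned near the bias, which is the point being made about 1/√N arguments.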

When the systematic effects are derived from the same forces as determine air temperature, it’s not surprising that anomaly trends correlate, even when they’re inaccurate. But in any case, there doesn’t seem to be much reassurance available in comparative analysis.

Espen says:
———
Are they going to publish yet another “global mean temperature”? I’m surprised that physicists are willing to work with that kind of metric.
———
Despite the moist enthalpy argument, you can’t get away from the fact that if the global heat content of the oceans, the land and the air is going up, then the global average air temperature is also going to go up as well.

You’ve let yourself become distracted by a “can’t see the wood for the trees” argument.

It seems reasonable to ME to criticize them (NOAA NASA CRU etc. etc.) for pushing a draconian re-ordering of the world’s economy, one that without question would result in gargantuan increases in human misery, as a “solution” to a “problem” that they (or anyone else) have yet to demonstrate.
———-
Re gargantuan increases in human misery:

Let me guess:
1. This statement is incontrovertible
2. The economics is settled.
3. It must be true because there is a consensus of Internet bloggers.

Steven Mosher: “Well, that’s not actually the case. The CRN are set up in a “paired” configuration for a large part of the network. That means old stations are paired with new stations. That will allow for the creation of transfer functions from the new network to the old.”

Steve, how will transfer functions from a new CRN sensor presently parallel to, say, a LiG set up in a CRS screen, remove systematic error from the prior decades of LiG temperatures? Error will have varied systematically with erratic micro-climatic conditions. At best, after a decade or two of parallel measurements, you’ll get an estimate of the average bias and SD for the LiG/CRS system, relative to the CRN system. Subtracting the average bias from prior LiG temperatures will not remove the relative LiG SD. That SD will be uncertainty bars around any LiG measurements spliced onto a measured CRN trend.

If there happened to be a larger systematic variance or a different bias in the past LiG measurements than in the LiG temperatures taken during the later parallel measurements, then subtracting the new average bias may even make the older LiG temperatures less accurate. But we’d never know, and so that would produce another form of uncertainty – an implicit uncertainty, in that we’d not really know whether our correction actually corrected the older record.

Further, the LiG SD will have built into it the unmeasured systematic error from the CRN sensor. Unless, that is, a further parallel calibration temperature sensor system was put in place that is relatively impervious to solar loading and wind speed effects. That sensor would yield the accurate air temperatures that will reveal the systematic bias in the CRN sensor temperatures.

A fully critical experiment would include a pyranometer and an anemometer to independently measure radiation and wind speed, and use those plus the more accurate temperatures to obtain empirical transfer functions to correct the systematic bias out from the CRN temperatures.

So, I see the CRN-LiG parallel set-ups to be just business as usual for NCDC, where they merely want to adjust and renormalize older LiG or MMTS temperatures to line up with newer CRN temperatures. The parallel measurements won’t help them with systematic error already in the older instrumental surface air temperature record, and won’t help them with the systematic error that will also enter the newer CRN temperatures.

As I recall, both Ryan O’D and Jeff C observed that their study corrected a faulty method, but said nothing about a physically real temperature trend in Antarctica. Given that, it looks like your argument there assumes a conclusion that was not present.

eadler, you wrote, “[In a] monthly average at a given station[, y]ou are not choosing N values from from a random sample of temperatures. The N temperature measurements at a given station are all that there is. The average is the average with no uncertainty due to sampling.”

Daily temperatures at a given station are not day-by-day constant across the month. They may oscillate and there will likely be a trend with a non-zero slope across the month. The average of those temperatures will be one number, it’s true. However, it is false to say that the average temperature is an accurate representation of the temperature for that month. The average has a magnitude uncertainty that communicates the variation in daily temperature across that month. Including the sqrt(variance) of the magnitude is the only physically complete representation of a monthly average temperature.
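The “mean ± sqrt(variance)” representation argued for here can be sketched with invented numbers (a warming trend plus a day-to-day oscillation, not any real station):

```python
import statistics

# Hypothetical daily mean temperatures for one 30-day month (deg C):
# a warming trend across the month plus an alternating oscillation.
days = [8.0 + 0.15 * d + (-1) ** d * 0.8 for d in range(30)]

mean = statistics.mean(days)
sd = statistics.stdev(days)  # sqrt(variance): the magnitude uncertainty

# Reporting the bare mean hides the day-to-day variation; the
# "mean +/- sd" form carries that variation forward.
print(f"{mean:.2f} +/- {sd:.2f} C")
```

Here the single number `mean` says nothing about the roughly 1.5 C spread of daily values it summarizes; the ±sd term is what carries that information.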

That leads to the interesting point that the 30 sets of 12 months in a 30-year anomaly normal period will display a magnitude uncertainty in their 30-year monthly averages. That magnitude uncertainty should be propagated into any long-term temperature anomaly trend based on that 30-year normal, as an indication of the natural variation in temperature during the climate regime of the normal epoch. I present that calculation in my next paper, already reviewed and accepted by E&E, and it turns out to further seriously impact the meaning of the 20th century surface air temperature anomaly trend.

You wrote that, “The actual global temperature is not what we are calculating, but rather the change in temperature over time. It seems to me that the sensor errors that you mention will cancel when the temperature anomaly is calculated, unless there is a systematic drift over time.”

When calculating an anomaly by subtraction, the errors in the normal and the temperature propagate as their rms. They don’t subtract away. The rest of your comment about errors canceling is true only when errors are known to be random. Systematic errors are not random, and the estimated climate station measurement errors are not known to be random. Applying the statistics of random errors to measurement errors that are not known to be random, is a mistake.
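The rms (root-sum-square) propagation for the subtraction T_anom = T − normal can be written out directly; the sigma values below are illustrative only, not measured station uncertainties:

```python
# Independent errors combine in quadrature under subtraction;
# they do not cancel. Illustrative sigmas only (deg C).
sigma_T = 0.5       # error in the monthly temperature
sigma_normal = 0.3  # error in the 30-year normal

sigma_anom = (sigma_T ** 2 + sigma_normal ** 2) ** 0.5
print(round(sigma_anom, 3))  # prints 0.583
```

The anomaly's uncertainty is larger than either input error, which is the opposite of the intuition that subtraction makes errors "cancel".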

This project is not capable of settling the issue. The debate isn’t about what the earth’s temperature has been doing for the last 150 years. All this project can do is damage the warmist meme by showing that we haven’t warmed. I would be surprised if that were the case.

“The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.”
———————————————————————-
What a load of cobblers. Putting together a dataset has nothing to do with creating ‘a new, irrefutable consensus’. There ain’t no such animal as a ‘new, irrefutable consensus’, anyway.

I don’t see any harm in this project, if they are as transparent as they promise to be. But, given the points made by PPs and Anthony about the quality of even the raw data, it will probably end up as a GIGO exercise. More crappy data does not mean more accurate conclusions.

Where a problem could arise is if the results of putting together a dataset are splashed across the world as somehow providing answers on a global scale. That would be several bridges too far. In fact, it is more likely that the dataset will be useful at local levels to provide pointers to the veracity of numbers being generated in particular areas.

As has been pointed out, none of this touches on causation anyway. But, as Ryan O has discovered, methodology is every bit as fraught as subsequent steps in this field.

Let me guess:
1. This statement is incontrovertible
2. The economics is settled.
3. It must be true because there is a consensus of Internet bloggers.

Or maybe you are just making up a story.
————

Consider the misery generated to date by a program that is so tiny that even its developers admit that it will have no measurable impact on the climate. I am referring to CAFE standards, not the internet kind, or the kind that should only sell coffee grown in the shade, but the program wherein the U.S government mandates the fuel mileage of motor vehicles sold in this country.

Four studies have looked at the number of deaths caused by this program, and no, the studies were not done by “Big Oil”. The studies I refer to were done by the following:

JR Dunn, writing in The American Thinker, compiled the results of these studies and published ranges in the estimated deaths from the CAFE standards to date: between 42,000 and 125,000 Americans killed as of April of last year.
CAFE standards alone have already caused what anyone but a Bolshevik would describe as “gargantuan human misery.” Imagine what a program that would materially impact CO2 concentration in the atmosphere would do.

I don’t envy your lot as a politician. You deal in rhetoric and as Plato so fervently argued, the truth cannot be found using it. However, as a politician you must make decisions and arguments with the information you have at hand. The problem with climate science is that it does not operate within a paradigm theory as do chemistry (Lavoisier’s combustion theory) or physics (Einstein’s theories of relativity). While there are competing theories to account for the behaviour of the climate only one has been given the full backing of the rhetoricians irrespective of the validity of the theory. This theory, as I have said before, has its basis in a corrupt line of scientific inquiry, so we cannot know what is true by using this theory.

I think that what we do know about climate is presently so incomplete that we cannot discern man made effects, much less the magnitude of those effects. So rather than hitching political fortunes to global climate dogma the prudent politician should put government resources to use in ways that produce direct benefit for his constituents. Wind farms, as you have found out, are dubious “investments”.

“…Global warming is real, Muller said, but both its deniers and exaggerators ignore the science in order to make their point…”

Yes, global warming IS real, and nobody really denies that.

The question all along has always been: “Just exactly how much warming have we seen, and is this warming outside the bounds of natural warming in the past”?

Several scientists have questioned the use of thermometers used for daily readings being “re-purposed” to provide a climate record.

Over the years, the moves, changes in equipment, UHI, encroachment on the thermometers and other things introduce possible errors into the system.

Add to that the adjustments, dropping of stations, 1200km smoothing, extrapolation to account for areas of no data, classification of rural or not based on nightlights, refusal of scientists to simply tell people which sites they used and how they processed that data, use of different averaging periods, etc – and you see how they’ve managed to muddy the original question.

And we haven’t even TOUCHED the whole idea of an “anomaly”, especially when we don’t know what “normal” is.

It seems the only use we see of the processed data is as a basis to declare “warmest month since whenever”.

Unfortunately, the “exaggerators” have already attacked. They’ve looked at the list of donors and determined the effort to be worthless.

That is, unless the data confirms their theory. Then, they’ll fall over themselves to deem the study a success.

The rural vs urban temperature difference in the GISS data is so easily identified by even a 10-year-old, it is a wonder that the re-analysis needs doing at all. The correlation with global temperature increase with decreasing station count is as easily seen with other comparisons within the GISS official data. The divergence of global temperatures between land stations and satellite data is similarly easy to see. An objective review along these lines of the current data, presented to Congress as a challenge to requested funding based on (false) claims seems straightforward and simple.

Why is it that what we see posted in such clear and simple graphs has no apparent credibility and use for the Inhofes who wish to expose the CAGW fantasy?

A series of about 4 graphs seems to show it clearly. And that is the ADJUSTED data. What is wrong, technically, with these comparisons?


There is not much divergence in the global average temperature anomaly between satellite observations and the 3 major temperature station databases.

LazyTeenager says:
Despite the moist enthalpy argument, you can’t get away from the fact that if the global heat content of the oceans, the land and the air is going up, then the global average air temp is also going to go up as well.

You just don’t get it, do you? The global average air temperature may in theory drop even if the heat content of the atmosphere rises (for instance if the rise occurred just in already warm areas, while colder and drier areas got cooler).

(Besides, ocean heat content has not increased at all since we started to get better measurements)

eadler, neither you nor LazyTeenager have ever given evidence that you’ve studied the air temperature record enough to actually understand its problems. And yet, you’re both ready to dismiss its critics. Therefore, from your own perspectives and at the very best, you’re both as prejudicially biased as they are.

You also wrote, “There is not much divergence in the global average temperature anomaly between satellite observations and the 3 major temperature station data bases. … When the data beginning in 1980 is analysed using the same baseline years for temperature, the graphs correspond quite well.”

Since satellite temperatures are calibrated to buoy SSTs, they’re not independent data sets. Their correlation, therefore, tells you nothing about the reliability of the record.

AGW climate science is rife with the sort of sloppy tendentious analysis you posted.

Satellite data is about 0.22K cooler than the GISTemp data. As I understand/understood, the satellite data is calibrated internally, but there must have been some ground-truthing done, some baseline established. So the temp difference is real.

To be cooler, the satellite data is likely ocean-biased (relative to GISTemp). GISTemp is probably land-biased in the same way. Since there is a known UHIE issue of undercorrection (of an undetermined amount, said by warmists to be negligible, said by skeptics to be up to 0.3K), is it possible that the satellite–GISTemp difference IS the uncorrected UHIE?

A 0.22K of incorrectly corrected UHIE in the GISTemp record would be about right by general guesses. What would that do to the temp rise since 1965? The urban–rural ratio has increased dramatically since then: the inadequate UHIE correction will bring down current temperatures (and anomalies) while leaving older records intact.

So: is the UAH/GISTemp temperature value discrepancy the uncorrected UHIE?

The most important mission of BEST is to establish open science regarding the AGW hypothesis, and to measure its performance using common-sense data and pragmatic homogenization procedures. It is paramount that your analysis be fully disclosed for all to evaluate, critique, comment on, and use to test alternative hypotheses.

Let rational and open science prevail. We who seek the truth, including a plurality of the electorate, require a scientifically balanced basis for responding to policy prescriptions. You have the power to replace the divisive and non-productive “advocacy spin” with which we are deluged daily with measured and reasoned consideration within the tradition of truthful scientific inquiry, including all its uncertainties.

I think it’s been adequately demonstrated that repurposing human energy regimes, with overwhelming cost, behavior mods, and personal inconvenience, will not occur without broad-scale disclosure of these nuances of environmental science. Have faith in the common sense of an informed electorate, as much as you can count on the resistance of an electorate besieged by advocacy spin. The people are receptive to the honest truth.