A "lights=1" USHCN station

Increasingly, non-standard equipment is being observed substituting for max/min thermometers and MMTS systems. In this case, vandalism problems with the MMTS forced a change in 2002 to a less accessible station setup.

In 2002 a change shows up on the GISS graph for the station. A step of about 2 degrees C seems apparent at that time, but other stations in the area show a similar event, so it is unclear just how much this equipment change, elevation change, and location change may contribute to the Port Angeles record.

Oddly though, while the change in lat/lon shows up in NCDC’s MMS, not a single mention of the new equipment can be found, nor a mention of the new observing height. MMS still says all the original equipment is in use:

According to Don’s survey notes, the old MMTS is still at the old site:

“In 1987 the station was moved to the location on the lawn at the Southeast corner of the Port Angeles City Hall (see photos of Prior Location). It remained there until 2002. The exposed location allowed for repeated acts of vandalism, and while the MMTS still remains on the site, it no longer functions. In 2002, the City of Port Angeles purchased a weather station from the Davis Instrument Company and installed it on the top of the utility pole. NOAA was made aware of the change and accepts the data.”

I wonder if this new equipment gets regular inspections at this height to check for wasp nests in the IR shield or bird excrement in the rain gauge?

UPDATE (Steve Mc):
I’ve done a few figures checking the adjustments at Port Angeles WA. Station moves were recorded in 1952, 1985, and 1988. TOBS changed from midnight to morning in 1935, and to afternoon in 1944, except for a short interval at 10 pm in the late 1940s. MMTS was introduced in 1984. The USHCN stage adjustments are shown here:

Next is a comparison of GISS stage adjustments – Port Angeles has a significant Y2K error. It also has a very large “UHI” adjustment to unlit areas (it has GISS-brightness of 28) that reverses the Y2K and more.

Here is a comparison of the USHCN TOBS version and the GISS adjusted version. Neither shows any particular trend; both show a very warm 1998 with something of a mean-reversion since then.

I can’t tell from the picture. Is the sensor 180 degrees around from the light? That is, is the pole at least partially blocking the sensor from the IR produced by the street lamp? Those beasties get nasty hot.

What’s the point of posting this picture? It seems like the whole pattern of posting pictures and cackling about them is promotional. Especially when Steve says he thinks it unlikely that an effect from the site issues will be proven.

Is this site going to become some sort of Daley substitute? We haven’t seen a science paper since that GRL paper two years ago. There remain all sorts of issues in understanding and quantifying reconstruction methods. But we’ve decamped to a PARADE of picture shows.

#6 – The point is that ridicule sometimes serves a purpose when people who ought to be reasonable decide not to be. There will be a whole spectrum of rebuttals to the AGW campaign. Some of it definitely will be promotional, but as long as it isn’t outright fabrication, then objections are mostly a matter of taste. When the PR on this blog overwhelms the criticism of scientific methods and practices, then I’ll agree that it needs to be dialed back. Until then, let the sun shine on the failure of professionals who work for the citizenry to do their jobs properly.

I wondered the same thing. Yesterday, while doing the Clemson, SC survey, my camera malfunctioned as I approached the electric fence that was 10 ft. from the Davis sensor. Unsure as to the actual cause.

one point of surfacestations is to confirm/disconfirm the accuracy of metadata, and to improve it.

Hansen has an UNTESTED HYPOTHESIS in his approach: namely, that “nightlights” pick out rural vs. urban.

How do you test this hypothesis? Visit the site.

If lights=0 and you find a site potentially polluted by human activity,
then lights=0 is flawed. The assumption is that rural sites measure the long-term
climate signal. The sampling method (lights=0) assumes that this
signal, the lights signal, picks out rural sites. That needs testing. A site visit
and site documentation test that assumption.

Perhaps lights=0 fails a small portion of the time. Fine.
Perhaps the corruption we find is ambiguous. Fine.
Still, it is a valid exercise to CONFIRM the metadata.
(Elevation is another simple example.)

#6 Especially when Steve says he thinks it unlikely that an effect from the site issues will be proven.

Does Steve actually say this?

Nevertheless, it would seem to me that 100% proof is indeed not easy, but using IPCC definitions it is “very likely” that ACs, parking lots, etc. will affect T not only through offsets but also through trends. And because things like the ACs were “very likely” not always there, this also affects homogeneity.

Now second, claims have been made that this is all corrected for, either using metadata or using statistical methods. As commented in #13, the metadata seem to be unconfirmed in various cases. This leaves the claim that statistical analysis fixed all these issues. Recent discussions on this forum of these statistical corrections suggest that this is not trivial. In addition, some of ‘the correctors’ seem reluctant even to disclose the exact method of correction.

Especially when Steve says he thinks it unlikely that an effect from the site issues will be proven

I don’t recall saying anything in precisely those terms (although maybe you can dredge up some passim comment). I think that the larger issue is the non-US siting issues, where there is a MUCH larger trend than in the US, which has a lot of rural sites (whatever their defects); my impression is that the offshore sites tend to be more urban. I think that some of the benefits may come from a better analysis of UHI effects.

I think that it’s a good idea to audit and verify underlying data. When I started trying to replicate Mann’s work, I didn’t “expect” anything to be wrong with it or that, if I did, anybody would be interested in what I thought. When I realized that nobody had ever audited the damn thing, I thought that it would be interesting to do. With surfacestations, remember that Vose, Karl et al 2005 said that documentation of the type that Anthony is providing was an “ideal state”. It’s intrinsically worth doing.

I find that it already enables a much sharper examination of the various adjustments.

#15 Bernie…
Correct me somebody if I’m mistaken, but the NASA GISS numbers are from another Port Angeles station, 425742. Tu Tiempo has a third one, 727885, at the William R. Fairchild Int AP… The more WX station networks we have the better; everyone will be satisfied… I must say, after eyeballing through some 10 states at USHCN from west to east, warming in the US is not too impressive…, not even in Nebraska! (USHCN thread starting 2007 Feb 15.) Temperatures in nature outdoors seem more stable than in some overworked computers indoors…

The US Surface temperature measurement system was designed to tell people what they want to know, which is “how hot is it where I live”.

It was NEVER intended to scientifically measure the climate of the earth. How could it? It doesn’t cover the rest of the world, nor the oceans. So, stop acting surprised and horrified by station conditions. They are fine for what they are intended.

All we need are about a dozen pure greenfield temperature stations on each continent and on each ocean, in order to calibrate the satellite data.

Obviously, it’s not what you want to hear, because you want to audit everything with your statistical methods and endlessly discuss the effects of air conditioners, but you need to face reality.

I would say the point is to document the degree of compliance with monitoring-site location standards. This serves two purposes: 1) it gives whoever manages this network an opportunity to see problems and correct them (feedback for quality control), and 2) it gives future researchers an opportunity to select sites that comply with location standards in their research.

The USHCN is billed as a network of “high quality” and so far it appears that compliance with location (and apparently equipment) standards is the exception rather than the rule. This needs to be known. If the data coming from these sites are junk, then any conclusions reached that are based on those data are also junk.

You cannot measure surface climate 25 feet in the air over a paved street, in the middle of a parking lot, on top of a roof, or next to an air-handler condenser. I would also go so far as to say that electronic instruments are likely to be less stable over time than mechanical ones; there would be more drift. A hybrid should be developed that uses a mechanical thermometer read electronically, rather than a thermistor element that is subject to drift and noise. Mechanical thermometers don’t drift and can last for a century. I would be greatly surprised if electronic measuring devices last more than 10 to 20 years before needing replacement: should a part fail, it will likely no longer be manufactured, so a replacement would be unavailable and the entire apparatus would have to be replaced.

Modern electronics has probably resulted in less accurate measurements over the span of time required for climate research to spot trends in the tenths-of-a-degree-per-decade range over the course of a century or longer. I would place more trust in a mechanical thermometer that is read electronically and reset automatically at the same time each day.

I’m simply exercising the “power of large numbers”. A large number of photos that show out of compliance station sitings helps zero in on the true mean of data measurement quality.

When the USHCN “high quality network” is fully surveyed to the best of our abilities, and the number and ratings of quality is assigned and averaged, we’ll get a number that will tell us the true quality of the network. It may be high, medium, or low. We don’t know yet.
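The “power of large numbers” reasoning above can be sketched numerically. The Python below is a minimal illustration, not real survey data: the 1-to-5 rating scale and the rating weights are invented, and only the station count echoes the roughly 1,221-station USHCN. It shows the standard error of the estimated mean rating shrinking as more of the network is surveyed:

```python
import random
import statistics

random.seed(42)

# Hypothetical site ratings (1 = best, 5 = worst); the weights below are
# invented purely to illustrate the sampling idea, and the station count
# roughly matches the ~1221-station USHCN.
population = [random.choices([1, 2, 3, 4, 5], weights=[5, 10, 25, 35, 25])[0]
              for _ in range(1221)]

# As the surveyed sample grows, the standard error of the estimated
# mean rating shrinks roughly as 1/sqrt(n).
for n in (50, 200, 800):
    sample = random.sample(population, n)
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    print(f"surveyed {n:4d} sites: mean rating {mean:.2f} +/- {se:.2f}")
```

The same logic applies whatever the true distribution turns out to be; the survey only has to be large and unbiased for the estimated mean to tighten up.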

But posting these pictures, if nothing else, has been thought provoking. Sometimes those thoughts yield something very valuable. For example, had I not posted the Detroit Lakes photos that everybody was hollering about, we would not then have the subsequent discussion about the Y2K problem in Hansen’s adjustments. And the thoughts extend well beyond this blog. The first step in solving any problem is to get people to acknowledge and talk about it. I’d say the project has at least accomplished that.

Until Steve tells me to stop, I’ll continue. The premise of the project was to have open and accessible data and methods throughout. I’d rather be accused of being too open than of being closed, as we’ve seen from some other scientists. Also, since I don’t reside in a University environment, I have no peers to bounce ideas off of at lunch. Posting in this “ongoing seminar” is the next best thing, and feedback from participants has already brought improvements to this project that are being incorporated.

The US Surface temperature measurement system was designed to tell people what they want to know, which is “how hot is it where I live”.

It was NEVER intended to scientifically measure the climate of the earth. How could it? It doesn’t cover the rest of the world, nor the oceans. So, stop acting surprised and horrified by station conditions. They are fine for what they are intended.

Gunnar, I couldn’t agree more. But since it is being used for that purpose, at least in part, it deserves examination at the measurement level. We have to start somewhere; why not here?

The Climate Reference Network appears to be set up correctly in “greenfields” and will likely do a good job of measuring the near-surface temperature record in the future. But it’s incomplete, and does not yet have enough time-series data to gather trends from.

Anyone ever touch a creosote-drenched telephone pole that has had the sun beating on it all day? And sun there is, in abundance, in Port Angeles. That whole stretch between there and Sequim is in the rain shadow of the Olympics; prevailing winds are from the SW. I think Sequim gets something like 14 or 15 inches of rain annually; Port Angeles is around 20.

>> But since it is being used for that purpose, it deserves examination at the measurement level. We have to start somewhere, why not here?

Because by accepting this false premise, you may score points in a battle, but lose the war of truth. Anyone who chooses to make conclusions about global climate using a corrupt and extremely limited surface network, never intended to measure climate, has a big non-scientific agenda. They are choosing this network EXACTLY because it wasn’t intended for this purpose. That opens the door to their adjustments and numerical funny games. If you think they can be pressured into revealing their cards, you far underestimate them.

>> The Climate Reference Network appears to be set up correctly in “greenfields” and will likely do a good job of measuring the near surface temperature record in the future. But it’s incomplete, and

All it needs to do is calibrate the satellite measurements.

>> does not have enough time series data to gather trends from yet.

No amount of time will be enough with this kind of measurement. As Sam has explained numerous times, isolated temperature measurements cannot tell us much about the complete thermodynamic state of the earth system. Like a balloon, you can push in on one side, but only a fool assumes that it didn’t bulge out at some other spot.

The US Surface temperature measurement system was designed to tell people what they want to know, which is “how hot is it where I live”.

It was NEVER intended to scientifically measure the climate of the earth. How could it? It doesn’t cover the rest of the world, nor the oceans. So, stop acting surprised and horrified by station conditions. They are fine for what they are intended.

Gunnar, I couldn’t agree more. But since it is being used for that purpose, it deserves examination at the measurement level. We have to start somewhere; why not here?

I don’t agree.
Putting a station inside a courtyard, under a tree, over asphalt, at 50 m above the ground, etc. is not a good service at all. You cannot provide people that kind of readings. Meteorological observables are a matter of science and must be taken in a scientific manner.
Anyone can put a thermometer wherever he wants; a public institution cannot!

>> Putting a station inside a courtyard, under a tree, over asphalt, at 50 m above the ground, etc. is not a good service at all. You cannot provide people that kind of readings.

Sure you can. The people who want to know these specific measurements live “inside a courtyard”, “over asphalt” and next to AC units. When people living in NYC check the current temperatures, they DO NOT want to know what NYC would be like IF it were a “green field”, minus their 11 million neighbors. They want to know the current temperature, without any adjustments!

You are so focused on this so-called “climate controversy” that you have completely forgotten or misunderstood why these weather stations exist. They are meant to tell us the weather, WHERE WE LIVE! Based on these requirements, it is unscientific (i.e., untruthful) to tell people that the temperature in their city is 10 deg F lower than it actually is.

If you want to do climate science, use the global satellite measurements.

There are weather stations, and there are climate stations.
They serve different purposes.
Weather stations are designed to be fed into weather forecasting models. They need to show temperatures as they currently exist in the environment the forecast is being made for.
Climate stations are designed to be fed into climate forecasting models. They need to be representative of the earth as a whole, not just the human-modified portions of it.

The problem is that we are trying to use weather stations as climate stations. I do not believe that circle can be squared. The weather stations were never designed to produce climate data. No amount of reverse engineering can recover historical data that was not collected.

Gunnar #28,
I strongly disagree.
First of all, I’m talking about “meteorological observables”, not about climate issues.
I know that people want to know the temperature where they live, but if you provide them readings taken in your courtyard, you are providing a datum representative of nothing more than your courtyard.
So I’m not focused on this so-called “climate controversy”; I’m just focused on science. It’s so simple!

“If you want to do climate science, use the global satellite measurements.”

Climatologists are doing so – that is why those satellites were put up there in the first place.

The problem, of course, is that we want to know what happened to climate in the past – and we can’t retroactively put satellites into orbit.

Which means we’re kinda stuck with the extant historical data and proxies. We can either do the best we can at extracting a climate signal from those, preferably using multiple kinds of evidence and lines of analysis so that possible biases are cross-checked, or we can throw up our hands and say “it’s unknowable.”

Love this site. Hate the trolls. Ya know city temp data could be got without asphalt, air-conditioners and whatnot if they were placed at golf courses. Every city has them, they could be caged to protect from stray shots and golf courses tend to be fairly secure since they only want paying customers. The geeks servicing them would need to wear plaid pants of course.

#33, there are different methods of analyzing the historical data, though.

The method “Look at the physical sites, and chuck all of the ones that fail a minimum standard currently” hasn’t been tried. Yes, it would be better to send observers back in time to do a hell of a lot more observations. But that is just as asinine as whining that we can’t send a competent comprehensive temperature monitoring network back in time either.

I’ve used both methods on my own data: filtering in advance (“This data point is going to suck, the power went out,” or “I might run out of ingredient X”), and retroactive statistical analysis to determine other outliers that I can think about chucking. And, importantly, figuring out what marginalia or metadata are associated with those flipping outliers. There’s no such thing as too much metadata. And at a minimum that’s what physically looking at the sites gives.

No, chucking ‘all currently bad sites’ isn’t going to exactly correspond with ‘all sites that have sucked historically’. But, there is the off-chance they might have a smidgen of correlation.

fine – looking at only the ‘properly sited’ stations gives an independent analysis of the data. Nothing wrong with that – I don’t think anyone is objecting to that. Have at it – are you committing to doing that analysis, and publishing in the literature where the results can be carefully examined?

But:
1. Designing an alternative analysis DOES NOT INVALIDATE THE CURRENT ANALYSIS. Hansen et al chose to do QA and corrections based on analyzing the data and comparing stations. There were good reasons to do that, and some weaknesses. Finding some badly sited stations from among those he used, which is so far all that is being done, DOES NOT IN ITSELF INVALIDATE THAT APPROACH – much as the screaming here about the stations would indicate otherwise.

2. Even if only well sited stations are retained in the analysis, a similar examination of the data must be done. You indicate this in your post above, but it bears emphasizing. Just as bad siting does not in itself tell us the trend data are irrecoverably contaminated (or contaminated at all), good siting does not tell us there is no trend contamination, or whether there are time of day issues, whether a thermometer was replaced possibly into a different location within the shield – this can have significant offset effects – whether local irrigation was commenced or discontinued – and on and on.

3. Posting the photos on the front page, with superimposed declining uncorrected temp data for the ‘well sited’ station, and increasing uncorrected temp data for the ‘badly sited’ station, without noting that the trends are indistinguishable for those stations in the corrected data, and especially without prominently noting that the preliminary photo-only data is UTTERLY USELESS for deciding the validity of the extant analysis, is at best intellectually misleading. At best.

1) Who said it did? Unless, of course, it shows something markedly different. Then you compare and contrast the two separate methods and try to determine which is more accurate.

2) Of course. I didn’t mean to imply, suggest, or otherwise allude to any such insanity.

3) If the photo showed an MMTS hooked up to a blast furnace, along with operator notes ‘moved here in 82′, do you really need more information to say this site should be removed from the analysis? Without wading through Hansen it is tough to say the exact weighting any individual site ends up with. I, personally, would chuck this one faster than some of the more sensational photos. Height is an acknowledged and accepted influencing factor.

I do see the difference between ‘Temperature Data’ and ‘Temperature Trend Data’, and figuring out trend contamination is trickier. But the very first step is ‘achieve census of extant sites’. Until that’s done, you’re aiming to motivate people to participate to actually get the flipping data you want to evaluate. Four or five pictures of a site by anyone, of any skill level is more than NOAA has done. Even when the sensor is on NOAA property.

Is it earthshaking news that Port Angeles isn’t home to a well maintained site? No. Is it one more step towards a census? Yes.

In general: If the system wasn’t designed to do what we’re doing with it, I’d logically think that using it for that purpose would yield results that are inherently incorrect. Which is why the temperatures get corrected, one would think. Claiming that this (the “global average based upon these stations”) means anywhere near as much as it’s being made out to mean is, I think, dishonest.

But the value of the data is not the subject here! The point is that we’re collecting it, the stations are not meant for it, and it’s all we have. So, whatever that means, why not restrict things to the best stations?

Which ones are the best? Gather the data on the state of the stations, grade them, and see what things look like with the Class 1 or Class 1 and 2 stations only. We should have already been able to do that, and that is the entire point!

Okay, so what does that photo tell us about this site? First we have known possible contaminations, which one would imagine are actually probable contaminations: the phone pole, the material covering the phone pole, the light, the power to the light. Then there are the ones we don’t know about: insects and birds (bats too, perhaps). Then there are the obvious issues that break the standards: proximity to the light, the height of the sensor, and the equipment itself.

I can semi-answer that “do errors in lights=0 matter?” question right now. In this case, yes, it certainly appears so. In other cases, maybe, maybe not. Obviously it’s a near certainty that in some cases, and maybe in most cases, the answer will be “yes”. (If that’s really even up for debate, the obvious answer already being “yes”.) What we can’t answer yet is what percentage. And once we can, we will be able to determine the extent of the “matters” for any station that’s kept in the analysis. Does that invalidate the network? I don’t think so. (And again, what use the information is, and what it means, is a separate issue.)

And wait, there’s more! If we ignore the question of whether the global means of these separate systems actually have any meaning at all, much less as a whole, how’s this for a question about the data itself:

Since the trend on the surface using land temps only (well, the material under the equipment and how it mixes with air 5 feet or more up) is a trend of .07 C per decade over 125 years, how is that possible when the SST readings have risen only a little over .01 C per decade over 150 years? (I’m surprised it’s a 7:1 ratio, but maybe I shouldn’t be.)

Regardless, measuring air at 25 feet and at 5 feet are vastly different things, and adjustments shouldn’t be needed, so maybe nothing should be surprising. That’s why the sites get audited! And, I believe, why so many folks are upset it’s getting done. So all the bickering ignores the reasoning behind this, I think: finding sites that don’t meet standards and, later, hopefully, explaining or removing the inconsistencies in all the records.

Modern crypto is based on burying the signal in so much noise that without the key the signal is very hard to extract.

What is being done in climate is trying to extract a very small signal from a very large noise. Except that since there is no decrypted text to refer to there is no way to be sure all the noise has been removed.

It is ludicrous to talk about .01 C differences in a signal + noise that varies by 50 deg C.

Small differences statistically extracted might be valid if the system was stable and repeated measurements over time were collected.

However the system is not stable and is influenced by multiple chaotic oscillators.

And that is even before we get into the difficulties of actual measurement.
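The difficulty of pulling a small signal out of large noise can be toyed with directly. The Python sketch below is purely hypothetical, with every number invented: a 0.1 C/decade trend is buried under a 10 C seasonal cycle and 3 C of daily weather noise, and ordinary least squares fits it back out. Note that this only works because the simulated noise is independent and the simulated system is stationary, which is precisely the assumption being questioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily series: a 0.1 C/decade trend buried under a 10 C
# seasonal cycle and 3 C of day-to-day weather noise (all numbers invented).
years = 50
t = np.arange(years * 365) / 365.0        # time in years
true_trend = 0.01                          # C per year = 0.1 C per decade
series = (true_trend * t
          + 10 * np.sin(2 * np.pi * t)     # seasonal cycle
          + rng.normal(0, 3, t.size))      # weather noise

# Ordinary least squares pulls the small trend back out of the large noise,
# but only because this noise is independent and the system stationary.
slope = np.polyfit(t, series, 1)[0]
print(f"fitted trend: {slope * 10:.3f} C/decade (true: {true_trend * 10:.1f})")
```

With correlated noise, drifting instruments, or an unstable system, none of the clean convergence shown here is guaranteed; the sketch illustrates the best case, not the actual climate record.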

A “law of large numbers” is one of several theorems expressing the idea that as the number of trials of a random process increases, the percentage difference between the expected and actual values goes to zero.

In other words, since we’re combining things with math, we get math effects: continually averaging imprecise information makes that information more precise.

So the only reason we get down to that .01 is by the sheer volume of the measurements being combined, giving a smoothing over time, not a native accuracy to that level, no…
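That averaging point can be made concrete with a quick simulation (the true temperature and the error sizes below are invented for illustration): independent, unbiased errors shrink roughly as 1/sqrt(n), but a bias shared by every reading never averages away:

```python
import numpy as np

rng = np.random.default_rng(1)
true_temp = 15.0

# Independent, unbiased reading errors of 0.5 C average away as n grows.
for n in (10, 100, 10000):
    readings = true_temp + rng.normal(0, 0.5, n)
    print(f"n={n:6d}: error of the mean = {abs(readings.mean() - true_temp):.4f} C")

# A bias shared by every reading never averages away, no matter how many
# readings are combined (the +0.3 C figure is invented).
biased = true_temp + 0.3 + rng.normal(0, 0.5, 10000)
print(f"with a common +0.3 C bias: mean error = {abs(biased.mean() - true_temp):.3f} C")
```

This is why the sheer volume of measurements improves precision but cannot, by itself, remove a systematic siting bias that many stations share.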

Assuming that I have that right, 75 joules added to a mole of air would cause its temperature to increase by about 3 degrees, while the same 75 joules added to a mole of water would raise it by only one degree.

However, the mass of the atmosphere is 5.3 x 10^18 kg. That’s a lot, but it’s only .378% of the oceans, which weigh in at 1.4 x 10^21 kg. The oceans are only 6% of the mass of the crust, and the crust is only .374% of the Earth’s mass.

The Sun is heating all these components simultaneously. (AGWers pretend that the Sun only heats the atmosphere.) So assume for the moment that all components were at equilibrium, and the Sun became more active. Air temps would increase the most, oceans much less, crust much less, core least. However, it’s energy that matters, so a slight rise in ocean temperatures will deliver a lot of energy to the atmosphere.
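Taking the masses quoted above together with standard textbook specific heats, the same-energy comparison can be sketched in a few lines; the size of the energy pulse Q is arbitrary and chosen only for illustration:

```python
# Standard reference specific heats; masses are the figures quoted above.
c_air, c_water = 1005.0, 4186.0    # specific heat, J/(kg*K)
m_air, m_water = 5.3e18, 1.4e21    # atmosphere vs. oceans, kg

# The same energy pulse Q delivered to each reservoir: dT = Q / (m * c)
Q = 1.0e22                          # arbitrary pulse size, joules
dT_air = Q / (m_air * c_air)
dT_water = Q / (m_water * c_water)
print(f"atmosphere warms {dT_air:.2f} C; oceans warm {dT_water:.5f} C")
print(f"heat-capacity ratio (ocean/atmosphere): {(m_water * c_water) / (m_air * c_air):.0f}x")
```

The ocean/atmosphere heat-capacity ratio of roughly 1,100 is what makes the closing sentence work: a tiny ocean temperature change corresponds to an energy store large enough to move air temperatures a great deal.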

Wouldn’t it be better to find the actual errors and base the corrections on that?

Hansen bases his corrections on assumptions about the quality of the data. If those assumptions are wrong (as appears to be the case, since his methods did not ferret out the currently discussed error), then his corrections are of small or possibly even negative worth.