Alarmists throw in the towel on poor quality surface temperature data – pitch for a new global climate reference network

From the International Journal of Climatology and the “if you can’t beat ’em, join ’em” department.

To me, this feels like vindication. For years, I’ve been pointing out just how bad the U.S. and global surface monitoring networks have been. We’ve seen stations sited on pavement, at airports collecting jet exhaust, right next to the heat output of air conditioning systems, and with failing instruments that read high.

USHCN climate station at the University of Arizona – in a parking lot. Photo: Warren Meyer

We’ve been told it “doesn’t matter” and that “the surface monitoring network is producing good data”. Behind the scenes, though, we learned that NOAA/NCDC scrambled when we reported this, quietly closing some of the worst stations while making feverish and desperate PR pitches to prop up the narrative of “good data”.

Read my report from 2009 on the state of the US Historical Climate Network:

That 2009 report (published with the help of the Heartland Institute) spurred a firestorm of criticism, and an investigation and report by the U.S. Office of the Inspector General who wrote:

Lack of oversight, non-compliance and a lax review process for the State Department’s global climate change programs have led the Office of the Inspector General (OIG) to conclude that program data “cannot be consistently relied upon by decision-makers” and it cannot be ensured “that Federal funds were being spent in an appropriate manner.”

Comparisons of 30-year trends for compliant Class 1,2 USHCN stations, non-compliant Class 3,4,5 USHCN stations, and NOAA final adjusted V2.5 USHCN data in the continental United States. Graph by Evan Jones. CLICK FOR LARGE IMAGE

Now, some of the very same people who have scathingly criticized my efforts, and the efforts of others, to bring these weaknesses to the attention of the scientific community have essentially done an about-face. They have authored a paper calling for a new global climate monitoring network modeled on the United States Climate Reference Network (USCRN), which I have endorsed as the only suitable way to measure surface temperature and extract long-term temperature trends.

During my recent trip to Kennedy Space Center (thanks to generous donations from WUWT readers), I spotted an old-style airport ASOS weather station right next to one of the new USCRN stations at the Shuttle Landing Facility runway, presumably placed there to study the difference between the two. Or, possibly, they just couldn’t trust the ASOS station when they most needed it – during a Shuttle landing, where accurate temperature is of critical importance in calculating density altitude, and therefore the glide ratio. Comparing the data between the two is something I hope to do in a future post.
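As an aside on why temperature matters so much here: density altitude is pressure altitude corrected for non-standard temperature. A rough rule-of-thumb sketch of the calculation (my own illustration using the common aviation approximation of about 118.8 ft per °C above ISA, not NASA’s actual procedure):

```python
def density_altitude_ft(pressure_alt_ft, oat_c):
    """Rule-of-thumb density altitude: pressure altitude corrected by
    ~118.8 ft for each degree C the outside air temperature exceeds
    the ISA standard temperature at that altitude."""
    # ISA: 15 C at sea level, lapsing ~2 C per 1000 ft
    isa_temp_c = 15.0 - 2.0 * (pressure_alt_ft / 1000.0)
    return pressure_alt_ft + 118.8 * (oat_c - isa_temp_c)

# A hot day at a near-sea-level runway: the aircraft "feels" ~1800 ft higher,
# which changes glide performance noticeably.
da_hot = density_altitude_ft(0.0, 30.0)
```

A 15 °C temperature error at the runway would therefore shift the computed density altitude by nearly 1,800 feet, which is why a biased sensor reading matters.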

ASOS and USCRN climate monitoring station, side by side at the Shuttle Landing Facility at Kennedy Space Center. 2-28-18 Photo: Anthony Watts

Here is the aerial view showing placement:

Location of ASOS and USCRN sensors at the KSC Shuttle Landing Facility. Image: Google Earth with annotations by Anthony Watts

Clearly, with its careful selection of locations and its triple-redundant, state-of-the-art aspirated air temperature sensors, the USCRN station platform is the best possible way to measure long-term trends in 2-meter surface air temperature. Unfortunately, the public never sees the temperature reports from it in NOAA’s “State of the Climate” missives; those instead rely on the antiquated and buggy surface COOP and GHCN networks and their highly biased, then adjusted, data.

So, for this group of people to call for a worldwide USCRN-style temperature monitoring network is not only a step in the right direction, but a clear indication of something more. Even though they won’t publicly admit that the unreliable and uncertain existing COOP/USHCN-style networks worldwide are “unfit for purpose”, they are in fact endorsing the creation of a truly “fit for purpose” global system to monitor surface air temperature: one that won’t be highly biased by location or sensor/equipment issues, and won’t need adjustments at all.

I applaud the effort, and I’ll get behind it. Because by doing so, it puts an end to the relevance of NASA GISS and HadCRUT, whose operators (Gavin Schmidt and Phil Jones) are some of the most biased, condescending, and outright snotty scientists the world has ever seen. They should not be gatekeepers for the data, and this will end their lock on that distinction. To Phil Jones’s credit, he was a co-author of this new paper. Gavin Schmidt, predictably, was not.

This is something both climate skeptics and climate alarmists should be able to get behind and promote. More on that later.

Here’s the paper: (note they reference my work in the 2011 Fall et al. paper)

There is overwhelming evidence that the climate system has warmed since the instigation of instrumental meteorological observations. The Fifth Assessment Report of the Intergovernmental Panel on Climate Change concluded that the evidence for warming was unequivocal. However, owing to imperfect measurements and ubiquitous changes in measurement networks and techniques, there remain uncertainties in many of the details of these historical changes. These uncertainties do not call into question the trend or overall magnitude of the changes in the global climate system. Rather, they act to make the picture less clear than it could be, particularly at the local scale where many decisions regarding adaptation choices will be required, both now and in the future. A set of high-quality long-term fiducial reference measurements of essential climate variables will enable future generations to make rigorous assessments of future climate change and variability, providing society with the best possible information to support future decisions. Here we propose that by implementing and maintaining a suitably stable and metrologically well-characterized global land surface climate fiducial reference measurements network, the present-day scientific community can bequeath to future generations a better set of observations. This will aid future adaptation decisions and help us to monitor and quantify the effectiveness of internationally agreed mitigation steps. This article provides the background, rationale, metrological principles, and practical considerations regarding what would be involved in such a network, and outlines the benefits which may accrue. The challenge, of course, is how to convert such a vision to a long-term sustainable capability providing the necessary well-characterized measurement series to the benefit of global science and future generations.

Changes in instrumentation were never intended to deliberately bias the climate record. Rather, the motivation was to either reduce costs and/or improve observations for the primary goal(s) of the networks, which was most often meteorological forecasting. The majority of changes have been localized and quasi-random in nature and so are amenable to statistical averaging of their effects. However, there have been regionally or globally systemic transitions specific to certain periods of time whose effect cannot be entirely ameliorated by averaging. Examples include:

Early thermometers tended to be housed in polewards facing wall screens, or for tropical locales under thatched shelter roofs (Parker, 1994). By the early 20th century better radiation shielding and ventilation control using Stevenson screens became ubiquitous. In Europe, Böhm et al. (2010) have shown that pre-screen summer temperatures were about 0.5 °C too warm.

In the most recent 30 or so years a transition to automated or semi-automated measurements has occurred, although this has been geographically heterogeneous.

As highlighted in the recent World Meteorological Organization (WMO) SPICE intercomparison (http://www.wmo.int/pages/prog/www/IMOP/intercomparisons/SPICE/SPICE.html) and the previous intercomparison (Goodison et al., 1998), measuring solid precipitation remains a challenge. Instrument design, shielding, siting, and transition from manual to automatic all contribute to measurement error and bias and affect the achievable uncertainties in measurements of solid precipitation and snow on the ground.

For humidity measurements, recent decades have seen a switch to capacitive relative humidity sensors from traditional wet- and dry-bulb psychrometers. This has resulted in a shift in error characteristics that is particularly significant in wetter conditions (Bell, Carroll, Beardmore, England, & Mander, 2017; Ingleby, Moore, Sloan, & Dunn, 2013).

As technology and observing practices evolve, future changes are inevitable. Imminent issues include the replacement of mercury-in-glass thermometers and the use of third party measurements arising from private entities, the general public, and non-National Met Service public sector activities.

From the perspective of climate science, the consequence of both random and more systematic effects is that almost invariably a post hoc statistical assessment of the homogeneity of historical records, informed by any available metadata, is required. Based on this analysis, adjustments must be applied to the data prior to use. Substantive efforts have been made to post-process the data to create homogeneous long-term records for multiple ECVs (Mekis & Vincent, 2011; Menne & Williams, 2009; Rohde et al., 2013; Willett et al., 2013, 2014; Yang, Kane, Zhang, Legates, & Goodison, 2005) at both regional and global scales (Hartmann et al., 2013). Such studies build upon decades of development of techniques to identify and adjust for breakpoints, for example, the work of Guy Callendar in the early 20th century (Hawkins & Jones, 2013). The uncertainty arising from homogenization using multiple methods for land surface air temperatures (LSAT) (Jones et al., 2012; Venema et al., 2012; Williams, Menne, & Thorne, 2012) is much too small to call into question the conclusion of decadal to centennial global-mean warming, and commensurate changes in a suite of related ECVs and indicators (Hartmann et al., 2013, their FAQ2.1). Evidence of this warming is supported by many lines of evidence, as well as modern reanalyses (Simmons et al., 2017).
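To make the “identify and adjust for breakpoints” idea concrete, here is a deliberately simplified, hypothetical sketch of the core operation. Real pairwise homogenization (e.g., Menne & Williams, 2009) is far more elaborate; this toy version just scans a candidate-minus-neighbors difference series for the split point with the largest shift in means, using invented synthetic data:

```python
import numpy as np

def detect_breakpoint(diff_series):
    """Return the split index with the largest jump in mean between the
    left and right segments of a candidate-minus-neighbors difference
    series (a crude stand-in for SNHT-style changepoint tests)."""
    n = len(diff_series)
    best_idx, best_shift = 0, 0.0
    for i in range(5, n - 5):  # keep both segments non-trivial
        shift = abs(diff_series[:i].mean() - diff_series[i:].mean())
        if shift > best_shift:
            best_idx, best_shift = i, shift
    return best_idx, best_shift

# Synthetic difference series: 120 months of noise with a 0.5 C step
# at month 60, mimicking an undocumented station move.
rng = np.random.default_rng(42)
diff = rng.normal(0.0, 0.1, 120)
diff[60:] += 0.5
idx, shift = detect_breakpoint(diff)
```

The detected index and shift would then drive an adjustment of the pre-break segment; the debate in the comments below is about how well such adjustments can be validated.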

The effects of inhomogeneities are stronger at the local and regional level, may be impacted by national practices complicating homogenization efforts, and are more challenging to remove for sparse networks (Aguilar et al., 2003; Lindau & Venema, 2016). The effects of inhomogeneities are also manifested more strongly in extremes than in the mean (e.g., Trewin, 2013) and are thus important for studies of changes in climatic extremes. State-of-the art homogenization methods can only make modest improvements in the variability around the mean of daily temperature (Killick, 2016) and humidity data (Chimani et al., 2017).

In the future, it is reasonable to expect that observing networks will continue to evolve in response to the same stakeholder pressures that have led to historical changes. We can thus be reasonably confident that there will be changes in measurement technology and measuring practice. It is possible that such changes will prove difficult to homogenize and would thus threaten the continuity of existing data series. It is therefore appropriate to ask whether a different route is possible to follow for future observational strategies that may better meet climate needs, and serve to increase our confidence in records going forwards. Having set out the current status of data sets derived from ad hoc historical networks, in the remainder of this article, we propose the construction of a different kind of measurement network: a reference network whose primary mission is the establishment of a suite of long-term, stable, metrologically traceable, measurements for climate science.

…

Siting considerations

Each site will need to be large enough to house all instrumentation without adjacent instrumentation interfering with one another, with no shading or wind-blocking vegetation or localized topography, and at least 100 m from any artificial heat sources. Figure 2 provides a site schematic for USCRN stations that meets this goal. The siting should strive to adhere to Class 1 criteria detailed in guidance from the WMO Commission for Instruments and Methods of Observations (World Meteorological Organization, 2014, part I, chap. I). This serves to minimize representativity errors and associated uncertainties. Sites should be chosen in areas where changes in siting quality and land use, which may impact representativity, are least likely for the next century. The site and surrounding area should further be selected on the basis that its ownership is secure. Thus, site selection requires an excellent working and local knowledge of items such as land/site ownership proposed, geology, regional vegetation, and climate. As it cannot be guaranteed that siting shall remain secure over decades or centuries, sites need to be chosen so that a loss will not critically affect the data products derived from the network. A partial solution would be to replace lost stations with new stations with a period of overlap of several years (Diamond et al., 2013). It should be stressed that sites in the fiducial reference network do not have to be new sites and, indeed, there are significant benefits from enhancing the current measurement program at existing sites. Firstly, co-location with sites already undertaking fiducial reference measurements either for target ECVs or other ECVs, such as GRUAN or GCW would be desirable. Secondly, co-location with existing baseline sites that already have long records of several target ECVs has obvious climate monitoring, cost and operational benefits.

Figure 2. Schematic of the instrumentation at a typical USCRN station in the CONUS. The triplicate configuration of temperature sensors is repeated in the three precipitation gauge weighing mechanisms and in the three sets of soil probes located around each tower (taken from Diamond et al., 2013)
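The value of the triplicate configuration is that a single drifting sensor can be outvoted. A toy illustration of the idea (my own sketch; the tolerance value and voting logic are assumptions for illustration, not the documented USCRN quality-control algorithm):

```python
def triple_redundant_reading(t1, t2, t3, tol=0.3):
    """Hypothetical pairwise-agreement check for a triplet of aspirated
    temperature sensors: return the median when at least one pair of
    sensors agrees within `tol` degrees C, else flag the reading (None)."""
    pairs_ok = sum(abs(a - b) <= tol
                   for a, b in [(t1, t2), (t1, t3), (t2, t3)])
    if pairs_ok >= 1:
        return sorted([t1, t2, t3])[1]  # the median is robust to one bad sensor
    return None  # no two sensors agree: flag for maintenance

# One sensor reading 5 C high is simply outvoted:
good = triple_redundant_reading(20.0, 20.1, 25.0)
```

With a single aspirated sensor, the 25.0 °C reading above would have entered the record unchallenged; with three, it is caught immediately.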

Siting considerations should be made with accessibility in mind both to better ensure uninterrupted operations and communications, and to enable both regular and unscheduled maintenance/calibration operations. If a power supply and/or wired telecommunication system is required then the site will need to provide an uninterrupted supply, and have additional redundancy in the form of a back-up generator or batteries. For many USCRN sites the power is locally generated via the use of a combination of solar, wind, and/or methane generator sources, and the GOES satellite data collection system provides one-way communication from all sites.

For a reference grade installation, an evaluated uncertainty value should be ascertained for representativeness effects which may differ synoptically and seasonally. Techniques and large-scale experiments for this kind of evaluation and characterization of the influences of the siting on the measured atmospheric parameters are currently in progress (Merlone et al., 2015).

Finally, if the global surface fiducial reference network ends up consisting of two or more distinct set-ups of instrumentation (section 4.1), there would be value in side-by-side operations of the different configurations in a subset of climatically distinct regions to ensure long-term comparability is assured (section 3). This could be a task for the identified super-sites in the network.

…

There are many possible metrics for determining the success of a global land surface fiducial reference climate network as it evolves, such as the number and distribution of fiducial reference climate stations or the percent of stations adhering to the strict reference climate criteria described in this article. However, in order to fully appreciate the significance of the proposed global climate surface fiducial reference network, we need to imagine ourselves in the position of scientists working in the latter part of the 21st century and beyond. However, not just scientists, but also politicians, civil servants, and citizens faced with potentially difficult choices in the face of a variable and changing climate. In this context, we need to act now with a view to fulfilling their requirements for having a solid historical context they can utilize to assist them making scientifically vetted decisions related to actions on climate adaptation. Therefore, we should care about this now because those future scientists, politicians, civil servants, and citizens will be—collectively—our children and grandchildren, and it is—to the best of our ability—our obligation to pass on to them the possibility to make decisions with the best possible data. Having left a legacy of a changing climate, this is the very least successive generations can expect from us in order to enable them to more precisely determine how the climate has changed.

h/t to Zeke Hausfather for notice of the paper. Zeke, unlike some of his co-authors, actually engages me with respect. Perhaps his influence will help them become not just civil servants, but civil people.

Climate scientists have already developed an adjustment methodology for existing data that produces a robust correlation to increases in atmospheric CO2.
We know what’s wrong with the existing measurements and how to fix them in conformance with the CO2 hypothesis. What else do we need? Why should we destroy the existing adjustment jobs and businesses?

How can we test such a methodology, given the thousands of data points and therefore the numerous degrees of freedom? The various influences of all the related systematic errors would be non-linear and multivariate. Saying “oh look, we finally massaged the statistical analysis to match this pattern of CO2!” is not science, nor scientific in method. Nice try; you might want to consult a high school-level textbook before replying.

The sheer absurdity of the idea that the land station set was ever ‘fit for purpose’ for determining a global temperature trend, given the massive urbanisation over the past century or so, let alone the local UHI effects illustrated by the pics in the article, is utterly bizarre to my mind. Without removing the inherent trend of the UHI effects, it is completely speculative whether there is any meaningful ‘trend’. Mannian reconstruction does not even rate as speculative IMO.

Satellites don’t take 2m level measurements, and that’s where human beings, their crops, and their meat animals live. IF adaptation is required, you need the information where you live, not the surface to thousands of meters.

D. J. Hawkins March 2, 2018 at 2:30 pm
Satellites don’t take 2m level measurements, and that’s where human beings, their crops, and their meat animals live. IF adaptation is required, you need the information where you live, not the surface to thousands of meters.
—–
I happen to live in the US, and the USCRN show no material warming since its inception. So by your reasoning, Americans have nothing to worry about. I couldn’t agree more.

“Satellites don’t take 2m level measurements, and that’s where human beings, their crops, and their meat animals live.”
But that’s not where AGW theory can be best falsified. The higher levels of the troposphere are where there is less noise, and where AGW theory makes its strongest predictions.

IF adaptation is required, you need the information where you live, not the surface to thousands of meters.

I’m considering adaptation. It was -16°C out last time I checked, and woollen socks would be nice. My toes are cold-numb, and that’s not good for blood pressure.
I’m pretty certain I don’t need a certified network for adaptation in this case. It is not a question of which decade I will be able to take the socks off.
The same applies to sea level. If my socks get wet, I certainly don’t need a satellite telling me how much the distance of the mean sea level from the center of the Earth has increased. It is totally irrelevant.
There are cases, where precise information on trend is useful, but it is hardly the reason why we adapt. After all, we tend to not plan even for the past, as Mosher once mentioned.

Hi Anthony,
While we may disagree to some extent on the magnitude inhomogenities in existing monitoring stations (and their ability to be addressed by statistical homogenization approaches), I think we can all agree on the need for better monitoring going forward, particularly in areas of the world with much sparser coverage than the US and Europe.
One important benefit of a global land reference network would be to allow us to test how well our homogenization approaches are doing in correcting for biases in the larger weather station networks (e.g. how the raw and adjusted data compares to data from nearby reference network stations). There will always be projects looking at local or regional temperatures that will benefit from denser (if more inhomogenous) weather networks, even in a world that has a global reference network. The global reference network will play an important role in helping ensure that systemic biases associated with changes in local station measurement techniques or conditions aren’t impacting our results.
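The kind of check Zeke describes reduces, at its simplest, to comparing fitted trends between a reference station and the raw/adjusted records of its noisier neighbors. A minimal sketch with entirely invented numbers, just to show the shape of the comparison:

```python
import numpy as np

def trend_c_per_decade(years, temps):
    """Ordinary least-squares linear trend, expressed per decade."""
    return np.polyfit(years, temps, 1)[0] * 10.0

# Synthetic example: a pristine reference record beside a homogenized
# record over 20 years (values invented for illustration).
years = np.arange(2005, 2025)
reference = 0.020 * (years - 2005)                # 0.20 C/decade
adjusted = reference + 0.005 * (years - 2005)     # 0.25 C/decade
trend_gap = (trend_c_per_decade(years, adjusted)
             - trend_c_per_decade(years, reference))
```

A persistent non-zero `trend_gap` across many such pairs is exactly the kind of systematic bias a side-by-side reference network would expose.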

Well now, the literal fact is, ….. iffen they changed all of the Surface Measuring Stations to incorporate “liquid immersed” temperature sensors …. then there would be no further need or use for “homogenization approaches” (data changing) because of random “noise” of short term temperature “spikes” or “drops” (decreases).
The “liquid” would automatically function as a “temperature averaging” function and thus the temperature sensors would be providing actual factual near-surface air temperatures.

MarkW, ….. I was thinking said “mass” surrounding the temperature sensor should be a liquid, ….. such as “-40F anti-freeze”, ……. because it is, per se, a natural “conductor” of heat energy and not an “insulator”, …. with said “heat” transferring rather quickly and evenly throughout the liquid.
And there should be two (2) temperature sensors in each unit so that their output can be compared to each other to ensure the unit is functioning correctly.
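What a thermally massive, liquid-immersed sensor would do is, in effect, apply a first-order low-pass filter to the air temperature. A sketch of that response (the time constant here is an arbitrary assumption, purely for illustration):

```python
import math

def thermal_lag(air_temps, tau=6.0, dt=1.0):
    """First-order response of a thermally massive sensor (e.g. one
    immersed in liquid): an exponential moving average with time
    constant tau, in the same units as the sample step dt."""
    alpha = 1.0 - math.exp(-dt / tau)
    smoothed, t = [], air_temps[0]
    for a in air_temps:
        t += alpha * (a - t)  # sensor relaxes toward the air temperature
        smoothed.append(t)
    return smoothed

# A one-sample 20 C spike barely registers through the thermal mass:
spiky = [10.0] * 5 + [30.0] + [10.0] * 5
smoothed = thermal_lag(spiky)
</```

The trade-off, of course, is that the same lag that suppresses spurious spikes also blurs genuine short-lived extremes.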

Honestly, shouldn’t the very scientists who “homogenize” data be the ones who thought of and championed a way to verify that the processes were working correctly? Like 20 years ago? I never trusted data I had to manipulate into telling its secrets unless I had a way to verify that the methods used were correct, within an acceptable margin of error. If I had no way of direct verification, then the new adjusted data set remained suspect.
Not speaking directly about you because I don’t know you, but seriously minded scientific people would ALWAYS look for ways to verify – and they would not trust proxy data that is also being manipulated unless they could not gather actual direct data to work with.
Serious minded scientific people are aware of and very careful of Confirmation Bias. It seems to this outsider that all of the Climate Science community is stuck in an echo chamber of bias. They need to step back and look once again at the entire picture – personal beliefs put aside – and ask themselves “Does all of the data support my claim, or do I need to adjust to the new reality?”

Well, that’s largely why NOAA started setting up the US Climate Reference Network around 20 years ago, before taking pictures of poorly sited stations was de rigueur. And we recently used it to verify that our homogenization processes were working correctly: http://onlinelibrary.wiley.com/doi/10.1002/2015GL067640/full

“If I had no way of direct verification”
Of course you can verify. The unadjusted data is available. You can calculate the index with no adjustment at all, if you prefer. It makes very little difference.

“Then why do it?”
Individual stations changed. Moved, changed times etc. Some went up, some went down. But when you add it all up, there is little net difference. But you don’t know that until you do it.
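For what it’s worth, the “calculate the index with no adjustment at all” exercise is simple to sketch: convert each station to anomalies from its own base-period mean, then average. A toy version (real indices also grid and area-weight the stations, which is omitted here):

```python
import numpy as np

def anomaly_index(station_temps, base=slice(0, 30)):
    """Average of station anomalies, each taken relative to that
    station's own base-period mean (here the first 30 years)."""
    t = np.asarray(station_temps, dtype=float)  # shape: (stations, years)
    base_means = t[:, base].mean(axis=1, keepdims=True)
    return (t - base_means).mean(axis=0)

# Two invented stations with very different absolute temperatures but
# the same underlying trend; anomalies recover the common signal.
years = np.arange(50)
cold_station = 5.0 + 0.01 * years
warm_station = 25.0 + 0.01 * years
index = anomaly_index([cold_station, warm_station])
```

Running this on raw versus adjusted station data is exactly the comparison being argued about: whether the resulting indices differ materially.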

“But you don’t know that until you do it.”
Then your argument would be, we don’t know what it’s going to do till we do it. Not “it makes very little difference” like you already know.
You are just a spin artist. Please retire.
Andrew

“There is overwhelming evidence that the climate system has warmed since the instigation of instrumental meteorological observations.”
I hope you meant initiation of instrumental meteorological observations, Zeke . . or is that a confession of sorts? ; )

Nick, how far back is the raw data available? Do you have raw, as-recorded data from 1880 on for each US station? I was under the impression that that is only available as microfiche, or paper copies, but that most of those were destroyed. If you only go back 20 years from today, well, that is useless.

The real problem is that any data is adjusted at all.
It makes no difference either way is what they say. Except the original raw data shows a huge difference to what is used today as the raw data.
I don’t think the climate has changed at all. There is no physical evidence for it as in where the tree lines are, what will grow in your area, when the snow comes or when it melts out. When you turn your air conditioning or furnace on. How many times per year record cold days happen or record warm days.
The only change is how the global temperature line keeps getting adjusted up with a higher trend, despite the fact that it makes no difference.
I mean, how many times are we supposed to believe the basic global temperature trend changed from where it was? Every month, for 25 years now, it has been adjusted higher. Sooner or later, some actual physical impact should show up. If it doesn’t, and your climate hasn’t changed at all, then why are we so concerned about this gas?

Zeke, Mosher, et al & etc. will have us ignore Australia, New Zealand, South America, and all of the other places, including the US, where temps are shown to have been adjusted to enhance the climate consensus.
…who ya going to believe, your lying eyes or our homogenization?
Zeke, as to your claim that a recent study shows your whirliblend of data is valid, that is no more than one theologian endorsing another.
Thanks for playing, but you are not doing science.

“Except the original raw data shows a huge difference to what is used today as the raw data.”
Do you have any evidence of that? I don’t believe it is true.
They do use adjusted data (and say so) when calculating global averages.

Bill Illis hits the nail on the head:
None of the actual metrics of climate are changing in any meaningful way.
Instead we have contrived derivative numbers. And when those derivatives are tested they are proven time and again to be dubious at best.

“ Nick, how far back is the raw data available? Do you have raw, as-recorded data from 1880 on for each US station? I was under the impression that that is only available as microfiche, or paper copies, but that most of those were destroyed. If you only go back 20 years from today, well, that is useless..”

Scott R, any surface temperature data/records prior to, say, 1940 …. should be taken with “a grain of salt”, ……. and the reason(s) can be found in the following, to wit: History of the National Weather Service
Excerpted from above noted NWS History, to wit:

“At 7:35 a.m. on November 1, 1870, the first systematized and synchronous meteorological reports were taken by observer-sergeants at 24 stations in the new agency. These observations, which were transmitted by telegraph to the central office in Washington, D.C., commenced the beginning of the new division of the Signal Service.
The Signal Service’s field stations grew in number from 24 in 1870 to 284 in 1878. Three times a day (usually 7:35 a.m., 4:35 p.m., and 11:35 p.m.), each station telegraphed an observation to Washington, D.C. These observations consisted of:”

“The National Weather Service (NWS) Cooperative Observer Program (Coop) is truly the Nation’s weather and climate observing network of, by and for the people. More than 8,700 volunteers take observations on farms, in urban and suburban areas, National Parks, seashores, and mountaintops. The data are truly representative of where people live, work and play.
The Coop was formally created in 1890 under the Organic Act. Its mission is two-fold:”

It has always tickled me that the climate fraternity has been eager to “homogenise” (i.e., adjust by some never-explained yet nefarious means) the data to fit their narrative, but has seemed unable (or unwilling) to homogenise (i.e., make uniform in instrumentation, structure, and layout) their data collection stations.

Zeke;
Assuming you could place as many stations as you wanted to, what spacing for land stations would you consider adequate to nail down your answers? USCRN has about 114 so far, or very roughly one per 26,000 square miles. I’m guessing that 10x to 20x would be about right but I think your opinion would be a little more informed.
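The back-of-envelope arithmetic behind that guess is straightforward: 114 stations over roughly 3.1 million square miles of CONUS works out to about 27,000 square miles per station, or a grid spacing of roughly 165 miles. A quick sketch (the area figure is approximate):

```python
import math

CONUS_AREA_SQ_MI = 3_120_000  # approximate CONUS land area
stations = 114

area_per_station = CONUS_AREA_SQ_MI / stations   # ~27,000 sq mi each
spacing_mi = math.sqrt(area_per_station)         # ~165 mi equivalent grid

def stations_needed(target_spacing_mi, area=CONUS_AREA_SQ_MI):
    """Stations required for a given equivalent grid spacing."""
    return math.ceil(area / target_spacing_mi ** 2)
```

By this crude measure, halving or quartering the spacing multiplies the station count by 4x or 16x, which is why the 10x to 20x guess is plausible.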

This is the big question, D.J. Hawkins. I have always been troubled by the impression temperature trends anywhere give (whether really measured or through data “adjustment”). Take the UK. The influence of the Atlantic is huge. The weather patterns vary according to what phase the AMO is in, so not only can you have a warm or cold body of water directly affecting the temperature over a significant period, but weather coming more often from a nearby land mass during the cool phase and straight off the ocean during the warm.
I would like to see sites on both sides of major mountain ranges at the same elevation, on opposing coasts of all land masses greater in size than the Isle of Man, and a project to measure the UHI and how it grows in real time on as-yet-undeveloped areas. If AGW is as dangerous as its proponents suggest, they should have no trouble getting the funding to do all the above.
Who knows, it might be another instance of climate scientists being “amazed” at the unexpected results.

No Zeke, your side can’t agree.
“The science is settled”.
Either admit you were wrong or be seen as a hypocrite who now wants it both ways.
The evidence your side used to hijack the world with climate apocalypse claptrap is now admitted by your side to be no good.
How dare you try to pretend it is no big deal?

A lot of people can’t afford heat in Europe right now. People are dying. Some of the price increases are a direct result of changes made in the name of global warming. Those changes were made, in part, because of the increased temperatures the public has been shown due to those homogenization algorithms. Those homogenization algorithms contribute to the global warming message. If you are right, then you’ve killed people today in the name of the future. If you are wrong, you’ve killed people today for no reason. Either way, you should man up and admit that your message is killing people today.

I see one problem if temperature readings come without contamination from localised heat sources… the averages will probably dip.
Can the deniers amongst us promise to take that into account and not upset the acceptors… are there some deniers hereabouts? Yes? No? Just checking I’m in the right room 🕶

DiggerUK, are you really asking if the people that have been saying its wrong are going to say, “See I told you so!” and upset the people that have been screaming at everyone that dares to disagree with them? Is that really what you’re asking?

“There is every chance that properly sourced and uncontaminated temperature readings could lower the averages.”
Averages are the problem. You can’t average readings from different locations and come up with anything meaningful. THERE IS NO GLOBAL TEMPERATURE!

“Because by doing so, it puts an end to the relevance of NASA GISS and HadCRUT, whose operators (Gavin Schmidt and Phil Jones) are some of the most biased, condescending, and outright snotty scientists the world has ever seen. They should not be gatekeepers for the data, and this will end their lock on that distinction. ”
By being this way they have protected their “turf” and kept their income flowing for the last 9 years.

Great! We can deploy this over the next 3-5 years to accurately measure the coming drop in world-wide temps. And as such, there can be no “homogenization”, because the data will be coming from a certified, bona fide and unchallengeable measuring station. A win-win.
Of course, the historical data will all have to be ratcheted down even more to keep the drop-off in temps from getting too far out of hand. We will probably be at HadCRUTv15 before it is all over.

3 – 5 years? Surely you jest. First, there will have to be studies, and then R&D on the sensor suite to deploy. Then there will have to be the actual rollout and testing. Then the new sites will have to be selected all over the world (even in war-torn places) and the land acquired and communication capability emplaced. Then, and only then, will we have a network of sensors we might be able to trust. Who will be collecting the data? Who will be analyzing it? Who will decide that a given station is malfunctioning, and either delete its data or correct it? Note that by the time the full network is up and operational, the first stations installed will be beyond their use-by dates and need replacing.
3-5 years for a Government program???

OK, you do have a point here. I will say, though, Wunderground has done a great job tying in personal weather stations globally, so at least a communications network is up. With the exception of little coverage over 50% of the continents and probably 99% of the oceans, it is ready.
Just need to replace my Bass Pro Shops/Accurite weather station with the new ultra unit when it is developed, and we are in business. Since every official reading is currently being adjusted, I would suggest these personal weather stations are just as accurate anyway, so start using these and replace them with the new units when ready.

“3 – 5 years? Surely you jest. First, there will have to be studies, and then R&D on the sensor suite to deploy. Then there will have to be the actual rollout and testing. Then the new sites will have to be selected all over the world (even in war-torn places) and the land acquired and communication capability emplaced. Then, and only then, will we have a network of sensors we might be able to trust. Who will be collecting the data? Who will be analyzing it? Who will decide that a given station is malfunctioning, and either delete its data or correct it? Note that by the time the full network is up and operational, the first stations installed will be beyond their use-by dates and need replacing.”
And even after all that, you STILL won’t have a global temperature.

Great! We can deploy this over the next 3-5 years to accurately measure the coming drop in world-wide temps. And as such, there can be no “homogenization”, because the data will be coming from a certified, bona fide and unchallengeable measuring station. A win-win.

But developing nations have a vested interest in rising temperatures (to get cash from the UN to fund their mitigation efforts (supposedly)). They’d be tempted to massage the data before passing it along. Only if their weather stations uploaded their data automatically to satellites, without human intervention, would a worldwide system be trustworthy.

Wonderful news.
I have a difficulty with the climate stuff because the researchers are doing experiments on the data in an unblinded manner.
Better data will help, but to make it a science, we need to figure out how to blind the experimenters IMHO.

“throw in the towel”?
Who? How? Of course temperature measurement can be improved, and scientists are in favour of doing that. USCRN was introduced about fifteen years ago. Did anyone try to spin that as “throwing in the towel”? It’s just trying to measure temperature better in the future.
It doesn’t change the issue of trying to determine temperature history. There we still have to deal with what we have always had. An extensive record of measures taken, not by climate scientists, for reasons other than study of climate. A huge amount of information, with imperfections. It’s what we have and we can’t re-do it. We just have to figure it out, as scientists have been doing.

“There we still have to deal with what we have always had. An extensive record of measures taken, not by climate scientists, for reasons other than study of climate.”
Right, of course. Only “climate scientists” can take (and later adjust as required) an accurate measurement.
The rest of us out here, why, we’re just knuckle-dragging nobodies with no skin in the game whatsoever.
E.g., at King Ranch in FL there’s a 5000 acre orange grove subject to a freeze during certain parts of the year…why should they care about accurate temps???

“Only “climate scientists” can take (and later adjust as required) an accurate measurement.”
Who on Earth said that? This article makes a big deal about inaccuracies in the past record. Not me, I think it is mostly pretty good, especially recently. I doubt if “climate scientists” will be reading the thermometers in the new systems either.

We have lots of good data from the past. We can’t start again with that. The situation with global CRN will be like that of USCRN. It takes a long time to build up a new history. And as you build up, you find that the new stations aren’t really telling you anything different to the old.

Well, here’s the thing, Nick: I don’t really give a hoot what you think the future stations will report. The simple fact, and you can’t get around it, is that the old station data has been corrupted, and that’s why they’re calling for a new network going forward, so they can ensure a continuity of quality data that is neither biased nor in need of adjustments. You can whine about it all you want; I simply don’t care.

Fascinating how Nick knows that the new stations are going to report exactly what the already admitted bad stations were reporting.
Sounds like the fix is in.
BTW, we have lots of data, but the evidence that it is good can’t be found.

The trouble is not the existing record so much as the methodological rationale behind adjustments. For instance, cooling historical data requires imputing behaviour to the data collectors of the time. That fails Occam’s Razor for simplifying assumptions. Worse, if you are confident enough to adjust historical records, then you are also assuming that the imputed behaviours were “consistent.” If they were not, then no amount of adjustment makes sense. And if they were, then the best-practice adjustment would be to adjust modern data to match historic data: adjust modern data to meet the assumptions made about the older data. I would like to see the results of that.

“cooling historical data requires imputing behaviour to the data collectors of the time”
No, GHCN adjustment procedures are based solely on the observed series: a sudden change which is not seen in nearby stations. The method does not impute behaviour. And when you look into the underlying reasons, it rarely relates to behaviour either; usually it’s a station move, a change in observation time, or instrumentation. There is no suggestion anyone was measuring wrongly; just that what they measured then isn’t quite the same thing.
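A minimal sketch of that neighbor-difference idea (a toy illustration with made-up numbers, not the actual GHCN pairwise homogenization code): differencing against a nearby station cancels the shared climate signal, so a station-specific step stands out.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # ten years of monthly anomalies

# Hypothetical data: a target station with a -0.5 C step (e.g. a move
# at month 60) and a nearby neighbor that saw no such change.
climate = 0.002 * np.arange(n)            # shared regional signal
target = climate + rng.normal(0, 0.1, n)
target[60:] -= 0.5                        # artificial inhomogeneity
neighbor = climate + rng.normal(0, 0.1, n)

# The difference series cancels the shared climate signal, leaving
# only station-specific effects such as the step.
diff = target - neighbor

# Brute-force changepoint search: the split that maximizes the mean shift.
def find_step(series, min_seg=12):
    best_k, best_shift = None, 0.0
    for k in range(min_seg, len(series) - min_seg):
        shift = abs(series[:k].mean() - series[k:].mean())
        if shift > best_shift:
            best_k, best_shift = k, shift
    return best_k, best_shift

k, shift = find_step(diff)
print(k, round(shift, 2))  # finds a break near month 60, shift close to 0.5 C
```

The real pairwise algorithm compares each station against many neighbors and uses formal significance tests, but the cancellation-of-shared-signal idea is the same.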

Nick… maybe it’s true looking at the past: old (raw) vs. new (adjusted) data made no difference in your analysis. But that is no guarantee for the future. Hence, stop taking crap data and messing with it to fit your perceived idea of truth, and allow good data to be measured in the first place… it cuts out the need for all the smartypants adjustments. Oh, that might require a few less grants. Oops.

Nick, you are such an idiot. I just wanted to read and be amused, but then you came out with another gem: “just that what they measured then isn’t quite the same thing.” Are you so thick that you don’t realise how stupid you sound?

jim, “Are you so thick that you don’t realise how stupid you sound?”
No. It seems you are too thick to figure it out. If a station is in one place, and moves to another, the first set of measurements, however accurate, is not of the same thing. They are temperatures at a different location, and an adjustment is needed to bring the two series into line.
You offer no reasoning, only abuse.

“Climate science’s” original sin is the claim that derivation of ‘global temperature” is settled science.
Problem #1: Physics accurately calculates the speed of light to 6 significant digits and the magnetic moment of an electron to 12 significant digits; however, current climate science barely measures environmental temperature to 3 significant digits (kelvins).
Problem #2: Calculating “global temperature” (or other variants) is, at best, a poorly documented process of “in-filling” and “data homogenization”.
Problem #3: Even stipulating temperature data collection is 99% accurate, there is no physics/mathematical proof that the atmosphere is a system that can actually be modeled. Needless to say, there is no fundamental formula of climate science that can provide meaningful temperature predictions even if given accurate data.
From a scientific standpoint, not being able to accurately measure or predict fundamental physical phenomena is a silly place to be, especially after claiming you have already done it for 100 years into the future. From a political standpoint, it’s perfectly understandable.

“A huge amount of information, with imperfections. It’s what we have and we can’t re-do it. We just have to figure it out, as scientists have been doing.”
No Nick, they are not “figuring it out”. What they are doing is attempting to bootstrap information out of thin air. If there were some underlying solid theory for what the data ought to look like, then the information input from that theory could allow these sorts of statistical approaches to get somewhere. For example, no one objected to software fixes for Hubble Space Telescope imaging before the hardware fix was made, and the reason is that you can calibrate the image across the field of view using a known spherical target. What makes this possible is that the theory of gravity determines that celestial objects over a certain mass limit are approximately spherical; that theory drives the data modifications. There is no theory as to what the surface temperature data ought to look like, so any attempt to modify it is of necessity driven by the modifier’s opinions.

Hi Nick,
I’ve been bugging you lately in every 2nd thread with a couple of Qs. Any chance you respond to a contact request here: matz.hedman@telia.com ?
Maybe boring with constant newcomers and back to square zero but anyway…. I would certainly appreciate it.
Cheers

Nick Stokes
“A huge amount of information, with imperfections. It’s what we have and we can’t re-do it. We just have to figure it out, as scientists have been doing.”
First sentence – indisputable.
Second sentence – indisputable.
Third sentence, first phrase – indisputable
Third sentence, second phrase – not only disputable but, according to many more trustworthy sources, untrue. ‘Climate scientists’ appear to spend more time and effort trying to show that any information that does not tally with their supposition that climate is warming in line with increasing anthropogenic CO2 must be imperfect, and therefore adjusted or eliminated altogether.

Solomon: “Third sentence, first phrase – indisputable” I dispute it. Why “must” we figure out average global temp? Can’t we just live with the data we have, put apt error bars on it, and stop this adjustocene? Of course, that would put US (and others’) average temps from the thirties back up to higher-than-now, and Tony Heller would need to supersize Hansen’s charts to show an error bar above Hansen’s ’30s highs. Seriously, though, does it seem that we have a MacGuffin here? These folks want to gin up hair-on-fire panic over 0.01 C jumps on charts with no error bars. I’d say these stations that were installed and are fit for purpose (shuttle or airport landings) should be used for that purpose; when you use them outside that purpose, fine for academic curiosity, but no policy should be set based on data that even N. Stokes admits is so poor it’s “improved” by post-hoc adjustments. So: no, we don’t “have” to figure it out, not at all. The whole exercise has actually damaged science, IMO.

Sunsettommy March 2, 2018 at 1:43 pm
Nick, do you keep forgetting that a siting standard for temperature measuring devices exists?
It never says place them at the end of runways, next to buildings, and so on.
Remember that the primary purpose of such temperature measuring devices is to provide data for pilots, and they are sited according to FAA regulations; any use as sources of climate data is secondary. https://www.faa.gov/documentLibrary/media/Order/JO_6560_20C.pdf

Probably because that exact station is specific in real time to that specific runway: any incoming Space Shuttle would have operated in that exact weather condition in real time. Perhaps it was there to calculate the glide ratio as the article stated, and to gather local head-, tail- or crosswind info and all other pertinent weather data. It makes sense to have a dedicated weather station located next to a runway used for mission-critical work, since this is the local weather any incoming Space Shuttle, or any large aircraft such as the specialized 747 carrying the Shuttle piggyback for re-launch, would be dealing with.

Glide ratio doesn’t change with temperature or density. The airspeed for best glide ratio does change. You need the density altitude to get the airspeed to aim for. Weight comes into the calculation too, but only aerodynamics affect the glide ratio.
Yes, it’s pedantic, but most folks think it glides more steeply if heavy or in thin air. No, you just need the right speed.
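The distinction can be sketched numerically. This is an illustrative toy with made-up, roughly Cessna-172-like numbers (none of these figures come from the thread): the glide ratio stays fixed, best-glide indicated airspeed scales with weight, and the corresponding true airspeed grows with density altitude.

```python
import math

# Illustrative numbers only (roughly Cessna-172-like; assumptions, not data).
BEST_GLIDE_KIAS_REF = 68.0   # best-glide indicated airspeed at max gross, knots
MAX_GROSS_LBS = 2450.0
GLIDE_RATIO = 9.0            # fixed by aerodynamics (L/D), not by air density

RHO0 = 1.225                 # sea-level standard air density, kg/m^3

def air_density(density_alt_ft):
    """Approximate ISA density from density altitude (valid below ~36,000 ft)."""
    h_m = density_alt_ft * 0.3048
    return RHO0 * (1 - 2.25577e-5 * h_m) ** 4.256

def best_glide_speeds(density_alt_ft, weight_lbs):
    # IAS for best glide scales with sqrt(weight); it does not change with altitude.
    ias = BEST_GLIDE_KIAS_REF * math.sqrt(weight_lbs / MAX_GROSS_LBS)
    # TAS grows with altitude: the same dynamic pressure in thinner air means
    # moving faster through the air mass -- but the glide *angle* is unchanged.
    tas = ias * math.sqrt(RHO0 / air_density(density_alt_ft))
    return ias, tas

ias, tas = best_glide_speeds(8000, 2450)
print(round(ias, 1), round(tas, 1), GLIDE_RATIO)
```

At 8,000 ft density altitude the aircraft covers ground faster (higher TAS) while descending at the same angle, which is exactly the "you just need the right speed" point.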

This has been common knowledge for anyone with half a brain. Temperature data since the advent of computers and satellites, just in the last 20 years, is entirely different from the previous 100 years. To compare data from 40 years ago with today’s is insulting and a bald-faced lie. Laughable and unscientific on its surface. Disgusting.

Welcome paper, but probably without any practical impact. The money isn’t there to implement it globally, and in many places I doubt the infrastructure exists to calibrate and maintain CRN-level quality. Finally, even if those hurdles were somehow overcome, we would need over three decades of results to have any impact on the CAGW debate. That debate will likely be over long before then, given the many failed climate predictions and failing ‘green’ solutions like diesel autos, solar, and wind.

True. We will likely always have South Australia as a CAGW crash test dummy, and Michael Mann, Gavin Schmidt and Naomi Oreskes as warmunists. But the zealots will be fewer, and their fatal flaws more clearly exposed, where it matters—to voters, legislators, and regulators. The Maldives hypocrisy is already exposed, as just one example.

The best approach for all the reasons you mention is to continue to enhance satellite measurements. We lost the satellite surface measurements a while back. That needs to be fixed. We will never be able to get the coverage or accuracy that can be derived from satellites with a bunch of surface stations.

And bring back the water vapour measurement program that Hansen cancelled in 2009. The official reason was they couldn’t determine whether there was more or less over a 20-year period. I believe it was because they couldn’t demonstrate an increase, and therefore it was useless to the AGW theory.
On that point, I have been thinking about latent heat. When water goes from solid to liquid and then to gaseous molecules, latent heat is absorbed at each of the 2 steps. When the direction is reversed, the latent heat is released at both of the 2 steps. If that is so, and some of this IR goes downward, then wouldn’t there be runaway global warming just from the hydrologic cycle alone? The only thing I can think of that would negate this is: during the evaporation process there are 2 types of cooling going on, (a) cooling of the ocean surface and (b) cooling of the troposphere. Actually the plants also cool in the same process, by a different name, transpiration. Because evaporation and condensation are equal (water is neither created nor destroyed on earth), the heat that was taken out of the atmosphere during evaporation (I am not counting the heat lost by the plants and oceans, because that is the latent heat transferred to the water molecules) equals the heat transferred to the atmosphere during condensation. So this whole process only shows the balance, without considering the role of CO2 or the role of the latent heat. If we consider the latent heat that was added from the oceans and plants during evaporation, that heat has to exit to space during condensation for the heat equation to balance.
The energy balance graph of the National Weather Service of NOAA does not mention the role of evaporation adding latent heat to water vapour. It has a small yellow arrow called “sensible heat” and calls it rising air currents. Why that is different from the large yellow arrow rising from the ground I can’t fathom. The graph gives the heat flux as a 100% total for each pathway. The graph balances, but surely water vapour from evaporation has to play a role. In fact, the accounting totals do mention evaporation, so the latent heat label on the graph is wrong: it is not water vapour changing to liquid and ice; it should be labeled ice melting and water changing to water vapour. As explained above, there are 2 sides to the latent heat energy balance, so there should be another red arrow representing water vapour changing to liquid or ice. Also, NOAA has the surface emitting 116% of the heat flux, which is impossible if you took a one-moment-in-time heat graph picture; the whole earth doesn’t cycle from a glacier to a hotbed except on million-year cycles. These omissions make me question NOAA’s understanding of the energy balance cycle.
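As a sanity check on the point that the two legs of the latent-heat cycle balance, here is a back-of-envelope computation. The numbers are assumed round figures (global-mean evaporation of roughly 1 m of water per year), not values from any NOAA diagram.

```python
# Back-of-envelope check of the latent-heat pathway described above.
# Assumed round numbers, not figures from the NOAA diagram.
RHO_WATER = 1000.0      # kg/m^3
L_VAP = 2.45e6          # J/kg, latent heat of vaporization near 20 C
SECONDS_PER_YEAR = 3.156e7

evap_m_per_year = 1.0   # rough global-mean evaporation, meters of water per year

# Energy the surface loses to evaporation, per square meter per second:
mass_flux = RHO_WATER * evap_m_per_year / SECONDS_PER_YEAR   # kg m^-2 s^-1
latent_flux = mass_flux * L_VAP                              # W m^-2
print(round(latent_flux, 1))  # roughly 78 W/m^2

# Because evaporation and condensation move equal masses of water, the same
# ~78 W/m^2 is released aloft when the vapor condenses -- the two legs
# balance, so there is no net runaway heat source in the cycle itself.
```

That ~78 W/m² is in the ballpark of the latent-heat arrow in published surface energy budget diagrams, which is some reassurance the accounting is self-consistent.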

So predictable. I have encountered this in both public and private life. All you ever get are denials and aggression from those who have not been doing a proper job => kill the messenger. Quietly, however, they go back to work and start checking for the very problems that the messenger warns about. Finally, after a comprehensive study and big fanfare, those in charge announce that due to their genius they have discovered exactly what the messenger warned about, and have brilliantly designed remedial actions which happen to be exactly what the messenger suggested (of course, absolutely NO mention is ever made of the messenger in any reports, press, or anything at all). In fact, almost universally the messenger or whistleblower is discredited by those in charge. In the corporate or public world, the messenger is lucky to keep their job and will never be given a high-level managerial position, because they are identified as not being a “team player” (lying to protect the “team” is more highly valued than integrity in 99.9% of human organizations).
Armed with this knowledge and after some hard lessons, I have learned to keep my mouth shut. I even saw ongoing blatant corruption by senior managers but knew that reporting it was an immediate CLM. I think this behaviour has become completely entrenched in all Western corporations – hence Dieselgate and iphonebatterygate….

Two things. First, any piece of equipment can fail. If it’s precision monitoring equipment, you’d like it to fail completely at one time, so there is no doubt about the quality of the data just prior to failure (this is of course a very undesirable failure mode for safety equipment such as life preservers). If you’re not lucky and your monitoring equipment drifts for some unknown time at some unknown rate, you don’t know how much data is suspect and by what amount.
Second, it’s still not clear to me how sparsely located monitoring equipment can validly represent large surrounding areas, especially encompassing different microclimates. A good example is rainfall gauges in the Hawaiian islands, where 20 miles on the road can move you between coastal desert and tropical rain forest. Averaging the two is meaningless.
Actually three things. Even if you get a pristine new monitoring network sited and maintained to ideal standards, you either have to wait 50+ years to have an idea of what’s going on with the climate or you have to devise a way to bridge old and new data to get a longer timeline. I don’t think there is any way to use the new data to make the old data any more reliable. What is more likely to happen is the problems with the old data will infect the new data in the interests of homogenization.

Jeremy
As an old, retired CFO, I would strongly encourage you to go try to qualify for one of those executive jobs running a multi-billion dollar global corporation. Everybody KNOWS it’s easy…heck, some people even believe financial statements should be accurate to 2 decimal places.
However, I do agree anybody can be a politician.

Sorry, but as a top-tier business-schooled, CFA’ed, buy-side analyst/investor weaned and educated by Buffett and other true long-term investors, management stock options are the very definition of dishonesty and thimblerigging. With the amount of money these folks are being paid, there is absolutely no reason these people can’t acquire their stock the same way I do— by writing a check (and thereby sharing in BOTH the upside AND the downside).
Otherwise, management stock options are nothing more than a “heads I win, tails I don’t lose” asymmetrical shell game.

When the company does well, stock options get more valuable.
When the company does poorly stock options get less valuable, some become worthless.
Stock options are a viable means for tying the interests of an executive to the well being of the company.
Stock options that can’t be exercised until some time in the future help to make sure the executive is taking the long term interests of the company into account.
Just because you don’t understand or benefit from something is not evidence that something fraudulent is going on.

>>>>>> Mark W <<<<<<<<
I understand management stock options all too well (and so does Warren Buffett— which is why there are no management stock options at Berkshire Hathaway).
Managers who have real "skin in the game" behave very differently than those who don't.
I repeat: There is absolutely no reason managers can't acquire their stock the same way I do— by writing a check (and thereby sharing in BOTH the upside AND the downside). Without that “skin in the game”, management stock options are an asymmetrical “heads I win, tails I don’t lose” shell game.

Those with stock options share in both the upside and the downside, as the value of those options goes up and down along with the value of the stock.
If you can’t understand that, then your claim to understanding falls flat.

>>>>>>>>>>>Mark W<<<<<<<<<<<<
What's with all the sophistry and casuistry ?
How about the application of a little Occam's Razor and Keep It Simple Stupid (the well-known "KISS" rule) ?
Managers want upside? There's a very easy way to create a precise "identity of interest" between shareholders and managers. Write a check.
“Writing a check is what separates a conversation from a commitment.”
-Warren Edward Buffett
Otherwise, management stock options are an asymmetrical shell game of “heads, I win; tails, I don’t lose.” Tell us all about “reloading.” Tell us all about annual grants. Tell us all about the farce of valuing them. Anybody who closely examines the mathematics of Black-Scholes quickly realizes that it’s a joke.
There are very good reasons Berkshire Hathaway doesn’t employ them.

Perhaps they might start by including Ithaca (Cornell University) in NY State. One of the best, high quality, well maintained, and long running sites in NY.
It has had morning readings since at least 1947, and NOAA admit they have no info prior to that.
Yet GHCN have managed to totally corrupt the temperature record: https://notalotofpeopleknowthat.wordpress.com/2018/01/26/tobs-at-ithaca/

The NOAA data sheet for Cornell is here. From it, the history of adjustments is thus: https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/03/cornell.png
The metadata for the site is here. And from it we find that there was indeed a station move on 1969-07-09, which moved it 0.7 miles East. And another move, it seems on 1948-05-01. It doesn’t say how far, but previously it was 1 mile from the PO, and it ended up 2.6 miles away, so looks like about a mile. That seems to account for the two main changes.

Are these adjustments made to the raw measurements and older data?
If so, this would indicate that ALL data from 1930 to 1942 has been adjusted downward (cooled) by 1 C from the original measurements, and that all measurements since 1970 have had 0.6 C added to them (warmed).
This would certainly look like an artificial warming trend of 1.5 C added into the anomaly data since 1930.
Or am I just interpreting this information incorrectly?
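One way to check the arithmetic: apply those two hypothetical steps to a completely flat series and see what trend a straight-line fit then reports. This is a toy illustration of step adjustments, not actual USHCN data.

```python
import numpy as np

years = np.arange(1930, 2018)
raw = np.zeros_like(years, dtype=float)   # flat "raw" record, for illustration

# Hypothetical adjustments like those described above: cool the record
# through 1942 by 1.0 C and warm everything from 1970 onward by 0.6 C.
adjusted = raw.copy()
adjusted[years <= 1942] -= 1.0
adjusted[years >= 1970] += 0.6

raw_trend = np.polyfit(years, raw, 1)[0] * 100        # C per century
adj_trend = np.polyfit(years, adjusted, 1)[0] * 100   # C per century
print(round(raw_trend, 2), round(adj_trend, 2))
# The flat series has zero trend; the steps alone make the linear fit
# report roughly 1.9 C per century of warming.
```

So yes: step adjustments of that size and sign, applied to an otherwise trendless record, do manufacture an apparent warming trend in a linear fit.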

“This would certainly look like an artificial warming trend of 1.5C was added into the anomaly data since 1930”
It would reflect adjustment to the absolute values to that effect, yes. It sounds like the site moved twice, each a bit further out of town.

Nick, the thing is, when a station moves, it is not just that it moves: it probably gets a clean, whitewashed shelter after the move. Thus, the adjustment should not necessarily be only a step change, but a glide.
So, how do you find a glide bias?

The remarkable thing about data “homogenization” as practiced by “climate science” is that its methods demand not only geophysical ignorance but also circular reasoning. The Ithaca example is emblematic of both vices: very modest changes of siting in a college town of 30K residents are used to justify major data adjustments throughout the entire record–bringing its trend into conformance with that at Binghamton, a city of 301K residents, 47km away.
Not surprisingly, the negative secular trends evident at Ithaca and other small towns, such as Elmira, throughout the region thus wind up being converted into positive trends. Such balderdash is commonplace in the absence of any realistic recognition of the corruptive effects of UHI and of station-maintenance issues, aided by the blind hubris that unknown non-climatic effects can be reliably removed via statistical massage.

The West Texas Mesonet has been around almost as long. The Kansas Mesonet is newer, but still has some good stuff. I check both from time to time depending on which way the wind is blowing.
Looking forward to your reports from the conference.

Looks like a response to political climate change by the laws of the conservation of career. I know right where the money for this project should come from – out of the budgets of the UN and the agencies that fund all of the inane climate change (cha-ching!) and gullible warming studies.

Latitude is right: what’s the point if the data handlers of the new network just start post facto changing the info again? We would need honest people collecting and protecting this data. The current criminals involved in it need to go away. The obfuscators (we know who they are) running interference on blogs like this need to retire, too.
Andrew

Bottom line.
We don’t really know the past temperatures. Present temperatures are uncertain.
The foundation of CAGW and all its supporting models along with policies based on it are built on sand.
Time to move on.
CO2 is not going to “kill us all”.
PS Would it be asking too much for Al Gore to refund what he profited from the foundation-less scare?
Maybe give toward paying down the principal on the national debt?

Finally!! I am excited. Good data is a must. Let’s not let our disdain for global warming theory ruin this great news—good data is what has been asked for all along. Not adjusted, not interpolated.
It’s interesting that one needs extreme accuracy for shuttle landings, but “adjusted” and “interpolated” are good enough to remake the entire energy and commerce sectors of the world.

You make a good point. I can’t release an aircraft after a major repair without approved data that has gone through stress analysis etc., but we are expected to accept that the world as we know it is going to end if we don’t curtail CO2, based on data that has to be “made accurate” due to bad collection. Zeke and his colleagues need to keep doing what they are doing to make the best (no pun intended) of what data we have, but I prefer that we continue to use the energy sources that have served us well until technology makes them obsolete, instead of a government-mandated reduction in the quality of my life.

Good catch by DHR above!, and you beat me to it. The photos of USCRN sites they use to sell the USCRN concept show big valleys with no people and one lonely station in the middle. This station? Next to a runway? That’s what we have now, sites next to runways, heating plants, etc., as Anthony has documented. I’d like to hear Anthony’s take on what he actually saw around this station, maybe I’m missing something. But if this is a typical USCRN station, I’m very disappointed.
That said, I agree wholeheartedly with Anthony and this approach. We need a network of CRN-quality stations, in CRN Level 1 quality sites, worldwide. The USA should build and donate the stations, subject to siting approval. The budget for that is a drop in the bucket of the $20B+ the US is already spending each year on climate, per the required annual Congressional report. I agree with the criticism that it will take a while, but it will start giving accurate trend data right away.
An important early target would be to fill in the map of the large areas with no data, places like the Arctic and Antarctic, large swaths of Africa and South America, etc. Those gray areas in the maps on Goddard’s site. The statistical analyses seem to say that a small number of good sites can capture the trend over a large area, which is what we care about most. Not a big cost for a big benefit. How fun it would be to leave behind forever stale debates about kriging vs linear interpolation, etc.
But I sure don’t want these Global CRN (“GCRN”) stations sending their data to GISS or Hadley. They have lost everybody’s trust, deservedly so. I like Steve McI’s suggestion from a few years back that we already have government organizations that track large databases with apparent integrity and strong statistical tools and understanding, things like GDP, unemployment, census. I’d like to see the data, from USCRN too, go to some group like that whose budget doesn’t depend on US continuing to spend that $20B+ each year on climate.

My humble (and it is humble) opinion is there is not enough money & other necessary resources to populate the earth & oceans with consistently accurate temperature sensors.
Whatever measurement does get done needs to use a different technology (satellites are a good start).

German meteorologist Klaus Hager has, for 3,144 days (8+ years), compared an old Stevenson-screen liquid-in-glass thermometer and a new Pt100 resistance thermometer in the same spot.
Result: the Pt100 runs 0.93 C warmer than the mercury thermometer.
He says that this is only applicable to this very spot; no general conclusions can be drawn. http://www.hager-meteo.de/aktuelle%20berichte.htm
But: How many sites have 8 years overlap at an instrumentation change?
Very few, I dare say.

His point is that every sensor is different. And that includes both mercury and Pt100 thermometers.
That these two units are 0.93C apart is not evidence that any random two mercury and Pt100 thermometers are also going to be 0.93C apart.
The proper procedure when replacing one sensor with another is to place them side by side for a year or two, in order to plot how they differ under various weather conditions. Make a note of those differences, and only then decommission the old unit.

Ummm… A Pt100 PRT is reasonably accurate, but not accurate enough to claim 0.93 °C without specifying the uncertainty.
Just for the probe, the best-case uncertainty is more than 0.05 °C over typical temperature ranges if a 1/10 DIN probe is used, and more than 0.5 °C if a Class B Pt100 is used. (Did he specify?)
But then you have uncertainties due to wiring, the meter, and more.
AND… given that he’s not measuring known temperatures (i.e., against a calibration system), averaging doesn’t improve things at all.
So, overall, his calibrated Pt100 meter most likely provides measurements of about 0.93 °C ± 0.1 °C or so… assuming a 1/10 DIN probe. If only Class B, 0.93 °C ± 0.6 °C or so.
Without details like this, it’s pretty much mush.
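To put numbers on that, here is a minimal root-sum-square uncertainty budget. The wiring and meter figures below are pure assumptions on my part, chosen only to illustrate how independent components combine; they are not Hager's actual specifications.

```python
import math

def combined_uncertainty(components):
    """Root-sum-square combination of independent uncertainty components (deg C)."""
    return math.sqrt(sum(u ** 2 for u in components))

# Hypothetical budget for a 1/10 DIN Pt100 installation (values illustrative only):
probe = 0.05    # 1/10 DIN probe tolerance near typical ambient temperatures
wiring = 0.05   # assumed lead-resistance / connection contribution
meter = 0.07    # assumed readout-electronics contribution

u = combined_uncertainty([probe, wiring, meter])
print(f"offset = 0.93 +/- {u:.2f} C")  # -> offset = 0.93 +/- 0.10 C
```

Swap in a Class B probe tolerance (around ±0.5 °C at moderate temperatures) and the combined figure balloons toward the ±0.6 °C mentioned above.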

Will the new climate reference network use satellites?
And actually measure temperature in places other than airports?
With no need to interpolate between sites?
Free of UHIE bias?
Giggle. Sorry, that was just me musing about something so insanely unlikely as to be an impossible thought. Wash my brain out with Clorox.

For any climate data collection site it needs to be made clear: airports cannot be used. Not should not, not with caution, not with adjustments. Just plain cannot. Anyone who cannot see the reasons for this is unqualified to site climate data instrumentation.

For past climate data it needs to be made clear: airports cannot be used. Not should not, not with caution, not with adjustments. Just plain cannot. Anyone who cannot see the reasons for this is unqualified to use climate data.

I think airports were used because the system was already in place, not for any “global” data but to provide data for the planes landing at or taking off from that little spot on the globe.
The numbers were useful locally. They’ve been used globally.

That’s a point that needs to be driven home. The current sensor network was never designed to measure climate. The equipment and quality controls for such a system were never designed in.
One thing I’ve always been taught in science is that you must use your instruments as they were designed to be used.
Using them outside their design constraints will always contaminate your data, usually in ways that can’t be predicted or compensated for.

Isn’t the whole purpose of using anomalies, instead of actual temps, to keep site moves, equipment changes, etc., from affecting long term readings? Homogenization shouldn’t be coming into play.
IIRC, the computerized homogenization adjustments made by GISS are done daily, not just when major changes to monitoring sites occur.
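For readers unfamiliar with the anomaly method referred to above, here is a minimal sketch; the station baseline and readings are invented purely for illustration.

```python
# Each reading is expressed relative to the station's own baseline mean for the
# same calendar month, so a constant site offset (e.g. from a station move)
# cancels out of the long-term trend.
baseline = {1: -2.0, 7: 21.5}                  # base-period mean for Jan, Jul (C)
readings = [(1, -1.2), (7, 22.3), (1, -2.5)]   # (month, observed temperature C)

anomalies = [(m, round(t - baseline[m], 2)) for m, t in readings]
print(anomalies)  # -> [(1, 0.8), (7, 0.8), (1, -0.5)]
```

Note that subtraction removes a fixed offset but not a gradual drift such as creeping urbanization, which is exactly the opening through which homogenization gets invoked.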

We have some well sited historical sites in rural areas that have a long history. They generally show little warming over the last century. link
I worry that this new standard will be an excuse to decommission those sites. That would deprive us of a long unbroken record, resistant to adjustments.

Your link is to US stations. USCRN has been around for fifteen years, and hasn’t been used as an excuse for decommissioning sites. In fact, most non-CRN sites are there for reasons unrelated to climate research, and those reasons won’t go away.

There is indeed a worrying trend in GHCN to maintain far fewer CURRENT stations in it than previously, only 62 in Australia, only around 10 in the UK. That is enough if those stations don’t break or change, but nowhere near enough to do a proper job of … homogenisation, for recent times.
Also, much so-called unadjusted data in GHCN is nothing of the sort, it contains obvious errors (Irish stations in 2015 for example), and differs from the source data in some cases, often with inconsistency between monthly versions 3 and 4.
Somebody needs to do a top-down design of the system for storing RAW data.

” GHCN to maintain far fewer CURRENT stations in it than previously”
GHCN didn’t maintain more stations previously. Data before 1997 is from archives; a whole dataset can be read in at once. Data since then is maintained, i.e., updated monthly, and hopefully within days. That is a much more demanding requirement.
“Somebody needs to do a top-down design of the system for storing RAW data.”
Met offices do that. The international system is that of CLIMAT forms, submitted monthly by those met offices.

OK, GHCN is full of archived data, though some of it can only be described as padding (very short records around 1960/70), excellent for homogenisation in the 20th century, but for reasons unknown the number of stations currently reporting is way too small from some countries, IMHO.

I remember reading your 2009 article when you first published it, and another talking about how you put a thermometer on your car and drove it from the center of town to the edge of town – and back again – to verify the UHI effect.
Of course it makes sense for either side of the Global Warming argument to want accurate measurements – measurements that don’t have to be adjusted year after year for some unexplained reason. This looks like a step in that direction.
Way to go, Anthony!

“These uncertainties do not call into question the trend or overall magnitude of the changes in the global climate system.”
To me, that’s like saying, “My data contains uncertainties and errors, but the conclusions I draw from that data are correct and should not be questioned.” Doesn’t the magnitude of those uncertainties play a role? Do we even know what that magnitude is? If the uncertainties are few, the general trend may be determined, but the magnitude of the changes in the global climate system has to be questioned. The greatest uncertainty concerning a global average temperature has to do with the lack of data from large areas of the world. Local climates, just miles apart, can vary a great deal. So it’s not just the uncertainties in the temperature data we have that are the problem; it’s the lack of temperature data from vast areas of the world that brings the greatest uncertainty to the magnitude of global climate changes over time. Estimating temperatures over those vast areas just doesn’t cut it.

Those de rigueur genuflections to the climatocracy sound exactly like the sort of qualifications used by Enlightenment scholars to protect themselves when science started blowing holes in the orthodoxy of the day.

Congratulations, Anthony! Thanks for all your hard work. When doing nothing favours warming temperature records from bitumen, concrete, jet exhaust, cars, etc., they are happy to do nothing, as it suits the Warmista narrative. When, on the rare occasion, they need accurate results for rocketry, they leap into action.

When I first saw your work detailing the “How Not to Measure Temperature” failures of the system, and the resulting lousy quality of the data, it turned me from a believer into a skeptic. I’ve been reading your blog ever since.

“Now, some of the very same people who have scathingly criticized my efforts and the efforts of others to bring these weaknesses to the attention of the scientific community have essentially done an about-face…”
I have to wonder about the motivation. Could it be that they are now pointing out the weaknesses and uncertainties in the data as a hedge in case global temperatures do not rise as fast as they projected? If we enter another “hiatus” period, or if temperatures actually drop, they will need an excuse to explain it. I can see them saying something like, “The lack of warming on a local basis does not mean that global climate change is not happening. The planet is still warming. It’s just that the data we have is not sufficient to accurately show what is happening on a global basis.”
Unlike other scares in the past, they will never give up on climate change. The hysteria about climate change may enter hiatus from time to time, but as soon as there is a heat wave, a major flood or drought, a deadly hurricane, or other extreme weather, it will make a comeback. All we can hope for is that the majority of people will eventually get sick of all the hype and will choose to ignore it.

The “global warming” media freak show was never about climate change. The UN/IPCC was always about the globalization agenda under the environmental climate change smoke screen. This fact has been admitted openly by various apparatchiks over the years. Then the money-making aspects were exploited by ubiquitous money-grubbers. I’m sure there are a few street vendors trying to sell gas masks with CO2 scrubbers at crazy prices to the gullible.

“I’m sure there are a few street vendors trying to sell gas masks with CO2 scrubbers at crazy prices to the gullible.”
An interesting concept…would the gas masks scrub the inhaled CO2 or the exhaled CO2?

Interesting that the almost 8 billion people on the planet collectively in a year are responsible for 1 ppm of CO2 being put into the atmosphere. Of course, that CO2 was withdrawn from the atmosphere by the plants, so we humans get a pass on that one from the AGW people. Good thing, too, or else there would be a call for a culling of the human race by the alarmists. I am sorry to say that after reviewing more and more of these climate study “scientific” reports, I am increasingly getting pissed off at this whole AGW fiasco. The reports are not worthy of high school science projects, never mind being written by PhDs. I keep referring to one report that divided up the world’s land areas into two types, “dry lands” and “wet lands”, and tried to convince the reader that different physics mechanisms were acting on the two types of land. Simple garbage. Yes, folks, a PhD was responsible for that.
On a different note we need more CO2 in the atmosphere not less. If it turns out that we are indeed warming the planet slightly less than 1C every century I would gladly take the increase in temperature being as I live in a cold climate. The plants will love it too. Paying the prices of carbon taxes and cap and trade limits and doubling and tripling of my electricity prices for a failed vision of unreliable green energy is making my blood boil. THIS IS WAR.

The expensive climate reference network sensors provide actual data that represent local measurements over long periods of time because the location of sensors follows a scientifically justified methodology.
Why build such a network unless the previous system was inadequate, or known to be faulty?
There are a very few older surface weather stations that do have a history of good siting and maintenance. All of those stations must be located in rural areas well away from any contaminating influences.
Those stations that follow the rules show no significant warming. There is no point in mixing bad stations in with the good. Bad data must always be rejected, because it isn’t data, it’s garbage.
Here is a starting point for those interested in siting criteria for the USCRN stations: ftp://ftp.ncdc.noaa.gov/pub/data/uscrn/site_info/CRNFY02SiteSelectionTask.pdf
Using google maps, it appears that the temperature sensors for the Titusville 7E station are within 60 meters of the concrete parking pad with the shuttle mockup, and 90 meters from the edge of the runway. That may be a violation of Class 1 siting criteria for USCRN stations.
Class 1: Flat on horizontal ground surrounded by a clear surface with a slope below 1/3 (<19 degrees). Grass/low vegetation ground cover <10 cm high. Sensors located at least 100 meters (m) from artificial heating or reflecting surfaces, such as buildings, concrete surfaces, and parking lots.
Class 2: Same as Class 1 with the following differences: surrounding vegetation <25 cm; artificial heating sources no closer than 30 m; no shading when the sun elevation is >5 degrees.

Good article. The obvious truth is that ‘scientists’ have a plethora of temperature measurements to play with, and play they do. Functionally, none of these measurements are scientifically valid, since heat sinks like airport runways, city centres, farms, factories, forests, freeways, insolation, etc. confound the raw ‘data’ beyond repair. No matter. There’s money to be made by scaring the hoi polloi with doomsday predictions. And that’s what’s happening today.

I have read that fan-aspirated shields draw air across a temperature sensor to improve the accuracy of the air temperature measurements. Many years ago (~50), when doing heat balances on power plants, the heat of pumping had to be factored into the analysis. I realize this is a small amount of added heat; however, it seems to me that it is relatively the same percentage as what I was dealing with (I think). Is this of any concern?

While I completely applaud Anthony’s efforts to make this happen, I think the other half of the issue is to stop this from being taxpayer funded and make it something that is crowd-sourced so that the funding is voluntary, not forced, and to bring private sector efficiency and transparency. I can’t see any practical use for the information to anyone but possibly the agriculture industry who can pay for it if they want it, and government use of the data can only lead to some sort of predatory policy. And if we had a true climate threat such as global cooling from a volcano, then we would know that without any new instrumentation. If people are that interested in the data then that is what experiment.com is for. Please correct me if I’m missing something here.

These uncertainties do not call into question the trend or overall magnitude of the changes in the global climate system. Rather, they act to make the picture less clear than it could be, particularly at the local scale where many decisions regarding adaptation choices will be required, both now and in the future.

Does that statement strike anybody else as ambivalent ?
Isn’t a distorted picture a picture of questionable quality? How is the picture “less clear”, unless the means of measuring the quality of the picture, namely the trend and the magnitude of changes, is less clear? That whole paragraph, then, is doublespeak: saying something and then denying what is said in the same sentence, by restating the first assertion in different words that logically neutralize it.
The fact that you mispronounce my last name does not call into question the correctness of your pronunciation. Rather, your mispronunciation acts to make my name less clear than it could be. Bullsh*t.

Weather stations at airports are used not because they give good data for the area, but because of that classic climate ‘science’ idea: better than nothing.
This is really 101 stuff: if you cannot accurately measure, you can only guess, and throwing models at it does not change that.

The biggest fault with surface station measurements besides all the ones related to accuracy is that the measurement only applies to that tiny portion of the planet immediately adjacent to the station. Any averaging of readings is only an average of a multitude of tiny localities. This has virtually no relation to an average global temperature. Satellite measurements are slightly better but even they cannot take an instantaneous measurement of temperatures over the entire globe at every moment at any resolution and cannot arrive at a true mean.
It is what we have, however, and they seem to indicate an insignificant warming from a rather cool period in recent history. It probably means absolutely nothing in any terms relative to the human or natural environment.
What is important is the obvious damage being done by government policy based on the absurd predictions of climate models. But don’t get me started on that.

I was proud to be a participant in the audit process – I surveyed and logged most of the climate stations in Illinois. I also gained a healthy respect for the Verizon network. I used my phone as a hotspot for my laptop during the search process and not once, in all the backwoods and small towns in Illinois, did I not have signal.

If there were an article posted here about how the sun will rise tomorrow in the East, Nick would call out the author for a misleading title and point out that the sun doesn’t physically rise above the horizon. He spends the vast majority of his time here as a troll. It is particularly funny when he starts trying to address US political history. You, Dick, and Harry can go join him under the bridge.

Tom, if Nick can prove CAGW to me, I’ll swap sides. I have no brainwashed mentality on either side, but I can count to 14, so for now, for me, the ideology of CAGW is just that: a religion based on faith, not science.

1: Anthony, thanks for all your hard work.
2: A world temperature is not climate, and a small change in that temperature is not a change in climates. Note the ‘s’.
3: Evidence of a temperature change, or even of a climate change, is NOT evidence that the change was caused by more or less CO2.
4. Greater access, now, to inexpensive electricity is ethical and honorable. Making electricity more expensive and less reliable, and destroying wealth is neither ethical nor honorable.
5. I asked a surgeon how the medical folks determined a “go – no go” decision for elective intervention on a person with a condition that was slowly getting worse and might someday become critical.
The answer was the exact opposite of the precautionary principle. The intervention (surgery) is dangerous. If the patient will not notice a better life, there is no reason to go forward.
Likewise with CO2 and global warming. There is no justification for destroying the society that has been and is being created unless it can be shown there is a need and the solution(s) will work.
We are not to that level of understanding.

I’ve read the article, and while I am glad to see that someone is finally using some common sense in regard to weather stations and how they should be placed, there is one thing that is seldom addressed, but should be.
That is how to get the meteorology people, as a collective group, to stop using bombastic advertising language in their opening or headline statements. At this point, Accuweather is the worst of the bunch, employing terms like ‘bombogenic’ and ‘bomb cyclone’, as if severe storms are something new, or inventing terms like ‘windmageddon’. I know this is a means of drawing attention to their website and, they hope, more traffic, but it is amateurish and makes them difficult to take seriously when they do that.
If you could address something like this – take The Howling out of the weather forecasts – it is badly needed.
Good article, Anthony. I enjoyed reading it. Small steps are what it takes to get things done.

…how to get the meteorology people as a collective group to stop using bombastic advertising language in their opening or headline statements.

The govt. NWS meteorologists do not have that problem; they don’t need to ‘sell’ weather, but they have a mission and responsibility to convey a potentially dangerous and/or life-threatening situation when it occurs (tax dollars hard at work). However, folks like The Weather Channel, AccuWeather, etc., as well as CNN, MSNBC, etc., *do* have a need to ‘sell’ weather when it is in the headlines… and there is where you have your problems. That is why I have problems when people want to privatize aspects of govt. There is no profit in meteorology because it is (I believe) a moral obligation of the govt to protect the life and property of its citizens, and if it is driven by profit, you either have to glamorize the headlines to sell advertising or make people pay for warnings.

…employing terms like ‘bombogenic’ and ‘bomb cyclone’, as if severe storms are something new…

Well… ‘bomb cyclone’ and ‘bombogenesis’ are popular but valid terms used to describe explosive cyclogenesis (>24 mb pressure fall in <24 hrs), and when those situations impact the public, the public needs to know the potential threats which will occur in a short time span. It won’t be just cloudy and showery, but heavy precipitation and high/potentially damaging winds for a potentially extended period. https://oceanservice.noaa.gov/facts/bombogenesis.html
just my $.02
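The threshold quoted above is easy to state as a check. This is a sketch of the rule of thumb only; the formal definition of explosive cyclogenesis also scales the threshold with latitude, which is omitted here.

```python
def is_bomb_cyclone(pressure_fall_mb, hours):
    """Rule-of-thumb test for explosive cyclogenesis:
    at least a 24 mb central-pressure fall within 24 hours."""
    return hours <= 24 and pressure_fall_mb >= 24

print(is_bomb_cyclone(30, 18))  # -> True
print(is_bomb_cyclone(20, 24))  # -> False
```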

JKRob, your big problem is that you only see the problems you want to see.
Yes, companies can and do use strong language to try and convince people that they need to use the companies products.
Your error is your belief that government workers are immune from this temptation.
How many government agencies have used bombastic language to convince the people to call their politicians and protect the budget of X, Y, Z department?
Nobody at the EPA goes around trying to convince people that ever smaller levels of pollutants are going to kill us, therefore the EPA needs a bigger budget?
Just look at all the AGW alarmists who are constantly trying to scare people to justify their salaries.
The only difference is that private companies can’t force you to buy their products, nor can they jail you for disagreeing with them.

The government’s hurricane reporting service has over-hyped hurricane speeds and other metrics for at least the past 5 years. WUWT threads on individual hurricanes contain complaints about this.
(The excuse for the alarmism is that they are trying to get people to take the warning seriously and button down / get prepared / evacuate. But if they cry wolf regularly, people will take them LESS seriously.)

All of the above, including Accuweather’s censoring its website, and “selling” the weather to the public when it isn’t a product – all of these make them look VERY amateurish.
I sent Accuweather an e-mail telling them politely that their local weather station’s readings were incorrect and they might need to have someone check it, as it gave a reading of cloudy and raining on a day when the weather was a clear, bright blue, sunny sky from the western horizon all the way out to Lake Michigan, and NO rain. The response I got was ‘we rely on our weather stations for accuracy’, ‘we take our job seriously’, blah, blah, blah, but no acknowledgement that something was out of whack.
Weather Underground is better, letting you use readings from several weather stations which they show on a map.
Yes, weather guessers should stop naming thunderstorms. That is childish.
I have taken to looking out the windows or the front door. I have to get my own weather station. This just gets silly. Thanks for your responses.

Anthony, that is not an ASOS but a US Air Force AN/FMQ-19, or Fixed Base Weather Observing System (FBWOS), that was installed by Coastal Environmental Systems of Seattle under contract with, and supervision of, the weather program office at Hanscom AFB, MA. We used the same 10-meter tilting wind tower as the FAA and NWS ASOS systems, but that is where the comparisons stop. The link below is to a briefing about systems handled by the Hanscom office, including the FMQ-19. http://www.afceaboston.com/documents/events/cnsatm2011/Briefs/01-Monday/08-Dreher-HBAJ%20WeatherOverview.pdf
As a meteorologist and supervising systems engineer for about 30 of the 110 projects, I constantly had a hard time agreeing with or feeling good about the location of most of them. We were guided by very strict FAA requirements for placement on airfields to avoid being an obstacle to flight. These CRN systems are placed there under the same restrictive guidance and will be no better in data gathering other than they will be more accurate than the FMQ-19. I know of a few FMQ-19 placements, including the very first one at McChord AFB, WA (Now Joint Base Lewis-McChord) that are tainted for temperature, and likely dew point, measurements. There was, and should still be, very specific guidance for placement of each sensor in regards to the surrounding area. When these consolidated systems came along, it became much harder to satisfy all the requirements equally and we were forced to accept locations that were less than optimal based on the overriding FAA safety of flight rules for airfield structures.
I admire your efforts in bringing these problems to light and thank you for your time and energy in educating us all.

Thanks, one similarity appears to be that they both use the HO-83 Hygrothermometer, at least from the photo that is what it appears to be. Unfortunately, my “handler” wouldn’t let me go over and inspect it first hand.

At one time I was officially referred to by most agency bosses as their resident cynic and skeptic. Also, after working on a research project where we developed the best data set for any marine fish in the Atlantic Ocean, we were forced to turn it over to the NMFS. That data set was never the same afterwards. Even though we pointed out where they had changed our data, which they had promised not to change, they claimed their models required the data be adjusted. We never received a good explanation as to why.

Before I discovered Anthony Watts’s work with weather stations, I knew we had a problem with them. I knew of stations that had moved, stations that seldom saw upkeep, stations where instruments were replaced with no comparison to the instrument being replaced, even though it was still functioning.

Anyway, the idea of an improved data collection system is a wonderful one. My concerns are the gatekeepers and others involved in maintaining the stations and collecting and reporting the data. Is there a standards/operating manual for properly maintaining each station and collecting and reporting its data?

As one of my staff said during a meeting where another division was trying to get us to spend money on some federal mandate to get matching funds: “Why should we trust this to work at all?” The answer actually was, “We are a government agency, just trust us.” At which point my staffer said, “Trust, hell, I have worked for government way too long to trust it.” Finally she said, “I guess we have to trust but verify.”

This is all fine, good, and necessary. But my pet peeve, from an engineer’s perspective, is that air temperature alone is totally inadequate if you’re trying to monitor warming or cooling, otherwise known as changes in heat content. Minor changes in relative humidity produce drastic differences in the amount of energy contained in a given volume or mass of air, to the extent that several degrees of temperature change become irrelevant. If I try to run a heat balance on a system and I don’t have the accurate heat content of the components, the calculations are going to be garbage. Which is why I shake my head every time they trot out a new, brightly colored temperature map of the world.

My thoughts exactly. If you are going to design and deploy a new measuring system, it should be a requirement to measure what is really needed – heat content, not temperature. Otherwise, you are equating the heat content of a station at 9000 ft. on the leeward side of a mountain range to a station at sea level on the coast. That would be ridiculous! The whole conversation needs to be changed away from temperature to heat content.
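The humidity point can be illustrated with the standard psychrometric approximation for the enthalpy of moist air. The humidity-ratio values below are illustrative picks for a dry versus a humid day, not measured data.

```python
def moist_air_enthalpy(t_c, w):
    """Approximate specific enthalpy of moist air, kJ per kg of dry air.
    t_c: dry-bulb temperature (C); w: humidity ratio (kg water vapor / kg dry air)."""
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# Same 30 C thermometer reading, very different energy content:
dry = moist_air_enthalpy(30.0, 0.005)    # arid site
humid = moist_air_enthalpy(30.0, 0.020)  # humid coastal site
print(f"dry: {dry:.1f} kJ/kg  humid: {humid:.1f} kJ/kg")
# -> dry: 43.0 kJ/kg  humid: 81.3 kJ/kg
```

Nearly double the energy at the same temperature, which is exactly the commenters’ point about tracking heat content rather than temperature alone.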

I give you full credit, Anthony, for having brought this issue to the forefront. I have the uncharitable thought that, for the ideologues running these terribly compromised stations, the stations served to justify adjustments as part of the plan. How reasonable it sounds that, gee, we have to adjust these stations because they are obviously off. They aren’t wrong on the face of it, but it ran cover for “algorithms”, which Mark Steyn described best in a Senate hearing on quality of data: “How can we know with certainty what the global temp will be in 2100 when we don’t even know what the temperature WILL BE in 1950!”
I notice also that they have switched the message to a simple ‘Despite the status of the network we know that the weather has warmed since 1800.’ They leave the reader to conclude that this is because of our energy sins. My older Russian friends pick up on this kind of propaganda instantly.

Rob Bradley delivers another laugh, sorry it took me this long to see it. Yes, Rob, Steyn dropped out of school; yet he bashes down the walls of CliSci’s strongholds so effectively that little science-site tr0lls like yourself can’t let his name be uttered without trying (failing) to slur him.

Nick Stokes March 2, 2018 at 12:19 pm says “Of course you can verify. The unadjusted data is available. You can calculate the index with no adjustment at all, if you prefer. It makes very little difference.”
Relevant to Australia, Nick, you must know that the full scientific job of verification has not been done, by you or any others I can find. You mainly count the number of plus adjustments and compare this count to the minus adjustments, say they are about equal, claim end of story.
This counting method does not address for example the charge, often levied, that the adjusters have cooled the distant past and/or warmed the near past to give an exaggerated warming trend.
The thorough way to verify uses not just the sign of adjustment, but also the magnitude of the adjustment, its duration and its potential on the time line to leverage regressions on which trend estimates are often based.
So here is a challenge. You know my email, we are both in Melbourne. You have access to facilities that I do not. I know how to do the exercise. Will you work with me to do the job properly, once and for all?
Here is a graphic example of my motivation. BOM state in a public report from 2016 that “The duration, frequency and intensity of heatwaves have increased across large parts of Australia since 1950”. The first non-northern case I tested the other day was Brisbane; the result was this: http://www.geoffstuff.com/35cdays.jpg
I had been working on another claim that was more affected by adjustment, that “The duration, frequency and intensity of heatwaves have increased across large parts of Australia since 1950”. Part of this compared how heatwaves were affected by adjustment, specifically raw CDO versus the BOM-adjusted ACORN-SAT. Here is another first graph, this for one-day heatwaves, as I computed 1-, 3-, 5-, and 10-day heatwaves – Geoff. http://www.geoffstuff.com/cool_sydney.jpg
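Geoff's distinction between counting adjustment signs and measuring their leverage can be sketched like this. The series below are synthetic, invented solely to show how sign-balanced adjustments (equal numbers of cooled and warmed years) can still steepen a fitted linear trend when the cooling lands early and the warming lands late.

```python
def linear_trend(values):
    """Ordinary least-squares slope of values against their index (per step)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Synthetic annual anomalies: two early years cooled, two late years warmed,
# so plus and minus adjustments are equal in count yet the trend steepens.
raw = [0.0, 0.1, 0.0, 0.2, 0.1, 0.3, 0.2, 0.4]
adj = [-0.2, -0.1, 0.0, 0.2, 0.1, 0.3, 0.4, 0.6]

print(f"raw trend:      {linear_trend(raw):.3f} per step")  # -> 0.049
print(f"adjusted trend: {linear_trend(adj):.3f} per step")  # -> 0.106
```

A sign count alone would call the two series equivalently adjusted; the trend comparison does not.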

Geoff, “You mainly count the number of plus adjustments and compare this count to the minus adjustments.”
No, here I am just calculating the global average (which is the most widely cited figure) with and without GHCN adjustment.
I think I have your email, but if you have mine handy, please drop me a line.

Oh bum. The first BOM claim should have been:
“The number of days per year over 35 °C has increased in recent decades, except in parts of northern Australia.” But the top graph was correct. At a major population centre, a change of 64 days to 4 days is hardly an increase, is it? Apologies, Geoff.

Perhaps this has been done before, but the problem of poor siting can be somewhat offset by working with normalized data: the temperatures at each time at the site divided by the temperature there at a given starting time, or perhaps by the mean temperature over a set period (a year or a multiple number of years), then observing the trend of the normalized variables. The impact of “high” or “low” sites would be mitigated this way and trends made less obscure.
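A minimal sketch of that normalization, assuming the divisor is the mean over an early base period. Temperatures are in kelvin, since ratios of Celsius values would be physically meaningless, and the numbers are invented for illustration.

```python
# Divide each reading by the site's mean over a base period, then look at
# the trend of the ratios; the site's absolute level drops out.
site = [287.1, 287.3, 287.0, 287.6, 287.9, 288.2]  # annual mean temperatures, K
base = sum(site[:3]) / 3                            # mean over the first 3 years

normalized = [round(t / base, 4) for t in site]
print(normalized)  # -> [0.9999, 1.0006, 0.9995, 1.0016, 1.0027, 1.0037]
```

The resulting series carries the trend but not the level of the site, essentially the same idea the anomaly method implements by subtraction instead of division.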

“buggy surface COOP and GHCN network and it’s highly biased and then adjusted data” Hey – I’m a COOP observer and I’m not biased. And all the effort I put in to keep my reporting consistent and accurate is purely volunteer, I get absolutely nothing for my efforts. Thanks a lot.

I know many of the observers personally, and this is not an indictment of any individual, but of the management of the COOP network by NOAA. I realize that many observers like yourself are truly dedicated. What happens, though, to those dutifully obtained observations is that they often get adjusted. That’s the biggest issue. That is where the bias comes from, not the observer.

Where I work we don’t report temperatures. We do report precipitation. When that is snow or ice, we melt it to get our number. When that’s been my job, I try to report the most accurate number I can.
What happens to that number after it is reported…, well, I did my part honestly.
Another Scott,
Keep honestly reporting the numbers. What others do with them is on their heads.
Anthony gets that.
(Have I ever mentioned how the record high and low temperatures for my little spot have been “adjusted”?8-)

Dear Another Scott,
The complaint has not been about the observer, the complaint has been that the adjustment process is not fully vetted, tested, and verified. Rather, the adjustment process is undertaken by biased individuals whose reputation, credentials and politics have a lot riding on the outcome of the adjustment process.
Perhaps there is a more diplomatic way to put this . . . but bluntly: those in charge of the adjustment process assume that your fellow COOP observers are too stupid to realize that they are reading yesterday’s temperatures rather than today’s; therefore they apply a time-of-observation adjustment which accounts for something like 0.7 degrees of the warming in the adjusted series.

“those in charge of the adjustment process assume that your fellow COOP observers are too stupid to realize that they are reading yesterday’s temperatures rather than today’s”
No, they assume that the observers read and accurately reported the readings on the min/max thermometer at the arranged time. The observers don’t, or shouldn’t, report other values (except current temperature, in another place on the form).

1,500,000 penguins found in East Antarctica. It seems the scientists either took 2 years to count the nests or else they were deciding how to fit this discovery into the AGW hoax. They are still spinning the tale that penguins have been declining in west Antarctica. Well, apparently these penguins in East Antarctica have been living through the whole era of mankind’s industrialization and they seem to be surviving quite well, because the temperatures in East Antarctica are as cold as ever. AGW hoax exposed again. https://www.cnn.com/2018/03/02/health/penguins-antarctica-massive-colony-trnd/index.html

Alan, those penguins aren’t in East Antarctica. They’re at the very tip of the tip of the Antarctic Peninsula, which probably has just about the mildest climate in all of Antarctica: http://sealevel.info/danger_islands_680x461.jpg
Yet we’re supposed to believe that warming threatens the penguins? Seriously??
Actually, the penguins seem to prefer a warmer climate than most of Antarctica currently has. Unfortunately for the penguins, “polar amplification” of global warming seems to only work in the northern hemisphere (and nobody knows why). Even the Antarctic Peninsula (contrary to widespread misinformation) isn’t warming significantly, and Doran et al 2002† found that, although the Earth as a whole experienced 0.06°C/decade of warming during the 20th century, there was “a net cooling on the Antarctic continent between 1966 and 2000, particularly during summer and autumn.”
† Yes, that Doran!

I applaud the efforts that WUWT has made over the years regarding the proper placement of weather stations and the obvious failure of so many stations to provide accurate data, knowing what we do about the present placement of some of the weather stations. Or how some of that inaccurate data may have been massaged to get the biased confirmation that was sought. Much of this adjusted data is what has been used to determine that the earth has had increasing temperatures. Or a complete fabrication of the temperature record in some cases, by some climate scientist activists.
As someone already pointed out up thread about getting a new weather station network: “Better late than never”. Designing and siting a whole new generation of accurately recording weather stations on proper sites should be one of our highest priorities in weather data collection, as well as forming the basis for any future policies regarding future weather and climate, so that we can show scientifically that the information was recorded accurately. And how would future generations be able to compare our older historic data with what comes next, if we don’t get it right this time? We owe it to ourselves and future generations to start this accurate data collection now.
I would suggest that any new weather station network that is developed utilize a real-time blockchain application that ensures the actual data collected at every individual weather station is incorruptible. In other words, the raw data at every 5-minute interval is recorded and distributed and is forever the same data, not able to be tampered with in any possible future scenario. A simple blockchain ledger of the information would ensure the data is forever incorruptible. We have seen ‘adjustments’ to questionable weather stations, supposedly to try to make them read accurately. However, that leads many to question whether the original raw data records are correct, and what confirmation basis was utilized by those doing the adjusting.
Let’s not make that mistake again either.
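[A minimal sketch of the hash-chained ledger idea above. This is illustrative only: a real blockchain adds distribution and consensus across many parties, which this single-machine example omits. The readings are hypothetical.]

```python
# Each record stores a hash of its own contents plus the previous record's
# hash, so any after-the-fact "adjustment" breaks the chain and is detectable.
import hashlib
import json

def add_record(chain, reading):
    """Append a reading, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"reading": reading, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["hash"] = digest
    chain.append(body)

def verify(chain):
    """Return True only if no entry has been altered after the fact."""
    for i, entry in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {"reading": entry["reading"], "prev_hash": entry["prev_hash"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != expected_prev or entry["hash"] != recomputed:
            return False
    return True

chain = []
for t in [-11.2, -10.8, -12.5]:   # hypothetical 5-minute readings, deg C
    add_record(chain, t)

assert verify(chain)              # the untampered chain checks out
chain[1]["reading"] = -5.0        # "adjust" a reading after the fact...
assert not verify(chain)          # ...and verification fails
```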

Well, this is kind of puzzling. I was looking at the locations of the stations used in GHCN, and wondered how many there were way up north where it is insisted that massive warming is going on. I wanted to know just how many reporting stations there were.
There aren’t many. I have them installed in a .kml file on Google Earth, and between some of them up there, the ruler shows a distance of up to 2200 km. That’s a long way to interpolate data. So I was looking for stations way, way up there, and found a station at Ostrov Vize, Russia. The station ID is RSM00020069 in the GHCN Daily files, and 22220069000 in the GHCN Monthly files. I have no idea why the stations have two different IDs (actually, three, because its WMO ID is 20069).
I went to the NOAA site where one can get data for a station for any month it was recording, so I looked up RSM00020069, and entered March 2017. This is part of the data I was shown:
Year Month Day Max Min
2017 3 1 2
2017 3 2 -2
2017 3 3 -11 -18
2017 3 4 -11 -18
2017 3 5
2017 3 6 -18
2017 3 7 -7
2017 3 8 -15
2017 3 9 -24
2017 3 10 -13
2017 3 11 -25
2017 3 12 -5
2017 3 13 -19
2017 3 14 -17
2017 3 15 -4 -13
2017 3 16 -9
2017 3 17 3
2017 3 18 26
2017 3 19 29
2017 3 20 27
2017 3 21 22
2017 3 22 -10
2017 3 23 3
2017 3 24 22 -9
2017 3 25 9
2017 3 26 1 -10
2017 3 27 -8
2017 3 28 7 0
2017 3 29 0
2017 3 30 -7
2017 3 31 -17
-------------------------------------------------------
AVG 4 -13

This is why I’m confused. The NOAA QuickData page for that station shows pretty sparse data for March 2017, while the NOAA Daily shows complete data that isn’t even close to what the first report showed. To top it all off, the Monthly average is 0.1°C warmer than what you get averaging the Daily data.
Is it any wonder why it’s frustrating to try to figure out what’s being done with the data?
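[One plausible source of the small discrepancy James describes, sketched under an assumption: the monthly mean depends on how days with a missing max or min are treated. This is not NOAA’s actual procedure, and the daily values below are hypothetical, merely shaped like the table above.]

```python
# Monthly mean as the average of daily (Tmax+Tmin)/2, where the handling of
# incomplete days (drop them vs. fill them) shifts the result.

def monthly_mean(days):
    """Average (Tmax+Tmin)/2 over days where BOTH values are present."""
    means = [(tmax + tmin) / 2 for tmax, tmin in days
             if tmax is not None and tmin is not None]
    return sum(means) / len(means) if means else None

# Hypothetical daily (Tmax, Tmin) in deg C, with one incomplete day:
march = [(-11, -18), (-11, -18), (-4, -13), (None, -9), (7, 0)]

complete_only = monthly_mean(march)        # -> -8.5 (incomplete day dropped)

# Filling the missing max by persistence (previous day's value) shifts it:
filled = march[:3] + [(-4, -9)] + march[4:]
with_fill = monthly_mean(filled)           # -> -8.1, a 0.4 deg C difference
```

Two defensible conventions, two different monthly averages from the same raw observations, which is why a published monthly figure need not match a naive average of the daily file.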

James,
The columns didn’t quite line up.
Try using the “pre” format options.
( Put <pre> at the start of your table and end it with </pre> )
More (and better) info here. https://wermenh.com/wuwt/index.html

I believe it was Anthony who, shortly before he got started on the surface stations project, conducted an experiment in which he took two boards and painted one with whitewash and the other with modern white latex paint.
He found that the board painted with latex got a little bit warmer than the board painted with whitewash.
Apparently, while both reflect visible light well, whitewash does a better job of reflecting in the IR spectrum.
This got started because somebody had noted that in the past all temperature enclosures were painted with whitewash, but in recent years they had been shifting over to latex because it lasted longer.
The other thing was, very few stations wrote down when the changeover occurred.

Meltdown warning: This post/thread has been picked up by the Daily Caller News Foundation, and if Drudge picks up on it too, then pack your servers in ice.
And it all began with “the name/website that was not to be spoken” by the Climategate team. LOL
I can’t even begin to offer the appreciation and respect Anthony Watts and WUWT deserves!

How can I trust someone who plainly states their goal is to provide actionable data to our children in order to atone for our legacy of global warming? Watch for legitimate declines in temperature being attributed to equipment changes. Watch for more downward adjustments to the historical record. Watch for cherry-picked “super-sites,” whatever that means.

Anthony, this is fabulous! Kudos to you for your dogged determination and to all the volunteers and their thousands of hours of support to illustrate the system was FUBAR.
Unfortunately, my read is still that we have tens of thousands of data points in the record — spanning decades — for which we have little confidence, but which are the only data we have. The old data have questionable validity, utility and value, but yet they will be used for decades.
Meanwhile it will likely take decades for the establishment of a new network, and years of data collection and analysis (plus we should assume there will be associated “spin” regarding the results) before we have a real understanding of how very poor the existing data will be.

When the Great Lakes had record ice 5 years ago, the raw temperatures in the states around the Great Lakes showed much colder than usual temperatures. But after the adjustments, the Great Lakes states were reported as having normal temperatures that winter. That example is one of many reasons why I am skeptical of the adjustment process.

Nice admission.
Funny how when Anthony first brought this small problem up, the “authorities” responded most unkindly.
Now they essentially pretend it is their own idea.
We all know certain people who behave this way.
The claims of significant information remain deliberately off the radar.
The stated claims of warming of 0.7 to 0.9 degrees C in the “Average Global Temperature” are never accompanied by error ranges.
Naturally, if the estimated error range accompanied the A.G.T., the result would be laughter.
Much ado about nothing, this Emperor has never been clothed.

Quote from the abstract: “These uncertainties do not call into question the … overall magnitude of the changes in the global climate system.”
I think it strange and notable that Anthony Watts does not comment on this assertion about the overall magnitude. Watts once published an analysis indicating the opposite. Or am I misunderstanding something?
Watts and the other volunteers that contributed to the survey work at http://surfacestations.org/ performed vital and fundamental research into climate science. The home page of surfacestations.org says it was last updated in 2012. That’s too bad.
