Press Release – Watts at #AGU15 The quality of temperature station siting matters for temperature trends

Thirty-year temperature trends are shown to be lower when computed from well-sited, high-quality NOAA weather stations that do not require adjustments to the data.

This was in AGU’s press release news feed today. At about the time this story publishes, I am presenting it at the AGU 2015 Fall meeting in San Francisco. Here are the details.

NEW STUDY OF NOAA’S U.S. CLIMATE NETWORK SHOWS A LOWER 30-YEAR TEMPERATURE TREND WHEN HIGH QUALITY TEMPERATURE STATIONS UNPERTURBED BY URBANIZATION ARE CONSIDERED

Figure 4 – Comparisons of 30 year trend for compliant Class 1,2 USHCN stations to non-compliant, Class 3,4,5 USHCN stations to NOAA final adjusted V2.5 USHCN data in the Continental United States

EMBARGOED UNTIL 13:30 PST (16:30 EST) December 17th, 2015

SAN FRANCISCO, CA – A new study of the surface temperature record presented at the 2015 Fall Meeting of the American Geophysical Union suggests that the 30-year trend of temperatures for the Continental United States (CONUS) since 1979 is about two thirds as strong as the official NOAA temperature trend.

Using NOAA’s U.S. Historical Climatology Network, which comprises 1218 weather stations in the CONUS, the researchers were able to identify a 410-station subset of “unperturbed” stations that have not been moved, had equipment changes, or changes in time of observation, and thus require no “adjustments” to their temperature record to account for these problems. The study focuses on finding trend differences between well-sited and poorly sited weather stations, based on a WMO-approved metric, Leroy (2010)1, for classification and assessment of the quality of the measurements according to proximity to artificial heat sources and heat sinks which affect temperature measurement. An example is shown in Figure 1 below, showing the NOAA USHCN temperature sensor for Ardmore, OK.

Following up on a paper published by the authors in 2010, Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends,2 which concluded:

Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends

…this new study is presented at AGU session A43G-0396 on Thursday, Dec. 17th at 13:40 PST and is titled Comparison of Temperature Trends Using an Unperturbed Subset of The U.S. Historical Climatology Network:

A 410-station subset of U.S. Historical Climatology Network (version 2.5) stations is identified that experienced no changes in time of observation or station moves during the 1979-2008 period. These stations are classified based on proximity to artificial surfaces, buildings, and other such objects with unnatural thermal mass using guidelines established by Leroy (2010)1. The United States temperature trends estimated from the relatively few stations in the classes with minimal artificial impact are found to be collectively about 2/3 as large as US trends estimated in the classes with greater expected artificial impact. The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization. The homogeneity adjustments applied by the National Centers for Environmental Information (formerly the National Climatic Data Center) greatly reduce those differences but produce trends that are more consistent with the stations with greater expected artificial impact. Trend differences are not found during the 1999-2008 sub-period of relatively stable temperatures, suggesting that the observed differences are caused by a physical mechanism that is directly or indirectly caused by changing temperatures.

Figure 1 – USHCN Temperature sensor located on street corner in Ardmore, OK in full viewshed of multiple heatsinks.

1. Comprehensive and detailed evaluation of station metadata, on-site station photography, satellite and aerial imaging, street-level Google Earth imagery, and curator interviews has yielded a well-distributed 410-station subset of the 1218-station USHCN network that is unperturbed by time-of-observation changes, station moves, or rating changes, and that has a complete or mostly complete 30-year dataset. It must be emphasized that the perturbed stations dropped from the USHCN set show significantly lower trends than those retained in the sample, both for well and poorly sited station sets.

2. Bias at the microsite level (the immediate environment of the sensor) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend. Well sited stations show significantly less warming from 1979 – 2008. These differences are significant in Tmean, and most pronounced in the minimum temperature data (Tmin). (Figure 3 and Table 1)

3. Equipment bias (CRS v. MMTS stations) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend when CRS stations are compared with MMTS stations. MMTS stations show significantly less warming than CRS stations from 1979 – 2008. (Table 1) These differences are significant in Tmean (even after upward adjustment for MMTS conversion) and most pronounced in the maximum temperature data (Tmax).

4. The 30-year Tmean temperature trend of unperturbed, well sited stations is significantly lower than the Tmean temperature trend of NOAA/NCDC official adjusted homogenized surface temperature record for all 1218 USHCN stations.

5. We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.

6. The data suggests that the divergence between well and poorly sited stations is gradual, not a result of spurious step change due to poor metadata.
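The basic quantity behind these comparisons, a 30-year least-squares temperature trend expressed in degrees per decade, can be sketched as follows. This is a minimal illustration on synthetic annual anomalies; the station values and the roughly 0.2 vs. 0.3 °C/decade group trends are invented placeholders, not the study's data or code.

```python
import numpy as np

def decadal_trend(years, anomalies):
    """Least-squares slope of annual anomalies, scaled to degrees C per decade."""
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    return slope_per_year * 10.0

# Synthetic example: 30 years (1979-2008) of annual anomalies for two
# hypothetical station groups with different built-in trends.
rng = np.random.default_rng(0)
years = np.arange(1979, 2009)
well_sited = 0.02 * (years - 1979) + rng.normal(0, 0.1, years.size)
poorly_sited = 0.03 * (years - 1979) + rng.normal(0, 0.1, years.size)

print(f"well sited:   {decadal_trend(years, well_sited):+.2f} C/decade")
print(f"poorly sited: {decadal_trend(years, poorly_sited):+.2f} C/decade")
```

On noise-free input the function recovers the built-in slope exactly; with noise added, the two estimates scatter around the underlying 0.2 and 0.3 °C/decade values.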

The study is authored by Anthony Watts and Evan Jones of surfacestations.org, John Nielsen-Gammon of Texas A&M, and John R. Christy of the University of Alabama, Huntsville, and represents years of work in studying the quality of the temperature measurement system of the United States.

Lead author Anthony Watts said of the study: “The majority of weather stations used by NOAA to detect climate change temperature signal have been compromised by encroachment of artificial surfaces like concrete, asphalt, and heat sources like air conditioner exhausts. This study demonstrates conclusively that this issue affects temperature trend and that NOAA’s methods are not correcting for this problem, resulting in an inflated temperature trend. It suggests that the trend for U.S. temperature will need to be corrected.” He added: “We also see evidence of this same sort of siting problem around the world at many other official weather stations, suggesting that the same upward bias on trend also manifests itself in the global temperature record”.

This work is a continuation of the surface stations project started in 2007, our first publication, Fall et al. in 2010, and our early draft paper in 2012. Putting out that draft paper in 2012 provided us with valuable feedback from critics, and we’ve incorporated that into the effort. Even input from openly hostile professionals, such as Victor Venema, has been highly useful, and I thank him for it.

Many of the valid criticisms of our 2012 draft paper centered around the Time of Observation (TOBs) adjustments that have to be applied to the hodge-podge of stations with issues in the USHCN. Our viewpoint is that trying to retain stations with dodgy records and adjusting the data is a pointless exercise. We chose simply to locate all the stations that DON’T need any adjustments and use those, thereby sidestepping that highly contentious problem completely. Fortunately, there were enough such stations in the USHCN: 410 out of 1218.

It should be noted that the Class 1/2 station subset (the best stations we have located in the CONUS) can be considered an analog to the Climate Reference Network, in that these stations are reasonably well distributed in the CONUS and, like the CRN, require no adjustments to their records. The CRN consists of 114 commissioned stations in the contiguous United States; our subset is similar in size and distribution. This should be noted about the CRN:

One of the principal conclusions of the 1997 Conference on the World Climate Research Programme was that the global capacity to observe the Earth’s climate system is inadequate and deteriorating worldwide and “without action to reverse this decline and develop the GCOS [Global Climate Observing System], the ability to characterize climate change and variations over the next 25 years will be even less than during the past quarter century” (National Research Council [NRC] 1999). In spite of the United States being a leader in climate research, long term U.S. climate stations have faced challenges with instrument and site changes that impact the continuity of observations over time. Even small biases can alter the interpretation of decadal climate variability and change, so a substantial effort is required to identify non-climate discontinuities and correct the station records (a process called homogenization). Source: https://www.ncdc.noaa.gov/crn/why.html

The CRN has a decade of data, and it shows a pause in the CONUS. Our subset of adjustment-free, unperturbed stations spans over 30 years. We think it is well worth looking at that data and ignoring the data that requires loads of statistical spackle to patch it up before it is deemed usable. After all, that’s what they say is the reason the CRN was created.

We do allow for one and only one adjustment in the data, and this is only because it is based on physical observations and it is a truly needed adjustment. We use the MMTS adjustment noted in Menne et al. 2009 and 2010 for the MMTS exposure housing versus the old wooden-box Cotton Region Shelter (CRS), which has a warm bias mainly due to paint and maintenance issues. The MMTS gill shield is a superior exposure system that prevents bias from daytime short-wave and nighttime long-wave thermal radiation. The CRS requires yearly painting, and that often gets neglected, resulting in exposure systems that look like this:

See below for a comparison of the two:

Some might wonder why we use a 1979-2008 comparison when this is 2015. The reason is so that this speaks to Menne et al. 2009 and 2010, papers published by NOAA/NCDC to defend their adjustment methods for the USHCN from criticisms I had raised about the quality of the surface temperature record, such as this book in 2009: Is the U.S. Surface Temperature Record Reliable? This sent NOAA/NCDC into a tizzy, and they responded with a hasty, ghost-written flyer they circulated. In our paper, we extend the comparisons to the current USHCN dataset as well as the 1979-2008 comparison.

We are submitting this for publication in a well-respected journal. No, I won’t say which one, because we don’t need any attempts at journal gate-keeping like we saw in the Climategate emails, e.g., “I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow — even if we have to redefine what the peer-review literature is!” and “I will be emailing the journal to tell them I’m having nothing more to do with it until they rid themselves of this troublesome editor.”

When the journal article publishes, we’ll make all of the data, code, and methods available so that the study is entirely replicable. We feel this is very important, even if it allows unscrupulous types to launch “creative” attacks via journal publications, blog posts, and comments. When the data and paper are available, we’ll welcome real and well-founded criticism.

It should be noted that many of the USHCN stations we excluded as unsuitable (those with station moves, equipment changes, TOBs changes, etc.) had lower trends that would have bolstered our conclusions.

The “gallery” server from the 2007 surfacestations project, which shows individual weather stations and siting notes, is currently offline, mainly because it is attacked regularly, which affects my office network. I’m looking to move it to cloud hosting to solve that problem. I may ask for some help from readers with that.

We think this study will hold up well. We have been very careful, slow, and meticulous. I admit that the draft paper published in July 2012 was rushed, mainly because I believed that Dr. Richard Muller of BEST was going before Congress again the next week, using data I provided, which he agreed to use only for publications, as a political tool. Fortunately, he didn’t appear on that panel. But the feedback we got from that effort was invaluable. We hope this pre-release today will also provide valuable criticism.

People might wonder whether this project was funded by any government, entity, organization, or individual; it was not. This was all done on free time, without any pay, by all involved. That is another reason we took our time: there was no “must produce by” funding requirement.

Dr. Nielsen-Gammon has been our harshest critic from the get-go; he has independently reproduced the station ratings with the help of his students and created his own series of tests on the data and methods. It is worth noting that this is his statement:

The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization.

The p-values from Dr. Nielsen-Gammon’s statistical significance analysis are well below 0.05 (the 95% confidence level), and many comparisons are below 0.01 (the 99% confidence level). He’s on-board with the findings after satisfying himself that we indeed have found a ground truth. If anyone doubts his input to this study, you should view his publication record.
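A group-level significance comparison of the kind described above is commonly done by computing a trend per station and then testing whether the group means differ. The sketch below uses invented per-station trends and SciPy's Welch t-test purely to illustrate the mechanics; the group means, spreads, and station counts are assumptions, not the study's actual numbers or method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical per-station 30-year trends (C/decade) for two siting classes.
well_sited_trends = rng.normal(0.20, 0.05, size=80)     # e.g. Class 1/2
poorly_sited_trends = rng.normal(0.30, 0.05, size=200)  # e.g. Class 3/4/5

# Welch's t-test: does the mean trend differ between the two groups?
t_stat, p_value = stats.ttest_ind(well_sited_trends, poorly_sited_trends,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```

With a 0.1 °C/decade separation between group means and this many stations, the resulting p-value falls far below the 0.01 threshold mentioned above.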

COMMENT POLICY:

At the time this post goes live, I’ll be presenting at AGU until 18:00 PST, so I won’t be able to respond to queries until after then. Evan Jones may be able to after about 3:30 PM PST.

This is a technical thread, so those who simply want to scream vitriol about deniers, Koch Brothers, and Exxon aren’t welcome here. Same for people that just want to hurl accusations without backing them up (especially those using fake names/emails, we have a few). Moderators should use pro-active discretion to weed out such detritus. Genuine comments and/or questions are welcome.

Thanks to everyone who helped make this study and presentation possible.

If Figure 4 is any clue, then “adjusted temperatures” can be just about anything.
Only in climate science can you have one set of data be low (Class 1/2), have a second set of data be in the middle (Class 3/4/5), and then have the final average of all the data be the highest of all (NOAA).

The point of adjusting and homogenising badly sited thermometers is about as logical as taking an average and standard deviation of many provably broken climate model outputs and pretending they represent something not inconsistent with your measurements.
It is very difficult to get a trend to 0.02K/a from devices which are not measuring something that can be well defined to that precision. Simply saying “diurnal min/max temperature in shade at two meters height” is far from defining the problem. And I’m not denying it’s warming; our host calculated it as 0.2K/decade in the US. It is just that the uncertainty is not only about the temperature, but about the thing to be measured. I’m content with climate scientists as long as they don’t let uncertainty be used as a weapon for detrimental mitigation attempts.

This is a colossal effort and achievement by Anthony Watts and deserves the widest study and acknowledgement. I hope that there are no mis-guided efforts to block its publication. The benefits of this study are self-evident. Reliable data is the basis of all science, and reliable data has been missing from the Climate debate for a long while.

How funny, I thought only “climate scientists” were qualified to speak of such an issue, or have an opinion on it. I guess I hadn’t realized an MBA and a bachelor’s in agricultural science and a “freelance consultant” position make you a “climate scientist”.

Frankly, the personal insults ban should also apply here. Ms. O’Brien may be obsessive and vitriolic, but that’s no reason to bring insults toward her in this forum. [REPLY: +1. Hear, hear. Please, guys, if ever there was a time for the high road, it is now. You all know my feelings on the matter. ~ Evan]

Two Labs,
There are other reasons. One example: I’ve never commented there, but she has referred to my comments here in very disparaging terms. Anthony gets treated even worse. So if your suggestion is to just turn the other cheek, I don’t agree, because that way you get slapped on both sides.
Anyway, we’re hardly being “personally insulting” to her. Just telling it like it is.

What makes you a “climate scientist” is doing the drill and surviving peer (and independent) review. Anthony has done this several times. I have done it once.
No sheepskin required. Sou’s criticisms of this project have yielded value to it — and me. I so wish that there were not so much bad blood under the bridge. Both sides in this have a lot to learn from each other. Being on speaking terms helps.

Absolutely. Congratulations and many thanks to all involved.
But this project demonstrates why Climate Science is not a science. At the very start, the object is to get the best data quality possible and work with good data, not poor. But Climate Science has not evolved in that way. Heck, their main data set does not even measure what they are interested in, i.e., it does not measure energy and does not tell one whether energy is accumulating over time.
The first step the Team should have engaged in was an audit of all the stations used to compile the land-based thermometer record, to identify those best sited, those with the best maintenance record and data-recording rigour, and those with the longest data record in time. They should only have used good-quality data sources which require no adjustment whatsoever.
If Global Warming is truly Global, then one does not need 6000, or 2000 or so stations, but one does require good quality data. It would have been much better to have rooted out the good data sources even if this resulted in only 300 to 700 stations world wide. Heck, even 100 to 200 stations would tell us all we need to know if the data that they are returning is good data. Of course, there would be spatial issues but that is not really so much of a problem since Climate is regional and not global and climate response and impact to climate change is also regional and not global. What we need to know is what each continent is doing and what each country is doing so it does not matter greatly whether globally the distribution of the spatial coverage is less than ideal.
Presently what we are doing is simply evaluating the efficacy of the adjustments made to cr*ppy data. What is needed is only good data that needs no adjustments whatsoever.

Good job, Anthony. It’s been a very long haul for you and other authors of this work. We should also congratulate the many volunteers who took up the enormous task of documenting every surface station generating temperature data used to influence public policy, despite opposition from government satraps (who are therefore plainly unfit for public trust).
“I hope that there are no mis-guided efforts to block its publication.”
Unfortunately, as the fetid contents of the Climategate emails (undenied by their authors) plainly demonstrate on numerous occasions and beyond any possible doubt, there were persistently corrupt and malicious efforts to hinder publication which were not merely mis-guided.
Those efforts were very carefully guided, indeed, and demonstrate just how fundamentally untrustworthy the “climate science” establishment has been from inception — and always will be (because when it comes to personal integrity, the leopard never really changes his spots.) Corrupt people attract others and weed out of their ranks anyone who may question their carefully wrought fictions. These academic authorities hand pick and hand feed a new generation of “climate scientists”. There’s no reason to believe that crop of shiny new faces will offer any improvement. The acorn doesn’t fall far from the oak. The professional villainy won’t end when the present top-tier “team” of carney barkers and fraudsters have died or retired. It will be permanently institutionalized at the expense of the public.
Nothing from this branch of fraudulent “science” can be trusted. Especially assertions based on methods, data, or assumptions not independently verified (the way real science actually works). Altered data without the original available should be thrown out entirely since it’s as tainted and untrustworthy as the people who “adjusted” it. And anyone who points to it as evidence supporting any conclusive assertion should be pilloried as either a fool or a scammer. Probably both.

Anthony, I, and I’m sure, the rest of the “screeching mercury monkeys” who surveyed stations back in the day thank and congratulate you on persevering with this research. These results demonstrate clearly that method matters and fiddling with numbers ex post facto isn’t going to fix faulty procedures.

Yes, at last the evidence we all knew was there and somehow nobody was able to give us! This is a massive achievement and breaks the foundations of the lie on which this fake science has been built over many years. Everyone who has been following this site must feel pride and joy at what Anthony Watts has achieved and is achieving.

You mean Anthony only used people who were not on the take of the Koch brothers and big oil.
Is that even legal to do climate research without oil money or Koch money ??
I think the OED definition of science talks about “observation and experimentation”.
Good enough for me.
Anthony’s Army all deserve to be publicly recognized for their home-grown, do-it-yourself, go-out-and-observe science achievement. You hired a great crew, Anthony.
Cheap too !
g

What really has been missed in this whole debate was that there are in fact FOUR RADIOSONDE data sets that AGREE with TWO Satellite data sets which show NO warming for the past 18 years. This is incontrovertible evidence. Somehow the radiosonde data was never mentioned or put on graphs until recently. I find this an incredible omission. I wonder if this data corresponds well with Anthony’s latest unadjusted compliant surface data for the same period… trend anyway.

“Incontrovertible”? Nothing in empirical science has that status. In using that term you only ape the corruption of the APS and other attempts by those on the other side to close debate. Otherwise I thoroughly support your case.

P.S. To jim: Here are two helpful (I hope!) excerpts from the Christy et al. paper:

This paper describes the completely revised adjustments to the Microwave Sounding Unit (MSU) deep-layer tropospheric temperature products first reported in Spencer and Christy (1990). These satellites, the first being launched in late 1978, ***
To show that the excellent agreement between the MSU and the average of these 97 radiosonde stations is not a coincidence, we select three geographically distinct subregions and perform the same statistical comparison. ***

Correct me if I’m wrong, but aren’t the satellite data adjusted to match the radiosonde data in some fashion?
Maybe adjusted isn’t the right word, but I thought the radiosonde data were somehow used as a reference for deciding what satellite data correlate to a certain tropospheric temperature.
If so that doesn’t invalidate the significance of this correlation, but they shouldn’t be considered two completely independent data sets.

After looking at the data like this, I started to look at how much each series changes going from min to max and back, and while the absolute temps aren’t the same in the different zones, this daily cycle over the year returns on average to 0.0F.

That will have been a reference to our original Tmin findings in Fall et al. (2011). Those numbers will shortly be superseded by our current paper, which is, in a sense, a followup study, far more intensely done and using a far more difficult rating process.

Hi, Marcus — lol, I have so little else to occupy my RAM, that WUWT stuff can use most of the available RAM (some of it is, unfortunately, on ROM and my brain refuses to access it to let me write what it says… IOW: I forget a lot, too) — Mike Crow’s work impressed me from the start and not TOO long ago, there was a thread that also brought his fine work to mind… .
And, Marcus: don’t ever go away — WUWT needs your lovely personality (things can get mighty, MIGHTY, heeeaaaaavvvvy around here sometimes (even to the point of fiercely mean! — you keep the atmosphere light and healthy — humor, enthusiasm, and good cheer are ESSENTIAL!).
Each one of us has a role to play. Each one of us is important.

Thanks for the computer science lesson, MarkW. I really messed up how I wrote that. I used the fact that ROM (read only memory) cannot be altered by the “reader,” thus, could not be accessed in such a way as to make it available to my “write-out” (i.e. memory recall) code. I blew it!
And, want to (just in case you see THIS one, heh) say: Way to go standing up for truth in science (against AGW) to the extent that you lost your job at a major laboratory — you are a hero for truth!
Janice

I think the interesting comparison is between these data sets and the USCRN, the Climate Reference Network.
All those are pristine top quality sites with triple redundant aspirated temperature sensors.
No adjustments allowed or needed, and guess what… they show NO warming for the past 10 years. The decade time interval probably extends to the right of Anthony’s graph.

Why on earth should NOAA have any interest in showing curves like this? https://wattsupwiththat.files.wordpress.com/2015/06/uscrn-trend-plot-from-ncdc-data.png
“The U.S. Climate Reference Network (USCRN) is a systematic and sustained network of climate monitoring stations with sites across the conterminous U.S., Alaska, and Hawaii. These stations use high-quality instruments to measure temperature, precipitation, wind speed, soil conditions, and more. Information is available on what is measured and the USCRN station instruments.
The vision of the USCRN program is to provide a continuous series of climate observations for monitoring trends in the nation’s climate and supporting climate-impact research.
Stations are managed and maintained by the National Oceanic and Atmospheric Administration’s (NOAA) National Centers for Environmental Information.”

Two things are important to note, here.
1.) The trend is flat, just an insignificant bit on the cool side.
2.) COOP tracks very well with CRN from 2005 to 2014.
(Trends are to be considered Tmean unless otherwise specified.)
This is important, because it supports our hypothesis: Poor microsite exaggerates trend. And it doesn’t even matter if that trend is up or down.
Poor microsite exaggerates a warming trend, causing a divergence with well sited stations. Poor microsite also exaggerates a cooling trend, causing an equal and opposite divergence. And if there is essentially no trend to exaggerate (as per the 2005-2014 interval), there will be essentially no divergence.
That explains why poorly sited stations have stronger warming trends than well sited stations from 1977 – 1998. It explains why poorly sited stations have stronger cooling trend from 1999 – 2008. And, finally, it explains the lack of divergence between COOP and the CRN from 2005 – 2014. That is what is called working forward, backward — and sideways.
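The exaggeration hypothesis sketched above can be written as a simple multiplicative model: the poorly sited trend is roughly the underlying trend scaled by a factor greater than one, so the divergence (poor minus well) carries the sign of the trend and vanishes when the trend is near zero. A toy illustration follows; the 1.5x exaggeration factor is an invented assumption, not a value from the study.

```python
def poorly_sited_trend(true_trend, exaggeration=1.5):
    """Toy model: poor microsite scales the underlying trend by a fixed factor.

    The 1.5 factor is an illustrative assumption, not a value from the study.
    """
    return exaggeration * true_trend

# Warming, cooling, and flat cases (C/decade): divergence tracks the trend's
# sign and disappears when the underlying trend is zero.
for true in (0.20, -0.10, 0.0):
    poor = poorly_sited_trend(true)
    print(f"true {true:+.2f} -> poorly sited {poor:+.2f}, "
          f"divergence {poor - true:+.2f}")
```

Under this model a warming interval shows upward divergence, a cooling interval an equal and opposite downward divergence, and a flat interval (as per 2005-2014) essentially none.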

Evan, I also found, when looking at how individual stations’ measured temps evolve (difference between daily rising and falling temps) over a year’s time, a slight cooling.
If you haven’t seen what I’ve done previously, I think it’s a nice complement to your team’s work. I haven’t looked at absolute temperature trends, just the delta change, and have processed unaltered station data into various sized grids. https://micro6500blog.wordpress.com/2015/11/18/evidence-against-warming-from-carbon-dioxide/

Science or Fiction, thanks for posting the recent USCRN plot. It would be interesting to see a comparison plot for the same time period using the best sited USHCN sites that Anthony and company examined. Anthony, this would make a great post in the future … hint hint.

That explains why poorly sited stations have stronger warming trends than well sited stations from 1977 – 1998. It explains why poorly sited stations have stronger cooling trend from 1999 – 2008. And, finally, it explains the lack of divergence between COOP and the CRN from 2005 – 2014. That is what is called working forward, backward — and sideways.
And CRN from 2001 to 2015 doesn’t diverge from good or bad stations.
Going forward when it warms it will be interesting.

Problem is, those results would — in isolation — be moot. There should be little divergence in trend between well-sited USHCN, poorly sited USHCN, and CRN, because during the interval in which they overlap, there is essentially no trend to exaggerate.

Evan, What is COOP? Also I tried to post this earlier from home (on my 3rd computer) without success
I think. Here it is again. I think it is related to what you are saying here.
Anthony, Evan, or anyone else who may know: What is meant by this? “Trend differences are not found during the 1999-2008 sub-period of relatively stable temperatures, suggesting that the observed differences are caused by a physical mechanism that is directly or indirectly caused by changing temperatures”.
Does this mean no trend difference between the Class 1/2 versus Class 3/4/5 during this time or none between the NCDC adjustments and the Class 1/2 during the last 7 years?

1.) To be clear, the COOP network is the entire ~6000-station NOAA stationset, of which the 1218-station USHCN is a subset of the “best” stations: those with the longest history and most complete data/metadata.
2.) I think Anthony may have used a pre-edited version of the abstract. The one he posted earlier is the corrected version.
1.) There is no divergence between the COOP network and CRN from 2005 (when CRN went online) to 2014. That is because poor Microsite exaggerates trend, and there is no significant trend during that interval to exaggerate.
2.) There is a trend from 1999 and 2008. A cooling trend. And the poorly sited stations cool more rapidly than the well sited stations during that interval.
Therein we see that heat sink exaggerates trends — in either direction — and when there is no trend to exaggerate, there will be no divergence between well and poorly sited stations.

Going forward when it warms it will be interesting.
In that case I would expect a divergence, with the poorly sited stations showing the highest trends.
If the sun a.) does a bunk, and b.) the data gives half a hoot about it, then, in combo with the current negative PDO (etc.) progression, we might see a bit of cooling — and the poorly sited stations would be expected to exaggerate that trend.
If the negative PDO pushes down with AGW pushing up, and the trend remaining flat, expect no material divergence between well and poorly sited stations (though the poorly sited stations would probably warm more in summer and cool more in winter).
Eventually, the PDO will flip back to positive and we will be into medium-term warming no matter how you slice it. Of course, the microsite problem may be either solved or reasonably adjusted for (by us if no one else). It’s even possible almost-as-good alternative energy will be available (but don’t bet on wind/solar as currently approached).

“Of course, the microsite problem may be either solved or reasonably adjusted for (by us if no one else)”
To me, it is far from obvious that poor measurements can be compensated for by automatic routines. It is not even obvious to me that poor measurements can be adjusted by manual routines.

Not bad.
But you can go further than that. We find that UHI, while it may have a significant effect on offset, has little discernible effect on trend, not for the unperturbed set, anyway. And the compliant urban set trends well under the non-compliant rural set.
It’s all down to Microsite, via the heat sink effect. Microsite is the New UHI. You heard it here, first.

Evan, please give us a clean “layman’s” definition of “microsite” [The “very local” 10-50-100-500 meters around a site that affects any or all of the following factors:
Local sensible heat sources (air conditioners, heaters, stoves, ovens, buildings, furnaces, kilns, or generators. These may, or may not, be running at any given time. 5, 10, to 20 meter effect.)
Local radiated and re-radiated energy (from buildings, walls, asphalt or concrete parking lots, sidewalks, and parking garages. 10-50 meter effect.)
Local wind breaks, or wind accelerators. (Wind is blocked by a building or wall, or wind is accelerated across the sensor by being forced between a row of buildings at certain wind directions, or air is moved from a hot spot (parking lot or building wall) towards (or away from) the sensor. 50-500 meter effect.)
Local shading (or removal!) of natural shading and trees over time. 10-50 meter effect.
Local UHI. An otherwise “ideal” sensor, recording good data for the nearest 500 meters unchanged, is in the middle of a small city or county whose 10,000-50,000 meter radius now has 10x to 50x the urban heat island seen in the 1920’s or 1930’s.
.mod] I’ll add that UHI is inherently non-local. It is Mesosite. Microsite is only concerned with the immediate proximity of the station, be it urban or non-urban. At most 100 m distant, and usually what matters is the 30 m and 10 m radii. Well sited urban station trends clock in lower, on average, than poorly sited non-urban station trends. Microsite IS the New UHI. ~ Evan
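To make the distance ranges above concrete, the Leroy (2010) scheme can be caricatured as a distance-based lookup. This is a simplified sketch only: the real classification also weighs the fraction of artificial surface, shading, and slope, and the threshold values here are illustrative assumptions, not the official WMO table.

```python
# Illustrative sketch only: assign a Leroy (2010)-style siting class from the
# distance (in meters) to the nearest artificial heat source or heat sink.
# Thresholds are simplified assumptions for illustration, not the WMO table.
def siting_class(dist_to_heat_source_m: float) -> int:
    if dist_to_heat_source_m >= 100:
        return 1  # compliant: no artificial heat source within 100 m
    elif dist_to_heat_source_m >= 30:
        return 2  # compliant: artificial surfaces kept beyond ~30 m
    elif dist_to_heat_source_m >= 10:
        return 3  # non-compliant: heat sink within 10-30 m
    elif dist_to_heat_source_m >= 5:
        return 4  # non-compliant: heat sink within 5-10 m
    else:
        return 5  # sensor effectively on or beside the heat sink

print(siting_class(150))  # 1
print(siting_class(12))   # 3
```

The compliant/non-compliant split discussed throughout the thread falls between Class 2 and Class 3.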

Scott: Until Evan has time to answer, just a couple of excerpts from the above press release that might be helpful:

… differences between well sited and poorly sited weather stations, based on a WMO approved metric Leroy (2010)1 {Leroy, M. (2010): Siting Classification for Surface Observing Stations on Land, Climate, and Upper-air Observations JMA/WMO Workshop on Quality Management in Surface, Tokyo, Japan, 27-30 July 2010} for classification and assessment of the quality of the measurements based on proximity to artificial heat sources and heat sinks which affect temperature measurement.
… stations are classified based on proximity to artificial surfaces, buildings, and other such objects with **unnatural thermal mass** using guidelines established by Leroy (2010)1 {Ibid.}

**{my guess is: “unnatural thermal mass” = heat (or energy, heh) -retaining/emitting to a degree not normally found in nature}
That is to say, the above criteria would be the characteristics of a given “microsite.”
Just a little help (I hope) from your non-tech, friendly, neighborhood librarian,
Janice

The problem isn’t urban itself; it is the change in the microsite conditions over time. If the heat sinks in the area change during the period of study, a bias will appear in the data. In the study of “climate change” it is the changes that make or break the data. If you build a parking lot, change the surface of the nearby playground, build a large building, install a chiller plant, upgrade a chiller plant, change brown space to green space: all of these will impact measurement trends, and that is what pollutes the trend data. Sure, cities will be warmer at night than the surrounding countryside.
We can’t compare urban readings from the 30’s to now because of too much change in the microsite conditions though.

If a station’s microsite rating changes during the 30-year study period, we drop that station. Poor microsite exaggerates trends even when a station’s siting is constant and unchanging throughout.
I cannot emphasize enough how important that concept is. Our entire hypothesis would be falsified without it.

It’s not perfect, but it’s as good as it can reasonably be. We define our terms and what we think is going on in the paper, itself.
We will also be archiving the data and formulas in Excel, which will put it in a format that anyone can dicker with or change the parameters: add or drop stations, change ratings, add categories (i.e., subsets), add whatever other version of MMTS adjustment you like, that sort of thing. (And I have some iconoclastic notions of how MMTS should really be addressed.)
But the thing is, we welcome review. Some station ratings are obvious at a glance, but there are a few close calls. So it will all be open for review, complete with tools to test and vary. This paper is not intended as an inalterable doctrine. It is just part of a process of knowledge, in a format that is easy to alter and expand.
If anyone has any questions, I’ll be glad to answer.

How many stations were “close calls”? Would it be possible to take a station that was borderline between say 1 and 2, and call it a 1.5? I suppose if there are only a dozen or so close call stations, any change to the results would be too small to be meaningful.

The only case where it makes a dime’s worth of difference is the Class 2/3 demarcation. That is where the biggest difference occurs. That is the split between compliance and non-compliance.
There is a small handful of stations that are close calls. Some time earlier, for experimental purposes, I dropped the five coolest Class 1/2 stations. The trends were, of course, a bit higher, but the result remained statistically significant (95%+ level).
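As a rough illustration of the kind of significance check described above, one can fit an ordinary least-squares trend to a station set’s annual anomalies and attach an approximate 95% interval. The data below are synthetic and purely illustrative; this is not the paper’s actual method or data.

```python
# Sketch: OLS trend on annual anomalies with an approximate 95% interval.
# Synthetic data for illustration only; not the paper's data or method.
import numpy as np

def trend_with_ci(years, anomalies):
    """Return (slope per decade, 95% CI half-width per decade) from OLS."""
    years = np.asarray(years, dtype=float)
    y = np.asarray(anomalies, dtype=float)
    n = len(years)
    x = years - years.mean()                     # center the predictor
    slope = (x @ (y - y.mean())) / (x @ x)       # OLS slope, deg C per year
    resid = y - y.mean() - slope * x             # residuals about the fit
    se = np.sqrt((resid @ resid) / (n - 2) / (x @ x))
    return slope * 10, 1.96 * se * 10            # per-decade, normal approx.

years = np.arange(1979, 2009)                    # a 30-year window
rng = np.random.default_rng(0)
anoms = 0.02 * (years - 1979) + rng.normal(0, 0.1, len(years))
slope, ci = trend_with_ci(years, anoms)
print(f"{slope:.3f} +/- {ci:.3f} C/decade")
```

If the interval excludes the trend of a comparison subset (say, after dropping a few close-call stations), the difference is significant at roughly the 95% level under the usual OLS assumptions.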

this is the main takeaway for me after the results bcbill. outstanding effort by anthony and the team. the dogged determination to get it right, the huge amount of time and effort that took, and the continued commitment to make sure all data is made available to ensure the in-depth scrutiny a paper so important requires.
thank you all involved for restoring some confidence in science, for me at least.

Congratulation to AW and co-authors. The station ground truth data collected by volunteers is pure gold.
I conducted a small experiment using just the surface stations CRN1 from the database, guest posted here earlier this year. What was compared was GISS raw to GISS homogenized for those pristine stations. (Did not expand to CRN2 to get valid statistics, as my Koch check never arrived.) What it showed (keep in mind the limited sample size did not provide conclusive statistics) was that GISS homogenization did a fairly decent job of removing large urban UHI, but for suburban and rural stations it imported heat ‘contamination’ from poorly microsited ‘adjacent’ stations. In other words, the homogenized GISS end result is irreparably unfit for purpose. For sure for CONUS. Essay When Data Isn’t suggests the general result is also true globally, and not just for GISS. For the same reasons.

Essay When Data Isn’t suggests the general result is also true globally, and not just for GISS.
We would like nothing more than to take this show on the road to the GHCN. But that would require either an intense and precise foreign volunteer effort — or real funding.
Online satellite resources such as google earth are a lot better than they used to be but are yet inadequate to the entire global task. In some areas (not by any means all) of the US, you can pick an MMTS off a fly’s butt and trace its funky little shadow. Outer Mongolia, not so much. And, “Beware the bight of Benin. Those who go in don’t come out again.”
We’d have to leg it or have other legs leg it to those stations and observe them with Leroy (2010) parameters in mind while they’re doing it. And my Uzbecki is getting a little rusty.

Congratulations Anthony and all. I was pleased to buy the first publication on surfacestations…to help with funding. Almost every site was visited and photographed by volunteers – and what a rogues’ gallery of station pictures!! When they came out, NOAA ran out of all their offices and took down the worst stations in the album. The optics of this for the world’s number one climate agency must have scared the daylights out of them. It woke them up for sure. They probably spent a good part of that year’s budget digging up the worst stations, putting out papers and op-eds, polishing the door knobs and just about everything they could think of.
Having visited essentially all the stations but a few, in my mind, makes you guys THE experts on the US temperature networks. Collectively, I would say more work on this one metric that has caused so much angst and trillions in spending on energy toys and studies was done by Anthony et al than the smoke shoveling of the world’s temperature agencies and university departments. Big computers adjusting the world with algorithms have been shown how the job is done!
I say the rest of the world can also be done. A call from the mighty WUWT would reach all 200 countries in an hour. Crowd sourcing, photos and videos of each station would be done, and selection of the best (you might have to go with classes 2 and 3 for the rest of the world, though – perhaps adjustable using a factor you have determined for these cases in the US). This would finally create the WUWT Global T Network. I suggest your 30-year trend is even at least slightly warmer than reality, but probably the best we can do. The work would be even better with funding to twin random stations worldwide with the newest temperature instruments available, running them side by side to see what we get. I’m sure Canada and Australia could be done fairly quickly; most of Europe is what we would call a short drive and should be done quickly. Add Mexico, and soon the argument that the US is only 3% of the land mass would be shut off.
The next thing is to bring the work up to 2015 and compare it with the satellite record and CRN. I believe we are going to get wonderful corroboration with the satellite records.
Oh and evanjones, I’ve been to the Bight of Benin a couple of times, once in the 1960s for three years with a civil war on that killed 3 million people and I came back again! Of course, I’m from Manitoba.

They’ll pull lower tropo satellite response, “but we live in the cities”. On a rational note it looks more and more like 1998 El Nino brought in a step change as nothing was really going on up to that point and not much since. Go figure.

On a rational note it looks more and more like 1998 El Nino brought in a step change as nothing was really going on up to that point and not much since. Go figure.

All it would take is a change in the location of a large pool of warm ocean water that persists. The heated water that evaporates is carried downwind, where the water vapor cools and condenses; liberating all of that energy (heat) warms everything else up, including surface stations.
How many billions of gallons of warm water (from vapor) is this El Nino transporting onto the continent to cool? How much energy does all that take?

There are no compliant networks with which to make the comparison. CRN is the only one and that network has only been online during trendless times.
Our findings are ~10% under the RSS/UAH6.0 trends. And LT trends are supposed to be 10% to 40% higher than surface trends, depending on latitude. So our current results split the uprights — on the safe side. Not only is Klotzbach et al. vindicated (or at least supported), but so is Dr. Christy.

Hate to be the spoiler here, but….
All this work means nothing if NOAA doesn’t recant. As I have said many times before, here and elsewhere, this whole AGW thing is not about science, it is about money, and that makes it a fraud issue. You can pump out all the data you like, and personally, I believe it. But it was clear from the beginning there was no AGW. This study will just go in the trash, like all the others. Science is now corrupt, and the crooks are running the show. For every meaningful chart you show, they will come back with a mountain of hogwash.
If you really want to fight this corrupt influence on science, you have to go to the heart of it: scientists committing fraud by lying to attract funds for their personal gain. That is a crime. It is white-collar crime. We put people in prison if they steal $10,000 from a bank, but when a scientist commits fraud for half a million, what do we do? Send him/her on an all-expenses-paid trip to Paris.
Turning a blind eye to this crime will only make things worse as the years go on. With shield laws, like tenure, that are protecting criminals, textbooks written by snake-oil salesmen, and institutions/universities/conferences/governments working together to conspire and defraud the taxpayer, you really think that a lowly over glorified group of bloggers is going to change the system? If you do, you’re damn fools!
When are you people going to face facts? It’s not about the science, it’s about money; always has been, always will be. Until we are prepared to treat white-collar criminals like we do blue-collar criminals, nothing is going to change. Do all the studies you like. YOU ARE ALL WASTING YOUR TIME, and putting society, and the economy in real jeopardy. All because you/we are all too proud, or arrogant, or dare I say it, cowardly, to really face this problem head on. That is, the problem of white-collar crime!

Right behind you Dorian!!!
I can only assume that you’ve created an organization, located and rented headquarters, done all the required paperwork for tax purposes, created a foolproof campaign, hired the appropriate lawyers to put our case together and are ready to go with coffee pots and phones all plugged in and ready to roll. When’s the next meeting?

Dorian, calm. Latest word is that NOAA gave some of the subpoenaed emails to Rep. Smith’s committee. Smith said he was working from NOAA whistleblower information. So the Karl ‘adjustments’ will likely become ‘Exhibit A’. It does not happen overnight when you are fighting a 25-year world war with gov funding, MSM, and leftist sentiments on the other side. But it can and does happen, one skirmish, one battle, at a time. Soldier on.

You can’t win a war without ammunition, and people like Anthony are manufacturing bullets (cannonballs are a closer approximation…or missiles…) 24/7 for the cause. But really, what does Dorian expect from a “lowly over glorified group of bloggers” anyway? Matching uniforms polished to gleaming… swords reflecting torchlight across the meadow in the pre-dawn light….a magnificent army so vast and so furious that a mere glance would make all enemies collapse like Mike Mann’s proxy data?
Sounds like Dorian needs to examine some FACTS himself. For example, since he doesn’t really spoil much of anything here, he can’t call himself “the spoiler here”. 🙂

You know, I commented on the thread where NOAA was pooh poohing overhyped Godzilla El Nino that there is nothing like a congressional investigation to moderate such an agency’s excessive enthusiasm for end of world climate. I didn’t know the emails are already flowing in!!! This is exactly the kind of activity needed to corral activist, ideologue science. Bad stuff gets done in the dark.

“you really think that a lowly over glorified group of bloggers is going to change the system?”
What a sad, defeatist, ineffectual little coward you are.
Thankfully, we don’t have to rely on pathetic little gob$hites like you who are beaten before you start to get stuff done.

REGARDING NOAA
You don’t understand. It’s not what NOAA thinks. It’s whether these results stand up under review that ultimately counts. She do or she don’t. The real deal. It just takes a little time, that’s all. We don’t take their word for it, so it’s only common courtesy not to expect them to take our word for it.
On the one hand, Microsite/Tmean Trend having been (re)introduced as an issue, and the current results challenging the official record, this subject will no doubt undergo a degree of further examination.
On the other hand, we are making extraordinary claims. And extraordinary claims require extraordinary proof.
I would like to add, most emphatically, and in no uncertain terms: This is no fraud. This is not a scam. NOAA has not lied. This is an error.
It is an error we ourselves partially made in Fall et al., by taking the easy way out by failing to convert to Leroy (2010). It appeared to be (and was) an intensely time-consuming task, and we thought (incorrectly) it wouldn’t have made any material difference anyway.
[Besides, who knows? I never quite did get around to running our unperturbed subset using Leroy 1999 ratings. But someday, maybe someday soon, I will. Maybe the findings will change as a direct result of addressing the criticisms of the 2012 pre-release. And if those results turn out to be compatible with what we have found using Leroy 2010, I am going to have one heck of a scientific horse-laugh. Our critics all seem to be enamoured of the quaint notion that the pre-release was for publicity purposes (loved it!) rather than for purposes of eliciting hostile independent review (the real and carefully explained reason).]
And there were very valid criticisms of our 2012 pre-release. Criticisms we had to address. So if we made all those errors, how can we call fraud on NOAA if they make the exact same sorts of errors? Confirmation bias? There is no man without one on any side of this, and that’s why we have a scientific method — to protect ourselves from it.
Human nature is. Seeing as how scientists, or even ones playing one on TV (like me), are at least part human. I am not inclined to judge. I find it just gets in the way. So let us put our past differences aside, get our heads together and make a little science, already.
Speaking personally, my favorite way of understanding the gestalt, ebb, flow of homogenization is to hash it out with the world’s leading expert on it, not deconstructing abstracts. I’m a game designer/developer, not (just) a rules lawyer.
So how do I do that if I am not on speaking terms with him? I want to get my hands and head into this stuff. I don’t want to be always trading potshots. Let’s do science.

A beautiful call for civility: Treat opponents with respect and give them room to correct their errors. Make room for honest, mutually respectful disagreement. Give sympathetic people in NOAA cover to engage in dialog with skeptics.
This shows how science is done. Let’s hope it catches on in a field that needs it.

evanmjones,
“I would like to add, most emphatically, and in no uncertain terms:
This is no fraud. This is not a scam. NOAA has not lied. This is an error”
I do not see how you could possibly know such a thing . . your sanity comes into question to my mind, by speaking so. REPLY – I have been up to my eyeballs in the data, both raw and adjusted. Perhaps three thousand hours in. Maybe more. Having deconstructed the mechanisms, it is my honest opinion that this is error compounded by confirmation bias and not fraud, scam, or any other synonym thereof. We made much the same sort of mistakes ourselves, at the outset. Maybe we are making other mistakes, quien sabe? If so, they are honest errors. And I make no presumption that NOAA is not subject to the same degree of honest errors that we are.
As for my sanity, it has been called into question so many times, I have developed an immunity … but the above is my call from the trenches. ~ Evan

JK
I don’t “know” Evan, but reading the pattern of his replies, he realizes he is about to engage in a hostile environment. My guess is he wants to give the benefit of the doubt so that he can possibly divide and conquer. By taking his approach he allows those at the authority level to self-separate. If he comes in firing 6 guns, he makes it much harder for that to occur.
Perhaps that’s his frame of reference.
:::: sorry for jumping in evan, it’s just such a juicy topic ::::: REPLY – Sure is. And that’s what I do. It will be incumbent on me to defend this paper in hostile territory. And it is in hostile territory that I acquired the invaluable feedback that allowed for the corrections since 2012. That is valuable to me. I need the other side. So do we all, though some of us may not yet realize it. Besides, what I do is push. And in order to push, I need something to push against. We are not out to convince our pals. We are out to convince our opponents in this. ~ Evan

knutesea,
“… If he comes in firing 6 guns, he makes it much harder for that to occur.”
Do you feel sticking an ‘It seems likely to me’ in there somewhere would make it any “harder”? REPLY – I fight with knives in both hands. I was trained in deconstruction and the dialectic from the day I was born (and those who trained me have not always been entertained to find their own weapons turned upon them). But I prefer a clean fight. An honorable fight. No one will know the knife that bears the poison, not until I choose to use it. But, be advised, it is a part of me, part of my arsenal. I can no more lay it aside than cut off my hands. And any who engage me ignore that at their peril. ~ Evan

Sir, he emphatically stated what he cannot (by any stretch of my imagination) know to be true . . this is not a good way to maintain credibility with serious people, it seems to me. Why one does it, is virtually irrelevant to me, it’s crazy talk . .

amen to that evan. too many people see this as an issue of right and left. for me it has always been about right or wrong. you, anthony and the team are doing things the correct way. you guys do the science, leave the potshotting to idiots like me.

evan,
I was moving down the comment thread looking for a good place to add my praise and thanks for what you guys have done, when I came upon that emphatic declaration you made . . My concern in this particular matter is your credibility, honest. REPLY – I know, really I do. And I appreciate it. Understand that when I do what I do, they can throw all the low blows they care to — but cannot lay a glove on me. I have disarmed them, evaded them, forced them to fight on my terms. Furthermore, they become aware that I have other weapons at my disposal that I do not — but can — use. And deterrence is a powerful tool. ~ Evan

Dorian, you came late to the party it seems. Skeptics are the number one target of the zealots. Your white-collar crime stuff has already spawned a number of whitewashes – the latest ones, however, based on skeptics’ information and data, are going to be something different. Guess how we know there has been white-collar crime in the first place? You are getting an inkling, I can feel it. It was through the relentless hard work of skeptics, who have never let something go by that doesn’t look right. Skeptics published Climategate; skeptics turned the light on the RICO 20, the lead guy having collected 63 million dollars from one agency with no significant work to show for it, and having hired his wife and daughter to run the empire. The NSF didn’t go after them; skeptics did, and went after the NSF, too. Skeptics have caused a number of scientific papers to be retracted. Skeptics have emboldened marginalized scientists of dissenting opinions to publish more and more good alternative climate studies; skeptics have given the most powerful testimony at Senate and Congressional committee hearings and in the UK parliament. That’s how it’s done. You don’t have much to contribute, it seems, except to put down skeptics’ efforts.

Early: I wish they were all dead.
Lee: Why, I do not wish they were all dead. I merely wish that they would return to their homes and leave us in peace.
Early (later, to Stuart): I would not say so in front of General Lee, but I not only wish they were dead, but in hell.
These attitudes manifested themselves in their respective fighting styles. Who was the better general, Lee or Early? The cool hand or the hot head? History has made its judgment.

The first step to proving fraud is to demonstrate that the fraudulent statement isn’t true. This paper does good work towards satisfying that condition. It’s a fundamental building block that your fraud approach must have in order to succeed.
Embrace the healing power of and.

But you are wrong, Dorian. WUWT and other skeptics are having a profound effect. The word is getting out, and the CAGW activists are being contained. Climate change isn’t a big concern for most people thanks to the skeptical voice. This paper will add to the impression that many have that skeptics are serious, worth a listen, and have a case.

Dorian, We also may have Mother Nature on our side. After all, if it continues to warm less than the models “project” (despite numerous adjustments), fewer will be able to argue the C in CAGW. And there are still many real scientists in the field. Even if some were pulled into the more alarmist or activist camp, they will look at new evidence and modify their opinions. I believe this is a great thing for climate science, hopefully it will get published in a good journal, but even without that, because it was done carefully, it constitutes another step in the building blocks that make up the progress of science.

Dorian,
“YOU ARE ALL WASTING YOUR TIME, and putting society, and the economy in real jeopardy.”
Please explain how society and the economy could possibly be put in real jeopardy by what Watts et al (or WUWT et all) has done here? I’m having difficulty imagining how you arrived at that idea . .

JK
NOAA doesn’t actually have to recant. The 2010 OIG report says that NOAA will ground truth the stations.
They didn’t. They moaned and groaned about funding. Anthony did it cheapo style.
If NOAA is any good at spin, they embrace Anthony’s work (show the happy face) and then drag him through a looooooong period of validating his methods. He needs to be cautious of this tactic and establish ground rules upfront about what they are actually concerned about. Set a timeline for review, major milestones, blah blah.

Please explain how society and the economy could possibly be put in real jeopardy by what Watts et al (or WUWT et all) has done here?
I think perhaps he fails to see the iron fist within the velvet glove.

Dorian,
It’s just another human mess made by folk with reasonable motivations which has become an unstoppable rolling juggernaut.
Back in the early 80s, some scientists’ concern over what carbon dioxide might do to the climate was taken up by excellent promoters with specific ideologies, including:
(a) World Government is necessary to stop humans destroying the natural world which would make it uninhabitable.
(b) De-industrialise to prevent humans destroying the natural world which would make it uninhabitable.
Unfortunately, the attempt to reduce carbon emissions is actually increasing harm to the natural environment. And the juggernaut rolls on, dragging innocent people with it, e.g. workers concerned to feed their family. Meanwhile, government subsidies are a lucrative income for some industrialists.

Congratulations Anthony
This is of huge importance if the criteria for classifying the stations are recognized as unbiased. The difference between 0.204 C/decade and 0.324 C/decade is 59%; i.e. a rather big error. I’ll guess that the error is just as big, if not bigger, in the rest of the world.
However, the importance of your finding depends on whether the objectivity for the classifying criteria can be questioned or not.
Be prepared to be attacked there, Anthony. The best defense is to give full access to all the data once it is published. Furthermore, that is also the best scientific method.
/Jan
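Jan’s 59% figure checks out as a relative difference; a minimal sanity check (the two trend values are simply those quoted in the comment):

```python
# Quick check of the arithmetic above: how much higher is the adjusted
# 0.324 C/decade trend than the compliant-station 0.204 C/decade trend?
compliant = 0.204  # C/decade, Class 1/2 unperturbed stations (as quoted)
adjusted = 0.324   # C/decade, official adjusted trend (as quoted)
pct_higher = (adjusted - compliant) / compliant * 100
print(f"{pct_higher:.0f}% higher")  # -> 59% higher
```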

PS: I have seen studies that show that even small populations can have a UHI impact. If the area within a few miles of the sensor has gone from a population of 5,000 to 10,000, that can have an impact on the measured temperatures.

“The “gallery” server from that 2007 surfacestations project that shows individual weather stations and siting notes is currently offline, mainly due to it being attacked regularly and that affects my office network. I’m looking to move it to cloud hosting to solve that problem. I may ask for some help from readers with that.”
Cloud hosting by BlackBerry corporation is outside the reach of the US government. BlackBerry has never been hacked. BlackBerry is well trusted.
If you need a contact there, I can help.

Thanks, Mr. Watts.
One could ask why our taxpayer-funded government/academic scientists don’t publish this kind of study. But, there’s no reason to ask: it’s because this kind of study gives them answers they don’t like and don’t want. (Note, I didn’t ask why they don’t conduct such studies: for all I know, they may have done so. They just don’t tell us about the results.)
That is, “hypothesis myopia” and “asymmetric attention”, at the very least, are at work. http://www.nature.com/news/how-scientists-fool-themselves-and-how-they-can-stop-1.18517

One could ask why our taxpayer-funded government/academic scientists don’t publish this kind of study.
They didn’t make it. We did, that’s all. Nothing wrong with that. In fact, that’s the way I like it. Gives a mere citizen scientist some elbow room.

What I’d really like to know now is how much — if any — adjustment is done by NOAA’s algorithms to these Class 1 & 2 stations, and if any, why?
It’s real ergly, son. They are bumped up from 0.204 C/decade to 0.336.
That’s what happens when homogenization bombs.

If this is the extent of the problem within the USA, can you imagine how much over-estimation there has been for global temperature rises? Land-based weather stations in many other countries will be of far lower standards in both quality and reliability, and affected even more by heat islands due to the relatively recent expansion of populated areas around the weather stations.
Has anyone tried to compare this corrected trend with satellite-based trends over the USA for the same period of time? That would be interesting to see, as it could explain the difference between the CAGW supporters’ quoted global temperature rises using land-based weather stations and the parallel satellite data, which show very significant flattening of the temperature rise, if not no rise at all!

No, just regional averages. And some regions are better covered than others (but our basic gridding addresses this).
Note well that our ungridded data runs cooler than the gridded. We have pushed hard against our own hypothesis. We pre-released in order to elicit hostile independent review — which we have addressed. More papers should do that, I think. Measure twice. Cut once.
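The “basic gridding” mentioned here is not spelled out in the comment; as a generic sketch of the idea, one can average stations within each latitude/longitude cell first and then average the cells, so that densely sampled regions do not dominate the national mean. The cell size and station values below are illustrative assumptions, not the paper’s configuration.

```python
# Sketch of grid-cell averaging: average stations within each lat/lon cell,
# then average the cell means. Cell size and data are illustrative only.
import math
from collections import defaultdict

def gridded_mean(stations, cell_deg=5.0):
    """stations: list of (lat, lon, trend). Returns cell-then-mean average."""
    cells = defaultdict(list)
    for lat, lon, trend in stations:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells[key].append(trend)
    cell_means = [sum(v) / len(v) for v in cells.values()]
    return sum(cell_means) / len(cell_means)

# Three stations crowded into one cell, one lone station elsewhere:
stations = [(40.1, -100.2, 0.30), (40.3, -100.4, 0.30),
            (40.2, -100.1, 0.30), (45.5, -80.0, 0.10)]
print(f"{gridded_mean(stations):.2f}")  # 0.20: each cell counts once
```

The naive station mean of this sample would be 0.25; gridding pulls it to 0.20 because the crowded cell no longer counts three times, which is why an ungridded average can run warmer or cooler than the gridded one.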

Well done Anthony. I have also been checking on my home town of Broome in Australia, where BoM has their instruments sited at the local small airport, finding a few maximum temperature spikes at passenger jet arrival and departure times. In recent times 4 new large helicopter hangars were built close to BoM’s premises and instruments. They house 13 or 14 large offshore passenger helicopters. Yesterday 5 took off within minutes, with a temperature spike at the same time. A nearby station at Broome Port shows no spikes at all. http://pindanpost.com/2015/12/14/airport-heat-islands-artificial-maximum-temperatures/

congratulations Anthony…now tell the Met Office:
17 Dec: BBC: Matt McGrath: Met office says 2016 ‘very likely’ to be warmest on record
When compared to the pre-industrial levels, the forecast predicts that next year’s temperature will be 1.1C above the 1850-1899 average. This is edging closer to the 1.5C level that governments agreed last week they would do their best to keep under in the long term.
Last year, the forecast for 2015 predicted a central estimate of 0.64 above the average. Observational data from January to October this year shows the global mean temperature so far this year is running at 0.72 above 1961-1990…
“The forecast for next year is on the back of some other strong years,” said the Met Office’s Prof Adam Scaife.
“In 2014 we had 0.6 which was nominally a record, 2015 so far we’ve had 0.7 which is also nominally a record, and next year we are talking about 0.8 – so you can see that very rapid rise over three years and by the end of 2016 we may be looking at three record years in a row.”…
The impact of the strong El Nino that started this year continues through the first half of next year…
The forecasters at the Met Office say it is responsible for up to 0.2C of next year’s value. In combination with continuing climate change, the forecasters believe it will lead to new records.
“There is an uncertainty range, the bottom end of the range for 2016 is very close to the current value for 2015, so it’s not impossible that it will come out the same as 2015 but it is very likely to be higher,” said Prof Scaife.
The Met Office says that the rise in temperature predicted for next year may not continue indefinitely…http://www.bbc.com/news/science-environment-35121340
MSM are lapping this up – before 2015 has ended.

Every evening the BBC weather forecasters tell us that rural temperatures will be a degree or so lower than the readings from the weather stations on their charts, which are located largely in the far more urbanised, though smaller, parts of the UK. Heat island effects are driven by a variety of man-made inputs: transport exhausts, industrial processes including power generation, domestic heating and/or air conditioning, heat from all electrical appliances and equipment, etc. A great deal of these are independent of weather or seasonal effects, and these heat sources have increased significantly over the last 20-30 years or so, particularly globally. In such circumstances, how can the Met Office dare suggest that these later years are hotter, or that such data can be used for assessing CAGW or substantiating the massive sums of money being thrown at it?
Land-based instrumentation in such an operational environment cannot surely be reliable, nor can it be adequately weighted, given the many differing and different variables affecting their results, and surely not credible for using as “adjusted” temperatures when considering and assessing decadal rises as small as 0.15 degrees C, or even less!

It’s not the offset. It’s the trend. The trend’s the thing. Urban/rural show no significant differences. Offset is all very well. But in terms of trend, Microsite dominates UHI. Well sited urban stations may be hotter, but they average lower trends than poorly sited non-urban stations.

pat: “17 Dec: BBC: Matt McGrath: Met office says 2016 ‘very likely’ to be warmest on record”
So the fix is already in, is it?
I bet 2017 is ‘very likely’ to be warmest on record too, and 2018, 2019 and 2020 after that.
Even if the ice age suddenly strikes and the Met Office is under a kilometre of ice.

“When the journal article publishes, we’ll make all of the data, code, and methods available so that the study is entirely replicable.” That is a gold standard of science. Very well done.
assume “pain” is to be “paint” re “the old wooden box Cotton Region Shelter” and
ignoring “data that requires loads of statistical spackle” is priceless

Very good – I was about to comment on that myself, but search for “paint” first.

We use the MMTS adjustment noted in Menne et al. 2009 and 2010 for the MMTS exposure housing versus the old wooden box Cotton Region Shelter (CRS) which has a warm bias mainly due to pain and maintenance issues.

We do this by applying the Menne (2009) offset jump to MMTS stations at the point of conversion (+0.10C to Tmax, -0.025C to Tmin, and the average of the two to Tmean). We do not use pairwise thereafter: we like to let thermometers do their own thing, inasmuch as is consistent with accuracy.
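The step adjustment described above can be sketched as follows. This is an illustrative reading of the comment, not the authors’ actual code: the station record and conversion index are hypothetical, and the offsets are the quoted Menne (2009) values.

```python
# Sketch of a one-time Menne (2009)-style step offset at the CRS-to-MMTS
# conversion point. Toy data; offsets as quoted in the comment above.

TMAX_OFFSET = 0.10     # degrees C applied to Tmax at conversion
TMIN_OFFSET = -0.025   # degrees C applied to Tmin at conversion
TMEAN_OFFSET = (TMAX_OFFSET + TMIN_OFFSET) / 2.0  # average of the two

def adjust_for_mmts(series, conversion_index, offset):
    """Apply a step offset to all readings from the conversion point onward.

    series: list of monthly values (degrees C) for one station
    conversion_index: index of the first MMTS reading
    """
    return [v + offset if i >= conversion_index else v
            for i, v in enumerate(series)]

# Toy Tmean record, converted to MMTS at index 3.
tmean = [14.0, 14.1, 13.9, 14.2, 14.0]
adjusted = adjust_for_mmts(tmean, 3, TMEAN_OFFSET)
```

Whether the offset is carried on the MMTS segment or the CRS segment is a bookkeeping choice; only the size of the step at the conversion point matters for the trend.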

They moved the official station in our city within the last 10 years or so, from north of the airport (GJT) (in the stinking desert where few locals live and why the airport was put there) to their new NWS office building in the middle of the asphalt area. Yes, it’s in the correct white, louvred box with a small area of limestone rock, but where the desert is snow covered at times, the car lots are cleared of snow. It always registers warmer year round than the one I have in a shaded area surrounded by grass, trees, and now snow (though my device is bimetallic, not merc/alcohol).

They moved the official station in our city within the last 10 years or so, from north of the airport (GJT) (in the stinking desert where few locals live and why the airport was put there) to their new NWS office building in the middle of the asphalt area.

Thanks, micro6500. I left out my point that car parks get cleared of snow. The desert has to wait until the sun is warm enough to overcome the heat reflecting off the snow and melt it, except along the tracks of the vehicle that used to drive in to get the readings. Although they tried to mitigate the area right around the new location, zooming out will show a lot of asphalt that gets cleared of snow. Google Map as of 12/2015

Nice work, Anthony. You set a standard others should attempt to emulate.
If I read the numbers correctly, it looks as if we should take any warming trend we see from adjusted temperature record trends and multiply it by ~2/3. e.g. for HadCrut, GISS, BEST, etc.
calculations:
Adjusted trend slope = 0.324
Compliant trend slope = 0.204
0.204/0.324 = 0.63 (so 0.63, just under two thirds, is the ratio).
Does that sound reasonable? That’s basically extrapolating Anthony’s result across the globe of course, which might be fraught with peril. Also extrapolates Anthony’s result back to 1885, the start of most modern temperature records. Also assumes HadCrut, BEST, etc. are doing similar adjustments.
This therefore impacts the confidence interval on the relationship between C02 and temperature. It lowers all such confidence intervals, reducing them to at least 2/3 of what they used to be (note, confidence intervals aren’t linear, but too lazy to do the Z-score math right now. “what happens to the confidence interval when you move the mean by 1/3” is left as an exercise for the reader).
My new canned response to “XYZ variable is correlated with temperature trend” is going to be “try that when the temperature trend is actually 2/3 of the adjusted record”, and cite Anthony’s paper…
Peter
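Peter’s back-of-envelope arithmetic above can be checked in a couple of lines, using the slopes quoted in the comment (C/decade):

```python
# Ratio of the compliant-station trend to the NOAA-adjusted trend,
# using the slopes quoted in the comment above (C/decade).

adjusted_slope = 0.324    # NOAA final adjusted trend slope
compliant_slope = 0.204   # compliant (Class 1\2) station trend slope

ratio = compliant_slope / adjusted_slope
print(round(ratio, 2))  # 0.63, i.e. roughly two thirds
```

As Evan notes downthread, this ratio describes CONUS land stations only and cannot be applied directly to global metrics that include sea surface temperatures.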

Well, 324/204 = 1.588. So it’s only a ~59% exaggeration. But that’s with CRS units, and they run much hotter. Without CRS, we get a “gold standard” MMTS (for most of study period) of 0.163C/decade. And it’s likely lower because part of those records are CRS (with an upward MMTS Tmean bump for conversion).

P.S., you cannot knock a third off the top of the global metrics. We do not include sea surface or SAT. We consider land surface, only, and that is only ~30% of global coverage. Haddy SST may be under attack by others, but land-only is what we do.

We do not include sea surface or SAT. We consider land surface, only, and that is only ~30% of global coverage

Good point. So the 2/3 multiplier won’t work. However…
Have you seen how widely variable the estimates of SST are? Given a recent El Nino article here on WUWT that graphed all the estimates on one graph, it seems there’s an error of +/- 0.8degC on SST estimates! So the error bars are probably far larger than any of the common estimates.
Peter

I have only one skeptical comment about this result, and it’s a technical comment:
How do you account for confirmation bias? Even if you only see a larger temperature trend out of the corner of your eye (subconsciously), a human being will be more likely to select that station as being out of compliance. Anthony is awesome, but he’s still a human.
This is why the medical industry does double-blinded studies, to try to remove confirmation bias. (Confirmation bias still sneaks through, in that the drug companies throw away entire studies if they don’t confirm a good result, but that’s a different layer of the same problem.)
Was there any attempt here to remove confirmation bias? Is there a way to apply Leroy (2010) that removes as much potential for confirmation bias as possible?
I’ll just add for fairness that the same argument applies to the keepers of the adjusted temperature records. That’s why I’m a fan of averaging every temperature record together and adding into their error bars the “human bias” error bar, that being variance between the records.
Peter

There is a simple answer. The individual station surveys were done by hundreds of volunteers. They all took multiple pictures, up close and personal. Then those, plus Google Earth, can be used to MEASURE objectively against the written, explicit CRN criteria. There can be no overall confirmation bias in such a methodology. Surely you were not implying a real critique, rather just hoping to elicit this sort of comment. Now Karl 2015….

Nope, a real critique. I’m unfamiliar with the CRN procedure, so it may sound naive 🙂 .
So who interprets the pictures, the hundreds of volunteers or the authors?
It would also be helpful to show the population distribution of the metrics. The stations far from the threshold limits (e.g. required to be at least 3 meters from a heat source, and actually 20 meters away) will be indisputable. The ones closest to the thresholds are the ones that would possibly be subject to confirmation bias.
Peter

So who interprets the pictures, the hundreds of volunteers or the authors?
Me. With the Rev pushing and Doc J-NG pulling. No one else is qualified, and I don’t consider it a wrap until I have personally given it the hairy eyeball (and sometimes not even then). I make the Proximity Views.

How do you account for confirmation bias? Even if you see out of the corner of your eyeball (subconsciously) a larger temperature trend a human being will be more likely to select that station as being out of compliant.
We compute (except in cases of the prima facie obvious) areas of heat sink within specific radii (using polygon area tools and/or measurement views) and apply those findings to the Leroy (2010) rating system. This is not something you just whip up.
We avoid bias by wearing our “own enemy” hats and making all the ratings (photos, GE maps, Birdseye images, etc.) publicly available. That is all one can do. I will bet that after extensive independent review, not all station ratings will remain exactly the same. And more stations will be surveyed and added to the mix. Then there are the Class As to ponder. Not to mention GHCN.
This is but a frozen moment in a continuing process.

As you shake out the cobwebs you’ll want to consider requesting an audit by the below group.
Taken from the 2010 OIG Audit
“A NOAA panel of representatives from NWS, NESDIS, and OAR identifies, surveys, evaluates, recommends, and selects USHCN-M sites within grid areas evenly distributed across the 48 contiguous states. The panel analyzes survey packets consisting of a site survey checklist, site score sheet, site obstruction drawings, and site photos to determine the ideal location of USHCN-M stations. The USHCN-M Executive Steering Committee overseeing the panel is chaired by the directors of NCDC and the Office of Climate, Water, and Weather Services. Members of the committee come from various NOAA organizations, as well as the Commerce and Transportation Program Office.”
Go big. By challenging the current reviewers to audit your work, you gain level par and really get into the weeds with them. Doing this actually does the work a congressional committee would need to validate your work.
Hope that helps.

We compute (except in cases of the prima facie obvious) areas of heat sink within specific radii (using polygon area tools and/or measurement views) and apply those findings to the Leroy (2010) rating system. This is not something you just whip up.

Sorry, didn’t see this before I asked (again).
So how much of this is judgement, and how much is just ordinary math?
Please see my other suggestion: create a metric that gives you a confidence in the judgement, and plot the distribution of that metric against the pass/fail line. If there’s a pile of stations near the pass/fail line, you have a good potential for confirmation bias. If there are very few stations near the pass/fail line, then we shouldn’t worry about confirmation bias. For example, a naive non-expert like me would see a possible metric in the efficacy of the heat sink as rated against Leroy (2010), where presumably there’s a pass/fail line. Then plot the population distribution against that line and see how far away the stations are. If you have multiple metrics, the line becomes a decision surface, and your final metric is the distance from each point to that surface (or decision volume, etc., out to N dimensions; it is usually easier to use only 2-3 metrics, that way you can visualize it).
I used this technique in manufacturing and in automated ECG interpretation. It’s very useful in telling you how much to trust that very human judgement, by quantifying the judgement and running statistics on it. For example, it turns out that for automated ECG interpretation, the data points the automated algorithm thought were ambiguous (i.e. near the decision surface) were the same ones the doctors (the reference experts) had a hard time judging as well.
Peter
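The decision-surface idea Peter describes can be sketched with a single metric. Everything here is hypothetical (the 3 m threshold, the station distances, the 0.5 m borderline band); it only illustrates the shape of the analysis, not the actual Leroy (2010) procedure.

```python
# Sketch of a "distance from the pass/fail line" metric: how far is each
# station's measured heat-source distance from a siting threshold? Stations
# piling up near zero margin are the ones most exposed to rater judgement.
# Threshold and data are hypothetical, for illustration only.

THRESHOLD_M = 3.0  # e.g. "at least 3 meters from a heat source"

def distance_to_threshold(measured_m):
    """Signed margin: positive = safely compliant, negative = clearly not,
    near zero = judgement call."""
    return measured_m - THRESHOLD_M

stations = {"A": 20.0, "B": 3.2, "C": 2.9, "D": 0.5}
margins = {name: distance_to_threshold(d) for name, d in stations.items()}

# Stations within, say, 0.5 m of the line are flagged as judgement-sensitive.
borderline = [name for name, m in margins.items() if abs(m) < 0.5]
```

Plotting a histogram of these margins shows at a glance whether the ratings cluster near the decision boundary or sit comfortably away from it.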

If there’s a pile of stations near the pass/fail line, you have a good potential for confirmation bias. If there are very few stations near the pass fail line, then we shouldn’t worry about confirmation bias.
There are a few, but not very many. Besides, with the tool I provide, you can change a station’s rating fairly easily and argue for the change.

The most reliable way to remove confirmation bias is to make your data available for others to review.
You and anyone else are free to review the stations and come up with your own independent ratings. If they differ, write to the editors. They have shown that they are open to honest criticism.

Gosh, yes. (Or the equivalent.) It is essential to use 30-year monthly data and average them for the 30-year trends in order to avoid annual seasonal bias. What if data is missing for a station in July? That would spuriously reduce the annual average.
It is also important for stations with truncated coverage, because it applies their monthly anomalies only to the time period covered by the data. And all data is anomalized to remove offset bias (which affects trend) from station dropout. Our non-anomalized Class 1\2 data shows only 0.151C/decade rather than 0.204. So we anomalize.

thanks, that makes sense.
re anomalies, how is it that all three datasets in the graph have an anomaly of 0.0 in 1979? What baseline is being used?
REPLY – That is because we baselined it to 1979. If you don’t baseline it, the trends form an X rather than a <. ~ Evan

I mean, what baseline time period?
REPLY – None, whatsoever. Time series is 1979 – 2008. (We have 1979 – 1998 and 1999 – 2008 subperiods, too.) Everything is anomalized. Relatively anomalized, at that. The monthly data happens in whichever year it happens and is weighted accordingly.
Very green. Very generic. No way for a truncated or broken series to throw its weight in the wrong direction. Goodbye to the old station-dropout blues. Annual distortion? Moot. The best rules are the ones you don’t have to write. And when I anomalize, I do it all the way. Let the numbers and their all-natural, 97%-pure, renewable, sustainable relationships do their little thing; that’s what they’re good for, the beastly little horrors. Add a little sodium benzoate to preserve freshness, dust ’em off with a little DDT, and release them into the wild. I remain a mathematical hippie, at heart. (I am also a game designer.)
But you can easily baseline to whatever you want. Nothing wrong with that (apart from the fascism). Add offset, rinse, repeat. I baselined the graph above to 1979 for purposes of clarity. But I needn’t have, and in the original (greenesque) version the trendlines formed an X rather than a <, as would be expected. ~ Evan

Normally the anomalies would be all be reported against a common 30-year baseline. USHCN is reported against the 1981-2010 baseline.
Using that baseline, the CONUS average temperature anomaly for 1979 is -1.91°F, or -1.06°C.
The 1979 value for your two other datasets will have different 1979 anomalies based on variation from the baselines for those stations, so the blue and gold lines would start above or below the red line.
By moving them up or down so they align in 1979 you end up with the red line on top, but that in fact may or may not have the highest anomaly later in the period shown.
With respect to the trends however of course it makes no difference.

Evan – From your explanation it really isn’t clear exactly how you compute the anomalies. The anomaly for each month at each station is the amount that it is different from the mean for that station and month of the year over a given period of time. Isn’t that what you compute?
What do you mean by “relative anomalization” and “do it all the way”?

Okay, what I mean is I let the numbers run free. Station X’s June anomaly is calculated by subtracting the average of the 30 years of Station X June data. It is not baselined to anything but itself: it is an all-natural, individualized anomaly. Average all twelve 30-year monthlies (weighting for the varying numbers of days in each month), and there’s your 30-year trend for Station X.
For a region, use the average of all stations in that region per month and do the above.
Even for the smaller Class 1\2 set, there is only one region with only 2 stations, and — thankfully — they do not produce an outlier. For MMTS, class 1\2 only. The smaller the subset, the worse the coverage, of course.
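Evan’s per-station anomaly scheme, as described above, can be sketched like this. The data layout is hypothetical, and the day-count weighting he mentions is omitted for brevity:

```python
# Sketch of the per-station anomaly scheme described above: each month is
# anomalized against that station's own all-years mean for that calendar
# month (no external baseline period). Layout: temps[year] is a list of
# 12 monthly means (degrees C) for one station. Toy data only.

def monthly_anomalies(temps):
    """temps: dict year -> list of 12 monthly means for one station.
    Returns dict year -> list of 12 anomalies, each relative to that
    calendar month's mean across all years for this station."""
    years = sorted(temps)
    month_means = [
        sum(temps[y][m] for y in years) / len(years)
        for m in range(12)
    ]
    return {y: [temps[y][m] - month_means[m] for m in range(12)]
            for y in years}

# Toy two-year record: January runs 0.5 C apart, so its anomalies are +/-0.25.
toy = {1979: [1.0] + [10.0] * 11,
       1980: [1.5] + [10.0] * 11}
anoms = monthly_anomalies(toy)
```

Because each station is referenced only to itself, a station with truncated coverage contributes no offset error to the regional average, which is the point Evan makes about station dropout.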

Alright, then, that sounds like the standard way to calculate anomalies for each station. The 30-year average is the baseline.
Then do you use a gridded average of the station anomalies in each region to calculate the regional anomaly for each month?

Wow, congratulations !
Can someone calculate how many years back from the most recent readings on this set of data there is zero average change? The same way it was done for the satellite and radiosonde data for 18+ years. That would be interesting as a comparison.

It wouldn’t help us. (Or only be useful as support.) For there to be a divergence in trend, there must be a trend for the trend divergence to manifest itself. An interval of no trend would produce no trend divergence.

Very useful as support for the lay public. The public can really grasp the concept of 18+ years of no trend. If it gets to 25 years of no trend, everyone quits believing regardless of other past data.
The USCRN, satellite, radiosonde, and Anthony’s data set are headline grabbers and will influence sentiment.

Am I missing something? If the gold-standard sites are warming at 0.204C per decade, and that continues, we get to 2C per century. Now, I understand the variability, but I am not sure this report lets the heaters off the hook. They may well say ‘so what, it is still 2C per century.’

Ardy, you are missing something big and obvious. CONUS is developing rapidly: massive land use change, for example. Most of the rest of the land world is not. The oceans, not at all. You cannot project CONUS trends directly onto world trends. All the non-CONUS polar bears will never forgive, and never forget. Even in non-CONUS Alaska.

Am I missing something? If the gold-standard sites are warming at 0.204C per decade, and that continues, we get to 2C per century. Now, I understand the variability, but I am not sure this report lets the heaters off the hook. They may well say ‘so what, it is still 2C per century.’

You can’t extrapolate 30 years out to a century, especially when you have multiple 60-80 year oscillations. It also happens that the warming phase for one of those oscillations started in the late 1970s.
It’s a fundamental violation of signal processing theory to assume there are no frequencies lower than what you can observe in your window length, unless you know for SURE that there are no mechanisms that can cause such oscillations. In our case there are known mechanisms, e.g. the PDO, AMO, etc. There are also more speculative mechanisms that are on multi-hundred year cycles that we simply don’t have enough data to figure out what they are caused by. However, it’s the null hypothesis that they exist unless you can prove by data or mechanistic means that they don’t exist.
This doesn’t affect Anthony’s result, because the thermometers are all measuring the same physical signal*, what he’s determined is how differently they measure that same signal based on criteria for how well situated they are based on an agreed-upon standard.
Peter
* assuming average temperature is some sort of physical signal, that’s a separate philosophical debate.
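Peter’s window-length point is easy to demonstrate: fit a straight line to a 30-year slice of a pure 60-year oscillation with no long-term trend at all, and the fit still returns a substantial slope. The amplitude, period, and phase below are made up purely for illustration.

```python
# Fit an OLS line to a 30-year window of a trendless 60-year oscillation.
# The window happens to cover the rising half of the cycle (trough near
# 1979, peak near 2009), so a spurious "trend" appears. Synthetic data.
import math

def linear_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1979, 2009))  # a 30-year window
signal = [0.3 * math.sin(2 * math.pi * (y - 1979) / 60.0 - math.pi / 2)
          for y in years]

slope_per_decade = linear_slope(years, signal) * 10
# The fitted slope is clearly nonzero despite the series having no trend.
```

A window half the oscillation’s period, aligned with its rising phase, looks exactly like a warming trend, which is why the 1979-2008 study period (chosen deliberately for its strong warming) cannot simply be extrapolated to a century.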

Certainly. But our study period is 1979 to 2008. That was a period of strong natural warming. But it is only positive PDO for ~30 years at a time. We needed a period of unequivocal warming to demonstrate our trend divergence hypothesis. So we needed a real trend.
Note that in the short 1999-2008 period, there is a sharp cooling, which is shorter, but allows us, with a much larger ~800 unperturbed stations sample, to examine the effects of a cooling trend. As expected, poorly sited stations cooled faster even as they warmed faster during the 1979-1998 interval.
In short, we may expect perhaps half of the 0.204/decade over a full 60-year PDO cycle. (Or a full and complete “ENSO”, as BT would insist.)

Anthony Watts wrote in the lead post, under “Some side notes”:
“. . .
Some might wonder why we have a 1979-2008 comparison when this is 2015. The reason is so that this speaks to Menne et al. 2009 and 2010, papers launched by NOAA/NCDC to defend their adjustment methods for the USHCN from criticisms I had launched about the quality of the surface temperature record, such as this book in 2009: ‘Is the U.S. Surface Temperature Record Reliable?’ This sent NOAA/NCDC into a tizzy, and they responded with a hasty ‘ghost-written flyer’ they circulated. In our paper, we extend the comparisons to the current USHCN dataset as well as the 1979-2008 comparison.
. . .”

To: Anthony Watts or Evan Jones or John Christy or John Nielsen-Gammon
Question: Can exactly what you did for the period 1979-2008 also be done for some period before 1979? What would be the barriers to doing it for a period before 1979?
John

Or for 2008 – 2015. That would be most interesting, as it is the period that NOAA says is the most warmed period (once the oceanic data was “fixed”).
Shouldn’t that be a simple addition? But it wasn’t done ….

Yes, but the sample stationset would be reduced because of moves or TOBS flips (etc.).
Metadata bites. With sharp teeth. We have only 410 metadata-unperturbed well and poorly sited stations going back to 1979. Only 92 are Class 1\2. Take that further back and the numbers go down. Going back to only 1999 (from 2008), we have twice that many unperturbed stations.
You can only go where the available metadata will take you, and there are limits. (That’s why homogenization is so pesty seductive. You can go careening fecklessly all over with that. And don’t think we won’t — but we’ll do it right, unless the VeeV comes around and does it for us.)

Correct me if I misread you, but are you saying that after 2008 there are insufficient unmodified, well-sited stations to continue the study? I would have thought that the BEST sites would have been protected from disruption.

What I am saying is that a 1979 – 2008 stationset has more unperturbed stations than a 1979 – 2014 set. Any station that moved from 2008 to date or had a TOBS flip would be perturbed for the 1979 – 2014 interval. Perhaps it could be truncated. More likely we would have to drop it.

It will be really interesting to know what kind of response this gets from the folks at the conference.
Will the paper make it into the conference proceedings?
This is really exciting. Congratulations.

“30 year trends of temperature are shown to be lower, using well-sited high quality NOAA weather stations that do not require adjustments to the data.”
Stop right there!
If those stations show lower trends, then they most certainly do require adjustments to the data. If the data doesn’t fit the model, it’s wrong.
All these years of blogging about it, and you still haven’t learnt the basics of Climate Science (™).

At the risk of being contrary, may I suggest that the congratulatory nature of many of these comments is premature. This is, after all, a press release. Once the paper itself is published, along with its data, methods and other supplemental information, it can be discussed on an informed basis, analyzed and / or replicated, and its ramifications (if any) quantified. Such a stance is no different than what should be expected of all “science by press release” regardless of its source.
I would be surprised if Anthony himself would disagree.

He is presenting the paper live and I believe he has a poster session on the paper.
That deserves congratulations. Period.
It is no guarantee he and his co-authors are right. But they are being heard.
In the Oil and Gas business, particularly at AAPG, sometimes the oral paper is all you get to show given the confidentiality of the data. Attendees get to see the seismic; we don’t get to take it home.

Hi Stephen;
To the extent that the presentation of a paper at a conference is worthy of congratulations (which of course it is), I agree with you.
The point I was trying to make is that many comments here are apparently accepting the study’s conclusions at face value, prior to the release of the supplementary information necessary to closely scrutinize it. Such a practice is rightly criticized here when employed by people on the other side of the debate, and it should not be embraced now just because Anthony is one of the authors or because the study’s conclusions are more palatable to the readers of this blog.
Note that I am not questioning the results, just saying that a full analysis of the paper cannot be completed until all data and methods are made available.
Best regards,
Ken

Evan;
I’m not in any way denigrating the obvious effort that went into the paper, nor am I contesting the conclusions that are being presented here. All I’m saying is that in the past, science by press release has been vigorously, and correctly, criticized on this site. The study’s supplemental information is not yet available for review, and until it is one should hesitate to make conclusions as to its validity.
As an author, naturally you’re informed as to the methods that went into it. The rest of us are not. But I see many people here simply accepting the results at face value, based apparently on their faith in Anthony and the rest of the authors. I’m not saying that such faith is misplaced; I have no reason to believe that any of you are anything but honorable and competent. By the same token, “faith” has little place in dispassionate analysis. We’ve all seen studies which, though plausible on their face, cannot withstand close scrutiny for one reason or another. I believe that true skeptics apply the same standard of evidence to all studies regardless of their source, or how much we want the conclusions to be correct.
In short, once we’ve seen the data and the methods and the paper is exposed to critical review and commentary, then congratulations may be in order.
But I will go so far as to commend Anthony, you and your co-authors for the remarkable effort and persistence you have displayed in bringing this paper to its current state. I look forward to its formal publication and the release of its supplemental information.
Kind regards,
Ken

A Watts: 9/24/2012
Here, in my opinion as a 30-year TV/radio/web media reporter on science, is what should be in any professionally produced science press release:
•The name of the paper/project being referenced
•The name of the journal it is published in (if applicable)
•The name of the author(s) or principal researcher(s)
•Contact information for the author(s) or principal researcher(s)
•Contact information for the press release writer/agent
•The digital object identifier (DOI) (if one exists)
•The name of the sponsoring organization (if any)
•The source of the funding for the paper/project
•If possible, at the minimum, one or two full sized (640×480 or larger) graphics/images from the paper/project that illustrate the investigation and/or results

Incredible work and well-deserved accolades, Mr. Watts and team. Might I add that here, in the very midst of Paris COP21, real science is still appreciated and admired, and is still possible, especially in a time of corruption of science itself. Indeed, it is needed now more than ever before.

Shhhhhhhhhhhhh LB….they’re still trying to agree on the exact slogan they want to use, and then there’s the mass email to the flying monkeys, and getting Sou to the emerald city for a wash and wax, and…. 😀

I burned through a forum and a half on Sou’s blog, going back and forth with the Father of Climate Data homogenization, mostly on a forum she set up specifically for me to discuss it. They learned we are for real. I learned how homogenization can turn from Kindly Uncle H to the H-bomb. It was a fair bargain. Both sides got what they came for.
I also engaged on dedicated posts on SkS and, especially Stoat. They had a lot of snappy questions. And if I can’t supply the snappy answers, how are we going to get through peer review? Much less independent review. Anthony was wise, so very wise to pre-release in 2012. I don’t think we’d have made it without it.

He saw it on Drudge. That is, he saw the link to the article. (That’s pretty much what Drudge is: links.) It’s still there. Right now, it’s the third link in the left column. On Drudge, not the Daily Caller. 🙂

Congrats Anthony! I do hope NOAA starts to do something about their previous methods of gathering temperature data.
From my experience in debating the hard-core climate alarmist though, they will look at your study and claim the data you used was cherry-picked, and then simply dismiss it. No, you can not use logic with these people.
As for NOAA, if it does decide to do something about it, this gives it opportunity to do even more adjustments all the way back to 1880. It should love this opportunity, you know.

It occurred to me (and I think JC) that sometimes the best way to get through a problem is to go around it. Hit me like a rock one night when I was fast asleep. So, for our first cut, rather than adjust, we drop.
(BEST does not have that luxury. Mosh has the GHCN to deal with. We do. We have the data-metadata rich USHCN to play with. They did the best they could with what they had. Maybe we’ll wind up barking up that tree, ourselves — time will tell.)

That decision, to drop the sites whose metadata shows no microsite changes, was to me the brilliant moment. As a guy with a couple patents, I’ve had those moments, for me they come in the middle of the night also, and either wake me up, or I wake up and there the idea is, like a little gift box. Great idea!!!!

Excellent.
My father would have approved too.
He spent several years on weatherships for the UK Met Office in the 1950s.
He was unhappy with the SST methods (bucket on a rope and a thermometer), which could be subject to either evaporative cooling or radiative heating (warm ship) depending on the season and the weather.
He tried to persuade the authorities to improve the methods, but was ignored, so he left the marine division, and I grew up on various RAF bases.
Glad to see my donation going to good use.

L, you missed a key point. The new data is a more ‘pristine’ subset of the old CRN 1/2. AW explained this, and said that if some of the now-excluded CRN 1/2 stations were back, the result would be more ‘extreme’. Me, I prefer warmunist ‘bullet proofing’ to a max-skeptic result. Regards.

In all three maps, it is clear that there is no uniformity in temperature rise. Unless these are explained, it has little meaning. What are the changes in local/regional conditions behind such local/regional variations? This gives insight into real global warming, if any.
I read a report in today’s The Times of India of satellite data showing lake temperatures rising globally, even with such wide regional variations in surface temperature.
IPCC must sit down and look into this aspect instead of wasting public funds and harping on global warming and carbon dioxide.
Dr. S. Jeevananda Reddy

The point here is that for all nine regions, warming is greater (by at least a little) for the poorly sited stations than for the well sited stations. Quite apart from our robust statistical significance writ large, that is very unlikely to occur by random chance.

You note correctly that this is not the same as our current map. That is the result of addressing the criticisms adduced after our 2012 pre-release. That is why pre-pub independent review is so darn valuable. More papers should do it.
The CRS issue may swing it back the other way, perhaps, but that is a subject for followup. This is a continuing process.

For our immediate purposes, yes. For the longer term, what we do now gives us the ability to apply a little adjustment of our own to those stations. In the pursuit of “fuller coverage”, naturally, which will be one of the medium-term criticisms.
First we demonstrate that what they are doing is incorrect. And then we show them how to do it right.
We are armed, we are dangerous, and we are not going away.

OK, you have my undivided attention.
“A 410-station subset of U.S. Historical Climatology Network” which was used for the “Class 1/2 compliant” data set in the figure at top.
Is/will that data set be available in monthly, and bonus points for the set updated to the current month?
I would love to do a quick comparison to UAH and see what happens.

It is restricted in the current study to 1979 to 2008. Carrying it forward would reduce the trend and thus the divergence, making it more difficult to distinguish. The post 2005 CRN/COOP non-divergence comparison supports the heat sink hypothesis, but, by itself, does not test it.

Stand by for an “adjustment” of past records due to massive building works at the agricultural research station on Burwood Hwy, Scoresby, in outer east Melbourne. The ideally situated station appears to have been moved.
The BOM have already relocated the nearby Dunnes Hill station to Ferny Creek.
Station Details ID: 086104
Name: SCORESBY RESEARCH INSTITUTE Lat: -37.87 Lon: 145.26 Height: 80.0 m

Mr. Layman here.
To this Layman, and from what he’s learned over the years, the ever-changing names for what started as “CAGW” have their foundation built on surface station numbers. Those numbers’ connection to the reality of what was going on around them is suspect due to siting issues and record-keeping issues…if you’re trying to get a global or even regional picture rather than just telling your passengers whether or not they should put on a sweater before they get off the plane.
(Of course, political objectives and the money they can supply enter in here somewhere. I’m not sure where.)
An attempt was made to take those designed-for-local numbers and glean Global numbers from them.
Once the “CAGW” meme was established, GISS numbers have been changed to support it.
An honest examination of the individual sites is taking an axe to the root of “CAGW” and its many offspring.
We need economical and practical energy. We don’t need to give any politician the power to shut it off to prevent “AGW”. If we do, that’s where the “C” comes in.

>which has a warm bias mainly due to pain and maintenance issues.
I know climate scientists like to enforce a consensus, but do they really resort to hurting reporting units if they don’t give the right results?

bloke down the pub has asked before:
If the raw temperatures exist and you are only interested in the difference, then you should be able to use all the data, not stating the temperature itself but only whether it was getting hotter or colder.

Thanks Anthony et al.: the FACTS! I wish we would have them world wide.
What we need is a new world wide standard for measuring surface temperatures. Yours I suggest! We can forget the rest of the surface measurements.
Part of presenting data should, as standard (!), be the information whether someone has been “adjusting” the data or whether the presented data consist only of ‘raw’ official data. If ‘adjusted’: I will forget them anyway.
And If not collected in your way: I will look at those data as being unreliable. That is what they probably are.
And for the rest: let’s have a look at the satellite and weather balloon data. Satellite data are world wide and like balloon data they are measuring more of the lower troposphere. As far as I can see: the most reliable data.

Ardy says: “Am I missing something? If the gold standard sites are warming at 0.204 per decade, and if that continues, we get to 2 C per century.”
Anthony is not attempting a climate forecast, merely projecting past trends ahead in a straight line.
The establishment forecasters make an egregious schoolboy error by ignoring the natural temperature cycles, especially the millennial cycle which peaked in about 2003. http://2.bp.blogspot.com/-zZLVnsvgYTw/Vj0GEDv2q7I/AAAAAAAAAag/eumhxpS9ciE/s1600/trend11615.png
This is the sort of projection the establishment uses to support the COP21 CAGW nonsense. http://4.bp.blogspot.com/–pAcyHk9Mcg/VdzO4SEtHBI/AAAAAAAAAZw/EvF2J1bt5T0/s1600/straightlineproj.jpg
A new forecasting method needs to be adopted. For forecasts of the timing and extent of the coming cooling based on the natural solar activity cycles (most importantly the millennial cycle), and using the neutron count and 10Be record as the most useful proxy for solar activity, check my blog post at http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
(Section 1 has a complete discussion of the uselessness of the climate models.)
“In the Novum Organum (the new instrumentality for the acquisition of knowledge), Francis Bacon classified the intellectual fallacies of his time under four headings, which he called idols. The fourth of these was described as:
“Idols of the Theater are those which are due to sophistry and false learning. These idols are built up in the field of theology, philosophy, and science, and because they are defended by learned groups are accepted without question by the masses. When false philosophies have been cultivated and have attained a wide sphere of dominion in the world of the intellect they are no longer questioned. False superstructures are raised on false foundations, and in the end systems barren of merit parade their grandeur on the stage of the world.”
Climate science has fallen victim to this fourth type of idol. http://www.sirbacon.org/links/4idols.htm )

I want to be clear. We are not attempting to project US Tmean using 1979 to 2008 data. We are only using that strong warming interval in order to demonstrate the effect of significant proximate heat sink on trend. This interval is not useful for projection and would be a cherrypick for that purpose.

“5. We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.”
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>..
Important. Well done.

I visit here daily, because I believe that we are being conned. But as far as this UHI effect is concerned, I fail to see the point. These temperature monitoring points record higher temperatures because of where they are located. I get that. But can their location also dictate that readings get higher, year on year? Also, why would you want to record temperatures only in places where just the weather is involved? All this heat is being transmitted into the same system, so why is it important where you get the readings from? Surely the only thing you’re looking for is a trend? So eventually, wherever the readings come from, a trend will become apparent. I’m not a scientist, or even a qualified person in any respect, but I ain’t stupid. The reason I became anti-AGW, when it first reared its head, was because I appreciate the age of the planet, and I appreciate that our (I mean humankind here) presence here covers less than a fingersnap of the total time that the planet has existed. Then somebody turns up with numbers relating to the temperature of our planet going back for a tiny fraction of that aforementioned fingersnap, and says that they can see a trend??? I smell a rat, instantly. Then I come here, to this site, and find out about the MWP, and the warmist attempts to get rid of it, because it doesn’t fit the cause.
People on this site annoy me on a regular basis, because they tend to disappear up their own fundaments in their efforts to refute the warmist propaganda. What I cling to is the incredibly stupid proposition that CO2 is the reason for the perceived warming! That is the main plank of their argument, and the weakest point too! 0.04%!!! If AGW ever had a chance of being convincing, the alarmists went for it, big style! They made CO2 the bad guy before anyone knew how little of it exists, and how vital it is to our very survival in our world. There are, I learn, other greenhouse gases which have the same warming effect on climate, and one of them, water vapour, exists in quantities compared to which CO2 becomes almost less than minimal. So surely, logically, the AGW brigade should pick on water vapour?
I realise that everyone here has an axe to grind, and a point to make; and that many contributors are actual (principled)scientists. But it seems to me that the big hole in the AGW argument is the one I just mentioned. It seems to me that this is the drum everyone should be banging. in rhythm.

You ask a good question: “But can their location also dictate that readings get higher, year on year?” Anothony’s results show that yes, it can. The thing is that the change say from rural to urban is not a one time event. For example, our local weather station is at an airport. When I was there recently, a departing jet literally shook the terminal building like a (small) earthquake. There’s a *LOT* of energy in jet exhaust. What do you think happens when the number of flights increases? Another station I know is in a park, near a road. What happens as car traffic increases over time? What happens when a large shiny building is built on the other side of the road? And so it goes. But what I’ve just written is a just-so story saying it *could* happen that way. The empirical evidence is Anthony’s numbers showing that it *does* happen that way, on the whole, and of course in the “surface stations” web site.

“But can their location also dictate that readings get higher, year on year?” Yes it can and yes it will, because the buildup around the station is gradual, certainly not a step function, and it will trend in the hotter direction as additional heat sources are added.

Additional heat sink/sources are not necessary. We find that poorly sited stations warm (and cool) faster than well sited stations when Microsite ratings are constant throughout. This is an essential point.

To establish a meteorological station, there are defined standards. When you establish a station in violation of these standards, you cannot compare such met data with data collected at a standard station. This is basically because non-standard data are influenced by the local/surrounding variations. Because of this, we question the averaging of data at the global level, as several local variations are involved in it.
When you plot the data of a standard station, you can explain the perturbations in temperature. We need such analysis to separate global warming [a global issue] from ecological (land and water use/cover) changes over time, which are local.
IPCC was not sure of the sensitivity factor and thus goes on reducing it from report to report, with a wide range [maximum to mean to minimum], without scientific validity.
Dr. S. Jeevananda Reddy

Thanks. But we are not being conned. But we are being subjected to a serious error. Therefore we endeavor to define, account for, and otherwise explore this error. Making a mistake is not fraud, it is just a mistake.

It seems to me that data collection problems should be the first thing to be looked at. These are basic scientific tenets that the other side has consistently refused to even consider. Clearly, and to anyone with a whiff of common sense, there is an issue, and one would think they would immediately investigate. They refused. Period.
This “microsite” issue is another “shark that has been jumped”. The blowback will be harsh, as you are not just upsetting one apple cart, you are upsetting ALL the apple carts.
The numbers undermine every aspect of AGW, not just CAGW. This, coupled with the failed models, blows the CO2 conjecture right out of the boiling oceans.

Today in rural Montana (Big Horn County, Hardin) I saw a 7-degree differential between the two digital thermometers I have mentioned before. One hangs about 10 feet over cement and the other is in a tower, not near cement. Cold island effect.

I know this is a pain, but….
We use the MMTS adjustment noted in Menne et al. 2009 and 2010 for the MMTS exposure housing versus the old wooden box Cotton Region Shelter (CRS) which has a warm bias mainly due to pain and maintenance issues.

You’ll have to wait until publication. Twice already we’ve had reason to regret releasing preliminary data. So we must tread with caution. But you shall have it. All of it. That’s a cross-my-heart promise.

The Class 1/2 station trends run ~10% cooler than the sats. This is a near-perfect result for our purposes, as basic physics indicates that surface trends should be lower than satellite trends over a warming period.

This is a really *beautiful* piece of work. Conceptually simple, technically a lot of hard grind, clearly explained, *solid* results. Since you don’t claim a zero or negative trend, there is even some possibility that you might be believed. If you seriously believe in global warming, dear reader, you have to want the best measurements you can get to tell you where the money needs to be spent. So any honest believer in CAGW (and I know a few) should want the best temperature measurement network that’s practically affordable and needs to know that we currently don’t really have one. (Amongst other things, how will we know we’ve *succeeded* in curbing AGW if our thermometers are wrong?) This isn’t just carping, this is real data with real numbers demanding real action. If I didn’t have a broken toe right now, I’d be jumping up and down with exclamations of admiration. WELL DONE!

How They Do It
The US federal government typically requires a Quality Assurance Plan (QAP) for any data that is used for decision making. This is a standard that appears to have evolved over the years in response to outcries over monkey business and the arbitrary use of collected info (data).
I did a little research to try and locate the current QAP for how NOAA collects climate-based data on land (CONUS). The latest I found was from 2010. There may be a more recent one, but that’s what I found doing a quick search. https://www.oig.doc.gov/OIGPublications/STL-19846.pdf
The above linked QAP review was done via congressional request, under Inspector General review, so it has some weight and was of course peer reviewed. The work that Anthony did will be compared to the requirements of the existing QAP.
It’s the rules for the game so to speak. If the work meets the requirements of the rules OR presents a best available science reason for a departure (improvement) from the QAP, he’ll have to be prepared to defend it.
Since he is doing this work “independently”, one of the potentially successful strategies might be validating the worth of the work (comparison to the QAP) and creating a co-monitoring solution where his work and methods are reaffirmed by NOAA replication. That way you are in their wheelhouse. The strategy has the added simultaneous effect of forcing transparency.
Just a couple of thoughts from the peanut gallery and a common practice of NGOs.

All true, Knute. Add into the mix that NOAA is in “hot water” with L. Smith’s subcommittee, and that one of the loyal opposition who is running for President is also conducting his own hearings in the Senate.
This could get interesting. The paper seems to be showing up in various web pages.
michael

Yes sir Mike
Would hate to see it blown off with a nonsensical but successful cognitive-dissonance “it was cherry-picked” attack. The best way to head that off is to show how it is as good as, if not better than, their own standards.
It will help the congressional committee if you do the work for them.

How we do it:
1.) Isolate the stations with unperturbed metadata (both good and bad).
2.) We apply the minimum corrections/adjustments necessary.
3.) We let ‘er roll.
P.S., we do subsets. By equipment, by mesosite, by region. We also provide full data for the stations we dropped.
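The first step above (isolate the unperturbed stations) can be sketched in code. This is only a guess at the shape of the selection, not the authors’ actual code; the column names and flags below are invented for illustration:

```python
# Hypothetical sketch of the "isolate unperturbed stations" step.
# Column names (moved, equipment_changed, tobs_changed, leroy_class)
# are invented; the real study works from USHCN metadata records.
import pandas as pd

def select_unperturbed(meta: pd.DataFrame) -> pd.DataFrame:
    """Keep only stations whose metadata shows no perturbations."""
    mask = (
        ~meta["moved"]
        & ~meta["equipment_changed"]
        & ~meta["tobs_changed"]
    )
    return meta[mask]

meta = pd.DataFrame({
    "station_id": ["A", "B", "C", "D"],
    "moved": [False, True, False, False],
    "equipment_changed": [False, False, True, False],
    "tobs_changed": [False, False, False, False],
    "leroy_class": [1, 2, 3, 4],
})

unperturbed = select_unperturbed(meta)
print(list(unperturbed["station_id"]))  # ['A', 'D']
```

Note that both well and poorly sited stations pass this filter; the siting class survives in the output so the “good” and “bad” subsets can then be compared against each other.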

Do you mind defining …
1. unperturbed metadata
2. types of corrections and criteria for application
I realize you may be swamped, but it will come up sooner rather than later.
For additional thought, your definitions for 1 and esp 2 will be compared to current algorithm V2 from the BAMS 2009 pub, but you probably already know that ….

1. unperturbed metadata
— No change in site rating throughout. No moves where the previous location is unknown (we lost most of them that way). Localized moves where both ratings are the same are included. An unmoved station whose rating was changed by encroaching heat sink is considered as a move and is dropped.
— No significant change of TOBS. If a station has a flip and a flip-back and roughly equal TOBS at each end, that will not affect trend, and that data is included. JN-G ran TOBS-adjusted data for the set in order to check, and the results were much the same. We list TOBS with the data, so one can make any changes (dropping or adding) as one sees fit.
— We include stations with equipment changes, but make the necessary MMTS adjustments to correct.
2. types of corrections and criteria for application
MMTS only. Offset adjustment applied consistent with Menne (2009).
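A constant-offset equipment correction of the kind described might look like the sketch below. The offset value and series are made up for this example; the actual offsets come from Menne et al. (2009) and differ by element (Tmax/Tmin):

```python
# Illustrative constant-offset equipment correction. The 0.25 offset
# is invented for this example and is NOT the Menne (2009) value.
import numpy as np

def adjust_equipment_change(temps: np.ndarray, change_idx: int,
                            offset: float) -> np.ndarray:
    """Add a constant offset to all readings at and after the change."""
    out = temps.astype(float).copy()
    out[change_idx:] += offset
    return out

temps = np.array([10.0, 10.2, 9.8, 9.9, 10.1])
adjusted = adjust_equipment_change(temps, change_idx=3, offset=0.25)
# readings before index 3 are unchanged; later ones shift up by 0.25
```

Because the correction is a constant shift of one segment, it removes the step introduced by the equipment change without otherwise altering the trend within either segment.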

Chilly
Glad you see some worth there.
I’m beginning to think that web-based groups such as WUWT, CE and Nova are the “new” NGOs. If Anthony et al. can successfully be viewed by NOAA and friends as a valid NGO, the authorities will have to embrace working with their findings much in the same way they are obligated to work with any other “environmental” NGO: methods review, independent audit, priority recommendations for work to be performed.
Excellent opportunity.

I hate temperature data expressed as color gradient maps, or any form of colored map. I can’t think of anything more useless, and I’m sure that’s why IPCC reports are rife with them. All we need is the damn trend of the actual damn temperature.

Dr. N-G’s involvement ought to pay dividends. He had struck me as a supporter of the alarming-warming meme, but it turns out he is an honorable scientist, and I was wrong. His contributions ought to open many eyes.

I think you may be surprised. Mosh himself has said siting is a “good” issue, as it is a potentially systematic effect rather than eating around the edges.
All he needs to do to create an interesting and different approach is to apply his methods, but only pairwise good stations with the good and bad with the bad. I will be most interested in those results.

Anthony Watts — I would appreciate clarification on two issues. In Fig. 4 the pattern showed ‘W’ followed by ‘M’; what is the width of this? Secondly, are all those 410 stations standard met stations, inside a Stevenson Screen, or are they of different types? Is it possible to present a trend for a met station within an agricultural zone and one from an urban zone? The reason I am asking is to get clarity on the urban-heat-island effect and the rural-cold-island effect.
Dr. S. Jeevananda Reddy

Scale is in hundredths of a degree C. Interval is 30 years. The station set is all of the unperturbed stations of the USHCN2 (except a few we never found, yet). All equipment is included: CRS, MMTS, and ASOS. The MMTS-only trend is lower (0.163 C/decade).
We include Crops as one of our subsets. And CRS-only, urban/non-urban, Rural-only MMTS, etc. You can subdivide as you wish and create further categories.

Here is something like an Australian land-mass temperature comparison. It covers the period 1972 to 2006 inclusive (I did the work in 2008 or so; the start date was chosen to be after the change from degrees F to degrees C reporting here).
The rationale is on the spreadsheet here: http://www.geoffstuff.com/pristine_feb_2015.xls
Australia has over 1,200 land weather station sites on record. I chose the 44 sites whose history suggested maximum isolation and minimum effects of the hand of Man. Both maximum and minimum daily temperatures were used as the basic data.
I calculated linear trend lines and compared the trends expressed as degC change per century.
The trends were so noisy and hard to interpret that I gave up, noting that the most pristine sites may well have the poorest quality, an enigma with no solution.
NOTE: I did not take averages of trend numbers because that common approach (as in CMIP5, for example) is not statistically pure.
Naturally, I am happy to field questions. [Thank you. .mod]
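The trend calculation described above, a least-squares line expressed as degC change per century, can be sketched as follows (the series here is synthetic, not one of the 44 stations):

```python
# Fit a straight line to annual temperatures and report the slope
# scaled from degC/year to degC/century. The data below is synthetic.
import numpy as np

def trend_per_century(years: np.ndarray, temps: np.ndarray) -> float:
    slope, _intercept = np.polyfit(years, temps, 1)
    return slope * 100.0

years = np.arange(1972, 2007)          # 1972-2006 inclusive
temps = 15.0 + 0.005 * (years - 1972)  # exactly 0.5 degC/century
print(round(trend_per_century(years, temps), 2))  # 0.5
```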

Some metadata is available online from BOM. Usually sketchy.
Some other sites, like Rutherglen and Amberley, remain in a state of “we agree to disagree” about the importance and accuracy of recently released additional metadata.
My career in mineral exploration took me to most corners of the country, so local knowledge was used in selecting sites for this work. Also extensive use of Google Earth for aerial views, geography, distances, etc., and government stats for populations.
I put a sting in the tail by correlating the 44 station trends with their digital World Meteorological Number. It supports my claim of noisy, useless data.
For a study similar to yours in the USA, I would need to use BOM-homogenised data, but I have not found any remote sites like these that have gone through the mincer.
Note that derived trends exceeding, say, +/- 3 degC per century are intuitively non-physical.

Great work Anthony. I’m glad to see John N-G was on board and in agreement. The implication is that the entire “homogenization” process is making results less accurate than might be accomplished by wisely choosing meso or synoptic scale sites over microscale sites within the GHCN as opposed to homogenizing. I also prefer raw milk from happy pastured cows to homogenized milk from confinement feedlot farms. 🙂

Reblogged this on Sierra Foothill Commentary and commented:
Ellen and I supported Anthony’s project by collecting weather station data across the country. We are proud of our contribution to citizen science.
We surveyed stations in Delaware, Maryland, South Dakota, Wyoming, Nevada, Idaho and California.

These 410 stations, how are they distributed by elevation and latitude, and how do those distributions compare to those of all stations? If there are differences in the distributions, how do those differences affect reported average temperatures?

Andy
If this dataset is used by Cruz’s committee, its QAP will be compared to the standard methods employed by the federal government (NOAA). From what I’ve read, those methods were reviewed and approved by an independent panel.
In order for Anthony to override the counter that he cherry-picked, he’ll have to either have a better method (one approved by a supporting independent peer review, or otherwise approved) or have found a flaw in how NOAA executed its own QAP and methods.
The committee will request a Data Quality Audit.
Essentially, the committee gets to use the “independent” data once it’s been audited.
It’s a big step and very nitpicky.

This is almost as exciting as reading the first Climategate thread here. (I was the first commenter on it.)
Fortunately, the warmists won’t be able to whitewash this one away. AW has put a spoke in the wheels of the bandwagon.
And to think that AW had to pass the hat to pay for his way down–and had to drive to cut costs. While money was no object for the 40,000 attendees in Paris.
They ought to hold the next COP in Chico.

Congratulations Anthony !! Great work by you and your team.
The results of this work should be included in the upcoming Congressional hearings, which will address NOAA’s political manipulation of the temperature data, using less accurate data to adjust temperatures upward to comply with Obama’s global campaign of climate alarmism.

Right. The point being that the land temperature trend gets diluted severely by the unchanging over-ocean trend. (As was stated upthread.)
PS: I’m getting a funny appearance of this site on my screen today. It’s probably just me–but has anyone else had this glitch?

But they do. However all warming trends are, most emphatically, not created equal. And this paper warrants further study. Leroy 1999 and 2010 were created to site new stations as much as to evaluate existing ones, and some of its aspects are meataxe. As we used to say in the old Fall et al. days, some class 3 stations are more equal than others.
We may refine this and create a new and improved rating classification system of our own. This is a continuing endeavor.

I just want to say. All who take interest in this subject, be they friend or foe, they are my family. And anyone who has surveyed a station, no matter where he has been or what he has done, he is my brother.

Evan, I love your attitude towards observational data. As Carl Sagan would say, “that is the heart of science”.
I’ll also beg for some kudos to the coop observers, who are mostly volunteers who love recording the weather and are willing to commit to decades of doing so. I’ve been doing this myself for 30+ years, which means over 10,000 days of max, min, precip, snow, fog, thunderstorms, and other observations. I’ve got a good site: up in the mountains of Colorado, next to a ranch, no runways or air conditioners (but those damn hills blot out the sun well above 3 degrees high). But those in less ideal locations are just as diligent. The folks at the weather service would love it if all their coop observers were out in the boonies, but for them, finding a careful, consistent, and persistent observer is just as important as finding an ideal micro site.

I’ll also beg for some kudos to the coop observers, who are mostly volunteers who love recording the weather and are willing to commit to decades of doing so. I’ve been doing this myself for 30+ years
Kudos, my brother, and to all the civic-minded patriots who man the lonely front lines of observational science. We appreciate your efforts. We value them. We salute you.
(Besides, observers do not site their own stations, in any case. That is up to the regional directors.)

This looks like a very important work by Anthony. My admiration and thanks for a job well done. I find this quote particularly interesting:
“We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.”
It makes me think back to Karl et al. Hmmm.

THEY could have done this study. THEY could have told their grad students that this would make a great dissertation. THEY should have wanted to ensure they had a firm foundation.
But they didn’t look. Because they didn’t want to see.

Evan,
I just wanted to thank you for spending the better part of this evening responding to questions and comments while this paper is being published. Your patience and diplomacy here are truly heartwarming and I wish you all the best in this and future endeavors. Hey! Lookie there…anthropogenic cardio warming!
🙂

Thanks for your kind words (and also to others I have not directly addressed). Bear in mind that our opposition comprises good scientists, too, and everything we have done we have bounced off their hard work.

They thought they had accounted for it. No crime in that. Besides, if they had done it correctly, we wouldn’t have had the privilege. It’s a privilege I value.
C’mon, Evan. They refused to even look at it. Dismissed it out of hand,…

I had the same reaction to Evan’s comment. What evidence is there to prove that “they thought they had accounted for it”. My understanding is that they were told at least once, maybe more, that their calculations DID NOT account for it, and nothing changed following their enlightenment. Now, either they didn’t BOTHER to check to see for sure whether they had adjusted for it or not, or they knew prior/found out after being informed and CHOSE NOT TO adjust for it for some reason. THAT at the very least can be called a crime of negligence. Whether or not we can call it something else in the end, remains to be seen.

I agree, Aphan, et. al. — their own actions leave one with no other rational conclusion but that NOAA (as an organization) was/is either:
1. Incompetent (to the point of misfeasance)
or
2. Lying (malfeasance)
Given the scientific credentials of NOAA’s scientists, I think it is choice #2.
One can distinguish the individuals of integrity who work at NOAA and have chosen for whatever reason (hope of reform-from-within or merely economic, or whatever) not to quit their jobs
from
the corrupt organization (proven over and over) called “NOAA.”
**************************
Dear Good natured Evan,
“There is a time for everything…,” including a time to firmly denounce wrongdoers.
Moral equivalency between you and Anthony and John Christy on the one hand and NOAA on the other is not accurate.
Still rooting for you!
Janice

C’mon, Evan. They refused to even look at it. Dismissed it out of hand …
They looked at Fall et al. and looked no further. Yet, with both Anthony and me as co-authors of Fall et al., who am I to blame them for that?
So the ball was in our court. That’s the way I like it, anyway.

This necessary (by inference) revision of the global temperature record puts it below the lower bound of the models’ projections. So now we can say, “The consensus is 97% wrong.” How pleasant to turn the tables! And how deserved!

Congratulations Anthony and colleagues.
I had to smile when I read your point 5 below. May one hope that those who make the “adjustments” will mend their ways and change the “adjustments” to co-ordinate with the well sited stations?

“…
5. We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.
…”

Yeah, Mosher is kicking up quite the fuss over there about the data not being released. If only he were as vociferous in his criticism of Mann and Jones… oh, now I get it, he probably just wants it so that he can find something wrong with it 😉
I’m looking forward to seeing the paper actually published, and am betting that there will be major political pressure to prevent that. Double edged sword since it is already getting press and suppressing the paper would just result in a Streisand effect.
I’ll wait for the data when authors are ready to release it. I don’t mind waiting, seeing Mosher in fits amuses me.

Mosher is neither intellectually honest nor is he qualified to review anything scientific (B.A. English). Why skeptics put any trust in this alarmist shill is beyond me. All he does is waste everyone’s time wherever he posts. He is the one who orchestrated the “Muller is a recovering skeptic” meme and thus tried to tank all skeptic criticisms of AGW. Now he is trying to pretend he is a scientist. http://www.populartechnology.net/2014/06/who-is-steven-mosher.html

Journals won’t accept papers whose contents have all been pre-released, right? My guess is that Watts hasn’t been able to get a journal to accept the paper yet (not necessarily a negative if it’s a skeptical paper), so that’s why he’s keeping the data under wraps. (He should maybe have said this himself, if that’s the case.)

Mosher is neither intellectually honest nor is he qualified to review anything scientific (B.A. English).
They say the same about me. And my M.A. is in US History. (CONUS.)
“We are much alike,” Mosh and I. “Both proud of our ships.” We just sail different seas.

Yes. Error bars for well sited stations are larger than for poorly sited stations because there are so many more poorly sited stations.
Homogenized data shows the smallest deviation of all — that’s what happens when you essentially make undifferentiated pap out of your data. And unless you had the raw stuff available, you’d never even see it — unless you knew where to look. Anthony is the one who found the needle in the haystack.
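The point about error bars follows from basic sampling statistics: the standard error of a mean shrinks as 1/√n, so the smaller well-sited subset (92 stations, per the regional breakdown quoted elsewhere in this thread) necessarily carries wider error bars than the larger poorly-sited subset (318 stations). A minimal sketch; the 0.5 °C/decade station-to-station spread is an invented, illustrative figure, not a number from the study:

```python
# Standard error of the mean scales as 1/sqrt(n): fewer stations give
# wider error bars even when the station-to-station spread is identical.
import math

def standard_error(sigma, n):
    """SE of the mean of n independent values with spread sigma."""
    return sigma / math.sqrt(n)

sigma = 0.5  # assumed spread across stations, C/decade (illustrative)
se_well = standard_error(sigma, 92)    # well-sited (Class 1/2) subset
se_poor = standard_error(sigma, 318)   # poorly-sited (Class 3/4/5) subset
print(round(se_well, 4), round(se_poor, 4))  # 0.0521 0.028
```

With the same assumed spread, the well-sited bar comes out nearly twice as wide, purely because of the smaller count.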

There are so many posts to read, so I’m gonna shortcut the process and just ask: How did the presentation go? Was it well-attended? Was there heckling? Rude questions? Or did Anthony talk to the empty chairs?

Evan:
Did you take the arithmetic mean for the summary statistics, or did you take a geographically-normalized mean such as NOAA, BEST etc do?
Because that could be a large source of the difference as well.
thanks. And thanks for replying to a pile of posts here, this has been one of the more fascinating threads in a while
Peter

We simply used a gridded, regional area-weighted average. This only becomes an issue when the subsets become so small/skewed (e.g. urban Class 1/2 data or CRS-only data) that there is not at least one station per region. Why bother with essentially circular logic, anyway?
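For readers unfamiliar with the term, a gridded, area-weighted average means stations are first averaged within each grid cell, and the cell means are then combined with weights proportional to cell area, so a dense cluster of stations cannot dominate the regional mean. A minimal sketch under assumed data; the station list, the 1° cell size, and the cos(latitude) area weight are invented for illustration, not the study's actual grid or code:

```python
# Hypothetical sketch of a gridded, area-weighted average: average
# stations within each grid cell first, then combine cell means with
# weights ~ cell area (cos(latitude) on a regular lat/lon grid).
import math

def gridded_average(stations):
    """stations: list of (lat, lon, anomaly); 1-degree cells assumed."""
    cells = {}
    for lat, lon, anom in stations:
        key = (math.floor(lat), math.floor(lon))
        cells.setdefault(key, []).append(anom)
    num = den = 0.0
    for (cell_lat, _), anoms in cells.items():
        w = math.cos(math.radians(cell_lat + 0.5))  # cell-area weight
        num += w * (sum(anoms) / len(anoms))        # one value per cell
        den += w
    return num / den

# Three clustered stations collapse to one cell mean; the lone station
# in the other cell gets comparable weight despite being outnumbered.
stations = [(40.2, -100.5, 1.0), (40.3, -100.6, 1.2),
            (40.4, -100.7, 1.1), (45.5, -100.5, 0.0)]
print(round(gridded_average(stations), 4))  # ~0.5724
```

A plain arithmetic mean of the four anomalies would give 0.825; the gridded version lands near the midpoint of the two cell means instead, which is the point of the weighting.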

There seems to be a typo in the poster.
“The overall warming effect of a heat sink on a nearby sensor is greater at the end of a warming phase than at the start of it.”
“Conversely, the effect of a heat sink is less at the end than at the beginning of an overall cooling phase.”
I think the first sentence should say heat source.
Also, these statements will need careful justification in the full paper.

When the journal article publishes, we’ll make all of the data, code, and methods available so that the study is entirely replicable. We feel this is very important, even if it allows unscrupulous types to launch “creative” attacks via journal publications, blog posts, and comments. When the data and paper are available, we’ll welcome real and well-founded criticism.

So… this is just science by press release? Look, there was a reason skeptics criticized climate scientists for rushing off to publish press releases about their results before their papers had even come out. The reason is: Because it’s wrong.
Publishing a press release and encouraging people to tell everyone about your results while telling them they don’t get to see any paper, data or analysis supporting those results until some unspecified and unknowable future date is wrong. Doing so just ensures one’s results get to be discussed and repeated long before anyone can possibly examine them to judge their credibility. That’s not how science should work, whether it comes from a mainstream climate scientist or a skeptic.
If people have nothing to actually look at then there shouldn’t be a press release.

Journals won’t accept papers whose contents have all been pre-released, right? My guess is that Watts hasn’t been able to get a journal to accept the paper yet (not necessarily a negative if it’s a skeptical paper), so that’s why he’s keeping the data under wraps. (He should maybe have said this himself, if that’s the case.)

The statement “Journals won’t accept papers whose contents have all been pre-released, right?” is not true.
There may be some journals like that, but there are many (the majority of?) journals that will accept on that basis. Also, it is very common for poster or oral presentations at major scientific society conferences to go on to be published in that society’s journals. Sometimes the journal editors go around spotting good papers exactly for that reason. Your comment appears to be trolling and is largely fact free.

My claim (actually a guess, as indicated by my question mark) wasn’t “fact free,” because it is true of some journals, as you concede. It was only an exaggeration–if your claim is true. So AW & Co. are wise not to take the risk of excluding themselves from acceptance anywhere.
Here’s another point that occurs to me. Journals might use AW & Co.’s publication of data as an excuse to reject their paper. AW has reason, or justifiably thinks he has reason, to suspect that some of them might be looking for an excuse.

Anthony presented the paper and findings at a convention. It has not been published yet. When it is published, all of the relevant data and methods will be published with it. AGU made the press release. Take it up with them.

So… this is just science by press release?
We prefer to think of it as just press release by science.
Look, there was a reason skeptics criticized climate scientists for rushing off to publish press releases about their results before their papers had even come out. The reason is: Because it’s wrong.
Not by us. We encourage it. We did it. The reason is: because it’s right. And if more papers did it, fewer papers would be falling headlong and flat upon their faces within a month of publication.

I’ve tried submitting a couple comments while logged in under my Twitter account (where the username would show up as Brandon S?) but they haven’t appeared, not even as awaiting moderation. Is there any chance they could get fished out?

Brandon, I tried to use my Galaxy last night to post comments using this exact name and account, but they wouldn’t post either. It’s happened before. I’m wondering if using “apps” somehow causes them to not get recognized by WordPress or something…

It would be fun if someone could round up all the troll comments, the bilious attacks and personal insults that have been removed from this site over the years. No doubt there have been a huge number attempting to insult, denigrate, defame, libel, abuse, malign and vilify the hard working people who have used their own time, petrol (Ha! a fossil fuel plot) etc. to gather the data for this study, on this post alone. There were 16 posts on this subject on Bishop Hill before the first troll got out of bed and dragged his tablet out from under its rock. It is such fun watching them splutter into their fairtrade, harvested by sustainable methods, transported by sailing ships coffee. A book could be published of them all, or we could get Lew to do a conspiracy theory paper.
Good to see some science as I understood it still being done. The modern version of people in lab coats “reanalizing” data does not really cut it. ( and no……. re-anal-izing is not a spelling mistake)

As that is a sword that cuts both ways, I am not inclined to judge.
It is such fun watching them splutter into their fairtrade, harvested by sustainable methods, transported by sailing ships coffee.
Why, yes. So we enjoy the show.

Sorry, it is creating more confusion:
with the official 1218 stations the trend varied between 0.224 and 0.409°C/decade
with 410 stations the trend varied between 0.04 and 0.292°C/decade
with 808 stations the trend varied between 0.219 and 0.442°C/decade
Also, this is not based on the standard [WMO specification] Stevenson Screen data.
Also, it is only for 30 years — not enough to understand a trend, as this may be part of natural variability.
I really don’t understand what the real motive is in presenting this data. If it is to show the problems in building the data series, then it is OK, but if the intention is to show the trend then this may not be the right way. As I pointed out earlier, please present a trend of historical data for an urban and a rural station measured using a Stevenson screen and not open sensors on the streets.
Also, W followed by M in Figure 1 is around 10 years and not 30 years as replied earlier.
Dr. S. Jeevananda Reddy

with the official 1218 stations the trend varied between 0.224 and 0.409°C/decade
with 410 stations the trend varied between 0.04 and 0.292°C/decade
with 808 stations the trend varied between 0.219 and 0.442°C/decade
Those are the lowest/highest trends for an individual region. The 1st is for Region 9 (West, the lowest-trend region), while the 2nd is for Region 7 (Southwest, the highest-trend region).
Also, this is not based on the standard [WMO specification] Stevenson Screen data
This is based on data for all equipment: CRS, MMTS, ASOS/AWOS. (MMTS adjustment applied for equipment conversion.) We do have a CRS-only subset. But CRS data is fatally flawed from the get-go, for both Tmin and (especially) Tmax. A CRS box in and of itself is a heat sink that spuriously increases the trend for both Max and Min.
Also, it is only for 30 years — not enough to understand trend as this may be part of natural variability
It covers a positive PDO phase. So it is ~half natural, ~half anthropogenic. But that is not what we are trying to assess. We are merely comparing the trends of well and poorly sited stations over a 30-year period of unequivocal overall warming.

Dr Reddy, I believe what the poster actually says is:
Among the nine regions of the CONUS
with 1218 stations official the trend varied between 0.224 to 0.409°C/decade
with 92 unperturbed class 1/2 stations data the trend varied between 0.04 to 0.292°C/decade
with 318 unperturbed class 3/4/5 stations data the trend varied between 0.219 to 0.442°C/decade
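For context, the °C/decade figures quoted above are linear-trend slopes. A minimal sketch of how such a trend can be computed from annual anomalies via ordinary least squares; the data series below is invented (warming at exactly 0.02 °C/yr), not any of the study's subsets:

```python
# Minimal sketch of a degrees-per-decade trend: the ordinary
# least-squares slope of annual anomalies vs. year, scaled by 10.
def trend_per_decade(years, anoms):
    n = len(years)
    my = sum(years) / n
    ma = sum(anoms) / n
    slope = (sum((y - my) * (a - ma) for y, a in zip(years, anoms))
             / sum((y - my) ** 2 for y in years))  # C per year
    return slope * 10                              # C per decade

years = list(range(1979, 2009))            # a 30-year window
anoms = [0.02 * (y - 1979) for y in years]  # synthetic 0.02 C/yr series
print(round(trend_per_decade(years, anoms), 3))  # 0.2
```

On the synthetic series the slope recovers the built-in 0.2 °C/decade exactly, which is the sanity check one would run before pointing the routine at real station data.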

Also, W followed by M in Figure 1 is around 10 years and not 30 years as replied earlier.
I am not sure what you mean. But figures apply to amount of warming per decade over the full 30-year study period. Does this answer the question?

Normally I’d say yes, but unfortunately you have dirtied yourself by adding a link at the top to a rant by Sou/Hotwhopper, aka Miriam O’Brien.
You profess a love of rational discourse in science, but you consort with someone whose sole purpose is denigration. It doesn’t say much for your professionalism.
That said, some good points there, many of which we are already aware.

(Note: “Buster Brown” is the latest fake screen name for ‘David Socrates’, ‘Brian G Valentine’, ‘Joel D. Jackson’, ‘beckleybud’, ‘Edward Richardson’, ‘H Grouse’, and about twenty others. The same person is also an identity thief who has stolen legitimate commenters’ names. Therefore, all the time and effort he spent on writing 300 comments under the fake “BusterBrown” name, many of them quite long, are wasted because I am deleting them wholesale. ~mod.)

Normally I’d agree. But Miriam O’Brien has labeled me a criminal (in writing) just for having a different opinion and daring to write about it. That’s over the top and indicative of her own lack of tolerance for “differing matters of opinion.”


Happy to hear you found some good points.
As long as it is nearly impossible to come to an agreement on scientific questions which have an answer, I do not think it would be productive to have a discussion about which blogs tone is worse. Always happy to discuss scientific topics where I have expertise.

John Whitman asked Victor V.:
Will Anthony (and his co-authors) have a chance to review drafts of your future climate focused papers before they go to a journal for submittal, as he has allowed you to do (eg – your previous review)?
There’s your answer, John. V.V. says there are scientific questions that have an answer. But as a typical climate alarmist, he refuses to accept what the planet is clearly saying; he doesn’t like the answer, which shows he and his ilk are wrong.
And:
Always happy to discuss scientific topics where I have expertise.
So I guess that’s the last we will hear from V.V.

@ Buster Brown (who has at least 3 other names, per a mod, not long ago…) — Either you have poor reading skills or you deliberately mischaracterized (by implication) Anthony’s well-founded, rational, conclusion that for V. to include someone as dishonest (not to mention erroneous) as “Sou” in a discussion of Anthony’s paper is NOT to be “open-minded.” Rather, it is to create a false impression of “balance.”
{Much as O’Reilly of Fox does with his often pseudo-“fair and balanced.”}
That is, whether you meant “Sou” or V., you are wrong.


“Janince, every time one of my kids “dirties” themselves, I find that they are dirty. Never have they “dirtied” themselves and remained clean.”
And do your children simply, suddenly grow dirty spontaneously, or did they dirty themselves with something else? Because if your children are like every other child I know, including my own, the conditions required for becoming a dirty child are at least (1) a child and (2) a source or substance with which a formerly clean child has come into contact. So, unless you have never cleaned your kids, OR your children simply become dirty spontaneously, your anecdote is merely that.


My children would most likely BE “super special and out of the ordinary” because even they understand that logic allows people to know the difference between a statement of fact, and a statement of analogy. I mean, if I can assume logically that none of us can SEE V physically, (including Anthony and Janice) then using that same logic and reason, I would never ASSUME that Anthony was implying that V “dirtied” himself physically. If I actually tried to TELL my kids why I logically arrived at that conclusion, they would all most likely respond with some form of “DUH mom”.
They would also likely examine your posts here and declare that you think like a knucklehead. But then again, that might just be because they are super special out of the ordinary kids. You might try to find some, and observe.

Dirty boys ?
Buster strikes me as a guy who wants a little more attention than needed and does so by spinning things that are uninteresting. My 2 cents. My boys would ask him if he was breast fed, but they are not a sensitive bunch.
Now on the subject of dirty children, all I can say is encourage them to experience the world. Give them a good sense of when to stop being stupid so they survive to live another day.
I really only know about raising boys. THEY esp need to get dirty in all aspects of life. Test their mettle. Flex their limits. Exhaust themselves actually. Eventually they test the values in any pack as they should and figure out who they are and what they can bring to the table. Creates winners, losers, drives excellence. Yes, I know spoken like a knuckle dragging man.
I don’t know about girls, but I often wonder how one would have turned out in Knuteland.


REPLY – Please be advised. We only do meters around these-here parts. ~ Evan

Victor Venema,
Will Anthony (and his co-authors) have a chance to review drafts of your future climate focused papers before they go to a journal for submittal, as he has allowed you to do (eg – your previous review)?
It seems like that kind of reasonable professional reciprocation, as a common courtesy, would show good will. What do you think of this suggestion?
John

John Whitman, I could not review the draft of Watts et al. (2015) because only the press release/blog post and poster were made available. In general, I do not think that it is a good idea to seek public attention for a study before it is published, especially when the result is likely to be contested, whether James Hansen does this or Anthony Watts, but at least in the case of James Hansen there was a manuscript available.
In case you would like to hear whether I see Anthony Watts as a person with a deep understanding of homogenization algorithms, what I mainly write about, whose unique expertise would likely improve my manuscripts, I think you know that answer and you seem to be mainly looking for a fight.

“…homogenization algorithms…”
Well, there’s your problem right there. You guys just love to ‘homogenize’ everything until you get the answers you want. Your problem is that the real world is debunking your homogenization nonsense.
And let me add to John Whitman’s question: it’s not just Anthony Watts who should be able to review drafts of your future climate focused papers before they go to a journal for submittal. Everyone reading this site should have the same opportunity.
But I doubt you will allow that. Because if you did, I suspect that your manuscript would never be published.

Victor Venema on December 18, 2015 at 9:36 am
“John Whitman, I could not review the draft of Watts et al. (2015) because only the press release/blog post and poster were made available. In general, I do not think that it is a good idea to seek public attention for a study before it is published, especially when the result is likely to be contested, whether James Hansen does this or Anthony Watts, but at least in the case of James Hansen there was a manuscript available.
In case you would like to hear whether I see Anthony Watts as a person with a deep understanding of homogenization algorithms, what I mainly write about, whose unique expertise would likely improve my manuscripts, I think you know that answer and you seem to be mainly looking for a fight.”

Victor Venema,
Here is my understanding.
I think you said that the 2015 AGU Fall Meeting poster authors (Watts, Evans, Christy, Nielsen-Gammon) have insufficient professional standing/merit to receive from you what I called a “reasonable professional reciprocation, as a common courtesy, [that] would show good will”: the chance to see your future early-draft research prior to submittal to journals. Actually, you focused on only one team member, Watts; curiously, you did not address the balanced team of which he is an essential, uniquely contributing part.
You appear to say no, even though you were allowed the opportunity to see an earlier, pre-2015 draft of the research; that was the draft you identified in your sentence “Glad to hear my previous review was ‘highly useful’”.
I had hoped to understand more clearly your personal view of what is a reasonable basis for a professional attitude in the matter, not to argue with you. I am now much clearer about the premises that inform your professional attitude. Thank you.
John

Victor Venema: “Glad to hear my previous review was ‘highly useful’.
Hopefully my new review is also helpful.”
Complete with accusations that Anthony Watts is a “science denier”, I note.
After visiting your blog, I felt I needed to scrub myself with bleach.

My dear VeeV! Yes, highly useful, indeed. For the both of us, I’d like to think. Please feel welcome in this forum. Speaking strictly personally (and having no dog in the fight), I am quite unconcerned with whom you consort. Besides, think of all the good times we had there.
Well, to take it up from last time around, as you can see, there appear to be no untoward step changes in our graph. The bumps track each other rather well, if I do say so. First, we see a match (before MMTS conversion occurs — and we all know what they say about those CRS jobs, and it’s lamentably true to the [T]max), then an increasing divergence. Just like I said. So if we got it wrong, it ain’t that.
So I entreat you again: Your course is clear. All you gotta do to salvage things is to do pairwise — not with just any old nearby station, but with stations of similar ratings (Class 1\2-to-Class 1\2 and Class 3\4\5-to-Class 3\4\5).
You can dustbin the baddies. Or at least apply (or infer, if GHCN metadata is as bad as I suspect) a microsite adjustment. A whopping big downward one, that is. Then split all the jumps you like. After all, if you must smear the homog-sauce all over the main course, you may as well do it right.
So come down a few floors and join the proles. Be part of the solution, not part of the problem. Why be a horrible warning when you have such a prime opportunity to be a shining example?
Be the man who saved the GHCN. Who knows, maybe you’ll win the Nobel Prize (for Science, not the Beauty one like in 2007). Or at least earn a spot in the Deniers’ Hall of Shame.
Besides, if you don’t, somebody else surely will. There is no getting the toothpaste back in the tube. Too late in the day for that. And, like the old jingoism jingle goes, “You got the men, you got the guns, and you got the money, too.”
It could be the beginning of a beautiful trendship.

I think this proves beyond a doubt that ground based thermometers should no longer be used. We have satellites that cover the ENTIRE globe AND don’t need continuous (and sometimes retroactive?) adjustments that are subject to a personal bias, as has been seen lately! Satellites only, from now on!


Why, no, they don’t, do they? They measure LT. And, as we know, LT trend is supposed to track higher than surface, anyway, at least during a warming phase. “Basic Physics” and all that.
Except when it doesn’t, of course.
Which means, as it stands, both can’t be right. And I think we both have a pretty good idea, at this stage in the game, which one can’t be right.

Probably some, but I don’t think a whole lot. We are only looking 100m away from each station, anyway. More often only 10m. Even 5m or 3m.
Of course, a mesosite delta can affect trend, but we don’t think that’s weighing in much, here.

I note on NCEI/NOAA an anticipatory release prior to Anthony’s talk telling everyone the US surface stations network is reliable and properly adjusted. They gave as their reference the old Menne paper that had been rushed out in 2010.
“A recent study conducted by scientists at NOAA’s National Centers for Environmental Information found no evidence that the U.S. temperature trend is inflated by poor siting of stations that comprise the US Historical Climatology Network (USHCN).” http://www.ncdc.noaa.gov/monitoring-references/faq/temperature-monitoring.php
I believe the work of Anthony et al has resulted in increased employment at NOAA-NCEI and has given them their work orders over the past few years. Some spin doctors were added as well, I’m sure.

Such an ‘anticipatory release’ telling everyone ‘everything is fine’ can backfire badly when it is contradicted within a few days by a study with p < .01 significant results.
Frankly, now the smog has cleared from Paris COP21, you'd think the team would lighten up.

Most studies are unable even to beat Standard Error overlap. That’s only ~70% confidence after the dust clears. And that’s only the flashy external error bars. Who knows what mishmash of internal uncertainties even goes into producing those nice, thin, crisp external bars? (Yes, homogenization monster, I am looking at you.)

“We are submitting this to publication in a well respected journal. No, I won’t say which one because we don’t need any attempts at journal gate-keeping like we saw in the Climategate emails.”
Given the importance of getting the numbers right, and the effort you and your co-authors have put into this, it is essential that this project go through peer review. If I were an editor of a journal you sent this to, I’d likely push hard for publication even if peer reviewers were somewhat negative (if the study is clearly shown in review to be unpublishable, then of course there is nothing one can do). But this is one thing you’ve earned from your efforts to attack the science: a bit of inoculation. I would rather see the study clearly aired and addressed publicly, fairly, than killed in committee.
Sou and Variable Variability raise important questions about your study, which I assume you will address in the submitted article. Also, I assume your submitted paper will address issues about this sort of thing raised by Menne et al. (2010). Also, there may be two uphill battles for the relevancy of this work. One is the whole idea of adjusting data. If your work is correct, the data will have to undergo an adjustment, but the “deniers” (if I may use that term here) have been telling the “alarmists” that they are being bad boys for adjusting the data for a bit too long to let that suggestion go uncommented on. The second is the relevancy of the US, which tends to buck the trend in global surface temperatures over this particular period of time (AGW is a multi-decadal long term trend), to the overall picture of global warming. So you should probably try to address those issues in your paper too, I suppose.
Good luck, and may Reviewer Three be swift and kind.

Gregladen,
On what logical basis do you make the claim that “it is essential that this project go through peer review”? Accurate, honest peer review only means that some unbiased scientists (hopefully) read the paper and didn’t find anything wrong with its conclusions based on its methodology. It does not work like some kind of scientific seal of “truth”. Peer review should never be thought to mean that all of the reviewers believe and agree with the conclusions of the paper in question. Either Anthony et al make valid conclusions based upon empirical evidence, or they do not. To say they deserve “a bit of inoculation” makes it sound like they deserve preferential treatment, which has no place in “science” at all.
Anthony can and will publish this paper, whether or not the journal they are currently planning to submit it to accepts it. And it will be clearly aired and addressed publicly and fairly… and unfairly.
Some questions: You state- ” ‘deniers’ have been telling the ‘alarmists’ that they are being bad boys for adjusting the data”.
1) did you mean ALL deniers? Or just some? In particular have Anthony et al done so? If Anthony et al do not have a history of rejecting all adjustments to all data, then it would be silly to attack them as if they did.
2) Were ALL complaints from “deniers” about adjusting data based on flawed logic/reason or were some complaints about adjusting data based on logical and reasonable concerns? Because if one believes that there are absolutely no logical/reasonable arguments against adjusting data, it would make one doubly illogical/irrational to use someone else’s obviously irrational/illogical arguments against Anthony et al. Right?
Neither the uphill battle nor any campaign against this paper has any effect on its “relevancy” at all.

On what logical basis do you make the claim that “it is essential that this project go through peer review”?
Oh, it is. It will. With a healthy side of independent review. I look forward to it. Gotta get it past the ‘skins, you know.

Ah, that pal review thing was a loser from the get-go. See what it got them. A whole passel of papers competing for space to fall flat on, that’s what. Sure, you can fiddle peer review — but you can’t fiddle independent review.
When your teacher told you that cheating was only cheating yourself, well, he got that one right.

I hope you are being sarcastic… Not sure though. In case you are not…
Sea level rise has not accelerated in the ‘CO2 era’.
Oceans have warmed a tiny fraction of a degree, and only if you use the adjusted data.
Ice sheets have been shrinking since the end of ‘the little ice age’. And the Antarctic ice sheet is actually growing (as shown in a recent NASA study).
Declining Arctic sea ice has somewhat stabilized. The reduction was mostly due to wind driving ice out of the Arctic, rather than unusual warmth. And the fact is, the Arctic ice has little effect on anything. The albedo effect is low due to the very high latitude (as compared to the growing Antarctic ice, which is at much lower latitude).
Glaciers have been retreating since the ‘little ice age’, this is nothing new or catastrophic.
What extreme events are we talking about here? Hurricanes and ACE show no trend. Tornadoes show no trend, global drought shows no trend. You will have to back that one up a bit if you want to be taken seriously.
Ocean acidification? The oceans have neutralized by 0.1 or 0.2 on the pH scale, becoming somewhat more neutral, not acidic. This is truly a non issue.
Decreased snow cover? Seriously, have you looked at the data? http://climate.rutgers.edu/snowcover/chart_anom.php?ui_set=0&ui_region=nhland&ui_month=11
Rutgers shows that since the late 1980s snow cover has been essentially trendless (maybe slightly up) and hovering right around the zero anomaly.

“The oceans have neutralized by 0.1 or 0.2 on the pH scale, becoming somewhat more neutral, not acidic. This is truly a non issue”
The Canadian marine authorities say the seas are stable at between 7.5 and 8.5.

@ Moyer (0825 today):
1. Prove with evidence that each of those phenomena is actually occurring to a degree of significance making what we “see” of them worth noticing.
2. Prove with evidence that human CO2 caused any of those phenomena.

From the head post: When the journal article publishes, we’ll make all of the data, code, and methods available so that the study is entirely replicable. (emphasis mine)
This is entirely proper. But I would like to push back on criticisms that until the data, code, and methods are published, the results are NOT replicable.
We are not talking about one-time experimental results. We are talking about a different way to analyze historical government, public-domain records, together with microsite issues that can be gathered from public documents, Google and USGS satellite images, and a generalized concept of “What if we use a subset of stations with few moves and record complications?”
There are no barriers to performing the work yourselves with data that might be on your shelves. Indeed, scientific replication does not mean using the same data, same code, and same methods — that only repeats mistakes. Scientific replication is about taking the concepts and reapplying them with similar but different data and similar but different methods.

But I would like to push back on criticisms that until the data, code, and methods are published, the results are NOT replicable.
Well sure. But twice burnt, twice shy. So you-all will have to wait. We beg your indulgence in this.

Anthony Watts wrote: “5. We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.”
I have never quite understood the NCDC’s description of its Station History Adjustment Program. They say this here:
“Application of the Station History Adjustment Procedure (yellow line) resulted in an average increase in US temperatures, especially from 1950 to 1980. During this time, many sites were relocated from city locations to airports and from roof tops to grassy areas. This often resulted in cooler readings than were observed at the previous sites. When adjustments were applied to correct for these artificial changes, average US temperature anomalies were cooler in the first half of the 20th century and effectively warmed throughout the later half. ”
Not being a climate scientist, that reads to me as if surface stations are moved to cooler locations and post-move temperatures are adjusted upwards while pre-move temperatures are adjusted downwards.
To generally have temps adjusted downwards in the first half of the 20th century, they must have concluded that UHI was a problem before 1950, and to have generally adjusted temps upwards after 1950, they must have concluded that UHI was no longer a factor. This flies in the face of energy use, urbanisation and population increases.
If you then use those moved surface station records in the homogenisation process I could see why the minority of good quality stations end up getting adjusted upwards – they look like the outliers even though they are the best quality records.
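That intuition can be sketched with a deliberately naive toy calculation. The trend values below are invented, and this is only a stand-in for the general idea of neighbour-based adjustment, not NOAA’s actual pairwise homogenization algorithm:

```python
from statistics import fmean

# Hypothetical 30-year trends in degC/decade: one well-sited ("clean")
# station surrounded by three stations sharing a spurious warm trend.
trends = {"clean": 0.15, "n1": 0.32, "n2": 0.30, "n3": 0.35}

neighbour_mean = fmean(v for k, v in trends.items() if k != "clean")

# Naive neighbour-based "correction": nudge the apparent outlier halfway
# toward its neighbours. The clean record moves toward the noisy majority.
adjusted_clean = (trends["clean"] + neighbour_mean) / 2
print(round(neighbour_mean, 3), round(adjusted_clean, 3))
```

If the majority of neighbours are poorly sited, the minority good record is the one that gets “corrected” — which is exactly the concern raised above.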

Not “we,” not I, James at 48. YOU (and, yes several others, here, but there are many of us who are not “luke warmistas” — not — at — all).
“Luke-warmers” ASSUME (gratis — based on speculation extrapolated from the properties of CO2 in a highly controlled laboratory setting utterly unlike the climate system called “earth”) that human CO2 causes significant warming of “global” temperature. That is, it is AGW-lite.
They have no quantifiable evidence for this assumption.
It just feels good to them to take a middle-of-the-road stance. It has much more to do with personality (or professional peer concerns) than with data.

I rarely disagree with you Janice, and I suspect you and I don’t really agree on this topic either. I only disagree with how/what you said here:
“Luke-warmers” ASSUME (gratis — based on speculation extrapolated from the properties of CO2 in a highly controlled laboratory setting utterly unlike the climate system called “earth”) that human CO2 causes significant warming of “global” temperature. That is, it is AGW-lite.”
I believe that definition belongs to the vast majority of AGWers outright. AGW = the belief (no matter how one arrived at that belief) that human CO2 causes significant warming of global temperatures. I think “luke warmers” would mean something more in the middle… that human CO2 probably warms the Earth, but not a lot, or that it can but has not yet, etc.
I don’t think that you and I can logically presume to know what it is that each and every “luke warmer” assumes. We can’t even really presume to know what James at 48 defines as a “luke warmista”.
Hugs my sistah friend!

Hi, Aphan,
I think we DO agree. The key word I think we are tripping over is “significant.” I was using it (I hope) in a technical sense, to mean: of any statistically meaningful driving causative effect.
Re: the generalization about lukewarmers, I may, indeed, be incorrect about some, but all whom I have seen commenting on WUWT have asserted the laboratory properties of CO2 (extrapolated without quantifiable evidence to the earth climate system) often along with a “just feels right to give it some effect” / “probably warms…” kind of reasoning.
Thanks for valuing me enough to communicate honestly with me!
Your WUWT pal,
Janice
P.S. Even if we do disagree — THAT’S OKAY! #(:))

— not — at — all
I’m always sad to hear that. Lindzen is. Christy is. Judge Judy is. Spencer has made a positive crusade out of refuting the Sky Dragon Slayers. Heck, The Rev won’t even allow SDS posts here, anymore.
Dunno about those outside the community, but on the inside, ‘specially when it comes to TCR and ECS, we’re the New Consensus, all 95% of us.

Of interest here is that going back merely 30 years resulted in 808 Weather Stations being dropped due to poor quality, poor location, or having been moved.
Try going back 60 years, 90 years, 120 years, the lengths of time that those involved with the Hockey Stick did.
The number of continuous Weather Stations drops to the point of being like that lone Bristlecone Pine tree: meaningless when it comes to science and unconscionable when it comes to World policies.

Of interest here is that going back merely 30 years resulted in 808 Weather Stations being dropped due to poor quality, poor location, or having been moved.
Oh, we keep the ones with poor microsite. We only drop for moves, changes in rating, and significant TOBS bias.
Out of the 410 remaining, only 92 are Class 1\2, and I anticipate a bit of argument over that, too.

— “we keep the ones with poor microsite”
Thanks for clarification.
— “only 92 are Class 1\2”
Which calls into question AGW Climatologists using Proxies, when there would be a very limited number of Class 1 & 2 Weather Stations over a sufficient window in time to calibrate Proxies with sufficient accuracy.

That and worse. And, anyway, if you don’t account for microsite, your trends are too high, even without the homogenization bomb “correction” making it even worse. So you are doing your pairwise with mostly bum stations.

The original Hockey Stick paper went back to AD 1400 (roughly 600+ years) and the second, an update to the first, went back approximately 2,000 years. The United States Weather Service, formerly known as the United States Weather Bureau, has only existed since 1890, so it cannot go further back than that. No rational mind would assume anything outside of what can be reasonably determined by the evidence anyway.

Not Half Enough
The “globe” is warming half of what we’re told
It’s such a shame for I don’t like the cold,
Seems the temperature has been trending flat
See 3 W’s @ Watts Up With That:
While climateers pore over goats’ entrails
Realists did not believe their fairy tales,
As scamsters pushed their exaggerations
For one world rule by United Nations.
Brave volunteers went and checked the gauges
Cries of thermageddon now assuages,
Five hundred million for each degree
For non existent warming – that’s the fee;
But if it’s only half of what they say
We will only have half of it to pay,
Please tell the Pope and reverend Barrack
Stop the panic and send our money back.
So thank you Mr Watts and your great team
Your cheque is in the post – well you can dream,
Nobel prizes are just around the bend
With professorships and a gross stipend;
Then I awoke to the sound of silence
Apart from the warmists threats of violence,
So let the doomsters cry “the end is nigh”
Real seekers after truth don’t need to scry.
PAH 18/12/15

DB
The dirty underbelly of investment banking is starting to get a little antsy that rebate-supported alternative energy is running out of time to become “as good as fossils”. Some risk managers are advising going long coal and oil as a hedge against overinflated expectations for alternative energy.
Anthony’s article is good “cover” timing, as the folks who recommended alternative energy plays don’t want to be the ones to admit they didn’t know what they were talking about, so they are looking for a scapegoat … just in case.

(Note: “Buster Brown” is the latest fake screen name for ‘David Socrates’, ‘Brian G Valentine’, ‘Joel D. Jackson’, ‘beckleybud’, ‘Edward Richardson’, ‘H Grouse’, and about twenty others. The same person is also an identity thief who has stolen legitimate commenters’ names. Therefore, all the time and effort he spent on writing 300 comments under the fake “BusterBrown” name, many of them quite long, are wasted because I am deleting them wholesale. ~mod.)

Buster Bluster makes zero sense once again, and as usual. Syracuse has nothing to do with whether lawyers — or Mr. Bluster — can be trusted. And of course there are no degrees in climate alarmism, because alarmists have no credibility. Note that Bluster is a climate alarmist.
In the climate field scientific skeptics rule, and Bluster is no skeptic. He’s just a site pest who doesn’t understand the Null Hypothesis.
Taylor, on the other hand, has some kind of relationship with Heartland in addition to Forbes. Good for him; Heartland does more to promote honest science on a shoestring budget than the multi-$millions the .edu rainmakers bring in for promoting their DAGW hoax.

The histrionics over Heartland are hilarious considering their AGW outlay is orders of magnitude smaller than what outfits like Sierra and Greenpeace spend promoting their AGW pseudoscience. Hell, the alarmists illegally spend more taxpayer dollars on AGW promotion (as NSF and EPA have been caught doing) than Heartland spends legally promoting skeptics with their own damn money.

The basic point of significantly increased trends in UHI-corrupted station records would be better made with far longer linear regressions than 30 years. At that computational interval, 60-year natural cycles produce very strong oscillations that have nothing to do with any secular trend. While the availability of vetted century-long records is reasonably good in the USA, it is quite limited in the rest of the world. That’s what makes estimation of the historical global average temperature challenging.
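The effect of short windows can be illustrated with a toy series (all numbers hypothetical): a 60-year sinusoidal cycle riding on a small secular trend throws a 30-year least-squares slope well off the underlying trend, while a century-long window lands much closer.

```python
import math

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1900, 2016))
true_trend = 0.005  # degC/yr secular trend (illustrative)
# 0.2 degC-amplitude, 60-year "natural cycle" on top of the trend
temps = [true_trend * (y - 1900)
         + 0.2 * math.sin(2 * math.pi * (y - 1900) / 60) for y in years]

slope_30 = ols_slope(years[-30:], temps[-30:])   # 1986-2015 window
slope_full = ols_slope(years, temps)             # full 1900-2015 record
print(slope_30, slope_full)
```

With these made-up numbers, the 30-year slope sits much further from the 0.005 degC/yr input trend than the full-record slope does, purely because of where the cycle happens to fall in the window.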

Congratulations to Watts et al for an important addition to the climate jigsaw puzzle. Well done. One does wonder why NOAA etc have not undertaken this properly before, including and especially BEST! Very frustrating.

Richard
NOAA and their executive committee had promised the OIG back in 2010 that they would ground truth the monitoring stations. They also gave a weasel out that they didn’t have the funding to do THAT level of ground truthing.
Now that an NGO (WUWT) has done it for them, it has the potential to create a seismic shift in what is the government’s role vs what is the NGO’s role.
Big ole opportunity to establish new terms of interaction.
I’m excited to watch it play out.
This style of “cooperation” is not unique, but arguably seldom employed.
I went fishing today (deadens parts of my brain, I think, or maybe it just makes me not care) and for the life of me can’t remember the name of the ornithologist whose idea was rejected by his boss decades ago and who smartly extended the idea to his network of bird-watching buddies.

“they would ground truth the monitoring stations.”
As a person stuck with paying part of the tab for funding AGW Alarmism, I would expect our government to quit advocating any policy or position based on any temperature data that came from a station that didn’t meet compliant Class 1.
The idea that one can use temperature data with an error of 1.0C (for Class 3) or worse to make claims of AGW involving possible claimed changes of 0.1C is abhorrent. Even worse, AGW Climatologists coupled such bad data to even more questionable/inaccurate Proxy data to make Alarmist claims of AGW; that goes well beyond unconscionable.

Well done.
[Gracious nod.] One does wonder why NOAA etc have not undertaken this properly before, including and especially BEST! Very frustrating.
OPEN MESSAGE REGARDING OUR RIVALS:
BEST wants to, but we have not yet released our data. So don’t blame them. Plus, they have known for donkeys’ years that we are doing the [ratings]. So why should they re-invent the wheel when they can use/review/revise ours to their liking?
But they need our data for that. So blame them not.
What I want to make sure of is that Mosh and/or Zeke do not merely separate the compliant and non-compliant stations and average them. I need them to redo their pairwise comparisons, so that compliant is paired only with compliant and non-compliant is paired only with non-compliant.
What we need from BEST is an apples to apples and oranges to oranges comparison. Not pairwise pap.
Mosh? Zeke? Hear that, guys? Split all the jumps you like. But do it 1\2 to 1\2 and 3\4\5 to 3\4\5 style.
As for NOAA, they use our previous published Fall et al., and I would neither expect them to nor hold them responsible for not using our study before it has been published. What they do after that, we cannot know.
Look, NOAA is ubiquitously defensive, territorial, arrogant, sanctimonious, sometimes secretive, slow to let go their own preconceptions and, from personal experience, they can be sneaky. But they are not frauds, charlatans, or scam artists, and when I hear them characterized as such, I will not countenance it.
They are no worse than most, and better than some. Besides, they made the CRN, which is a thing of beauty. I surveyed upwards of a dozen of those and they are so Class 1 it hurts. Compatible equipment, triple-redundancy PRTs, hourly readings (bye-bye TOBS-bias). Beauty to bring tears to a man’s eyes. Makes you think America is a big country all of a sudden. A good long-term surface record will emerge from that.
And believe me, I have rolled in the mud with the numbers. I did not use R — I put all this stuff into Excel piece by piece and reviewed it station by station. No black boxes. Every calc ran through my hands. I have been immersed in the results of their adjustment procedures. I am a wargame designer/developer. I know what they did and how they made this error and why they thought homogenization worked.
It is not fraud. Not fraud. It is an error which we partially made in Fall et al., and we made omissions in both that paper and our 2012 release that we even acknowledged in the pre-release, but did not address.
If you insult and mischaracterize their errors as fraud, then you only justify their doing the same for our errors, past, present and future. I must insist that the readers here consider that they must give NOAA and BEST (etc.) every bit as much slack as we require them to grant us. Errors are allowed in science. Theirs. Ours. Even yours.

While I fully appreciate your generous fair-mindedness regarding persons you say are “ubiquitously defensive, territorial, arrogant, sanctimonious, sometimes secretive, slow to let go their own preconceptions and, from personal experience, […] sneaky”, the question isn’t whether they are outright charlatans but whether they are minimally fit for public trust as civil servants. Particularly in the sciences.
That’s a considerably higher standard than your entirely apt description of ordinary politicians who are notoriously and wisely trusted by no one.

evanmjones: “But they are not frauds, charlatans, or scam artists, and I will not countenance it.”
Really…
So what do you call this?;
mknormal,yyy,timey,refperiod=[1881,1940]
;
; Apply a VERY ARTIFICAL correction for decline!!
;
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
2.6,2.6,2.6]*0.75 ; fudge factor
(...)
;
; APPLY ARTIFICIAL CORRECTION
;
yearlyadj=interpol(valadj,yrloc,x)
densall=densall+yearlyadj
FOIA\documents\osborn-tree6\briffa_sep98_d.pro
Or how about “Mike’s ‘Nature’ trick”?

“the old wooden box Cotton Region Shelter (CRS) which has a warm bias mainly due to [paint] and maintenance issues. The MMTS gill shield is a superior exposure system that prevents bias from daytime short-wave and nighttime long-wave thermal radiation.”
Published calibration experiments show that even well-maintained CRS shelters suffer a warm bias that varies daily and regionally with insolation and wind speed.
Likewise, the MMTS shelters, though better than the CRS, show systematic biases both day and night — night bias being significantly less.
The net finding is that even Leroy class 1 sites will show systematic biases that will put a permanent and significant uncertainty (~±0.4 C for CRS; ~±0.3 C for MMTS) into any temperature measurement, and that cannot be assumed to decrement away in any large measurement average.
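The distinction being drawn here, that a shared systematic bias does not shrink under averaging the way random noise does, can be sketched with made-up numbers:

```python
import random
import statistics

random.seed(42)

true_temp = 15.0
systematic_bias = 0.3   # illustrative shared shelter bias, degC
noise_sd = 0.5          # illustrative random measurement noise, degC

readings = [true_temp + systematic_bias + random.gauss(0, noise_sd)
            for _ in range(10000)]

# Averaging 10,000 readings shrinks the random part toward zero,
# but the mean error stays pinned near the +0.3 degC systematic bias.
mean_err = statistics.fmean(readings) - true_temp
print(round(mean_err, 3))
```

No amount of averaging removes the shared offset; only a calibration against an independent reference can.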

Them CRS units carry their own heat sinks around on their shoulders. Plays havoc with Tmax. Even bumps up Tmin quite a bit. Look at the data. Sticks out like a fish in a tree.
Sure, they have that old-world aesthetic charm. But that’s all they’re good for.

I’ll add that it’s not the MMTS units that are going wrong. It’s the CRS boxes that have always been going wrong. For purposes of this paper, we add the jumps for conversion (not the pairwise), but what the job really requires is not jacking the MMTS trend up, but squeezing the CRS trends down. Even if one keeps the offset adjustment, and I’m not yet convinced that offset is even correct.

The MMTS shields are subject to systematic error, too. That’s been shown in several very careful calibration experiments. One of them was discussed in terms of the global average temperature, here (869.8 KB pdf)
Only the new aspirated CRN PRTs are relatively free of systematic error.

Buster
I’ll temporarily pretend to be whatever you need (almost) if it helps with your cognitive dissonance.
How about a priest (I think there’s only one kind) and missionary who offers you absolution if you give me 18% of your income and permission to establish climate-saving alternative energy schemes on your property.
I’ll even throw in lessons on safe spaces and how to crush the microaggressor … for the kids .. always for the kids.

Isn’t the main point that we are looking at less than 1 degree C increase since 1880?
Many of the temperature readings have been adjusted and there is uncertainty as to their accuracy. Measuring temperature as variation from an average, rather than plotting the baseline temperature, creates the illusion of significant increases. This has happened because of the newness of climate science and the lack of defined procedures.
Does a world average temperature have any real meaning at all?
This possible small increase in temperature has not resulted in any increase in natural disasters so why are we about to spend so much money on a misdirected effort to stop further warming?
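The anomaly point can be made concrete (the temperatures below are purely illustrative): anomalies are just departures from a base-period mean, so a few tenths of a degree fills the whole axis even though the absolute readings barely move.

```python
# Hypothetical absolute annual means, degC
temps = {1880: 14.7, 1950: 14.8, 2015: 15.2}

# Use the 1950 value as a stand-in for a base-period mean
base = temps[1950]
anomalies = {yr: round(t - base, 2) for yr, t in temps.items()}
print(anomalies)
```

Plotted on an anomaly axis spanning a degree or so, the same numbers look dramatic; against the ~15 degC absolute scale they are barely visible.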

+ 10
Indeed, a Daubert fan might say …
“the increase in the measured attribute is within the known rate of natural variability, therefore the expert has not presented information to claim a causal relationship”
Then, a more highly paid and connected attorney pleading his case before an equally highly paid and predisposed judge would say …
“your honor, it is not the place of this court to put the potentially affected populace at risk over such an important hypothesis. we encourage the court to err on the side of caution in making its decision on the validity of this expertise”

Didn’t get an answer to this, maybe because I posted at 1AM:
Evan:
Did you take the arithmetic mean for the summary statistics, or did you take a geographically normalized mean such as NOAA, BEST, etc. use?
Because that could be a large source of the difference as well. It would also be an apples-to-oranges comparison. Did you take the same kind of mean for the NOAA stations as well?
thanks. And thanks for replying to a pile of posts here, this has been one of the more fascinating threads in a while
Peter
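For what it is worth, the difference between the two kinds of mean Peter asks about can be sketched with hypothetical station anomalies, where one grid cell is oversampled:

```python
from collections import defaultdict
from statistics import fmean

# (grid_cell, anomaly) pairs; cell "A" has four stations, cell "B" one
stations = [("A", 1.0), ("A", 1.1), ("A", 0.9), ("A", 1.0), ("B", 0.2)]

# Plain arithmetic mean: dominated by the oversampled cell
arithmetic = fmean(v for _, v in stations)

# Gridded mean: average within each cell first, then across cells
cells = defaultdict(list)
for cell, v in stations:
    cells[cell].append(v)
gridded = fmean(fmean(vs) for vs in cells.values())

print(round(arithmetic, 2), round(gridded, 2))
```

This is only the simplest equal-area caricature of what NOAA or BEST actually do, but it shows why the choice of mean matters for a CONUS-wide comparison.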

Question for: Anthony Watts or Evan Jones or John Christy or John Nielsen-Gammon:
In your digging, research, work …
Was there any subset of the 1,218 Weather Stations that spanned more than 150 years, which might meet less restrictive criteria than those set forth in the study?
(Looking at what accuracy, at best, “AGW Climatologists”, could have used to calibrate their Proxies with.)
If so, could you make those stations known when you make the data public?
(Thanks, either way.)

Was there any subset of the 1,218 Weather Stations that spanned more than 150 years, which might meet less restrictive criteria than those set forth in the study?
Not a prayer. Unless you go in for inferring metadata. Even then, not likely.

Oh, yeah? Tell that to me when I’m happily snapping away, lying on my back on a sloping ice-covered rooftop with no retaining wall, five feet from a sheer 40-foot drop to the concrete.
But be prepared to duck; you might rate a snowball or two.

You will find that a disproportionate number of USHCN Class 1 stations are CRS units. As a result, their trends are even a little higher than the Class 2s. Just shows what lugging its own personal heat sink around does to a sensor.

So, in the future, when the projected warming hasn’t materialized (yet again), the scientists will have to “adjust” today’s data back down to show warming then.
It’s like the data equivalent of whack-a-mole. Pretty soon, NASA and the NOAA will need an entire “Administration of Adjustments” with sub-departments of staff just to keep track of all the changes.

Reblogged this on gottadobetterthanthis and commented:
During my years of nuclear-related research, my experience convinced me that we cannot measure temperature with a reliable accuracy better than 1.5°C, not under any natural conditions, hardly even under ideal conditions in the lab. Given what Anthony is showing, it is really ridiculous, absurdly silly, to talk about earth warming with the data we have so far.
Sure, we have other evidence, circumstantial evidence, that earth is warming, but our best paleo evidence indicates it has all happened before, even at faster rates, though warmists refuse to acknowledge that obvious fact.
My bottom line is with the people. Acquiescing to the alarmists enslaves people, condemns the world’s poorest peoples to continued starvation and deprivation, and kills people today with all that goes on in the name of “Green.”
Make it personal: Grandma cannot pay the heat bill, and eat, and buy medicine. Left-leaners blame the government and right-leaners, but the fact is, there is no excuse for the heat bill factoring into this situation. In the West, we can have, and should have, such abundant energy that heating bills are trivial. Instead, Grandma decides it isn’t worth it, and she spends her last night under the stars sleeping in her garden as the frost takes her.
More personal? Think cold, dark operating room with only enough emergency power to keep the vitals monitors running while the doctors open your daughter for a critical, life-or-death surgery, simply because the coal-fired power plants have all been shut down and the wind just doesn’t happen to be blowing.
Remember:
Wind blows, but windmills suck!
Also:
Cold kills. Warmer is better!

“Given what Anthony is showing, it is really ridiculous, absurdly silly, to talk about earth warming with the data we have so far. Sure, we have other evidence, circumstantial evidence, that earth is warming, but our best paleo evidence indicates it has all happened before, even at faster rates, though warmist refuse to acknowledge that obvious fact.”

The heat-sink hypothesis is an unphysical one. This was pointed out to Evan Jones over a year ago in discussion at Stoat’s. The press release makes no mention of having found a physical explanation. “Heat-sink” in this context is merely a euphemism for: We haven’t found a physical explanation.
Anyone that reflects on what a heat-sink does and how they’re used quickly realizes this is bass ackwards. Heat-sinks reduce trends, not exaggerate them. We don’t put heat-sinks around CPUs in our computers because we want them to run hotter.
I find this whole explanation – or lack of one – especially disappointing because Evan assured us this was easily figured out by their co-author physicist.
First he said, “Our physicist co-author thinks this factor is easy to nail and he does know about the Hubbard paper.”
Later he said, “We will, of course, be hitting it from the physics angle, as well. So it won’t be a statistics-only study. It will be backed by a mechanism that explains why and how (and to what extent) this occurs.”
OTOH, there is a known component of the measuring system that *does* exaggerate highs *and* exaggerate lows – the Dale/Vishay 1140 thermistor used in the MMTS stations. This was documented by Hubbard and Lin, Air Temperature Comparison between the MMTS and the USCRN Temperature Systems (2004).
Since the Menne MMTS Bias adjustments were based on all stations, regardless of microsite, it’s easy to envisage that Menne’s MMTS adjustment isn’t entirely applicable to a subset of the stations. The Hubbard MMTS Bias adjustment is instrument specific – regardless of location or microsite – since it’s just a description of the physical response curve of the sensor itself. But Menne relies on pairwise homogenization while Hubbard & Lin did a year-long side-by-side field study comparison.
While there is nothing wrong with homogenization per se, using the average result from a large group of stations and expecting it to be applicable to all subsets is a leap of faith. It is also unnecessary considering the Hubbard MMTS Bias I adjustment is available. If nothing else, obtaining the same results also using Hubbard would make the results more robust and eliminate the MMTS sensor as a potential physical explanation.

May I point out that the first occurrence of “heat-sink”, much less “heat-sink hypothesis”, is in your post. So, is this a red herring? If not, could you provide a link to the “heat-sink hypothesis” to which you object?
Perhaps even the link to that discussion over at “Stoat”. Otherwise I feel this comment is a waste of time.

You may be a tad confused on this point. The Menne et al approach applied a custom adjustment to each MMTS transition based on the difference between that station and nearby stations that did not have a MMTS transition. The approach used by Watts et al (as far as I know) applies a constant MMTS adjustment to all stations.
MMTS is somewhat complicated. There is a clear max cooling bias that shows up in most cases, and a min warming bias that shows up in some cases but not all. The max cooling bias is instrumental, but my personal suspicion is that the min warming bias is mostly due to station site changes (since MMTS sensors require an electric hookup, they were often set up closer to buildings than the CRSes they replaced). One thing I’d like to use Anthony and Evan’s work for is to help identify stations that did not move when transitioning to MMTS to help test this hypothesis. Might be a fun little paper in it.

Zeke – Yes, I understand the Menne approach, but that is a totally different approach than Hubbard & Lin’s description of the Dale/Vishay 1140 thermistor results. Hubbard and Lin’s results were based on co-located, side-by-side, field comparison. So their min bias is also instrumental – not due to site changes.
Menne describes the differences in results here: THE U.S. HISTORICAL CLIMATOLOGY NETWORK MONTHLY TEMPERATURE DATA, VERSION 2, where they write: ” As a result, the overall effect of the MMTS instrument change at all affected sites is substantially less than both the Quayle et al. (1991) and Hubbard and Lin (2006) estimates. However, the average effect of the statistically significant changes (−0.52°C for maximum temperatures and +0.37°C for minimum temperatures) is close to Hubbard and Lin’s (2006) results for sites with no coincident station move.”
Considering this is a press release with no reproducible method or data to work with, it’s possible I’m mistaken, but it’s hard to reconcile your interpretation with Evan’s statement: “We do this by applying the Menne (2009) offset jump to MMTS stations at the point of conversion (0.10c to Tmax, -0.025 to Tmin, and the average of the two to Tmean).”

Some comments. First, yes, we apply a constant offset. We do, however, apply it to the month of conversion for each station, so the effect on trend per station will vary widely.
It is a simplification, but not as simplistic as you might think.
Second, the MMTS Min warming adjustment is slight. Excluding it entirely would change the Tmean offset at the jump by only +0.0125C, which doesn’t stack up to much over three decades.
Third. We are not doing it exactly the way Dr. Menne does it. We are using Menne’s offset numbers as provided by Menne (2009). Menne now does a pairwise homogenization seven years front and back from the point of conversion. Having used that old dodge in wargame design myself (to “simulate” accuracy), all I could do was laugh and shake my head. Not so much because he did it, but because he actually appeared to believe it. And boy-oh-boy did it ever make that nasty CRS issue fade into the shadows, or what. So that’s what he does.
What we do is add the Menne (2009) offset at point of conversion.
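To make the point concrete that a constant offset applied at the month of conversion still produces widely varying trend effects per station, here is a rough Python sketch. It is not the paper’s code: the 360-month record and the conversion months are invented for illustration; only the Menne (2009) Tmax/Tmin jump values come from the discussion above.

```python
# Sketch (assumptions flagged above): a single step offset added at the
# month of conversion shifts a 30-year OLS trend by different amounts
# depending on when in the record the conversion happened.

def ols_trend_per_decade(y):
    """Ordinary least-squares slope of a monthly series, per decade."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return (num / den) * 120.0

TMEAN_JUMP = (0.10 + -0.025) / 2.0   # average of the Tmax and Tmin offsets

flat = [0.0] * 360                    # trendless baseline for clarity
results = {}
for conv in (60, 180, 300):           # early, middle, late conversion month
    series = [f + (TMEAN_JUMP if i >= conv else 0.0)
              for i, f in enumerate(flat)]
    results[conv] = ols_trend_per_decade(series)
    print(conv, round(results[conv], 4))
```

A mid-record conversion produces the largest trend shift, an early or late one much less, which is why the same jump yields different per-station trend effects.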

One thing I’d like to use Anthony and Evan’s work for is to help identify stations that did not move when transitioning to MMTS to help test this hypothesis.
Hullo, Zeke. Good question.
A few of them did move a bit, but most (by far) were placed in the same spot. You see quite a few old CRS units still in place with an MMTS right next to them. NOAA appears to use a major move as an opportunity to start afresh with an MMTS. But as those stations are considered to be perturbed, this is moot.
If the rating changes as the result of a localized move, then the station is considered to be perturbed and is dropped.
Also, USHCN metadata has greatly improved. They often record station moves by as little as 3 feet. If you want to examine that, look at NOAA metadata online (HOMR) and the surfacestations gallery, along with an active Google Earth.

Welcome to my world, guys. I will go on a bit.

The heat-sink hypothesis is an unphysical one. This was pointed out to Evan Jones over a year ago in discussion at Stoat’s. The press release makes no mention of having found a physical explanation. “Heat-sink” in this context is merely a euphemism for: We haven’t found a physical explanation.
And there I was, thinking it was a euphemism for, “Gosh, those trends sure average a heck of a lot higher when those houses and cementy things are near the sensor. Wow, look at those Tmin numbers. Well it seems pretty obvious why that is.”
As Dr. Leroy put it: the quality of observations cannot be ensured only by the use of high-quality instrumentation, but relies at least as much on the proper siting and maintenance of the instruments.
He refers to “heat sources”, writ large. We refine the observation to distinguish that which generates heat (“heat source”) from that which does not generate heat, but absorbs and re-radiates it (“heat sink”).
Well, anyway, you don’t seem to think much of the term, that’s obvious. Or we wouldn’t still be going on about it after all this time. Is it possible that what you find bothersome about all this is that the words “heat sink” sit so well on the tongue?
Dr. Leroy wasn’t looking at the trends when a station is exposed to a “heat source” (which, by his definition, includes sources and sinks), but at offset. What we do is use his rating system and then look at the trends of the stations thus rated. In your haste to remind me to stick with the trends, I fear you have strayed into the land of offsets a bit, yourself. Besides, being colder does not mean you are not warming faster, as the Arctic guys like to say.

Anyone that reflects on what a heat-sink does

What a heat sink does is reflect.

and how they’re used

Well, in greenhouses, they’re used to take the edge off Tmin and bump up Tmax. That’s the offset effect, anyway. You wouldn’t know how that would affect trend during a warming interval until you measure it, of course. You guys remind me of the story of the dude who got tossed out of the Aristotelian tribe for the crime of instigation to commit empiricism.

quickly realizes this is bass ackwards.

I recommend realizing a little slower.

Heat-sinks reduce trends, not exaggerate them. We don’t put heat-sinks around CPUs in our computers because we want them to run hotter.
You are talking offset. You need to be thinking trend. I could just leave it at that.
A CPU is a heat source. It is generating its own heat. It is the hottest thing in the room. A CPU is generally located in an enclosed space, and is likely not exposed to get much sun. So the heat sink is taking up energy generated from the computer — a closed and trendless system.
Placing a heat sink next to a computer when sitting outside on a sunny lawn is not going to cool it down. Both the sink and the computer are receiving radiation from both the sun and the surrounding atmosphere. The heat sink is absorbing more energy from the sun than it is from the CPU, then re-radiating some of it back towards the CPU, recorded only at Tmax and Tmin. Not to mention the general lack of nocturnal/diurnal variation of a room in a building. When is Tmin inside a closed, artificially controlled environment?
So if anything, the heat sink will be marginally increasing the heat of the CPU at either Tmax or Tmin, which are the only times the temperatures are recorded by USHCN. Not that this is much of a practical issue outside a closed room.

I find this whole explanation – or lack of one – especially disappointing because Evan assured us this was easily figured out by their co-author physicist.
I have no doubt that you do. I think I can feel your disappointment radiating off you at Tmin. We never managed to land him, unfortunately. We’ll have to get back to it.
Please note that I was being starkly open about our process, far more than any other paper I’ve seen. Perhaps too open. But the idea is to operate as much as possible in the open. That’s what we do.

First he said, “Our physicist co-author thinks this factor is easy to nail and he does know about the Hubbard paper.”
Well, that work hasn’t been done yet. It will have to wait for followup.

Later he said, “We will, of course, be hitting it from the physics angle, as well. So it won’t be a statistics-only study. It will be backed by a mechanism that explains why and how (and to what extent) this occurs.”
The best laid schemes of mice and men gang aft agley. We can (and do) describe the mechanism, but we are going to need someone to add in the formulas. We’ll address this in followup.

OTOH, there is a known component of the measuring system that *does* exaggerate highs *and* exaggerate lows – the Dale/Vishay 1140 thermistor used in the MMTS stations. This was documented by Hubbard and Lin, Air Temperature Comparison between the MMTS and the USCRN Temperature Systems (2004).
Groovy. We already add an MMTS adjustment offset. When we publish, I will supply a tool that will allow you to drop in whatever MMTS numbers you like better than ours. Either by formula or by swapping in a new MMTS-adj dataset.
Let us know when you do. We would find the results interesting.
But in any event, it won’t be enough of a bump to change things much over what we already did. Maybe 0.01C/decade on the outside.
And speaking of gluteal direction, all you guys think about is how to horsewhip the MMTSs in line with the CRSs. It never seems to occur to you that it’s the CRS units that are the actual problem in the first place — carrying your own personal heat sink around on your shoulders wherever you go will do that. Especially as the paint fades (net).
It’s the CRS units that are giving the spurious results. And, as the MMTS units were calibrated to the CRS units, I see little real justification even for adding in the offset jumps. Either that or the calibrators have some ‘splaining to do. But, being a swell guy, I’ll go along. For now.
It is possible that the offsets should remain — and don’t think I won’t be looking at pairwise to check. But it is glaringly obvious that the CRS trends, esp. Tmax are going to have to be adjusted down. Way down. And that has implications that are going to shake the chain all the way back to 1880.
I think it’s youse guys, not me, that have things reversed.

Since the Menne MMTS Bias adjustments were based on all stations, regardless of microsite, it’s easy to envisage that Menne’s MMTS adjustment isn’t entirely applicable to a subset of the stations. The Hubbard MMTS Bias adjustment is instrument specific – regardless of location or microsite – since it’s just a description of the physical response curve of the sensor itself. But Menne relies on pairwise homogenization while Hubbard & Lin did a year-long side-by-side field study comparison.
Just plug in Menne’s data. MMTS-adjustment-only data is available from NOAA if you care to do that. Or H&L. Besides, a little bigger or a little smaller offset isn’t going to matter here. What’s going to matter is the bad CRS bias. You are the ones looking at this backwards.

While there is nothing wrong with homogenization per se, using the average result from a large group of stations and expecting it to be applicable to all subsets is a leap of faith. It is also unnecessary considering the Hubbard MMTS Bias I adjustment is available. If nothing else, obtaining the same results also using Hubbard would make the results more robust and eliminate the MMTS sensor as a potential physical explanation.
There is nothing wrong with homogenization per se, if there is no systematic error in the data. Then it is kindly Uncle H. But when a systematic error is introduced to the data series, kindly Uncle H goes postal. This is a known thing.
Yet I see no reason you can’t sub in Hubbard’s data. You could even do it station by station. You can be provided with excel sheets that will enable this process when we publish. But even if the bump in trend is double ours, it’s not going to affect our results much.
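As a hedged sketch of that “sub in Hubbard’s data” idea (again, not the paper’s actual tool): drive the step adjustment from a replaceable offsets table, so the Menne (2009) values can be exchanged for an alternative set. The HUBBARD_LIN values below are placeholders for illustration, not the published Hubbard & Lin estimates, and the synthetic station series is invented.

```python
import random

# Placeholder offset tables: only MENNE_2009 comes from the thread above;
# HUBBARD_LIN is an illustrative stand-in, NOT published values.
MENNE_2009 = {"tmax": 0.10, "tmin": -0.025}
HUBBARD_LIN = {"tmax": 0.12, "tmin": -0.03}

def ols_trend_per_decade(y):
    """Ordinary least-squares slope of a monthly series, per decade."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return (num / den) * 120.0

def adjusted_trend(raw, conversion_month, offsets, element):
    """Trend after adding the chosen offset from the conversion month on."""
    series = [v + (offsets[element] if i >= conversion_month else 0.0)
              for i, v in enumerate(raw)]
    return ols_trend_per_decade(series)

random.seed(0)
raw = [0.0002 * i + random.gauss(0.0, 0.5) for i in range(360)]  # synthetic

t_menne = adjusted_trend(raw, 150, MENNE_2009, "tmax")
t_hl = adjusted_trend(raw, 150, HUBBARD_LIN, "tmax")
print(round(t_hl - t_menne, 4))  # trend difference from the bigger jump
```

With these placeholder numbers the two offset tables differ by about 0.01 C/decade in the resulting trend, in line with the “maybe 0.01C/decade on the outside” estimate above.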

In engineering, a heat source is at a higher temperature than a heat sink. Both are considered (in the ideal) to have unlimited capacity.
In the context of the paper, a “heat source” can supply energy not available from the environment. A heat sink cannot supply energy independent of the environment.
