Climate scientists versus climate data

I read with great irony recently that scientists are “frantically copying U.S. Climate data, fearing it might vanish under Trump” (e.g., Washington Post 13 December 2016). As a climate scientist formerly responsible for NOAA’s climate archive, I can report that the most critical issue in archival of climate data is actually scientists who are unwilling to formally archive and document their data. I spent the last decade cajoling climate scientists to archive their data and fully document the datasets. I established a climate data records program that was awarded a U.S. Department of Commerce Gold Medal in 2014 for visionary work in the acquisition, production, and preservation of climate data records (CDRs), which accurately describe the Earth’s changing environment.

The most serious example of a climate scientist not archiving or documenting a critical climate dataset was the study of Tom Karl et al. 2015 (hereafter referred to as the Karl study or K15), purporting to show no ‘hiatus’ in global warming in the 2000s (Federal scientists say there never was any global warming “pause”). The study drew criticism from other climate scientists, who disagreed with K15’s conclusion about the ‘hiatus.’ (Making sense of the early-2000s warming slowdown). The paper also drew the attention of the Chairman of the House Science Committee, Representative Lamar Smith, who questioned the timing of the report, which was issued just prior to the Obama Administration’s Clean Power Plan submission to the Paris Climate Conference in 2015.

In the following sections, I provide the details of how Mr. Karl failed to disclose critical information to NOAA, Science Magazine, and Chairman Smith regarding the datasets used in K15. I have extensive documentation that provides independent verification of the story below. I also provide my suggestions for how we might keep such a flagrant manipulation of scientific integrity guidelines and scientific publication standards from happening in the future. Finally, I provide some links to examples of what well documented CDRs look like that readers might contrast and compare with what Mr. Karl has provided.

Background

In 2013, prior to the Karl study, the National Climatic Data Center [NCDC, now the NOAA National Centers for Environmental Information (NCEI)] had just adopted much-improved processes for formal review of Climate Data Records, a process I formulated [link]. The land temperature dataset used in the Karl study had never been processed through the station adjustment software before, which led me to believe something was amiss. When I pressed the co-authors, they said they had decided not to archive the dataset, but did not defend the decision. One of the co-authors said there were ‘some decisions [he was] not happy with’. The data used in the K15 paper were made available only through a web site, not in machine-readable form, and lacked proper versioning and any notice that they were research rather than operational data. I was dumbstruck that Tom Karl, the NCEI Director in charge of NOAA’s climate data archive, would not follow the policy of his own Agency nor the guidelines in Science magazine for dataset archival and documentation.

I questioned another co-author about why they chose to use a 90% confidence threshold for evaluating the statistical significance of surface temperature trends, instead of the standard 95% threshold for significance — he also expressed reluctance and did not defend the decision. A NOAA NCEI supervisor remarked how it was eye-opening to watch Karl work the co-authors, mostly subtly but sometimes not, pushing choices to emphasize warming. Gradually, in the months after K15 came out, the evidence kept mounting that Tom Karl constantly had his ‘thumb on the scale’—in the documentation, scientific choices, and release of datasets—in an effort to discredit the notion of a global warming hiatus and rush to time the publication of the paper to influence national and international deliberations on climate policy.
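To see concretely what the choice of threshold does, here is a small illustrative calculation with synthetic numbers (not the K15 series): a weak trend can pass a 90% significance test while failing the standard 95% test.

```python
# Illustrative only: shows how the choice of confidence level changes
# whether a small warming trend is judged "significant".
# The data below are synthetic, not the K15 series.

def ols_slope_and_se(y):
    """Least-squares slope of y against 0..n-1, with its standard error."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    sxy = sum((x - xbar) * (yv - ybar) for x, yv in enumerate(y))
    slope = sxy / sxx
    resid = [yv - (ybar + slope * (x - xbar)) for x, yv in enumerate(y)]
    se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return slope, se

def significant(slope, se, z):
    """Two-sided test against a zero trend using a normal approximation."""
    return abs(slope) > z * se

# Synthetic dozen years of annual anomalies: weak trend plus alternating noise.
y = [0.01 * t + 0.05 * ((-1) ** t) for t in range(12)]
slope, se = ols_slope_and_se(y)

# Two-sided normal critical values: 1.645 for 90%, 1.960 for 95%.
at_90 = significant(slope, se, 1.645)
at_95 = significant(slope, se, 1.960)
```

For this series the trend is “significant” at 90% but not at 95% — exactly the kind of borderline case where the threshold choice decides the headline.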

Defining an Operational Climate Data Record

For nearly two decades, I’ve advocated that if climate datasets are to be used in important policy decisions, they must be fully documented, subject to software engineering management and improvement processes, and be discoverable and accessible to the public with rigorous information preservation standards. I was able to implement such policies, with the help of many colleagues, through the NOAA Climate Data Record policies (CDR) [link].

Once the CDR program was funded, beginning in 2007, I was able to put together a team and pursue my goals of operational processing of important climate data records emphasizing the processes required to transition research datasets into operations (known as R2O). Figure 1 summarizes the steps required to accomplish this transition in the key elements of software code, documentation, and data.

Unfortunately, the NCDC/NCEI surface temperature processing group was split on whether to adopt this process, with scientist Dr. Thomas C. Peterson (a co-author on K15, now retired from NOAA) vigorously opposing it. Tom Karl never required the surface temperature group to use the rigor of the CDR methodology, although a document was prepared identifying what parts of the surface temperature processing had to be improved to qualify as an operational CDR.

Tom Karl liked the maturity matrix so much that he modified the matrix categories so he could claim a number of NCEI products were “Examples of ‘Gold’ standard NCEI Products (Data Set Maturity Matrix Model Level 6).” See his NCEI overview presentation [ncei-overview-2015nov-2], which all NCEI employees were told to use, even though there had never been any maturity assessment of any of the products.

NCDC/NCEI surface temperature processing and archival

In the fall of 2012, the monthly temperature products issued by NCDC were incorrect for 3 months in a row [link]. As a result, the press releases and datasets had to be withdrawn and reissued. Dr. Mary Kicza, then the NESDIS Associate Administrator (the parent organization of NCDC/NCEI in NOAA), noted that these repeated errors reflected poorly on NOAA and required NCDC/NCEI to improve its software management processes so that such mistakes would be minimized in the future. Over the next several years, NCDC/NCEI had an incident report conducted to trace these errors and recommend corrective actions.

Following those and other recommendations, NCDC/NCEI began to implement new software management and process management procedures, adopting some of the elements of the CDR R2O process. In 2014 an NCDC/NCEI Science Council was formed to review new science activities and to review and approve new science products for operational release. A draft operational readiness review (ORR) was prepared and used for approval of all operational product releases; it was finalized and formally adopted in January 2015. Along with this process, a contractor who had worked at the CMMI Institute (CMMI, Capability Maturity Model Integration, is a software engineering process improvement training and appraisal program) was hired to improve software processes, with a focus on improvement and code rejuvenation of the surface temperature processing code, in particular the GHCN-M dataset.

The first NCDC/NCEI surface temperature software to be put through this rejuvenation was the pairwise homogeneity adjustment portion of processing for the GHCN-Mv4 beta release of October 2015. The incident report had found that there were unidentified coding errors in the GHCN-M processing that caused unpredictable results and different results every time code was run.

The generic flow of data used in processing of the NCDC/NCEI global temperature product suite is shown schematically in Figure 2. There are three steps to the processing, and two of the three steps are done separately for the ocean versus the land data. Step 1 is the compilation of observations, either from ocean sources or from land stations. Step 2 applies various adjustments to the data, including bias adjustments, and provides as output the adjusted and unadjusted data on a standard grid. Step 3 applies a spatial analysis technique (empirical orthogonal teleconnections, EOTs) to merge and smooth the ocean and land surface temperature fields and provide these merged fields as anomaly fields for ocean, land, and global temperatures. This is the product used in K15. Rigorous ORR for each of these steps in the global temperature processing began at NCDC in early 2014.

Figure 2. Generic data flow for NCDC/NCEI surface temperature products.
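To make Steps 2 and 3 concrete, here is a deliberately simplified sketch in Python. The grid boxes, station baselines, and cosine-latitude weighting are illustrative assumptions of my own for exposition, not NOAA’s code or its actual gridding scheme.

```python
# A toy sketch of the processing flow described above (not NOAA's code).
# Step 2 (simplified): anomaly = observation - station baseline, averaged
# per grid box. Step 3 (simplified): merge land and ocean boxes and take
# an area-weighted (cosine-latitude) global mean.

import math

def grid_anomalies(obs, baseline):
    """Step 2, simplified. obs: list of (box_id, temperature);
    baseline: dict box_id -> climatological baseline temperature."""
    boxes = {}
    for box, temp in obs:
        boxes.setdefault(box, []).append(temp - baseline[box])
    return {box: sum(v) / len(v) for box, v in boxes.items()}

def merge(land, ocean, lat_of_box):
    """Step 3, simplified: area-weighted global mean of the merged field.
    Where a box appears in both fields, the ocean value is used."""
    merged = dict(land)
    merged.update(ocean)
    wsum = vsum = 0.0
    for box, anom in merged.items():
        w = math.cos(math.radians(lat_of_box[box]))  # area proxy
        wsum += w
        vsum += w * anom
    return vsum / wsum
```

For example, two land readings of 15.2 and 15.4 against a 15.0 baseline give a box anomaly of 0.3, which is then weighted by the box’s latitude in the global mean.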

In K15, the authors describe that the land surface air temperature dataset included the GHCN-M station data and also the new ISTI (Integrated Surface Temperature Initiative) data that was run through the then operational GHCN-M bias correction and gridding program (i.e., Step 2 of land air temperature processing in Figure 2). They further indicated that this processing and subsequent corrections were ‘essentially the same as those used in GHCN-Monthly version 3’. This may have been the case; however, doing so failed to follow the process that had been initiated to ensure the quality and integrity of datasets at NCDC/NCEI.

The GHCN-M V4 beta was put through an ORR in October 2015; the presentation made it clear that any GHCN-M version using the ISTI dataset should, and would, be called version 4. This is confirmed by parsing the file name actually used on the FTP site for the K15 dataset [link] (NOTE: placing a non-machine-readable copy of a dataset on an FTP site does not constitute archiving a dataset). One file is named ‘box.12.adj.4.a.1.20150119’, where ‘adj’ indicates adjusted (passed through step 2 of the land processing) and ‘4.a.1’ means version 4 alpha run 1; the entire name indicates GHCN-M version 4a, run 1. That is, the folks who did the processing for K15 and saved the file actually used the correct naming and versioning, but K15 did not disclose this. Clearly labeling the dataset would have indicated this was a highly experimental early GHCN-M version 4 run rather than a routine, operational update. As such, according to NOAA scientific integrity guidelines, it would have required a disclaimer not to use the dataset for routine monitoring.
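For the curious, the naming convention just described can be decoded mechanically. The parser below simply encodes my reading of the fields (‘adj’ = adjusted, ‘4.a.1’ = version 4 alpha, run 1, trailing field = date stamp); it is an illustrative sketch, not a NOAA tool.

```python
# Decode a K15-style dataset file name such as 'box.12.adj.4.a.1.20150119'.
# Field meanings follow the interpretation given in the text; this is an
# illustrative parser, not an official naming specification.

def parse_k15_name(name):
    parts = name.split(".")
    # expected shape: box.<grid>.{adj|unadj}.<major>.<stage>.<run>.<yyyymmdd>
    if len(parts) != 7 or parts[0] != "box":
        raise ValueError("unexpected file name: " + name)
    stage = {"a": "alpha", "b": "beta"}.get(parts[4], parts[4])
    return {
        "grid": parts[1],
        "adjusted": parts[2] == "adj",
        "version": "{}-{}".format(parts[3], stage),
        "run": int(parts[5]),
        "date": parts[6],
    }
```

Run on the file name above, it reports an adjusted, version-4-alpha, run-1 dataset dated 19 January 2015 — in other words, the experimental status was right there in the name.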

In August 2014, in response to the continuing software problems with GHCNMv3.2.2 (version of August 2013), the NCDC Science Council was briefed about a proposal to subject the GHCNMv3 software, and particularly the pairwise homogeneity analysis portion, to a rigorous software rejuvenation effort to bring it up to CMMI level 2 standards and resolve the lingering software errors. All software has errors and it is not surprising there were some, but the magnitude of the problem was significant and a rigorous process of software improvement like the one proposed was needed. However, this effort was just beginning when the K15 paper was submitted, and so K15 must have used data with some experimental processing that combined aspects of V3 and V4 with known flaws. The GHCNMv3.X used in K15 did not go through any ORR process, and so what precisely was done is not documented. The ORR package for GHCNMv4 beta (in October 2015) uses the rejuvenated software and also includes two additional quality checks versus version 3.

Which version of the GHCN-M software K15 used is further confounded by the fact that GHCNMv3.3.0, the upgrade from version 3.2.2, only went through an ORR in April 2015 (i.e., after the K15 paper was submitted and revised). The GHCN-Mv3.3.0 ORR presentation demonstrated that the GHCN-M version changes between V3.2.2 and V3.3.0 had impacts on rankings of warmest years and trends. The data flow that was operational in June 2015 is shown in figure 3.

Figure 3. Data flow for surface temperature products described in K15 Science paper. Green indicates operational datasets having passed ORR and archived at time of publication. Red indicates experimental datasets never subject to ORR and never archived.

It is clear that the actual nearly-operational release of GHCN-M v4 beta is significantly different from the GHCN-M v3.X version used in K15. Since GHCN-M v3.X never went through any ORR, the resulting dataset was also never archived, and it is virtually impossible to replicate the result in K15.

At the time of the publication of K15, the final step in processing NOAAGlobalTempV4 had been approved through an ORR, but not in the K15 configuration. It is significant that the current operational version of NOAAGlobalTempV4 uses GHCN-M v3.3.0 and does not include the ISTI dataset used in the Science paper. The K15 global merged dataset is also not archived, nor is it available in machine-readable form. This is why the two boxes in figure 3 are colored red.

The lack of archival of the GHCN-M V3.X and the global merged product is also in violation of Science policy on making data available [link]. This policy states: “Climate data. Data should be archived in the NOAA climate repository or other public databases”. Did Karl et al. disclose to Science Magazine that they would not be following the NOAA archive policy, would not archive the data, and would provide access only to a non-machine-readable version on an FTP server?

For ocean temperatures, the ERSST version 4 is used in the K15 paper and represents a major update from the previous version. The bias correction procedure was changed and this resulted in different SST anomalies and different trends during the last 15+ years relative to ERSST version 3. ERSSTV4 beta, a pre-operational release, was briefed to the NCDC Science Council and approved on 30 September 2014.

The ORR for ERSSTV4, the operational release, took place in the NCDC Science Council on 15 January 2015. The ORR focused on process; questions about some of the controversial scientific choices made in the production of that dataset will be discussed in a separate post. The review went well and there was only one point of discussion on process. One slide in the presentation indicated that operational release was to be delayed to coincide with the release of the Karl et al. 2015 Science paper. Several Science Council members objected to this, noting the K15 paper did not contain any further methodological information—all of that had already been published—and thus there was no rationale to delay the dataset release. After discussion, the Science Council voted to approve the ERSSTv4 ORR and recommend immediate release.

The Science Council reported this recommendation to the NCDC Executive Council, the highest NCDC management board. In the NCDC Executive Council meeting, Tom Karl did not approve the release of ERSSTv4, noting that he wanted its release to coincide with the release of the next version of GHCNM (GHCNMv3.3.0) and NOAAGlobalTemp. Those products each went through an ORR at NCDC Science Council on 9 April 2015, and were used in operations in May. The ERSSTv4 dataset, however, was still not released. NCEI used these new analyses, including ERSSTv4, in its operational global analysis even though it was not being operationally archived. The operational version of ERSSTv4 was only released to the public following publication of the K15 paper. The withholding of the operational version of this important update came in the middle of a major ENSO event, thereby depriving the public of an important source of updated information, apparently for the sole purpose of Mr. Karl using the data in his paper before making the data available to the public.

So, in every aspect of the preparation and release of the datasets leading into K15, we find Tom Karl’s thumb on the scale pushing for, and often insisting on, decisions that maximize warming and minimize documentation. I finally decided to document what I had found using the climate data record maturity matrix approach. I did this and sent my concerns to the NCEI Science Council in early February 2016 and asked to be added to the agenda of an upcoming meeting. I was asked to turn my concerns into a more general presentation on requirements for publishing and archiving. Some on the Science Council, particularly the younger scientists, indicated they had not known of the Science requirement to archive data and were not aware of the open data movement. They promised to begin an archive request for the K15 datasets that were not archived; however, I have not been able to confirm that they have been archived. I later learned that the computer used to process the software had suffered a complete failure, leading to a tongue-in-cheek joke by some who had worked on it that the failure was deliberate to ensure the result could never be replicated.

Where do we go from here?

I have wrestled for a long time about what to do about this incident. I finally decided that there needs to be systemic change both in the operation of government data centers and in scientific publishing, and I have decided to become an advocate for such change. First, Congress should re-introduce and pass the OPEN Government Data Act. The Act states that federal datasets must be archived and made available in machine readable form, neither of which was done by K15. The Act was introduced in the last Congress and the Senate passed it unanimously in the lame duck session, but the House did not. This bodes well for re-introduction and passage in the new Congress.

However, the Act will be toothless without an enforcement mechanism. For that, there should be mandatory, independent certification of federal data centers. As I noted, the scientists working in the trenches would actually welcome this, as the problem has been one of upper management taking advantage of their position to thwart the existing executive orders and a lack of process adopted within Agencies at the upper levels. Only an independent, outside body can provide the needed oversight to ensure Agencies comply with the OPEN Government Data Act.

Similarly, scientific publishers have formed the Coalition on Publishing Data in the Earth and Space Sciences (COPDESS) with a signed statement of commitment to ensure open and documented datasets are part of the publication process. Unfortunately, they, too, lack any standard checklist that peer reviewers and editors can use to ensure the statement of commitment is actually enforced. In this case, and for assessing archives, I would advocate a metric such as the data maturity model that I and colleagues have developed. This model has now been adopted and adapted by several different groups, applied to hundreds of datasets across the geophysical sciences, and has been found useful for ensuring information preservation, discovery, and accessibility.

Finally, there needs to be a renewed effort by scientists and scientific societies to provide training and conduct more meetings on ethics. Ethics needs to be a regular topic at major scientific meetings, in graduate classrooms, and in continuing professional education. Respectful discussion of different points of view should be encouraged. Fortunately, there is initial progress to report here, as scientific societies are now coming to grips with the need for discussion of and guidelines for scientific ethics.

There is much to do in each of these areas. Although I have retired from the federal government, I have not retired from being a scientist. I now have the luxury of spending more time on these things that I am most passionate about. I also appreciate the opportunity to contribute to Climate Etc. and work with my colleague and friend Judy on these important issues.

Postlude

A couple of examples of how the public can find and use CDR operational products, and what is lacking in a non-operational and non-archived product:

Here you will see a fully documented CDR. At the top, we have the general description and how to cite the data. Then below, you have a set of tabs with extensive information. Click each tab to see how it’s done. Note, for example, that under ‘documentation’ you have choices to get the general documentation, processing documents including source code, the data flow diagram, and the algorithm theoretical basis document (ATBD), which includes all the info about how the product is generated, and then associated resources. This also includes a permanent digital object identifier (DOI) to point uniquely to this dataset.

Here on the left you will find the documents again that are required to pass the CDR operations and archival. Even though it’s a slight cut below TSI in example 1, a user has all they need to use and understand this.

The contents of this FTP site were entered into the NCEI archive following my complaint to the NCEI Science Council. However, the artifacts for full archival of an operational CDR are not included, so this is not compliant with archival standards.

Biosketch:

John Bates received his Ph.D. in Meteorology from the University of Wisconsin-Madison in 1986. Post Ph.D., he spent his entire career at NOAA, until his retirement in 2016. He spent the last 14 years of his career at NOAA’s National Climatic Data Center (now NCEI) as a Principal Scientist, where he served as a Supervisory Meteorologist until 2012.

Dr. Bates’ technical expertise lies in atmospheric sciences, and his interests include satellite observations of the global water and energy cycle, air-sea interactions, and climate variability. His most highly cited papers are in observational studies of long term variability and trends in atmospheric water vapor and clouds.

He received the NOAA Administrator’s Award in 2004 for “outstanding administration and leadership in developing a new division to meet the challenges to NOAA in the area of climate applications related to remotely sensed data”. He was awarded a U.S. Department of Commerce Gold Medal in 2014 for visionary work in the acquisition, production, and preservation of climate data records (CDRs). He has held elected positions at the American Geophysical Union (AGU), including Member of the AGU Council and Member of the AGU Board. He has played a leadership role in data management for the AGU.

He is currently President of John Bates Consulting Inc., which puts his recent experience and leadership in data management to use in helping clients improve the preservation, discovery, and exploitation of their own and others’ data. He has developed and applied techniques for assessing both organizational and individual data management and applications. These techniques help identify how data can be managed more cost-effectively and discovered and applied by more users.

David Rose in the Mail on Sunday

David Rose of the UK Mail on Sunday is working on a comprehensive exposé of this issue [link].

Here are the comments that I provided to David Rose, some of which were included in his article:

Here is what I think the broader implications are. Following ClimateGate, I made a public plea for greater transparency in climate data sets, including documentation. In the U.S., John Bates has led the charge in developing these data standards and implementing them. So it is very disturbing to see the institution that is the main U.S. custodian of climate data treat this issue so cavalierly, violating its own policy. The other concern that I raised following ClimateGate was overconfidence and inadequate assessments of uncertainty. Large adjustments to the raw data, and substantial changes in successive data set versions, imply substantial uncertainties. The magnitude of these uncertainties influences how we interpret observed temperature trends, ‘warmest year’ claims, and how we interpret differences between observations and climate model simulations. I also raised concerns about bias; here we apparently see Tom Karl’s thumb on the scale in terms of the methodologies and procedures used in this publication.

Apart from the above issues, how much difference do these issues make to our overall understanding of global temperature change? All of the global surface temperature data sets employ NOAA’s GHCN land surface temperatures. The NASA GISS data set also employs the ERSST datasets for ocean surface temperatures. There are global surface temperature datasets, such as Berkeley Earth and HadCRUT, that are relatively independent of the NOAA data sets and that agree qualitatively with the new NOAA data set. However, there remain large, unexplained regional discrepancies between the NOAA land surface temperatures and the raw data. Further, there are some very large uncertainties in ocean sea surface temperatures, even in recent decades. Efforts by the global numerical weather prediction centers to produce global reanalyses, such as the European Copernicus effort, are probably the best way forward for the most recent decades.

Regarding uncertainty, ‘warmest year’, etc. there is a good article in the WSJ: Change would be healthy at U.S. climate agencies (hockeyshtick has reproduced the full article).

I have known John Bates for about 25 years, and he served on the Ph.D. committees of two of my graduate students. There is no one, anywhere, who is a greater champion for data integrity and transparency.

When I started Climate Etc., John was one of the few climate scientists that contacted me, sharing concerns about various ethical issues in our field.

Shortly after publication of K15, John and I began discussing our concerns about the paper. I encouraged him to come forward publicly with his concerns. Instead, he opted to try to work within the NOAA system to address the issues –to little effect. Upon his retirement from NOAA in November 2016, he decided to go public with his concerns.

He submitted an earlier, shorter version of this essay to the Washington Post, in response to the 13 December article (climate scientists frantically copying data). The WaPo rejected his op-ed, so he decided to publish at Climate Etc.

In the meantime, David Rose contacted me about a month ago, saying he would be in Atlanta covering a story about a person unjustly imprisoned [link]. He had an extra day in Atlanta, and wanted to get together. I told him I wasn’t in Atlanta, but put him in contact with John Bates. David Rose and his editor were excited about what John had to say.

I have to wonder how this would have played out if we had issued a press release in the U.S., or if this story was given to pretty much any U.S. journalist working for the mainstream media. Under the Obama administration, I suspect that it would have been very difficult for this story to get any traction. Under the Trump administration, I have every confidence that this will be investigated (but still not sure how the MSM will react).

Well, it will be interesting to see how this story evolves, and most importantly, what policies can be put in place to prevent something like this from happening again.

“Incredible ain’t it, non-archiving of critical evidence -?”
And just not true. There is an extensive archive. Bates even linked to it. It is here.

Bates’s complaints seem to be:
1. The archiving wasn’t complete until six months after the paper appeared.
2. The data are in ASCII format, which is not “machine readable”. Of course it is; it just requires a format statement.
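To illustrate the point: a few lines of Python read such a fixed-width record. The column widths below follow my reading of the GHCN-M v3 readme (station ID 11 characters, year 4, element 4, then twelve groups of a 5-character value plus three flags, in hundredths of a degree, -9999 for missing); check the readme in the archive for the authoritative layout.

```python
# Parse one GHCN-M v3 style fixed-width ASCII record. Column widths are
# taken from my reading of the GHCN-M readme and should be verified
# against the readme shipped in the archive.

def parse_ghcnm_line(line):
    """Return (station_id, year, element, twelve monthly values in deg C;
    None where the value is the -9999 missing-data sentinel)."""
    station_id = line[0:11]
    year = int(line[11:15])
    element = line[15:19]
    values = []
    for m in range(12):
        start = 19 + m * 8          # 5-char value + 3 one-char flags
        raw = int(line[start:start + 5])
        values.append(None if raw == -9999 else raw / 100.0)
    return station_id, year, element, values
```

That is the whole “format statement”. Any language with substring indexing can do it.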

Nick, you left out complaints:
1. Karl made administrative decisions contrary to data integrity;
2. Karl used a 90% rather than the 95% standard;
3. The use of a non-standardized dataset implied a greater uncertainty in the data that was not, and could not be, addressed;
4. Karl made poor data integrity decisions in order to meet a publishing date;
5. Karl made decisions that benefited his publication, not the organization;
6. etc., etc.

jfp, “Nick, you left out complaints:”
I dealt with the 90% issue here. He was comparing results with the AR5, which used 90%.

All the rest are matters of opinion, and his opinion does seem to be soured by something. 1, 4, and 5 are pure opinion, and unsubstantiated. The non-standardized issue is bunk. If you go to the readme file in the archive, they say exactly what files they use:
“This directory contains the adjusted land station data and metadata used in the Old Analysis; data from GHCN-Monthly version 3.2.2.
ghcnm.tavg.v3.2.2.20150116.qca.dat.gz; Adjusted station data
ghcnm.tavg.v3.2.2.20150116.qca.inv.gz; Inventory of adjusted stations”
Those are standard issue, dated files. I download them every day. And the copies are in the archive.

And for the new Analysis:
“This directory contains the adjusted land station data and metadata used in the New Analysis.
tavg.v4.a.1.20150119.qca.dat.gz; Adjusted station data
tavg.v4.a.1.20150119.qca.inv.gz; Inventory of adjusted stations”

This is V4, but again, standard dated versions, and are in the archive here

Nick, yes to 1 and 2, but let’s not forget 3, where Karl is charged with deliberately manipulating data to spit out a very well timed lump of garbage and parade it around as observational science. Don’t sugar coat what Karl did, Nick.

” Karl is charged with deliberately manipulating data to spit out a very well timed lump of garbage and parade it around as observational science”
Sounds opinionated. And that is what we got from Bates. Just subjective, no supporting facts.

As Thorne (who was there), says, Bates was never involved at a technical level. All he has done is dig into the paperwork, and check whether boxes were ticked. There is a lot of ignorance in Rose’s article, at least some of which seems to originate with Bates.

Nick, yes I am opinionated, and yes, facts support my opinion regarding fleet data.
Fact: a full survey and inspection of, and not limited to, siting, materials, calibration, method, and procedure has never occurred relative to fleet measurements. Of all the variables affecting heat transfer and retention, all of these are unknown. The error bounds, being unknowable, render the data useless. That should be obvious to a child, and you’re not a child, Nick. I can count raisins to see how many pounds of food my cat hungers for, but that’s not science, Nick. Karl’s use of demonstrably corrupt “data” exposed him.

“Karl’s use of demonstrably corrupt ‘data’ exposed him.”
The confusion of the complaints against Karl is amazing. K15 did not introduce the use of ship data. They have been used for very many years. Nor did K15 promote its use. On the contrary; the introduction of buoys, pioneered by NOAA, is replacing ship data, to an extent that, with variance weighting, it now provides by far the largest component of SST data. What K15 is doing is making possible a proper transition between the sources of data, by quantifying the (small) bias.
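To show what “quantifying the bias” and variance weighting amount to, here is a toy sketch. The numbers and the simple inverse-variance formula are illustrative assumptions of mine, not the ERSSTv4 code.

```python
# Toy illustration of the two operations discussed above (not ERSSTv4):
# estimate the mean ship-buoy offset from collocated pairs, align the
# buoy readings, then combine the two sources with inverse-variance
# weights so the less noisy buoys dominate the merged estimate.

def ship_buoy_offset(pairs):
    """Mean(ship - buoy) over collocated measurement pairs."""
    return sum(s - b for s, b in pairs) / len(pairs)

def combine(ship, buoy, var_ship, var_buoy, offset):
    """Inverse-variance weighted mean after aligning buoys to ships."""
    w_ship = 1.0 / var_ship
    w_buoy = 1.0 / var_buoy
    return (w_ship * ship + w_buoy * (buoy + offset)) / (w_ship + w_buoy)
```

With a ship error variance several times the buoy variance, the combined value sits close to the (bias-aligned) buoy reading, which is exactly the “preferential use of buoys” described in the ERSSTv4 paper.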

No, Bates states that (by definition) data are not archived merely by being made available somewhere via FTP. One of his major points is that Karl made these data public before they had been approved by the formal process, and without the improvements so gained. Procedures like authority endorsement must surely, ab initio, be taken as good, and non-compliance, such as timing a paper’s release for impact, not so good. Agreed?
Geoff

Geoff, “Procedures like authority endorsement must surely, ab initio, be taken as good”
Ironies abound. Here is what skeptics had to say when someone deemed skeptic at CSIRO was reproved for completely ignoring publication approval processes.

But the stuff about “time a paper’s release for impact” is nonsense. Firstly, as ehak points out, it was actually submitted in December 2014. And I’m sure the dead weight of the Bates’s in NOAA would mean that they had it in process for long before that. But second, it is a methods paper. There is no particular reason why it should have impact, except skeptics decided for some reason that he was stealing their favorite pause. Basically, the information about measurable ship-buoy bias had been published for years, and ERSST just had to do something about it. That was the driver.

“v4 actually makes preferential use of buoys over ships (they are weighted almost 7 times in favour) as documented in the ERSSTv4 paper. The assertion that buoy data were thrown away as made in the article is demonstrably incorrect.”

I was taught at an early age that you should never give people reason to believe you are a liar, a thief or a cheat. Well, with this statement the author of that post gives plenty of reason to believe he is the last of those.

Sure, the buoy data were weighted heavily – AFTER they were adjusted against the ship-intake data. Now I’m not taking issue with that decision, even though it sounds counterintuitive to me – taking the less reliable data set and adjusting the more reliable one to it. What I do take issue with is the author of the post spinning it to make it sound as if Dr. Bates is making a false statement. To claim that “v4 … makes preferential use of buoys over ships” is effectively an effort to misdirect, as no mention is made of the adjustments applied to the buoy data before it was used. And if I am recalling correctly, Dr. Bates didn’t say the unadjusted buoy data was “thrown away”, rather that it was not archived in the standard manner.

Dr Bates said: ‘They had good data from buoys. And they threw it out and “corrected” it by using the bad data from ships. You never change good data to agree with bad, but that’s what they did – so as to make it look as if the sea was warmer.’

“The confusion of the complaints against Karl is amazing. K15 did not introduce the use of ship data; ships have been used for very many years. Nor did K15 promote their use. On the contrary: the introduction of buoys, pioneered by NOAA, is replacing ship data, to the extent that, with variance weighting, buoys now provide by far the largest component of SST data. What K15 does is make possible a proper transition between the sources of data, by quantifying the (small) bias.”

I can’t speak for others, Nick, but I’m not operating under any sense of confusion. And you are the one introducing the strawman with the above comment. Who is claiming Karl “introduced” the use of ship-intake data? No one I’ve seen. And the term “promote” is another effort at misdirection. The primary question regarding the use of the two data sets was the logic behind how they were utilized. Why adjust the more reliable set using the set having lower reliability, and then weight the adjusted set more heavily? I have yet to see a good answer to that simple question. Care to provide one, or are you too busy closing ranks?

“There is no particular reason why it should have impact, except skeptics decided for some reason that he was stealing their favorite pause. ”

Nick, this claim is just bizarre. Are you really unaware of the massive media coverage around this study? The accompanying PR blitz by the usual promoters of global warming fears?

That climatophobes can spend a year loudly trumpeting a result in major media and then argue the result was only interesting because of the reaction of skeptics really speaks to the state of the movement.

Exactly what is “dishonest” about which “graph” (there are six graphs in the linked article)?

The last graph (“Hadley and NOAA – Common Baseline”), when El Niño (geothermal) distortions are removed (or just look at 2001 – 2015) appears to confirm the hiatus. Are you suggesting the graphic does not represent Hadley and NOAA data?

“With regard to the “rush” to publish, as of 2013, the median time from submission to online publication by Science was 109 days, or less than four months. The article by Karl et al. underwent handling and review for almost six months. Any suggestion that the review of this paper was “rushed” is baseless and without merit. Science stands behind its handling of this paper, which underwent particularly rigorous peer review.”

Jane Lubchenco, former NOAA administrator:

“These are sad, old accusations that have been definitively disproven.”

Rear Admiral David Titley (Ret.), former NOAA chief operating officer:

“In summary, the Mail on Sunday has found a disgruntled ex-NOAA employee and is using him to construct alternative facts about the climate. Unfortunately for all of us, the air will keep warming, the seas will keep rising, and the ice will keep melting, regardless of the Daily Mail’s fanciful claims and accusations. The real atmosphere is impervious to alternative facts.

There is both a NOAA internal process on scientific integrity (my office ran it when I was at NOAA) and the opportunity to submit allegations of wrongdoing to the Department of Commerce Inspector General who, if there is reasonable evidence to substantiate the allegation, would undertake an independent investigation”

“In summary, the Mail on Sunday has found a disgruntled ex-NOAA employee and is using him to construct alternative facts about the climate. Unfortunately for all of us, the air will keep warming, the seas will keep rising, and the ice will keep melting, regardless of the Daily Mail’s fanciful claims and accusations. The real atmosphere is impervious to alternative facts.”

I didn’t bother reading the Mail article, just Dr. Bates’ post above. Not once does he “construct alternative facts about the climate”. He talks about process. And are you really going to give weight to any individual who claims “Unfortunately for all of us, the air will keep warming, the seas will keep rising, and the ice will keep melting…”? That is most certainly not a scientific statement. It is a play on the emotions, with no proof to support the “unfortunately” part. In fact, the science we have tells us exactly the opposite when it comes to “the ice will keep melting”. Over the past million years or so, our planet has spent far more time in periods of large-scale glaciation than it has in interglacial periods like the current one. Based on our knowledge of the past record, we should be coming to the end of the current period. If that is the case, I’m thinking a bit of warming is a good thing. Particularly as no one has shown how it will be bad.

Incredible, ain’t it? Why hasn’t there been a big stink about Spencer and Christy’s UAH tropospheric satellite dataset? It’s still beta version 6.x and has been in use for almost 18 months now. Their paper explaining the underlying changes has still not been published.

RSS updated their satellite data set to Version 4 last year. They waited until after their paper had been published to release the data set. The old Version 3 (which is closer to UAH 6.x beta) runs colder because of various issues outlined in their paper.

Yet Judith Curry, Ted Cruz, etc., run around claiming “The satellite data is the best data we’ve got!” (referring to UAH), despite the satellite data having a much higher error range than land-based temperature data.

Isn’t all of the data… corrected? My understanding is that the world is divided up into grids with official thermometers in them, and when we look at the data, only 20% of these grid cells actually have official thermometer readings. That’s a lot of gaps in the record, for which “missing data” is simply provided to fill in where no data exist, or where data have been lost, deleted, or otherwise simply gone missing.

My understanding is that the missing data is filled in from the computer models (the ones that run too hot). If you were wondering why the official NOAA and GISS temperature plots look like the models, it’s because they threw out any records that were too cold and inserted model warming.

You can call it scientific fraud if you like. NOAA and GISS justify it by showing the nice hockey stick they’ve made.

“Both of these are established methodologies for doing spatial prediction, otherwise known as interpolation” – phew, good thing you’re not using “computer models” to fill in the data. Oh, wait. Huh. You’re interpolating to estimate data between known values… yeah, a model. And I bet you are using a computer to do it.

A continuous temperature “surface” based on a sparse convenience sample uses infinitely more interpolation than a gridded area averaging method. But neither should be used to draw statistical conclusions about the total population, in this case global temperatures and the global average thereof.

Statistical sampling theory is perfectly clear about this inadequacy, but it is universally ignored by the statistical modelers. K15 is bad but the whole surface temperature game is a statistical sham.

Sure, sure, averaging averages… that’s the ticket– based on such results, I probably can pack a lot lighter this year for my trip to Antarctica.

–e.g.,

The authors therefore applied a correction for ship size in their data. Once Jones, Wigley, and Wright had made several of these kinds of corrections, they analyzed their data using a spatial averaging technique that placed measurements within grid cells on the earth’s surface in order to account for the fact that there were many more measurements taken on land than over the oceans. Developing this grid required many decisions based on their experience and judgment, such as how large each grid cell needed to be and how to distribute the cells over the Earth. They then calculated the mean temperature within each grid cell, and combined all of these means to calculate a global average air temperature for each year. Statistical techniques such as averaging are commonly used in the research process and can help identify trends and relationships within and between data sets. ~Vision Learning, “Data collection, analysis, and interpretation: Weather and climate”
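The grid-averaging procedure described in that excerpt can be sketched as follows. This is a toy illustration with invented stations and a crude cos-latitude area weight; the real products use far more careful weighting and coverage handling:

```python
# Minimal sketch of the gridding idea: bin station readings into
# latitude/longitude cells, average within each cell so densely sampled
# regions don't dominate, then area-weight the cell means (cells shrink
# toward the poles, roughly as cos(latitude)). Stations are invented.
import math
from collections import defaultdict

# (latitude, longitude, temperature) readings; one land region oversampled.
readings = [
    (51.0, 0.1, 11.2), (51.2, 0.3, 11.0), (51.1, 0.2, 11.4),  # clustered
    (10.0, -150.0, 26.5),                                      # lone ocean obs
    (-33.0, 151.0, 18.1),
]

cell_size = 5.0  # degrees per grid cell

def cell_of(lat, lon):
    return (math.floor(lat / cell_size), math.floor(lon / cell_size))

cells = defaultdict(list)
for lat, lon, temp in readings:
    cells[cell_of(lat, lon)].append((lat, temp))

# Average within each cell, then combine cell means with cos(lat) weights.
num = den = 0.0
for members in cells.values():
    cell_mean = sum(t for _, t in members) / len(members)
    cell_lat = sum(lat for lat, _ in members) / len(members)
    w = math.cos(math.radians(cell_lat))
    num += w * cell_mean
    den += w
global_mean = num / den
```

The three clustered stations collapse into a single cell mean, so they carry one cell's worth of weight rather than three stations' worth, which is the point of gridding before averaging.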

David, you shouldn’t do a lot of things if you want the most accuracy in life, but can you quantify the failure as you see it a little more precisely? If we have limited samples, do you suggest we ignore the issue altogether? Can you write up a paper showing the sort of error being introduced that allegedly is not already being accounted for? If the end result is a small deviation, all your complaining would have amounted to little more than unwarranted skepticism. If it’s significant, you might get a major science prize or something. What do you say?

You touch on – but evade – an important issue when you write,
“There are no established, globally accepted, universal “engineering standards”.

There is no possibility of such “standards” because
(a) there is no “established, globally accepted, universal” definition of global temperature
and
(b) if there were such an agreed definition then there is no possibility of an independent calibration standard for it.

All supposed “data” for global temperature is rendered scientifically invalid by the lack of an agreed definition of the parameter and the impossibility of any calibration standard for it. Simply, estimates of global temperature are pure pseudoscience with less credibility than phrenology.

Avoiding it suggests you recognise there is no better chart of temperature over the Phanerozoic Eon, despite 30 years of intensive climate research, and you recognize that temperatures have been far higher for most of the past 542 Ma, that life thrived, and that there is no valid evidence to support your beliefs that GHG emissions are net-damaging or that 2 C warming is dangerous. (Explained here: https://judithcurry.com/2017/01/29/the-threat-of-climate-change/#comment-836115 )

ISO/IEC 17025 General requirements for the competence of testing and calibration laboratories is the main ISO standard used by testing and calibration laboratories. In most major countries, ISO/IEC 17025 is the standard for which most labs must hold accreditation in order to be deemed technically competent. In many cases, suppliers and regulatory authorities will not accept test or calibration results from a lab that is not accredited. Originally known as ISO/IEC Guide 25, ISO/IEC 17025 was initially issued by the International Organization for Standardization in 1999.

The Joint Committee for Guides in Metrology (JCGM) is an organization that prepared the “Guide to the expression of uncertainty in measurement” (GUM) and the “International vocabulary of metrology – basic and general concepts and associated terms” (VIM). The JCGM assumed responsibility for these two documents from the ISO Technical Advisory Group 4 (TAG4).

This might be a duplicate. Posting from an iPad for the first time. Anyway: NI 43-101 is a globally accepted, established, and enforced standard. Karl would be charged under securities law if he were a geologist and did what he did.

Mosher dodged the question here. He replied downthread (to one of his own comments):

Peter

Sorry, I did not dodge ANY important question you had.

You’ve never asked an important question in your life.

How many times do people have to refuse to change your damn diapers for you.

P.S. – the chart you provided is crap.

Mosher gets really upset every time he is asked for evidence that global warming is net-damaging, a serious threat, dangerous. He can’t answer the questions with a constructive, informative answer, and instead flies into an abusive rage. He says the chart is crap, but didn’t provide a link to a better one and explain why it is better. He avoided the question: Sign 4 of “10 signs of intellectual dishonesty”.

Huh, no established, globally accepted, universal “engineering standards”? I work in the “evil oil industry”, and the entire industry runs on the API, or American Petroleum Institute, standards, from wellhead to well casing to drilling fluids. The API engineering standards are globally known and serve as the benchmark. Official API documents are divided into specific Standards as well as “Recommended Practices” to cover operational procedures. The US Government incorporates many of these engineering standards into the Federal regulations, as do many other government regulators all over the world. To say there are no established, globally accepted, universal “engineering standards” is simply ignorance.

I wouldn’t cast stones if I were you. Your blinders are so high up your face your eyes are bruised. Thomas Karl used a ridiculous dataset, and magically it found its way into circulation. Magic to you, perhaps. To the rest of us it looks real bad. His head should be on the proverbial pike right now, and you defend him by attacking the whistleblower. Manage yourself with more integrity.

Since you seem to like guessing, I’ll take a guess at a description of you: you are probably one of those gullible young dudes who accept the nonsense spouted by the likes of John Cook (SkepticalScience) and his disciples, incapable of challenging your beliefs and doing your own reality checks. Furthermore, you hide behind a pseudonym and are not prepared to state your real name or your background.

Peter, having read your “I am persuaded… ” statements, you seem very easily persuaded by anything you believe supports what you want to believe. It looks more like you are projecting and that you yourself are “incapable of challenging your beliefs and doing your own reality checks”. I will say back to you “How embarrassing that you are Australian”.

Like… Spencer and Christy’s UAH tropospheric satellite dataset? It’s still beta version 6.x and has been in use for almost 18 months now. Their paper explaining the underlying changes has still not been published.

RSS updated their satellite data set to Version 4 last year. They waited until after their paper had been published to release the data set. The old Version 3 (which is closer to UAH 6.x beta) runs colder because of various issues outlined in their paper.

Yet Judith Curry, Ted Cruz, etc., run around claiming “The satellite data is the best data we’ve got!” (referring to UAH), despite the satellite data having a much higher error range than land-based temperature data.

A reputable scientist, in my book, would not need to be FORCED to do what is necessary to enable his reviewers to do the things which peer reviewing of scientific papers is supposed to be about – checking that the methods used in the paper are not completely wrong, and that the experiment performed is repeatable by independent researchers.

Heavy reading. Government bureaucracy at its finest. At least they weren’t able to gum up the works enough to stop Karl’s result from being published, because that was and remains an important note on biases, and nothing said here goes against the basic idea.

“Results stood up”!!! Yup – just like Mann’s results stood up. Right up to the time his “hide the decline”, misuse of statistics, use of known-inappropriate data sets, and several other things were realized.

Could also be phrased as “Hausfather treated the same data with similar results because both teams made the same errors at the same places in the process.” The starting material is from the same sources. There might be a need to ask questions if the results WERE different.

It looks like somebody tried to get the “researchers” to justify bad findings before the organisation’s reputation was trashed. And don’t think for a moment that NOAA’s reputation hasn’t been trashed. Skeptics know what passes for science at NOAA and despise them for it. True Believers know what NOAA got away with, but just think it was justified because it supports the narrative. But they don’t trust them with real science.

“So, in every aspect of the preparation and release of the datasets leading into K15, we find Tom Karl’s thumb on the scale pushing for, and often insisting on, decisions that maximize warming and minimize documentation.”

He can get his own data, right? Buoy data can’t be that hard to find. What stops him from doing his own work? These complaints never come up with an alternative result, and unless they do, the whole complaint has no foundation.

Ideally, products would be completely independent and use consistent methods. When you peek at your neighbor’s paper and decide your method has to be “improved” because it isn’t keeping up with the Jones, it isn’t really independent. When you decide the time to “improve” is when you can make the biggest splash, your ethics come into question.

Kind of a rock and a hard place, though: don’t change, and your original “perfect” way of doing things sucks; do change, and it is obvious your original way of doing things sucked. It is a lot better to pick the right course right out of the gate.

“That looks like an opinion that has not been substantiated with any results.”

I have described below what a thoroughly dishonest graph that is (from David Rose of course). The difference he shows is almost entirely due to the fact that HADCRUT is on 1961-90 base; NOAA is 1901-2000. If you put them on the same 1981-2010 base, there is hardly any difference.
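The baseline point is simple arithmetic and easy to verify: anomaly series computed against different base periods differ by a constant offset, which disappears once both are re-expressed on a common base. A toy sketch follows; the underlying series is invented, and the base periods are shortened relative to the actual 1961-90 and 1901-2000 conventions so they fit the invented series:

```python
# Sketch of re-baselining: anomalies relative to different base periods
# differ only by a constant, so putting two series on a common base
# removes the apparent gap. The "absolute" series below is invented.

years = list(range(1951, 2017))
# A made-up absolute temperature series with a mild warming trend.
absolute = [14.0 + 0.01 * (y - 1951) for y in years]

def anomalies(series, years, base_start, base_end):
    base = [t for t, y in zip(series, years) if base_start <= y <= base_end]
    base_mean = sum(base) / len(base)
    return [t - base_mean for t in series]

# Two products with different baseline conventions (stand-ins for the
# 1961-90 vs 1901-2000 choices mentioned above).
series_a = anomalies(absolute, years, 1961, 1990)
series_b = anomalies(absolute, years, 1951, 1980)

# They differ by a constant offset at every year...
offsets = [a - b for a, b in zip(series_a, series_b)]

# ...which vanishes once both are re-expressed on a common 1981-2010 base.
common_a = anomalies(series_a, years, 1981, 2010)
common_b = anomalies(series_b, years, 1981, 2010)
```

Plotting `series_a` against `series_b` shows a spurious gap; plotting `common_a` against `common_b` shows two identical curves, which is the point being made about HADCRUT versus NOAA.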

It is true that the Karl paper is an important note on bias, but not in the way I think you mean. It is a prime example of how bias can destroy proper application of the scientific method and create a “result” that is worse than meaningless because it was established a priori, which then creates positive feedback for the corruption of science by other investigators as well. This particular example of work carries the added infamy of doing this in both the technical and political realms, infecting both scientists and policymakers.

One thing that has repeatedly Shocked me is the way the temperature record DOESNT CHANGE despite the changes in processes, methods, and data. I cannot count the number of times I have left data “on the floor” and had the answer come out the same.

That said, now that the Commerce Department is run by Trump, it should put an end to the claims of fraud. The CDR work is pretty much check-box work… you can tell how vital it is to skeptics by their insistent demands that Christy and Spencer update their CDR.

Or you can see how important it is by comparing the RSS CDR (which is excellent but a bit out of date) to the UAH CDR, which is an out-of-date mess.
See how many skeptics
A) have ever READ the CDRs or know where to find them
B) have decided that RSS is preferable because of its CDR.

“…One thing that has repeatedly Shocked me is the way the temperature record DOESNT CHANGE despite the changes in processes, methods, and data…”

You’re joking, right? Compare the “null” process (i.e., raw data) to the processed data and it “DOESNT CHANGE?” Really? Then why bother homogenizing, processing data, accounting for station moves, etc, in the first place? Why is BEST still making updates?

“You’re joking, right? Compare the “null” process (i.e., raw data) to the processed data and it “DOESNT CHANGE?” Really? Then why bother homogenizing, processing data, accounting for station moves, etc, in the first place? Why is BEST still making updates?”

1. Yes, it doesn’t change. The answers will be different here and there, but the relevant scientific information remains the same. The surface of the planet is warming. The LIA was real. To be sure, if you use 100% RAW DATA the rate of warming will be higher. Adjustments COOL the entire surface record; they do not warm it.

2. Why Bother?
The first reason is that skeptics had a hypothesis about adjustments. They hypothesized that the scientists had cheated on the land record. So they PAID US to find the cheating and prove that the adjustments were wrong. We found the opposite.
The next reason is that we want the most accurate record possible. While the raw data is actually warmer than the adjusted data (when you look at SST and SAT), it’s important to be as accurate as you can be.

3. Why are we still making updates? Simple: because you can never be correct, you can only be less wrong. And every day I ask… how can I be less wrong? Nothing in the science changes with these minor adjustments. The world is still warming.

I can’t find the file at the moment, but it has been published on a reliable skeptic site recently. It shows the gigantic changes made at NOAA and GISS to the climate history. These are scans of the original published works and they show the extremes that they have gone to to wipe out the warm ’30s.

You assert,
“One thing that has repeatedly Shocked me is the way the temperature record DOESNT CHANGE despite the changes in processes, methods, and data. I cannot count the number of times I have left data “on the floor” and had the answer come out the same.”

However, the compilers of the mean global temperature (MGT) data sets frequently alter their published data of past MGT (sometimes they have altered the data in each of several successive months). This is despite the fact that there is no obvious and/or published reason for changing a datum of MGT for years that were decades ago: the temperature measurements were obtained in those years, so the change can only be an effect of altering the method(s) of calculating MGT from the measurements. But the MGT data sets often change. The MGT data always changed between submission of the paper and completion of the peer review process. Thus, the frequent changes to MGT data sets prevented publication of the paper.


Whatever you call this method of preventing publication of a paper, you cannot call it science.

But this method prevented publication of information that proved the estimates of MGT and AGW are wrong and the amount by which they are wrong cannot be known.

(a) I can prove that we submitted the paper for publication.

(b) I can prove that Nature rejected it for a silly reason; viz.

“We publish original data and do not publish comparisons of data sets”

(c) I can prove that whenever we submitted the paper to a journal one or more of the Jones et al., GISS and GHCN data sets changed so either

the paper was rejected because it assessed incorrect data
or
we had to withdraw the paper to correct the data it assessed.

But I cannot prove who or what caused this.

PLEASE NOTE: This explanation is quoted from Hansard and if untrue it would be a perjury that would have put me in jail.

Also, one of the emails from me that was leaked by the Climategate whistleblower objects to this method that was used for blocking the paper.

“The data changes almost every month with e.g. this result”
That is not the result of data changing every month. It is the result of totally misrepresenting the data, using plots of different things.

The first plot is also in a paper of Hansen’s from 1981. It isn’t GISS data (it’s NCAR, from Jenne), and it is a land-stations plot. Of the data, Hansen said there was very little SH data.

The second is a plot of an early version of GISS met-stations-only data. This aims to give a global temperature by re-weighting met stations to represent oceans as well. It was becoming possible through the larger dataset available. This lives on as GISS Ts.

The third set is different again. It is a land/ocean set, which combines land stations and SST. The three are simply not the same thing. It is nonsense to say that they are the symptom of monthly changes.

“Lots of imaginary and bogus data.”
He actually did remarkably well with what was available. But yes, there is vastly more now. That is why it is just stupid to attribute the difference between a 1980 plot and a modern one to some kind of malfeasance.

Thank you for confirming my point. As you say, the compilers of global temperature time series can and do change their data depending on what they want to present.

As I said, these changes are possible because there is no agreed definition of global temperature.

All global temperature data will continue to be bunkum until
(a) there is an agreed definition of global temperature
and
(b) there is some possibility of an independent calibration standard for global temperature.

Until then all global and hemispheric temperature data remains less scientifically valid than phrenology. Paymasters say what they want and the compilers of global and hemispheric temperature data can and do provide it.

Richard, “As you say, the compilers of global temperature time series can and do change their data depending on what they want to present.”
I didn’t say that. Here they are just presenting what is known at the time. Hansen in 1980 wasn’t compiling a time series; he was using someone else’s data in writing a paper about GHE. And in 1987 he was initiating one, but did not use SST because a suitable series wasn’t available. That came later.

But yes, scientists do look at different datasets for different reasons. land only, NH, SH etc. What is stupid or worse is when people pick out these graphs JoNova-style without looking at whether they represent the same thing and say – but you keep changing the data.

The three graphs each purport to be global temperature and you correctly said they are “plots of different things”.
YES! THEY ARE PLOTS OF DIFFERENT THINGS. THAT IS THE POINT.

In response to your saying they are “plots of different things” I wrote,

“Thank you for confirming my point. As you say, the compilers of global temperature time series can and do change their data depending on what they want to present.

As I said, these changes are possible because there is no agreed definition of global temperature.”

Your response to that is to ‘move the pot containing the pea’ by claiming,
“I didn’t say that. Here they are just presenting what is known at the time”.

NO!
The measurement data from earlier times were and are a record.
“They” used different compilation methods to generate the different graphs of global temperature from the same available record of measurement data. A unique definition of global temperature would define how measurement data would be compiled to generate each value of global temperature in the time series.

IN REAL SCIENCE A PARAMETER PROVIDES AN INDICATION OF REALITY. IT DOES NOT SUGGEST WHATEVER ITS PRODUCERS WANT IT TO SUGGEST AT ANY GIVEN TIME.

THE GRAPHS DIFFER BECAUSE THEY REPRESENT THE DIFFERENT SUGGESTIONS THEIR COMPILERS WANTED THE TIME SERIES OF GLOBAL TEMPERATURE TO SUGGEST AT THE TIMES THEY WERE PRODUCED.

Indeed, your original post admitted this saying,

“The third set is different again. It is a land/ocean set, which combines land stations and SST. The three are simply not the same thing.”

YES!
THEY ARE NOT THE SAME THING, BUT THEY EACH PURPORT TO BE A TIME SERIES OF GLOBAL TEMPERATURE!

Appendix A of that item is one of the emails from me that were leaked by the Climategate whistleblower and it includes this,

“.. as my response states, Myles’ comments do not alter the fact that the masked data and the unmasked data contain demonstrated false trends. And the masking may introduce other spurious trends. So, the conducted attribution study is pointless because it is GIGO. Ad hominem insults don’t change that.”

And your attempt to claim you said other than you wrote also doesn’t change it.

To ensure your attempt has not hidden it, I again state the important issue from which you have attempted to deflect attention,

“As I said, these changes are possible because there is no agreed definition of global temperature.

All global temperature data will continue to be bunkum until
(a) there is an agreed definition of global temperature
and
(b) there is some possibility of an independent calibration standard for global temperature.

Until then all global and hemispheric temperature data remains less scientifically valid than phrenology. Paymasters say what they want and the compilers of global and hemispheric temperature data can and do provide it.”

Let’s not lose track of the ball here. One takeaway from John Bates’ post is that government science agencies – or at least NOAA – do take integrity of the data, archiving, and access seriously. Do not let the actions or behavior of some color your conclusions.

Another is that Dr. Bates shines a light on the real importance of the Karl paper. And it is not its refutation of the pause or any other aspect of climate science. As Mosher has pointed out in the past, if you accept Karl’s conclusions, you also have to accept that the rate of warming was lower than the “consensus” was claiming. My takeaway from Karl was a shot in the foot from trying to be fastest on the draw. (Note to Western enthusiasts – always choose a rifle, then a shotgun. A pistol is your last choice. Also, if you are in a pistol fight, be prepared to get shot. Putting your first round on target is by far the most important thing in a civilian firefight, not who draws and fires first.)

The importance of the Karl paper will be as a textbook example of what happens when you already have a specific result in mind for your research. It will be a shame if an otherwise good scientist’s claim to fame is adding the term “Karlization” to the scientific vocabulary. But it’s a bed he made.

The paper’s timing was highly suspect. After 30 years of watching the USA government and media distort facts and create bogus information I tend to distrust all information they put out which may be intended to create support for political objectives. This means the Karlized data sets don’t convince me at all.

I realize mine is a fairly simple minded and “unscientific” approach. But unfortunately global warming is now a political hot potato. It’s going to be, whether we like it or not, a battlefield in the 21st century Obamite-Trumpite Disinformation Civil War.

The integrity of acquisition, archiving, and access is taken seriously, but there are a lot of logistics that have to align that don’t… and honestly, NOAA doesn’t take its fisheries ship techs seriously, mostly due to the inconsistency of the NOAA Corps officers, who are only managing the civilian techs for a ‘management grade’ on their evals, without having any background in surveying themselves, and who aren’t in the position long enough to be accountable for their own naive mistakes in judgment.

Do you know what goes into each of the datasets described here, and which you would trust more than the other? Without that background it is just a list of names with version numbers, and few could be bothered to go and review that background. It really is in the weeds there. Karl’s basic idea supersedes all this because it is understood only in terms of ship and buoy biases. Plain and simple.

What’s wrong Jim? Once again having to admit you haven’t a clue about what you are talking about and having to change the subject?

Why do I need to know every detail about the data sets themselves? As for which I would trust more, based on what I do know? I believe the answer depends on what one wants to use the data for. But just for the sake of argument, I would say the Argo data is probably the better set. It just hasn’t existed for that long. Hence the use of historical ship-collected data. How about you explaining the logic behind taking what is considered to be the more reliable data set and adjusting it using the less reliable set?

The odd thing is that Bates thinks the ERSSTv4 was handled properly by Karl et al., and that is where his main result was (ships versus buoys). His complaints are confined to the land data, plus a quibble about machine-readability that no one quite understands.

A CTD has dual redundant temp sensors, calibrated annually and rinsed after each operation, and its comparative plot readings are monitored closely during the cast.

An Argo float is deployed with a drifting life expectancy of five years; sensors drift over time when not calibrated regularly.

Buoys are left to drift and often can’t be serviced as promptly as one would like.

A thermosalinograph sits in a large-scale piping system; its temp sensors are spaced far apart, and on various boats it is often not monitored well, as it is turned on and left to run in the background, often with only a digital readout instead of a relevant comparative plot readily displayed.

He published science which he knew was right and didn’t let delaying tactics deter him. If he had been wrong or premature, he would have deserved criticism, and later publications would have corrected it. Science is never complete and papers can only give a current status. You publish what you have at a given time, and can’t answer all the questions, or wait till you have all the answers. Papers always end with remaining questions and are never tied up with a bow.

Punksta, the very idea that fleet data, which includes bucket data, became a trusted dataset without the benefit of a full and comprehensive survey is overwhelming my ability to hold down lunch. Karl et al. 2015 is a dog’s chew toy. There is no set of corrections that makes fleet data usable without intimate knowledge of each and every vessel, and that is just the beginning.

Owen mentions intimate knowledge of the vessels. I was the tech on two of the vessels from 2006 to 2015. I was the one collecting equatorial SST via SBE21, SBE3, SBE38, and SBE45 thermosalinographs and an SBE9plus water-sampler carousel CTD on the TAO El Niño array runs as well as elsewhere, and I deployed the most Argo floats of the fleet, and I knew the NDBC/SAIC buoy techs…

“Read lengthy expose by Bob Tisdale on Karl’s malfeasance/misfeasance:”
So what did Bob actually expose? All I see are endless graphs showing that NOAA has a higher trend than HadSST. Yes, NOAA adjusted for the clear bias when matching buoy and ship data, and HadSST didn’t, at least not yet. That doesn’t mean NOAA is wrong, let alone malfeasant.

Perhaps it is worth mentioning that the standard engine-room inlet temperature measuring instrument will usually be calibrated to the EN 13190 class 2 standard, i.e., ±2 degrees centigrade when installed, and is highly unlikely to be checked during the lifetime of the vessel.

de-de.wika.de/upload/DS_IN0007_GB_1334.pdf

The logic of utilising readings from a device of such poor accuracy when Argo buoy instrument data accurate to 0.01 deg C is available is not clear.

Nor is it clear how readings from such gauges can be quoted in the Karl datasets to 0.001 deg C.

So Argo data with an accuracy of ±0.01 deg C agrees with engine-inlet temperature from a gauge with an accuracy (plus a collection of other variables) of ±2 deg C, permitting temperature data to be quoted to three decimal places, and you can’t see any issue with that…
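The precision-versus-accuracy point here can be made concrete with a toy calculation (all numbers hypothetical): averaging many readings shrinks the *random* error roughly as 1/sqrt(n), which is what lets a mean be quoted to three decimals, but it does nothing to remove a fixed calibration bias such as a class-2 gauge can carry.

```python
import random

random.seed(0)

def mean_reading(n, sigma, bias, true_t=15.0):
    """Mean of n readings with random error sigma and a fixed
    calibration bias (all values made up for illustration)."""
    readings = [true_t + bias + random.gauss(0, sigma) for _ in range(n)]
    return sum(readings) / n

# Averaging 10,000 readings shrinks random scatter by ~1/sqrt(n),
# so the mean can be *quoted* to three decimals...
# ...but a 0.5 C calibration bias passes straight through untouched.
m = mean_reading(10_000, sigma=1.0, bias=0.5)
```

The quoted precision of the mean therefore says nothing about the systematic error of the instruments behind it, which is the complaint being made above.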

Why am I not surprised…

I hope no-one ever puts you in charge of a real project, designing a supermarket trolley for example…

And as I tend to give your opinion a good deal of weight on certain topics, can you explain the reasoning behind adjusting a data set considered to be more accurate using a less reliable set? If Argo is more accurate, why adjust it at all? Why didn’t Karl apply the same weighting of Argo without adjusting it first? His decision might be justifiable in the field of statistical analysis, but it sure doesn’t sound justifiable in the field of common sense.

Catweazle, it’s not engine water-intake temp; it is the ‘uncontaminated salt water pump system intake’ on the bow, which takes a water temperature sensor in the bow-thruster room and compares it to another temp sensor farther up the line, usually in a wet lab.
Here is an example, an SBE38 and its specs: http://www.seabird.com/sbe38-thermometer

Please be aware that I am describing not research vessel measuring technique, but standard merchant marine practice, as is described here:

For the oceans, the situation is different. Until the 1970s, SST observations were made entirely from ships. (After 1970, temperatures were also measured using moored and drifting buoys and, from the early 1980s, using satellites.) Different ships used different measurement methods over the years, each of which potentially had different biases. Some measurements were made by lowering uninsulated buckets over the ship’s side; these tend to produce colder temperatures, owing to the effects of evaporation once the bucket has left the water. Other measurements were taken at the inlet for the intake of water to cool the ship’s engine; these are likely to be biased towards warmer temperatures because of heating from the engine room.

“that are upto 50 km apart. LOL”
You seem to think that is too far. Have you an estimate of the appropriate distance? Could you say why?

There is a trade-off in error of the bias estimate. Close encounters are better correlated, but there are fewer of them. A larger sample diminishes variance of the mean. They have done an extensive study of spatial correlation in part 1 of their study. They have a quantitative basis for their choice. You are arm-waving.
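The trade-off described above can be sketched numerically. A minimal illustration (the densities, variances, and gradient are made-up numbers, not taken from the actual spatial-correlation study): noise averages down as the collocation radius admits more pairs, while spatial mismatch grows with separation, so the total error of the bias estimate has a minimum at an intermediate radius.

```python
def bias_var(radius_km, pair_density=0.05, sigma_instr=0.3,
             grad_per_km=0.001):
    """Illustrative error budget for a mean ship-buoy offset.
    All parameter values are hypothetical."""
    n_pairs = pair_density * radius_km ** 2          # pairs grow with area
    noise_term = 2 * sigma_instr ** 2 / n_pairs      # averages down with n
    mismatch_term = (grad_per_km * radius_km) ** 2   # grows with separation
    return noise_term + mismatch_term

# The minimum sits at an intermediate radius, neither 0 nor infinity.
best_r = min(range(5, 300, 5), key=bias_var)
```

With these toy numbers the optimum lands in the tens of kilometres, which is the kind of quantitative argument (not arm-waving) that fixes a collocation radius.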

Yes, there are long gaps in ship data when sea states cause the intake to suck air and we shut it off to save the pumps,

or when a part breaks underway that can’t be fixed for a while,

or when transiting areas where we don’t want to clog the intake with oil or algae, etc.

As for comparing data between ships and buoys 50 km apart: we collected ship data and buoy data when servicing the buoy, so there are plenty of close-aboard measurements of ship data, buoy data, CTD data, and Argo data, all from the same operational stop. So if he was using data as far out as 30 NM, half a degree, that makes sense too, as we leapfrogged buoys if they didn’t need service, but we still would have steamed by, so there must have been extenuating circumstances.
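Close-aboard measurements like those described above are exactly what a ship-buoy bias estimate needs. A minimal sketch with hypothetical readings (not real cruise data):

```python
# Hypothetical collocated readings (deg C) from the same stop:
# (ship intake, moored buoy)
pairs = [(18.42, 18.30), (17.95, 17.83), (19.10, 18.97),
         (18.77, 18.66), (18.21, 18.08)]

offsets = [ship - buoy for ship, buoy in pairs]
mean_offset = sum(offsets) / len(offsets)  # ship intake runs warm here

# K15 raised the buoy records by the mean offset to match the ship
# reference; shifting ships down instead would change the baseline
# of the anomalies but not the trend.
buoy_adjusted = [round(buoy + mean_offset, 3) for _, buoy in pairs]
```

Note the direction of the shift matters only for the absolute level, not the trend, which is why the choice of reference is a convention rather than a conclusion.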


Catweazle666, thank you for your comment. You illustrate nicely why crap fleet data cannot be used. You have provided a single example; a full fleet survey would unearth a score more. And we haven’t touched on procedure yet. Resolution in this case cannot be guessed at. Nick Stokes and Mosher, both of you are invited to see the forest for the trees. Karl et al. 2015 cannot be defended, unless one is a lunatic. I point to Nick and Mosher both because I respect you both.

Karl 2015 was not the paper that justified use of fleet data. That has been used for a very long time, but is being replaced by better buoy data. K15 is the paper that says how the buoy data can be properly used with ship data. Buoy data is greatly upweighted because of its lower variance.

Nick,
Not true about lower variance when the dominant variance is from natural factors like stratification, mixing, and diurnal cycles sampled at random times. In such cases it matters little which of two competing instrument systems is used.

Geoff,
The cause of the variance doesn’t matter; they just weight according to what is observed. It’s a math process: you work out the uncertainty of the combinations, then work out the weighting that minimises uncertainty.
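The weighting described here is the textbook inverse-variance result: give each estimate a weight proportional to 1/variance and the combined variance is the smallest achievable. A sketch (the standard formula, with made-up variances, not K15's literal code):

```python
def min_variance_weights(variances):
    """Inverse-variance weights: the combination of independent
    estimates with the smallest possible variance."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    return [x / total for x in inv]

# e.g. buoys with variance 0.01 vs ships with variance 0.04:
w_buoy, w_ship = min_variance_weights([0.01, 0.04])

# The combined variance is below either input on its own.
combined_var = 1.0 / (1.0 / 0.01 + 1.0 / 0.04)
```

With these illustrative numbers the buoys get four times the weight of the ships, which is the sense in which lower-variance buoy data is "greatly upweighted."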

They were calibrated annually and as needed, generally on a port-and-starboard rotation, to be swapped during winter in-port/dry-dock periods.

The limitations were such things as the flow gauges. Older setups had manual-valve flow gauges with no readouts, and visiting scientists who were knob twiddlers were notorious for adjusting water flow without permission, or siphoning water without permission.

“The study drew criticism from other climate scientists, who disagreed with K15’s conclusion about the ‘hiatus.’ (Making sense of the early-2000s warming slowdown). The paper also drew the attention of the Chairman of the House Science Committee, Representative Lamar Smith, who questioned the timing of the report, which was issued just prior to the Obama Administration’s Clean Power Plan submission to the Paris Climate Conference in 2015.”


So let’s make some guesses. These are the scientists who wrote the paper that disagreed with K15’s conclusions about the hiatus:

Let’s guess what they would have to say about the above CargoCult Etc. hatchet job?

My guess is they will condemn this article in strongest possible terms, and that they will fully stand by their real assessment of Karl15:

Recent research that has identified and corrected errors and inhomogeneities in the surface air temperature record (4) is of high scientific value.

4. Karl, T. R. et al., Possible artifacts of data biases in the recent global surface warming hiatus. Science (2015).
As for why the WaPo rejected a shorter version of this article, they possibly contacted the K15 co-authors and got a very different story. Just a hunch.

==> There are global surface temperature datasets, such as Berkeley Earth and HadCRUT that are relatively independent of the NOAA data sets, that agree qualitatively with the new NOAA data set. ==>

Kudos to you, Judith, for mentioning this important caveat regarding the implications of the main post.

It is unfortunate to note that Rose’s article about how world leaders were “duped” fails to include that comment of yours, or indeed anything about BEST, and charts the “flawed” NOAA and HadCRUT data but doesn’t include BEST. Perhaps you might comment to Rose about his oversight.

Also, while I note that the author (in the Rose article) says, “I want to address the systemic problems. I don’t care whether modifications to the datasets make temperatures go up or down,” it is unfortunate, IMO, that he did not provide a similar reference point as you provided, or better yet provide his own analysis showing a materially different result from Karl’s, to back his accusations of scientific fraud (“putting [a] thumb on the scale”).

In 2011 an article at Greenwire was based on a dozen climate scientists being asked the question:

“Why, despite steadily accumulating greenhouse gases, did the rise of the planet’s temperature stall for the past decade?”

There was no doubt that a slowdown in global warming had occurred. Kevin Trenberth:

“The hiatus [in warming] was not unexpected. Variability in the climate can suppress rising temperatures temporarily, though before this decade scientists were uncertain how long such pauses could last. In any case, one decade is not long enough to say anything about human effects on climate; as one forthcoming paper lays out, 17 years is required.”

In August 2014, the New York Times discussed the attempts to understand The Pause. Andy Revkin:

“There’s been a burst of worthy research aimed at figuring out what causes the stutter-steps in the process [global warming] — including the current hiatus/pause/plateau that has generated so much discussion. The oceans are high on the long list of contributors, given their capacity to absorb heat. The recent studies have pointed variously to processes in the Pacific and Atlantic and Southern oceans…”

I asked the New York Times to do a story about the government’s only vessel solely dedicated to climate studies being pulled offline in 2012 on short notice due to the across-the-board budget cuts. They were interested in the story but asked, “Is there a conspiracy?” When I said, “No, just unfortunate budget cuts,” he was no longer interested in the story.

Not quite nothing? This is not an exposé of the results, but it’s damning of the players and the game.

“Thumbs on scale”:
Not documented, and utilized a defective data set. “The incident report had found that there were unidentified coding errors in the GHCN-M processing that caused unpredictable results and different results every time code was run.”
Did not follow the prescribed guidelines of the scientific organization or the publisher. According to the Daily Mail article, potentially subject to retraction.
Not reproducible.
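"Different results every time code was run" is exactly what a determinism smoke test catches: fingerprint the output of two identical runs and compare. A sketch (toy code, not the actual GHCN-M pipeline; the stochastic step and its seed are my illustration):

```python
import hashlib
import json
import random

def process(records, seed=42):
    """Toy stand-in for a processing step with a stochastic
    component. Seeding the generator makes every run repeat
    exactly; an unseeded generator would not."""
    rng = random.Random(seed)
    return [round(r + rng.gauss(0.0, 0.001), 6) for r in records]

def digest(values):
    """Fingerprint an output so two runs can be compared."""
    return hashlib.sha256(json.dumps(values).encode()).hexdigest()

data = [14.2, 14.5, 13.9]
run1, run2 = process(data), process(data)
same = digest(run1) == digest(run2)  # identical runs, identical digests
```

A check this small, run before each release, is the kind of gate the incident report implies was missing.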

There are two sides to a story… look at the way the Fyfe paper is being hyped.

Peterson says he’s not aware of any issues raised in the months just prior to the publication of the Science study. But in 2013 and early 2014, well before the disputed study was submitted to Science, Peterson says there was tension between agency scientists and data managers. The scientists wanted to publish a paper based on a then-new, more comprehensive database of land temperatures from the ISTI. Others in the agency pushed for a delay out of concern that the new ISTI data hadn’t fully met NOAA protocols for releasing such databases to the public. The dispute led to a six-month delay in the publication of that earlier study in the Geoscience Data Journal, says Peterson. The ISTI data was later used in the Science study. …

Now, under new management, I’d suspect that the communication will become available. Would it be out of order to expect that, should that communication indicate there were time “pressures” and/or other less-than-scientific reasons for a rush to publication, the reputations of the scientists, and certainly of NOAA, won’t be tarnished? And presuming that to be the case (maybe incorrectly), how will that be beneficial?

John, thank you for doing this. Your experience with inappropriate behavior by Tom Karl, Tom Peterson, and Peter Thorne at NCDC is consistent with my experience with the CCSP 1.1 Report. If you have not read these, you might find them informative:

“The process that produced the report was highly political, with the Editor taking the lead in suppressing my perspectives, most egregiously demonstrated by the last-minute substitution of a new Chapter 6 for the one I had carefully led preparation of and on which I was close to reaching a final consensus. Anyone interested in the production of comprehensive assessments of climate science should be troubled by the process which I document below in great detail that led to the replacement of the Chapter that I was serving as Convening Lead Author.”

BEN WAS REALLY PISSED OFF WITH ROGER — AS WAS TOM KARL I GUESS (NOT YET TALKED TO HIM).ALL OF HIS POINTS CAN BE SHOT DOWN, BUT IT IS A PAIN NONE THE LESS. APPARENLTY JUDY CURRY EXPOSED HER INFERIORITY COMPLEX (ANS HER INFERIORITY).”

Your new work has further exposed a very manipulated effort for that community.

Has it been a warm winter where you are? or not? Was last summer warmer or cooler than normal? I know you are trying to measure something far more complex, but I’d be interested to hear your reactions.

Under these policies, NOAA was obliged, inter alia, to establish a peer review record accessible to the public. NOAA resisted Lamar Smith’s requests on the questionable grounds that scientific correspondence was privileged, but this is not the case for the peer review record of influential scientific information.

I questioned another co-author about why they chose to use a 90% confidence threshold for evaluating the statistical significance of surface temperature trends, instead of the standard for significance of 95%; he also expressed reluctance and did not defend the decision.

The 90% threshold bothered me the most. This could make a huge difference in the interpretation and wording of results. When dealing with measurements ostensibly accurate to 0.01 or even 0.001 C, this changes the designation of whether or not any observed change is meaningful. If 95% is the standard for climate data, it should be used. The only other departure allowed should be reporting the actual level attained rather than a threshold level.
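The difference the threshold makes is easiest to see with a borderline trend. A minimal sketch using the normal approximation (the z-value of 1.8 is my hypothetical, not a number from K15): the same estimate is "significant" at 90% and not at 95%.

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a normal (z) test statistic."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# A trend estimate sitting 1.8 standard errors from zero:
p = two_sided_p(1.8)
passes_90 = p < 0.10   # clears the 90% threshold
passes_95 = p < 0.05   # fails the conventional 95% threshold
```

Any trend whose p-value lands between 0.05 and 0.10 flips designation depending on which threshold is chosen, which is why reporting the attained level is the cleaner practice.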

The AR5 best-estimate ERF trend over 1998–2011 is 0.23 ± 0.11 W m–2 per decade (90% uncertainty range), which is substantially lower than the trend over 1984–1998 (0.34 ± 0.10 W m–2 per decade; note that there was a strong volcanic eruption in 1982) and the trend over 1951–2011 (0.30 ± 0.10 W m–2 per decade; Box 9.2, Figure 1d–f; numbers based on Section 8.5.2, Figure 8.18; the end year 2011 is chosen because data availability is more limited than for GMST). The resulting forced-response GMST trend would approximately be 0.13 [0.06 to 0.31] °C per decade, 0.19 [0.10 to 0.40] °C per decade, and 0.17 [0.08 to 0.36] °C per decade for the periods 1998–2011, 1984–1998, and 1951–2011, respectively (the uncertainty ranges assume that the range of the conversion factor to GMST trend and the range of ERF trend itself are independent). The AR5 best-estimate ERF forcing trend difference between 1998–2011 and 1951–2011 thus might explain about one-half (0.04°C per decade) of the observed GMST trend difference between these periods (0.06 to 0.08°C per decade, depending on observational data set).

1. The findings of K15, which Bates somewhat misrepresents, hold up under independent examination. In particular, it was suggested here at Climate Etc. that folks should look at other datasets to confirm or call into question K15’s decisions. (Thanks, Dave Springer, for suggesting folks compare it to Argo.) The hypothesis here at Climate Etc. was that IF folks looked at other data (like Argo) they would see that Karl had his thumb on the scales.

Well, folks did just that. They looked at satellites, buoys, and Argo. What did they find? They found that Karl’s thumb was weightless. That is, they found that the independent datasets CONFIRMED the adjustments. This doesn’t make them perfect. This doesn’t preclude another look at the data. What it shows is that the supposition that Karl’s adjustments would be shown to be incorrect was wrong. “Falsified,” if you like that term. Busted. The Karl adjustments to SST are confirmed by looking at data folks have never looked at before. See Zeke’s paper.

2. CDRs
I would like to thank Dr. Bates for pushing for CDRs and a more formal process. I would hope that formal CDRs would get folks to stop their claims of fraud. I would like everyone to hold themselves to CDR-like processes. It would be great if Christy and Spencer updated their CDR; it’s way, way out of date. Wouldn’t it be great if Goddard, Watts, Willis, ah hell, everybody used a CDR-like process? It would be great if folks didn’t post charts unless they went through a formal process. But even with a CDR-like process, I am sure that people will still claim fraud, because they can. I know that people will still demand more, more, more, not because they actually want to use the data, but just as a form of obstruction, diversion, etc. Wouldn’t it be great if Judith required a CDR and an up-to-date code repository for every post at Climate Etc.? Do you think she will?

In any case, when folks post their findings here on Climate Etc., you can expect me to ask:

1. Where is your SVN or Git?
2. Where is your data posted?
3. Where is your CDR or functional equivalent?

I expect every intellectually honest person to agree with me and demand the same everywhere. Not gonna happen.
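The three demands above can be made mechanical. A sketch of the smallest possible provenance stamp: a checksum of the posted data plus the commit that produced it (the layout and file names here are hypothetical, in the spirit of a CDR, not any NOAA format):

```python
import hashlib
import pathlib
import subprocess
import tempfile

def provenance_record(data_path):
    """Answer 'where is your Git, where is your data' in one dict:
    a SHA-256 of the data file plus the producing commit, if any."""
    digest = hashlib.sha256(pathlib.Path(data_path).read_bytes()).hexdigest()
    try:
        commit = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
    except Exception:
        commit = "not-under-version-control"
    return {"data": str(data_path), "sha256": digest, "commit": commit}

# Demo on a throwaway file (hypothetical contents).
tmp = pathlib.Path(tempfile.gettempdir()) / "demo_sst.txt"
tmp.write_bytes(b"14.2 14.5 13.9\n")
record = provenance_record(tmp)
```

Publishing such a stamp alongside a chart lets anyone verify they are checking the same bytes the author used, which is most of what a functional CDR equivalent has to deliver.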

Funny story. Long ago I ran a program to develop a UTD (unit training device), basically a flight sim for training fighter pilots. It was great. We ended up winning a huge contract, teamed with Hughes Training, to build a UTD for the F-16. Our code was all research code, but hey, bringing it up to snuff would be no problem. So I just paired every research programmer with a guy skilled in MilStd 2167A/T. After three days, one of the research guys punched the standards guy in the face. Ouch. Thunderdome. This is just my way of saying that the move to a more production-oriented approach won’t happen overnight, and it won’t be bloodless.

As you’ve stated a sufficient number of times, this is a blog.
“In any case, when folks post their findings here on Climate Etc., you can expect me to ask:
1. Where is your SVN or Git?
2. Where is your data posted?
3. Where is your CDR or functional equivalent?”
As a member of the scientific community, surely you’re willing to put those same questions to those such as Karl. Based on your history, had you asked Karl and found what is outlined in this exposé, might you have viewed the work through a different lens? Apparently his offering would not hold up to the standards you wish to apply here.

“As a member of the scientific community, surely you’re willing to put those same questions to those such as Karl. Based on your history, had you asked Karl and found what is outlined in this exposé, might you have viewed the work through a different lens? Apparently his offering would not hold up to the standards you wish to apply here.”

I viewed his data through the lens of a skeptic. That is WHY folks on our team decided to check his work by looking at OTHER DATA using OTHER METHODS. For reference, see Zeke’s comments here when K15 came out. He withheld judgment.

People really need to think harder or read what I have written in the past more closely.

If you present an argument and refuse to provide data or code, then I am under no rational obligation to believe you OR to find your mistake.

The absence of these things (code, data, CDR) doesn’t make you wrong. It just means I’m under NO obligation to CHECK your work or to believe it.

After 30 years of intense climate science, and ideology driven climate control policies which have done great damage to the global economy, why don’t we have a widely accepted chart of temperatures over the Phanerozoic Eon?

It is important because this chart, if correct, may mean the GCMs and IAMs are exaggerating the negative impacts and damages of GW.

I don’t see anything in those sentences about impact functions or damage functions, do you? It is just a pile of cherry-picked factoids and opinion (that supports your belief). It is the Alarmists’ equivalent of this: “A complete list of things caused by Global Warming,” http://www.numberwatch.co.uk/warmlist.htm

You can see net damage, however. How you evaluate it in dollars depends on how you value lives in poorer regions. GDP alone doesn’t cut it because that values an African life at about 5% of an American one.

You are asking whether it is net benefit. It does not look like net benefit to me. As to the cost of livelihoods and damage, value it any way you like. There are many ways. That is your choice. Don’t wait for me to tell you. Do you want it to be 4 C warmer after reading that? I would value the difference between now and that pretty highly.

You’ve made that silly statement a hundred times. It’s been pointed out to you that it applies to all policies. All policies have winners and losers. Rational policy delivers the maximum benefit. Virtually no policy has no losers.

Stop babbling Jim Denier. You haven’t a clue. Go back to watching the football.

You just haven’t thought it through. There is a cost. Who pays? A carbon tax or maybe you want a benefit tax. Where does the money come from in your world? Note that all carbon use is a cost-benefit calculation. A person only pays for fossil fuels to gain benefit. The hidden part is the cost downstream (see scenarios).

If you think you have a better way to quantify and present the global net benefits of GHG mitigation policies than the globally accepted standard, please show the equivalent chart in the units of measure you deem appropriate, and provide links to the basis for it: method, inputs, assumptions, and all else needed to understand and reproduce it (as I did for the above chart: https://anglejournal.com/article/2015-11-why-carbon-pricing-will-not-succeed/).

Actually I go with a consensus $40 per tonne, but that’s just because it seems to pay for a lot of the damage and adaptation, if not all. You need an identified revenue stream from carbon to at least offset these costs. How mathematically exact it should be is academic. Just the presence of revenue matters.

You think of the world as a single economy, not a set of independent ones. That’s very globalistic/idealistic of you. I hope it works like that, but I think you need policies to get the money to the right people; otherwise climate change just sinks some nations.

You are correct that carbon pricing would create many more losers than winners and sink many nations. That’s why countries will not participate unless they will incur net benefits, and why any attempts will not be sustained. This applies to any GHG mitigation policies that would incur net losses to their economies.

I have noticed on a number of threads that you seem to have a fundamental misunderstanding about the net cost of decarbonization. You use phrases like “It is just balancing the books” which suggests you think this is a zero sum game. It is not. Decarbonization has a real cost to current GDP growth – as much as 2% on a net basis according to Stern. To use technical economic terms, that is freakin’ huge. The economic dislocation and societal impacts of this GDP loss (which will be mostly reflected in consumption) will be highly significant. If you want people to agree to sacrifice over $1 trillion today (and every year) to avoid potential costs in the future, you had better damn well have a reliable damage function. Peter Lang is 100% correct on this.

DavidT, yes, if you want to calculate the costs go ahead, but don’t offset them with benefits unless you also have a way to tax those benefits to pay for the costs, which you don’t. It is fundamental to understand that the people with the benefits are not the same people with the costs. And, no, it is not zero-sum. There is a loss either way. Mitigation and climate stabilization once and for all costs less than continuing costs for adaptation and damage for centuries.

I honestly don’t know what you are talking about. Can you explain further what you mean by “don’t offset them with benefits”? I think you also miss the point. It is the obligation of those requiring current investment to justify that investment. If you want the global economy to forgo over $1 trillion annually, you need to make the case and make it strongly. You need to present a robust damage function. Otherwise, it just won’t happen. You may want to look up what the estimable Daniel Kahneman has written on global warming. (I dabbled in behavioral economics earlier in my career, and while I moved on to a more markets-based macro focus, I still regard him as one of the greatest living economists.)

So Mosher, at what point in time should the World have been induced to find K15 scientifically reliable without further caveat from the NOAA? The time of publication? Paris? Ever?
Does K15, as published, contain adequate notice of uncertainty sufficient for understanding by the Paris audience?
Should K15 have included notice of its lapses in Agency process and standards prior to publication, as part of the uncertainty necessarily contained in its conclusions?

“So Mosher, at what point in time should the World have been induced to find K15 scientifically reliable without further caveat from the NOAA? ”

1. We didn’t take it as gospel truth. That is why we checked it. Turns out they improved the record.
2. I have no clue what you mean operationally by “scientifically reliable.” It’s a record, with uncertainty, like all records. Their ocean now has more warming than ours. Not enough of a difference to change any real science; a small technical difference of interest to specialists.

“Does K15, as published, contain adequate notice of uncertainty sufficient for understanding by the Paris audience?”
1. Yes.
2. There is ZERO evidence that anyone at Paris even considered K15. In fact, years of negotiating prior to K15 took no notice of the pause. Quite the opposite: the pause, which skeptics found so compelling, was ignored by at least half of scientists. We published an editorial prior to K15 essentially making the same point.

“Should K15 have included notice of its lapses in Agency process and standards prior to publication, as part of the uncertainty necessarily contained in its conclusions?”
1. No.
2. The failure to archive data at the time of publication falls on the journal to enforce.

Fundamentally, I find the same thing here as I did in Climategate.

1. Processes (things like archiving, responding to FOIA, providing code, being open and transparent) are LACKING.
2. DESPITE these failings in process, NOTHING IN THE SCIENCE CHANGES.

The remedy is the same remedy I suggested in Climategate.

1. Provide proper FUNDING for improving processes and data archiving.
2. GET SCIENTISTS OUT OF THE JOB OF CONTROLLING THEIR DATA. In industry we had data custodians and document control: I produced data, and the custodian or librarian was in charge of filing it, maintaining it, and distributing it.

Well, show me that Karl’s bias was not a result of noticing that these other datasets would be more consistent if he pressed his thumb this way.

There comes a question of circular reasoning when methods aren’t applied as expected. Someone’s intuition or bias might actually be borne out by the method, sure. But the bias needs to be separated from the science. Methods help with that. You don’t know the extent of the bias if the methods aren’t followed.

This is great! Shallow as it is, for me this is vindication after years of being laughed at and called names for being a ‘climate denier’ in spite of citing studies by so many scientists and questioning the constant yearly trend of ‘adjustments’ made to the temperature data NOAA kept posting regularly.
I can ‘throw’ this article in the faces of the self-righteous GW believers out there.
And having said that…
Now let’s try to get our taxpayer money back for this embarrassing farrago of nonsense these so-called ‘scientists’ have foisted on us.
This ‘scam’ cost billions to perpetrate, and it came out of all our pockets.
Let’s also get the ‘ringleaders’ who pushed this BS on us. I am sure there are a lot of politicians involved here (and what a great way of winnowing the grain from the chaff), and we should start hiring forensic accountants to find the culprits (starting with the premier suspect, the infamous Al G.).
Follow. The. Money.

I thought the article was rather dense, and seemed to argue that scientists had not rigorously kept records of their data and had also used weak statistical methods. But others in the comments are saying that subsequent studies nonetheless support the original data. Still, you think this is clear-cut evidence to “throw” at “believers”? I’m interested in why you think so.

I would ask the question: if the addition of ship data, the use of an “unclean” data set, and the relaxation of the significance level made no difference, why did this paper have to be written at all? It sounds like the pre-existing data had already eliminated the “pause.” Where is that paper?

“scientists had not rigorously kept records of their data and had also used weak statistical methods”
He’s complaining that they didn’t archive records (on time) according to a protocol that he favors (and seems to have invented). As to statistical methods, he gives no details, and I see no evidence that he knows anything about statistics.

“The ‘whistle blower’ is John Bates who was not involved in any aspect of the work. NOAA’s process is very stove-piped such that beyond seminars there is little dissemination of information across groups. John Bates never participated in any of the numerous technical meetings on the land or marine data I have participated in at NOAA NCEI either in person or remotely. This shows in his reputed (I am taking the journalist at their word that these are directly attributable quotes) mis-representation of the processes that actually occurred. In some cases these mis-representations are publically verifiable.” – Peter Thorne

“Berkeley Earth and HadCRUT that are relatively independent of the NOAA data sets, that agree qualitatively with the new NOAA data set. However, there remain large, unexplained regional discrepancies between the NOAA land surface temperatures and the raw data. ”

Judith,

What portion of HadCRUT data is taken from NOAA? In your study of this, what portion? 10%? 20%? 50%? 90%? 95%? How did you figure that?

What portion of Berkeley Earth land data is taken from GHCN-M version 3? How did you figure that?

What regional discrepancies are you referring to? What is large?
What do you mean by unexplained? What would constitute an explanation for you?

These are real questions. Since you express some certitude about these issues, an answer would be cool.

Judith – The Berkeley Earth and HadCRUT data are not relatively independent from the NOAA data sets. I will document this today in several tweets. They all draw from mostly the same raw data while in the case of the Berkeley analysis, the added sites are mostly in the same geographic area. Even Phil Jones has confirmed this with respect to NCDC, GISS and the HadCRUT data.

I want to point out that all of the surface data sets over land suffer from i) a systematic warm bias associated with using minimum temperatures in the construction of trends and ii) the blending of non-spatially-representative sites with good sites. This includes the BEST analysis, as both of these issues directly affect the raw data. Maximum temperature is a better metric (although, as the appropriate metric to diagnose global warming, ocean heat content change should be used).

Factored on top of that issue is the so-called homogenization issue, which is very much a black box.

That there is a divergence between surface and lower tropospheric temperature trends supports the conclusion of warm bias in the surface temperature trend assessments. Warming has been occurring but it is more muted than claimed by Tom Karl et al.

An update on this subject will be given by Professor Dick McNider in next week’s Santa Fe meeting.

All this Wandering in the Weeds is fun and games, Mr. Mosher. But none of it alters the fact that IPCC climate models are bunk and unfit to fundamentally change our society, economy and energy systems.

‘Roughly half of the available energy is returned to the atmosphere via latent heating on the wet day, with a maximum value at noon of ~350 W m-2. In contrast, the maximum latent heat value on the dry day is only 50 W m-2’

It seems very possible that there is a moving feast of drought and precipitation artefacts in the surface record. There is as well an ENSO artefact in the annual averages. The perils of over-smoothing data. I think both of these contributed to last year’s ‘record’.

Well, what about the HAD data? Can anyone help me with this problem? In 2010 Phil Jones had an interview Q&A with the BBC and listed the warming trends from 1850 to 2009. This during their Climategate fiasco.
First trend was 1860 to 1880: 0.163 °C/decade
Second trend was 1910 to 1940: 0.150 °C/decade
Third trend was 1975 to 1998: 0.166 °C/decade
Fourth trend was 1975 to 2009: 0.161 °C/decade.

Why have the two earlier trends dropped, and in particular why has the first trend (1860 to 1880) dropped from 0.163 °C/decade to 0.113 °C/decade?
I’m using HadCRUT4 land & ocean, but there is a global HadCRUT4 krig version, and that shows a higher trend for 1860 to 1880 of 0.167 °C/decade.
Just for interest I checked the trend from 1910 to 1945 and found it to be 0.140 °C/decade, higher than Jones’s second trend is now. BTW, the HadCRUT4 global krig was 0.151 °C/decade for 1910 to 1945. What is going on?
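For anyone wanting to reproduce such trend figures, the usual approach is an ordinary least-squares fit to the annual anomalies over the chosen window, with the slope expressed per decade. A minimal sketch using synthetic data (the numbers below are illustrative, not actual HadCRUT values):

```python
import numpy as np

def decadal_trend(years, anoms, start, end):
    """OLS slope of annual anomalies over [start, end] inclusive, in degC/decade."""
    years = np.asarray(years, dtype=float)
    anoms = np.asarray(anoms, dtype=float)
    mask = (years >= start) & (years <= end)
    slope, _intercept = np.polyfit(years[mask], anoms[mask], 1)
    return 10.0 * slope  # per-year slope -> per-decade

# Synthetic series rising 0.015 degC/yr (illustration only, not HadCRUT data)
yrs = np.arange(1860, 1881)
an = 0.015 * (yrs - 1860)
print(round(decadal_trend(yrs, an, 1860, 1880), 3))  # -> 0.15
```

Note that for real annual data the fitted slope also comes with a standard error, and that different versions of a data set can legitimately give different slopes for the same window.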

Ethics seems to be at the heart of this essay and it’s big deal. Being open, accessible and transparent are really important to some people.
Does this mean we should expect President Trump to provide us with a machine-readable version of his tax returns? Why not?

Oh and don’t forget that for every regulation/rule you want to add to enforce the CDR you have to kill two other regulations (see executive order released 1/30/17).

Gavin has also posted a better NOAA:HadCRU graph matching baselines. You can see blue (HadCRU) slightly better on top in the past and see blue much more on the bottom in recent years, making Rose/Bates’ point more clearly than the Rose graph with two different baselines. So does yours.

Shub, “a bit unsubtly made”
Indeed so. That caption read: “The misleading ‘pausebuster chart’: The red line shows the current NOAA world temperature graph – which relies on the ‘adjusted’ and unreliable sea data cited in the flawed ‘Pausebuster’ paper. The blue line is the UK Met Office’s independently tested and verified ‘HadCRUT4’ record – showing lower monthly readings and a shallower recent warming trend.”
Unsubtle. And lies. The difference he shows is almost entirely due to the different anomaly bases, which are of no significance at all.
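The baseline point is easy to demonstrate in a few lines: two anomaly series sharing the same underlying signal but computed against different reference periods differ only by a constant offset, which vanishes once both are re-expressed against a common period. A minimal sketch with synthetic numbers (not the actual NOAA or HadCRUT series):

```python
import numpy as np

def rebaseline(years, series, base_start, base_end):
    """Re-express anomalies relative to the mean over [base_start, base_end]."""
    years = np.asarray(years)
    series = np.asarray(series, dtype=float)
    base = (years >= base_start) & (years <= base_end)
    return series - series[base].mean()

yrs = np.arange(1961, 2017)
truth = 0.01 * (yrs - 1961)      # common underlying signal
a = truth - truth[:30].mean()    # anomalies vs 1961-1990
b = truth - truth[10:40].mean()  # anomalies vs 1971-2000: constant offset from a

# On a common 1961-1990 base the offset disappears:
a2 = rebaseline(yrs, a, 1961, 1990)
b2 = rebaseline(yrs, b, 1961, 1990)
print(np.allclose(a2, b2))  # -> True
```

This is why plotting two series with mismatched baselines can create an apparent gap that has nothing to do with their trends.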

Instead of looking at QA as filtering data to a best set, a practical person might view it as characterizing all of the data so as to have maximum use. For example, soft or qualitative data may tell you something, particularly with spatial data. The same goes for processes flawed to various degrees.

The fact that NOAA, or any entity collecting data, has a QA program for its purposes does not mean that those processes and products are best for a third party user who may have more stringent or even less stringent requirements–requirements constrained by available resources, timing, goals, etc. Sometimes we by necessity must just go with what we’ve got in hand. That does not preclude later extension, revision or iteration.

Yep. Even if the temperature is up, and going up, so what? What difference does it make?

Even if the temperature is up or down says nothing about the impacts. The fact that there is no valid evidence that global warming would be net damaging is the real issue. Without valid evidence that GW is net harmful, there is no justification of the belief that GHG emissions are harmful – and no evidence to support the 2C political target and belief it is dangerous.

If all Climate Change is undoubtedly a Very Bad Thing, it follows that there must have been a time when we had the Perfect Climate, from which any deviation was for the worse.

When was it? How do we know?

Coz to me it seems that in general a warmer greener world is a better place than a colder greyer icier one…

Maybe if I was a polar bear I’d think different… but there are only ~25,000 of them living in a very narrow ecorange. By contrast there are 7,000,000,000 humans living all over the planet in temperatures varying from -40C to +40C (233K to 313K). We are a very adaptable species.

And try as I might I really can’t persuade myself that the existence of the planet – or of humanity – is in danger if the average global temperature changes from 287K to 289K by the end of this century…

“The temperatures are up and the ice is down.”
Actually that isn’t true. The temperatures in the ’30s were higher than now. The ice cover in the Arctic was also lower than now.

“Because we are stuffing the atmosphere with insulation.”
Again, not true. The atmosphere doesn’t work like a layer cake, where each layer insulates the layer above from the layer below. It works like a gigantic convection engine. Trying to measure the insulating effect of CO2 isn’t even on the right planet. That’s the planet the global models trumpeted by the IPCC work on.

On the real Earth, convection from low to high and from equator to poles is what drives the climate.

Mark:
““The temperatures are up and the ice is down.”
Actually that isn’t true. The temperatures in the ’30s were higher than now. The ice cover in the Arctic was also lower than now.”

Actually, no, it is true, and you are peddling an “untruth”….

No doubt you do not trust NOAA so here is JMA:

And Hadcrut:

Is there any more real truth you’d like me to show you?
Or, as I suspect, all you want is the alt type.
The endlessly perpetuated myth that “the 1930s were warmer”.

There is no data-set that shows that to be the case globally.

“On the real Earth, convection from low to high and from equator to poles is what drives the climate.”

No, the sun drives the climate, or rather the proportion of its TSI that is absorbed.
That currently is greater than the LWIR emitted.
The excess is largely accounted for by storage in the oceans.
Convection is one aspect of weather, which is the process that moves the Sun’s energy around the Earth in its quest to exit to space.
To some degree this is failing, because there is a radiative restriction on its doing so beyond what balance dictates. All the science since Fourier, Tyndall and Arrhenius tells us that. Empirical science, and not up for argument.
What is up for argument is at what level global temps will settle when balance is re-attained.

Oh, and also you may, or may not, care to show comprehensive evidence of ice being less over the entire Arctic basin in the 1930’s than in recent years.

See, that’s the funny thing. I remember when the data-sets DID show the 30’s being warmer than today. Hell, I remember when Hansen himself displayed graphs showing the 30’s warmer than today.

Your basic problem here is, despite what the Leftists believe about human psychology and how to twist it, most people don’t actually fall for all this constant ‘adjustment’ of the data. Once or twice, maybe. But do it time after time, and people on both sides soon get cynical. And that goes double for Mosher and his assertions that everyone on the Climate Faithful side get the same results. Because it’s TRUE, they do all line up again soon after every adjustment. Over and over and over again, no matter how much they change, everyone always ends up with identical results. And we’re supposed to all believe that THIS one is correct, no matter how different it is from the past. We’re apparently not even supposed to notice how much it keeps changing. ‘The adjustments don’t really change the trend.’ ‘The raw data actually shows more warming.’ ‘No, I won’t show you the data, you just want to find something wrong with it.’

I’m sorry, but some of us remember when Oceania WASN’T at war with Eurasia.

“Your basic problem here”
Your basic problem is that you don’t pay attention. Hansen showed graphs which showed 1930’s close to recent for ConUS, not globally. And even that is out of date. 2012 and now 2016 were way ahead.

My conclusion:
• There is no support for a variable TSI ‘Background’
• The current Climate Data Record [CDR] is not helpful to Climate Research
• The CDR should not be based on obsolete solar activity data
• I expect strong ‘push-back’ from entrenched ‘settled science’, but urge [at least] the solar community to be honest about the issue

A CDR is a quality audit. It isn’t the same as ticking a few boxes. It is about ensuring that approved and valid processes were used to create the output from the source, and that there are no sources of corruption or bias. If you can’t show these simple things, your paper has no quality. It is more likely to be politics than science.

JCH, “The skeptics who have completely failed to show there is anything wrong with K15… because they appear incapable of doing science, but are clearly great at doing despicable smear jobs.”

Actually, K15 showed there was something wrong with the previous versions of ERSST, which were used to estimate mitigation costs of perhaps a trillion dollars per degree. Bates showed that K15 didn’t follow established procedure. The “getting the right answer using the wrong methods” excuse is getting tired, doncha think?

Nick Stokes, “There is no evidence about wrong methods. There are claims about late paperwork.”

The whole issue is about not following the methods recommended by NOAA which would include having all paperwork reviewed prior to release. NASA, NOAA and other US agencies are notoriously anal about procedure.

Nick, Mosher and the rest are absolutely right. It doesn’t matter if proper procedure is followed, as long as the ‘right’ answer is gotten. And it must be the ‘right’ answer because now that someone has gotten it, everyone else will be able to get the same ‘right’ answer too. Just like what happened with Mike’s Hockey Stick.

See, it’s like what the Left call ‘Fake but Accurate’, but even better. You don’t have to just take their word for it. Give them time and they’ll eventually fabricate plenty of ad hoc justifications and ‘evidence’ to support what they knew was true long before they even started.

Because we all know that working in a lab or an office is every bit as dangerous as climbing a mountain. Not to mention little things like machine gun and mortar fire.

Anyone who puts on the uniform, whether voluntary or not, has my respect. Try not to diminish it by fake comparisons. Unless Mike Mann is your idea of a hero. Him being in the trenches and all.

Speaking of voluntary – the Generation of Heroes has been replaced. Today’s generation is truly that Generation. The majority of those landing on Iwo were draftees. That does not discount their courage, their efforts or their accomplishments. The men and women who have deployed to Iraq, Afghanistan and places all over the world have done so of their own accord. Everyone a volunteer. Most especially those who joined after 2001.

Amusing, I dismissed the post as it seemed just another politically motivated trivial witch hunt. But now it seems to have descended into a superficial discussion of the relation of surface temps to catastrophic global warming. Or not.

“I think I plotted that right on the Global Marine Argo Atlas.”
If you wanted to show the OHC at the meridian of Greenwich (0°E or 360°E) from 70°S to 70°N you plotted it right. Anyway, nobody would discuss why ocean temps declined because one should consider the whole ocean, from 0°E to 360°E:
And now: what to discuss??

The change in ocean heat approximately equals energy in less energy out.

Maybe not.

That’s a possibility.

Net up is warming.

Even if there were some increase in greenhouse gas forcing – it can’t be seen against large natural variability.

Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture. http://link.springer.com/article/10.1007%2Fs10712-012-9175-1

We may also wonder why you think that short term variability has any significance for the cause.

You wrote: “Discuss why ocean temps declined over the past couple of years – or not.”
The ocean temps indeed climbed in the last couple of years, as one sees when one uses the Argo data correctly. Guess why?

Well Mosh – it started with a complaint about adjustments to the surface temps – which use such archaic methodologies, and contain such large and variable artefacts that are completely neglected, that it all becomes moot. It goes on to the usual climate memes around pauses based on the surface temps. If you are going to claim off topic – please start well before I got here.

There are better methods than the surface records, which are irrelevant to climate. The only valid comparison is between UAH and RSS, if you really want to maintain standards.

And the boxes, being checked, show that an extra level of care was taken, including review by some of the top people in the NCEI group. This should be better data than unchecked, which is much of the point that Dr Bates is making. Are you pushing for less checking and formality?

“Watch the temperatures go up. Ice vanish. Coral die. More serious weather events. Droughts. Fires, Floods.”
Which of those events show that man’s CO2 is the cause, for if man’s CO2 is not the cause there is no need for political action?

“They looked at satellites, buoys and Argo. What did they find? They found that Karl’s thumb was weightless. That is, they found that the independent data sets CONFIRMED the adjustments.”

So all the “best” climate scientists (pun intended), using all the best temperature data, denied for almost 15 years that there was any pause.

Then about three years ago, those same scientists, using those same data sets, admitted there was a pause, and spent their energy explaining why it didn’t matter (ocean heat content being a better proxy was the most popular). Even the AR5 saw a pause in the data.

With Obama’s last shot at a climate deal coming up, we get Karl 15 which assures us there was no pause, based on “improvements” in the data.

The same data that didn’t show a pause for 15 years, then suddenly showed a 17 year pause that had to be explained away, was all wrong. And all those “best” scientists were wrong.

OK, this is post-normal science at its most typical. No surprise there.

But now we are assured that the data in Karl 15 that proves all those stupid best scientists were wrong, agrees with the data on which they made their calculations. The “adjustments” that show no pause, are “confirmed” (sorry “CONFIRMED”) by the data sets that until Karl 15 almost everyone agreed showed a pause.

+1. And don’t forget the computer of the guy that made the breakthrough accidentally had a complete failure and nothing can be recovered. That’s not odd at all. And the 8 scientists that worked on this with him didn’t think they needed to have any backups. Again, that’s not odd at all.

I think the defenders of Karl’s paper are not addressing the point. The point being “killing” the pause / slowdown up to 2014, and hurrying to publish it without any quality to be ready on time to have a political effect.

So, it doesn’t matter whether other temp series agree with Karl in 2016 or not. The question is the behaviour of the “settled science”, and whether there was a pause / slowdown with N papers “explaining it”, or it was just a scientists’ dream. You have to choose; one, or the other.

” hurrying to publish it without any quality”
That is Bates opinion. But the modification to ERSST was basically implementing the 2011 paper by John Kennedy, which had the necessary data. That is not undue haste.

Plazaeme,
Beyond the formal fussing about his archiving procedures, it’s mainly opinion, and the facts are few and shaky. Do you know where the “thumb on the scales” is? He makes a fuss about the choice of 90% CIs. They are actually common enough, but in this case they are clearly required, because they are what is used in the AR5 analysis that he is comparing with. See Box 2.2, AR5.
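For readers keeping score on the 90% vs. 95% question: the only difference is the critical value, so a trend can exclude zero at the 90% level but not at 95%. A small illustration with invented numbers (the trend and standard error below are hypothetical, not taken from K15 or AR5), using a normal approximation:

```python
from statistics import NormalDist

def excludes_zero(trend, stderr, conf):
    """Does a two-sided normal confidence interval at level `conf` exclude zero?"""
    z = NormalDist().inv_cdf(0.5 + conf / 2.0)  # ~1.645 at 90%, ~1.960 at 95%
    return abs(trend) > z * stderr

# Hypothetical trend of 0.086 degC/decade with standard error 0.048
print(excludes_zero(0.086, 0.048, 0.90))  # -> True  (|t| ~ 1.79 > 1.645)
print(excludes_zero(0.086, 0.048, 0.95))  # -> False (1.79 < 1.960)
```

Neither choice is “wrong” in itself; what matters is comparing like with like, which is the point being made about matching the AR5 convention.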

A lot of the rest is just nit-picking, e.g.: “The data used in the K15 paper were only made available through a web site, not in digital form, and lacking proper versioning and any notice that they were research and not operational data.”
A website is of course digital, and the paper is research, so that notice is hardly needed. This guy has an almighty chip on his shoulder.

” The land temperature dataset used in the Karl study had never been processed through the station adjustment software before, which led me to believe something was amiss. “
Again, opinion. Or unfounded suspicion.

All the fuss about GHCN is also just pointless nitpicking. K15 was about ERSST, and the adjustments for buoys.

– “It is clear that the actual nearly-operational release of GHCN-Mv4 beta is significantly different from the version GHCNM3.X used in K15. Since the version GHCNM3.X never went through any ORR, the resulting dataset was also never archived, and it is virtually impossible to replicate the result in K15.”

Let me show you where I am going, in case the “virtually impossible to replicate” part is right. You may say the GHCN part is not relevant. Not relevant for what? K15 “was about” ERSST. The pause-killing thing.

OK. So Karl had two choices. To publish something “virtually impossible to replicate”, or to wait until he had a better quality paper. He chose the former because the pause killing thing was OK enough. And you think this is not “hurrying to publish it without quality, to be ready on time to have a political effect”?

Plazaeme,
Karl’s paper is here. It is a short paper, and has just three figures with global trends. They calculate with ERSST 3b and ERSST 4. The paper says they used GHCN3 on both occasions. It was presumably the same version. Now GHCN V3 has changed very little in that time. I don’t know what he means by GHCN V3.X, and I don’t know if he does, since he also refers to V4, which is a very different product. But GHCN V3 is issued more or less daily, and each file comes with a distinctive date number. I am sure the file for the relevant date is available. In any case, GHCN V3 has changed very little in the last few years (except for new data). It is extremely unlikely that getting the wrong day version would change any of the published results. Again, it is all about varying ERSST, which makes by far the largest part of the global average.

Plazaeme,
Actually, I was wrong on one point – the new analysis uses GHCN V4. But it’s a standard-issue file, easily obtained. The readme file in the archive (dated Sept 15) says exactly what they are and where they are in the archive:

“This directory contains the adjusted land station data and metadata used in the Old Analysis; data from GHCN-Monthly version 3.2.2.
ghcnm.tavg.v3.2.2.20150116.qca.dat.gz; Adjusted station data
ghcnm.tavg.v3.2.2.20150116.qca.inv.gz; Inventory of adjusted stations”

And for the new Analysis“This directory contains the adjusted land station data and metadata used in the New Analysis.
tavg.v4.a.1.20150119.qca.dat.gz; Adjusted station data
tavg.v4.a.1.20150119.qca.inv.gz; Inventory of adjusted stations”
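Those file names carry both a version string and an eight-digit date stamp, which is what makes the exact vintage recoverable. A small sketch of extracting the stamp (the parsing function is mine for illustration, not a NOAA tool):

```python
import re

def ghcn_version_date(filename):
    """Extract the YYYYMMDD date stamp from a GHCN-M-style file name, or None."""
    m = re.search(r"\.(\d{8})\.", filename)
    return m.group(1) if m else None

print(ghcn_version_date("ghcnm.tavg.v3.2.2.20150116.qca.dat.gz"))  # -> 20150116
print(ghcn_version_date("tavg.v4.a.1.20150119.qca.dat.gz"))        # -> 20150119
```

The `\d{8}` pattern deliberately skips the short version digits (e.g. `v3.2.2`) and matches only the date field.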

It’s possible that if someone wanted to replicate the day after publication, they might have had to ask for a copy. For generations of science, that is how it worked.

“Karl is not replicable. The lead author admits this to Rose.”
Nonsense. What he said to Rose was: “He also admitted that the final, approved and ‘operational’ edition of the GHCN land data would be ‘different’ from that used in the paper.”

He’s saying that the eventual product may not give exactly the same result (of course). But he has archived the data files he used. That’s what is needed for replication.

Thank you Nick for confirming the point. What Karl et al did amounts to: ‘we were writing code for this paper and at one point we got this cool result (politically). We went ahead and published it even though we knew the final results would be different.’

If you did this in a clinical trial (and people have done this), the results would be thrown out.

I just read the Rose article that quotes you and I wanted to check whether you’ve been accurately represented.

1) You’re quoted: ‘They had good data from buoys. And they threw it out and “corrected” it by using the bad data from ships. You never change good data to agree with bad, but that’s what they did – so as to make it look as if the sea was warmer.’

But ERSST gives anomalies so it makes zero difference whether you put the ships down or the buoys up. That part is irrelevant and misleading. Have you been misquoted?

2) It also ignored data from satellites that measure the temperature of the lower atmosphere…Dr Bates said he gave the paper’s co-authors ‘a hard time’ about this, ‘and they never really justified what they were doing.’

This sounds like they’re talking about MSUs, which would be a silly way of measuring SSTs since there is almost no SST information in those channels. Were you referring to IR?

3) “there needs to be a fundamental change to the way NOAA deals with data so that people can check and validate scientific results.”

Argo, ATSR/AVHRR and buoys alone all say that ERSSTv3b was missing warming and agree with ERSSTv4. Independent sources validate ERSSTv4 and this reality is the opposite of what David Rose says so I’m concerned that your name and concerns about data handling are being used to push absolute bollocks about scientific results.

“But ERSST gives anomalies so it makes zero difference whether you put the ships down or the buoys up. That part is irrelevant and misleading”

While no ‘expert’, common sense suggests these anomalies are for grids, not individual instruments (obviously!). In this case, the (rapidly) changing composition of ships v buoys must surely affect some grid anomalies, pushing them upwards. Otherwise, why bother with this adjustment at all?

“In this case, the (rapidly) changing composition of ships v buoys must surely affect some grid anomalies, pushing them upwards. “
No. The anomaly base they actually use is in the past. It used to be 1961-90; it seems now to be 1971-2000. It actually doesn’t matter, as long as it is a fixed period. What changes the trend is the increasing proportion of buoys recently, which read slightly colder, and this difference is in no way affected by the anomaly base.
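The composition effect under dispute can be sketched numerically: if buoys read a fixed amount colder than ships and the buoy share grows over time, a naive blend acquires a spurious cooling trend even when the true temperature is flat, and adding a constant offset to either platform removes it. A toy model with an assumed 0.12 °C ship-minus-buoy difference (a figure of this order is cited in the ERSSTv4 discussion, but treat this as illustrative):

```python
import numpy as np

true_sst = 0.5                           # flat "true" anomaly, degC
ship_bias = 0.12                         # assumed ship-minus-buoy offset (illustrative)
buoy_frac = np.linspace(0.1, 0.9, 20)    # buoy share growing over 20 steps

ships = true_sst + ship_bias             # ships read warm
buoys = true_sst                         # buoys taken as reference

naive = buoy_frac * buoys + (1 - buoy_frac) * ships                    # spurious cooling
adjusted = buoy_frac * (buoys + ship_bias) + (1 - buoy_frac) * ships   # flat, as it should be

print(round(naive[0] - naive[-1], 3))        # -> 0.096 : artificial cooling in the blend
print(round(adjusted[0] - adjusted[-1], 3))  # -> 0.0   : offset removes the artifact
```

Note the direction of the constant offset is immaterial for the trend: subtracting 0.12 from the ships instead of adding it to the buoys gives the same flat blended series, just shifted, which is the point about anomalies above.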

MR, your first argument was tried at the time Karl appeared, and failed for two reasons. Both still apply. First, you leave the better data alone and adjust the lower quality data. Always, I was taught. Second, there is an important artifact of moving the buoy data up. Over the period in question, the ERSST went from only about 10% buoy (iirc) to over 90%. So the substitution of data type over time automatically produces a warming trend where in fact none exists, something largely verified by ARGO and which was ignored. Clever, but wrong.

MR,
The elephant is the inability of Argo to measure the changes it purports to measure. It is simply not possible to attain the claimed accuracy when you follow the correct procedures and include the natural variation of (say) temperature down a profile, with its dynamics, its stratification, its variable mixing properties, surface micro-layer effects and so on. Given past work at attempting water-bath-stabilised temperatures in good laboratory surroundings, I’d hazard a guess that the best one can do, in the sense of all variation in a natural setting, is about +/-0.1 deg C for uses such as ocean heat content. Happy to be shown wrong. Show me the numbers. My mind is flexible towards improved knowledge. So far nobody I have read has gone far towards this holistic accuracy; some do not go past elementary stats precision calculations. (Easy for me to have missed some critical work; there is a large volume.)
Remember the lesson from Josh Willis et al.:
“The recent cooling signal in the upper ocean reported by Lyman et al. [2006] is shown to be an artifact that was caused by a large cold bias discovered in a small fraction of Argo floats as well as a smaller but more prevalent warm bias in eXpendable BathyThermograph (XBT) data. These biases are both substantially larger than sampling errors estimated in Lyman et al. [2006].”
Who is to say the floats that replace Argo will not cause it to get junked, as Argo junked the XBT versions?
Geoff

Forget Rose. You and Hunt and others are missing the overall. The story is not about what Rose may or may not have said right or wrong. And it is not about the particulars of what Bates said properly or not.

The public can’t understand the details. And they don’t want to get into the weeds.

The story, and the 2nd and 3rd derivatives of the story, is that a whistleblower has come forward: not just any whistleblower, but one from the inside, from the epicenter of the climate establishment. This has more significance, not scientifically but in terms of public perception, than anything that Judith or Pielke or Lindzen can say. Bates is from the government.

This is going to be a seminal moment because of the headline value. Every skeptic, politician or otherwise, will get their 15 minutes of fame, again not because of the actual issues surrounding what Bates has said but because of who has come out from the shadows. The original story will get lost. The future stories will be about the great divergence between what the climate establishment really knows versus what they think they know. And that is Judith’s uncertainty monster.

Anyone who thinks this is about Rose or about the specifics of Bates statements doesn’t understand the dynamics that will overtake what is being discussed here. Talk about chaos theory.

Do you have any comment on David Rose’s failure to put the two temperature series on a common baseline?
He is a journalist; the Guardian uses cute little polar bears dying on ice floes; he is selling a message.
Yes, it is wrong; tough, as they say. Hope you have backed up your email correspondence with Karl, including the order to destroy it all.
Pass the message to Zeke: head down, mate, but not in an email.

John will sooner or later have to face the fact that he made unsubstantiated claims (talk about traceability) that have already been refuted, e.g.: “Insisting on decisions and scientific choices that maximised warming and minimised documentation”, when:

Dr. Tom Karl was not personally involved at any stage of ERSSTv4 development, the ISTI databank development or the work on GHCN algorithm during my time at NOAA NCEI

Shub – Fancy bumping into you in here. We’ll have to stop meeting like this! Of course Steve Mosher’s point is not irrelevant.

Ceresco – Please forgive my tardy reply, but I’ve been rather busy today poking the House Science Committee in general and Lamar Smith in particular with a very long but hopefully still sharp stick. By way of just one example:

David Rose, writing for a general public, could have qualified his story in many more ways, to the point where it was all caveats and exceptions, not able to be digested by most of the public. He wanted to show two time series with different trends. He did. The numbers on the vertical axis allow readers to compare trend differences. Rose does not state that they are there to give absolute values to the lines. In some ways it is like other authors adding instrumental data to proxy data and explaining the addition in small print elsewhere. Acceptable to some, not to others. A specialist argument that need not be put in a newspaper article for the public?
Geoff

“However, the Act will be toothless without an enforcement mechanism. For that, there should be mandatory, independent certification of federal data centres.”

Within other areas of importance, ISO 17025 is the standard by which independent laboratories are accredited.

“ISO/IEC 17025 General requirements for the competence of testing and calibration laboratories is the main ISO standard used by testing and calibration laboratories. In most major countries, ISO/IEC 17025 is the standard for which most labs must hold accreditation in order to be deemed technically competent. In many cases, suppliers and regulatory authorities will not accept test or calibration results from a lab that is not accredited.
…
In the U.S. there are several, multidisciplinary accreditation bodies that serve the laboratory community. These bodies accredit testing and calibration labs, reference material producers, PT providers, product certifiers, inspection bodies, forensic institutions and others to a multitude of standards and programs.” – Wikipedia

I think the necessary standards and mechanisms for accreditation are largely in place. It is more about developing the competence and attitude towards using and following these standards. In particular, I think accreditation would benefit the field of climate science.

“Because the data with respect to in-situ surface air temperature across Africa is sparse, a one-year regional assessment for Africa could not be based on any of the three standard global surface air temperature data sets from NOAA-NCDC, NASA-GISS or HadCRUT4. Instead, the combination of the Global Historical Climatology Network and the Climate Anomaly Monitoring System (CAMS GHCN) by NOAA’s Earth System Research Laboratory was used to estimate surface air temperature patterns.”

Urban Data
WMO-
“The nature of urban environments makes it impossible to conform to the standard guidance for site selection and exposure of instrumentation required for establishing a homogeneous record that can be used to describe the larger-scale climate”

“The other concern that I raised following ClimateGate was overconfidence and inadequate assessments of uncertainty.”

“I questioned another co-author about why they choose to use a 90% confidence threshold for evaluating the statistical significance of surface temperature trends, instead of the standard for significance of 95% — he also expressed reluctance and did not defend the decision.”

That is another area where climate science fails to meet industry standards.
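To make the 90%-vs-95% point concrete, here is a toy sketch with made-up numbers (not any real temperature series): the same trend estimate can pass a two-sided significance test at the 90% level while failing at 95%, which is why the choice of threshold matters.

```python
# Hypothetical trend estimate and standard error (illustrative only).
slope = 0.010    # K/yr
stderr = 0.0055  # standard error of the slope
t_stat = slope / stderr  # ~1.82

# Two-sided critical values of Student's t for 38 degrees of freedom
# (from standard t tables).
t_crit_90 = 1.686
t_crit_95 = 2.024

print(abs(t_stat) > t_crit_90)  # True  -> significant at the 90% level
print(abs(t_stat) > t_crit_95)  # False -> not significant at the 95% level
```
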

There actually exists an international guideline on expression of uncertainty – Guide to the expression of uncertainty in measurement. This is the only broadly recognized guideline on the expression of uncertainty. The following seven organizations* supported the development of this Guide, which is published in their name:
BIPM: Bureau International des Poids et Mesures, IEC: International Electrotechnical Commission, IFCC: International Federation of Clinical Chemistry **, ISO: International Organization for Standardization, IUPAC: International Union of Pure and Applied Chemistry, IUPAP: International Union of Pure and Applied Physics, OIML: International Organization of Legal Metrology”. The guideline is freely available from https://www.oiml.org/en/files/pdf_g/g001-100-e08.pdf
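For anyone unfamiliar with the GUM, its core operation is simple: for independent inputs, standard uncertainties combine in quadrature. A minimal sketch with hypothetical component values (not from any actual dataset):

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares of independent standard uncertainties
    (GUM law of propagation, with unit sensitivity coefficients)."""
    return math.sqrt(sum(u * u for u in components))

# Hypothetical components, in kelvin: sensor noise, calibration, sampling.
u_c = combined_standard_uncertainty([0.3, 0.4, 0.0])
print(round(u_c, 6))  # 0.5
# Expanded uncertainty at ~95% coverage uses a coverage factor k = 2.
```
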

The United Nations Intergovernmental Panel on Climate Change (IPCC) failed to identify and recognize this guideline. They made up their own in a hasty way. Their guideline is largely a joke called: “Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties”. https://www.ipcc.ch/pdf/supporting-material/uncertainty-guidance-note.pdf

That is the IPCC document that tries to standardise subjective levels of confidence. Hard to believe that this kind of sub-standard activity within science is allowed and even promoted by the United Nations.

In addition to having failed on qualitative and quantitative expressions of uncertainty, and being susceptible to noble cause corruption, the InterAcademy Council seems to have been heavily involved with the United Nations and IPCC – politically, economically, personally and otherwise.

One example:
“The creation of InterAcademy Council had been requested in 1999 by the Secretary‐General of the United Nations in order to facilitate the best scientific input into global decision‐making.”

The past work described here as done by Dr Bates should be regarded as likely to do much good, little bad, thankless, repetitive, necessary, not every senior person’s cup of tea. The system is better for it.
It would be interesting to learn if the certifications required prior to use of collated data were matched by requirements that by-the-book treatment of precision, bias, uncertainty – whatever the labels – had been completed to specified methods and reviewed and passed before release for public use.
Dr Bates, if you read this a comment would be lovely.
Other readers, please do not go out of your way to be personally nasty. The real importance here is getting the best possible data to the users. It is trivially about whistleblowers and their motivation.
Geoff.

Who prepared the possibly misleading Figure for David Rose’s Daily Mail article – which allegedly uses different base years for calculating anomalies?

When I use Nick Stokes’ trend viewer, I get 1.56 K/century (0.67-2.00 95% CI, correcting for autocorrelation) for what he calls NOAAlo and 1.34 K/century (0.97-2.18) for what he calls HADCRUT (for 1/97-11/16). The difference is 0.22 K/century, or about 0.045 K for this 1/5-century period. The graph has horizontal lines drawn every 0.2 K, but the difference in temperature rise between these two data sets isn’t clearly visible on the plot. The only thing clearly visible there is the fact that NOAA is warmer in all periods – the baseline problem.

During the “Pause” (roughly 1997-2013), one can find periods where the trend of each record is as much as 1 K/century less than the long-term average of 1.6-1.7 K/century (1.4-1.9) for the last 40 years. The confidence interval also included zero during the Pause. However, now that we have experienced three years of warming in a row culminating in the recent El Nino, the trend for the first and last 20 years of this period is essentially the same as for the full 40-year period! Karl15 didn’t kill the Pause – the 15/16 El Nino did. Unless temperatures return to the level of the Pause years for the next five years or so, the Pause is not going to return – whether or not Karl’s adjustments are accepted or refuted. Both sides should recognize that the process of combining data from an evolving mixture of technologies for measuring ocean temperature into a composite global record is an inherently uncertain process. The +0.05 K/decade change in trend introduced by Karl15 was only important because it challenged a politically-important term: “The Pause”.

More than three years of monthly posts at WUWT (and elsewhere) on the Pause by Monckton and others (which have mostly ceased), have created the illusion in part of the skeptical community that the Pause had invalidated the concept of radiative forcing. It’s time for this idiocy to stop. The relevant issue is TCR and especially ECS. There is no chance they are 0 K/doubling and little chance they are 1 K/doubling. They could be half of what the IPCC’s models project. Even RSS and UAH show long-term warming, albeit 25% less than the global surface record.

Something that wasn’t clear to me, though, is that I had to use the data from NOAA’s site, not the data from the Karl et al. archive (which seems to have a baseline of 1971 – 2000, so is actually below the HadCRUT4 data if the baselines aren’t reconciled). Is the data on the NOAA site the same as the Karl et al. data?

ATTP,
The traditional files posted at NOAA are the MLOST files, which used to be 1961-1990, but may be now 1971-2000. They are the ones that are directly calculated, and would be what is in the archive. The 30-year baseline is one that can be used for most stations. Then the results are converted to 1901-2000 for most reporting purposes. I’m not sure why; it sounds like a longer base would be better, but in fact what counts is the base when you first aggregate. Annoyingly, the MLOST data is now posted later than the 1901-2000 conversion.
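For readers puzzling over these baseline conversions, a minimal sketch (illustrative names and synthetic data, not NOAA's actual code) of how an anomaly series is shifted from one base period to another: subtract the series' own mean over the new base period.

```python
import numpy as np

def rebaseline(years, anomalies, new_base=(1901, 2000)):
    """Shift anomalies so they average zero over new_base (inclusive)."""
    years = np.asarray(years)
    anomalies = np.asarray(anomalies, dtype=float)
    mask = (years >= new_base[0]) & (years <= new_base[1])
    return anomalies - anomalies[mask].mean()

# Toy example: a synthetic series shifted onto a 1901-2000 base.
years = np.arange(1901, 2017)
anoms = np.linspace(-0.5, 0.6, years.size)
shifted = rebaseline(years, anoms, new_base=(1901, 2000))
base_mask = (years >= 1901) & (years <= 2000)
print(abs(shifted[base_mask].mean()) < 1e-12)  # True: zero mean over new base
```

Note that rebaselining only shifts the curve vertically; trends are unaffected, which is why mismatched baselines change how two records look side by side but not their slopes.
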

Jim Hunt’s link has Zeke’s replot showing the similarity; the problem is not the graph misrepresentation by David Rose but the fact that a respected and senior insider felt that the head of the department was glory-seeking and acted in an unprofessional manner.
This adds credence to the Republican investigation and means those who delayed and denied on the emails now face much stiffer punishment, as the investigation can now steam ahead.

Nick: I don’t care what your graph shows with noisy or smoothed data, which can easily fool your eye. Rapid changes in temperature make the records look closer. The linear trend times the period, using your trend viewer, says that NOAA rose 0.045 K more than HADCRUT over this period. If you want to convince me that the answer is different, show me a plot of the DIFFERENCE between the two records with a linear fit.
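Worth noting: since least squares is linear, fitting the difference series gives exactly the same trend as differencing the two fitted trends; what the difference plot adds is a clear view of the scatter and any baseline offset. A small sketch with synthetic series (illustrative numbers mimicking the quoted trends, not the actual NOAA/HadCRUT data):

```python
import numpy as np

rng = np.random.default_rng(0)
t_years = np.arange(239) / 12.0  # monthly steps, Jan 1997 .. Nov 2016
noaa = 0.0156 * t_years + rng.normal(0, 0.05, t_years.size)     # ~1.56 K/century
hadcrut = 0.0134 * t_years + rng.normal(0, 0.05, t_years.size)  # ~1.34 K/century

slope_diff = np.polyfit(t_years, noaa - hadcrut, 1)[0]
slope_noaa = np.polyfit(t_years, noaa, 1)[0]
slope_had = np.polyfit(t_years, hadcrut, 1)[0]

# Trend of the difference equals the difference of the trends (OLS linearity):
print(np.isclose(slope_diff, slope_noaa - slope_had))  # True
print(round(slope_diff * 100, 2))  # trend of the difference, K/century
```
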

Changed caption in DM article: “Although they are offset in temperature by 0.12°C due to different analysis techniques, they reveal that NOAA has been adjusted and so shows a steeper recent warming trend.”

On the Mail on Sunday article on Karl et al., 2015
There is an “interesting” piece (use of quotes intentional) in the Mail on Sunday today around the Karl et al., 2015 Science paper.

There are a couple of relevant pieces arising from Victor Venema and Zeke Hausfather already available which cover most of the science aspects and are worth a read. I’m adding some thoughts because I worked for three and a bit years in the NOAA group responsible in the build-up to the Karl et al. paper (although I had left prior to that paper’s preparation and publication). I have been involved in and am a co-author upon all relevant underlying papers to Karl et al., 2015.

The ‘whistle blower’ is John Bates who was not involved in any aspect of the work. NOAA’s process is very stove-piped such that beyond seminars there is little dissemination of information across groups. John Bates never participated in any of the numerous technical meetings on the land or marine data I have participated in at NOAA NCEI either in person or remotely. This shows in his reputed (I am taking the journalist at their word that these are directly attributable quotes) mis-representation of the processes that actually occurred. In some cases these mis-representations are publicly verifiable.

I will go through a small selection of these in the order they appear in the piece:

1. ‘Insisting on decisions and scientific choices that maximised warming and minimised documentation’

Dr. Tom Karl was not personally involved at any stage of ERSSTv4 development, the ISTI databank development or the work on the GHCN algorithm during my time at NOAA NCEI. At no point was any pressure brought to bear to make any scientific or technical choices. It was insisted that best practices be followed throughout. The GHCN homogenisation algorithm is fully available to the public and bug fixes documented. The ISTI databank has been led by NOAA NCEI but involved the work of many international scientists. The databank involves full provenance of all data and all processes and code are fully documented. The paper describing the databank was held by the journal for almost a year (accepted October 2013, published September 2014) to allow the additional NOAA internal review processes to complete. The ERSSTv4 analysis also has been published in no fewer than three papers. It also went through internal review and approval processes including a public beta release prior to its release which occurred prior to Karl et al., 2015.

2. ‘NOAA has now decided the sea dataset will have to be replaced and revised just 18 months after it was issued, because it used unreliable methods which overstated the speed of warming’

While a new version of ERSST is forthcoming the reasoning is incorrect here. The new version arises because NOAA and all other centres looking at SST records are continuously looking to develop and refine their datasets. The ERSSTv4 development completed in 2013 so the new version reflects over 3 years of continued development and refinement. All datasets I have ever worked upon have undergone version increments. Measuring in the environment is a tough proposition – it’s not a repeatable lab experiment – and measurements were never made for climate. It is important that we continue to strive for better understanding and the best possible analyses of the imperfect measurements. That means being open to new, improved, analyses. The ERSSTv4 analysis was a demonstrable improvement on the prior version and the same shall be true in going to the next version once it also has cleared both peer-review and the NOAA internal process review checks (as its predecessor did).

3. ‘The land temperature dataset used by the study was afflicted by devastating bugs in its software that rendered its findings unstable’ (also returned to later in the piece to which same response applies)

The land data homogenisation software is publicly available (although I understand a refactored and more user friendly version shall appear with GHCNv4) and all known bugs have been identified and their impacts documented. There is a degree of flutter in daily updates. But this does not arise from software issues (running the software multiple times on a static data source on the same computer yields bit repeatability). Rather it reflects the impacts of data additions as the algorithm homogenises all stations to look like the most recent segment. The PHA algorithm has been used by several other groups outside NOAA who did not find any devastating bugs. Any bugs reported during my time at NOAA were investigated, fixed and their impacts reported.
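The bit-repeatability claim here is easy to state as a test. A generic sketch (the `process` function below is a stand-in, not NOAA's PHA code): run the same deterministic processing twice on a frozen input and compare cryptographic digests of the outputs.

```python
import hashlib

def process(data: bytes) -> bytes:
    # Stand-in for a deterministic homogenisation step; any deterministic
    # function behaves the same way under this check.
    return bytes(reversed(data))

static_input = b"frozen snapshot of station records"
digest1 = hashlib.sha256(process(static_input)).hexdigest()
digest2 = hashlib.sha256(process(static_input)).hexdigest()

# Identical digests: any day-to-day "flutter" must come from changes in
# the input data, not from the software itself.
print(digest1 == digest2)  # True
```
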

4. ‘The paper relied on a preliminary alpha version of the data which was never approved or verified’

The land data of Karl et al., 2015 relied upon the published and internally process verified ISTI databank holdings and the published, and publicly accessible homogenisation algorithm application thereto. This provenance satisfied both Science and the reviewers of Karl et al. It applied a known method (used operationally) to a known set of improved data holdings (published and approved).

5. [the SST increase] ‘was achieved by dubious means’

The fact that SST measurements from ships and buoys disagree, with buoys cooler on average, is well established in the literature. See IPCC AR5 WG1 Chapter 2 SST section for a selection of references by a range of groups all confirming this finding. ERSSTv4 is an anomaly product. What matters for an anomaly product is relative homogeneity of sources and not absolute precision. Whether the ships are matched to buoys or buoys matched to ships will not affect the trend. What will affect the trend is doing so (v4) or not (v3b). It would be perverse to know of a data issue and not correct for it in constructing a long-term climate data record.

6. ‘They had good data from buoys. And they threw it out […]’

v4 actually makes preferential use of buoys over ships (they are weighted almost 7 times in favour) as documented in the ERSSTv4 paper. The assertion that buoy data were thrown away as made in the article is demonstrably incorrect.
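For readers following the ship/buoy argument, a rough sketch of the two steps as described in this thread, with made-up numbers (the ~0.12 °C offset and ~7:1 weighting are the figures cited in this discussion, not code from ERSSTv4): first remove the mean ship-buoy offset so the sources are relatively homogeneous, then combine them with buoys weighted more heavily.

```python
def combined_anomaly(ship_anom, buoy_anom, ship_buoy_bias=0.12,
                     buoy_weight=7.0, ship_weight=1.0):
    """Weighted mean after adjusting buoys onto the ship reference."""
    buoy_adjusted = buoy_anom + ship_buoy_bias   # match buoys to ships
    total = ship_weight + buoy_weight
    return (ship_weight * ship_anom + buoy_weight * buoy_adjusted) / total

# Because this is an anomaly product, adding the offset to buoys or
# subtracting it from ships changes the absolute level, not the trend.
print(round(combined_anomaly(0.50, 0.38), 3))  # 0.5
```
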

7. ‘they had used a ‘highly experimental early run’ of a programme that tried to combine two previously separate sets of records’

Karl et al used as the land basis the ISTI databank. This databank combined in excess of 50 unique underlying sources into an amalgamated set of holdings. The code used to perform the merge was publicly available, the method published, and internally approved. This statement therefore is demonstrably false.

There are many other aspects of the piece that I disagree with. Having worked with the NOAA NCEI team involved in land and SST data analysis I can only say that the accusations in the piece do not square one iota with the robust integrity I see in the work and discussions that I have been involved in with them for over a decade.

I don’t see the point in arguing minor adjustments. It’s a great diversion.
It ties you up talking trivialities that the public has not the remotest interest in.

There are bigger distortions. Ends for trends – and the annual circus of annual means. Trends should start in the mid-forties – and only monthly data and running averages make any sense in the larger scheme of things. The spike at the end is a combination of El Nino – and an artefact of a severe global drought. Not much to base such dire narratives on.

Just between us, the trend is less than 0.1 degrees C/decade.

Changes in the Pacific state cause multi-decadal – and much longer – cooling and warming. Which is why a mid-40’s start is more honest than starting after temps fell. This is mainstream science btw. Concentrate on your strengths and their weaknesses.

And El Nino seems set to nosedive off a 20th-century peak of El Nino activity.

Will this end global warming? Don’t know – but this will put a crimp in its style.

RE, whether you are right or wrong misses the main point, imo. The debate is no longer about science, even if it once was. It is about politics and ‘solutions’. Showing corruption of NOAA data for political purposes makes a political, not a scientific point.

The educated layman.. or practicing scientists from other fields might look at these and the long list of similar scandals and wonder why ‘climate scientists’ spend so little time and energy making sure their work is demonstrably honest and can stand up to external scrutiny.

And why so much on attacking the whistleblowers who point out their ‘mistakes’.

For all those defending K15, including Mosher and Stokes, would you please answer a simple question: have you been able to reproduce the K15 results? If so, then your criticisms of Bates are well founded. If not, then you’re full of crap.

Well then, they must have archived the data and code, right? I noticed none of those three have answered the question. Mosher says they checked it but nowhere says reproduced. I don’t think it has been.

Judith, John et al – I have tweeted with documentation showing (even in Phil Jones’s words) that the NCDC, GISS and HadCRUT analyses draw from mostly the same raw data. I also tweeted on why BEST is not a true independent analysis.

Systematic bias and uncertainties that we have documented in our peer reviewed papers apply to all of these analyses.

My recommendation is to use tropospheric layer averaged temperatures (from satellite and radiosondes) for atmospheric trends of heat changes.

For global warming diagnosis, use ocean heat content changes, recognizing that the deeper ocean heating (i.e. below the long-term thermocline) is mostly unavailable to affect weather on multi-decadal time periods. Indeed, heat that goes deeper into the ocean is not even sampled by surface temperatures.

These more robust data tell us that the climate system has warmed in recent decades, but that the heating is more muted than claimed using the global surface temperature trend (and that the climate models are predicting). The heating is also quite spatially variable as shown in the ocean heat content data with a significant fraction going into the Southern Oceans.

Ken – My comments are with respect to the culture of arrogance that Tom Karl, Tom Peterson and Peter Thorne showed when I was on the CCSP 1.1 Committee and afterwards. I was seeking to quantify uncertainties and biases in the Chapter I was lead author on, but they shut this effort down.

What John Bates has done is to expose this culture based not on robust science, but on promoting an agenda. Regardless of one’s views on policies, the scientific method should not be hijacked as they have done.

My specific scientific comments are with respect to the land portion of the surface data set. Subsequent to the CSSP Committee and in response to the failure of that report to properly assess the surface temperature analyses, we wrote this article

Roger,
Thanks for the reply, but Peter Thorne’s post suggests that a great deal of what John Bates has claimed is not true. I don’t really see how personal issues you may have with some of those involved is all that relevant.

What John Bates has done is to expose this culture based not on robust science, but on promoting an agenda. Regardless of one’s views on policies, the scientific method should not be hijacked as they have done.

Actually, my issues transcend personality. The problem, in this case, is the abuse of power (Tom Karl, Tom Peterson, Peter Thorne) to defend their surface temperature data, rather than to engage in constructive scientific discussion. The post by John Bates exposes their behavior.

Unfortunately, this culture of behavior and groupthink has become systemic throughout the climate science community including the AMS and AGU. My son’s recent WSJ op-ed documents his experience in this community.

I have documented the issues with the AGU and AMS leadership in both articles, reports and weblog posts. As just one example, AGU EOS refused to publish the minority report on the AGU Climate Change Statement despite explicit policy to publish opinion comments.

NCDC leadership is part of this mindset as John Bates has now documented.

The objective assessment of climate is broken. This should concern all scientists regardless of your political views.

Ken,
Hmm. There are two distinctions, IMO. First that there is a trend, then second the significance. One doesn’t eliminate the other. An example might be ‘the pause’. Isn’t that fair? Then, to where the trend line follows.

The point which leads from this topic is that rules of conduct are followed. Should they not be going forward then are the rules ‘useful’? If they are followed one might presume they indeed are.

“Last night Mr Karl admitted the data had not been archived when the paper was published. Asked why he had not waited, he said: ‘John Bates is talking about a formal process that takes a long time.’ He denied he was rushing to get the paper out in time for Paris, saying: ‘There was no discussion about Paris.’

He also admitted that the final, approved and ‘operational’ edition of the GHCN land data would be ‘different’ from that used in the paper’.”

1.) Data NOT archived due to a ‘process’ which takes time. (Please note the time frame from the so-called ‘hiatus’ to the date of the paper.) What was the need for speed? (Not questioning motive, only asking a question which IMO needs asking.)
2.) GHCN land data ‘different’. Some are criticising Dr. Bates’ decision to publicize the lack of following protocol, yet not asking what form the ‘difference’ takes.

I am on holiday in Austria at present and this thread is more entertaining than the alternative, which is a TV show which appears to be a dog beauty contest… my vote is for ‘teddy’ at present, but that could all change.

I am intrigued as to how we can see what appears to be tribalism developing here, as several people have linked approvingly to an article by Peter Thorne. He appears to be highly credible, but is he more credible than the equally illustrious author of this article? The fact that he is retired also lends credibility, as people seem more able to speak out.

However, as I don’t tend to believe that scientists are trying to hoax us all, my sympathies also veer towards Karl. What a dilemma!

Thorne writes “v4 actually makes preferential use of buoys over ships (they are weighted almost 7 times in favour) as documented in the ERSSTv4 paper. ”

Sure the buoy data was weighted heavily – AFTER it was adjusted against the ship intake data. Now I’m not taking issue with that decision, even though it sounds counter intuitive to me – taking the less reliable data set and adjusting the more reliable one to it. What I do take issue with is Thorne spinning it to make it sound as if Dr. Bates is making a false statement. To claim that “v4 … makes preferential use of buoys over ships ” is effectively an effort to misdirect, as no mention is made of adjustments to it prior to it being used.

“Yes, but within error of estimate. So really, not.”
No, really. But anyway, BEST was a clear record. HADCRUT and NOAA were actually very similar, despite one having buoy adjustments and the other not. They both were narrowly warmest, but only because 2015 had been such a warm year. So there is no doubt that one or other is warmest. Here is the graph of cumulative record:

Gavin has posted a better NOAA:HadCRU graph matching baselines. You can see blue (HadCRU) slightly on top in the past and much more on the bottom in recent years, making Rose/Bates’ point more clearly than the Rose graph with its two different baselines.

For nearly two decades, I’ve advocated that if climate datasets are to be used in important policy decisions, they must be fully documented, subject to software engineering management and improvement processes, and be discoverable and accessible to the public with rigorous information preservation standards. I was able to implement such policies, with the help of many colleagues, through the NOAA Climate Data Record policies (CDR).

Professor Lindzen gets it right when he questions whether climate science data is even designed to actually help us answer questions–

Data that challenges the [global warming] hypothesis are simply changed. In some instances, data that was thought to support the hypothesis is found not to, and is then changed. The changes are sometimes quite blatant, but more often are somewhat more subtle. The crucial point is that geophysical data is almost always at least somewhat uncertain, and methodological errors are constantly being discovered. Bias can be introduced by simply considering only those errors that change answers in the desired direction. The desired direction in the case of climate is to bring the data into agreement with models, even though the models have displayed minimal skill in explaining or predicting climate. Model projections, it should be recalled, are the basis for our greenhouse concerns. That corrections to climate data should be called for, is not at all surprising, but that such corrections should always be in the ‘needed’ direction is exceedingly unlikely. Although the situation suggests overt dishonesty, it is entirely possible, in today’s scientific environment, that many scientists feel that it is the role of science to vindicate the greenhouse paradigm for climate change as well as the credibility of models. ~Richard Lindzen (2009)

“Once the CDR program was funded, beginning in 2007, I was able to put together a team and pursue my goals of operational processing of important climate data records emphasizing the processes required to transition research datasets into operations (known as R2O). Figure 1 summarizes the steps required to accomplish this transition in the key elements of software code, documentation, and data.
“Unfortunately, the NCDC/NCEI surface temperature processing group was split on whether to adopt this process, with scientist Dr. Thomas C. Peterson (a co-author on K15, now retired from NOAA) vigorously opposing it.”

As long as John mentioned me I thought, perhaps, I would explain my concern. This is essentially a question of when does software engineering take precedence over scientific advancement. For example, we had a time when John’s CDR processing was producing a version of the UAH MSU data but the UAH group had corrected a problem they identified and were now recommending people use their new version. John argued that the old version with the known error was better because the software was more rigorously assessed. I argued that the version that had corrected a known error should be the one that people used – particularly when the UAH authors of both versions of the data set recommended people use the new one.

Your concern would be valid but for three things:
1. The SST adjustment method was based on a 2011 paper that reported great uncertainty in the result, although the central result was equivalent to Huang’s later paper introducing ERSST4 (the NOAA redo). Neither the Huang nor the Karl paper discussed that uncertainty. That borders on academic misconduct.
2. The Karl paper now provably did not follow the written archival requirements of Science. It should be retracted.
3. Your successors now say ERSST4 is flawed, will produce ERSST5, and it will show lower warming.
Dr Peterson, you have been caught out. You cannot defend your indefensible conduct by pointing to UAH or Bates’ rigid view on software validation. You used a land data set produced by software that was buggy. You used an SST methodology with high uncertainty and did not report that.
Good that you and Karl are retired, else you’re fired!

” based on a 2011 paper that reported great uncertainty in the result”
Kennedy reported a bias of 0.12±0.01°C. That is not great uncertainty.

“The Karl paper now provably did not follow the written archival requirements of Science. It should be retracted.”
You give no basis for that. But papers are not retracted for possible missed dates in archiving. There is a full archive now.

“Your successors now say ERSST4?is flawed, will produce ERSST5”
They don’t. There was always going to be an ERSST 5. Just like there was a 1,2,3,4.

Since you asked, I would have used the newer version of the UAH MSU data for internal research and analysis but I would not have published anything until this newer data had been certified by the CDR process. If any of the research results looked to have a potential impact on climate change policy (where perhaps trillions of dollars could ultimately be at stake), I would have expedited the CDR process.

Your concern about Bates’ rigidity on software validation is beside the point. There are three facts that make your and Karl’s actions indefensible:
1. The Huang ERSST4 methodology was based on a previous 2011 paper. That paper reported a wide range of uncertainty; neither Huang nor Karl did. That borders on academic misconduct.
2. The data behind Karl is not archived as required by Science. The paper should therefore be retracted.
3. Your successors now say ERSST4 is faulty, and they are producing ERSST5, which will have a lower warming trend.
You rushed a politicized, poor-quality paper out to support Obama in Paris. Rep. Smith subpoenaed NOAA based on whistleblower concerns, and that subpoena was stonewalled. Now we know why. And likely there were emails (probably now disappeared) between long-time acquaintances Karl and Holdren on this that explain the contempt of Congress by NOAA.
Good that you and Karl are retired, else you’re fired.

“That paper reported a wide range of uncertainty; neither Huang nor Karl did. That borders on academic misconduct.”
Rud, you seem to be mindlessly repeating this. What is that “wide range of uncertainty” that you claim? Numbers, please.

The sad truth is Mosher is well and truly in the black books with Karl and the AGW gang. They will never trust him and give him full access to any controversial data.
Hence he is able to make comments like:
“The data IS AND WAS available. The point is the way Karl used the data IS AND WILL remain unavailable. For ever. That is what Science required (and what science requires).”
The method is lost forever, Mosher, not the data. Read what Bates said, carefully, and don’t dodge the real issue here.
Broke my New Year’s resolution saying this.

Thomas Peterson: This is essentially a question of when does software engineering take precedence over scientific advancement.

Whenever the “scientific advancement” depends on the output from the software.

I think the question you are really addressing is “When is the software considered reliable enough for claims and actions to be based on it?” There are examples, are there not, of “upgrades” to software introducing new errors, so newer cannot automatically be granted a claim to superiority.

For example, we had a time when John’s CDR processing was producing a version of the UAH MSU data but the UAH group had corrected a problem they identified and were now recommending people use their new version. John argued that the old version with the known error was better because the software was more rigorously assessed.

Is that pertinent to this essay about this case? I think we can always take it for granted that no one is always right. If it is pertinent to this essay about this case, can you show us how?

Instead he chose to run to a Daily Mail trash tabloid ‘journalist’ with a known history of false claims, and a contrarian blog, to present his unrefereed opinions. Some of which have been refuted as provably false.

Right before a House Science Committee hearing to “Make the EPA Great Again” on Tuesday 7th Feb. (yes seriously, that’s what Lamar Smith called it)

The House Science Committee and Lamar Smith tweeted about the Daily Mail trash tabloid article 6 times and did a Press Release. They used the Daily Mail trash tabloid piece as ‘confirmation’ (seriously) that NOAA scientists ‘manipulated climate records’ and were up to no good.

A trash tabloid piece as a ‘source’? No published science? No fact checking? No mention of other independent work that confirmed NOAA’s work? Really Lamar?

“AGU believes that the merits of the Karl et al. (2015) should be and have been discussed in appropriate peer-reviewed scientific journals. We note that the main results of that study have since been independently replicated by later work. In the meantime, we will continue to stand up for the credibility of climate science, the freedom of scientists to conduct and communicate their science.” https://fromtheprow.agu.org/climate-science-data-management

Why did Bates not follow the normal processes himself?

Why did the House “Science” Committee use a trash tabloid hit piece as its only “source”?

I don’t think it’s a secret there is systemic bias within the climate science community. Why not focus the energy on this particular instance towards reproduction of results and quantifying and qualifying it?

One can only hope this will not be the only instance of someone in the background stepping forward to expose and help vet the information that has been conveyed over the years. I think we all agree that the truth is the ultimate goal regardless of stance. After all, temperature is such a simple and finite thing, no? ;-)

John Bates
Thanks for upholding the scientific method and working so hard to establish and maintain the integrity of the objective data.
What progress has been made in requiring NOAA and the Inspector General to uphold and enforce the Information Quality Act for this climate data, which appears to fall squarely under “Influential Scientific Information (ISI)”?
What requests have been made to “correct” this information?
How can we collectively help to be most effective? (vs numerous easily dismissed requests)
How should we contact our legislators to fix this mess?
e.g., see: NOAA also must maintain Information Quality, as it is subject to the Information Quality Act (Pub. Law 106-554) via OMB’s guidelines.

Information Quality
In response to Section 515 of the Treasury and General Government Appropriations Act for Fiscal Year 2001 (Public Law 106-554), and to implement guidelines issued by the Office of Management and Budget, NOAA has issued Information Quality Guidelines for ensuring and maximizing the quality, objectivity, utility, and integrity of information which it disseminates. The NOAA guidelines also establish an administrative mechanism allowing affected persons to seek and obtain correction of information that does not comply with OMB or NOAA applicable guidelines.

(B) establish administrative mechanisms allowing affected persons to seek and obtain correction of information that does not comply with the OMB 515 Guidelines (Federal Register: February 22, 2002, Volume 67, Number 36, pp. 8452‑8460, herein “OMB Guidelines”) or the agency guidelines . . .
Department of Commerce (DOC) has issued Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Disseminated Information (available from http://www.doc.gov). . . . Influential scientific information (ISI) means scientific information the agency reasonably can determine will have or does have a clear and substantial impact on important public policies or private sector decisions. See Part II, under Objectivity, for further explanation, and the Appendix for additional guidance and ISI examples. . .
The term highly influential scientific assessment (HISA) means influential scientific information that the agency or the Administrator of the Office of Information and Regulatory Affairs in the Office of Management and Budget determines to be a scientific assessment that: (i) could have a potential impact of more than $500 million in any year, or (ii) is novel, controversial, or precedent‑setting or has significant interagency interest.

Reproducibility means that the information is capable of being substantially reproduced, subject to an acceptable degree of imprecision. For information judged to have more (less) important impacts, the degree of imprecision that is tolerated is reduced (increased). With respect to analytic results, “capable of being substantially reproduced” means that independent analysis of the original or supporting data using identical methods would generate similar analytic results, subject to an acceptable degree of imprecision or error.

Transparency is not defined in the OMB Guidelines, but the Supplementary Information to the OMB Guidelines indicates (p. 8456) that “transparency” is at the heart of the reproducibility standard. The Guidelines state that “The purpose of the reproducibility standard is to cultivate a consistent agency commitment to transparency about how analytic results are generated: the specific data used, the various assumptions employed, the specific analytic methods applied, and the statistical procedures employed. If sufficient transparency is achieved on each of these matters, then an analytic result should meet the reproducibility standard.” In other words, transparency – and ultimately reproducibility – is a matter of showing how you got the results you got. . . .
Original Data are data in their most basic useful form. These are data from individual times and locations that have not been summarized or processed to higher levels of analysis. While these data are often derived from other direct measurements (e.g., spectral signatures from a chemical analyzer, electronic signals from current meters), they represent properties of the environment. These data can be disseminated in both real time and retrospectively. Examples of original data include buoy data, survey data (e.g., living marine resource and hydrographic surveys), biological and chemical properties, weather observations, and satellite data. . . .
OBJECTIVITY

Objectivity ensures that information is accurate, reliable, and unbiased, and that information products are presented in an accurate, clear, complete, and unbiased manner. In a scientific, financial, or statistical context, the original and supporting data are generated, and the analytic results are developed, using commonly accepted scientific, financial, and statistical methods.

Accuracy. Because NOAA deals largely in scientific information, that information reflects the inherent uncertainty of the scientific process. The concept of statistical variation is inseparable from every phase of the scientific process, from instrumentation to final analysis. Therefore, in assessing information for accuracy, the information is considered accurate if it is within an acceptable degree of imprecision or error appropriate to the particular kind of information at issue and otherwise meets commonly accepted scientific, financial, and statistical standards, as applicable. This concept is inherent in the definition of “reproducibility” as used in the OMB Guidelines and adopted by NOAA. Therefore, original and supporting data that are within an acceptable degree of imprecision, or an analytic result that is within an acceptable degree of imprecision or error, are by definition within the agency standard and are therefore considered correct.

Influential Information. As noted in the Definitions above, influential information is information the agency reasonably can determine will have or does have a clear and substantial impact on important public policies or private sector decisions.

A clear and substantial impact is one that has a high probability of occurring. If it is merely arguable or a judgment call, then it would probably not be clear and substantial. The impact must be on a policy or decision that is in fact expected to occur, and there must be a link between the information and the impact that is expected to occur. See Appendix for further guidance and ISI examples.

Without regard to whether information is influential, NOAA strives for the highest level of transparency about data and methods for all categories of information in all its scientific activities, within ethical, feasibility, cost, and confidentiality constraints. This supports the development of consistently superior products and fosters better value to the public. It also facilitates the reproducibility of such information by qualified third parties . . .

PART III. ADMINISTRATIVE CORRECTION MECHANISM
A. Overview and Definitions
1. Requests to correct information. Any affected person (see “Definitions” below) may request, where appropriate, timely correction of disseminated information that does not comply with applicable information quality guidelines. An affected person would submit a request for such action directly to:

However, requests for correction received in compliance with the Department of Commerce guidelines and forwarded to NOAA by DOC will be considered as if submitted to the NOAA Section 515 Officer on the date received by the NOAA Executive Secretariat. . . .

Appendix: Additional ISI Guidance and Examples
Additional guidance for determining whether NOAA data meet the criteria for ISI.
The definition for ISI provides no clear criteria or specific guidelines such as the HISA criterion for any dataset that is over the $500 million threshold. Each data set of “scientific information” is unique, and impacts public policy and private sector decision-making in its own unique way. The three key phrases, which managers must weigh, are whether there is a “clear and substantial impact,” whether this impact has a “high probability of occurring,” and whether it bears on “important public policies or private sector decision-making.” It should also be kept in mind that these evaluated impacts may be regionally dependent in nature. If “yes” can be answered for each phrase, then it is ISI. An informal addition by the NOAA Information Quality Contacts Group to “public policies or private sector decision-making” is “strategic management processes”.
Examples of existing peer review plans and their influence on public policies or public sector decision-making: . . .

With respect to the old Leonard Nimoy show “In Search Of”, I am simply looking for authentication of either of the two items referenced and cannot find it to date. Any global game-changing Paradigm Shift needs to be 100% supported without question, not just an attempt at an educated opinion.

“The Committee thanks Dr. Bates, a Department of Commerce Gold Medal winner for creating and implementing a standard to produce and preserve climate data, for exposing the previous administration’s efforts to push their costly climate agenda at the expense of scientific integrity.”

Over the course of the Committee’s oversight, NOAA refused to comply with the inquiries, baselessly arguing that Congress is not authorized to request communications from federal scientists. This culminated in the issuance of a congressional subpoena, with which NOAA also failed to comply. During the course of the investigation, the Committee heard from whistleblowers who confirmed that, among other flaws in the study, it was rushed for publication to support President Obama’s climate change agenda.

I told you all way last year that POTUS Trump’s impending election would pry the truth out of NOAA. And further that all this inconclusive bickering over the pause, clouds, attribution, little jimmy dee’s sanity, sensitivity and whatever would be rendered moot. Trump rules! Left loon alarmist climate policy drools, at least for the next 8 years. Then we will probably have POTUS Ivanka, if she wants it.

“AGU remains committed to serving as a leader in data and transparency in science.” . . .
AGU believes that the merits of the Karl et al. (2015) should be and have been discussed in appropriate peer-reviewed scientific journals. We note that the main results of that study have since been independently replicated by later work. In the meantime, we will continue to stand up for the credibility of climate science, the freedom of scientists to conduct and communicate their science. . .
U.S. House of Representatives Committee on Science, Space, and Technology issued a misleading press release. These types of statements by policymakers that attempt to take one study/dispute and blow it out of proportion . . .
We will be working with the science committee to demonstrate the scientific consensus on climate change and to encourage them not to interfere with the scientific process. . . .
ORIGINAL POST (4 February): Early today, AGU’s former Board member John Bates published a letter outlining what he believes to be mismanagement of climate science data in a highly-cited scientific paper, “Possible artifacts of data biases in the recent global surface warming hiatus” (Tom Karl, et al. 2015) . . .
I know many of you will have concerns or questions about this news, and I strongly encourage you to share those thoughts with us here, or in an email to president@agu.org

2. Appeals of denials of requests. Any person receiving an initial denial of a request to correct information may file an appeal of such denial, which must be received by the NOAA Section 515 Officer (address as in paragraph III.A.1. above) within 30 calendar days of the date of the denial of the request. The appeal must include a copy of the original request, any correspondence regarding the initial denial, and a statement of the reasons why the requester believes the initial denial was in error. No opportunity for personal appearance, oral argument, or hearing on appeal will be provided.

3. Burden of Proof. The burden of proof is on the requester to show both the necessity and type of correction sought. Information that is subjected to formal, independent, external peer review is presumed to be objective provided that, for “influential scientific information,” and “highly influential scientific assessments,” the peer review fulfills the requirements of the OMB Peer Review Bulletin. The requester has the burden of rebutting that presumption.

Regarding Rose’s article, it is not likely that Karl affected anything at Paris, and even in the unlikely case it did, it is good they didn’t make any decisions on the now-known-to-be-flawed hiatus. Thirty-year temperature trends never showed a pause, even pre-Karl, because there was a faster warming in the late 1990s that canceled it out. Fifteen-year trends are inherently unstable because of the influence of the solar cycle among other things. http://www.woodfortrees.org/plot/hadcrut4gl/mean:120/mean:240/plot/gistemp/mean:120/mean:240
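The instability of short-window trends that this comment appeals to is easy to demonstrate on made-up numbers. The sketch below is my own illustration, not any NOAA or woodfortrees calculation: the trend, cycle amplitude, and noise level are all invented. It fits ordinary least-squares slopes to every 15-year and every 30-year window of a synthetic series and compares their spread:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2017)
# synthetic anomaly: steady 0.015 C/yr trend + an ~11-year cycle + weather noise
temp = (0.015 * (years - 1950)
        + 0.05 * np.sin(2 * np.pi * years / 11)
        + rng.normal(0, 0.1, years.size))

def window_trends(t, y, length):
    """OLS slope, in deg C per decade, for every window of `length` years."""
    slopes = []
    for i in range(len(t) - length + 1):
        slopes.append(10 * np.polyfit(t[i:i + length], y[i:i + length], 1)[0])
    return np.array(slopes)

short = window_trends(years, temp, 15)
long_ = window_trends(years, temp, 30)
# the 15-year slopes scatter much more widely than the 30-year slopes
print(short.std() > long_.std())  # → True
```

On a series with one underlying trend, the 15-year slopes wander far more than the 30-year slopes, which is the commenter’s point about short trends being “inherently unstable.”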

Wrong when comparing temperature datasets to models to assess their validity. In BAMS 2009 NOAA said 15 years would invalidate them. In 2011 Santer published 17 years. Well, it has been 17 years and whether there is a hiatus like satellites show, or a slight warming per Karl, the fact is the models have now been invalidated by observation. By ~4x in the tropical troposphere per Christy, and still by 1.7x per Santer’s new paper with the erroneous tropical stratosphere correction. And about 2.5x for whatever GAST dataset one favors.
That means the model sensitivities are too high. Observationally based on several recent papers by a factor of about 2. Which means there is no C in CAGW.

“In BAMS 2009 NOAA said 15 years would invalidate them. In 2011 Santer published 17 years.”
As so often, no attention paid to what was actually said. BAMS 2009 was about the frequencies of 15 year pauses in ENSO-adjusted data. And Santer said that at least 17 years was required in order to detect human influence on troposphere temperatures.

“In experimental philosophy, propositions gathered from phenomena by induction should be considered either exactly or very nearly true notwithstanding any contrary hypotheses, until yet other phenomena make such propositions either more exact or liable to exceptions.

This rule should be followed so that arguments based on induction be not nullified by hypotheses.” Isaac Newton

Climate shifts – duh. The next one is due within the decade. I predict it will be to more Pacific upwelling and cooler conditions. My hypothesis is that is driven by a solar trigger. As for the current trend – you don’t count your money while you’re sittin’ at the table…

The cool water anomaly in the center of the image shows the lingering effect of the year-old La Niña. However, the much broader area of cooler-than-average water off the coast of North America from Alaska (top center) to the equator is a classic feature of the cool phase of the Pacific Decadal Oscillation (PDO). The cool waters wrap in a horseshoe shape around a core of warmer-than-average water. (In the warm phase, the pattern is reversed).
Unlike El Niño and La Niña, which may occur every 3 to 7 years and last from 6 to 18 months, the PDO can remain in the same phase for 20 to 30 years. The shift in the PDO can have significant implications for global climate, affecting Pacific and Atlantic hurricane activity, droughts and flooding around the Pacific basin, the productivity of marine ecosystems, and global land temperature patterns. “This multi-year Pacific Decadal Oscillation ‘cool’ trend can intensify La Niña or diminish El Niño impacts around the Pacific basin,” said Bill Patzert, an oceanographer and climatologist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. “The persistence of this large-scale pattern [in 2008] tells us there is much more than an isolated La Niña occurring in the Pacific Ocean.”

Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, “These natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it.” NASA

This study uses proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the Pacific Decadal Oscillation (PDO) and the El Niño Southern Oscillation (ENSO). During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.
Verdon, D. C., and S. W. Franks (2006), Long-term behaviour of ENSO: Interactions with the PDO over the past 400 years inferred from paleoclimate records, Geophys. Res. Lett., 33, L06712, doi:10.1029/2005GL025052.

So a cool Pacific to 1976, warm to 1998 and a bit of a mixed signal since.

The energy budget feedbacks include cloud radiative effects. Limited satellite data says that this is significant. There are supporting ground based observations.

The usual response from urban doofus hipsters is – naw this can’t be right. Some are playing catch up and spin. It can’t be very comfortable.

Letter to: President Eric Davidson, American Geophysical Union
President Davidson
When 100 scientists attacked him, Albert Einstein said:
“If I were wrong, one would be enough”.

Isn’t science advanced more by model falsification than by consensus?

John P.A. Ioannidis has shown: “Why most published research findings are false.” PLoS Med. 2005;2(8):e124. pmid:16060722 and “Why Most Clinical Research Is Not Useful.” PLoS Med 2016; 13(6): e1002049. doi:10.1371/journal.pmed.1002049

Why should climate science be any different?

You assert: “all independent records now show that the past two years were the warmest years on record.”
Yet satellite temperature specialist Roy Spencer found that:
“Global Satellites: 2016 not Statistically Warmer than 1998” January 3rd, 2017 http://bit.ly/2iM6vFa

Please document how your statement is scientifically supportable within a 95% probability.

I find it remarkable that the president of the American GEOPHYSICAL Union would ignore the geological record documenting temperature declines since the Holocene Optimum. e.g. K. Gajewski (2015) reports:
“Most sites also show cooling during the past 3.2 ka.”
Quantitative reconstruction of Holocene temperatures across the Canadian Arctic and Greenland. Global and Planetary Change 128 (2015) 14–23 http://bit.ly/2l8klq6

Per Thomas Kuhn’s “The Structure of Scientific Revolutions” isn’t it very difficult to correct a scientific paradigm? By Nobel Laureate Richard Feynman’s high standard of scientific integrity, should we not bend over backwards to find any and every reason why a model is NOT true?
See Cargo Cult Science, Caltech 1974 http://calteches.library.caltech.edu/3043/1/CargoCult.pdf

John Bates has credibly documented serious failures at NOAA of not abiding by its own standards, (let alone those under the Information Quality Act.) http://bit.ly/2jS2QoD

Please lead the AGU in publicly examining all possible errors in NOAA’s temperature data and validating all its adjustments compared to the satellite and balloon records so we can have confidence in their veracity.

“Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right.”

There is a great deal of very poor climate science. In the blogosphere it descends into climate memes learnt by rote at the feet of self-appointed gatekeepers. I went back to realclimate recently. Naw – still crazy. As for hotwhopper – I am afraid to go back there.

Does any political moderate believe that there are not ideological underpinnings? The urban doofus hipster vision involves narratives of moribund western economies governed by corrupt corporations collapsing under the weight of the internal contradictions – leading to less growth, less material consumption, less CO2 emissions, less habitat destruction and a last late chance to stay within the safe limits of global ecosystems. And this is just in the ‘scholarly’ journals.

Economies are fragile – movements on markets can be fierce – recovery glacially slow sometimes. There are economic problems – but the problems are not intrinsic to capitalism. They were created by poor judgement. We blundered into it through stupidity. It is not difficult – however – to imagine scenarios in which markets are deliberately destabilised to hasten the end of capitalism. Creeping tax takes, overspending by government, printing money, keeping interest rates too low for too long, or too high for too long, taxing primary inputs, implementing market distorting subsidies – the scope is endless. These are suspiciously the objectives of global warming progressives – but let’s not call it a conspiracy. Climate science – and the weird fruit of it – is conflated with seriously misguided energy policy. Is that a coincidence?

Most of this is classic “he said, she said” analysis. Bates made claims. Thorne says Bates is wrong. He does link to relevant information, which seems of little or no use to non-scientists (perhaps even to non-climate scientists).

His rebuttal makes two major points. First, the Daily Mail is a tabloid. Which is obviously correct. Second, that global average temperatures warmed during 1970-2000 — and there was a strong El Nino in 2015-16. He believes that this proves something, but it is unclear exactly what that is. Since his forte is study of variability, some detail on this point would be useful. Especially useful would be graphing the post-1950 time period, which is what the IPCC uses as the period dominated by anthropogenic warming.

If Hausfather is correct, this means that Bates’ claims are of interest to climate scientists, and esp. to NOAA and Science — most seriously about possible violations of best practices (and some rules) — but have little relevance to the public policy debate.

“The ‘whistle blower’ is John Bates who was not involved in any aspect of the work. NOAA’s process is very stove-piped such that beyond seminars there is little dissemination of information across groups. John Bates never participated in any of the numerous technical meetings on the land or marine data I have participated in at NOAA NCEI either in person or remotely. This shows in his reputed (I am taking the journalist at their word that these are directly attributable quotes) mis-representation of the processes that actually occurred. In some cases these mis-representations are publicly verifiable.”

Moreover, the GHCN software was afflicted by serious bugs. They caused it to become so ‘unstable’ that every time the raw temperature readings were run through the computer, it gave different results.

This sounds familiar – similar to the scandal of non-repeatability that has exploded in the life sciences, that is, medical drug discovery research especially involving molecular genetics. Attempts to replicate the highest impact papers fail more often than not. Scientists involved admit that their complex and sensitive experiments give different results with each run. This opens the door to cherry picking the results most to the liking of the author – a fraud that is easy to hide among the voluminous minutiae of the experimental method.

It’s a problem of massively inductive science. Assumption built on assumption, complex model on complex model, and data manipulation and steering of results becomes easier and easier with every layer of complexity.

Paradoxically, the real study of complexity is very simple – accept that it’s a chaotic-nonlinear system and analyse it accordingly. But politically progressive climate science refuses to do this – it sticks to inappropriate linearity and builds up monstrous analytical complexity where fraud can easily be hidden.

Also, just changing the headline on the graph is not good enough.
Any honest reporter of facts would have had the two on the same base in the first instance, never mind not said why they were so far apart.
I know from experience that visuals tell the tale. “A picture paints …” etc.
99% of folk will see the graph. Assume “fraud” and move on.
The man is execrable and that you support him is likewise.

David Rose published graphs derived from public sources. If such graphs are to be compared, then some caveats need to be added whose words depend partially on the intended point of the comparison.
The initial incompatibility claimed about these 2 graphs arose from their authors, not from David Rose. Surely, there must come a point where investigative reporters must say “This is a rather complex comparison. How far down the explanatory track do I have to go for the public to know that this climate special peculiar anomaly method of presentation has traps that we simply do not have the time nor space to explain in our media, nor attribute back to the original authors”.
It is not my view that David Rose erred. The error was with original authors who chose to present data that require too much detail to be placed in context each time they are used. And which are poorly classed by version and revision numbers and notes, and which can change overnight without much warning.
Geoff

Geoff Sherrington says:
“It is not my view that David Rose erred. The error was with original authors who chose to present data that require too much detail to be placed in context each time they are used.”

All published errors are the writer’s fault. Period.

Any climate journalist should know that different datasets often (and usually) have different baselines, and how to correct for them. In this case Rose did not, nor did he ask anyone.
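The baseline correction the comment refers to is mechanical: subtract each series’ mean over a common reference period. A minimal sketch, with invented numbers and stand-in series for any two anomaly datasets:

```python
import numpy as np

years = np.arange(1980, 2017)
truth = 0.02 * (years - 1980)   # one underlying warming record

# the same record reported against two different reference periods,
# which only shifts each series by a constant
series_a = truth - 0.30
series_b = truth - 0.10

def rebaseline(years, anom, start, end):
    """Re-express anomalies relative to the mean over [start, end]."""
    mask = (years >= start) & (years <= end)
    return anom - anom[mask].mean()

a = rebaseline(years, series_a, 1981, 2010)
b = rebaseline(years, series_b, 1981, 2010)
print(np.allclose(a, b))  # → True: the apparent offset was pure baseline
```

Because a reference-period choice only shifts an anomaly series by a constant, putting both series on the same baseline removes the apparent gap without touching either trend.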

Climate journalist – does that require the same qualifications as a cartoonist? It sure does appear that way when one peruses the work of the majority of journalists who write about climate. Seth Borenstein, Chris Mooney, Scooter Nuccitelli (though in Scooter’s defense, he didn’t really start out as a journalist).

In Rose’s figure the red and blue lines (Karl and Hadcrut) coincide at all the El Niño peaks. So just how can they be out of synch?

In any case they seem (to me) almost identical, just slightly displaced. Not really scandalous at all. The real issue is the serial hiking of especially Pacific SSTs by almost half a degree in the last decade. As Bill Illis puts it:

Bill Illis on February 5, 2017 at 7:05 am
So we move from ERSSTV2 to ERSSTV3 in 2009 and they adjusted the SST trend up by 0.3C. In V3 to V3b in 2012, adjustments of another 0.1C. The ERSSTV3b to ERSSTV4 in 2015 another +0.12C. That is 0.52C all together over just 6 years. And we don’t even really know what happened to the data in 2016 because no one knows where it comes from (some ships, ICOADS, where is the raw data?).

Oh do cut some slack.
If climate journalists know about offsets and anomaly periods, as some claim they ought, surely the public should know also and so see abundant red herrings.
Or have said climate journalists failed in skills to educate the public about the anomaly method of expression of time series data?
Have a think about how many times a day you absorb material that you know is expressed in shorthand for any number of reasons. I want to read the nub of the material, not endless caveats, exclusions and so on.
Here, why not think in terms of everyday reading rather than a forensic dissection of a generalized message as if you were in a hunting party?
Geoff

Here’s a repeat of Bill Illis’ comment from the parallel thread on WUWT:

Bill Illis on February 5, 2017 at 11:47 am

If Karl was trying to come up with an accurate sea surface temperature dataset, he should have thrown out the inaccurate ship data instead.

But what he did in ERSST v3b was to throw out the satellite records followed up by throwing out the buoy trends in ERSST v4.

Does this sound like someone trying to get to an accurate record? Is this what a person in charge of a “National” data centre is supposed to be about? Is that what a person in charge of the world “Climate Data Centre” should be about?

We HAVE to go in and correct all of the data now. We are going to need forensic statisticians and prosecutors to do a proper job. I imagine there is an oath of integrity that Karl had to sign to be put in charge of so much of the world’s data records.

“But what he did in ERSST v3b was to throw out the satellite records followed up by throwing out the buoy trends in ERSST v4.”
Complete nonsense. The whole point of the adjustment was to make ships and buoys usable together. From Peter Thorne:

“v4 actually makes preferential use of buoys over ships (they are weighted almost 7 times in favour) as documented in the ERSSTv4 paper. The assertion that buoy data were thrown away as made in the article is demonstrably incorrect”

Tony, “Would you agree with that date or go for the century longer data as being scientifically viable?”
I think longer. 1850 is a stretch. The thing is, SST varies fairly smoothly in both space and time, and in patterns which you can sort out with years of modern data. So you don’t need a high density of readings.

“Adjusted the more reliable data set using the less reliable one and then weighted it?”
There is a known discrepancy (0.12, Kennedy 2011). You have to adjust one or other to make them compatible. As has been said over and over, it makes no arithmetic difference which. The weighting, based on variance, is done regardless of the adjustment.
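Stokes’s claim that “it makes no arithmetic difference which” series is adjusted can be checked with toy numbers. This is my own sketch, not the ERSSTv4 code: the series, offsets, and noise levels are invented, with only the 0.12 discrepancy and the ~6.8 buoy weighting taken from the discussion above.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2016)
truth = 0.01 * (years - 2000)
ships = truth + 0.12 + rng.normal(0, 0.05, years.size)  # offset, noisier
buoys = truth + rng.normal(0, 0.02, years.size)          # cleaner

w_ship, w_buoy = 1.0, 6.8   # buoys weighted ~7x, as in the ERSSTv4 paper

def combine(s, b):
    """Weighted mean of the two observation series."""
    return (w_ship * s + w_buoy * b) / (w_ship + w_buoy)

adjust_buoys_up   = combine(ships, buoys + 0.12)
adjust_ships_down = combine(ships - 0.12, buoys)

# the two choices differ by exactly the 0.12 constant at every point,
# so any trend computed from either combined series is identical
print(np.allclose(adjust_buoys_up - adjust_ships_down, 0.12))  # → True
```

Adding a constant to one series or subtracting it from the other shifts the weighted mean by that same constant everywhere, which is why the choice cannot affect the trend.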

Since I ended up dropping statistical analysis in grad school, I’ll have to take your word that it doesn’t make a difference. As I have said I don’t have major heartburn over what the Karl research did. Just thought it didn’t sound logical. Not that it was improper. I do think that someone didn’t think it through very well though. Work for any decent sized company and HR will make you aware of the importance of avoiding even the appearance of conflict of interest, whether one exists or not.

The Irish Climate Analysis and Research Units blog did a critique of Bates’ claims and the Mail on Sunday article (already linked by Ceist (@Ceist8) above). They repudiate the claims in seven numbered points. I could see problems with two of them and tackled one, point 6. I tried to post my critique in a comment but fell foul of the word count. So I’m posting it here if that’s OK and they can come here and respond if they want to (I’ll let them know). Their blog is hosted by the Department of Geography at Maynooth University. Their critique of Bates is linked below, followed by my comment.

“v4 actually makes preferential use of buoys over ships (they are weighted almost 7 times in favour) as documented in the ERSSTv4 paper. The assertion that buoy data were thrown away as made in the article is demonstrably incorrect.”

///

I note you are careful to say “as made in the article”. The analysis below doesn’t question the literal truth that raw buoy data was used in the paper. Everyone including Bates clearly knows that buoy data was used (after all, it had to be included as a starting point in order to adjust it upwards). The question is whether in the final analysis, the raw buoy data bore any resemblance to the adjusted buoy data. If not, then the raw data i.e. “the good data from buoys” was, for all intents and purposes, thrown out.

I clicked on the link to the ERSSTv4 paper you give in your response to point 6, quoted above. I presume your reference to preferential use of buoys over ships being weighted almost 7 times in favour is the 6.8 multiplication factor cited at the end of the second paragraph in section 2:

“The number of buoy observations was multiplied by a factor of 6.8, which was determined by the ratio of random error variances of ship and buoy observations (Reynolds and Smith 1994), suggesting that buoy observations exhibit much lower random variance than ship observations.” [From Section 2: Reconstruction Methodology- the full second paragraph of that section is reproduced at the bottom of this comment].

It’s of note that the 6.8 multiplication factor is introduced at the end of the paragraph and not at the beginning. At the beginning it says:

“buoy SSTA was adjusted by a mean difference of 0.12°C between ship and buoy observations (section 5).”

The specific section is 5C. I checked this section and there’s no mention of the 6.8 multiplication factor being applied during this stage. I was looking specifically for the “ratio of random error variances of ship and buoy observations” being applied during this adjustment process, thus triggering the 6.8 multiplication factor at the 0.12°C adjustment stage. In other words, there was no 6.8 multiplication factor weighting the 0.12°C adjustment. Indeed, section 5C lays out the adjustment process in four stages:

“Here the adjustment is determined by 1) calculating the collocated ship-buoy SST difference over the global ocean from 1982 to 2012, 2) calculating the global areal weighted average of ship-buoy SST difference, 3) applying a 12-month running filter to the global averaged ship-buoy SST difference, and 4) evaluating the mean difference and its STD of ship-buoy SSTs based on the data from 1990 to 2012 (the data are noisy before 1990 due to sparse buoy observations).”

The only thing which might possibly refer to a multiplication factor here is in point 2), a “global areal weighted average”. It certainly isn’t the 6.8 multiplication factor mentioned at the end of the second paragraph in section 2. Besides, the 6.8 factor is said to be based on the “ratio of random error variances” and nothing else, which is why I was looking for that specific term in section 5C.

Therefore, the 0.12°C upward adjustment to buoy data was made before this same upward adjustment was subsequently compounded by the 6.8 multiplication factor. It therefore follows that your stated “preferential use of buoys over ships….weighted almost 7 times in favour” is, on closer analysis, a 7 times compounding of the upwardly adjusted 0.12°C buoy data. This means that, far from being weighted 7 times in favour of the original, unadjusted buoy data, it’s weighted 7 times *against* it (prior to SSTA averaging, see below).

The process continued as follows:

“The averaging of ship and buoy SSTAs within each 2° × 2° grid box was based on their proportions to the total number of observations. The number of buoy observations was multiplied by a factor of 6.8, which was determined by the ratio of random error variances of ship and buoy observations”. (Also from the second paragraph in Section 2).

In other words, the number of buoy SSTA data points in each 2° x 2° monthly bin was multiplied “almost 7 times” meaning that each and every buoy SSTA data point that had by now been adjusted up by 0.12°C was then reproduced 7 times over in each and every bin. This of course served only to increase the proportion of (upwardly adjusted) buoy SSTA data to ship SSTA data. Only then was the averaging of the two sets performed in each bin with the effect being that the increased proportion of upwardly adjusted buoy data had far more influence on that average than it would otherwise have done. That bumped the averaged result up by 0.06°C for the crucial post 1980 period. As stated in section 5c:

“However, the global mean SST is 0.06°C warmer after 1980 in ERSST.v4 because of the buoy adjustments (not shown) and there are therefore impacts on the long-term trends compared to applying no adjustment to account for the change in observational platforms.”

Laying aside the scientific integrity or otherwise in performing the above set of operations, it’s quite clear that there’s a case to be made for Bates saying:

“They had good data from buoys. And they threw it out […].”

And it is wholly inappropriate to cite the “almost 7 times in favour” weighting of buoy data when that weighting was of the number of *adjusted* buoy SSTA data points and not the number of *original*, unadjusted buoy SSTA data points. Your response to point 6 is clearly saying that the “preferential use of buoys” was weighting the final result “almost 7 times in favour” towards the original buoy data, so the original buoy data were being respected. Why else would you make such a statement?

The 7 times weighting was in fact doing quite the opposite, resulting in Bates’ assertion that the buoy data had been thrown out. Multiplying the data points in each bin by 6.8 times in fact undermined the integrity of the raw buoy data by compounding the 0.12°C upward adjustment 6.8 times over prior to averaging.

Section 2, paragraph 2 in full:

“The ship and buoy SSTs that have passed QC were then converted into SSTAs by subtracting the SST climatology (1971–2000) at their in situ locations in monthly resolution. The ship SSTA was adjusted based on the NMAT comparators; buoy SSTA was adjusted by a mean difference of 0.12°C between ship and buoy observations (section 5). [Specifically section 5c]. The ship and buoy SSTAs were merged and bin-averaged into monthly “superobservations” on a 2° × 2° grid. The number of superobservations was defined here as the count of 2° × 2° grid boxes with valid data. The averaging of ship and buoy SSTAs within each 2° × 2° grid box was based on their proportions to the total number of observations. The number of buoy observations was multiplied by a factor of 6.8, which was determined by the ratio of random error variances of ship and buoy observations (Reynolds and Smith 1994), suggesting that buoy observations exhibit much lower random variance than ship observations.”
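As an aside, the superobservation step quoted above can be sketched in a few lines of Python. This is only a toy illustration of the procedure as described in that paragraph: the anomaly values below are invented, and only the 0.12°C offset and the 6.8 weight come from the paper.

```python
# Toy sketch of the ERSSTv4 bin-averaging step quoted above:
# buoy anomalies are first adjusted up by 0.12 C, then counted with
# a 6.8x weight when the bin average is formed. Numbers are invented.

def bin_average(ship_anoms, buoy_anoms, buoy_offset=0.12, buoy_weight=6.8):
    """Weighted mean of ship and offset-adjusted buoy anomalies in one bin."""
    adj_buoys = [b + buoy_offset for b in buoy_anoms]  # buoy SSTA adjusted up
    w_ship = float(len(ship_anoms))                    # ships at unit weight
    w_buoy = buoy_weight * len(buoy_anoms)             # buoys upweighted 6.8x
    total = sum(ship_anoms) + buoy_weight * sum(adj_buoys)
    return total / (w_ship + w_buoy)

# Ships reading 0.50/0.54, buoys reading about 0.12 C cooler:
print(round(bin_average([0.50, 0.54], [0.40, 0.38, 0.42]), 4))  # 0.52
```

In this toy case the offset makes ships and adjusted buoys agree by construction, so the weighting leaves the bin mean unchanged; the dispute in this thread is over what the weighting does to the record when the two platforms’ shares change over time.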

“Therefore, the 0.12°C upward adjustment to buoy data was made before this same upward adjustment was subsequently compounded by the 6.8 multiplication factor. “
You are completely mixed up here. There are two separate things
1. Determine the bias (0.12°). This was actually done in Kennedy’s 2011 paper. No issue of weighting there.
2. To get average SST, combine ship and buoy data. This is variance weighted, and that is where the 6.8 factor comes in. It would do so independently of bias adjustment. You just have to get the right numbers before weighting.

I think you are falling for the fallacy that seems to have afflicted even Bates, that somehow it matters whether you adjust buoy to ship or ship to buoy. It doesn’t. The only difference between the two is a constant which disappears when you take anomaly. To see that, suppose you adjusted the ship data down by 0.12 to match buoys. Then add 0.12 to everything, everywhere. Ships are back to where they were, and buoys have been adjusted up. The only difference between the two actions is that constant offset of 0.12.

“You are completely mixed up here. There are two separate things
1. Determine the bias (0.12°). This was actually done in Kennedy’s 2011 paper. No issue of weighting there.
2. To get average SST, combine ship and buoy data. This is variance weighted, and that is where the 6.8 factor comes in. It would do so independently of bias adjustment. You just have to get the right numbers before weighting.”

/////

Did you actually read the comment, or did you just skim through to halfway, yank out a quote and trash it without understanding the basis for it? A less charitable person would say you’re wilfully misrepresenting. I genuinely think you’re not slowing down enough to digest what’s being carefully laid out.

The whole point of the arguments laid out before the section you quoted was to show precisely that “there are two separate things”. Those arguments prove exactly what you’re saying in your points 1 and 2. It could hardly be clearer.

The reason I laid it out so carefully is that paragraph 2 in Section 2 of the ERSST.v4 paper is slightly ambiguous in that it mentions the 6.8 multiplication after both processes: the bias determination and the variance weighting. Although it’s fairly obvious it relates only to the variance weighting, other readers may think otherwise and question my analysis on that basis.

That’s why I went to great lengths to establish that it is indeed the case that the 6.8 weighting factor came after the bias determination and only applied to the variance weighting. You’ve assumed that by delving into it in this way, I myself was “completely mixed up here”. I was covering all bases lest any reader who’s less inclined to read the paper thought there was a hole in my argument (i.e. that the weighting might possibly apply to the biasing, as implied by the slight ambiguity in Section 2, paragraph 2).

Only after establishing this (which means establishing your points 1 and 2 with crystal clarity) did I continue.

The reason for making absolutely sure we knew where the 6.8 weighting was applied is so as to get beyond the narrow (but correct) argument you made in your last paragraph. Of course it doesn’t matter if the bias adjustment is buoy-to-ship or ship-to-buoy: the average 0.12°C adjustment is made either way and results in a simple translation of whichever line up or down by that 0.12°C. The trend stays the same, so far so good. But that is before the second of your “two separate things” is applied i.e. the 6.8 weighting factor. I completely agree that it’s two separate things and that when the 6.8 factor came in it “would do so independently of bias adjustment”. That’s the very reason I made my comment. The weighting is one stage on from the simple translation up or down of buoy or ship data for the bias determination.

The crucial factor here is that “Since 1980 the global marine observations have gone from a mix of roughly 10% buoys and 90% ship-based measurements to 90% buoys and 10% ship measurements.”

If you weight a 90% contribution of buoys by 6.8 it’s going to have a far greater effect on the resultant weighted average than if you weight a 10% contribution. Therefore, the 6.8 weighting has a greater and greater effect as one progresses through the 32 years from 1980 to 2012. The paper makes it clear that this was done in a temporal dimension as well as areal over that period (otherwise we wouldn’t end up with a chronological series and a left-to-right line on the graph). It says in Section 2:

“The ship and buoy SSTAs were merged and bin-averaged into monthly “superobservations” on a 2° × 2° grid.”

Then shortly after that it says:

“The number of buoy observations was multiplied by a factor of 6.8”.

These two quotes bear out that over the 1980-2012 period, the 6.8 weighting was having a greater and greater effect. Why else would it be called the pause-buster paper? It accounts for the 0.06°C upward shunt in the resultant, averaged temperatures after 1980 as clearly stated in section 5C of the paper:

“However, the global mean SST is 0.06°C warmer after 1980 in ERSST.v4 because of the buoy adjustments (not shown) and there are therefore impacts on the long-term trends compared to applying no adjustment to account for the change in observational platforms.”

Notice it says “there are therefore impacts on the long-term trends” and that this is “because of the buoy adjustments”. It couldn’t be any clearer.

Your argument regarding “falling for the fallacy that seems to have afflicted even Bates” is a red herring because it applies to number 1 of your “two separate things”, the bias adjustment, but not to number 2, the 6.8 weighting factor. This was the whole point of the initial arguments in my comment: I was putting clear blue water between the bias adjustment stage and the weighting stage so that we could see how the weighting leveraged the already upwardly adjusted buoy data in each 2° x 2° monthly bin by 6.8. It did so regardless of how many actual buoys we start with. This comment takes it further by pointing out that this leveraging effect became increasingly dominant from 1980 to 2012 as the buoy/ship measurement ratio went from 1:10 to 10:1 and therefore did affect the trend. And that effect on trend is stated as clear as day in the quote above from the ERSST.v4 paper.

scute,
Well, if not “mixed up”, then overly long and hard to follow. And I still can’t see your point after another long reply. Again, the 0.12 has nothing to do with weighting, and the weighting happens independently of the 0.12. How would your argument change if they had weighted the ships down instead of buoys up? It makes no difference, of course.

The fact that the adjustment is weighted is just a part of getting it right.

“If you weight a 90% contribution of buoys by 6.8 it’s going to have a far greater effect on the resultant weighted average than if you weight a 10% contribution.”
I suspect the 90% is the % after weighting. But it doesn’t really matter. The fact is that buoys are having a big effect. That makes it more important that their pre-weighting value is right. And again, it doesn’t matter whether you adjust buoys up or ships down, however weighted. How do you think they should do the arithmetic?

Scute,
I set out the algebra of adjustment on the other thread. But it might be useful to put it here as well. I’ve allowed for weighting.
Suppose you first average ships and buoys separately over a period of time, say a month. Whatever you do to weight or adjust, you do equally to members of each class, so you do the same to the average.

In that month, there are nS ships, average S, and nB buoys, average B. The combined average A will be a weighted sum:
A = (wB*B + wS*S)/(wB+wS)
If you want an unweighted average, then wB=nB and wS=nS. But if you want to upweight buoys by 7, then wB = 7*nB.

Now suppose you adjust buoys up by a (0.12). Then
Ab = (wB*B + wB*a + wS*S)/(wB+wS) = A + wB*a/(wB+wS)
Or if you adjust ships down by a, then
As = (wB*B + wS*S - wS*a)/(wB+wS) = A - wS*a/(wB+wS)
Then
Ab - As = a*(wB+wS)/(wB+wS) = a

Constant a for each month, however A and B change. So no difference in trend or shape. This is true whether you upweight buoys or not.
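The algebra above can be checked numerically in a few lines. The weights and averages below are arbitrary made-up values; only the 0.12 bias figure comes from the thread.

```python
# Numerical check of the identity Ab - As = a derived above.
# Weights and averages are arbitrary; only the 0.12 bias is from the thread.
wB, wS = 6.8 * 30, 10.0        # buoys upweighted, ships at unit weight
B, S, a = 0.35, 0.47, 0.12     # buoy average, ship average, bias

A  = (wB * B + wS * S) / (wB + wS)          # no adjustment
Ab = (wB * (B + a) + wS * S) / (wB + wS)    # buoys adjusted up by a
As = (wB * B + wS * (S - a)) / (wB + wS)    # ships adjusted down by a

assert abs((Ab - As) - a) < 1e-12           # they differ by the constant a only
```

Because the two choices differ by a constant, any trend computed from either series is identical, whatever the weights happen to be.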

Thank you for continuing to engage despite my indignation…and with algebra, splendid! I shall reply in full in a few days so please come back and check. I have other commitments hence the delay.

I’m now drilling right down into Karl et al. and have dug out the supplementary materials. Whenever I see “supplementary materials”, I swap those words for “closet in which to throw skeletons”.

I’d barely started before coming across this:

“The addition of buoy data in recent decades has been particularly important as the spatial coverage from ship observations has decreased since the 1990’s (cf. Fig. 1(a) in (13)). As stated in this article, three of the 11 major improvements incorporated into ERSST version 4 had by far the largest impact on the trend during the recent “hiatus” period (2000-2014). To make the buoy data equivalent to ship data on average requires a straightforward addition of 0.12°C to each buoy observation. This impacts the trend only because the number of buoys and percentage of coverage by buoys has increased over this period.”

So this, most notably the last sentence, bears out what I was saying: the progressive increase in buoy numbers over time between 1990 and 2012 is responsible for skewing the SST trend upwards during the hiatus period. The main Karl et al paper says this in corroboration:

“Of the 11 improvements in ERSST version 4 (13), the continuation of the ship correction had the largest impact on trends for the 2000–2014 time period, accounting for 0.030°C of the 0.064°C trend difference with version 3b. [The buoy offset correction contributed 0.014°C decade−1 to the difference, and the additional weight given to the buoys because of their greater accuracy contributed 0.012°C decade−1 (supplementary materials).]”

(Their square brackets, above, not mine).

So the ERSSTv4 paper’s 0.12°C “buoy offset correction” and the “additional [6.8] weight given to the buoys” was responsible for 0.026°C per decade of the increased trend in v4’s SST (0.014 + 0.012).

The other 0.030°C per decade they mention was attributable solely to the NMAT adjustments to ship data used in the ERSSTv4 paper. This had its greatest effect during the hiatus period according to the main paper.

The main paper drives home the fact that the higher ERSSTv4 trend is almost fully responsible for the Karl et al global surface temp trend hike.

“The new analysis exhibits more than twice as much warming as did the old analysis at the global scale (0.086° versus 0.039°C decade−1) (table S1). This is clearly attributable to the new SST analysis, which itself has much higher trends (0.075° versus 0.014°C decade−1).”

These amounts, measured in hundredths of a degree, are small but add up. Karl et al. (2015) make much of the fact that the latest IPCC period, 1998–2012, had a trend that was statistically insignificant (0.039°C per decade), giving rise to the claim of the pause. The new trend of 0.086°C per decade in Karl et al. puts it outside the error bars, making it statistically significant and allowing them to say that before Karl et al. there was a claimed pause and after Karl et al. there isn’t: a satisfyingly neat claim, summarised in two words: “pause buster”. Here’s the relevant passage:

“Also, the new global trends are statistically significant and positive at the 0.10 significance level for 1998–2012 (Fig. 1 and table S1) by using the approach described in (25) for determining trend uncertainty. In contrast, the IPCC report (1), which also used the approach in (25), reported no statistically significant trends for 1998–2012 in any of the three primary global surface temperature data sets.”

But as they say in the main paper, the change in overall global surface temperature trend during the hiatus period is almost entirely due to the 0.064°C SST trend change between ERSSTv3b and ERSSTv4. That in turn was largely dependent on the 0.026°C trend increase in SST brought about by the “buoy offset correction” and the “additional [6.8] weight given to the buoys”, as quoted in context above.

Therefore, both the buoy offset correction and the 6.8 weighting played a major part in pushing the global surface temp trend from 0.039°C to 0.086°C per decade and therefore played a major part in busting the pause.
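For bookkeeping, the per-decade contributions quoted above from Karl et al. (2015) can be tallied. The four named figures are as quoted; the residual attributed to the remaining improvements is my inference from the arithmetic, not a number stated in the paper.

```python
# Tally of the trend contributions quoted above, in deg C per decade,
# for the 2000-2014 SST trend difference between ERSSTv4 and ERSSTv3b.
ship_correction  = 0.030   # continuation of the ship (NMAT) correction
buoy_offset      = 0.014   # the 0.12 C buoy offset correction
buoy_weighting   = 0.012   # the extra (6.8x) weight given to buoys
total_difference = 0.064   # v4 minus v3b trend difference

buoy_related = buoy_offset + buoy_weighting    # the 0.026 figure cited above
# Inference: the remainder comes from the other improvements in v4.
residual = total_difference - ship_correction - buoy_related

assert abs(buoy_related - 0.026) < 1e-9
assert abs(residual - 0.008) < 1e-9
```

So the three named improvements account for 0.056 of the 0.064°C per decade difference, with roughly 0.008 left over for the other changes in version 4.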

“the progressive increase in buoy numbers over time between 1990 and 2012 is responsible for skewing the SST trend upwards during the hiatus period.”
In fact, the increase skewed trend down, because buoys measure cooler. The correction counters that.
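A quick numerical sketch of that mechanism, using invented numbers: hold the true anomaly flat, let buoys read 0.12°C cooler than ships, and let the buoy share of observations grow over time. Everything here except the 0.12 offset and the 6.8 weight is made up for illustration.

```python
# Invented illustration: flat true anomaly, buoys biased 0.12 C cool,
# buoy share of 100 observations rising over time, 6.8x buoy weighting.

def blend(frac_buoy, true=0.5, bias=0.12, weight=6.8, correct=False):
    """Weighted ship/buoy average for a given buoy fraction of observations."""
    n_buoy, n_ship = 100 * frac_buoy, 100 * (1 - frac_buoy)
    buoy_val = true if correct else true - bias   # correction cancels the bias
    w_buoy, w_ship = weight * n_buoy, n_ship
    return (w_buoy * buoy_val + w_ship * true) / (w_buoy + w_ship)

raw = [blend(f) for f in (0.1, 0.5, 0.9)]
cor = [blend(f, correct=True) for f in (0.1, 0.5, 0.9)]

assert raw[0] > raw[1] > raw[2]                 # uncorrected blend drifts cooler
assert all(abs(c - 0.5) < 1e-12 for c in cor)   # corrected blend stays flat
```

With no correction the blend drifts cooler purely because the buoy share grows, even though nothing real has changed; the offset correction removes that artificial drift.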

In my last comment I nailed the issue to the floor with no wiggle room whatsoever. I used quotes from Karl et al. 2015 itself, which categorically stated that 0.026°C of the 0.064°C trend difference between ERSSTv3b and ERSSTv4 was attributable to the “buoy offset correction” and the “additional weight given to the buoys”. Furthermore, that “This impacts the trend only because the number of buoys and percentage of coverage by buoys has increased over this period.”

Those two quotes from Karl et al. 2015 bookended your excerpt from my comment. Therefore, my initial suspicion that you wished to wilfully misrepresent me, which I disregarded to give you the benefit of the doubt, is now proven correct. There’s no point in continuing this discussion.

“I later learned that the computer used to process the software had suffered a complete failure, leading to a tongue-in-cheek joke by some who had worked on it that the failure was deliberate to ensure the result could never be replicated,” – this is not a joke. By saying that the computer suffered failure (quite a rare occasion these days), the culprit has denied others the access to the data. He did that because he used or made fake data. I might guess he also attempted to destroy that data.

From here, there are multiple possibilities. The NOAA computers must be backed up continuously. Even if the culprit destroyed the data on his computer, it might be backed up somewhere. Even if the data was deleted before back up, and the computer was re-imaged, there is a chance that the data can be recovered using forensic methods.

The new NOAA administration might try this approach. The “dog ate my homework” defense appears in the climate alarmism cases all the time.

” By saying that the computer suffered failure (quite a rare occasion these days), the culprit has denied others the access to the data. He did that because he used or made fake data. “
There is no evidence of that at all, and it is extremely unlikely. In any case, the data is archived. All Bates has said here is that a computer failed (actually not so rare). The rest is just your imagination.

In regards to how this case should be handled, I would like to point to a precedent on the opposite side of the science/pseudoscience divide: Kurt Mix, an engineer who helped to shut down the gushing oil after the Deepwater Horizon explosion, was indicted for deleting a few text messages that he exchanged with his supervisor. He was accused of destruction of evidence and obstructing justice, crimes that carry up to 20 years in prison for each count. There is an opinion that the orders to charge Kurt Mix came directly from Eric Holder.

I see no reason why evidence tampering by the so-called “climate scientists” should not be ignored by the law enforcement.

“I see no reason why evidence tampering by the so-called “climate scientists” should not be ignored by the law enforcement.”
I agree. It should be ignored. Law enforcement is about finding evidence of crimes, not temperature change.

John Bates’ posting brings a whole set of different questions to the fore. These questions stem from my research in the discipline of Science, Technology, Engineering and Policy (STEP). There is the whole question of whether Tom Karl et al. followed the Department of Commerce and NOAA policies for publishing Influential Scientific Information (ISI). ISI is scientific information that is known to have public policy influence or may potentially have public policy implications. This kind of scientific information can lead politicians, policymakers, and lawmakers to take political positions and to make laws and policies that can positively or negatively affect citizens and businesses in various ways. The data and conclusions in the K15 paper qualify as ISI, because the implications of the conclusions have the potential to influence public policy in the politically charged climate change policy debates among various political factions.

It may seem strange, but scientific information integrity has been a subject of debate, and of congressional and executive action, for many years. This raises a broader question: don’t scientists already have integrity? After all, they are scientists, and scientists have earned the trust of the lay public, and therefore automatically have integrity and trust conferred upon them through the degree-earning process, right? This is the epistemic authority that is granted to scientists. Institutions also garner epistemic authority, e.g. NOAA, NASA, universities, etc. To understand what is going on here, we need to look at the recent background of the pursuit of scientific integrity.

The most recent effort at scientific information integrity definition and control starts with the Information Quality Act:

Section 515 of the Treasury and General Government Appropriations Act for FY 2001 (the Data Quality Act, Information Quality Act or IQA) directed the White House Office of Management and Budget (OMB) to issue government-wide guidelines that “provide policy and procedural guidance to federal agencies for ensuring and maximizing the quality, objectivity, utility, and integrity of information (including statistical information) disseminated by federal agencies.” (NOAA Information Quality Act Overview, December 2010)

This act of Congress caused a flurry of actions throughout the executive branch of the federal government, and various departments and agencies developed procedures and policies to assure scientific information integrity. However, because of allegations by some of misuse of scientific information by the Bush Administration to push particular public policies, in March 2009 the Obama Administration’s OMB issued a memorandum concerning scientific integrity. This memorandum assigned the Director of the Office of Science and Technology Policy the responsibility of assuring the highest integrity of scientific information in the executive branch of the government. As you all know, scientific information integrity is essential in every aspect of science, and that such a memorandum was needed just to ensure integrity in science is astonishing, to say the least. It seems that scientific information integrity may be a recurring problem.

From this memorandum, a whole raft of policies was developed at the individual executive department level and flowed down to the various agencies, governing how to ensure information integrity and handle ISI. If you look at the scientific integrity policies at NOAA, for example, it can be seen that the agency has developed measures to control the integrity of scientific information and implemented specific measures to control the release of ISI. However, no matter what is written in law or policy, all policies are subject to management enforcement and scientist conformance. This means that people are at the heart of all policy implementations, and no policy is any good without strict adherence and without enforcement of consequences for violations.

My experience with NOAA indicates that the policies they implemented either have no teeth, or the agency has been corrupted by politics that de-fang policy enforcement. I say this because I have researched NOAA policies governing ISI, and documented a political controversy that erupted in 2012 in my home town (at the time) in Colorado because of a complete lapse in ISI policy conformance by a single NOAA scientist. This particular controversy was about the social and health effects of oil and gas extraction activities within the town and the atmospheric effects of such activities. In this case, a single NOAA scientist allowed himself to be manipulated by anti-oil-and-gas political activists, and literally gave them unpublished and non-vetted scientific information. The scientist in fact worked with the activists to construct a presentation to local municipality officials. These local officials were decision makers and policymakers who were exposed to ISI that had not gone through the peer review process required by NOAA policy before public release. It is unknown whether the scientist involved faced consequences for his behavior. A paper was published a year later confirming the results of the prematurely released ISI. The data itself and the official publication of the data raise a whole bunch of other questions that cannot be delved into here.

You may ask, what’s the problem with this action? Why not release the information? Well, there are several reasons. 1) The purpose of ISI control is to prevent or minimize unnecessary undesirable effects in any given community that may be influenced by the release of ISI. 2) ISI could be misinterpreted or manipulated by activists seeking to sway a political debate one way or another. 3) The premature disclosure of ISI may cause influenced policymakers to generate policies that are reflexive in nature and not well thought out, that impact businesses negatively, or that may be inappropriate because of an incorrect ISI interpretation. 4) The public influenced by the premature release of ISI may come to incorrect conclusions, become panicked or fearful, and make personal decisions based on inappropriate assumptions. 5) There is a remaining, long-term influence in the affected community from the premature release of ISI. (This controversy is described in detail in the STEP chapter of my PhD Dissertation, External Detection and Localization of Well Leaks in Aquifer Zones [Chapter 6, RESIDUAL FOOTPRINTS OF RESOLVED SCIENTIFIC KNOWLEDGE DISPUTES]; a requirement for earning my minor in STEP.)

In the case described above, all of the described reasons for controlling ISI actually occurred in the affected community. It all could have been prevented if a single scientist had followed strict ISI release policies. It should also be noted that the controversy that scientist caused forced him to realize how he had been manipulated, and as a result he regretted his premature release of ISI.

How does this apply to the K15 paper? If ISI policy had been strictly followed, then the character of the temperature data used in the paper might have been recognized, and publication might have been delayed until the correct data were used. However, Tom Karl could have been deliberately acting as a Stealth Advocate (see Roger Pielke’s definition in his book The Honest Broker: Making Sense of Science in Policy and Politics, 2007) while in a leadership position at NOAA. His specific advocacy would be concealed by his use of incorrect data. This possibly reflects misdirection and hidden political influence in scientific discourse, something that should be avoided. Whatever Karl’s case is, it is unfortunately clear that stealth advocacy is alive and well in a significant amount of climate science discourse.

The question of whether Tom Karl et al. strictly followed the Department of Commerce and NOAA policies for publishing ISI still stands. Sorry for the long discourse, but the concepts presented here are needed to understand, at least at a limited level, the present circumstances. STEP is an interesting subject, and involves concepts that may be unfamiliar to many readers.

“The data and conclusions in the K15 paper qualify as ISI”
I think that needs to be better established, with some detail. What data and what conclusions? There is really very little original data; it is a methods paper. It describes how they can take some known biases (established by other people) and make appropriate adjustments to remove the bias. It does comparisons. And it comments on temperature history. All in a scientific journal.

You have expended a lot of words in your comment. I think you need to say something about the boundaries of ISI and how this fits in.

One of the more bizarre claims is that ERSSTv4 was rushed. It was first submitted for publication in December 2013 and published in 2015.

And before that we have this:

“Because ships tend to be biased warm relative to buoys and because of the increase in the number of buoys and the decrease in the number of ships, the merged in situ data without bias adjustment can have a cool bias relative to data with no ship–buoy bias. As buoys become more important to the in situ record, that bias can increase. Since the 1980s the SST in most areas has been warming. The increasing negative bias due to the increase in buoys tends to reduce this recent warming. This change in observations makes the in situ temperatures up to about 0.1°C cooler than they would be without bias. At present, methods for removing the ship–buoy bias are being developed and tested”

I think it was submitted December 2014. But yes, not rushed. And not on the Eve of Paris. And goodness knows how long they had to battle with the Batesian red-tapers before they got it out the door. And yes, the data calling out for some adjustment for buoy vs ships had been published for many years.

“Yes, God forbid anyone has the damn gall to exert themselves and risk their employment and pension”
Well, he’s not risking any of that. But at this stage, he wasn’t a whistle blower. He was, as Eli notes, a self-promoted gatekeeper. And all the usual stuff – you can’t do this now when there is a new version of GHCN in the pipeline etc. It’s never the right time.

The spike in surface temps last year was ENSO influenced – but also had a drought artefact. The difference in peak latent heat flux at the surface can be 300 W/m2 between a wet and a dry day. Thermometers do not measure latent heat, of course, so this component is missed. It can make a difference of 2 degrees C. Yet you are arguing on their ground about obsolete methods.

The instruments on the AQUA satellite – on the other hand – provide a great deal of precise and diverse information.

There are no 20 year trends – just lots of loud noise – there cannot be a definitive pause detection. It is purely symbolic – a flag to be captured in a skirmish you can’t win. There has been a massive mobilisation of urban doofus hipsters, the press and science agencies. You can see some of it here. I am afraid it is asymmetric warfare.

Glad this attack on scientists has once again been exposed as all part of a dishonest politically motivated anti-science witch hunt on the part of the House (anti)Science Committee and Lamar Smith. Of course some people are just ideologically motivated to believe any science they don’t like must be wrong and the scientists must be ‘crooked’. People like that will never accept science they don’t like no matter how much evidence there is.

“Colleagues of Mr Karl have been quick to dismiss the story, saying that other data sets come to similar conclusions. This is to miss the point and exacerbate the problem. If the scientific establishment reacts to allegations of lack of transparency, behind-closed-door adjustments and premature release so as to influence politicians, by saying it does not matter because it gets the “right” result, they will find it harder to convince Mr Trump that he is wrong on things such as vaccines.”

There is an argument floating around that it is immaterial how the constant offset has been implemented by Karl – anomalies solve everything. This is a very puzzling argument and the only way it seems to apply is if one assumes ships as literally effectively buoys which just happen to measure precisely 0.12C high, always and everywhere. Of course, this is not true, as figure 1 in Hausfather(2017) in particular demonstrates. The constant offset, based on an average for 10 years of comparative data, is just a crude fix, possibly intentionally used because of its desirable trend features when confounded with the changing ship-buoy ratio. The actual ship-buoy discrepancy is some, likely wide-ranging and complicated, distribution of error which really should have been documented far more thoroughly before introducing a crude, opportunistic fix.

But they also weight the buoys much more when both are available. The result is that their series matches satellites, buoys-only, and Argo better than HadSST3, which has no such weighting. As Hausfather also showed.

“This is a very puzzling argument and the only way it seems to apply is if one assumes ships as literally effectively buoys which just happen to measure precisely 0.12C high, always and everywhere.”
The argument relates to the frequently put notion that it matters whether you adjust ships to buoys or buoys to ships. It doesn’t, as a matter of elementary arithmetic, after taking anomalies. As to whether the adjustment is too coarse, the fact is that what was done before was to add exactly 0.0 to everything. There is clear evidence that buoys read about 0.12 lower than ships. A blanket change of 0.12 is not perfect, but better than a blanket change of 0.0.
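The elementary arithmetic can be checked directly. A minimal sketch, using entirely made-up toy numbers (not ERSST data): build a merged ship/buoy series with a changing ship fraction, apply the constant offset in either direction, and compare the anomalies.

```python
import numpy as np

# Hypothetical toy series (deg C): the ship share falls over time, and
# ships read a constant 0.12 C warmer than buoys. Numbers are invented.
rng = np.random.default_rng(0)
true_sst = 15.0 + 0.01 * np.arange(120) + 0.1 * rng.standard_normal(120)
ship_frac = np.linspace(0.9, 0.2, 120)   # ships become rarer
ships = true_sst + 0.12                  # ships biased warm
buoys = true_sst

# Option A: adjust buoys UP to ships; option B: adjust ships DOWN to buoys.
merged_a = ship_frac * ships + (1 - ship_frac) * (buoys + 0.12)
merged_b = ship_frac * (ships - 0.12) + (1 - ship_frac) * buoys

# The two merged series differ by a constant 0.12 everywhere, so after
# subtracting each series' own mean (forming anomalies) they coincide.
anom_a = merged_a - merged_a.mean()
anom_b = merged_b - merged_b.mean()
print(np.allclose(anom_a, anom_b))  # True
```

The direction of a constant offset drops out of anomalies; whether a *constant* offset is an adequate model of the ship–buoy discrepancy is the separate question raised above.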

Is there equivalent ‘irony and hypocrisy’ to pointing out ‘irony and hypocrisy’ that one who’s now retired from an agency does not return to that agency to follow that agency’s process? From the headpost: “I have wrestled for a long time about what to do about this incident. I finally decided that there needs to be systemic change both in the operation of government data centers and in scientific publishing, and I have decided to become an advocate for such change.”

Additionally, there is a strong defense of the Karl side by yourself for not following that agency’s process while he was in employment. This is the point of the post. Irony and hypocrisy inclusive.

Please be fair and even handed. Chastisement of Dr. Bates should be simultaneous with same of Dr. Karl.

The National Oceanic and Atmospheric Administration said Monday that it would review a whistleblower’s allegations that the agency manipulated climate data in order to eliminate the global warming “pause” for political reasons.

isn’t it fascinating and convenient how any climate scientist, or any scientist, who questions the dogma, as a good scientist is supposed to do, is a “denier” and can simply be ignored?

I remember when the warmists tried to deny the Little Ice Age, claiming the data was “minimal” or “misinterpreted” or some such, as deniers always say. As a Finnish historian responded, “If our armies hauled gun carriages across a frozen river, we can accept the fact of the ice’s presence and thickness as established.”

The “Population Bomb” didn’t destroy humanity. Nuclear weapons didn’t. We didn’t run out of oil by 1980/1990/2000/2010. And the world is not going to warm enough to kill us either.

I’d be happy to entertain inquiries of, “We can see the world is warming, and there are a great many factors we still haven’t nailed down, and we need more money to isolate, examine and incorporate them.”

But when the demand is, “The world is doomed, you must give us complete control of your economy or else,” the answer will remain unprintable.

@Ceist Quote: “Of course some people are just ideologically motivated to believe any science they don’t like must be wrong and the scientists must be ‘crooked’. People like that will never accept science they don’t like no matter how much evidence there is.”

I’m no scientist, so I won’t pretend to understand this information. There are plenty here disputing your claims though, and enough reason to believe that no, nothing skeevy took place at NOAA.

But you get full props for giving climate change deniers another thing to endlessly twist to their end. On the bright side, you’ve probably given your new company a boost, so at least you’ll profit from those in the fossil fuel industry who might now be more inclined to use your services, seeing as you are an ally.

Please allow me to go back to basics here. From what I read above we appear to have a ‘systematic’ bias with respect to sampling… one way or the other. However, if one looks at the Method Detection Limit (https://en.wikipedia.org/wiki/Detection_limit#Method_detection_limit), can causes of climate variability be deduced using differences that are ‘in the background noise’ of the ‘system’s’ (thermometers + computer models + data selection + other) ability to measure with confidence? Clearly, you all are much more informed on the data sources and can argue an assigned ‘sigma’ better than I can ever hope. Whatever you decide, if there is this much ‘error’ in the capability of the thermometers plus other parts of the ‘system’… is it prudent to claim a certain cause of climate variability?

“can causes of climate variability be deduced using differences that are ‘in the background noise’ of the ‘system’s’ (thermometers + computer models + data selection + other) ability to measure with confidence?”

This is much studied, and used by the indices. HadCRUT results are expressed as the output of a Monte Carlo simulation. There is more recent work, but most refer to Brohan 2006. I have talked about spatial sampling here.

Hi Nick. Something bothers me about this and I’ll say up front IANAS. Let’s say you did some experiment where you set up the conditions each time as similarly as possible and made a measurement of some aspect of the system. If you had 2000 runs of the experiment, I can see where the mean and SD is meaningful, because it represents a measurement made on a system in a prescribed state.

But with thermometers all over the country, in various terrains, at various elevations, etc; you aren’t measuring something designed to be in the same state. Of course, you can take the bucket of numbers and derive a mean and SD from them, but it seems to me the SD is less meaningful in this case.

jim2,
You aren’t claiming to have made an individual measurement. There is a worldwide temperature distribution, and you are sampling, and the usual sampling error considerations apply. I have talked about that here. In particular it explains why, if you take it as sampling absolute temperature, the error is too high; but if you subtract means to form anomalies, you remove most of the variation, and the sampling error of the mean is back to reasonable.

There is nothing unusual about this. If you wanted to estimate a wheat crop, in a field or a country, you would sample, with care about spatial coverage. Same for soil moisture, minerals, the temperature of a furnace or a swimming pool. And the bigger the sample, the better your estimate. Bigger samples reduce the sampling error of the mean, which is how the SD of your last question should be seen.
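The absolutes-versus-anomalies point can be illustrated numerically. A crude toy sketch with invented numbers (not real station data): give each “station” its own large climatological mean plus a shared signal and small weather noise, then compare the sampling error of the mean with and without subtracting the station means.

```python
import numpy as np

# Invented toy setup: 5000 "stations", each with its own climatological
# mean spread across a ~50 C range, plus a shared 0.8 C global signal
# and 0.5 C weather noise.
rng = np.random.default_rng(1)
n_stations = 5000
station_means = rng.uniform(-20.0, 30.0, n_stations)  # big spatial spread
weather = 0.5 * rng.standard_normal(n_stations)       # small noise
absolute_temps = station_means + 0.8 + weather

anomalies = absolute_temps - station_means            # remove local means

# Repeatedly sample 200 stations and record the error of each sample mean.
errs_abs, errs_anom = [], []
for _ in range(500):
    s = rng.choice(n_stations, 200, replace=False)
    errs_abs.append(absolute_temps[s].mean() - absolute_temps.mean())
    errs_anom.append(anomalies[s].mean() - anomalies.mean())

# For absolutes the sampling error is driven by the ~15 C spread of
# station means; for anomalies only the 0.5 C weather noise remains.
print(np.std(errs_abs), np.std(errs_anom))
```

The spread of sample-mean errors shrinks by roughly the ratio of the two standard deviations, which is the point of forming anomalies before averaging.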

Thank you for the healthy, civil debate. Regardless any other climate debaters, I give you two guys high marks on working through the discussion with consideration.

Without naming companies, I was involved with the estimation of weather risk for large commercial companies (big box stores and utilities). At that time, we did exactly what Nick is saying and ran Monte Carlo simulations day after day, and invested in climatologists, meteorologists, mathematicians and other highly educated people to make bets on the order of $4-5 million per season (winter and summer). Unfortunately, what we determined was that the supposed ‘normal’ distribution of weather data using long-period data like 50- and 100-year thermometer measurements produced what is known as a FAT TAIL distribution… Imagine the classical bell curve sitting on top of a rectangle lying on its side. What this told us was that the weather was moving for some reason other than stochastic variation… or we did not have a wide enough confidence interval estimate. We were just about as likely to go ‘off scale high’ or ‘off scale low’ as hit the mean estimate. I say all this to propose that Monte Carlo simulations are one of the reasons we all lost our jobs. For several years, leadership kept promising management that we had it ‘figured out’ and we did not. My premise here is that if we cannot predict the weather past 3 months to save our jobs (and we ALL did care to save our jobs), how can people predict longer term?

Maybe, instead of Monte Carlo, Nick was thinking about the Central Limit Theorem (https://en.wikipedia.org/wiki/Central_limit_theorem). I used to know this theorem well when I worked as a Statistical Process Control specialist in a radiological laboratory counting ‘disintegrations per minute’ hour after hour and day after day, for years.

The applicability of the CLT depends on the underlying data being, in fact, ‘normal’. I point you back to the first paragraph where we found the bell curve to have FAT TAILS and therefore something else is going on… I don’t think the underlying data is ‘normal’, as some of us are assuming in applying multi-sample tests.

So I revert further back to my point from yesterday that we have ‘error’ in the ‘system’… (thermometers + computer models + data selection + other)… a.k.a. ‘systematic error’… which goes back to my question: if there is this much ‘error’ in the capability of the thermometers plus other parts of the ‘system’… is it prudent to claim a certain cause of climate variability?

jkutney – The CLT is useful precisely because the underlying distribution doesn’t have to be normal. It can be bi-modal or whatever and the CLT will yield the estimated mean of the population. The more times the population is sampled, the closer the sample mean will be to the population mean.
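jim2’s point about the CLT not requiring normality is easy to check. A toy demonstration with made-up numbers (no climate data involved): draw from a sharply bimodal population and watch the sample means cluster around the population mean anyway.

```python
import numpy as np

# A deliberately non-normal (bimodal) population: two well-separated
# normal modes at -5 and +5. Nothing here resembles the population mean.
rng = np.random.default_rng(2)
pop = np.concatenate([rng.normal(-5, 1, 50_000), rng.normal(5, 1, 50_000)])

# Yet the means of samples of size 100 cluster tightly around the
# population mean (near 0), with the usual sigma/sqrt(n) spread.
sample_means = np.array([rng.choice(pop, 100).mean() for _ in range(2000)])
print(round(pop.mean(), 3), round(sample_means.mean(), 3),
      round(sample_means.std(), 3))
```

The sample-mean distribution comes out approximately normal even though no individual draw ever lands near zero; this is the sense in which the CLT tolerates a non-normal population. Whether the *population itself* is stationary (the fat-tails worry above) is a separate issue the CLT does not address.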

Nick – My experience is that once you run the CLT iterations, you end up with a standard deviation for the CLT distribution that does not tell you about the standard deviation needed to determine the original Method Detection Limit… so although one may be able to make a single estimate (the new CLT-derived mean), the original standard deviation is no better for having used the CLT… so you cannot then run an Error Analysis using the CLT-derived standard deviation (http://teacher.nsrl.rochester.edu/phy_labs/AppendixB/AppendixB.html) to see if the new value is statistically different from the other… i.e., one cannot tell if there is a statistically significant change in temperature.

Back to my earlier point, our PhD team could not forecast the weather 3 months out… How on earth can people speak about knowing the exact cause of global warming with certainty?

THANKS FOR OPENING AND MAINTAINING THE DEBATE…. I am about out of ideas and leave it to you smart people to find all of the sources of weather variation change through ongoing civil discourse.

Will this accusation of improper paperwork or book-keeping accompanied by a fake graph in the Daily Mail be enough?

Of course it will.

To improve or correct the science is not the goal. That is done by science. You know, doing the work and publishing in a journal scientists read. That sort of thing.

When the global community of scientists, as represented by the Royal Society, National Academy of Sciences, American Association for the Advancement of Sciences, American Physical Society, American Chemical Society — and every other scientific society and institution on the planet — all see the evidence the same way and speak out about it … we have our best view of reality.

Consensus science was settled science, until it wasn’t. The climate warming Chicken Littles of today have had their dire 20+-year-old predictions fail to come to fruition over and over again. Their answer has been to double down like gamblers addicted to their gambling.

Al Gore predicted we would be underwater, then bought an oceanfront California home (hint, hint), while he made millions from his fraud with his paid-off, government-funded scientists (another hint).

Before these warmers, who take advantage of government funds for politically motivated research, there was the coming ice age prediction. That was consensus/settled science that was taught in my grade school. Which is it all you ‘settled science’, ‘consensus science’ people?

The world is flat was settled/consensus science, until it wasn’t. Salt is good for you, no, now it is bad for you, no, now it is good for you settled, consensus science, until it wasn’t.

http://www.eenews.net/stories/1060049630/
Bates accused former colleagues of rushing their research to publication, in defiance of agency protocol. He specified that he did not believe that they manipulated the data upon which the research relied in any way.
“The issue here is not an issue of tampering with data, but rather really of timing of a release of a paper that had not properly disclosed everything it was,” he said.
===========================================================

There it is from Bates himself. No skulduggery.

The “hiatus” was not statistically valid. Never was. Besides, what would such a putative “hiatus” in temperature prove? The trend in temperatures is up. It’s 1C higher now than when we started burning fossil fuels. That’s the evidence.

The effect of the CO2 we have added to the atmosphere will continue even if we stop right now. That’s the science. Science that started with Fourier in 1824.

Ceist, I followed your link to the Guardian. The article claims that scientists who believe in the warming issue are under attack personally and economically. Hey, this is what you folks have been doing for real. Good grief, there are alternate universes after all.

The solution to this problem is an open source temperature reconstruction. Way too much power is concentrated in the hands of a few activists masquerading as scientists. The Hockeystick and its Hide the Decline methods would never be reproducible and/or pass public scrutiny. People hide data and methods from the public because they have something to hide. After Steve McIntyre savaged the Hockeystick, all the climate activists went into hiding their data and methods.

“The Hockeystick and its Hide the Decline methods would never be reproducible and/or pass public scrutiny. People hide data and methods from the public because they have something to hide. After Steve McIntyre savaged the Hockeystick, all the climate activists went into hiding their data and methods.”

If you think this is a science, you don’t know what real science is. Climate science is a joke. The current whistle blower is almost certainly to be followed by more. Most importantly, the data will never support the theory, and the models will only continue to fail. No amount of propaganda will change the basic physics behind this nonsense.

Your comment about reproducibility and public scrutiny is wrong; both the methodology and data behind the paleoclimate work are published. You’re crying conspiracy about things that are out in the open. But I suppose that’s easier than reading the papers and understanding the methodology for yourself.

Most importantly, the data will never support the theory, and the models will only continue to fail. No amount of propaganda will change the basic physics behind this nonsense.

Indeed, the basic physics which says that CO2 is a greenhouse gas, and warmer air can hold more water vapor.

The data is what made and makes scientists accept manmade climate change. Nothing else.

This IS what’s happening. I’ve known it since the first time I had this vision. About the middle of the 1980’s I had a vision that said this. “We are coming into a time of illumination, and you ARE GOING TO see whole civilizations walking around naked. It’s meant to pass genetic strengths down upon the whole human race.” We are meant to absorb this energy while it lasts. Plants and animals are also going to be absorbing this energy. This vision has also returned to me on a recurring basis over the years until this present day to continue to reinforce this impression upon my mindset that I may with authority communicate this to the world. Laugh. But this is what you’re going to see.
Whoever THEY are, THEY’RE intent on deceiving people with this notion that global warming is anthropogenic. Climate change is occurring. Only it’s not man caused. It’s solar. They should define it as solar warming and not global warming. There is evidence that the ice caps on Mars have been receding. Tell me how carbon emissions on earth are causing that. Their “We must stop global warming” agenda is a global big money hoax. Hoaxes are not just local now, they are international and global. Politics is becoming international and global now also. Beware of mass deception and hoaxes like anthropogenic climate change.

“Of much more serious significance, however, is the way this wholesale manipulation of the official temperature record… has become the real elephant in the room of the greatest and most costly scare the world has known,” Booker wrote. “This really does begin to look like one of the greatest scientific scandals of all time.”

After they waste trillions of dollars trying to stop and reverse global warming and then obviously fail, because it is impossible to stop or reverse since it’s being caused by the sun, to save face they will try to blame skeptics, conservatives, republicans, and “the uneducated” as the villains who obstructed their efforts to stop/reverse global warming and save the planet. They already have their lies prepared for after they fail to stop it in spite of all their efforts. After trillions of dollars are squandered, wasted and stolen they will say more should have been done but the skeptics, conservatives, republicans, and “the uneducated” interfered with their efforts. A true witness delivers souls, but a deceitful witness will breathe out lies. That’s all they will do: continue to breathe out more lies.

Jim D will likely appreciate this. Peter Lang, not so much. But here it is: “Trump Secretary of State Rex Tillerson also supports a carbon tax, which he championed when he was chief executive of ExxonMobil.”

Revisiting Karl2015, it is noteworthy that one author is NOAA’s ‘chief homogenizer’ (Menne). This prompts the question why a variation of breakpoint methodology (as in Menne 2009) was not applied gridwise to the ship and buoy series. Not to endorse entirely the way pairwise homogenization is actually implemented (scale issues?). But, admittedly without sorting out the details, it seems a much preferable way of ‘incrementally blending’ two time series than the crude one-off shift used. Trend “benefits” may well be less, of course. But it would be somewhat consistent with other parts of the temperature record, and everyone is interested in accurate, consistent science?

I’m impressed (favorably) by the amount of serious (that is, reasoned) disagreement in the comments on this blog. I haven’t looked at Realclimate blog for some years, but I did today and was a bit shocked that it had only about 80 comments and in skimming, few or none seemed critical or technical. If only to increase its credibility, that blog ought to allow some disagreement and try to attract some opposition commenters.