I missed this article from July, and it deserves wide distribution. Steve McIntyre writes (excerpts):

In 2012, the then much ballyhoo-ed Australian temperature reconstruction of Gergis et al 2012 mysteriously disappeared from Journal of Climate after being criticized at Climate Audit. Now, more than four years later, a successor article has finally been published. Gergis says that the only problem with the original article was a “typo” in a single word. Rather than “taking the easy way out” and simply correcting the “typo”, Gergis instead embarked on a program that ultimately involved nine rounds of revision, 21 individual reviews, two editors and took longer than the American involvement in World War II. However, rather than Gergis et al 2016 being an improvement on or confirmation of Gergis et al 2012, it is one of the most extraordinary examples of data torture (Wagenmakers, 2011, 2012) that any of us will ever witness.

…

The re-appearance of Gergis’ Journal of Climate article was accompanied by an untrue account at Conversation of the withdrawal/retraction of the 2012 version. Gergis’ fantasies and misrepresentations drew fulsome praise from the academics and other commenters at Conversation. Gergis named me personally as having stated in 2012 that there were “fundamental issues” with the article, claims which she (falsely) said were “incorrect” and supposedly initiated a “concerted smear campaign aimed at discrediting [their] science”. Their subsequent difficulty in publishing the article, a process that took over four years, seems to me to be as eloquent a confirmation of my original diagnosis as one could expect.

I’ve drafted up lengthy notes on Gergis’ false statements about the incident, in particular, about false claims by Gergis and Karoly that the original authors had independently discovered the original error “two days” before it was diagnosed at Climate Audit. These claims were disproven several years ago by emails provided in response to an FOI request. Gergis characterized the FOI requests as “an attempt to intimidate scientists and derail our efforts to do our job”, but they arose only because of the implausible claims by Gergis and Karoly to priority over Climate Audit.

Although not made clear in Gergis et al 2016 (to say the least), its screened network turns out to be identical to the Australasian reconstructions in PAGES2K (Nature 2013), while the reconstructions are nearly identical. PAGES2K was published in April 2013 and one cannot help but wonder at why it took more than three years and nine rounds of revision to publish something so similar.

In addition, one of the expectations of the PAGES2K program was that it would identify and expand available proxy data covering the past two millennia. In this respect, Gergis and the AUS2K working group failed miserably. The lack of progress from the AUS2K working group is both astonishing and dismal, a failure unreported in Gergis et al 2016 which purported to “evaluate the Aus2k working group’s regional consolidation of Australasian temperature proxies”.

Detrended and Non-detrended Screening

The following discussion of data torture in Gergis et al 2016 draws on my previous and similar criticism of data torture in PAGES2K.

Responding to then recent scandals in social psychology, Wagenmakers (2011 pdf, 2012 pdf) connected the scandals to academics tuning their analysis to obtain a “desired result”, which he classified as a form of “data torture”:

we discuss an uncomfortable fact that threatens the core of psychology’s academic enterprise: almost without exception, psychologists do not commit themselves to a method of data analysis before they see the actual data. It then becomes tempting to fine tune the analysis to the data in order to obtain a desired result—a procedure that invalidates the interpretation of the common statistical tests. The extent of the fine tuning varies widely across experiments and experimenters but is almost impossible for reviewers and readers to gauge…

Some researchers succumb to this temptation more easily than others, and from presented work it is often completely unclear to what degree the data were tortured to obtain the reported confession.

As I’ll show below, it is hard to contemplate a better example of data torture, as described by Wagenmakers, than Gergis et al 2016.

The controversy over Gergis et al, 2012 arose over ex post screening of data, a wildly popular technique among IPCC climate scientists, but one that I’ve strongly criticized over the years. Jeff Id and Lucia have also written lucidly on the topic (e.g. Lucia here and, in connection with Gergis et al, here). I had raised the issue in my first post on Gergis et al on May 31, 2012. Closely related statistical issues arise in other fields under different terminology e.g. sample selection bias, conditioning on post-treatment variable, endogenous selection bias. The potential bias of ex post screening seems absurdly trivial if one considers the example of a drug trial, but, for some reason, IPCC climate scientists continue to obtusely deny the bias. (As a caveat, objecting to the statistical bias of ex post screening does not entail that opposite results are themselves proven. I am making the narrow statistical point that biased methods should not be used.)
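The bias is easy to demonstrate numerically. The following sketch (my own illustration, not code from any of the papers discussed) screens pure-noise pseudoproxies against a trending calibration target and averages the survivors; the numbers (1000 proxies, a 50-year calibration window, an r > 0.28 cutoff approximating two-sided p < 0.05) are arbitrary choices for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_proxies, n_years = 1000, 200
calib = slice(150, 200)                 # last 50 "years" = instrumental calibration period

proxies = rng.standard_normal((n_proxies, n_years))   # pure noise: no climate signal at all
target = np.linspace(0.0, 1.0, 50)                    # warming trend over the calibration period

# Ex post screening: keep only proxies that correlate "significantly" with the target
corrs = np.array([np.corrcoef(p[calib], target)[0, 1] for p in proxies])
kept = proxies[corrs > 0.28]            # r > 0.28 is roughly two-sided p < 0.05 for n = 50

recon = kept.mean(axis=0)               # the "reconstruction" is the mean of the survivors
# Flat handle before the calibration period, manufactured blade at the end:
print(f"pre-calibration mean {recon[:150].mean():+.2f}, "
      f"late-calibration mean {recon[-25:].mean():+.2f}")
```

Despite the proxies containing no signal whatsoever, the screened average reproduces the calibration trend: a flat handle followed by an upswing, manufactured entirely by the selection step.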

Despite the public obtuseness of climate scientists about the practice, shortly after my original criticism of Gergis et al 2012, Karoly privately recognized the bias associated with ex post screening as follows in an email to Neukom (June 7, 2012; FOI K,58):

If the selection is done on the proxies without detrending ie the full proxy records over the 20th century, then records with strong trends will be selected and that will effectively force a hockey stick result. Then Stephen Mcintyre criticism is valid. I think that it is really important to use detrended proxy data for the selection, and then choose proxies that exceed a threshold for correlations over the calibration period for either interannual variability or decadal variability for detrended data…The criticism that the selection process forces a hockey stick result will be valid if the trend is not excluded in the proxy selection step.

Gergis et al 2012 had purported to avoid this bias by screening on detrended data, even advertising this technique as a method of “avoid[ing] inflating the correlation coefficient”:

For predictor selection, both proxy climate and instrumental data were linearly detrended over the 1921-1990 period to avoid inflating the correlation coefficient due to the presence of the global warming signal present in the observed temperature record. Only records that were significantly (p<0.05) correlated with the detrended instrumental target over the 1921-1990 period were selected for analysis. This process identified 27 temperature-sensitive predictors for the SONDJF warm season.
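In code, the difference between the stated and the actual procedure is small, a couple of lines, which may explain how an error could creep in unnoticed. A minimal sketch of detrended versus undetrended screening correlations (my own illustration; the series, coefficients and noise levels are invented):

```python
import numpy as np

def detrend(x):
    """Remove the least-squares linear trend from a series."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

def screening_corr(proxy, target, detrended=True):
    """Screening correlation over the calibration window, with or without detrending."""
    if detrended:
        proxy, target = detrend(proxy), detrend(target)
    return np.corrcoef(proxy, target)[0, 1]

rng = np.random.default_rng(1)
years = np.arange(1921, 1991)          # the 1921-1990 calibration period of Gergis et al 2012
trend = 0.02 * (years - 1921)          # shared warming trend, invented magnitude
target = trend + 0.3 * rng.standard_normal(len(years))
proxy = trend + 0.3 * rng.standard_normal(len(years))   # shares ONLY the trend with the target

r_raw = screening_corr(proxy, target, detrended=False)
r_det = screening_corr(proxy, target, detrended=True)
print(f"undetrended r = {r_raw:.2f}, detrended r = {r_det:.2f}")
```

A proxy whose only relationship to temperature is a shared trend sails through undetrended screening but fails the detrended test; hence the importance of which calculation was actually performed.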

As is now well known, they didn’t actually perform the claimed calculation. Instead, they calculated correlation coefficients on undetrended data. This error was first reported by CA commenter Jean S on June 5, 2012 (here). Two hours later (nearly 2 a.m. Swiss time), Gergis coauthor Raphi Neukom notified Gergis and Karoly of the error (FOI 2G, page 77). Although Karoly later (falsely) claimed that his coauthors were unaware of the Climate Audit thread, emails obtained through FOI show that Gergis had sent an email to her coauthors (FOI 2G, page 17) drawing attention to the CA thread, that Karoly himself had written to Myles Allen (FOI 2K, page 11) about comments attributed to him on the thread (linking to the thread), and that Climate Audit and/or myself are mentioned in multiple other contemporaneous emails (FOI 2G).

When correlation coefficients were re-calculated according to the stated method, only a handful actually passed screening, a point reported at Climate Audit by Jean S on June 5 and written up by me as a head post on June 6. According to my calculations, only six of the 27 proxies in the G12 network passed detrended screening. On June 8 (FOI 2G, page 112), Neukom reported to Karoly and Gergis that eight proxies passed detrended screening (with the difference between his results and mine perhaps due to drawing from the prescreened network or to difference in algorithm) and sent them a figure (not presently available) comparing the reported reconstruction with the reconstruction using the stated method:

Dashed reconstruction below is using only the 8 proxies that pass detrended screening. solid is our original one.

This figure was unfortunately not included in the FOI response. It would be extremely interesting to see.

As more people online became aware of the error, senior author Karoly decided that they needed to notify Journal of Climate. Gergis notified the journal of a “data processing error” on June 8, and the following day their editor, John Chiang, rescinded acceptance of the paper, stating his understanding that they would redo the analysis to conform with their described methodology:

After consulting with the Chief Editor regarding your situation, my decision is to rescind the acceptance of your manuscript for publication. My understanding is that you will be redoing your analysis to conform to your original description of the predictor selection, in which case you may arrive at a different conclusion from your original manuscript. Given this, I request that you withdraw the manuscript from consideration.

Contrary to her recent story at Conversation, Gergis tried to avoid redoing the analysis; instead, she tried to persuade the editor that the error was purely semantic (“error in words”) rather than a programming error, invoking support for undetrended screening from Michael Mann, who was egging Gergis on behind the scenes:

Just to clarify, there was an error in the words describing the proxy selection method and not flaws in the entire analysis as suggested by amateur climate skeptic bloggers… People have argued that detrending proxy records when reconstructing temperature is in fact undesirable (see two papers attached provided courtesy of Professor Michael Mann).

The Journal of Climate editors were unpersuaded and pointedly asked Gergis to explain the difference between the first email in which the error was described as a programming error and the second email describing the error as semantic:

Your latest email to John characterizes the error in your manuscript as one of wording. But this differs from the characterization you made in the email you sent reporting the error. In that email (dated June 7) you described it as “an unfortunate data processing error,” suggesting that you had intended to detrend the data. That would mean that the issue was not with the wording but rather with the execution of the intended methodology. Would you please explain why your two emails give different impressions of the nature of the error?

Gergis tried to deflect the question. She continued to try to persuade the Journal of Climate to acquiesce in her changing the description of the methodology, as opposed to redoing the analysis with the described methodology, offering only to describe the differences in a short note in the Supplementary Information:

The message sent on 8 June was a quick response when we realised there was an inconsistency between the proxy selection method described in the paper and actually used. The email was sent in haste as we wanted to alert you to the issue immediately given the paper was being prepared for typesetting. Now that we have had more time to extensively liaise with colleagues and review the existing research literature on the topic, there are reasons why detrending prior to proxy selection may not be appropriate. The differences between the two methods will be described in the supplementary material, as outlined in my email dated 14 June. As such, the changes in the manuscript are likely to be small, with details of the alternative proxy selection method outlined in the supplementary material.

The Journal of Climate editor resisted, but reluctantly gave Gergis a short window of time (to end July 2012) to revise the article, but required that she directly address the sensitivity of the reconstruction to proxy selection method and “demonstrate the robustness” of her conclusions:

In the revision, I strongly recommend that the issue regarding the sensitivity of the climate reconstruction to the choice of proxy selection method (detrend or no detrend) be addressed. My understanding that this is what you plan to do, and this is a good opportunity to demonstrate the robustness of your conclusions.

Chiang’s offer was very generous under the circumstances. Gergis grasped at this opportunity and promised to revert by July 27 with a revised article showing the influence of this decision on resultant reconstructions:

Our team would be very pleased to submit a revised manuscript on or before the 27 July 2012 for reconsideration by the reviewers. As you have recommended below, we will extensively address proxy selection based on detrended and non detrended data and the influence on the resultant reconstructions.

…

Torturing and Waterboarding the Data

In the second half of 2012, Gergis and coauthors embarked on a remarkable program of data torture in order to salvage a network of approximately 27 proxies, while still supposedly using “detrended” screening. Their eventual technique for ex post screening bore no resemblance to the simplistic screening of (say) Mann and Jones, 2003.

One of their key data torture techniques was to compare proxy data correlations not simply to temperatures in the same year, but to temperatures of the preceding year and following year.

To account for proxies with seasonal definitions other than the target SONDJF season (e.g., calendar year averages), the comparisons were performed using lags of -1, 0, and +1 years for each proxy (Appendix A).

This mainly impacted tree ring proxies. In their practice, a lag of -1 year meant that a tree ring series is assigned one year earlier than the chronology (+1 is assigned one year later.) For a series with a lag of -1 year (e.g. Celery Top East), ring width in the summer of (say) 1989-90 is said to correlate with summer temperatures of the previous year. There is precedent for correlation to previous year temperatures in specialist studies. For example, Brookhouse et al (2008) (abstract here) says that the Baw Baw tree ring data (a Gergis proxy), correlates positively with spring temperatures from the preceding year. In this case, however, Gergis assigned zero lag to this series, as well as a negative orientation.

The lag of +1 years assigned to 5 sites is very hard to interpret in physical terms. Such a lag requires that (for example) Mangawhera ring widths assigned to the summer of 1989-1990 correlate to temperatures of the following summer (1990-1991) – ring widths in effect acting as a predictor of next year’s temperature. Gergis’ supposed justification in the text was nothing more than armwaving, but the referees do not seem to have cared.

Of the 19 tree ring series in the 51-series G16 network, an (unphysical) +1 lag was assigned to five series, a -1 lag to two series and a 0 lag to seven series, with five series being screened out. Of the seven series with 0 lag, two had inverse orientation in the PAGES2K. In detail, there is little consistency for trees and sites of the same species. For example, New Zealand LIBI composite-1 had a +1 lag, while New Zealand LIBI composite-2 had 0 lag. Another LIBI series (Urewara) is assigned an inverse orientation in the (identical) PAGES2K and thus presumably in the CPS version of G16. Two LIBI series (Takapari and Flanagan’s Hut) are screened out in G16, though Takapari was included in G12. Because the assignment of lags is nothing more than an ad hoc after-the-fact attempt to rescue the network, it is impossible to assign meaning to the results.

In addition, Gergis also borrowed from and expanded a data torture technique pioneered in Mann et al 2008. Mann et al 2008 had been dissatisfied with the number of proxies passing a screening test based on correlation to local gridcell, a commonly used criterion (e.g. Mann and Jones 2003). So Mann instead compared results to the two “nearest” gridcells, picking the highest of the two correlations but without modifying the significance test to reflect the “pick two” procedure. (See here for a contemporary discussion.) Instead of comparing only to the two nearest gridcells, Gergis expanded the comparison to all gridcells “within 500 km of the proxy’s location”, a technique which permitted comparisons to 2-6 gridcells depending both on the latitude and the closeness of the proxy to the edge of its gridcell:

As detailed in appendix A, only records that were significantly (p < 0.05) correlated with temperature variations in at least one grid cell within 500 km of the proxy’s location over the 1931-90 period were selected for further analysis.

As described in the article, both factors were crossed in the G16 comparisons. Multiplying three lags by 2-6 gridcells, Gergis appears to have made 6-18 detrended comparisons, retaining those proxies for which there was a “statistically significant” correlation. It doesn’t appear that any allowance was made in the benchmark for the multiplicity of tests. In any event, using this “detrended” comparison, they managed to arrive at a network of 28 proxies, one more than the network of Gergis et al 2012. Most of the longer proxies are the same in both networks, with a shuffling of about seven shorter proxies. No ice core data is included in the revised network and only one short speleothem. It consists almost entirely of tree ring and coral data.
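The effect of unadjusted multiple comparisons is easy to quantify. In the sketch below (my own illustration, with invented dimensions), a pure-noise proxy is tested at a nominal p < 0.05 against 18 comparisons (three lags by six gridcells, the maximum case). Treating the comparisons as independent overstates the inflation somewhat, since neighbouring gridcells and lagged series are correlated, but the order of magnitude is the point:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_years, n_tests = 2000, 60, 18   # 18 = 3 lags x 6 gridcells (the maximum case)

# Two-sided p < 0.05 critical correlation for n = 60 (t ~= 2.0 on 58 df)
r_crit = 2.0 / np.sqrt(2.0**2 + 58)

passes = 0
for _ in range(n_trials):
    proxy = rng.standard_normal(n_years)
    # 18 independent noise "temperature" series standing in for the gridcell/lag combinations
    targets = rng.standard_normal((n_tests, n_years))
    r = np.abs([np.corrcoef(proxy, t)[0, 1] for t in targets])
    passes += r.max() > r_crit

print(f"pure-noise pass rate with pick-the-best screening: {passes / n_trials:.2f}")
```

Instead of the advertised 5% false-positive rate, roughly 1 − 0.95^18 ≈ 60% of pure-noise proxies pass at least one of the 18 tests under these (independence) assumptions.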

Obviously, Gergis et al’s original data analysis plan did not include a baroque screening procedure. It is evident that they concocted this bizarre screening procedure in order to populate the screened population with a similar number of proxies to Gergis et al 2012 (28 versus 27) and to obtain a reconstruction that looked like the original reconstruction, rather than the divergent version that they did not report. Who knows how many permutations and combinations and iterations were tested, before eventually settling on the final screening technique.

It is impossible to contemplate a clearer example of “data torture”, even compared with Mann et al 2008.

Nor does this fully exhaust the elements of data torture in the study, as torture techniques previously used in Gergis et al 2012 were carried forward to Gergis et al 2016. Using original and (still) mostly unarchived measurement data, Gergis et al 2012 had re-calculated all tree ring chronologies, except two, using an opaque method developed by the University of East Anglia. The two exceptions were the two long tree ring chronologies reaching back to the medieval period:

All tree ring chronologies were developed based on raw measurements using the signal-free detrending method (Melvin et al., 2007; Melvin and Briffa, 2008) …The only exceptions to this signal-free tree ring detrending method was the New Zealand Silver Pine tree ring composite (Oroko Swamp and Ahaura), which contains logging disturbance after 1957 (D’Arrigo et al., 1998; Cook et al., 2002a; Cook et al., 2006) and the Mount Read Huon Pine chronology from Tasmania which is a complex assemblage of material derived from living trees and sub-fossil material. For consistency with published results, we use the final temperature reconstructions provided by the original authors that includes disturbance-corrected data for the Silver Pine record and Regional Curve Standardisation for the complex age structure of the wood used to develop the Mount Read temperature reconstruction (E. Cook, personal communication, Cook et al., 2006).

This raises the obvious question why “consistency with published results” is an overriding concern for Mt Read and Oroko, but not for the other series, which also have published results. For example, Allen et al (2001), the reference for Celery Top East, shows the chronology at left for Blue Tier, while Gergis et al 2016 used the chronology at right for a combination of Blue Tier and a nearby site. Using East Anglia techniques, the chronology showed a sharp increase in the 20th century and “consistency” with the results shown in Allen et al (2001) was not a concern of the authors. One presumes that Gergis et al had done similar calculations for Mount Read and Oroko, but had decided not to use them. One can hardly avoid wondering whether the discarded calculations didn’t emphasize the desired story.

Nor is this the only ad hoc selection involving these two important proxies. Gergis et al said that their proxy inventory was a 62-series subset taken from the inventory of Neukom and Gergis, 2011. (I have been unable to exactly reconcile this number and no list of 62 series is given in Gergis et al 2016.) They then excluded records that “were still in development at the time of the analysis” (though elsewhere they say that the dataset was frozen as of July 2011 due to the “complexity of the extensive multivariate analysis”) or “with an issue identified in the literature or through personal communication”:

Of the resulting 62 records we also exclude records that were still in development at the time of the analysis … and records with an issue identified in the literature or through personal communication

However, this criterion was applied inconsistently. Gergis et al acknowledge that the Oroko site was impacted by “logging disturbance after 1957” – a clear example of an “issue identified in the literature” but used the data nonetheless. In some popular Oroko versions (see CA discussion here), proxy data after 1957 was even replaced by instrumental data. Gergis et al 2016 added a discussion of this problem, arm-waving that the splicing of instrumental data into the proxy record didn’t matter:

Note that the instrumental data used to replace the disturbance-affected period from 1957 in the silver pine [Oroko] tree-ring record may have influenced proxy screening and calibration procedures for this record. However, given that our reconstructions show skill in the early verification interval, which is outside the disturbed period, and our uncertainty estimates include proxy resampling (detailed below), we argue that this irregularity in the silver pine record does not bias our conclusions.

There’s a sort of blind man’s buff in Gergis’ analysis here, since it looks to me like G16 may have used an Oroko version which did not splice instrumental data. However, because no measurement data has ever been archived for Oroko and a key version only became available through inclusion in a Climategate email, it’s hard to sort out such details.

…

Conclusions

Gergis has received much credulous praise from academics at Conversation, but none of them appear to have taken the trouble to actually evaluate the article before praising it. Rather than the 2016 version being a confirmation of or improvement on the 2012 article, it constitutes as clear an example of data torture as one could ever wish. We know Gergis’ ex ante data analysis plan, because it was clearly stated in Gergis et al 2012. Unfortunately, they made a mistake in their computer script and were unable to replicate their results using the screening methodology described in Gergis et al 2012.

…

One wonders whether the editors and reviewers of Journal of Climate fully understood the extreme data torture that they were asked to approve. Clearly, there seems to have been some resistance from editors and reviewers – otherwise there would not have been nine rounds of revision and 21 reviews. Since the various rounds of review did not change the network one iota from the network used in the PAGES2K reconstruction (April 2013), one can only assume that Gergis et al eventually wore out a reluctant Journal of Climate, which, after four years of submission and re-submission, finally acquiesced.

As noted above, Wagenmakers defined data torture as succumbing to the temptation to “fine tune the analysis to the data in order to obtain a desired result” and diagnosed the phenomenon as being particularly likely when the authors had not “commit themselves to a method of data analysis before they see the actual data”. In this case, Gergis et al had, ironically, committed themselves to a method of data analysis not just privately, but in the text of an accepted article, but they obviously didn’t like the results.

One of the underlying mysteries of Gergis-style analysis is how one of two seemingly equivalent proxies can be “significant” while the other isn’t. Unfortunately, these fundamental issues are never addressed in the “peer reviewed literature”.

This comment remains as valid today as it was in 2012.

In her Conversation article, Gergis claimed that her “team” discovered the errors in Gergis et al 2012 independently of and “two days” before the errors were reported at Climate Audit. These claims are untrue. They did not discover the errors “independently” of Climate Audit or before Climate Audit. I will review their appropriation of credit in a separate post.

It is Mannian splicing and attribution denial all over again, just to obtain a “hockey stick”.

It is gobsmacking that this sort of thing continues, and I salute Steve McIntyre for his patience and for the microscopic detail he had to wade through to sort out this complete failure of peer review – twice. The Journal of Climate should retract Gergis 2016.

Update: For those that wish to contact Joelle Gergis, and/or the Journal of Climate about this paper, here are the respective web pages. If you do, please be factual and respectful; nothing is accomplished by boorish behavior.

Read McIntyre’s post completely. She tried that ‘typo’ approach for almost a year; the journal editors didn’t buy it. The reason is that the ‘typo’ modification would have meant ex post selection, a different no-no.

It appears that the typo was, “I came erroneously to the correct conclusion.” Having admitted to the ‘erroneously’ part she doesn’t want to admit to the ‘incorrect’ part. Interesting self-snooker.
Oh what a tangled web we weave
If from the start we would deceive.
From tortured data, thin and thick
A manufactured hockey stick
Appears from proxies rent and torn
And generates deserved scorn.

When I was a Ph.D. student at ANU/Canberra, one fine morning in 1982 my professor from CSIRO, Dr. Henry Nix, along with Dr. R. L. McCowen from Townsville [?], came to my house [I lived close to the PM’s residence]. McCowen asked whether I could present a paper on the agroclimate of tropical Australia at a symposium titled “Agro-Research for the Semi-Arid Tropics: North-West Australia”. I expressed my willingness but raised the question of data. McCowen clarified this. He sent me daily rainfall data for 82 stations online. I analysed the data for agro-climate using my water balance model [Agric. Meteorol., 28:1-17] and my agro-climatic classification model [Agric. Meteorol., 30:185-200]. The detailed results I presented in my Ph.D. thesis [with the ANU Library]. The data helped me to complete my thesis in 11 months, with the university’s special permission for a waiver of the three-year residency requirement, and I took up a job in Brazil. The university later submitted my thesis for review, and I got my Ph.D. in 1985.
Here, I haven’t done any corrections to the data series. I simply converted the daily data into weekly data for individual years and used it in the agro-climatic classification model, and the daily data in the water balance model. The results clearly showed the basic problems for agro-research in this zone in terms of rainfall over different parts.
Dr. S. Jeevananda Reddy

Gergis is a highly qualified professional. The peer reviewers were denialists.
/sarc
What I truly wonder is how Steven McIntyre can keep himself so calm and to-the-point in the middle of all these rogue scientists.

I love it – psychic tree ring proxies. Growth affected by next year’s temperature 🙂… Mangawhera ring widths assigned to the summer of 1989-1990 correlate to temperatures of the following summer (1990-1991) – ring widths in effect acting as a predictor of next year’s temperature. …

The above research shows nothing new that a reasonable person wouldn’t have known.
As far as I (you might think not a very reasonable person) can understand these matters, in the temperate regions I would assume that in the tree growing seasons the soil moisture/rainfall, sunshine exposure/hours, air temperature and humidity are probably equally important positive contributors (within a few percentage points), not to ignore the negatives like disease, bug infestations and forest fires (- & +).
In my book a tree ring proxy is only a good proxy for the tree growing conditions; it is in the same category as the 10Be nucleation proxy, which is only a good proxy for 10Be nucleation, followed by a few other multi-factor proxies that are almost as useless.

Holy Crap, Batman. I am shocked, just shocked, that;
“With ongoing public concern regarding climate change and recent drought that has affected many areas of the western United States, this study provides context and direct evidence for the negative impact of water stress on forest ecosystems. The response of trees to drought is a tangible example of the impacts of climate change on terrestrial ecosystems and is understandable by the public and a broad scientific audience.”
Imagine my surprise when I found out that no water during a drought can cause a tree to have reduced growth. I hope they aren’t claiming this as original research. I am betting that someone else figured this out a long time ago.
And this is science?

On the bright side, Gergis ’12 was forced into revision by a journal after its considerable flaw(s) became public knowledge. I’ll be impressed when clearly faulty climate science papers are rejected/retracted as a matter of course, rather than rationalized and defended, as is the current norm.

FWIW (not much), silver bullets are for werewolves, not vampires. I think Bram Stoker’s rules for killing a vampire are limited to (a) a wooden stake, (b) decapitation, and (c) a shot from a “sacred” bullet.

In certain engineering disciplines, the heat transfer correlations for heated elements are derived from a series of experiments with slightly different initial and boundary conditions (heat flux, fluid temperature and pressure, etc.). Many different experiments are done to cover the expected range of applicability of the correlation (on the order of at least 1000). After the data is QA’d (before the correlation is developed) to make sure that there are no gaps or obvious extraneous outliers (black swans), the data is randomly split into two bins.
One bin is locked away. The other bin is used to develop the correlation. When the correlation is determined to be ready for a test, the locked bin is opened, and the unused data is compared to the correlation. If there is good agreement (i.e., it meets the statistical tests defined before the experiments are performed), the error bars are established, and the correlation can be used in production. If not, then the entire process is redone, from the beginning. It is expensive and time-consuming, but it provides a correlation that people can trust.
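The commenter's procedure is essentially split-sample (holdout) validation with a pre-declared acceptance criterion. A minimal sketch, with an invented linear "correlation" and an invented error budget standing in for the real heat-transfer experiments:

```python
import numpy as np

def split_sample_validate(x, y, fit, rmse_budget, seed=0):
    """Hold out half the data before fitting; accept the correlation only if
    it predicts the untouched holdout within a pre-declared error budget."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    dev, hold = idx[: len(x) // 2], idx[len(x) // 2:]   # lock one bin away

    model = fit(x[dev], y[dev])                          # develop on one bin only
    resid = y[hold] - model(x[hold])                     # then open the locked bin
    rmse = np.sqrt(np.mean(resid**2))
    return model, rmse, rmse <= rmse_budget

# Hypothetical data: a linear correlation with measurement noise.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 1000)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, 1000)

def linear_fit(xd, yd):
    slope, intercept = np.polyfit(xd, yd, 1)
    return lambda xs: slope * xs + intercept

model, rmse, accepted = split_sample_validate(x, y, linear_fit, rmse_budget=0.6)
print(f"holdout RMSE = {rmse:.2f}, accepted = {accepted}")
```

The key discipline is that the acceptance test and its threshold are fixed before the holdout bin is opened, which is precisely the commitment Wagenmakers says is missing from tortured analyses.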
The data torture described in this article seems like the worst sort of manipulation imaginable. How can these people live with themselves?

In a follow-up post, Steve writes
“I’m going to examine Gergis’ dubious screening out of the Law Dome d18O series, …”
Law Dome weakens Gergis’ story line, and there is little doubt that was the reason the series was not included.
https://climateaudit.org/2016/08/03/gergis-and-law-dome/

Yup. Given that Steve pointed out it has a better t-stat than 24 of Gergis’s 28 retained proxies, there is just no reason Gergis should have applied a different method only to the Law Dome d18O series, other than to exclude it.

Bimbo – noun, plural bimbos, bimboes. Slang.
1. a foolish, stupid, or inept person.
2. a man or fellow, often a disreputable or contemptible one.
3. an attractive but stupid young woman, especially one with loose morals.
Source http://www.dictionary.com/browse/bimbo

Iffen Joelle Gergis doesn’t like being referred to as a ‘Bimbo’ then she should post-haste “cease and desist” from publicly touting her foolish, stupid, idiotic and/or inept ideas, claims and beliefs as being scientific fact.
If the scientific community ignores Gergis’s intentional acts of foolishness, stupidity, idiocy and/or ineptness ….. then there is no reason for her to change her dastardly and scientifically dishonest ways.
If the Bimbo “shoe” fits, …… then she should wear it without complaining.

From the point of view of the normal scientist, it’s easy to understand that when you see the epistemological warning light flashing, indicating a problem within institutional science, this is going to provoke a certain mental disturbance. The two main blogs confronting each other at the moment of my conversion were RealClimate, manned by a team of climate scientists, representing the voice of official science as it were, and Climate Audit, run on a shoestring by the Canadian Steve McIntyre, who had no particular scientific status. And yet his site was far more precise, detailed, and factual. Climate Audit displayed scientific rigour; RealClimate offered personal attacks. In this case, it was the amateur who was following the path of true science.

I took it up with ATTP a while back. Rather than discussing the merits of both the paper and the audit, the clown talks rubbish, because he sadly has no clue what he is talking about, and has been running around calling people deniers everywhere.
I don’t cast the title of “Brain-dead activist” around lightly. He earns it in this case, and his “followers”, my god. I am no genius, but when I read the comments, I thank my lucky stars for what cognitive abilities I have.

Many scientists disagreed with Einstein. They didn’t use a bad picture of him to demean him and call him a ‘backroom weirdo who looks like [???]’. It’s sexist. The same thing happens every time Oreskes is the subject. Being homely is not the issue … the issue is bad science and bad journalism.

@ teapartygeezer
Well “DUH”, Einstein didn’t look, act or dress like a “bimbo” during his early years when many scientists disagreed with him; therefore there were no “bimbo”-equivalent pictures of him for use in their demeaning critiques.
Einstein was a fashionable “dresser” for most of his career. To wit: https://plus.maths.org/issue37/features/Einstein/Einstein_young.jpg

Comments like yours detract from the great work Steve McIntyre does. Warmists will concentrate on comments like yours rather than address the statistical issues. Your comment does not add to the debate.

Well, why don’t you tell us exactly what you know about this person and their character if you aren’t just cheaply slagging them off on the basis of their appearance?
[opinions differ. Thank you for responding here. .mod]
So mod, you’re ok with this type of behavior? What’s the issue of opinion here? Are such personal attacks based on appearance acceptable here or not?

Mr. Schaeffer, when you get your own blog, you can then engage in the difficult hourly decisions on moderation. Lately you’ve been trying to project your views on how things should be run here… and quite frankly, we have a successful policy that has handled millions of comments and almost a quarter billion views. Is it perfect? No. Is the person’s description petty? Probably, but we can’t catch everything. But if you can do better, by all means go do it.
In the meantime please refrain from the “concern troll” antics.

Let’s see your picture. We will grade the veracity of your astute “science acumen is based on how you look” intuition using just… your driver’s license pic. Passport. High school yearbook. Costco ID. Come on. Chicken?

Mosher agrees
“Ah no. The problem is it authorizes the use of several suspect practices.
A) Comparing a proxy to multiple temperature fields to find a correlation WITHOUT correcting for this in the uncertainties. It’s pretty bad with 5-degree bins out to 500 km; it would be hilarious with 1-degree bins out to 500 km.
B) Using “leads”. Physically that means the proxy predicts temperature. Unphysical is bad.
C) Screwing up your calibration and verification time periods. Basic stuff.”
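Point (A), screening one proxy against many grid cells at p < 0.05 without a multiplicity correction, can be illustrated with a small Monte Carlo. The cell count and the white-noise series below are my assumptions for the sketch, not figures from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# A pure-noise "proxy" is screened against many grid cells (here: independent
# white-noise temperature series). Per-cell test: two-sided p < 0.05, echoing
# the paper's screening rule. 20 cells is an illustrative count only.
n_years, n_cells, n_trials = 60, 20, 1000
passes = 0
for _ in range(n_trials):
    proxy = rng.normal(size=n_years)
    cells = rng.normal(size=(n_cells, n_years))
    pvals = [stats.pearsonr(proxy, c)[1] for c in cells]
    if min(pvals) < 0.05:          # proxy "passes" if ANY cell correlates
        passes += 1

print(f"false-positive screening rate: {passes / n_trials:.2f}")
print("naive expectation (1 cell):    0.05")
print(f"analytic bound, 20 cells:      {1 - 0.95**n_cells:.2f}")
```

With 20 independent looks, a meaningless proxy clears the 5% screen far more often than 5% of the time (roughly 1 − 0.95²⁰ ≈ 64% here), which is why uncorrected multi-cell screening is a suspect practice.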
–
Brandon points out an extra major concern: the new paper references a different data set (1931 on), but he found the data used was from 1920 on (the original paper’s data set). This alone, if true, would demand withdrawal of the paper.
“New paper
“For predictor selection, both proxy climate and instrumental data were linearly detrended over 1931–90. As detailed in appendix A, only records that were significantly (p < 0.05) correlated with temperature variations in at least one grid cell within 500 km of the proxy’s location over the 1931–90 period were selected for further analysis.”
–
Old paper
"The lead author, Professor David Karoly, told The Conversation: “The actual method … included the long-term trend in the temperatures over the period from 1920 to 1990."
–
The problem is obvious.
Two different data periods are claimed, but both actually calculate the trends over the same single original base period.
–
Steve McIntyre makes the most salient point:
Data was used with the ability to predict the future.
Gergis has shown how to predict the temperature for the next millennium.
By fortune telling.
Perhaps she should be called the fortune teller.

If you have time, write the following computer program.

In step one, use a pseudo-random number generator to make a time series of temperatures, let’s say a proxy, for a period of your choice, say 1200–1950 AD. Print the result to a file and repeat the procedure 10,000 times for slightly different periods. With our present computers this will only take a few minutes.

In step two, take the time series of surface temperatures from 1850–2016 (it does not matter that the latter part is problematic). For the period where you have numbers for both a proxy and the surface record, compute the correlation between the two. In the end you have 10,000 correlations, some with missing values where there is no time overlap of more than one year. Without proof, I can tell you that the mean correlation will be zero, but there will be a certain spread around that number.

In step three, have the computer select all proxies with a valid correlation above .20. To get more candidates, you may also compute correlations with a time lag of plus or minus one year. This means that proxies sometimes have psychic abilities. The end result is a small collection of the “right” proxies.

In step four, compute the mean temperature time series of those proxies. Without proof, I can tell you that the proxies’ time series will show a remarkable resemblance to the surface record. If the latter is a hockey stick, the former will be too. That is to say, for the period 1850–2016. Without proof, I can tell you that the proxy time series before 1850 will be a curious random walk, to be taken at face value. The end result can be presented to a climate science journal.

For the statisticians: the smaller the period of overlap between proxies and surface record, the higher the spread around zero of the correlations. This can be checked by taking two criteria, say a correlation between 0–.20 and a correlation of .21–1.00. The latter group will mostly comprise proxies with a small time overlap with the surface record, so the resemblance with the surface record will depend on the criterion. For very high correlations the final result may be disappointing.
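A minimal sketch of the program described above, in Python. The toy hockey-stick target, the 10,000 proxies, and the 0.20 threshold follow the comment; the lag-screening refinement and varying proxy periods are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Instrumental" record 1850-2016: a toy hockey stick (flat, then rising).
years = np.arange(1850, 2017)
target = np.where(years < 1950, 0.0, (years - 1950) * 0.01)
n_obs = len(target)

# Steps one and two: 10,000 pure-noise proxies covering 1200-2016;
# correlate each with the instrumental period.
n_proxies = 10_000
proxy_years = np.arange(1200, 2017)
proxies = rng.normal(size=(n_proxies, len(proxy_years)))
overlap = slice(len(proxy_years) - n_obs, len(proxy_years))
r = np.array([np.corrcoef(p[overlap], target)[0, 1] for p in proxies])
print(f"mean correlation of random proxies: {r.mean():+.3f}")

# Step three: screen, keeping only proxies that happen to correlate > 0.20.
kept = proxies[r > 0.20]
print(f"proxies passing screening: {len(kept)} of {n_proxies}")

# Step four: the mean of the screened noise tracks the target after 1850...
recon = kept.mean(axis=0)
r_post = np.corrcoef(recon[overlap], target)[0, 1]
print(f"reconstruction vs target, 1850-2016: r = {r_post:+.2f}")

# ...but is just attenuated noise before 1850.
pre = slice(0, len(proxy_years) - n_obs)
print(f"pre-1850 std of reconstruction: {recon[pre].std():.3f}")
```

The screened average of pure noise correlates strongly with the target over the calibration period and is flat noise before it, which is the commenter’s point: screening on correlation manufactures a hockey stick from nothing.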

Was the methodology in the new paper better or worse than in the old paper?
–
It should always be better in a new paper, but no, it was apparently worse.
The problems were in the consistency of application of predictive techniques.
In looking for data that matched known temperature patterns, so they could use that data as a proxy record, they matched some records to the corresponding year, some to the previous year and some to the following year.
When they found a match they used it.
Hence 3 different ways proxies were selected, then combined.
Which is not a good way to select data.
You have three different ad hoc lag choices of proxy mixed together.
In Australia that is called a “Captain’s pick” in a derogatory way.
When skeptics do it it is called cherry picking.
–
Included is the second problem: choosing a match with data one year into the future makes the data predict the future.
If the concept of a tree ring looking a year ahead at the temperature and deciding how much it should grow this year is a valid scientific concept, I confess to being a little skeptical.
–
It brings in that concept of wanting to find a correlation so hard that you will accept an idea that is unphysical so long as it fits.
Like the stock market going up 50% in those years that the Green Bay Packers win the championship. (Not a real example.)
–
New paper methodology states
“For predictor selection, both proxy climate and instrumental data were linearly detrended over 1931–90. As detailed in appendix A, only records that were significantly (p < 0.05) correlated with temperature variations in at least one grid cell within 500 km of the proxy’s location over the 1931–90 period were selected for further analysis.”
–
The problem is they list 68 degrees of freedom, which means they actually used 1920–1990 detrending but claimed it was 1931–1990.
The methodology did what it said, but they have labelled the dates wrongly.
This can be fixed by correcting the dates, but then it will throw the trends out.
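The degrees-of-freedom arithmetic can be checked directly. The n − 3 convention below (n − 2 for the correlation, one more for the removed linear trend) is my assumption about how the count was made, not something stated in the paper:

```python
# Back-of-envelope check of the 68-degrees-of-freedom argument.
# Assumption: for a correlation of two linearly detrended annual series
# over n years, dof = n - 3 (n - 2 for the correlation, minus one more
# for the removed trend).
def dof(first_year, last_year):
    n = last_year - first_year + 1      # inclusive year count
    return n - 3

print("1931-1990:", dof(1931, 1990), "dof")   # 57: the period the paper states
print("1920-1990:", dof(1920, 1990), "dof")   # 68: the period implied by 68 dof
```

Under this convention the stated 1931–1990 window gives 57 degrees of freedom, while the reported 68 matches a 1920–1990 window, consistent with the commenter’s claim.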

Here we go again with global temps. I am guessing that the past 800,000-year reconstruction, were it a weather report, would demonstrate a bi-polar seesaw (hot here, cold there) climate pattern change that reveals distinctive signals at the leading edge of each interstadial warm peak and at the leading edge of each stadial cold trough, and that when averaged out would tell us nothing. It is the climate “pattern” change that speaks, not the global average. Meteorologists get this. Nearly all paleo and nuevo self-proclaimed climate scientists apparently lack the gene to get this.

You don’t need to be a botanist or a forester, or indeed anything other than a person with a brain and a smattering of common sense to figure out the following:
Tree ring widths, clearly a direct measure of a tree’s growth at yearly intervals, are going to be correlated with the following:
1. Rainfall (and not just total rainfall but the total distribution of rainfall, day by day, over the year).
2. Temperature (and not just the yearly average but a detailed day-by-day temperature record of maxima and minima).
3. Sunlight. A fundamental requirement for photosynthesis and (common sense would suggest) strongly affected by clouds. Again, not just average cloud cover, but a day-by-day detailed cloud-cover record would be required to determine how it affected the trees.
4. Soil nutrients. How these change over time is probably a complex function of temperature and rainfall, affecting weathering of underlying rocks, release of mineral constituents, and their transport up the soil column.
5. Forest fires. If a tree survives a fire, it will likely benefit by elimination of other trees competing for the sunlight, release of mineral nutrients from ash, and probably other things I can’t think of.
6. Insect infestations, especially those that defoliate
7. All the other stuff I can’t think of, that foresters would know about.
So how in the heck can anyone expect to extract AVERAGE temperature data from tree rings without first eliminating all the other variables that trees are responsive to? You can’t do it. It’s as close to impossible as you can get in this world. It’s a vain hope. It’s a farce. It’s a TREE RING CIRCUS
[And atmospheric CO2 levels. .mod]
