The Steig Corrigendum

Plagiarism is the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit.

Here is a discussion of the topic from Penn State, where Michael Mann of Steig et al has an appointment.

In an entirely unrelated development, Steig et al have issued a corrigendum in which they reproduce (without attribution) results previously reported at Climate Audit by Hu McCulloch (and drawn to Steig’s attention by email) – see comments below and Hu McCulloch’s post here.

They also make an incomplete report of problems with the Harry station – reporting the incorrect location in their Supplementary Information, but failing to report that the “Harry” data used in Steig et al was a bizarre splice of totally unrelated stations (see When Harry Met Gill). The identification of this problem was of course previously credited by the British Antarctic Survey to Gavin the Mystery Man.

124 Comments

In this Letter, we reported trends on reconstructed temperature histories for different areas of the Antarctic continent. The confidence levels on the trends, as given in the text, did not take into account the reduced degrees of freedom in the time series due to autocorrelation. We report in Table 1 the corrected values, based on a two-tailed t-test, with the number of degrees of freedom adjusted for autocorrelation, using Neffective = N(1 – r)/(1 + r), in which N is the sample size and r is the lag-1 autocorrelation coefficient of the residuals of the detrended time series. The median of r is 0.27, resulting in a reduction in the degrees of freedom from N = 600 to Neffective = 345 for the monthly time series.
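The degrees-of-freedom deflation quoted above is easy to check in a couple of lines (a Python sketch; the function name is mine):

```python
def n_effective(n, r):
    """Effective sample size for an AR(1)-correlated series,
    using the standard deflation N_eff = N * (1 - r) / (1 + r)."""
    return n * (1 - r) / (1 + r)

# The corrigendum's numbers: N = 600 monthly values, median r = 0.27.
print(round(n_effective(600, 0.27)))  # prints 345
```

With Hu McCulloch's higher estimate of r = .318 the same formula gives roughly 310 degrees of freedom, which is the comparison drawn in the comments below.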

We also include results of a further calculation that takes into account both the variance and the uncertainty in the reconstructed temperatures. We performed Monte-Carlo simulations of the reconstructed temperatures using a Gaussian distribution with variance equal to the unresolved variance from the split calibration/verification tests described in the paper. Confidence bounds were obtained by detrending each simulation and obtaining the lag-1 autocorrelation coefficient and variance of the residuals; a random realization of Gaussian noise having the same lag-1 autocorrelation coefficient and variance was then added to the trend, and a new trend was calculated. The 2.5th and 97.5th percentiles of the 10,000 simulated trends give the 95% confidence bounds. For the case of zero unresolved variance, this calculation converges on the same value as the two-tailed t-test, above. The 95% confidence minimum trend value is given by the 5th percentile values of the simulated trends, last row of Table 1.
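The percentile procedure described in that paragraph can be sketched as follows (a simplified Python version that omits the extra unresolved-variance layer; all names and the synthetic demo data are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_noise(n, r, sigma, rng):
    """AR(1) noise with lag-1 coefficient r and marginal std dev sigma."""
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)
    for t in range(1, n):
        x[t] = r * x[t - 1] + rng.normal(0.0, sigma * np.sqrt(1.0 - r * r))
    return x

def trend(y):
    """OLS slope against a unit time index."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

def mc_trend_ci(y, n_sims=10_000, rng=rng):
    """Detrend, estimate lag-1 r and residual variance, add matching
    AR(1) noise back onto the fitted trend line, re-estimate the slope,
    and take the 2.5th/97.5th percentiles of the simulated slopes."""
    t = np.arange(len(y))
    coef = np.polyfit(t, y, 1)
    fitted = np.polyval(coef, t)
    resid = y - fitted
    r = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    sigma = resid.std(ddof=2)
    sims = [trend(fitted + ar1_noise(len(y), r, sigma, rng))
            for _ in range(n_sims)]
    return np.percentile(sims, [2.5, 97.5])

# Synthetic demo: slope 0.5 per time step plus noise.
y = 0.5 * np.arange(120) + rng.normal(0, 5, 120)
lo, hi = mc_trend_ci(y, n_sims=1000)
```

As the corrigendum notes, when there is no extra noise layer the simulated percentiles converge on the ordinary two-tailed t-test bounds.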

The corrected confidence levels do not change the assessed significance of trends, nor any of the primary conclusions of the paper. We also note that there is a typographical error in Supplementary Table 1: the correct location of Automatic Weather Station ‘Harry’ is 83.0° S, 238.6° E. The position of this station on the maps in the paper is correct.

CORRIGENDUM
doi:10.1038/nature08286
Warming of the Antarctic ice-sheet surface since the 1957 International Geophysical Year
Eric J. Steig, David P. Schneider, Scott D. Rutherford, Michael E. Mann, Josefino C. Comiso & Drew T. Shindell
Nature 457, 459–462 (2009)

What is with these bozos? Jesus Christ. I was trained to be meticulous in giving credit where credit was due.
Somebody should just post a question over at RC and ask if they would do the civil thing and say (if it’s true) that they were alerted to the “error” by Hu.

In copying over the messages from the Dr. Phil, Confidential Agent thread to this one, you, perhaps inadvertently, failed to include what’s now reply #71 there, where Michael Jankowski made a clever “teachable moment” from my correction. It really belongs in this thread too.

I should also point out that they have now determined basically the same values as you did. For the rest, the 1.964 is the two-sigma value from Hu’s post. What this means is that Dr. Steig’s group decided to provide an additional correction beyond even Dr. McCulloch’s work, for reasons I don’t yet understand. However, they presented almost EXACTLY the same results as Dr. McCulloch in row 2 of their table.

Dear Dr. Steig and co-authors,
FYI, I have recently posted a comment on your 2009 paper in Nature
on Climate Audit, at http://www.climateaudit.org/?p=5341 .
While I was able to replicate or virtually replicate the 1957-2006 trends you report
on p 460 for the three regions and the continent as a whole, the 95% Confidence
Intervals you report appear to have taken no account of serial correlation
in the regression errors. When this is done, the CI’s are substantially wider
than you report.
Any reactions, by comments there or by e-mail, would be welcome!
— Hu McCulloch

BTW, the link provided by CoolChill in #136 gives the complete Corrigendum, so you don’t have to subscribe to see more. The 3-page “letter” they refer to is just their original article, not the full text of the correction.

Plagiarism will be difficult to prove in this case, but complaints to the relevant authorities should be registered nonetheless. The fact that Dr. McCulloch, a professor who advised them of problems with the paper, did not receive a reply of any kind is itself a breach of academic protocol. Steig et al will of course assert that someone in their group (perhaps Gavin’s “mystery man”) verbally pointed out the problem before Hu informed them of its existence, and that will be difficult to refute.

Regardless, letters should be written to the appropriate authorities.

If climate science is ever going to be cleaned up, that is one way to do it.

Jon Stephenson’s book on Fuchs’s IGY trans-Antarctic expedition has recently been published. The book ‘Crevasse Roulette’ gives an insight into how vast and variable conditions are in the Antarctic. The emphasis Steig et al have placed on AGW is, in my opinion, out of all proportion to man’s ability to monitor temperature in the wilderness PJ describes.

We had a case recently in England with Newcastle University gaining worldwide headlines by claiming they’d created artificial sperm. It turned out that paragraphs of their paper had been copied from a 2007 paper, and it has now been withdrawn.

What is hammered home time and again with a lot of “climate science” is the way the discipline is learning from scratch some pretty basic econometrics.

Nothing wrong with that, but they claim all manner of competence in the area, refuse to take input across disciplines and often concoct their own statistical methodology without any foundation whatsoever.

Do the methods described by Dr Steig in his Corrigendum deal with precision testing rather than with accuracy testing? If you make a Monte Carlo simulation, do you not choose numbers from the same range shown by the data? Now if the data were systematically biased X degrees wrong (and they can be, as Ryan O has shown with the tiles, the choice of regpar and the choice of Principal Components), the Steig confirmation might merely show a replicable grouping around the wrong absolute value. Like painting the target on the wall after shooting a tight group. Unfortunately, the group is not as tight as it was. Is it also now offset from the bullseye, with fewer hits?

Re: Geoff Sherrington (#46), This is an excellent point. The Monte Carlo analysis they did does not show anything. All they did was use the lag-1 coefficient from the AVHRR data from each grid cell, run simulations, and compare it back to the same grid cell. This tells you nothing whatsoever about the threshold statistics for the reconstruction; it only tells you how well random noise fits a particular cell. It is a particularly easy test to pass besides, as the artificial offsets between the satellites make it very difficult for any simulation to achieve relatively high RE/CE scores. The analysis they should have done was use the lag-1 coefficient from the PCs and run simulations, then do reconstructions using those simulations, and calculate thresholds.
.
They also should have calculated thresholds for the ground stations for the RegEM results. The ground stations do not have a problem of artificial offsets, and the resulting thresholds are much, much higher.
.
I don’t really plan on posting the results here right now. Hopefully this will appear in print. 😀

My first reaction upon reading this was to say “same old sloppy, dog-ate-my-homework climate science”. But this is different. This is more of the “any and all who disagree with us (the saviors of the world) are evil shills in the pay of the wicked fossil fuel industry and thus undeserving of our recognition.”

Do you see the connection between this event and the practice of secrecy so prevalent in the AGW “research” circles? They know they are data manipulators, and they want to publish without public scrutiny to avoid exposure.

It’s amazing the lengths they will go to to avoid giving credit to anything associated with CA. There must be no credibility assigned to anyone not directly on the global warming payroll.

My thought is that the reason the new result has been released is because they know that both Hu and Ryan’s improved reconstruction are correct. They can now claim that any ‘improved’ results are within their new expanded CI so they aren’t and never were — wrong.

All that improvements in trend calculation accuracy have to stand on is that the center of the probability distribution has shifted downward and that the trend distribution differs from the interpretation of the now-obvious Chladni patterns of Steig et al. Of course any new work will have similar CIs.

Take a look at the new article at Pielke Sr’s blog about Dr. Stone’s problem. He has identified a major problem with one of the alarmists’ contentions, and was told, basically, that if he didn’t do the miscreants’ work for them he would not get published.

This appears to be a bad week for the science in terms of intellectual honesty and accepted practices, unless of course they change the name from climate science to climate mythology.

David #42
It’s my understanding from reading some of Steig’s postings on RC after his Antarctic paper that he also teaches statistics and did not think much of the statistical critiques of his work.
Thanks
Ed

Re: Edward (#54),
My comment was purely tongue-in-cheek. After Steig et al. was published, there was a kerfuffle over data and methods availability with Jeff Id (I think) where Steig sarcastically (and ironically, in retrospect) suggested Jeff come take one of his statistics classes. Perhaps Jeff could reproduce the quote?

Any email sent before then may remain unread and be discarded. If your message
is important, you will need to resend after that date.

But still, Steig had a responsibility to let him know.

Conceivably there were other replies that got caught by my spam filter and eventually erased. I try to scan my junk box regularly for important messages, but may have overlooked something. (I just found a message about FI from UC there last month that would have been lost if I hadn’t caught it.)

It seems one of the core operating principles of Climate Science is to save the planet, at any cost. The ends justify the means.

“On the one hand, as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but – which means that we must include all doubts, the caveats, the ifs, ands and buts. On the other hand, we are not just scientists but human beings as well. And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climate change. To do that we need to get some broad based support, to capture the public’s imagination. That, of course, means getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. This “double ethical bind” we frequently find ourselves in cannot be solved by any formula. Each of us has to decide what the right balance is between being effective and being honest. I hope that means being both.” – Stephen Schneider, lead author of the Intergovernmental Panel on Climate Change, Discover magazine, October 1989

From the original article: “A recent article in the Chronicle of Higher Education calls plagiarism ‘the gravest sin in the academy’.”

Sorry? I can’t follow this. These people are lying and fiddling base data and they call ‘Plagiarism’ the gravest sin?

Plagiarism is a human vice inherent in everything we do. We all ‘borrow’ from other people. When it gets blatant, and when people improve their careers and standing at the expense of another, it is less acceptable, but it still goes on, everywhere. I am sure that many great scientists have been totally overlooked in the past, and the discoveries we attribute to others should really have been their due.

But this is a sin against humans. Presenting untruths as fact, and altering base data to back this up is a sin against science and knowledge. For a true academic, this is surely far worse…?

On Feb. 26, 2009, I informally published, in a well-known and closely watched climate blog, a comment on the Jan. 22, 2009 Nature letter by Eric J. Steig et al., “Warming of the Antarctic ice-sheet…” (vol. 457, pp. 459–62). In my comment, I pointed out that the confidence intervals they published made no compensation for serial correlation, and that when this is done, the results are substantially weaker than they reported, albeit not by enough to overturn them in the key case of West Antarctica. On Feb. 28, 2009, I called the authors’ attention to my findings in the e-mail copied below.

In yesterday’s issue of Nature, Steig et al. published a Corrigendum replicating my findings, with essentially the same results. However, they make no mention of my prior, well-distributed results, of which I had made them aware. Instead, they present my prior discovery as if it were their own.

According to your Editorial Policies, “Plagiarism is when an author attempts to pass off someone else’s work as his or her own.” There is no submission date published with the Corrigendum, but if this was after Feb. 28, I would submit that this Corrigendum constitutes plagiarism as you define it.

I therefore request that you retract the Steig et al. Corrigendum and replace it with my e-mail to them, copied below. The e-mail provides the URL to my Feb. 26 Climate Audit post, “Steig 2009’s Non-Correction for Serial Correlation.”

Since your policy on corrections and comments is to publish them “if and only if the author provides compelling evidence that a major claim of the original paper was incorrect,” and this error did not in itself overturn their key result, I did not submit my comment to Nature, and only published it informally instead. But since you have now published Steig et al.’s replication of my findings, they evidently are important enough for at least a mention in Nature.

Thank you in advance for your careful consideration.

Sincerely yours,

J. Huston McCulloch
Professor of Economics and Finance
Ohio State University

Dear Dr. Steig and co-authors,
FYI, I have recently posted a comment on your 2009 paper in Nature
on Climate Audit, at http://www.climateaudit.org/?p=5341 .
While I was able to replicate or virtually replicate the 1957-2006 trends you report
on p 460 for the three regions and the continent as a whole, the 95% Confidence
Intervals you report appear to have taken no account of serial correlation
in the regression errors. When this is done, the CI’s are substantially wider
than you report.
Any reactions, by comments there or by e-mail, would be welcome!
— Hu McCulloch

In the real world of science, plagiarism gets people fired, pushes professors off the tenure track, and leads to the wastelands of science.

I was personally able to get a professor fired as a student because this person attempted to take credit for an idea for a space shuttle experiment that came from my student group.

We, as scientists, must INSIST on rigorous scientific morality in this most crucial area. The more such lapses occur, the more science itself will be called into question, should the current consensus collapse in the face of contrary actual evidence.

The integrity of the scientific method is far more important than any one theory or any one group of scientists.

Considering that Mann was away from his computers until mid March (based upon the automated reply that Hu received) and Steig was traveling to Antarctica (this was addressed in several threads based upon communications with Jeff Id), then for the “mistake” to have been found prior to Hu’s post, it would have had to have been discovered by one of the secondary authors. Interestingly, several of them have said in correspondence that they did not have access to all of the statistical data. So the only way for anyone to have verified the error prior to Hu’s post would be for the two primary statistical analysts of the paper (Mann and Steig) to have been working on the data while away from their staff positions.

That which is fascinating to me in the above link to Penn State is that each of the three verified cases of plagiarism discussed in the article was committed by a TENURED faculty member! So in no case was it a question of “doing what is necessary to survive” (i.e., intellectual cannibalism). Fascinating to me, and also troubling. Another interesting aspect of the article is that each of the perps is kept anonymous. If you rob a bank of money and get caught, both your name and photo will be all over the media. At Penn State, at least, if you rob an intellectual bank and get caught, your identity is kept from public scrutiny (or so it seems).

r is the lag-1 autocorrelation coefficient of the residuals of the detrended time series. The median of r is 0.27..

Unsurprisingly, a slightly lower AR1 autocorrelation helps them with the confidence intervals – it increases the number of degrees of freedom from 310 to 345, which might make a difference to them.

In this context, does anyone have any ideas on what r is the “median” of? It’s not a term that arises in conventional autocorrelation calculations. It looks like there is a Team-style ad hoc variation to methodology – I can’t locate any description of how the median arises in this context. Perhaps they made windows and calculated a whole bunch of AR1 coefficients, but we don’t know this (or the windowing procedure), nor are the benefits of windowing described here if such was used.

The Nature policy on plagiarism is here. There is also a section on “Due credit for others’ work” and “discussion of unpublished work”, with a link to another page on “Handling (mis?)appropriated data: Introducing a policy to ensure due credit for unpublished data”.

While the networking of climate scientists appears to be beneficial in getting papers into print, that networking evidently is not so helpful when it comes to finding errors that can be an embarrassment when found later and by others. The lack of AR1 adjustment to the trends shows sloppiness on the part of the original authors and the peer reviewers.

One would hope that the seemingly Jekyll-and-Hyde persona of Eric Steig comes by way of his conflicting roles as scientist and advocate. Where Steig’s actual demeanor lies in this behavioral spectrum, I would not venture a guess – or at least not a public one.

r is the lag-1 autocorrelation coefficient of the residuals of the detrended time series. The median of r is 0.27..

Unsurprisingly, a slightly lower AR1 autocorrelation helps them with the confidence intervals – it increases the number of degrees of freedom from 310 to 345, which might make a difference to them.

In this context, does anyone have any ideas on what r is the “median” of? It’s not a term that arises in conventional autocorrelation calculations.

That puzzles me a little, too. .318 was just my figure for All Antarctica — I got .262 for the Peninsula, .312 for W. Ant., and .290 for E. Ant. By “median,” I would think they must mean the median of these 4 values, i.e. the average of the middle 2. But this would be .301 by my numbers, and why not just list all 4 values in the text or table?

There are different valid ways of computing r1 — in the post I tried to adjust both numerator and denominator for degrees of freedom. They would get slightly smaller values than mine if they just divided the sum of the n−1 values of e(t)e(t−1) by the sum of the n squared residuals, as many do, without even taking averages, but with 600 observations and 2 regressors these distinctions should be imperceptible at their 2 decimal places.
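To see just how small the distinction between the two estimators is, here they are side by side (a Python sketch; both function names are mine):

```python
import numpy as np

def r1_ratio_of_sums(e):
    """Conventional estimate: sum of the n-1 adjacent products
    divided by the sum of the n squared residuals."""
    e = np.asarray(e) - np.mean(e)
    return np.sum(e[1:] * e[:-1]) / np.sum(e * e)

def r1_ratio_of_means(e):
    """Variant adjusting numerator and denominator by their own counts."""
    e = np.asarray(e) - np.mean(e)
    n = len(e)
    return (np.sum(e[1:] * e[:-1]) / (n - 1)) / (np.sum(e * e) / n)

# With n = 600 the two differ only by a factor n/(n-1),
# far below the two decimal places reported.
rng = np.random.default_rng(0)
e = rng.normal(size=600)
print(abs(r1_ratio_of_means(e) - r1_ratio_of_sums(e)) < 0.01)  # prints True
```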

Yet despite their apparently lower r1’s, they usually get the same seQ that I do, so I don’t worry much. The only exception is All Ant., where they are actually slightly higher (CI = +/- .10 for them, seQ = .0458 = .0916/2 for me).

But perhaps they tried to estimate r1 by some unnecessary Monte Carlo method, and .27 is the median of these simulations.

We should understand and note Dr. McCulloch’s qualification of his claim. If Steig et al did submit their correction before February 26th, then there would be no question of plagiarism. Climate science is, as we all know, rife with unsupported claims. We should be cautious in not adding to them. We should wait for Nature’s response to see whether our indignation is justified.

Is r the median, across all 5509 gridcells, of the autocorrelation coefficients of the (gridcell-specific) trend regressions?

The correlations are probably similar enough that it doesn’t make a big difference. However, what matters is the r1 for the actual regression being run, not for regressions run on disaggregated data. Given the nonlinearities and generally positive correlations, they’re probably introducing a little downward bias. This may account for the difference in r1, but not for why they got a slightly higher value for All Ant than I did.

For the R-challenged readers: instead of calculating the autocorrelation of the single averaged sequence which was used in the calculation of the trend, Steig inappropriately took the individual 5509 gridded reconstructions (all reconstructed from only three PC sequences), ran a separate regression with each of them, and calculated 5509 autocorrelations for the residuals. The median of those is .27.

The correlations range from .142 to .350 with a mean of .2689167. You are absolutely right that the proper autocorrelation is the one from the actual regression on which the results are based.
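The direction of the bias is easy to illustrate with invented numbers: when gridcells share a common autocorrelated signal plus independent noise, the median of the per-cell autocorrelations understates the autocorrelation of the averaged series (a Python toy; nothing here uses the actual reconstruction):

```python
import numpy as np

rng = np.random.default_rng(2)

def r1(e):
    """Lag-1 autocorrelation of a series about its mean."""
    e = e - e.mean()
    return np.sum(e[1:] * e[:-1]) / np.sum(e * e)

# Toy grid: a common AR(1) signal (phi = 0.5) plus independent
# white noise in each of 200 "gridcells". All numbers are invented.
n, cells = 600, 200
common = np.empty(n)
common[0] = rng.normal()
for t in range(1, n):
    common[t] = 0.5 * common[t - 1] + rng.normal()
grid = common[:, None] + rng.normal(0.0, 1.0, (n, cells))

per_cell_median = np.median([r1(grid[:, j]) for j in range(cells)])
averaged = r1(grid.mean(axis=1))
# Averaging cancels the independent noise, so the averaged series keeps
# the common AR(1) structure while each cell's r1 is diluted downward.
print(per_cell_median, averaged)
```

In this setup the per-cell median comes out well below the r1 of the averaged series, which is the downward bias Hu describes.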

Roman or Hu, can either of you make up a color-coded map along the lines of their SI Figure 4 showing the impact of the correction on this figure? This is where the claim is made. My guess is that this will noticeably change the significant/nonsignificant boundary and that this figure should have been reissued in the Corrigendum. If neither of you has time right now, I’ll do it over the weekend.

One point that readers should realize: as of 2004, Nature did not peer review Corrigendums. We established this in connection with the Mann Corrigendum.

I have a theory to explain the outrageous conduct of Mann and Steig. Dr McCulloch is a professor of economics and finance. Obviously he is not a climate scientist. Mann/Steig define plagiarism narrowly, that is applicable only to the work and ideas of peers in the same field. It follows that attribution is not required.

Condemnation of Steig and Mann will be most effective when it comes from their peers.

RE Ryan O #90 —
Thanks, but the link doesn’t work. It tries to go to another comment on this thread (#6712), but doesn’t connect.

I would suggest that such a plot code “insignificantly different from 0” as 0 itself, rather than as a separate color, since the graph would be saying that these gridcells are essentially 0. (This also works more easily in MATLAB…)

Note that in the Corrigendum, Steig & Co switch pea shells — in the paper and most of the corrigendum, they are talking about 95% CIs, which define 5% 2-tailed tests. But then in the last line of Table 1, they are talking about the 5th percentile of the Monte Carlo distribution, i.e. they are implicitly using a 5% 1-tailed test rather than a 2-tailed test. This is much weaker, since it has the same lower bound as a 10% 2-tailed test. This is why the bottom line is higher than the lower limit of the CI defined by the first and third lines.
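The pea-shell switch can be seen directly: the 5th percentile of any distribution of simulated trends is a weaker lower bound than the 2.5th. An illustration on an arbitrary made-up distribution (Python; the particular numbers mean nothing):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for 10,000 simulated trends (values are arbitrary).
trends = rng.normal(0.1, 0.05, 10_000)

lo_2tail = np.percentile(trends, 2.5)  # lower bound of a 95% CI (5% two-tailed)
lo_1tail = np.percentile(trends, 5.0)  # "95% confidence minimum" (5% one-tailed)

# The one-tailed bound coincides with the lower limit of a 90% two-tailed
# CI, so it always sits above the 2.5th percentile.
assert lo_1tail > lo_2tail
```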

Although the first paragraph just replicates my CA post (closely enough for climate work…), the second paragraph does something additional that I don’t quite understand. It seems like they are trying to take the reconstruction noise into account, which sounds reasonable enough. But if you start with a noisy regression and then add more noise to the dependent variable, you just get a noisier regression, and the regression standard errors will take this into account with no modification of approach.

But I guess Ryan O (#52) has figured out what this means in terms of their RegEM methodology.

Steve and Ryan, I wrote a short R program for calculating the prob. value for each of the satellite regressions after it has been adjusted for ar1 correlation (plus trends, t statistics and p-values, etc.):

dat is the satellite reconstruction and xs is the time variable (1957 monthly to 2006). I would put up the graph outlining where allregs[,7] = (approximately) .05, although it is easier to simply graph the region where it is greater than .05 (i.e., where the trend is “not significant”), but at the moment I have misplaced my good Antarctic plotting function 😦 . If it isn’t done by tomorrow, I’ll get it done and post it.
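Since Roman’s R listing isn’t reproduced here, this is the kind of calculation he describes, sketched in Python (my own names and demo data; I use a normal approximation to the t distribution, which is harmless at ~300+ degrees of freedom):

```python
import math
import numpy as np

def ar1_adjusted_trend_test(y, x):
    """Slope, t statistic, and two-tailed p-value for a linear trend,
    with degrees of freedom deflated by the residual lag-1
    autocorrelation via N_eff = N(1 - r)/(1 + r)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    e = resid - resid.mean()
    r = np.sum(e[1:] * e[:-1]) / np.sum(e * e)
    n_eff = n * (1 - r) / (1 + r)
    s2 = np.sum(resid**2) / (n_eff - 2)          # residual variance, adjusted df
    se = math.sqrt(s2 / np.sum((x - x.mean()) ** 2))
    t = slope / se
    p = math.erfc(abs(t) / math.sqrt(2))         # two-tailed, normal approx
    return slope, t, p

# Demo on invented data: 600 monthly points, 0.1 deg/yr trend plus noise.
rng = np.random.default_rng(3)
x = np.arange(600) / 12.0
y = 0.1 * x + rng.normal(0.0, 1.0, 600)
slope, t, p = ar1_adjusted_trend_test(y, x)
```

Applied per gridcell, the p-values from something like this would delimit the significant/nonsignificant boundary discussed above.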

One would surmise that, having done similar maps in Figure S4, they would have repeated it with the updated AR-corrected results. Their picture would not have been particularly supportive of the statement: “The corrected confidence levels do not change the assessed significance of trends, nor any of the primary conclusions of the paper“.

I strongly believe that they overstated their case in the initial publication. As Steve rightly points out in #123, there are definitely a lot more issues involved here.

Re: Hu McCulloch (#92), Easy enough. I’ll fiddle with my plotting program. Maybe hashmarks would work even better. What the program does is plot everything first, then it overplots areas of insignificant trends. So I can make it look pretty much however we want.
.
As far as paragraph 2 goes, it’s cheesy. They restrict the Monte Carlo simulations to simulating the difference of the reconstruction vs. actual data. In other words, they assume the reconstructed trend is true. They calculate the unexplained variance between the reconstruction and actual data. They restrict the variance of their simulations to this value. Run simulations. Add the simulation back onto the assumed “true” trend and calculate a new trend and CIs.
.
So it’s Q.E.D. that the results “converge” to the t-test when the unexplained variance is zero. The results are the frickin 2-tailed t-test because there is no unexplained variance to simulate!
.
Note that this method depends on them having to assume the reconstructed trend is true. This means it can not be used to establish thresholds for r, RE, CE, explained variance, or any other statistic. It only establishes CIs for linear regressions under the assumption that the trend is true and the variance of the “noise” is equal to the unexplained variance between the split calibration/verification results and actual data.
.
It also appears (though they don’t specify this) that the simulations are done using the area average reconstruction temperatures. If they were done on the individual grid cells, this would require a lot more than 2 simulations per cell.

While a few exuberant bloggers fumbled about with “back of the envelope calculations”, the original authors, in the spirit of science, went back and “did the math” in order to polish an already shining piece of work.

“Is this a warming which I see before me,
The trending toward the red? Come, let me publish thee.
I have thee not, and yet I see thee still.
Art thou not, fatal vision, sensible
To feeling as to sight? or art thou but
A warming of the mind, a false creation,
Proceeding from the heat-oppressed brain?
I see thee yet, in form as palpable
As this which now I draw.”

Re: Hu McCulloch (#109), If you don’t get satisfaction from Nature, you might try China. Here’s the headline of a letter in this week’s Science:

China Fights Against Statistical Corruption

Particularly in the current financial crisis, many countries rely on statistics released by the Chinese government for production and trade of bulk commodities, exchange rates, and economic stimulus. However, the credibility of China’s statistics has long been questioned. On 1 May, a new regulation, Rules on Punishment for Violation of Laws in Statistics, was put in effect by the Ministry of Supervision, Ministry of Human Resources and Social Security, and the National Bureau of Statistics.

I’m trying to get a copy of the “Rules on Punishment for Violation of Laws in Statistics” but it seems they only give it out to academics.

(that last part is a joke – the “Rules on Punishment for Violation of Laws in Statistics” are clearly given here, albeit in Chinese). I wonder if we should start working on versions of the “Rules on Punishment for Violation of Laws in Statistics” for use in the US and UK (and Germany?).

Looking at Steig et al again, is Fig. 2 of the main paper also in need of revision? I’m not sure what it is representing with its “grey” 95% CLs – they seem to be simply constant error bands around the annual anomaly:

[…] in Nature making essentially the same point I had made several months before in my CA post. See The Steig Corrigendum for discussion. A graph there by Roman Mureika shows that the portion of the continent that shows […]