Monday, July 27, 2009

Editorial standards at AGU journals

This ridiculous paper has already been eviscerated by Tamino, RC, and mt, so I won't waste too much time on it, but I have spotted one more error that no-one else has commented on so far, which I'll cover before getting to the main point of my post.

So first, the error. It's not as significant as the one Tamino deals with, but here it is anyway. Paragraph 30 reads as follows:

[30] For the 30 years prior to the 1976 shift (i.e., 1946–1975) the SOI averaged +1.93 but in the 30 years after 1976 (i.e., 1977–2006) the average was −3.06, which represents a shift from a La Niña inclination to an El Niño inclination. The standard deviations for the two periods were 9.48 and 10.40 on monthly SOI averages, and 6.56 and 6.35 on calendar year averages, which indicates consistent variation about a new average value. Only the RATPAC-A data are available for lower tropospheric temperatures both before and after this shift, and even then we are limited to 17-year periods for our analysis of RATPAC-A data because monitoring did not commence until mid-1958. From 1959 to 1975 the RATPAC LTT averaged −0.191°C and from 1977 to 1993 it averaged +0.122°C. The standard deviations on the seasonal data were 0.193° and 0.163 C°, and on monthly data 0.162°C and 0.146°C. We have already illustrated the close relationship between SOI and GTTA, but this description of the respective changes before and after the Great Pacific Climate Shift indicates a stepwise shift in the base values of each factor but otherwise relatively consistent ranges of variation.

(SOI and RATPAC are time series data, the definition of which is irrelevant to my point).

So, to parse this clearly: the authors are claiming that when the mean of the first half of a time series differs from the mean of the second half, but the variability within each interval is the same, this indicates that there was a step shift in the middle.

Let's take a linear trend plus noise, y = at + e, where t (time) runs from -T to T and e is any additive noise with variance s². The expected mean over the first half [-T,0] is -aT/2, and the mean over the second half is aT/2. The standard deviation of the first half is sqrt(a²T²/12 + s²), where these two contributions come from the linear trend and the noise respectively. The standard deviation of the second half is, um, sqrt(a²T²/12 + s²). In other words, when the means of the first and second half of a time series differ, but the variability does not, this tells us precisely nothing about whether there was a step change or just a linear trend. Ooops.
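The algebra above is easy to check with a quick simulation (a minimal sketch using NumPy; the slope, noise level, and sample size are arbitrary choices of mine, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure linear trend plus noise: y = a*t + e, with t running from -T to T.
T = 30.0   # half-length of the record
a = 0.1    # trend slope
s = 1.0    # noise standard deviation
t = np.linspace(-T, T, 2000)
y = a * t + rng.normal(0.0, s, size=t.size)

first, second = y[: t.size // 2], y[t.size // 2:]

# The half-means differ by roughly a*T (here about -1.5 vs +1.5)...
print(first.mean(), second.mean())

# ...but both half standard deviations estimate the same quantity,
# sqrt(a²T²/12 + s²) = sqrt(0.75 + 1.0) ≈ 1.32, despite there being
# no step change anywhere in the series.
print(first.std(ddof=1), second.std(ddof=1))
```

So a series that is nothing but a smooth trend reproduces exactly the pattern the authors take as evidence of a "stepwise shift".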

I hate to think what they might have done were it not for Craig Loehle's graciously acknowledged assistance with the statistical analysis. I'm sure he is delighted to be associated with this sorry mess of a paper.

Now to the real point, which is that the AGU journals seem to have become rather prone to publishing this sort of nonsense recently (remember Schwartz, Chylek and Lohmann, to name but two). Although of course no system will ever be infallible (and a system that blocked out all the mistakes would block a lot of interesting and important stuff too) the errors in these papers are so blindingly obvious that it is hard to believe that any reasonably diligent and competent reviewers would miss them.

When you submit a paper to an AGU journal, you are asked to suggest 5 reviewers. It's a common enough practice (pretty much ubiquitous) which helps the editor, who may not be well acquainted with the particular subfield that the paper addresses. However, it also serves as an open invitation to game the system by suggesting people who you think are likely to be particularly generous and uncritical. Of course any editor worth his (or her) salt should also look outside this list, especially if he thinks that the authors have played this game. But if they have a lot of papers to deal with, and no real stake in the outcome, they might not bother.

I'd like to see AGU editors attach their names to the papers they handle. This seems to be standard practice in the EGU journals, which have not (AFAIK) suffered from this sort of nonsense. This leaves the editors somewhat accountable for the mistakes they make, and any pattern of repeated carelessness would be easily spotted. Of course, the main responsibility lies with the authors and reviewers, but as things stand, it seems like a small clique can publish anything they want so long as they all pat each other on the back. Peer review isn't well set up to deal with deliberate gaming of the system.

Of course, under the EGU's open review system, the gaping holes in this paper would have been spotted very quickly and it would never have been published.

9 comments:

What sets this paper apart from Schwartz etc. is that the authors are so disreputable. I find it hard to imagine that the editor(s) didn't have some inkling of that, especially with the nod to Loehle as an added hint. You'd think that a list of suggested reviewers biased enough to pass this paper through would also have set off alarms.

Now, having gotten what must be a monumental amount of flak, will the editors withdraw it?

Not sure that signed reviews would have helped here (the authors may well know who the reviewers are anyway), and they are a rather big step away from the standard of anonymous peer review. However, open review a la EGU seems a much more palatable - and potentially powerful - step.

Steve, I would bet a large sum against a withdrawal. IME most of the AGU editors just don't care about things like this, and/or see it as above their remit to pass judgements.

Looking on the bright side, this sort of embarrassment can only help to weaken the profiteering hegemony of the AGU publication machine, and for-profit publication more generally. Someone please remind me, what are the supposed benefits of paying for this editorial service?

I'm delighted that the bunny is fully on board with that system. As far as I know, there has been one fairly controversial matter in the EGU journals, and even though I have some misgivings about how it was handled, it was nothing like the clusterf"#$s that the AGU seems to arrange on a regular basis.

I guess finding good reviewers is really hard these days. Most experienced researchers try to skip the task or hand it over to inexperienced researchers; I have seen a lot of senior scientists do that. Reviewing is a lot of hard work and responsibility, and I guess that is one of the reasons. Maybe one solution would be for the journals to pay for reviews. This might increase both the number of quality reviewers and the care they put into the job.