Ammann at AGU: the Answer

OK, back to Ammann at AGU, his answer to the cross-validation R2 and my offer to him after our lunch.

I think that asking for the cross-validation R2 was a good one-bite question at several different levels. First, it’s objective and any prevarications are noticeable to the audience. Second, you can get to the point a lot faster than with bristlecones or principal components methodology. Third, it cuts across both MBH and Ammann & Wahl – in the sense that the failed cross-validation R2 was one of our most prominent criticisms of MBH, and Ammann had declared that our criticisms of MBH were "unfounded" – a claim which, as we have seen, has been repeated to Congress not only by Mann, but by Sir John Houghton. Their assertions that our claims were "unfounded" fly in the face of the fact that their own code yields a cross-validation R2 of ~0.02 for the 15th century MBH98 step, just as ours did – hardly evidence that our claim was "unfounded".

Reviewing the Bidding
A short review of the bidding to show why the question is relevant. IPCC, which had applied the MBH reconstruction so prominently, stated that the MBH reconstruction had "significant skill in independent cross-validation tests".

Averaging the reconstructed temperature patterns over the far more data-rich Northern Hemisphere half of the global domain, they estimated the Northern Hemisphere mean temperature back to AD 1400, a reconstruction which had significant skill in independent cross-validation tests.

If you wondered what "independent cross-validation tests" were used, you needed to look no further than MBH98 itself, which reported use of the RE, correlation (r) and squared-correlation (r2 or R2) tests as follows (even illustrating the information for the AD 1820 step in Figure 3):

[RE] is a quite rigorous measure of the similarity between two variables, measuring their correspondence not only in terms of the relative departures from mean values (as does the correlation coefficient r) but also in terms of the means and absolute variance of the two series. For comparison, correlation (r) and squared-correlation (r2) statistics are also determined.

In our GRL article, we reported that MBH98 failed cross-validation tests according to a number of statistics commonly used in Hockey Team paleoclimate articles and, most notably, that the cross-validation R2 statistic was approximately zero. (Because of this failure, we argued that it was impossible for the seemingly significant RE statistic to be anything other than "spurious" in a statistical sense and proposed a mechanism to explain the spurious statistic through inaccurate benchmarking.) This was one of 3 claims highlighted in our Abstract.
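For readers unfamiliar with these statistics, the standard textbook forms can be sketched as follows. This is a generic illustration of the definitions, not MBH's or anyone's actual code:

```python
import numpy as np

def verification_stats(obs, pred, calib_mean):
    """Common cross-validation statistics for a reconstruction.

    obs, pred: observed and reconstructed values over the verification period;
    calib_mean: mean of the observed series over the calibration period.
    """
    sse = np.sum((obs - pred) ** 2)
    # RE: skill relative to a "no-knowledge" forecast of the calibration mean
    re = 1 - sse / np.sum((obs - calib_mean) ** 2)
    # CE: skill relative to a forecast of the verification-period mean (stricter)
    ce = 1 - sse / np.sum((obs - np.mean(obs)) ** 2)
    # r2: squared correlation -- insensitive to errors in the mean and variance
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2
    return re, ce, r2
```

A perfect reconstruction gives RE = CE = r2 = 1, while simply forecasting the calibration mean gives RE = 0. Note that r2 is blind to a constant offset: a reconstruction that merely shifts the observations earns a perfect r2 but a degraded RE, which is one reason the statistics can disagree.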

Now, for some time, Ammann and Wahl have been stridently criticizing us based on claims arising from still unpublished submissions. In 2004, they told the Washington Times that they had "exactly" reproduced MBH:

Moreover, when they applied this refined technique to the proxy record "we exactly reproduce what Mike did," Ammann told UPI.

Ammann and Wahl conclude that the highly publicized criticisms of the MBH graph are unfounded.

On June 11, 2005, GRL rejected the Ammann and Wahl submission, but neither UCAR nor Alfred University updated their prior press release or the UCAR webpage. (Much later in September, after they got the editor considering the submission replaced, the Ammann and Wahl submission was taken out of the garbage can and it is presently once again under review.)

The word "unfounded" was also used in a press release by Alfred University, where, for greater certainty, they added that our criticisms did not have any "scientific merit":

They [Ammann and Wahl] concluded that McIntyre’s and McKitrick’s criticisms of MBH98 were unfounded, and that the statistical methods for summarizing the North American tree-ring proxies have been properly applied in MBH….Wahl said. “We didn’t find the criticisms (by McIntyre and McKitrick) to have scientific merit.”

As noted over the past few days, the claims in this press release were widely distributed. Sir John Houghton cited it in his testimony to the U.S. Senate on July 21, 2005, reported here; Mann cited it repeatedly in his reply to the Barton Committee discussed here; even the European Geophysical Union cited it in their testimony to the Barton Committee.

Ammann and Wahl [2005]
In May, I reported that, comparing apples to apples, I had been able to exactly replicate A&W’s methodology, which differed from ours only in one obscure re-scaling step. (It would have been nice if they had acknowledged that the differences between our emulation of MBH and theirs were microscopic.) This re-scaling step was changed in their code as late as April 2005, so it wasn’t just us who had trouble understanding what Mann did in this step (although it’s clear now with the source code released in July 2005).

Ammann and Wahl did not archive their results for their 15th century step, but did archive code from which the information could be calculated. I did the calculation using MBH weights and temperature PCs (which Ammann and Wahl irritatingly and pointlessly varied) and archived the results. From this, I calculated the same cross-validation statistics as in our GRL article, yielding the results shown in Table 1 below (previously reported at climateaudit here). Obviously, the verification RE, R2 and CE are virtually identical in the MM05a emulation and in the A&W emulation. I do not see how the information in this table entitles Ammann and Wahl to honestly assert that our claims were "unfounded", when their own code so strongly confirms one of our most prominent claims.

Table 1. Cross-Validation Statistics for MBH98 and MBH98 Emulations

            RE (ver)   R2 (ver)   CE      Sign Test   Product Test
MBH98       0.48       n.r.       n.r.    n.r.        n.r.
MM05a       0.46       0.02       -0.26   0.46        1.54
A&W Code    0.47       0.02       -0.24   0.54        0.91

Ammann’s Answer
We’ve been trying for a long time to hear someone from the Hockey Team admit the undeniable – that the cross-validation R2 failed (and that at least one of our claims was “founded”). But we’ve reported how determined Mann has been to avoid disclosure of this information – did I seriously expect that Ammann would just answer the question?

Well, it will probably come as no surprise to you that he didn’t. Like a politician, he changed the topic. If he thought that a valid statistical model could abjectly fail an R2 test, in my opinion, his obligation was to say that the R2 was such-and-such, but here’s why I think that you should ignore the poor results. But he went into a song-and-dance as to why the attendees at the session should not have that information. (His prevarications were not lost on the audience, which included Zorita, Huybers, Pollock and other interested parties.)

I’ll discuss his reasons in detail tomorrow. They were based primarily on the theory that the R2 statistic measured only "high-frequency" attributes of a model, while RE measured "low-frequency" attributes, which were the only ones of interest. Whatever the merits of this particular argument, it is not well-known to statisticians outside the Hockey Team, and it is not consistent with their own articles on other occasions. But even if it were, since Ammann had asserted in a prominent national press release that our "widely publicized claims" were unfounded, he had an obligation to show that our claim about the MBH98 cross-validation R2 was "unfounded" and he had no right to dodge the question.

In his answer, Ammann pointed out that he had previously explained all this to me (one could almost hear the mannered “…sigh…” that William and Gavin do at realclimate, or used to do until we started twitting them for it). Here Ammann has opened up an interesting can of worms and, in his position, I would not have made this remark in a public forum. I had been invited by the irrepressible Stephen Schneider to act as an anonymous reviewer for the Ammann and Wahl submission to Climatic Change. In my capacity as an anonymous reviewer (not as Stephen McIntyre), I had asked Ammann for the cross-validation R2 statistic. Astonishingly, he refused to provide this information. I don’t have much experience with academic reviews, but this seemed like a cheeky and dangerous response. He gave a long song-and-dance about low-frequency and high-frequency. Since Ammann introduced this exchange into a public forum, I will discuss it some more in a day or two, as there are many interesting aspects to the Climatic Change exchange. I might add that, regardless of the low-frequency high-frequency argument, Mann himself claimed that his reconstruction passed multiple cross-validation statistics, including the r and r2 tests (see above); Ammann claimed to have "exactly" reproduced Mann, so this should include proving that MBH98 passed all of these tests as claimed.

I’ll just mention one more thing for now about the Climatic Change review. After Ammann refused to provide the R2 statistic, I asked him for a copy of a still unpublished paper which had been mentioned by Stephen Schneider, which used R2 statistics. Ammann refused to provide this to the referee, but said that, if the referee identified himself to Ammann, Ammann would send the article to him. More on this in a few days. (Also in passing, in Mann, Rutherford, Ammann and Wahl [2005], Testing the Fidelity…, they report on RE, R2 and CE statistics.)

So how did Ammann know that the anonymous referee was me? He knew; he wasn’t guessing. It wasn’t through Climateaudit, as I’d only used the public record in what I said here at that time. I know how and it wasn’t through Schneider or his staff or through anyone usually associated with me. The confidentiality was broken elsewhere. I’m teasing here as I don’t think that you’ll be able to guess and I may not tell for a while (bear with me on this.)

After Ammann’s short filibuster at AGU failing to disclose the answer, I tried to follow up, but AGU is fanatical about schedules (fair enough); by then, Ammann’s time had expired and the chairman politely cut off questions.

The Lunch
During the session, I introduced myself to Ammann and invited him to lunch. Mostly we chit-chatted pleasantly about children, geology in South America (he has done field work in South America) etc., but all good things come to an end and we had to deal with the elephant in the room.

As well as not answering at AGU, in his two current submissions criticising our work, Ammann doesn’t report the adverse R2 cross-validation statistics. I urged him at lunch – in his own interests – to deal with the issue himself. In any enterprise, dealing with the bad news is no fun, but you’ve got to do it and you’re always better off dealing with it yourself, rather than having someone else hammering you with it. I pointed this out to him in the nicest possible way. I told him that, if he doesn’t, it will be awfully easy for me to excoriate him for withholding these adverse statistics and that I would obviously do so. I asked him: why give me such an easy target? He was relatively young; I was trying to coach him.

Amazingly, he tried to justify withholding the adverse results by claiming that we had not reported cross-validation statistics for “our” reconstruction. The comment floored me. First of all, even if it were true, how would that justify him doing the same? Second, and more importantly, we have never published a reconstruction and claimed it as “ours.” Readers of this blog know that we have categorically and consistently asserted that we have never claimed to make a positive reconstruction of past climate. We are often urged to do so, and even criticised for not doing so. But the record shows clearly that:
1. We have never endorsed MBH98 proxies and methodology for making a climate reconstruction.
2. We explicitly disavowed making a positive reconstruction as early as our first E&E article.
3. When some misunderstandings arose in 2003, we promptly issued a FAQ with a completely categorical statement that could not possibly be misunderstood.
4. We’ve consistently maintained this position in our 2005 articles. Our GRL article did not include a reconstruction and our E&E article was couched entirely in terms of robustness. Even Ammann’s realclimate associates have acknowledged that we have not proposed an “alternative” positive reconstruction.
5. Most recently, Ross published a letter at the Wall Street Journal on the Friday before Christmas, correcting such a mis-statement by them.
6. In our comments to Climatic Change, we once again re-stated this in categorical terms to ensure that there was no chance of a misunderstanding by Ammann and Wahl.

Third, purely hypothetically, if by some chance I were to produce “my” climate reconstruction, does anyone honestly believe I would be so dumb as to compute an R2, observe that my results lack significance, then proceed to publish my results and my code anyway, while simply withholding the fatal R2, in the hopes that no one would notice? In other words, does Ammann think I would be dumb enough to do what he is now doing with his current journal submissions? Give me a break. I didn’t go through all this at lunch with Ammann. I simply re-iterated that we did not propose an alternative positive reconstruction and had maintained that position consistently. He seemed incredulous at this. Sometimes even smart people don’t read things that they don’t like.

The Offer
As we were winding up, in fact, just as we were returning to AGU, Ammann screwed up his nerve to complain about getting roughed up and my tactics in doing so, which he didn’t like very much.
This is a guy who had used UCAR press facilities and distribution to issue a national press release on the very day that we’re making a rare public appearance, announcing his submission of two articles supposedly debunking us and the horse we rode in on. This press release was then relied on by Houghton, Mann and others in their evidence to the U.S. Congress. Ammann had given newspaper interviews and presented in Washington and he’s complaining about getting roughed up.

It’s not like these guys don’t know how to use the media to their advantage. (I’m still amazed at scientists issuing press releases – based on my experience I think that they outdo mining promoters.) But they are used to one-sided bullying. It’s all right for them to dump on McKitrick and me, but they turn into crybabies when we fight back. I don’t know what he expects.
Anyway, regardless of whether it was reasonable or not, he was complaining about getting roughed up. It was a little pathetic, but he’s a pleasant enough guy in many ways and, if I were in my 30s, I wouldn’t like it very much either. The trouble with being young is that you don’t always anticipate all the consequences of what you do. As they say: good judgment is the product of experience, which is the product of bad judgment.

Anyway, this gave me a really interesting idea. Rather than trying to hash out the rights and wrongs of who did what to whom, I tried a completely different tack. I pointed out to him that there was very little remaining community interest in more controversial articles on the same topic, which would undoubtedly leave the situation pretty much where it stands. However, I surmised that there would be very strong community interest in a joint article in which we summarized clearly:
(1) the points of agreement;
(2) the points of disagreement and
(3) how these points of disagreement could be resolved.

Because our algorithms were fully reconciled and almost identical to start with, I expressed optimism that we could identify many results on which we could express agreement. We could each write independent supplements to the joint text if we wished. If we were unable to get to an agreement on a text within a finite time (I suggested the end of February), we would revert back to the present position, with neither side having lost any advantage in the process. Pending this, both parties would put matters on hold both at journals and at blogs – and you’ll notice that I’ve been silent on this particular issue lately.

Shortly after, I ran into the chairmen of our session, Hugo Beltrami and Fidel Gonzalez-Rauco, and later Eduardo Zorita. I outlined my proposal to them; they all heartily endorsed the idea. I told Ross about the proposal and he endorsed it too. A few days after I returned from San Francisco, I emailed Ammann with the offer in writing, with an expiry date to the offer (hey – I’ve been doing business for many years). No response – not even an acknowledgement. On the expiry date, I sent a reminder email, this time both to Ammann and to his coauthor, Eugene Wahl of Alfred University, urging them to accept the proposal. Once again, no response – not even an acknowledgement. Since then, nearly two more weeks have passed without any word from either Ammann or Wahl. So the offer has obviously been refused without even the courtesy of a reply.

Now we are forced to deal with the matter of preparing a Reply to their re-submitted GRL Comment. It’s a very weak comment and full of mischaracterizations and misrepresentations, which is undoubtedly one of the reasons that it was rejected the first time. There’s nothing in it that concerns us regarding the validity of our original articles. Some of the points are actually identical to Huybers, but presented as though they were novel and completely ignoring our published reply. It’s bizarre. Anyway, you can be sure that the issue of cross-validation statistics will feature prominently in the Reply.

Aside from the academic exchanges, Ammann’s withholding of adverse information has left him in a highly vulnerable position – a vulnerability that seems sheer madness to me, given the high profile not merely of the topic, but of the particular information being withheld. It was bad enough the first time when Mann withheld the information – shame on him. But this time, the issue is in the sunlight – it’s caught the attention of a House Committee for goodness’ sake – so why would a young man get involved in withholding the information one more time, especially when he’s issued national press releases? But sometimes even smart people make bad decisions and I think that Ammann will come to regret his current course of action.

25 Comments

None of these people play poker, because if they did, they’d know that their hand was a dud, they’d know their opponents know their hand is a dud, and they’d fold instead of trying to stare everybody down and raising the stakes.

Perhaps it’s an ego thing with the Hockey Team, or maybe they just don’t want to be the one to let the Team down. I cannot believe that Ammann is risking so much. Maybe he’s in too deep and can’t pull back.

That he couldn’t answer a direct simple question in front of a large audience of his peers, cannot have gone down well. If it sounds evasive now, it must have sounded evasive then.

Even failing a joint submission, which I think is a great idea, there may be other opportunities: panel discussions, symposia etc. I don’t know how many articles Lindzen co-authored with Santer (actually I do, it’s zero), but they were part of one of the most thrilling academic symposia I have ever seen. Check this out.

Forget about the session schedules; each discussant had about 30 minutes. After Glantz’s talk, questions were opened up, and that went on for about another 30 minutes. It was fantastic.

I always find it odd when pro-MBH folks criticize “your” reconstruction. Because, when you realize that you are simply presenting an “adjusted” MBH reconstruction, criticisms of that reconstruction are actually criticisms of MBH, not of you. One of the major points you make is that MBH is not robust, i.e., small changes make large differences to the results. Therefore criticism of the results you get when you make adjustments to MBH simply show that small changes in the MBH methodology produce garbage. (I have also seen the adjusted MBH results criticized because they place the MWP too late.) This is odd, because they SHOULD be arguing that the adjusted MBH results are still OK, i.e., that the bottom-line results are robust to small changes.

On the RE v. R2 debate, it would be helpful to have a short explanation of each and why they supposedly measure skill. Preferable would be a few graphs showing what a good RE implies and what a bad R2 implies and what sort of cases can generate a good RE and a bad R2.

Also, it would be useful if Ammann could provide a case with a good RE, a bad R2 where the model was still good and explain why it was still good in spite of the bad R2.

Also helpful would be standard statistical texts that discuss the application of RE v. R2. Is this actually discussed somewhere or are people just making this up.

I realize this is a lot to ask, so don’t feel obligated to do it. But I know quite a bit of statistics, and these issues are not obvious to me.

Funny how people from within the Hockey Team family seemingly run like hell when confronted with Steve McIntyre.
I don’t mind what the zealots of the Hockey Team are going to say and how they will react to the whole issue now, because I’m quite convinced that the RIGHT PEOPLE in the RIGHT PLACES will take notice of what is going on and what to make of all of this. I will urge Steve just to keep going and not to lose faith in his project because "SOMETHING IS HAPPENING!".

Re # 5. This discussion would be most helpful. I went to the book store and bought a statistical text book just so I can keep up, but I have a long way to go before achieving understanding. Some online tutoring will help. Thanks

I’m interested in Ammann’s response here, because of his view of the problem in the frequency domain. From my background in remote sensing, I tend to think about things in the frequency domain first, and such a comparison between RE and r2 has occurred to me in the past.

I would add that a spectral comparison of such measurements isn’t hugely intuitive, but please bear with me on this one!

The r2 statistic effectively removes the mean of the two distributions before making a comparison. If we consider the MBH case, I believe the verification period was a 45 year period between 1856 and 1900 (inclusive). If we consider a discrete Fourier transform of this period, the removal of the mean in the r2 statistic effectively zeros the DC bin of the transform. The consequence of this DC removal is to suppress the effect of low frequencies following a (sin x)/x function from DC. The first null of this function would fall at the second DFT bin, which corresponds to a frequency of 1/(45 years). The 3dB point on the (sin x)/x function occurs at 0.6/(45 years); this means that in frequency space, a “signal” in the temperature reconstruction with wavelength of 75 years will only have half of the influence in the r2 as in the RE; a “signal” with wavelength of 45 years will have similar influence in r2 as RE; a “signal” with infinite wavelength will not affect r2 at all.
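The mean-removal point above can be checked numerically. A minimal sketch, with the window length chosen to match the 45-year verification period and everything else illustrative:

```python
import numpy as np

N = 45                      # length of the verification window, in "years"
t = np.arange(N)

# 1. Removing the sample mean zeroes the DC bin of the DFT exactly.
x = np.random.randn(N) + 5.0          # arbitrary signal with a mean offset
X = np.fft.rfft(x - x.mean())
assert abs(X[0]) < 1e-9               # DC bin is gone

# 2. Mean removal attenuates low frequencies progressively. Here we measure
#    the fraction of a sinusoid's energy that survives mean removal over the
#    window, averaged over phase.
def energy_retained(period):
    fracs = []
    for phase in np.linspace(0, 2 * np.pi, 32, endpoint=False):
        s = np.cos(2 * np.pi * t / period + phase)
        fracs.append(np.sum((s - s.mean()) ** 2) / np.sum(s ** 2))
    return np.mean(fracs)
```

Running energy_retained for periods of 45, 90 and effectively infinite "years" shows full retention, partial suppression and total suppression respectively: the r2 statistic is progressively blind to wavelengths longer than the verification window, while RE is not.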

I hope this makes sense to someone, as it doesn’t read back too clearly to me! I’m thinking out loud here, so I apologise in advance….

Anyway, this means that there is some band-limited variation in the reconstruction that one could argue RE catches more effectively than r2. But then you get into a slightly knotty problem…

You see, band-limited noise has a decorrelation distance. For example, if you have band-limited noise between DC and (say) 500 years wavelength, and compare samples (say) 50 years apart, they will be highly correlated. Now I believe MBH calibrated their reconstruction on the years 1902-1980 incl., leaving just a one year “guard” gap between the calibration and verification sections of data. This means any low frequency signals will be highly correlated between the verification and calibration samples. And Ammann believes that the inclusion of these frequencies, which would be highly correlated, improves the measure of statistical skill? I’m far from convinced.
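The decorrelation claim in the paragraph above is also easy to check by simulation. In this sketch the series length, the 500-year band limit and the 50-year lag are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Band-limited noise: white noise low-pass filtered so that only wavelengths
# longer than 'cutoff' samples survive (DC retained).
def bandlimited_noise(n, cutoff, rng):
    w = rng.normal(size=n)
    W = np.fft.rfft(w)
    f = np.fft.rfftfreq(n)                # frequencies in cycles per sample
    W[f > 1.0 / cutoff] = 0               # keep only periods longer than cutoff
    return np.fft.irfft(W, n)

n = 20000
x = bandlimited_noise(n, 500, rng)        # wavelengths > 500 "years" only

# correlation between samples 50 "years" apart
lag = 50
r = np.corrcoef(x[:-lag], x[lag:])[0, 1]
```

With the band limited to wavelengths above 500 years, samples 50 years apart come out highly correlated (r well above 0.8 here), which is exactly why a one-year guard gap between calibration and verification does little to make the low-frequency comparison independent.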

The frequency domain is a different way of thinking about RE and r2 than Steve’s approach of considering autocorrelation, but I believe they both come to the same conclusion: that the RE statistic, under these circumstances, is likely to have a higher value, but this value may be spurious. I would concur that multiple statistical tests are better than one, and that if RE is to be relied upon there should be a “guard” gap of greater than one year in the verification and calibration data sets (more like 45/35/45, or something like that). The small guard gap struck me the first time I read MBH, and I would have certainly raised that issue had I been reviewing the paper.

Spence, I really appreciate the comments from a frequency-domain perspective and you’ve described the RE-r2 issue in an interesting and original way that might actually shed some light on the matter. Econometricians tend to deal with autocorrelation in the time domain although, with some diligence in search, one can locate some frequency domain references. The tree ring people – from whom the RE originates – mostly deal in the time domain with a few unsophisticated sallies into the frequency domain.

Mann has written some articles about frequency domain and applied these methods in MBH98 (and still in Rutherford et al 2005). I’ve collected some of this and will make a separate post. There are a couple of points about his method that I’ve been unable to get much light shed on by econometricians and it sounds right up your alley.

From a time domain perspective (where I’m more familiar), I agree entirely with the 1-year guard issue. In any of the classic spurious regressions (with a spurious R2 – say between Honduras births and Australian wine exports, that type of thing), you can easily construct a calibration-verification pair that yields a high RE. Maybe it would be interesting to look at the classic spurious regression situation from a frequency point of view as well as the points of view of Phillips [1986] which has tended to overwhelm subsequent research.
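The construction described above is easy to exhibit numerically. In this sketch (all parameters illustrative, not MBH's actual data), two series share nothing but a common linear trend; the verification r2 comes out near zero, yet RE is comfortably positive simply because the calibration mean is a poor predictor of the earlier verification period:

```python
import numpy as np

rng = np.random.default_rng(0)

def re_stat(obs, pred, calib_mean):
    return 1 - np.sum((obs - pred) ** 2) / np.sum((obs - calib_mean) ** 2)

n_ver, n_cal = 45, 79               # MBH-like verification/calibration lengths
t = np.arange(n_ver + n_cal)
trend = 0.01 * t                    # the only shared (low-frequency) component

re_vals, r2_vals = [], []
for _ in range(200):
    obs  = trend + rng.normal(0, 0.5, t.size)
    pred = trend + rng.normal(0, 0.5, t.size)   # independent high frequencies
    ver, cal = slice(0, n_ver), slice(n_ver, None)  # verify on the early block
    re_vals.append(re_stat(obs[ver], pred[ver], obs[cal].mean()))
    r2_vals.append(np.corrcoef(obs[ver], pred[ver])[0, 1] ** 2)

print(np.mean(re_vals), np.mean(r2_vals))   # RE clearly positive, r2 near zero
```

The "reconstruction" here has no predictive relationship to the target beyond the trend, yet it passes an RE > 0 test on average, illustrating how a seemingly significant RE can be spurious.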

1) I presume I know what an R2 is, i.e., the percentage of variance explained by a regression which, in the case of a univariate regression, is the square of the correlation coefficient between the two series. But, what exactly is an RE?

2) I can understand how an R2 is computed when you have two series as you have in the verification period (where, I presume you calculate the R2 between predicted and actual temperatures). But, in the 15th century, we don’t have observed temperatures, so what are the two series?

3) How do you compute the significances of R2 and RE? (I understand how you can do it for R2 using standard regression methods.) Obviously, autocorrelation in the errors is extremely important in calculating the significance of R2. How do MBH and Steve correct for it? Presumably, autocorrelation is also very important in the significance of RE. How do you correct for it there?
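On question 3: there is no standard closed-form significance test for RE. The usual approach (and the one MM05 took, in far more elaborate form than this illustrative sketch, whose parameters are mine, not theirs) is Monte Carlo: push persistent noise series carrying no climate signal through the same calibrate-then-verify procedure and take a high percentile of the resulting null RE distribution as the benchmark.

```python
import numpy as np

rng = np.random.default_rng(1)

def re_stat(obs, pred, calib_mean):
    return 1 - np.sum((obs - pred) ** 2) / np.sum((obs - calib_mean) ** 2)

def ar1(n, rho, rng):
    # persistent AR(1) noise: x[t] = rho * x[t-1] + e[t]
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for i in range(1, n):
        x[i] = rho * x[i - 1] + e[i]
    return x

n_ver, n_cal = 45, 79
ver, cal = slice(0, n_ver), slice(n_ver, None)
target = ar1(n_ver + n_cal, 0.9, rng)          # stand-in "temperature" series

null_re = []
for _ in range(500):
    proxy = ar1(n_ver + n_cal, 0.9, rng)       # noise unrelated to the target
    a, b = np.polyfit(proxy[cal], target[cal], 1)  # OLS calibration
    pred = a * proxy + b
    null_re.append(re_stat(target[ver], pred[ver], target[cal].mean()))

benchmark = np.quantile(null_re, 0.95)         # 95% significance benchmark
```

An observed RE is then "significant" only if it exceeds this benchmark; the persistence (rho) of the noise matters a great deal to where the benchmark lands, which is exactly the benchmarking dispute discussed in the post.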

Just to explain the differences: when I say “suppress the effect of frequencies”, I am referring to the frequencies present in the reconstruction, which are not constrained to the DFT bins, and how they translate into the DFT bins.

You are right in saying that the DFT splits the reconstruction into a number of frequency bins, but it is wrong to assume other frequencies do not exist in the reconstruction. However these other frequencies do not cause a trivially understood response in the DFT; they cause a response associated with a sinc curve (i.e., (sin x)/x), and also cause sidelobe responses in other frequency bins along the sinc relationship (this relationship can be modified using weighting factors, but they are not appropriate in this type of analysis).

For example: if a DC signal exists in the reconstruction, it will appear in the DFT in the first frequency bin. Because the signal corresponds exactly to a frequency in the DFT, the contribution of this signal to the other bins is zero. (Sorry, DC is remote sensing speak for a mean offset)

If a 1/45 year signal exists in the reconstruction, it will appear in the DFT in the second frequency bin. Again, the signal corresponds exactly to a frequency in the DFT, so the contribution to other bins is zero.

If some other signal – say 1/90 year signal – exists in the reconstruction, the response in the DFT will be the convolution of the section of signal (zero padded in either direction to infinity) with a sinusoid at each frequency in the DFT. This response will follow samples along a sin x/x curve in the absence of any weighting windows, which is the effect I was referring to above.

The comparison is a bit like zero-padding a signal and looking at the DFT of that.
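The leakage behaviour described above can be demonstrated in a few lines; the window length here matches the 45-year example, and the signals are illustrative:

```python
import numpy as np

N = 45
t = np.arange(N)

# On-bin signal: exactly one cycle across the window. All of its energy lands
# in DFT bin 1; the other bins are zero because the DFT basis is orthogonal
# at integer numbers of cycles per window.
X_on = np.abs(np.fft.rfft(np.cos(2 * np.pi * t / 45)))

# Off-bin signal: half a cycle across the window (a 90-year period). Its
# energy fits no bin exactly, so it leaks into many bins along a (sin x)/x
# envelope -- including the DC bin.
X_off = np.abs(np.fft.rfft(np.cos(2 * np.pi * t / 90)))
```

X_on is nonzero only at bin 1, while X_off is spread across several bins, which is the sinc-shaped sidelobe response described above.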

I’m afraid my explanation probably isn’t that clear, because this is all typed straight out into the blog, rather than carefully proof-read for clarity.

Steve –

As you know, I’ve been interested in the relationship between RE and R2 and this is one thing that has been turning cogs at the back of my mind for a while, but that I haven’t really thought through to a conclusion. Ordinarily I wouldn’t turn to the frequency domain to understand these things because it isn’t the most intuitive approach, but it may just be that it happens to capture the differences between the two statistics quite neatly. It should be straightforward to play out some Monte Carlo experiments of this type in MATLAB without stretching the grey cells too far!

I did not assume that “other frequencies do not exist in the reconstruction.” I simply pointed out the limitations of the sample period. Clearly frequencies below 1/period can, and do, exist. That doesn’t preclude the fact that certain conditions have to be met to capture those frequencies in the samples. Zero padding cannot extract information that is not there.

I think we are talking slightly at cross-purposes here. I’m not sure you quite follow my line of reasoning, where I’m coming from and where I’m trying to get to. This may be partly my fault for doing a brain dump to the blog rather than explaining in a structured way.

Just to clarify, I didn’t mean to come across as saying you assumed there are no other frequencies in the reconstruction, I simply stated that it would be wrong to make that assumption. I’m trying to clarify my point by including all details.

Clearly frequencies below 1/period can, and do, exist. That doesn’t preclude the fact that certain conditions have to be met to capture those frequencies in the samples.

Okay, you’re getting close to where I’m coming from here. Let me try and explain it in a different way.

We start off with a 580-year reconstruction. Clearly that contains resolvable frequencies using a harmonic model down to 1/580 year. We are analysing that by looking at a 45-year portion of that reconstruction. As you point out, these low frequencies don’t map cleanly to the shorter distribution. When we chop off the DC bin, analogous to removing the mean in the R2 statistic (a step that does not take place in the RE statistic), we modify how the DFT responds to these lower frequencies.

What I’m actually interested in is one step further back from this – what is the consequence of doing this on an unconstrained system, and a system constrained by matching a 79-year portion for mean and variance, then assessing over a 45-year sampled portion of that curve.

My assertion is that the low frequency components coupled in via the assumptions and constraints associated with the harmonic model are the same as the assumptions and constraints of zeroing the DC bin of the DFT. Therefore any consequences in terms of correlation distance could be assessed by looking at the band pass associated with the DC bin of a 45-point DFT, on an unconstrained system. And that is equivalent to applying a sinc function to an unconstrained system.

While we’ve got frequency-domain issues open, I’m going to post up a note on how Mann tests for white noise-ness using a frequency domain procedure. It is a separate issue from RE and R2. I haven’t waded through it. It is essentially a statistical method which was published in Climatic Change – which is an interesting journal – but not a good journal to introduce statistical novelties, even if valid. His methods for testing for white noise are very different from usual econometric methods as you will see.

I was unable to access any of them although I was able to access other files at UCAR. On earlier occasions, I have been blocked from Mann’s FTP site and from Rutherford’s FTP site. Could someone check whether they can access Ammann’s webpage? If you can, then the Team has made another block. It seems surprising that employees of federally funded agencies can engage in such petty personal forms of discrimination, if indeed, that’s what Ammann has done.

I can’t access it either. Their web server appears to be running (I can connect on the HTTP port) but is not responding to requests. Seems like the web server software has crashed or the OS is in a bad state.