Alchemy or Science?

One of the examples of spurious regression mentioned in Phillips 1998, quoted by Eduardo Zorita at CPD and previously here by me, was taken from Hendry 1980, from an article entitled “Econometrics – Alchemy or Science?”. Hendry is a very eminent professor of economics and the article proved to be as interesting as its title.

Before I present Hendry’s example of spurious regression, here are some extended quotes from Hendry’s criticisms of econometrics, which are very reminiscent of views expressed here about multiproxy studies. The resemblance is not accidental, as that is the framework from which I approach the topic. One of my regrets about the NAS panel was that they saw fit not to include a statistician qualified on these issues.

Hendry introduces his comments with an extended summary of Keynes’ 1940 critique of Tinbergen’s econometric modeling, an article worth reading in its own right, as Keynes is perhaps the leading 20th-century economist (and one who plied statistical waters in his early days – I noticed a 1908 article coauthored by him and Yule, of spurious regression renown). Hendry:

Despite its obvious potential, econometrics has not had an easy time from many who have made major contributions to the development of economics, beginning from Keynes’ famous review in 1939 of Tinbergen’s book Statistical Testing of Business-Cycle Theories. In an oft-quoted passage in his Comment (1940, 156), Keynes accepts that Tinbergen’s approach is objective but continues:

No one could be more frank, more painstaking, more free from subjective bias or parti pris than Professor Tinbergen. There is no one therefore so far as human qualities go, whom it would be safer to trust with black magic. That there is anyone I would trust with it at this present stage or that this brand of statistical alchemy is ripe to become a branch of science, I am not yet persuaded.

His objections make an excellent list of what might be called “problems of the linear regression model”, namely (in modern parlance): using an incomplete set of determining factors (omitted variables bias); building models with unobservable variables (such as expectations), estimated from badly measured data based on index numbers (Keynes calls this “the frightful inadequacy of most of the statistics”); obtaining “spurious correlations” from the use of “proxy” variables and simultaneity, as well as (and I quote) “the mine [Mr Yule] sprang under the contraptions of optimistic statisticians”; being unable to separate the distinct effects of multicollinear variables; assuming linear functional forms without knowing the appropriate dimensions of the regressors; mis-specifying the dynamic reactions and lag lengths; incorrectly pre-filtering the data; invalidly inferring “causes” from correlations; predicting inaccurately (non-constant parameters); confusing statistical with economic “significance” of trends; and failing to relate economic theory to econometrics. To Keynes’ list of problems, I would add stochastic misspecification, incorrect exogeneity assumptions, inadequate sample sizes, aggregation, lack of structural identification, and an inability to refer back uniquely from observed empirical results to any given initial theory.

Hendry reports that these issues recur in the 1970s in a variety of articles by leading economists (including Leontief):

An echo of this debate recurs in the early 1970s. For example, following a sharp critique of mathematical economics as having “no links with concrete facts”, Worswick (1972) suggests that some econometricians are not “engaged in forging tools to arrange and measure actual facts” so much as making “a marvellous array of pretend tools”. In the same issue of the Economic Journal, Phelps Brown (1972) concludes against econometrics, commenting that “running regressions between time series is only likely to deceive”. Added to these innuendoes of “alchemical” practices, Leontief (1971) has characterized econometrics as an “attempt to compensate for the glaring weakness of the data base available to us by the widest possible use of more and more sophisticated statistical techniques”. To quote Hicks, “the relevance of these methods to economics should not be taken for granted”; Keynes would not have been surprised to find that econometrics is now “in some disarray” (1979, xi).

Hendry starts into his own example with the following wonderful phrase:

Econometricians have their Philosophers’ Stone; it is called regression analysis and is used for transforming data into “significant” results! Deception is easily practised from false recipes intended to simulate useful findings and these are derogatively referred to by the profession as “nonsense regressions”.

Now for the example quoted by Phillips, an equally or even better known econometrician. Hendry first shows graphs and models relating price levels to money supply, an important economic theory. After presenting these results, he then presents an alternative theory with a number of illustrations, two of which are shown below:

A second example will clarify this issue. Hendry’s theory of inflation is that a certain variable (of great interest in this country) is the “real cause” of rising prices. I am certain that the variable (denoted C) is exogenous, that causality is from C to P only and (so far as I am aware) C is outside government control although data are readily available in government publications.

Of the relationship in the above figure, Hendry reports:

there is a “good fit”, the coefficients are “significant”, but autocorrelation remains and the equation predicts badly. However, assuming a first-order autoregressive error process at last produces the results I anticipated; the fit is spectacular, the parameters are “highly significant”, there is no obvious residual autocorrelation (on an “eyeball” test) and the predictive test does not reject the model [see the Figure below].

Hendry then explains how he was able to improve on monetary theory of inflation:

My theory performs decidedly better than the naïve version of the monetary one, but alas the whole exercise is futile as well as deceitful since C is simply cumulative rainfall in the UK. It is meaningless to talk about “confirming theories” when spurious results are so easily obtained.

Doubtless some equations extant in econometric folklore are little less spurious than those I have presented. Before you despair at this hopeless subject, the statistical problem just illustrated was identified in one of its manifestations by Yule in 1926 and has been re-emphasized many times since (see in particular Granger and Newbold 1974).
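Hendry’s “nonsense regression” is easy to reproduce. Here is a minimal sketch (not Hendry’s actual data, just the Yule/Granger-Newbold setup): regress one pure random walk on another, independent one, and the conventional t-test “finds” a relationship far more often than the nominal 5% would suggest.

```python
import numpy as np

rng = np.random.default_rng(0)

def slope_t(y, x):
    """OLS t-statistic on the slope in y = a + b*x + e."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

n, trials = 100, 500
reject = sum(
    abs(slope_t(np.cumsum(rng.standard_normal(n)),   # random walk 1
                np.cumsum(rng.standard_normal(n)))) > 1.96  # independent walk 2
    for _ in range(trials)
)
rejection_rate = reject / trials
print(f"nominal 5% test rejects in {rejection_rate:.0%} of trials")
```

The rejection rate comes out far above 5% – the classic Granger-Newbold result – even though the two series are independent by construction.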

In any of these guises, the relationships shown above will have a massively significant RE statistic. The lesson to be learned from this is surprisingly easy, and, as I currently formulate it, I wonder why it’s taken me so long to pose it as I am going to do today. Before doing so, I’ll mention that Hendry says that there are adequate tests for spurious relationships in this univariate setting:

We understand this problem and have many tests for the validity of empirical models (those just quoted fail two such tests). [The two chi-squared values in Figure 8 are a (likelihood ratio) test for a common factor and a Box-Pierce test for residual autocorrelation respectively – see Sargan 1975, Mizon and Hendry 1980, Pierce 1971 and Breusch and Pagan 1980 – both of which “reject” the model specification.]

The lesson for the RE test is surprisingly simple beneath all the hyperventilating: statistical tests all have a purpose – to test against null hypotheses. But watch what the econometricians do – they have different tests for different purposes. Let’s apply this to some of the recent examples.

Rutherford et al 2005 (and Wahl and Ammann 2006) argue that the verification r2 test is a poor test because (using more precise statistical terminology not used by them) it has little power against a jump-shift in the mean of a series that still preserves high-frequency relationships. I agree that the statistic has little power against (i.e. little ability to discriminate against) this event, but this is not a situation that has any practical relevance. While the verification r2 test would fail to identify this unlikely event, other tests would (and the RE test is useful for that).
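To make the power argument concrete, here is a hypothetical sketch (my own construction, not Rutherford’s): a “reconstruction” that tracks every wiggle of the observations but misses a level shift scores a perfect r², while the RE statistic (computed here against a zero calibration-mean benchmark) is penalized by the missed shift.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 50
signal = rng.standard_normal(n)
obs = signal + 2.0    # observed series with a jump-shift of +2 in the mean
recon = signal        # "reconstruction": every wiggle right, mean shift missed

r2 = np.corrcoef(obs, recon)[0, 1] ** 2

# RE against a zero (calibration-mean) benchmark
re = 1 - np.sum((obs - recon) ** 2) / np.sum(obs ** 2)

print(f"verification r^2 = {r2:.2f} (blind to the shift)")
print(f"RE = {re:.2f} (penalized by the shift)")
```

The r² is exactly 1 because correlation is invariant to a constant offset; RE, which is not offset-invariant, takes the hit.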

Mann points out that the RE test works well against a null of the proxy reconstruction being AR1 red noise – fair enough. The only problem is that nobody is suggesting that the proxy reconstruction is a simple AR1 red noise series.
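A sketch of that null as I understand it (parameter choices here are mine, purely illustrative): generate AR1 red-noise “reconstructions” unrelated to the target and look at the distribution of RE. Under this null, RE is almost always negative, so even a modest positive RE looks “significant” – which says nothing about the nulls that actually matter.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1(n, phi, rng):
    """Generate an AR(1) 'red noise' series."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def re_stat(obs, recon):
    """RE with a zero (calibration-mean) benchmark."""
    return 1 - np.sum((obs - recon) ** 2) / np.sum(obs ** 2)

n, trials, phi = 79, 1000, 0.5
res = [re_stat(rng.standard_normal(n), ar1(n, phi, rng)) for _ in range(trials)]
print(f"median RE under the AR(1) null: {np.median(res):.2f}")
print(f"95th percentile: {np.percentile(res, 95):.2f}")
```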

The nulls that need to be tested for multiproxy studies are completely different:
(1) a spurious trend, e.g. CO2 fertilization in bristlecones (a more or less classic “nonsense regression”, but complicated by the multivariate setting, where less is known about spurious regression effects);
(2) biased cherry-picking of red noise series – a different but potentially interrelated mechanism.
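Null (2) can be illustrated with a toy simulation (all choices hypothetical): screen a few hundred unrelated red-noise “proxies” by their correlation with a target over a short calibration window, then average the survivors. The composite correlates strongly with the target in calibration by construction, even though every input is pure noise.

```python
import numpy as np

rng = np.random.default_rng(3)

def ar1(n, phi, rng):
    """AR(1) red-noise series."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

n, n_proxies, phi = 120, 200, 0.7
target = rng.standard_normal(n)         # "temperature": pure white noise
calib = slice(70, 120)                  # 50-step "instrumental" window

proxies = np.array([ar1(n, phi, rng) for _ in range(n_proxies)])
cors = np.array([np.corrcoef(target[calib], p[calib])[0, 1] for p in proxies])
picked = proxies[np.argsort(cors)[-10:]]   # keep the 10 best correlated
composite = picked.mean(axis=0)

r_calib = np.corrcoef(target[calib], composite[calib])[0, 1]
print(f"calibration correlation of the screened composite: {r_calib:.2f}")
```

The selection step guarantees apparent calibration “skill”; an RE or r² computed over that same window tells you nothing about whether the proxies contain any signal.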

Thinking about how to test against these nulls is what needs to be done. The RE statistic is completely useless against these nulls. Expressed this way, one sees what idle puffery is contained in recent ruminations by Mann and his disciples about why one should ONLY look at the RE statistic – a conclusion NOT adopted by the NAS Panel, although they were pretty inarticulate about their reasons. (As I noted above, wouldn’t it have been nice if someone of Hendry’s background had been invited to be on the panel instead of even one of Bloomfield or Nychka?)

We understand this problem and have many tests for the validity of empirical models (those just quoted fail two such tests). [The two chi-squared values in Figure 8 are a (likelihood ratio) test for a common factor and a Box-Pierce test for residual autocorrelation respectively – see Sargan 1975, Mizon and Hendry 1980, Pierce 1971 and Breusch and Pagan 1980 – both of which “reject” the model specification.]

However, we must still be careful not to think that measurements alone can resolve these sorts of issues. Would the reader have been convinced that the model in Figure 8 was seriously deficient had it not been revealed first that C was cumulative rainfall in the UK? I grant that the test statistics reveal a problem with the model, but the fundamental defect of the model as a representation of a theory of inflation is not revealed by the test statistics alone. One might tinker with the specification some more (as was done in going from Figure 7 to Figure 8) to avoid problems revealed by the test statistics. In the end, however, we reject all variants of the model because there is no feasible economic theory that would give such a strong link between rainfall and inflation. One cannot do science by engaging in measurement without theory, and a scientific theory or hypothesis is of no value if it cannot be tested with some real world measurements. The statistical tests can reveal problems against some types of null hypotheses, but in the end we also need the statistical model to be a credible representation of some underlying process we can understand on the basis of some theoretical framework.

Applying this to the problem at hand, I agree that we need statistical tests to have power against the relevant alternatives such as your (1) and (2) above. I agree that this is a fundamental problem with Mann relying on RE alone. However, the other critical strike against his framework is that it elevates a proxy (bristlecone tree rings) to a pre-eminent status as a measurement of temperature when the variable is not strongly positively correlated with the local temperature, and the notion of “teleconnections” to a global temperature field is not a credible explanation for why it should work.

I have been banging on about this before, but no one here seems to read my posts.

This is all about unit roots. When Hendry wrote this article, economists and econometricians were grappling with spurious regressions arising from time series. The article nicely captures the state of flux in thinking at that time. It is the classic undergraduate introduction to the problem of unit roots (non-stationary variables). Over the following 10 years econometrics was virtually solely focused on addressing this problem, and nowadays it is axiomatic.

It was only in 1979 that the Dickey-Fuller test for a unit root (i.e. a non-stationary series) was published. It wasn’t until 1987 that Engle & Granger published the seminal work on how to specify, simulate and test for valid models under such conditions. Then Johansen pushed that work further for multiple-equation systems (which I think would provide the basis for an historic paper in this field).
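For readers who want to see the mechanics, here is a minimal Dickey-Fuller-style check (no augmentation or trend term; note that the t-ratio must be compared against Dickey-Fuller critical values, roughly -2.89 at the 5% level for this sample size, not the usual t-table ones): regress the first difference on the lagged level.

```python
import numpy as np

rng = np.random.default_rng(4)

def df_t_ratio(x):
    """t-ratio on the lagged level in dx_t = a + b*x_{t-1} + e_t."""
    dx, lag = np.diff(x), x[:-1]
    X = np.column_stack([np.ones(len(lag)), lag])
    beta, *_ = np.linalg.lstsq(X, dx, rcond=None)
    resid = dx - X @ beta
    s2 = resid @ resid / (len(dx) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

n = 200
walk = np.cumsum(rng.standard_normal(n))   # unit root
stationary = np.zeros(n)                   # AR(1) with phi = 0.5
for t in range(1, n):
    stationary[t] = 0.5 * stationary[t - 1] + rng.standard_normal()

t_walk, t_stat = df_t_ratio(walk), df_t_ratio(stationary)
print(f"random walk t-ratio: {t_walk:.2f} (usually above -2.89: cannot reject a unit root)")
print(f"stationary AR(1) t-ratio: {t_stat:.2f} (well below -2.89: unit root rejected)")
```

In practice one would use an augmented version with lagged differences (e.g. `statsmodels`’ `adfuller`), but the bare-bones form above shows the idea.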

My suggestions for your reading, Steve, which I think would lead you in the right direction, would be:

Engle & Granger – 1987
Johansen – 1988 (?)

After addressing that, I suggest you move on to:

Omitted variables
Endogeneity (this has MAJOR implications for the extrapolation of estimated models – forecasting – which I think is the natural progression from where you are now)

re #5: Paul, I’ve been reading your posts if that is of any comfort 🙂 As for unit roots, I’m sure Steve is aware of those issues as they are now covered even in introductory time series texts, see, e.g., C. Chatfield: The Analysis of Time Series – An Introduction, 6th edition, Chapman & Hall/CRC, 2004.

BTW, the above book has a funny inscription:

“Alice sighed wearily. ‘I think you might do something better with the time,’ she said, ‘than waste it in asking riddles that have no answers.’

‘If you knew Time as well as I do,’ said the Hatter, ‘you wouldn’t talk about wasting it. It’s him.’

‘I don’t know what you mean,’ said Alice.

‘Of course you don’t!’ the Hatter said, tossing his head contemptuously. ‘I dare say you never even spoke to Time!’

‘Perhaps not,’ Alice cautiously replied: ‘but I know I have to beat time when I learn music.’”

#5. Paul, I also pay attention to what you post. I’ve been plugging away at mathematical issues. I’m much slower at assimilating math than I would like.

I posted Hendry not so much to dump on econometrics but to provide a listing of criticisms and issues from the middle history of econometrics that seem to apply to multiproxy climate reconstructions. I’m going to post up some notes on Keynes’ critique of Tinbergen since it’s amusing.

#2, Peter H, I agree that the ultimate issue with a nonsensical theory is that it is nonsensical, not that it fails a statistical test. One big difference between an audience of economists (or even lay people, for that matter) and climate scientists is that lay statisticians readily understand that, if there are caveats about bristlecones as thermometers expressed in the specialist literature (CO2 fertilization or whatever), you can’t just go ahead and use them as the essential world thermometer.

#5 again. Many of the tests for spurious regression rely on autocorrelated residuals. As I’ve mentioned before, MBH99 admits to autocorrelated residuals (expressed by Mann in frequency terms). Mann simply “adjusts” the confidence intervals in a methodology that neither Jean S nor I have been able to figure out, although we’ve made progress. (Neither can von Storch.) I asked the NAS Panel to explain these methods, but they didn’t. The Team defence against failing tests for autocorrelated residuals is that these are “high-frequency” tests and “low-frequency” is what is of interest. At least the NAS Panel recognized that there are so few degrees of freedom at low frequency that confidence intervals are impossible.
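The Box-Pierce check Hendry cites is simple to sketch (a toy example of my own, not MBH residuals): residuals from a spurious levels regression of one random walk on another are strongly autocorrelated, and the Q statistic – n times the sum of squared residual autocorrelations, approximately chi-squared under the null – flags it immediately.

```python
import numpy as np

rng = np.random.default_rng(5)

def box_pierce_q(resid, m=10):
    """Box-Pierce Q = n * sum of squared residual autocorrelations, lags 1..m."""
    r = resid - resid.mean()
    denom = r @ r
    acf = [(r[k:] @ r[:-k]) / denom for k in range(1, m + 1)]
    return len(resid) * sum(a * a for a in acf)

n = 200
x = np.cumsum(rng.standard_normal(n))
y = np.cumsum(rng.standard_normal(n))   # independent random walk
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

q = box_pierce_q(resid)
print(f"Box-Pierce Q (10 lags) = {q:.0f}; the chi-squared 5% cutoff is about 18.3")
```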

Nice clear post. It shows that the general worry about the hockey team is due to a lack of application of generally accepted approaches.
#5 I note in Rutherford 05 that it is admitted that RegEM assumes stationarity, but they then go on to claim that, based on low historic variance, this assumption isn’t a problem. Not tested, assumed, then concluded – circular.

Such methods arguably depend more heavily on assumptions about the stationarity of relationships between proxy indicators and large-scale patterns of climate variability than the “local calibration” approach. Model experiments suggest this probably is not problematic for the range of variability inferred for recent past centuries (Rutherford et al., 2003). Reconstructions of the more distant past (e.g. the mid-Holocene – Bush, 1999; Clement et al., 2000) would require, however, a more careful consideration of stationarity issues.

Great post, Steve. I do not understand it all, but I agree that the ultimate test is simply “does this make sense?” It makes absolutely no sense to cherry pick bristlecone pines and then regress ring width with GLOBAL temperatures, when they don’t correlate with local temperatures. Who needs fancy statistical tests, when the basic methodology is so stupid?

The pessimistic interpretation of what Steve has illuminated is that there are lots of inappropriate empirical methods out there that can pass muster with most of the climate science community. The statistical deconstruction of these inappropriate methods is a tough task that not many are up to. Therefore, the future will see two parallel tracks for climate research. One will be the studies that support climate alarmism using dubious methods. The other will be the intricate work of sorting out the good claims from the bad.

After all, we have had bad statistics used to make hundreds of bogus cancer claims for decades now. The best that can be said is that the public learns to apply a discount factor to the claims.

Gerd Bürger has not seen Mann et al 2006 either. The rate at which the Hockey Team “moves on” grows ever faster. The mere mention that Mann et al 2006 is in press will doubtless send all critics into paroxysms of fear and they will retire from the field. I’ll bet it’s in Journal of Climate courtesy of Andrew Weaver and Gavin Schmidt.

IMO, statistics aside, the Hokey Team’s work is dead, scientifically, because:
1. They have no reasonable basis for selecting (cherry-picking) their proxies. It is obvious that they bias their sampling to achieve a pre-conceived theory.
2. It cannot be shown that tree ring widths or densities are proper indicators of temperature. Even if they were indicators, the relationship is not linear.
3. They do not cooperate with auditors by sharing data and code.
4. Their methodology for correlating tree ring growth with temperature doesn’t make any sense (e.g., correlating with “global” mean temperature).
5. Their constructions do not show the LIA or MWP, when there is now no doubt about their existence.
6. They will not actively debate their work with “skeptics,” whom they simply wave off as “denialists.”
7. They admit no errors, no matter how slight (ironically, they are the denialists).

To ANY scientist, this is not science. They are casting a shadow on responsible climate scientists, and I’m glad MM have brought this to the world’s attention.

Hate to tell you Jim, but Steve regularly admits errors as well as invites criticism/comments from the very authors he is admonishing! Sorry, but your “overinflated ego” thesis w.r.t. Steve and Ross is not based on fact.

I should leave this for SM, but Jim Barrett (#23), perhaps if you were to point out SM’s alleged mistakes so he could reply, you might see if he does either admit or cogently argue whether they are mistakes. Otherwise your statement is simple puffery.

In contrast, with Michael Mann there are a number of well posed questions which he either will not or cannot answer. It is not necessarily that he is wrong in every case, but in the absence of any serious attempt to answer one must assume that he is.

So, put up your questions and see what response you get. Some posters have posed excellent questions to SM, and to my knowledge, where an error is pointed out, SM generally acknowledges it.

With reference to what I said in posting 23 (“let’s have a list of the errors Steve has admitted”), I can confidently say that I am not qualified to say whether Steve is right or wrong on many issues. However I am qualified to make the observation that I cannot recall an instance where he has admitted a mistake.

Now Mark claims “Steve regularly admits errors” and also that my “overinflated ego” thesis w.r.t. Steve and Ross is “not based on fact”. Let’s first point out that I never even mentioned Steve or Ross in this regard – what I said was that “there are a few overinflated egos here – on both sides of the argument” – that fact is very obvious.

However, Ed Snack throws the ball back to me with: “if you were to point out SM’s alleged mistakes so he could reply, you might see if he does either admit or cogently argue whether they are mistakes”. As I said, I am not qualified to point out Steve’s mistakes – only to say that he doesn’t seem to admit any. However, since Mark clearly believes that “Steve regularly admits errors”, why doesn’t Mark just provide a “list of the errors Steve has admitted” as I originally asked for?

Spurious regression is a technical term in econometrics as Steve has rightly insisted. My copy of Introductory Econometrics by Jeffrey Wooldridge has a glossary at the end. Under “spurious regression problem” we find the following:

“A problem that arises when regression analysis indicates a relationship between two or more unrelated time series processes simply because each has a trend, is an integrated time series (such as a random walk), or both.”

It follows an entry for spurious correlation, which reads:
“A correlation between two variables that is not due to causality, but perhaps to the dependence of the two variables on another unobserved variable.”

Econometricians have developed ways of avoiding this problem, which Paul keeps talking about. And they begin by looking to see if the individual series are stationary or not. If they are not then there might be problems.

I am not qualified to talk numbers either. And I think you are trying to be fair, somehow. But I think you should look deeper into your reasoning.

How are the mistakes SteveM makes while piecing through this mess (when the authors don’t cooperate with him) any comparison to the mistakes the authors of these climate models make? Remember, the NAS panel agreed with Steve and Ross. And Steve isn’t publishing several papers every year while ignoring mistakes he’s made.

These models were accepted with huge flaws by “consensus” all over the world. They use these models to scare people into worry, into voting a certain way, and into paying higher taxes; the regular Joe does not have a clue what’s going on. This side of the story doesn’t get any main stream publicity. I saw at least 3 shows on TV this weekend about GW, no mention of the NAS or Steve’s work.

On top of that, Steve and Ross just had the NAS panel validate the major mistakes they cited in these models. The NAS agreed with Steve and Ross.

Do you think what you are asking for is really fair or maybe something you should consider looking at more fairly?

Now Mark claims “Steve regularly admits errors” and also that my “overinflated ego” thesis w.r.t. Steve and Ross is “not based on fact”. Let’s first point out that I never even mentioned Steve or Ross in this regard – what I said was that “there are a few overinflated egos here – on both sides of the argument” – that fact is very obvious.

But you earlier said #23:

“They admit no errors, no matter how slight”

Could you not say exactly the same of Steve M? Let’s have a list of the errors Steve has admitted.

It seems to me that one of the unfortunate facts about this whole debate is that there are a few overinflated egos here – on both sides of the argument!

I suppose you could argue that your “ego” remark applied not to Steve M, but to the commentators. If so, the response would be “well done Sherlock!”

As others have pointed out, Steve occasionally corrects mistakes, but they are generally not substantive. M&M’s published work is very robust to criticism, except where undisclosed aspects of MBH methods have wrongfooted critics (e.g. MM 03, but also Von Storch & B&C). Even in these instances, minor imperfections in emulation of MBH methods have turned out not to be important, despite what Real Climate might have us believe.

I have corresponded with Steve since well before this blog even started, and “overinflated ego” is the last appellation I would ever apply to him.

Jim,
I’m not sure if you are saying that it’s OK for Mann to not admit his mistakes because (as far as you know) Steve M does not, or if you are just making an ad hom attack on Steve M, but either way it’s a logical fallacy. Whether Steve M admits his mistakes or not has nothing to do with the claim that Mann does not. As far as I’m concerned, the evidence that Mann won’t admit his mistakes is pretty overwhelming, but that has nothing to do with Steve M’s conduct, or mine for that matter.

I don’t claim any particular virtue, but I’ve learned over the years that, if you’ve made an error, you’re better to cut your losses as quickly as possible, admit the error, assess the consequences and get on with things.

I’ve also had enough ups and downs in my life to know that you’re never as smart as you think you are when things are going well and never as dumb as you think you are when things are going poorly.

I don’t think that the gotcha mentality is very useful. For example, Ross McKitrick made a programming error in cos latitude in a publication unrelated to me. The error was identified because he provided full disclosure of all data and code. When the error was pointed out, he promptly acknowledged and corrected it. Critics crowed over this error as though it somehow discredited our critique of MBH. I thought the opposite – it showed that errors in work can arise and that checking is worthwhile.

Having said that, our published work has been very clean in terms of errors.

MM03 did not implement an undisclosed method of MBH in our emulation. We had requested additional methodological details and were refused. This didn’t directly affect any specific claims of MM03 and led to many further interesting results after MM03 flushed out much further information about MBH.

In retrospect, I would have preferred to have submitted the simulations of the Reply to Huybers on RE statistics rather than the simulations in MM05a, although I’m not prepared to concede that the original simulations were “wrong”. It would have been hard to have done them as well at the time, since the later simulations depended on another undisclosed MBH methodology, again flushed out during the process. I would also have expanded the discussion of correlation versus covariance PCs, an issue also raised by Huybers. We reported the differences, but could have provided more detail.

I’m surprised at how few errors I’ve made on the blog, considering that I’m often posting straight to the blog (as I am now).

I suggested that a change in the plot of Crowley and Lowery 2000 from Mann et al (Eos 2003) to Jones and Mann 2004 showed that Mann KNEW that the Crowley and Lowery 2000 reconstruction contained a splice of proxy and instrumental records, contrary to a claim that he was then making at realclimate. Tim Lambert argued plausibly that the Eos 2003 plot of Crowley and Lowery was not actually Crowley and Lowery, but a bizarre displaced version of MBH99. This possibility had not occurred to me and I acknowledged that Mann’s change of version might simply be coopering up this bizarre error in Eos, as opposed to evidencing knowledge of splicing (my original point that Crowley CL2 reconstruction included splicing being correct.) Tim Lambert also thought that I’d misrepresented whether or not he’d blocked John A from posting on his blog, as he had previously blocked per. I don’t think that I did, but the issue is indescribably boring and irrelevant to our substantive points.

– called the PC1 “the reconstruction” in discussion with JohnC.
– responded to questions where you should have specified the reconstruction impact rather than the PC1 impact, without a caveat/clarification, thus confusing viewers; this difference overstates the impact of the flaw, and it should be evident that your viewers usually don’t catch the difference.
– failed to specify the amount that the reconstruction changes based on off-centering alone (leading to some confusion, including by one of the GRL reviewers).
– at times in discussion confounded flaws within the Mann procedure, or, when discussing one flaw in detail, tried to bring in the other flaws (a logical fallacy).
– poorly written poster and powerpoint for some presentations.

Could you not say exactly the same of Steve M? Let’s have a list of the errors Steve has admitted.

Let’s first point out that I never even mentioned Steve or Ross in this regard

Sorry Jim, but these two comments are in direct contradiction of each other, and both made by you. And, if you have never seen Steve M. admit his errors, then no, you are not qualified to speak to the matter. This logic alone is a logical fallacy – absence of proof is not proof to the contrary, affirmation of the consequent – and you obviously have not been engaged in all the discussions.

Interesting to review these posts since #23. Jim Barrett makes an assertion. But instead of accusing him of being a “shill” or a “denialist” or “in the pay of somebody or other”, the responses cordially discuss the issue and provide thoughtful responses to his question. Not only that, but the principal of the site weighs in and provides respectful comment. He is held to account by another frequent contributor. And other contributors discuss the logic issues involved. So far as I can see, no ad hominems are involved (unless perhaps the references to “ego” being evident on both sides). Nor any editing or blocking of posts.

Seems to me to be a refreshing approach compared with what we see on other blogs!

Steve falsely claimed that I blocked John A from accessing (not posting at) my site. Rather than admit to making a mistake, he complains about how unimportant and boring the issue is. If he won’t admit error on an unimportant issue, how likely do you think it is that he’ll admit to making an error on something important?

#41. I was wondering how long it would take Tim Lambert to start arguing about blocking John A at his site. I said that there was insufficient evidence to say whether or not he had blocked John A, but he had admitted to blocking per. Tim’s defence: “I shot the sheriff, but I did not shoot the deputy”.

You’ll notice Tim’s total absence from discussions of Mann, including the interesting recent discussion of Figure 7 or any substantive comment. So if that’s the best that Tim’s got, I think that it’s not very substantive. We all eagerly await his “Mann Screws It Up Again” column, which is going to be very lengthy now. Tim, no more discussion on blocking or not blocking of John A at your site please. You’ve had an ample say. I don’t have time or interest to discuss it further with you.

re #41: Hi Tim, haven’t seen you for a while here. Did you manage to reproduce those correlations?

Oh, BTW, I think I found some interesting things in Rutherford (2005); for instance, it seems that they may have screwed up their instrumental temperatures. I could publish those findings in your blog first, would you be interested? I’m thinking of a header like “Was It Scott or Did Mann Screw It Up Again?”.

I was “blocked” at the same time as JohnA, and the definite impression I had was that it was an issue with connectivity, not a deliberate blocking. Lots of people were unable to access the site at that time. I thought Steve was a little bit wiggling with the non sequitur about Per (which was wrong, too, blablabla, but it’s still Mannian to come up with discussion of issue B when talking about issue A). That said, Tim is a worm for only arguing about what people argued about, for not engaging on the content, and for lacking the balls to take on the liberals. (But note this is a non sequitur to the JohnA banning thing.)

Oh…and Bruce, the rules DO apply to me. But I don’t always follow them. And I don’t want you thinking things are too genteel here. Want to mix some ad hom in.

So Steve raises the issue of his claim that I blocked John A and then complains when I correct his misrepresentations. And then misrepresents the facts. Again.

Steve, if you want to write about it, could you at least TRY to describe what happened accurately? I did not admit to blocking per because I didn’t do it. I banned per from commenting for a few days because he abused another commenter. I did not block him from reading my site.

Steve claimed that I blocked John A from reading my site. I didn’t. Steve will not admit that his claim was false. And this is hardly the only example. Steve just will not admit to making mistakes.

Neither von Storch nor McIntyre seem to think that the weighting issue is very important. Von Storch just mentions it in passing and McIntyre has not bothered to find out what effect it has on the final reconstruction.

re #46: Tim, back to science. As you seem to be such an expert in these areal weighting issues, how exactly is one supposed to weight the grid-box data in Rutherford (2005) in order to get the annual NH mean? They use 5×5 grid cells with centers at 2.5N through 67.5N.
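[For what it’s worth, the standard answer is to weight each grid box by the cosine of its central latitude, since boxes of fixed angular size shrink in area toward the pole. A minimal sketch in Python; the function name and the uniform test values are mine, not anything from Rutherford (2005):]

```python
import math

def nh_mean(cell_values, lat_centers_deg):
    """Area-weighted mean of grid-box values: each 5x5 cell is
    weighted by cos(latitude of its center), the usual proxy
    for the cell's relative surface area."""
    weights = [math.cos(math.radians(lat)) for lat in lat_centers_deg]
    total = sum(w * v for w, v in zip(weights, cell_values))
    return total / sum(weights)

# Cell centers at 2.5N, 7.5N, ..., 67.5N, as described above
lats = [2.5 + 5.0 * i for i in range(14)]
```

[Under this weighting the 2.5N band counts roughly 2.6 times as much as the 67.5N band, since cos 2.5° ≈ 0.999 while cos 67.5° ≈ 0.383.]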

No more discussion of banning on Tim’s site. Tim is not like realclimate and tolerates dissenting comment. I get the impression that he’s a little quicker on controlling comment than I am, but, at worst, he is not particularly unreasonable with respect to control. It probably wouldn’t have done any harm for me to delete more flames than I have in the past and I’m taking a harder line on this sort of stuff than before. One man’s flame is another man’s substantive comment and the line is hard to set policies on. Further discussion of any aspect of banning on Tim’s site will be treated as spam.

If Tim’s claim that I mischaracterized some aspect of a dispute between him and John A is the most substantive error that he has identified, then I think that others can deduce that he has not identified any substantive error in the various allegations about multiproxy work or he would have mentioned them.

McIntyre as [sic] not bothered to find out what effect it has on the final reconstruction

Steve has said several times that he hasn’t gone through everything in MBH98. Maybe it’s on his future agenda.

Your next statement on your blog is this: "Nonetheless McIntyre repeatedly demanded that I post a ferocious denunciation of Mann’s weighting error."

He’s just (repeatedly) making the point that you fail to display any sort of balance – to point out simple and demeaning errors Mann made the way you did with McKitrick. I doubt he seriously expects you to do it. I know I don’t. As for you belittling the importance of Mann’s error compared to McKitrick’s – well, the way I hear things, it didn’t affect McKitrick’s conclusions much at all when the correction was made, just as you imply things would probably be with Mann’s error being corrected. I fail to see why one error should be so newsworthy and deserving of headlines while the other is something to keep silent about.

You then say, "I explained this to McIntyre, but he insisted that I was this strange “cos latitude specialist” thing. I don’t think he was doing it to annoy me – he seemed to have completely convinced himself."

How could you say something like this and expect anyone to take you seriously?

Michael, your notion of balance here makes no sense. I reported McKitrick’s error because I discovered it. You think I need to “balance” this with a post about a Mann error? Do you think Steve should balance each post about Mann with a post criticizing McKitrick? He doesn’t do this; in fact he protects McKitrick by banning discussion of McKitrick’s errors. How come your application of this balance standard is so unbalanced?

The error did significantly affect McKitrick’s conclusions, while Steve clearly does not think it likely that Mann’s weighting error made a significant difference, or he would have said something by now.

It does make sense, since your post was titled “McIntyre stuffs it up again” and since the tone of the post was “we don’t need to listen to this guy on anything else, since he made an error”. If you have that attitude then no one can do anything, and then Mann ought to be tarred with the same brush. Also, I can’t remember if it was you (I think it was) who seemed to imply that Mc did not know the DIFFERENCE between a radian and a degree. When in fact, what he did not know was that a program asking for an INPUT of latitude expected it in radians (requiring outside calculation) rather than in the recorded units of degrees.
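[The degrees-versus-radians slip being described is easy to reproduce: Python’s `math.cos`, like most such routines, expects radians, so feeding it a latitude recorded in degrees gives a meaningless weight. A hypothetical illustration, not the actual code at issue:]

```python
import math

lat = 45.0  # a latitude recorded in degrees

# Correct: convert to radians before taking the cosine
correct_weight = math.cos(math.radians(lat))  # ~0.707

# The slip: passing degrees to a routine that expects radians
wrong_weight = math.cos(lat)  # ~0.525, a nonsense weight
```

[Note that the wrong value still lands between -1 and 1, so it looks superficially plausible, which is part of why this kind of input error is easy to make and hard to spot.]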

my post was entitled McKitrick screws up yet again…the “yet again” part of the title is important.

I notice in your “yet again” post – within your paragraph of “In previous postings on Ross McKitrick I have shown how he messed up an analysis of the number of weather stations, showed he knew almost nothing about climate, flunked basic thermodynamics, couldn’t handle missing values correctly and invented his own temperature scale” – that the “showed he knew almost nothing about climate” sentence links to another of your posts here, in which you quote Robert Grumbine. You didn’t personally “show” anything in that post, other than to quote Mr Grumbine’s criticisms. There’s no discovery of yours, nothing original of yours other than the few other sentences you wrote on the page, etc.

Hence, not all of your critical posts about McKitrick are based on your own discovery of errors. And personally, if I were Mr Grumbine, I wouldn’t care for the wording of the paragraph I quoted above – it sounds as if you are taking credit for the items Grumbine was discussing.

Tim, I don’t think that the cos error was serious or that it shows an inability to do work in this field. It is very understandable how such an error could occur, and the error is one of input, not of understanding of the physics. Also, I think that he was forthright about fixing it and that he shared his code to allow finding it. That’s what I think.

Re#57,
I certainly agree with the deg vs rad error, but I haven’t looked at the others.

Incidentally, do you concede that Mann has made multiple errors and that the discovery of some of them was delayed for several years because Mann withheld code? Do you concede that these errors are much more noteworthy than McKitrick’s, considering the importance of Mann’s work and the lack of proper diligence due to the resistance to releasing the code?

Tim, before this whole thread gets deleted (and I’m sure it will be) is there a thread on your site where we can discuss these other McKitrick errors? I’ve been reading the “taken by storm” book and am finally ready to see what exactly the complaints are against it. My time available is somewhat limited but if there’s a place I can post at my leisure I’ll try to do so.

Ok folks, I’ve got lots to do right now and don’t have time to babysit a flame war. The cos thing’s been done to death; Tim’s not going to concede an inch. Jim Barrett who provoked this discussion has disappeared. So now that we’ve all risen to the bait, let’s recognize it for what it is.