Back on October 23rd, Barron's editor Jim McTague (Survivor! The GOP Victory) -- in a cover story no less -- forecast a GOP sweep of the House and Senate; even the article's "worst case scenario" was off significantly from what happened last Tuesday. Why was this forecast made just 2 weeks before the election so far off the mark?

McTague puts forth a weak mea culpa in this week's Barron's that does not acknowledge any flaws in his methodology. Further, in the original piece, he identified that the sole savior of his financial/mathematical approach would be if the economy were actually much worse than he had previously acknowledged. That possibility simply never entered his thinking.

Rather than simply call McTague a partisan -- we at the Big Picture don't roll that way -- let's instead look at the methodology that produced that incorrect forecast, and try to discern where he went wrong (it's more educational that way). While the cover story got the election about as wrong as anyone possibly could, I want to focus on analyzing why.

That October 23, 2006 edition of Barron's outlined the basis of the methodology employed:

"Our analysis -- based on a race-by-race examination of campaign-finance data -- suggests that the GOP will hang on to both chambers, at least nominally. We expect the Republican majority in the House to fall by eight seats, to 224 of the chamber's 435. At the very worst, our analysis suggests, the party's loss could be as large as 14 seats, leaving a one-seat majority. But that is still a far cry from the 20-seat loss some are predicting. In the Senate, with 100 seats, we see the GOP winding up with 52, down three.

We studied every single race -- all 435 House seats and 33 in the Senate -- and based our predictions about the outcome in almost every race on which candidate had the largest campaign war chest, a sign of superior grass-roots support. We ignore the polls . . ."

Why was even the worst case scenario significantly off? I can think of two explanations: First, an analytical error in the overall approach; and second, an economic one. Bad theory was compounded by poor recognition of the facts. Other than those two small items . . .

Let's start with the theory. This is the classic example of one of my favorite analytical foibles: Confusing *correlation with causation. This is one of the most common errors of deductive reasoning and logical analysis we come across -- and one that can be easily avoided with a little forethought.

In the present instance, there is a presumption that the candidate with "the largest campaign war chest" will be the winner of the election because of that monetary advantage. There are at least three methodological flaws with this approach:

1) It fails to consider that the perception of likely victory may be what attracts campaign donations to the eventual victor; it assumes a causative relationship where there may be none;

2) It ignores the advantages of incumbency -- in raising campaign donations, as well as in getting re-elected;

3) Voters can and do hold incumbents responsible for current economic conditions -- and those economic conditions are less than ideal.

In other words, the McTague Methodology gets causation precisely backwards: People donate money to those they think will win, in order to secure official favors, legislation, etc. This naturally leads to greater donations to incumbents, who traditionally enjoy a 98% re-election rate in the House of Representatives. However, when incumbents appear more vulnerable, they may lose the main financial advantage -- mo' money -- of incumbency.
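The reversed-causation point can be made concrete with a toy simulation (purely hypothetical numbers, not McTague's data): let a single confounder -- perceived candidate strength -- drive both the size of the war chest and the vote. Money then "predicts" the winner quite well, yet intervening on money alone changes nothing:

```python
import random

random.seed(2006)

def run_races(n, extra_money=0.0):
    """Simulate n races where a latent 'perceived strength' drives BOTH
    donations and votes. extra_money lets us intervene on the war chest
    without touching strength. Returns how often the war chest 'called'
    the outcome (chest above 0.5 predicts a win)."""
    correct = 0
    for _ in range(n):
        strength = random.random()                       # the confounder
        chest = strength + extra_money + random.gauss(0, 0.05)  # donors back likely winners
        wins = random.random() < strength                # voters respond to strength, not money
        if (chest > 0.5) == wins:
            correct += 1
    return correct / n

observed = run_races(20_000)
intervened = run_races(20_000, extra_money=1.0)
print(f"war chest 'predicts' the winner in {observed:.0%} of races")
print(f"after stuffing every war chest, it predicts only {intervened:.0%}")
```

In the observed world the war chest calls roughly three-quarters of races, because it is a proxy for strength; hand every candidate extra money and its predictive power collapses to a coin flip. That is correlation without causation -- exactly the trap the war-chest thesis falls into.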

And, when people feel financially insecure -- about their jobs, healthcare, assets, inflation -- they may vote for change.

McTague even touched upon a weak spot in the war-chest thesis, economic nervousness:

"It's true that our formula isn't foolproof. In 1958, 1974 and 1994, the wave of anti-incumbent sentiment was so strong that money didn't trump voter outrage. We appreciate that voters in 2006 are hopping mad at the GOP because of the war and because of scandal. We just don't agree that the outrage has reached the level of those earlier times. The reason is that the economy in 2006 is healthier. And the economy is the only other factor that figures in our analysis...

This is the first time in our memory that an incumbent party enjoying a strong economic tailwind suffered defeat." (emphasis added).

Perhaps that tailwind is less strong than Barron's believed. As we noted prior to Election Day, It is Still the Economy, Stupid.

"More than 80 percent of voters in an exit poll, conducted for The Associated Press and television networks by Edison Media Research/Mitofsky International, said the economy was a very important or extremely important issue. That percentage was the highest for any issue, including Iraq and terrorism.

Furthermore, electoral data and government economic statistics suggest that the economy played a role in the outcome: if your state wasn’t among the best economic performers in the last six years, judged by the growth of personal income, it appears that you were three times as likely to vote to throw the bums out." (emphasis added).

Putting all these factors together, we are left with but two conclusions: First, the electorate voted both Guns AND Butter (but more butter than guns), and tossed out the ruling party for mostly economic reasons; second, the methodology of relying merely on a candidate's war chest is flawed, as it assumes a causation that is not there. If anything, the expected victors attract donations -- not vice-versa.


_________

* Correlation and Causation: Let's more precisely define our terms. Correlation is the occurrence of two (or more) elements, usually at the same time, location or event. They may be independent (or not); they might be coincidental (or not). Correlated items can have no relationship other than their simultaneous occurrence. Causation, on the other hand, refers to a specific relationship -- one of authorship and creation. "X caused Y to occur" is a relationship of causality.


Comments

Note: I've had plenty of boneheaded predictions in my day -- but they are usually not for an event that's only 2 weeks away.

When I am wildly wrong -- and you don't know the half of them -- it's often due to some intervening event.

My bearish forecast for this year has proven to be wildly off the mark; I thought that after hitting highs this year, we would roll over sooner rather than later. Among the many other reasons, corporate profitability has remained stronger longer than I expected; I also failed to pick up on the significance of the changes in the GSCI quickly enough.

That was a forecast made a year ago -- I cannot say I was ever that far off that close to an event occurring.

Disclaimer


The information on this site is provided for discussion purposes only, and is not an investing recommendation. Under no circumstances does this information represent a recommendation to buy or sell securities.