A weakness in fivethirtyeight.com

August 4th, 2008, 3:43pm by Sam Wang

[Postscript: Some of you have pointed out that to account for non-independent drift between states, Silver implements an add-on procedure involving an adjustment after the win probabilities are calculated. Therefore, you say, simulations are necessary. Several points are being overlooked. First, the main effect of such a procedure is to increase the amount of uncertainty without necessarily changing the expectation. The effects of such non-independence can be estimated without too much fuss (see my follow-ups here and here) and certainly without simulations. This procedure also makes the undersampling problem even worse. Second, it must be remembered that none of these adjustments are as large as the considerable uncertainty that goes with trying to make a projection several months into the future. So it all carries a whiff of not mattering much. For more commentaries on FiveThirtyEight’s projection model (and this site’s assumptions) see these essays: {estimating the cell-phone effect} {the “reverse Bradley” effect} {contrasts with the Meta-Analysis}]

Today I’d like to outline the basic contrasts between this calculation and a popular resource, FiveThirtyEight.com. That site, run by Nate Silver, a sabermetrician, is a good compendium of information and commentary. However, our goals and methods differ on several key points. The biggest difference is that this site provides a current snapshot of where polls are today, while he attempts a prediction using many assumptions, including an enormous uncertainty about what will happen between now and November. This is a fundamental difference in what we provide. His approach does have some structural problems…

The first step in estimating Electoral College outcomes is to estimate the state-by-state win probabilities. Silver’s approach is to use current polling information as well as a slew of other factors, including past reliability of individual pollsters and informed guesses about how much future change may occur. My approach is to take the last three polls only, giving a current snapshot. Therefore his site is future-oriented, while mine is focused on the here and now. To some extent this is a matter of taste (though what I provide is far more easily interpreted).

The second step is to combine these probabilities into an estimate of the likely overall outcome, measured in electoral votes (EV). Silver’s approach is to carry out thousands of simulations, then tally the simulations. That method reflects the fantasy baseball tradition, in which individual outcomes are often of great interest. However, such an approach is intrinsically imprecise because it draws a finite number of times from the distribution of possible outcomes. The Meta-Analysis on this site calculates the probability distribution of all 2.3 quadrillion possible outcomes. This can be done rapidly by multiplying out a probability polynomial, a generalization of the binomial coefficients that students know from Pascal’s Triangle.
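The polynomial calculation is a single pass over the states: each state contributes a factor (1 − p + p·x^EV), and the coefficients of the product give the exact probability of every possible EV total. Here is a minimal sketch in Python, using made-up state probabilities rather than actual 2008 numbers:

```python
# Exact EV probability distribution by polynomial multiplication.
# Each state is a factor (q + p * x^EV); convolving all factors yields
# the probability of every electoral-vote total, with no simulation.

def ev_distribution(states):
    """states: list of (win_probability, electoral_votes).
    Returns dist, where dist[k] = probability of winning exactly k EV."""
    dist = [1.0]  # distribution over EV totals, starting at 0 EV
    for p, ev in states:
        new = [0.0] * (len(dist) + ev)
        for k, prob in enumerate(dist):
            new[k] += prob * (1 - p)   # candidate loses this state
            new[k + ev] += prob * p    # candidate wins this state
        dist = new
    return dist

# Toy example: three hypothetical states, not real 2008 figures.
states = [(0.9, 11), (0.5, 20), (0.2, 27)]
dist = ev_distribution(states)
```

With 51 factors this runs in a fraction of a second, which is why no Monte Carlo draw is needed to recover the full distribution.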

To illustrate what a difference this makes, let’s consider some recent data. FiveThirtyEight lists win probabilities for the 50 states and the District of Columbia. That site’s tabulation of 10,000 simulated outcomes looks like this:

These simulations are drawn from the true distribution corresponding to fivethirtyeight’s figures, which looks like this:

This true distribution takes the form of a bell-shaped curve, as expected given the large number of states and Silver’s cautious future projections. In contrast, the distribution of simulations is highly irregular, probably because of inadequate sampling. The undersampling may account for an error in his top-line EV estimate, Obama 303, McCain 235. I calculate the true estimate from his probabilities as Obama 308, McCain 230. In fact, because of the smoothness of the distribution this result could be obtained without any simulation at all, simply by adding up all the states’ EV, weighted by probability. To put it bluntly, his simulations are not only imprecise but also unnecessary.
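The “without any simulation at all” point is literally a one-liner: weight each state’s EV by its win probability and add. A toy example with hypothetical numbers:

```python
# Expected EV as a probability-weighted sum -- no sampling required.
# Hypothetical (win probability, EV) pairs, not Silver's actual figures.
states = [(0.9, 11), (0.5, 20), (0.2, 27)]
expected_ev = sum(p * ev for p, ev in states)
# 0.9*11 + 0.5*20 + 0.2*27 = 9.9 + 10.0 + 5.4 = 25.3
```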

Now let’s look at today’s snapshot of polls on this site:

The distribution looks spiky because recent polls give fairly extreme single-state probabilities. For example, if the election were held today, an Obama victory in Washington state is near-certain, but in November less so because of unknown future events.

The Meta-Analysis is also precise because it uses dozens of polls from states in contention, and over 100 polls in all. Today’s Meta-Analysis gives an EV estimate of Obama 331, McCain 207, with an effective margin of error (MoE) of 35 EV. This MoE is equivalent to a little bit less than 1.0% – better than you will find at any other resource on the Web.

Such high precision is useful for tracking movement in the race from day to day. The estimator’s history shows only one major swing this year: Hillary’s withdrawal in early June, which was followed by a sharp jump in Obama’s performance against McCain. No other event provided nearly as large a change. Indeed, for the last three weeks there has been virtually no movement.

I’ll end with a caveat. Individual states are polled less frequently than the nation as a whole. Therefore the Meta-Analysis may respond more slowly to changes than national averages. But it responds far more accurately.

So – to relieve your anxiety at the vagaries of individual polls, come back again!

43 Comments so far

I’ve been reading Nate Silver’s blog for a while, and I believe that his model assumes that there are correlations between the results in different states. His simulations aren’t just flipping an independent “biased coin” for each state, so simulation is in fact necessary. (This is a bit tricky to find, though, and I regret that I am not quite interested enough to go find this piece of information in his archives a second time!)

You guys are both much smarter than I, but Sam, doesn’t Nate also reduce his Obama EV win numbers somewhat because he projects that in the future the race will be narrower. I believe he takes this from a look at historical polling where races always seem to narrow as the election day nears:

David, to state the obvious, Nate Silver doesn’t know for certain whether the race will narrow. Any citation of past trends only gives a rule that is observed most of the time, not all. Building in such a rule would tend to increase the uncertainty of predictions of future events. This particular “rule” about the race narrowing over time would probably be the phenomenon known to statisticians as “regression to the mean.”

My basic position is as follows: It is possible to get an extremely accurate snapshot of where things are now – and changes in that snapshot over time. Any projection forward in time requires making complex assumptions of questionable validity. In the process, one loses information about what is happening right now. I think of so-called projections as exercises in throwing away useful information.

In regard to your last point, perhaps consult political scientists, economists, and psychologists. Not statisticians!

Well, in that case you and Nate are useful in different ways. A projection into the future may not be perfect, but it certainly provides some likely scenarios for November, while your site, as you say, is only a snapshot of the present. Nonetheless, Nate made some pretty good projections during the primaries (which is how he got famous in the first place), while I didn’t really hear from blogs about startlingly accurate projections here (you could have validated your algorithm during the primaries instead of claiming it’s perfect now with only one election under your belt).

I’m confused about your statement about why both Nate’s and your distributions are spiky. First, you say that your analysis does not try to predict the future:

My approach is to take the last three polls only, giving a current snapshot. Therefore his site is future-oriented, while mine is focused on the here and now.

Then, you explain that your distribution is spiky because the outcome of, for example, Washington state is more uncertain in November than it is now.

The distribution looks spiky because recent polls give fairly extreme single-state probabilities. For example, if the election were held today, an Obama victory in Washington state is near-certain, but in November less so because of unknown future events.

These two statements seem to be a contradiction.

Then, you say that Nate’s distribution is spiky because of undersampling in his simulation.

In contrast, the distribution of simulations is highly irregular, probably because of inadequate sampling.

This implies, I think, that if he did 100,000 or 1,000,000 draws rather than 10,000, his distribution would smooth out. I understand your point about the Monte Carlo simulation being unnecessary for this problem. But if you accept that Nate’s distribution would be smooth if he increased the number of his samples, then why, since your method doesn’t require sampling at all, isn’t your distribution smooth?

Marc, go look at Silver’s individual-state win probabilities. Over half are between 5% and 95%. In such a case the compound probability distribution includes many states that could go either way. In this situation the true distribution of final outcomes will be smoothed out by the multiplicity of combinations.

As I have stated, the distributions shown on this site are true probability distributions. Why, then, do they look spiky? The answer is that fewer states are uncertain based on current polls. Go look at the state probabilities listed on this site (go dig around in the Methods section for a file called stateprobs.csv).

A simplified version goes like this: Imagine that there is only one undecided state, and all others are certain to go to either candidate. In this case the probability distribution is composed entirely of two spikes. Two undecided states would have four spikes (assuming the states were of different sizes). And so on.
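This doubling argument is easy to check directly: each genuinely uncertain state doubles the number of reachable EV totals, up to 2^k spikes for k toss-up states with distinct subset sums. A small sketch with made-up numbers:

```python
# Counting the spikes: with the decided states fixed, every subset of
# the undecided states produces one possible EV total (one spike).
from itertools import combinations

decided_base = 200          # EV already locked up (hypothetical)
undecided = [4, 9, 17]      # EV of three toss-up states (hypothetical)

totals = set()
for r in range(len(undecided) + 1):
    for combo in combinations(undecided, r):
        totals.add(decided_base + sum(combo))

# Three undecided states with distinct subset sums -> 2**3 = 8 spikes.
```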

Isabel, I haven’t found what you describe. But your description makes me even more skeptical about the validity of the methods over there.

Of course there are correlations between state outcomes. But polls from different states are correlated as well, and are likely to account for most, if not all, of that correlation. A secondary correction is unjustified, especially considering the large uncertainty in how much change will occur between now and November. It falls into the general category of rearranging the deck chairs on the Titanic.

However, if he is doing that then the logical place to insert the assumption is as part of estimating the individual state win probabilities. In this case my analysis still holds.

It’s so great to have your site back to compare with 538. 538 is certainly interesting to read, as it includes some interesting demographic regressions. However, given the accuracy of your (uncorrected-lol) metaanalyses, I think I’ll give the tip to your model.

One thing about 538 that I think you should include in your model is the pollster rankings and subsequent weighting. If I’m not mistaken, in your model, Zogby Interactive is given as much weight as SurveyUSA. It would be nice to give more credit to pollsters who introduce less error into their sampling.

Aaron, thanks for writing. It’s enjoyable to think about these issues again.

I don’t think I’ll be making that weighting for two reasons: (1) I do not think it, or other corrections, affect the result much; (2) I don’t know when pollsters alter their methods. This would invalidate any weighting.

As an off-topic point, I just wanted to relay to you that when you started your site in 2004, I was just about to head off to college. I saw that you were a professor of biophysics and I thought to myself “what the hell is biophysics?” Funny thing is that 4 years later, I got my master’s in molecular biophysics and biochemistry and I still don’t have a clue what biophysics is. Hopefully, I’ll finally figure it out over the next 7-8 years in medical/grad school…

Thanks for all your work. Although being a common statistical illiterate and , therefore, unable to analyze your methods, I understand your results. Fascinating. I will be looking for Nate’s response, if any.

This is a great site, and I understand a lot of where you are coming from, but I’ll defend Nate’s methodology based mostly on the fact that I trust his baseball projections more than anybody else’s, and it’s proven to be incredibly predictive.

You lodge this complaint against him: “Any projection forward in time requires making complex assumptions of questionable validity. In the process, one loses information about what is happening right now. I think of so-called projections as exercises in throwing away useful information.”

The biggest problem with this statement is that many of us might think that knowing where the race is *now* is useless information. I want to know, essentially, given the information I have now, what is the chance that Obama will win the Presidency in November. Nate is trying to accomplish that goal, and it’s the same goal he’s tried to accomplish his entire career as a baseball forecaster: given what we know about a player up to this point, what will he do next year and the five years following.

Of course Nate doesn’t know if the race will narrow, but we have past information that it often does, why should we throw out that information just because it’s not a certainty? I want the predictions to feel somewhat intuitive, and I think Nate’s numbers match up pretty well with the numbers on Intrade, suggesting that the people investing in the race essentially agree with his forecasting.

As to the reason Nate does the inter-state correlations and demographic regressions, I think it makes a lot of sense. As you acknowledged, the state polls are infrequent, so they often lag behind. Their infrequency also means we don’t get a number of different samples quickly enough to tell whether one is an outlier (like we did with the Newsweek poll showing Obama up 15, which was quickly shown to be an outlier). Because of this, and because information from other states can tell us about the state we are interested in, Nate thinks it’s useful to apply that information. A North Dakotan of similar background to a South Dakotan will likely vote in a very similar manner. So why shouldn’t we use that information to our advantage?

Suppose we were sampling two populations that we knew to be identical, i.e. we created them to be that way. For example, suppose we go to the standard coinflip analysis, we do two samples of 100 coinflips, and one comes out 45/55 and the other 55/45. Don’t we still want to say that the likelihood is that at the end of the day, each of those samples is representing a 50/50 probability? I don’t see why Nate’s use of correlations among states would prove any less useful than that.

You are right about the Monte Carlo; it’s probably not necessary, but I think you overstate the error related to it. Also, you say, “For example, if the election were held today, an Obama victory in Washington state is near-certain, but in November less so because of unknown future events.” But that’s exactly what Nate is trying to do: factor in that uncertainty.

I’m sorry I missed your efforts in 2004, as this is the kind of rigorous statistics I like to see. I’m no statistician, but FiveThirtyEight’s model struck me as overwrought, over fit, and uninformative. When you’re not even sure how your own model is going to respond to poll changes I think you have something with little utility… I mean, it’s exciting to see what comes out of the black box and all, but how am I supposed to interpret it?

I have a specific question about the “spikiness” of Silver’s graph, which you cite as a sign of problems with his method.

I’m an English teacher and not a statistician, but it seems to me that a smooth bell curve is an inappropriate representation of possible electoral outcomes because of the inherent “lumpiness” in electoral vote distribution. For example, each candidate will either win all of Michigan’s 17 electoral votes, or none of them. Given that reality, a spiky graph seems (intuitively, to my untrained eye) like a more realistic representation of possible outcomes.

You suggest that the curve should be smoother given the “large number” of states, but 50 doesn’t actually seem like a very large number, and the number of states that are actually being contested by the campaigns is much smaller–at most, only about 20 are receiving significant expenditures from the campaigns. Why is a spiky graph implausible?

Obama owns a lead of just a few points–at most–nationally, and the same in swing states such as Ohio. Yet your model gives him an overwhelming advantage. If McCain has a strong week of polling, taking the lead in those states and nationally, wouldn’t your model give him a correspondingly enormous advantage? Or put another way–if the current polling situation doesn’t lead your model to show the race as a close one, what would? (Check out the bolded lines in the table on Chris Bowers’ post here for a look at the close margins in those swing states.)

On the “spikiness” of the distribution and correlations between state outcomes:

In the context of trying to predict election results in November, I think treating state outcomes as independent is clearly incorrect. What is the conditional probability that McCain wins Ohio given that he also wins Massachusetts? It had better be higher than the straight-up probability that he wins Ohio, because he only stands a chance in MA if the dynamics of the race change a lot over the next few months.

The assumption of independence is much more reasonable if you are going for a current snapshot of the race. (Even then, I would argue that it’s not perfectly right, but that is a minor point.)

The way Nate Silver deals with this is by separating national movement from state-level movement. There is one distribution for the national popular vote margin, and then each state has a distribution for the difference between its margin and the national margin.

Here’s an example (numbers for illustration purposes only, but I think they’re close to what Nate is working with):

That means Obama is expected to do worse in OH than nationwide by one percentage point, but he could easily do better.

Now Nate assumes that each of the state distributions is independent from the national distribution. I am pretty sure that he does not model the state distributions as independent from each other, but rather he expects “similar” states (along a range of criteria) to behave in similar ways. I can understand being skeptical of that. But even if he did make the state distributions independent, his model would still produce “spiky” results.

A final note: it should be pretty clear just from looking at the top two pictures in this post that the upper picture doesn’t represent 10,000 samples drawn from the lower distribution. It is too spiky.

The great strength of 538.com is that it includes an estimate of non-sampling variance or, equivalently, the correlation among states. In the variance components framework I like, 538’s model is basically
Vit = Pit + Eit + Ut.
That is, the vote in state i at time t equals the poll result (Pit) plus the poll’s sampling error (Eit) plus Ut, which we can think of as a common nationwide “shock” due to campaign events or just as nationwide non-sampling error.

In order to calculate the probability of winning a state, we need to know the distributions of Eit and Ut. Eit is normally distributed, since it’s sampling error, and its variance is easily calculated from Pit and the sample size of the poll. Ut is assumed to be normally distributed, and its variance is calculated from the variance of national polls in previous elections, as described by 538 here (see the “National Movement” section).
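The variance-components model can be sketched in a few lines. The numbers below are purely illustrative, not 538’s actual parameters; the point is that the shared shock Ut makes all the state outcomes move together:

```python
# Minimal sketch of V = P + E + U: the final margin in a state is the
# poll margin P plus independent sampling error E plus a nationwide
# shock U that every state shares. U is what breaks independence.
import random
random.seed(0)

poll_margins = {"OH": 1.0, "PA": 4.0, "VA": 0.5}  # hypothetical, points
sigma_e = 3.0   # per-state sampling error (std dev, points)
sigma_u = 4.0   # nationwide shock before November (std dev, points)

def simulate(n=20000):
    wins = {s: 0 for s in poll_margins}
    for _ in range(n):
        u = random.gauss(0, sigma_u)          # one shared national shock
        for s, m in poll_margins.items():
            e = random.gauss(0, sigma_e)      # independent per-state error
            if m + e + u > 0:
                wins[s] += 1
    return {s: w / n for s, w in wins.items()}

probs = simulate()   # per-state win probabilities under the model
```

(As noted in the post, the marginal probabilities here could also be written down in closed form as normal tail areas; the simulation only matters for joint outcomes.)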

The bottom line is that by taking into account the variability due to campaign developments, 538 calculates a reasonable probability of an Obama victory, currently 62.7%. Sam Wang doesn’t seem to report the similar probability, but eyeballing the histogram, it looks like this site estimates Obama’s probability of victory at greater than 99%, which is obviously ridiculous and uninformative.

That said, I generally prefer Wang’s straightforward approach to 538’s. 538 does so many complicated things that it’s hard to know what’s driving the results. Wang’s approach is in many ways more enlightening (even if it’s not as accurate a forecast).

“In the context of trying to predict election results in November, I think treating state outcomes as independent is clearly incorrect.”

I couldn’t disagree more. The data should dictate the model; the model shouldn’t be telling the data how to behave… which is what 538 is all about. There is no need to force things into little boxes that “make sense”… just doing thorough stats the *actual data* should be enough to show us what’s going on.

I do think that with what you are trying to do here, the “snapshot” approach, a lot of the objections to independence between states are no longer valid. But still, let’s say you hold an election today and McCain wins MA. Clearly the polls underestimated his support there by 10+ points. That is a big deal. Is it likely that the polls in other states also underestimated his support? Here are some cases.

1. The polls were wrong in MA because of sampling error. In fact, McCain was always ahead there. This seems rather unlikely considering that Obama’s margins in polls taken over the last three months have been: +9, +20, +13, +23, +13, +5. Still, if this is true, of course it will not tell us anything about other states.

2. The polls were wrong in MA because the sample was biased. Pollsters weren’t reaching the right mix of people. It is likely that such an effect would carry over into other states.

3. The polls were wrong because people lied to the pollsters about their intentions (maybe the “Bradley effect”). Again, no reason to suppose that this would be limited to MA.

4. The polls were wrong because the pollsters predicted turnout incorrectly. This could be due to a superior GOTV organization for McCain (probably state-specific) or a relative lack of enthusiasm among Obama supporters (maybe state-specific, maybe not).

5. The polls were wrong because a lot of people changed their minds at the last minute. Unless Obama specifically insulted the state of MA, there’s no reason to expect this to be state-specific.

There are probably explanations I’m missing. Anyway, depending on why the polls got it wrong, either we expect that polls in other states also overstated Obama’s support, or we expect that the effect was limited to MA.

I think, based on the list above, that the best guess should be that polls in other states suffered from similar problems. (Even if you think the right answer is a combination of #1-#5, this conclusion follows. The only way to maintain the assumption of full independence is if you pin the blame entirely on #1, which I claim is insupportable.)

So, the assumption that states’ results are independent is unwarranted even in the case of a current snapshot of the race. I can see why it makes sense to assume independence anyway — it’s all well and good for me to make a blog comment, but incorporating non-independence properly would take a lot of extra work, and a bunch of subjective decisions would need to be made along the way. The way you currently have it, the calculation is simple and transparent. There’s a lot to be said for that.

Forgive me, though, if I don’t give too much credence to the 99% Obama victory figure, even as a snapshot. I believe that the “right” snapshot figure, after accounting for all the stuff I brought up, would be something like 90%. But that’s just a wild guess.

You make a good point, and I share some of your skepticism about 538. The question at hand is whether the state probabilities should be modeled as independent or not. I think the historical data show that assuming independence is inappropriate.

Look at the 2004 election. The polling average in August for Ohio was Bush +1.5, and the final result was Bush +2.5. Here are the same results for RealClearPolitics’ 18 “battleground states”:

Bush improved his percentage in 14 out of 18 states. If the state percentages varied independently, one candidate would improve his percentage in 13+ states once every ten elections.
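That “once every ten elections” figure is a two-tailed binomial probability, easy to verify:

```python
# Under independence, each of the 18 battleground states is a fair coin
# flip for which candidate gains ground. The chance that either
# candidate improves in 13 or more of the 18 is a two-tailed binomial.
from math import comb

n = 18
p_13_plus = sum(comb(n, k) for k in range(13, n + 1)) / 2 ** n
either_candidate = 2 * p_13_plus   # roughly 0.1, i.e. about 1 in 10
```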

Of course, maybe 2004 was that one-in-ten election. But it’s far more likely (IMO) that Bush gained a lot of support nationwide in September and October, and that support was roughly equally distributed across states, to a first approximation.

If someone were to go back to previous elections, they might find enough evidence to “reject the null hypothesis” of independence between states. But since we have reason to believe that the null hypothesis is false to begin with, why wait until p < .05 before making a more sensible model?

Of course the polls in states which move together would tend to be correlated. I think Nate’s idea is that since he’s attempting to predict the results in November — as opposed to your goal of saying what things would look like “if the election were held today” — he assumes that states which have tended to move together will continue to move together as the election nears.

Following up on Isabel Lugo’s post, about a month ago, Nate published an entry on State Similarity Scores, and implied that it would have an impact on the model.

Nate has not published anything further, but it appears that the model incorporates a nearest-neighbor analysis in some fashion as part of the simulation runs. If you examine the Scenario Analysis on Nate’s blog, in particular the information relating to scenarios where OH/MI/PA move together, you can see that these three states are not moving independently, but neither are they moving in lockstep with the weakest member of the group, OH: there are scenarios where OH is won by Obama, and MI or PA are lost by Obama.

At this point, the application of the state similarity scores by Nate is a “secret sauce.”

dcj :”I think the historical data show that assuming independence is inappropriate.”

I still disagree… you make the case that a model for interdependence might be appropriate… but I fail to see how assuming independence is wrong here… it is merely the most conservative (not in a political sense) way to go. If I treat Maine and New Hampshire as independent and they happen to vote in a “New England Style” then what have I lost? A bit of power certainly, but I didn’t have to assume ahead of time what they were going to do. I’ll grant you that there are possible pitfalls, but it seems to be the way to start. A geographical relationship should come out in the raw data… and while I have yet to look at Professor Wang’s maps, I suspect that is the case here.

Non-independent data points are problematic, but the first thing I want to see is the raw stuff, and we work from there… I’m not a particular fan of being told what I’m supposed to see and forcing the data to do so.

You could certainly do much more clever things if you assume interdependence, but I personally prefer rock solid simplicity.

No response on the question of exactly what you mean by “This MoE is equivalent to a little bit less than 1.0%”?

To remind you, in context, you say “Today’s Meta-Analysis gives an EV estimate of Obama 331, McCain 207, with an effective margin of error (MoE) of 35 EV. This MoE is equivalent to a little bit less than 1.0%”.

And you clearly can’t be saying that 35 is “a little bit less than 1.0%” of (331+207). So what DO you mean?

For example, assume that neighboring states A and B both poll 55/45 for Obama in June and both poll 54/46 in July. Then in August, a poll for state A shows 51/49 for Obama. Wouldn’t it make some sense to use that poll in state A as a (somewhat down-weighted) proxy for a poll in state B? In other words, why not use very recent polls from highly correlated states in preference to very old polls from the actual state?

To clarify a bit further, assume the third oldest poll for State B was from May, showing 40/60 in McCain’s favor. Would you really believe that was more predictive than the contemporaneous polling from State A?

1) 538 includes correlations between the states. The effect of correlation can be clearly seen in the Interactive Presidential Election Probability Calculator at http://election-projection.net/interactive.html. Simply move the correlation slider between 0 and 100.

2) 10K simulated elections is not nearly enough to get a stable EV distribution. Again, this can be seen at election-projection.net/interactive.html. 200K iterations is better; election-projection.net uses 500K iterations in its pre-generated graphs and statistics.

I believe some amount of state-to-state correlation is justified to account for the possibility that the state polls may include a common bias that favors one candidate over the other. Of course, we don’t know what that bias might be, so it could be simulated by a single random variable that is added to the margin of the poll average in each state. Numerically, this approach is very expensive (it requires recomputing the state probabilities in each simulated election). Another way to approximate a correlated bias is to use a correlated random number generator. This is the approach used at election-projection.net. See election-projection.net/methodology.html and election-projection.net/mathematics.html.

Interestingly, the correlation has a large effect on the probability of each candidate winning (a higher correlation pushes the candidates closer to a draw), but has very little effect on the expected electoral votes. The fact that your results do not include the possibility of a systematic bias favoring one candidate is one of the reasons your results appear so “certain” (~99% probability of an Obama win). If you included the possibility of a systematic bias, the probability of an Obama win would drop something on the order of 5 to 10 percentage points. This can again be seen at election-projection.net/interactive.html.
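This effect is easy to reproduce in a toy simulation (all numbers below are made up, not election-projection.net’s actual model): a single random bias shared by all states pulls the overall win probability toward 50%, while leaving the expected EV total almost unchanged.

```python
# Effect of a common bias shared by all states: win probability moves
# toward a draw, expected EV barely changes. Entirely hypothetical data.
import random
random.seed(1)

# (poll margin in points, electoral votes) for five hypothetical states
states = [(2.0, 20), (-5.0, 21), (-1.0, 27), (8.0, 10), (0.5, 15)]
TOTAL = sum(ev for _, ev in states)   # 93 EV in this toy race

def run(sigma_bias, n=20000, sigma_state=2.0):
    win_count, ev_sum = 0, 0
    for _ in range(n):
        bias = random.gauss(0, sigma_bias)         # common to all states
        ev = sum(v for m, v in states
                 if m + bias + random.gauss(0, sigma_state) > 0)
        ev_sum += ev
        if ev > TOTAL / 2:
            win_count += 1
    return win_count / n, ev_sum / n

p_indep, ev_indep = run(sigma_bias=0.0)   # independent states
p_corr, ev_corr = run(sigma_bias=4.0)     # strongly correlated states
# p_corr lands closer to 0.5 than p_indep; ev_corr stays near ev_indep
```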

I do believe however that 538’s use of correlation is problematic. They are correlating states to each other based on demographics and the national polls. The assumption is that a shift in opinion in one state will shift opinion in demographically similar states, and likewise, a shift in opinion at the national level will shift opinion in all states. I believe this is a flawed assumption because it does not account for how candidates allocate their campaign resources (to specific states, not nationally or to groups of demographically similar states). I think the errors introduced by the incorrect correlations will be even greater in the battleground states where candidates are expending the most resources. These states also matter the most in predicting the outcome, and therefore the error is magnified. I believe the only fundamentally sound way to determine the potential outcome in each state is to use only that individual state’s polls, not national polls or polls from other states, and 538’s use of these other polls is incorrect.

I am the creator of election-projection.net, and if you have any questions or comments on the methodology, please feel free to write.

I’m sorry that post is unclear. The concept is that the median comes with a 95% confidence interval that is measured in EV. The method also allows the calculation of a Meta-Margin, i.e. the amount of opinion swing that would make the race perfectly even. Therefore the confidence interval can be converted to units of percentage points of popular opinion.

By the way, if you write again please don’t use random letter strings. I am identified – the least you can do is have a real handle.

I think people may be interested in checking out the newly-updated FAQ, which explains, as Ragout said, that the purpose of the simulation is to take into account non-sampling variance — the idea that the race may break in favor of a candidate (or in favor of different candidates in different states).

I think some commenters here are confusing this with the trend adjustment. The trend adjustment’s only meant to adjust the results for states demographically similar to recently-polled states, but that haven’t themselves been polled recently. As a state receives more polling, the trend adjustment is weighted correspondingly less.

Anderson, when these methods were applied to 2004 data, the result on Election Eve was Bush 286 EV, Kerry 252 EV – which was, in fact, the final result. This is described in the WSJ articles on this site. The exactness of the match was a lucky hit, but the point is unchanged: meta-analysis is a powerful tool and can give very good results.

That year I also made a speculation about the “incumbent rule” that made an incorrect prediction. This year there will be no such add-ons. If you want to do your own modifications, the clickable maps in the right sidebar may be of use, which give win probabilities with a swing of 2 points toward either candidate. I am also making available the code with documentation.

Electoral-vote.com has analysis indicating that using Election Eve polling data, without correction, is quite accurate. I did a similar analysis. Therefore you will not find in this calculation any of the many corrections suggested by others.

There are people who wager real money on the outcome of elections with bookies who cannot afford to set the odds incorrectly. As much as I love a good two-tailed measure of statistical significance or a juicy Pearson number or a frisky visit to the chi-square tables in the appendix of my trusty college statistics text, those bookies don’t depend on the kind of sophisticated number crunching done by Ph.D. candidates in Political Science or Master’s candidates in Sociology. I wish I knew all the secrets of a certain family bookmaking outfit in Ireland that, I have heard, has made huge profits on American Presidential Elections for the past three generations. (The only time they lost big was on Truman-Dewey, but even George Gallup got that one wrong.) Sophisticated numerical manipulations always work well on election night as the networks try to project winners, but do you trust your statistical analysis enough to wager real money on the outcome of your number-crunching?

Mister O: aye, there’s nothing like a wee bit of the old one-tailed test, is there.

The answer is yes. Electronic markets for political races lag polls because that’s what drives them. They are closely related, of course. The markets usually get the sign of an outcome right but tend to be less certain than polls would indicate. See my post on InTrade.

In 2006 InTrade had a contract that put the odds as 3-1 against a Democratic takeover of the Senate. The poll-based odds were 1-1. This was a good bet. I’m thinking about coming up with some odds like that as the election draws near.

I agree with virtually all of your points. However, a Monte Carlo simulation of 5000 election trials based on state win probabilities provides a robust EV win probability. Obama won all 5000 trials, even assuming he captured only 40% of the undecided vote.

His expected EV of 379.49 is the sum of the products EV(i)·P(i) over i = 1 to 51, where P(i) is the win probability in state i and EV(i) is that state’s electoral votes.
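The two quantities described above can be sketched on made-up numbers: the analytic expected EV is the sum of EV(i)·P(i), and a Monte Carlo run over independent state outcomes converges to the same mean. What the simulation adds is the full distribution of outcomes, and hence a win probability, which the sum alone does not give. (The P(i) values here are hypothetical, and the trials assume the states are independent, which is the very assumption debated in the comments above.)

```python
import random

# Hypothetical (win probability for A, electoral votes) pairs -- illustrative only.
STATES = [(0.95, 55), (0.80, 31), (0.65, 27), (0.40, 21)]
TOTAL = sum(ev for _, ev in STATES)

# Analytic expectation: sum over states of EV_i * P_i.
expected_ev = sum(p * ev for p, ev in STATES)

# Monte Carlo: draw each state independently with its win probability.
rng = random.Random(42)
trials = 50000
sim_total, wins = 0, 0
for _ in range(trials):
    ev = sum(votes for p, votes in STATES if rng.random() < p)
    sim_total += ev
    if ev * 2 > TOTAL:
        wins += 1

print(round(expected_ev, 2))         # analytic expected EV
print(round(sim_total / trials, 2))  # Monte Carlo mean, close to the analytic value
print(wins / trials)                 # win probability: only the simulation yields this
```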

You are an idiot. Go back to your liberal arts education to create poetry, and avoid anything that uses analysis.

Is this the education you get at Princeton? Where does this stupidity come from? I used to think this was what awaited me at the community college. I had a few friends from Yale and assumed Princeton was a little more advanced.

Perhaps I should help you to understand. There is no such thing as certainty in prediction. Rather one must make a judgment about the data and form an opinion about the lack of completeness. Statistics is just a tool to help one make educated guesses.

However, unfortunately, when you do not understand the data you are also a fool if you fill the void with whatever thoughts you have at the time. Don’t mix stupidity with interpretation of gaps in the data.

I guess when you come from Princeton you can spend your life making up whatever opinions you want regardless of the data.

I have to admit that I almost deleted Greg’s comment. But it’s funny and reveals more about the writer than anything else. So I’ll leave it up for a little while.

Greg, look into my background and you will see that I am a developer of original statistical methods. In the meantime, I encourage people to visit http://leanstrategies.net, where Greg puts his statistical acumen and people skills to good use.

Thanks for the honor of keeping my post. I guess you are not afraid of criticism. Also thanks for the publicity. I intend to use more of your material in my blog.

I do have to say I was not fair. You do not seem to be an idiot after all. How could anyone with a CV as long as yours be an idiot? More likely I am the idiot.

However, what I should have said is that you seem to mistake analysis for statistical methods. One uses judgment; the other is a theoretical construct.

Despite the fact that your comments may be theoretically correct under certain assumptions, they appear ignorant to me. It seems more like you were trying to find some flaw no matter how minor rather than providing some critique of real value.

Data is full of gaps. And statistics by itself is just a tool. That is why fivethirtyeight.com is so successful. Perhaps you need to look harder at the simple analytic process they have laid out and see the gem. But then you are likely too distracted with developing theories to do that.

I am curious, in your work, do you use your hunches to direct your research or do you ask the computer to tell you where to look?

If it is the latter, you are likely to not consider the diversity of ideas that need to be brought together for breakthroughs.