‘Extremely Possible’?

Silver’s point is to emphasize 85 isn’t 100. But it’s striking how hard it is to say that without sounding like you are saying 85 is 50.

A sort of extremism kicks in that doesn’t seem to manifest in other areas of probabilistic reasoning. 50/50 or 100/0. Look at the polls; see which of those the polls are close to; that’s your answer. Elections: toss-up or lock.

Not black swan blindness, in Taleb’s familiar sense. Nor does anyone make quite this style of mistake when thinking about dice or cards, do they? You might make a base-rate mistake in interpreting a potentially false positive on a medical test. But if the doc tells you you have a 15% chance of having the flu, no one thinks: oh, from that it follows that I’m 100% healthy.

Part of it is wishful thinking. We do it in our favor when we seem to be winning, not against ourselves when we look likely to lose. No one thinks a 15% chance that their party wins is zero. But we think an 85% chance our party wins is 100%. Yet it’s usually the opposite with, say, dice. If you know you win on 1–5 and lose on a 6, and it’s do-or-die, it’s almost extra stressful to roll, not less stressful.

What is it about the psychology of elections that makes it hard to weight 85% as 85%?

I confess I, like others, fall for it. Thinking there’s fully an 85% chance of Dems taking the House makes me think: well, if the chances are THAT high, they must be higher still!

The heart has its reasons. (Yes, even after getting burned in 2016.)

Maybe it has something to do with a somewhat idealistic tendency to think of polls as expressions of – oracles of – ‘the general will’?

Logically, it should be enough to point out: polling error. But somehow it doesn’t feel like that should be possible. Not because experts can’t be wrong but because the general will of the people ought to be a thing that expresses itself. If you are getting a strong signal through the polls, that can only be explained by it being a message coming through. The message must be true. So: 100%. This is obviously nuts, I grant.

So vote, kids!


I absolutely hate seeing “__% chance __ party [does thing x]” reported as news. I get that pols and parties do polling for good reasons, but nobody has ever been able to convince me it serves any useful purpose as “news.”
In 2016, from what I saw, it both riled up Rs and gave too many chin-rubbing Ds a false sense of security. But it’s nearly impossible for a casual reader to take for what it is.
Sure, Russia. And yeah, maybe racism. Or economic anxiety or whatever. You ask me, it’s Nate Silver that gave us Trump.

My concern is that close poll numbers make election stealing easier, and I don’t think I am being overly conspiracy-minded to suspect that Bush 2 stole both elections by the mystery of computer tallies of the vote count in Florida and Ohio. I don’t know if the Democrats have a plan to deal with this possibility.

Maybe people unwittingly associate the probability-percentage-talk with vote counts. That is, they conflate an 85% chance of winning with 85% of people voting for the winning candidate. These scenarios might be confused because they are both taken as ways of the general will expressing itself. This is not that nuts- if someone wins with 51% of the vote, it sounds like it makes sense to say, ‘well, it was a toss-up, could’ve gone either way.’

= = = “Nor does anyone make quite this style of mistake when thinking about dice or cards, do they?”
“What is it about the psychology of elections that makes it hard to weight 85% as 85%?” = = = =
When one is playing dice one makes hundreds of rolls. Cards, dozens of hands in an evening and then dozens more under the same conditions the next week. An election is a one-time event, with (in the US) maybe another somewhat similar (or maybe not) 2 years later, then nothing like it ever again. Had a long argument with Bob Weber (the other one – the strategy analysis guy) at NWU on this over a semester – it is deeply unclear what “85% chance” for a single event means. And even if you go down that road what is going on in elections is not described by a continuous fair distribution function.

I don’t have an answer to the question, but it is worth noting (again) that in 2016 Silver was relentless in his efforts to impress upon readers that Trump had a very real chance of winning. In response to Shirley0401: Silver estimated Trump’s chances as higher than any other aggregator of polls (among national polls themselves, only two had him winning, and both had a history of overpolling Reps and not correcting for that). No-one who read Silver has an excuse for being complacent, whereas anyone who didn’t, and just read other aggregators, does have an excuse.

That said, Silver may have significantly overestimated Trump’s chances of winning. Talk to people who study electoral patterns for a living – in the aftermath, two such professionals said that, after looking at all the returns, it seemed more like a 1-in-300 event than a 15-in-100 event.

I wonder how culturally-dependent the phenomenon John identifies is? Certainly, I see a 15% chance of losing, and it appears in my head as about 50% when I am feeling optimistic. But then I’m English (I’m guessing that for CB a 15% chance of losing translates to something more like 80%, but he can comment for himself…)

No, people make this kind of mistake all the time with dice. You play a game with literally hundreds of rolls, and people will get super-salty over rolling snake eyes. As if a 1/36 chance were literally impossible. Even if you give people a slight edge, say, tell them they have a 60% chance of winning, they will take that as an ‘expected win’ and think of it as something a lot higher.
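The commenter’s point scales up quickly: over a session of a hundred rolls, at least one snake eyes is close to a sure thing. A quick check (standard complement-rule arithmetic, nothing assumed beyond fair dice):

```python
# Chance of seeing at least one snake eyes (a 1/36 event) in n throws of two dice.
def at_least_one(n: int, p: float = 1 / 36) -> float:
    return 1 - (1 - p) ** n

print(at_least_one(100))  # ≈ 0.94: far from "literally impossible"
```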

I think that “extremely possible” means a bit more: this is a very uncertain election. I remember that about a month ago Silver tweeted that there is a 40% chance of either party winning BOTH the House and Senate. I don’t know the mathematics of that, but it seems reasonable to me. The underlying political circumstances and the known indicators are very unusual. Early voting, young voting, and large turnout have all favored the Dems in previous elections. On the other hand, we don’t know how much the resurgence of hatred and racism (including people who are willing to tolerate racism in others) has grown. We don’t know how much of the GOP base has fled the party. Or, exactly how much of Trump’s 2016 win was due to economic dissatisfaction and/or disaffection with Clinton, two big factors which have been lessened/removed. And so on. We are going to learn lots about the US electorate in this particular election tomorrow.

Maybe it has something to do with a somewhat idealistic tendency to think of polls as expressions of – oracles of – ‘the general will’?

I don’t think that’s it, psychologically. To begin with, nobody cares about the general will, and the 1% who have actually read Rousseau remember there is a section entitled That general will may err (or words to that effect, I have no idea how it is translated). But more importantly, it seems to me that we have the same psychological reaction to probabilities applied to many accidental phenomena, in which the general will plays no role whatsoever.

For instance, when you expect a baby, doctors would frequently compute a probability it has such or such disorder (they would always do so for Down syndrome in France, for instance) in terms of the general circumstances of the pregnancy. Then, depending on further tests during the pregnancy (say nuchal translucency examination for Down syndrome) they might revise the probability. Among future parents with whom I discussed this matter, it was not exceptional to hear the experience reported in this way: “so the baseline for us was 1/1000 and after the test, it went down to 1/2500, so everything is great” or “the baseline was 1/2000 and after the test it jumped to 1/250, so we had five weeks of real anxiety until the next test”, and I did feel something like this myself.

Yet, I believe it is really hard to get an actual sense of what a 1/250, 1/1000 or 1/2500 chance might mean. Events that unlikely and truly random in ordinary life are quite rare, in fact, and when they do happen, we tend to process them as amazing coincidences (what are the odds that among the 7 commenters who commented on this thread at the time I’m writing, two at least have the same PIN number on their main credit card? the answer is very close to 1/500) or fateful events. For people interested in politics, election results fall quite precisely in the latter category, surely.
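The PIN figure above can be checked with the standard birthday-problem computation (assuming, unrealistically, that all 10,000 four-digit PINs are equally likely; real PINs cluster around choices like 1234, which only raises the odds):

```python
from math import prod

def prob_shared_pin(n_people: int, n_pins: int = 10_000) -> float:
    # Birthday-problem calculation: 1 minus the probability that all
    # n_people hold distinct PINs, with PINs uniform and independent.
    p_distinct = prod((n_pins - k) / n_pins for k in range(n_people))
    return 1 - p_distinct

p = prob_shared_pin(7)
print(p, 1 / p)  # ≈ 0.0021, i.e. roughly 1 in 480
```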

It is rather common for laypeople to say “only 5% of people in the world have this disease, so you’re silly to worry about it.”

There’s also a psychological distinction for most people between, say, paying $15/hour to participate in some activity, and playing a low-stakes, relatively-high-odds game that averages out to the same cost. The person who is willing to do the latter, knowingly, should not in my opinion be assumed not to know that, in the long run, he is certain to lose (not get his money back).

85% chance in an election is different than 85% chance in a game of chance. In an election, that 15% (usually) represents the possibility that there is a systematic error in the information – something that is not present in games. [Now I want to make a game with that!].

Our bias creeps in when we ask, “How likely is such an error? Is it really 15%?” We look around at all the people we know, which steers us wrong in our preferred direction.

Humans aren’t hardwired to handle quantitative analysis–they like it best when it operates in accordance with confirmation bias. “I had a hunch and the numbers proved me right.” The reason is pretty straightforward–back in the old days, the lad or lass who stayed on the ground to count the number of animals in the approaching pack of carnivores instead of scampering up into the trees with the rest of the troop generally didn’t survive to make his/her contribution to the gene pool.

That said, in the spirit of analysis of analysis: over the past year, Silver has consistently argued that in 2016 the predictive failure was an interpretative failure (in a nutshell, the analysis didn’t take into account the operation of the electoral college), while this time around the potential for failure lies within the polls themselves (systematic polling error). His articulated basis for considering systematic polling failure is simple and persuasive: the three best predictors of outcome are special election results, fundraising success, and the polls themselves, and they are in discord. There are other, deeper and more controversial, reasons as well, of course.

In another 48 hours we’ll have a much better feel for the kind of country the United States has become, and the viability of its government.

I agree with M Caswell at 4; in fact this was my explanation for the apparent surprise about Trump’s victory and the Brexit vote.

I think that what happens is something like this:
– statistician S looks at his surveys and sees that Clinton has 51% of the vote, Trump has 49%.

– S also knows that his surveys have some level of possible error, that he estimates at +-2%. Therefore the Clinton vote will be somewhere between 49%-53%, the Trump vote between 47%-51%.

– S then estimates Clinton’s victory as 85% likely, because only a minor slice (15%) of the “cloud” of possible results enters the “Trump wins” area, and tells this to the papers.

– Joe Average reads that Clinton has an 85% chance of winning, doesn’t know / doesn’t think about the idea of the “cloud” of expected results, and therefore reads this as if Clinton had an estimated 85% of the vote, which is totally NOT what S meant.

– 85% of the vote obviously would be certain victory.
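The “cloud” picture above can be made concrete with a toy normal model – my own illustrative numbers, not anything S actually computes: treat the polled lead as the center of a normal distribution of possible true margins, and the win probability is just the mass on the winning side of zero.

```python
from statistics import NormalDist

def win_prob(lead_pts: float, sd_pts: float) -> float:
    # Probability the true margin is positive, modeling the polled lead
    # as the center of a normal "cloud" with the given standard error.
    return 1 - NormalDist(mu=lead_pts, sigma=sd_pts).cdf(0.0)

print(win_prob(2.0, 2.0))  # ≈ 0.84: a 2-point lead with ±2 error is roughly an "85%" call
```

Note how small the gap is between the two readings: a 2-point lead (51–49) yields an ~85% win probability, yet “85% of the vote” would describe an utterly different election.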

This kind of thinking also goes with the interpretation of the results: if a politician (say Obama) wins an election with a good margin like 55% to 45%, papers will say that he won by a “landslide”, so the impression is that now everyone is a Dem and there are no more Reps. But in reality, if you enter a room where 55% of the people have black hair and 45% have blond hair, you will think that hair color is quite evenly distributed: the human brain doesn’t really work well on these small differences in percentages (if you don’t actually count people).
So this kind of language presents as epochal cultural changes what are really moderate fluctuations, even though the political stakes of an election make the effects of these fluctuations very important.

Given that early voting has exceeded 2014 levels by ~25% (and counting), and that in some states early voting is close to or exceeded total 2014 votes (TX, AZ, FL, GA, UT, MT, NV), it must be extra hard for polling operations to choose a likely voter model. It’s clear that historical data cannot be right! For that reason, I would take the margins of error and the possibility of outlying events as extra important this year.

It is very hard to rate a forecaster after a single event. However, Silver’s record after 2016 is looking very strong. Maybe Trump was super-unlikely. But given that it did happen, Silver’s estimate was much more likely to be accurate than most of the others. On the other hand, you can’t really say much about two forecasters, one of whom gives an event 60% probability and the other 85%. One outcome will tell you very little in this case.

Regarding the significance of 85% probabilities, any quality poker player will appreciate that 1 in 40 odds sometimes get home. Even 1 in 10 events are perfectly reasonable things to base a change in play on (semi-bluff, for example). But cards or dice have a distinct nature of being really random events. The interesting thing about this election is that there is no cosmic roll of the dice involved. Whatever is going to happen has largely already been determined (in terms of big swings over the range of possible outcomes, not whichever races end up being super close), most likely even weeks ago. We are just going to learn a lot about the state of affairs when the ballots are finally counted.

A related tendency which I don’t see in myself is this: there are some people who insist on a yes-or-no answer and think you are being a weasel if you say “probably” or “probably not”. I have no examples offhand, but I think I’ve seen this on TV and in movies – the hero hears some pencil-necked geek waffle about what will or won’t happen; or, alternatively, the hero is the person using the probabilistic language, his boss demands a yes or no answer, and the hero or heroine takes a deep breath and fully commits.

Thinking there’s fully an 85% chance of Dems taking the House makes me think: well, if the chances are THAT high, they must be higher still!

Ah, but I think that if there is as much as a 15% chance of Dem defeat, then surely the odds of defeat must actually be greater. You see the glass as 85% full; I see it as 15% empty.

I think Silver, even taken out of context the way Taleb did, made a valiant effort to describe a difficult concept. I suppose he could have said “very real possibility,” but I like Silver’s formulation better. When there is a 1% chance of something, that thing is possible. When there is a 15% chance, it is extremely possible.

Forecasters may survive either way, but whether 538 is accurate at predicting is perfectly possible to test empirically. After they predict 100 races at 85%, go and see whether approximately 85 were won.

And this is, of course, a concept that Silver has discussed. He acknowledges that if every race he predicts at 85% goes that way, then he is wrong.
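The calibration check described above is easy to sketch. A minimal version, run here on simulated data for a perfectly calibrated forecaster (the prediction data are invented for illustration):

```python
import random

def calibration(preds, outcomes, lo=0.80, hi=0.90):
    # Fraction of events, among those forecast with probability
    # in [lo, hi), that actually happened.
    bucket = [out for p, out in zip(preds, outcomes) if lo <= p < hi]
    return sum(bucket) / len(bucket)

# Simulated data: a forecaster who is exactly calibrated at 85%.
random.seed(0)
preds = [0.85] * 1000
outcomes = [1 if random.random() < p else 0 for p in preds]
print(calibration(preds, outcomes))  # lands near 0.85, not near 1.0
```

This is exactly Silver’s point about his own 85% calls: a hit rate near 1.0 in that bucket would mean the forecasts were miscalibrated, not vindicated.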

Silver had Trump winning at something like 30%, with a significant percentage of that probability given over to a scenario where Trump lost the popular vote and won the electoral college. I think he acquitted himself very well in 2016.

Contrast that with someone who predicted a Trump victory based on the idea that Trump would get more votes. That person is doing a lot of completely unjustified self-congratulation, but Silver still looks good in retrospect. James Comey, who was sure that Hillary would win, looks even worse in retrospect than he did at the time.

@ John Holbo “Forecasters may survive either way, but whether 538 is accurate at predicting is perfectly possible to test empirically. After they predict 100 races at 85%, go and see whether approximately 85 were won.”

… but that isn’t going to happen is it. Statistics of probability of binary outcomes require enormous numbers of instances before you can draw conclusions, and the predictions aren’t going to be 85% all the time. For polling, there just aren’t anywhere near enough instances.

Even for cases where there are lots of data, e.g. financial markets, models that predict sufficiently well to be worth investing in are very hard to develop.

@3
I’m not particularly a Nate Silver fan but he gets a lot of undeserved grief for this kind of thing. He’s perfectly fine as long as he speaks probabilistically. Where he gets in trouble is when he speaks in colloquial English or succumbs to the many requests for “predictions” and “forecasts.” As for the former I’m pretty sure he has a good grasp and it’s the rest of society that needs a remedial course on the subject. As for the latter he sometimes screws up spectacularly (e.g. some of his early columns on Trump).

The electoral system has ceased to work like an equilibrium process in thermodynamics and become like the last 15 seconds of a basketball game. That is, the future trajectory is dominated by active, informed strategic intervention by participants. It’s a mug’s game, or maybe fanfic, to guess who has what up their sleeve, and assigning “probabilities” is entirely ungrounded.

So in an environment dominated by “unknown unknowns”, where everybody has got significant skin in the game, you got to put it down somewhere. So I imagine that for most individuals, “probability” has to do with how good you feel about the ticket you bought, or at any rate the face you want to show the neighbors.

As to predictions: on any given Sunday, any NFL team might beat any other NFL team.

No, people make this kind of mistake all the time with dice. You play a game with literally hundreds of rolls, and people will get super-salty over rolling snake eyes. As if a 1/36 chance were literally impossible.

The only instances in which I expect to find 85% (or more) of the vote going to one candidate in this election are:
1. Uncontested races, mostly local, such as all of the Judge positions that are on my ballot for tomorrow’s election, and
2. Brian Kemp’s gubernatorial bid in Georgia, where he currently supervises voter records and elections. Plus, they vote on machines that do not create a human-verifiable audit trail, and which have already exhibited problems in early voting.

I think rather that events we would call “random*” because we can’t see or can’t be bothered to detail the causal chain happen constantly in plain view. Individual leaves are falling from the tree outside my window. I wonder why we are so attracted to or comforted by order even unto our ontologies.

*Whether or not there are actual “random” events – truly uncaused events – is more interesting and controversial. In our rationalistic worldview it is hard to think about randomness, really. The principle of continuity is sacrosanct; the second law of motion doesn’t stop working on alternate Tuesdays, and if it did we would seek the reason why. “No reason” might eventually get you committed.

Heck, even the pagans found reasons in nymphs, yokai, whatever. But religion was a little closer to discontinuity than we are: stuff doesn’t just happen, but it also doesn’t always need explanation.

So we have the historical illusion, so we can put one foot in front of the other.

Our thought is that at 85% a Democratic underachievement would be a crime against nature but I won’t be surprised because I doubt Silver is adequately including actual electoral crimes in his analysis.

Or are we being prepared? Crazy probabilities so we don’t scream about ballot shenanigans? Darn, whoda thunk the odds could turn up 50 new Republican congresspersons?

This actually came up in Sex and the City. Carrie’s friend has breast cancer, but the prognosis is reasonably good and Carrie is confident her friend will survive. Carrie’s boyfriend says his friend died of breast cancer, and Carrie needs to accept the possibility that her friend will die too. She’s absolutely furious at him for saying that, and refuses to believe there’s any chance of her friend dying.

Cowards! –
as there really isn’t any (emotional) question about the Democrats taking “the House” for sure – and all of these “cowards” who have become so damn careful because they got burned with their silly predictions based on silly polling and NOT on reading “the people’s mood” –

“Logically, it should be enough to point out: polling error. But somehow it doesn’t feel like that should be possible. Not because experts can’t be wrong but because the general will of the people ought to be a thing that expresses itself. If you are getting a strong signal through the polls, that can only be explained by it being a message coming through. The message must be true.”

It’s bad enough that the billionaires have changed the laws so they own all the media, and rigged the system (Citizens United) so they can call the shots.

But like many others here have pointed out, the things that can’t be determined by polls are voter suppression, ballot tampering (which still isn’t being investigated as thoroughly as it should be), voting machine irregularities (usually in GOP states), etc. etc.

Back in pre-November 2016, when I actually still had hope for the world as I live out the last part of my life, I constantly titrated Silver’s blog against Sam Wang’s. Wang was all 95% Bayesian sure of Clinton’s victory, and famously ate a bug later to fulfill his confidence-laden promise to do so if wrong. Silver troubled me enough to post a question about Wang’s priors on his blog–specifically about gathering reliable data on specific polls, since so much of that comes from voluntary replies from cold cell-phone and landline calls. Why should one think this is reliable enough to truly bolster confidence? He never replied of course, but at least he gulped that bug. I’m hoping for a pivot in the House, and hoping my contributions to Dems–the most I’ve ever contributed–will pay off in restoring some of my pre-2016 hope. But I can’t say I’m optimistic.
For those more in the know about stats, I’ll ask–how can we become more confident about predictions based on surveys?

This Current Affairs article has a nice long list of Nate’s “Well Actually, Trump’s Losing” updates. Pretty hilarious seeing them all in one place.

“Nate Silver will probably always be the best poll data analyst. The problem is that poll data analysts are completely fucking useless in a crisis. They don’t understand anything that’s going on around them, and they’re powerless to predict what’s about to happen next. Listening to anything they have to say is very, very dangerous. If you want to change anything, you’ve got to forget Nate Silver forever. That’s because he tells you entirely about the world as it looks to him right now, rather than the world as it could suddenly be tomorrow. He has no idea what the outer boundaries of the possible are. Nobody does.”

Catchling @20: Exactly. Nate was noting that there were almost no plausible scenarios in which the Democrats win the Senate while losing the House, so in the case of the overlap being effectively 0, you just add the two probabilities to get the chances that one party will control both.

Dipper: Statistics of probability of binary outcomes require enormous numbers of instances before you can draw conclusions

This is not quite true, unless you’re using a very low estimate of “enormous”. I wanted to double-check this, so I literally did some back-of-the-envelope calculations and it appears that you could tell the difference between an 85% chance of an event occurring and a 90% chance with roughly the typical 95% certainty cutoff after only 70ish independent observations. FiveThirtyEight has had more observations than this if you count their separate district-level and state-level predictions for multiple elections, even assuming a high average cross-correlation of errors.

There are certainly more flexible tests that could check if a probability model for a binary outcome is well-calibrated across a wide variety of predicted values. For example, running a logistic regression of (some transform I’m too lazy to work out and don’t know off the top of my head of) the predicted odds versus an outcome indicator for each event and seeing if the estimated slope is statistically indistinguishable from one should do the trick and should have quite narrow standard errors with a hundred or so data points.
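For what it’s worth, redoing that envelope calculation under one simple set of assumptions (a one-sided normal-approximation test of an observed 85% hit rate against a null of 90%, significance 0.05) gives a figure nearer 100 than 70; a different test or threshold would give a different number:

```python
def n_to_distinguish(p0: float, p1: float, z: float = 1.645) -> float:
    # Smallest n at which an observed success rate of p1 rejects the null
    # rate p0 in a one-sided normal-approximation test at threshold z.
    # Solves |p1 - p0| = z * sqrt(p0 * (1 - p0) / n) for n.
    return (z / abs(p1 - p0)) ** 2 * p0 * (1 - p0)

print(n_to_distinguish(0.90, 0.85))  # ≈ 97 independent observations
```

Either way, the qualitative point stands: the required count is in the rough range FiveThirtyEight reaches once district- and state-level calls across elections are pooled, cross-correlation permitting.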

– and so let me also make an extremely possible prediction –
tomorrow, after the D’s will have won the House, it’s extremely possible that this Silver dude will extremely regret that he made such a silly, cowardly prediction – as he could have gained some seriousness back by finally predicting something which will happen.

Jo Wolff posted something on the twitter recently: “the online forecast says 50% chance of rain at 12, it is 12, it is actually raining”. I think there’s no inconsistency unless the forecast says 0% chance of rain.

Where Silver differs from most of the numbers people I have read on the subject of polling, is that he acknowledges that big systemic error is possible in the polling process.

Polls sample a tiny and highly selected slice of the electorate. Make it a big enough slice and polling can get past the random drift limitation to deliver a very small window of the uncertainty that arises from random drift. But nothing that polling can do can get past the uncertainty created by the selectivity of the samples it looks at. They’re not random samples, not even close. They are good at predicting actual outcomes because, so far at least, if the pollsters make the necessary corrections and adjustments that have worked in the past, the many biases that selectivity introduces cancel out, or are not differential biases, not biases favoring either party. But nobody understands, or could understand, biases that go any deeper than the demographics the pollsters can correct for.

This cycle, of course, we have the Trump factor. Trump has upended our confidence that we understand the underlying dynamics of who wins elections and why, so it seems especially incautious to assume that this cycle the biases introduced into polling by its selectivity will continue to cancel out each other, and/or remain non-differential. This is a new world, for better or worse.

Silver’s method for allowing for this imponderable bias is to put in a fudge factor that works to draw his estimates back from probabilities near 1 or near 0. That’s brute force, as these truly are imponderables and his fudge factor is rather arbitrary in its magnitude, but it’s better than nothing. I put in a bigger fudge factor and just assume that we have little to no idea who is going to win what today.

What I remember from Silver in 2016 was constant discussion of probabilities, and the various ways his projections could go south. (Maybe this is an artifact of reading only the website and not anything reported about the website?)

The other memory I had was, “Hillary Clinton has about the same chance of losing the presidency as an average NFL kicker has of missing a field goal from the 27 yard line.”

Systematic polling errors are actually really easy to produce because everyone wants to use a likely voter model that seems “reasonable” based on past elections, but every election is in some ways exceptional. There are powerful mobilization and empowerment effects within various demographic groups and communities that vary strongly from year to year. These effects relate both to the general political mood (is the media focusing on the issues I care about?), as well as issues of representation (are there people on the ballot who look or sound like me?), and the tangible effects of activists and organizers. This last bit isn’t just a function of fundraising or lack thereof, but also community penetration and message framing, and depends largely on how hopeful or fearful the community in question is about the state of society and whether they think they can do anything about it (marginalized groups may be afraid to vote — and thus draw unwanted attention — while privileged groups might be afraid of not voting and losing power).

The more you chop up the electorate into smaller demographic chunks, the more obvious the variation from year to year becomes. The electorate only appears to be as static as it is because there are relatively few people who always vote or almost never vote (which is very different from being a non-voter) and most voters fall into the “sometimes” or “usually” vote categories. These voters will become mobilized or demobilized based on a combination of cultural discourse, issue salience, candidate affinity, and direct contact from campaign or community organizations, but also by the actions and relative strength of their political opposites (which is why candidates usually emphasize how close an election is even when it isn’t). So often groups wind up canceling each other out. Landslides, on the other hand, tend to happen when one cultural/political movement is engaged, organized, and empowered while its opposite is confused or divided. This may well be what happens today.

So, Trump was supposed to have a 15% chance of victory before the election, IIRC. That’s not low; it’s more or less the same as the chance of rolling a 6 on a normal die.
Nobody is surprised when a die comes up 6, but many people were surprised by Trump’s victory. Why?

Same with Obama: he was supposed to have an 80% chance of victory, so the fact that he won the election by a good margin was seen as a vindication of this supposedly high chance of winning. Why? A 20% chance of the other guy winning isn’t really a low-probability event.

Same with Brexit, which surprised everyone but was forecast at 20% likelihood, IIRC (I might be mixing up some of these numbers).

Why are people surprised when medium-to-low-probability events (not very-low-probability events, but medium-to-low) happen?

Weather. I visited a neighboring city about 2 miles from home. I had to pull over since I couldn’t see the road for all the rain. The streets back home were bone dry. A wave of thunder cells passing through the viewing/prediction/diagnostic area caused rain in 50% of the area, i.e. a 50% chance of rain. Will you pull in the laundry, or do you feel lucky?

The “House” result that is of interest is that of the Democrats getting +23 [or more] out of close to one hundred independent races, each with its own probability profile. Nate Silver runs the numbers and guesses that there is an 85% chance that the Democrats will win +23 seats over all. It is a probability of probabilities. It doesn’t say squat about any particular race, and only suggests a secondary “mass” outcome. That information is useful if you are betting or running a campaign; otherwise, I’d like butter and hot paprika on my next dish of popcorn, SVP.
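That “probability of probabilities” structure can be illustrated with a toy Monte Carlo – invented per-race leads and a crude shared national error term, nothing like Silver’s actual model: draw one polling error common to every race, shift all the leads by it, add independent per-race noise, and count how often the seat total crosses the threshold.

```python
import random

def seat_prob(leads, need=23, sims=20_000, race_sd=3.0, national_sd=2.0):
    # Toy Monte Carlo: each run draws one shared national polling error
    # plus an independent per-race error, then counts won races.
    hits = 0
    for _ in range(sims):
        national = random.gauss(0, national_sd)
        seats = sum(1 for lead in leads
                    if lead + national + random.gauss(0, race_sd) > 0)
        if seats >= need:
            hits += 1
    return hits / sims

random.seed(1)
leads = [random.uniform(-5, 6) for _ in range(60)]  # invented race leads
print(seat_prob(leads))  # one overall probability; says nothing per-race
```

The shared error term is what makes the aggregate number so sensitive to systematic polling error: it moves every race at once, which is why the tails are fatter than independent races would suggest.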

I sense there’s a bit of a Bayesian vs. Frequentist split in this thread over whether it’s even meaningful to assign a probability to a one-off event like an election, as opposed to a large population of events (such as repeated rolls of a die)…

Somewhat related: different genetic tests will give me completely different probabilities of having a disease, and both be right. P(I have the disease | I have set x of alleles) and P(I have the disease | I have set y of alleles) can be completely different. Meanwhile, I either develop the disease or I don’t.

This example might be better: I know I have red hair (100% certainty). If some genetic test tells me I have an x% chance of having red hair, this does not affect my knowledge that I do in fact have red hair, but possibly tells me something about the reliability of the test.

In The Times today Danny Finkelstein has a column on modelling and prediction of voting. It’s behind the paywall, so briefly: he quotes John Sides and Lynn Vavreck, who use “sophisticated statistical techniques to identify the things that really matter”, which turn out to be change in GDP, the president’s approval rating, the presence of an incumbent, and how many consecutive terms a party has enjoyed. And race: white, poorer Democrats started voting Republican.