Posted by Soulskill on Friday September 25, 2009 @08:25PM
from the lies-damned-lies-and-statistics dept.

An anonymous reader writes "Nate Silver suggests the political pollster Strategic Vision is 'cooking the books. And whoever is doing so is doing a pretty sloppy job.' Silver crunched five years' worth of their polling data, and found their reported results followed a suspicious pattern which traditionally suggests fraud. The five-year distribution of the numbers 'is not random. It's not close to random.' The polling firm had already been reprimanded by the American Association for Public Opinion Research for failing to disclose their methodology, though the firm argues they did comply with the organization's request. Their response to Silver's accusation? 'We have a call in to our attorney on this and fully intend to take action that will vindicate us.'"

Not sure if you're trying to make a pun, but "categorical" in this case means "without exception." For example, Kant talks about categorical and hypothetical imperatives. Categorical imperatives you do always without exception (such as never lying, according to Kant anyway). Hypothetical imperatives are what you do based on the situation (CPR is appropriate only when someone is not breathing, for example).

It is reasonable to assume that non-significant digits will be uniform, given a sufficiently large sample. On the other hand, it is not reasonable to expect that mere second digits will be uniform in data that is as highly biased as poll results.

In other words, I don't expect any particular distribution, but I don't believe that the mere presence of a non-uniform distribution is enough to prove wrong-doing.
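One quick way to check this intuition is to simulate honest poll numbers from a clearly biased, bell-shaped distribution and tabulate the trailing digits. This is a hypothetical sketch - the mean of 45 and spread of 8 points are made-up parameters, not anything from the article:

```python
import random

random.seed(1)

# Hypothetical sketch: draw honest poll results clustered around 45%
# (clearly non-uniform data) and tabulate the trailing digit of each
# reported whole-number percentage.
counts = [0] * 10
for _ in range(100_000):
    pct = round(random.gauss(45, 8))   # a biased, bell-shaped poll result
    pct = min(max(pct, 0), 99)         # clamp to a valid percentage
    counts[pct % 10] += 1              # trailing (second) digit

shares = [c / 100_000 for c in counts]
print([round(s, 3) for s in shares])   # every share lands near 0.10
```

With a spread covering a few decades of percentages, the trailing digit lands very close to uniform, which is why a strongly lumpy observed distribution looks suspicious; with a much tighter spread, though, the digits can inherit real bias, which is the point above.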

IANAS (I am not a statistician), but according to Wikipedia, Benford's law applies to the distribution of the first digit, which follows a logarithmic distribution. This makes complete sense, since the probability of certain numbers will be higher than others (e.g., in telephone bills, a leading 1 is probably much more likely, since there are a lot of people with $100+ phone bills). But they are discussing the *2nd* digit. This should be uniform unless it's a very strange dataset.
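For reference, both distributions fall out of a one-line formula. The sketch below (plain Python, nothing from TFA) computes Benford's first-digit law and its lesser-known extension to the second digit:

```python
import math

# Benford's law: P(first digit = d) = log10(1 + 1/d), for d = 1..9
first = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Extension to the second digit: sum over every possible leading digit k.
# P(second digit = d) = sum_{k=1..9} log10(1 + 1/(10k + d)), for d = 0..9
second = {d: sum(math.log10(1 + 1 / (10 * k + d)) for k in range(1, 10))
          for d in range(10)}

print(round(first[1], 3))   # ~0.301: a leading 1 shows up about 30% of the time
print(round(second[0], 3))  # ~0.120: by digit two the law is already nearly flat
```

So the parent is right in spirit: Benford's effect decays fast with digit position - second digits of Benford-distributed data range only from about 12% (for 0) down to about 8.5% (for 9), close to uniform but not exactly so.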

Reading TFA, Nate's analysis implies that there is a systematic bias toward some last digits in the overall poll percentages aggregated over many disparate topics.

What seems so improbable (to me), if someone really were grossly "cooking the books" like this - literally not doing the poll or tallying any numbers at all, but instead simply reporting fake results to the press - is that they would be so stupid as to make up the results manually instead of using a computer in some way. What, some guy in an office reading other polls and saying "gee, I think the number will be 45%"?

If this kind of bias really has been introduced by manually creating and publishing the results (as the analysis seems to imply), then it will be easy to track down and prove with further digging into the data, interviewing the people who made the calls or took the data, etc. However, accepting such an explanation would require a level of stupid on the part of the principals in this company so extreme that I find it an improbable explanation for the results presented.

However, I'm unconvinced that this is some sort of smoking gun; Silver needs to really run this sort of simplistic analysis on a lot of other polls and see if there in fact is a bias towards a 47 - 43 split with 10% undecided. That actually sounds about right for a lot of the polls I remember in the last election.

If you read TFA, Nate addresses this. He states that his data--SV LLC's polling results--are selected from a wide, wide, wide variety of topics, not necessarily just the highly divisive ones where there may be a relatively even split between two choices.

Moreover, (as Nate states) over enough data, even the effect of the undecided percentage on the trailing digit should be random.

Except that in this case, the trailing digit is merely the second digit. A bias in the second digit of what is after all highly biased data (you don't have a lot of 98-2 results in polls) is not unlikely, even in samples much larger than what he's using.
Not saying that the company is honest, but Silver's argument is not sufficient to condemn them.

all i get out of reading the article is that silver has a bee in his bonnet and doesn't like the firm in question. anyone who's done a statistics course knows numbers can be twisted and played with to come out with just about any answer. i'd be very surprised if ALL pollsters do this.

I don't know, this sort of reminds me of a recent case of fraud in Physics [wikipedia.org]. If a PhD physicist can make such a mistake, it doesn't seem totally unbelievable to me that a polling firm might. Also, you have to ask yourself if they ever actually expected their results to come under much scrutiny.

Nate Silver does great analysis at the first order multiple-linear-regression level -- he outperformed all the other polls/predictors in 2008 iirc.

He sucks at meta-analysis though, in that he just doesn't understand the math. His 2008 monte-carlo stuff gave good results, but was just a bad reinvention of averaging. His recent foray into analyzing stock returns was interesting but 0-information (i.e. useless.)

Now he's mentioning Benford's law, but playing with trailing digits. Then he handwaves a non-normal result with an appeal to "it looks wrong." Come on, give us some real math here!

That said, he's probably right, but he's given us no math to support his claim.

It's not quite as unlikely as he says (half a million to one instead of millions to one), but Strategic Vision is almost certainly sampling something that is not what the rest of the industry is sampling.

Benford's law is sometimes called the First Digit law. It deals with cases where numbers are not equally probable, but rather lower integers are more common than higher ones. A good example of such a number is the first digit of street addresses. There are many short streets that only have a 100's block, and only a portion are long enough to also have a 200's block, fewer to have a 300's block, and so on, so the first digit is not equally likely to be, say, a 4 or a 7, rather there will be more fours than sevens. Some stock market numbers should fit Benford's law, and there are plenty of other cases with real world applications.

However, the law in extended form does work for second or higher digits, or cases where the most likely value for a digit is not 1. Take the IRS for example. Last year, the standard deduction for married filing jointly was an even $10,000. Many people didn't bother to itemize schedule A unless it got them at least a couple of hundred extra back. So there were many people who claimed $10,2XX on their itemized returns, a few fewer who claimed in the $10,3XX range, and so on. $10,0XX or $10,1XX values probably weren't the most common, because a lot of people probably didn't bother to gather all the records needed and do all the paperwork if they thought it was only going to get them, say, an extra $27 or even $104.

The IRS could, and probably does, use Benford's law to look for number patterns that may indicate fraud, but for some of those numbers, it's the second or later digit that they should start at. (They won't publicly discuss whether they have any sorting/flagging software that is Benford's law based. I suspect they do, as it would be foolish not to take advantage of the math here, but I have absolutely no proof other than that I use some of the same math in a private role, and it's been damned useful a couple of times in spotting a client trying to get me involved with something shady, so it should work equally well for the government.)

So, using Benford's law for second or other trailing digits is legitimate. I can't tell from the article whether Nate Silver is doing everything else correctly, but the extension to a particular trailing digit isn't itself a flaw, and I could come up with a good psychological argument for why humans might fudge the second digit by a point or two, but only when it isn't already an 8 or 9, so as not to make the 10's digit roll over. So focusing on digit 2 could certainly be justified (as could focusing on the second digit to the right of a decimal point for precision results, by much the same logic).

Their response to Silver's accusation? 'We have a call in to our attorney on this and fully intend to take action that will vindicate us.'"

Generally, I would expect the logical course of action for an honest and transparent firm would be to hire a statistician to vindicate itself. Lawyers don't make a disreputable firm appear any more reputable.

This is not slander. He's just said "I've mined their data and these are the results. This smells somewhat like a rat. This needs looking into." He was very careful to avoid any direct accusation of impropriety, only saying "This is what patterns like this often mean".

I've been following Nate ever since the 2008 elections, and I've much enjoyed his analysis. Being a mathematician, I can spot BS math, but Nate usually does a decent job with no BS. But this article has so many analytical gaps that I feel awkward supporting him this time, even though the article as a whole is convincing. To make as bold a claim as he is making, I would've expected him to assess this more completely. He did no comparisons to other pollsters, and sampled data that is not IID (independently and identically distributed): if a boolean poll has 49% for one side (last digit 9), the other answer has to be 51% (last digit 1), so the last digits are completely dependent. Not all polls are boolean, but there will still be correlations, and many polls in the sample are boolean. Not only that, but he mis-applied the reference to Benford's Law. I know he knows what Benford's law is, because he's had multiple other posts about it, but he got it dead wrong in this article.
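That last-digit dependence is easy to see concretely. A tiny illustrative sketch (made-up two-way polls with no undecideds, not SV's actual data):

```python
import random

random.seed(0)

# In a two-way ("boolean") poll with no undecideds, the two percentages
# sum to 100, so one trailing digit completely determines the other.
for _ in range(1000):
    a = random.randint(1, 99)   # hypothetical result for side A
    b = 100 - a                 # side B is forced, not independent
    # The trailing digits always pair up: (9,1), (7,3), (5,5), (0,0), ...
    assert (a % 10 + b % 10) % 10 == 0
print("last digits are perfectly dependent")
```

So a sample full of boolean polls effectively contains half as many independent trailing digits as a naive count would suggest.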

I'm glad there is someone sufficiently mathematical to look for things like this with a wide enough audience to be heard, but I wish he'd taken some more time to look at more control groups and compute some confidence intervals before sticking his head into a potential legal mess.

I have been programming accounting software for almost fifteen years and the first nasty lesson I learned was that data can be presented in unlimited ways and if you want to get paid you better make it look good. Change the scale, oversample, skew the questions and all sorts of other nasty tricks are now par for the course.

We now have well respected polls contradicting each other by double digits because of the politicizing of any information that might change voters' opinions. I never thought that I wou

BREAKING NEWS: The AP is reporting a major fuckup at Slashdot. The web site cannot even do the most basic task essential to its operation: allowing readers to leave comments on articles. No comments were available from anyone employed by the web site. Phones rang and rang and rang. Several other Sourceforge properties had their numbers disconnected due to non-payment.

It is apparent no one in charge of the place gives even a sliver of a fuck, or even reads the front page after articles are posted, as it is 2009

I know this might be slightly off-topic, but I think that the issues Slashdot has been having are due to an unexpected spike in traffic after they posted the story of how 3D Realms was switching over to Epic's Unreal Engine for the upcoming Duke Nukem Forever. I'm pretty stoked about this and am saving up to be able to afford a Voodoo2 - DNF is gonna be da bomb!!

Seems like NASA story comments are appearing in here. Tragically, GP might have been modded off topic and is now being mocked through no fault of his own. There is no justice in this world *shakes fist at the gods*

From TFA, it looks like they handle a fair variety of sundry topics in American politics. Not a giant deal - I've certainly never heard of this particular outfit before - but I find it extraordinarily hard to believe that anything which increases the amount of false-but-plausible-looking noise in the world is a good thing.

On important topics such noise is more dangerous than on less important ones; but its mere existence makes the world a less knowable place either way. Either you have people believing false data, or you have people falling into the essentially nescient "all data are just source biases" position.

I find it disturbing, too, that the media just reports the polling companies' results, without reporting things like what questions were asked, in what order, how the poll was conducted or who commissioned it, all of which can have a big effect on the results. A lot of "push polling" goes on, especially when the polls are commissioned by special interest groups, business associations, unions or political parties themselves.

I'm not in the US, so I don't know this polling company, but I've had a municipal, provincial and federal election in the past 12 months (with another possible federal election imminent) and I think polling and radio call in shows have a great deal of effect on people's opinions these days, more so than traditional newspaper and television newscasts.

If Strategic Vision was conducting fraudulent polls, I would be looking at their client list and going after whoever paid for them as well.

Pretty good on the explanation of the "who" that was polled, but not the questions.

Just a silly example: "Are you in favor of decreasing the speed limit on Main Street to 5 MPH?" vs. "Are you in favor of saving cats and squirrels on Main Street?". I know, a silly example, but it is non-political and illustrates the point that the wording of the question, as well as the sequence of the questions, contributes to determining the results of the poll. Even just the tone of voice can push someone in a direction. Th

Voting, being a proactive decision, automatically introduces selection bias into the poll; i.e., the sample group represents those who are likely to volunteer their opinions, rather than a cross-section of the general population.

Strategic voting is the worst thing that you can do to a democracy. It makes every political system fall into a two-party system [wikipedia.org], which (see: United States) becomes a de facto one-party system.

If the vote is to reflect public opinion, people should vote their own opinion. They don't need to try to help the system by guessing the most popular option.

Sure, in an unattainably perfect world with perfect election systems, this would be true. However, one must note that it's impossible to have a single-winner voting system, with more than two candidates standing for election, in which strategic voting is not rewarded, if voting actually matters at all.

the police are too far away. so we have a status quo here currently in the usa where hundreds of urban dwellers die every year from thugs with guns for the sake of a law which serves only the rural minority. but as the usa continues to urbanize further, and begins to equal european urban/rural ratios, political status quo will fall in line inevitably

and instead of HUNDREDS of urban dwellers dying every year for the sake of rural-friendly laws as we currently have, DOZENS of rural folks will die instead for

You really think that the only people who want guns legal are rural? And that the laws are "rural friendly" in that regards? I've got news for you, the vast majority of gun owners and enthusiasts are urban dwellers, and that isn't looking like it's going to change anymore now than it has in the past couple of decades.

In addition, you really think that the majority of murders with weapons wouldn't happen without weapons? People murdered each other before guns were invented; removing them might make a few cases go away but won't impact the vast majority of homicides.

I don't really have an opinion on gun control but I think this is wrong:

People murdered each other before guns were invented, removing them might make a few cases go away but won't impact the vast majority of homicides.

Premeditated murders, maybe, but crime in general is greatly assisted by the availability of guns. The problem is that they're just so powerful. If you go into a bank with a knife and start waving it around and telling people to get on the ground, they're just going to run away. But pull out a gun and everyone within 10 meters is going to obey every word, because you can kill them instantly.

And people are defenseless against a gun, but they can at least run, or throw a chair, or punch an attacker with a knife. And gun killings are easy and impersonal, while with a knife the attacker has to struggle and get covered in blood and listen to screams or whatever; much nastier.

Swords are a problem, I guess, but they're impossible to carry concealed.

I find it disturbing, too, that the media just reports the polling companies' results, without reporting things like what questions were asked, in what order, how the poll was conducted or who commissioned it, all of which can have a big effect on the results. A lot of "push polling" goes on, especially when the polls are commissioned by special interest groups, business associations, unions or political parties themselves.

tl;dr. (Too long, didn't read).

Unfortunately, for most of the world, this would be the response from most readers if the media took the time to report on the details of the poll.

Although, really, in the internet age, the media could just add a link so anyone interested could see the details of the poll. However, I suspect doing so would just expose to the world how ignorant/lazy the reporters are, because you may find most poll results are either horribly slanted or extremely poorly designed (to the point that it is obvious the poll was designed to mislead).

For example, I recall seeing a newspaper headline saying ">80% of women have been sexually assaulted at least once". Surprised at this, I RTFA, and it turned out the "poll" was done by an NGO aimed at helping rape victims, and they "polled" 8 (eight) of their staff to get this result. My view of that newspaper (and reporters/editors in general) dropped a few notches after that.

Pretend I know nothing about Pollster (which happens to be true). Why should I care whether they've faked results? By that, I mean: do they research opinions on favorite flavors of cotton candy, or public support for health care reform, or the best style of car, or...? In other words, do they do stuff that actually matters?

Pretend I know nothing about Pollster (which happens to be true). Why should I care whether they've faked results? By that, I mean: do they research opinions on favorite flavors of cotton candy, or public support for health care reform, or the best style of car, or...? In other words, do they do stuff that actually matters?

Faked polls = astroturfing.

Need I say more?

Well, you might need to explain what astroturfing is. Most people here think that astroturfing is when you are satisfied with a mass-market product.

Well, you might need to explain what astroturfing is

Astroturfing is where a special interest tries to create the impression of grassroots support. That may be through paying shills to post a lot on message boards with posts that support your position, it may be through dodgy polls, or it may be through other means.

First of all, I don't think "What do I care" is anything but flamebaiting. Who cares if you don't care?

Second, if they're the same "strategic vision" that the article is talking about, their webpage says "Strategic Vision has worldwide experience developing tools to measure decision-making, human behavior, attitudes and perceptions. Its globally relevant, comprehensive theory of human behavior creates the most effective strategies addressing decision-making in product development and communications in the widest variety of fields, including automotive, customer service, government and politics, medicine and healthcare, organizational and jury, travel and leisure, food and beverages, and education." So they probably report on anything you will pay them to poll on - or rather, anything you will pay them to make a graph about, from nothing.

Lastly, a quote in TFA by the company gives you plenty of reason to care:

[W]e categorically deny them and will refute them. We have a call into our attorney on this and fully intend to take action that will vindicate us...he has attempted to do severe damage to our reputation and what is he going to do when we disprove him just say I am sorry. That isn't enough at this point.

There you go: the company is mad about being uncovered and is taking the next step any stupid assholes take when their misdeeds come to light: suing in a vain attempt to keep the information from becoming well known. Therefore, -everyone- should know they're faking the results. I'm tempted to e-mail all their clients with a link to the article. If they go out of business, maybe other shitty companies will finally realize you don't sue people who expose you as charlatans.

if they're the same "strategic vision" that the article is talking about, their webpage says "Strategic Vision has worldwide experience developing tools to measure decision-making, human behavior, attitudes and perceptions....

Nope, you're looking at the webpage of a different company! See Nate's previous article [fivethirtyeight.com]:

Why would you pick the name "Strategic Vision, LLC" for your company when the name "Strategic Vision, Inc." was already in use by an extremely well regarded, San Diego-based research firm that has been in business for more than 30 years? Are you deliberately trying to confuse your potential clients and leverage Strategic Vision, Inc.'s much stronger brand name?

Second, if they're the same "strategic vision" that the article is talking about

They're not, from another helpful article from FiveThirtyEight [fivethirtyeight.com]

Why would you pick the name "Strategic Vision, LLC" for your company when the name "Strategic Vision, Inc." was already in use by an extremely well regarded, San Diego-based research firm that has been in business for more than 30 years? Are you deliberately trying to confuse your potential clients and leverage Strategic Vision, Inc.'s much stronger brand name?

You're looking at the page of the well regarded Strategic Vision, Inc. Funny that SV LLC seems so happy to sue Nate Silver; it would seem that SV Inc. has a far stronger case against SV LLC.

Except you've linked to the wrong company. Strategic Vision, Inc. [strategicvision.com] is a well respected 30-year-old polling firm in California. Strategic Vision, LLC [strategicvision.biz] is the shady 5-year-old GOP shill corp with questionable poll results and no real office (or polling results, allegedly). Careful with those links; you don't want to slander the wrong company here. I think SV Inc. may have a trademark case on their hands if they're feeling litigious.

First of all, I don't think "What do I care" is anything but flamebaiting. Who cares if you don't care?

I didn't say that I didn't (or wouldn't) care, but was asking why I should care. I thought I was fairly clear about that. The story basically boiled down to "some group you've never heard of is falsifying data that you may or may not be interested in, but I didn't want to bother to explain any of this and would rather make every single reader figure it all out for themselves".

There you go: the company is mad about being uncovered and is doing the next step any stupid assholes do when their misdeeds come to light: sue in a vain attempt to keep the information from becoming well known. Therefore, -everyone- should know they're faking the results. I'm tempted to e-mail all their clients with a link to the article. If they go out of buisiness, maybe other shitty companies will finally realize you don't sue people who expose you as charlatans.

First, I don't have a dog in this hunt. I don't know who the accuser or target of accusation is, and certainly don't have opinio

In a word, yes. Nate Silver manages the blog FiveThirtyEight [fivethirtyeight.com] and is well-known as a statistical analyst from the 2008 US election (among other things). Strategic Vision has released quite a few polls. In Silver's words,

...Strategic Vision's polls cover a wide array of topics: Presidential horse race numbers in any of a dozen or so states, senate and gubernatorial polling, primary polling, approval ratings of various kinds, polling on issues like the war in Iraq, and more abstract questions such as whether voters think that 'experience' or 'change' is the more important quality in a Presidential candidate.

So yes, this is pretty big news, should it turn out that Strategic Vision's behavior is in fact illicit. They're influential enough that news agencies may pick up their polling results. This is bad enough, but when you factor in the fact that polling results can be very effective propaganda in something like a presidential race, fraudulent polling can have significant consequences.

That's actually a really good point. On a related note, what I'm interested to know is whether the allegedly faulty data diverges from other firms' polling data on particular questions. In other words, are they pushing an agenda of some sort? Are they just faking data so they have something to sell? Is Nate Silver full of shit?

Strategic Vision is a Republican pollster, meaning when a Republican politician wants a poll about a particular set of data, they give Strategic Vision some money and they do a poll. This can be either internal polling, to give them an idea how the "battle" is going, or for general consumption. And yes, Strategic Vision is big enough to matter, but they are just the tip of the iceberg how misleading "R" pollsters

In general there are some Republican, some Independent, and some Democratic pollsters; however, all of their results are supposed to be scientific. The idea is: does a poll for internal consumption really help if it tells you that you are going to win easily on election day, only for it to be a landslide against you? The answer is no.

The reasons why this is dangerous are multifold: 1) due to its supposedly scientific nature, it has been used to make public policy decisions; 2) it can influence people's opinions; 3) it can influence a senator's or some other politician's choices while they are in power.

Here is a perfect example of this. A certain Republican senator from Maine is considering whether she should support a public option, so she wants to see what the citizens of her state think about the topic. She hires Strategic Vision to do a poll for her. Strategic Vision comes back and says 60% of your state's citizens are against it. She goes, "Wow, I guess I'm not supporting that bill." In reality it's 60% the other way. From this the senator decides not to support the bill, and it does not pass.

I will be as blunt as possible. I am accusing Rasmussen, Strategic Vision, and other Republican pollsters of deliberately lying to the American people in order to alter the public debate. If you follow the math, they have been consistently off for years. If you want to just look at the last election cycle, Rasmussen et al. all had the races a lot tighter than the results on election day. This could just be poor polling on their part, but I will offer exhibit B.

Since health care reform has been a topic in the news, the difference between the several Republican pollsters and "everyone else" has been steadily growing. I firmly believe that the insurance industry has been paying these pollsters to lower the Democrats' numbers, to push them to drop health care reform.

Yes, the Democrats' poll numbers have been sliding somewhat across the board. However, if you look at the data from the Republican sources, their numbers are significantly different from those of the "Independent and Democratic" pollsters.

Overall, I want to say this "dishonest polling" helps no one. It may help push a certain agenda temporarily, but it can also cause those who support it to lose elections. Look at the results from 2008: the REPUBLICAN PARTY IS BEING MISLED BY ITS OWN POLLSTERS AND IT IS COSTING THEM ELECTIONS.

I firmly believe that the insurance industry has been paying these pollsters to lower their numbers for the democrats to push them to drop health care reform.

Yeah, you go ahead and cling to the belief that the insurance industry doesn't want the health care bill to go through. Why would they possibly look at 30 million people who aren't buying their product and support a bill that will require everyone, by force of law, to buy their product?

I'd certainly like to see some numbers regarding who the insurance industry as a whole is contributing to.

"Yeah, you go ahead and cling to the belief that the insurance industry doesn't want the health care bill to go through"

You are right, the insurance industry would stand to gain massively from that proposal. That's exactly why the liberal wing of the Democratic Party has been fighting that provision.

I would like to point out that the insurance industry is being very pragmatic; they have a two-tier battle plan. They don't want the bill to pass; however, if it does pass, they want to have things like that put in.

That provision was added to some of the bills to "tempt" republicans into voting for it as several Republicans have explicitly said they would like to see that included.

As far as "I'd certainly like to see some numbers regarding who the insurance industry as a whole is contributing to": the money has been flowing quite rapidly into the conservative arm of the Democratic Party. Ben Nelson, Mary Landrieu, and Max Baucus have all gotten heavy donations (from insurance companies) since this whole thing started. That is not to say that the Republicans have not been getting a lot of money from the insurance companies - that goes without saying. So to sum it up: Republicans are continuing to get good paychecks (the usual), but some conservative Democrats are now also getting paid for their services (newish). Just for your info, many progressives want political blood for this; Ben Nelson and Max Baucus, and to a much lesser extent Mary Landrieu, are the one thing standing in the way of progressives' holy grail. For that, many of us want political revenge at any cost.

It shouldn't (but probably will) be considered trolling to point out that the political section of their client list consists of the Republican Party, the Conservative Party (of England), the Department of Defense, the White House, and the State of California. That section hasn't changed in the last year, so I assume it's referring not only to the Republican governor of California, but also to Dubya's White House. Sounds like they get most, if not all, of their political business from conservative sources.

Many people made decisions based on those polls, including politicians. If the results were not random samples but were cherry-picked, it could influence those politicians to support bills and policies that they think the public wants (Patriot Act, warrantless wiretapping, waterboarding, wars, etc.) but that the majority might not actually want.

This applies to anything using statistics, including scientific theories; the same fraud detecti

Also note: if you understood statistics, you would _never_ use the phrase 'statistically impossible'.

If you understood thermodynamics, you'd know that 'statistically impossible' is why the world doesn't go crazy - like the sudden appearance of a vacuum when you try to breathe, or the random melting of your spoon when stirring your coffee.

Statistically impossible may well have meaning. In cosmology, various people at various times (Hawking, Guth, Dirac, and Einstein (in the late '40s, working with Minkowski and Godel)) all found that they had to write a few pages on whether very improbable events were distinguishable from zero-probability events before they could justify using some of their math. All were working on their own takes on the origin-of-the-Cosmos problem at the time. Most of them decided that any event with a probability of less than 1 in the whole lifetime of the Cosmos was 'statistically impossible' and not just 'improbable'. Rosen later argued that it was better to phrase it in terms of less than 1 during that part of the Cosmos's lifetime when entropy was low enough to allow other events of the same energetic magnitude to happen normally, rather than the whole lifetime, and others have debated the point various ways, but it's still common to call some things statistically impossible when doing fundamental cosmology.
Oh, and I need a new spoon.

If I take any data set (say, one with a normal distribution), how many of those data sets would I have to sample, on average, before I found one that looked like the ones he is talking about? If the expected number of data sets I would have to look at is in the millions, you are correct that I might find it in my first sample, but the chances are incredibly tiny.
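A quick sketch of that intuition: if each independent dataset has some tail probability p of looking at least as suspicious as the one in question, the number of datasets you'd have to examine before hitting one follows a geometric distribution with mean 1/p. The value of p below is made up purely for illustration:

```python
import random

# Hypothetical tail probability of a dataset looking "at least this suspicious".
p = 1e-3

# Wait until the first hit is geometric, so on average you'd examine 1/p datasets.
mean_wait = 1 / p  # 1000 datasets for this made-up p

# Sanity-check by simulation: count hits across many Bernoulli trials.
random.seed(0)
trials = 200_000
hits = sum(random.random() < p for _ in range(trials))

print(f"mean wait = {mean_wait:.0f} datasets; "
      f"hits in {trials} trials: {hits} (expected ~{trials * p:.0f})")
```

So "one in half a million" means you'd need to churn through roughly half a million independent datasets, on average, before chance alone handed you one that skewed.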

If you take one with a uniform distribution, then you would expect to find one with a disparity greater than or equal to the one observed about once in every 70 quadrillion samples. If you take a distribution corresponding to the industry average, you'd get a disparity greater than or equal to Strategic Vision's, on average, about one time in half a million.

Fortunately, there are corrections you can do for that. And he ran a fairly standard statistical test on the numbers, which is equivalent to saying he didn't perform that many comparisons. To a very rough approximation, you need to correct your p-value for all the less weird analyses you might have performed on the data instead. It's a bit hard to pin down an exact p-value for the analysis he did (the underlying data isn't expected to be flat; it's also not expected to be that bizarrely lumpy), but I promise that Nate Silver has an understanding of this issue (which you'd see if you'd read the post).

If you accept his initial theory that the digits should be equally probable then it's a multinomial exact test or a G test of goodness of fit. If you observe, as he did, that the industry average supports a slightly different distribution then you can compare SV's results with the industry average using a Fisher's exact or G test of goodness of fit. They're simple tests, and no corrections are necessary unless you do multiple comparisons, which is not the case here.

When you are making decisions based on public opinion, the difference between 52% and 48% can be the difference between keeping and losing your elected position. Imagine what the difference between 60% and 40% would mean. I'm not sure of the exact reasoning behind these kinds of polls; 20% seems to be about the standard margin of error. I imagine it has some aspect of what the state of the art in scientific statistical estimation theory is, in which case a 57% to 20% difference would be like using 1850's technology.

First, the example he gives where he looks at polls from ALL sources is an example of a plausible distribution of real results because, assuming the majority of pollsters are not cooking their data, the data should be dominated by randomness. He then looks at this particular pollster and finds a much greater disparity in trailing digit frequency. The question is, is it significant, or just chance?

Given the numbers, it's not particularly hard to figure out. You can calculate the likelihood of any particular result, given a theoretical distribution, using a G test of goodness of fit. Technically, for numbers this small, you could use an exact test, but I don't know of a web version and I'm too lazy to write one up. But here's a description of, and an Excel spreadsheet that performs, the G test of goodness of fit: http://udel.edu/~mcdonald/statgtestgof.html [udel.edu]

Basically, you plug in the distribution you see and compare it with the one you expected. What you get is the probability of a distribution at least that extreme occurring by chance. So if we plug in the observed data for all the pollsters and assume equal likelihood for all trailing digits, we get p=0.006. Whoops, looks like our assumption isn't quite correct. As the blog author notes, the observed distribution is humped a little, favouring the middle numbers. He also gives a possible explanation. For giggles, the probability of the Strategic Vision results given equally probable trailing digits is absolutely microscopic: p=1.44x10^-17. Together those tell us that our assumption of equal digit distribution is probably not quite right, but the Strategic Vision data still looks mighty funny.
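If you'd rather skip the spreadsheet, scipy's `power_divergence` with `lambda_="log-likelihood"` computes the same G statistic. The digit counts below are invented placeholders, not the real tallies from the polls:

```python
from scipy.stats import power_divergence

# Hypothetical counts of trailing digits 0-9 (substitute the real tallies).
observed = [38, 45, 52, 61, 58, 55, 49, 47, 41, 34]

# Null hypothesis: every trailing digit is equally likely.
expected = [sum(observed) / 10] * 10

# lambda_="log-likelihood" selects the G (log-likelihood ratio) statistic
# instead of the default Pearson chi-square.
g, p_value = power_divergence(observed, f_exp=expected,
                              lambda_="log-likelihood")
print(f"G = {g:.2f}, p = {p_value:.4f}")
```

A tiny p means the observed digit counts are very unlikely under the "all digits equally probable" assumption; with made-up counts like these, the p-value is only illustrative.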

Okay, so assume instead that most pollsters aren't making up their numbers. Not that their numbers are necessarily accurate, but that they're at least not making them up off the top of their heads. So using the data from all pollsters as a template, how likely is the Strategic Vision distribution? That's a G test of independence: http://udel.edu/~mcdonald/statgtestind.html [udel.edu]. We could use Fisher's exact test, but I can't find one that will do a 2x10 table.

Plugging in the data, we get G=43.068, d.f.=9, which gives p=2.09x10^-6. The blog author was actually a little careless when he said the chances of Strategic Vision's results are millions to one against. If you insist on the equal-probability theory then the odds are 70 quadrillion to one against Strategic Vision and 166 to one against the industry as a whole. Taking the more realistic approach that the industry average is a better representation of the actual probability, the odds against Strategic Vision's results are about half a million to one against. Not millions to one, but close enough.
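As a sketch of the second calculation: `chi2_contingency` with `lambda_="log-likelihood"` performs the G test of independence on a 2x10 table of one pollster's trailing-digit counts against the rest of the industry. These counts are placeholders, not the actual data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: trailing-digit counts (digits 0-9) for the pollster under test,
# then for the rest of the industry. Placeholder numbers, not real data.
table = np.array([
    [20, 25, 40, 55, 60, 58, 52, 38, 28, 24],
    [470, 488, 502, 515, 520, 512, 505, 495, 480, 473],
])

# lambda_="log-likelihood" gives the G statistic; d.f. = (2-1)*(10-1) = 9.
g, p_value, dof, _ = chi2_contingency(table, lambda_="log-likelihood")
print(f"G = {g:.2f}, d.f. = {dof}, p = {p_value:.3g}")
```

With the real counts plugged in, this should reproduce the G=43.068, d.f.=9 figures quoted above.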

First, the example he gives where he looks at polls from ALL sources is an example of a plausible distribution of real results because, assuming the majority of pollsters are not cooking their data, the data should be dominated by randomness.

Here is the thing. Did he begin with the theory that Strategic Vision was fraudulent, or did he begin with the theory that some pollsters were fraudulent?

After all, he was churning a lot of pollsters' data.

Isn't it quite possible that he was simply mining his massive dataset for something, anything, that made any pollster look bad?

In short, how likely is it for one legitimate pollster out of many legitimate pollsters to have data that isn't quite normal (pun intended)?

From his posting, he talked to SV about their refusal to reveal their methodology, then decided to test to see whether their results showed any suspicious bias. He was specifically testing SV and not searching for any pollster.

You're right: if he tested multiple pollsters, then he'd have to correct for multiple comparisons. Even so, you'd expect results as bad as SV's about one time in half a million. There aren't that many major pollsters, so you could detect results skewed as badly as SV's to a high confidence level.

From his posting, he talked to SV about their refusal to reveal their methodology, then decided to test to see whether their results showed any suspicious bias. He was specifically testing SV and not searching for any pollster.

I suspect that refusal to reveal methodology is quite common, given that most are agenda-driven. Did he only speak to SV, or did he speak to lots of pollsters who refuse to reveal methodology?

Even so, you'd expect results as bad as SV's about one time in half a million. There aren't that many major pollsters, so you could detect results skewed as badly as SV's to a high confidence level using a data mining technique.

But there are LOTS of ways (infinitely many, really) to "test" data, so even if there are only 50 pollsters, you can still end up with millions of chances of finding arbitrary million-to-one outliers (where a lack of outliers would actually be suspicious!)
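The arithmetic behind that worry is simple: run m implicit tests at significance level alpha and you expect about m*alpha false alarms; the Bonferroni correction divides alpha by m to compensate. All the counts below are made up to show the mechanics:

```python
# Hypothetical numbers: 50 pollsters, each sliced and re-tested many ways.
n_pollsters = 50
tests_per_pollster = 40_000           # made-up count of implicit tests
m = n_pollsters * tests_per_pollster  # 2,000,000 tests in total

alpha = 1e-6                          # a "million-to-one" threshold

# Expected number of million-to-one "outliers" produced by pure chance.
expected_false_alarms = m * alpha

# Bonferroni: per-test threshold needed to keep the overall error rate at alpha.
bonferroni_alpha = alpha / m

print(f"{m} tests at p={alpha}: expect ~{expected_false_alarms:.1f} false alarms")
print(f"Bonferroni-corrected per-test threshold: {bonferroni_alpha:.1e}")
```

So under these made-up numbers a couple of million-to-one outliers would show up by chance alone, which is exactly why finding none at all would itself look suspicious.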