"Risk-limiting audits" use sound math to make sure the right candidate won.

NAPA, CALIFORNIA—Armed with a set of 10-sided dice (we’ll get to those in a moment), an online Web tool, and a stack of hundreds of ballots, University of California-Berkeley statistics professor Philip Stark spent last Friday unleashing both science and technology upon a recent California election. He wanted to answer a very simple question—had the vote counting produced the proper result?—and he had developed a stats-based system to find out.

On June 2, 6,573 citizens went to the polls in Napa County and cast primary ballots for supervisor of the 2nd District in one of California’s most famous wine-producing regions, on the northern edge of the San Francisco Bay Area. The three candidates—Juliana Inman, Mark van Gorder, and Mark Luce—would all have liked to come in first, but they really didn't want to be third. That's because only the two top vote-getters in the primary would proceed to the runoff election in November; number three was out.

Napa County officials announced the official results a few days later: Luce, the incumbent, took in 2,806 votes, van Gorder got 1,911 votes, and Inman received 1,856 votes—a difference between second and third place of just 55 votes. Given the close result, even a small number of counting errors could have swung the election.

Vote counting can go wrong in any number of ways, and even the auditing processes designed to ensure the integrity of close races can be a mess (did someone say "hanging, dimpled, or pregnant chads"?). Measuring human intent at the ballot box can be tricky. To take just one example, in California, many ballots are cast by completing an arrow, which is then optically read. While voters are instructed to fully complete the thickness of the arrow, in practice some only draw a line. The vote tabulation systems used by counties do not always count those as votes.

So Napa County invited Philip Stark to look more closely at their results. Stark has been on a four-year mission to encourage more elections officials to use statistical tools to ensure that the announced victor is indeed correct. He first described his method back in 2008, in a paper called “Conservative statistical post-election audits,” but he generally uses a catchier name for the process: “risk-limiting auditing.”

Napa County had no reason to believe that the results in this particular election were wrong, explained John Tuteur, the County Assessor, when I showed up to watch. But, anticipating that the election would be close, Tuteur had asked that Napa County be the latest participant in a state-sponsored pilot project to audit various elections across the Golden State.

While American public policy, particularly since the 2000 Bush v. Gore debacle, has focused on voting technology, not as much attention has been paid to vote audits. If things continue to move forward, Stark could have an outsized effect on how election audits are conducted in California, and perhaps the country, for years to come.

“What this new auditing method does is count enough to have high confidence that [a full recount] wouldn't change the answer,” Stark explained to me. “You can think of this as an intelligent recount. It stops as soon as it becomes clear that it's pointless to continue. It gives stronger evidence that the outcome is right.”

Audit day

To kick off the process, all 6,573 votes tallied in the 2nd District supervisor contest were re-scanned by county elections officials in the City of Napa. They sent the scans to a separate computer science team at Berkeley, led by Professor David Wagner. Along with a group of graduate students, Wagner has developed software meant to read voter intent from ballots. His system, for instance, will flag even ballots where the arrow was not filled in according to the instructions, and it takes a different approach to filtering out stray marks. The Wagner team created a spreadsheet containing each ballot (they also created a numbering system to identify and locate individual ballots) and how each voter cast his or her vote.

One problem that cropped up early on was the discrepancy between the number of ballots cast and the number of ballots scanned. While 6,573 total votes were recorded in this particular contest, the Wagner team scanned a total of 6,809 ballots, while Napa County recorded 7,116 votes cast in the election as a whole. (Not every voter in the election chose to vote in this particular contest.) In short, more than 300 ballots were missing. While that seems problematic, the margins stayed more or less the same.

"If both systems say 'Abraham Lincoln won' then if the unofficial system is right, so is the official system, even if their total votes differ and even if they interpreted every vote differently," wrote Stark in an e-mail on Tuesday. "That's the transitive idea. A transitive audit is really only checking who won, not checking whether the official voting system counted any particular ballot correctly. That said, we do compare the precinct totals for the two systems to make sure they (approximately) agree, which they did here."

He added that to deal with the missing ballots when confirming the winner, he treated them as if they were all votes for the runner-up; even with those 300-odd extra votes, Luce was still the victor.

"To confirm the runner-up, we could not do that; instead, I treated them two different ways, neither completely rigorous," he added. "In other audits, I've been able to deal with any mismatches between the ballot counts completely rigorously, so that the chance of a full hand count if the reported result was wrong remained over 90 percent."

With that out of the way, the first step in the actual audit was to randomly select a seed number that would be used to feed a pseudo-random number generator found on a website that Stark created. For this, Stark had some high-level help in the form of Ron Rivest, one of America’s foremost experts on cryptography and voting systems, a professor of computer science at MIT who had also helped create the RSA crypto algorithm. Using 20 store-bought ten-sided dice, Rivest and Stark rolled out a 20-digit number. (73567556725160627585, for those keeping score at home.)
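To make the role of the seed concrete, here is a minimal sketch of turning the dice roll into a reproducible ballot sample. The 20-digit seed and the ballot counts come from the article; using Python's built-in Mersenne Twister is purely illustrative and is not necessarily the generator Stark's online tool employs.

```python
import random

# Seed rolled publicly with ten-sided dice (from the article).
seed = 73567556725160627585
rng = random.Random(seed)

num_ballots = 6809   # ballots scanned by the Wagner team
sample_size = 559    # ballots Stark calculated should be hand-checked

# Draw distinct ballot indices. Anyone who saw the dice rolls can re-run
# this and get the identical list, so the selection itself is auditable.
sample = sorted(rng.sample(range(1, num_ballots + 1), sample_size))
```

Publishing the seed is what makes the randomness trustworthy: observers need not trust the software blindly, since the same seed always yields the same sample.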

Risk-limiting auditing relies on a published statistical formula, based on an accepted risk limit, and on the margin of victory to determine how many randomly selected ballots should be manually checked.

“The risk limit is not the chance that the outcome (after auditing) is wrong,” Stark wrote in a paper (PDF) published in March 2012. “A risk-limiting audit amends the outcome if and only if it leads to a full hand tally that disagrees with the original outcome. Hence, a risk-limiting audit cannot harm correct outcomes. But if the original outcome is wrong, there is a chance the audit will not correct it. The risk limit is the largest such chance. If the risk limit is 10 percent and the outcome is wrong, there is at most a 10 percent chance (and typically much less) that the audit will not correct the outcome—at least a 90 percent chance (and typically much more) that the audit will correct the outcome.”
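To make that concrete, here is a minimal sketch of a sequential ballot-polling test of the kind Stark and collaborators have published. The article does not say exactly which procedure was used in Napa, so the function below illustrates the idea of a risk-limiting test rather than the audit's actual method.

```python
def ballot_polling_audit(sample, p_w, alpha=0.1):
    # Simplified sequential ballot-polling test for a two-candidate race.
    # Null hypothesis: the reported winner actually got no more than half
    # the votes. p_w is the winner's reported vote share (> 0.5); alpha is
    # the risk limit. sample is a sequence of 'w'/'l' hand interpretations.
    T = 1.0
    for i, ballot in enumerate(sample, 1):
        T *= 2 * p_w if ballot == 'w' else 2 * (1 - p_w)
        if T >= 1 / alpha:
            # Strong evidence for the reported outcome: stop auditing.
            return 'confirm', i
    # Evidence inconclusive: escalate (possibly to a full hand count).
    return 'escalate', len(sample)
```

With a 10 percent risk limit and a winner reported at 60 percent of the vote, an unbroken run of ballots for the winner lets the audit stop after roughly a dozen draws, while a mixed sample forces it to keep counting.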

To decide how many ballots should be sampled in the Napa County audit, Stark used his own online tools and calculated that it should be 559. With that number in hand, Napa County's John Tuteur supervised a team of temporary ballot counters in another room. They sorted through stacks of ballots in numbered boxes, affixing a sticky note to the individual ballots in question, preserving the order in which all ballots were kept.

After locating the individual ballots, the team delivered the boxes containing them back to Stark, Rivest, and a few observers (including me). Each marked ballot was then pulled from its box and displayed to the room. Once everyone agreed that the ballot showed a vote for a particular candidate, an undervote (i.e., no vote at all), or an overvote (an unauthorized, and therefore uncounted, vote for multiple candidates), the result was tallied on Wagner's spreadsheet. After a given set of ballots, those results were then compared to what the Wagner image-scanning team had recorded.

"You want cast as intended, and counted as cast, and verified,” Stark said.

I think we should have a modern-day poll tax: If you can't manage to either operate the ballot correctly, or request help and adequately explain why you are physically unable to do so, you get your ballot taken away.

I can't imagine a world in which we wouldn't all benefit from decisions that take advantage of rigorous mathematics wherever possible, and probability is the only way to go when it comes to difficult decisions. I somehow made it through nine years of education in aerospace engineering learning remarkably little about probability. It's nothing short of negligent on my part and on the part of whoever designed the curriculum.

I think we should have a modern-day poll tax: If you can't manage to either operate the ballot correctly, or request help and adequately explain why you are physically unable to do so, you get your ballot taken away.

I can't imagine a world in which we wouldn't all benefit from decisions that take advantage of rigorous mathematics wherever possible, and probability is the only way to go when it comes to difficult decisions. I somehow made it through nine years of education in aerospace engineering learning remarkably little about probability. It's nothing short of negligent on my part and on the part of whoever designed the curriculum.

I saw that. To be honest, I was expecting something a little more than that, but I enjoyed it nonetheless.

Probability is such an integral part of our lives, but can have such profound results (e.g. the Monty Hall problem). Between that and statistics, I'm amazed at the people that don't understand how it works.

Just something simple like understanding that a 1% false-positive rate on a test doesn't mean that a positive result has a 1% chance of being wrong.
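That false-positive point is exactly Bayes' rule in action; a tiny sketch with made-up numbers shows how far apart the two quantities can be:

```python
from fractions import Fraction

# Toy base-rate example (all numbers invented for illustration): a disease
# affects 1 in 1,000 people, and the test has a 1% false-positive rate and
# no false negatives. What is P(sick | positive)?
prevalence = Fraction(1, 1000)
false_positive_rate = Fraction(1, 100)

# P(positive) = P(positive | sick) P(sick) + P(positive | healthy) P(healthy)
p_positive = prevalence * 1 + (1 - prevalence) * false_positive_rate

# Bayes: P(sick | positive) = P(positive | sick) P(sick) / P(positive)
p_sick_given_positive = prevalence * 1 / p_positive
```

The answer is about 9 percent — nowhere near the 99 percent that "1% false positives" tempts people to assume, because most positives come from the vastly larger healthy population.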

Probability is such an integral part of our lives, but can have such profound results (e.g. the Monty Hall problem). Between that and statistics, I'm amazed at the people that don't understand how it works.

Just something simple like understanding that a 1% false-positive rate on a test doesn't mean that a positive result has a 1% chance of being wrong.

I'm not surprised at all. Statistics, probability, and combinatorics aren't trivial, simple problems. Sure, understanding what a false positive is, etc., is simple, but you quite easily get into not-so-simple math. (That said, statistics was a rather long part of my high school curriculum; from the TED talk I assume it isn't in the US?)

Also, things like the Monty Hall problem are rather unintuitive to a large fraction of the populace (and that includes quite clever people...).

But I agree statistics is extremely useful - probably the best thing I learned from my math minor that I actually use in real life.

I think we should have a modern-day poll tax: If you can't manage to either operate the ballot correctly, or request help and adequately explain why you are physically unable to do so, you get your ballot taken away.

Even further, there should be a test assessing people's knowledge of the candidates. If people can't explain their candidate's basic political stances, their vote should be thrown out. This will erase the "Obama is black like me, so I'll vote for him" kind of votes (or, on the other end of the political spectrum, "Obama is a Muslim, so I'm voting for McCain"). This will also eliminate votes from people who blindly vote the party line or who are just voting for the party their parents tell them to, and so on.

Statistics and probability are much more important than calculus, which for some reason has become the standard advanced math for high school. I think we have it backwards.

As for the procedure, I think four hours and 15 people is not bad at all, and time well spent.

And as for: "He has ideas for speeding up the process, but they don't align well with the current crop of voting machines, which don't record their per-ballot vote interpretations." The current crop of voting machines are problematic in many ways, they simply are designed to not be checked for accuracy. It creates a big trust problem. Possibly a big accuracy problem too, we just can't tell. Certainly more of a problem than is being solved with voter ID laws, so if people are demanding that they should also be demanding improvements in the voting machines.

In mid-August, Walden W. O'Dell, the chief executive of Diebold Inc., sat down at his computer to compose a letter inviting 100 wealthy and politically inclined friends to a Republican Party fund-raiser, to be held at his home in a suburb of Columbus, Ohio. ''I am committed to helping Ohio deliver its electoral votes to the president next year,'' wrote Mr. O'Dell, whose company is based in Canton, Ohio.

I'm not surprised at all. Statistics, probability, and combinatorics aren't trivial, simple problems. Sure, understanding what a false positive is, etc., is simple, but you quite easily get into not-so-simple math. (That said, statistics was a rather long part of my high school curriculum; from the TED talk I assume it isn't in the US?)

Nope, we don't touch probability/statistics over here. It was in the pre-calculus book (Highest level), but not part of the curriculum. My high school offers Statistics as an AP course, but the teacher is awful. Most of the class didn't understand what was happening at any given moment. We entirely skipped Bayes' Theorem. The professor didn't understand it. That upset me then, and it upsets me now. Bayes' Theorem is amazing. P(A|B)=(P(B|A)P(A))/P(B)
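For what it's worth, the theorem stated above is easy to sanity-check numerically; the probabilities below are made up purely to exercise the formula:

```python
from fractions import Fraction

# Invented probabilities for two events A and B.
p_a = Fraction(3, 10)            # P(A)
p_b_given_a = Fraction(1, 2)     # P(B|A)
p_b_given_not_a = Fraction(1, 5) # P(B|not A)

# Total probability: P(B) = P(B|A) P(A) + P(B|not A) P(not A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' Theorem: P(A|B) = P(B|A) P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
```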

The math curriculum is getting reworked for upcoming students, though. Cut down to three required courses instead of four. Maybe they'll drop some of the repeating units (Geometry, Logarithms, basic trig) and work in something useful?

Try to get your friends/family to understand this: You flip two coins. Given that at least one is heads, what is the probability that both are heads? It's 1/3. People are irritatingly bad at understanding this.
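One way to settle the argument is to simply enumerate the sample space; this generic sketch conditions on "at least one heads" directly:

```python
from itertools import product

# The four equally likely outcomes of two fair coin flips.
outcomes = list(product('HT', repeat=2))  # HH, HT, TH, TT

# Condition on "at least one is heads": this discards only TT.
at_least_one_heads = [o for o in outcomes if 'H' in o]

# Of the three remaining outcomes, only HH has both coins heads.
both_heads = [o for o in at_least_one_heads if o == ('H', 'H')]

prob = len(both_heads) / len(at_least_one_heads)  # 1/3
```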

Even further, there should be a test assessing people's knowledge of the candidates. If people can't explain their candidate's basic political stances, their vote should be thrown out. This will erase the "Obama is black like me, so I'll vote for him" kind of votes (or, on the other end of the political spectrum, "Obama is a Muslim, so I'm voting for McCain"). This will also eliminate votes from people who blindly vote the party line or who are just voting for the party their parents tell them to, and so on.

In other words, you'd like to disenfranchise anyone who chooses a candidate for (legally permissible) reasons you don't personally agree with, and believe a "television comprehension test" would be an effective way to do so?

The 1% sample in California would not be intended to prove or validate (with some level of risk) that a particular election/contest is correct; rather, that rate forms a quality measure of the balloting process as a whole. The 1% is a management (therefore political) sample size that trades cost against benefit. With large ballot populations (and a voter population of a few thousand is large, although relatively small for California), 1% is not an unusual sample size. Mr. Stark must know this, so I have no clue why he harps that this does not support the validation of a specific election. (Other than maybe that having a hammer means all problems are solved with a hammer, or maybe that he has a hammer to sell.)

The legislature would have been concerned that the balloting process as a whole was not broken. It is unlikely they were concerned about any given future election; that is the political way (e.g., so what if one person died, that is just a statistic unless it can be used as a rallying cry to get elected). The fixed percentage sample size would prevent small elections from having to bear a larger relative cost per ballot. Having the sample size based on the election results makes the costs tied to the results. I know the opposite argument can be made, but remember these are managers (politicians). And many recounts are paid for by the candidates anyhow. So in the minds of these politicians, having a fixed percentage represents a known, predictable, foreseeable cost.

However, there is potentially another problem here, and that is the assumption that voters for each candidate have equal likelihood of casting a poorly marked ballot. Actually, that seems almost certainly not true, if it is reasonable to assume that candidate preference is related to educational level, and educational level is related to the skill of reading and following instructions. That is where jaketheultimate's Bayesian can come into play.

Ps. Anyone remember Numb3rs? For the first season or two it seemed that everything was solved with a Bayesian.

Statistics and probability are much more important than calculus, which for some reason has become the standard advanced math for high school. I think we have it backwards.

. . .

No. Calculus usually forms the “basic” arithmetic of most significant mathematical processes, including statistics.

There are some discrete processes (e.g., where the results are only black and white, true or false; and samples are integral, e.g., 1, 2, 3, 4, etc.), that you can work around, but that can be limiting.

If you have Excel, look at the help for NORMDIST(x,mean,standard_dev,cumulative); the cumulative form is the integral of the normal distribution density (mass) function, evaluated from –infinity to x. Microsoft just does not show the integral operation any more (to avoid scaring anybody?); they just describe it.
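The hidden integral can be made concrete: the standard library's error function lets you write NORMDIST's cumulative form in a couple of lines. This is a generic sketch, not Microsoft's implementation:

```python
from math import erf, sqrt

def norm_cdf(x, mean=0.0, sd=1.0):
    # Equivalent of Excel's NORMDIST(x, mean, sd, TRUE): the integral of the
    # normal density from -infinity to x, expressed via the error function.
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))
```

The integral never goes away; erf just packages it so the caller sees only arithmetic.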

Contrast this with a discrete distribution like the hypergeometric distribution, which can be computed via simple arithmetic, such as combinations and permutations.
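Indeed, the hypergeometric probability mass function needs nothing beyond binomial coefficients; a short generic sketch:

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    # P(X = k): exactly k successes in a sample of size n drawn without
    # replacement from a population of N items containing K successes.
    # Pure combinatorics -- no calculus involved.
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Example: probability of exactly 2 red cards in a 5-card poker hand.
p_two_red = hypergeom_pmf(2, 52, 26, 5)
```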

Actually, calculus forms the underlying mathematical basis for so much technology: engineering, physics, economics (modeling), weather (modeling), Wall Street wizards, real risk management, and on and on. It is buried, baked in, and often hidden, but it is there. Practically any discussion of mathematics beyond addition and multiplication, without some knowledge of calculus, will end up dancing around the subject trying to avoid it, just making the discussion difficult. If you are not working the technology, then you are likely not calculating any probabilities, or maybe nothing beyond costs and schedules, writing lines of code, or just ringing up sales on a cash register.

Someone I know has complained that in his college economics classes, the professor would not allow calculus to solve problems, so every problem that would have touched calculus had to be solved by hand-waving past a limit (as X approaches zero). For example, the marginal rate (of anything) is a fancy economic word for “derivative.”

Where I went to college, calculus was fair game in economics, as we all had calculus.

Anyhow, my point being, you have to start with the basics.

Yes, in high school I had simple statistics prior to calculus, but that is a very small sample of the world of statistics. Then I had calculus in high school, and that was taught at a genuinely college level. That made college calculus easy, especially since I did not have to repeat Calc I, yay.

If you toss two coins and you know one is heads, then the probability of two heads is 50%, as they are not related events, and so the only relevant probability is that of the unknown second coin? Maybe it is the way the question is worded, but it is certainly not simple for me to get my head around it being 1/3; even though I understand how it is calculated, I can't see why we only ignore the TT outcome.

Let me put it another way: if I toss a coin today and it is a head, the probability of it being a head next time is 50%, not 1/3.

Worryingly, this sort of statistical analysis is used in financial auditing for companies, and banks use it in risk modelling for loans, and we all know how that turned out.

If you toss two coins and you know one is heads, then the probability of two heads is 50%, as they are not related events, and so the only relevant probability is that of the unknown second coin? [...]

(edit to better address the question)

You seem to confuse "at least one is heads" with "the first one is heads". In the second case you are right: the probability is 50% and only depends on the second toss, which is also completely independent from the first (even given the information that the first one is heads).

In the "at least one is heads" case things change. For example if you look at the first coin toss (and only the first) and it is tails, then you do not have to look at the second coin toss to know that it's heads. Your 50% reasoning falls apart here, because given the information that at least one is heads, the first coin (counter-intuitively) actually does influence the second coin toss.

Here you should make sure that you do not think of these as coin tosses done one after the other, otherwise the above does not make sense. The first coin toss does not actually influence the second while they are done. However, once I do both and I determine that one is heads (which I can only do when both coins are down), then tell you this fact and show you the first coin, then suddenly the outcome of the first coin seems to influence the second.
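For anyone who still finds the conditioning argument slippery, a quick simulation makes the point empirically. This is a generic sketch (seeded so the run is reproducible), unrelated to the audit code discussed in the article:

```python
import random

# Monte Carlo check of the "at least one is heads" puzzle: among pairs of
# fair coin flips with at least one head, how often are both heads?
rng = random.Random(0)  # fixed seed so the run is reproducible
both = conditioned = 0
for _ in range(100_000):
    a, b = rng.random() < 0.5, rng.random() < 0.5  # True means heads
    if a or b:
        conditioned += 1       # pairs with at least one head
        if a and b:
            both += 1          # pairs where both are heads

ratio = both / conditioned     # hovers near 1/3, not 1/2
```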

Even further, there should be a test assessing people's knowledge of the candidates. If people can't explain their candidate's basic political stances, their vote should be thrown out. This will erase the "Obama is black like me, so I'll vote for him" kind of votes (or, on the other end of the political spectrum, "Obama is a Muslim, so I'm voting for McCain"). This will also eliminate votes from people who blindly vote the party line or who are just voting for the party their parents tell them to, and so on.

In other words, you'd like to disenfranchise anyone who chooses a candidate for (legally permissible) reasons you don't personally agree with, and believe a "television comprehension test" would be an effective way to do so?

No.

I simply think people should be required to know the candidates' political stances in order to vote. My informed vote that I decided after much deliberation should count for more than someone who looked at Obama, saw he was black, and voted for/against him.

In fact, I think it should be illegal to vote for a candidate based on gender/race/religion/etc. Discrimination is illegal when hiring employees, and the election is essentially the final stage of the hiring process for the most powerful job in America, so discriminatory votes should be thrown out.

I know this system is not perfect and can be cheated (by learning about the candidates but still voting based on nonpolitical factors) but it would at least weed out a good amount of the ignorant votes and possibly make Americans more informed about the candidates running for office.

Couldn't they just have used pens with nibs the thickness of the arrow?

This is a good point - at least as important as auditing the results is designing a system that produces fewer inaccurate results in the first place.

Seriously... complete an arrow?? Who was the genius that had the idea???

I'm guessing the idea was to create a system which was obvious visually with a reliable result. An arrow is easy to understand even just looking at it, whereas a box can have crosses, ticks or just filled in (which can screw up computer reading of the result)

Using dice for a seed? Seriously? Do you know how much those things cost?

From man 4 random:

Quote:

The random number generator gathers environmental noise from device drivers and other sources into an entropy pool. The generator also keeps an estimate of the number of bits of noise in the entropy pool. From this entropy pool random numbers are created.

When read, the /dev/random device will only return random bytes within the estimated number of bits of noise in the entropy pool. /dev/random should be suitable for uses that need very high quality randomness such as one-time pad or key generation. When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered.

And just what exactly did they do with the seed? Is this just for picking which ballots?

Couldn't they just have used pens with nibs the thickness of the arrow?

This is a good point - at least as important as auditing the results is designing a system that produces fewer inaccurate results in the first place.

Seriously... complete an arrow?? Who was the genius that had the idea???

Yeah, why is the system so complicated? Remember, in Bush v. Gore you had to punch holes in the ballot or something. In Finland the system is simple: every candidate gets assigned a number. You write that number on the ballot. Done.

If you write in anything else besides the number, the ballot is discarded.

The ballots are hand-counted by volunteers, with observers from each party making sure that everything goes smoothly.

Yeah, you could make the claim that the USA is bigger and hand-counting is too slow. Well, the USA is bigger, but it also has more resources (read: people counting the ballots). And with a simple number-based system, the counting is faster than trying to decipher arrows or holes. In Finland, the election results are usually final a few hours after the election ends.

If you write in anything else besides the number, the ballot is discarded.

But then you're into the minefield of volunteers reading bad handwriting. The idea with the arrow (or any other simple mark-on-a-page) device is to avoid the potential pitfalls of differing interpretation of the written numbers.

...the assumption that voters for each candidate have equal likelihood of casting a poorly marked ballot. Actually that seems almost certainly not true, if it is reasonable to assume that candidate preference is relatable to educational level, and educational level is related to the skill of reading and following instructions...

The problem is that neither of those assumptions is valid. Although, if you're suggesting that highly educated people are less likely to read and follow instructions, I could easily be swayed to that interpretation. Both my work in IT support and my volunteer work dealing with people of all backgrounds have led me to realize that educated people have a marked tendency to actively ignore printed instructions.

Hi everyone, I'm a collaborator of Philip's and can answer some of these questions.

First, we tend to use ten-sided dice in election auditing as physical sources of randomness because there are public observers present who don't know what base-6 would be.

Second, the seed is used as input into a PRNG or CSPRNG (not sure what they were using here, but it's likely whatever Stark's open-source software toolkit uses, which I don't recall at the moment... maybe mersenne twister).

Third, the 1% was established in 1964 when punchcard ballots were starting to be used in California and it was the first "ballot record" that didn't have actual candidate names next to the mark a voter would make. So, you'd punch the ballot in the apparatus and you'd get a thing with holes in it next to a three-digit contest number. The CA legislature wanted to have a hand-count check that there were no tabulation problems. The 1% number used to be higher and has dropped over the years as the burden has been pretty big for some counties (LA County spends 24/7 for 30 days straight with about 8 teams counting ballots to do the 1%). Often, big races aren't terribly close so these methods allow confirming that the election would not change without having to count all the ballots and often much less than 1%. It's the very close and small races that need a lot of hand counting for confirmation.

If I missed a question, let me know and I'll do my best to answer it. BTW, Rivest and his student Emily Shen will present a Bayesian auditing approach at EVT/WOTE '12 in Bellevue, WA in August. That paper will be publicly available on 6 August (only available to registered attendees right now).

Try to get your friends/family to understand this: You flip two coins. Given that at least one is heads, what is the probability that both are heads? It's 1/3. People are irritatingly bad at understanding this.

It's easier to understand this problem by making a chart. Here are all of the possible outcomes of tossing two coins, and each outcome is equally likely:

1. Heads, Heads
2. Heads, Tails
3. Tails, Heads
4. Tails, Tails

"At least one is heads" eliminates outcome #4. Of the three remaining outcomes, exactly one has both coins as heads.

And this was my problem, because given one is heads there are only two solutions: both heads, or a heads and a tails. The result being 50%, not 1/3. If one coin is heads, then only the other coin matters (it doesn't matter which way round, so the split between outcomes 2 and 3 is not relevant given the original question).

However, I can appreciate why 1/3 would appear correct, but I can't logically accept it for this question.

As someone famous said, there are lies, damned lies, and statistics, and that's why I don't trust any of them.

Even further, there should be a test assessing people's knowledge of the candidates. If people can't explain their candidate's basic political stances, their vote should be thrown out. This will erase the "Obama is black like me, so I'll vote for him" kind of votes (or, on the other end of the political spectrum, "Obama is a Muslim, so I'm voting for McCain"). This will also eliminate votes from people who blindly vote the party line or who are just voting for the party their parents tell them to, and so on.

In other words, you'd like to disenfranchise anyone who chooses a candidate for (legally permissible) reasons you don't personally agree with, and believe a "television comprehension test" would be an effective way to do so?

No.

I simply think people should be required to know the candidates' political stances in order to vote. My informed vote that I decided after much deliberation should count for more than someone who looked at Obama, saw he was black, and voted for/against him.

In fact, I think it should be illegal to vote for a candidate based on gender/race/religion/etc. Discrimination is illegal when hiring employees, and the election is essentially the final stage of the hiring process for the most powerful job in America, so discriminatory votes should be thrown out.

I know this system is not perfect and can be cheated (by learning about the candidates but still voting based on nonpolitical factors) but it would at least weed out a good amount of the ignorant votes and possibly make Americans more informed about the candidates running for office.

You're joking, right? Because if you're serious that is a fucked up and/or naive interpretation of how things would work.

The 1% sample in California would not be intended to prove or validate (with some level of risk) that a particular election/contest is correct; rather, that rate forms a quality measure of the balloting process as a whole. The 1% is a management (therefore political) sample size that trades cost against benefit. With large ballot populations (and a voter population of a few thousand is large, although relatively small for California), 1% is not an unusual sample size. Mr. Stark must know this, so I have no clue why he harps that this does not support the validation of a specific election. (Other than maybe that having a hammer means all problems are solved with a hammer, or maybe that he has a hammer to sell.)

The legislature would have been concerned that the balloting process as a whole was not broken. It is unlikely they were concerned about any given future election; that is the political way (e.g., so what if one person died, that is just a statistic unless it can be used as a rallying cry to get elected). The fixed percentage sample size would prevent small elections from having to bear a larger relative cost per ballot. Having the sample size based on the election results makes the costs tied to the results. I know the opposite argument can be made, but remember these are managers (politicians). And many recounts are paid for by the candidates anyhow. So in the minds of these politicians, having a fixed percentage represents a known, predictable, foreseeable cost.

However, there is potentially another problem here, and that is the assumption that voters for each candidate have equal likelihood of casting a poorly marked ballot. Actually, that seems almost certainly not true, if it is reasonable to assume that candidate preference is related to educational level, and educational level is related to the skill of reading and following instructions. That is where jaketheultimate's Bayesian can come into play.

Ps. Anyone remember Numb3rs? For the first season or two it seemed that everything was solved with a Bayesian.

Part of Stark's issue with the 1% is that it's merely 1% of precincts, rather than 1% of all ballots.