Last week, the Texas Supreme Court ruled that the Austin NAACP's case against the Texas Secretary of State could be dismissed. Tim Lee, writing for Ars Technica, does a great job of summarizing the case, quoting ACCURATE Acting Director Dan Wallach and postdoc Joseph Lorenzo Hall.

Dan provides a particularly stark illustration of the most severe technical vulnerabilities found in the 2007 California Top-To-Bottom Review (in which many ACCURATE researchers participated):

Wallach is an expert on Travis County’s eSlate machines because he participated in one of the nation’s only comprehensive DRE machine security audits, in California back in 2007. Wallach says the most serious flaws with the machines arise from their networking capabilities. To tally the votes at the end of the election, Hart InterCivic’s voting machines are taken to a distribution center where they are connected to an ordinary PC running special vote-counting software.

Wallach said that the PC software had a buffer overflow vulnerability, which meant that a single malicious voting machine could take control of the vote-counting PC. And the PC, in turn, had the power to directly modify the memory of the other voting machines that would later be connected to it. Hence, a malicious party with access to a single voting machine could trigger a viral attack on the voting machines used in dozens of precincts.
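To make the bug class concrete, here is a schematic Python simulation of how an attacker-controlled length field can overwrite memory adjacent to a fixed-size buffer. Everything here (the buffer size, the record format, the adjacent "control byte") is an illustrative assumption, not Hart InterCivic's actual code or protocol:

```python
# Schematic simulation of the classic buffer-overflow pattern: a record's
# self-reported length is trusted when copying into a fixed-size buffer.
# This illustrates the bug class only; it is NOT Hart InterCivic's code.

BUF_SIZE = 16

def unpack_record(wire: bytes) -> bytearray:
    """Naive parser; memory model = [16-byte buffer][1 adjacent control byte]."""
    memory = bytearray(BUF_SIZE + 1)
    memory[BUF_SIZE] = 0                  # control byte: 0 = normal mode
    claimed_len = wire[0]                 # attacker-controlled length field
    payload = wire[1:1 + claimed_len]
    for i, b in enumerate(payload):       # no bounds check against BUF_SIZE
        memory[i] = b
    return memory

honest = bytes([4]) + b"vote"
print(unpack_record(honest)[BUF_SIZE])    # control byte untouched -> 0

evil = bytes([17]) + b"A" * 16 + bytes([0xFF])
print(unpack_record(evil)[BUF_SIZE])      # control byte overwritten -> 255
```

In C, the same unchecked copy would clobber whatever actually sits past the buffer (return addresses, function pointers), which is what turns a parsing bug into full control of the vote-counting PC.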

The Texas Supreme Court essentially ruled that this issue (whether or not to require that voting machines be fundamentally auditable) is a policy question, and that the proper resolution lies with the Texas legislature or, ultimately, Texan voters.

The central idea in this result is that the researchers examined how people fill in bubble forms, like the optical scan ballots used in voting, to see whether there is enough structure in these bubble patterns to uniquely identify the individual filling out the form. They apply some serious machine-learning mojo and correctly identify the individual about 50% of the time, a much greater identification rate than the 3% rate for completely random guessing. And the correct answer is among the top three results 75% of the time.
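A quick back-of-envelope check shows how far those rates are from chance. The pool size of 33 below is an assumption inferred from the stated 3% random baseline (1/33 ≈ 3%); the paper's actual candidate pool may differ:

```python
# Back-of-envelope comparison of the reported identification rates against
# pure chance. The pool size is an illustrative assumption derived from the
# quoted 3% random-guess baseline (1/33 ~= 3%).

n_candidates = 33
random_top1 = 1 / n_candidates        # one random guess
random_top3 = 3 / n_candidates        # any of 3 distinct random guesses

reported_top1 = 0.50
reported_top3 = 0.75

print(f"random top-1: {random_top1:.1%} vs reported {reported_top1:.0%}")
print(f"random top-3: {random_top3:.1%} vs reported {reported_top3:.0%}")
```

Under that assumption, the classifier's top-1 rate is roughly 16 times better than chance, and its top-3 rate roughly 8 times better.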

This has both good and bad consequences for elections. Bad in that anyone with a person’s form-filling data, such as an employer or an exit pollster, likely has enough identifying information to pick out that person’s ballot based solely on a scanned image of it; advocates (such as the Humboldt Election Transparency Project) have been releasing such scanned ballot images for a few years now. Good in that this might help detect when a different person filled out a ballot (vote buying) or, more importantly, when many ballots were filled out by the same person (ballot box stuffing).

The Princeton team has had this paper accepted to USENIX Security in August, and they’ve been experimenting with mitigations for voting, such as the inked markers used in Los Angeles for the InkaVote system (where a cheap inked dauber applies a uniform size and amount of ink to a target).

Full disclosure: the author of this post, Joseph Lorenzo Hall, was a visiting postdoc at CITP for the past three years and consulted closely with the CITP team on this work.

The U.S. Department of Labor (DOL) recently asked for public comment on a fascinating issue: what kind of guidelines should they give unions that want to use “electronic voting” to elect their officers? (Curiously, they defined electronic voting broadly to include computerized (DRE) voting systems, vote-by-phone systems and internet voting systems.)

As a researcher here at ACCURATE, I figured we should have good advice for DOL.

(If you need a quick primer on security issues in e-voting, GMU’s Jerry Brito has just posted an episode of his Surprisingly Free podcast where he and I work through a number of basic issues in e-voting and security. I’d suggest you check out Jerry’s podcast regularly as he gets great guests and really digs deep into the issues while keeping it at an understandable level.)

The DOL issued a Request for Information (PDF) that asked a series of questions, beginning with the very basic, “Should we issue e-voting guidelines at all?” The questions go on to ask about the necessity of voter-verified paper audit trails (VVPATs), observability, meaningful recounts, ballot secrecy, preventing flawed and/or malicious software, logging, insider threats, voter intimidation, phishing, spoofing, denial-of-service and recovering from malfunctions.

Whew. The DOL clearly wanted a “brain dump” from computer security and the voting technology communities!

The requirement for source code review of 1% of a system’s lines of code (LOC) during the new Test Readiness Review, in which a voting system must pass a few basic tests before being allowed to undergo more extensive testing, needs to be better specified to be effective; we proposed a few ways this could be improved.
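One way a "1% of LOC" requirement could be made precise and reproducible is to sample the lines deterministically from a published seed, so that a second lab can re-derive exactly the same review set. This is a hypothetical sketch, not the EAC's procedure; the file names and seed string are made up:

```python
# Hypothetical sketch: deterministically sample 1% of a codebase's lines for
# review so a second test lab can reproduce the identical review set.
# File names and the seed string below are illustrative assumptions.
import hashlib
import random

def sample_loc(files, fraction=0.01, seed="published-review-seed"):
    """files maps filename -> line count; returns sorted (file, line_no) pairs."""
    all_lines = [(name, ln) for name, count in sorted(files.items())
                 for ln in range(1, count + 1)]
    k = max(1, int(len(all_lines) * fraction))
    # Seed the RNG from a published string so the sample is auditable.
    rng = random.Random(int(hashlib.sha256(seed.encode()).hexdigest(), 16))
    return sorted(rng.sample(all_lines, k))

codebase = {"tally.c": 5000, "ballot.c": 3000, "net.c": 2000}
picked = sample_loc(codebase)
print(len(picked))   # 1% of 10,000 lines -> 100 lines to review
```

The point of the design is auditability: because the seed is public and the sampling is deterministic, no one (vendor or lab) can quietly steer the review away from problem code.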

There should be explicit recognition that an important goal of the test plan and test report is to facilitate reproducibility of certification testing. We cited the difficulty of reproducing certain tests ACCURATE PIs and researchers faced during the California Top-To-Bottom Review and the Ohio EVEREST voting system review.

The procedure for dealing with modifications to software in relation to the trusted build process needs to be better specified to handle each combination of availability or unavailability of the original build environment and file signatures. The bottom line: if an unmodified file can pass signature verification, or can be manually compared to a bona fide unmodified file, it doesn’t have to undergo testing again; otherwise, there’s no basis for knowing whether the file has been modified.
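The decision rule described above boils down to comparing a file's digest against a trusted reference recorded for the unmodified original. A minimal sketch, using SHA-256 as an illustrative choice (the actual certification process may rely on different signature mechanisms):

```python
# Minimal sketch of the decision rule above: a file skips re-testing only if
# we can show it matches the unmodified original. SHA-256 is an illustrative
# stand-in for whatever signature scheme the certification program uses.
import hashlib

def file_digest(path: str) -> str:
    """Stream the file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_retesting(path: str, trusted_digest: str) -> bool:
    """True when we cannot demonstrate the file is the unmodified original."""
    return file_digest(path) != trusted_digest
```

If neither a trusted digest nor a bona fide reference copy survives, `needs_retesting` has nothing to compare against, which is exactly the "no basis to know" case in the comment above: the file must go back through testing.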

We look forward to working further with the EAC, vendors, advocates and experts to ensure the Testing and Certification Program remains healthy, efficient and robust.