The new (near-final) agenda for the plenary meeting will be sent out today.
Presentations are due Wednesday afternoon and will be sent to the subcommittee
as they are received. John reviewed the agenda for the meeting, which
will start with a high-level overview of the structure of the VVSG.
Subcommittee presentations will proceed in the order CRT, HFP, then STS.
The meeting may need to be extended and the schedule kept flexible.

Nelson went over STS's plan for its presentation. Bill Burr will give a
high-level overview of security efforts, John Wack will talk about
e-pollbooks, and Nelson will cover specifics on cryptography and setup
validation, highlighting the new chapters as well as changes to the other
chapters. Bill's overview will cover requirements for auditing electronic
and paper records.

Discussion
on Auditing, Electronic Records, and Paper Record Requirements:

John Kelsey felt that comments on the VVPR paper had been covered at the
last meeting, with the exception of the use of barcodes. Should they be
allowed? And if so, what requirements should be written? Should the
barcodes contain cast vote ballot records, and should they be used for
recounts or audits? Should this be a policy issue? Are they necessary
given the advances in OCR capabilities (OCR is voter verifiable)?

It was suggested that whether or not barcodes are used should be left up
to the states. It was decided that the standard would not disallow
barcodes, but it would discuss the problems introduced by adding
complexity to the record and stress that, if barcodes are used, auditing
to compare the barcode against the human-readable record is very
important. Barcodes should not be used for recounts or for auditing an
election.

Open
Ended Vulnerability Testing (OEVT) - Santosh Chokani:

The OEVT paper was updated based on David Wagner's concerns. David still
had concerns about the process, but the understanding that the paper is
meant to be descriptive rather than prescriptive resolved many of them.
Ron Rivest pointed out that the paper should serve as guidance for the
team conducting the test. The team should have flexibility and a clear
goal. The goal of OEVT is to find vulnerabilities that could be
exploited, without detection, to change the result of an election. Due
to limited resources, fully exercising a vulnerability will probably not
be feasible.

Today's discussion focused on the level of effort, the size of the team,
and cost. The paper under discussion estimated a team of 5 to 6 people
working 6 to 8 weeks, at a cost between $250K and $500K. The added cost
of this testing causes issues for the EAC as well as for state election
officials. Ron stated that the cost and effort would depend on factors
such as the complexity of the system, the quality of the documentation
provided by the vendor, the number of iterations with the vendor,
whether it is a new or an older system, and how widely the system is
expected to be used. David pointed out that extensive OEVT costs would
create barriers to entry for new, innovative systems.

Several options were discussed:

1) The TGDC recommends a default (mid-range) level of required testing,
and the EAC determines whether more or less is needed based on the
mitigating factors of each system.

2) The TGDC recommends a default (mid-range) level of testing, and the
states request more testing if they deem it necessary based on their
needs.

3) Systems are certified/tested based on the number of systems put into
use (deploying more systems requires more vulnerability testing).

4) A broad range of testing requirements is defined, and the testing
labs determine the appropriate level of testing, within that range,
based on the complexity of the system.

Santosh will take these points from today's discussion and, using $100K
as the bottom-level testing amount, rework the white paper.

Ron then brought up reporting based on OEVT testing. A report will be
generated with the list of vulnerabilities, but a determination then has
to be made as to whether they are fixable flaws. (If they are determined
to be fixable flaws, the fixes will have to be retested.) How does this
become a standard? How do you determine whether vulnerabilities are
serious enough to cause a system to fail certification? David Wagner
stated that a serious vulnerability is one in which a single person,
using whatever resources he has available, can do something that will
affect the outcome of an election. When a system is tested, the list of
vulnerabilities found will be forwarded to the EAC for a determination
of whether the system fails or is certified. The TGDC will provide the
EAC with guidance on what should be considered serious vulnerabilities.

Future
STS Meetings (Nelson Hastings):

The STS
subcommittee will continue to meet on Tuesdays after the plenary meeting.
Our next meeting is scheduled for May 29 at 10:30 a.m.

[*
Pursuant to the Help America Vote Act of 2002, the TGDC is charged with
directing NIST in performing voting systems research so that the TGDC
can fulfill its role of recommending technical standards for voting
equipment to the EAC. This teleconference served the purposes of the
STS subcommittee of the TGDC to direct NIST staff and coordinate voting-related
research relevant to the VVSG 2007. Discussions on this telecon are
preliminary and do not necessarily reflect the views of NIST or the
TGDC.]