I've dabbled with photography for most of my life, defiantly regressing toward bulkier equipment and more cumbersome technique as cameras become inexorably smaller and easier to use with each successive generation. Lately, I've settled on a large format camera with a tethered digital scanning back, which is about as unwieldy as I'd likely want to endure, at least without employing a pack mule.

But for all the effort I'm apparently willing to expend to take pictures, I'm pathologically negligent about actually doing anything with them once they've been taken. Scattered about my home and office are boxes of unprinted negatives and disk drives of unprocessed captures (this is, admittedly, overwhelmingly for the best). So it was somewhat against my natural inclinations that a few years ago I started making available an idiosyncratic (that is to say, random) selection of images elsewhere on this web site.

Publishing photos on my own server has been a mixed bag. It maximizes my control over the presentation, which appeals to the control freak in me, but it also creates a lot of non-photographic work every time I want to add a new picture -- editing HTML, creating thumbnails, updating links and indexes. I've got generally ample bandwidth, but every now and then someone hotlinks into an image from some wildly popular place and everything slows to a crawl. I could build better tools and install better filters to fix most of these shortcomings, but, true to my own form I suppose, I haven't. So a few weeks ago I finally gave in to the temptations of convenience and opened an account on Flickr.

I'm slowly (but thus far, at least, steadily) populating my Flickr page with everything from casual snapshots to dubiously artistic abstractions. It's still an experiment for me, but a side effect has been to bring into focus (sorry) some of what's good and what's bad about how we publish on the web.

Flickr is a mixed bag, too, but it's a different mix than doing everything on my own has been. It's certainly easier. And there's an appeal to being able to easily link photos into communities of other photographers, leave comments, and attract a more diverse audience. So far, at least, it seems not to have scaled up unmanageably or become hopelessly mired in robotic spam. They make it easy to publish under a Creative Commons license, which I use for most of my pictures. But there are also small annoyances. The thumbnail resizing algorithm sometimes adds too much contrast for my taste -- I'd welcome an option to re-scale certain images myself. The page layout is inflexible. There's no way to control who can view (and mine data from) your social network. And they get intellectual property wrong in small but important ways. For example, every published photo is prominently labeled "this photo is public", risking the impression that copyrighted and CC-licensed photos are actually in the public domain and free for any use without permission or attribution.

I wonder how most people feel about that; do we think of our own creative output, however humble, as deserving of protection? Or do we tend to regard intellectual property as legitimately applying only to "serious" for-profit enterprises (with Hollywood and the music industry rightfully occupying rarefied positions at the top of the socio-legal hierarchy)? A couple of years ago I led a small discussion group for incoming students at my overpriced Ivy-league university. The topic was Larry Lessig's book Free Culture, and I started by asking what I thought was a softball question about who was exclusively a content consumer and who occasionally produced creative things deserving protection and recognition. To my surprise, everyone put themselves squarely in the former category. These were talented kids, mostly from privileged backgrounds -- hardly a group that tends to be shy about their own creativity. Maybe it was the setting or maybe it was how I asked. But perhaps their apparent humility was symptomatic of something deeply rooted in our current copyright mess. Until we can abandon this increasingly artificial distinction -- "consumers" and "content producers", us against them -- our lopsided copyright laws seem bound to continue on their current draconian trajectory, with public disrespect for the rules becoming correspondingly all the more flagrant.

Today Ohio Secretary of State Jennifer Brunner released the results of a comprehensive security review of the electronic voting systems used in her state. The study, similar in scope to the California top-to-bottom review conducted this summer (in which I was also involved), contracted several academic teams and others to examine the election procedures, equipment and source code used in Ohio, with the aim of identifying any problems that might render elections vulnerable to tampering under operational conditions.

The ten-week project examined in detail the touch-screen, optical scan, and election management technology from e-voting vendors ES&S, Hart InterCivic and Premier Election Solutions (formerly Diebold). Project PI Patrick McDaniel (of Penn State) coordinated the academic teams and led the study of the Hart and Premier systems (parts of which had already been reviewed in the California study). Giovanni Vigna (of WebWise Security and UCSB) led the team that did penetration testing of the ES&S system.

I led the University of Pennsylvania-based team, which examined the ES&S source code. This was particularly interesting because, unlike Hart and Premier, the full ES&S source code suite hadn't previously been studied by the academic security community, even though ES&S products are used by voters in 43 US states and elsewhere around the world. The study represented a rare opportunity to contribute to our understanding of e-voting security in practice, both inside and outside Ohio.

My group -- Adam Aviv, Pavol Cerny, Sandy Clark, Eric Cronin, Gaurav Shah, and Micah Sherr -- worked full-time with the source code and sample voting machines in a secure room on the Penn campus, trying to find ways to defeat security mechanisms under various kinds of real-world conditions. (Our confidentiality agreement prevented us from saying anything about the project until today, which is why we may have seemed especially unsociable for the last few months.)

As our report describes, we largely succeeded at finding exploitable vulnerabilities that could affect the integrity of elections that use this equipment.

There was more to the study (called "Project EVEREST") than just the source code analysis, and, of course, there is also the question of how to actually secure elections in practice given the problems we found. The Ohio Secretary of State's web site [link] has a nice summary of the review and of the Secretary's recommendations.

Addendum (4 June 2008): Apparently the Ohio SoS web site was recently reorganized, breaking the original links in this post. I've updated the links to point to their new locations.

Addendum (24 March 2009): See this post for a recent case of vote stealing
that exploited a user-interface flaw in the ES&S iVotronic.

Most of the debate surrounding the Protect America Act --
hastily passed this summer and eliminating requirements for
court orders for many kinds of government wiretaps --
has focused on the perceived need to balance national
security against individual privacy. And while it's surely true that
wiretap policy strikes at the heart of that balance, the specter
of unrestrained government spying may not be the most immediate
reason to fear this ill-conceived and dangerous change in US law.
Civil liberties concerns aside, the engineering implications of these new
wiretapping rules, coupled
with what we can discern about how they are being implemented, should be
at least as unsettling to hawks as to doves.

Contrary to previous speculation, DCS-3000 is much more
than an updated version of the FBI's
Carnivore
Internet interception and collection device first disclosed seven
years ago. Instead, the DCS system appears to be a comprehensive
software suite
for managing and collecting data from a variety of Title III (law enforcement)
surveillance technologies, including Internet wiretaps, wireline
voice telephony, cellular, "push-to-talk", and maybe others.
The system provides a single interface for managing
and collecting evidence from all the different kinds of wiretaps
the FBI uses, connected via a "DCSNet" for getting
tapped traffic to any FBI field office in the US. There are references
to several other FBI systems as well, most notably the Bureau's ill-fated
Trilogy case management system, and also something called
DCS-5000, which is described as an analogous system for managing
FISA (national security) taps. The software is definitely large and complex --
there are mentions of multi-week
training courses for the agents who use it.

That complexity itself raises some difficult security questions. As
my colleague
Steve
Bellovin points out, the new documents suggest that the FBI may have failed to
adequately secure the system against an insider threat. But aside
from the usual risks that the software could
be subverted or abused, in a wiretapping system there's also
the problem of ensuring that intercepted evidence is faithfully
recorded. And that, it turns out, can be harder than it sounds.

Two years ago, my graduate students and I
discovered basic
flaws in the in-band signaling mechanisms used for many years in older
analog voice telephone wiretaps. The flaws allow a
wiretap target to interfere with a phone tap by playing special tones
that cause interception equipment
to shut down prematurely or record misleading call data.
We speculated, based on the documents available to us then, that
the CALEA-based interception system now used by the FBI
might suffer from similar problems. The FBI denied this at the time,
claiming that only a few systems remain vulnerable to our attacks.
But sure enough, the EFF's new documents refer in several places to continued
support for in-band "C-tone" signaling in voice line taps (for example, on
page
53 of this pdf document). No doubt, these features were included to
provide backward compatibility with older equipment. And the result is
backward compatibility with older bugs.
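To make the attack concrete, here's a minimal sketch of what "playing special tones" amounts to in practice: synthesizing a steady in-band audio tone and saving it as telephone-quality audio. The frequency, amplitude, and duration below are hypothetical placeholders for illustration only -- the actual C-tone parameters are properties of the interception equipment and are not specified here.

```python
import math
import struct
import wave

# Hypothetical parameters, for illustration only. The real C-tone
# frequency and level are defined by the interception equipment.
SAMPLE_RATE = 8000   # telephone-quality audio, samples per second
TONE_HZ = 1000.0     # placeholder frequency, NOT the actual C-tone
DURATION_S = 2.0     # seconds of tone
AMPLITUDE = 0.5      # fraction of full 16-bit scale

def tone_samples(freq_hz, duration_s, rate=SAMPLE_RATE, amp=AMPLITUDE):
    """Generate 16-bit PCM samples of a steady sine tone."""
    n = int(duration_s * rate)
    return [int(amp * 32767 * math.sin(2 * math.pi * freq_hz * t / rate))
            for t in range(n)]

def write_wav(path, samples, rate=SAMPLE_RATE):
    """Write mono 16-bit samples to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 2 bytes = 16-bit samples
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

samples = tone_samples(TONE_HZ, DURATION_S)
write_wav("tone.wav", samples)
```

The point of the sketch is only that nothing exotic is required: any audio source on the tapped line can emit an in-band signal, which is exactly why in-band signaling is so dangerous in an adversarial setting.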

California Secretary of State Debra Bowen's
decision
on the fate of her state's voting technology was announced just before
midnight last Friday. The certifications of all three reviewed systems
(Diebold, Hart, and Sequoia) were revoked and then
re-issued subject to a range of
conditions intended to make it harder to exploit some of the problems
we found in the security review
(see previous entry in this blog). The certification of a fourth system, ES&S, was
revoked completely because the vendor failed to submit source code in
time to be reviewed.

Whether the new conditions are a sufficient security stopgap and whether
the problems with these systems can be properly fixed in the long term will be
debated in the technical and elections communities in the weeks and months
to come. How to build secure systems out of insecure components is
a tough problem in general, but of huge practical importance here, since
we can't exactly stop holding elections until the technology is ready.

But that's not what this post is about.

The traditional role of the vendors in cases like this, where critical
products are found to be embarrassingly or fatally insecure,
is to shoot the messengers. The reaction is familiar to most anyone
who has ever found a security flaw and tried to do the right thing
by reporting it rather than exploiting it: denials, excuses, and threats.

Occasionally, though, a company will try to look "responsible" by employing a
different strategy, acknowledging -- and perhaps even actually correcting -- the
underlying problems. This should be understood as
nothing more than a transparent attempt to pander to customers
by wastefully improving the security of otherwise perfectly good products.
These naive organizations --
a tipoff is that they're often run by engineers rather than
experienced business people -- do enormous damage by shirking their public
relations duty to the community as a whole. Fortunately, this kind of
unsophistication is rare enough not to have been much of an issue
in the past, although in some circles, it is becoming worrisomely
commonplace.

To help vendors focus on their obligations here, Jutta Degener and I
present Security Problem Excuse
Bingo. Usual bingo rules apply, with vendor press releases, news
interviews, and legal notices used as source material. Cards can be
generated and downloaded from
www.crypto.com/bingo/pr

Because we follow all industry standard practices,
you can rest assured that there are no bugs in this software.
We take security very seriously.
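The mechanics of such a card are simple. Here's a hedged sketch (not the actual crypto.com generator, whose code isn't shown here) that fills a 5x5 grid with randomly chosen excuses and the traditional free center square; the phrase list is illustrative, seeded with the examples above:

```python
import random

# Illustrative excuse pool; the real card draws on a larger list.
EXCUSES = [
    "We follow all industry standard practices.",
    "There are no bugs in this software.",
    "We take security very seriously.",
    "No customer data was compromised.",
    "The attack requires physical access.",
    "This was a highly sophisticated attack.",
    "The vulnerability is purely theoretical.",
    "We have no evidence of exploitation in the wild.",
    "The researcher violated our terms of service.",
    "A fix is scheduled for a future release.",
    "Security is our top priority.",
    "The issue affects only a small number of users.",
    "The system was never meant to face the Internet.",
    "We use military-grade encryption.",
    "The flaw was in a third-party component.",
    "Our products are certified to federal standards.",
    "The test conditions were unrealistic.",
    "No other vendor does any better.",
    "We were unable to reproduce the problem.",
    "The researchers had insider knowledge.",
    "We are cooperating fully with law enforcement.",
    "Users should simply follow best practices.",
    "The report contains numerous inaccuracies.",
    "This is an isolated incident.",
]

def make_card(pool, size=5, rng=random):
    """Return a size x size card of distinct excuses with a
    FREE square in the center."""
    picks = rng.sample(pool, size * size - 1)
    picks.insert((size * size) // 2, "FREE")  # index 12: center of 5x5
    return [picks[r * size:(r + 1) * size] for r in range(size)]

card = make_card(EXCUSES)
```

Mark a square whenever the corresponding excuse turns up in a press release; five in a row wins.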

Readers of this blog may recall that
for the last two months I've been part of
a security review of the electronic voting systems used in
California.
Researchers from around the country (42 of us in all) worked
in teams that examined source code and documents
and performed "red team" penetration tests of election systems made by
Diebold Election Systems, Hart InterCivic and Sequoia Voting Systems.

The red team reports were released by the California Secretary of State
last week, and have been the subject of much attention in the nationwide
press (and much criticism from the voting machine vendors in whose systems
vulnerabilities were found). But there was more to the study than
the red team exercises.

Today the three reports from the source code analysis teams were released.
Because I was participating in that part of the study, I'd been unable
to comment on the review before today. (Actually, there's still
more to come. The documentation reviews haven't been released
yet, for some reason.)
Our reports can now be downloaded from
http://www.sos.ca.gov/elections/elections_vsr.htm .

I led the group that reviewed the Sequoia system's code (that report
is here [pdf link]).

The California study was, as far as I know, the most comprehensive
independent security evaluation of electronic voting technologies ever
conducted, covering products from three major vendors and investigating
not only the voting machines themselves, but also the back-end systems that
create ballots and tally votes. I believe our reports now constitute
the most detailed published information available about how these systems
work and the specific risks entailed by their use in elections.

My hat's off to principal investigators Matt Bishop (of UC Davis)
and David Wagner (of UC Berkeley) for their tireless skill in putting
together and managing this complex, difficult -- and I think terribly
important -- project.

By law, California Secretary of State Debra Bowen must decide by tomorrow
(August 3rd, 2007) whether the reviewed systems will continue to be
certified for use throughout the state in next year's elections,
and, if so, whether to require special security procedures where
they are deployed.

We found significant, deeply-rooted security weaknesses
in all three vendors' software. Our newly-released source code
analyses address many of the supposed shortcomings of the red team studies,
which have been (quite unfairly, I think) criticized as being "unrealistic".
It should now be clear that the red teams were successful
not because they somehow "cheated," but rather because the built-in
security mechanisms they were up against simply don't work properly.
Reliably protecting these systems under operational conditions will likely
be very hard.

The problems we found in the code were far more pervasive, and
much more easily exploitable, than I had ever imagined they would be.

As strongly as I feel about the evils of illegal wiretapping,
I must admit to having decidedly mixed feelings here. No, kids,
don't tap your neighbor's phone. But unraveling the once-forbidden
mysteries of telephone electronics has a way of pulling a young
geek into a lifetime of technological exploration. It certainly
did for me.

I was at a
conference recently where everyone was asked to recall
their first moment of thinking "I rule!" over some technology. It's
a surprisingly revealing question; experience the exhilaration of
hacker empowerment at a sufficiently impressionable age and you're
hooked forever. A disproportionately large fraction of the answers seemed
to involve telephony. (Mine was when I discovered you could dial a phone by
flashing the hookswitch. I think I was too young to have anyone to call,
though).

So I suppose if the nerdy kid next door figures out how to hook one of these kits
up to my phone, I won't be too upset. Just make sure not to eat
the solder.

Vassilis Prevelakis and Diomidis Spinellis just published (in the July '07
IEEE Spectrum) a terrific
technical analysis [link]
of the recent Greek cellular eavesdropping scandal. In 2005, it was
discovered that over a hundred Athens cellphones, mostly belonging to
politicians (ranging from the mayor to the prime minister), were being
illegally wiretapped. The culprit hasn't been found, but there's
plenty of fodder for speculation, including mysteriously missing records,
a suspicious suicide, and, as Prevelakis and Spinellis point out,
an intriguing technological mystery.

This would all be interesting enough for its stranger-than-spy-fiction
elements alone, but what makes the story essential reading here is how
definitively it illustrates something that many of us in the security and
privacy community have been warning about for years: so-called "lawful
interception" interfaces built in to network infrastructure become inviting
targets for abuse. (See, for example, this point made
in 1998 [pdf] and
in 2006 [pdf]).
And, as this case shows, those targets can be rich indeed.

For some reason, wiretapping interfaces don't seem to get much technical
scrutiny, and we're starting to see how easy it can be to
exploit them to nefarious ends.
Vulnerabilities here can cut both ways, too, sometimes making it easier
for real criminals to evade legal surveillance. A couple
of years ago, Micah Sherr, Eric Cronin, Sandy Clark and I
discovered basic
weaknesses in the interception technologies used for decades to tap
wireline telephones. Many of the vulnerabilities have found their way, in the name of "backward compatibility",
into the latest eavesdropping standards, now implemented just about everywhere.
Maybe even in Greek cellular networks.

Several people asked me for a list of references from my talk on
"Safecracking, Secrecy and Science" Sunday morning in Sebastopol, and I promised a blog entry with pointers.
(If you were there, thanks for coming; it was fun. For everyone else, I
gave a talk on the relationship between progress and secrecy in
security, as illustrated by the evolution of locks and safes over the
last 200 years.)

Unfortunately, few of the historical references I cited are on the web
(or even in print), but
a bit of library work is repaid with striking parallels between the
security arms races of the physical and virtual worlds.

The California Secretary of State recently announced plans for a "top-to-bottom" review of the various electronic voting systems certified for use in that state. David Wagner of U.C. Berkeley and Matt Bishop of U.C. Davis will be organizing source code and "red team" analysis efforts for the project, and they've recruited a large group of researchers to work with them, including me. This has the potential to be one of the most comprehensive independent evaluations of election technologies ever performed, and is especially significant given California's large size and the variety of systems used there. Trustworthy voting is perhaps the most elemental of democratic functions, but, as security specialists know all too well, complex systems on the scale required to conduct a modern election are virtually impossible to secure reliably without broad and deep scrutiny. California's review is a welcome and vitally important, if small, step forward.

I'll be leading one of the source code review teams, and we'll be getting to work by the time you read this. We have a lot to do in a very short time, with the final report due to be published by late summer or early fall. Until then, I won't be able to discuss the project or the details of how we're progressing, so please don't take it personally if I don't.