Casual PKI and making e-mail encryption easy

E-mail encryption seems to be hard enough, or annoying enough,
that even many technically sophisticated people don't do it
regularly. Non-technical people rarely seem to be able to
figure it out at all.

Brad Templeton, Chairman
of the EFF Board, has been thinking about this problem. In an essay
called "Returning
Privacy to E-mail", he suggests that expecting the general
public to learn about public key cryptography in order to benefit
from it is unreasonable, and that the best way to get cryptography
used is to make it seamless and invisible (and preferably included
by default).

The EFF and cryptographers and cryptography activists have fought
for a long time to keep strong cryptography available to the public.
(Corporate lobbying groups like
Americans for Computer
Privacy helped, as did
EFF's
Bernstein crypto-export lawsuit.) Now cryptography can be
published and exported, there's no more Clipper Chip proposal,
and even Dorothy Denning has come around to supporting the public's right
to have cryptography. But that same public is not using it.

I'm generally a fan of user education, and even wrote not long
ago that existing public key infrastructure was too weak, and
that users needed to learn more about verifying key
fingerprints, and to be more paranoid, not less. But Brad
suggests, and there seems to be a growing consensus, that the
threat models for a casual user today are different from the
threat models for the activists and dissidents for whom PGP was
first written last decade.

Today, the casual user has no cryptography protecting his or
her e-mail at all. The casual user doesn't necessarily want
the best-achievable security (which does require proper
key-exchange and key-management techniques, both of
which are exemplified by somebody like
Manoj
Srivastava). The casual user might, however, appreciate
having something better than no security. The sad
experience of usability experts seems to be that the casual
user is fairly unlikely to tolerate much complexity in order
to achieve security. That's why Brad and others have
suggested that an effective e-mail privacy solution for the
general public ought to be as close as possible to "zero
user interface".

The most complex part of e-mail encryption today is clearly
key exchange. Regular PGP users, like Debian developers,
will attend keysigning parties and read off fingerprints in hex.
I went to the PGP keysigning BOF session at ALS, and Phil Zimmermann
showed up, as did Manoj Srivastava. They had a fascinating
exchange, which might be summarized like this:

Manoj: We are not insisting that users be paranoid enough,
because there are many attacks which could be effective
against PGP, the way users are using it today.

Phil: On the contrary, we're alienating users by insisting
that they be more paranoid than necessary. Most users who
could benefit from the security provided by PGP are
extremely unlikely to be subject to such sophisticated
attacks.

Phil then went on to propose something he called a
robot CA, which issues certificates or signs keys
in an automated way without attempting to formally verify
users' real-world identities. (The robot CA uses an interactive
confirmation protocol akin to what happens when you try to
subscribe to a mailing list -- you get an automated reply which
asks, in effect, "Is this really you?". Thus, there is an
attempt to verify that a key does actually correspond to a
particular e-mail address, but not to verify whether the
owner of that e-mail address is who he or she claims to be.)
This proposal is slightly different from Brad's, but I think
both of them could be referred to as casual PKI schemes.
That is, they are forms of key-exchange infrastructure which
involve some useful and practical key verification, while
being admittedly vulnerable to certain sophisticated attacks.
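The confirmation round-trip a robot CA performs can be sketched in a few lines. This is only an illustration of the idea, not Phil's actual design; the HMAC-based token and all names below are my own assumptions.

```python
import hmac
import hashlib
import secrets

# Illustrative sketch only: the HMAC token scheme and names here are
# assumptions, not the robot CA's real protocol.

CA_SECRET = secrets.token_bytes(32)  # held privately by the robot CA

def make_challenge(email, key_fingerprint):
    """Token mailed to the claimed address ("Is this really you?")."""
    msg = f"{email}|{key_fingerprint}".encode()
    return hmac.new(CA_SECRET, msg, hashlib.sha256).hexdigest()

def confirm(email, key_fingerprint, token):
    """If the token comes back, whoever controls the address also
    submitted the key, so the robot CA can sign it automatically."""
    expected = make_challenge(email, key_fingerprint)
    return hmac.compare_digest(expected, token)

# Simulated round trip: the token goes out by e-mail and comes back.
token = make_challenge("alice@example.com", "ABCD1234")
assert confirm("alice@example.com", "ABCD1234", token)
assert not confirm("mallory@example.com", "ABCD1234", token)
```

Note that, just as with mailing-list subscription confirmations, this binds a key to an address only, never to a real-world identity.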

(Brad's casual PKI scheme, outlined in "Returning Privacy
to E-mail", is similar to one used by two free software
projects he found,
PPS and
Herbivore, both of which seem to be at a proof-of-concept
stage.)

Cryptographers sometimes groan about casual PKI, because every
casual PKI scheme is vulnerable to some types of man-in-the-middle
attack, and the designers of casual PKI schemes are perfectly
aware of this. But casual PKI doesn't verify keys out-of-band
because verifying keys out-of-band seems to be something that
most people are not willing to do. This leads to an explicit
trade-off between security and usability, and Phil and Brad have
each suggested that very few attackers could or would make
active MITM attacks (compared to the number who can and do use
passive eavesdropping to intercept e-mail today). There's also
no reason for casual PKI to replace other forms of PKI --
for example, both a Herbivore-like scheme and a robot CA scheme
can be implemented so that users still have and can use standard
PGP keys. They just don't exchange them through keysigning
parties or do manual fingerprint verification. Then people who
want to verify keys can get extra security, while people who
don't want to verify keys can still get some useful security.

There are at least two questions about such ideas -- a technical
question and a much more practical question. First, what's
the best technical architecture for an e-mail privacy system
with very little user interface? Second, what's the best way
to organize its development and get it deployed with e-mail
software that many Internet users use?

Yes, I'm one of the *ahem* technically sophisticated people who don't
encrypt e-mail regularly -- mainly because it's useless when most people
around me don't use PGP anyway.

The major problem I see is deployment:

As Brad pointed out, many e-mail clients in popular use today are
closed source. I won't even try to push the companies involved to
implement it in their upcoming versions -- they'll just implement some
useless random number generator and get away with it.

Another significant obstacle is that most users don't see
eavesdropping as a threat to themselves (though they'll scream whenever
they find that a certain company is able to find out all the URLs
they've surfed to -- but once that's over it's back to the `don't-care'
mindset again). A good way around this may be to sneak opportunistic
encryption into a program which has a different selling point, e.g.
removing spam. (If anyone has better ideas, let me know.)

As for the technical architecture, I see Brad's solution as being the
most attractive. Though it doesn't allow a user to read his encrypted
mail from several different clients, in order to achieve "zero-UI" a
client shouldn't even need to bother a user with stuff like passphrases,
which means any private keys will have to be automatically managed by
and tied to the client anyway. Phil's solution has the weakness that if
the CA is compromised, then the whole infrastructure collapses.

I recently started to sign all my e-mail. What I present here is why I
took so long to start doing it regularly, and why I don't use crypto for
more than that.

Configuration

Creating a public/private key pair is a complicated and intimidating
process. Granted, this is more a myth than reality, but some of the
questions asked only confuse a novice. In any case this should be
integrated with the first usage of a mailer.

Knowledge needed

If you use crypto you have to be aware of the risks of losing your
private key, of a passphrase compromise, and of letting someone use your
computer. You also need to know about revocation certificates, and to
have it explained why you should encrypt certain mails and not others
(mailing lists, for instance). I met someone who never lets his
laptop out of his sight so that he can be sure his keys aren't
compromised. When I suggested using a credit-card-sized CD for the
private keys, it took him barely five seconds to describe how a trojaned
gnupg installed on his laptop would compromise his keys.

Prone to failure

This is why I don't encrypt files. If I lose my private key I lose
my files. Nowadays I just sign mails; that means that if I lose the
private key there's no real harm done (I just lose all those
signatures), and I can even use the revocation
certificate to say the key is no longer valid. But if I use it to
encrypt files or e-mails and I lose my private key, I lose the files
and the e-mail messages. My solution was to put a copy of the
private key in a bank safe, but few normal users will do that.

I think it's possible to get Jane User to use crypto. I just don't
think it's possible to trust Jane User to not give access to her private
key to the nice nephew that's into computers.

I frown on casual PKI, but it's much better than nothing. When I
listened to Manoj and Phil argue about this at the BOF, my
primary concern was that a clear distinction be established between
traditional key signing and laxer methods. Presumably, casual keys will
be on the same keyservers as those belonging to hard-core security
buffs. This doesn't pose a direct problem; the paranoid folks can choose
not to view the signing robot as a trusted signer. However, if end users
start to sign each other's keys for improved security (which is a good
idea) and do not employ strong verification techniques, it could be very
detrimental to the webs of trust.

I was reading the OpenPGP RFC the other day, and noticed that the
protocol includes support for indicating the level of confidence that a
certain signature bestows (see 5.2.3.12 in RFC 2440). I like this idea a
lot. It allows people to make low-security signatures and indicate
that those signatures aren't to be trusted by paranoid users. The
RFC also taught me that signatures can contain a preferred key server,
and a URL to a policy that describes how the signature was issued. I
would love to write a policy like Manoj's for myself, but it would be a
little awkward if it were not contained in the actual signature using the
OpenPGP signature subpacket. I haven't seen support for any of this in
GPG, so I suggest that GPG add support for these features. (While
you're at it, make GPG not crawl on medium-sized keyrings ;-).) Don't ask
me for patches though - I'm unwilling to assign copyright to the FSF.
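Relatedly, OpenPGP's certification signature types already express four rough confidence levels (RFC 2440, section 5.2.1); a robot CA could plausibly issue "persona" certifications while careful humans issue "positive" ones. A minimal sketch (only the four type codes come from the RFC; the helper is my own):

```python
# RFC 2440's certification signature types encode a rough confidence
# level. Only the four type codes below come from the RFC; the
# describe() helper is invented for illustration.
CERT_LEVELS = {
    0x10: "generic (no stated verification policy)",
    0x11: "persona (no identity verification done)",
    0x12: "casual (some casual verification done)",
    0x13: "positive (substantial identity verification)",
}

def describe(sig_type):
    return CERT_LEVELS.get(sig_type, "not a certification signature")

assert describe(0x11).startswith("persona")
assert describe(0x13).startswith("positive")
```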

Your main problem seems to be the risk of private key loss. I'd suggest
making backups. (Open)PGP encrypts private keys with a symmetric cipher
of about 128 bits (it depends), keyed by a secure hash of your
passphrase. Thus, I have my private key backed up on a CD-R in a safe
deposit box. I'm mostly relying on the encryption rather than the box to
keep it safe. Of course, your passphrase has to be secure. To get full
security out of the key length, you need 16 random bytes. Using only
letters and digits, you'll need about 22 characters. And if the
passphrase is in a human language, that number will be much higher.
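The arithmetic behind those passphrase lengths is simple: divide the target entropy by the per-character entropy of the alphabet. A sketch (the ~1.3 bits-per-character figure for English prose is a common rough estimate, not an exact value):

```python
import math

def chars_needed(target_bits, bits_per_char):
    """Passphrase length needed to reach a given entropy target."""
    return math.ceil(target_bits / bits_per_char)

# 16 random bytes = 128 bits of key material.
assert chars_needed(128, math.log2(62)) == 22  # letters and digits
assert chars_needed(128, math.log2(95)) == 20  # all printable ASCII
# English prose is often estimated at only ~1.3 bits per character,
# so a natural-language passphrase must be far longer:
assert chars_needed(128, 1.3) == 99
```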

I don't bother with e-mail encryption the vast majority of the time
because there are few people that I communicate with that are
willing or able to take part in a PKI. It's too confusing for my friends
and family. So many of them use free webmail from various
Internet terminals, so (as best as I'm aware) any sort of encryption
isn't even an option.

My solution? I've given a lot of them accounts on my home mail
server. Ain't no middle for a man to be in. :)

So many of them use free webmail from various Internet terminals,
so (as best as I'm aware) any sort of encryption isn't even an option.

Well, it may still be possible. Though I hear that webmail services
filter out JavaScript in incoming mail, I've not yet heard of them
filtering out Java. So one way may be to embed a Java applet in an
encrypted mail, where the applet will perform some skulduggery with the
mail data (I have in mind a protocol involving several untrusted
servers), and finally display the plaintext mail. Of course, the mail
will look different from what it usually does. :-|

But this is complicated. Heck, let's just write a virus which will
spread and turn all the terminals in the world into Herbivores. (This
has the obvious legal problems; then again the FBI is into such things
anyway.)

Obviously, the heavyweight PKI model is not going to fly in the real
world anytime soon. I too am very hopeful about lighter weight
alternatives.

But I have to wonder whether the very idea of a Public Key
Infrastructure is the problem. For some reason, people have got
themselves convinced that they need some such thing, but more and more I
just don't see why.

Nailing down exactly what a PKI does is a tricky process in and of
itself. What, exactly, can you count on when you get a certificate? The
answer is complex and subtle, so it fails the mom test right out of the
box.

An even deeper problem is that of namespaces. A PKI is very similar to a
slightly different beast: a "secure" name service. However, it quickly
becomes clear that the most natural namespace for such a name service
has a built-in impedance mismatch with the existing DNS-based namespace
of the Internet. In a secure name service, the namespace hierarchy most
naturally reflects routes of delegation. Internet names, on the other
hand, are chosen for marketing reasons, convenience, or because the ISP
assigns them.

So what is to be done? I like the schemes that propose DH-style
encryption without any notion that you use a PKI to associate a public
key with a human-readable name. This way, you get very good protection
against passive attacks, but of course none against the classical MITM
active attack. I think this is very reasonable, because lower tech
active attacks such as keyboard sniffers are likely to be much more
effective than the relative rocket science of a crypto MITM. Clients can
also keep track of persistence of public keys in much the same way that
ssh does. ("warning - the public key has changed")
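The ssh-style persistence check can be sketched very simply: remember the first key seen per correspondent and warn if it ever changes (sometimes called "trust on first use"). Names below are invented for illustration:

```python
# Sketch of ssh-style key persistence ("trust on first use").
# All names here are invented; a real client would persist the
# table between sessions.

known_keys = {}  # address -> fingerprint

def check_key(address, fingerprint):
    seen = known_keys.get(address)
    if seen is None:
        known_keys[address] = fingerprint  # first contact: remember it
        return "new"
    if seen == fingerprint:
        return "ok"
    # A changed key is either a reinstall -- or a man in the middle.
    return "WARNING: the public key has changed"

assert check_key("bob@example.com", "AA11") == "new"
assert check_key("bob@example.com", "AA11") == "ok"
assert check_key("bob@example.com", "FF99").startswith("WARNING")
```

This gives no protection on first contact, but it turns a sustained MITM attack into one that must be mounted from the very first message and never slip, which is much harder.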

I'm not very hopeful that crypto will ever become popular on port 25. It
just breaks too many other things, like mailing lists, filtering on the
server, and other things. I've also noticed during my work on the
remailer-list that crypto tends to make reliability worse as well. Port
25 is fairly decrepit these days, and likely to decay even more. I think
there's a real niche for a next-gen stored messaging service, but I
also think doing a good job of designing it is very hard. I
think it will happen, but it will take years or perhaps decades.

I've been thinking quite a bit about what such a system would look like,
for many years.
Here are some of the properties I think it should have:

It should be based on a strong peer-to-peer network. I think this is
a necessary precondition, in fact, because otherwise you either have a
weak system or a centralized one. Of course, the fact that we don't
really know how to build strong p2p networks yet means that this dream
is still far-off.

It should have its own namespace, not tied to email. After thinking
about it for quite some time, I believe that the only policy that will
work for people is first-come, first-served. It has the killer
advantage of being easy and intuitive for people to understand. A big
barrier, of course, is its utter hostility to the concept of trademark
ownership rights. This emphasizes the fact that many of the issues
involved in building such a service are political rather than technical.

Obviously, I think trust should be involved. Chapter 7 of my
thesis-in-progress will describe the idea of a "stamp-trading network".
Everybody issues individual stamps allowing access to their mailbox, and
trades them with friends (according to trust relationships). So now your
friends can send you mail. Additionally, they can trade with their
friends and so on, so you're accessible to the entire world. However, a
spammer would probably find it difficult to acquire huge quantities of
stamps in this way. And if your friend is trading stamps with a spammer,
that's probably a good reason to sever the trust relationship.
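The stamp-trading idea above reduces, at its core, to reachability along trust edges: you can obtain a stamp for someone's mailbox only via a chain of friends. A toy model (the trust graph and all names are invented for illustration):

```python
from collections import deque

# Toy model of the stamp-trading idea. The trust graph and names
# below are invented for illustration.
trusts = {
    "you":     ["alice", "bob"],
    "alice":   ["carol"],
    "bob":     [],
    "carol":   [],
    "spammer": [],  # nobody trusts the spammer
}

def can_reach_mailbox(owner, sender):
    """Can `sender` obtain one of `owner`'s stamps by trading along
    a chain of trust relationships?"""
    seen, queue = {owner}, deque([owner])
    while queue:
        for friend in trusts[queue.popleft()]:
            if friend == sender:
                return True
            if friend not in seen:
                seen.add(friend)
                queue.append(friend)
    return False

assert can_reach_mailbox("you", "carol")        # friend of a friend
assert not can_reach_mailbox("you", "spammer")  # no trust path
```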

I also think public key crypto has a significant role to play in
such a network. People are going to have to learn how to weave PKC into
their lives: how to take care of keys reasonably well even on computers
with crappy security; how to obtain a set of pk's that they absolutely
know are legit, etc. Basically, we have to design rituals and
processes around all this stuff, in much the same way that we've
developed rituals around paper signatures (for example: signing and
countersigning, initialling small changes, the rather cool concept of
notary public for important stuff, a preference for non-black ink, etc).
Phil's pgpdoc1.txt was a
baby step in this direction, but again, to really get it right will
take time.

As long as we're fixing things, how about email's terrible
reliability problems, which are only getting worse? Mailboxes should
be replicated and distributed, so that the failure of a single server
won't prevent you from reading your mail. The action of sending a mail
should also be a transaction, so that it either commits (and you know it
will be accessible from the recipient's mailbox) or aborts (and you know
it won't).

At the same time, configuration should be really easy, or, even
better, totally automated. You shouldn't have to poke config files on
lots of different servers to set up a distributed mailbox; it should
just happen. Here, I think, is an area where we can take a cue from
Napster and its ilk. In terms of functionality, a Napster client isn't
much more than a lightweight ftp server and client. Indeed, you can do
pretty much the same thing by setting up a little Apache. However,
configuring an Apache fails the mom test pretty badly. Configuring DNS,
so that people can actually find your little server, is far worse.
Napster, with all its flaws, provided a way to get this basic
functionality without the configuration nightmare.

Again, I feel that trust should form the basis of this
autoconfiguration. Consider remotely accessible disk space (eg for the
distributed mailbox), for example. The problem with autoconfiguration is
that other people get to automatically configure their use of
my resources. Not good. However, I would be happy to provide a
couple hundred megs to each of my friends, based on trust relationships.

I hope to have convinced you by now that this is a hard problem. I also
think it's an interesting problem, and hope that the abject failure of
early efforts such as PGP will spur deeper inquiry, rather than people
just walking away frustrated.

I do use GnuPG all the time, and I teach people to use it. I use the Gnus mailer, and it does all of it for me.

I can understand people not using PGP years ago, because it was a pain
in the ass to do it. Most of the time you had to encrypt your mail
manually and copy/paste it into a mailer like elm (at least that's
what I was doing), but today almost every mailer has a PGP plugin; even
Outlook does. So to me the problem isn't the time it takes to
sign/encrypt mail before you send it, as you can configure your mailer
to do it without asking you. For example, I use bbdb (a sort of big
brother database) with Gnus, and I create a bbdb entry for each person I
exchange mail with. When I know they use PGP/GnuPG, I create a
pgp-mail field with the action I want, MIME encrypt or inline encrypt.
When I send mail to that person, it will be handled without bothering
me. It's fast, it's useful.

The problem is that people don't care about securing their
communication. They often tell you 'I have nothing to hide' when you ask
them to use GnuPG. They don't see the point, it consumes time (a little,
but it still does as they have to configure their mailer), and they just
don't care.

To me, every mail sent to a public list, to a newsgroup, etc., should be
signed. It would prevent stories like the one that happened to a US
governor a few years ago.

One of the problems I also see is the key-exchange part of gnupg. To me
it's much more useful to exchange keys and verify identity with people
who live far from you, whom you don't see very often, but who
might send a mail to a list you also subscribe to. It extends the
web of trust. Let's say I live in Paris, France (which is true,
actually), and that I go to London for a three-day weekend. I would use
that time to meet local people to exchange keys, but I don't know
anyone there. So I'd try to get in touch with the local LUG, and
hopefully I'd meet interested people. But it would just be luck if
I found someone.

What I was thinking about is making a website, very simple, very fast,
which would list all the people who are available for key signing. A
bit like the Debian one, but for everybody (I don't care if they use
Windows PGP or Debian and GnuPG, I just want to extend my web of trust).
It would ask for your location, your e-mail, and things like when you are
available. Someone going somewhere could take a look to see who's
around at that time, and could send a message (e-mail addresses aren't
shown, for privacy) to say "hey, I'll be there tomorrow, can we meet?".

In brief, one big reason against using encryption (or signing -
encrypting is fairly pointless most of the time) that I didn't see
mentioned on a quick browse through the discussion is maintaining
security. If, like most knowledgeable people I can think of, you're in
the habit of reading and replying to your e-mail from various places -
home, work, friends' houses, maybe in a pinch a public library or a
netcafe - then "losing the private key" becomes a real danger, in the more
worrisome sense, as you cannot read encrypted mails or sign them
without bringing your private key into untrusted environments.

One solution, of course, would be smart cards or the like, which contain
all the necessary logic to carry out the sensitive operations
without the private key ever leaving the confines of the card that you
carry on your person. Of course, today's prevalence of handhelds
and cell phones offers even more appealing solutions...

Phil's solution has the weakness that if the CA is compromised,
then the whole infrastructure collapses.

Actually, in my Herbivore
system, there is no certification authority; every user keeps a list of
the public keys of everyone they receive Herbivore-enabled email from.
And every sent email, whether encrypted or not, contains the
user's public key.

Probably the easiest way to get widespread adoption of PKI is if AOL started to set up PGP keys for new and existing users. It would be able to build upon the certification AOL must do in any case, and would be much better than the casual certification discussed here.

I must say that I disagree with the idea of "invisible" encryption.
The user needs to know that *something* is going on, but it doesn't have
to be all-or-nothing.

Things like Herbivore seem interesting, but just encrypting over the
wire is only a nominal improvement....things should stay encrypted on
disk as well (if I am misunderstanding Herbivore, sorry!). In office
environments especially, it seems like a common "attack" would be to
simply walk up to an unattended computer, and read or send email.
Without requiring the user to put in the password for their private key
to either digitally sign an outgoing email, or decrypt an incoming one,
users are not any better off.

Approaching the problem of rolling out PKI is like anything else -
looking at where the apparent bottlenecks are. People have already
mentioned deployment. I agree that this is the big one. So people need
to be able to generate a keypair, get other people's public keys, and
publish their own. Then, for lots of extra points, some means of key
management - like signing and revoking keys, and rating trust of other
signers. The extra points are more difficult, but do novice users even
care? If they do, then they are motivated to learn how. And I'm sure a
decent GUI could be made for this. Combined with a help document, I
can't see how a semi-motivated user couldn't do it.

So we are left with the issue of generating your keypair, publishing
it, and getting your friends' public keys. For the sake of brevity, I'm
going to assume GPG and Evolution/GNOME here. It seems like Evolution,
in its first-time druid, could incorporate (at the end) a page like "To
make your email as secure as possible, it is recommended that you
create a cryptographic keypair to allow you to digitally sign and
encrypt your emails. Would you like to do this?" Of course, this would
only pop up if the user's gpg keyring didn't already have a keypair with
their name on it. If they say yes, Evolution can just default to
creating a new 1024-bit key for them (or, if you really want,
2048 or something). It already has their name and email, so they don't
have to re-enter it. They just come up with a password. Evolution then
automatically publishes the public key to pgp.mit.edu. As contacts are
added to Evolution, it will check for public keys on pgp.mit.edu, and
add them to the vCard. Just like Evolution already supports an option
to "always send HTML mail to this person", it could have "always sign
emails to this person" and "always encrypt to this person." When new
emails come that are signed, GPG already has the ability to
automatically query for their key on a keyserver. Obviously, whenever
an encrypted mail comes in, or a signed mail goes out, the user enters
their secret key's password. Add in a couple of easy heuristics, like
"always encrypt replies or forwards of encrypted emails", and you're good.

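For what it's worth, GnuPG can already do unattended key generation with `gpg --batch --gen-key` driven by a parameter file, which is roughly the machinery such a druid could hide behind the scenes. A sketch, with all values invented:

```
%echo Generating a default key for Jane User
Key-Type: DSA
Key-Length: 1024
Subkey-Type: ELG-E
Subkey-Length: 1024
Name-Real: Jane User
Name-Email: jane@example.com
Expire-Date: 0
Passphrase: chosen-by-the-user
%commit
```

The resulting public key could then be published with `gpg --keyserver pgp.mit.edu --send-keys <keyid>`.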
That seems like it would be relatively straightforward. The user
doesn't have to do a whole lot....it would just work. Files stay
encrypted on disk, people can handle signing and encrypting email pretty
easily. The Contact card for a person can have the key information, so
the user can rate/sign their keys pretty easily. It would be a pretty
huge improvement from where we are now, anyway.

For the extra stuff, like more advanced key management, a bonobo
control could be written to manage your gpg keyring, and handle things
like viewing signatures, signing, revoking, backing up, etc.

Once these things are taken care of, it seems like making gAIM,
Jabber, or gnomeICU work with the Evolution/GPG settings would be pretty
straightforward. Hell, let Nautilus do it too, so you can just put a
"secret" emblem on a file, and it encrypts it with your public key.

My point is that it seems like we can come up with some pretty good
results without throwing too much out here. For the more
security-conscious, everything still works the same, so nothing is lost.
It only raises up the novices to being able to use encryption easily.
At the very least, it is the thin edge of the wedge......Once people
have their keys, we can do more fun things with certs for AuthN and
AuthZ. And with the worries of losing private keys, or using a webmail
program and being unable to read email - there is the IETF SACRED
project designed to take care of this...So people are thinking about it. ;-)

Raph made several good points about an "ideal" email system. One of my secret projects at the moment actually involves designing such a
thing (peer to peer, encrypted, distributed email system) as part of a commercial product with open source bits... but that's still a
long way off.

One thing I particularly didn't agree with, though:

Obviously, I think trust should be involved. Chapter 7 of my thesis-in-progress will describe the idea of a
"stamp-trading
network". Everybody issues individual stamps allowing access to their mailbox, and trades them with friends (according to trust
relationships). So now your friends can send you mail. Additionally, they can trade with their friends and so on, so you're accessible
to the entire world. However, a spammer would probably find it difficult to acquire huge quantities of stamps in this way. And if your
friend is trading stamps with a spammer, that's probably a good reason to sever the trust relationship.

Unless I misunderstand you completely, stopping spam is not nearly so easy as this. These "stamps" you're talking about are really,
now, about the same as your e-mail address itself, except you're adding the extra restriction that you _only_ want to receive email from
people you explicitly specify (sometimes called a "white list", the opposite of a black list).

This would work, except: it's much more work to administer, and it's completely incompatible with people (like me) who want to receive
email from random strangers around the Internet who
use my software or liked my web page. I get spam because my email address is published publicly (on purpose), not because my friends
revealed my email address to someone or because someone is sending spam to random user@host strings.

In my opinion, PKI can be used to handle spam, but only usefully in a blacklist fashion: if you have to have a signed key in order to
send email, then you can never send totally anonymous email (although the key might not be associated with your physical identity and
you might have more than one key, a condition
I call "semi-anonymous"). In any case, if it's just slightly difficult or time-consuming to get one of these (eg.
exactly one comes with your ISP service) and it's unforgeable, then if you start sending spam with it, people can start ignoring all
email produced by that key. In fact, you could even subscribe to an anti-spam service that _retroactively_ removed spam from your
mailbox once a spammer is identified.

The big problem with spam right now is really that it can't be identified accurately enough, and digital signatures could probably help
us with that.

...

Obligatory on-topic portion: yes, PKI is too hard to use. I agree with Zimmermann in that I think it's just that we're all too paranoid.
I think that sometimes we should do security like I prefer to do software: if the new version is better than the old one, let's release
it. If we try to be perfect, we'll end up with a massively over-engineered solution. Worse is better?

If I have to enter my password every time I send and receive an email message, I'll _never_ use encryption. And I'm a fast typist.

Encrypting files on the disk is an interesting idea, but not that useful: again, if I have to enter my password to access every file I
want to use, just shoot me now. We could encrypt the files and cache the key (so I only enter the password once per session or after an
idle period), but that doesn't actually solve the "walk away from your desk and someone borrows your computer" problem any more than
xlock does.
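The cache-the-passphrase-until-idle idea can still be made concrete; here is a toy model in the spirit of tools like gpg-agent, with an injectable clock so the expiry behaviour is visible (all names are invented):

```python
import time

# Toy model of a passphrase cache with an idle timeout, in the
# spirit of gpg-agent. All names are invented; the clock is
# injectable so the expiry behaviour is easy to demonstrate.

class PassphraseCache:
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self.passphrase = None
        self.last_used = None

    def store(self, passphrase):
        """Called the one time the user actually types it in."""
        self.passphrase = passphrase
        self.last_used = self.clock()

    def get(self):
        """Return the cached passphrase, or None if it has expired."""
        if self.passphrase is None:
            return None
        now = self.clock()
        if now - self.last_used > self.ttl:
            self.passphrase = None  # idle too long: force a re-prompt
            return None
        self.last_used = now  # any use resets the idle timer
        return self.passphrase

# Simulated session with a 600-second idle timeout.
times = iter([0, 300, 1000])
cache = PassphraseCache(600, clock=lambda: next(times))
cache.store("hunter2")           # typed once, at t=0
assert cache.get() == "hunter2"  # t=300: still cached
assert cache.get() is None       # t=1000: expired, prompt again
```

As the paragraph above notes, this is exactly the mechanism that does not solve the unattended-desk problem; it only limits how long a typed passphrase stays live.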

A properly transparent security system involves just _one_ password for all resources, which you enter once at login time. Every time
you leave the computer unattended, you should have to re-enter the password when you come back. (Of course, detecting that is the hard
part.)

No additional security is really obtained by forcing people to enter their password every 12 seconds. Worst case, someone installed a
keylogger while you left your computer unattended and is now getting a copy of your password every time you key it in. More common
case, the apps won't agree on what password to use, and you'll end up with too many of them and have to write your passwords down on
paper. Oops!

Again, remember our goal here: we want something that's better than nothing. Any email signing/encryption is better than the current
none. Any digital signature scheme, even with a suspicious-looking caching mechanism, is better than the current "people actually trust
a fax of my signature" legal system. But people aren't likely to do any of it if it's harder than sending email or faxing a signed
document in the current world.

apenwarr: A home user who doesn't have to worry about
people idly coming up to their computer should be fine with password
caching. Evolution now has an option to "remember" your password for
the duration of your emailing session for signing things. So if you're
really averse to typing it, you click the checkbox once and go on like
normal. Not a big deal. Just to see how it is, I've never checked that
option, and really, I've found it not to be a big deal at all. Sure, I
waste a second or so typing in my password, but I probably don't need
that second so badly. But I typically write medium-to-long messages, so
typing an extra 8-16 characters after I click "send" isn't a significant
add-on. If you fire off one- or two-sentence emails regularly, I can see
how it could be annoying. So just cache the password, and close your
client when you're done. (I keep mentioning Evolution because it is the
only email client I really ever use, so I'm most familiar with it these
days.)

As for apps demanding different passwords, etc.: your secret key has
the same password across any app that wants to use it, so it isn't as
though you have to remember 50 passwords for 50 apps. It's just one. And
if you don't want to have to type a password to open a file, don't
encrypt it! There's probably no need for me to encrypt my Ogg Vorbis
collection, and since I play those files pretty constantly, it would be a
hindrance. But if I want to encrypt my work documents, it would be
cool if that were easy to do. Or if I've got the list of gifts that
I want to buy my friends (who sometimes use my computer), I don't
have to hide the file; I can leave it in plain view and just encrypt it -
that is easier for me.

I think that the big problem with deployment of PKI is that the
*perception* of PKI deployment is that it is really hard, complicated,
and obviously very burdensome. It's really not that bad.
Most of my research/work these days is on how to make PKI-type things Just
Work for settings like college campuses. What people need to
realize is that security infrastructure can give you a lot of benefits
in general - like using a digital signature to get things approved
online, or marking who revised what document in a collaborative work
environment.

As for some kind of single-sign-on mechanism: yes, ideally, I'd like
everyone to have a smart card with their private key on it, and you use
that to log into your computer. Pull it out, and no one can get at your
stuff. You can also do much more secure transactions. Websites would
never need to ask for your username and password again - just encrypt
everything with your public key, and then only the holder of the private
key can access it. Eric Norman at the University of Wisconsin has been
playing with this idea... it looks pretty promising. The "thin edge of
the wedge" point still holds, though: we need to find a way to start
getting this stuff to users, so it is actually doing something useful
and making our lives easier and more secure. No need to throw the baby
out with the bathwater. There is a lot of room to experiment here.

I agree that one cause of the OpenPGP PKI situation we have today is
that so many people insist it be operated only 'perfectly', and
hence we have very few userid signatures.

Recently I've started work on a small project to create statistics about
the history of key usage for individual users based on "From" addresses,
counting signed messages and noting which forums they appear in.
The idea is to give a fairly good picture of which keys belong to which
email addresses, because you will be able to see situations where
"someone purporting to be foo@bar.org has continuously sent signed
messages to the forum gnupg-users@gnupg.org for the past 3 years, signed
with the key 0xABCD1234". Given information like this, you can be fairly
certain that foo@bar.org owns 0xABCD1234.

By downloading and processing mailing list archives, one can get a rich
history of information about which keys 'probably' belong to which
people, and for many situations, that is 'good enough'.

apenwarr: I didn't explain the stamp-trading network in
anywhere near enough detail, so you missed an important bit.

The "stamp trading" part of the stamp trading network is entirely
automated. The individual trust links are manually configured, in much
the same way as on Advogato. The entire idea is to enable mail from
strangers. Basically, what happens is that the network finds a route in
the trust network from the sender to the receiver (following the reverse
direction of the usual "A trusts B" arrow). Each such edge is capacity
limited (again, in much the same way as Advogato), to make life hard for
spammers. So when the capacity of a trust link is used up, it disappears
temporarily from the routable network. I imagine that the capacity is
continually replenished.

If you want to receive a lot of mail from strangers, you issue
lots of trust links.

Of course, you still have the problem of finding routes. This is pretty
easy if you have global information, but general distributed
route-finding is one of those tricky problems you have to solve to make
a really good p2p network.
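With global information, the capacity-limited routing described above can be modeled as a toy graph search; the names and capacities below are illustrative, not from the actual design. Each delivery spends one unit of capacity on every edge of its route, and exhausted edges drop out of the routable network:

```python
from collections import deque

def find_route(edges, src, dst):
    """BFS over trust edges with remaining capacity.
    edges: {(a, b): capacity} following the reverse-trust direction."""
    prev = {src: None}
    q = deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            path = []                    # reconstruct route back to src
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for (a, b), cap in edges.items():
            if a == node and cap > 0 and b not in prev:
                prev[b] = node
                q.append(b)
    return None                          # no routable path remains

def send(edges, src, dst):
    route = find_route(edges, src, dst)
    if route is None:
        return None                      # capacity exhausted: mail refused
    for a, b in zip(route, route[1:]):
        edges[(a, b)] -= 1               # spend one "stamp" per hop
    return route
```

Once a spammer burns through the capacity of the few links pointing at them, their mail simply stops routing until the capacity replenishes.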

I'm not sure this works, but it seems to me to be worth trying. If the
idea is sound, it's very hard to imagine anything better.

I don't think pk-based blacklisting can work. What's to stop a spammer
from simply generating a new pk for each spam? MojoNation suffers from a
similar problem, where a small amount of credit is assigned to unknown
trading partners. The so-called "flaming smurf" attack consists of
generating a new pk for each transaction, then discarding it after
it runs up debts.

I've noticed a lot of people getting interested in PGP and/or GPG in the
last few weeks. I personally only started signing my emails about a
fortnight ago, and I'm guessing my reasons for starting are the same as
the reasons many people are starting to look into it more seriously at
this time... the frightening scope of (and the sheer amount of) Big
Brother legislation that the UK and USA have passed in the wake of the
WTC attack.

Governments have taken the attack's aftermath and used the emotional
hysteria of the moment to rush through legislation that wouldn't have
got past the first reading normally... so now, it falls to citizens to
protect their own rights, as their elected representatives obviously
have other agendas. The cynics in the audience will fail to be
surprised at this turn of events :) but even most of us won't have been
using encryption... up until now.

I don't think the general public is turning to encryption yet, that will
certainly require the kind of 'one click' solutions that are being
discussed - but it will be interesting to see how large a proportion of
the technical community starts to encrypt routinely in the next few
months, compared to the fairly low take-up to date...

I think the only way to get encryption truly accepted by the masses is
to use encrypted webmail. Hushmail has shown that it is
possible: although their system is imperfect in many ways, none of
those flaws are due to limitations of the medium, in my opinion.

Most people seem to be happy to use webmail these days (although I
would find the inability to access my mail offline a problem, if I
didn't have an always-on connection). A webmail system with transparent
encryption could easily be built. The system would be a trusted
location to get public keys from, and would incorporate mechanisms for
transparently fetching keys from other trusted locations.

If a webmail system that did this became popular, then encrypted e-mail
would become a normal aspect of life.

Things like Herbivore seem interesting, but just encrypting over the
wire is only a nominal improvement... things should stay encrypted on
disk as well (if I am misunderstanding Herbivore, sorry!). In office
environments especially, a common "attack" would be simply to walk up
to an unattended computer and read or send email. Without requiring the
user to enter the password for their private key to either digitally
sign an outgoing email or decrypt an incoming one, users are not any
better off.

You understand Herbivore OK. Firstly, Herbivore is only concerned with
over-the-wire encryption; it doesn't attempt to solve the problem of
on-the-disk encryption. Secondly, the solution you propose would require
users to type in a password every time they want to see their email;
this requires effort, and experience shows that when security products
get in the way of people doing their tasks, they will circumvent them.

It really helps to get good mailer support for encryption. If
all you need to do is click an icon to encrypt, or enter a password
(with a five minute cache) to decrypt, most people can handle it--at
least in my experience.
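The "five minute cache" is simple enough to sketch. This is a minimal model, not any mailer's actual implementation: the prompt function and clock are passed in so the cache stays testable, and a real agent (gpg-agent, for instance) adds secure memory handling on top of the same idea. Here each use of the passphrase refreshes the timer:

```python
import time

class PassphraseCache:
    """Re-prompt for the passphrase only after `ttl` seconds of disuse."""

    def __init__(self, prompt_fn, ttl=300, clock=time.monotonic):
        self.prompt_fn = prompt_fn    # e.g. getpass.getpass
        self.ttl = ttl                # 300 s = the five-minute cache
        self.clock = clock
        self._value = None
        self._stamp = None

    def get(self):
        now = self.clock()
        if self._value is None or now - self._stamp > self.ttl:
            self._value = self.prompt_fn()   # cache expired: ask the user
        self._stamp = now                    # each use refreshes the timer
        return self._value
```

Decrypting a burst of messages then costs one prompt, not one per message, while a machine left idle past the timeout demands the passphrase again.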

raph: Thanks for the additional details on the stamp-trading; I'm no longer convinced that it won't work. That said,
it also sounds a bit complex, and I'm not sure people want to configure trust metrics just for things like email. It also remains to be
seen exactly how much spam comes from people who would be in the trust network; if each spammer only sends me one message, they probably
won't exhaust my trust capacity, but it'll still be annoying. In other words: in a giant trust network, an individual spam message is
indistinguishable from an individual legitimate e-mail. I think.

I don't think pk-based blacklisting can work. What's to stop a spammer from simply generating a new pk for each
spam?

First: if someone did this, the ability to strongly authenticate the sender would at least make it possible to retroactively remove from
my inbox (if I sign up) all messages produced by a particular key, which means I wouldn't even have to see most spam. The current
system with no sender authentication makes this impossible, because each _instance_ of the same spam can be slightly different and use a
different (or worse, a forged) sender.
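The retroactive-removal idea reduces to a one-line filter once every message carries a verified signing key. This is an illustrative sketch, with `key_of` standing in for whatever signature verification the mailer provides:

```python
def purge(inbox, blacklisted_keys, key_of):
    """Drop every message signed by a blacklisted key, no matter
    how the From line varied. key_of(msg) -> signing key ID,
    or None for unsigned mail."""
    return [m for m in inbox if key_of(m) not in blacklisted_keys]
```

The point is that the key, unlike the forgeable sender address, ties every instance of a spam run together.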

Second: we simply need to include rate-limited key issuing authorities in our web of trust. For example, your ISP could give you a key
when you sign up, or a central key issuing web site could issue no more than one key a week (say) to a particular email address or
domain.

Now that I think of it, this sounds very similar to your trading stamp network, except my key authorities are merely asserting that you
exist, not that you should really be trusted. Rather than having a per-email stamp, you have a per-identity stamp, which prevents you
from pretending to be too many different people and subverting the system. I think this, combined with retroactive inbox filtering,
could solve most spam problems.
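A rate-limited issuing authority of the kind proposed here could be sketched as follows; the one-week window and in-memory table are illustrative, and a real authority would persist its records and actually sign the submitted key:

```python
WEEK = 7 * 24 * 3600

class KeyIssuer:
    """Certify at most one key per identity per `min_interval` seconds."""

    def __init__(self, min_interval=WEEK):
        self.min_interval = min_interval
        self.last_issued = {}        # email address -> time of last issuance

    def may_issue(self, address, now):
        last = self.last_issued.get(address)
        if last is not None and now - last < self.min_interval:
            return False             # too soon: this identity has a key
        self.last_issued[address] = now
        return True
```

Because each identity can only acquire fresh keys slowly, a spammer can't cheaply manufacture a new pk per spam run, which is exactly the "per-identity stamp" property described above.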

If you leave your computer unattended for any significant time period
while logged on, your privacy is already as good as gone. Even if the
mail software demands individual password identification every time you
try to open a message, in less than the time it would take you to read a
single e-mail, I can steal your private key, then download and run a
key-logger that mails your passwords to me. That's assuming I don't
just stand behind you while you're entering the mail-encryption password.

So the correct solution is to get users into the habit of logging off
(or, if needed, forcefully locking the computer when idle) if there's any
chance they're going to let it out of their sight for a moment,
instead of preventing them from getting any work done. After this, the
system password is sufficient identification, and the mail client need
not bother the user with it constantly. (Unless it's a shared computer,
in which case you shouldn't be bringing your private key to it anyway.)

Of course, there is a reason you might want to physically store the
messages/files/whatever in encrypted form anyway: it is possible
somebody could get at them physically, by scanning the hard drive and
so on. But in that case they'd also get your private key, which is
already over halfway to cracking the code, as the password is usually
easier to acquire. The discovery of a private key is also enough for
potential adversaries to go about seeking legal or extra-legal means of
getting the password out of you.

I still maintain that keeping the private key secure is the largest
hurdle in responsible use of strong cryptography. At work, for example,
your boss owns the computers, and usually has a legal right to examine
anything on them. Quite a few arrests have happened as a result of tips
from computer repairmen showing keen interest in their clients' hard-
drive contents. In days of yore, I approached this problem by keeping
my private key on a diskette that I carried with me, but it's far too
easy to intercept this when it's used on computers not under your total
control and ownership.

At work, for example, your boss owns the computers, and usually has
a legal right to examine anything on them.

That may well be true, but I'd rather allow my mail to be read by the
few people around me than by those few people plus every router my
mail will ever pass through.
