Of course PGP key management sucks, so let's make a standard of it so the NSA can use standard tools to retrieve the keys. It's just too much trouble to find where every moron put his keys.
(PS if you can access PGP keys with an external device touching your computer, how about stealing the whole data, why only PGP keys?)

It may not be perfect, but the way the author describes things, we're to find an alternative to e-mail entirely and add all kinds of cool features on top. There's plenty of ambitious crypto communication services that have tried to do that, and they haven't succeeded because they don't get adopted. E-mail isn't sexy, and PGP isn't either, but they do work and are ubiquitous.

I think we should be VERY happy that Google and Yahoo are adopting an existing open standard, instead of inventing their own, making some of the more interesting bits like key exchange proprietary behind Gmail, getting bored with it, and turning it into abandonware in a few years.

I think that it is a mostly unfounded posting by yet another user that does not get it and wants perfect security for free. That is not going to happen, ever. PGP works pretty well. Email integration is fine if you bother to find a few things out. And no, that is not a "bad user interface", it is a user interface that requires you to learn a few things that you critically need to be secure anyways. You can be a (l)user or you can have security. You do not get both.

As such, this article does more harm than good, as it has no relation to reality and calls for things that are not possible at this time.

Most of the problems discussed are about using PGP for *encryption*. There, I think it leaves a lot to be desired, because of the leaking of metadata, etc.

But, the most prevalent and important use of PGP is for *verification* of downloaded packages. The world is full of phishing, DNS redirects, and other forms of impersonation and fraud. If you download the newest version of AwesomeSoftware.zip (perhaps Tor or Tails or the Linux kernel?) how do you know the package you clicked on is the one the legitimate author intended to send?

Most download pages offer an MD5 or SHA-1 sum to verify the download, but *who* claimed it was the right file to download?
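The distinction is easy to make concrete: computing the digest is trivial, but the result is only as trustworthy as the page that published it. A minimal Python sketch (the file path is whatever you downloaded):

```python
import hashlib

def sha256_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# A matching digest proves the bytes match the *published* hash.
# If an attacker controls the download page, they control the published
# hash too -- so this detects corruption, not impersonation.
```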

I know that linux.tar.gz is the correct one because it is signed with the same key as the previous releases for many years. It doesn't matter if that key is REALLY from "Linus Torvalds" or "Greg K-H" or whomever on a passport, just that it's the same person that signed the earlier ones, which I trust based on experience.

A PGP signature tells me two things:
1) This package is bitwise correct, as signed.
2) The person who declared it correct is _____
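That key-continuity ("trust on first use") model can be sketched in a few lines. This is a hypothetical illustration of the idea, not how any particular PGP client actually stores its state:

```python
# Map each project to the signing-key fingerprint first seen for it.
known_signers: dict[str, str] = {}

def check_continuity(project: str, fingerprint: str) -> bool:
    """Pin the key on the first release; later releases must match it.

    Note this never asks whose name is on the key -- only whether it is
    the same key that signed the earlier releases.
    """
    pinned = known_signers.setdefault(project, fingerprint)
    return pinned == fingerprint
```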

It's really interesting to see how many people don't really understand how important things like Forward Secrecy are for a protocol where the encrypted message can be seen by several servers in between. If you aren't using Forward Secrecy for transport of encrypted email (with secrecy between sender and receiver, not just between individual points of transfer), why use Forward Secrecy in *any* context at all?

Just because there's no better open email encryption standard out there doesn't mean we have to defend the clearly suboptimal existing one until the bitter end.

This article speaks very highly of Elliptic Curve Crypto, but if I recall correctly, that was one algorithm that Bruce has said we can't really trust, since it's being pushed by the NSA. What do we do about that? Many of the arguments in the article hinge on ECC keys being small.

@David Kowis: the main problem with ECC is (usually) that the curves often used were generated in an absolutely non-transparent manner by the NSA. One simply doesn't know if they are really randomly selected (as claimed by the NSA) or specially crafted to make solving the DLP much easier for the creator.

If one chooses elliptic curves in a transparent manner with certain security precautions (like the ones listed here: http://safecurves.cr.yp.to/), ECC seems to be safe.
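The "transparent manner" point can be illustrated with a toy sketch: if a parameter is derived deterministically from a public, human-readable seed, anyone can re-run the derivation and confirm nothing was cherry-picked. (This captures only the flavor of the idea; real rigid curve-generation procedures like those surveyed at safecurves are far more involved, and the label below is purely hypothetical.)

```python
import hashlib

def rigid_parameter(label: str) -> int:
    """Derive an integer parameter from a public, meaningful seed string.

    Because the seed is readable text rather than an opaque random blob,
    anyone can repeat the derivation and verify that no special value was
    quietly chosen ("nothing up my sleeve").
    """
    return int.from_bytes(hashlib.sha256(label.encode("utf-8")).digest(), "big")

# The same label always yields the same value, so the choice is auditable.
b = rigid_parameter("example curve coefficient b, version 1")
```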

Although Matt does a great job talking about the issues that exist today, let's talk about potential solutions and how to deploy them.

People are thinking small. PGP has usability flaws, and keys should be reduced for various reasons. SSL is fatally flawed. Email is small; let's get abstract and come up with something better. Let's not focus on a solution that is just email-based; let's replace the majority of crypto systems. Something that is trusted. Something that is easy and requires little to no human interaction. Something that is easy for developers to use. The research exists; let's improve those straw implementations into a system that will work. Move past just crypto math to complete systems.

I know who Green is. But he does style himself as that "clueless user that does not get it" in his article. For example, demanding perfect forward secrecy in email is something only somebody can do that does not (or pretends not to) understand email, as there can be no key negotiation in email. In fact, that email does not require an end-to-end connection is one of its main advantages. At the same time, email is critically important and will not get replaced anytime soon, if ever.

I think his article is not well thought out and basically a rant demanding things that are not practical, and I will call somebody that does this clueless and a (l)user, even if it is somebody like Green that really should know better.

As someone with a Ph.D. in cryptography, I would expect a lot better. It seems to me that the article is not thought through and was written as an emotional rant. He mixes terms, puts blame in the wrong place, makes suggestions that aren't appropriate for the use, and compares everything to miniLock (subtly and indirectly). I wasn't impressed with the article as a whole.

Some concerns of his are legitimate, such as transferring key data over plain HTTP/HKP, and OpenPGP implementations not properly verifying the requested key was actually downloaded. Others, such as printing public keys on business cards, are silly (transferring key data offline is impractical, IMO).

In my opinion, there are bigger fish to fry, and he missed them, such as leaking information in SMTP headers even if the message body is end-to-end encrypted. His post is the perfect example of blaming A & B when he should have been concerned about X & Y.

Care to try to understand enough about a 90 page standard to verify that an implementation is correct? I sure don't.

Instead of having one thing that does everything under the sun (with 10 other standards that overlap), we should have a few things with modular, reusable, and verifiable components. The file format should only care about how the data and metadata is encoded, not the underlying algorithms themselves. The file format shouldn't care about how keys are exchanged, that's someone else's problem, and so on and so forth.

The idea is that instead of having one standard, you have a collection of simple and modular standards that each accomplish one narrowly defined, specific goal. You create a series of standards for interfaces, then you have higher level standards reference the interfaces, while the lower level standards conform to the interfaces. You have an interface for key exchange and validation, you have an interface for encryption, you have an interface for authentication and signing, and so on and so forth.
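As a toy illustration of that interface-first approach (the names and the XOR "cipher" are purely hypothetical, and XOR is of course not secure -- it only exercises the plumbing):

```python
from abc import ABC, abstractmethod

class Cipher(ABC):
    """One narrow interface: turn plaintext into ciphertext and back.
    It knows nothing about key exchange, signing, or file formats."""
    @abstractmethod
    def encrypt(self, key: bytes, plaintext: bytes) -> bytes: ...
    @abstractmethod
    def decrypt(self, key: bytes, ciphertext: bytes) -> bytes: ...

class XorCipher(Cipher):
    """Toy stand-in implementation (NOT secure)."""
    def encrypt(self, key: bytes, plaintext: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))
    def decrypt(self, key: bytes, ciphertext: bytes) -> bytes:
        return self.encrypt(key, ciphertext)  # XOR is its own inverse

def seal(cipher: Cipher, key: bytes, message: bytes) -> bytes:
    """Higher-level code depends only on the interface, so any conforming
    implementation can be swapped in without touching this routine."""
    return cipher.encrypt(key, message)
```

The point is that the format layer, the key-exchange layer, and the cipher layer can each be standardized, verified, and replaced independently.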

@ads - "But, the most prevalent and important use of PGP is for *verification* of downloaded packages."

That is so for a reason, namely that PGP proved to be a failure for its original intended purpose, which was email confidentiality and non-repudiation. It happened to have a niche purpose for which it works, and we may as well keep it around for that purpose - but we still need something that works for email.

I too find this article unimpressive. PGP is one of the best tools around. Why discourage people from using it rather than improving it?

I just erased a long sarcastic response. I guess Matt Green means well. But I am not trading PGP for a Google, Yahoo, Apple or Microsoft product. Elliptic curve cryptography obtains shorter keys (who cares, they're on a computer and can be compared as hashes?) by using a protocol that the NSA probably already has broken. Getting security from app stores? Sheesh.

Something like TextSecure would be preferable to email: not only does it not bounce all around the world like the ancient email protocol does, but it has forward secrecy. We should have PM systems instead of junky email in 2014.

People are going to use email. Most of them are going to use webmail. Anything that allows them to encrypt their email, rather than sending plaintext, is a realistic solution to part of the problem of bulk surveillance. A solution that requires people to migrate en masse from email to a new protocol is not. That is a solution for a clique of crypto-obsessed techno-elites.

1. Green seems to hate email, which PGP is tied to. Email's not going anywhere because it works on every platform, it's decentralized, it's unencumbered by patents, supports many implementation tradeoffs, and it's also the most pervasive cross-platform file sharing tool.

2. Green thinks email needs perfect forward secrecy. As an option, it's nice and can be done if public keys are already exchanged. However, most companies keep email on insecure endpoints and many want to be able to inspect traffic for security/compliance reasons. PFS is no advantage in one and adds problems in another.

3. Key management is a trust decision. It's complex because the social process it represents is complex. The PGP solution can be improved significantly with improved key management. Several styles might be supported that the client labels such as "personally vetted," "institution vetted," or "unvetted." Likewise, the level of vetting performed might be rated minimal, average, more than average effort, or highly vetted.

4. He acts like most of the problems are hard to deal with. Yet there have been a number of tools (esp. proxies & plugins) that created easy-to-use email protection on top of PGP or S/MIME.

5. He doesn't mention guards or proxies. Nexor, for instance, implements both to allow legacy email functionality, apply any security analysis desired, and transparently support the encryption. Zimbra had a lot of interesting features to make things easier too, albeit with less assurance. Ignoring the current best solutions, while highlighting failings of the worst, is a selection bias.

6. He conflates the PGP protocol and format issues. You can keep or break legacy on each. One's problems are easier to fix, too.

7. He mixes protocol and implementation issues. The protocol might be hard to implement, yet a number of good implementations have existed. Still, he bases his problems with the PGP *protocol* on the problems of a *bad* *implementation*. More selection bias, with me adding that security protocol implementations screw up more in general than many other protocols because they're trickier to get right.

8. He pushes ECC while ignoring the fact that the NSA patented it and has private requirements for commercial use. Who knows what those requirements are. Meanwhile, the RSA patent has expired, its math is more mature, it has plenty of implementations, and there are hardware accelerators on numerous cheap boards. I'd push it unless you're outside the USA or want the use to be strictly non-commercial.

9. Bonus point: That he still uses PGP instead of other stuff kind of implies it's still better than everything else for him *in practice.* The last part beats better on paper any day. ;)

Other than a bit about some of us needing to upgrade our old versions of GnuPG (thanks, I think) -- I found the article completely worthless.

The main "problems" the writer seems to have with PGP/GnuPG are inconvenience, and the fact that many people take unsafe shortcuts in setting up their "webs of trust" to be able to use PGP. But his cure for inconvenience is to abandon e-mail in favor of some new substitute, perhaps instant messaging or webmail (both of which, in my experience, are much less convenient); and his cure for the safety issues of the "web of trust" is to simply trust X.509 (SSL) certificate authorities instead. (Any reader who thinks that's a good or even sane idea should start by typing "certificate authority" in the search box at the upper right corner of this page, and read the first 10 hits. It is SSL that needs changing to use PGP's trust model, not vice versa.)

@Gweihir
"For example, demanding perfect forward secrecy in email is something only somebody can do that does not (or pretends not to) understand email, as there can be no key negotiation in email. In fact, that email does not require an end-to-end connection is one of its main advantages. At the same time, email is critically important and will not get replaced anytime soon, if ever."

If you need something, i.e., perfect forward secrecy, and the communication channel cannot supply it, i.e., email, then you should replace the channel.

Yes, email is critically important. But just as every irreplaceable person can be replaced, email can be replaced by something that does what email does, but with both better security and better key management.

@Dave
"reop - reasonable expectation of privacy
I like this as an alternative to PGP."

I like the following feature of reop:

What the documentation doesn't really make clear is that the same set of keys used for encrypting works for decrypting (i.e., it's not asymmetric in the usual sense). For instance, if Alice, sending a message to Bob, encrypts with secAlice and pubBob, that would normally be decrypted by Bob with pubAlice and secBob. But secAlice and pubBob work just as well to decrypt. If you were expecting to encrypt some secret with a public key and then have that computer "forget" how to access the secret, that won't work.

I must admit that I do not completely understand why it would be a good idea to have asymmetric encryption where the same key material used to encrypt a message can also decrypt it.
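If reop derives its message key the way NaCl-style crypto_box designs do (which the described behavior suggests, though I'm inferring), this follows from Diffie-Hellman symmetry: (secAlice, pubBob) and (secBob, pubAlice) produce the *same* shared secret, which then keys an ordinary symmetric cipher, so either combination decrypts. A toy finite-field sketch (tiny by real standards):

```python
import secrets

# Toy finite-field Diffie-Hellman; real systems use much larger groups
# or elliptic curves, but the symmetry is identical.
p = 2**127 - 1   # a Mersenne prime modulus
g = 5

sec_alice = secrets.randbelow(p - 2) + 1
sec_bob = secrets.randbelow(p - 2) + 1
pub_alice = pow(g, sec_alice, p)
pub_bob = pow(g, sec_bob, p)

# Alice's view (secAlice + pubBob) and Bob's view (secBob + pubAlice)
# yield the SAME shared key, used with a symmetric cipher afterwards.
k_alice = pow(pub_bob, sec_alice, p)
k_bob = pow(pub_alice, sec_bob, p)
```

So it is not that "the public key decrypts": it is that either secret key, combined with the other party's public key, reaches the same symmetric key.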

When it comes to communications there are two basic modes, Simplex and Duplex, and they have further characteristics, such as whether they can be used in "OffLine" or "OnLine" mode. Whilst simplex can be used in either mode, duplex cannot, and that has major implications both for ease of use and for the security of both delivery and content.

Email is effectively a simplex protocol, which is a very major security advantage for content and, if done correctly, for anonymity as well. Duplex mode has major disadvantages, especially for content security but also very much for anonymity.

From a series of very hard-learned lessons, most Mil&Dip government communications are partly or wholly simplex in nature.

Any security method that "requires" Duplex mode should be treated with considerable skepticism and alternative simplex methods should be considered automatically as preferable unless a significant case can be made for duplex over simplex mode operation.

The likes of Perfect Forward Secrecy sound highly desirable, but in practice it is at best a niche ideal that drags a huge tail of limitations and problems behind it, which often prove unsupportable and severely limiting early on.

For instance, one of our strongest security tools, "segregation / separation", does not work in duplex mode. Further, in radio and nearly all "broadcast networks" one major advantage is the "covert RX outstation", which only remains covert in simplex transmission systems. It is just one reason why such a lot of effort has gone into forward error correction systems that code data such that transmission errors can be corrected without the RX outstation requesting repeats etc. and giving away its position.

One of the ways the likes of Tor and other mixnet systems get broken is when simplex mode operation is forced into duplex mode operation; it allows various forms of Traffic Analysis that often provide more important "metadata" information than the unveiling of the message contents could.

Thus I would be very loth to give up the simplex advantages of email on some spurious argument of providing a small advantage in one tiny area of the potential attack surface, where the supposed advantage is actually sunk by requiring duplex mode operation.

@Clive
"Thus I would be very loth to give up the simplex advantages of email on some spurious argument of providing a small advantage in one tiny area of the potential attack surface, where the supposed advantage is actually sunk by requiring duplex mode operation."

As I understand Green's arguments, he claims that the simplex model is already broken by the key validation and signing. GPG already requires validating keys and the web of trust is both opaque and broken for global email, or so he claims.

Your simplex system would require something like bitmessage, which has its own (scaling?) problems.

And I can think of a lot of people who would have good use for perfect forward secrecy in their email.

Bruce, in light of the many critiques offered here, and especially Aaron Toponce's takedown, why do you call this a "good post"? To me it looks like a sloppy mixture of some valid criticisms with a lot of bad misinformation.

As the general public willingly accepts every version of Windows, OS X or Android that is installed for them, along with its bloatware, replacing the current email system with a more reliable and secure one should be much easier than getting people off Win XP. If Microsoft and others can be encouraged to work with the security community, then a new system could be rolled out with Windows 9 or 10 and future Android and Apple updates.

A more secure implementation of an email-like system would be a big drawcard for the masses that are slowly waking up to the fact that their electronic communications are constantly leaking information and being hoovered up by many a national insecurity apparatus, the corporate data market and criminal organisations.

Ease of use is a poor excuse to criticise software, though. You need to learn how to drive a car and need a licence to operate it; a computer/smartphone/tablet requires no qualifications or experience. The simplicity of systems like Windows has made 'ease of use' an expected requirement for many. Rather than constantly purchasing ever more powerful devices, consumers should be encouraged to try installing a Linux distribution, for example, and compiling a custom kernel to exploit the real potential of their aging hardware. Those unwilling to learn should be forced to adopt new security measures implemented in future operating system releases.

Trying to talk security to the public, however, is like trying to talk quantum mechanics to a mentally challenged snail. My own wife, for example, doesn't like it when her debit card is abused by a rogue online store, but try suggesting to her to adopt a little security and not use her smartphone for online banking and she completely ignores the matter with more than a little contempt. The general public continues to show little interest in any electronic security matters even after they are robbed, scammed and impersonated.

The escalating security problem is the hard medicine so many need to swallow. Bank fraud squads have had their workload double again this year, and it's only going to get much, much worse before the public wakes up to the fact that they are driving drunk, blind and stupid with all the doors unlocked and no seatbelt. Maybe a few experiences dealing with their local bank's frustrated and overworked fraud department will encourage folks to adopt a better attitude towards security.

By the way, though we often like to, security professionals like myself can't afford to always work for free, and many people and businesses need to realise this. I'm quite happy to contribute to open-source projects, but your malware-infested network or laptop may require a small payment in exchange for the many hours it can take to remove all the crap and still retrieve and clean your personal files that were deleted by the virus, as you probably have no backup policy of any kind and probably still require those 20 years of customer records.

Matthew Green's post is good. He states himself it was a hard article to write and it made him feel uncomfortable. There are many good points in his article.

PGP and common encryption ciphers were mostly written in the 1990s, when CPUs were very limited and data connections very slow. Reading through the crypto papers reveals many concerns from that time which are no longer relevant.
New standards are needed, or we will continue adopting increasingly larger keys that make 4096 bits look tiny and that 'distributed cracking computing systems' will not even blink at. The WEP cap-file decrypting cloud service laughs at WEP wireless keys, and it was built by an individual with limited resources (yes, WEP is indeed crap, of course). The NSA has the computing power and eavesdropping network gear to route the unsuspecting public through an https proxy, serve you what they like, and impersonate anyone.

How many people bother to check an SSL server certificate or know how to check its fingerprint? It's all gobbledygook to Joe Public. Joe Public posts his complete router details on his router provider's forum and asks the admin to push him a firmware update because he can't do it himself. Joe Public uses WEP and has not changed his router's password or IP address. Joe Public also likes to install free games and useless apps of dubious nature. Joe Public provides many crackers with free wireless internet access and a bot host, but is completely unaware of this.

Now even the cops are purchasing devices like the UFED Physical Pro that will clone your smartphone, passwords and all. Police departments are increasingly employing the same software used to track terrorists in Afghanistan, and then using that software to profile the local citizens. Increasingly they push for more warrantless searches and laws to force disclosure of passwords and keys even if you are not suspected of a crime.

Change is needed. An institution becomes institutionalised, and then its main mission becomes its own preservation rather than service of the public as originally intended. The national security apparatus is even attempting to rewrite the constitution or ignore it entirely; not exactly 'service of the public', IMHO. Now the criminals are working on USB firmware implants and wireless mesh malware that exploits the internet of things, and the NSA/GCHQ have provided the perfect environment for them to succeed. Ever tried dumping the firmware from your hardware and examining it for rogue code? Fun by the bucketload. :{

Unfortunately we can skip any further discussion when the fine print in every "license" (paid or "free", e.g. from a so-called "OS") can lawfully say "we are not liable for anything, you are on your own" and our "government" happily prosecutes the consumer.

"Good post?" The bar is set low. Reiterating the findings of Whitten & Tygar and Garfinkel & Miller isn't what I would call good. We've known these issues for decades, and he adds nothing new to the discussion.

I am by no means a security professional, but as far as I can tell, these comments claim the following:

a) Mr. Green's suggestions do not work for email, ignoring that he suggested getting rid of email altogether.

b) People will use email and this will never ever change, so we don't need to develop any follow-up.

c) Email and PGP are great and work well.

d) Usability is stupid and it's people's fault for not using the products available.

e) ECC is broken by design, because the NSA, and there never ever can be anything practical besides ECC and RSA.

People do change their habits. From MySpace to Facebook, from letters to email, from phoning to adding SMS or instant messages. And people are not willing to learn lots of stuff for seemingly small tasks, with no obvious safety payoff or bad consequences in sight, when there are acceptable ways around it. Not being able to drive a car can have serious implications for your life (it doesn't have to), and if you drive anyway, it can kill people. Not being able to use mail security is basically harmless, socially acceptable, and much easier.
Ignoring usability, and therefore user acceptance, will break lots of things, because people will work around unpleasant mechanisms. They usually don't even like passwords, because most password policies are stupid and passwords get annoying to type many times a day.

It's a "let's work on it", because if it's never approached it won't get better.

Personally I think Matthew's raised a number of very good points. People also tend to forget that it's just a blog; it's not meant to be his finest piece of writing ever, just a number of ideas that he's put down on paper. This isn't a thesis or a proposal.

Ultimately I think PGP fails because no one is able to provide an end-to-end service. There's a protocol, various different pieces of software and different implementations. Nobody's really responsible for the whole thing, and it's a very old solution that doesn't solve today's complex threats.

It's also a usability nightmare. I was trying to explain to a friend of mine how she could send me encrypted emails after she had seen my PGP fingerprint in my email signature.

What browser do you run? What operating system? Where do you access your Hotmail account from? Your iPad? University? I gave up... seriously, I couldn't even get to the part about generating keys.

I've been doing IT security professionally for 10 years, and I would never recommend such a solution to our end users. It would be rejected outright. Key management? Verification of whom I'm talking to? You've got to be kidding me. These concepts are just beyond most people. And rightly so.

Software such as Threema and hopefully Moxie's Signal/TextSecure are very promising. Different levels of trust. Implementations from one vendor across platforms. Security built in. My friends can actually use them; I think that pretty much says it all.

That said, what scares me about the stuff in the posting Bruce links to is the possibility that, with security as the excuse, email (open, implementable by anyone, works on every platform, etc.) will be turned into a series of commercially controlled proprietary systems that are interoperable only with each other, leaving people who want to do their own email from their own server on their own domain unable to do so.

@David Henderson: "The implementation is not at all suited to world-wide email."

Sure it is. All it requires is instead of one, two pieces of information be exchanged: email address & public key. I post my public key with my email address on my website. Anyone who wants to contact me securely, can. It's not difficult at all.

Has anyone taken advantage of this? No.

This frustrates me more than anything: even fellow certifiable geeks who rant about needing security, the moment I mention that I am fully set up to use it with email & XMPP/Jabber, immediately stop talking about crypto, and all of them have so far refused to even look into setting it up.

They also seem to have a great love of Skype & Outlook. People like this need to walk the talk or stop talking.

Back to your comment: PGP works just fine worldwide. It is completely valid for one-on-one communication; in fact, I would argue more so than for a group of any size. Its weakest point might be that it requires end-to-end verification for each and every communiqué, and that is also its strength.

iwozERE: The public works in the banks, too. I was recently in the local branch of Barclays Bank paying a bill in cash (why not? It was under £20), and the teller began pushing me to use mobile banking instead. I said it was insecure, and she invited me to a tutorial session the bank runs so I could see how secure it is. (I meant the *phone* was insecure, not their service, of course.) And on and on, until I finally looked her in the eye and said, "Don't you *like* having a job here?"

She got so rattled she forgot to stamp the bill "PAID".

It is my view that a lot of the phishing problem could have been solved at the beginning had banks troubled to see authentication as a *two-way* process in which the bank needs to authenticate itself to us as well as demanding that we prove our identities to them. You *still* get banks calling people in the UK and saying "Let me take you through security", which ought to be sanctionable behavior. Had they begun the process 20 years ago of teaching people to expect a handshaking process in which both parties trade information until both are satisfied we'd be in a lot better shape now. I'm not a security expert, yet the first time I got one of those calls, back in 1991, my immediate reaction was, "You've got to be kidding me. Send me a letter if you have something to say." and hung up. If I could do that then, why couldn't the paid people who worked for the banks?

This site has done an excellent job of creating an easily set up PGP client, automating key generation and ease of use. The site even has a YouTube instruction video: http://gentlegpg.sourceforge.net/

One of the problems mentioned in the article was the possibility of encrypting and sending the wrong message to someone. This program has a check against that: it requires the recipient's name or some other word(s) to appear in the message text. Otherwise, it will not go forward with the encryption.
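Such a guard is easy to sketch. This is a hypothetical illustration of the idea, not gentlegpg's actual code:

```python
def safe_to_encrypt(message: str, required_words: list[str]) -> bool:
    """Refuse to encrypt unless every required word (e.g. the recipient's
    name) appears in the message text, guarding against encrypting the
    wrong message for someone."""
    text = message.lower()
    return all(word.lower() in text for word in required_words)
```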

Any suggestion to replace (RFC2822-)Email is just insane. It is not practically doable and the result would very likely be significantly worse than what it replaces. There is a reason Email is so pervasive and successful and all competitors have failed. Please get back to the real world and stop making theoretical suggestions that pretend to be practical ones.

"Your simplex system would require something like bitmessage, which has its own (scaling?) problems."

This is why I need to implement the offline-messaging protocol I designed. It's designed to send messages between two parties without anyone except the sender and recipient being able to determine who they are (it requires Tor/I2P for anonymizing connections). It's limited and not designed as an email replacement, but it doesn't have the "you must download and attempt to decrypt every message ever sent" issue. It's not a complete protocol, just a file format, but you don't really need to develop a special protocol, as you can drop messages anywhere that's indexed by Google and the recipient can search for them online. It also doesn't allow unsolicited messages (is that a feature or a limitation?).

It doesn't provide PFS; however, it does get kind of close. Messages are part of a chain, and the last message is needed to both identify and decrypt the next one.
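Chaining like that is commonly built from a hash and a keyed derivation; the sketch below is my assumed version of such a scheme, not the poster's actual format:

```python
import hashlib
import hmac

def next_pointer(prev_message: bytes, chain_secret: bytes):
    """Derive the identifier and decryption key of the NEXT message from
    the bytes of the previous one. Without the previous message you can
    neither find nor decrypt the next, which is what makes the chain
    "kind of close" to forward secrecy."""
    msg_id = hashlib.sha256(b"id|" + prev_message).hexdigest()
    msg_key = hmac.new(chain_secret, b"key|" + prev_message,
                       hashlib.sha256).digest()
    return msg_id, msg_key
```

The recipient could then search any public index for `msg_id` to locate the next message, as described above.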

I'm reading the article and it's turning my stomach.
The NSA riddled propaganda of pushing for ECC over PGP due to the "advantage" of shorter keys is just screaming: "PGP keys are too long for us to break, use these shorter keys which we have figured out what to do with". The NSA has been recommending ECC on the benefit of shorter keys for a long time, and given recent history, I say: "slow and steady, and the bigger the better".

I don't trust the SEC curves, especially since the prime curves have magic numbers with no reasoning behind their choice, and it is really suspicious that they use 521-bit prime curves instead of 512-bit without an explanation. As far as I'm concerned, SEC should not be trusted. I don't have the same feelings for Curve25519, however, and ECC has held up to public cryptanalysis.

If the NSA influenced the choice of curves so they could break it, then I think it stands to reason that they can only break specific curves. If they didn't influence the choice of curves so they can break them, the choice of unverifiable seeds and use of 521-bit prime curves without any explanation just doesn't make any sense whatsoever.

@Thoth How about several times per day? I have rules set in my Mutt config that automatically encrypt when an address appears in a To:, Cc:, or Bcc: line. Given that this includes coworkers, almost all of whom have PGP keys due to internal infrastructure needs, I am constantly encrypting and decrypting mail.
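As a rough sketch of how such rules might look (the domain here is hypothetical; Mutt's `send-hook` and `crypt_autoencrypt` are the relevant knobs, and the `~C` pattern matches To: and Cc: recipients — Bcc: would need separate handling):

```
# Default to plaintext, then auto-encrypt whenever any recipient is at
# the (hypothetical) corp.example domain.
send-hook '.' 'unset crypt_autoencrypt'
send-hook '~C @corp\.example' 'set crypt_autoencrypt'
```

The first hook resets the flag for every outgoing message so encryption doesn't stick from a previous recipient.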

Further, due to my PGP keysigning policy, and my activity with CAcert, I see at least 2 requests per month for signing PGP keys. I further encrypt my website usernames and passwords with PGP, which requires daily access. Every email I send is also cryptographically signed with my key, and I am on several mailing lists with many who do the same.

In other words, yes, PGP users are a niche group. No argument there. But there are many very active PGP users in the community. Not only users, but developers and other contributors. Same could be said for OTR, or any other encryption utility.

@Anura:
I'm grateful for the response, but my worry is not wholly alleviated with ECC. As you point out, the arbitrarily chosen constants seem to imply the NSA has some method of choosing these constants that gives them an advantage. AFAIK, we don't know what this method is, or how/if it can be extended. For all we know, if they can engineer one curve, they could apply the method to other curves as well (though maybe a little less conveniently). As a result, I am uncomfortable personally recommending ECC while there is a suspected back door in some of the curves that no one outside the NSA crypto lab fully understands. It's quite possible they reduced all of ECC to some multi-stage problem where no stage is significantly harder than factoring a key of the equivalent length - we simply have no idea.

The math behind RSA OTOH, is simple, well understood, and judging by the NSA's constant complaining about it (both publicly and privately), fairly secure. Sure it requires much longer key lengths and is slower, but it doesn't have any of the magic "trust us" numbers we don't understand. I like that.

I personally prefer Diffie-Hellman algorithms over RSA. The main reasons are the performance of ephemeral keys (RSA is impractical for large-scale use) and that key confirmation and message authentication implicitly verify the identity of the sender as well, since both parties' keypairs are used in the key exchange (and MACs are much shorter than RSA signatures).
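To illustrate the performance and size points (not any particular protocol): a Diffie-Hellman agreement is one modular exponentiation per side, and the resulting shared key lets a short MAC authenticate messages. Note that implicit sender authentication only holds when the parties' long-term keys also enter the exchange; this toy sketch shows ephemeral agreement only, over a deliberately undersized group:

```python
import hashlib
import hmac
import secrets

# Toy DH group for illustration only: P = 2**127 - 1 is a Mersenne prime,
# far too small for real use (real systems use RFC 3526 groups or Curve25519).
P = 2**127 - 1
G = 3

def ephemeral_keypair():
    # Generating a fresh keypair is one modular exponentiation --
    # vastly cheaper than generating a fresh RSA modulus.
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

shared_a = pow(b_pub, a_priv, P)   # Alice's view of the shared secret
shared_b = pow(a_pub, b_priv, P)   # Bob's view -- the same value
key = hashlib.sha256(shared_a.to_bytes(16, "big")).digest()

# Authenticate a message with a MAC under the agreed key: 32 bytes,
# versus a 256-byte signature for RSA-2048 -- the size point above.
tag = hmac.new(key, b"hello", hashlib.sha256).digest()
```

Running the key derivation through a hash (here SHA-256, standing in for a proper KDF) is what turns the raw group element into a usable symmetric key.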

Also, looking it up, Curve511187 is a thing, except it's called M-511 and is listed as one of the SafeCurves. I would use it for my little project, except there is only one implementation I can find (RELIC) and I don't know how good it is - it's also fairly recent, added in October 2013. Note that this is the reason I'm dragging my feet on my little project: RSA isn't ideal for my purpose, I don't want to use NIST curves, I could use regular Diffie-Hellman but it takes more space in the message since I require an ephemeral key, and for other algorithms there's a lack of implementations. This is why I'm leaning towards Curve25519 - it's available, and NaCl does everything I need - but I am less comfortable than I would be with a larger curve.

Regarding crypto algorithms, assuming the NSA had a hand in a good number of them, we are pretty much done for. Despite that, there are algorithms created by individuals or groups that the NSA may not have had their hands in (e.g. the SafeCurves).

If I remember correctly, a 2048-bit asymmetric key is roughly equivalent to a 112-bit symmetric key, and to achieve 128-bit symmetric strength you need a 3072-bit asymmetric key. The important part is the key strength relative to its size: the 2048-bit asymmetric keys we are using aren't nearly as strong as we think (unless we all use 8192-bit asymmetric keys), but they're still good enough.
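For reference, these are the commonly cited NIST SP 800-57 comparable-strength estimates (RSA/DH modulus size versus symmetric security level), which match the figures above:

```python
# NIST SP 800-57 comparable-strength estimates:
# RSA/DH modulus bits -> approximate symmetric-security bits.
rsa_to_symmetric = {
    1024: 80,
    2048: 112,
    3072: 128,
    7680: 192,
    15360: 256,
}

assert rsa_to_symmetric[2048] == 112   # the "112-bit" figure above
assert rsa_to_symmetric[3072] == 128   # the "128-bit" figure above
# Note that 8192-bit RSA lands between the 128- and 192-bit levels;
# matching AES-256 would take a ~15360-bit modulus.
```

This is why "2048-bit RSA" and "128-bit AES" are not equivalent claims, even though both are routinely described as strong.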

If an attacker attacks, the last thing he wants to do is go head-on against the crypto; the first thing he wants is the easier stuff ... TLAs.

Most GPG/PGP programs have a fixed location where the keys are stored; some keys are encrypted with a low-entropy password, and some are simply stored raw. Different GPG/PGP programs may store keys in different locations and use different key-handling methods with different degrees of assurance (usually low).

Some of the common methods include directing the GPG/PGP program to read keys from a specific location (like a TrueCrypt volume), but overall key management is a headache for those who are new to security.

GPG/PGP is not perfect and has not seen much updating in the face of more advanced threat models, which Matt tried to present in a casual, ranting style on his blog (although I'm not going to take all his words for truth). The main point is that GPG/PGP should evolve as times change to handle the more advanced threat models of the current age.

We also know that the NSA hated PGP, while they love the modern stuff Green likes. Some suspected it was a disinformation campaign. Yet their internal documents show the basic and old protocols are tough enough on them that they always try to end-run them. A good configuration & implementation of PGP, esp. on an assured proxy/guard, is one of the better methods of dealing with them. It's also similar to how the DOD & NATO messaging systems operate. I've always said what they trust to protect their most private stuff from other TLAs can teach us how to protect ours from them. Good news is they publish a lot of information about that.

@Gopiballava
I usually prefer to trust a separate standalone GPG/PGP encryptor to do the work rather than the mail client as you described. It's a bit more tedious, but that means you don't go trigger-happy clicking the send button, and it also ensures that weaknesses in mail clients do not affect your standalone encryptor.

There is an interesting Java-based standalone GPG/PGP client called Portable PGP (http://ppgp.sourceforge.net/), but it has not been updated for years. It's an ideal concept for a standalone encryptor with an easy-to-use GUI. Only 5 menu buttons on the left side - Encrypt, Decrypt, Verify, Sign, Keyring ... no confusion. Too bad it has been outdated for a long time (it internally supports only 1024-bit key sizes) and relies on the cumbersome Java Unlimited Strength policy files for crypto. Java is not the most obvious language for secure applications, but the simple-to-use idea and GUI are what it has done well. Besides increasing the key and cipher support, this utility - and every other GPG/PGP utility - could offer manual key entry via a text editing mode as well.

PGP? It's "WoT"-based, and in the real world that means the key you want to trust is signed by a handful of people you have never heard of, and THEIR keys are signed by a handful of people you have never heard of. Outside of the Debian strongly connected set, WoT really only works if you can confirm the key out of band, and I can count on one hand the number of people who have bothered to do that with me (it IS pretty geeky, after all).

Some other solution? Well, nobody has come up with one yet. It has been suggested that something DNS-based could be used, but nobody can really be bothered to do DNSSEC, even if others could be induced to check it (xkcd has an opinion here: http://xkcd.com/1181/).

Google and Yahoo COULD run their own keyservers, generate keys dynamically for users who don't supply their own, and sign the keys themselves for email accounts held with them - which would be great, right up to the point where the NSA turns up and insists on being allowed to edit the database "a little". The same could as easily be done with S/MIME (given Google has its own issuing certs) if you are going to rely on Google to say whether a key is valid or not.

I have no issues with PGP (or S/MIME) technically. You can happily insist that your key only supports a decent algo, and insist anyone you send encrypted mail to at least supports (if not insists on) a decent algo. Find some better way to share and verify keys, and I don't see a need to reinvent the wheel - you can stick with the system we have now (which works) instead of trying to build an entirely new one from scratch.

Nick P: hey man, long time no speak!! (Would you believe that I have spent time in hospital with influenza I picked up from a trip to Asia a few weeks back... Had a death in the family... and I am just getting started. I must have walked under a ladder, or killed a cat or whatever you supposedly have to do to get terrible luck).

Re PGP: I think that despite its flaws it has stood the test of time, and its design has had to have some inherent flexibility and genius to do just that and allow for its expansion. Zimmermann did a pretty reasonable job and it remains, uh... Pretty Good. The WoT, which most people see as its primary weakness, can be viewed as a feature as it is at least decentralized. Yes, it could be improved, and I think we have spoken about how we would go about this in the past. The thing is - nothing about PGP (the spec or the implementations, incl. GnuPG) is unfixable, and I expect that it will continue to give TLAs headaches for many years to come, barring some quantum advance that breaks asymmetric crypto.

Good to see you post. I was going to email you, but wasn't sure if you could (or would) access the account under your difficult circumstances. I also thought about setting up something real-time at some point to run some ideas by you.

"Would you believe that I have spent time in hospital with influenza I picked up from a trip to Asia a few weeks back... "

I thought you said you were stuck and couldn't get to the U.S. But you can get to Asia? Huh?

"Had a death in the family... and I am just getting started. I must have walked under a ladder, or killed a cat or whatever you supposedly have to do to get terrible luck"

Sorry to hear that. My situation has been similarly rough to the point I was seriously thinking I wouldn't make it another month. Calmed down as such that most immediate problems are gone, but quite a few that could become that way at any time. Still battling. So, I feel you.

re PGP

Yes, its flexibility is nice. It inherits the flexibility of email, which is one of the most over-used protocols around. About anything you can do with email you can do with PGP. Most of the problems can be fixed. And web of trust is kind of a bullshit model, as above commenters noted, but PGP supports one-to-one or small-group trust easily. The organizations and groups strongest in security use it that way anyway. Then web of trust can kick in from there if you only trust key signings from groups you know take a strong approach to identity. For instance, they only sign people they know personally and exchange keys with face to face. And you still do input validation on whatever they send you, because they might use Windows, Mac, or Ubuntu. Haha.

"barring some quantum advance that breaks asymmetric crypto"

That's why I keep coming up with schemes that leverage symmetric crypto for everything. Trickier, but stronger & better performance.

I agree with prior posters that key management is the most fundamental problem with PGP and PKI. Either one has to trust 3rd parties (CAs and the like) or rely on semi-secure channels to establish key authenticity. One can't blame PGP for that – there simply wasn't anything better back in the day.
The author is right in asking for progress, for better, more convenient and secure key delivery mechanisms. Unfortunately, there is not much mention of NameCoin – the most promising development in more than a decade.

NameCoin enables truly decentralized key distribution mechanism that allows exchange of public keys with no intermediaries or 3rd party servers. It relies on BitCoin like immutable ledger (block chain) for identity mapping to certificates or other crypto data. I’d encourage everyone to take a look – it is amazing what it can do today and how little attention it receives on the net.

Quite a few products already use NameCoin. The one most closely related to the topic is SecureDolphin - a Google Chrome extension for end-to-end encryption that works with Yahoo and Gmail. It uses NameCoin for storage and distribution of X.509 certificates. Check it out in the Chrome store.

"Most download pages offer an MD5 of [sic*] SHA1 sum to verify the download, but *who* claimed it was the right file to download?"

(*It would appear that "of" was a typo, intended to be "or".)

I continue to find myself amazed at how many people-- including those whom one would think would know better-- don't seem to get this: that a cryptographic hash /alone/, no matter how strong, does absolutely nothing to /authenticate/ a file.

From my observation, most, if not an overwhelming majority, of the GNU+Linux releases that are announced on DistroWatch provide nothing more than a hash (often only MD5, sometimes SHA1 and relatively rarely SHA256) and not even on an HTTPS page (which would at least provide /some/ degree, even if weak, of authenticating that the hash was in fact the correct one for the legitimate file) but on a plain, unauthenticated HTTP page.
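The distinction is easy to see concretely: a digest check "passes" for any file whose digest the attacker also controls, which is exactly the plain-HTTP scenario (the file contents below are stand-ins, not real release data):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the given bytes -- an integrity check, nothing more."""
    return hashlib.sha256(data).hexdigest()

iso = b"...contents of the legitimate ISO image..."
published = sha256_of(iso)

# Integrity: the bytes you downloaded match the digest you read.
assert sha256_of(iso) == published

# But an attacker who can swap the ISO on an unauthenticated HTTP mirror
# can swap the digest on the same page too -- the check still "passes":
evil = b"...contents of a trojaned image..."
evil_published = sha256_of(evil)          # attacker publishes this instead
assert sha256_of(evil) == evil_published  # victim's check succeeds on malware
```

Nothing in the digest itself ties it to the legitimate author; authenticity requires trusting the *source* of the digest, e.g. via a signature whose key you have verified out of band.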

A comment posted to this very blog back in May comes to mind as an apparent striking example of this blatant error made by someone whom one would think would know better.

In speaking of the need for software to be authenticated, the author wrote,

"Authenticode" comes to mind and in the Linux World we have an MD5 hash we can verify before we burn an ISO image for a new OS.

I replied by pointing out, first, that MD5 has been considered deprecated for years already and then, what I stated just above at the beginning of this post, namely that,

Even the most robust cryptographic hash, /alone/, cannot do more than verify the *completeness/integrity* of a file-- not the /authenticity/. For that, one must somehow establish trust in the /source/ of the hash.

The poster in question then responded with a post in which, after conceding that MD5 has been deprecated for years, immediately went on to claim,

but just because it could be cracked doesn't mean it's useless: it makes the process of posting a corrupted distro much more costly, and -- helps to further the practice of protecting software by offering authentication

the software must be protected first -- before there can be any discussion of protecting data .

That was the entirety of the reply. As one can see, the second point I had made-- clearly the more critical and germane one of the two-- had been completely ignored. Seeing this, I then replied by attempting to elaborate upon and make clearer, by way of illustration, this critical distinction between verifying the completeness or (technical) integrity of a given file, on one hand, and /authenticating/ it, on the other. ( https://www.schneier.com/blog/archives/2014/05/forged_ssl_cert.html#c6307745 One could perhaps say that I made what was essentially the same point that "ads" did in the quote of his that I began this post with, only in far more words.)

Not only was there no response from the poster I had replied-to but I appear to have been the only one to have so much as even noted the troubling assertion that had been made, based on an apparent conflation of the process of verification of the technical integrity of a file with that of authentication.

This, given the apparent level of knowledge of the commentators to this blog, would appear quite discouraging.

I have made note of the fact that "software signing" only testifies to the contents not the quality or security of the code for several years now (as you will find if you look back on this and other blogs).

The problem is there is no way to certify that code has not been tampered with "upstream", nor that the holder of the signing key or process has been sufficiently careful that a third party cannot have "used them" to sign malicious code that has, in effect, never been in the "official code path".

However, there is a further problem we have to accept, which is that hashes and certificates "are assumed" to be secure for the purpose to which they are put. As Bruce has noted on a number of occasions, "attacks only get better", and as we have seen, a number of attacks, though developed against a particular algorithm (FEAL being the most benighted), can be generalized to others. Thus hash collisions have been found, and certificates fall to improved factoring etc. It is a safe assumption that the likes of the NSA, whilst behind the curve in many things, are very likely to be ahead of the open community curve in this area of research.

So yes, code signing does not attest to the quality or security of the code, but nor does anything else currently...

The real question is: is there a way to show that the code generation process of a given manufacturer/supplier is "what it says on the packet"? The simple answer is no; you have to rely on external process auditing, and that at the best of times is far from infallible.

Personally, due to the lack of a reliable process, I'm at the point of ceasing to care what other people think on it; I simply state the observations and let them make their own minds up.

I appreciate your points. I would like to reply in some detail, but I do not know how soon I will have the necessary time and frame of mind to do so. I therefore am posting this reply now, in order to be sure that I at least acknowledge your reply and thank you for going to the trouble of writing it. Please accept my apologies for not having done so for what I believe are now two previous instances of your having written detailed replies to me.

Nick: the trip to Asia was connected to the family emergency (nope, my family are white but on vacation over there). Re US situation - the only thing that prevents me (as far as I know, and I've made the relevant FOI requests but who knows what secret lists I've made or if my inquiries put me on a list - that would be ironic) from coming over to the west coast at the moment is employment. If I could organize a steady job, preferably somewhere in the Bay Area I'd be on the next plane. I was probably needlessly concerning myself over a previous project that went a bit bad and I'd be a fool to elaborate any further on a public forum. Nobody got hurt and I didn't desert or do anything that crazy but you know how these things get blown out of proportion.

Re WoT - yeah, it is a bullsh*t model but I guess when you compare it to the antiquated, hierarchical and demonstrably broken X509 it starts looking a bit better. I suspect given its versatility these issues could be overcome without throwing the whole thing out.

They were both children of their times a third of a century ago. One was seen as Authoritarian beyond acceptance, and the other a method of providing a more human and secure --from authority-- method of providing authentication.

In both cases the fundamental model was wrong and nobody in either authority or geekdom wants to change things such that users can take sensible control of trust.

The primary thing they got wrong is the relationship between what humans do (roles) and what they are (bioindividual). An authoritative organisation wants to identify the body, not its actions, because that is the historic way by which authority controls the many with punitive punishments, and why companies rarely get punished for crimes that would be a death or life sentence or equivalent for an individual, and why the likes of the bankers et al have not gone to jail (collective action foils the controlling-mind requirement). Humans, on the other hand, know they have many roles in life, and as the old army expression "I'd trust him with my life, not my wallet" indicates, humans can and usually do trust people based on their role. Bruce mentioned this with his tale of getting his plumbing fixed and allowing access to his house.

As both CAs and WoT fail to differentiate what the trust is for, it's an all-or-nothing judgment call by the user...

But there are other issues: a third of a century ago the Internet was a "geek plaything", not a business tool, and not yet generally recognised as a replacement for or augmentation of our more general human activities. As a result neither CAs nor WoT got even close to being a fit.

Unfortunately we have rapidly evolved the models based not on what is important to us as humans within society but on what dysfunctional or draconian entities want, which makes their life, not ours, easy.

As an example, try getting web browser designers to put in the foundations for what would be a shift to role-based, not biology-based, trust... You will either be ignored, or told nobody wants it, or given a whole myriad of NIH / NPII reasons that boil down to "we have reasons we won't admit to for not changing, because slaves/serfs and other commodity humans have no rights and we are not going to give them any in any way".

The WoT assumed people would still be "local" socially as well as for business or trade etc. This is obviously wrong; thus "trust" becomes nebulous and subject to false amplification (eBay reputations etc).

CAs have failed not only as expected by those who thought up WoT, but in many, many ways they did not predict. Worst of all, the hierarchical CA model has not included any liability for the certificate issuer, so there is no incentive to do anything other than issue certs as cheaply as possible. This has also had a perverse incentive which encourages as many CAs as possible, thus breaking the hierarchical model completely...

What we need is something new with solid and reliable foundations, and like the old joke about MS products "if the answer to the question is CA / WoT then you are probably asking the wrong question"...

Clive: I don't usually just reply to say I agree but your post eloquently summarizes everything I was going to comment on anyway. ;-)

The only thing I will add is that it is surprising that this issue, which is really quite fundamental, still exists in 2014. Without a reliable way of solving the "trust" problem, the cryptosystem is vulnerable to all of the problems we've seen before with PGP key servers and rogue keys etc. I'm not saying it is an easy problem to solve - but one would think that the cypherpunks and general crypto enthusiasts, not to mention commercial interests, would have sufficient incentive to solve it. Of course, you can also argue that the govt has a vested interest in ensuring that this remains broken.

I think we could've come up with better solutions. Truth is that *both* commercial interests and government have a vested interest in ensuring it's broken. The reason for the commercial side is they want things to be cheap, integrated, and centralized. That their typical use case needs a large number of highly assured components in networks and endpoints means their tradeoffs can't be met. They go with low-assurance methods instead, undermining most trust protocols. The government has its motivation of SIGINT, although the LEOs (e.g. FBI) do want strong authentication. It's just that the NSA is more powerful, smarter, and has legal charge of COMSEC/evaluations.

The success of Bitcoin and P2P models (esp. darknet/F2F) indicates a decentralized model might work. Question is what model. I'd include a way for the user to declare their trust rating of each node, as well. Data coming from a service running Windows Server is more risky than data coming from a Cambridge CheriBSD machine.

Nick: in fact I have what I believe are some reasonable ideas on how to combine a proof of work cryptocurrency-esque concept with a traditional web of trust. I'd like to discuss this with you further at some point as being idea rich and time poor it is very hard to get things from a concept or even a paper to a working alpha.

I think that by a) making key generation costly computationally (subkeys free obviously) and b) expanding the WoT by forcing those who vouch for others to put their money where their proverbial mouth is... A hybrid CA model whereby identity validation is crowdsourced may also be a potential solution.
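One way the "computationally costly key generation" idea could be prototyped is a hashcash-style stamp bound to the key fingerprint: expensive to mint, one hash to verify. This is a sketch of the concept, not the commenter's actual design; the difficulty and names are illustrative:

```python
import hashlib
import os

DIFFICULTY = 12  # leading zero bits required; raise this to make minting costly

def pow_stamp(fingerprint: bytes) -> bytes:
    """Brute-force a nonce so that SHA-256(fingerprint || nonce) starts
    with DIFFICULTY zero bits -- roughly 2**DIFFICULTY hashes on average."""
    nonce = 0
    while True:
        digest = hashlib.sha256(fingerprint + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return nonce.to_bytes(8, "big")
        nonce += 1

def verify_stamp(fingerprint: bytes, stamp: bytes) -> bool:
    """Verification costs a single hash, regardless of difficulty."""
    digest = hashlib.sha256(fingerprint + stamp).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

fp = os.urandom(20)        # stand-in for a PGP key fingerprint
stamp = pow_stamp(fp)
assert verify_stamp(fp, stamp)
```

A key server could refuse submissions lacking a valid stamp, which raises the cost of flooding it with throwaway keys while leaving honest one-off key generation cheap enough.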

Prior to the family crisis that sent me halfway across the Pacific and forced me to put all of my projects on hold, I had written a proof-of-concept key server loosely based on the SKS code. The main differences are the web interface and how keys are managed, especially ageing out old keys that haven't been queried for a long time, to avoid one of the biggest issues we have at the moment: long-lost old keys (with the revocation cert long gone too) cluttering up the key servers. It is something I have been thinking about for some time, and I believe that while the problem can't be "fixed", you can certainly mitigate much of it. My primary interest in running a key server was ancillary to my work on blogsig, which is now alpha quality; I have a working Firefox extension as well as one for emacs (why not ;-).

Nick (off topic) - doing a quick forensic job. Do you know of the URL for the library of the hashes for files that are installed stock with windows 8 64bit for elimination purposes? My memory is failing me.

The model of a key server with a transparency backend is based on the premise that a user is willing to trust the security of a centralized service, as long as it is subject to public scrutiny, and that compromise can be easily discovered.

I call FUD!
It's far more feasible/reasonable to discover forgery/impersonation based on decentralized communities of local/personal 'trustability'...

Nick: thanks! Those hash lists did the trick just fine. BTW I'll be blogging again really soon, but given that a few people have emailed about my canary not being updated, I figured I had best sign a new document and post it up there. Thanks to everyone for their concern and well wishes communicated via e-mail over the past few weeks. I'll move any of this discussion over to the Friday thread if anyone wants to comment, as I realize this has gone quite off-topic.

The real reason that PGP is awkward to use is because hardly anybody actually NEEDS to use it. For example, in the 20+ years since I created my PGP key, I have never had a single convincing reason to actually encrypt any message with it other than to confirm that it works*. Virtually no one I've talked to in the jobs I've held has considered that they should even think about encrypting their emails. I only ever had one reason to use it, and that was a specific case where a customer wanted to send billing data to the company I used to work for; but even that was simply a method for protecting data in transit - I created automated procedures at the endpoints to deal with the encryption and decryption processes, so key management never really was an issue.

Obviously, people in more repressive regimes may think differently, but consider that if they are in a threatening enough situation that they feel they need to use encryption, the people they feel threatened by will most likely have no compunction against victimising or imprisoning them simply for doing just that.

Until a significant proportion of the people who use email think that they want encrypted mail, the situation is always going to be that PGP is basically going to be a free, specialised tool, with all the attendant usability problems that entails.

* Actually, I will state that I've used PGPdisk for about 10-15 years to keep personal data relatively secure, but that's very much a different thing to what's under discussion here.

(Anon as I've related some operational details of a previous employer)

Felix: I've seen several of these posts about how PGP is a bad idea and we should have FS in our email encryption, but I actually disagree as completely as I can. Email is a store-and-forward system where multiple servers see the message in between. Forward Secrecy is about having the inability to read past messages given a future exposure of the key. The problem is that people want to read past emails given their current key. It's difficult for me to see how FS is even possible between the sender and the recipient in an email-type system, even if we were building the protocol from scratch on a greenfield.

FS requires synchronicity; email is pretty much a useful system because it doesn't require synchronicity (otherwise people would use some sort of synchronous messaging protocol, like IM). You could have a sort of synchronous-for-the-purposes-of-crypto-only system by delegating the crypto to your email provider, but then you basically just have TLS on the transport layer, which is a good idea, but it's not a replacement for GPG. By definition if you're truly running end-to-end FS then you can't read email today if it was sent yesterday. (Yes, I know, you could save the ephemeral key so that you could open the message later, but if you're saving the ephemeral key you just created a new long-term key and violated FS)

The reason why you would want FS some places, but not email, is that there are lots of protocols where you don't want anyone (including you) to be able to read what happened yesterday. Email isn't one of them.

" Forward Secrecy is about having the inability to read past messages given a future exposure of the key. The problem is that people want to read past emails given their current key."

It's actually easy. The client keeps the keys and decrypts messages on demand. Client might even store the keys with the message after encrypting them with a master key. Email search is where things get trickier.

That doesn't really provide forward secrecy. If the recipient can decrypt the emails in the future, the adversary can as well. PFS can only be provided for data that is decrypted and discarded as soon as the session completes (and assumes you pin memory, zero it before freeing, and don't hibernate).

It's the closest thing I had to PFS off the top of my head. Email is one of those things people keep and use later. So, what can approximate PFS with that requirement? That leads to my proposal. What you mention would alienate the vast majority of email users, which one can't do for an email security approach.

Yep, that's exactly the problem; PFS just doesn't work for email. It's trivial to design a system such that if the *sender* is compromised the emails cannot be decrypted, but not the recipient (RSA does the former by its very nature, and the sender can provide an ephemeral key so this can be done with Diffie-Hellman as well). In an offline system, the sender needs to be able to send something the recipient can decrypt without providing a separate key.
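A toy sketch of that asymmetry, using a deliberately undersized DH group (real systems would use a standard group or curve): the sender's ephemeral key can be discarded immediately, but the recipient's long-term key must be kept to read future mail, so its compromise exposes every past message:

```python
import hashlib
import secrets

# Toy group for illustration only; P = 2**127 - 1 is far too small for real use.
P = 2**127 - 1
G = 3

# Recipient holds a long-term keypair -- required so that offline
# recipients can decrypt without a live handshake, as in PGP email.
r_priv = secrets.randbelow(P - 2) + 1
r_pub = pow(G, r_priv, P)

# Sender uses a one-shot ephemeral keypair per message...
e_priv = secrets.randbelow(P - 2) + 1
e_pub = pow(G, e_priv, P)
key = hashlib.sha256(pow(r_pub, e_priv, P).to_bytes(16, "big")).digest()
del e_priv  # ...then discards it: compromising the *sender* later reveals nothing.

# The recipient derives the same key from e_pub and r_priv. But because
# r_priv must be kept around to read future mail, compromising the
# *recipient* exposes every past message -- why PFS fails for email.
key_rcpt = hashlib.sha256(pow(e_pub, r_priv, P).to_bytes(16, "big")).digest()
assert key == key_rcpt
```

This is the same one-sided protection the paragraph above describes: forward secrecy with respect to the sender only, never the recipient.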

The best thing you can do is to store the emails encrypted with a symmetric key (to be decrypted on your machine, not by the server!), likely a passphrase, possibly hashed with a keyfile (ideally stored on a USB drive). But even then, you are still vulnerable to malware.

PFS should be used when it can, which is any time where you have the sender and recipient online at the same time. When it cannot be used, you do what you can, which often isn't much.

Over and above the technical reasons why PFS or its equivalent won't work for email, there are human and management reasons.

Managers view email like the written correspondence of old, as something that should be "kept filed away"... and humans are lazy, and the Reply button, just to be helpful, includes the original message, so often both sides have the entire email conversation in their inboxes. It's a problem without control, especially with the Reply All button and corporate mail lists...

So with SMTP and current email practice it's best to assume security is "a dead horse" and "stop flogging it" into some new direction it won't go. As was once observed about humanity and its traits, "We are our own worst enemy", or more enigmatically, "We have met the enemy and it is us".

He has very valid points about PGP having unnecessary legacy baggage and too much complexity, leading to insecure implementations and usage.

What's the point of modern PGP clients supporting algorithms such as IDEA or CAST5? So the time traveller from the 90s can send you encrypted e-mail?

When looking at various standards and implementations, it's very easy to believe that the NSA has deliberately sabotaged them by introducing unnecessary complexity, making secure use more difficult. Here are some examples:

- SSLv3 was released in the mid-90s and is considered insecure. Currently 99.3% of web servers support at least TLS 1.0. Why haven't browsers, e-mail clients and other software dropped support for SSLv3 years ago?

- Even the newest TLS 1.2 supports DH-ANON, ECDH-ANON, "NULL encryption", RC4 and 3DES. What exactly is the point of these? If someone really wants these options, surely he could use one of the older TLS standards?

- ZRTP (used for VoIP encryption) is a modern standard from 2011. Yet it requires support for a 32-bit HMAC (in addition to the more secure 80-bit HMAC). Sure, transferring those extra bits would clog the modern Internet /sarcasm.

- IPsec is probably the best-known example, so I won't go into details.
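On the first two points, applications can at least refuse the legacy versions themselves. As a sketch, Python's ssl module lets a client raise the protocol floor so nothing below TLS 1.2 is ever negotiated (the attribute names are from the stdlib `ssl` module):

```python
import ssl

# create_default_context() already disables SSLv2/SSLv3 in modern
# Python builds; raising minimum_version additionally rules out
# TLS 1.0 and 1.1 for any connection made with this context.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Any handshake attempted with `ctx` will now fail against an SSLv3- or TLS 1.0-only peer instead of silently downgrading.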

Since no-one has mentioned it, and people are still racking their brains, I would like to mention Pond [1] by Adam Langley (agl, Google), who is also working on BoringSSL [2]. The rationale and techniques behind Pond are well documented, and it tries to solve some of the issues with PGP and email, but it is not a replacement for them (experimental, open source).

It's mostly there for legacy reasons, but IDEA jumps out at me. That's a 128-bit cipher that spent over a decade in use (against the NSA) before a weakness was published in the 8.5-round version. It still can't be cracked AFAIK. The developing weaknesses mean I'd recommend increasing the rounds, using it in multi-layer encryption, or doing something like 3DES. One thing I consistently say in security is "tried and true beats novel or new." IDEA has proved itself enough that people are justified in including it in a scheme.

Note: 3DES is used by the financial system and Microsoft. It's also unbroken so far.

Many people here seem to confuse PGP with GnuPG - or at least to think that they are interchangeable. They are anything but.

If you are using PGP (2.x, the genuine article), then anything you generate can be understood by GnuPG.

If you are using GnuPG, some of the things you generate cannot be understood by PGP (or even by GnuPG itself) and other things are extremely awkward.

For instance:

1) It is not possible to sign a binary file with GnuPG in a way that is verifiable by PGP 2.x. (Detached signatures and clearsigned texts work fine.) This is because GnuPG is not RFC 1991-compliant as far as signing is concerned, and PGP 2.x understands only RFC 1991. Proof:

gpg --sign --rfc1991 --armor --output file.asc file.bin

The resulting signed binary file will not be verifiable by PGP 2.x, or even by GnuPG itself! The reason is that GnuPG outputs the signature packet after the literal packet, in accordance with the OpenPGP standard - but RFC 1991 specifies exactly the opposite order. Using the --rfc1991 (or --pgp2) option is of no help.

2) Signing-and-encrypting a file in a way which is understandable by PGP 2.x is extremely awkward and requires 5 (five!) separate commands:

I am very disappointed to see an unreflective blog entry like this one posted; there is nothing wrong with PGP in terms of data integrity / IT security.
Yes, PGP usability and of course key management should be improved.
But there is no trustworthy alternative.
Recommending a cloud service for exchanging confidential data sounds like a joke.