The problem isn't with the e-mail clients, it's with e-mail protocols themselves.

E-mail is as archaic and broken as FTP (also, why are we even still using FTP?!). Just adding an attachment (typically Base64-encoded) inflates its size by around a third. How on earth we've put up with that for this long is beyond me, given how valuable bandwidth was (and, for many, still is).
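That "around a third" figure isn't hand-waving: Base64 maps every 3 input bytes to 4 ASCII characters, so the overhead is exactly 4/3 before you even account for line wrapping. A quick sketch in Python:

```python
import base64
import os

# 3,000 bytes of random "attachment" data
raw = os.urandom(3000)

# Base64 maps each 3-byte group to 4 ASCII characters,
# so the encoded form is 4/3 the size (plus padding on odd lengths)
encoded = base64.b64encode(raw)

print(len(raw))                  # 3000
print(len(encoded))              # 4000
print(len(encoded) / len(raw))   # ~1.333
```

In real mail it's slightly worse than 4/3, because MIME also wraps the Base64 stream at 76 characters per line, adding a CRLF for each line.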

Then you have the whole hodge-podge of half-supported "standards": multiple different ways of encoding HTML, plain text, no standard failure responses, no native compression, no native encryption (I know SMTP can be wrapped in SSL/TLS, but that's not even enabled as standard on many servers).

It's quite simply just a horrible mess so I'm amazed it even works this well.

The thing about FTP, though, is the standard is so simple, and for the vast majority of servers it Just Works. I don't see what needs to change, it's dead simple.

It doesn't though - there's a whole series of hacks needed, from your router (e.g. FTP doesn't natively work behind NAT or firewalls without connection-tracking helpers) through to the client itself. (Sorry about the rant I'm about to launch into - it's nothing personal.)

Every FTP server (read: OS, not daemon) returns different output from commands such as LIST, so FTP clients have to be programmed to support every server (an utterly broken standard!!)
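To illustrate what clients are up against, here's a minimal sketch (the sample lines are illustrative, not taken from any particular server): a Unix-style server returns `ls -l`-shaped listings, while an IIS server can return DOS-style ones, and the client has to guess which it's looking at:

```python
# Two typical responses to the same LIST command
# (sample lines are hypothetical; real servers vary even within these styles)
unix_line = "-rw-r--r--   1 ftp  ftp      4096 Mar 01 12:00 readme.txt"
dos_line  = "03-01-13  12:00PM                 4096 readme.txt"

def parse_list_line(line):
    """Guess the listing format and extract (size, filename)."""
    fields = line.split()
    if line[0] in "-dl":                        # Unix ls -l style
        return int(fields[4]), fields[8]
    if len(fields) == 4 and "-" in fields[0]:   # DOS/IIS style
        return int(fields[2]), fields[3]
    raise ValueError("unrecognised LIST format: " + line)

print(parse_list_line(unix_line))  # (4096, 'readme.txt')
print(parse_list_line(dos_line))   # (4096, 'readme.txt')
```

And that sketch only covers two of the formats in the wild - filenames with spaces, symlinks, and `<DIR>` entries all need further special-casing, which is exactly the point.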

What's even worse is that FTP doesn't have a true client / server relationship. The client connects to the server and tells the server which port the server should connect back to the client on. This means that firewalls have to be programmed to inspect the packets on all outgoing port 21 connections to establish which incoming connection requests to port forward. It's completely mental! It also means that the moment you add any kind of SSL encryption (which itself isn't fully standardised, and data-channel encryption isn't always enabled even when the control channel's is), the firewall can no longer read those commands, so you can potentially completely break FTP.
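Concretely, in active mode the client sends a PORT command naming its own address and a port encoded as two bytes, and the server then opens the data connection back to the client. This is exactly the command a firewall has to sniff out of the control channel:

```python
def parse_port_command(arg):
    """Decode the argument of an FTP PORT command:
    'h1,h2,h3,h4,p1,p2' -> (ip, port) where port = p1*256 + p2."""
    parts = [int(p) for p in arg.split(",")]
    ip = ".".join(str(p) for p in parts[:4])
    port = parts[4] * 256 + parts[5]
    return ip, port

# The client is asking the server to connect BACK to 192.168.0.10:50572
print(parse_port_command("192,168,0,10,197,140"))
# ('192.168.0.10', 50572)
```

Once the control channel is encrypted, a NAT box or firewall can't see this string any more, and the server's connection back to that address and port simply gets dropped.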

Add to that the lack of compression (no compression support in a protocol named "file transfer protocol" - I mean, seriously) and the very poor handling of binary vs ASCII files, and you're left with an utterly broken specification.

I will grant you that FTP is older than the WWW. FTP harks back to the ARPANET days, and its protocol actually made some sense back then (all clients were also servers, all machines were known and trusted, so servers could happily sit in the DMZ and your incoming connections already knew what OS they were connecting to... etc.)

However, these days FTP is completely inappropriate. SFTP at least fixes some of these things by running over SSH (compression, guaranteed encryption on both the data and authentication channels, no NATing woes, etc.), but that in itself can cause other issues (many public / work networks firewall port 22, SFTP servers can be a pain to chroot if you're not an experienced sysadmin, etc.).

It just seems silly that FTP has never undergone a formal rewrite, particularly when HTTP has undergone massive upgrades over the years and there's been a whole plethora of significantly more advanced data-transfer protocols, from P2P to syncing tools (even rsync is more sophisticated than FTP), from cloud storage through to network file systems. FTP really is the bastard child that should have been aborted 10 years ago (sorry for the crude analogy, but I can't believe people still advocate such an antiquated specification).

"The thing about FTP, though, is the standard is so simple, and for the vast majority of servers it Just Works. I don't see what needs to change, it's dead simple."

Oddly my experience is different.

I've implemented FTP software, so I appreciate its simplicity. But in practice I find protocols that span multiple ports to be a bad idea: they cause problems with firewalls and routers, fundamentally requiring very ugly stateful application-level gateways to work. Plain FTP usually fails to reach servers without "passive mode" hacks, and even then it fails between most ordinary peers. The default ASCII transfer mode can easily cause corruption and doesn't serve much purpose these days.
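The ASCII-mode corruption is easy to reproduce in principle: ASCII mode rewrites line endings in flight, which is harmless for text but mangles any binary file. A sketch of the translation a transfer to a CRLF platform might apply (this models the translation only, not the wire protocol itself):

```python
def ascii_mode_transfer(data: bytes) -> bytes:
    """Simulate an ASCII-mode FTP transfer to a CRLF platform:
    every bare LF byte is rewritten to CRLF on the way through."""
    return data.replace(b"\n", b"\r\n")

# A "binary" payload that happens to contain the byte 0x0A (LF)
binary_payload = bytes([0x00, 0x0A, 0xFF, 0x0A, 0x10])

received = ascii_mode_transfer(binary_payload)
print(len(binary_payload))  # 5
print(len(received))        # 7 - the file grew; the binary is now corrupt
```

A ZIP, image, or executable sent this way arrives with a different length and checksum, and the failure is silent unless you compare hashes afterwards.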

SFTP is perhaps too complex (being an extension of SSH and all), but network-wise for the most part it just works on all networks that don't have it blocked. It can easily be run behind a NAT on any port one wishes. Obviously it's more secure too.

Standard compliance is pretty good these days, even in Exchange. I can't vouch for the billion badly coded email clients but that's not an email problem, that's a code-quality problem.

Fair point there. However, I still think the standard is outdated. For example, I don't see the point in transmitting everything as ASCII - in fact, I personally think Base64 should die. Anything that adds ~33% overhead to each and every attachment clearly isn't a sane standard for attachment encoding.

Content is encoded in exactly one way: MIME.

MIME isn't a single encoding specification; there are a few different transfer encodings (IIRC the biggest being 7bit and 8bit)
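You can see this per-part variety with nothing but Python's standard library: each MIME part in a single message declares its own Content-Transfer-Encoding, and the encoding the library picks depends on the content (a sketch; the exact defaults are the `email` package's choices, not mandated output):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "encoding demo"
msg.set_content("plain text body")          # plain ASCII text -> 7bit
msg.add_attachment(b"\x00\x01\x02",          # binary data -> base64
                   maintype="application",
                   subtype="octet-stream",
                   filename="blob.bin")

# Each MIME part carries its own Content-Transfer-Encoding header
for part in msg.walk():
    print(part.get_content_type(),
          part.get("Content-Transfer-Encoding"))
```

So a single message can mix 7bit, 8bit, quoted-printable, and base64 parts, and every consumer has to handle all of them.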

I can't think of a single modern SMTP server that doesn't support STARTTLS.

I will grant you that the biggest part of this problem isn't SMTP server support so much as mail hosts (lazy admins) not defaulting to TLS. I can't recall where I read this, but a significant amount of e-mail is still transmitted between mail servers without any encryption.

I can understand why most of the WWW is unsecured (viewing - for example - BBC News over SSL could be considered overkill); however, e-mails often contain personal / confidential information and thus should be encrypted by default.