Posted
by
kdawson
on Tuesday October 07, 2008 @07:53PM
from the shake-hands-when-you-say-that dept.

agl42 writes "Obfuscated TCP attempts to provide a cheap opportunistic encryption scheme for HTTP. Though SSL has been around for years, most sites still don't use it by default. By providing a less secure, but computationally and administratively cheaper, method of encryption, we might be able to increase the depressingly small fraction of encrypted traffic on the Internet. There's an introduction video explaining it."

Firefox isn't helping the lack of SSL on the web by throwing a ridiculous warning when using self-signed certs. Browsers should treat self-signed certs as 'unsigned, with the added bonus that communications can't be eavesdropped' instead of freaking out that you might not know who you're talking to.

Self-signed certs aren't appropriate for processing credit cards... but not every site that has forms needs that... and simply removing eavesdroppers would be a step in the right direction.

By definition, then you're more than an eavesdropper. Then you're actively intercepting and rewriting the connection, which is a lot more complicated to do in volume, plus detectable by comparing fingerprints. Whereas just copying the stream for the NSA is trivial and impossible to detect. But hey, pick no security because the other option is imperfect.

That distinction is exactly what is being proposed here. Encrypted TCP would solve the eavesdropping problem (mostly, not totally) and the X.509 certs would solve the authenticity problem. Best of both worlds.

What's wrong with the SSH approach? First time you see a cert, inform the user. If it changes in the future, freak the hell out.
It works great for SSH and solves the whole key distribution problem.
It's orders of magnitude better than the current situation in browsers.
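A minimal sketch of that trust-on-first-use idea, with an in-memory dict standing in for SSH's known_hosts file (the function names and return values here are invented for illustration, not any real browser API):

```python
import hashlib

# Hypothetical trust-on-first-use store, like SSH's known_hosts file.
known_certs = {}  # hostname -> fingerprint seen on first connection

def check_cert(hostname, cert_der):
    """Return 'first-use', 'ok', or 'CHANGED' for a host's certificate."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    if hostname not in known_certs:
        known_certs[hostname] = fingerprint   # remember it, inform the user
        return "first-use"
    if known_certs[hostname] == fingerprint:
        return "ok"                           # same cert as last time
    return "CHANGED"                          # freak the hell out
```

The whole policy is three branches: never seen, same as before, or different from before — only the last one deserves a scary warning.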

It works great after the initial connection, but you're vulnerable to a man-in-the-middle attack on the initial connection attempt. It doesn't solve the key distribution problem at all. Or did you not read the warning message that ssh prints out on initial connections? You're accepting a risk, just as there is a risk in accepting a self-signed certificate. The difference is that your average SSH user can understand the risk, whereas unless you go the extreme route Firefox has gone, the average Web user won't.

The point being that this is the actual security hierarchy, from best to worst:

SSL with cert signed by a trusted certificate authority

SSL with self-signed cert

Plain HTTP

Whereas most web browsers make it appear like this:

SSL with cert signed by a trusted certificate authority

Plain HTTP

SSL with self-signed cert

Any browser that warns you about self-signed certs should make at least as much of a fuss about using plain HTTP, but they don't. Firefox takes it to ridiculous extremes but they're all faulty in this respect.

And really, if browsers would save the self-signed cert and then alert me if it changes the way SSH does, then the result will be very good, nearly as good as a regular cert (and potentially even better, since there's no potential for compromising the trusted certificate authority).

The thinking behind the current browser behavior is that while self-signed certs provide encryption, they do absolutely nothing to verify that the remote host is who they claim to be. Providing a lock symbol (which, over the years, security professionals have tried to train users to trust) when there is nothing even resembling validation does a disservice to the user. There is no need to make such a fuss over plain HTTP because users have been trained not to send credentials over plain HTTP.

So stop displaying the lock symbol! Nothing requires you to treat "real" SSL and self-signed SSL identically. It should be obvious that the current standard approach of making them look exactly the same except for a scary warning that appears the first time you hit a self-signed site is broken. But nobody cares about doing better because it's the "standard".

Straw man: The keyed lock argument is easier to prove false, and seems naively analogous to the SSL problem.

We are instead talking about a keyed lock where I, as an attacker, can walk up to your house after you leave and use a key with a specially shaped tip to prime the lock for accepting a new key (in addition to the old key -- this is not technically identical, but from a user interface point it is the same interaction). I can then unlock your door with my completely random key; when you get home, your old key still works and you notice nothing.

Maybe, but listening to unencrypted traffic is even easier and simpler to automate, and you don't have to "control" any part of the path; you just have to be a peer on the wire. Larger target, easier to do. And I don't believe your door lock simile is actually representative of what's going on with self-signed SSL. If all traffic on the net were at least self-signed SSL, you completely eliminate the "low hanging fruit" and raise the bar for an attacker, requiring more and more complex tools to set up.

SSL without a trusted certificate provides NO additional security over communicating in the clear. AT ALL.

Bzzzt, wrong, thanks for playing.

Yes, the man in the middle attack is very real. However, it takes a great deal more work to set up than a simple sniffer, because you have to either capture/block/proxy/rewrite packets so that each side thinks it's speaking with the other, or spoof the DNS somehow.

On the other hand, a simple network sniffer can capture almost everything sent in the clear, no special network tricks needed.

Authentication requires encryption. Encryption does not require authentication, but should then be considered somewhere between truly secure and just wide open. Call it a nice-to-have that prevents casual sniffers from picking up passwords to your home server, reading your webmail, and the like.

Your assertion assumes that casual crackers/script kiddies will all immediately escalate to some invasive and rather difficult MITM attack, or that sniffing is not a real danger. I'd argue that 90% of the insidious activity comes from just sniffing cleartext off the wire, and that more sophisticated attacks are significantly rarer. Encrypting the over-the-wire traffic mitigates a significant portion of that risk.

However, it takes a great deal more work to set up than a simple sniffer, because you have to either capture/block/proxy/rewrite packets so that each side thinks it's speaking with the other, or spoof the DNS somehow.

The "Default Gateway" IP address lets your computer send an ARP request for the gateway (e.g. 192.168.0.1) and get its MAC address. To send a packet to be routed, it determines that the destination IP isn't on this subnet, so it sets the frame's destination MAC to the default gateway's MAC (hardware drops any frames destined for anything other than its own MAC or broadcast); the gateway then looks at the IP address and uses it to route, setting the MAC to the next hop. There is no encapsulation of packets to route them.
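The forwarding decision described above can be sketched in a few lines (the addresses and ARP table entries here are made up for the example):

```python
import ipaddress

# Illustrative ARP table; real entries come from ARP requests on the wire.
arp_table = {
    "192.168.0.1":  "aa:bb:cc:00:00:01",   # default gateway
    "192.168.0.42": "aa:bb:cc:00:00:42",   # a host on our subnet
}

def next_hop_mac(dest_ip, subnet="192.168.0.0/24", gateway="192.168.0.1"):
    """Pick the destination MAC for a frame: the host itself if it's
    on-link, otherwise the default gateway's MAC."""
    if ipaddress.ip_address(dest_ip) in ipaddress.ip_network(subnet):
        return arp_table[dest_ip]          # deliver directly
    return arp_table[gateway]              # hand the frame to the gateway
```

This is also why ARP spoofing works as a MITM vector: whoever answers for the gateway's IP gets all off-subnet frames addressed to them.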

Yes, the man in the middle attack is very real. However, it takes a great deal more work to set up than a simple sniffer,

Bzzzt, wrong, thanks for playing. I've written an app that makes the process as easy as starting a sniffer; just because you don't know of any software that automates the process doesn't mean it doesn't exist. The app doesn't work flawlessly due to the technical details involved, but it certainly works well enough for me to watch 'encrypted' data flow just as easily as it is for you to start a sniffer.

EXACTLY! With a self-signed certificate, there's no indication that a man in the middle attack is taking place.

SSL without a trusted certificate provides NO additional security over communicating in the clear. AT ALL.

Self-signed != untrusted, and CA-signed may not always = trusted. Why do people always seem to just assume that CA-signed/self-signed are equivalent to trusted/untrusted?

There are ways to verify certs other than having a site- (or attacker-) chosen CA sign them. For example, the Perspectives [cmu.edu] Firefox extension relies on "you can't fool all of the people all of the time" rather than the "you can't fool any of these people ever" that the CA system relies on. And it works regardless of whether a cert is self-signed.

Every time a Firefox or SSL/TLS article comes up, we go over this again and again. SSL/TLS is both an encryption and authentication scheme; it sucks, but that's what the spec says it is. Firefox can't go off and do its own thing, lest someone start exploiting the fact that their implementation of SSL/TLS is no longer an authentication scheme and start taking advantage of people who expect otherwise. The W3C needs to separate authentication and encryption in the standards themselves; that's the only proper and safe way to change things.

What, the protocol spec says "thou shalt have such-and-such a user interface"? It completely forbids the application determining "the protocol can provide X and Y, but in this case we only have X and not Y", and telling the user what we actually have rather than what the protocol we're using could theoretically provide? If so... that's really very stupid, and maybe people should ignore it.

The problem (I think) with treating self-signed certificates as 'unsigned with the added bonus that communications can't be eavesdropped' is that it would rely on site owners not asking for sensitive information while using a self-signed cert.

Most users are too dumb to check for SSL, good luck getting them to discern insecure, 'insecure but can't be eavesdropped', and secure. Hell, most users would be shocked to find out you can eavesdrop on their traffic in the first place.

Most users are too dumb to check for SSL, good luck getting them to discern insecure, 'insecure but can't be eavesdropped', and secure.

Fair enough. So don't put the secure green lock up for self-signed SSL. Put up a totally different icon in some neutral color like blue. If they click on it, it says: the connection is encrypted and can't be eavesdropped, but there is no guarantee you are talking to who you think you are.

Hell, most users would be shocked to find out you can eavesdrop on their traffic in the first place.

Good point! Maybe firefox 3 should pop up a huge error screen every time you try to connect to a site with plain http. It could say something like:

The server you are connecting to is insecure. Maybe there is a configuration error on the server. Or maybe someone is trying to impersonate it. Oh, and by the way, not only that, but any communication with it may be trivially intercepted by any 3rd party...

There's an ambiguity to SSL certs. They do two things at once. They 1) prove that the person who has the cert is that person, through a certificate authority, and they 2) provide for encryption. Why not simply have grades of SSL? A self-signed cert could then allow encryption and perhaps show a yellow padlock, whereas a CA-signed cert could provide encryption plus CA authentication and give a green padlock or whatever. What's so freaking difficult about that?
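The "grades of SSL" idea boils down to a three-way mapping from what was actually verified to a distinct indicator; the state names and colors in this sketch are purely illustrative:

```python
# Sketch of graded padlocks: the indicator reflects exactly what we
# verified, instead of lumping self-signed in with "broken".
def padlock(encrypted, ca_verified):
    if not encrypted:
        return "none"       # plain HTTP: no lock at all
    if not ca_verified:
        return "yellow"     # self-signed: encrypted, identity unverified
    return "green"          # CA-signed: encrypted and authenticated
```

Note the ordering this produces matches the security hierarchy argued for above: green over yellow over nothing, with no state that looks scarier than plain HTTP.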

So have the browser treat it as being unsigned. Don't do anything special. Don't put up a big green lock. Don't make a fuss. Even if its not really MORE secure, its certainly not LESS secure, so firefox at WORST should treat it exactly the same as plain http.

Because it tells you that there is an error and the site is broken, rather than clearly warning you that the site isn't really secure? Because it has a priority inversion, where it treats self-signed as less secure than completely unsigned?

Yeah, I don't either. No one has ever explained to me why a legitimate site that needs SSL couldn't afford a legit signed cert. They always raise the issue of cost. To me it's like a potential restaurant owner complaining that the board of health won't let him operate a restaurant because it costs too much to comply with the regulations. I don't ever want to eat at any restaurant where the health code was not followed. This may cause unsanitary restaurateurs to go underground and serve even less sanitary food.

It should by default accept a self-signed cert transparently without any fuss. It SHOULDN'T show a big green lock. It should just be a regular connection. If the self-signed cert changes on a subsequent visit, THEN they should get a warning. That's it.

The problem is, we've tried to train users to look for the "https" or the lock, or both. Getting rid of the lock for self-signed connections is fine, but the https is still there, and it's misleading.

Opportunistic encryption was the original goal of the FreeS/WAN [freeswan.org] project. It was not realised, and the eventual forks (OpenSwan and strongSwan) are now aimed more at running IPSEC tunnels.

Opportunistic encryption was the original goal of the FreeS/WAN [freeswan.org] project. It was not realised,

That depends on your definition of "not realized". Before the FreeS/WAN project was abandoned, opportunistic encryption had been implemented and was in use. Adoption was probably quite small, but it existed.

The video starts out saying that increased encryption is needed thanks in part to warrantless government surveillance. It then goes on to describe a system that assumes no MITM attacks can exist. The fact is, however, that governments are entirely capable of performing MITM attacks, as can telecommunications companies; and if it becomes popular we may see more techniques that allow individuals to perform MITM attacks. While this algorithm has significant merit, care needs to be taken to avoid a false sense of security.

It does not "assume no MITM attacks can exist". It deliberately does not protect against them. This is not the same thing, as one is a position of ignorance whereas the other is an intentional choice not to defend against that threat.

In practical terms, MITM is considerably harder than simply listening in. Wide-scale surveillance such as what caused the big recent flap with FISA and the NSA simply can't perform MITM attacks. Protecting against pure eavesdropping while remaining open to MITM attacks is useful, it's just not a 100% solution. As long as it doesn't sell itself as one (and I see no indication that it is) then there's absolutely no problem with that.

Exactly - this is what Google is interested in. If ISPs start replacing Google adverts in web pages with their own (or worse, the AdWords adverts in Google search results), then Google will lose huge amounts of revenue. Luckily, but only by chance, Google's self-interest in this case is aligned with ours.

By providing a less secure, but computationally and administratively cheaper, method of encryption, we might be able to increase the depressingly small fraction of encrypted traffic on the Internet.

If the encryption is computationally cheaper, then the decryption is computationally cheaper. I'd rather people know that what they're sending over the 'net can be sniffed than have them think that because example.com uses Rot13 encryption their traffic is private.

If the encryption is computationally cheaper, then the decryption is computationally cheaper. I'd rather people know that what they're sending over the 'net can be sniffed than have them think that because example.com uses Rot13 encryption their traffic is private.

A few key points:

Obfuscation != Encryption

Cost to Encode (encrypt/compress/obfuscate) does not directly relate to the cost to decode. The relationship differs per algorithm used.

Cost to de-obfuscate without proper keys can be significantly more than cost to de-obfuscate with proper keys.
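To make the first point concrete, the Rot13 mentioned above is pure obfuscation: reversing it needs no key at all, so anyone on the wire can read it. Real ciphers are cheap to decrypt *with* the key and impractical without it. A trivial sketch:

```python
import codecs

# Rot13 "encryption" has no key: the same transform both encodes and
# decodes, so an eavesdropper recovers the plaintext for free.
secret = codecs.encode("my password", "rot13")
recovered = codecs.decode(secret, "rot13")
print(secret, "->", recovered)
```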

Forging a CA signature on a certificate would be a BIG DEAL. Forging a DNS entry, especially with ISP cooperation (read: government snooping), is DEAD SIMPLE.

True, if it required you to forge a real CA's signature. The whole point of self-signed certs is that there is no CA - you're not impersonating anyone other than the website, and it has zero effect on anything else. I can make such a certificate up in a terminal in the time it takes me to type it. I don't know if it would be a bigger deal legally, but technically it is equally dead simple. And if it were legally a big deal, I'm sure they could get some retroactive immunity for it.

I read the technical details [google.com] and they talk about an advert being encoded in the CNAME, to distribute a curve25519 key and a port number. But they could have done it much more simply using technology that already exists: encode the 160-bit SHA1 fingerprint of an X.509 certificate and a port number in the CNAME (only 32 chars needed in base32). Then connect to this port using HTTPS and simply verify that the certificate matches the fingerprint! Advantages:

This technique works using standard TLS/SSL technology, no need to reinvent a poor man's TLS protocol like they did with Salsa20/8, Curve25519, Poly1305, etc.

It is just as secure as their "Obfuscated TCP" (both techniques rely on the DNS records not having been tampered with).

The SHA1 fingerprint being encoded in the CNAME allows the browser to verify its validity without prompting the enduser with scary dialog boxes (and it also works with self-signed certs).

And as a bonus, the fact that a standard HTTPS server is running allows endusers who really want true security to explicitly connect to the HTTPS URL themselves (without relying on the CNAME trick). Doing this would make the browser verify the validity of the cert the normal way (scary dialog boxes... or not, if the cert's CA is trusted).
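The fingerprint-in-CNAME scheme proposed above could look something like this. The label format here is invented for illustration (nothing in it matches a real deployed scheme): pack the SHA-1 fingerprint plus a 2-byte port into one base32 DNS label, and have the client recompute it from the certificate it was actually served.

```python
import base64
import hashlib
import struct

def make_cname_label(cert_der, port):
    """Pack a cert's 20-byte SHA-1 fingerprint and a 2-byte port into a
    base32 DNS label (20 bytes alone would be exactly 32 chars)."""
    payload = hashlib.sha1(cert_der).digest() + struct.pack("!H", port)
    return base64.b32encode(payload).decode("ascii").rstrip("=").lower()

def verify(cert_der, port, label):
    """The client recomputes the label from the certificate the server
    actually presented; a mismatch means tampering (or the wrong cert)."""
    return make_cname_label(cert_der, port) == label
```

Since the label is derived from the cert itself, any MITM swapping in a different cert fails the check, under the same assumption the article makes: that the DNS record itself wasn't tampered with.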

Yeah, I was looking at the stream cipher, Salsa20 (never heard of it before), and I see that DJB distributes very fast implementations (less than 1 cycle/byte on amd64, holy sh1t!). Traditional TLS/SSL ciphers like AES are indeed 2 orders of magnitude (base 2) slower.

1) IPSec is very much about authenticating who you are talking to on the other end. If two nodes connect for the first time, with no previous knowledge, they have no way to authenticate that the other is who they say they are. This is a failure mode for IPSec - you'll get no SAs, and most implementations will drop traffic that would've required that SA.

2) The classic key exchange issue:
a) You can authenticate a session using certs, but now you have the same problems as signed SSL certs, except that every host participating needs to have one and know about all the other nodes' CAs.
b) You can instead opt to use a pre-shared key, but now you have to pre-share the key. This is fine when you are looking to secure specific traffic to a specific node.

For uses that aren't affected by these downsides, IPSec is a hugely popular technology. VPNs between a branch and central office as well as remote access for roaming users are very popular. Of course in these cases you can very meaningfully authenticate the other end and the key exchange isn't a problem.

we might be able to increase the depressingly small fraction of encrypted traffic on the Internet.

I agree that this would indeed be a good thing for several reasons. An encrypted message in a medium where most everything is plaintext may attract the attention of attackers or, worse, be seen as "suspicious" by a government. (Certainly the U.S. and the PATRIOT Act spring to mind, but let's not forget the truly oppressive governments such as China's and any number of third-world dictatorships.) If online privacy via encryption comes to be a right that everyone gets used to enjoying—much like how almost all mail is sent in sealed envelopes, whether or not its contents are sensitive—then it will be that much harder, for technical and/or social reasons, for an authority to take away. If Obfuscated TCP is even a token step in that direction (and it seems to be a bit better than that), then it is probably a good thing overall.

Someone earlier today on Slashdot was plugging Cory Doctorow's Little Brother, and I'm going to follow that example (you can read it for free!) [craphound.com], since part of it advances the same idea.

If you watch the "video", one of their explicit points (#2) is that the user shouldn't be informed of this. This will not trigger the little security lock icon. From a user's point of view, you shouldn't be able to tell whether the web server you are connected to is unsecured or secured with this little bit of obfuscation.

This isn't for real security, it's to make simple sniffing harder. As the video puts it, it simply raises the bar for someone who wants to read other people's traffic.

It seems like a very good idea to me. It sounds quite intelligent (from what I know of TCP/IP, etc.). Some protocols would need changes (protocols where there is one connection that is never dropped would need some way to communicate that the encryption is OK during the first, and possibly only, connection).

Either way, it sounds like quite an improvement over what we have now.

If you watch the video, your brain will leak out through your ears. It's terrible. Why produce a video which seems to be a black screen with a dark blue line wiggling when the person talks? Why pick a person with a crappy British accent and a speech impediment? Who's going to understand? Why flash up a couple of words here and there like "SSL" and "HTTP"? Why produce such a steaming pile of crap and call it an "introductory video"?

Instead, whoever is the video star in this could have written down their ideas in plain text. That would allow for easy reading and comprehension by people all over the world. Maybe I can read quickly. Maybe I don't want to sit around waiting for you to lisp and stammer through your presentation. Maybe I'd understand it better if I read it than if I heard it on a crappy video. Maybe I don't want to waste my bandwidth downloading several megabytes of video, where the same information in plain text might be a few kilobytes.

It may be slightly off-topic, but the parent has a VERY valid point. Self-signed sites are encrypted, but best of luck getting people to use them thanks to the three clicks required and the SMALL text. When I used the new Firefox release I was even confused at first.

Now back to obfuscated TCP: This is on par with using NAT to fix the lack of IP addresses. Just fix the damn thing properly and stop screwing around with time wasting half-fixes (yeah they admitted it).

About the only thing this is going to do is make troubleshooting problems with Ethereal or other packet sniffers a pain in the a$$. Thanks.

NSA and ISPs like to snoop, data mine and traffic shape. Traffic shaping can even be a good thing in certain situations (and I'm not talking Comcast here.) It's highly unlikely anything like obstcp will ever get standardized, since it prevents exactly what you just mentioned.

Self-certified certificates are worse than plain unencrypted traffic because of psychological reasons, not because of technical reasons.

Your ISP can easily and automatically intercept self-signed certificates and present their own certificate, which you will gladly accept. Now you think that you are less secure than with a verified certificate but still "somewhat secure". Technically, however, you are no better protected than with plain unencrypted HTTP.

Now, this is where humans fail. Because you still think you are "somewhat safe", you will take higher risks, write things that you would normally not write, click on links that you perhaps wouldn't click on over a plain non-encrypted page. The famous false sense of security, which actually does nothing except making you feel good.

Um, no. It's so that it costs more to index the web, meaning that any competitor to google has to pay more to challenge their dominant position as search engine. Microsoft has to bleed even more money trying to compete. Yahoo might have to abandon search.

TFA explains how Obfuscated TCP is both opportunistic and transparent. Servers will provide encrypted transmission only when the client requests it, and unlike SSL/TLS there is no additional handshaking required to set up the connection.

A car analogy. Suppose you use regular unleaded fuel in your car and it drives fine. High octane fuel then becomes available at a higher cost. Your car continues to drive fine on regular unleaded, but you have the choice to fill up with high octane at any petrol station that serves it.

This is stupid. Nobody crawls the web by sniffing traffic. Google and everyone else connects to webservers the same way you do. For your post to make any sense we have to assume that this would make sites using Obfuscated TCP inaccessible by default, which goes against its entire design philosophy.

That is why you check the cert and add it to your personal list of allowed (by checking it carefully you prevented the initial MITM). Then if the cert ever changes you get notified of the change (catching a later MITM). Again like I said, I am not against the scary warnings, I'm for user education. It is simply a user education thing.

That's actually a problem with OpenSSL and mod_ssl. Check out mod_gnutls and gnutls for an approach where name-based virtual hosts can each have their own certificate that validates in most current browsers (Safari, Opera, Firefox, IE7).

The big barrier to SSL for small sites is cost - in some cases the cost of an SSL cert will exceed all other costs.

I would disagree. One of the biggest barriers to implementing SSL on my sites is the lack of IP addresses. I only have two IP addresses, yet I host 16 web sites. My understanding is that HTTPS requires IP-based virtual hosts, which would prevent me from hosting more than two sites if I were to use SSL for all of them.

Run your websites on different ports, you have 65535 of them per IP. Make http://site1/ [site1] redirect to https://site1:1111/ [site1], http://site2/ [site2] redirect to https://site2:2222/ [site2], etc. I concede this prevents users from directly typing the https url in their address bar as they don't know the port number in advance, but again 99% of the users let themselves be redirected to the https content on most websites anyway (except paranoids like me:P).
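The port-per-site scheme above amounts to a host-to-port mapping plus a redirect; a sketch with invented hostnames and port numbers:

```python
# Hypothetical mapping: each HTTP vhost redirects to its own HTTPS port,
# so many SSL sites can share one IP address.
https_ports = {"site1.example": 1111, "site2.example": 2222}

def redirect_url(host, path="/"):
    """Build the https:// URL a plain-HTTP request should be redirected to."""
    return "https://%s:%d%s" % (host, https_ports[host], path)
```

The trade-off is exactly the one conceded above: the port is only discoverable via the HTTP redirect, so users can't type the HTTPS URL directly, and strict egress firewalls may block the non-standard ports entirely.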

Use certs with the "subjectAltName" X.509 extension that let you create a single cert valid for multiple DNS names. I do this (with a CA I created & control), it works very well. The downside is that I think commercial CAs make you pay extra bucks to sign such certs (if they even accept to do that).
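A subjectAltName cert is accepted when the requested hostname matches any of its DNS entries. This simplified matcher shows the idea; real validation (per RFC 2818) has more corner cases than this sketch handles:

```python
def hostname_matches(hostname, san_dns_names):
    """Check a hostname against a cert's subjectAltName DNS entries.
    Handles plain names and single-level wildcards like *.example.com."""
    for pattern in san_dns_names:
        if pattern == hostname:
            return True
        if pattern.startswith("*."):
            # a wildcard matches exactly one leftmost label
            suffix = pattern[1:]                  # e.g. ".example.com"
            head, sep, tail = hostname.partition(".")
            if sep and "." + tail == suffix:
                return True
    return False
```

This is why one SAN cert can cover all 16 sites on a single IP: the browser just needs the requested name to appear in (or match a wildcard in) the list.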

Anybody remember what happened to RFC 2817 [ietf.org]? It tried to address this very problem by introducing the "Upgrade: TLS/1.0" header and the "426 Upgrade Required" status code, but I don't think any browsers or servers implement them.

Running your web sites on non-standard ports is a great way for your web site not to be accessible to users accessing the internet through firewalls that limit egress traffic based on TCP destination ports.

For a public site using non-standard ports is an easy method to shoot yourself in the foot - you immediately block all users behind proxies or firewalls that only allow communication on "standard" web ports.