Internet architects mull changes to fight SSL-busting CRIME attacks

Engineers who help oversee Internet standards are proposing changes to long-standing website practices in order to guard against a new attack that exposes user login credentials even when they are transmitted through encrypted channels.

The tentative recommendations are included in a draft document filed earlier this week with the IETF, or Internet Engineering Task Force. It is among the first technical documents to grapple with an attack unveiled last month that allowed white hat hackers to decrypt the contents of encrypted session cookies used to log in to user accounts on Dropbox.com, Github.com, and other sites. (The sites took measures to block the exploit after researchers Juliano Rizzo and Thai Duong gave them advance notice of their exploit.) Short for Compression Ratio Info-leak Made Easy, CRIME provided a reliable and repeatable means for attackers to defeat the widely used Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. Together, they form the basis of virtually all encryption between websites and end users.

CRIME is able to deduce the contents of encrypted communications that use data compression to reduce the amount of time it takes to move packets from one point to another. By injecting different pieces of known data into a compressed SSL data stream over and over and then comparing the number of bytes each time, attackers can use the method to deduce the encrypted contents character by character. The method worked against protected Web communications that used TLS compression or SPDY, an open networking protocol developed by Google engineers.
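The mechanics are simple enough to reproduce with any DEFLATE implementation. The following Python sketch (an illustration with made-up values, not the researchers' tooling) compresses a simulated request that mixes an attacker-chosen URL path with a secret cookie; when the attacker's guess repeats the secret, the compressor replaces the second copy with a short back-reference and the output shrinks:

```python
import zlib

# Hypothetical secret cookie the attacker wants to recover.
SECRET_COOKIE = "sessionid=7a3fk9"

def compressed_size(attacker_path: str) -> int:
    """Size of the compressed request as an eavesdropper would observe it
    on the wire (encryption preserves this length)."""
    request = (
        f"GET /{attacker_path} HTTP/1.1\r\n"
        f"Cookie: {SECRET_COOKIE}\r\n\r\n"
    )
    return len(zlib.compress(request.encode()))

# A guess that repeats the secret compresses into fewer bytes than one
# that doesn't, leaking information about the cookie's contents.
matching = compressed_size("sessionid=7a3fk9")
wrong = compressed_size("sessionid=qqqqqq")
print(matching < wrong)  # True: the matching guess yields a smaller request
```

The same comparison, repeated with varying guesses, is the core of the attack.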

"It is RECOMMENDED to disable compression when communications are not trivial, unless traffic increase is considerable," IETF members B. Kihara and K. Shimizu wrote in the draft, which was billed as a "work in progress." "If data are confidential and other mitigations are inapplicable, compression MUST be disabled, especially when the compression is applied in the lower layer like TLS compression."

When compressing whole data in the same context is unavoidable, the draft continued, encryption schemes must insert random paddings to prevent disclosure of the original size of the compressed data. "Note that this mitigation cannot prevent attackers from guessing secrets by statistical approaches," the authors continued. The ineffectiveness of padding wasn't lost on other cryptographers. "Adding random padding to hide the length of compressed/encrypted data is like setting your Prius on fire because it doesn't pollute enough," Johns Hopkins University professor Matthew Green said in a Twitter dispatch. Marsh Ray, a software developer with two-factor authentication provider PhoneFactor, replied: "Or like adding noise to electric cars so hearing impaired people can cross the street?"

This week's draft will expire in the middle of April and could be updated, replaced, or obsoleted by other documents at any time.

31 Reader Comments

I still don't get how this attack works. How can Eve inject arbitrary data pre-encryption, then watch the resulting encrypted output, if they don't already have control over that end of the connection?

I still don't get how this attack works. How can Eve inject arbitrary data pre-encryption, then watch the resulting encrypted output, if they don't already have control over that end of the connection?

My understanding is they inject data into a compressed SSL session they control and then they compare the packet size to the SSL session they're trying to decrypt.

When compressing whole data in the same context is unavoidable, the draft continued, encryption schemes must insert random paddings to prevent disclosure of the original size of the compressed data. "Note that this mitigation cannot prevent attackers from guessing secrets by statistical approaches," the authors continued.

I wouldn't have minded a little more explanation on this point, as it's not quite clear what the weakness is. Could a potential attacker request the same data (compressed with random padding) repeatedly, and then run some kind of analysis on the differences in the requested packets, to infer what is genuine data and what's padding, as the genuine data is presumably invariant across all of them?

article wrote:

The ineffectiveness of padding wasn't lost on other cryptographers. "Adding random padding to hide the length of compressed/encrypted data is like setting your Prius on fire because it doesn't pollute enough," Johns Hopkins University professor Matthew Green said in a Twitter dispatch

And perhaps if I understood why the padding was ineffective, then I would be able to understand this analogy...

I still don't get how this attack works. How can Eve inject arbitrary data pre-encryption, then watch the resulting encrypted output, if they don't already have control over that end of the connection?

The attack basically works like this:

Requirements:
- The victim's browser is known to have an HTTPS cookie for a known site.
- The victim's browser and the known site enable SSL compression.
- The attacker can cause the victim's browser to load the attacker's javascript (could be phishing, MITM on HTTP, linkbait on articles, etc).
- The attacker can observe the amount of data sent to the known site.

The attacker causes the victim's browser to make repeated requests to the HTTPS site with attacker controlled data via image urls, javascript get or post, etc. If the data matches the cookie value, it will be compressed and the amount of data sent to the known site will be reduced. Note that while data size may not be easily observable, data size influences transfer time, and transfer time is observable within javascript.
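In simulation, with zlib standing in for TLS compression and a made-up cookie, the guess-and-check loop sketched above looks roughly like this:

```python
import zlib

SECRET = "sessionid=7a3fk9"  # made-up cookie value the attacker wants to learn

def observed_size(guess: str) -> int:
    # What the eavesdropper measures: the size of the compressed request,
    # which carries both the attacker's guess (in the URL) and the cookie.
    request = f"GET /{guess} HTTP/1.1\r\nCookie: {SECRET}\r\n\r\n"
    return len(zlib.compress(request.encode()))

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"
known = "sessionid="  # the cookie's name is public knowledge
for _ in range(6):
    # Pick the candidate whose injection compresses best. Byte-granular
    # sizes can tie between candidates; the real attack breaks ties by
    # sizing the request so the bit difference lands on a byte boundary.
    known += min(ALPHABET, key=lambda c: observed_size(known + c))
print(known)
```

Each character costs only one pass over the alphabet, which is why the attack is practical.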

The ineffectiveness of padding wasn't lost on other cryptographers. "Adding random padding to hide the length of compressed/encrypted data is like setting your Prius on fire because it doesn't pollute enough," Johns Hopkins University professor Matthew Green said in a Twitter dispatch

And perhaps if I understood why the padding was ineffective, then I would be able to understand this analogy...

Maybe his analogy wasn't that padding is cryptographically ineffective, but that adding padding is ineffective when your goal is compression.

article wrote:

Marsh Ray, a software developer with two-factor authentication provider PhoneFactor, replied: "Or like adding noise to electric cars so hearing impaired people can cross the street?"

That one makes even less sense. Did he intentionally mix this up? (Adding noise to electric cars is for the benefit of blind people, not for deaf people who can just look left and right before crossing.)

I'll add one of my own, sticking with the theme: Adding padding when encrypting compressed data is like adding a gasoline engine to your electric car so blind people can hear you coming.

Maybe his analogy wasn't that padding is cryptographically ineffective, but that adding padding is ineffective when your goal is compression.

That's how I understand it.

article wrote:

Marsh Ray, a software developer with two-factor authentication provider PhoneFactor, replied: "Or like adding noise to electric cars so hearing impaired people can cross the street?"

Fearknot wrote:

Did he intentionally mix this up?

It certainly made a lot more sense in my head than it does looking at it now and I would have thought about it longer had I known it was going to end up in Ars!

There was also a link in my tweet to an article showing this was a very real situation in practice. The NHTSA is actually mandating something like this.

Imagine a world where hearing impaired people were being publicly run down and maimed by electric cars. The media and population are all in a dilemma about whether or not the benefits of electric cars are worth the costs. The idea is that adding noise to electric cars is seemingly a waste of good silence (channel bandwidth, get it?), but it still helps enough in practice that it enables us to gain the main benefits of electric cars without feeling guilty.

Fearknot wrote:

I'll add one of my own, sticking with the theme: Adding padding when encrypting compressed data is like adding a gasoline engine to your electric car so blind people can hear you coming.

Works for me.

And, in fact, the Prius does have a gasoline engine due to the various tradeoffs involved.

The lesson here: just because an encryption algorithm E is chosen-plaintext secure doesn't mean that composing an arbitrary function F before E is chosen-plaintext secure. Necessarily, composing F after E is secure (so long as F is independent of E's key), but before, all bets are off and you need to re-prove security.

The ineffectiveness of padding wasn't lost on other cryptographers. "Adding random padding to hide the length of compressed/encrypted data is like setting your Prius on fire because it doesn't pollute enough," Johns Hopkins University professor Matthew Green said in a Twitter dispatch

And perhaps if I understood why the padding was ineffective, then I would be able to understand this analogy...

Maybe his analogy wasn't that padding is cryptographically ineffective, but that adding padding is ineffective when your goal is compression.

article wrote:

Marsh Ray, a software developer with two-factor authentication provider PhoneFactor, replied: "Or like adding noise to electric cars so hearing impaired people can cross the street?"

That one makes even less sense. Did he intentionally mix this up? (Adding noise to electric cars is for the benefit of blind people, not for deaf people who can just look left and right before crossing.)

I'll add one of my own, sticking with the theme: Adding padding when encrypting compressed data is like adding a gasoline engine to your electric car so blind people can hear you coming.

I think what's trying to be communicated is that adding padding just means that it takes a little longer for the attacker to decrypt because it's just extra content to test against. Hence setting your Prius on fire because it doesn't pollute enough. It's still the same problem, you're just adding more to it. Same with sound for deaf people. You're adding more, but the problem still remains.

To add my own analogy, it's like trying to fix a research paper by adding more words when it is about the wrong topic in the first place.

The lesson here: just because an encryption algorithm E is chosen-plaintext secure doesn't mean that composing an arbitrary function F before E is chosen-plaintext secure.

Nevertheless, this protocol metamodel dominated by channels and layering is exactly what network, security, and software architects have sold to the world. "Secure Sockets Layer" and "Transport Layer Security" provide crypto as a bonus feature of transport with the very strong implication to application protocol developers that it's abstracting away the security considerations just like a sockets API abstracts away reliable in-order transport.

So the application code has been written on these assumptions. To say there's a lot of it would be a huge understatement. When such widely held assumptions are found to be shaky in practice, we have to fix our practice to meet the assumptions.

I don't see any way out of this that doesn't involve adding some amount of randomized padding.

I think what's trying to be communicated is that adding padding just means that it takes a little longer for the attacker to decrypt because it's just extra content to test against.

I'm not ready to give up on padding yet. Maybe we can make it take more than "a little longer" for the attacker.

Yes, padding simply means the attacker must take more samples to gather more statistics. But one could make a similar claim about key lengths today: they just make it take more work. Of course, padding won't be able to make the attacker's job 2^128 times harder like we might by lengthening our crypto keys from 128 to 256 bits.

But what if padding could make the difference between CRIME requiring a handful of HTTP requests per decrypted byte and needing millions? I think that could be worthwhile, even if we still decide to abandon automatic compression on the secure transport layer in general. Having some nonzero amount of plaintext length hiding seems like a nice, conservative choice that could occasionally save some higher-level protocols from their mistakes as well.
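That tradeoff is easy to check in simulation. Using a zlib length oracle with made-up values, a uniform random pad hides any single observation, but the constant bias still shows through after averaging:

```python
import random
import zlib
from statistics import mean

SECRET = "sessionid=7a3fk9"  # made-up cookie value

def padded_size(guess: str) -> int:
    request = f"GET /{guess} HTTP/1.1\r\nCookie: {SECRET}\r\n\r\n"
    # Random padding masks the true length of any one observation...
    return len(zlib.compress(request.encode())) + random.randint(0, 15)

def mean_size(guess: str, samples: int = 500) -> float:
    # ...but the bias survives averaging over repeated requests.
    return mean(padded_size(guess) for _ in range(samples))

print(mean_size("sessionid=7a3fk9") < mean_size("sessionid=qqqqqq"))  # True
```

So padding raises the sample count the attacker needs, without eliminating the leak.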

Well, the other approach is to compress the secrets independently. If the document is divided into independent sections each of which is compressed without reference to the other sections, and then those are packed together to form the whole message, then there is no leak. The attacker can modulate their own content section and not extract information from the other packets. The attacker at this point has only the ability to attack SSL via known plaintext of their own section, but that problem has been around for ages and SSL is resistant to that.

This makes the packet assembly more complicated but seems like generally good practice. Mixing secrets with other parts of the message in any way seems bound to cause tears.
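A toy version of that idea, using zlib and made-up values: compress the attacker-controlled section and the secret section independently, and cross-section redundancy stops affecting the total size:

```python
import zlib

SECRET = "Cookie: sessionid=7a3fk9"  # made-up secret section

def shared_context(attacker: str) -> int:
    # Both sections feed one compressor: matches across them leak.
    return len(zlib.compress((attacker + SECRET).encode()))

def separate_contexts(attacker: str) -> int:
    # Each section is compressed on its own, then packed together:
    # the attacker's data can no longer back-reference the secret.
    return len(zlib.compress(attacker.encode()) + zlib.compress(SECRET.encode()))

right, wrong = "sessionid=7a3fk9", "sessionid=abcdef"
print(shared_context(right) < shared_context(wrong))         # leak
print(separate_contexts(right) == separate_contexts(wrong))  # no leak
```

The cost is a worse overall compression ratio, since redundancy between sections goes unused.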

I have some intuition about general computer security, but I'm not well acquainted with the specifics of SSL.

If the compression scheme was theoretically perfect, its output would appear to be a perfectly random stream of bits. Similarly, with the encryption. Are we saying that because neither the compression nor the encryption is theoretically perfect (perhaps neither comes close to being so, because they are engineered for speed/latency of calculation/transmission); the result is a vulnerability to "known plain-text" attack?

If so, then I can't see how removing the compression will result in great improvements to overall security. It appears at face value to be a retrogressive step that will only result in vulnerability to an alternative "known plain-text" attack.

The correct answer might be to use more theoretically perfect (albeit computationally more expensive) schemes for compression and encryption. Perhaps we ought to have variable strengths of compression & encryption (with pluggable algorithms), so that a high-traffic site only requiring low-grade authentication might use low-grade encryption, whereas an on-line banking system could use something stronger with poorer page loading latency?

I think what's trying to be communicated is that adding padding just means that it takes a little longer for the attacker to decrypt because it's just extra content to test against.

I'm not ready to give up on padding yet. Maybe we can make it take more than "a little longer" for the attacker.

Yes, padding simply means the attacker must take more samples to gather more statistics. But one could make a similar claim about key lengths today: they just make it take more work. Of course, padding won't be able to make the attacker's job 2^128 times harder like we might by lengthening our crypto keys from 128 to 256 bits.

But what if padding could make the difference between CRIME requiring a handful of HTTP requests per decrypted byte and needing millions? I think that could be worthwhile, even if we still decide to abandon automatic compression on the secure transport layer in general. Having some nonzero amount of plaintext length hiding seems like a nice, conservative choice that could occasionally save some higher-level protocols from their mistakes as well.

This suggests to me that you're missing the point Marshray. The intent of TLS-compression and SPDY "Speedy" is that the overall package is compressed to reduce transit times. If you're adding padding to increase the crypto strength of a compression, you will have to add so much padding by comparison to actual data that when compressed you will receive no benefit from the compression because the total packet size must increase to accommodate the padding.

At which point it becomes moot to use either the compression or the padding, as the padding is merely a response to a cryptographically weak compression algorithm.

TLS and SSL 3.0 continue to be (relatively) secure if the cryptographically weak compression algorithms are not used.

It is more useful (in my opinion) to recognize that arbitrary composition breaks the cryptographic guarantees, and change TLS to become more aware of higher layers in order to avoid the breaks.

An example solution (which I had already prototyped) is to make TLS reset its compression whenever it sends a newline character, until it gets a double-newline (or double-CRLF) at which point it reverts to normal compression. When looked at from higher-level application perspective, this means each HTTP header is compressed separately--removing most (or all) cases where attacker-controlled and victim-controlled data are compressed together. Sure, this solution also affects the compression ratio; but at least its advantage is quantifiable, not a matter of statistics.
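A rough sketch of that behavior (my reconstruction of the idea, not the actual prototype): reset the compression context at each newline so no header can back-reference another:

```python
import zlib

def compress_per_header(request: bytes) -> bytes:
    # Reset the compressor at every CRLF: each header line is compressed
    # in isolation, so an attacker-controlled header cannot reference
    # the cookie header.
    out = b""
    for line in request.split(b"\r\n"):
        comp = zlib.compressobj()
        out += comp.compress(line + b"\r\n") + comp.flush()
    return out

cookie = b"Cookie: sessionid=7a3fk9"     # made-up secret header
guess_right = b"X-Guess: sessionid=7a3fk9"
guess_wrong = b"X-Guess: sessionid=abcdef"
size = lambda guess: len(compress_per_header(guess + b"\r\n" + cookie))
print(size(guess_right) == size(guess_wrong))  # the guess no longer matters
```

Redundancy within each header is still exploited, which is where the quantifiable compression benefit survives.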

The ineffectiveness of padding wasn't lost on other cryptographers. "Adding random padding to hide the length of compressed/encrypted data is like setting your Prius on fire because it doesn't pollute enough," Johns Hopkins University professor Matthew Green said in a Twitter dispatch

And perhaps if I understood why the padding was ineffective, then I would be able to understand this analogy...

Maybe his analogy wasn't that padding is cryptographically ineffective, but that adding padding is ineffective when your goal is compression.

article wrote:

Marsh Ray, a software developer with two-factor authentication provider PhoneFactor, replied: "Or like adding noise to electric cars so hearing impaired people can cross the street?"

That one makes even less sense. Did he intentionally mix this up? (Adding noise to electric cars is for the benefit of blind people, not for deaf people who can just look left and right before crossing.)

I'll add one of my own, sticking with the theme: Adding padding when encrypting compressed data is like adding a gasoline engine to your electric car so blind people can hear you coming.

No he did not mix it up that badly. The noise is added to the car for the same reason that many people mount deer whistles ... to make it loud enough to be heard. Any hearing problem makes it difficult to hear a car when it is running quietly. You are right though that the advertised purpose is to allow the blind to hear traffic, but it also helps sighted people to sense (hear) traffic they cannot see...for instance that car that is just about to hit them from behind.

- The victim's browser is known to have an HTTPS cookie for a known site.
- The victim's browser and the known site enable SSL compression.
- The attacker can cause the victim's browser to load the attacker's javascript (could be phishing, MITM on HTTP, linkbait on articles, etc).
- The attacker can observe the amount of data sent to the known site.

The attacker causes the victim's browser to make repeated requests to the HTTPS site with attacker controlled data via image urls, javascript get or post, etc. If the data matches the cookie value, it will be compressed and the amount of data sent to the known site will be reduced. Note that while data size may not be easily observable, data size influences transfer time, and transfer time is observable within javascript.

That still doesn't make sense. If the attacker is able to inject arbitrary javascript, they can just grab document.cookie directly, possibly keystrokes etc, making all of this careful compression timing into Rube Goldberg nonsense.

What is not yet clear is how the attacker is including their input in the source material before it gets compressed and encrypted. In their video, they mention JavaScript, and the attacker is using a proxy. The proxy is not a requirement for the attack, however. In the video we can see that the SSL certificate the browser is receiving is not modified in any way. This means the attacker’s proxy is not man-in-the-middling SSL, and is instead acting the normal way that HTTP proxies do when asked to make an SSL connection: use CONNECT and act as a dumb pipe pushing bytes. Hopefully the method of how the attacker is including their input will be revealed in their upcoming presentation.

Okay, this is sounding like bullshit to me. All that detail about the nuances of SPDY, RLE, etc is completely irrelevant without step 1: how do you get the attack onto the target?

If you can already run arbitrary javascript on the victim's browser within the context of the target site, there are a whole bunch of more powerful attacks you could do than this timing nonsense.

That still doesn't make sense. If the attacker is able to inject arbitrary javascript, they can just grab document.cookie directly, possibly keystrokes etc, making all of this careful compression timing into Rube Goldberg nonsense.

What you are missing is that the JavaScript injection is not in the same context as the site. So an attacker need not be able to inject JavaScript into the HTTPS site, they just need the victim to run some JavaScript and use cross domain IMG tags while the attacker observes transmission sizes.

If you can already run arbitrary javascript on the victim's browser within the context of the target site, there are a whole bunch of more powerful attacks you could do than this timing nonsense.

But that's just it, this attack does not require the Javascript to run within the context of the target site.

It could even be done without JavaScript at all. For example, imagine a long slow-loading HTML document having IMG tags adaptively generated by the attacker to make requests against the secure target server. The user's auth cookies will be sent along with these requests.

I have some intuition about general computer security, but I'm not well acquainted with the specifics of SSL.

If the compression scheme was theoretically perfect, its output would appear to be a perfectly random stream of bits. Similarly, with the encryption. Are we saying that because neither the compression nor the encryption is theoretically perfect (perhaps neither comes close to being so, because they are engineered for speed/latency of calculation/transmission); the result is a vulnerability to "known plain-text" attack?

There is nothing wrong with either. But compression will affect the plaintext length BEFORE encryption of the plaintext (if the encryption is any good at all, ciphertext will not compress). The attack is interactive, so if you guess 3 consecutive letters of the session cookie, that will lead to the message being, say, 3 letters shorter than if your guess matched no letters.

Quote:

If so, then I can't see how removing the compression will result in great improvements to overall security. It appears at face value to be a retrogressive step that will only result in vulnerability to an alternative "known plain-text" attack.

Compression changes the length in a plaintext-dependent manner and cannot really be made "secure" in any sense, as it is a plaintext-dependent state machine. It was added for network efficiency only and has nothing to do with security; it has just opened a side channel here.

Quote:

The correct answer might be to use more theoretically perfect (albeit computationally more expensive) schemes for compression and encryption

I think what's trying to be communicated is that adding padding just means that it takes a little longer for the attacker to decrypt because it's just extra content to test against.

I'm not ready to give up on padding yet. Maybe we can make it take more than "a little longer" for the attacker.

Yes, padding simply means the attacker must take more samples to gather more statistics. But one could make a similar claim about key lengths today: they just make it take more work. Of course, padding won't be able to make the attacker's job 2^128 times harder like we might by lengthening our crypto keys from 128 to 256 bits.

Padding MUST be in a multiple of the block size (16 bytes for AES) with a maximum length of 255 bytes. That's at most 16 different padding lengths, and the attacker just needs to see the shortest padded version to make their guess. To change it to be any more complex will require the move from PKCS #5 padding or CBC mode, at which point you may as well write a new protocol. In any case, a compression function has no place in an encryption protocol - it was added for programmer convenience only and is optional anyway. MS (for a change) got it correct by refusing to ever implement it in the first place.

In general, random padding CAN NEVER fix compression oracles (or random delays for timing oracles) since by definition they must be random to achieve their supposed result. The problem is white noise always just looks grey from a distance, and the difference you are looking for is always a constant bias that can be statistically separated by taking a number of samples with reasonable probability.

Guessing an N-letter key by brute force is an exponential time problem (k*x^N), guessing (and checking) one letter at a time is a linear problem (k*N), and guessing one letter at a time with random padding is also a linear (but slightly slower, p*k*N) problem - there is no way to improve on that...

P.S. the most efficient imaginable classical computer would take more energy than is released in a supernova just to count to 2^256, a very different measure of "just make it take more work".
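The gap between the two attack modes is easy to put numbers on. Taking, say, a 16-character cookie over a 36-character alphabet (illustrative figures, not from the researchers):

```python
alphabet, length = 36, 16

brute_force = alphabet ** length    # guess the whole cookie at once
oracle_guesses = alphabet * length  # guess one character at a time

print(f"{brute_force:.3e}")  # ~8e24 guesses
print(oracle_guesses)        # 576 guesses
```

Padding only multiplies the linear figure by a constant; it never pushes the attacker back into the exponential regime.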

If the content of the webpage (images, css, js) is for the most part static, then compressed or not, the encryption is going to be easy to break because we know what is being sent in the initial transfers.

Knowing some or even all of the plaintext does not by itself allow you to break in any way any (secure) modern cipher, e.g. AES, Salsa20, etc. You simply already know some of the message by some other channel, making its encryption rather moot.

This attack is not an attack on the cipher at all; it's an attack on the compression function, which changes the length of the message if you repeat the secret in the part of the message you control. Ciphers do not generally conceal the length of the plaintext unless you pad to a fixed length (e.g. with zeros) before encryption, but doing so obviously renders compression redundant.
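For instance, a stream cipher XORs a keystream over the plaintext and emits exactly as many bytes as it consumes, so the length sits in plain view (a sketch with a random keystream standing in for a real cipher like RC4):

```python
import os

def stream_encrypt(plaintext: bytes, keystream: bytes) -> bytes:
    # XOR with the keystream: ciphertext length == plaintext length.
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

msg = b"Cookie: sessionid=7a3fk9"  # made-up plaintext
ciphertext = stream_encrypt(msg, os.urandom(len(msg)))
print(len(ciphertext) == len(msg))  # True: nothing hides the length
```

Block ciphers in CBC mode round up to the block size, which hides at most a few bytes, not the compression-induced differences the attack measures.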

Compression only allowed this attack because of the form of encryption used. If the data itself were encrypted in blocks, rather than simply XORed with a keystream, very little could be deduced by tampering with the encrypted message.

Padding MUST be in a multiple of the block size (16 bytes for AES) with a maximum length of 255 bytes. That's at most 16 different padding lengths, and the attacker just needs to see the shortest padded version to make their guess. Making it any more complex would require moving away from PKCS #5 padding or CBC mode, at which point you may as well write a new protocol.

Note that TLS supports RC4, a stream cipher, which does not use CBC mode. There are also draft proposals to define CTR mode ciphersuites for TLS.

Stubabe wrote:

In any case, a compression function has no place in an encryption protocol - it was added for programmer convenience only and is optional anyway. MS (for a change) got it correct by refusing to ever implement it in the first place.

For the information leak to occur, it doesn't matter if the compression happens in the TLS spec itself or at a higher protocol layer. In my view, if Netscape+IETF is going to sell the world a drop-in security layer for sockets, it needs to not fall apart the instant a service passes compressed data over it.

Considering that Duong and Rizzo are making an annual event of decrypting session cookies passed over TLS, yeah, protocol changes are on the table.

Padding MUST be in a multiple of the block size (16 bytes for AES) with a maximum length of 255 bytes. That's at most 16 different padding lengths, and the attacker just needs to see the shortest padded version to make their guess. Making it any more complex would require moving away from PKCS #5 padding or CBC mode, at which point you may as well write a new protocol.

Note that TLS supports RC4, a stream cipher, which does not use CBC mode. There are also draft proposals to define CTR mode ciphersuites for TLS.

I didn't mention those simply because, as stream ciphers (CTR is also a stream cipher mode; it just uses a block cipher to do it), they pass the length through completely unaltered (not even padded), so they are even easier to exploit. Anyway, while I pointed out the particular issue with TLS, the crux of my post is that random padding is not a fix for this issue. Defeating padding is always a linear time problem, so it offers no real security. The take-home message being that in today's era of strong crypto it is far easier to attack side channels than to try to kick the door in.
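A quick Python sketch of why random padding fails (the 0-15 bytes of random padding here are a hypothetical countermeasure, not any real TLS stack's behaviour, and the secret is invented): because padding can only ever add length, the minimum over a handful of samples recovers the true compressed size, so the attack stays linear - just slower by the sampling factor p.

```python
import random
import zlib

SECRET = "token=4db1"  # hypothetical secret in the compressed stream

def noisy_oracle(injected: str, rng: random.Random) -> int:
    """Compressed length plus 0-15 bytes of random padding."""
    body = ("X: " + SECRET + "\r\n" + injected).encode()
    return len(zlib.compress(body)) + rng.randrange(16)

def true_length(injected: str, rng: random.Random, samples: int = 100) -> int:
    # Padding only ever adds bytes, so the minimum over `samples`
    # observations converges on the unpadded length with
    # probability 1 - (15/16)**samples.
    return min(noisy_oracle(injected, rng) for _ in range(samples))

rng = random.Random(1)  # seeded so the sketch is deterministic
right = true_length("X: " + SECRET, rng)
wrong = true_length("X: token=wxyz", rng)
assert right < wrong  # the constant bias survives the noise
```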

Quote:

Stubabe wrote:

In any case, a compression function has no place in an encryption protocol - it was added for programmer convenience only and is optional anyway. MS (for a change) got it correct by refusing to ever implement it in the first place.

For the information leak to occur, it doesn't matter if the compression happens in the TLS spec itself or at a higher protocol layer. In my view, if Netscape+IETF is going to sell the world a drop-in security layer for sockets, it needs to not fall apart the instant a service passes compressed data over it.

Considering that Duong and Rizzo are making an annual event of decrypting session cookies passed over TLS, yeah, protocol changes are on the table.

And that is what has happened - the spec has changed to suggest that compression not be used for these kinds of interactive protocols. There is no "fix" for this, as it's a fundamental side channel that cannot be prevented. You either have fixed length (or at least constant length for a given plaintext) or you don't; TLS should never have provided the option in the 1st place. The only fix TLS could provide for higher-layer compression is padding to the maximum possible length (but how would it know what that is?). A security protocol cannot generally undo security errors made at a higher level. The "real" fault here is that browsers are allowing 3rd party (potentially attacker-controlled) information into a secure channel.

Also, your suggestion that as long as you pick a strong padlock (by way of analogy) your responsibility for security stops there, no matter what you do (fit it to a rotten door, leave a window open, etc.), is ridiculous. The only fix for length-based side channels is to transmit fixed-length messages at a constant rate (sending empty messages if necessary), and some high-confidentiality protocols do just that, but I doubt such a protocol with compression would perform better than uncompressed TLS for https. And ultimately compression is just a performance optimization - the general rule of cryptography is you can have security or performance: pick one.
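The fixed-length idea can be sketched in a few lines of Python. The 512-byte record size and two-byte length prefix here are arbitrary choices for illustration, not any real protocol's framing.

```python
RECORD_SIZE = 512  # arbitrary fixed size; every record on the wire is this big

def frame(msg: bytes) -> bytes:
    """Pad every message (even an empty keep-alive) to one fixed size,
    so ciphertext length reveals nothing about the content."""
    if len(msg) > RECORD_SIZE:
        raise ValueError("message too large for one fixed-size record")
    # Length prefix so the receiver can strip the padding again.
    return len(msg).to_bytes(2, "big") + msg + b"\x00" * (RECORD_SIZE - len(msg))

def unframe(record: bytes) -> bytes:
    n = int.from_bytes(record[:2], "big")
    return record[2:2 + n]

assert len(frame(b"")) == len(frame(b"a much longer message"))
assert unframe(frame(b"hello")) == b"hello"
```

Note that compressing before framing gains nothing here, which is the point: constant-length framing and compression pull in opposite directions.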

If there is any major fault with TLS, it is that it gives its users a false sense of security: that it is some kind of set-it-and-forget-it black box. Secure crypto is very hard; TLS (and protocols like it) make it a lot easier, but it is still not easy. You must write everything from the ground up to be secure; trying to retrofit a fundamentally insecure design (such as http, js, etc.) with a security wrapper is never going to achieve "perfect" security.

Anyway, Duong and Rizzo are very good at producing interesting edge cases of browser https behavior that are not very easy to perform in the real world. Also, most of their attacks are generally inapplicable to 99% of non-https uses, as most other protocols do not allow attackers to interactively inject plaintext into secure channels at a high rate.

Also, your suggestion that as long as you pick a strong padlock (by way of analogy) your responsibility for security stops there, no matter what you do (fit it to a rotten door, leave a window open, etc.), is ridiculous.

I would agree, but I don't think that's what I said. What I said was that the world *has been* sold a literal image of a padlock advertised as a "secure sockets layer". Now it's up to data security professionals (like you and me) to do our best to deliver on those promises, whether we were the ones making them or not.

Stubabe wrote:

The only fix for length-based side channels is to transmit fixed-length messages at a constant rate (sending empty messages if necessary), and some high-confidentiality protocols do just that,

There's no reason TLS couldn't negotiate the use of such an option.

Stubabe wrote:

but I doubt such a protocol with compression would perform better than uncompressed TLS for https.

Not everyone who uses TLS has a zero budget for security as is commonly assumed.

Stubabe wrote:

Anyway, Duong and Rizzo are very good at producing interesting edge cases of browser https behavior that are not very easy to perform in the real world.

Tell that to the CISO when these guys are down at Ekoparty in Argentina decrypting Paypal session cookies on stage.

Stubabe wrote:

Also, most of their attacks are generally inapplicable to 99% of non-https uses, as most other protocols do not allow attackers to interactively inject plaintext into secure channels at a high rate.

I mean this in the nicest possible way, but "bullshit". :-) You don't know that because you haven't surveyed "99% of non-https uses" and even if you had I'll wager that Duong and Rizzo are more creative and motivated at exploiting side channels than you are.

Also, most of their attacks are generally inapplicable to 99% of non-https uses, as most other protocols do not allow attackers to interactively inject plaintext into secure channels at a high rate.

I mean this in the nicest possible way, but "bullshit". :-) You don't know that because you haven't surveyed "99% of non-https uses" and even if you had I'll wager that Duong and Rizzo are more creative and motivated at exploiting side channels than you are.

97.5 percent of statistics are made up on the spot...

To exploit a side channel like that you need to be able to inject arbitrary data deterministically, frequently enough to be useful, and, most importantly of all, adaptively. With the exception of SSL VPNs and https, that is a fairly unusual protocol feature, and many other protocols' data encodings severely limit what can be injected. The other problem is that these are high-profile attacks (certainly not passive): you need almost 100% control of the network, and the odd behaviours will be seen by the user or even an IPS, which risks detection. It's so much easier just to use something like SSLstrip (you get the same percentage of clueless users to succumb either way), so from a hacker's point of view you don't gain much (nation states, however...).

Duong and Rizzo aren't even doing anything new: MAC padding attacks, deterministic IVs, and compression oracles have been known about for years, and for just as long security researchers have been saying you can use them against TLS (I personally remember coming across a discussion regarding padding/compression oracles in TLS years ago), and TLS 1.1 was created long ago purely to fix the error used in BEAST last year. It's just that nobody listened. What Duong and Rizzo are REALLY good at is attacking browser behaviour and demonstrating actual attacks in highly dramatic ways ("OMFG they can do facebook! oh noes!") - it makes people pay attention (which is good), but most of the analysis has already been done.