Posted by timothy on Thursday March 14, 2013 @01:45PM from the do-it-one-nanotimes dept.

Sparrowvsrevolution writes "At the Fast Software Encryption conference in Singapore earlier this week, University of Illinois at Chicago professor Dan Bernstein presented a method for breaking TLS and SSL web encryption when it's combined with the popular stream cipher RC4, invented by Ron Rivest in 1987. Bernstein demonstrated that when the same message is encrypted enough times--about a billion--comparing the ciphertexts can allow the message to be deciphered. While that sounds impractical, Bernstein argued it can be achieved with a compromised website, a malicious ad or a hijacked router." RC4 may be long in the tooth, but it remains very widely used.

This is the cipher known as 'arcfour' in SSH. I use it regularly when speed is more important than security, which is frequently. I'm not sending a billion copies of the same file anywhere, so I will continue to use it.

Irrelevant. As long as I only send one copy of the compressed data, it should be safe. A better objection is that it probably would take more CPU to compress the data before sending it over RC4 than it would to just switch to AES with no compression.

I tried to beat ARC4 a decade ago, and had to admit defeat. It's a fine stream cipher, so long as you have a solid nonce and drop the first few hundred bytes of output. I use ARC4-DROP(1024) in tinycrypt, a simple encryption tool I published on SourceForge because I couldn't find one that was simple. Back then, crypto gurus had found that with 1G of data they could determine that an encrypted stream of 0's was ARC4. I think that's been reduced to around 10Mb. So the question is "how do I make u
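For anyone curious what ARC4-DROP(n) amounts to, here's a minimal Python sketch (toy code for illustration, not the tinycrypt implementation; the key would still have to come from a fresh key/nonce combination per message):

    def rc4_keystream(key, drop=1024):
        # Key-scheduling algorithm (KSA)
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA), discarding the
        # first `drop` output bytes before yielding anything
        i = j = produced = 0
        while True:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            k = S[(S[i] + S[j]) % 256]
            produced += 1
            if produced > drop:
                yield k

    def arc4_drop_encrypt(key, data, drop=1024):
        # XOR the plaintext against the dropped keystream; decryption is identical
        ks = rc4_keystream(key, drop)
        return bytes(b ^ next(ks) for b in data)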

The SSL people never bothered to do it because SSL was defined before the weaknesses were published. The method to mitigate the problem makes it an entirely new algorithm. And there are much better algorithms to standardize than RC4+half-measures. The real problem is that vendors won't upgrade to these newer algorithms, such as AES-GCM.

Also, the well-known measures didn't avoid the problem, they just kicked the can down the road. Once upon a time you only had to discard 256 bytes, then 768 bytes, then 10

That's what I have been using for years. In fact, at my last job, when one of the DBAs was complaining about how long it took to transfer really large files with WinSCP, I showed her how to set Blowfish as the cipher, and she just about started glowing, she was so happy with the speed.

I recall benchmarking an old Pentium-75 at 17Mbps with rsync over ssh, and that was, I think, 2001 (for an embedded x86 appliance I was working on). That was a 150MIPS machine. But, yes, the NetBSD folk do some amazing things, and anybody running a 16MHz Atmel is going to have a hard time pushing serious crypto. The new embedded chips with AES on-die are also lovely.

As I understand it, ARC4 originally stood for 'Alleged RC4', since it was reverse-engineered from RSA's proprietary RC4 implementation. The name RC4 is trademarked by RSA, and they refuse to confirm that ARC4 is 100% compatible with their trademarked RC4, so for these two reasons the name ARC4 stuck.

Of course, nobody today seriously claims there is any actual difference between the public ARC4 and RSA's RC4...

You're screwed. You have the PCI people who are freaked out over CBC ciphers because of BEAST, you have lots of LTS distros not offering TLS 1.2, and you have people under FIPS who are your customers, so you wind up having to offer RC4 as a cipher to meet all of the above requirements. And even if you assume FIPS-managed clients will be controlling their ciphers to meet their internal requirements, you have to explain this to the PCI scanner vendor every. single. time.

If the LTS vendors could backport TLS 1.2, that would solve many headaches.


You can't even be sure of having Server Name Indication on a smartphone. Entry-level smartphones still come with Android 2.3, whose TLS stack lacks SNI support, and without SNI they can't get the right certificate when more than one site shares port 443 on a given IPv4 address.
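For reference, this is all SNI is from the client's side; a rough Python sketch using the standard ssl module (the hostname is just a placeholder), showing the client naming the site it wants inside the handshake so the server can pick the matching certificate:

    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as raw:
        # server_hostname is what puts the SNI extension in the ClientHello;
        # Android 2.3's stack simply never sends it.
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            print(tls.getpeercert()["subject"])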

Yes, and in CBC mode if you want browser and OpenSSL support, which leads back to the BEAST attack without TLS 1.2. That's fine if the client is properly configured to avoid the attack - the server side can only help, not prevent it. But the PCI scanner vendors will red-flag you for running any CBC ciphers.

Many browsers still don't support TLS 1.2 (which I feel should be treated as a serious bug, not the feature enhancement some of those browsers treat it as). And those that do almost universally don't enable it by default.

Yup. RC4 is really fast in software and so can scale really easily without needing any real change in server capacity.

Also, most browsers support Elliptic-Curve Diffie-Hellman key exchange with RC4, which provides perfect forward secrecy with substantially less computing overhead than the standard DH key exchange protocols.

I'm still not sure how gmail is in any way vulnerable here? From the summary alone, it implies that you need to compromise something else to be able to inject data that's repeated a billion times. Given that gmail doesn't use a 3rd party ad service, it suggests that you'd need to compromise Google one way or another. Either that or the machine itself, in which case a keylogger would be much more effective, or the person's router, in which case there are still other easier attacks.

it implies that you need to compromise something else to be able to inject data that's repeated a billion times.

If a known plaintext is good enough, you could use an image or just the <!DOCTYPE HTML><html><head> at the beginning of every HTML5 document. Or by "inject" did you specifically refer to chosen plaintext?

Bernstein demonstrated that when the same message is encrypted enough times--about a billion--comparing the ciphertexts can allow the message to be deciphered. While that sounds impractical, Bernstein argued it can be achieved with a compromised website, a malicious ad or a hijacked router.

The researcher himself has stated that you need to compromise the website. I suppose you're correct in that if you select a well known piece of data, you could be in there, but it must

You don't need to see the entire message, only the same encrypted portion that you're trying to decipher. And you need to see it over and over again, each time encrypted with a different key.

You then look at all the biases, which the RC4 stream generator is rife with (and not just the key schedule). And eventually you'll be able to deduce the plaintext. So, you don't need to know any plaintext, but you need the same unknown plaintext to be at the same position in the stream every cycle.

Except the response actually starts with "HTTP/1.1 200 OK" followed by a bunch of other headers, before you get to the DOCTYPE and such.

Knowing that the document begins with "HTTP/1.1 200 OK" isn't very helpful, because as I understand it, this isn't a known-plaintext attack, but rather a constant-plaintext attack: RC4 as used by SSL/TLS doesn't produce the same cyphertext from a given plaintext every time. Ideally, there wouldn't be any correlation between cyphertext and plaintext.

I know your comment is in jest, but it's worth noting the attack already assumes a different key is used for each transmission (just the same plaintext). It's also statistical, so it already has a good chance of recovering some data after just a million transmissions, and even more if the data is constrained (e.g., HTTP headers are always ASCII); the "billion" number given in the summary assumes near-100% recovery of the first 256 bytes of an *arbitrary* plaintext.
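To make the statistical point concrete, here's a toy Python sketch of the "same plaintext under many keys" setting, using only the long-known bias that RC4's second keystream byte is 0 about twice as often as it should be (the real attack uses many more biases and much deeper positions; the numbers and the secret byte here are just for illustration):

    import os
    from collections import Counter

    def second_keystream_byte(key):
        # Plain RC4: KSA followed by two PRGA steps; return keystream byte Z2
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        i = j = z = 0
        for _ in range(2):
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            z = S[(S[i] + S[j]) % 256]
        return z

    secret = ord('S')  # the unknown plaintext byte sitting at position 2
    counts = Counter(second_keystream_byte(os.urandom(16)) ^ secret
                     for _ in range(100_000))

    # Because Z2 = 0 with probability ~1/128 (vs ~1/256 for other values),
    # the most frequent ciphertext value at that position IS the plaintext.
    guess, _ = counts.most_common(1)[0]
    print(chr(guess))  # prints 'S' with overwhelming probability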

If I understood the article, the reported RC4 weakness has been known for so long that there's an RFC about it (rfc4345 [ietf.org]), which TLS implementations just ignore. SSH also uses RC4 in a non-vulnerable way, and that's why it's not broken there; it's perfectly possible to have a secure RC4 construction by simply discarding the first N bytes, where N > 1000.

What "provably strong" bulk cipher algorithm is in TLS 1.x? AFAIK as of the latest version--TLS 1.2 (the latest)--only 3DES and AES are allowed as alternatives to RC4. A prove that either of those is strong would be a major result in the crypto world.

TLS 1.2 made it easier for clients and servers to negotiate new cipher suites. AES-GCM, which all the experts agree is the best option today, is defined in RFC 5288: http://tools.ietf.org/html/rfc5288
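As a rough illustration of what AES-GCM gives you at the API level, here's a minimal sketch using the third-party Python 'cryptography' package (the key, nonce, and associated data here are purely illustrative; in TLS they're derived by the protocol):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)          # 96-bit nonce; must never repeat under one key
    plaintext = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
    aad = b"record header"          # authenticated but not encrypted

    # Output is ciphertext plus a 16-byte authentication tag, so tampering
    # is detected on decrypt instead of silently producing garbage.
    ct = aesgcm.encrypt(nonce, plaintext, aad)
    assert aesgcm.decrypt(nonce, ct, aad) == plaintext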

Several of Bernstein's slides make fun of people who say, "oh, we'll just return to RC4." People keep bouncing around between the RC4 stream cipher and CBC block modes, when every expert has been saying for years that we should ditch both and move to AES-GCM. But the browsers refuse to upgrade their TLS stacks.

The post I was replying to implies that there's a new result proving that AES or 3DES is secure (which would have profound implications not only on cryptography but on greater questions of computability) or that there's a version of TLS that uses one time pads or something. I was attempting to get at exactly what was being claimed.

So, all the various PCI scanners tell you to use RC4-based crypto because of BEAST, which is pretty hard to pull off, and now suddenly we won't be able to use RC4 anymore, but TLS v1.2 and up aren't available in IE8 (XP) or older versions of Firefox, I believe. So, now what?

now suddenly we won't be able to use RC4 anymore, but TLS v1.2 and up aren't available in IE8 (XP)

Show users of IE on Windows XP a countdown timer in days, hours, minutes, and seconds until the planned end of support for Windows XP [microsoft.com]. "Microsoft has announced that on April 8, 2014, it will stop providing security updates for your PC's operating system. In the meantime, we recommend accessing our web site using the Google Chrome or Mozilla Firefox web browser. [ Get Chrome ] [ Get Firefox ]"
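The countdown itself is trivial; a quick Python sketch, assuming the April 8, 2014 date:

    from datetime import datetime

    XP_EOL = datetime(2014, 4, 8)   # planned end of Windows XP support

    def countdown(now=None):
        remaining = XP_EOL - (now or datetime.now())
        hours, rem = divmod(remaining.seconds, 3600)
        minutes, seconds = divmod(rem, 60)
        return remaining.days, hours, minutes, seconds

    print("%d days, %02d:%02d:%02d until XP support ends" % countdown())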

RC4 provides reasonably good security as long as you don't use it for things it wasn't meant for. (Rule#1 of RC4 club is "Never encrypt the same stuff twice".) Bernstein's attack is interesting, because he's using TLS/SSL to push RC4 to do things it wasn't meant to do.

arcfour256 uses a 256-bit key and, per RFC 4345, discards the first 1536 bytes of keystream, which is done expressly to blunt the known attacks against RC4's early output. In addition, unlike a public https server, an attacker can't force you to transfer a same-plaintext file a billion times via ssh. ssh with arcfour256 should still be fairly safe, though I'd certainly transition to AES in a timely fashion.

I ran tests on RC4 years ago: ran it through a plain-text file full of the same char repeated, and guess what? The password was showing every 256 chars, hence the "weak" key.
I developed a newer version of RC4 called RC64; it uses a 64K (65536, or 256^2) key. The randomisation process is very complex and the algo was only just slightly slower than RC4, which is very fast anyway. A graphical representation of the 64K key looked like pure white noise when viewed in grey-scale. They need to start using mine, methinks :) Oh, and in a 50MB file full of the same repeated char, the password was not even hinted at and no 4 bytes were the same.

I developed a newer version of RC4 called RC64; it uses a 64K (65536, or 256^2) key. The randomisation process is very complex and the algo was only just slightly slower than RC4, which is very fast anyway. A graphical representation of the 64K key looked like pure white noise when viewed in grey-scale. They need to start using mine, methinks :) Oh, and in a 50MB file full of the same repeated char, the password was not even hinted at and no 4 bytes were the same.

I don't mean to discourage you from trying cool things, but this is exactly what NOT to do when considering cryptography. Creating a secure cipher is HIGHLY non-trivial and should not be attempted (or rather, your attempt should not be used to secure anything) unless you understand the processes involved very well and are an expert. Just because the output "looks" random does not mean it is secure. There are many random number generator algorithms that pass all statistical tests but are trivial to predict.

Granted, I'm no expert on symmetric encryption, but I do know a lot about it, and RC64 makes RC4 look like plain text in comparison. I disagree with regards to using a larger key, as it means there is a lot more overlap when rotating the key; why have key(x) when you can have key(x, y)? It's technically just as easy, hence the speed is only slightly slower, by a fraction. The problem with keys that are 256 is the rotation. I came to the conclusion that there wasn't enough scope to randomize it enough, hence the larger key.

From TFA: "The gigantic number of identical messages that must be sent to break the scheme might seem reassuring. The attack in its current form takes close to 32 hours to perform. But Paterson points out that an attacker could use a malicious ad, a hijacked portion of a website, or a compromised router to feed the identical message to a user again and again unbeknownst to the victim."

RC4 was useful as a workaround for BEAST. Now that we need to throw out RC4, we need another workaround for BEAST.

The Right Way would be to use TLS 1.1 or TLS 1.2, but they're not supported by some browsers (e.g., Mozilla Firefox).

What about working around the problem in the web server? BEAST relies on the session cookie being at a fixed offset. If I add a random-length cookie that changes on each server reply, I understand I break BEAST's assumption. Am I right? Writing an Apache module to do that would be quite straightforward. Perhaps it already exists?
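I haven't seen an existing module either, but here's a sketch of the idea in Python/WSGI terms rather than as an Apache module (the cookie name is made up, and whether this actually defeats BEAST in practice is exactly the question being asked):

    import binascii
    import os

    class RandomPadCookie:
        """WSGI middleware: attach a throwaway cookie of random length to every
        response, so the other cookies no longer sit at a fixed offset in the
        encrypted stream."""
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            def padded_start_response(status, headers, exc_info=None):
                length = 1 + os.urandom(1)[0] % 32            # 1..32 random bytes
                pad = binascii.hexlify(os.urandom(length)).decode()
                headers = list(headers) + [
                    ("Set-Cookie", "antibeast_pad=%s; Path=/" % pad)]
                return start_response(status, headers, exc_info)
            return self.app(environ, padded_start_response)

    # Usage with any WSGI app:
    #     application = RandomPadCookie(application)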