Exploit is the latest to subvert crypto used to secure Web transactions.

Software developers are racing to patch a recently discovered vulnerability that allows attackers to recover the plaintext of authentication cookies and other encrypted data as they travel over the Internet and other unsecured networks.

The discovery is significant because in many cases it makes it possible for attackers to completely subvert the protection provided by the secure sockets layer and transport layer security protocols. Together, SSL, TLS, and a close TLS relative known as Datagram Transport Layer Security are the sole cryptographic means for websites to prove their authenticity and to encrypt data as it travels between end users and Web servers. The so-called "Lucky Thirteen" attacks devised by computer scientists to exploit the weaknesses work against virtually all open-source TLS implementations, and possibly implementations supported by Apple and Cisco Systems as well. (Microsoft told the researchers it has determined its software isn't susceptible.)

The attacks are extremely complex, so for the time being, average end users are probably more susceptible to attacks that use phishing e-mails or rely on fraudulently issued digital certificates to defeat the Web encryption protection. Nonetheless, the success of the cryptographers' exploits—including the full plaintext recovery of data protected by the widely used OpenSSL implementation—has clearly gotten the attention of the developers who maintain those programs. Already, the Opera browser and PolarSSL have been patched to plug the hole, and developers for OpenSSL, NSS, and CyaSSL are expected to issue updates soon.

"The attacks can only be carried out by a determined attacker who is located close to the machine being attacked and who can generate sufficient sessions for the attacks," Nadhem J. AlFardan and Kenneth G. Paterson researchers wrote in a Web post that accompanied their research. "In this sense, the attacks do not pose a significant danger to ordinary users of TLS in their current form. However, it is a truism that attacks only get better with time, and we cannot anticipate what improvements to our attacks, or entirely new attacks, may yet be discovered."

How it works

Lucky Thirteen uses a technique known as a padding oracle attack that works against the main cryptographic engine in TLS, the one that performs encryption and ensures the integrity of data. The engine processes data in 16-byte chunks using a routine known as MEE, for MAC-Encode-Encrypt, which runs data through a MAC (Message Authentication Code) algorithm, then encodes and encrypts it. Before encrypting, the routine appends "padding" data to the message so the result aligns neatly on 8- or 16-byte block boundaries. The padding is later removed when TLS decrypts the ciphertext.
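
In rough Python, the MEE construction looks something like the sketch below. This is a simplified illustration, not OpenSSL's API: the function name and key handling are ours, and a real TLS implementation also feeds a 13-byte record header into the MAC, a detail that gives the attack its name.

```python
import hmac
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16  # AES block size in bytes

def mee_encrypt(plaintext: bytes, mac_key: bytes, enc_key: bytes, iv: bytes) -> bytes:
    # 1. MAC: authenticate the record. (Real TLS prepends a 13-byte header
    #    of sequence number, type, version, and length to the MAC input.)
    tag = hmac.new(mac_key, plaintext, hashlib.sha1).digest()
    # 2. Encode: append TLS-style padding -- p+1 bytes, each holding the
    #    value p, so the total length is a multiple of the block size.
    data = plaintext + tag
    p = (BLOCK - (len(data) + 1) % BLOCK) % BLOCK
    data += bytes([p]) * (p + 1)
    # 3. Encrypt: CBC-encrypt the padded record.
    encryptor = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
    return encryptor.update(data) + encryptor.finalize()
```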

The attacks start by capturing the ciphertext as it travels over the Internet. Using a long-known weakness in TLS's CBC, or cipher block chaining, mode, attackers replace the last several blocks with chosen blocks and observe the amount of time it takes for the server to respond. TLS messages that contain the correct padding take less time to process because the MAC is computed over fewer bytes. A mechanism in TLS causes the transaction to fail each time the application encounters a TLS message that contains tampered data, requiring attackers to open a new session and send a fresh malformed message after each failure. By sending large numbers of TLS messages and statistically sampling the server's response time to each one, the scientists were eventually able to correctly guess the contents of the ciphertext.
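
In outline, the measurement side of the attack can be sketched as follows. The handshake, block splicing, and statistics are drastically simplified here, and make_record is a hypothetical stand-in for the ciphertext-splicing step just described:

```python
import socket
import statistics
import time

def probe(host: str, port: int, record: bytes) -> float:
    """Send one tampered record on a fresh connection and time the rejection.
    Each probe costs a whole session, because TLS aborts on a bad record."""
    start = time.perf_counter()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(record)   # captured ciphertext with chosen final blocks
        sock.recv(4096)        # wait for the fatal alert (or the reset)
    return time.perf_counter() - start

def guess_byte(host: str, port: int, make_record, samples: int = 8192) -> int:
    """Try all 256 values for the target byte; the guess that yields valid
    padding makes the server MAC fewer bytes and so answer slightly faster."""
    timings = {}
    for candidate in range(256):
        runs = [probe(host, port, make_record(candidate)) for _ in range(samples)]
        timings[candidate] = statistics.median(runs)
    return min(timings, key=timings.get)
```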

It took the scientists as few as 2^23 (about 8.4 million) sessions to extract the entire contents of a TLS-encrypted authentication cookie. They were able to improve their results when they knew details of the ciphertext they were trying to decrypt. Cookies formatted in base64 encoding, for example, could be extracted in 2^19 (524,288) TLS sessions. The researchers required just 2^13 (8,192) sessions when a byte of plaintext in one of the last two positions in a block was already known.

To make the attacks more efficient, they can incorporate methods unveiled two years ago in a separate TLS attack dubbed BEAST. That attack used JavaScript in the browser to open multiple sessions. By combining it with the padding oracle exploit, attackers required only 2^13 sessions to extract each byte, without needing prior knowledge of a byte in one of the last two positions of a block.

The Lucky Thirteen attacks are only the latest exploits to subvert TLS, which along with SSL is intended to safeguard bank transactions, login sessions, and other sensitive activities carried out over unsecured networks. One of the most serious recent attacks used a universal wildcard certificate to spoof the credentials of virtually any website on the Internet. The previously mentioned BEAST attack was able to decrypt an eBay authentication cookie, although the technique required the attackers to first subvert something known as the same origin policy. Late last year, the same researchers behind BEAST devised CRIME, an attack that used Web compression to subvert TLS/SSL.

TLS remains vulnerable to such attacks largely because of design decisions engineers made in the mid-1990s when SSL was first devised, Johns Hopkins University professor Matthew Green observed in a blog post published Monday that explains how Lucky Thirteen works. Since then, engineers have applied a series of "band-aids" to the protocols rather than fixing the problems outright.

The attacks apply to all implementations that conform to versions 1.1 or 1.2 of TLS, or to versions 1.0 or 1.2 of DTLS. They also apply to implementations that conform to version 3.0 of SSL or version 1.0 of TLS when they have been tweaked to incorporate countermeasures designed to defeat a previous padding oracle attack discovered several years ago.

It's not the first time SSL and TLS have been brought down by a padding oracle attack. The protocols were previously patched to prevent attacks that used subtle differences in timing to ferret out details about the encrypted plaintext. At the time, some cryptographers acknowledged a tiny remaining window that could still permit that type of exploit.

The scientists dubbed their exploit "Lucky Thirteen" because it's made possible by the fact that the TLS MAC calculation covers 13 bytes of header information.

"So, in the context of our attacks, 13 is lucky—from the attacker's perspective at least," the researchers wrote in their Web post. "This is what passes for humor amongst cryptographers."

This is ridiculously complicated, and I only have a rudimentary understanding of what is going on. Is there anything for admins or web devs to do at the moment?

Yup. And yet I can bet I'm going to get an email a month or two from now from a client saying their PCI scan failed because they're vulnerable to the Lucky 13 attack. *sigh* I'm not entirely sure how I get roped into system administration tasks but it has happened multiple times already.

Isn't a part of hardening an SSL/TLS implementation killing the connection when too many anomalies occur? (I know, it's kind of a DoS). Or is the sample time so spread out that it's within reasonable operating limits?

And that is why administrators either disable all the CBC ciphers or at least prioritize non-CBC ciphers.

Got cipher problems? I feel sorry for you son, I got me TLS cipher suites but CBCs? I got none!

And use what?

Unless you can deploy TLS1.2 with AES-GCM or one of the poorly supported AES-CTR ciphersuites, you are basically stuck with RC4, which is less than ideal.

But you are right: given all the CBC mode issues with TLS (this is just an enhancement of an old one) I really don't get why they didn't push CTR mode ciphersuites harder in TLS1.1/TLS1.2 or at least change to Encrypt-Then-MAC when they added explicit IVs in TLS1.1
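
For the "use what" question, the usual workaround looks something like this sketch with Python's ssl module. The cipher string is only illustrative (OpenSSL syntax; names and availability vary by build), and RC4 carries its own baggage:

```python
import ssl

# Prefer TLS 1.2 AES-GCM suites, fall back to RC4, and refuse CBC suites.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
ctx.set_ciphers("ECDHE+AESGCM:AESGCM:RC4-SHA:!aNULL:!eNULL")
```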

eberan wrote:

Isn't a part of hardening an SSL/TLS implementation killing the connection when too many anomalies occur? (I know, it's kind of a DoS). Or is the sample time so spread out that it's within reasonable operating limits?

It will reset after the first padding failure, but that doesn't help if you are attacking an https session. The password or cookie is sent at the same offset every time, so you can continue your attack across subsequent sessions. But you will rack up millions of padding-aborted TLS sessions.

eberan wrote:

Isn't a part of hardening an SSL/TLS implementation killing the connection when too many anomalies occur? (I know, it's kind of a DoS). Or is the sample time so spread out that it's within reasonable operating limits?

Exactly... There should be some logic in the SSL/TLS implementation to apply some sort of threshold to the number of tampered packets it processes from a specific IP relative to time. If not, it's not too late to implement it, because 2^19 - 2^23 attempts is still far-fetched enough not to cause mass hysteria...
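
Roughly the sort of logic I mean, as a sketch (the window and limit are numbers I made up; the real thing would live in the TLS terminator or an IPS):

```python
import time
from collections import defaultdict

WINDOW = 60.0         # seconds
MAX_BAD_RECORDS = 50  # tampered/MAC-failed records tolerated per window

failures = defaultdict(list)  # ip -> timestamps of recent bad records

def record_failure(ip: str) -> bool:
    """Log one padding/MAC failure; return True if the IP should be blocked."""
    now = time.monotonic()
    recent = [t for t in failures[ip] if now - t < WINDOW]
    recent.append(now)
    failures[ip] = recent
    return len(recent) > MAX_BAD_RECORDS
```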

Quote:

This is ridiculously complicated, and I only have a rudimentary understanding of what is going on. Is there anything for admins or web devs to do at the moment?

No; this will be patched by administrators when the patches eventually come out.

Of course, there are lots of reasons not everything that uses a given SSL library will be updated. Package repositories fall out of date, application source code using an SSL library may never have been or may no longer be available, bloated hosting automation platforms may continue to languish, etc.

For the vulnerability to be found in something like OpenSSL, which is used as an often-unseen component in countless projects, is dismaying. Too often, software vendors and maintainers ignore updates to the components they use.

This attack seems related to http://lasec.epfl.ch/memo/memo_ssl.shtml, an attack from 2003 ("The former problem of availability of error messages (encrypted in TLS/SSL) is solved by performing a timing attack i.e. by measuring the time taken for error messages to come back from the server. It is then possible to perform the attack over several sessions of TLS/SSL.")

Open-source TLS implementations that I know of, in particular OpenSSL and PolarSSL, are not vulnerable to the attack above. I can only assume they will release patches for the Lucky 13 attack very soon.

Quote:

This attack seems related to http://lasec.epfl.ch/memo/memo_ssl.shtml, an attack from 2003 ("The former problem of availability of error messages (encrypted in TLS/SSL) is solved by performing a timing attack i.e. by measuring the time taken for error messages to come back from the server. It is then possible to perform the attack over several sessions of TLS/SSL.")

Open-source TLS implementations that I know of, in particular OpenSSL and PolarSSL, are not vulnerable to the attack above. I can only assume they will release patches for the Lucky 13 attack very soon.

It takes more than just source code updates to fix a timing oracle; optimising compilers can "unfix" your patch for you. You need to validate the assembly output to be 100% sure.

All of these replies lead to my next question: Are there events on the server-side components that can be leveraged to determine if a remote client is being subjected to such an attack?

IOW, are there a lot of resets or other anomalous session-maintenance packets coming from the client back to the head-end device that's attempting to maintain their SSL/TLS session that we can alert off of?

While the fact that this targets the client implies your server-side components aren't necessarily susceptible, that doesn't mean the session a remote client does establish is valid and isn't being intercepted or otherwise manipulated; such a session should be terminated.

Quote:

All of these replies lead to my next question: Are there events on the server-side components that can be leveraged to determine if a remote client is being subjected to such an attack?

IOW, are there a lot of resets or other anomalous session-maintenance packets coming from the client back to the head-end device that's attempting to maintain their SSL/TLS session that we can alert off of?

Yes, there should be for TLS; maybe not for DTLS, but I don't know much about that particular protocol so I couldn't say for sure. A MITM could block the TLS alerts, but you would still see a LOT of aborted sessions (without any error) at the server. Certainly something an IPS could potentially flag up.

Quote:

While the fact that this targets the client implies your server-side components aren't necessarily susceptible, that doesn't mean the session a remote client does establish is valid and isn't being intercepted or otherwise manipulated; such a session should be terminated.

The obvious case is obtaining a PW or cookie from an https session, where the client would be the target. But if you wanted to obtain something sent from the server (we are probably not considering https here), there is nothing to prevent you attacking the other end, so the server side is definitely susceptible, just less useful to the attacker in the common case.

* Only the CBC suites are vulnerable to this, not the RC4 suites. The example is against AES-CBC.

* It takes on the order of 10,000+ tries per recovered byte. (Most of my session cookies are typically around 24 bytes.)

* SSL/TLS kills a session after a single failed attempt.

* DTLS will keep a session open, but is not used by normal https traffic.

They postulated attacks against normal https traffic: these attacks would allow a person to obtain the session cookies. Similar in concept to CRIME/BEAST, but each failed attempt that yields more data also causes an SSL/TLS session reset. (You get as many good attempts on a session as you like, but only one failed one.)

My conclusion, not theirs:

Therefore a good DDoS/intrusion prevention system should flag a browser that has been hijacked and is attempting to break into a session long before it gathers enough bits to do it. At 200,000 rapid-fire requests, it should be obvious that an IP is attacking.

I would like an actual web server programmer to chime in here:

A single failure could be caused by normal problems in data transit and shouldn't be worried about, but if you are seeing tens of thousands: how hard would it be to introduce timing delays into the handshake for the IP in question, enough to slow down such attacks but not so much as to make a non-malicious request feel too slow? (The IP might be a NAT for a large group of users.)
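
To make the question concrete, I'm imagining something like the sketch below (the threshold and delay numbers are invented, and the counter would need to decay over time):

```python
import asyncio
from collections import defaultdict

fail_counts = defaultdict(int)  # ip -> recent TLS record failures

def note_failure(ip: str) -> None:
    fail_counts[ip] += 1

async def handshake_delay(ip: str) -> None:
    """Sleep before serving a handshake: nothing for a clean IP, growing
    with its failure count, capped so a busy NAT isn't effectively blocked."""
    n = fail_counts[ip]
    if n > 10:  # a handful of failures is normal transit noise
        await asyncio.sleep(min(0.05 * (n - 10), 2.0))  # cap at two seconds
```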