The latest vulnerability in the Transport Layer Security protocol is not -- according to the two researchers who found it -- cause for immediate panic, but it does raise questions about the viability of improved versions of the same type of attack. It's also the latest in a long series of problems with the Internet's basic client-to-server security mechanism.



"I've been fascinated with real-world applications of cryptography for many years and, in that context, TLS [Transport Layer Security] was one of the biggest targets for someone like me," said Kenneth Paterson, professor and co-author of a recently released paper describing a new TLS attack, in a recent phone interview. He said he was "primed by several years of investigation" of other communication protocols where the timing of various messages called for in those protocols provided insights into the message contents. The paper, Lucky Thirteen: Breaking the TLS and DTLS Record Protocols, was authored jointly by Paterson and Ph.D. student Nadhem AlFardan, both in the Information Security group at Royal Holloway, University of London.

As for the TLS attack itself, Paterson said the attacker sits between a client and a server that are using the TLS protocol, and "what this bad guy does is he intercepts the packets and modifies them in a very subtle way." TLS's integrity check covers 13 bytes of header data for each record, which gives the attack its name: "Lucky Thirteen." The attacker "then sends the packets to the server to be decrypted and the way that the server decrypts them produces errors because there's an integrity protection mechanism in TLS," the professor said.
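Where does the number 13 come from? In TLS, the integrity check (MAC) is computed over an 8-byte record sequence number plus the 5-byte record header: a 1-byte content type, a 2-byte protocol version and a 2-byte length. A minimal Python sketch (with hypothetical field values, not code from the paper) shows the count:

```python
import struct

def mac_header(seq_num: int, content_type: int, version: tuple, length: int) -> bytes:
    """Build the per-record header data that TLS feeds into the MAC
    alongside the plaintext: 8-byte sequence number, then the 5-byte
    record header (1-byte type, 2-byte version, 2-byte length)."""
    major, minor = version
    return struct.pack("!QBBBH", seq_num, content_type, major, minor, length)

# Hypothetical values: application data (type 23), TLS 1.2 (3, 3), 256-byte record.
hdr = mac_header(seq_num=1, content_type=23, version=(3, 3), length=256)
print(len(hdr))  # 13 — the "lucky" thirteen bytes
```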

"The server is very polite and it always sends back an error message to the client."

What happens next is the critical part: "It turns out that the time it takes to produce the error message varies slightly depending on exactly what happens during decryption. We modify the packets and the modification is done in such a way that the server takes more or less time, and that time difference leaks the plaintext."

According to Paterson, the time differences being measured range between half a microsecond and a couple of microseconds. The size of the difference "depends very much on the hardware that the server is using. And the time difference is affected by the fact that the packets containing these error messages are also going across the network. So they are subject to all kinds of delay and jitter from things like routers in the network."
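The statistical problem this creates can be illustrated with a toy simulation (all numbers below are invented for illustration; this is not the researchers' measurement code): when network jitter is an order of magnitude larger than the one-microsecond signal, a single timing sample tells the attacker nothing, but averaging thousands of samples makes the gap visible.

```python
import random
import statistics

FAST_US = 100.0  # hypothetical server response time for one padding case
SLOW_US = 101.0  # ~1 microsecond slower for the other case

def measure(slow: bool) -> float:
    """One round-trip timing sample: the tiny decryption-time signal
    plus Gaussian network jitter that swamps it."""
    base = SLOW_US if slow else FAST_US
    return base + random.gauss(0, 20)  # jitter std dev: 20 microseconds

random.seed(13)
fast_samples = [measure(False) for _ in range(20000)]
slow_samples = [measure(True) for _ in range(20000)]

# Any single pair of samples is useless, but the means separate cleanly,
# recovering roughly the 1-microsecond difference.
diff = statistics.mean(slow_samples) - statistics.mean(fast_samples)
print(round(diff, 2))
```

This is why the attack needs so many sessions, and why being topologically close to the server (less jitter) matters so much.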

The need for measuring clear time differences means that the best attack scenario is one in which the attacker is on the same network segment as the server. That's not necessarily a typical case, but one possible scenario, Paterson said, is where "the bad guy is actually an ISP and you're in a nation state where the ISP is actually adversarial toward the citizens."

Generally speaking, though, the paper's authors want to be clear that this isn't the first line of attack that any sensible attacker would turn to. For one thing, the farther away the attacker is (in terms of network topology) from the server, the harder it is to get a clear reading of the time differences. Even close by, many samples of the time difference have to be obtained for each byte that is being recovered.

All those samples mean many TLS sessions have to be initiated. This creates a lot of "noise" that can be easily monitored to watch for attacks (and, indeed, most commercial servers will balk at being asked to start unusual numbers of sessions from the same IP address, even without taking special precautions against this attack).

While this version of the Lucky Thirteen attack may not be common in the wild any time soon, the Lucky Thirteen team pointed out that it isn't known how many improvements can be made to the attack, and whether subsequent improvements might make it a more readily used attack.

What might such improvements look like? For one thing, they might reduce the number of sessions needed to crack the underlying encryption. And Paterson pointed out ways in which the number can already be effectively reduced: "If you can anticipate some of the text -- assuming that the first bytes are the header of a cookie you are trying to intercept, for example, and knowing the format of standard cookies -- the needed number of samples drops to 2^19 from 2^23," he said. "And then there's another trick: If the attacker also knows one of the last two bytes in the block, then the number of samples needed drops from 2^19 to 2^13."
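Written out, those sample counts show how dramatic the cumulative reduction is (a quick sanity check on the arithmetic, not code from the paper):

```python
# Sample counts quoted by Paterson, expressed as powers of two.
baseline = 2 ** 23          # no known plaintext: 8,388,608 samples
with_format = 2 ** 19       # knowing the cookie/header format: 524,288
with_known_bytes = 2 ** 13  # also knowing one of the last two bytes: 8,192

print(baseline, with_format, with_known_bytes)
print(baseline // with_known_bytes)  # overall reduction factor: 1024x
```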

This is still a noisy attack, but a reasonably fast one: in the test scenario used in the paper, it deciphered a byte of plaintext in about 15 minutes.

While Lucky Thirteen wouldn't be Paterson's first choice if he were attacking someone (when pressed, he opted for sending a phishing email), it is just the latest in a long string of problems in the general realm of TLS security and the security of its predecessor, Secure Sockets Layer (SSL). To take a few recent examples: Flaws in SSL handling have been used to remotely wipe the memories of Android and iOS mobile devices; a JavaScript attack tool called BEAST has been used to commandeer browser SSL sessions; and a steady stream of problems with the certificates that provide the underlying trust in the SSL protocol have all chipped away at what is perhaps the most widely used security mechanism on the Internet. An exasperatingly long list of TLS/SSL attacks can be found here.