I've wondered why there isn't a protocol similar to what was used in SSH 1.x, where every so often (the default was ten minutes) a set of RSA keys was generated and kept in memory, used for transactions (and signed with the permanent set of keys), then tossed.

In theory, PFS should be the core of TLS... negotiate the protocol, use DH or the elliptic curve variant to hammer out a session key, re-negotiate the session key every so often, and in any case toss the session key for good. Having a temporary set of RSA keys similar to SSH 1.x provides protection because it makes the permanent host keys essentially signing keys only, not used for encryption, so less data would be encrypted with those keys.
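
Something like that is easy to sketch outside of TLS. Below is a rough illustration in Python with the pyca/cryptography package -- not TLS's actual handshake, just the shape of the idea, with X25519/Ed25519 standing in for whatever DH and signature primitives you'd pick, and all names hypothetical:

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    # Permanent host key: used for signing only, never for encrypting data.
    identity_key = Ed25519PrivateKey.generate()

    # Ephemeral key, regenerated per session (or every N minutes) and discarded.
    ephemeral_key = X25519PrivateKey.generate()
    eph_pub = ephemeral_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    signature = identity_key.sign(eph_pub)  # bind the ephemeral key to the host identity

    # Peer: verify the signature, run its own ephemeral exchange, derive the key.
    peer_ephemeral = X25519PrivateKey.generate()
    identity_key.public_key().verify(signature, eph_pub)  # raises InvalidSignature on tampering
    shared_secret = peer_ephemeral.exchange(ephemeral_key.public_key())
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"pfs-demo").derive(shared_secret)
    # Once both ephemeral keys are forgotten, the permanent key alone cannot
    # recover session_key -- which is the whole point of forward secrecy.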

My old Android phone had CM7 installed and the browser let you select one of about 10 UA strings, with the intention of not being redirected to only-works-for-iphones-but-we'll-redirect-all-mobile-browsers-to-this-shitty-cut-down-version-that-lacks-content websites.

Generating RSA keys is more costly than, for example, ECDH keys. Primality checks for p and q are needed for RSA to be secure. In my understanding, any big enough integer is a valid DH private key.

Static RSA would be nice for certain applications, since it is computationally cheaper for the client. Also, with DANE for instance, the same primitives can be used to check signatures. Yet RSA might become costly at future key lengths. For instance, some say that 256-bit symmetric keys are equivalent in strength to 15,360-bit RSA keys.

It's been a while, but I think the break-even point for computational cost between RSA and ECC is somewhere around 1500-bit RSA and 200-bit ECC, but the 200-bit ECC is much stronger. ECC's computation grows closer to linear while RSA's grows exponentially. With 2048-bit RSA being standard now, ECC is now cheaper, hardware acceleration aside.
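
If you want a ballpark on the keygen cost difference specifically, here's a quick timing sketch with the Python pyca/cryptography package (the numbers vary wildly by machine and library build, so treat it as illustrative only):

    import time
    from cryptography.hazmat.primitives.asymmetric import rsa, ec

    def avg_keygen_seconds(make_key, runs=5):
        # Average wall-clock time to generate a key with the given factory.
        start = time.perf_counter()
        for _ in range(runs):
            make_key()
        return (time.perf_counter() - start) / runs

    rsa_t = avg_keygen_seconds(lambda: rsa.generate_private_key(public_exponent=65537,
                                                                key_size=2048))
    ec_t = avg_keygen_seconds(lambda: ec.generate_private_key(ec.SECP256R1()))
    print(f"RSA-2048 keygen: {rsa_t * 1000:.1f} ms average")
    print(f"P-256 keygen:    {ec_t * 1000:.1f} ms average")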

There are some things you want to share, and there are some things you don't want to share, which are called "private". And there are people in other countries who want to hurt you, who are called "terrorists". There's a part of the government called the NSA that looks at other people's private things in order to stop terrorists from hurting you. But some people don't like strange people looking at their private things.

Sometimes you want to share your private things with other people you trust. One of th

And there are people in other countries who want to hurt you, who are called "terrorists". There's a part of the government called the NSA that looks at other people's private things in order to stop terrorists from hurting you.

That's what the NSA wanted us to believe before Snowden's leaked documents showed us otherwise.

So unless you're willing to argue that the Chancellor of Germany is a terrorist, please stop repeating the Party line.

Some people can misuse your private things to take your money or to lie to other people that you did things that you never really did. This misuse is called "identity theft". And that's one thing that encryption can help prevent when used in the right way.

In other news [eweek.com], OpenSSL gets a 4-year-old flaw patched. The catch here is that the bug was not only in the codebase for 4 years, but it was publicly reported (CVE-2010-5298 [nist.gov]) for 4 years, with no one taking responsibility for fixing it.

OpenBSD developer Ted Unangst made a detailed report [tedunangst.com] of the bug. It's not as severe as Heartbleed, but it still allows remote attackers to inject data across sessions or cause a denial of service (use-after-free and parsing error) via an SSL connection in a multithreaded environment.

It's also pretty shocking that such madness even exists, except that open source can be like government: fixing only the things that people really bitch about, rather than infrastructure components that are decades old, and older.

If people paid attention to core components like they do the Linux kernel, perhaps there'd be a lot of noise, but also a lot better code.

Yo: Linus -- you listening? The kernel ain't crap without other components that work and are rationally safe. Cut loose a few of your homies to do some maintenance.

I realize these are volunteers. I'm exactly this kind of volunteer, but in a different sub-discipline. I'm booked, and understand how others have their own motivations. Yet strong classical coders are good in most endeavors, and this is one (the entire encryption chain) that needs serious review, as the others are worthless without cogent and secure code.

That 2 TB of key material has to be stored somewhere, and can't ever be reused. Is your data storage coercion-proof? Also, how much do you like sneaker-net? At that point, you might be better off not committing things to paper.
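
Assuming the parent is talking about one-time pads (which the "can't ever be reused" strongly suggests): a one-time pad is just XOR with a pad as long as the traffic, used once, which is exactly why the key material balloons. A minimal Python sketch:

    import secrets

    def otp_xor(data: bytes, pad: bytes) -> bytes:
        # Encryption and decryption are the same XOR; the pad must be truly
        # random, at least as long as the data, and used exactly once.
        if len(pad) < len(data):
            raise ValueError("pad shorter than message")
        return bytes(b ^ k for b, k in zip(data, pad))

    msg = b"attack at dawn"
    pad = secrets.token_bytes(len(msg))     # one pad byte per message byte, forever
    ct = otp_xor(msg, pad)
    assert otp_xor(ct, pad) == msg
    # Reuse the pad and an eavesdropper learns c1 ^ c2 == p1 ^ p2.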

PFS is nice and all. But why the hate for RSA? It's a well understood algorithm.

What the TLS designers need to think about is how to reduce the options to one well understood choice. So the complexity of ciphersuite negotiation, rekeying and all the other absurd complexity can be removed.

Got it . . . easy for someone who doesn't work with crypto every day to get confused, especially as RSA the company has developed many security-related bits of infrastructure. In fact, as far as I can tell, there's no relationship between RSA (the company) [wikipedia.org] and RSA (the crypto suite) [wikipedia.org]. My bad.

Nope -- I didn't miss it. 1982 was a long time ago, and RSA (the company) bears little resemblance today to RSA (the company) of 1982. In any case, was RSA (the company) involved in the development / improvements / other iterative activities related to RSA (the crypto suite in question) or not, especially recently? Because if that's the case, my original point stands, as there's plenty of evidence that RSA (the company) has been complicit in NSA's efforts to backdoor and otherwise undermine / weaken components.

What the TLS designers need to think about is how to reduce the options to one well understood choice.

Putting all of your eggs in one basket yields global disaster with unacceptable lag times to recovery when dropped.

It is not the designers' responsibility. Rather, education campaigns should assist the application developer and operator community in selecting appropriate cipher suites, and/or more work should go into security stacks to provide generally applicable options.

"So the complexity of ciphersuite negotiation, rekeying and all the other absurd complexity can be removed."

TLS will not be fixed. It will be replaced.

KISS is great and all but it falls apart when you go that one step further and try to oversimplify.

And yet, the underlying crypto was sound. The composition was broken. If they had specified a proper random IV and not allowed alternatives, BEAST could not have happened.

The weight of TLS made the details of the CBC implementation appear minor. But in fact getting that right is a primary task.

If there was less work on the bucket full of protocol, perhaps there would have been time for people to listen to the crypto guys when they say your IV must be random. If CBC was the only privacy mode, you can be sure it would have gotten that attention.

It's notable that in the current draft they still don't get it right:

"IV: The Initialization Vector (IV) SHOULD be chosen at random, and MUST be unpredictable. Note that in versions of TLS prior to 1.1, there was no IV field, and the last ciphertext block of the previous record (the "CBC residue") was used as the IV. This was changed to prevent the attacks described in [CBCATT]."
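
For concreteness, "get the IV right" in code terms looks roughly like this -- a sketch with the Python pyca/cryptography package, showing only CBC encryption with an explicit per-record random IV (the MAC and the rest of the record layer are omitted):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives import padding

    def encrypt_record(key: bytes, plaintext: bytes) -> bytes:
        iv = os.urandom(16)                   # fresh, unpredictable IV for every record
        padder = padding.PKCS7(128).padder()
        padded = padder.update(plaintext) + padder.finalize()
        encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        # The explicit IV travels with the record, instead of reusing the previous
        # record's last ciphertext block (the pre-TLS-1.1 "CBC residue" BEAST abused).
        return iv + encryptor.update(padded) + encryptor.finalize()

    key = os.urandom(32)
    record = encrypt_record(key, b"hello world")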

"It is not the designers responsibility. Rather education campaigns to assist application developer and operator community in selecting appropriate cipher suites and or more work in security stacks to provide generally applicable options."

This mentality has led to so many errors -- witness the many C functions that routinely result in failure because no one did bounds checking . . . We will always have poor developers, so we need to use standards and tech to help them get it right. Saying it is the developer's responsibility doesn't help them get it right.

The temptation to optimize for speed/memory usage over producing correct code is like a will-o'-the-wisp for many developers. If you develop code for a hostile environment (the default assumption for crypto) it should be assumed that any inputs will be abused.

Further, in a hostile environment, developers also need to assume that they won't have anticipated every possible way to extract information from a process. Do the simplest thing that will always work, within the limits you can predict. Then fix the bugs.
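
As a trivial illustration of "assume any input will be abused": a hypothetical length-prefixed record parser in Python that refuses to trust the peer-supplied length field (the Heartbleed lesson in miniature):

    import struct

    def parse_record(buf: bytes) -> bytes:
        # The peer claims a payload length; check it against what actually
        # arrived before touching a single payload byte.
        if len(buf) < 2:
            raise ValueError("truncated header")
        (claimed_len,) = struct.unpack_from("!H", buf, 0)
        payload = buf[2:]
        if claimed_len > len(payload):
            raise ValueError("claimed length exceeds received data")
        return payload[:claimed_len]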

Because elliptic curve is less expensive: it provides the same degree of difficulty with far fewer bits, or more computational difficulty for the same bits, and takes less CPU to compute, so being lighter weight makes it more useful. E.g., in my alternate distributed "toy" OS's PKI system there are "permission spaces". Every agent can be granted a permission, and can grant permissions to other agents, perhaps reducing the set of permissions that agent can grant to the next, including the permission to grant certain permissions.
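
The delegation rule described there boils down to "you can only grant a subset of what you hold." A toy Python sketch with made-up names, since the actual system isn't shown here:

    def delegate(granter_perms: frozenset, requested: frozenset) -> frozenset:
        # An agent may only pass on permissions it actually holds itself.
        if not requested <= granter_perms:
            raise PermissionError("cannot grant permissions the granter does not hold")
        return requested

    root_agent = frozenset({"read", "write", "grant"})
    child_agent = delegate(root_agent, frozenset({"read", "grant"}))
    grandchild = delegate(child_agent, frozenset({"read"}))   # further reduced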

I think it is a mistake for IETF to make value judgments about mandating a lowest acceptable level of security. Not everyone is faced with the same problems or places the same value on specific aspects of security.

The ability to decode encrypted sessions after the fact could in some instances offer more value than the risks associated with retroactive effects of key compromise. TLS, after all, has the ability to negotiate cipher suites offering no encryption at all while still offering integrity protection for data.
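
Integrity without confidentiality boils down to a keyed MAC over plaintext. A rough Python sketch of the idea (not the actual TLS NULL-cipher record layer):

    import hmac, hashlib, os

    mac_key = os.urandom(32)
    command = b"OPEN VALVE 7"        # goes over the wire in the clear, but tamper-evident
    tag = hmac.new(mac_key, command, hashlib.sha256).digest()

    # Receiver recomputes the MAC and compares in constant time.
    ok = hmac.compare_digest(tag, hmac.new(mac_key, command, hashlib.sha256).digest())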

TLS is supposed to provide point-to-point security; if you want the information to also go somewhere else, then send it there. If you don't trust the starting point, then have the ending point do it. So your industrial control system sends commands X to machinery Y, and machinery Y forwards them to logger Z before executing the command. Or have the logger be an intentional MITM. While the problems are genuine, I feel your solutions create complications in all the wrong places.

That's different. Encryption ciphers have a pretty standard interface and can be plugged in modularly by the dozens.

This interpretation was not the intent. The term "cipher suite" rather than "cipher" was used intentionally to include combinations of *all* parameters. I could have just as easily said *SRP*.

Key exchange protocols need more customization, and extend the protocol at its most vulnerable spot, before both parties have the keys they need to communicate securely.

Negotiation parameters are checked retroactively after session establishment regardless. Peers would have to be configured to allow a cipher suite so broken as to be compromised in real time during the TLS handshake. If RSA could be compromised in this way, it is hard to imagine a scenario where binding of trust from RSA to ephemeral keys would offer any protection.
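
The "checked retroactively" part is the same trick the TLS Finished messages use: once session keys exist, each side MACs the whole negotiation transcript, so tampering with the offered parameters shows up as a mismatch. A bare-bones Python sketch with illustrative names:

    import hmac, hashlib

    def finished_mac(session_key: bytes, transcript: bytes) -> bytes:
        # MAC over everything each side sent and received during negotiation.
        return hmac.new(session_key, transcript, hashlib.sha256).digest()

    def verify_finished(session_key: bytes, transcript: bytes, received: bytes) -> bool:
        return hmac.compare_digest(finished_mac(session_key, transcript), received)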