Posted by Soulskill on Saturday May 17, 2014 @12:16AM
from the now-let's-be-paranoid-that-the-NSA-solved-it-years-ago dept.

An anonymous reader points out this Science Daily report:
"Researchers ... have solved one aspect of the discrete logarithm problem. This is considered to be one of the 'holy grails' of algorithmic number theory, on which the security of many cryptographic systems used today is based. They have devised a new algorithm that calls into question the security of one variant of this problem, which has been closely studied since 1976. The result ... discredits several cryptographic systems that until now were assumed to provide sufficient security safeguards. Although this work is still theoretical, it is likely to have repercussions especially on the cryptographic applications of smart cards, RFID chips, etc."

Here's the thing, however. From reading the article, it seems that DH was not, itself, broken. But here's the problem: DH is used for forward secrecy. It is used to ensure that encrypted communications captured by an adversary cannot be decrypted later, even if the RSA key is eventually compromised.

Which means that whether DH has already been broken is a moot question. The real question is whether it is likely to be broken in the near future (where what "near" means depends on what you're actually encrypting).

Here is what Schneier usually has to say about that: Attacks always get better over time.

Of course, the main problem with replacing DH is that we don't really have anything better on hand.

I'm guessing Schneier et al. won't have a chance to analyze and respond until next week, but this is important enough that who knows?

It also occurred to me that, since the mess with the NSA broke out, I have not seen anything about Suite B being modified--everything in there is still officially supported for "State Secrets". I keep wondering if we're missing something there...

Of course, the main problem with replacing DH is that we don't really have anything better on hand.

Actually there is no need for DH: you can create a new throwaway RSA private/public key pair on both sides, sign it with your main key, use the throwaway keys to transfer the session key, then wipe the throwaway keys. The problem with this approach is that generating a new RSA key pair for every session, plus transferring the new key, plus extra round trips, is really slow compared to DH.

A better way to handle this would probably be to have the client generate the session key and encrypt it with the server's public key. This would shift the load of generating keys away from the server, so it would not be as easily DoSed.
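The pattern described above (client generates the session key, wraps it under the server's public key) can be sketched in a few lines. This is a toy illustration with textbook-sized primes and no padding, not real crypto:

```python
# Toy sketch of RSA-based session-key transport, as described above.
# Parameters are tiny and insecure: illustration only.

import secrets

# Server's long-term toy RSA key: n = p*q, public exponent e, private d.
p, q = 61, 53
n = p * q                           # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

# Client generates the session key and encrypts it with the server's
# public key, so the server never pays the cost of generating a fresh
# asymmetric key pair per connection.
session_key = secrets.randbelow(n - 2) + 2
wrapped = pow(session_key, e, n)    # c = k^e mod n

# Server unwraps with its private exponent.
recovered = pow(wrapped, d, n)      # k = c^d mod n
assert recovered == session_key
```

Note the trade-off the thread is about: this plain pattern has no forward secrecy. Anyone who records `wrapped` and later obtains the server's long-term private key can recover the session key, which is exactly what DH is there to prevent.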

Actually there is no need for DH: you can create a new throwaway RSA private/public key pair on both sides, sign it with your main key, use the throwaway keys to transfer the session key, then wipe the throwaway keys. The problem with this approach is that generating a new RSA key pair for every session, plus transferring the new key, plus extra round trips, is really slow compared to DH.

So how do you go about securely communicating one part of the throwaway keys to the other side so that the session key can be transferred?

No, this paper is explicitly stating that the theory is flawed. If this attack is expanded, SSL traffic will essentially be as secure as rot13. That's very different and much worse than heartbleed or any key generation issue. My applications were not affected by either of those coding mistakes, but they and everything else using the same algorithm would be by a break in DH.

No of course not. Ignorance is considered irresponsible, foolish, and lazy unless you're a neophyte. And while faith is desirable, faith is best based on knowledge, experience, and reason. Faith which cannot be defended by reason or evidence is considered weak and shaky, but better than no faith at all. Sharing evidences of what you have faith in is a very popular activity and is considered healthy and encouraging for all.

Reading the paper, the most notable feature is that their algorithm is efficient for constant characteristic, including the common case of fields of characteristic 2. It's also okay for the characteristic to grow somewhat with the size of the field, but not very fast.

This is not at all relevant to most implementations of DH, which use prime fields of large characteristic. For example, DSA depends on the discrete log modulo a large prime p. In particular, I wouldn't worry about the forward secrecy of current internet traffic.
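For contrast, here is a toy sketch of classic DH over a prime field Z_p, the large-characteristic setting the comment says is untouched by the new algorithm. Parameters are tiny and insecure, and g = 5 is just assumed to behave as a generator for illustration:

```python
# Toy Diffie-Hellman over a prime field Z_p (large prime characteristic).
# Parameters are far too small for real use: illustration only.

import secrets

p = 0xFFFFFFFB  # a small prime standing in for a real 2048-bit safe prime
g = 5           # assumed generator, purely for illustration

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

A = pow(g, a, p)                    # Alice sends g^a mod p
B = pow(g, b, p)                    # Bob sends g^b mod p

# Both sides derive the same shared secret; an eavesdropper who sees
# (p, g, A, B) must solve a discrete log mod p to recover it.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

The hardness assumption lives entirely in that last step: recovering a from (g, A, p) is discrete log modulo a large prime, which the new result does not touch.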

This is not at all relevant to most implementations of DH, which use prime fields of large characteristic.

Exactly. Probably more interesting is that their solution is applicable to a wider range of finite fields than recent improvements.
From the paper:

Although we insist on the case of finite fields of small characteristic, where quasi-polynomial complexity is obtained, our new algorithm improves the complexity of discrete logarithm computations in a much larger range of finite fields.

I see no good basis for the ScienceDaily author's leap from the paper's results to his conclusion that

Since solving this variant of the discrete logarithm is now within the capacity of current computers, relying on its difficulty for cryptographic applications is therefore no longer an option. This work is still at a theoretical stage and the algorithm still needs to be refined before it is possible to provide a practical demonstration of the weakness of this variant of the discrete logarithm. Nonetheless, these results reveal a flaw in cryptographic security and open the way to additional research. For instance, the algorithm could be adapted in order to test the robustness of other cryptographic applications.

I was told by a scientist friend 14 years ago that Diffie-Hellman was a sort of knapsack problem (and thus solvable in polynomial time), and that it was therefore inherently weaker than RSA. :/ Could it be that this has been known for decades but not widely spread?

When I look the knapsack problem up, Wikipedia says it's NP-hard, but that many cases, including random ones, are solvable in polynomial time, even though not all are.

You know, that's how decision problems for circuit verification are. While

As somebody else pointed out, either you or the person you were talking to was thinking of Merkle-Hellman, not Diffie-Hellman.

NP-hard problems are attractive, because they take polynomial time to verify if you know the right piece of data (i.e. the key) and exponentialish time if you don't. But it turns out that there are a lot of them that either can't be usefully turned into an encryption algorithm, or usually aren't more than polynomially hard if you do, because the version of the problem that got you

Breaking the discrete logarithm problem breaks both DH and RSA; obtaining the private exponent from the RSA public key can be done by solving the discrete logarithm problem. However, this algorithm solves the discrete logarithm problem only for certain fields (Galois fields of small characteristic), and I don't believe those are the ones relevant to RSA or DH. (RSA uses a group with a composite modulus; DH uses a prime field of large characteristic.)

Dlog is, however, the basis of almost every other public key algorithm out there which isn't elliptic curve. This includes Diffie-Hellman, ElGamal, DSA, Schnorr, and I'm sure others as well.

My reading of the article is that those are not yet borken, per se (spelling mistake left in intentionally). Since Diffie-Hellman is primarily used for forward secrecy, however (i.e., negotiating a session key that will not be compromised even if the private key later is), the question is not whether it is safe today. The question is whether it will remain safe for the foreseeable future.

If attacks on dlog are beginning to become practical, the answer is "less and less".

Dlog is, however, the basis of almost every other public key algorithm out there which isn't elliptic curve. This includes Diffie-Hellman, ElGamal, DSA, Schnorr, and I'm sure others as well.

Discrete log is certainly being used with elliptic curves. EC isn't really an algorithm, but an alternative "number system" that is well suited for crypto (the common alternative is finite/Galois fields).

The RSA private key is (p, q, d), where p and q are large primes and d is the private exponent. The RSA public key is (n, e), where n = pq and e is the public exponent. The private exponent d was chosen such that (m^e)^d = m (mod n) for any message m with 0 < m < n.

RSA encryption: c = m^e (mod n)
RSA decryption: m = c^d (mod n)

Suppose I have an RSA public key (n, e). I pick an arbitrary m and calculate c = m^e (mod n). If I can now solve the discrete logarithm problem mod n, I can find an x with c^x = m (mod n), and that x works as a private exponent for decrypting messages.
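This line of argument (a discrete-log oracle mod n yields a working RSA private exponent) can be demonstrated with toy numbers, using brute force as the stand-in "oracle":

```python
# Toy demonstration: given only the RSA public key (n, e) and a discrete
# log oracle mod n, recover a working private exponent. The "oracle" here
# is brute force over a tiny modulus: illustration only.

p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # the real private exponent, for comparison

m = 42
c = pow(m, e, n)                     # attacker computes c = m^e mod n

def dlog(base, target, mod):
    """Brute-force discrete log: smallest x with base^x = target (mod mod)."""
    acc = 1
    for x in range(mod):
        if acc == target:
            return x
        acc = acc * base % mod
    return None

x = dlog(c, m, n)                    # solve c^x = m (mod n)

# x behaves as a private exponent for this message (x = d mod ord(m)):
assert pow(c, x, n) == m
```

Strictly, a single x is only guaranteed to agree with d modulo the order of m; repeating the exercise for a few random values of m and combining the results pins down d itself.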

Dlog is the base, however, to almost any other public key algorithm out there which isn't elliptic curve.

EC algorithms (ECDH, ECDSA, ECIES) are also based ultimately on the discrete log problem. So, this news is a potential threat to essentially all of the major asymmetric crypto algorithms, excepting only RSA.

However, someone intercepted the message, cracked Slashdot's HTTPS RSA key, modified this message, then submitted it on my behalf.

Are you aware Slashdot has an in-house analytics database that can easily correlate any AC with IP addresses, corresponding MAC addresses, logins (if any), and locations? If push comes to shove, the police can be given your ID, most likely without need for a warrant. Just a lowly peon who never registered? Not a problem, if your comments are serious enough to justify the labor.

If this is just being publicly announced now, it was probably known to NSA, the GRU, and the MSS years ago. The superpower security agencies put substantial resources into cryptanalytic number theory.

Early public key cryptosystems used the knapsack problem. That turned out to have reasonably easy solutions. Factoring products of two big primes may be all we have left. That's believed to be hard, but there's no proof of a lower bound on it.

If this is just being publicly announced now, it was probably known to NSA, the GRU, and the MSS years ago. The superpower security agencies put substantial resources into cryptanalytic number theory.

I think that many people forget the NSA's other mission: securing US Government communications. The easiest way to figure out how secure the NSA believes an algorithm to be is to look at what level of information it's authorized for. Despite everything the folks here say, the NSA and other agencies aren't stupid. They know full well that if they can break an algorithm, somebody else can as well.

Unless they are keeping keys to the algorithm that no one else has, and which cannot be determined after the fact. This is essentially what they were accused of with regard to the elliptic curve random number generator they had put forward as a standard.

I love how people (shills?) keep bringing up the DES S-boxes, as if they had anything to do with anything. The DES business was in 1975, almost 40 years ago. Since then the NSA has gone through 10 directors, and the US through 8 presidents, and most of the staff in high positions have died or retired.

It's ridiculous to try to pretend that something nice a completely different NSA did 40 years ago has the slightest relevance to today's completely different environment and politics.

Yes except no one's been able to show that they actually do have the keys, or even a smoking gun of "this algorithm is always active".

What they are doing is the same wailing and gnashing of teeth that happened 40 years ago.

Meanwhile it seems far more obvious that the NSA instead just relies on the CIA's direct intercept and tampering with targeted bits of hardware, rather than depending on people using a specific algorithm, with a specific set of constants, somewhere that's interesting to them and not one wh

There are many other methods to send info, one time pads (number stations), steganography (lots of side channels for noise to be tampered with), couriers, stuff...

And BTW you still have to prove that NSA and all the other agencies are working for their own nation against the other nations. It may hold true at lower levels, but they don't answer to anyone by design, and it's apparent that we live in a global system where politics are a diversion.

There are many other methods to send info, one time pads (number stations), steganography (lots of side channels for noise to be tampered with), couriers, stuff...

OTP needs a secure side channel the same size as the data, meaning you can't just call/text/mail someone and verify a fingerprint, which makes it extremely impractical. Steganography with no or weak encryption is just security through obscurity. Couriers depend on the security of the courier, which is just like trusting the Internet: it too is a third-party courier. Without regular PKI it wouldn't be practically feasible; fortunately the basic building blocks, like RSA for signing/verifying in certificates and PGP
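For reference, the one-time pad itself is trivial; the entire difficulty is the key distribution described above. A minimal sketch:

```python
# Minimal one-time pad: XOR with a truly random key exactly as long as the
# message. The key-distribution cost is visible directly: the pad is the
# same size as the plaintext and must be single-use.

import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR data with pad; the same call both encrypts and decrypts."""
    assert len(pad) == len(data), "pad must match message length"
    return bytes(d ^ k for d, k in zip(data, pad))

msg = b"meet at dawn"
pad = secrets.token_bytes(len(msg))   # must be shared securely in advance

ct = otp_xor(msg, pad)                # encrypt
assert otp_xor(ct, pad) == msg        # same operation decrypts
```

Note that reusing `pad` for a second message breaks the scheme completely (XOR of the two ciphertexts reveals the XOR of the plaintexts), which is why the pad must be as large as all traffic combined.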

OTP needs a secure side channel the same size as the data, meaning you can't just call/text/mail someone and verify a fingerprint, which makes it extremely impractical.

Well, that's a problem if you are already under the radar, but otherwise? What if you just apply a salted hash to blocks of innocent files that you share with your pal and use the output as the OTP? We all already share provably identical innocent files: updates. Linux users routinely download compressed and signed packages, those mega- if not gigab

Key distribution for one time pads is a nightmare. It's possible for a government to send tapes of one time pads through diplomatic baggage to their embassies in advance. But it's completely useless for anyone else.

Key distribution for one time pads is a nightmare. It's possible for a government to send tapes of one time pads through diplomatic baggage to their embassies in advance. But it's completely useless for anyone else.

> If this is just being publicly announced now, it was probably known to NSA, the GRU, and the MSS years ago.

I don't know about that. One thing that has become clear after Snowden's revelations is that the NSA has trouble with properly implemented cryptography. While they no doubt have one or two tricks up their sleeve, they don't seem to be in possession of any revolutionary mathematical knowledge. They used to be years ahead of academia in cryptography, back when academia was barely looking into it. That

Maybe read the article or, I dunno, some of the other comments before making yourself look stupid. This does not break any crypto systems currently in use, because it only solves the discrete log problem in some specific finite fields with very low characteristic, i.e. not the ones used by DH. You might say, oh, that is just the first step to breaking DH, but there is no proof of that. Different finite fields have very different properties, and there is no evidence at all that results like this can or

Yes; the preprint was posted to arXiv when the research was completed. Obviously Science Magazine (the source for the Slashdot posting) prefers to write about results when the journal article comes out later, because otherwise the magazine would have to check the preprints for correctness on its own, which it can't be expected to do.

The new algorithm is for small characteristics. Small-characteristic fields have been suspect for some time and are no longer used. In fact, cryptographers are moving away from finite fields altogether in favor of elliptic curves, now that most of the relevant patents in that area have either expired or been rendered toothless. No subexponential algorithm is known for the discrete logarithm problem over elliptic curves.
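For the curious, here is what "discrete log over elliptic curves" refers to, as a toy sketch: scalar multiplication on a tiny curve, y^2 = x^3 + 7 (mod 17), with parameters chosen purely for illustration. Recovering k from k*G is the EC discrete log problem, for which no subexponential algorithm is known:

```python
# Toy elliptic-curve arithmetic over a small prime field.
# Curve: y^2 = x^3 + A*x + B over Z_17. Illustration only.

P_MOD, A, B = 17, 0, 7
O = None  # the point at infinity (group identity)

def ec_add(P, Q):
    """Add two points on the curve using the standard chord/tangent rules."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O                                    # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    """Double-and-add scalar multiplication: compute k*P."""
    R = O
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (1, 5)   # a point on the curve: 5^2 = 1 + 7 (mod 17)
assert ec_mul(2, G) == ec_add(G, G)
```

Given only G and k*G, finding k on a real-sized curve is the hard problem that ECDH and ECDSA stand on.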

As I understand it, they've simplified the problem to a complexity of O(n^log(n))... this is still non-polynomial time... but the rate of complexity growth is effectively polynomial. If I understand correctly, that means that the additional security that was formerly thought to be obtained by merely doubling cryptographic key length must now be obtained by squaring it.

That's not how big-O notation [wikipedia.org] works. O() is not a function; you can't just rearrange the components. There isn't even a constant factor involved in either version of what you wrote. Who modded this informative?

What WoOS writes seems OK to me. He's not rearranging the components; he's pointing out that the correct formula is a different one. There is a constant involved in the most common definition of big-O notation: look at line 3 of the Wikipedia page you linked to.

Squaring key lengths would be entirely impractical. That said, the improvements only apply to a case of discrete log which isn't actually in use. Cryptographic algorithms generally depend on the hardness of discrete log mod p (p a large prime), not in fields of size p^k (p small and fixed, k large).

A short key right now is 512 bits (0.5 kilobits). A longish key right now is 4096 bits (4 kilobits); many sites use shorter keys because high-traffic sites can't afford the bandwidth and CPU required to transmit and process even a 4096-bit key for thousands of connections per second. Squaring even the short end of that range would produce a 262144-bit key (256 kilobits). That's a ridiculous amount of data overhead just to initiate a connection, and performing math in a space that large would tax the CPU of individual computer

F(2^2) = 2^4: you can probably solve it with pen and paper.
F(2^3) = 2^9: you could probably still solve it by hand if your life depended on it.
F(2^4) = 2^16: too hard to solve by hand, but a PC could do it in less than a microsecond.
F(2^5) = 2^25: solved in a millisecond or so on a PC.
F(2^6) = 2^36: solved in a few minutes on a PC.
F(2^7) = 2^49: a little difficult to brute force, but still easily solved in half a day on a GPU.
F(2^8) = 2^64: a server farm can solve this in a day.
F(2^9) = 2^81: th

The result only applies to fields with small characteristic, but the most commonly used finite fields in this context are the Z_p for some prime p, which have characteristic p. So though this is a very interesting result, I am not tossing out all my crypto suite yet. We should be cautiously seeking better alternatives, but the worst thing we can do is panic and ditch well-studied algorithms and implementations every time some progress is made on their cryptanalysis.

Smart cards actually mostly rely on symmetric algorithms for most applications. The only commonly used public key algorithm is RSA, which is not based on the discrete logarithm. This leaves DSA among the relatively common algorithms, but that is rarely used on smart cards. What would be interesting to know is how EC-DSA is affected, since it is slowly replacing RSA because of the reduced key size.

Were all of the asymmetric cryptosystems invalidated, smart cards would probably become dramatically more important. Everything we do with asymmetric crypto can be done with symmetric crypto (and in many cases with liberal use of trusted third parties), but key distribution becomes a much harder problem. Smart cards provide a cost-effective and reasonably-secure mechanism for key distribution.

As long as we have one secure, general-purpose asymmetric algorithm, though, I don't see much of an impact. Well,

Do you know what PFS is or how it works? Clearly the answer is "NO" but you should educate yourself.

The OP is right: smartcards mostly rely on symmetric algorithms. In fact the OP never said anything about PFS, and bank cards for example, which use RSA, do not implement PFS, because it is not needed in that context.

Do you know what a smart card is and how it works? Clearly the answer is "NO" but you should educate yourself;-)

Even though I didn't really understand the math, two important points stick out from their description:

a) Complexity is still n^O(log(n)). This is better than O(e^n) but still worse than O(n^c) for any fixed c.

b) It is a heuristic, and I couldn't find in their paper a statement of how well this heuristic works, i.e., in how many cases it will find its (optimal/only) result, and in how many it will not, or only after much more time.

As far as I understood, they empirically showed their approach to work on one example. A study showing the general feasibility of the heuristic would be more convincing (yeah, that's the engineer speaking, not the mathematician).

It should also be noted that the authors themselves write in their conclusion: "Compared to existing approaches, and in particular to the line of recent works [15,10], the practical relevance of our algorithm is not clear, and will be explored by further work."

So, before jumping to conclusions, maybe we should wait for that "further work".
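To get a feel for what n^O(log n) (quasi-polynomial) in point (a) means relative to polynomial and exponential growth, here is a quick numeric comparison, done on log2 of each cost so the numbers stay printable:

```python
# Compare log2 of three cost functions at growing input sizes n:
#   polynomial       n^3        -> log2 cost = 3 * log2(n)
#   quasi-polynomial n^log2(n)  -> log2 cost = (log2 n)^2
#   exponential      2^n        -> log2 cost = n

import math

for n in (16, 64, 256, 1024):
    poly = 3 * math.log2(n)
    quasi = math.log2(n) ** 2
    expo = n
    print(f"n={n:5d}  poly=2^{poly:.0f}  quasi=2^{quasi:.0f}  exp=2^{expo}")
```

At n = 1024 the quasi-polynomial cost (about 2^100) already dwarfs the cubic cost (2^30) but is still nowhere near the exponential one (2^1024), which is why "better than exponential, worse than any polynomial" is the right way to read the result.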

Great hacker flick if nothing else:

"While the number-field sieve is the best method currently known... there exists an intriguing possibility for a far more elegant approach... over the rationals, and hence contained in a single cyclotomic field. Using the Artin map, we might induce homomorphisms... It would be a breakthrough of Gaussian proportions... and allow us to acquire... the solution in a dramatically more efficient manner. Such an appr

How does the recipient know the sequence of how many times each is run? Obviously, this requires a secure exchange of something that must be kept secret... so how do you ensure that nobody else can view what you want to exchange with the recipient? Obviously, you encrypt that information... I trust you can see the recursive nature of your proposed solution.

I'm a cryptographer, and the paper didn't even catch my eye when I was glancing through this year's Eurocrypt papers. I also haven't heard anyone talk about it at work, and this despite all my coworkers working on crypto which would break if someone came up with a fast dlog algorithm in groups used in practice. The algorithm is purely for fields of small characteristic, which means it's totally irrelevant for most practical applications, since typically one will work over a subgroup of invertible elements for t

Correct. The authors made no claim about breaking anything; it just got distorted somewhere along the line by the press. Also, the algorithm is only quasi-polynomial (n^O(log n)), which still makes it impractical for even moderately large fields.

Even if this problem is actually solved, the cryptographic methods using it were sufficient "until now" only if this is actually the first party to solve it. Saying that they were presumed sufficient until now misses the point of cryptography. It's never impossible to recover the plaintext. The whole point is to make it take infeasibly long with current methods, except for those with the proper key. If a new method comes along to break a scheme, that means the scheme is no longer sufficient going forward. It doesn