holy_calamity writes "The two encryption systems used to secure the most important connections and digital files could become useless within years, reports MIT Technology Review, due to progress towards solving the discrete logarithm problem. Both RSA and Diffie-Hellman encryption rely on there being no efficient algorithm for that problem, but French math professor Antoine Joux has published two papers in the last six months that suggest one could soon be found. Security researchers that noticed Joux's work recommend companies large and small begin planning to move to elliptic curve cryptography, something the NSA has said is best practice for years. Unfortunately, key patents for implementing elliptic curve cryptography are controlled by BlackBerry."

Without a statement as to whether the NSA has been involved in elliptic curve stuff (though I will point out that they have nearly as much motivation to make things hard for, say, the USSR/China [depending on era] to crack as they do to make things easy for them to crack), did you read your link? It isn't really talking about elliptic curve crypto at all.

It's describing a potential flaw in a random-number generator whose algorithm is based around elliptic curve crypto. Even if every worry presented by the article is true, that means absolutely nothing about whether elliptic curve is secure against the NSA.

(Actually it almost suggests that it is, because if EC was breakable then the NSA wouldn't have as much motivation to get a known key into the RNG standard.)

That article gives no reason to be worried about elliptic curves. What it does give reason to be worried about is magical constants and the use of asymmetric primitives for something that can be done with symmetric primitives. The concern about magical constants is why some algorithms use digits of e or pi for their constants. And since random number generators can be built using symmetric primitives, it is suspicious when somebody chooses to use asymmetric primitives. That latter decision needs to be justified.

I.e., if you implement these RFC 6090 "pre-patent" methods, the first obvious thing you would think of to improve them is point compression. That is one of the obvious implementation techniques that was patented.

Except that's not true. The patent often claimed to cover point compression specifies fields of characteristic 2, not the large prime fields most implementations actually use (the Certicom patents cover a ton of characteristic-2 field stuff).

It's also the case that point compression was described in publication at least as far back as 1986, making it too old to be patentable in general:

Quoting DJB:

Miller in 1986, in the paper that introduced elliptic-curve cryptography, suggested compressing a public key

More to the point, how the FUCK does one weasel a patent on crypto (which is just math, and therefore unpatentable) through the system? I would think the USPTO would just round-file anything coming in that has to do with crypto on general principle...

You cannot patent the laws of nature, including the laws of physics and mathematics.

A car MAKES USE of the laws of physics, but it may be patentable if it's a new invention. You cannot patent X + 1 = X - (-1) because that's mathematical truth, it existed before you noticed it. Just as you can patent a new invention that USES the laws of physics, you can patent some system that uses math. In this case, a system for securely delivering secret messages across a public network. Of course it still has to be a new and useful invention in order to be patentable.

I haven't read the patents, but not necessarily. I can't patent gears, but just because a system is made up of gears doesn't mean the system as a whole isn't patentable. Similarly, just because the software includes logic operations and equations doesn't mean the system isn't a new, patentable invention. A new configuration of gears and levers, doing something new, is patentable. A new configuration of logic operations, equations and interfaces, doing something new, is patentable.

Elliptic curve cryptography per se isn't patented, just the most efficient ways of using it. So, worst case, we might end up with some horrific eternal kludge whose only reason for existing was to provide a commercially safe route around patents not set to expire for another 2-3 years.

Likely example: the horrific clusterfuck of an abomination known as "little-endian binary". I don't know for sure it came about due to patent reasons, but I can't think of any sane reason why it would have ever come into existence otherwise.

I blame little-endian on a sort of backwards compatibility. Whether it's a pointer (index register) to an 8-bit or 16-bit value, you're pointing at the low-order byte. I've always suspected that made the first 16-bit Intel processors easier to build, in some way.

But if little-endian is a "horrific clusterfuck of an abomination" (and I'm not saying it isn't), what do you call PDP "middle-endian" - the Antichrist? I mean seriously, sometimes there is just no excuse.

Somehow big-endian happened just as often. I'm not buying "simplifies" unless there's some legacy that you're trying to find some simple way to add to. After all, the physical order of the circuit traces related to a register need not relate to the logical order of the bytes way off in main memory.

Likely example: the horrific clusterfuck of an abomination known as "little-endian binary". I don't know for sure it came about due to patent reasons, but I can't think of any sane reason why it would have ever come into existence otherwise.

From a purely machine theoretical standpoint, having the low order byte in the lowest memory location makes as much or more sense than the other way around.

Streaming transmission is a different matter, and in some instances can benefit from being able to receive the MSB first. This is especially true if the data gets acted upon in real time and the MSB is required earlier during the calculation. However, in many other cases, LSB-first network byte order can be more advantageous (or at least not a disadvantage). So the decision to use either is really based on the algorithms chosen for the network traffic itself.

In creating interface code to opposite-endian systems, it's easier to think about avoiding translation and keeping both in the same format. But, I've personally never had trouble with this since I've always used reversed buffers where direct use of reversed multi-byte arithmetic was useful.

However, it stands to reason that the designers of the first little-endian processors didn't consider this a problem, as most byte traffic needs to be buffered and checked before it can be used in calculations, and that obviates the need for network byte order to be same-endian. Since these were all designed in the early days, I see no reason to assume that the choice to go with little-endian would have been any sort of compromise to the state of the art.

There are plenty of sane reasons to use little endian. It means the same pointer can point to a value as 8-bits, 16-bits, 32-bits and so on, and it will get the right value as long as the value does not overflow.
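
A quick way to see that property is a minimal Python sketch (the value and widths below are made up for illustration; byte strings stand in for memory here):

    value = 42  # any value small enough to fit the narrowest width
    as_64 = value.to_bytes(8, "little")
    as_32 = value.to_bytes(4, "little")
    as_16 = value.to_bytes(2, "little")
    # Little-endian: the wide encoding starts with the narrow one, so a
    # pointer to the first byte reads the same value at any width.
    assert as_64[:4] == as_32 and as_64[:2] == as_16
    # Big-endian has no such prefix property.
    assert value.to_bytes(8, "big")[:2] != value.to_bytes(2, "big")
    print(as_16.hex(), as_32.hex(), as_64.hex())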

Big-endian only exists because Latin languages write their numbers wrong -- text is written left-to-right but numbers are written right-to-left. This mess has also caused the middle-endian date and time formats currently in use. ISO tries to fix the date format, but unfortunately does it by standardizing exactly the big-endian way that feels so alien to humans.

If computers had been invented by someone writing either left-to-right or right-to-left consistently, big-endian would not have occurred to them. They would naturally write the smallest value first, just like the Arabic inventors of the numbers. Alas...

Big-endian only exists because Latin languages write their numbers wrong -- text is written left-to-right but numbers are written right-to-left.

huh? numbers are written in exactly the same order they would be expressed in words - "fifty-one thousand, three hundred and forty-eight" == 51,348

being trained from a young age to read numbers like that, i have no idea whether it really would have been just as easy to learn to read the digits in reverse order ("8 4 3 1 5") but it doesn't seem so to me. In fact it seems completely unnatural - the kind of thing you might do just to prove you can rather than because it's any better or more efficient.

This mess has also caused the middle-endian date and time formats currently in use.

it's really only americans who do this (MDY). and maybe the japanese because of the "Operation Blacklist" post-WWII occupation led by Gen. MacArthur. Everyone else uses DMY or YMD because the middle-endian american date format is alien - people naturally order things from either smallest to largest (or least significant to most significant) or from largest to smallest (most to least).

ISO tries to fix the date format, but unfortunately does it by standardizing exactly the big-endian way that feels so alien to humans.

YYYY-MM-DD seems perfectly natural to me, not at all alien. I was raised on D-M-Y but figured out for myself that YYYYMMDD is the only format that sorts properly (and then later learnt there was an ISO standard for it).
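
The sorting claim is easy to check with a throwaway Python snippet (the dates are made up for the example):

    iso = ["2013-01-25", "2012-12-02", "2013-08-13"]
    dmy = ["25-01-2013", "02-12-2012", "13-08-2013"]
    print(sorted(iso))  # 2012-12-02, 2013-01-25, 2013-08-13 -- chronological
    print(sorted(dmy))  # 02-12-2012, 13-08-2013, 25-01-2013 -- not chronological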

YYYYMMDD also has the advantage of being unambiguous - you don't have to guess whether whoever wrote the date is American and, if so, whether they're using a sane or insane date format. For days >12 it's easy to figure out: 8-13-2013 can't be anything but 13th August... but 8-7-2013 could be July 8 (sane) or August 7 (insane).

worse, there's absolutely no way to tell except to look for other dates on the same page (or journal/ledger/book/web site/etc) and check to see if any of them have day numbers > 12.

Patents have been an issue of national security for a while. Several countries, including the US, have secret patents. It takes someone wiser than me to explain how that promotes the progress of science and useful arts.

I'm surprised to see other people going in the direction I've been going for about 3 years now. Really. I thought I was quite alone in my path, LOL.

I need to read this paper still, but if it's taking the same path I did, then it's not as peachy as some think.

I'm only an amateur, so take this from the point of view of someone who kicks back with a beer and enjoys solving impossible computational problems.

I don't think it's that close to being broken... I think it'll take a huge computing effort (think multi-terabyte databases) to generate the tables across the PQ space required so that existing problems can be used to quickly find paths and intersections. At the beginning you're looking at only a VERY SMALL speedup from modern sieving, but once the tables get generated (years of effort) you'll eventually see faster and faster improvements. At least, that's with my algorithm, which I'm sure is far from perfect and only works on a certain set of primes right now. Which is about 20%. Which is far from optimal.

So yeah, progress. But I'm unconvinced that this will work for all primes.

I'm going to read the paper now... which I'm sure is far better than what I've been doing.

Basically, from my first read, this is just a better sieve, a system which should find smooth numbers faster by choosing better starting points and using faster tests. I wouldn't call it a general break of RSA, and while it may well be a better sieve than GNFS, it's no silver bullet either. I can't imagine anyone breaking a real-world key with it anytime soon.

Thanks for that, I found it separately also, and read a few of the papers referenced. I tend to agree that this madness is a bit overblown. Reducing the time by 15% is really impressive overall, but when our anticipated sieving times for a typical 2k-4k keysize are measured in months and years that isn't a huge difference.

There does not appear to exist any single piece of evidence that DLP (the discrete logarithm problem) will benefit from algorithms running in polynomial time. The recent work of Antoine Joux that they are referring to (one of which I assume to be http://arxiv.org/pdf/1306.4244v1.pdf [arxiv.org]) provides improvements to quasi-polynomial algorithms for breaking DLP. But there is no reason to believe that these improvements can lead to a polynomial-time attack. And as long as this does not happen, those attacks can still be defeated by increasing the key size.

Please remember - when new tools give cryptographers the ability to exploit weaknesses in existing cryptosystems, they also give them the ability to devise cryptosystems immune to those exploits.

(if you can get a trusted version with no 'escrow' technology built in, that is)

As I recall, the guys who wrote PGP back in the day almost went to prison for publishing the source code - despite the fact that the RSA algorithm in use was already publicly documented (in Scientific American IIRC). "The Powers that Be" learned from that debacle and have far more reliable mechanisms for gaining access to everything you do in the clear if they want it (for example, the TPM in my new HP PC is turned on and enforcing - I can turn it off, but what other little goodies have manufacturers hidden in the firmware for me to discover?).

Moral of the story - IPv4 is exhausted, go to IPv6. BIND4 is obsolete, go to BIND8. NFS is dated and insecure, go to NFSv4. RSA is at risk of being compromised by advances in mathematics, go to [something better]. Really - is cryptography supposed to be carved in stone? I know that worked for the Egyptians, but anything related to the technology field...

Correct me if I'm wrong but you are not allowed to patent mathematical processes. "Elliptic curve cryptography" sure sounds like a "mathematical process" to me and a pack of especially smart and vicious patent lawyers should be able to blast RIM and Blackberry away in short order (by patent litigation standards which is aeons long). Sounds like a job for Amazon whose entire business model, the one they make money on anyway, depends upon the integrity of SSL which depends upon Diffie-Hellman and RSA for key exchange, if my flawed memory serves. Gotta blow the dust off my SSL book....

Yes, we need to check everything. That being said, this feels like game theory. Don't you get the sense that the NSA wants us to doubt the technology? If cryptography were widely used, most of what the NSA does would be made obsolete.

I have been reading his papers for some time now, and the guy is definitely making progress. Recent work, however, in the field of multilinear maps seems to point into a new direction: multiparty Diffie-Hellman agreement. That would be a lot harder to break. Basically, in such a scheme, when wanting or needing to establish a classical Diffie-Hellman agreement, you'd invite a trusted third party in. Eventually, that scheme may get broken, too; yet, it may grant implementors and users another 10-to-20-year truce. As for TFA on technologyreview.com, it sounds a bit like fear-mongering, to my taste.

The RSA encryption has been depreciated for years now. Hell, back in 2000 we were saying that DES was insecure, and triple-DES was just a stop-gap. Everyone's been switching to AES for awhile now. This isn't news.

Your first sentence sounds weird to me, and it isn't supported by your second. AES can't be a suitable replacement for RSA because AES is a secret-key system and RSA is a public-key one.

I'm not a crypto person, but RSA and elliptic-curve systems are the only two public-key systems I can think of. (There are others that allow secure exchange of a secret key, but that's different.)

There is another promising public key encryption method known as NTRUEncrypt (http://en.wikipedia.org/wiki/NTRUEncrypt). It's lattice based, and apparently it will still be effective in a post quantum computing world where RSA/Elliptic curve methods will fail.

Deprecated - I don't think that word means what you think it means. RSA can't be deprecated when there isn't a replacement. Elliptic curve cryptography has only really become a realistic replacement fairly recently, and nobody outside of government rushed to give Blackberry lots of money to use it because there wasn't a compelling reason to do so. Governments DID, which suggests to the conspiracy theorist that they might know of such a reason.

I said nothing about key exchange systems or anything else... I was making a general comment about encryption schemes; your confusion is because you are drawing your own conclusions rather than staying on point, which is that every encryption algorithm, regardless of type or usage scenario, has a shelf life.

You still can't replace an outdated public-key encryption key system with a symmetric system. Because, in real life, usage scenarios and key exchange systems actually matter - in fact, they are the most crucial aspect of the whole thing, otherwise we'd use true random one-time pads and be safe from any attack with any level of computing power forever.

I didn't say that you said that AES could replace RSA: I said that your AES/DES analogy didn't support your statement that RSA is or should be deprecated. That may sound like I'm nitpicking here, but I'm really not: it's pretty fundamental to my point. And the reason is this:

Which is that every encryption algorithm, regardless of type or usage-scenario, has a shelf life.

This absolutely need not be true. RSA for instance is based in part around a hardness assumption: that given a very large number n which is the product of p and q, it is far harder to find p and q from n than it is to find n from p and q. Assume for the sake of argument that this is the only hardness assumption RSA depends on. (If the summary isn't misleading it apparently also depends on the hardness of discrete log, but I don't know how.)

If the hardness assumption holds, then RSA as such will never be insecure. Why? Suppose you say "here is a computer capable of factoring a number n with b bits." I'll say "OK, fine; I'll use 100*b bits (or something)"; because multiplying is so much easier than factoring, your computer will still be able to carry out that task but it won't be able to crack my key.

In other words, if the hardness assumption holds, RSA doesn't have a specific difficulty: it can scale with computational power. That's why you see people using 2048-bit keys now instead of 512-bit keys a couple of decades ago.
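
To make that asymmetry concrete, here's a toy Python sketch (with assumed small primes, nowhere near real key sizes) timing multiplication against naive trial-division factoring as the numbers grow:

    import time

    def smallest_factor(n):
        """Naive factoring: trial division up to sqrt(n)."""
        if n % 2 == 0:
            return 2
        f = 3
        while f * f <= n:
            if n % f == 0:
                return f
            f += 2
        return n

    # Assumed toy prime pairs of increasing size (not real key material).
    pairs = [(1009, 1013), (104729, 104743), (15485863, 15485867)]
    for p, q in pairs:
        t0 = time.perf_counter(); n = p * q; mul = time.perf_counter() - t0
        t0 = time.perf_counter(); smallest_factor(n); fac = time.perf_counter() - t0
        print(f"{n.bit_length():3d}-bit n: multiply {mul:.1e}s, factor {fac:.1e}s")

The multiply column stays flat while the factoring column grows with every extra bit, which is the gap a longer key buys you.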

The only things that the age of an algorithm says about its security are (1) whether the difficulty can scale with computational power (it cannot for DES; it can for RSA) and (2) that being out longer gives people more time to find flaws in its assumptions.

But here's the thing: #2 isn't necessarily bad and doesn't speak against the algorithm. It is conceivable that the assumptions just fundamentally hold. If they do, being out longer will not impact the security at all. If anything, being out longer with no one discovering anything should give higher assurance that an algorithm is secure than a newer one would have.

now that resources have increased many-fold since the original, it is no longer secure.

I don't think I've ever heard a blanket statement about RSA being insecure -- only things like certain key sizes or certain implementations or PRNGs being insecure. (Wikipedia also lists a couple of side-channel and plain-text attacks, but those are also arguably quality-of-implementation issues, and similar attacks exist for EC systems.) The intro to the Wikipedia article says nothing about RSA being insecure. "Deprecated" and "discouraged" both fail to appear on the page.

The strongest statement against RSA I've heard is just that EC is better.

I then compared it to other encryption schemes that are less resource-constrained which have been coming into wider use

What you have described is true when both parties hold the relevant keys, and they believe that the keys have not been compromised - this is when I can trust that I really have your public key and not one substituted by an adversary. To solve this problem of key distribution the DH key exchange algorithm is normally used, and this relies on the hardness of discrete logs. If the DH problem is weak (which now appears to be the case) then RSA would be broken in the sense that you could not securely exchange the keys needed to use it.

Maybe it's time for an algorithm challenge, similar to how AES got decided and the latest hash algorithm got chosen.

Of course, secure asymmetric algorithms are a lot harder to design than symmetric ones.

I wonder about, instead of naming one, naming three. That way, if in the future one gets compromised, the broken one would just not be used, or for very sensitive stuff, all three can be cascaded (not for bit length, but to keep things signed or encrypted in case one gets severely weakened.)

For encryption, that's fine. For signatures and hashes, cascading WEAKENS it. An attacker only has to crack ANY of the algorithms to crack the whole. To prove that to yourself, try it with one of the algorithms defined as:

If all three verify, then the message (or realistically, a message hash) is good.

As for hashes, I've wondered about using this method, where one gets multiple hashes of the message via different algorithms, then XORs all of them. In this method, the resulting hash from all three should be as strong as the strongest link, because one couldn't tell which part came from which algorithm.
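
Mechanically, the construction would look something like this Python sketch (standard hashlib algorithms, digests truncated to an assumed common length; whether this really gives "strongest link" security is the claim above, not something the snippet proves):

    import hashlib

    def combined_hash(data: bytes, length: int = 32) -> bytes:
        """XOR together several different hashes of the same message."""
        digests = [
            hashlib.sha256(data).digest(),
            hashlib.sha512(data).digest(),
            hashlib.blake2b(data).digest(),
        ]
        out = bytes(length)
        for d in digests:
            out = bytes(a ^ b for a, b in zip(out, d[:length]))
        return out

    print(combined_hash(b"example message").hex())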

Sadly, chaining three methods doesn't make things 3x more secure (either for hashes or encryption). Read Practical Cryptography by Bruce Schneier for the details. At least, I think that's the book that talks about it, it's been a few years.

The RSA encryption has been depreciated for years now. Hell, back in 2000 we were saying that DES was insecure, and triple-DES was just a stop-gap. Everyone's been switching to AES for awhile now. This isn't news.

Wow, that is so wrong.

RSA is an asymmetric (aka public key) cipher because it requires two keys - one to encrypt, one to decrypt. AES, DES, 3DES are symmetric ciphers because you use the same key to encrypt and decrypt.

RSA and EC (elliptic curve) encryption are useful if you want to send data to someone without the hassle of secretly sharing a key ahead of time - e.g., I can encrypt a message using the public key and only the private key can decrypt it. Or I can use my private key to encrypt a message, and the public key can be used to decrypt it (the latter is often used to sign stuff, except the message is typically a hash instead of the original message).

The reason you use AES, DES, 3DES is because public key encryption is hideously slow. In the case of RSA, you're exponentiating one horrendously large number by another horrendously large number. (If your message is long, the number representing it is correspondingly big.)

That's why every public-key encryption scheme in practice encrypts the message with a fast symmetric cipher like AES, then encrypts the (much shorter) symmetric key with RSA or EC. If I want to send you a document, I encrypt it with AES, then use your public key to encrypt the AES key I used.

It's also why signing uses a hash - it's easier to encrypt the hash than the message. And verification just means recomputing the hash, then decrypting the encrypted hash with the public key, producing the original hash, which can be compared to the just-computed one.
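
Here is a schematic toy of that hybrid pattern in Python (textbook RSA with tiny assumed primes, the symmetric step left as a comment; this is a sketch of the idea, not a real implementation):

    import hashlib, os

    # Tiny textbook RSA key pair (assumed demo primes, absurdly small).
    p, q, e = 104729, 1299709, 65537
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+)

    message = b"the actual document, possibly megabytes long"

    # Encryption: RSA only wraps the short session key.
    session_key = int.from_bytes(os.urandom(4), "big") % n
    wrapped_key = pow(session_key, e, n)    # the only thing RSA encrypts
    # (a real system would now encrypt `message` with AES under session_key)

    # Signing: RSA is applied to a hash of the message, not the message itself.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    signature = pow(h, d, n)

    # Verification: recompute the hash, undo the signature with the public key.
    assert pow(signature, e, n) == h
    assert pow(wrapped_key, d, n) == session_key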

The breakthrough in math would be a way to factor a large number quickly - which is what RSA relies on for security: it's easy to multiply two big numbers, but it's very time consuming to factor the result.

How is it wrong? He's not referring to the operating principles, only to the fact that RSA and DES are about equally dated. You've just wasted time and space providing him with information he's already had.

He's not referring to the operating principles, only to the fact that RSA and DES are about equally dated

Adam Van Ymeren said it well [slashdot.org]. An algorithm's age doesn't necessarily speak to how secure it is. DES is considered insecure because it has a fixed key size that can be brute-forced, not because it is a fundamentally weak crypto system.

By contrast, the same objection does not apply to RSA, at least AFAIK: the key size can be scaled arbitrarily, so as computing resources grow, so can the difficulty of the problem.

This is also true with respect to DES, as in the case of 3DES, and you could easily create 5DES or 10DES or whatever by chaining cipher units with different keys which are each a portion of the combined key.

It's not only the key that is too small. The block size is also too small. DES has a block size of only 64 bits. Even the 128-bit block size of AES is a bit on the short side. My rule of thumb for how many blocks of data you can safely encrypt with a single key is two raised to one quarter of the block size.
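
Plugging the two block sizes mentioned into that rule of thumb (the 2^(blocksize/4) limit is the commenter's own heuristic, not a standard bound):

    for name, block_bits in [("DES", 64), ("AES", 128)]:
        blocks = 2 ** (block_bits // 4)
        total = blocks * (block_bits // 8)
        # DES: 65536 blocks, about 512 KiB; AES: ~4.3e9 blocks, about 64 GiB.
        print(f"{name}: {blocks} blocks of {block_bits // 8} bytes = {total} bytes per key")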

There's a fundamental difference between breaking the algorithm mathematically and the key space being too small.
People left DES and 3DES because the key size was too small and a brute force became feasible. The same is becoming true for RSA, but this is completely different from solving the discrete logarithm problem that underpins RSA and Diffie-Hellman. Solving that would be an amazing feat of mathematics.
So please stop trying to show off to /. how you're smarter than everyone else.

The thing I'm not sure about right now is whether the RSA method itself is becoming insecure or if standard-size keys can simply be brute-forced. If it's a question of key size, then why not use larger keys?

The last time I checked, it is possible to increase the size of RSA keys quite a bit. Most frontends for PGP/GPG only allow keys up to 4096 bits to be created, but several years ago I was able to generate valid key pairs up to 11296 bits. I had to modify the GPG source code and recompile it before it would accept keys that large.

The story is talking about the possibility of a mathematical breakthrough that would make solving the discrete logarithm problem (and possibly the integer factorisation problem) much, much easier. RSA relies on it being much easier to raise something to an integer power than to find a discrete logarithm (inverse operations). If you figure out how to make the two operations of similar difficulty then any encryption scheme based on them is hopelessly broken for any key size.
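
A toy Python illustration of that gap (with an assumed tiny prime; real groups are astronomically larger): the forward direction is one fast built-in call, while even the dumbest inversion has to walk the group.

    p, g = 1299709, 5          # assumed small prime modulus and base
    x = 123457                 # the secret exponent
    y = pow(g, x, p)           # forward: square-and-multiply, effectively instant

    def brute_force_dlog(y, g, p):
        """Exhaustive search for some k with g**k == y (mod p)."""
        acc = 1
        for k in range(p):
            if acc == y:
                return k
            acc = (acc * g) % p
        return None

    k = brute_force_dlog(y, g, p)   # reverse: time grows with the group size
    assert pow(g, k, p) == y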

Why elliptic curves when we can go back to good old fashioned original RSA that uses prime number factoring as the problem? No patent nonsense to worry about there.

Sometimes the past needs to remain in the past...

Although prime factoring is considered a hard problem, the sparse distribution of prime numbers (~x/ln(x)) makes RSA increasingly inefficient in that superlinearly large moduli (to match large primes) need to be used to increase security linearly.

Lest nostalgia continue to be your guide, the original RSA was also found to be broken and needed to be patched to avoid these insecurities:

1. Messages corresponding to small input values could be simply inverted ignoring the modulus operation (just doing numerical root estimation to invert the exponentiation). The larger the modulus, the more "insecure" messages there are.

2. Encryption is deterministic so is subject to dictionary attacks.

When people say they are using RSA today, they are usually using RSA-OAEP (optimal asymmetric encryption padding), which patches these two specific vulnerabilities of RSA.
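
Weakness #1 is easy to demonstrate as a toy in Python (assumed tiny modulus, e=3, no padding): when m^e never wraps past the modulus, an integer e-th root recovers the message without touching the private key.

    def icbrt(x):
        """Integer cube root by binary search."""
        lo, hi = 0, 1 << (x.bit_length() // 3 + 2)
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if mid ** 3 <= x:
                lo = mid
            else:
                hi = mid - 1
        return lo

    n = 104729 * 1299709       # assumed toy modulus (public)
    e = 3
    m = 1234                   # small message: m**e < n, so no modular wrap
    c = pow(m, e, n)           # "encrypt" with textbook RSA
    assert icbrt(c) == m       # recovered with no key at all (weakness #1)
    assert pow(m, e, n) == c   # same plaintext, same ciphertext (weakness #2)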

FYI, the original RSA was patented (although later RSA labs decided to not enforce the patent and let it expire). This patent nonsense around RSA was a big issue in its day...

The discrete logarithm problem isn't technically the same as prime number factoring, but approaches that work on one tend to inspire approaches that work on the other. A breakthrough algorithm for one is likely to lead to a breakthrough algorithm for the other.

Based on my limited understanding, proving P = NP would not necessarily and automatically provide a manner of constructing reductions. It might. But there are proofs in computation theory that demonstrate limit complexities but do not provide the algorithms that might implement them, nor do they (currently, visibly) provide any indication of how that algorithm may be arrived at.

Besides, proving P = NP would have a vast number of consequences that would echo across mathematics and the more fundamental sciences. To harp upon the security implications is as short-sighted as fretting that all-out thermonuclear war would negatively affect the postal delivery service.

What, exactly, does proving P = NP have to do with the price of tea in China? We knew when RSA was created that advances in computation power would eventually make it feasible for us to decrypt its contents. We even know what that boundary is... and we're coming up on it now.

No encryption algorithm is immune to the fact that the faster you can run an algorithm, the sooner you'll get a result. That's all encryption is. I don't need to be a math major to figure out that if I have a car that can go 200 MPH it'll get there twice as fast as a car that can only do 100 MPH.

Proving P=NP implies that a host of problems currently taking non-polynomial time could take polynomial time instead. This has important implications in computer science, physics, chemistry and more, as you go from a problem considered to be effectively "impossible" (as in, impossible in a reasonable amount of time due to exponential growth) to one that is "possible". It's a very different change from the incremental speedups we usually get from algorithms, because we're talking about something with an entirely different character.

We knew when RSA was created that advances in computation power would eventually make it feasible for us to decrypt its contents. We even know what that boundary is.. and we're coming up on it now.

No, we did not know any such thing. Advances in computation power can be defeated by increasing the key length of RSA, indefinitely. RSA cannot be made useless just by making regular computers run faster.

You misunderstand the difference between throwing hardware at a problem and coming up with a more efficient algorithm.

RSA doesn't specify a key length. I can use a key that's 64 bits long (used originally but insecure today) or 1 megabit long (secure against known classical algorithms for the age of the universe no matter how much hardware you throw at it). As hardware gets better I can encrypt things using longer keys, in the same amount of time. It takes you MUCH MORE time to decrypt that, even with the better hardware. So long as you keep increasing key length as hardware gets faster, the encryption actually gets BETTER with better hardware.

The article is talking about a breakthrough in mathematics that could make solving discrete logarithms much faster. If it made them anywhere near as fast as exponentiation, then it wouldn't take me much longer to decrypt your message than it took you to encrypt it, regardless of key length.

DES is insecure because it uses fixed length keys, that became practical to brute force. RSA doesn't have that problem. The situations are entirely different, and the potential breaking of RSA is much more interesting, and much more of an accomplishment.

Advances in computation power alone will never break encryption. Ever. There is no boundary. An encryption scheme can always just use larger keys.

The article is discussing advances in mathematics. Mathematics is more powerful than any computer. Unfortunately, results are also much less predictable. Encryption could be broken with mathematics in 5 minutes, or even never.

I don't need to be a math major to figure out that if I have a car that can go 200 MPH it'll get there twice as fast as a car that can only do 100 MPH.

You would have been better as a math major. To understand the issue, realize that a car going 200MPH needs much more power than a car going 100MPH. A car going 400MPH will need even more power. Similarly, with some algorithms, the solution becomes harder and harder the larger the dataset grows; often exponentially (or even factorially).

Based on my limited understanding, proving P = NP would not necessarily and automatically provide a manner of constructing reductions. It might. But there are proofs in computation theory that demonstrate limit complexities but do not provide the algorithms that might implement them, nor do they (currently, visibly) provide any indication of how that algorithm may be arrived at.

You are technically correct, but certainly the quickest and most direct proof is to show a general solution for an NP-complete problem that runs in P time. And while proving P=NP would not necessarily provide the manner of constructing reductions in the general case, solving any NP-complete problem in P time does absolutely provide automatic solutions for *all* NP-complete problems in P time since, by definition, all NP-complete problems are reducible to each other. And factoring is an NP-complete problem

You are technically correct, but certainly the quickest and most direct proof is to show a general solution for an NP-complete problem that runs in P time

You don't know that for certain; it is conceivable (if seemingly unlikely) that the easiest proof and the first found could be non-constructive.

(Remember, to prove that a problem is in P you not only have to come up with a P algorithm for it but then you have to prove that the algorithm is actually in P. It could be that any algorithm for a (currently-cons

It was my understanding that polynomial time reducibility among NP complete problems had been proven, although I don't have a reference handy. However, reducibility is not the same thing as solving and just because conversion is theoretically possible doesn't mean that conversion algorithms for every pair of NP complete problems are automatically known and even if they are known, converting from one to another hasn't actually made solving either of them any easier since they both remain NP complete. You'v

Actually in some ways it would be really really exciting and almost certainly a really good thing in the long run, because there are a lot of important, currently-intractable problems that become tractable if P=NP.

Proving that P=NP doesn't make anything tractable, unless you use the ridiculous definition where tractable is the same as polynomial time. What would have practical applications is if someone finds a very fast algorithm for solving all the NP problems. Whether P=NP is not really very much related to the question of whether such an algorithm exists. ML has exponential-time type checking, yet ML compiles don't take that long. Polynomial time is not the same as practical - it fails in both directions.

The wonderful thing about a constructive solution to P=NP, is that you can use the solution to optimize itself. Suppose it is of polynomial order N. Just encode the statement "is there a program of length R that solves P=NP in polynomial order N-1" as a SAT, and use the N-order solution to solve it. Keep decreasing N until you can't anymore.

Then take a specific computer model, and encode optimality conditions as a SAT. You could then use the best known solution to solve that.

(One way this could fail is the following: factoring I think is in a no-mans land between P and NP, not known to be in P nor known to be NP-complete. If NP collapses into P then so must factoring, but it could be that factoring is some weird-ass O(n^23) algorithm or something while every NP-complete problem can't be done in less than, say, O(n^6000).)

Consider this: Performing 2^256 operations is physically impossible (based on the quantum mechanical minimum energy to do anything, and the total amount of energy in the universe). 100 digit numbers are about 330 bits in size. If factoring n bit numbers required n^30 operations, then factoring just one 330 bit number would be physically impossible.
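
Checking that arithmetic in Python (330 bits and the hypothetical n^30 cost are the figures from the paragraph above):

    n_bits = 330
    ops = n_bits ** 30           # hypothetical n**30 factoring cost
    print(ops.bit_length())      # 251: the count sits between 2**250 and 2**251
    print(ops > 2 ** 250)        # True -- the same ballpark as the 2**256 ceiling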

There was a story I read way back where the ridiculously-advanced AIs manipulated artificial pocket universes with computationally-friendly physics so they could solve NP problems in much-less-than-P-time from the perspective of the "real" universe. The dominant AIs were the ones with the best pocket universe designs/resources.

Although I'd guess we're a long ways from having to worry about that.:)

They are discrete logs over different groups. Joux's work and the related stuff being referred to is for (Z/nZ)*, that is, the discrete log over the group of units under multiplication in the integers modulo n. That problem is, of course, closely connected to prime factorization. In contrast, ECC uses the discrete log over elliptic curve groups, which are also abelian groups but as of yet do not have any similar sort of breakthrough. In fact, this isn't a new situation. Discrete log has been harder over elliptic curves than over (Z/nZ)* for a long time.

Discrete log has been believed to almost certainly not be NP-complete since well before this. We have much better than exponential time algorithms for discrete log, so it would contradict the exponential time hypothesis for it to be NP-complete http://en.wikipedia.org/wiki/Exponential_time_hypothesis [wikipedia.org]. Second, discrete log is closely related to factoring, which lives in NP intersect co-NP. Since factoring lives in that intersection, if factoring were NP-complete then the polynomial hierarchy would collapse.