I’ve always wondered this about public key encryption - the pairs are unique, right? So if you had a sufficiently large database, you could generate all the possible pairs and then you’d have every private key to go with the public key? Or is there a random element that I’m not figuring in to it?

As winamp says, the number of pairs is huge – about 10^(0.3×n) for an n-bit key … the number for a 256-bit key is already of the same order as the number of atoms in the observable universe, and we use much bigger keys than that. So you can’t store and index all the keys, and the derivation of the private key from the public key is based on an algorithm that is computationally infeasible to reverse (so far).
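
For scale, here’s a quick back-of-the-envelope check of that 10^(0.3×n) figure in Python (the 10^80 atom count is the usual rough estimate, an assumption here, not a precise figure):

```python
import math

def keyspace_digits(n_bits: int) -> int:
    """Decimal digits in 2**n_bits - the ~10**(0.3 * n) from the post,
    since log10(2) ≈ 0.30103."""
    return math.ceil(n_bits * math.log10(2))

ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80  # common rough estimate

print(keyspace_digits(256))    # 78  -> 2**256 is a 78-digit number
print(keyspace_digits(1024))   # 309 -> a 309-digit count of possible keys
assert 2**1024 > ATOMS_IN_OBSERVABLE_UNIVERSE
```

So a 1024-bit keyspace is a 309-digit number of possibilities – no database, however large, gets anywhere near it.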

Yes, you could calculate every possible key pair, thus rendering public key cryptography useless. In the case of a cryptosystem using 1024-bit keys you’d only have to calculate about 10^308 keys. And 5 years ago I would’ve thought this impossible. Now probably doable within 10 years.
Not so many 1024-bit keys, afaik.

I don’t think you’re comprehending the number. If every atom in the observable universe somehow contained another universe of the same size, and every atom in each of those universes contained yet another universe of the same size, and you could store a key in every atom of every one of those … you’d still be a factor of a trillion trillion trillion trillion trillion short.

Computationally, if you could derive a key in one machine instruction, and every atom in all those universes was a supercomputer equal to our entire world’s computing power, and they’d been calculating by brute force since the universe began, you’d still be a factor of a trillion trillion short.
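
The storage half of that image can be sanity-checked in a few lines of Python (the 10^80 atoms-per-universe figure and the ~10^308 size of the 1024-bit keyspace are the rough estimates used above):

```python
# Rough sanity check of the storage claim, under the estimates above.
KEYS = 10**308                 # ~number of possible 1024-bit values
ATOMS_PER_UNIVERSE = 10**80    # rough order-of-magnitude estimate

# universe -> a universe in every atom -> another universe in each of those
storage_slots = ATOMS_PER_UNIVERSE ** 3   # 10**240 atoms to store keys in

print(KEYS // storage_slots)   # 10**68: still ~68 orders of magnitude short
```

The exact count of “trillions” depends on which estimates you plug in, but however you slice it, the shortfall is tens of orders of magnitude.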

If you’ve got communications that absolutely cannot be intercepted—whether you’re a NSA whistleblower, the president of Mexico, or Coca-Cola—quantum cryptography is the way to go.

It harnesses the bizarro-world properties of quantum physics to ensure that information sent from point A to point B isn’t intercepted. The laws of physics dictate that nobody—not even the NSA—can measure a quantum system without disrupting it.

The problem, as Edward Snowden could probably tell you, is that quantum cryptography is still in its infancy. It only works over relatively short distances, and the required gear—including lasers and a dedicated fiber optic network—is prohibitively expensive, limiting its use to a handful of research labs, corporations and governments.

A new research paper from scientists at Toshiba brings quantum cryptography a baby-step closer to the masses. The paper, published today in Nature, explains how to expand a point-to-point quantum network with only two users into a “quantum access network” with up to 64 users.

Adding its experience to those of other major browsers and social media sites, Yahoo said Friday that it received 29,470 requests from governments around the world for user data in the first six months of 2013.

Of that total, 12,744 — 42 percent — came from the U.S. government.

In its “transparency report,” Yahoo adds that it:

“[h]as joined no program to volunteer user data to governments. Our legal department demands that government data requests be made through lawful means and for lawful purposes. We regularly push back against improper requests for user data, including fighting requests that are unclear, improper, overbroad or unlawful. In addition, we mounted a two-year legal challenge to the 2008 amendments to the Foreign Intelligence Surveillance Act, and recently won a motion requiring the U.S. Government to consider further declassifying court documents from that case.”

I don’t think you’re comprehending the number. If every atom in the observable universe somehow contained another universe of the same size, and every atom in each of those universes contained yet another universe of the same size, and you could store a key in every atom of every one of those … you’d still be a factor of a trillion trillion trillion trillion trillion short.

Computationally, if you could derive a key in one machine instruction, and every atom in all those universes was a supercomputer equal to our entire world’s computing power, and they’d been calculating by brute force since the universe began, you’d still be a factor of a trillion trillion short.

Fair enough

What about 128-bit keys, though? Is that feasible?

Or perhaps they’ve found a way to reverse-engineer the algorithm - not to get an exact number, but to get a range, so you drastically reduce the number of keys you have to generate to crack a specific pair. Say down to a few billion or so possibilities - that’s only a few seconds’ work. Given that public key encryption relies on the product of two primes, that reduces the number of operations you have to do. You can probably make some assumptions about the size of the prime numbers too, further limiting the range - this post appears to offer a brute-force method (m0x39.blogspot.ie/2012/12/0x00-i … ng-to.html) that sufficient scale would cover?

If that was the case, once you’ve cracked it, you have it stored and can read the messages at will, and the sender/receiver are none the wiser. So the ‘trick’ would appear to be to reduce the number of combinations you need to try to crack a particular public key.

You can probably make some assumptions about the size of the prime numbers too, further limiting the range - this post appears to offer a brute-force method (m0x39.blogspot.ie/2012/12/0x00-i … ng-to.html) that sufficient scale would cover?

There are several well-known optimizations to help you factorize such numbers more quickly. There are also ASICs available to speed it up considerably. However, last time I checked nothing comes close to allowing you to do it in any reasonable time for commonly-used public key sizes, let alone real-time decryption. (To digress for a second, public key crypto is only used to authenticate TLS web connections, not encrypt them – you don’t need public key stuff for that, since far more elegant and efficient solutions exist.)

We should assume that the NSA (famously the world’s largest employer of mathematicians) has some non-public optimizations. The question is, do they improve things by 10% (a massively impressive feat for a single organisation, but not enough to get you to real-time decryption) or 90%? Who knows.

To go back to the implementation hacks I mentioned, my money would be on the NSA putting significant resources into this kind of thing. If you can slightly weaken an implementation – for example, if you can get the server to poorly choose some of its initial vectors – you might be able to make the key crackable in real-time with the NSA’s resources. This is kind of what happened with the OpenBSD hack – IIRC the code change caused one of the initialisation vectors to be transmitted in the clear along with the setup messages, which made the connection crackable.

But again, it would be far easier to just ask Verisign for a copy of the keys…

There are several well-known optimizations to help you factorize such numbers more quickly. There are also ASICs available to speed it up considerably.

Yes, but how many prime number factors are we talking about? Is that a storable number of keys? The system I work on does 75,000 transactions a second, with each transaction about 2 million instructions doing about 100 I/Os - so reasonably heavy-duty transactions.

We should assume that the NSA (famously the world’s largest employer of mathematicians) has some non-public optimizations. The question is, do they improve things by 10% (a massively impressive feat for a single organisation, but not enough to get you to real-time decryption) or 90%? Who knows.

Precisely - a logical weakness (perhaps the idea that prime numbers are safer because they are rare?) is also a likelier hole than a mathematical one. The high-end mathematicians I was in college with were deeply into their philosophy…

To go back to the implementation hacks I mentioned, my money would be on the NSA putting significant resources into this kind of thing. If you can slightly weaken an implementation – for example, if you can get the server to poorly choose some of its initial vectors – you might be able to make the key crackable in real-time with the NSA’s resources. This is kind of what happened with the OpenBSD hack – IIRC the code change caused one of the initialisation vectors to be transmitted in the clear along with the setup messages, which made the connection crackable.

Yep, I found an article earlier that reckoned that there were a number of key generation methods that were not nearly random enough with some of them easily crackable. If you know what ‘thing’ was used to build the key pair, you might get an advantage to working out the parameters it works under.

But again, it would be far easier to just ask Verisign for a copy of the keys…

I’m kinda guessing that most of the stuff they really want to crack is not Verisigned…

Yes, but how many prime number factors are we talking about? Is that a storable number of keys? The system I work on does 75,000 transactions a second, with each transaction about 2 million instructions doing about 100 I/Os - so reasonably heavy-duty transactions.

That’s a nice toy

Wikipedia:

If a large, b-bit number is the product of two primes that are roughly the same size, then no algorithm has been published that can factor in polynomial time.

So you’re not even in the ballpark.

You talk about factoring a 768-bit number – in 2009 it took a team of researchers 2 years to factor one such number. And it gets exponentially harder as you add bits.
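
To make the scaling concrete: trial division, the naive approach, already shows the shape of the problem (the real 768-bit record used the number field sieve, a far cleverer algorithm; this toy Python sketch is just an illustration):

```python
import math

def naive_factor(n: int):
    """Brute-force a semiprime by trial division.

    Work grows with sqrt(n): every two extra bits on n roughly doubles
    the running time, so 768-bit (let alone 1024-bit) moduli are hopeless."""
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    return None  # n is prime

print(naive_factor(2003 * 2011))  # (2003, 2011) - instant for toy numbers
```

Even the best published algorithms only improve on this sub-exponentially; none run in polynomial time, which is the Wikipedia point above.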

yoganmahew:

We should assume that the NSA (famously the world’s largest employer of mathematicians) has some non-public optimizations. The question is, do they improve things by 10% (a massively impressive feat for a single organisation, but not enough to get you to real-time decryption) or 90%? Who knows.

Precisely - a logical weakness (perhaps the idea that prime numbers are safer because they are rare?) is also a likelier hole than a mathematical one.

No, that’s not the way it works. Prime numbers are not safer because they are rare – they’re used because a product of two primes has a unique non-trivial factorisation, which is not the case for products of non-prime numbers. So it just wouldn’t work for non-prime numbers.
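
A tiny Python illustration of that uniqueness property (toy numbers, obviously):

```python
def nontrivial_splittings(n: int):
    """All ways to write n = a * b with 1 < a <= b < n."""
    return [(a, n // a) for a in range(2, int(n**0.5) + 1) if n % a == 0]

print(nontrivial_splittings(15))  # [(3, 5)]: a semiprime splits exactly one way
print(nontrivial_splittings(24))  # [(2, 12), (3, 8), (4, 6)]: several ways
```

That single splitting is what makes the private key well-defined: recover the one pair of primes and you have it; use a composite with many splittings and the scheme falls apart.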

They aren’t even that “rare”, really, depending on how you look at it. At first glance they intuitively seem to be rare compared to non-prime (composite) numbers. But actually it is correct to say that there are as many prime numbers as non-prime numbers.

It’s not that hard to prove, since there are infinitely many of them (and both sets are the same size, called Aleph-zero – assuming we’re talking about the natural numbers 0, 1, 2, 3, …).

Anyway, you’ll quickly get yourself tied up into knots throwing about words like “rare” when comparing two sets of infinitely many numbers.

Anyway, you’ll quickly get yourself tied up into knots throwing about words like “rare” when comparing two sets of infinitely many numbers.

But talking about reductions of 90% in a 1024-bit key encryption - are 10% of the factors in a 1024-bit number prime? My limited brain would say, probably not. So how long would it take to calculate all the primes (a one-time job) and how many of them are there? Enough that a billion transactions a second could quickly crack?

PS thanks for the explanation of why primes are used, I’d forgotten that, but have remembered that I used to know it, so while stupid, I am not senile

If you have a public key, you know that it is one factor of a 1024 bit number, so you can work out that the other number must be between a certain range, no? If you are looking for the factors, you are not looking for all prime numbers in a 1024 bit number, but more likely all the prime numbers in two 512 bit numbers (roughly speaking? I’m making the assumption that both prime numbers in a 1024 bit public key are roughly the same length, that they are both ‘large’).

Another vast gap in my knowledge - how are the prime numbers calculated? Aren’t the large primes already known and public before they are used in keys? I’m presuming that anything that I can generate on my PC in the minutes it takes to set up, for example, a PGP key is worthless?

If you have a public key, you know that it is one factor of a 1024 bit number, so you can work out that the other number must be between a certain range, no? If you are looking for the factors, you are not looking for all prime numbers in a 1024 bit number, but more likely all the prime numbers in two 512 bit numbers (roughly speaking? I’m making the assumption that both prime numbers in a 1024 bit public key are roughly the same length, that they are both ‘large’).

Another vast gap in my knowledge - how are the prime numbers calculated? Aren’t the large primes already known and public before they are used in keys? I’m presuming that anything that I can generate on my PC in the minutes it takes to set up, for example, a PGP key is worthless?

Excuse the klunky iPad typing here. The public key is the composite number, not one of the primes. The task of cracking it boils down to finding its unique pair of prime factors.

There are a few really interesting properties of primes. First, the density of primes is predictable but their distribution is not. So, for instance, we know that there will be more primes between 2 and 100 than between 1002 and 1100 – the former range has a higher density of primes. We have a formula to calculate it. But that in no way helps us find the values of specific primes.

Btw, you said 1/700 of 1024-bit numbers were prime. So the number of 1024-bit numbers is ten to the power of three hundred and something (see previous post), and the number of primes in that range is ten to the power of three hundred and something minus two point something. (Assume you know how this works with logs, saves me typing.) IOW, the stuff about all those atoms and universes and gargantuan numbers is barely affected at all by the density of primes, compared to if every number were prime.

And as an example of how the density doesn’t help with the distribution, consider the prime doubles. These are pairs of primes separated by an interval of two, e.g. 3 and 5, 5 and 7, 11 and 13, 17 and 19, 29 and 31, etc. As you go to higher numbers, there are fewer prime doubles, just as there are fewer primes in general as the density falls. But you seemingly never run out of them – it is conjectured (though still unproven) that there are infinitely many prime doubles. So up in the 1024-bit numbers, even though you know that only 1 in 700 numbers is prime on average, you never know that the next one isn’t just an interval of two away.
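
Incidentally, that “1 in 700” figure falls straight out of the prime number theorem, which says the density of primes near N is about 1/ln(N); a one-liner check in Python:

```python
import math

# Density of primes near N is ~1/ln(N) (prime number theorem).
# For 1024-bit numbers, N is around 2**1024, so:
inverse_density = 1024 * math.log(2)   # ln(2**1024)
print(round(inverse_density))          # 710 -> roughly 1 in 700 are prime
```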

Another amazing thing about number theory is that there are tests of primality that do not involve finding prime factors. And these are computationally tractable algorithms. So I can (relatively) quickly tell you that a number is prime or composite, even though I cannot quickly tell you its prime factors if composite. The implication of this for cryptography is that the key generation process is straightforward, even though key cracking is fiendishly difficult.
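
One such tractable test is Miller–Rabin, which is essentially what real key-generation code uses (with refinements); a minimal Python sketch:

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin: decides primality without ever finding a factor."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # witness found: definitely composite
    return True  # prime with overwhelming probability

# The verdict is near-instant even at sizes where factoring is hopeless.
print(is_probable_prime(2**521 - 1))  # True: 2**521 - 1 is a known prime
```

A failing round proves compositeness without exhibiting a factor, which is exactly the asymmetry the post describes: easy to generate keys, fiendish to crack them.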

Thank you - I’m leaning towards the breakthrough being a weakness (a predictability) in how the primes are calculated, rather than a breakthrough in factoring. Despite a lot of supposition, the reading I did over the weekend of those who do maths for a living didn’t credit a factoring breakthrough.

Thank you - I’m leaning towards the breakthrough being a weakness (a predictability) in how the primes are calculated, rather than a breakthrough in factoring.

But you see, it doesn’t matter how the primes are calculated. As long as they are, in fact, prime, and are sufficiently large, you’re good to go. You could look them up in a book and that would be fine (obviously assuming you don’t re-use the same pair all the time). In practice it’s not that hard to find a couple of big primes computationally, as PS says.

Primes are one of the more magical elements of mathematics that are reasonably accessible without a whole lot of background, but the magic can be pretty counterintuitive. PS’s explanation is a good start.

It does matter, though: they have to be random primes, and the resultant public key has to be unique. If the prime generation is predictable for a given piece of software, that is a huge, exploitable weakness.

One thing I note is that the NSA doesn’t recommend RSA for US government communications… which rather suggests they think it insecure…