It is known that the two prime factors $p$ and $q$ of an RSA modulus $n$ should not be too close to each other, since otherwise an attacker may be able to factor the modulus. In other words, $\Delta = \left| p - q \right|$ should not be too small.

However, "too small" is a somewhat subjective measure. "Too small" with respect to what? And why?

I do realize that it is considered a best practice - given a modulus $n$ of $k$ bits - to pick $p$ and $q$ as random primes with bit length $k/2$ (see also HAC, section 8.8, note ii).

On the other hand, Section B.3 of the Digital Signature Standard recommends $\Delta > 2^{k/2-100}$ when generating an RSA key. That is, $p$ and $q$ should differ somewhere in their 100 most-significant bits (thanks poncho!), independently of the bit length. That is not in conflict with the best practice above, and it could be deemed a redundant check.

However, both approaches seem to imply that a dangerous $\Delta$ is much, much less than $2^{k/2}$. Is there any formal proof of that? And is there any quantification for such dangerous $\Delta$?

Your question seems to start from an assumption that there exists some $\Delta$ for which it is sensible/useful to check that $|p-q|\ge \Delta$. This assumption is not valid (in my opinion). Can you rephrase the question in a way that does not contain implicit assumptions?
– D.W. Nov 4 '12 at 2:10

OK, I provided the quantification you desired in my answer. (I still maintain that this question is not relevant to security in practice, given a proper implementation of RSA key generation, as it is dominated by other considerations.)
– D.W. Nov 5 '12 at 3:31


@D.W.: It might be extremely unlikely that the difference is too small, so you are right about saying that it is not relevant for security in practice. However, if you want to have your key generation evaluated for a certification, it does become relevant.
– j.p. Nov 5 '12 at 9:39

@jug, If you want your key generation evaluated for NIST certification, and if they require that you add a check that $|p-q|$ be not too small, you do whatever they require you to do to comply with their standard. If your goal is compliance, there's no point asking us about what value of $\Delta$ to use; you just use whatever the standard (or the certifiers) tell you to use. In this case, though, compliance is orthogonal to the technical question of what is required for security.
– D.W. Nov 5 '12 at 10:58

2 Answers

This question is only relevant if you choose $p,q$ in a non-standard way. The standard way to choose $p,q$ is to choose them as two independent random $k/2$-bit numbers. If you do it the standard way, the question is not relevant (the probability that $|p-q|$ is too small is negligible -- and is dominated by the chances of other kinds of failures).

This question would be relevant if you were choosing $p,q$ in some funny way that had an unusually high probability of making $|p-q|$ unusually small. Yes, you can quantify how much easier this makes factoring. For instance, the Fermat factoring method works as follows: for $a=\lceil \sqrt{n} \rceil, \lceil \sqrt{n} \rceil+1, \lceil \sqrt{n} \rceil+2,\dots$, it checks whether $a^2-n$ is a perfect square; if so, it has factored $n$.
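Here is a minimal Python sketch of the method (the function name and iteration cap are mine, purely for illustration):

```python
from math import isqrt  # Python 3.8+

def fermat_factor(n, max_iters=10**6):
    """Try to factor an odd n by Fermat's method: search for a with
    a^2 - n = b^2 a perfect square, giving n = (a - b)(a + b)."""
    a = isqrt(n)
    if a * a < n:
        a += 1  # start at a = ceil(sqrt(n))
    for _ in range(max_iters):
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1
    return None  # gave up: the factors are (probably) not close

# Two primes only 30 apart: Fermat finds them on the first iteration.
print(fermat_factor(1000003 * 1000033))  # (1000003, 1000033)
```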

We can analyze the running time of Fermat's method. Let $\epsilon=(p/\sqrt{n}) - 1$, so that $p=\sqrt{n}(1+\epsilon)$ and $q=\sqrt{n}/(1+\epsilon)=\sqrt{n}(1-\epsilon+\epsilon^2-\cdots)$. Fermat's method succeeds when $a=(p+q)/2=\sqrt{n}(1+\epsilon^2/2-\cdots)$. In other words, it requires $\approx \sqrt{n} \epsilon^2/2$ iterations. So, if you want this to take at least $2^{100}$ time, you need $\sqrt{n} \epsilon^2/2 \ge 2^{100}$, or equivalently, $\epsilon \ge 2^{50.5}/n^{1/4}$. Since $|p-q| \approx 2\sqrt{n}\epsilon$, this means we need $|p-q| \ge 2^{51.5} n^{1/4} = 2^{51.5} 2^{k/4}$.
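As a quick numeric sanity check on those constants (this just re-evaluates the estimate above; nothing here comes from any standard):

```python
# Iterations ~ sqrt(n) * eps^2 / 2. With eps ~ |p-q| / (2*sqrt(n)) this
# is |p-q|^2 / (8*sqrt(n)), i.e. log2(work) = 2*log2(delta) - 3 - k/2.
def log2_fermat_work(log2_delta, k):
    return 2 * log2_delta - 3 - k / 2

for k in (1024, 2048):
    log2_delta = 51.5 + k / 4   # claimed threshold 2^51.5 * 2^(k/4)
    print(f"k={k}: |p-q| = 2^{log2_delta} gives "
          f"work = 2^{log2_fermat_work(log2_delta, k):.0f}")
    # prints work = 2^100 in both cases
```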

In other words, if you want Fermat factoring to take at least $2^{100}$ time, you need $\Delta$ to be at least $2^{51.5} 2^{k/4}$. For a detailed derivation, see

Benne de Weger, "Cryptanalysis of RSA with small prime difference", Applicable Algebra in Engineering, Communication and Computing (AAECC), vol. 13, no. 1, pp. 17–28, 2002. See Section 3 (much of the rest is not relevant and addresses a different question).

See also the following paper, which says that $n$ can be factored in polynomial time if $|p-q| \le 2^{k/3}$:

For instance, the paper gives an example for a 1024-bit RSA modulus ($k=1024$): if $p$ and $q$ are identical in their 171 most significant bits, then you can factor $n$ (two 512-bit primes sharing their top 171 bits satisfy $|p-q| < 2^{341} \approx 2^{k/3}$). You can compare this to the requirement in the DSS standard, if you like.

But again, the right way to make this attack infeasible is to choose $p,q$ independently at random (as is the standard method). And if you choose $p,q$ in the proper way, the threat is rendered infeasible, and you don't need to worry about the size of $|p-q|$. For example, if you want a 2048-bit RSA key, choose a random 1024-bit prime $p$, and then independently choose a random 1024-bit prime $q$. Don't worry about their difference; the mathematics says that, with overwhelming probability, they will have a sufficiently large difference.
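A minimal sketch of that standard approach (sympy's `randprime` is used here only to illustrate the structure; its randomness is not cryptographic quality, so a real implementation would use a vetted library):

```python
from sympy import randprime  # illustration only: not backed by a CSPRNG

def rsa_primes(k):
    """Choose two independent random k/2-bit primes (the standard way);
    no check on |p - q| at all."""
    half = k // 2
    p = randprime(1 << (half - 1), 1 << half)
    q = randprime(1 << (half - 1), 1 << half)
    return p, q

p, q = rsa_primes(1024)
# With overwhelming probability |p - q| is itself close to 512 bits long.
print(abs(p - q).bit_length())
```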

I hope that this answers your question sufficiently.

I see that the DSS standard does contain the requirement that you mention. I think it is ill-considered, or perhaps not there for the reason you might think it is. It is true that if you chose an RSA modulus by picking $p$ and $q$ in some crazy way that made it likely $|p-q|$ would be small, then RSA would be insecure (there are factoring methods that can be used to factor $n$ in this circumstance). However, in that case the problem would not be that you chose $p,q$ with a small difference: the problem would be that you failed to generate $p,q$ independently at random. So, don't do that. As long as you do generate $p,q$ properly, you don't need to separately check any condition on $|p-q|$; if $p,q$ are chosen independently at random, the chance that $|p-q|$ is too small is negligible (less than the chance of getting struck by lightning several times in a row, less than the chance of someone factoring your RSA modulus, etc.).

Why does the DSS contain this recommendation? I don't know. I think the recommendation is misguided and unnecessary.

Bottom line: the best answer to your question is to un-ask the question, as it contains some implicit assumptions that are not valid.

DSS does not cover only DSA anymore; it has been extended to include RSA and ECC. Indeed my question may be academic, but I find it interesting. That is to say: what tells me that a 1026-bit $p$ and a 1022-bit $q$ are not actually more secure?
– SquareRootOfTwentyThree Nov 3 '12 at 22:08

No, FIPS 186-3 doesn't ask you to choose a $\Delta$ and then choose $p$ and $q$.
– poncho Nov 3 '12 at 22:41

@poncho, I'm not saying FIPS 186-3 said that -- I'm saying that SquareRootOfTwentyThree's question seemed to contain an implicit assumption that this is how we should pick RSA keys.
– D.W. Nov 4 '12 at 2:12


The recommendation $\left| p - q \right|>2^{k/2-100}$ appeared in ANSI X9.31, sponsored by a bankers' association, and remains in FIPS 186-3 from that origin. I think the objective was to show that due diligence had been exercised in the selection of the RSA key to protect against well-known factorization algorithms, making it possible to summarily reject rhetorical arguments along the lines of: this signature thing is a joke, an algorithm known since the 17th century could allow a forgery. Its only technical use, if any, is to protect against a fault in the process that generated $p$ and $q$.
– fgrieu Nov 5 '12 at 5:08

This recommendation is there specifically to prevent Fermat's factorization method from yielding a factorization. That method can factor a number if its two factors are sufficiently close; this recommendation prevents that from being a possibility.

Now, you ask 'is such a recommendation reliable?' Well, it certainly does prevent that factorization method, and it is pretty cheap (involving a simple test; verifying that the code does the right thing when the test rejects your $p$ and $q$ is probably your greatest concern).
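For what it's worth, the test itself is a one-liner; a sketch of the FIPS 186-style condition (names mine):

```python
def delta_ok(p, q, k):
    """FIPS 186-style closeness check for a k-bit modulus:
    require |p - q| > 2^(k/2 - 100)."""
    return abs(p - q) > (1 << (k // 2 - 100))
```

A generator would simply discard $q$ and draw a fresh one on the rare (in practice: never) occasion that the test fails.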

On the other hand, for RSA-sized numbers, Fermat's method is extremely unlikely to yield a factorization, even if we don't check $\Delta$; it is sufficiently unlikely that an intelligent attacker would try it only if he had a priori reason to believe that $\Delta$ was extremely small; otherwise, he'd be wasting resources that he could have used running NFS or ECM (factorization methods with a much better probability of success).

So, the question comes down to 'do you run a cheap test to prevent a factorization method that isn't much of a concern in the first place?' Personally, I'd say "no"; the authors of FIPS 186-3 evidently felt differently.

BTW: the recommendation $\Delta > 2^{k/2-100}$ doesn't mean '$p$ and $q$ should differ by a number which is at least 100 bits long'; it means something closer to '$p$ and $q$ should differ somewhere in their 100 most-significant bits'.
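To illustrate that reading with raw bit patterns (not primes; and note jug's caveat in the comments below that differing somewhere in the top bits is necessary for the test to pass, but not sufficient):

```python
k = 1024
half = k // 2
bound = 1 << (k // 2 - 100)

# p and q differ already in their top three bits, yet |p - q| = 1,
# so the numeric test |p - q| > bound still fails:
p = 0b110 << (half - 3)        # 110000...0
q = (0b110 << (half - 3)) - 1  # 101111...1
assert p.bit_length() == q.bit_length() == half
assert abs(p - q) == 1
assert not abs(p - q) > bound
```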

There is absolutely no point in checking this condition (that $|p-q| > \Delta$ holds). If you generated $p,q$ properly, the probability of a violation of this condition is about $1/2^{100}$, negligibly small. Checking this condition is a waste of time and software development resources, adds unnecessary complexity, and just distracts people from the things that truly matter. It's far more likely that you have an error in the computation due to a cosmic-ray bitflip than that $|p-q|$ happens to be too small when you generated $p,q$ by the proper procedure.
– D.W. Nov 4 '12 at 2:07

@poncho: You should maybe be a little bit more precise in your last sentence (SquareRootOfTwentyThree took it literally instead of as 'it means (something) closer to'), as this condition is surely not sufficient: just let $p$ and $q$ of the same bit length have the highest 200 bits $110\dots 0$ resp. $101\dots 1$.
– j.p. Nov 5 '12 at 9:52