The NTRU public-key cryptosystem has many interesting properties (it is believed to resist quantum-computer attacks and has been standardized by several important bodies), but it also has a rather unusual one:

The decryption algorithm does not always work. Sometimes it just gives wrong answers.

Has this been fixed? Is it really a cryptosystem if having the private key is insufficient to decrypt the encrypted messages?

For instance, from Howgrave-Graham et al. (2003) one reads,

“First, we notice that decryption failures cannot be ignored, as they happen much more frequently than one would have expected. If one strictly follows the recommendations of the EESS standard [3], decryption failures happen as often as every $2^{12}$ messages with $N = 139$, and every $2^{25}$ messages with $N = 251$. It turns out that the probability is somewhat lower (around $2^{-40}$) with NTRU products, as the key generation implemented in NTRU products surprisingly differs from the one recommended in [3]. In any case, decryption failures happen sufficiently often that one cannot dismiss them, even in NTRU products.”

I don't see those claims in the Wikipedia article you link to. Your quote does not appear anywhere in that Wikipedia article, or indeed anywhere in Wikipedia that I can find. Also, the article has not been modified since June 10, so it's not like this is something that has been changed since you posted your question. In short, I'm having trouble telling where you got this from. Can you give a citation for who is making such a claim?
–
D.W. Sep 5 '11 at 4:18

The ">" is used to emphasize the claim; it is not a direct quote. As near as I can tell, decryption failures are common (about 1 in 10,000 for a standardized version of NTRU), are non-recoverable (the message is simply lost), and are important in several attacks that recover the private key by observing whether a valid ciphertext triggers a decryption failure.
–
Jack Schmidt Sep 5 '11 at 16:34

@D.W.: I think if you read the section of the wikipedia article titled "History" you'll see the claims made quite clearly.
–
Jack Schmidt Sep 5 '11 at 20:05

Can this be illustrated with practical values of q, d, N please?
–
fgrieu Sep 8 '11 at 5:47

For the EES1087EP2 parameter set where $N=1087$, $q=2048$, and $d=120$, the failure probability is $5.62 \cdot 10^{-78}$.
–
Prashand Gupta Sep 8 '11 at 6:44

Just to make it even more obvious: those parameters have been chosen for a $256$-bit security level in general, and the failure probability is also smaller than $2^{-256}$. So exploiting a decryption failure requires just as much work as breaking other parts of the system.
–
Nakedible Sep 8 '11 at 7:44

I edited your answer to have nicer math and quote formatting; could you please check that I didn't break anything? (Ps. I looked at the P1363.1 draft and the Hirschhorn et al. paper, but I have to say I couldn't really figure out how they managed to get those specific formulas out of that paper. Maybe someone more familiar with it can clarify?)
–
Ilmari Karonen May 12 '12 at 1:21

Does this mean we can simply repeat the decryption if there was a failure, and get other results?
–
Paŭlo Ebermann Sep 5 '11 at 10:03

I really don't know the specifics, but if I remember right the problem was that a "random" parameter selected by the encrypter may make the decryption process fail, and it is impossible for the encrypter to verify whether this is the case without the private key. Maybe! Please confirm my understanding.
–
Nakedible Sep 5 '11 at 10:25

Decryption isn't probabilistic: running the decryption algorithm multiple times always gives the same result. (Paulo Ebermann asks the right question here). However, it may be inconsistent with encryption, which is a different thing.
–
William Whyte May 11 '12 at 11:14

I'm Chief Scientist at Security Innovation, which owns NTRU, and have contributed to the design of NTRUEncrypt and NTRUSign.

The headline answer here is: NTRUEncrypt doesn't inherently require decryption failures; there is a tradeoff between key and ciphertext size on the one hand and the decryption failure probability on the other. Parameter sets with no decryption failures at all are possible, but parameter sets with a small but non-zero failure probability are more efficient.

The most helpful way to understand this is to think of NTRUEncrypt as a lattice cryptosystem. Here, encryption is a matter of selecting a lattice point (effectively a random vector mod q) and adding the message (a small vector) to it. Decryption is a matter of mapping the ciphertext point back to that lattice point and recovering the message as the difference between the two. Call this lattice point the "masking point", because it is used to mask the message.

Say we have a two-dimensional lattice, the private basis vectors are (5, 0) and (0, 5), and the message vector is defined as having coordinates with absolute value 0 or 1. So there are 9 possible messages that can be encrypted. In this case, each encrypted message is always closer to the masking point than to any other lattice point. (If the masking point is (10, 15), the possible encrypted message values are (9, 14), (9, 15), (9, 16), ..., (11, 16).)

If we said the message vector could have coordinates with absolute value (0, 1, 2), we could encrypt 25 possible messages and the encrypted message would still be closer to the masking point than to any other point.

However, if we said the message vector could have coordinates with absolute value (0, 1, 2, 3), then although we could encrypt 49 messages, any message with a 3 as one of the coordinates would be closer to some other point than to the masking point (because 3 rounds in a different direction mod 5 than 2 does).
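
This toy two-dimensional example can be sketched in a few lines of Python. This is a hypothetical illustration of the rounding argument above (not real NTRU): the lattice is $5\mathbb{Z}^2$, the masking point is a lattice point, and decryption rounds each ciphertext coordinate to the nearest multiple of 5.

```python
# Toy model of the 2-D lattice example above (not real NTRU):
# lattice = 5*Z^2, ciphertext = masking point + small message vector,
# decryption = round to the nearest lattice point and subtract.

def encrypt(masking_point, message):
    """Add the small message vector to the masking (lattice) point."""
    return tuple(p + m for p, m in zip(masking_point, message))

def decrypt(ciphertext):
    """Round each coordinate to the nearest multiple of 5, then subtract."""
    nearest = tuple(5 * round(c / 5) for c in ciphertext)
    return tuple(c - n for c, n in zip(ciphertext, nearest))

masking = (10, 15)

# Coordinates with absolute value <= 2 always round back correctly:
print(decrypt(encrypt(masking, (2, -1))))   # (2, -1)  -- correct

# A coordinate of 3 rounds toward the *next* lattice point (13 is nearer
# to 15 than to 10), so decryption returns the wrong message:
print(decrypt(encrypt(masking, (3, 3))))    # (-2, -2) -- decryption failure
```

Running this shows exactly the effect described: every message with coordinates in $\{-2,\dots,2\}$ decrypts correctly, while a coordinate of 3 produces a silent wrong answer rather than an error.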

What happens in NTRUEncrypt is similar, modulo the differences that come from moving to higher dimensions. We've defined constraints on the message to be encrypted that ensure that, almost all the time, the message will round back to the masking point. We can estimate the probability that the rounding will happen incorrectly and set it to be less than the security level (as Prashand Gupta said). We could also eliminate decryption failures altogether by increasing q, which would correspond to increasing the size of the private basis relative to the message. We don't see a need to do this: the decryption failure probability is sufficiently low already, and bringing it to 0 would increase q from 2048 to 4096 or 8192, adding N or 2N bits to the size of the ciphertext and key.
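
The effect of increasing q can also be seen in the toy model. In this hypothetical sketch, the basis scale plays the role of q: a message coordinate that fails to decrypt at scale 5 decrypts correctly once the scale is doubled, at the cost of larger coordinates everywhere.

```python
# Hypothetical sketch: enlarging the private basis (the analogue of
# increasing q) makes room for larger message coordinates.

def encrypt(masking_point, message):
    """Add the small message vector to the masking (lattice) point."""
    return tuple(p + m for p, m in zip(masking_point, message))

def decrypt(ciphertext, scale):
    """Round each coordinate to the nearest multiple of `scale`, subtract."""
    nearest = tuple(scale * round(c / scale) for c in ciphertext)
    return tuple(c - n for c, n in zip(ciphertext, nearest))

masking = (20, 30)  # a lattice point for both scale 5 and scale 10

# With scale 5, a coordinate of 3 decrypts incorrectly ...
print(decrypt(encrypt(masking, (3, 0)), 5))   # (-2, 0) -- failure

# ... but doubling the scale (cf. doubling q) restores correctness:
print(decrypt(encrypt(masking, (3, 0)), 10))  # (3, 0)  -- correct
```

This mirrors the tradeoff in the paragraph above: a larger scale eliminates the failure but every ciphertext coordinate now needs more bits to represent.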

A failure rate of $2^{-256}$ is small enough, since hardware failures are certainly more likely than that. But I imagine convincing people that it doesn't matter isn't easy. In my experience, many programmers have an irrational fear of probabilistic algorithms.
–
CodesInChaos♦ May 11 '12 at 12:37