RSA, the first full-fledged public-key cryptosystem, was designed
by Rivest, Shamir, and Adleman in 1977. RSA gets its security from
the apparent difficulty of factoring very large composites.
However, nothing about RSA has been proven. It has not been proved
that factoring the public modulus is the only (or best) way to
break RSA. There may be an as-yet-undiscovered way to break it.

It is also not proven that factoring must remain as hard as it is today.
There exists the possibility that an advance in number theory may lead
to the discovery of a polynomial-time factoring algorithm. So far, none
of these things has happened, and no current research points in that
direction. However, three things are happening, and will continue
to happen, that erode the security of RSA: advances in factoring
technique, increases in computing power, and the decreasing cost of
computing hardware. These things, especially the first one,
work against the security of RSA. However, as computing power
increases, so does the ability to generate larger keys. It is
much easier to multiply very large primes than it is to factor the
resulting composite (given today's understanding of number theory).

To understand the attacks on RSA, it is important to understand
how RSA works. This is discussed in detail in The RSA public key
cryptographic system. Briefly, an RSA public key consists of a public
exponent e and a modulus n, which is the product of two prime
numbers p and q. The corresponding private key consists of
a number d that is the multiplicative inverse of e
modulo (p-1)(q-1).
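For concreteness, the relationship above can be sketched in Python with deliberately tiny, textbook-sized primes (a real modulus is hundreds of digits; the numbers here are assumed purely for illustration):

```python
# Toy RSA key generation; the primes are textbook-sized for readability.
p, q = 61, 53
n = p * q                # public modulus
phi = (p - 1) * (q - 1)  # (p-1)(q-1), kept secret
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent: inverse of e mod (p-1)(q-1)
assert (e * d) % phi == 1

# Round-trip a small message to check the key pair works:
m = 65
c = pow(m, e, n)         # encrypt with the public key
assert pow(c, d, n) == m # decrypt with the private key
```

Note that anyone who learns p and q can recompute phi and hence d, which is exactly why factoring n breaks the key.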

Assume an attacker has access to the RSA public key (e, n).
The attacker wants the private key, i.e. the number d.
To get d, n needs to be
factored. This will yield p and q, which can then be used to
calculate d. Factoring n is the best known attack against RSA to
date. (Attacking RSA by trying to deduce (p-1)(q-1) is no easier
than factoring n, and executing an exhaustive search for values of d
is harder than factoring n.) Some of the algorithms used for
factoring are as follows:

Trial division: The oldest and least efficient method, with exponential
running time. Try dividing n by every prime up to sqrt(n).

Number Field Sieve (NFS): Currently the fastest algorithm known for
numbers larger than 110 digits. Was used to factor the ninth Fermat
number.
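To make the gap between these algorithms concrete, here is trial division as a factoring routine, sketched in Python against a toy modulus (assumed for illustration; real moduli are hundreds of digits, far beyond this loop's reach):

```python
def smallest_factor(n):
    """Return the smallest prime factor of n by trial division.

    The loop runs up to sqrt(n), which is exponential in the
    bit-length of n -- hopeless against a real RSA modulus.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

print(smallest_factor(3233))  # 53, so 3233 = 53 * 61
```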

These algorithms represent the state of the art in the war against
large composite numbers (and therefore against RSA). The best algorithms
have super-polynomial (sub-exponential) running time, with the NFS
having an asymptotic time estimate closest to polynomial behavior.

Still, factoring large numbers is hard. However, with the advances
in number theory and computing power, it is getting easier. In 1977
Ron Rivest said that factoring a 125-digit number would take
40 quadrillion years. In 1994 RSA129 was factored using about
5000 MIPS-years of effort from idle CPU cycles on computers across
the Internet for eight months. In 1995 the Blacknet key (116 digits)
was factored using about 400 MIPS-years of effort (one MIPS-year is
a one-million-instructions-per-second computer running for one year)
from several dozen workstations and a MasPar for about three months.
Given current trends the keysize that can be factored will only
increase as time goes on.

The table below estimates the effort
required to factor some common PGP-based RSA public-key modulus
lengths using the General Number Field Sieve:

    Key size     MIPS-years required to factor
    --------     -----------------------------
         512                            30,000
         768                       200,000,000
        1024                   300,000,000,000
        2048       300,000,000,000,000,000,000

The next chart shows some estimates for the equivalences in brute-force
key searches of symmetric keys and brute-force factoring
of asymmetric keys, using the NFS:

    Symmetric    Asymmetric
    ---------    ----------
      56-bits      384-bits
      64-bits      512-bits
      80-bits      768-bits
     112-bits     1792-bits
     128-bits     2304-bits

The four people who factored the Blacknet key remarked that
"Organizations with 'more modest' resources can almost certainly
break 512-bit keys in secret right now." This is not to say
that such an organization would be interested in devoting so
much computing power to breaking just anyone's messages. However, most
people using cryptography do not rest comfortably knowing that the
system they trust their secrets to can be broken...

My advice as always is to use the largest key allowable by the
implementation. If the implementation does not allow for large
enough keys to satisfy your paranoia, do not use that implementation.

RSA gets its security from the presumed difficulty of factoring
the product of two large primes. So, PGP needs some way of generating
large prime numbers. As it turns out, there is no feasible way to
generate guaranteed primes directly. So what PGP does is generate
large odd numbers and test them for primality.

By the way, there are in fact infinitely many prime numbers,
probably more than you would first suspect. The prime number theorem
gives a useful approximation for the prime distribution function
PI(n), which specifies the number of primes less than or equal to n:

    lim      PI(n) / [ n / ln(n) ] = 1
    n -> infinity

So the approximation n/ln(n) gives, with reasonable accuracy, the
number of primes less than or equal to n.
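The approximation is easy to check numerically. The sketch below uses a simple Sieve of Eratosthenes (assumed here just to get exact counts) and compares PI(n) with n/ln(n):

```python
import math

def prime_count(n):
    """Exact PI(n) via a Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sum(sieve)

for n in (10**3, 10**5, 10**6):
    exact = prime_count(n)
    approx = n / math.log(n)
    print(n, exact, round(approx))  # the ratio tends toward 1 as n grows
```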

Trial division

To test a candidate number n for primality, one obvious and simple
method is trial division. Try dividing n by each integer
2, 3, ..., sqrt(n). If n is prime, none of the trial divisors will
divide it. Assuming each division takes constant time, this method
has a worst-case running time (when n is in fact prime) that is
exponential in the length of n.

If n is non-trivial in size (which is the
case for PGP's candidate numbers) this approach is not feasible
(this is also why trial division is not a viable method of attack
against RSA). (Trial division does have the one advantage of producing
the prime factorization of n. But who wants to wait until the
Universe explodes (implodes)?)
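As a sketch, the trial-division primality test described above looks like this in Python (fine for small n, useless for PGP-sized candidates):

```python
import math

def is_prime_trial(n):
    """Primality by trial division: test every integer 2..sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False   # found a divisor: composite
    return True            # no divisor up to sqrt(n): prime

print(is_prime_trial(561))   # False (561 = 3 * 11 * 17)
print(is_prime_trial(1013))  # True
```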

Pseudo-primality testing

In order to test non-trivial candidates, PGP uses pseudo-primality
testing. Pseudo-primality tests take a candidate number and test
it for primality, returning, with a certain degree of accuracy, whether
or not it is prime. PGP uses the four Fermat tests to test the numbers
for primality.

Candidate number to be tested for primality: n.

Take the first four prime numbers b = {2, 3, 5, 7}.

Compute w = b^(n-1) % n for each b.

If w == 1 for all b, n is probably prime. Any other result and n is
definitely not prime.

While it is possible for a number to be reported as prime
when it is in reality composite, it is very unlikely. After each
successful test the likelihood drops dramatically: after one test
the likelihood is 1 in 10^13, after two tests the likelihood is
1 in 10^26, and if the number passes all four tests, the possibility
of it not being prime is 1 in 10^52. The four Fermat tests will
never discard a prime number.
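The four tests are only a few lines of Python; modular exponentiation via the built-in three-argument pow keeps the intermediate values small:

```python
def fermat_tests(n):
    """The four Fermat tests described above: n passes only if
    b^(n-1) mod n == 1 for every base b in {2, 3, 5, 7}."""
    return all(pow(b, n - 1, n) == 1 for b in (2, 3, 5, 7))

print(fermat_tests(104729))  # True: 104729 is prime (the 10000th prime)
print(fermat_tests(104730))  # False: a composite is rejected
```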

The Carmichael Numbers

Unfortunately, there are some numbers which are not prime and which
do satisfy the equation b^(n-1) % n == 1 for every b coprime to n.
These integers are known as the Carmichael numbers, and they are quite
rare. (The reason for this is that a Carmichael number must not be
divisible by the square of any prime and must be the product of at
least three primes.) The first three Carmichael numbers are 561, 1105,
and 1729. They are so rare, in fact, that there are only 255 of them
less than 10^9. The chance of PGP generating a Carmichael number is
less than 1 in 10^50.
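The first Carmichael number makes the point concrete: 561 satisfies the Fermat relation for every base coprime to it, though a base that happens to share a factor with n still exposes it. A minimal check:

```python
import math

n = 561  # = 3 * 11 * 17, composite
# Every base coprime to 561 satisfies b^(n-1) mod n == 1:
fooled = all(pow(b, n - 1, n) == 1
             for b in range(2, n) if math.gcd(b, n) == 1)
print(fooled)  # True

# But a base sharing a factor with n still betrays it:
print(pow(3, n - 1, n) == 1)  # False
```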

Even when the cryptographic algorithm itself is secure, there still may
be weaknesses in the implementation. For instance, if a program generates
the same key every time, the algorithm may be extremely secure but every
encrypted message can be read easily.

Chosen cipher-text attack

One known method of attack on PGP's implementation is a so-called
chosen cipher-text attack. In this attack, an attacker listens in on
the insecure channel over which RSA messages are passed. The attacker
collects an encrypted message c from the target (destined for some
other party). The attacker wants to be able to read this message
without having to mount a serious factoring effort. In other words,
she wants m = c^d.

To recover m, the attacker first chooses a random number r less than n.
We assume that the attacker has the public key (e, n). The attacker computes:

x = r^e mod n (she encrypts r with the target's public key)

y = xc mod n (multiplies the target ciphertext by the temporary value)

t = r^-1 mod n (the multiplicative inverse of r mod n)

The attacker counts on the fact that if x = r^e mod n,
then r = x^d mod n.

The attacker then gets the target to sign y with the target's private
key (which actually decrypts y) and send u = y^d mod n to the
attacker. The attacker simply computes:

tu mod n = (r^-1)(y^d) mod n =
(r^-1)(x^d)(c^d) mod n =
(c^d) mod n = m

To foil this attack do not sign some random document presented to
you. Sign a one-way hash of the message instead.
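The whole attack can be traced with toy numbers (the 61 x 53 textbook key below is assumed purely for illustration; nothing PGP would generate):

```python
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # the target's private exponent

m = 1234                 # the target's secret message
c = pow(m, e, n)         # ciphertext the attacker intercepts

r = 99                   # attacker's random r < n, coprime to n
x = pow(r, e, n)         # r encrypted under the target's public key
y = (x * c) % n          # innocent-looking value sent to be "signed"
t = pow(r, -1, n)        # r^-1 mod n

u = pow(y, d, n)         # the target naively signs (decrypts) y
assert (t * u) % n == m  # t*u = r^-1 * r * m mod n = m: recovered
```

Note the target never sees m or c in the clear: y looks like random noise, which is exactly why signing raw data presented by a stranger is dangerous.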

As it turns out, e being a small number does not detract from the
security of RSA. If the encryption exponent is small (common values
are 3, 17, and 65537) then public-key operations are significantly
faster. The only problem in using small values for e as a public
exponent is in encrypting small messages. If we have 3 as our e
and we have an m smaller than the cube root of n, then the message
can be recovered simply by taking the cube root of c, because:

c = m^3 mod n [for m <= cuberoot(n)]

will be equal to m^3 with no modular reduction, and therefore:

cuberoot(c) = cuberoot(m^3) = m.

To defend against this attack, simply pad the message with a nonce
before encryption, such that m^3 will always be reduced mod n.
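The small-message failure is easy to reproduce. In the sketch below the primes are assumed for illustration and chosen near 10^5 with p-1 and q-1 not divisible by 3, so e = 3 is a valid exponent; any m below the cube root of n then falls to an ordinary integer cube root:

```python
p, q = 104723, 104729    # toy primes; p-1 and q-1 not divisible by 3
n = p * q                # about 1.1 * 10^10
e = 3
m = 2000                 # m^3 = 8 * 10^9 < n: no modular reduction

c = pow(m, e, n)
assert c == m ** 3       # the "mod n" did nothing

recovered = round(c ** (1 / 3))  # plain cube root, no private key needed
assert recovered == m
```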

PGP uses a small e for the encryption exponent. By default it tries
to use 17. If it cannot compute d with e being 17, PGP will increment
e to 19 and try again... PGP also pads m with a random
value so that m^3 is always larger than n (and hence reduced mod n).

A very new area of attack, publicly discovered by Paul Kocher, deals
with the fact that different cryptographic operations (in this case
the modular exponentiation operations in RSA) take measurably different
amounts of time to process depending on the operands. If the RSA
computations are done without the Chinese Remainder Theorem, the
following applies:

An attacker can exploit slight timing differences in RSA computations
to, in many cases, recover d. The attack is a passive one where the
attacker sits on a network and is able to observe the RSA operations.

The attacker passively observes k operations, measuring the time t
it takes to compute each modular exponentiation operation
m = c^d mod n. The attacker also knows c and n.
Ron Rivest summarized the defenses on sci.crypt:

From: Ron Rivest
Newsgroups: sci.crypt
Subject: Re: Announce: Timing cryptanalysis of RSA, DH, DSS
Date: 11 Dec 1995 20:17:01 GMT
Organization: MIT Laboratory for Computer Science

The simplest way to defeat Kocher's timing attack is to ensure that the
cryptographic computations take an amount of time that does not depend on the
data being operated on. For example, for RSA it suffices to ensure that
a modular multiplication always takes the same amount of time, independent of
the operands.
A second way to defeat Kocher's attack is to use blinding: you "blind" the
data beforehand, perform the cryptographic computation, and then unblind
afterwards. For RSA, this is quite simple to do. (The blinding and
unblinding operations still need to take a fixed amount of time.) This doesn't
give a fixed overall computation time, but the computation time is then a
random variable that is independent of the operands.

The blinding Rivest speaks of simply introduces a random value into
the decryption process. So,

m = c^d mod n

becomes:

m = r^-1 (c r^e)^d mod n

where r is the random value, and r^-1 is its inverse mod n.
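A sketch of blinding with the same toy 61 x 53 key (assumed for illustration): the exponentiation an attacker could time now involves the random r, not c directly.

```python
import math
import secrets

p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

c = pow(42, e, n)        # ciphertext to be decrypted

# Pick a random r coprime to n:
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:
        break

blinded = (c * pow(r, e, n)) % n   # blind: the input is now random
m_blind = pow(blinded, d, n)       # timed operation sees c * r^e, not c
m = (pow(r, -1, n) * m_blind) % n  # unblind: r^-1 * (r * m) mod n
assert m == 42
```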

PGP is not vulnerable to the timing attack as it uses the CRT to
speed up RSA operations. Also, since the timing attack requires an
attacker to observe the cryptographic operations in real time (i.e.
snoop the decryption process from start to finish) and most people
encrypt and decrypt off-line, it is further made impractical.

While the attack is definitely something to be wary of, it is
theoretical in nature, and has not been carried out in practice as
of yet.

There are other attacks against RSA, such as the common modulus
attack, in which several users share n but have different values
for e and d. Sharing a common modulus among several users can
enable an attacker to recover a message without factoring n. PGP
does not share public-key moduli among users.
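The common modulus attack is short enough to sketch. When two coprime public exponents share n and the same message is sent to both users, the extended Euclidean algorithm recovers m (toy key assumed for illustration; negative modular exponents rely on Python 3.8+ pow):

```python
p, q = 61, 53
n = p * q                # the shared modulus
e1, e2 = 17, 19          # two users' public exponents, gcd(e1, e2) == 1
m = 1234                 # one message sent to both users
c1, c2 = pow(m, e1, n), pow(m, e2, n)

def egcd(x, y):
    """Extended gcd: returns (g, a, b) with a*x + b*y == g."""
    if y == 0:
        return x, 1, 0
    g, a, b = egcd(y, x % y)
    return g, b, a - (x // y) * b

g, a, b = egcd(e1, e2)
assert g == 1            # a*e1 + b*e2 == 1

# c1^a * c2^b = m^(a*e1 + b*e2) = m mod n; pow handles the
# negative exponent by inverting the base mod n first.
recovered = (pow(c1, a, n) * pow(c2, b, n)) % n
assert recovered == m
```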

If d is up to one quarter the size of n and e is less than n, d
can be recovered without factoring. PGP does not choose small
values for the decryption exponent. (If d were too small it might
also make a brute-force sweep of d values feasible, which is
obviously a bad thing.)

It is pointless to make long-range predictions for recommended key
sizes. The breakneck speed at which technology advances makes it
difficult and dangerous. Respected cryptographers will not make
predictions past 10 years and I won't embarrass myself trying to
make any. For today's secrets, a 1024-bit key is probably safe and
a 2048-bit key definitely is. I wouldn't trust these numbers
past the end of the century. However, it is worth mentioning that
RSA would not have lasted this long if it were as fallible as some
crackpots with middle initials would like you to believe.