3 Answers

There are three efficiency issues to discuss here: CPU, network bandwidth, and functionalities.

The "moral" reason of public key encryption being slower than private key encryption is that it must realize a qualitatively harder feature: to be able to publish the encryption key without revealing the decryption key. This requires heavier mathematics, compared to symmetric encryption which is "just" making a big tangle of bits. Most known asymmetric encryption systems seem to achieve the needed security, but at some relatively heavy computational cost. Note that the McEliece cryptosystem and NTRUEncrypt achieve asymmetric encryption and decryption at high speed (much higher than, say, RSA or El Gamal over elliptic curves). There is no proof that asymmetric encryption must really be harder, computationally-wise, than symmetric encryption, but the contrary would still be mildly surprising.

Another efficiency issue with asymmetric encryption is network bandwidth, and this one is an absolute limitation. Public-key encryption is public: anybody, including the attacker, can use the public key to encrypt arbitrary messages. This means that if the encryption is deterministic, the attacker can run an exhaustive search on the encrypted data (i.e. encrypt potential messages until one matches). The data being encrypted is "useful data": it has structure, so it is subject to such a search. Therefore, an asymmetric encryption scheme must include extra randomness, which necessarily implies a data size increase. For instance, with RSA as described by PKCS#1, with a 1024-bit key, you can encrypt a data element of only up to 117 bytes, yielding a 128-byte value. Therefore, if you were to encrypt a big file with "only RSA", you would end up with an encrypted message about 9% bigger than the plaintext: that's 900 extra megabytes for a 10 GB message. On the other hand, symmetric encryption incurs only a constant size overhead (say, at most +32 bytes for AES-CBC, including room for the initial value). There are many contexts where network bandwidth is a scarcer resource than CPU.
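
To make the arithmetic concrete, here is a small back-of-the-envelope sketch of that overhead; the figures (117 plaintext bytes per 128-byte RSA-1024/PKCS#1 v1.5 block, at most 32 extra bytes for AES-CBC) are the ones quoted above, nothing more.

```python
# Back-of-the-envelope sketch of the ciphertext expansion described above.
# Assumes RSA-1024 with PKCS#1 v1.5 padding (11 bytes of overhead per
# 128-byte block) and AES-CBC with a 16-byte IV plus up to 16 bytes of padding.

MODULUS_BYTES = 128                        # RSA-1024 block size
PKCS1_OVERHEAD = 11                        # minimum PKCS#1 v1.5 padding
CHUNK = MODULUS_BYTES - PKCS1_OVERHEAD     # 117 plaintext bytes per block

plaintext_size = 10 * 10**9                # a 10 GB message

blocks = -(-plaintext_size // CHUNK)       # ceiling division
rsa_ciphertext = blocks * MODULUS_BYTES
print("RSA-only overhead: %.1f%%" % (100.0 * (rsa_ciphertext - plaintext_size) / plaintext_size))

aes_ciphertext = plaintext_size + 32       # IV + padding, at most
print("AES-CBC overhead:  %d bytes" % (aes_ciphertext - plaintext_size))
```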

Finally, there are key exchange algorithms, which are like asymmetric encryption except that you do not get to choose the "message" you send: the sender and receiver do end up with a shared secret, but that value is essentially randomly selected. Diffie-Hellman is the most well-known key exchange algorithm. To do "asymmetric encryption" with a key exchange algorithm, you use the shared secret as the key for a symmetric encryption algorithm.
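
For concreteness, a toy sketch of that pattern (Diffie-Hellman, then the shared secret hashed into a symmetric key) might look like the following; the group parameters are illustrative only and far too small for real use.

```python
# Toy Diffie-Hellman key exchange followed by symmetric use of the shared
# secret. Parameters are for illustration only; real systems use standardized
# 2048-bit (or larger) groups, or elliptic curves.
import hashlib
import secrets

p = 2**127 - 1                   # a small prime, far too small for real security
g = 3

a = secrets.randbelow(p - 2) + 2  # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2  # Bob's secret exponent

A = pow(g, a, p)                  # Alice sends A
B = pow(g, b, p)                  # Bob sends B

shared_alice = pow(B, a, p)       # both sides compute g^(ab) mod p
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob

# The shared secret is "randomly selected"; hash it into a symmetric key and
# use that key with a block cipher for the bulk data.
key = hashlib.sha256(shared_alice.to_bytes(16, "big")).digest()
print(key.hex())
```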

Block ciphers work by applying operations to an $n$-bit block so as to achieve confusion and diffusion. In short, a good block cipher should "mix" the bits of the plaintext and key as thoroughly as possible, so that it becomes practically impossible to recover the key or decipher unknown ciphertext.

Now, to achieve confusion and diffusion, there are a few basic operations in common use: xor, addition modulo $2^{32}$ or $2^{64}$, bit rotations, (small) table lookups, and a few others. The thing all these operations have in common is that a normal CPU can perform them very quickly. Hell, a Core 2 CPU can perform 6 billion adds or xors per second!
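
As an illustration, here is one such mixing step written out; it follows the ChaCha quarter-round pattern, and the point is simply that every operation in it is a cheap 32-bit add, xor or rotation.

```python
# A ChaCha-style quarter-round: nothing but 32-bit additions, xors and
# rotations, all of which a CPU executes in a cycle or so.
MASK = 0xFFFFFFFF

def rotl32(x, n):
    # rotate a 32-bit word left by n bits
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    a = (a + b) & MASK; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK; b = rotl32(b ^ c, 7)
    return a, b, c, d

print(quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567))
```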

Public-key cryptography, on the other hand, relies on the existence of trapdoor functions. A trapdoor function is a function that is very hard to invert, unless one is given some special information, at which point inversion becomes easy. Coming up with good trapdoor functions is no easy task, and the ones we know (and trust) today are mostly based on number theory. The most widely known, RSA, allegedly relies on the hardness of inverting the function

$f(m) = m^e \pmod n$,

where $n$ is composite and of unknown factorization. Inverting this function is very hard, for large enough $n$. The "large enough" bit is the key here---in the case of RSA, $n$ must be at least 1024 bits long to be safe, due to the power of the number field sieve, and thus operations (read: exponentiation) must be done modulo at least a 1024-bit number.
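
A toy example (with an absurdly small modulus, chosen only so the numbers fit on one line) shows the trapdoor shape: anyone can evaluate $f$, but inverting it is only easy with the private exponent derived from the factorization.

```python
# Toy RSA trapdoor: the modulus here is absurdly small (so the trapdoor is
# trivial to recover), but the structure is the same as real RSA.
p, q = 61, 53
n = p * q                    # 3233; in practice n must be at least 1024 bits
e = 17                       # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent: the trapdoor (Python 3.8+)

m = 42
c = pow(m, e, n)             # anyone can compute f(m) = m^e mod n
assert pow(c, d, n) == m     # inverting f is easy only with d
print(n, e, d, c)
```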

This is exactly why RSA is much slower than, say, AES: arithmetic on numbers much larger than the CPU's natural word length is slow, and exponentiation requires $O(\log e)$ multiplications for an exponent $e$ (RSA-1024 requires on the order of 1024 multiplications modulo a 1024-bit number). For example, the eBACS project reports 1640960 cycles for a single RSA-1024 decryption on a top-of-the-line Sandy Bridge processor. The situation is ameliorated by elliptic curve cryptography, which requires smaller keys to achieve the same level of security; it is still slower than symmetric ciphers, though.
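
To see where the $O(\log e)$ count comes from, here is the textbook square-and-multiply loop (real implementations use windowed, constant-time variants): one squaring per exponent bit plus one multiplication per set bit, each performed on numbers as large as the modulus.

```python
# Square-and-multiply modular exponentiation: one squaring per exponent bit,
# plus one multiplication per set bit -- every one of them on numbers as big
# as the modulus (e.g. 1024 bits for RSA-1024).
def modexp(base, exponent, modulus):
    result = 1
    acc = base % modulus
    while exponent:
        if exponent & 1:
            result = (result * acc) % modulus   # multiply step
        acc = (acc * acc) % modulus             # square step
        exponent >>= 1
    return result

assert modexp(7, 560, 561) == pow(7, 560, 561)
```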

Well, the problem is that, for public-key cryptography, you need a lot more mathematical structure than you do for symmetric-key cryptography.

For symmetric operations, we can pick our algorithm pretty much at will. Pretty much any algorithm can be used; even in the case of encryption (where being able to decrypt is a requirement), this can be handled either by making each step invertible, or by using a structure (say, a Feistel network, or using your logic as a keystream generator) where the structure itself provides invertibility. Now, because we want a secure algorithm, we're not quite as free as all that, but we still have considerable latitude.
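
As a minimal sketch of the Feistel idea just mentioned: the round function below is an arbitrary stand-in (not a real cipher) and is not invertible at all, yet the structure still decrypts correctly.

```python
# Minimal Feistel sketch: the round function F need not be invertible; the
# structure itself guarantees that decryption undoes encryption.
import hashlib

def F(half, round_key):
    # Arbitrary, non-invertible round function (a stand-in, not a real cipher).
    digest = hashlib.sha256(bytes([round_key]) + half.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big")

def feistel_encrypt(block, keys):
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in keys:
        left, right = right, left ^ F(right, k)
    return (left << 32) | right

def feistel_decrypt(block, keys):
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in reversed(keys):
        left, right = right ^ F(left, k), left
    return (left << 32) | right

keys = [1, 2, 3, 4]
block = 0x0123456789ABCDEF
assert feistel_decrypt(feistel_encrypt(block, keys), keys) == block
```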

For asymmetric operations, it's different; we have complementary 'public' and 'private' operations (say, 'encrypt' and 'decrypt'). The public and private keys must be related, but not in an obvious way; the holder of the public key must not be able to perform the private operation (unless he is also given a copy of the private key, in which case it becomes easy). Because of this requirement, we can't use just any algorithm; instead, we need to base it on a problem that allows this non-obvious relationship. Several such problems (such as factorization, discrete logs in specific groups, the RSA problem, the DH problem in specific groups, lattice problems) have been proposed and are (as far as we know) secure; however, none of them are as quick to evaluate as the symmetric operations.