If I understand this correctly, when I use a cryptographic (hardware) engine, I have to trust it. Are there any cryptographic algorithms with properties that allow me to verify the correctness of an engine in a computationally cheaper way than to do the same work myself?

EDIT:
In my paranoid reality I assume the engine was specially built to return correct results except for plaintexts containing certain keywords, for which it triggers a built-in backdoor. What I am looking for is a general way to verify the correctness of the output for arbitrary input without knowing the correct output (although I am not sure this is mathematically possible).

3 Answers

No. There is no good way to detect backdoors in cryptographic hardware. There are too many ways to hide backdoors so they won't be detected.

Testing is not an effective way to detect deliberately introduced backdoors. An attacker can easily arrange for the backdoor to be activated only for particular inputs, or only after receiving a special "cryptic knock". For instance, the hardware engine might have a 64-bit secret value hidden in it. Initially, the backdoor is disabled; but if the engine ever receives a ciphertext starting with that 64-bit secret, that's the "cryptic knock", and upon receiving it the engine switches on the backdoor. This enables an attacker to turn on the backdoor some time after the system enters production. Similar means could also be used to dynamically target specific communications or otherwise evade detection.
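The knock-triggered backdoor above can be sketched in a few lines of Python. This is a toy model: `KNOCK`, the XOR "cipher", and all other names are illustrative, not taken from any real device.

```python
# Toy model of a "cryptic knock" backdoor. KNOCK is the hidden 64-bit
# trigger value; the XOR "cipher" stands in for a real algorithm.
KNOCK = bytes.fromhex("deadbeefcafebabe")

class BackdooredEngine:
    def __init__(self):
        self.backdoor_on = False  # disabled until the knock arrives

    def encrypt(self, data: bytes) -> bytes:
        if data[:8] == KNOCK:
            # The knock looks like ordinary input; the engine silently
            # switches modes and keeps answering requests.
            self.backdoor_on = True
        if self.backdoor_on:
            return data  # backdoored behaviour: no protection at all
        return bytes(b ^ 0x5A for b in data)  # normal (toy) operation
```

Black-box testing is hopeless against this: until the knock is seen, the engine behaves correctly on every input, and a tester would have to stumble on one specific value out of 2⁶⁴.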

There are many ways that a malicious crypto engine could subvert your security. If you use it to generate random numbers or crypto keys, it could "spike" the random-number source and generate keys that are guessable by the attacker. If you use it to verify the integrity of signed messages, it could falsely accept certain malicious messages when signalled to do so by the attacker. If you use it to encrypt confidential data, it could leak that data to the rest of the world in a number of different ways. If it has direct network access, it could leak confidential data and keys by simply sending them out over the network. If it doesn't have direct network access, it could still conceal confidential data or cryptographic secrets in the ciphertexts it produces and leak this information over a subliminal channel (for instance, by exploiting the degrees of freedom in the choice of a nonce, IV, or other random value to hide secret data).

And even if it can't do any of that, a malicious crypto engine could still leak confidential data or cryptographic key material through a timing channel. See Jitterbug for an example of a piece of malicious hardware that exfiltrates confidential data by adjusting the timing of network packets. A malicious crypto engine could use a similar mechanism: e.g., when asked to encrypt something, it delays responding to the request until the low-order bit of the current time in milliseconds matches the bit it wants to send. The times at which ciphertext packets are sent over the network then reveal the times at which the crypto engine responded, and those in turn carry the subliminal message the engine wants to exfiltrate.
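The millisecond trick can be sketched as follows. This is a toy model: the function name is made up, and a real implementation would spread each bit over many packets to survive network jitter.

```python
import time

def exfiltrate_bit(bit: int) -> int:
    """Busy-wait until the low-order bit of the current time in
    milliseconds equals the bit to leak, then "send" (here: return
    the send timestamp an eavesdropper would observe)."""
    while True:
        ms = time.time_ns() // 1_000_000
        if ms & 1 == bit:
            return ms  # a passive observer recovers: bit = ms & 1
```

The engine never touches the message contents, yet anyone who can timestamp the packets reads the leaked bits.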

Bottom line: if you don't trust the crypto hardware, there's basically no way to win. So you should only use crypto hardware that you trust.

Could a solution be to double encrypt data? Two different vendors?
– LamonteCristo Jun 4 '12 at 18:29


@makerofthings7, I like the way you think, but I don't think even that would be sufficient. If the one you use last is malicious, it could potentially leak out data via a subliminal channel (e.g., in nonces or other randomness in the ciphertext), even if the first one is honest. And, if either one is malicious, it can probably leak out data via a timing channel (see, e.g., Jitterbug).
– D.W. Jun 4 '12 at 22:11

I edited my answer to elaborate on how a malicious crypto engine could exfiltrate secrets. Hopefully this makes it clearer how a malicious crypto engine could perform such exfiltration, even if you are double-encrypting with two different engines from two different vendors.
– D.W. Jun 4 '12 at 22:18

If you want to check the basic functionality of a cryptography engine, throw a good selection of inputs at it, and check that it returns correct results on these inputs. If your sample is well-chosen, you can have confidence that the engine is functionally correct.
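Such functional testing usually means comparing the engine's output against published known-answer test vectors. A minimal sketch, with Python's `hashlib` standing in for the untrusted hardware engine and the standard SHA-256("abc") vector as the known answer:

```python
import hashlib

# Known-answer test (KAT): the expected digest of b"abc" is the
# standard SHA-256 test vector published by NIST.
KAT = {
    b"abc": "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
}

def engine_sha256(msg: bytes) -> str:
    # Stand-in for a call to the untrusted hardware engine.
    return hashlib.sha256(msg).hexdigest()

for msg, expected in KAT.items():
    assert engine_sha256(msg) == expected, "engine failed a known-answer test"
```

Passing such tests establishes functional correctness on the tested inputs only; as discussed elsewhere in this thread, it says nothing about backdoors or side channels.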

Many cryptographic algorithms based on mathematical structure (typically, public-key algorithms such as RSA) have methods to gain some confidence that a result is correct without redoing all the work. For example, if you run a computation based on operations modulo a large integer, you can perform the computation modulo a small integer as well and check that the result is consistent modulo that small integer — a generalization of casting out nines. This technique is known as wooping. Such checks can work even against malicious bignum libraries if you test with enough random small numbers. However, you can't always do this: if there were a way to check a cryptographic digest more cheaply than computing it the long way, the digest algorithm would be seriously broken. I don't know of anything comparable for symmetric encryption algorithms.
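A minimal sketch of the wooping idea, assuming the computation can be carried out modulo N·r rather than only modulo N (checking a result that has already been reduced modulo N does not work, because the reduction destroys the residue modulo r; the function name here is illustrative):

```python
import random

def checked_modexp(a: int, e: int, N: int, trials: int = 10) -> int:
    """Compute a^e mod N with a wooping-style self-check: do the work
    modulo N*r for a small random r, then verify the residue modulo r
    with a cheap independent computation."""
    for _ in range(trials):
        r = random.randrange(3, 1 << 16) | 1   # small random check modulus
        y = pow(a, e, N * r)                   # the "real" computation
        if y % r != pow(a % r, e, r):          # cheap: operands stay tiny
            raise ValueError("wooping check failed: computation is wrong")
    return y % N  # the residue mod N is the answer we wanted
```

Since a^e = q·(N·r) + y for some q, the result y is congruent to a^e both mod N (the answer) and mod r (the check); a wrong y is caught mod a random r with high probability, and repeating with distinct r drives the error probability down further.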

If you want to know whether the cryptographic engine is secure, that's a completely different matter. The engine might be leaking data through side channels, and you cannot notice this by testing; you can only notice it by very careful study, preferably with open-box access. Similarly, the generation of random numbers cannot be conclusively tested, and is hard to get right: statistical tests can give some measure of confidence, but not all statistically sound random number generators are good enough for cryptography, which requires unpredictable random numbers. Here are some examples of vulnerabilities which can be very difficult to detect, and are practically impossible to detect if they are deliberate backdoors:

The random generator has low entropy: it returns random numbers that are predictable with an unreasonably high probability. Example (accidental): the Debian OpenSSL vulnerability.

The engine leaks confidential information such as RNG output or key material through its response time. (This is an example of a side channel; there are side channels other than timing: power consumption, radio emissions, etc.)

The engine leaks confidential information through normal channels in an undetectable way, thanks to a subliminal channel. For example, every time the engine generates a random nonce (an IV for block encryption modes, RSA padding, the signing parameter k in DSA, etc.), the n-bit random value is in fact an (n-1)-bit random value plus a bit of the key encrypted with a secret key.
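The nonce trick in the last item can be sketched like this. It is a toy model: the leaked bit is inserted in the clear, whereas a real backdoor would encrypt it under the attacker's key so the nonce still passes every randomness test.

```python
import secrets

def leaky_nonce(nbits: int, secret_bit: int) -> int:
    # (nbits - 1) honest random bits, plus one smuggled bit of key
    # material in the low-order position.
    return (secrets.randbits(nbits - 1) << 1) | secret_bit
```

Over many messages, an attacker who knows the scheme reassembles the key bit by bit, while to everyone else the nonces look random and the protocol runs exactly to specification.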

Cryptographic engines can be certified to conform with security standards such as Common Criteria and FIPS 140. Excluding “trivial” certifications such as FIPS 140-2 level 1, which don't really say anything about security, a certification takes several man-months of work by experts. Even if your crypto engine is certified, read the certification carefully: most certifications only vouch for the engine if it's used under very precise conditions, and only against attackers with limited means. And in the end, a certification is only one professional's opinion; certified products can be, and have been, broken.

Your second paragraph describes what I was looking for. Thanks!
– flacs Jun 4 '12 at 1:09

As far as I understand, this is more of a consistency check with a certain probability of error (i.e. it might approve data that is actually wrong)?
– flacs Jun 4 '12 at 1:16

"if you run a computation that is based on operations modulo a large integer, you can perform the computation modulo a small integer and check that the result is correct modulo that small integer" - This isn't quite right, unless the small integer is a divisor of the large integer (and maybe not even then). Typically the large modulus will be a prime (or a RSA composite), in which case this scheme doesn't work. In some specific cases it's possible to devise a countermeasure along the lines you mention, but it's quite a bit trickier than your mention would imply.
– D.W. Jun 4 '12 at 3:06


@Tilka Wooping can indeed fail to detect errors, but the probability can be reduced arbitrarily by using enough distinct random moduli. However, this sort of technique is highly specialized: it can detect wrong computations for some algorithms, but it cannot detect information leaks.
– Gilles Jun 4 '12 at 17:52