I'm just interested in cryptography, so please don't expect me to be an expert. ;) I recently read about AES cache-timing attacks and found them very interesting. I read the paper Cache-timing attacks on AES by Daniel J. Bernstein, but I don't understand everything in it.

How relevant is this to "real-life" network applications? As I understand it, the measurements have to be extremely precise to leak information. Are networks (even LANs) fast enough for that? (The author measures the time on the server and sends it to the client.)

The author dedicates a long section to preventing the OS from interrupting an AES computation. But how would an interruption leak any information? Yes, the computation takes longer than usual, but isn't that delay independent of the input?

Assuming this is in fact a problem for network applications: would it be sufficient to wait() until a constant time after encryption, before sending any data over the network?

In general, timing attacks are unfortunately quite relevant to real-life network applications. They generally rely on statistics over many tries, which means that key-independent delays can easily be averaged out; constant delays in particular accomplish very little. The best defense is to use a more constant-time implementation and to make sure a key is only used a limited number of times. Slowdown loops triggered on suspected attacks can also be used, simply to slow the attacker down.
–
Maarten BodewesNov 29 '12 at 22:05

@owlstead Please note that I didn't write "wait a constant time" but "wait to a constant time", meaning that network messages are only sent on, e.g., 10 ms "ticks". -- For a timing attack, must the plaintext be known? An encrypted chat, for example, shouldn't even be affected?
–
cooky451Nov 29 '12 at 22:21

Waiting to a constant time would do it, although it really could hurt performance. Whether the plaintext must be known depends on the kind of attack, I guess. Timing attacks are a group of attacks, not so much a specific one.
–
Maarten BodewesNov 29 '12 at 22:52

@owlstead Not affecting performance too much was the idea behind this. Yes, the answer from the server comes later, but the CPU can do something else while waiting (handling another client). -- Well, that was my question. As I understand it, there would be no point in measuring any timings if you don't know what data gets encrypted? What knowledge could you possibly gain?
–
cooky451Nov 29 '12 at 23:10

1 Answer

Yes, timing attacks are relevant to real-world implementations of crypto. Yes, as that paper demonstrates, these attacks can be carried out in real life: real networks are fast enough to allow these attacks.

It is also important to understand that some network services do provide timestamps that leak information about how long the operation took on the server; for instance, some TCP stacks will automatically add high-precision timestamps to every packet sent, and a few applications may add timestamps to their packets for their own reasons. This further heightens the risk. If we want AES to be a general-purpose encryption algorithm that is secure for essentially all reasonable uses (and we do), then this is extra motivation for a generic defense that eliminates this attack method.

There are many defenses. The best defense is to ensure that the implementation is constant time: the amount of time it takes is independent of the value of the key. You may also want to stop cache-based attacks, and ensure that the sequence of memory addresses read/written is constant (independent of the value of the key).
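As an illustration of key-independent memory access (this is my own minimal sketch, not code from the paper), one common countermeasure is to avoid indexing a lookup table directly with a secret byte. Instead, the code touches every table entry and keeps the wanted one with a branch-free mask, so the sequence of memory accesses is the same no matter what the secret is:

```c
/* Sketch of a cache-timing-resistant table lookup.
   A direct table[secret_index] access leaks the index through
   which cache line it touches; here we read all 256 entries
   and select the right one with a constant-time mask. */
#include <stdint.h>
#include <stddef.h>

uint8_t ct_lookup(const uint8_t table[256], uint8_t secret_index)
{
    uint8_t result = 0;
    for (size_t i = 0; i < 256; i++) {
        /* mask is 0xFF when i == secret_index, 0x00 otherwise,
           computed without a branch */
        uint8_t diff = (uint8_t)(i ^ secret_index);
        uint8_t mask = (uint8_t)(((uint16_t)diff - 1) >> 8);
        result |= (uint8_t)(table[i] & mask);
    }
    return result;
}
```

This costs 256 reads instead of one, which is the usual trade-off: constant-time implementations are slower, but the slowdown is paid in a key-independent way.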

Delaying until a constant time (the worst-case execution time) has some issues. It is hard to estimate the worst-case execution time in practice, given the variety of things that can cause execution to take a long time (e.g., a cache miss, a page fault, pre-emption by the OS, and more). If you take a very conservative estimate, then the estimate will be a very long time and performance will suffer dramatically. If you don't, then there is the risk that the actual time may exceed your constant time, and then you cannot recover. So, while this is indeed a possible solution strategy to consider, the devil is in the details, and I think it's probably not the most promising one: in most cases, the issues with it will make it unattractive in practice.

Thanks for the answer. You kind of didn't address my second question, though. :) And I have a follow-up question that came up in the comments: does a timing attack make any sense at all when the plaintext isn't known? I can't imagine how this scenario could possibly leak information, but maybe I'm overlooking something.
–
cooky451Nov 30 '12 at 1:09

The second question is, well, a different question and probably worth posting separately. The site seems to work best when there's one question per question. I encourage you to post the other questions separately!
–
D.W.Nov 30 '12 at 1:16

I'll wait until tomorrow, and if nobody has answered the question by then, I'll accept your answer and ask on Stack Overflow. I don't see any use in posting it here again; considering the still very small number of questions on this site, I doubt anyone could possibly have missed it. :)
–
cooky451Nov 30 '12 at 1:25

@cooky451, it's not that someone misses it -- it's that it clutters up the thread. I encourage you to post it here on this site, but just post it separately: as it is an almost entirely different question, it deserves a separate post (in my opinion).
–
D.W.Nov 30 '12 at 2:01

Delaying until a constant time is a very wrong approach. While it closes one side channel (timing), it opens other side channels: power consumption and CPU load. Some of these can be measured remotely over the network; for example, you can measure server throughput in order to estimate CPU load. This does introduce some noise, though.
–
v6akJan 7 at 11:59