Who (Really) Needs Sub-microsecond Packet Timestamps?

Introduction

For years network adapter manufacturers have told their customers that network monitoring applications can’t live without hardware packet timestamps (i.e. the ability of the network adapter to report to the driver the time a given packet was sent or received). State-of-the-art FPGA-based network adapters [1, 2, 3] provide hardware timestamps with a resolution of +/- ~10 nsec and an accuracy of +/- ~50 nsec, so monitoring applications can safely assume an accuracy of 100 nsec, i.e. sub-usec measurements. Commodity adapters such as Intel 1 Gbit cards provide both RX and TX timestamps out of the box, together with IEEE 1588 time synchronisation, so the problem is limited to 10 Gbit (at least until Intel comes up with a 10G adapter featuring hardware timestamps).

Who Really Needs Sub-microsecond Packet Timestamps?

This is a good question. Everyone seems to want them, but in practice they might not be needed. Let’s clarify this point in a bit more detail. For RTT (Round-Trip Time) measurements (i.e. how long a packet takes to travel from location X to location Y and back) over long distances (e.g. Italy to the USA and back), the order of magnitude is milliseconds (actually tens to hundreds of msec), so microseconds are not needed. On a LAN they are not needed either: since the probe packet used to measure the RTT is sent and received on the same adapter, 1 Gbit commodity adapters can do the trick, and PF_RING supports them. For one-way delay (i.e. measuring the time from A to B) on a WAN, 1G adapters plus IEEE 1588 can do the trick (the delay is in the msec range); on a LAN, same as above.

So who really needs sub-microsecond hardware timestamps at 10 Gbit (at 1 Gbit, as explained above, we already have a solution)? Reading around on the Internet, it seems that one of the few markets where they are needed is microburst detection [1, 2], in particular on critical networks such as high-frequency trading and industrial plants.

Can ntop Provide Sub-microsecond Timestamps in Software at 10 Gbit?

In short: yes, we can. When we developed our n2disk application at 10 Gbit, we faced the timestamp problem because no commodity adapter supported them. We spent quite some time optimising this application, and these are our findings:

We assume a server-class machine with a good motherboard (e.g. Dell, Supermicro, HP), not a toy PC. This guarantees that the on-board clock is of good quality.

The call to clock_gettime() used to read the timestamp in software takes ~30 nsec in our tests. As at 10 Gbit the maximum packet ingress rate is 14.88 Mpps, i.e. one packet every ~67 nsec, reading the timestamp for every single packet once it has been received is overkill (not to mention that the reported time would be shifted into the future with respect to the real packet arrival). A small benchmark sketch for these numbers follows.
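The following is a minimal, self-contained sketch (our own illustration, not ntop/n2disk code) that measures the average cost of a clock_gettime() call so you can compare it on your hardware against the minimum inter-packet time at 10 Gbit: a 64-byte frame plus preamble and inter-frame gap occupies 84 bytes on the wire, i.e. 672 bits / 10 Gbit/s = 67.2 nsec per packet, which corresponds to 14.88 Mpps.

/*
 * clock_bench.c: estimate the per-call cost of clock_gettime().
 * Compile with: gcc -O2 clock_bench.c -o clock_bench (add -lrt on older glibc)
 */
#include <stdio.h>
#include <time.h>

#define ITERATIONS 10000000UL

int main(void) {
  struct timespec start, end, now;

  clock_gettime(CLOCK_REALTIME, &start);
  for (unsigned long i = 0; i < ITERATIONS; i++)
    clock_gettime(CLOCK_REALTIME, &now); /* the call whose cost we measure */
  clock_gettime(CLOCK_REALTIME, &end);

  double elapsed_nsec = (end.tv_sec - start.tv_sec) * 1e9
                      + (end.tv_nsec - start.tv_nsec);
  printf("clock_gettime(): ~%.1f nsec/call (10G min packet time: 67.2 nsec)\n",
         elapsed_nsec / ITERATIONS);
  return 0;
}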

We therefore decided to create a thread (we called it the pulse thread) that calls clock_gettime() at full speed and shares the resulting time with the capture thread, so the capture path does not have to read the clock itself for every packet.
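Below is a minimal sketch of the pulse thread idea (our own illustration under stated assumptions, not the actual n2disk implementation): one thread refreshes a shared timestamp in a tight loop, while the capture thread performs only a cheap atomic read per packet. In this sketch the timestamp is packed into a single 64-bit word (seconds in the high 32 bits, nanoseconds in the low 32 bits) so readers always see a consistent value with one atomic load.

/*
 * pulse_thread.c: share a continuously refreshed timestamp with the
 * capture path. Compile with: gcc -O2 -pthread pulse_thread.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <time.h>

static _Atomic uint64_t pulse_ts; /* packed (sec << 32) | nsec */

static void *pulse_thread(void *arg) {
  (void)arg;
  struct timespec ts;

  for (;;) { /* refresh the shared timestamp at full speed */
    clock_gettime(CLOCK_REALTIME, &ts);
    atomic_store_explicit(&pulse_ts,
                          ((uint64_t)(uint32_t)ts.tv_sec << 32) | (uint32_t)ts.tv_nsec,
                          memory_order_release);
  }
  return NULL;
}

/* Called by the capture thread for every received packet: no system/vDSO
   call on the fast path, just one atomic load of the latest pulse. */
static inline void packet_timestamp(struct timespec *ts) {
  uint64_t v = atomic_load_explicit(&pulse_ts, memory_order_acquire);
  ts->tv_sec  = (time_t)(v >> 32);        /* seconds truncated to 32 bits in this sketch */
  ts->tv_nsec = (long)(v & 0xFFFFFFFFu);
}

int main(void) {
  pthread_t tid;
  pthread_create(&tid, NULL, pulse_thread, NULL);

  /* ... packet capture loop: call packet_timestamp() for each packet ... */

  pthread_join(tid, NULL); /* never returns in this sketch */
  return 0;
}

In a real deployment such a pulse thread would presumably be pinned to a dedicated core so that it does not compete with the capture threads; that detail is omitted here.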

Conclusion

Using software timestamps and our “timestamp trick” you can achieve ~30 nsec timestamp precision, so that at 10 Gbit line rate every packet gets a different timestamp (we stay below the ~67 nsec inter-packet time). This means that you can use n2disk to detect microbursts at 10 Gbit line rate, as:

It can handle 14.88 Mpps with no drops when dumping them to disk with nsec timestamps

You can avoid using hardware timestamps for sub-usec precision and reserve them for specific tasks where you need very accurate ~100 nsec timestamps. So far, however, we have not received any requests from people who really need them, so we’re confident that our approach is enough for most people.

Hardware timestamps still make sense in those cases where you need a NIC with a GPS signal input, so that you can accurately synchronise time over long distances with an accuracy better than what IEEE 1588 can offer.