Hi, I'm developing a kernel module that delays filtered packets for a given short time. I'm using netfilter for packet filtering and mod_timer() for the packet delay.

The kernel module holds the packet buffers (skbs) in a linked list and waits until the timer expires. When the timer expires, the module releases the packets.
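In case it helps, here is a minimal sketch of what the module does (not my full code; the hook signature and timer API follow the 2.6.24-era interfaces and differ between kernel versions, hook registration and cleanup are omitted, and names like delay_hook, release_pkt, and struct delayed_pkt are just illustrative):

#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/skbuff.h>
#include <linux/timer.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/jiffies.h>
#include <linux/slab.h>

#define DELAY_JIFFIES msecs_to_jiffies(50)   /* 5 jiffies at HZ=100 */

/* One queued packet: the stolen skb plus the okfn needed to release it. */
struct delayed_pkt {
    struct list_head  list;
    struct sk_buff   *skb;
    int             (*okfn)(struct sk_buff *);
    struct timer_list timer;
};

static LIST_HEAD(pkt_queue);
static DEFINE_SPINLOCK(queue_lock);

/* Timer callback: unlink the packet and hand it back to the stack. */
static void release_pkt(unsigned long data)
{
    struct delayed_pkt *dp = (struct delayed_pkt *)data;
    unsigned long flags;

    spin_lock_irqsave(&queue_lock, flags);
    list_del(&dp->list);
    spin_unlock_irqrestore(&queue_lock, flags);

    dp->okfn(dp->skb);          /* continue normal packet processing */
    kfree(dp);
}

/* Netfilter hook: steal matching packets and arm a per-packet timer. */
static unsigned int delay_hook(unsigned int hooknum,
                               struct sk_buff *skb,
                               const struct net_device *in,
                               const struct net_device *out,
                               int (*okfn)(struct sk_buff *))
{
    struct delayed_pkt *dp;
    unsigned long flags;

    dp = kmalloc(sizeof(*dp), GFP_ATOMIC);
    if (!dp)
        return NF_ACCEPT;       /* on allocation failure, let it pass */

    dp->skb  = skb;
    dp->okfn = okfn;

    spin_lock_irqsave(&queue_lock, flags);
    list_add_tail(&dp->list, &pkt_queue);
    spin_unlock_irqrestore(&queue_lock, flags);

    setup_timer(&dp->timer, release_pkt, (unsigned long)dp);
    mod_timer(&dp->timer, jiffies + DELAY_JIFFIES);

    return NF_STOLEN;           /* we now own the skb */
}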

What I'm struggling with is the accuracy of the timer function. Since the default Linux timer interrupt frequency is 100 Hz (HZ=100), I know the smallest timer interval is 10 msec, and one jiffy tick is also 10 msec. However, it looks like there is a small amount of error between the real-time clock and the jiffy tick.
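Just to spell out the arithmetic I'm relying on (a sketch using the standard msecs_to_jiffies() helper; the wrapper name is illustrative):

#include <linux/jiffies.h>

/* Tick period = 1000 / HZ msec. With HZ == 100:
 *   msecs_to_jiffies(50) == 5   (50 msec fits exactly in whole ticks)
 *   msecs_to_jiffies(45) == 5   (rounded up to the next whole tick)
 * so a jiffy-based timer cannot resolve intervals finer than 10 msec. */
static unsigned long delay_in_jiffies(unsigned int msec)
{
    return msecs_to_jiffies(msec);
}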

In my experiment (I set a 50 msec timer for each packet and sent one packet every second), if I set 5 jiffies (= 50 msec) for my packet delay, the timer correctly executes the callback function after 5 jiffy ticks. However, the actual real-time measurement shows that the packet delay varies between 40 msec and 50 msec. Even worse, the delay variation has a trend.
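For reference, this is roughly how I measure the actual delay (a sketch: I call do_gettimeofday() when the packet is queued and again in the timer callback; the helper names are illustrative):

#include <linux/time.h>
#include <linux/kernel.h>

/* Stamp the packet when it is queued (called from the hook). */
static void stamp_queue_time(struct timeval *t_queued)
{
    do_gettimeofday(t_queued);
}

/* Log the measured delay when the packet is released (called from
 * the timer callback). */
static void log_actual_delay(const struct timeval *t_queued)
{
    struct timeval t_now;
    long delta_usec;

    do_gettimeofday(&t_now);
    delta_usec = (t_now.tv_sec  - t_queued->tv_sec) * 1000000L +
                 (t_now.tv_usec - t_queued->tv_usec);
    printk(KERN_INFO "packet delayed %ld usec\n", delta_usec);
}

Please see the following data.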

Has anyone experienced the same problem? It looks like accuracy below 10 msec is not guaranteed, but I'd like to know why this kind of trend happens (I initially thought the error would be randomly distributed between 40 msec and 60 msec) and how the kernel adjusts the timer when the error grows beyond 10 msec.