Introduction

Anyone who has used the .NET System.Timers.Timer class for low interval times will realise that it does not offer a very high resolution. The resolution is system dependent, but the finest interval that can usually be achieved is around 15ms (System.Windows.Forms.Timer has an even worse resolution, although it is unlikely a UI will need to be updated this fast). Significantly better performance can be achieved using the Win32 multimedia timer (there are various .NET projects that wrap this timer); however, there are no timers available in the microsecond range.

The problem I encountered was that I needed to send an Ethernet UDP message packet out every 800µs (0.8ms); it did not matter if a packet was slightly delayed or did not go off exactly 800µs after the last one. Basically, what I needed was a microsecond timer that was accurate the majority of the time.

The fundamental problem with a software timer in the region of 1ms is that Windows is not a real-time Operating System (RTOS) and so is not suitable for generating regular and accurate events around the 1ms mark. MicroTimer cannot and does not solve this problem; however, it does offer a microsecond timer with a reasonable degree of accuracy (approx. 1µs) the majority (approx. 99.9%) of the time. The trouble is, for the other 0.1% of the time, the timer can be very inaccurate (whilst the Operating System gives some of the processing time to other threads and processes). The accuracy is highly system/processor dependent; a faster system will result in a more accurate timer.

The beauty of MicroTimer is that it is called in a very similar way to the existing System.Timers.Timer class; however, the interval is set in microseconds (as opposed to milliseconds in System.Timers.Timer). On each timed event, MicroTimer invokes the predefined (OnTimedEvent) callback function. The MicroTimerEventArgs properties provide information (to the microsecond) on when exactly (and how late) the timer was invoked.
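To illustrate, here is a minimal usage sketch based on the members described in this article (the MicroTimerElapsed event name is inferred from the MicroTimerElapsedEventHandler naming and should be checked against the download):

    using System.Threading;
    using MicroLibrary;

    class Program
    {
        static void Main()
        {
            MicroTimer microTimer = new MicroTimer();
            microTimer.MicroTimerElapsed += OnTimedEvent; // callback invoked every interval
            microTimer.Interval = 800;                    // interval in microseconds (0.8ms)

            microTimer.Start();                           // or microTimer.Enabled = true;
            Thread.Sleep(2000);                           // let the timer run for two seconds
            microTimer.Stop();                            // or microTimer.Enabled = false;
        }

        static void OnTimedEvent(object sender, MicroTimerEventArgs timerEventArgs)
        {
            // Keep the work here short, e.g. update a variable or send a UDP packet
        }
    }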

Using the code

MicroStopwatch - This derives from and extends the System.Diagnostics.Stopwatch class; importantly, it provides the additional property ElapsedMicroseconds. This is useful as a standalone class where the elapsed microseconds from when the stopwatch was started can be directly obtained.
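A condensed sketch of the idea (the full implementation is in the download; this version assumes the high-resolution performance counter is available):

    public class MicroStopwatch : System.Diagnostics.Stopwatch
    {
        // Microseconds per Stopwatch tick, derived from the counter frequency
        readonly double _microSecPerTick =
            1000000D / System.Diagnostics.Stopwatch.Frequency;

        public MicroStopwatch()
        {
            if (!System.Diagnostics.Stopwatch.IsHighResolution)
            {
                throw new System.Exception(
                    "On this system the high-resolution performance counter is not available");
            }
        }

        // Elapsed time since the stopwatch was started, in microseconds
        public long ElapsedMicroseconds
        {
            get { return (long)(ElapsedTicks * _microSecPerTick); }
        }
    }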

MicroTimer - Designed to operate in a very similar way to the System.Timers.Timer class, it has a timer interval in microseconds and Start / Stop methods (or an Enabled property). The timer implements a custom event handler (MicroTimerElapsedEventHandler) that fires every interval. The NotificationTimer function is where the 'work' is done and is run in a separate high priority thread. It should be noted that MicroTimer is inefficient and very processor hungry, as the NotificationTimer function runs a tight while loop until the elapsed microseconds reach the next interval. The while loop uses a SpinWait; this is not a sleep but a busy-wait that spins for a few nanoseconds at a time, keeping the thread running without relinquishing the remainder of its CPU time slice. This is not ideal; however, for such small intervals, it is probably the only practical solution.
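In outline, the loop at the heart of NotificationTimer looks something like the sketch below (simplified; it lives inside the MicroTimer class, and the exact details are in the download):

    // Sketch of the timer loop: run in a high priority thread, spinning
    // until each (absolute) interval boundary passes
    void NotificationTimer(ref long timerIntervalInMicroSec,
                           ref long ignoreEventIfLateBy,
                           ref bool stopTimer)
    {
        int timerCount = 0;
        long nextNotification = 0;
        MicroStopwatch microStopwatch = new MicroStopwatch();
        microStopwatch.Start();

        while (!stopTimer)
        {
            // Execution time of the previous callback (time spent beyond the boundary)
            long callbackFunctionExecutionTime =
                microStopwatch.ElapsedMicroseconds - nextNotification;

            nextNotification += timerIntervalInMicroSec; // absolute time of the next event
            timerCount++;

            long elapsedMicroseconds = 0;
            // Tight loop: spin (without yielding the time slice) until the interval elapses
            while ((elapsedMicroseconds = microStopwatch.ElapsedMicroseconds)
                    < nextNotification)
            {
                System.Threading.Thread.SpinWait(10);
            }

            long timerLateBy = elapsedMicroseconds - nextNotification;
            if (timerLateBy >= ignoreEventIfLateBy)
            {
                continue; // too late: skip this event rather than firing it
            }

            MicroTimerElapsed(this, new MicroTimerEventArgs(timerCount,
                elapsedMicroseconds, timerLateBy, callbackFunctionExecutionTime));
        }

        microStopwatch.Stop();
    }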

MicroTimerEventArgs - Derived from System.EventArgs, this class provides an object for holding information about the event. Namely, the number of times the event has fired, the absolute time (in microseconds) from when the timer was started, how late the event was and the execution time of the callback function (for the previous event). From this data, a range of timer information can be derived.
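Sketched as a class (property names inferred from the description above):

    public class MicroTimerEventArgs : System.EventArgs
    {
        public int TimerCount { get; private set; }           // how many times the event has fired
        public long ElapsedMicroseconds { get; private set; } // absolute time since the timer started
        public long TimerLateBy { get; private set; }         // how late this event was
        public long CallbackFunctionExecutionTime { get; private set; } // previous callback's duration

        public MicroTimerEventArgs(int timerCount, long elapsedMicroseconds,
                                   long timerLateBy, long callbackFunctionExecutionTime)
        {
            TimerCount = timerCount;
            ElapsedMicroseconds = elapsedMicroseconds;
            TimerLateBy = timerLateBy;
            CallbackFunctionExecutionTime = callbackFunctionExecutionTime;
        }
    }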

By design, the amount of work done in the callback function (OnTimedEvent) must be small (e.g., update a variable or fire off a UDP packet). To that end, the work done in the callback function must take significantly less time than the timer interval. Separate threads could be spawned for longer tasks; however, this is outside the scope of this article. As discussed earlier, because Windows is not a real-time Operating System, the callback function (OnTimedEvent) may be late; if this happens and any particular interval is delayed, there are two options:

Either: Set the property IgnoreEventIfLateBy whereby the callback function (OnTimedEvent) will not be called if the timer is late by the specified number of microseconds. The advantage of this is the timer will not attempt to 'catch up', i.e., it will not call the callback function in quick succession in an attempt to catch up. The disadvantage is that some events will be missed.

Or: By default, MicroTimer will always try and catch up on the next interval. The advantage of this is the number of times the OnTimedEvent is called will always be correct for the total elapsed time (which is why the OnTimedEvent must take significantly less time than the interval; if it takes a similar or longer time, MicroTimer can never 'catch up' and the timer event will always be late). The disadvantage is that, whilst it is trying to 'catch up', the actual interval achieved will be much less than the required interval, as the callback function is called in quick succession.
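In code, the choice between the two behaviours comes down to one property (the values are illustrative; the default behaviour corresponds to an effectively infinite threshold):

    // Option 1: skip any event that is more than 500µs late (no catch-up, some events missed)
    microTimer.IgnoreEventIfLateBy = 500;

    // Option 2 (the default behaviour): an effectively infinite threshold means late
    // events are never ignored, so the event count stays correct for the elapsed time
    microTimer.IgnoreEventIfLateBy = long.MaxValue;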

The timer may be stopped in one of three ways:

Stop (or Enabled = false) - This method stops the timer by setting a flag that instructs the timer to stop; however, the call executes asynchronously, i.e., the call to Stop will return immediately (but the current timer event may not have finished).

StopAndWait - This method stops the timer synchronously; it will not return until the current timer (callback) event has finished and the timer thread has terminated. StopAndWait also has an overload that accepts a timeout (in ms); if the timer successfully stops within the timeout period then true is returned, else false is returned.

Abort - This method may be used as a last resort to terminate the timer thread. For example, to abort the timer if it has not stopped after waiting 1sec (1000ms), use:

    if (!microTimer.StopAndWait(1000))
    {
        microTimer.Abort();
    }

The MicroLibrary namespace (MicroLibrary.cs) contains the three classes sketched above: MicroStopwatch, MicroTimer and MicroTimerEventArgs. For the complete code, see the 'Download source' link above.

The screenshot below shows the console output. The performance varied between runs, but was usually accurate to 1µs. Due to system caching, the accuracy was worse on the first run and improved after the first few events. This test was on a 2GHz Dell Inspiron 1545 with an Intel Core 2 Duo (running Windows 7 64-bit). The performance improved significantly on faster machines.

It is very unlikely a UI will need to be updated at intervals in the millisecond range. Purely for demonstration purposes, the 'Download WinForms demo project' link above contains a very simple WinForms application that updates a UI using the MicroTimer. The screenshot below shows the application acting as a stopwatch (with a microsecond display) where the UI is updated with the ElapsedMicroseconds every 1111µs (1.111ms).

Summary

MicroTimer is designed for situations where a very quick timer is required (around the 1ms mark); however, due to the non real-time nature of the Windows Operating System, it can never be guaranteed to be accurate. That said, with few other microsecond software timers available, it does offer a reasonable solution for this task (and although processor hungry, is reasonably accurate on fast systems).

Thanks for your comments and contribution. I'll give your code a go, but at first glance I'm not entirely sure the solution would work as intended. As you suggest, it may reduce CPU usage (especially for larger intervals), but you are attempting to join your own thread, so the join can never succeed and would always time out. This would result in the timer always being late.

Thanks for the quick reply. Initially I tested with the original code provided, but it occupied 20-25% CPU. The reason is the SpinWait(10), which occupies the CPU. So I thought the current thread should wait for 85-90% of the interval time. I then wrote logic where the current thread waits for 90% of each interval, and now it occupies 0% CPU with an accurate interval.
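For anyone wanting to experiment with this approach, a rough sketch of such a hybrid wait is shown below (the names follow the loop sketch earlier in the article, and the 90% threshold is illustrative); note that Sleep() itself is not precise (see the later discussion), so some accuracy is traded away:

    // Hybrid wait: sleep away most of the remaining interval, then spin for the rest.
    // Greatly reduces CPU usage at the cost of relying on Sleep()'s coarse accuracy.
    while ((elapsedMicroseconds = microStopwatch.ElapsedMicroseconds) < nextNotification)
    {
        long remainingMicroseconds = nextNotification - elapsedMicroseconds;
        if (remainingMicroseconds > timerIntervalInMicroSec / 10)
        {
            // Sleep in whole milliseconds, keeping the last ~10% of the interval in hand
            System.Threading.Thread.Sleep((int)(remainingMicroseconds * 9 / 10 / 1000));
        }
        else
        {
            System.Threading.Thread.SpinWait(10); // spin through the final stretch
        }
    }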

Your code appears to be correct! However, you will have to set microTimer.IgnoreEventIfLateBy (to, say, 500). A PC will struggle (fail) to update a UI every 10ms, and thus you will be forced to ignore (miss) some events. If you do not do this, the timer will exhibit the behaviour you are seeing. (I assume Interval is being set to 10000, i.e., 10ms.)

I'm using this timer to display something on a 144Hz monitor. I set the interval to 6944 microseconds. It works for about 20s, but then rounding error seems to accumulate and causes problems after a while. Can I ask how to reset the timer every 20s to prevent this? I tried reset() but it simply stops. Thank you!

In theory you should not need to reset/restart the timer every 20 sec. Two things to look out for:
(i) Ensure you have set 'IgnoreEventIfLateBy' (to, say, 500). It will cause you to miss events, but it should stop rounding errors from accumulating.
(ii) Make sure the task you are doing in the OnTimedEvent() always returns and takes significantly less time than the interval.

Unfortunately, there is currently no way to explicitly 'reset' the timer. The best alternative is to call Stop(), or StopAndWait() (to ensure the current event has ended), and then start the timer again.
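For example (a sketch; whether the counts restart from zero should be verified against the download):

    // 'Reset' by stopping (ensuring the current event has finished) and restarting
    if (!microTimer.StopAndWait(1000)) // wait up to one second for a clean stop
    {
        microTimer.Abort();            // last resort if the timer did not stop in time
    }
    microTimer.Start();                // the count and elapsed time start again from zero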

It's a little hard to know exactly what's causing the problem but I hope this helps!

Hello, I noticed that the timer uses the maximum resources of one processor core. I use it with a period of 10ms. By putting a Sleep(5) in the 'while (!stopTimer)' loop, the system works while enormously reducing the resources used. Is there a way to improve it in that respect? I leave that to you, because I do not fully master all the code of this class. Otherwise, it's a great class you have provided. Excuse my bad English (Google translation). Thank you.

The simple answer to your question is no! The MicroTimer is, by design, processor hungry (which is why it should be run on a multi-core/processor machine); it will 'hog' one of the cores and run it at nearly 100%. There is no practical way to obtain (and guarantee) such low timer resolutions (such as 10ms) on a non real-time operating system such as Windows.

As you have done, a Sleep() in the while loop will indeed reduce the load on the machine; however, a Sleep(5) is not guaranteed to sleep for 5ms, and you may (will) find that on some occasions the Sleep function will not return for some time (i.e., way over 5ms).

If you need to reduce the load on your operating system (but still retain some timer accuracy), you may be better off using a Sleep(0) in the while loop (instead of Sleep(5)).
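That is, something along these lines (a sketch, with names as in the loop shown earlier):

    // Sleep(0) gives up the remainder of the time slice if another thread is ready
    // to run, easing system load while still re-checking the elapsed time frequently
    while ((elapsedMicroseconds = microStopwatch.ElapsedMicroseconds) < nextNotification)
    {
        System.Threading.Thread.Sleep(0);
    }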

Sorry to hear about that, I've just downloaded all the zip files and they have all opened fine for me (using WinZip). I've not heard of this problem before from anyone else, can I suggest you try using a different zip extraction utility (WinZip, WinRAR, etc).

First of all, this is exactly what I'm looking for. However, I found a slight issue after basing my use of the class on your WinForms example. The problem is in the StopAndWait(int timeoutInMilliSec) method and is caused by allowing either x.Enabled = true; or x.Start(); to be used to start the timer.

Essentially, if you use x.Start(), the Enabled property doesn't get set to true, so when you later call StopAndWait() the method returns straight away instead of waiting for the _threadTimer thread to complete. This means that the OnTimedEvent() method can be called after you've tried to wait for it to stop, and you might access objects that have already been disposed of.

My first thought would be to set Enabled to true in the Start() method after the check for Enabled being true. I'll try that and see if it works.

Glad you have found the MicroTimer useful and thanks for your comments.

The 'Enabled' property getter really just reports whether the timer thread is alive or not, and StopAndWait() will (should!) only return when the timer has finished. On an initial glance, I can't quite see how the scenario outlined could happen, although the behaviour described is far from ideal. I'll try and investigate this further; in the meantime, if you come up with any more information, please let me know.

I did a bit more experimenting to try to work around that but didn't find a solution, and I no longer have the code around due to having to abandon your timer class. Unfortunately, the app I'm working on needs to be able to run on single core/CPU machines, and I found they couldn't cope with the tight busy loop that implements the delays as well as trying to run a GUI.

The gist of it was though that I had a WPF GUI where you press a start button and the app sends out datagrams on a socket at very small periods. The timer was enabled using Start() then, when a "Stop" button was pressed, it called "StopAndWait()", with a 1000ms wait, before disposing of stuff. StopAndWait() never waited for the 1000ms; it always returned straight away because the check for Enabled always returned false.

Very nice. Downloaded and ran it without any issues. Managed to obtain over 100,000 events per second on my 2010 ICore 950 with music playing in the background, but had to comment out the writes to the console to achieve this. Instead, I stored the MicroTimerEventArgs in a preallocated list. In the console thread, after it was done, I tracked how often an event was "late", which I defined as anything over half the period of the interval, which in this case would be 5 microseconds. Results were about 560 / 199581 late, or roughly 0.28%. The worst late time was around 2200 microseconds and the average late time was about 789 microseconds. But better hardware would probably improve on that. I also tried it with 200,000 events per second at an interval of 5 microseconds. Results degraded to 4553 late events out of 399671 total events, or roughly 1.1%. The worst late event was late by around 15,700 microseconds, and the average late event was late by about 5963 microseconds. I also tried it without music playing, with similar results.

I've often thought about doing a C++ version (but never seem to find the time!); it should definitely be possible. QueryPerformanceCounter() and QueryPerformanceFrequency() can be used to produce a high-resolution stopwatch timer, and SetPriorityClass() and SetThreadPriority() can then be used to increase the timer thread's scheduling priority.

The Windows Timestamp Project by Arno Lentfer is a C++ library. But it appears to be proprietary, and not all the C++ source code is available in the download file. There are also certain limitations to the project, and a license is required to remove the limitations - which is why I think it is proprietary. I was able to load it into VS 2013 Community Edition, but did not try to run it once I discovered that not all the source code was available. But his description of the project is interesting.

I appreciate your article! I am using Windows 7 64-bit on a fast i7-4790 CPU. Using C#, I have a System.Threading.Timer that I start on the next second (delay is 0 to 999 ms) and whose period is 1000 ms. Every second it updates the time on the screen; the lengthiest calculation is around 1 second, once per minute; however, I employ Event1.WaitOne(5000) to block the UI thread and Event1.Set to release it (the 5 sec timeout never occurs). I also have a radio-controlled watch that keeps time perfectly. The callback adds 1 second to a DateTime variable. To my UTTER AMAZEMENT, the displayed time loses around 1 second per minute. By adjusting the timer period to 985 milliseconds, I maintain fairly precise time. The displayed time changes regularly by 1 second every second. This is a HUGE timing inaccuracy! I would have thought that the thread pool timer would know after every callback when to schedule the next event (BASED ON THE FIRST EVENT). I have coded a small Windows Forms application that replicates this behavior. An officially certified COSC quartz chronometer runs within ±0.07 seconds/day. My System.Threading.Timer runs around 24 minutes/day late. How is this possible?

Just imagine if they had used .NET timers in July 1969 ... those astronauts would have missed the Moon entirely, let alone taken the effects of relativity into account. My fundamental error in thinking appears to be that I believed that a reasonable tolerance (say ±15 msec) would not only apply to the first event but also after the 1,000,000th period interval has elapsed ... and I never read "do not use as an interval timer" in the official documentation.
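For what it is worth, cumulative drift with any coarse timer can be avoided by scheduling each tick from an absolute start time rather than chaining fixed periods; a minimal sketch (individual ticks still jitter, but the error no longer accumulates):

    // Drift-free ticks: each due time is computed from the absolute start,
    // so a late tick is corrected on the next one instead of accumulating
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    const long periodMs = 1000;

    for (long tick = 1; tick <= 60; tick++) // one minute of one-second ticks
    {
        long dueMs = tick * periodMs; // absolute due time of this tick
        long waitMs = dueMs - stopwatch.ElapsedMilliseconds;
        if (waitMs > 0)
        {
            System.Threading.Thread.Sleep((int)waitMs); // coarse wait is fine here
        }
        // ... update the displayed time here ...
    }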

The reason I ask is that I'm writing an application that uses MicroTimer to run a method at known intervals. I observed that if I set my timer interval to a value that is larger than the executionTime, but not by much, I see more latency, and the executionTime profile is also different from what is observed at slower rates. Any hints/tricks for me?

The frequency I'm trying to reach is 100Hz, and the typical executionTime @ 10Hz is 5-8ms. When I set the frequency to 100Hz, I see an executionTime of around 5-50ms.

Very glad to hear you are finding the MicroTimer useful, thanks for the feedback and interesting test results.

I keep meaning to do some performance profiling on the MicroTimer but have not yet got around to it. In answer to your question, I think the results you are seeing are due to the nature (and frailties) of a non real-time operating system and the way a single task/process is time-sliced. As the timer interval is reduced, the operating system is allocating more and more resource to (and increasing its time slicing of) the code in the OnTimedEvent.

The only other suggestions I have are:
(1) Although this may not be very helpful, a faster machine is likely to go a long way towards solving your issue.
(2) I doubt this will have any significant effect, but you could try increasing the SpinWait (from 10 to, say, 100 or higher).
(3) Depending on how many cores/processors you have 'spare', you may be able to launch one (or more) 'worker' thread(s) at the start of your application. This worker thread would contain the task you are currently doing in the OnTimedEvent; the OnTimedEvent would then simply be used to signal the 'worker' thread(s) to 'go do your work'. This could be achieved using a ManualResetEvent (or AutoResetEvent or similar). The worker thread(s) would be in the wait [WaitOne()] state, and the OnTimedEvent would simply signal [Set()] the 'worker' thread(s) to continue, resetting [Reset()] on completion. I have not tried this myself and do fear this method may raise more problems than it solves; in particular, the signalling may lead to rather inconsistent and inaccurate timings. See the sketch below.
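A sketch of suggestion (3), using an AutoResetEvent (DoWork is a hypothetical placeholder for the task currently inside OnTimedEvent):

    class TimerWorker
    {
        readonly System.Threading.AutoResetEvent _workReady =
            new System.Threading.AutoResetEvent(false);

        public TimerWorker()
        {
            // Worker thread created once at startup and parked in the wait state
            var worker = new System.Threading.Thread(WorkerLoop);
            worker.IsBackground = true;
            worker.Start();
        }

        void WorkerLoop()
        {
            while (true)
            {
                _workReady.WaitOne(); // sleep until signalled by the timer
                DoWork();             // hypothetical: the former OnTimedEvent task
            }
        }

        // The timer callback now just signals the worker and returns immediately
        public void OnTimedEvent(object sender, MicroLibrary.MicroTimerEventArgs e)
        {
            _workReady.Set();
        }

        void DoWork()
        {
            // ... long-running task ...
        }
    }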

Just FYI, be careful running more than one of these timers. I was writing some code for testing that needed multiple timers, and this overloaded the system (so much precision, obviously). I did decrease the thread priority, and it seems to be reasonable at the moment.

Thanks for your tip. Running more than one timer at a time will put considerable strain on the computer system. In this case, reducing the thread priority is a good idea; it will reduce the timer's accuracy, but in most cases will still give reasonable timer performance.

Depending on the machine you are running the MicroTimer on and the application you open, this could well happen. From my experience (and feedback from other users), if you are running on a relatively quick multi-core machine (say 4 cores upwards) then the timer is relatively accurate and consistent even if the machine is doing other tasks; however, it will very much depend on what that other task is. The MicroTimer is (by design) processor hungry; for example, on a 4-core system the MicroTimer will effectively take over one of the cores, and (irrespective of the timer interval) that core will work at near 100% (thus the total CPU usage will be approx. 25%).

However, irrespective of the number of cores and processor speed, the MicroTimer is still at the mercy of the OS, so if (in its infinite wisdom) the OS decides to pause the thread the MicroTimer is on whilst it gives a time slice to the newly opened application, then unfortunately there is nothing that can be done (despite the MicroTimer being run as a high priority thread). So in summary, that may be normal behaviour, but it will be greatly affected by the specification of the machine and the load the newly opened application puts on the CPU (e.g. on Windows, Microsoft Word will take up much more system resources than Notepad).

With an Interval of 50,000µs (50ms) and an IgnoreEventIfLateBy of 500µs (0.5ms), as sketched below, the timer will be called every 50ms, but if it is delayed (by the OS) by more than 0.5ms then that event will be skipped. This should give you good results (within a few microseconds). Also ensure you are running the code on a reasonably fast (multi-core) machine.
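A sketch of those settings:

    microTimer.Interval = 50000;          // 50ms, expressed in microseconds
    microTimer.IgnoreEventIfLateBy = 500; // skip any event more than 0.5ms late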

In answer to your other question, unfortunately there is no way to create an exact time delay on a Windows OS. The fundamental problem is that a conventional operating system (such as Windows) is not able to perform deterministic timing operations. The only way to achieve deterministic timing behaviour is with an RTOS (Real-Time Operating System) like IntervalZero RTX.

The simple answer to your question is no. However, the functionality of a pause can be fairly easily achieved using the timer as it is. I've updated the console demo example to demonstrate this (run the example and see the timer totals 'timerCountTotal' and 'elapsedMicrosecondsTotal'). Next time I update the article, I will look at adding a pause function. In the meantime, I hope the code below helps.
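A sketch of the idea (the updated console demo has the full version; this assumes the MicroTimerEventArgs properties described earlier):

    // 'Pause' by stopping the timer and accumulating totals across each run
    int timerCountTotal = 0;
    long elapsedMicrosecondsTotal = 0;
    MicroLibrary.MicroTimerEventArgs lastEventArgs = null;

    void OnTimedEvent(object sender, MicroLibrary.MicroTimerEventArgs timerEventArgs)
    {
        lastEventArgs = timerEventArgs; // remember the most recent event
    }

    void Pause()
    {
        microTimer.StopAndWait(); // ensure the current event has finished
        if (lastEventArgs != null)
        {
            // Carry this run's count and elapsed time into the running totals
            timerCountTotal += lastEventArgs.TimerCount;
            elapsedMicrosecondsTotal += lastEventArgs.ElapsedMicroseconds;
            lastEventArgs = null;
        }
    }

    void Resume()
    {
        microTimer.Start(); // the timer restarts from zero; the totals keep the history
    }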

Hi Ken, thanks for the timer code. It's very useful for my application. I have a question for you. In the code of OnTimedEvent(), I call two functions from two DLLs of two devices sequentially. The API calls are done through DllImport. I developed this Windows program in C# and .NET on an XP OS, and I use your timer at 120Hz without any problems. Now I have to continue development on Windows 7 (32-bit), and the maximum frequency of the timer without problems is 90Hz. If I increase the timer frequency, the program locks the system. The hardware of the Windows 7 PC is better than the XP notebook. Can you help me? Thanks!!!

Glad to hear you have found the MicroTimer useful. With regards to the OS (Win 7 vs XP), this should not be the critical factor; the major factor will be the speed and number of cores (processors). The MicroTimer is, by design, processor hungry; for example, on a 4-core system the MicroTimer will effectively take over one of the cores, and (irrespective of the timer interval) that core will work at near 100%, so if you run Task Manager (taskmgr.exe) the total CPU usage will be approx. 25%. Thus, if you run the MicroTimer on a single-core machine, there will be very little processor time available for other tasks and the system will easily lock. In reality, MicroTimer needs to run on a minimum of a dual-core system. If your system is (at least) dual core, then I suggest your processors may not be quick enough to effectively service the high demands of the MicroTimer. It is hard to pin down the exact issue you are seeing, but I can only suggest that a machine with a Win 7 OS will use more machine resources (CPU time) than the same machine with an XP OS; so in your case (despite being of a lower spec) the XP machine is still able to allocate more resources to the MicroTimer. Also, if you've not already done so, I would suggest you set 'IgnoreEventIfLateBy' to approximately half the interval, e.g., for an Interval of 10,000 (10ms or 100Hz), use an IgnoreEventIfLateBy of 5,000 (5ms).