Introduction

A common diagnostic task is to measure the performance of your code. For that purpose, the .NET Framework offers the System.Diagnostics.Stopwatch class. This class, however, is not truly intended for measuring code performance, and can return misleading values (explained in the ExecutionStopwatch vs .NET Stopwatch section).

The ExecutionStopwatch provides an accurate way to measure your code's performance.

ExecutionStopwatch vs .NET Stopwatch

The .NET Framework already contains a time measuring mechanism in the Stopwatch class (under System.Diagnostics). However, the sole purpose of this class is to measure the amount of "real-world" time that has passed. This means that if you attempt to use the Stopwatch class to measure the execution time of a method, you will also measure time spent by other background threads. In theory, you would want to call "Stop()" each time the OS performs a context switch to another thread, since you are not interested in measuring the time the CPU spends executing unrelated work.

The ExecutionStopwatch achieves just that: it measures the amount of time the CPU spent executing your current thread only. Time spent executing other system threads is not counted. This is done using the Win32 function GetThreadTimes, which returns time values relating to a particular thread rather than your system's global time (as the Stopwatch class does).
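
For illustration, here is a minimal sketch of how such a class might wrap GetThreadTimes via P/Invoke. The member names and structure are my own and may differ from the actual ExecutionStopwatch implementation:

using System;
using System.Runtime.InteropServices;

public class ExecutionStopwatch
{
    // GetThreadTimes reports FILETIME values; marshaling each as a 64-bit
    // integer gives the time in 100-nanosecond ticks.
    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool GetThreadTimes(IntPtr hThread,
        out long creationTime, out long exitTime,
        out long kernelTime, out long userTime);

    [DllImport("kernel32.dll")]
    private static extern IntPtr GetCurrentThread();

    private long m_startTicks;
    private long m_endTicks;
    private bool m_isRunning;

    public void Start()
    {
        m_startTicks = CurrentThreadTicks();
        m_isRunning = true;
    }

    public void Stop()
    {
        m_endTicks = CurrentThreadTicks();
        m_isRunning = false;
    }

    // CPU time (kernel + user) consumed so far by the calling thread.
    private static long CurrentThreadTicks()
    {
        long creation, exit, kernel, user;
        if (!GetThreadTimes(GetCurrentThread(),
                out creation, out exit, out kernel, out user))
            throw new System.ComponentModel.Win32Exception(
                Marshal.GetLastWin32Error());
        return kernel + user;
    }

    public TimeSpan Elapsed
    {
        get
        {
            long ticks = (m_isRunning ? CurrentThreadTicks() : m_endTicks)
                         - m_startTicks;
            // FILETIME ticks are 100 ns, the same resolution TimeSpan uses.
            return TimeSpan.FromTicks(ticks);
        }
    }
}

Note that GetCurrentThread returns a pseudo-handle that is only meaningful on the calling thread, so Start() and Stop() must be called from the thread being measured.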

Demonstration

To demonstrate the difference in behavior between .NET's original Stopwatch class and the ExecutionStopwatch class, I've come up with the following example:
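
(The original listing did not survive here; the following sketch is consistent with the results described below: the measured thread mostly sleeps, so it consumes wall-clock time but almost no CPU time.)

using System;
using System.Diagnostics;
using System.Threading;

class Program
{
    static void Main()
    {
        // Wall-clock measurement with the built-in Stopwatch.
        Stopwatch stopwatch = new Stopwatch();
        stopwatch.Start();
        Thread.Sleep(5000); // the thread consumes no CPU while sleeping
        stopwatch.Stop();
        Console.WriteLine("Stopwatch:          {0}", stopwatch.Elapsed);

        // CPU-time measurement with ExecutionStopwatch.
        ExecutionStopwatch executionStopwatch = new ExecutionStopwatch();
        executionStopwatch.Start();
        Thread.Sleep(5000);
        executionStopwatch.Stop();
        Console.WriteLine("ExecutionStopwatch: {0}", executionStopwatch.Elapsed);
    }
}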

The difference can be noticed immediately. While the .NET Stopwatch measured 4.9 seconds, the ExecutionStopwatch measured ~0 seconds (Windows' thread-time accuracy is roughly 15 ms). While the Stopwatch measured the total clock time that passed, the ExecutionStopwatch measured the time the CPU spent executing the code.

I never thought about it. I have used Stopwatch many times to measure the performance of my methods, never realizing that it was measuring my method's performance under the system's load, which was never my intention.

Is the 15 ms quantum only applicable to reporting, or does it apply to every quantum that gets added to the total thread time? If a thread runs for 5 ms, then sleeps for half a second, then runs another 5 ms, then sleeps another half-second, and so on, will the system record the thread as using one second out of every hundred, or would the little 5 ms chunks be rounded up to 15 ms, or rounded down to nothing?

I wonder why Microsoft didn't provide more precise timing metrics. If it were to query the hardware timers when performing a context switch, it should have no trouble getting microsecond-level accuracy. Otherwise it's very possible for threads to be credited with much less time than they're actually using (in some cases, they can be credited with much more, though I would expect the excess time from threads whose usage is under-reported to be divided among multiple other threads).