You might want to unroll the loop by some number of times, like 10, to minimize the loop overhead.
– Mike Dunlavey Jun 26 '09 at 12:49


I just updated to use Stopwatch.StartNew. Not a functional change, but saves one line of code.
– LukeH Jun 26 '09 at 13:23

@Luke, great change (I wish I could +1 it). @Mike I'm not sure; I suspect the virtual-call overhead will be much higher than the comparison and assignment, so the performance difference will be negligible
– Sam Saffron Jun 27 '09 at 0:05

I'd propose passing the iteration count to the Action and creating the loop there (possibly even unrolled). If you're measuring a relatively short operation, this is the only option. And I'd prefer to see an inverse metric, e.g. count of passes/sec.
– Alex Yakunin Jun 28 '09 at 12:37


What do you think about showing the average time? Something like this: Console.WriteLine(" Average Time Elapsed {0} ms", watch.ElapsedMilliseconds / iterations);
– rudimenter Jun 28 '12 at 13:07

Finalisation won't necessarily be completed before GC.Collect returns. The finalisation is queued and then run on a separate thread. This thread could still be active during your tests, affecting the results.

If you want to ensure that finalisation has completed before starting your tests then you might want to call GC.WaitForPendingFinalizers, which will block until the finalisation queue is cleared:
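A minimal sketch of that pattern (the second Collect reclaims anything the finalizers released):

    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect(); // reclaim objects freed by the finalizers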

If you want to take GC interactions out of the equation, you may want to run your 'warm up' call after the GC.Collect call, not before. That way you know .NET will already have enough memory allocated from the OS for the working set of your function.

Keep in mind that you're making a non-inlined method call for each iteration, so make sure you compare the things you're testing to an empty body. You'll also have to accept that you can only reliably time things that are several times longer than a method call.
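As a hedged illustration, assuming a harness like the question's static void Profile(string description, int iterations, Action func), you could establish that baseline with an empty delegate (MethodUnderTest is a hypothetical stand-in for your own code):

    // Time an empty body first to establish the per-call overhead floor.
    Action empty = () => { };
    Profile("empty body (call overhead baseline)", 1000000, empty);
    Profile("method under test", 1000000, () => MethodUnderTest());
    // Differences only mean something well above that baseline.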

Also, depending on what kind of stuff you're profiling, you may want to base your timing on running for a certain amount of time rather than for a certain number of iterations - it tends to produce more easily comparable numbers without requiring a very short run for the best implementation and/or a very long one for the worst.
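A sketch of such a time-based harness, reporting the inverse metric (passes/sec) suggested in the comments above; the method name and signature are my own:

    using System;
    using System.Diagnostics;

    static double MeasurePassesPerSecond(Action action, TimeSpan budget)
    {
        long passes = 0;
        var watch = Stopwatch.StartNew();
        while (watch.Elapsed < budget)
        {
            action();
            passes++;
        }
        watch.Stop();
        return passes / watch.Elapsed.TotalSeconds;
    }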

This won't lead to any issues with closures.
– Alex Yakunin Nov 9 '09 at 12:30


@AlexYakunin: your link appears to be broken. Could you include the code for the Measurement class in your answer? I suspect that no matter how you implement it, you won't be able to run the code to be profiled multiple times with this IDisposable approach. However, it is indeed very useful in situations where you want to measure how different parts of a complex (intertwined) application are performing, so long as you keep in mind that the measurements might be inaccurate and inconsistent when run at different times. I'm using the same approach in most of my projects.
– ShdNx Jun 30 '12 at 17:37


The requirement to run a performance test several times is really important (warm-up + multiple measurements), so I switched to an approach with a delegate as well. Moreover, if you don't use closures, delegate invocation is faster than an interface method call in the IDisposable case.
– Alex Yakunin Aug 13 '12 at 0:38

I think the most difficult problem to overcome with benchmarking methods like this is accounting for edge cases and the unexpected. For example: how do the two code snippets behave under high CPU load, heavy network usage, disk thrashing, etc.? They're great for basic logic checks to see if a particular algorithm works significantly faster than another. But to properly test most code's performance you'd have to create a test that measures the specific bottlenecks of that particular code.

I'd still say that testing small blocks of code often has little return on investment and can encourage using overly complex code instead of simple, maintainable code. Writing clear code that other developers, or myself six months down the line, can understand quickly will yield more benefit than highly optimized code.

Significant is one of those terms that is really loaded. Sometimes having an implementation that is 20% faster is significant; sometimes it has to be 100 times faster to be significant. I agree with you on clarity; see: stackoverflow.com/questions/1018407/…
– Sam Saffron Jun 26 '09 at 5:54

In this case significant isn't all that loaded. You're comparing two or more competing implementations, and if the difference in their performance isn't statistically significant, it's not worth committing to the more complex one.
– Paul Alexander Jun 28 '12 at 4:02

Suggestions for improvement

Detecting if the execution environment is good for benchmarking (such as detecting whether a debugger is attached or whether JIT optimization is disabled, either of which would result in incorrect measurements).

Measuring parts of the code independently (to see exactly where the bottleneck is).

Comparing different versions/components/chunks of code (In your first sentence you say '... benchmarking small chunks of code to see which implementation is fastest.').

Regarding #1:

To detect if a debugger is attached, read the property System.Diagnostics.Debugger.IsAttached. (Remember to also handle the case where the debugger is initially not attached, but is attached after some time.)

To detect if JIT optimization is disabled, read the property DebuggableAttribute.IsJITOptimizerDisabled of the relevant assemblies:
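A hedged sketch combining both checks (the method name and the decision to throw are my own; the GetCustomAttribute<T> extension assumes .NET 4.5 or later):

    using System;
    using System.Diagnostics;
    using System.Reflection;

    static void EnsureBenchmarkFriendlyEnvironment(Assembly assemblyUnderTest)
    {
        // Timings taken under a debugger are misleading.
        if (Debugger.IsAttached)
            throw new InvalidOperationException("A debugger is attached.");

        // A Debug build (or otherwise disabled JIT optimization) also
        // invalidates the results.
        var debuggable = assemblyUnderTest.GetCustomAttribute<DebuggableAttribute>();
        if (debuggable != null && debuggable.IsJITOptimizerDisabled)
            throw new InvalidOperationException("JIT optimization is disabled.");
    }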

Regarding #2:

This can be done in many ways. One way is to allow several delegates to be supplied and then measure those delegates individually, as in the sketch below.
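A hypothetical harness along those lines (all names are my own; the tuple syntax assumes C# 7 or later):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    static Dictionary<string, TimeSpan> MeasureParts(
        int iterations, params (string Name, Action Part)[] parts)
    {
        var results = new Dictionary<string, TimeSpan>();
        foreach (var (name, part) in parts)
        {
            part(); // warm up / JIT-compile this part before timing it
            var watch = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                part();
            watch.Stop();
            results[name] = watch.Elapsed;
        }
        return results;
    }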

Regarding #3:

This could also be done in many ways, and different use-cases would demand very different solutions. If the benchmark is invoked manually, then writing to the console might be fine. However if the benchmark is performed automatically by the build system, then writing to the console is probably not so fine.

One way to do this is to return the benchmark result as a strongly typed object that can easily be consumed in different contexts.
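For instance, a hypothetical result type (all names are my own) that a console runner, a unit test, or a build step could each consume in its own way:

    using System;

    public sealed class BenchmarkResult
    {
        public string Name { get; set; }
        public int Iterations { get; set; }
        public TimeSpan Elapsed { get; set; }

        public double AverageMilliseconds
        {
            get { return Elapsed.TotalMilliseconds / Iterations; }
        }
    }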

Etimo.Benchmarks

Another approach is to use an existing component to perform the benchmarks. Actually, at my company we decided to release our benchmark tool to the public domain. At its core, it manages the garbage collector, the jitter, warm-ups, etc., just like some of the other answers here suggest. It also has the three features I suggested above. It manages several of the issues discussed in Eric Lippert's blog.

As an example, the tool can compare two components and write the results to the console; in the sample benchmark the two components compared are called 'KeyedCollection' and 'MultiplyIndexedKeyedCollection'.

If you're in a hurry, I suggest you get the sample package and simply modify the sample delegates as needed. If you're not in a hurry, it might be a good idea to read the blog post to understand the details.

Depending on the code you are benchmarking and the platform it runs on, you may need to account for how code alignment affects performance. To do so would probably require an outer wrapper that runs the test multiple times (in separate app domains or processes?), some of the times first calling "padding code" to force it to be JIT-compiled, so as to cause the code being benchmarked to be aligned differently. A complete test result would give the best-case and worst-case timings for the various code alignments. A rough sketch of the process-based variant follows.
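Everything in this sketch is hypothetical (the paths, the arguments, and the assumption that the child benchmark process prints a single elapsed-milliseconds value):

    using System;
    using System.Diagnostics;

    // Launch the benchmark in a fresh process so each run gets its own JIT
    // layout; repeating this yields best/worst timings across alignments.
    static double RunBenchmarkOnce(string exePath, string args)
    {
        var startInfo = new ProcessStartInfo(exePath, args)
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        };
        using (var process = Process.Start(startInfo))
        {
            string output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            return double.Parse(output); // child prints elapsed milliseconds
        }
    }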

The basic problem with your question is the assumption that a single measurement can answer all your questions. You need to measure multiple times to get an effective picture of the situation, especially in a garbage-collected language like C#.

However, a single measurement does not account for garbage collection. A proper profile additionally accounts for the worst-case cost of garbage collection spread out over many calls (this number is sort of useless, as the VM can terminate without ever collecting leftover garbage, but it is still useful for comparing two different implementations of func).
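A hedged sketch of what measuring multiple times could look like, reporting the spread across runs so that GC pauses show up in the worst case (the names and structure are my own):

    using System;
    using System.Diagnostics;
    using System.Linq;

    static void ProfileRepeatedly(string name, int runs, int iterationsPerRun, Action func)
    {
        var samples = new double[runs];
        for (int run = 0; run < runs; run++)
        {
            // Start each run from a clean heap so collections triggered by
            // the code under test are charged to this run.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            var watch = Stopwatch.StartNew();
            for (int i = 0; i < iterationsPerRun; i++)
                func();
            watch.Stop();
            samples[run] = watch.Elapsed.TotalMilliseconds;
        }

        Console.WriteLine("{0}: best {1:F2} ms, worst {2:F2} ms, mean {3:F2} ms",
            name, samples.Min(), samples.Max(), samples.Average());
    }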