Introduction

This article presents NPerf, a flexible performance benchmark framework. The framework provides custom attributes that the user uses to tag benchmark classes and methods. If you are familiar with NUnit [1], this is similar to the custom attributes it provides.

The framework uses reflection to gather the benchmark testers and the tested types, runs the tests, and outputs the results. The user just has to write the benchmark methods.

At the end of the article, I illustrate NPerf with some metaphysical .NET questions: interfaces vs. delegates, the string concatenation race, and the fastest dictionary.

QuickStart: Benchmarking IDictionary

Let's start with a small introductory example: benchmarking the [] assignment for the different implementations of IDictionary. To do so, we would like to test the assignment with a growing number of assignment calls.

All the custom attributes are located in the NPerf.Framework namespace, NPerf.Framework.dll assembly.

PerfTester attribute: defining testers

First, you need to create a tester class that will contain the methods that do the benchmark. This tester class has to be decorated with the PerfTester attribute.

The PerfTester attribute takes two arguments: the tested type (here, IDictionary) and the number of test runs. The framework will use this value to call the test methods multiple times (explained below).
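For our IDictionary example, the tester skeleton could look like this (a sketch; the exact PerfTester constructor signature is assumed from the description above):

```csharp
using System.Collections;
using NPerf.Framework;

// Tester class for IDictionary implementations, run 10 times.
// The attribute arguments (tested type, run count) are assumed.
[PerfTester(typeof(IDictionary), 10)]
public class DictionaryTester
{
    // Number of [] assignments performed in each test run;
    // updated in the set-up method shown later.
    private int count = 1000;
}
```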

PerfTest attribute: adding benchmark tests

The PerfTest attribute marks a method, inside a class already tagged with the PerfTester attribute, as a performance test. The method should take the tested type (IDictionary here) as a parameter, and the return type should be void:
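A benchmark method for the [] assignment could be sketched as follows (the method name and loop body are illustrative; only the attribute and signature shape come from the description above):

```csharp
// Inside the DictionaryTester class.
// The framework passes in each IDictionary implementation it finds.
[PerfTest]
public void ItemAssign(IDictionary dic)
{
    // The [] assignment being benchmarked, repeated count times.
    for (int i = 0; i < this.count; ++i)
        dic[i] = i;
}
```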

PerfSetUp and PerfTearDown Attributes

Often, you will need to set up your tester and tested class before actually starting the benchmark test. In our example, we want to update the number of insertions depending on the test repetition number. The PerfSetUp attribute can be used to tag a method that will be called before each test repetition. In our test case, we use this method to update the DictionaryTester.count member:
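A sketch of such a set-up method (the exact parameter list expected by the framework is assumed here):

```csharp
// Called before each test repetition; testIndex is the
// repetition number, dic the dictionary about to be tested.
[PerfSetUp]
public void SetUp(int testIndex, IDictionary dic)
{
    // Grow the workload with each repetition: 1000, 2000, 3000, ...
    this.count = 1000 * (testIndex + 1);
}
```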

PerfRunDescriptor attribute: giving some information to the framework

In our example, we test the IDictionary object with an increasing number of elements. It would be nice to store this number in the results rather than just the test index: we would like to store 1000, 2000, ..., and not 1, 2, ...

The PerfRunDescriptor attribute can be used to tag a method that maps the test index to a double. This double is typically used as the x coordinate when charting the results.
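In our example, the descriptor simply returns the number of insertions performed at each repetition (a sketch; the method name is illustrative):

```csharp
// Maps the test index to the value used as the x coordinate
// in the result charts: here, the number of insertions.
[PerfRunDescriptor]
public double RunDescription(int testIndex)
{
    return 1000.0 * (testIndex + 1);
}
```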

Compiling and Running

Compile this class to an assembly and copy the NPerf binaries (NPerf.Cons.exe, NPerf.Core.dll, NPerf.Framework.dll, NPerf.Report.dll, ScPl.dll) into the output folder.

NPerf.Cons.exe is a console application that dynamically loads the tester assemblies (which you need to specify) and the assemblies that contain the tested types (which you also need to specify), runs the tests, and outputs charts using ScPl [2] (a chart library under GPL).

Saving to XML

You can also output the results to XML by adding the -x parameter. Internally, .NET XML serialization is used to render the results to XML.
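To illustrate what "rendering to XML" means here, the sketch below serializes a hypothetical result type with the standard XmlSerializer (the TestRunResult class is not NPerf's actual result type, just an assumed stand-in):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Hypothetical result record, for illustration only.
public class TestRunResult
{
    public string TestName;
    public double RunDescriptor;  // x value from PerfRunDescriptor
    public double Duration;       // measured duration, in seconds
}

public class Demo
{
    public static void Main()
    {
        TestRunResult result = new TestRunResult();
        result.TestName = "ItemAssign";
        result.RunDescriptor = 1000;
        result.Duration = 0.012;

        // .NET XML serialization writes the public fields as XML elements.
        XmlSerializer serializer = new XmlSerializer(typeof(TestRunResult));
        using (StreamWriter writer = new StreamWriter("result.xml"))
            serializer.Serialize(writer, result);
    }
}
```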

A few remarks

You can add as many test methods (PerfTest) as you want in a PerfTester class,

You can define as many tester classes as you want,

You can load tester/tested types from multiple assemblies.

Overview of the Core

The NPerf.Core namespace contains the classes that do the job in the background. I do not plan to explain them in detail, but I will discuss some problems I ran into while writing the framework.

Getting the machine properties

Getting the physical properties of the machine was a surprisingly difficult task. It took me a bunch of Google tries to get to the right pages. Anyway, here is the self-explanatory code that gets the machine properties:
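A sketch of one way to do this, combining the Environment class with a registry read for the CPU details (NPerf's actual implementation may differ, e.g. it may use P/Invoke or WMI instead):

```csharp
using System;
using Microsoft.Win32;

public class MachineInfo
{
    public static void Main()
    {
        // Basic properties available directly from the runtime.
        Console.WriteLine("Machine: {0}", Environment.MachineName);
        Console.WriteLine("OS:      {0}", Environment.OSVersion);
        Console.WriteLine("CLR:     {0}", Environment.Version);

        // CPU name and clock speed from the registry (Windows only).
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"HARDWARE\DESCRIPTION\System\CentralProcessor\0"))
        {
            if (key != null)
            {
                Console.WriteLine("CPU: {0}", key.GetValue("ProcessorNameString"));
                Console.WriteLine("MHz: {0}", key.GetValue("~MHz"));
            }
        }
    }
}
```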

IDictionary benchmark

String concatenation benchmark
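The essence of the race can be sketched standalone, outside NPerf (timings via DateTime, since this is illustrative rather than a precise measurement):

```csharp
using System;
using System.Text;

public class StringConcatRace
{
    public static void Main()
    {
        int n = 10000;

        // Naive concatenation: each += copies the whole string,
        // so the loop is O(n^2) overall.
        DateTime start = DateTime.Now;
        string s = "";
        for (int i = 0; i < n; ++i)
            s += "x";
        Console.WriteLine("string +=     : {0} ms",
            (DateTime.Now - start).TotalMilliseconds);

        // StringBuilder appends into a growing buffer: amortized O(n).
        start = DateTime.Now;
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; ++i)
            sb.Append("x");
        string t = sb.ToString();
        Console.WriteLine("StringBuilder : {0} ms",
            (DateTime.Now - start).TotalMilliseconds);
    }
}
```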

Interface vs. Delegate
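The comparison boils down to dispatching the same call through an interface method versus through a delegate; a standalone sketch (again illustrative timing, not the NPerf tester itself):

```csharp
using System;

public interface IWorker { void Work(); }

public delegate void WorkHandler();

public class Worker : IWorker
{
    public void Work() { }
}

public class Race
{
    public static void Main()
    {
        int n = 10000000;
        Worker worker = new Worker();
        IWorker viaInterface = worker;
        WorkHandler viaDelegate = new WorkHandler(worker.Work);

        // Call through the interface (virtual dispatch).
        DateTime start = DateTime.Now;
        for (int i = 0; i < n; ++i)
            viaInterface.Work();
        Console.WriteLine("interface: {0} ms",
            (DateTime.Now - start).TotalMilliseconds);

        // Call through the delegate.
        start = DateTime.Now;
        for (int i = 0; i < n; ++i)
            viaDelegate();
        Console.WriteLine("delegate : {0} ms",
            (DateTime.Now - start).TotalMilliseconds);
    }
}
```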

History

References

[1] NUnit, the .NET unit-testing framework.

[2] ScPl, a .NET charting library (under GPL).

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

About the Author

Jonathan de Halleux is a Civil Engineer in Applied Mathematics. He finished his PhD in 2004 in the rainy country of Belgium. After 2 years working on the Common Language Runtime (i.e. .NET), he is now working at Microsoft Research on Pex (http://research.microsoft.com/pex).