Introduction

I found the article about code contracts in this month's MSDN Magazine very exciting. I was unaware of this feature of .NET 4.0 before reading the article, and as the manager of a team of developers who maintain a very large and complex application, I am always interested in techniques that improve code quality and correctness. I won't rehash the details about code contracts in this article. Go read the MSDN article first, or this CodeProject introduction; the basics are all there. Then go to the Microsoft DevLabs page for the download you'll need to run the code below.

My first thought when reading the MSDN article was that the benefits must come at some price, and my initial concern was an impact on performance. Contracts are enforced at run-time by inserting custom code at compile time, and whenever some other process is adding code to mine, I worry about hidden performance costs.

I wrote the small app below to gather some metrics and assess how expensive contracts are compared to the other techniques that can be used to validate pre- and post-execution conditions.

Using the Code

The author of the MSDN article used a trivial calculator function to highlight the benefits of contracts. I'll use the same basic function here:
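The actual method is in the downloadable code; a minimal sketch of its shape (the parameter names and the early-return branch are my reconstruction, not the author's exact code) looks something like this:

```csharp
// Unchecked baseline method -- no validation of any kind.
public static int Add(int op1, int op2)
{
    // A premature exit, kept deliberately to show how post-condition
    // checks have to be duplicated at every return point.
    if (op1 == 0)
        return op2;

    return op1 + op2;
}
```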

The extra if statement in there is just to reinforce the difficulty of adding post-condition checking everywhere your code has a premature exit. I'm going to leave it in for the analysis.

"If-Then-Throw"

One way to check pre- and post-conditions is to use explicit if statements to validate your input parameters and output results. All of the pre- and post-conditions we're adding here are entirely arbitrary, but they will be consistent across all the examples.
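A sketch of what the if-then-throw variant might look like (the specific conditions are arbitrary placeholders, as noted above):

```csharp
public static int IfCheckedAdd(int op1, int op2)
{
    // Pre-conditions: explicit if-then-throw checks on the inputs.
    if (op1 < 0)
        throw new ArgumentOutOfRangeException("op1");
    if (op2 < 0)
        throw new ArgumentOutOfRangeException("op2");

    if (op1 == 0)
    {
        // The post-condition must be repeated at this early exit...
        if (op2 < 0)
            throw new InvalidOperationException("Result must be non-negative.");
        return op2;
    }

    // ...and again at the normal exit.
    int result = op1 + op2;
    if (result < 0)
        throw new InvalidOperationException("Result must be non-negative.");
    return result;
}
```

The duplication at each return point is exactly the maintenance burden the extra if statement was added to highlight.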

The Setup

I ran the application 5 times to get average times and a feel for the variability of the results. The tests were executed on Windows 7 (32-bit), on a dual-core Intel Core 2 E8400 (3.0 GHz) CPU with 4 GB of RAM.

I used version 1.4.40314.1 of the Code Contracts SDK with pre- and post-condition checking enabled.
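For comparison, the contract-checked variant expresses the same conditions declaratively; a sketch (again with placeholder conditions) might look like this:

```csharp
using System.Diagnostics.Contracts;

public static int ContractCheckedAdd(int op1, int op2)
{
    // Pre-conditions, stated once at the top of the method.
    Contract.Requires(op1 >= 0);
    Contract.Requires(op2 >= 0);

    // Post-condition, stated once; the binary rewriter injects the check
    // at every return point when runtime checking is enabled.
    Contract.Ensures(Contract.Result<int>() >= 0);

    if (op1 == 0)
        return op2; // no duplicated post-condition check needed here

    return op1 + op2;
}
```

Note that unlike the if-then-throw version, the premature exit needs no special handling: the rewriter weaves the Ensures check into both return paths at compile time.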

I'm not interested in the performance of the different methods when the pre- or post-conditions are not met, just in the overhead of the different validation frameworks.

The Results

| Debug | Add | If-then-throw | Assert | Contract |
|---|---|---|---|---|
| Run 1 (ms) | 1934.4 | 2199.6 | 2230.8 | 3213.6 |
| Run 2 | 1950.0 | 2184.0 | 2215.2 | 3244.8 |
| Run 3 | 1950.0 | 2199.6 | 2246.4 | 3260.4 |
| Run 4 | 1950.0 | 2246.4 | 2293.2 | 3369.6 |
| Run 5 | 1934.4 | 2184.0 | 2246.4 | 3244.8 |
| Average | 1940.6 | 2202.7 | 2246.4 | 3266.6 |
| Std Dev | 8.54 | 25.63 | 29.18 | 60.01 |
| % slower than previous column | | 13.5% | 1.98% | 45.42% |
| % slower than Add | | 13.5% | 15.76% | 68.33% |

Table 1. Results from running code in Visual Studio 2008 built for debug.

The "naked" calculator method took on average 1940.6ms to execute.

The method with the if-then-throw pre and post condition checking took 2202.7ms to execute (13.5% slower than the unchecked benchmark Add() method).

The method that used Debug.Assert() pre- and post-condition checking took 2246.4ms (1.98% slower than if-then-throw and 15.76% slower than the unchecked benchmark Add() method).

The method that used contracts took on average 3266.6ms and was 45.42% slower than Debug.Assert() and 68.33% slower than the "naked" calculator method.

When the switch to enable contract code insertion is set to false, the ContractCheckedAdd() method took exactly the same amount of time as the unchecked Add() method, as no code was inserted at compile time. You can verify that by looking at the IL of ContractCheckedAdd() in an assembly built with contracts enabled and contracts disabled. ContractCheckedAdd() looks the same as Add() when contracts are not enabled.

Conclusion

I hope to avoid maintenance headaches on future projects and would be happy to sacrifice some performance for some assurances about correctness, so I expect to use code contracts in a lot of my future development. I thought it would be good to know some of the costs to weigh against the benefits.

I have only played around with Code Contracts. However, they do appeal to me as a way of ensuring that conditions are checked in an orderly manner. Your article has prompted me to take another look at Code Contracts for my work. Thank you.

Any reason why you're not testing with, or including an additional test with, the Requires<T> override? I know the VSM article used the override (which throws a new instance of T on failure), but I noticed your article just uses the vanilla mechanism. With the exception override, your contracts will still be tested in ReleaseRequires builds (a Code Contracts setting under Runtime Checking) but will just throw the exception and nothing else (IIRC).
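For readers unfamiliar with the distinction the comment is drawing, a sketch of the two forms side by side (the condition itself is a placeholder):

```csharp
using System;
using System.Diagnostics.Contracts;

public static class RequiresForms
{
    public static void Vanilla(int op1)
    {
        // Plain form: on failure, throws the internal ContractException.
        // Only checked when full runtime contract checking is enabled.
        Contract.Requires(op1 >= 0);
    }

    public static void WithExceptionType(int op1)
    {
        // Generic override: on failure, throws the named exception type.
        // This form can remain active in ReleaseRequires builds.
        Contract.Requires<ArgumentOutOfRangeException>(op1 >= 0);
    }
}
```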

I found your article interesting, I hadn't looked into code contracts yet so it served as my first introduction as well (pretty cool stuff).

After looking at the comments, I downloaded your code and did what the commenters suggested:

1. I replaced DateTime with System.Diagnostics.Stopwatch. As you stated, the accuracy probably isn't that much of an issue, but just to be sure.
2. I called each method once before starting the timing.
3. I compiled in release mode (I don't care about the performance of Debug.Assert(); it's not going to run in production anyway).
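The three changes above can be sketched as a small harness (the method under test and the iteration count are stand-ins for the article's actual code):

```csharp
using System;
using System.Diagnostics;

class Benchmark
{
    // Stand-in for the article's benchmark method.
    static int Add(int op1, int op2) { return op1 + op2; }

    static void Main()
    {
        const int iterations = 100000000;

        Add(1, 2); // warm-up call: the first invocation pays the JIT cost

        // Stopwatch uses the high-resolution performance counter
        // rather than DateTime's coarse system-clock ticks.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            Add(1, 2);
        sw.Stop();

        Console.WriteLine("Add {0}", sw.ElapsedMilliseconds);
    }
}
```

Built in release mode and run outside the debugger, this avoids all three of the distortions the commenters called out.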

I then ran the code as a VS-hosted process and consistently got numbers like:

Add 693
IfCheckedAdd 806
AssertCheckedAdd 688
ContractCheckedAdd 694

I then ran the code as a separate process (double-clicking on the exe) and consistently got numbers like these:

Add 408
IfCheckedAdd 408
AssertCheckedAdd 408
ContractCheckedAdd 408

So while your article didn't teach me much about the performance of code contracts, it did introduce me to them, and I really learned a lot about measuring performance through the comments.

It must be environmental outside of Visual Studio then, because my numbers are reproducible and show a performance penalty for contracts both in debug in Visual Studio and outside running release code from the DOS prompt.

I don't see how there couldn't be a penalty since you're running extra code.

<blockquote class="FQ"><div class="FQA">Sean Michael Murphy wrote:</div>Hey, are you using VS2010? Can you look at the generated IL of the different methods? Might the compiler be optimizing the checking code out because it knows it will never be run?<br></blockquote>

I actually ran the tests in builds from both VS2008 and VS2010 and got similar results in both. I also tried compiling for x86, x64, and AnyCPU. Each build showed consistent results across the 4 methods. I was a little surprised that the x86 build was a little slower (~530); I thought 64-bit wasn't supposed to help unless you were using huge amounts of memory.

This was my first time looking at IL, and I can't actually follow it, but Add, AssertCheckedAdd, and ContractCheckedAdd were all the same, line for line. IfCheckedAdd had a lot more going on in it. Then I tested passing bad conditions, and ContractCheckedAdd wasn't throwing an exception, which surprised me.

Turns out that when I downloaded the code for this article, by default "Perform Runtime Contract Checking" was not checked (I downloaded it a second time to be sure). Once I checked that, the IL changed for ContractCheckedAdd, and it started to take longer. Now my timing looks like this:

Add 413
IfCheckedAdd 410
AssertCheckedAdd 410
ContractCheckedAdd 692

It's still odd that IfCheckedAdd takes less time than Add. Either way, I would say the performance penalty for contracts looks to be worth it: it's only 0.3 seconds over the course of 100 million iterations.

You simply cannot use DateTime to do accurate benchmarking; use Stopwatch, which relies on performance counters.

Also, the first time a method is called it gets JITed, so you should call each method once before timing. One more way to increase testing accuracy might be to force a GC run, wait for it to finish, and then do the benchmarking.
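One common way to quiesce the garbage collector before a timed run (a sketch of the pattern the comment suggests, not code from the article) is:

```csharp
using System;

static class GcQuiesce
{
    // Call this immediately before starting the stopwatch.
    public static void Run()
    {
        GC.Collect();                   // force a full collection now
        GC.WaitForPendingFinalizers();  // let queued finalizers drain
        GC.Collect();                   // reclaim objects the finalizers released
    }
}
```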

And I was wondering, since you use Debug.Assert(), which only executes/compiles if the DEBUG symbol is defined: did you take your measurements with a debug build? Maybe even from a VS-hosted process?

I'm not timing individual runs of the methods, so that level of accuracy isn't required. As long as the tests are all timed the same way, the results are relevant.

Xetrill wrote:

One more way to increase testing accuracy might be to force a GC run

Interesting idea. I'll add a GC.Collect() call before starting each test run and post the results later.

Xetrill wrote:

Did you take your measurements with a debug build?

Yes, so the Debug.Assert() method could be included for comparison. Running the tests built for release causes the Debug.Assert() time to be similar to the Add() time. I'll post the release times later.

Methods such as Debug.Assert() are DEBUG-conditional and will not be called in release mode. Code Contracts make the same conditional calls when running in debug mode to pop assert dialogs and such. In release mode I'm certain this will perform a lot better, as the contracts should expand to very basic checks and far fewer method calls.

I would be curious to see the difference, I'm just a little lazy right now to run it myself but if/when I do I'll let you know.