In the build settings panel of VS2010 Pro, there is a checkbox labelled "Optimize code". Of course I want to check it, but being unusually cautious, I asked my brother about it. He said that it is left unchecked for debugging, and that in C++ it can potentially do things that would break or bug the code, but he doesn't know about C#.

So my question is, can I check this box for my release build without worrying about it breaking my code? Second, if it can break code, when and why? Links to explanations welcome.

6 Answers

You would normally use this option in a release build. It's safe and mainstream to do so. There's no reason to be afraid of releasing code with optimizations enabled. Enabling optimization can interfere with debugging, which is a good reason to disable it for debug builds.

IIRC there are some edge conditions where variable removal can cause floating-point calculations to give different values (due to not forcing the value down from the native register size)
– Marc Gravell♦ Dec 11 '11 at 20:12

@Marc It's common to see differences in floating-point code with optimisers due to, for example, (a+b)+c not being equal to a+(b+c), and other such quirks of FP. Nothing to worry about.
– David Heffernan Dec 11 '11 at 20:14

It's not that simple. Whether or not the jitter will enable the optimizer is determined primarily by whether or not a debugger is attached. Note the Tools + Options, Debugging, General, "Suppress JIT optimization on module load" setting. Unticking it allows debugging optimized code.
– Hans Passant Dec 11 '11 at 20:25

@hans ok, but that's a bit orthogonal to whether or not it is safe to use optimisations.
– David Heffernan Dec 11 '11 at 20:30

It isn't related to evaluation order. The problem is that the x86 FPU has a stack of registers with 80 bits of precision. The optimizer uses the stack to avoid storing back the result of calculations to memory. Much more efficient but the intermediate results don't get truncated back to 64 bits. Thus changing the calculation result. Not an issue for the x64 jitter, it uses the XMM registers instead which are 64 bits. It sounded like a good idea at the time :)
– Hans Passant Dec 11 '11 at 22:50
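
To make the point about (a+b)+c versus a+(b+c) concrete, here is a minimal C# illustration; it only demonstrates that floating-point addition is not associative and does not depend on any optimizer setting:

    using System;

    class FpDemo
    {
        static void Main()
        {
            double a = 1e16, b = -1e16, c = 1.0;
            Console.WriteLine((a + b) + c);   // prints 1: a + b is exactly 0
            Console.WriteLine(a + (b + c));   // prints 0: b + c rounds back to -1e16
        }
    }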

The optimizations shouldn't really break your code. There's a post here by Eric Lippert which explains what happens when you turn that flag on. The performance gain will vary from application to application, so you'll need to test it with your project to see if there are any noticeable differences (in terms of performance).

That code is just broken. It works in debug by chance. In release mode your luck runs out.
– David Heffernan Dec 11 '11 at 20:20

@David Heffernan: Mmmm, I don't see it as being broken. Why do you think so? This is a well-known compiler/CPU reorder/caching problem.
– Tudor Dec 11 '11 at 20:21

@tudor are you suggesting that disabling optimisation guarantees correctness of this code and is an alternative to the appropriate use of volatile?
– David Heffernan Dec 11 '11 at 20:28

The code is broken independent of any compiler flags. Even in debug mode this may cause problems (just as it may be perfectly fine in optimized code). Can enabling optimizations make some bugs more noticeable? Sure, but it won't break valid code.
– Voo Dec 11 '11 at 22:08

I see it as broken precisely because it's a well-known compiler/CPU reorder/caching problem. There is no reason why that code should ever return without the flag being made volatile or a Thread.MemoryBarrier() being inserted. Getting lucky with a debug build means a bug was hidden, not absent.
– Jon Hanna Dec 11 '11 at 23:06
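
The snippet these comments refer to is not shown above, but the pattern they describe is the classic non-volatile stop flag; a hypothetical sketch of that kind of code (names are illustrative, not the original poster's):

    class Worker
    {
        private bool stop;              // not volatile: the fix the commenters point at
                                        // is 'volatile' or a Thread.MemoryBarrier()

        public void Run()
        {
            while (!stop)               // with optimizations on, the JIT may hoist the read
            {                           // of 'stop' into a register, so the loop never
                // do work              // sees the write from the other thread
            }
        }

        public void RequestStop()       // called from another thread
        {
            stop = true;
        }
    }

In a debug build this often appears to work because the read is not hoisted, which is exactly the "works by chance" behaviour described above.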

Could optimisations uncover bugs that were always in your code, but are hidden when they are turned off? Absolutely, it happens quite a bit.

The important thing is to realise that it's a change. Just like you'd test if you'd done a lot of changes, you should test when you turn them on. If the final release will have them turned on, then the final test must have them turned on too.

With optimizations turned on, the compiler produces more compact CIL when translating from C# to CIL.

I observed (and frankly, it's interesting!) that the C# compilers from .NET < 2.0 (1.0 and 1.1) produced CIL WITHOUT optimizations that was as good as what later C# compilers (2.0 and later) produce WITH optimizations.
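
As a rough illustration of the difference (the exact IL varies by compiler version), consider a trivial method; the comments describe what a non-optimized versus optimized build typically emits:

    static int Add(int a, int b)
    {
        // Debug (/optimize-): the compiler typically inserts nop instructions for
        // sequence points and stores the result in a hidden local before returning,
        // so the debugger can step and inspect it.
        // Release (/optimize+): roughly ldarg.0 / ldarg.1 / add / ret, nothing more.
        return a + b;
    }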

Can you give an example of the lower CIL quality? How do you even define "good CIL"?
– CodesInChaos Dec 11 '11 at 20:46

@Wiktor: Judging performance on what the IL looks like is crazy. It's the jitted code that actually gets executed, not IL. Have you considered the possibility that the "bloated" code that you describe, with extra locals etc, might be easier for the jitter to work with and might actually result in native code that performs better?
– LukeH Dec 11 '11 at 21:47

As an example, I have a piece of code from the simulation parts of my master's thesis. With the optimization flag turned on, the code doesn't really break the program, but the pathfinder only performs one run and then loops (the recursive code traps itself in a loop in the pathfinder, which it always breaks out of with the optimization flag turned off).

So yes, it is possible for the optimization flag to make the software behave differently.