I have spent several weeks doing multithreaded coding in C# 4.0. However, one question remains unanswered for me.

I understand that the volatile keyword prevents the compiler from caching variables in registers, thus avoiding inadvertent reads of stale values. Writes are always volatile in .NET, so any documentation stating that it also avoids stale writes is redundant.

I also know that compiler optimization is somewhat "unpredictable". The following code illustrates a stall caused by a compiler optimization (when running the Release build outside of VS):
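(The original code listing was not preserved; based on the description in this question, it presumably resembled the sketch below. The field name `_loop` comes from the question; everything else is a hypothetical reconstruction.)

```csharp
using System;
using System.Threading;

class Program
{
    // Deliberately NOT volatile: in a Release build run outside the
    // debugger, the JIT may hoist the read of _loop into a register,
    // so the do-loop never observes the other thread's write.
    private static bool _loop = true;

    static void Main()
    {
        var worker = new Thread(() =>
        {
            Thread.Sleep(1000);
            _loop = false;          // this write may never be seen below
        });
        worker.Start();

        do
        {
            // Thread.Yield();      // uncommenting this lets the loop exit
        }
        while (_loop);

        Console.WriteLine("Loop exited.");
    }
}
```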

The code behaves as expected. However, if I uncomment the //Thread.Yield(); line, then the loop will exit.

Further, if I put a Sleep statement before the do loop, it will exit. I don't get it.

Naturally, decorating _loop with volatile will also cause the loop to exit (in the pattern shown).

My question is: What are the rules the compiler follows in order to determine when to implicitly perform a volatile read? And why can I still get the loop to exit with what I consider to be odd measures?

For the sleep, likely the sleep happens before _loop is read into the register, giving the other thread time to change _loop. Then again, who knows; attach the debugger after running it (do not start with debugging) and look at the disassembly to be sure.
– harold Dec 7 '11 at 11:23

For the yield (or any other non-inlined call), likely the JIT compiler realized that spilling the variable to the stack (if it was in a caller-save register) makes less sense than just reading it again. Then again who knows, see previous comment.
– harold Dec 7 '11 at 11:26

@harold I agree that putting a sleep before the do loop simply allows the register to update in time. That one is easy.
– IanC Dec 7 '11 at 11:31

If you put a big calculation in that loop it also (usually?) exits.
– harold Dec 7 '11 at 11:50

Could you post the actual disassembly? The MSIL code is fairly useless in this case.
– harold Dec 7 '11 at 12:12

4 Answers

What are the rules the compiler follows in order to determine when to
implicitly perform a volatile read?

First, it is not just the compiler that moves instructions around. The big 3 actors in play that cause instruction reordering are:

Compiler (like C# or VB.NET)

Runtime (like the CLR or Mono)

Hardware (like x86 or ARM)

The rules at the hardware level are a little more cut and dried in that they are usually documented pretty well. But, at the runtime and compiler levels there are memory model specifications that provide constraints on how instructions can get reordered, and it is left up to the implementers to decide how aggressively they want to optimize the code and how closely they want to toe the line with respect to the memory model constraints.

For example, the ECMA specification for the CLI provides fairly weak guarantees, but Microsoft decided to tighten those guarantees in the .NET Framework CLR. Other than a few blog posts, I have not seen much formal documentation on the rules the CLR adheres to. Mono, of course, might use a different set of rules that may or may not bring it closer to the ECMA specification. And of course, there may be some liberty to change the rules in future releases as long as the formal ECMA specification is still honored.

With all of that said I have a few observations:

Compiling with the Release configuration is more likely to cause instruction reordering.

Simpler methods are more likely to have their instructions reordered.

Hoisting a read from inside a loop to outside of the loop is a typical type of reordering optimization.
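To illustrate that last point, a hoisting optimization conceptually transforms the loop like this (illustrative C# only; the JIT actually does this at the machine-code level by caching the field in a register):

```csharp
// What you wrote:
while (_loop)
{
}

// What the optimizer may effectively produce: the field is read once,
// so the loop condition never re-reads memory and never sees the update.
bool cached = _loop;
while (cached)
{
}
```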

And why can I still get the loop to exit with what I consider to be
odd measures?

It is because those "odd measures" are doing one of two things:

generating an implicit memory barrier

circumventing the compiler's or runtime's ability to perform certain optimizations

For example, if the code inside a method gets too complex it may prevent the JIT compiler from performing certain optimizations that reorder instructions. You can think of it as analogous to how overly complex methods do not get inlined.

Also, things like Thread.Yield and Thread.Sleep create implicit memory barriers. I have started a list of such mechanisms here. I bet if you put a Console.WriteLine call in your code it would also cause the loop to exit. I have also seen the "non terminating loop" example behave differently in different versions of the .NET Framework. For example, I bet if you ran that code in 1.0 it would terminate.

This is why using Thread.Sleep to simulate thread interleaving could actually mask a memory barrier problem.
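A sketch of the fix this answer implies, using the hypothetical `_loop` field from the question: an explicit full fence inside the loop forces the field to be re-read from memory on every iteration.

```csharp
do
{
    // Full fence: prevents the JIT from caching _loop in a register,
    // so each iteration re-reads the field from memory.
    Thread.MemoryBarrier();
}
while (_loop);
```

Note this is heavier than necessary; marking `_loop` as volatile achieves the same effect for this pattern.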

Update:

After reading through some of your comments I think you may be confused as to what Thread.MemoryBarrier is actually doing. What it does is create a full-fence barrier. What does that mean exactly? A full-fence barrier is the composition of two half-fences: an acquire-fence and a release-fence. I will define them now.

Acquire fence: A memory barrier in which other reads & writes are not allowed to move before the fence.

Release fence: A memory barrier in which other reads & writes are not allowed to move after the fence.

So when you see a call to Thread.MemoryBarrier it will prevent all reads & writes from being moved either above or below the barrier. It will also emit whatever CPU specific instructions are required.

If you look at the code for Thread.VolatileRead here is what you will see.
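(The link did not survive; the relevant overload in the .NET Framework reference source looks roughly like this.)

```csharp
public static int VolatileRead(ref int address)
{
    int ret = address;
    // Call MemoryBarrier to ensure the proper semantic in a portable way.
    MemoryBarrier();
    return ret;
}
```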

Now you may be wondering why the MemoryBarrier call is after the actual read. Your intuition may tell you that to get a "fresh" read of address you would need the call to MemoryBarrier to occur before that read. But, alas, your intuition is wrong! The specification says a volatile read should produce an acquire-fence barrier. And per the definition I gave you above, that means the call to MemoryBarrier has to be after the read of address to prevent other reads and writes from being moved before it. You see, volatile reads are not strictly about getting a "fresh" read; they are about preventing the movement of instructions. This is incredibly confusing; I know.

@Brian your explanation of the memory fence is spot-on. But I would say that volatile is about getting a "fresh" read because it prevents the use of registers.
– IanC Dec 7 '11 at 16:48

@IanC: Hmm...not exactly. A fresh read would be a typical consequence of volatile, but it is not technically stated in the ECMA specification. Take a look at how Thread.VolatileRead is implemented and consider how things would play out if 2 subsequent calls to it were made with the same address. The first call might not be "fresh", but the second certainly would. The first read could be satisfied from the executing thread's write queue. At least the specification says this is possible.
– Brian Gideon Dec 7 '11 at 20:15

@BrianGideon Then how come the VolatileRead docs state that the method will read an updated value, regardless of what has been cached? "The value is the latest written by any processor in a computer, regardless of the number of processors or the state of processor cache." This implementation doesn't seem to make such guarantees..
– dcastro Feb 9 '14 at 15:10

I understand memory barriers. But even a Console.Beep() will do the trick. Surely that doesn't force a memory barrier?
– IanC Dec 7 '11 at 11:29

That makes sense, however it may not agree with the documentation of MemoryBarrier, which seems to imply it's just an mfence (or similar) and not special to the compiler. I may be reading it wrong though.
– harold Dec 7 '11 at 11:30

Your solution works. I don't know why, though, since MemoryBarrier is designed to prevent reads from happening before writes have happened. There is no before/after code here. One can only deduce that the compiler is seeing the fence and "acting with caution". What I'd really love to know is what the compiler's rules are.
– IanC Dec 7 '11 at 11:34

The minimal solution is to mark _loop as volatile. But that isn't my question.
– IanC Dec 7 '11 at 11:37

It is not only a matter of the compiler; it can also be a matter of the CPU, which performs its own reordering optimizations. Granted, a consumer CPU generally does not have so much liberty, and usually the compiler is the one responsible for the above scenario.

A full fence is probably too heavyweight for a single volatile read.

There seems to be a lot of talk about memory barriers at the hardware level. Memory fences are irrelevant here. It's nice to tell the hardware not to do anything funny, but it wasn't planning to do so in the first place, because you are of course going to run this code on x86 or amd64. You don't need a fence here (and it is very rare that you do, though it can happen). All you need in this case is to reload the value from memory.
The problem here is that the JIT compiler is being funny, not the hardware.

In order to force the JIT to quit joking around, you need something that either (1) just plain happens to trick the JIT compiler into reloading that variable (but that's relying on implementation details) or that (2) generates a memory barrier or read-with-acquire of the kind that the JIT compiler understands (even if no fences end up in the instruction stream).
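A sketch of option (2), against the hypothetical `_loop` field from the question (Volatile.Read was added in .NET 4.5; on .NET 4.0 the equivalent would be marking the field volatile or using Thread.MemoryBarrier):

```csharp
do
{
}
// An acquire-read the JIT compiler understands: it must actually
// re-read _loop each iteration, even though on x86/amd64 no fence
// instruction needs to appear in the generated code.
while (Volatile.Read(ref _loop));
```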

To address your actual question: there are hard rules only for what must happen in case (2); relying on case (1) means relying on implementation details.