However, this means having a different extension method for each delegate type. It’s not too bad if you’re using EventHandler but it’s still not ideal.

C# 6 to the rescue!

The null-conditional operator (?.) in C# 6 isn’t just for properties. It can also be used for method calls. The compiler does the right thing (evaluating the expression only once) so you can do without the extension method entirely:

public void OnFoo()
{
    Foo?.Invoke(this, EventArgs.Empty);
}

Hooray! This will never throw a NullReferenceException, and doesn’t need any extra utility classes.

Admittedly it might be nicer if you could write Foo?(this, EventArgs.Empty) but that would no longer be a ?. operator, so would complicate the language quite a bit, I suspect. The extra slight cruft of Invoke really doesn’t bother me much.
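For anyone curious about what "evaluating the expression only once" means in practice, the compiler's expansion is broadly equivalent to the classic copy-to-local pattern. This is a sketch of equivalent C#, not the literal generated code; Publisher and Foo are illustrative names:

```csharp
using System;

public class Publisher
{
    public event EventHandler Foo;

    public void OnFoo()
    {
        // Roughly what Foo?.Invoke(this, EventArgs.Empty) expands to:
        // the field is read exactly once into a local, so the null test
        // and the invocation can never observe different values.
        EventHandler handler = Foo;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }
}
```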

What is this thing you call thread-safe?

The code we’ve got so far is “thread-safe” in that it doesn’t matter what other threads do – you won’t get a NullReferenceException from the above code. However, if other threads are subscribing to the event or unsubscribing from it, you might not see the most recent changes for the normal reasons of memory models being complicated.

As of C# 4, field-like events are implemented using Interlocked.CompareExchange, so we can use a corresponding Interlocked.CompareExchange call to make sure we read the most recent value. There’s nothing new about being able to do that, admittedly, but it does mean we can just write:

public void OnFoo()
{
    Interlocked.CompareExchange(ref Foo, null, null)?.Invoke(this, EventArgs.Empty);
}

with no other code, to invoke the absolute latest set of event subscribers, without risking a NullReferenceException. Thanks to David Fowler for reminding me about this aspect.

Admittedly the CompareExchange call is ugly. In .NET 4.5 and up, there’s Volatile.Read which may do the trick, but it’s not entirely clear to me (based on the documentation) whether it actually does the right thing. (The summary suggests it’s about preventing the movement of later reads/writes earlier than the given volatile read; we want to prevent earlier writes from being moved later.)

public void OnFoo()
{
    // .NET 4.5+, may or may not be safe...
    Volatile.Read(ref Foo)?.Invoke(this, EventArgs.Empty);
}

… but that makes me nervous in terms of whether I’ve missed something. Expert readers may well be able to advise me on why this is sufficiently foolish that it’s not in the BCL.

An alternative approach

One alternative approach I’ve used in the past is to create a dummy event handler, usually using the one feature that anonymous methods have over lambda expressions – the ability to indicate that you don’t care about the parameters by not even specifying a parameter list:

public event EventHandler Foo = delegate {};

This has all the same memory barrier issues as before, but it does mean you don’t have to worry about the nullity aspect. It looks a little odd and presumably there’s a tiny performance penalty, but it’s a good alternative option to be aware of.

Even in the Interlocked.CompareExchange(ref Foo, null, null)?.Invoke(this, EventArgs.Empty); version, a subscriber can be called even after it has detached from the event (there can be a context switch between the CompareExchange call and the method invocation itself).

Sadly, due to the way events are implemented in .NET, there seems to be no way to have a perfectly thread-safe solution without the subscribers themselves keeping state on whether they can accept events or not.

That’s an inherent race condition, I think – the handler could be unsubscribed while it’s executing, or while an earlier handler is executing. It’s not clear what the desired behaviour really is in that case.

Well, a solution with locks (on the add/remove accessors and the invocations) would solve even that issue, but at the risk of performance penalties, lock convoys and potentially difficult-to-detect deadlocks. I’m not sure what the best option is, but these race conditions have bothered me through the years.

Maybe there is no Interlocked.Read overload for references because the CLR guarantees that reads of a reference are atomic? After all, the existing Interlocked.Read method really exists only to avoid torn reads of a 64-bit long on a 32-bit system.

Of course if that’s the case why use Interlocked.CompareExchange at all if the reference read is atomic? If it’s to avoid problems around reordering of instructions I’d assume it would use Volatile.Read instead. Must be some other reason…

The read being atomic just means it’ll read a value that the variable has had at some point. It doesn’t mean it’ll be the most recent value. It’s important to understand the difference. (This is related to volatile, although I’m loath to talk about volatile too much as it hurts my brain.)

Sorry, my mistake. I did consider your point about the most recent value before my post, but then I got my reads and writes confused… I mistakenly thought that Volatile.Read puts a memory barrier before the read, whereas it actually does it after, so yes, you could get a stale value. Interlocked does a full fence so you’re fine.

Well, actually the docs for Volatile.Read do say “This reference is the latest written by any processor in the computer, regardless of the number of processors or the state of processor cache. ” so maybe that would be fine. Will edit that in.

Good spot – I didn’t see that line down the bottom. I would’ve assumed you’d need a Volatile.Write too for that to be the case, in that a normal write might not immediately be available to another processor. But the doc you quote is pretty clear… It’s far too late on a Friday to be thinking about all this. :)

You can implement an event however you want. It’s just an add/remove pair, like a property is a get/set pair. For field-like events, an event with no subscribers will always have a null reference for the underlying variable though.
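As an illustration of that add/remove pair, here is a sketch of an event declared with explicit accessors over a private backing field. All names are illustrative, and the lock here mirrors the kind of synchronization a hand-rolled implementation might use:

```csharp
using System;

public class Publisher
{
    // Backing field for the explicit event accessors below.
    private EventHandler clicked;
    private readonly object eventLock = new object();

    // An event is just an add/remove pair, much as a property is a get/set pair.
    public event EventHandler Clicked
    {
        add
        {
            lock (eventLock) { clicked += value; }
        }
        remove
        {
            lock (eventLock) { clicked -= value; }
        }
    }

    public void OnClicked()
    {
        // With no subscribers, the backing field is null, just as with
        // a field-like event.
        clicked?.Invoke(this, EventArgs.Empty);
    }
}
```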

I’m not sure whether MemoryBarrier forces a flush (documentation about multithreading in .NET is poor), but if so, this would be nicer code:

Interlocked.MemoryBarrier();
Foo?.Invoke(this, EventArgs.Empty);

Eric Lippert has an interesting argument regarding the issue of ‘“stale” handler(s)’ in an older blog entry (http://blogs.msdn.com/b/ericlippert/archive/2009/04/29/events-and-races.aspx): “The onus for solving the [“stale” handler] problem is laid upon the code being invoked; event handlers are required to be robust in the face of being called even after the event has been unsubscribed.” The code that we are concerned with has already done all that it can, and it’s up to the subscribers to anticipate the potential race condition.

Isn’t the case where you must invoke the most recent version of the delegate exactly the case where signalling is needed?

What if the thread which subscribes to the event subscribes half a millisecond after the event is raised?

Because of context switches, there is no way of guaranteeing this won’t happen (without signalling, of course).
If this is not required, then you can just accept that the call may not be to the most recent delegate reference.

In any case (where you do not use signalling), the subscriber may not assume that after unsubscribing, its handler won’t be called.

I will be interested to see if the ?. operator is one I find readable enough to tempt me away from my usual extension-method approach. At first glance, I am not convinced, especially since — even if there’s the slight possible overhead if the extension method winds up not inlined — the commonly-used “Raise” for the extension method name adds expressiveness, while “?.” seems to remove expressiveness. But never say never! :)

Speaking of extension methods, in my own code I always implement events using EventHandler (or EventHandler&lt;TEventArgs&gt;, where appropriate of course). This means that I only need two extension methods: one for the generic type, and one for the base EventHandler type. There’s no concern of proliferation here, so extension methods really are a nice solution IMHO.
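For readers who haven’t seen the extension-method approach being discussed, here is a sketch of the two methods the commenter describes. The name Raise is a common convention, not a BCL member:

```csharp
using System;

public static class EventExtensions
{
    // Raise for the non-generic EventHandler type.
    public static void Raise(this EventHandler handler, object sender, EventArgs e)
    {
        // 'handler' is a copy of the caller's field, so it can't become
        // null between the check and the invocation.
        if (handler != null)
        {
            handler(sender, e);
        }
    }

    // Raise for the generic EventHandler<TEventArgs> type.
    public static void Raise<TEventArgs>(this EventHandler<TEventArgs> handler,
        object sender, TEventArgs e)
    {
        if (handler != null)
        {
            handler(sender, e);
        }
    }
}
```

Because extension methods can be invoked on a null reference, Foo.Raise(this, EventArgs.Empty) is safe even when the event has no subscribers.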

(When I’m feeling particularly ornery, I even forego the use of EventHandler&lt;TEventArgs&gt;, in favor of plain EventHandler… then I need only one extension method for that project :) ).

And yes, you’re correct…there’s nothing to be done about race conditions, other than to ensure that the code is flexible enough to accommodate an event handler being invoked even after some other code thought it had unsubscribed. The vast majority of the time, this is a complete non-issue, since the event handler remains subscribed for the lifetime of the object implementing the event. But in the relatively infrequent scenario where that’s not the case, the handler needs to be written such that it’s either okay with doing its thing even after it was unsubscribed, or the code needs a synchronized flag that the handler can check.
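Here is a sketch of that "synchronized flag" idea – a hypothetical subscriber that tolerates being invoked after unsubscribing. The type and member names are invented for the example:

```csharp
using System;

public class Publisher
{
    public event EventHandler Foo;

    public void OnFoo() => Foo?.Invoke(this, EventArgs.Empty);
}

public class Subscriber
{
    // volatile so that a racing invocation sees the updated flag promptly.
    private volatile bool active = true;

    public int HandledCount { get; private set; }

    public void Handler(object sender, EventArgs e)
    {
        // The handler may still be invoked after Unsubscribe has run,
        // so it checks the flag instead of assuming unsubscription
        // has already taken effect.
        if (!active)
        {
            return;
        }
        HandledCount++;
    }

    public void Unsubscribe(Publisher publisher)
    {
        active = false;
        publisher.Foo -= Handler;
    }
}
```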

It’s certainly not the event implementer’s job to worry about the race. I wouldn’t waste any time at all trying to implement an event in a way to accommodate race-sensitive handlers.

Though currently the CLR doesn’t introduce memory reads of reference fields (and may never do so),
technically it is not quite NRE-safe in the face of multithreaded subscribe/invoke.
In the future the JIT may decide to read directly from Foo twice when you write Foo?.Invoke(this, EventArgs.Empty);

I’m not at all convinced by that “read introduction” example where obj.ToString() might throw. For the JIT to do that would make all kinds of expectations fundamentally broken. If the JIT ever starts doing that in places where it matters, that feels like an invalid optimization. The claim that it happens “under special circumstances” without any further explanation doesn’t help to convince me. Why would they “optimize” in a way which so completely defeats expectations? Reordering I can absolutely see, but I really hope the article is incorrect about read introduction.

The way C++11, Java and .NET are defined, you only consider single-threaded behavior by default unless you use one of their concurrency primitives (locks, volatile for C#/Java, or std::atomic).

Clearly we all agree that for a single thread there is no difference in the behavior, so the question is just: Do you introduce anywhere here a happens-before relationship between the two reads and a possible write by another thread? In the given code? No. You would if obj was volatile or you had a lock or a memory fence between the two reads, but we don’t have any such thing.

It’s not a useful optimization in many situations, but I wouldn’t be surprised if you could create a situation where it’d happen – if not now, then in a more optimizing JIT down the road. Great, now I’ll spend the rest of the morning playing with g++ and clang to show it at least there…

My previous post may seem a little confusing, as the comment system swallowed my angle brackets. Wherever I mentioned the generic EventHandler (which I’d normally write with a T in angle brackets after the type name), it wound up looking exactly like the non-generic EventHandler type.

Volatile.Read() doesn’t prevent the handler list from being stale at the time of invoking the subscribers. It only decreases the probability, by reducing the inconsistency window. You can still have a stale subscriber list if someone subscribes right after you read the handler.
There’s no way to avoid that unless you use one of the locking techniques (e.g. wrap the handler call in a lock(…) { … } statement), but that is something I strongly discourage everyone from doing.
Actually, you shouldn’t worry about the handler being stale. There’s nothing wrong with a subscriber getting one last call right after it unsubscribes.

Without Volatile.Read or similar you have no guarantee to see the update until some kind of synchronization between the two threads happens, which in some situations could be a very long time off and not just until the next invocation of the handler.

However, this means having a different extension method for each delegate type

Do you mean for events that don’t take (object, EventArgs) as parameters? If so, I’d argue they’re non-standard and don’t need to be supported by such an extension method. That has the added benefit of pushing people towards the idiomatic implementation.

It’s entirely conventional for an event delegate type not to take EventArgs as a parameter, but a subclass of EventArgs – take MouseEventHandler for example.
That’s what EventHandler<TEventArgs> is for, of course – but there are a lot of pre-.NET 2 events with their own delegate type declarations.
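To illustrate the two styles side by side: MyMouseEventArgs and the surrounding types below are invented for the example (MouseEventHandler itself lives in System.Windows.Forms):

```csharp
using System;

public class MyMouseEventArgs : EventArgs
{
    public MyMouseEventArgs(int x, int y) { X = x; Y = y; }
    public int X { get; }
    public int Y { get; }
}

// The pre-.NET 2 style: a dedicated delegate type per event signature,
// in the same spirit as MouseEventHandler.
public delegate void MyMouseEventHandler(object sender, MyMouseEventArgs e);

public class OldStyleControl
{
    public event MyMouseEventHandler MouseMoved;

    public void OnMouseMoved(int x, int y) =>
        MouseMoved?.Invoke(this, new MyMouseEventArgs(x, y));
}

public class NewStyleControl
{
    // The .NET 2+ style: reuse the generic delegate type instead.
    public event EventHandler<MyMouseEventArgs> MouseMoved;

    public void OnMouseMoved(int x, int y) =>
        MouseMoved?.Invoke(this, new MyMouseEventArgs(x, y));
}
```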

.NET sadly is rather horrible about documenting visibility guarantees for methods such as CompareExchange (one situation where Java is definitely better).

But purely from a rational point of view a general CompareExchange has to provide both acquire and release semantics (you can sometimes remove one barrier in performance critical code if you’re careful, but not always). This also correlates nicely with Java’s standard CAS as well as Win32’s InterlockedCompareExchange.

This consequently means that yes a Volatile.Read() (as well as Thread.VolatileRead for .NET before 4.5) will be enough to establish a happens-before relationship between the write and read (actually we’d only need Release semantics for the CAS).

If you’re a bit queasy about it, it’s basically nothing other than the old:

volatile bool isDone;

// T1:
while (!isDone) { /* doWork */ }

// T2:
isDone = true;

which as we all know is guaranteed to work as long as you don’t forget the volatile (and volatile just means release semantics for writes and acquire for reads).

PS: Personally I still prefer the noop event handler trick – you can see on one glance if your code is correct or not (otherwise you have to look for every possible invocation of your event handler) and the performance hit is negligible.

Personally, I’ve found that I only fire/handle events on UI threads by UI controls or ViewModel objects, which are single-threaded by their nature. I’m happy with “public event EventHandler Foo = delegate {}” approach there.
In most other cases where I deal with concurrency/parallelism, I usually find a suitable Task-based API to consume instead of APIs involving events. Same if I create a new API.

No. The purpose of the local variable is to ensure that the field is only read once – that’s already the case when you call Foo.Raise – effectively, the local variable has become the parameter in the extension method.

I just have a question about a sentence that puzzles me. You wrote “As of C# 4, field-like events are implemented using Interlocked.CompareExchange, so we can just use a corresponding Interlocked.CompareExchange call to make sure we get the most recent value”.

This gives me the feeling that 2 CompareExchange could be “linked” in some way (i.e. the one in the C# implementation of field-like event and your one) and together act as a magical way to “make sure we get the most recent value”.

Is this what you meant? Is it how CompareExchange really works?

I did not understand this from reading the doc…

I mean: since CompareExchange(ref x, null, null) never changes the value of x, the returned value will always be the value of x, even if 1000 threads execute the same line at the very same time.

The fact that another thread may change x somewhere else (say in a x -= foo somewhere) and that this is implemented using another CompareExchange doesn’t seem relevant to me here. This cannot turn “getting the value (via CompareExchange) + testing it for != null + invoking the handler” into an atomic operation.

Therefore I don’t see the point of using CompareExchange at all here.

I still wonder why raising an event wasn’t designed from the start not to throw when the list of subscribed delegates is null – then we’d avoid all this and could just write FooEvent(this, EventArgs.Empty) without having to check for null. After all, unsubscribing a non-existent delegate doesn’t throw at all in the default implementation of the “remove” accessor of an event. I’m sure there were valid reasons behind it, but I fail to see them.

I think that should be the original logic of events: when you raise an event, you call the list of subscribed delegates… if there are none, then you don’t call anything, but that should not be an exception at all in my head.

It’d probably be a breaking change to do it now, but it just shouldn’t have been there from the start.

I’d still propose it as a feature for coming versions, and would survey whether ANYONE actually skips the null check and relies on catching the NullReferenceException for any purpose.

I am having trouble understanding how the first example with the local variable solves anything. I understand you to be saying that the event handler might disappear at any point, so I cannot see a thread-safe way to call it without either guaranteeing that it cannot go missing (what Javier Campos said above) or asserting some kind of system-wide lock that means no other process or thread can touch it while you try to use it. It has been a while since I wrote multi-threaded code (in C) and so maybe I am just not up with the terminology or scenarios that OO or GUIs cause (“boxing”?). I wrote a server where threads did not call each other, they just had to stay within a counted limit.
I teach computer programming, and am trying to evaluate whether C# is a language that I can teach to people in one year who know nothing about programming so that they can get entry-level programming jobs. (We are teaching VB.Net now.) I think that C with Windows was difficult enough, but with OO, it is essentially impossible to become an entry-level programmer without years of study. Please let me know if I am wrong.

When an event handler unsubscribes, the value of the field storing the handlers changes. That can happen between checking and using the field – but if you’ve copied the value of the field to a local variable, the value of that local variable won’t change, even if the field then changes to have a value of null.

As for the question about teaching – I wouldn’t like to say. I would expect bright, motivated students with a good teacher to be able to pick up enough in a year – but that really depends on how much time they’re putting into it, etc. I would suggest you learn the language first, then you’ll be in a decent position to judge. (It’s slightly worrying if you’re already teaching VB.NET, but aren’t confident about what boxing is – that’s as relevant to VB.NET as it is to C#.)

The students are in the classroom full-time. They are very motivated (trying to get jobs) and intelligent. I just wonder if the “entry-level” bar has moved out of reach. I was self-taught using an Apple ][ and a PDP-11 for years before I went to school. I easily kept up with ideas involving language design and operating system functionality in college. When I was in college I was having a programming problem and I said to the TA, “the compiler has a bug.” She replied, “Compilers are too simple, they don’t have bugs.” Is that true any longer? I wrote a compiler in college as part of the curriculum, and created small languages (often using lex and yacc) for particular purposes many times at work. Is that true any longer? I created a set of DOS TSRs in C to share modems over a Novell network in ’92, and a multithreaded, multi-process database transaction server (like a web server) using C in 2000. Can one person still do that? (I have mastered the C comma operator – in response to your challenge to master something. It allowed me to inline a small commonly called function for speed.)
Is it easier or harder to do such things now with OO? Even if it is easier, can someone learn to do it in a year regardless? Can they prove a level of knowledge of C# (or VB.Net) to a hiring company? What do people expect these days?
Maybe I am a curmudgeon, but I want my students to have a realistic chance at becoming employed after they spend so much effort.

I really don’t want to get into too much of a discussion here, as it’s way off the topic of the post. You definitely shouldn’t be expecting your students to do the same things that you did when you were younger, not because they’re impossible but simply because they’re not what modern programming involves to the same extent. Yes, compilers have bugs – and always have had. And yes, a smart and motivated kid could certainly learn enough in a year to make it into an entry-level position.

@nocomprende: It’s not that “the event handler might disappear at any point”. In C#, objects won’t “disappear” as long as any variable holds a reference to them. That’s how the garbage-collection-based system works.

But the value of the variable itself might change between the time one compares it to null, and the time one accesses it again to invoke the delegate instance. The variable in question here is the field named “Foo” (to code outside the declaring class, this member is an “event”, but inside the class it’s essentially just a field). By copying the field’s value to a local variable, now there is a variable that no other code can touch, and so it is guaranteed that it won’t change between the time it’s compared to null and the time it’s accessed for the purpose of invoking the delegate.

If it’s non-null in the if statement, then it will also be guaranteed to be non-null when the code attempts to invoke the delegate.

Note that there is still a race condition: a subscriber could in fact remove itself from the event, but still find itself invoked after having done so. Event subscribers in a multi-threaded scenario should never assume that they won’t be invoked even after unsubscribing. But that’s “a whole ‘nother ball o’ wax”, unrelated to the potential NullReferenceException that could occur if not for the patterns being described here.

If the event handler is still “there”, then it would be better to have it simply return a result saying “I am not accepting requests anymore” than to change a variable that someone could access later. I guess I was assuming that the Null value meant that the event handler was no longer callable by any means. To me, thread safety is about doing the simplest thing that won’t pull the rug out from under another thread. Ultimately, only the event handler “knows” if it is still operable or not, so it is the final authority, and no one else should bother keeping up with that. Just call it and take what you get.
How often does it happen that something you expect to be there is not? Haven’t we already wasted more effort than is needed debating a very unlikely scenario? If you are saying that objects as a whole might come and go, fine, but have the object overall say “I am no longer operable”, rather than doing anything that requires futile advance checks for. I thought that was the idea behind immutable variables? You can count on them. Behavior might change, but the thing is there. Which approach is simpler? If I go to an old friend’s house and he has moved, I ring the doorbell and somebody answers, I don’t find a crater. This is more like how objects in the real world behave, which was the argument for OO in the first place.

“Haven’t we already wasted more effort than is needed debating a very unlikely scenario?”

Well if you only want your code to work most of the time, yes. Given this thread and your other one about precedence, it’s not clear to me that you’re going to enjoy your time in C#.

(The event handler variable isn’t immutable here, by the way. That’s the point. The value of the variable is changing. The object that the variable referred to does not – delegate instances are immutable.)

With respect, it seems to me that you think the language itself is a given. I am saying, why wasn’t the language itself designed to make things easier and make bugs less likely? The only way I know to do that is to make things simpler and cut off a lot of unlikely and rarely useful possibilities. If the language is too complex, I am fond of building a simpler language and using that. Does OO make that approach easier than say, C? C is great, so why were not later languages even easier to use? The progression of machine code to assembler to 3GL seems to have reversed. I thought that 4GLs and so on would have become so simple that more people could code. That is not the perception that I have. C++ raised a lot of thorny issues, and I don’t see that Java and C# have solved them. Have they? Could I have gotten a job with less schooling today than in the past?

Yes, I do think that the language itself is a given. This blog post is talking about how to use the language, not how to redesign it. If you want to talk about how you would redesign C#, I suggest you write a blog post about that yourself – but it does sound to me like you’re saying that just because C# isn’t perfect (and yes, this area is an annoying one) that it clearly isn’t as good as C or C++.

As a long time assembler + C (then C++ and other higher level languages like Delphi, and finally settled on C#) coder, I can definitely say that C# makes things way easier than C or assembler, at least for anything not low-level. One of the reasons is that there is little-to-none (at least when not going unsafe or unmanaged) “undefined behaviour”. There are always things to enhance (and I’d say this is one of them). That said, I wouldn’t change the language to “fix” (in case it needs to be “fixed”) this one, but the runtime.

The main problem here is that invoking an event throws when there are no delegates subscribed (the event equals null in this case), so you have to check it for null unless you want to catch the exception (which is not advised for performance reasons): between the check for null and the invocation, the variable could have been changed from another thread (the last subscribed delegate could have been unsubscribed, so invoking would throw a NullReferenceException even though you had checked). This is not a “bug” or “bug-introducing feature”; it’s well-defined behaviour and, as in every programming language, you have to adapt to its behaviours. It’s a small nuance, and that is why this blog post exists, and what it is about: ways to handle this (again, well-defined) behaviour.

The reasoning in my comment was that I don’t think (though I might be wrong) there’s anyone relying on the NullReferenceException to do anything; basically everyone using events is just checking for null and invoking if not null, doing nothing otherwise (or having all events initialized with a subscribed “do-nothing” delegate, or using one of the methods described in this post)… so all this “if-null-do-nothing-else-invoke” could be done by the .NET runtime in a thread-safe way, avoiding the need for the user to check (or use any other alternative, like the ones Jon is posting here) every time he wants to invoke an event.

The problem is that, if any C# programmer is actually relying on the NullReferenceException to do something other than “do nothing”, then this would be a breaking change (and breaking changes must be very carefully introduced in a programming language). I see no reason for anyone to do that, but the fact that I don’t see it doesn’t mean nobody else does… there are too many programmers out there, each with their own idea of how to do things.

This isn’t about the C# language… the language is fine, and allows this kind of flexibility to do things as you please.

I think I have overstayed my welcome and asked my question in the wrong place, so I apologize.

I tend to have expectations that things “improve” over time in ways that matter to me. For programming, I focus on simplicity for users (in this case, programmers are language users), almost to the exclusion of all else. Apparently this event-handler issue, and boxing, are wrinkles in the rug that are hard to iron out. Perhaps such things are inevitable.

“I think I have overstayed my welcome” — I can’t speak for Jon, but I doubt you have. Don’t confuse whether you are welcome with whether certain types of discussions are on-topic and/or relevant. I believe that you the person are very much welcome.

“I tend to have expectations that things “improve” over time” — they do and they have, without a doubt. C++ was a huge advance over C, but it also retained a lot of the same kinds of pitfalls C had, as well as added some new ones. Java and C# (the latter having initially been inspired by the former but IMHO the roles have been reversed for at least a decade now, with C# in the lead) are both significant improvements over C++, and code written in C# is absolutely without question in my mind going to be on average less buggy/more correct. But no language is all things to everyone.

“I guess I was assuming that the Null value meant that the event handler was no longer callable by any means” — it seems to me that the main issue here is that you don’t yet know enough about the language to have enough context to correctly assess the situation. I mean no offense, but at this point you don’t really understand exactly what’s even happening, so how can you legitimately critique what’s happening?

The issue that you believe should be addressed, is not really that easy to address. To do so would require additional synchronization between threads which is a) costly, and b) potentially dangerous. To ensure the delegate reference can’t be changed between the null comparison and invocation would require holding a lock during invocation, which would just invite a host of other much, much worse types of bugs.

I encourage you to start with looking at what the declaration of an event in C# really means (i.e. what does the compiler generate when you do that), at how delegates in C# work (in particular, what happens when you “add” or “subtract” one delegate from another), and then to consider carefully how you would implement a system that has the properties you seem to think C# should have. Done honestly and thoroughly, you will find it’s not nearly as simple or safe to do as your posts seem to suggest.

(And no, Jon is right…putting a full-blown discussion of the question here in the comments would be inappropriate. It’s the kind of thing that is deserving of an entire blog article, and by all means please write your own article if and when you do the exercise).

I too love this a little cleanup with C#6. One thing I’ve been wondering though is, now that we can invoke event handlers in one statement, is it still necessary to wrap them in a method? For example, if I have a Form with a public “SomethingHappened” event, should I still have a private “OnSomethingHappened” method, or can I just use the Invoke statement wherever I need it in my code. What’s the best practice now?
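For context, the conventional pattern the question refers to looks something like this (a sketch; SomethingHappened is the questioner's hypothetical event, and the usual argument for keeping the protected virtual OnXyz method is that derived classes can then intercept or extend the raising logic):

```csharp
using System;

public class MyForm
{
    public event EventHandler SomethingHappened;

    // Keeping a protected virtual OnXyz method lets derived classes
    // override the raising of the event; with C# 6 its body shrinks
    // to a single statement.
    protected virtual void OnSomethingHappened()
    {
        SomethingHappened?.Invoke(this, EventArgs.Empty);
    }

    public void DoWork()
    {
        // Raise via the method rather than inlining the Invoke everywhere,
        // so the raising logic lives in one place.
        OnSomethingHappened();
    }
}
```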

I have been coding in C# for very little time and I am thinking about events. I would like to know what you consider best practice when working with events. I’m working on a project and I found myself writing a lot of delegates for the events and a lot of event args classes for them, and I thought there might be a better way of managing events – perhaps still defining the event args classes but using the generic EventHandler delegate – and I would appreciate your opinion.
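For what it's worth, the approach the commenter is circling around – custom EventArgs classes combined with the generic EventHandler&lt;TEventArgs&gt; delegate, avoiding per-event delegate declarations entirely – looks like this (MessageReceivedEventArgs and Receiver are invented names for the example):

```csharp
using System;

// One small EventArgs subclass per payload; no custom delegate needed.
public class MessageReceivedEventArgs : EventArgs
{
    public MessageReceivedEventArgs(string message)
    {
        Message = message;
    }

    public string Message { get; }
}

public class Receiver
{
    // The generic delegate type means no per-event delegate declarations.
    public event EventHandler<MessageReceivedEventArgs> MessageReceived;

    public void Receive(string message)
    {
        MessageReceived?.Invoke(this, new MessageReceivedEventArgs(message));
    }
}
```

Subscription then reads naturally: receiver.MessageReceived += (s, e) => Console.WriteLine(e.Message);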

I tried to write a program to demonstrate the need to do a volatile read to ensure getting the latest set of event subscribers, but found that each “read” was always getting an up-to-date list. Am testing with Visual Studio 2015, C# compiler version 1.2.0.60317 on Windows 8 with x64 platform. The following (complete) program demonstrates that delegate and event fields have “volatile” memory model semantics by default on this platform. It does this by showing that a tight loop that reads such a field will terminate when the field is changed by another thread adding a handler, but that a loop which is identical except using an “object” type field instead of “delegate” (or “event”) will not terminate unless the field is marked “volatile”. The program needs to be compiled with “/optimize” for the non-terminating behaviour to be exhibited.

// This program needs to be compiled with "/optimize" to exhibit
// the non-terminating loop behaviour described in comments below.
// It demonstrates that delegate and event fields have "volatile"
// memory model semantics, i.e. if one thread modifies them ("write")
// then another thread will immediately see the most recent value ("read").
using Console = System.Console;
using Thread = System.Threading.Thread;

class Event_test
{
    static void Main()
    {
        Thing t = new Thing();
        // Add handlers on another thread at 200ms intervals.
        new Thread(() =>
        {
            Thread.Sleep(200);
            t.Clicked += ThingWasClicked1;
            t.OClicked = new object();
            Thread.Sleep(200);
            t.Clicked += ThingWasClicked2;
            t.OClicked = new object();
            Thread.Sleep(200);
            t.Clicked += ThingWasClicked1;
            t.OClicked = new object();
        }).Start();
        // Invoke handlers on main thread.
        t.DoSomething("should show one handler");
        t.DoSomething("should show two handlers");
        t.DoSomething("should show three handlers");
    }

    static void ThingWasClicked1(object sender, string myValue)
    {
        Console.WriteLine("1> thing was clicked with: " + myValue);
    }

    static void ThingWasClicked2(object sender, string myValue)
    {
        Console.WriteLine("2> thing was clicked with: " + myValue);
    }
}

class Thing
{
    public delegate void MyClickHandler(object sender, string myValue);

    // 'Clicked' is a delegate, not an event.
    // However the behaviour of this program is the same if it is changed
    // to an event (by inserting "event" after "public").
    public MyClickHandler Clicked = delegate {};

    // 'OClicked' is of type object and is modified when 'Clicked' is modified.
    // If this is made volatile (by inserting "volatile" after "public")
    // then the second tight loop below will terminate.
    public object OClicked = null;

    public void DoSomething(string message)
    {
        Console.WriteLine("DoSomething: " + message);
        // '?LastHandler' is a reference to the last delegate (handler list).
        // 'DLastHandler' has a delegate type; 'OLastHandler' has type object.
        MyClickHandler DLastHandler = Clicked;
        object OLastHandler = OClicked;
        // Poll in a tight loop until we have a new handler list.
        // Exactly one of the two loop statements below should be uncommented.
        // The first one reads the delegate field; the second reads the object field.
        // The first one terminates, i.e. it will detect the other thread
        // changing 'Clicked' when it adds another handler.
        // The second one does not terminate, i.e. it will *not* detect the
        // other thread changing 'OClicked' when it adds another handler
        // (unless 'OClicked' is made volatile).
        while (Clicked == DLastHandler) DoNothingMuch(); // terminates ok
        // while (OClicked == OLastHandler) DoNothingMuch(); // does not terminate
        // Invoke the handlers.
        Clicked(this, "foo");
        Console.WriteLine("");
    }

    private void DoNothingMuch() { bool toggle = true; toggle = !toggle; }
}

@colinchambers: “The following (complete) program demonstrates that delegate and event fields have “volatile” memory model semantics by default”

Volatile semantics isn’t the question. Yes, x86 architectures have volatile semantics built in, and yes, the code generated for delegate combine operations is different enough that the compiler doesn’t optimize out fetches of the delegate variable in that example, but does for the object variable.

But that doesn’t fix the race condition of reading a memory location to check it for null, and then reading it again to invoke the delegate reference stored there. Only thread synchronization of some sort can do that.
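To illustrate the race being described here, a minimal sketch contrasting the double-read with the local-copy idiom (type and member names are illustrative):

```csharp
using System;

public class Publisher
{
    public event EventHandler Foo;

    // Racy: 'Foo' is read twice. Another thread may unsubscribe the last
    // handler between the null check and the invocation, so the second
    // read can yield null and throw NullReferenceException.
    public void OnFooRacy()
    {
        if (Foo != null)
        {
            Foo(this, EventArgs.Empty);
        }
    }

    // Safe against that NRE: copy the reference once. Delegates are
    // immutable, so the local copy cannot become null after the check.
    public void OnFooSafe()
    {
        EventHandler handler = Foo;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }
}
```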

Memory model semantics is the question I am interested in here. I think it is important to know the memory model semantics of event variables in C#. By “memory model semantics in C#” I mean an abstraction specific to C#, implemented by the compiler and runtime, that is distinct from the memory model semantics of the underlying hardware platform (x86, x64, ARM, …).

As a C# programmer I want to know these memory model semantics so I can know what I have to do to ensure (a) a memory model “release” after a client (on one thread) modifies the handler list for an event (via a ‘+=’ or ‘-=’ operation that does a “write” to the event variable), and (b) a memory model “acquire” prior to invoking the handlers (on a different thread) via an explicit or implicit “invoke” that does a “read” of the event variable. Ensuring (a) + (b) is necessary to ensure (assuming no intermediate “write”) that the invoke will “see” the handler list as modified by the client.

Jon says that field-like events are implemented using Interlocked.CompareExchange, by which I think he means that the accessors (the ‘+=’ and ‘-=’ operators) are implemented with Interlocked.CompareExchange. An Interlocked.CompareExchange is a full memory barrier, effectively a combination of “acquire” and “release”. This means that the C# implementation automatically deals with (a) above and I do not have to do anything else.

This is not entirely satisfying in that it depends on inside knowledge of the current compiler implementation; the published C# documentation does not specify this behaviour. The “C# Language Specification Version 5.0” (§10.8.1, Field-like events) just says: “The addition and removal operations are thread safe, and may (but are not required to) be done while holding the lock (§8.12) on the containing object for an instance event, or the type object (§7.6.10.6) for a static event.” It does not further elaborate on what is meant by “thread safe” and in particular says nothing about memory model semantics (i.e. about the visibility of writes in one thread to reads in another). ECMA-334 (§17.7.1, Field-like events) requires that locks are used, which was the case for earlier compiler implementations but is not the case for the current compiler.

Jon suggests using either an explicit Interlocked.CompareExchange or a Volatile.Read for (b) above. An Interlocked.CompareExchange is effectively a combination of “acquire” and “release”, which is indeed sufficient. A Volatile.Read is an “acquire”, which is exactly what is needed.
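A minimal sketch of that Volatile.Read variant (assuming .NET 4.5+; the type and member names are illustrative — inside the declaring class, a field-like event can be passed by ref like any field):

```csharp
using System;
using System.Threading;

public class Publisher
{
    public event EventHandler Foo;

    // An acquire-read of the backing field before invoking, intended so
    // the invoking thread sees the most recent subscriber list that the
    // += / -= accessors published on another thread.
    public void OnFoo()
    {
        Volatile.Read(ref Foo)?.Invoke(this, EventArgs.Empty);
    }
}
```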

What I originally wanted to do was write a program that demonstrated the need for explicitly using a Volatile.Read (or Interlocked.CompareExchange) for (b), by showing an example of leaving it out and finding that the “invoke” did not see a handler added earlier by a client on a different thread. I was unable to produce such a program; the “invoke” always saw the added handler. The program included in my comment above is an example of this. It shows that when a read of the event variable occurs in a tight loop, the C# implementation behaves as if it is performing an “acquire”. An “ordinary” (non-event) variable in an otherwise identical tight loop requires an explicit “volatile” qualifier to produce the equivalent behaviour. (“volatile” semantics are “release” on write, “acquire” on read.)

The general difficulty of finding an example that demonstrates a need for an explicit “acquire” makes me think it is possible that the C# implementation always inserts an “acquire” into a read of the event variable, i.e. that the event variable has “volatile” semantics (“release” on write, “acquire” on read). So, I’m wondering if anyone can definitively confirm or deny this, i.e. can anyone definitively say either “Yes, C# guarantees to perform an acquire before a read of an event variable” or “No, it is not the case that C# always performs an acquire before a read of an event variable”?

I realise it is extremely unlikely that anyone will make such a pronouncement definitively. In the absence of one, I think the most reasonable course is to do nothing for (a) and to use Volatile.Read or equivalent for (b), i.e. exactly as Jon advises.

@pete.d: “But that doesn’t fix the race condition…”

The example program I posted uses the “alternative approach” documented by Jon, whereby the delegate variable is initialized with an “empty” delegate. If it is an “event” variable, as would normally be the case in production code, it will never contain null: client code can only use the ‘+=’ and ‘-=’ operators to add and remove handlers, and will never remove the “empty” handler with which the variable was initialized. There is therefore no need to check for null before invoking the delegate.
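A minimal sketch of that empty-delegate pattern (names illustrative):

```csharp
using System;

public class Publisher
{
    // Initialized with a no-op handler. Clients can only use += / -=,
    // and -= on a delegate it never added is a no-op, so the field can
    // never revert to null and no null check is needed when raising.
    public event EventHandler Foo = delegate { };

    public void OnFoo() => Foo(this, EventArgs.Empty);
}
```

The trade-off is one extra (empty) delegate invocation per raise, which is rarely significant.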


That makes it thread-safe in terms of not throwing a NullReferenceException. It doesn’t mean that it’s guaranteed to “see” the most recently written value in any thread, which is the point of using Interlocked.CompareExchange here.
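For reference, a minimal sketch of the Interlocked.CompareExchange read described here (names illustrative): exchanging null for null never changes the field, but the call's full-fence semantics mean the value returned is the most recently written subscriber list.

```csharp
using System;
using System.Threading;

public class Publisher
{
    public event EventHandler Foo;

    // CompareExchange(ref Foo, null, null) replaces Foo with null only if
    // it is already null (a no-op), and returns the current value with
    // full memory barrier semantics.
    public void OnFoo()
    {
        Interlocked.CompareExchange(ref Foo, null, null)
            ?.Invoke(this, EventArgs.Empty);
    }
}
```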

So, it’s just that it’s assigned to a compiler-generated temporary variable (as in your first example) that makes it thread-safe. I see the distinction. Thanks. It’d be nice if the generated code used Interlocked, since that seems safer and more orthogonal, and since that’s what the add/remove accessors use (if I read your comments/post correctly).

Or you could use an “Active Event”, which doesn’t care whether it has zero or a gazillion handlers, and will simply do nothing if no handlers exist. I am tempted to post the link to my MSDN article yet again for this comment, but will avoid doing so, since it would probably feel as if I were spamming you if I did … :)