Everyone says atomic instructions are faster than mutual exclusion locks. Given the problems that come with mutex locks (priority inversion, deadlocks, etc.), atomics seem to be the solution. But there is a higher price to pay in programmer productivity, because rolling your own (correct) code using atomics is a hard job. The major problem, however, is checking whether your program has race conditions or not. I have relied on valgrind over the years for even simple debugging, but valgrind does nothing to help you with atomics[1]. So you, the developer, are truly alone, both in rolling your own code and in debugging other developers' code. A static analysis tool might be the thing that really addresses this issue directly, but until then we have to depend on our brains for debugging. One approach that I think works well is to test what you have written very thoroughly, so that you can conclude that your algorithms do not fail, at least most of the time.

This is new territory for me, since I only started learning about it in depth a few days ago, so my understanding might be off, but my implementation seems to survive the stress test that I threw at it. It is not entirely my code[5] (note that I said it's better to use someone else's work here; you can always roll your own, but the time you will have to put into it is very large), while the stress test and the comparison tests are results of the literature survey I did on the topic[2][3][4].

Let's get on to the core data structure/algorithms here: a ring buffer and the operations on it. There is an excellent explanation of how ring buffers work, and why they work, in [5], so I am not going to reiterate all of that here. But I'd like to explain the things that were hard for me to understand, hoping that others who follow will benefit from it.

Let's have some assumptions in place before we get into the details:

We assume that the code we write will be executed in the same order as we wrote it (which is a big lie for all code, by the way; we'll see how to guarantee this in real code as we go along, but for now stick with this).

We also assume integer reads and stores are atomic, meaning they adhere to the all-or-nothing and consistency principles from the ACID properties.

Only one thread can push things into the queue, and only one thread can pop things out: 1 consumer and 1 producer.

It is easy to see that this will work well if you protect each function with a mutex lock. We also see that the only race condition possible here happens when the producer and the consumer try to modify the same memory, which is an index into the container. Note that only the producer modifies 'front' and only the consumer modifies 'rear', while both read each value. Now, with the assumptions we made above, let's see how we can use two independent threads to produce and consume, and let's benchmark this against the mutex lock version to see what kind of gain we can get. We add independently and remove independently, as long as these two operations see each other's changes consistently, meaning that the code is executed exactly as written and the changes to the integer variables 'rear' and 'front' are atomic (this is not a big deal, since we already assumed both!). So we can say clearly that when we read a value for 'rear' or 'front', all the code above the line that made that change has been executed for sure; otherwise you would not be seeing that change in the first place. Code might run interleaved, but it can never run out of order!
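To make the baseline concrete, here is a minimal sketch of the mutex-protected version. The class and function names (MutexRing, push, pop) are mine, not the original code from [5]; one slot is kept empty to distinguish full from empty.

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>

// Baseline: a ring buffer where every operation takes the same lock.
// 'front' is advanced only by the producer, 'rear' only by the consumer,
// but here the mutex serializes everything anyway.
template <typename T, std::size_t N>
class MutexRing {
    T container[N];
    std::size_t front = 0;  // next slot to write
    std::size_t rear = 0;   // next slot to read
    std::mutex m;
public:
    bool push(const T& v) {
        std::lock_guard<std::mutex> lk(m);
        std::size_t next = (front + 1) % N;
        if (next == rear) return false;   // full (one slot kept empty)
        container[front] = v;
        front = next;
        return true;
    }
    bool pop(T& out) {
        std::lock_guard<std::mutex> lk(m);
        if (rear == front) return false;  // empty
        out = container[rear];
        rear = (rear + 1) % N;
        return true;
    }
};
```

This is the version we will benchmark the atomic implementation against.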

But wait! This is just a mind game we played. How can we write this so that it runs on an actual machine? How can we make the assumptions we made hold on a real computer? Computers are bad at running exactly what we wrote; they do out-of-order execution to speed things up all the time. How can we guarantee that things won't be reordered? Compilers reorder code too! Making things worse, on some architectures even integer reads and writes are not atomic! Enter C++11. And not only C++11: a lot of newer languages (Java/.NET) allow you to control this out-of-order nature to make way for algorithms that exploit it. Modern processors have special instructions for atomic operations, and they do all the needed synchronization under the hood (keeping caches coherent, keeping memory up to date with the caches).

So here is code for a C++11 queue class that does things with atomic instructions. Remember our assumption that things are sequential? The C++11 memory model gives that guarantee for atomic variables by default. It has internal mechanisms to prevent code-reordering optimizations where the atomic guarantees would fail. The guarantee is as follows: any thread reading an atomic variable will see the changes applied to that same variable by another thread beforehand, and operations that depend on it will not be reordered. In other words, you won't see the new value of 'rear' before 'container' has been updated by the last pop() run, because pop() changes the 'rear' value after changing 'container'. C++11 also guarantees that the reads and writes are done as transactions, just like in ACID: it's all or nothing. The third assumption holds because we are going to make it hold!
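The queue described above can be sketched like this, using the default sequentially consistent ordering. The names are mine and this is a sketch of the idea, not the exact code from [5]: each index is written by only one thread, and the slot is written before the index that publishes it.

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

// Lock-free single-producer/single-consumer ring buffer.
// All atomic loads/stores below default to memory_order_seq_cst.
template <typename T, std::size_t N>
class SpscRing {
    T container[N];
    std::atomic<std::size_t> front{0};  // written only by the producer
    std::atomic<std::size_t> rear{0};   // written only by the consumer
public:
    bool push(const T& v) {                       // producer thread only
        std::size_t f = front.load();
        std::size_t next = (f + 1) % N;
        if (next == rear.load()) return false;    // full
        container[f] = v;                         // write the slot first...
        front.store(next);                        // ...then publish it
        return true;
    }
    bool pop(T& out) {                            // consumer thread only
        std::size_t r = rear.load();
        if (r == front.load()) return false;      // empty
        out = container[r];                       // read the slot first...
        rear.store((r + 1) % N);                  // ...then free it
        return true;
    }
};
```

Because seq_cst stores are not reordered with the plain writes before them, the consumer can only observe the new 'front' after the slot it guards has been filled, which is exactly the guarantee the paragraph above relies on.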

As always, C++11 also allows for more optimizations (ah, the things to love about a language!). The sequential assumption above is called 'memory_order_seq_cst' (short for sequentially consistent). There are other orderings, such as 'memory_order_relaxed' (no ordering guarantees at all, just the fact that your variable has atomic reads and writes), 'memory_order_acquire' (this goes with loads: all changes made before the matching store are visible at the load) and 'memory_order_release' (this goes with stores: all changes done up until the store will synchronize with the next acquire load on the same variable). I am not entirely sure that I have grasped the exact ideas behind these new memory orderings, but implementations using them exist, and you can use them with some knowledge of them (and with some more reading on the internet).

On to the tests!

These numbers are the number of pushes and pops done per second on each thread independently. You will sometimes see POPs appearing before any PUSHes; this is because I have not synchronized the console output (I don't even take a lock for the console, but this must be done for anything serious!).

Using atomic instructions with memory_order_relaxed, memory_order_acquire and memory_order_release, and with the alignment of the atomic variables set to 64. Without the 64-byte alignment the results do not always come close to the ones given below; sometimes they do, but most of the time they are a bit worse.
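The alignment trick can be expressed like this. 64 bytes is the typical cache-line size on x86; giving each index its own line avoids false sharing, where the producer's writes to 'front' keep invalidating the cache line holding the consumer's 'rear'. A sketch of the idea (the struct name is mine):

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

// Each index gets its own 64-byte cache line, so the producer thread
// and the consumer thread never ping-pong the same line between cores.
struct RingIndices {
    alignas(64) std::atomic<std::size_t> front{0};
    alignas(64) std::atomic<std::size_t> rear{0};
};

// With both members forced onto separate 64-byte lines,
// the struct occupies at least two full lines.
static_assert(alignof(RingIndices) == 64, "indices are line-aligned");
static_assert(sizeof(RingIndices) >= 128, "indices sit on separate lines");
```

Since C++17 the portable way to get this number is std::hardware_destructive_interference_size, but hard-coding 64 is common practice for x86.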

So, in conclusion, it gives roughly a 5x increase in performance, even with just two threads, which is great news! I always wondered whether using atomic instructions for a 1-consumer, 1-producer queue implementation would bring any performance increase; it seems it does.