After an interesting discussion with Brendan on the topic of deadlocks in threaded and asynchronous event-handling systems (see the comments on this blog post), I have something to ask the developers on OSnews: could you live without blocking API calls? Could you work with APIs where lengthy tasks like writing to a file, sending a signal, or doing network I/O are done in a nonblocking fashion, with callbacks as the only mechanism to return results and notify your software when an operation is done?

"There are actually alternatives to just plain old threaded / async / semi-async (you defined threaded + threshold == semi-async). Non-blocking algorithms exist of a more radical type. I have read that there are people who have successfully implemented system-level stuff using unconventional non-blocking mechanisms, most notably compare-and-swap"

I don't know why you call these unconventional; atomics are pretty common for implementing thread-safe algorithms that don't require OS-level synchronization. Many compilers (e.g. GCC) support them as built-ins.

It's an extremely fine-grained form of serialization (at the system bus level). I use them often in MT code, but they're still expensive, since they cannot run at full CPU speed.

I'm not disagreeing with your post, but I'd like to add that atomic CPU operations are only necessary in MT code; an async or other single-threaded model doesn't need to synchronize, because the possibility of overlap with other threads doesn't exist in the first place.

I would first like to declare that I am speaking out of my area of expertise. It should be obvious that I am out of touch, so if these mechanisms are not unconventional, the error is mine.

However, I find it funny that you say the async model does not need to care. If you have multiple cores, then even if you dedicate one core to the server, in the rare case where two other cores decide to request something at the same time, there will be a need to care, no?

On the other hand, these kinds of atomics, compare-and-swap and LL/SC, are hardware accelerated to be (a) non-blocking, (b) interrupt-enabled, and (c) able to run in one cycle. Why do you claim that they are slower than CPU speed?

Nonetheless, if you combine atomics and MT, I cannot see why a good implementation would not outperform the simple threaded and/or async models as described. It would be capable of doing everything they do, and be well-balanced at the same time.

"I would first like to declare that I am speaking out of my area of expertise."

I don't mind in the least.

"If you have multiple cores, then even if you dedicate one core to the server, in the rare case where two other cores decide to request something at the same time, there will be a need to care, no?"

Consider a process like a web server using async IO mechanisms. From one thread, it waits for connections and IO (with a single blocking call to the OS, like epoll_wait). As requests come in, it dispatches further IO requests on behalf of clients, but since the async thread only schedules IO and doesn't block, it returns immediately to waiting for more IO. From the userspace perspective, no threads or mutexes are needed to implement this model.

One simple way to get more parallelism is to run two or more instances of this async application; there would still be no need for userspace mutexes, since they'd be running in different processes.

"On the other hand, these kinds of atomics, compare-and-swap and LL/SC, are hardware accelerated to be (a) non-blocking, (b) interrupt-enabled, and (c) able to run in one cycle."

Firstly, my assembly knowledge drops off significantly beyond the x86, so I can't generalize this to be true for other architectures.

(a) Non-blocking is true in the threaded sense, but false at the hardware level, since the bus is locked for the duration of the operation.

(b) I don't know what you mean, but atomics don't interfere with interrupts.

(c) Runs in one cycle? Where do you get that from?

"Why do you claim that they are slower than CPU speed?"

The CPU asserts a lock signal which prevents other CPUs from using the bus, like a tiny mutex. Since this takes place at the bus level, it also means atomic operations run at bus speed rather than core speed. And from my hasty measurements, one even takes more than a single bus cycle.

But I won't expect you to take my word for it... here's a microbenchmark from my laptop, compiled with gcc -O2 (gcc optimized the single-add loop away into a mul, which is why I used two adds):

Anyway, using "y += x", the loop ran in 0.1s and the underlying opcode was just "add".

Using the gcc atomic builtin, the compiler emitted "lock add" and the loop executed in 2.8s.

The first can execute in parallel on SMP; the second becomes serialized.

Ideally I'd write a more precise assembly language test, but I think this example is sufficient to demonstrate my claim.

"Nonetheless, if you combine atomics and MT, I cannot see why a good implementation would not outperform the simple threaded and/or async models as described."

Here are a few reasons:

Threads are best suited for longer-running tasks, where MT overhead is small compared to the real processing. For IO, however, operations are frequently followed by more IO operations (read/write loops); very little time is spent in the threads doing real work, and the CPU state context-switching overhead becomes enormous.

The async model can handle the same IO from one thread via a queue, with no context switching at all. Using an AIO interface, it's even possible to batch many requests into a single syscall.

MT is inherently penalized by the need for synchronization overhead; async is not.

Excessive use of MT in certain designs results in unscalable stack utilization (even with enough RAM, there will be CPU cache performance issues).