After ten weeks of development, Linus Torvalds has announced the release of Linux kernel 3.9. The latest version of the kernel has a new device mapper target that allows a user to set up an SSD as a cache for hard disks to boost disk performance under load. There is also kernel support for multiple processes waiting for requests on the same port, a feature that allows server work to be distributed better across multiple CPU cores. KVM virtualisation is now available on ARM processors, and RAID 5 and 6 support has been added to Btrfs's existing RAID 0 and 1 handling. Linux 3.9 also has a number of new and improved drivers, which means the kernel now supports the graphics cores in AMD's next generation of APUs and also works with the high-speed 802.11ac Wi-Fi chips that will likely appear in Intel's next mobile platform. Read more about new features in What's new in Linux 3.9.

"The only thing we can really say from this comparison is that very fortunate timing is far more important than anything else. It doesn't say anything about monolithic vs. micro-kernel. If Minix 3 was released in 1991 and Linux was released in 2005, then I doubt anyone would know what Linux was."

Very true, timing was everything. The same is true of the commercial players as well. In early computing history there were many competitors; over time they consolidated and fell away to the point where we only have a few options. For better or worse, it would take insane amounts of money to budge the current market leaders and get consumers to discard their collective investments in incumbent technologies.

Userspace/kernel context switches used to be much more expensive, so that may have been a historical factor in monolithic kernels pulling ahead. As CPUs have evolved, that cost has fallen, which should eliminate the original motivation for monolithic kernels; but the design has stuck around because alternatives have been marginalized in the market.

It's funny that whenever I've talked about being able to write operating systems with less-technical people, many automatically equate that with being filthy rich; they don't realize how many of us struggle to find any work in OS tech. We would be just as good, but we're too late.

But it's not really the context switches between processes that hurt micro-kernels; it's the way that synchronous IPC requires so many of these context switches. E.g. sender blocks (causing task switch to receiver) then receiver replies (causing task switch back).

But it's not really the IPC that hurts micro-kernels; it's APIs that are designed to require "synchronous behaviour". If the APIs were different you could use asynchronous messaging (e.g. where a message gets put onto the receiver's queue without requiring any task switching, and task switches don't occur as frequently).

But it's not really the APIs that are the problem (it's easy to implement completely different APIs); it's existing software (applications, etc.) that is designed to expect the "synchronous behaviour" of things like the standard C library functions.

To fix that problem, you'd have to design libraries, APIs, etc. to suit, and redesign/rewrite all applications to use those new libraries and APIs.

Of course this is a lot of work - it's no surprise that a lot of micro-kernels (Minix, L4, Hurd) failed to try. The end result is benchmarks that say applications that use APIs/libraries designed for monolithic kernels perform better when run on the monolithic kernels (and perform worse on "micro-kernel trying to pretend to be monolithic").

"But it's not really the context switches between processes that hurt micro-kernels; it's the way that synchronous IPC requires so many of these context switches. E.g. sender blocks (causing task switch to receiver) then receiver replies (causing task switch back)."

Still, if the context switch were "free", I think it'd help take microkernels out of the shadows. IPC doesn't have to be expensive, but we'd have to use it differently than the synchronous call-and-block pattern (like you said). I was always a fan of asynchronous batch messaging like that used by mainframes. We think of them as dinosaurs, but they did an inspirational job of breaking problems down into elements that could scale up very easily. Modern software design doesn't do justice to the software efficiency that earlier computers demanded.

"Of course this is a lot of work - it's no surprise that a lot of micro-kernels (Minix, L4, Hurd) failed to try."

I have been working on my own async library, and although it works, the nagging problem is that without an OS written for truly asynchronous system calls, it ends up being emulated on top of a synchronous kernel like Linux, where the benefits cannot be seen. It's difficult to sell a new paradigm (even one with merit) when it runs poorly on existing operating systems that were optimized for the old paradigm.