Posted
by
Cliff
on Sunday August 06, 2000 @12:04AM
from the can-it-be-done dept.

Kris Warkentin writes: "This question is not entirely intended to start a debate about the pros and cons of microkernels vs. monolithic ones. What I would really like to know, however, is how _feasible_ it would be to convert the Linux kernel to a microkernel. I was looking at how the QNX kernel offers only core services like threading, IPC, process creation, memory management, initial interrupt handling, etc. Everything else functions as a process within its own memory space. Linux can be configured so that it is much like this, with other things (filesystems, etc.) compiled as modules. The key difference is that all the modules are operating in kernel space. So, the question is, how difficult do you think it would be to devise a communication protocol to let modules function outside of kernel space and merely talk to the kernel? What would be the costs and benefits? Would it be possible to have both types in the same source tree? (say, as a compile option)"
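As a thought experiment, the "modules talking to the kernel" protocol the question asks about can be sketched in ordinary user-space C: a hypothetical "filesystem module" runs as its own process and services requests over a socketpair. The message format, opcodes, and function names here are all invented for illustration; a real implementation would need shared memory, richer IPC, and much more.

```c
/* Toy sketch of a user-space "module" speaking a tiny request/reply
 * protocol.  All names and the message format are invented. */
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

struct msg { int op; int arg; int result; };
enum { OP_READ_BLOCK = 1, OP_SHUTDOWN = 2 };

/* The "module": loops on its end of the socket, services requests,
 * and replies.  It runs in its own address space, like a QNX server. */
static void module_main(int fd) {
    struct msg m;
    while (read(fd, &m, sizeof m) == (ssize_t)sizeof m) {
        if (m.op == OP_SHUTDOWN) break;
        m.result = m.arg * 2;          /* stand-in for real work */
        write(fd, &m, sizeof m);
    }
    _exit(0);
}

/* The "kernel" side: what used to be a direct function call into a
 * kernel-space module becomes a synchronous message exchange. */
int module_call(int fd, int op, int arg) {
    struct msg m = { op, arg, 0 };
    write(fd, &m, sizeof m);
    read(fd, &m, sizeof m);
    return m.result;
}

/* Launch the module as a separate process; return the kernel-side fd. */
int start_module(void) {
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
    if (fork() == 0) { close(sv[0]); module_main(sv[1]); }
    close(sv[1]);
    return sv[0];
}
```

The cost the question asks about is visible even here: every "call" is now two copies and two context switches instead of one function call.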

Hey, go back and brush up on your History of Linux. Linus specifically argued the whys and hows of microkernels with Tanenbaum (here [dartmouth.edu]), and he's repeated in various interviews (like this one with Yamagata [tlug.gr.jp]) his reasons for not going with a microkernel.

I think it would be far better to turn the monolithic kernel into a microwave. I'm quite sure that doing so would dramatically increase Linux's popularity. I mean, who wouldn't want their computer to have the ability to microwave food or AOL CDs without additional hardware?

A microkernel may not be what Linux was planned to be, but considering the "bloat" that is accumulating in the kernel, and the inherent limitations of monolithic kernels, it may be wise to create a Linux microkernel which would be developed concurrently with the present kernel.

A microkernel could be invaluable in the emerging world of embedded Linux distros and the competition over consumer information appliances.
--
Finish the project. We'll buy you a new family.

In Linux, at the moment and hopefully forever, just one CPU can be in the (micro-)kernel at a time.

Are you sure? - I don't think so.

Access to sensitive areas of code is protected by locking the kernel (shockingly, using calls to the functions 'lock_kernel();' and 'unlock_kernel();' :-)

As I understand it, there are many areas of the kernel that are not brilliantly written for SMP (can you say IP stacks? :-). Kernel locks are held when they are not needed, so in practice what you say may often be true. But this is changing, and it is not inherent in the way the kernel works.
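For readers who haven't watched this from the inside: "only one CPU in the kernel at a time" is exactly what a single global lock gives you. Here is a user-space caricature of lock_kernel()/unlock_kernel() using POSIX threads; the names mimic the kernel's, but this is a toy, not kernel code, and the counter exists only to show that entry is serialised.

```c
/* Caricature of the big kernel lock: one global mutex that every
 * "kernel entry" takes, so only one thread ("CPU") is inside at once. */
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t big_kernel_lock = PTHREAD_MUTEX_INITIALIZER;
static long in_kernel_work = 0;

static void lock_kernel(void)   { pthread_mutex_lock(&big_kernel_lock); }
static void unlock_kernel(void) { pthread_mutex_unlock(&big_kernel_lock); }

/* Each "CPU" makes many kernel entries.  Without the lock, the
 * unprotected increment below would lose updates under contention. */
static void *cpu(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        lock_kernel();
        in_kernel_work++;       /* critical section: fully serialised */
        unlock_kernel();
    }
    return 0;
}

/* Run n "CPUs" and return the final counter value. */
long run_cpus(int n) {
    pthread_t t[16];
    if (n > 16) n = 16;
    for (int i = 0; i < n; i++) pthread_create(&t[i], 0, cpu, 0);
    for (int i = 0; i < n; i++) pthread_join(t[i], 0);
    return in_kernel_work;
}
```

The correctness comes at the price the parent comment describes: while one CPU holds the lock, the other three are idling, which is why finer-grained locking matters for SMP.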

It's not how big your OS kernel is, it's what you do with it that matters;-)

Hurd [is] now obsolete too.

This is really bad news, since it hasn't even hit a 1.0 release yet :-) Why do you say this? Not being argumentative - just interested.

I just want to highlight this link [mega-tokyo.com] from the previous post. I hadn't seen this site before, and I really wish I had found it a year ago when I started getting into OS coding. If you are at all interested in OS coding, check it out.

...but if that's sufficient to make NT a microkernel, then, well, err, umm, Linux - or {Free,Net,Open}BSD, or Solaris, or HP-UX, or AIX, or Digital UNIX, or... - are also microkernels if they're running X; in systems running X, the rendering code runs in "a process within its own memory space", i.e. the X server, in user mode.

Um, Digital UNIX (now TRU-64, formerly OSF/1) is a true microkernel-based OS. Just about everything within the "kernel" can be reconfigured on the fly, and each sub-system is protected from the others.

I know there's talk of Compaq opening (or, better still, *freeing*) the TRU-64 base source. Even without the "crown jewels" (LSM/AdvFS/Tru-Clusters etc), the advanced microkernel architecture would be a very valuable contribution to the community - TRU-64 is probably the most "comfortable" proper UNIX, and is certainly one of the most advanced in terms of features.

I was looking at how the QNX kernel offers only core services like threading, IPC, process creation, memory management, initial interrupt handling, etc. Everything else functions as a process within its own memory space.

It's not - in NT, file systems, drivers for devices such as disk and network controllers, and network protocol implementations run in kernel mode in a fashion similar to the way they function in various UNIX systems.

I believe NT *was* a microkernel (3.5.1); even the GUI ran in userspace (and networking would continue even if the GUI crashed).

I was looking at how the QNX kernel offers only core services like threading, IPC, process creation, memory management, initial interrupt handling, etc. Everything else functions as a process within its own memory space.

It's not - in NT, file systems, drivers for devices such as disk and network controllers, and network protocol implementations run in kernel mode in a fashion similar to the way they function in various UNIX systems.

Some of the Win32 semantics are implemented in the user-mode Win32 subsystem process, but some Win32 calls just get mapped into native NT system calls by the Win32 library.

I'm interested in microkernels for high reliability systems. One of the problems with most operating systems is that a device driver or major subsystem, such as networking or graphics, can crash the kernel. What if each device driver and major subsystem ran in its own address space? The address space would be restricted to the module's code, data and the address space associated with the I/O device. If the module crashed, the microkernel could recover by reloading and restarting the module.
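A minimal sketch of that recovery loop, with ordinary processes standing in for driver address spaces. The driver, its injected "bug", and the retry policy are all invented for illustration; a real microkernel would also have to re-establish the device state the dead driver left behind.

```c
/* Sketch of driver supervision: run a "driver" as a child process and
 * restart it when it crashes, instead of taking the whole system down. */
#include <assert.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* A stand-in driver.  The crash flag simulates a driver bug that
 * would have panicked a monolithic kernel. */
static void driver_main(int should_crash) {
    if (should_crash) abort();   /* simulated driver fault */
    _exit(0);                    /* clean driver run */
}

/* Supervisor: reload and restart on abnormal exit.  Returns how many
 * restarts were needed before the driver ran cleanly. */
int supervise(int crashes_before_success) {
    int restarts = 0;
    for (;;) {
        pid_t pid = fork();
        if (pid == 0)
            driver_main(restarts < crashes_before_success);
        int status;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            return restarts;     /* driver finished normally */
        restarts++;              /* driver crashed: reload and retry */
    }
}
```

Because each "driver" dies in its own address space, its crash is an ordinary process exit that the supervisor can observe and handle, which is the whole point of the design described above.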

As Linus has shown, microkernels can have their own weaknesses and performance issues (and this argument dates back to 1992!)

Re-inventing the wheel -- other things are more important

Issues, issues, issues -- new design, new drivers, etc...

If it ain't broke, don't fix it, and...

Very important: you can run a microkernel OS underneath a monolithic one!

This is where RT/Linux, and other Linux pre-emptive microkernels come in. Advantages:

It addresses hard real-time

No re-inventing the wheel, you only implement what you need in real-time in the microkernel (e.g., drivers, etc...)

You still have full-blown Linux, which runs as a non-RT task in the microkernel

You can address, change and do all the little things you need, without having to deal with the whole kernel and compatibility with modules you couldn't care less about.

You get the best of both worlds. Minimal redesign, maximum reuse. The whole microkernel argument is old, very old. Linus has gotten Linux about as good as soft real-time gets in 2.4. RT/Linux is the microkernel that addresses hard real-time and other size and response-time issues. And it is a microkernel, running the main Linux kernel as a regular process. A perfect solution.

In a nutshell, it's impossible to get Linux to do everything without major modifications. There will have to be non-direct kernel implementations to do those unique applications. I really don't see any other way to do it. And besides, I don't see QNX, VxWorks, nor any other RTOS being as flexible as Linux is at doing many other things.
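The scheduling idea behind this design - real-time tasks always win, and the whole general-purpose kernel is just the lowest-priority task - can be caricatured in a few lines of C. All names are invented; real RT/Linux is of course driven by interrupts, not by polling a table.

```c
/* Caricature of the RT/Linux split: a tiny dispatcher that always runs
 * a ready real-time task first, and gives the CPU to "linux" (the whole
 * general-purpose kernel) only when no RT task is runnable. */
#include <assert.h>
#include <string.h>

#define MAX_RT 8
struct rt_task { const char *name; int ready; };

static struct rt_task rt[MAX_RT];
static int nrt = 0;

void add_rt_task(const char *name, int ready) {
    rt[nrt].name = name;
    rt[nrt].ready = ready;
    nrt++;
}

/* Pick the next thing to run: the first ready RT task wins; otherwise
 * the general-purpose kernel is just the background task. */
const char *schedule(void) {
    for (int i = 0; i < nrt; i++)
        if (rt[i].ready) { rt[i].ready = 0; return rt[i].name; }
    return "linux";   /* Linux runs only when no RT work is pending */
}
```

This is why the approach gives hard real-time guarantees: nothing inside the Linux kernel, however badly behaved, can delay an RT task, because Linux never runs while one is ready.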

I'm not sure if both are still worked on. MkLinux was only ever supported under Mac but supposedly you could compile it and run it on x86. I've never done it though.

Taking MkLinux and putting GnuMach under it (I have no idea how involved that is - probably very) seems like it could be a quick way to get a Hurd lite or something similar running. It might be an interesting experiment.

...as is BeOS, and as should every desktop and server operating system, but that's another rant...

[...] and the Be kernel lacks 80% of the function and features of a Linux kernel. It's not exactly a fair comparison or even a valid one.

But that's the point! You don't need all that bloat in the kernel. You might argue that unlike some OSes we won't mention, you don't need your GUI in the kernel, and I'd say you were right. But you don't need USB support, file system drivers, device drivers, networking or swapping in there either. That 80% (in the case of QNX it's probably more like 95%) can be implemented in user land.

Or, to turn the argument around: Do you really want to have to reboot to install a new networking protocol? Is it any different from having to reboot to install an application?
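To make the no-reboot point concrete in miniature: if a protocol handler is just a replaceable entry in a table (or, at full scale, a user-space process), a new version can be swapped in while everything keeps running. A toy registry with invented names, nothing like the real kernel's protocol registration:

```c
/* Toy protocol registry: handlers can be registered or replaced while
 * the "system" keeps running - no reboot, because nothing here lives
 * in a kernel image that must be rebuilt and rebooted into. */
#include <assert.h>
#include <string.h>

#define MAX_PROTO 8
struct proto { const char *name; int (*handle)(int); };

static struct proto table[MAX_PROTO];
static int nproto = 0;

/* Register a handler, hot-swapping in place if the name exists. */
void register_proto(const char *name, int (*handle)(int)) {
    for (int i = 0; i < nproto; i++)
        if (strcmp(table[i].name, name) == 0) {
            table[i].handle = handle;   /* live upgrade */
            return;
        }
    table[nproto].name = name;
    table[nproto].handle = handle;
    nproto++;
}

/* Route a "packet" to whichever handler is currently installed. */
int dispatch(const char *name, int packet) {
    for (int i = 0; i < nproto; i++)
        if (strcmp(table[i].name, name) == 0)
            return table[i].handle(packet);
    return -1;   /* unknown protocol */
}

/* Two example handler versions used below. */
static int echo_v1(int p) { return p + 1; }
static int echo_v2(int p) { return p * 2; }
```

In-kernel code can be swapped too (that's what modules are), but a crashed or buggy user-land handler takes down only itself, which is the asymmetry the post is pointing at.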

Linux really should become a microkernel, since it is easily outperformed on SMP systems. In Linux, at the moment and hopefully forever, only one CPU can be in the (micro-)kernel at a time. And besides interrupts, the kernel also handles the other, unimportant stuff (sorry for my bad English) like networking, filesystems, etc., all of which could be done in user space. For example, even networking and I/O could easily be done entirely from ring 3 via IOPL, which would then by default allow, rather than disallow, access to all currently unused I/O ports for that task. The CPU that serves the task then does the I/O itself, so CPU0, which handles the IRQs, hands off to the other CPU serving the I/O task (say, IDE PIO) whenever an IRQ occurs that the task has to handle.

Microkernel or no microkernel, the current kernel image scheme should stay, since I _hate_ those large directories full of "device drivers" (which, strictly speaking, don't exist for Linux) but _do like_ being able to choose whether to build something into the kernel image or as a module. I SAY that "built into the bzImage" does NOT automatically mean "included in the (micro-)kernel"!!!

No. In NT 3.5.x, just as in 4.0 and, as far as I know, 5.0^H^H^HW2K, drivers for devices such as disk and network controllers, and network protocol implementations run in kernel mode in a fashion similar to the way they function in various UNIX systems.

even the GUI ran in userspace (and networking would continue even if the GUI crashed)

Yes, rendering was done in 3.x by sending messages to the Win32 subsystem process...

...but if that's sufficient to make NT a microkernel, then, well, err, umm, Linux - or {Free,Net,Open}BSD, or Solaris, or HP-UX, or AIX, or Digital UNIX, or... - are also microkernels if they're running X; in systems running X, the rendering code runs in "a process within its own memory space", i.e. the X server, in user mode.

BTW, "the GUI" runs, in part, in user mode even on NT 4 - the low-level rendering is done in kernel drivers, but the toolkit - the equivalent of Motif or GTK+ or Qt or... - lives, as far as I know, in user32.dll, which is a library that calls routines in gdi32.dll to get stuff rendered.

user32.dll is, as far as I know, just user-mode library code, as is gdi32.dll; on 3.x, gdi32.dll sent messages to the Win32 subsystem process, and, in 4.x and later, it goes through the kernel driver in at least some cases. (The fact that it's a shared library means that binaries built for 3.x should just continue to work - the ABI for drawing stuff on the screen is, in effect, a bunch of "call routine XXX in this library, with these arguments" items, and the way routine XXX accomplishes that can change from release to release without affecting programs that don't go around the back of the library.)

(user32.dll probably roughly corresponds to your toolkit library or libraries in X, and gdi32.dll probably roughly corresponds to Xlib, although there may be differences.)

Others have mentioned MkLinux [mklinux.org], a version of Linux which runs on top of the Mach microkernel. By modern standards, Mach isn't so "micro". On my Hurd [hurd.org] partition, the gnumach executable weighs in at 726kb compressed, and about 1.6Mb uncompressed. Compare with ntoskrnl.exe, which is 907kb on NT 4.0 Enterprise Server. Both of these are comparable with the size of an average Linux or BSD monolithic kernel, which sits around the megabyte mark uncompressed.

The QNX kernel, on the other hand, is something like 8kb in size, which fits in the cache of a 486. Even the BeOS kernel is only something like 78kb compressed. Not that size is the only concern (so my wife keeps telling me), but in general, the less code that runs in the kernel, the easier it is to say something about how secure it is. Also the easier it is to change things while the system is running.

I hate to sound like Andrew Tanenbaum [cs.vu.nl], but MkLinux and the Hurd are now obsolete too. Mach belongs to the old school of microkernels which were popular 10-15 years ago, but with the benefit of hindsight, we know better. Nowadays, for example, we know that you don't even need to do VM swapping inside the kernel.

There are some projects of note which may result in a product which is cleaner and better designed than Linux. Here are some suggestions:

chaos [chaosdev.org], which has a very clean, pragmatic design without sacrificing its microkernel philosophy