Creator of MINIX, flame-war legend and well-known supporter of microkernels — these are some of the monikers of Dr Andrew S. Tanenbaum; he has probably also written a textbook or two sitting in your library.

Builder AU's Nick Gibson caught up with Dr Tanenbaum after his keynote address at linux.conf.au and spoke about microkernels, MINIX and what's coming up on the horizon.

Builder AU: Why do you think microkernels are better than monolithic kernels?

Dr Tanenbaum: If you look at systems where it really matters whether they work — like avionics systems, jet fighters, ventilators in hospitals — things where a failure means somebody dies, most of them are microkernels.

[For] the guys who really care, guys who make jet fighters and so on, it's really got to work. Because if something goes wrong, rebooting the jet fighter while it is falling out of the sky is not a good thing.

There are a tremendous number of military, industrial and hospital systems based on microkernels, because those guys have discovered that they are more reliable.

Microkernels are widely used in the embedded world for high-end, high-reliability things where there is a real big downside to failure.

Microkernels are getting really popular in embedded devices, with QNX being bought by Harman International, and work being done locally on L4 — do you think embedded devices are a good area to concentrate on if you are a microkernel developer?

It's an application area, [so] yes. The thing about embedded systems is that there is less legacy. If you are making a medical imaging device, it probably doesn't have to be compatible with much else, it just has to work.

Other than the fact that you need people on your development team who know the system, it's relatively easy to take this new device and put in a new operating system, because nobody except the developers ever sees that.

On the desktop people want to know what is different from what they used to have, and in embedded you don't have that as much. So it's easier to start again with a new product.

You've been working on MINIX 3, what makes it different from all these other microkernels?

We've made a big effort to make it POSIX conformant and in addition to the embedded stuff, to make it useful on the desktop and notebooks and so on. We've got a full POSIX interface, we've got around 500 programs that have been ported, we have Apache, we have a GUI now and we have perl, python and gcc — many of the tools and programs that people use on the desktop.

Many of the other embedded systems don't focus that way. If you are building software for an F-16 fighter, you don't need Ghostview. So we have ported a lot of that stuff that makes it usable.

We don't have Firefox yet, that comes later this year, but once we have Firefox and maybe OpenOffice then it is usable for a lot of people as their real system.

We have X Windows — programmers expect that — and normally these embedded systems don't have X11; if you are writing for a jet fighter you don't need X11. That's the difference between us and some of the other companies.

Is there any animosity between you and Linus Torvalds?

No, no, no. We've met briefly — now and a couple of years ago — and he is a good guy. A couple of years ago this guy called Ken Brown wrote a book saying that Linus stole Linux from me, from MINIX, and that therefore the intellectual property rights are unclear and companies shouldn't use Linux because I might sue them.

It later came out that Microsoft had paid him to do this — and I defended Linus. I wrote on my Web site saying that this guy Brown came through, visited me and I gave him the [correct] story.

We may have different philosophies on system design but it doesn't mean we dislike each other.

The article was in Computer magazine; I didn't post it on Slashdot — someone put it there and said "Here we go again".

It was an article in an academic journal about future research in operating systems. It wasn't an attack on Linux or Windows — it was an article for an academic audience about what work is going on in operating systems, and it listed a variety of different approaches, of which MINIX is one; it also talked about Singularity, Nooks and other research approaches.

If people thought that was an attack on Linux, they are being overly sensitive.

Back in 1992 you called Linux obsolete; it's 15 years later and people are running Linux on their 64-bit x86 chips. Do you still think —

I still think that its basic architecture, the basic design, is a bad idea — that a monolithic kernel is not a good idea — and people are seeing it more and more in applications where it really matters. [In] industrial and commercial embedded systems people are moving to microkernels. It's got to work.

It's only on the desktop and some servers — where it doesn't matter so much whether it works — that these monolithic systems are popular; where it has to work, they're not so popular. I saw that a long time ago.

My faulting of Linus was that he had a nice, clean microkernel and he could have gone and made a better one out of it. Science progresses when you take something and make a better version of it, not a worse one.

I thought he should have taken it and made a better microkernel — fine! But he was 20 years old and didn't have that much experience; he was a kid and he developed it differently.

We heard last year that GNU Hurd was changing microkernels again. Do you think that we'll ever see a final release of GNU Hurd?

Ask them! There are plenty of great microkernels out there — there is L4, there's MINIX — I don't know what they are doing.

I don't think those guys have the same focus that I have of trying to get a product out there that works pretty well; they have an ideological purity behind them.

I was talking to [Richard] Stallman once and we got onto the subject of Free Software, and he sort of bit my head off when I used the term incorrectly from his point of view.

Free Software is software where you have the source and you can do what you want with it; whether it is under the Berkeley licence or the GPL isn't so important. The important thing is having the source code and being able to play with it yourself.

And he went bananas and said "No, the licence is the most important thing!" And I said "No, the software is the most important thing, and having the source code out there and the details of the licensing are secondary."

The important thing is that you release the source code and other people can use and modify it as they wish under reasonable conditions; the exact nature of those conditions isn't so important. He just went ape.

I think Linus is on the same page as I am in that respect — the important thing is making the code available, and the exact licensing conditions are not. He doesn't like the GPLv3 from what I understand, and that's fine; I like the Berkeley licence, but I respect him for making his choice. That isn't the key thing for me.

I read in your paper that you named microkernels as just one of the tools that you can use. What are some of the other tools that you see as being available to OS providers?

Singularity is a very interesting development from Microsoft: they wrote the entire system from scratch, in a single address space, in a type-safe language. They invented this new language, Sing#, which is derived from C#; everything is in Sing#, and like Java you can't just set a pointer p to some random address and then do *p = 0 — the language doesn't allow that.

It's a type-safe language, it's very restrictive with what you can do, and all the components talk to each other in the same address space over these, what you might call, "named pipes".

A pipe has a protocol, and the protocol is described in a formal language — you send a message of this type to someone and they send back either an A, a B or a C, and so on.

You have to write that all down in a formal language and the system can verify that you are doing what you claim to be doing because you have formally specified what the protocol is over that channel.

So they can come very close to having a provably correct system because they have forced you in the language to say what the protocol is over all these things.
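The channel idea Tanenbaum describes can be sketched loosely in plain C — this is an analogy, not Sing# (the message tags, states and function names below are all invented for illustration): a channel contract says a REQUEST must be answered by exactly one of A, B or C. Sing# can enforce such a contract at compile time; the sketch only checks it at run time.

```c
/* Hypothetical channel contract: a REQUEST, while the channel is
   idle, must be answered with exactly one of A, B or C. */
enum msg   { MSG_REQUEST, MSG_A, MSG_B, MSG_C };
enum state { IDLE, WAITING };

struct channel { enum state st; };

/* Client side: only a REQUEST is legal while the channel is idle. */
int channel_send(struct channel *ch, enum msg m) {
    if (ch->st == IDLE && m == MSG_REQUEST) {
        ch->st = WAITING;
        return 0;
    }
    return -1;                  /* contract violation */
}

/* Server side: the reply to a pending REQUEST must be A, B or C. */
int channel_reply(struct channel *ch, enum msg m) {
    if (ch->st == WAITING && (m == MSG_A || m == MSG_B || m == MSG_C)) {
        ch->st = IDLE;
        return 0;
    }
    return -1;                  /* e.g. answering a REQUEST with a REQUEST */
}
```

Because every legal exchange is written down as a state machine like this, a checker can verify that each component does what it claims over the channel — which is what lets Singularity approach a provably correct system.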

It's a very interesting development — they've got it working and so on; it's an interesting approach. It's not compatible with Windows, it's not compatible with Unix, it's not compatible with anything in the known universe, which is going to be a marketing issue for them.

But they have demonstrated that it can be done; that's one approach. Virtual machines have some potential, but running all the drivers in separate virtual machines is, I think, a clumsy way of doing things. Using a process is good enough — you don't need a whole machine for it.

The Nooks approach, from the University of Washington, is to keep all the drivers in the kernel but wrap them, so that all interaction between the operating system and the drivers goes through the Nooks layer, which checks to make sure that everything is OK. If you're doing something you shouldn't be doing, the Nooks layer catches it and flags it somehow — that's another approach.
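The wrapping idea can be sketched in a few lines of C. Everything here is invented for illustration — the real Nooks work interposes on the whole kernel/driver interface and adds memory protection as well — but the shape is the same: the kernel never calls the driver directly; every call crosses a checking layer first.

```c
#include <stddef.h>

/* Hypothetical, minimal driver interface for illustration only. */
struct driver_ops {
    int (*read)(void *buf, int len);
};

static struct driver_ops *real_drv;   /* one wrapped driver at a time */

/* The "Nooks layer": every call crosses this checkpoint, so a bad
 * interaction is flagged (here, an error return) instead of going
 * straight into the driver and possibly taking the kernel down. */
static int checked_read(void *buf, int len) {
    if (buf == NULL || len < 0)
        return -1;                    /* caught and flagged */
    return real_drv->read(buf, len);
}

/* Hand callers a table whose entries all point into the checking layer. */
struct driver_ops nooks_wrap(struct driver_ops *drv) {
    struct driver_ops wrapped;
    real_drv = drv;
    wrapped.read = checked_read;
    return wrapped;
}

/* A toy driver to wrap, standing in for a real in-kernel driver. */
static int toy_read(void *buf, int len) { (void)buf; return len; }
struct driver_ops toy_driver = { toy_read };
```

Callers use the wrapped table exactly as they would the original, which is why the technique can retrofit existing monolithic kernels without rewriting the drivers themselves.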

There's a bunch of people working on other approaches; I'm not claiming that this is the only way to go but I think it is potentially a very good way.

Finally, what should developers keep in mind when designing programs to be portable on microkernel based systems?

Stick to the POSIX interface — pretty much all microkernels that are aimed at the desktop world support the POSIX interface to some extent.

Use ANSI standard C — do not use any extensions from GNU or gcc or anything else. Stick to ANSI C, stick to the POSIX interface and other things that have been standardised. Stick to standards!

Microkernels and most other systems support the standards; it's when you start using weird extensions that somebody made — in-line assembly code and things — that you are going to hang yourself.

If you stick to the standards in terms of the language, the libraries, the operating system calls, there is a pretty good chance that it will run anywhere.

It's when you use these weird extensions that you're going to hang yourself.
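As a small, hypothetical example of that advice (the function name is mine, not from the interview): the routine below uses nothing beyond ANSI C stdio — no getline(), which began life as a GNU extension, and no gcc-specific features — so it should compile unchanged on MINIX, Linux, the BSDs or anything else with a conforming C library.

```c
#include <stdio.h>

/* Count the newline-terminated lines in an open stream using only
 * ANSI C. fgetc() and EOF are in every conforming C library, so no
 * #ifdefs and no platform-specific extensions are needed. */
long count_lines(FILE *fp) {
    long lines = 0;
    int c;
    while ((c = fgetc(fp)) != EOF) {
        if (c == '\n')
            lines++;
    }
    return lines;
}
```

Compiling with a strict standards flag (for example `-std=c90 -pedantic` under gcc) is a cheap way to catch accidental use of extensions before they become a porting problem.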