Posted
by
Unknown Lamer
on Wednesday February 29, 2012 @10:06AM
from the still-waiting-for-coyotos-hurd dept.

An anonymous reader writes "MINIX 3.2.0 was released today (alternative announcement). Lots of code has been pulled in from NetBSD, replacing libc, much of the userspace and the bootloader. This should allow much more software to be ported easily (using the pkgsrc infrastructure which was previously adopted) while retaining the microkernel architecture. Also Clang is now used as a default compiler and ELF as the default binary format, which should allow MINIX to be ported to other architectures in the near future (in fact, they are currently looking to hire someone with embedded systems experience to port MINIX to ARM). A live CD is available."
The big highlight is the new NetBSD-based userland, which replaces the incredibly old-fashioned and limited Minix userland. There's even experimental SMP support. Topping it all off, the project switched over to git, which should make getting involved in development a bit easier for the casual hacker.

The Linux kernel itself usually isn't. The C libraries that come with the distributions that ARM board vendors often bundle with their hardware are typically smaller, as they don't usually use glibc (or a derivative of it).

Embedded dists tend to use uClibc + BusyBox. Android uses a BSD userland with a C runtime called Bionic that implements a subset of POSIX. The kernel is typically compiled down to strip out superfluous drivers, filesystems, subsystems and so on.

Yes. The uClibc C runtime ditches, or makes optional, a lot of stuff which is superfluous for embedding (locale support, math and so on); it is optimized to save space, not necessarily for performance, and it doesn't provide a stable ABI. BusyBox doesn't offer a full implementation of the various tools either, just the basics. Both are also modular, so you're meant to pick which features you want at compile time. That's fine for embedding because space is usually at a premium, e.g. the rootfs has to sit in a small flash partition.

So you could use them on a desktop, but the question is why, in most cases, since you'd have the CPU and memory to support the full-blown libs. I doubt most desktop-style applications would compile against uClibc, and most dists would expect the full-blown GNU tools in order to function. You'd probably have to roll your own dist for that.

Git? Seriously? So the system developed by the primary "enemy" (or so it's portrayed) of the designer of MINIX (and most vocal opponent of the way MINIX operates) is used to develop MINIX itself now, presumably because "it works" even if it's not architecturally perfect?

I can't decide if that's incredibly ironic, or a wonderfully beautiful illustration of Open Source.

Git is a userspace application; Tanenbaum and Torvalds disagree about the best way to design a kernel. That's a totally different topic.

That's 20th century thinking, you dinosaur. In the 21st century, you cannot disagree with someone without hating them and everything they stand for. I am now obliged to call you an idiot for disagreeing with me, or in modern parlance: "being wrong". w0t??? U iz a bag of FAIL!!!! I bet you're a communist who votes for pinkos!!!

Since when are Tanenbaum and Linus enemies? Seriously, a lot of folks get riled up over a ridiculous debate from nearly 20 years ago between an old professor and a young student over theoretically correct vs. practically preferable. (Do the drivers live in ring zero? Yeah, that's pretty much the crux of it.)

Open source and costing money are by no means mutually exclusive, as the OSI will tell you. Even the FSF would argue that 'Free Software' doesn't mean that money can't be charged for it.

Anyway, in the '90s, Minix was something you got by purchasing Andy Tanenbaum's book 'Operating Systems: Design & Implementation', for whatever the book cost. The CD came with the book, so essentially it cost the price of that book. Today, it can be downloaded from the Minix3 website [minix3.org].

I see an interesting convergence of technologies happening. Clang is on the roadmap for several BSDs and is now the default on Minix. NetBSD tools were pulled in, which are also used in part on several other systems. The Minix folks will probably upstream fixes to NetBSD as well as make improvements to LLVM.

It's great to see alternatives to GNU tools gaining ground. It's the only logical choice for embedded systems due to licensing. We're going to need to step up our game and make our own tools with threats like Wayland coming.

>It's great to see alternatives to GNU tools gaining ground. It's the only logical choice for embedded systems due to licensing. We're going to need to step up our game and make our own tools with threats like Wayland coming.

Fair question. The problem with Wayland isn't the license; it's the forcefulness of the Linux community in killing off all GUI systems that aren't theirs. The entire point of Wayland (and newer X.org work, to a lesser degree) is to kill backward compatibility in the name of progress. The rest of us don't have IBM money to reinvent half the kernel every few years when a new idea crops up.

KMS is sensible, as is having the kernel manage memory. The problem is things like GEM, which are quite leaky abstractions over the Linux virtual memory subsystem and so are difficult to port to other systems.

For those of us who don't know what Wayland could be, might you refresh our feeble minds with its definition?

It's a massive FUD attack designed to replace Xorg with a less featureful but shinier replacement which also makes a number of the same mistakes that were made by OS X and Windows, rather than keeping the better parts of Xorg's design. On the other hand, it's something new, which will keep the developers who have got bored with Xorg happy.

Wayland won't feature remote windowing. The best we can hope for is a pixel-scraper which dumps compressed bitmaps over the network.

Wayland seems to feature client-side decorations. This has the advantage that every toolkit will give subtly different window decorations, hung applications will have immovable windows, and it will be difficult to implement global policies such as snap-to-window or snap-to-edge, etc.

Wayland also solves a host of completely unrelated problems (apparently). See, one problem with Xorg is tearing in video. I don't have this problem on any of the Intel chipsets I have, so it's clearly not an Xorg problem but a problem with drivers for other chipsets. Wayland people claim that Wayland will solve this, apparently by magically dealing with the undocumented chips and proprietary blobs from other vendors.

Wayland does reduce the latency for compositing window managers by removing a number of program->xorg->WM->xorg messages. Given that these arrive at a rate of positively tens per second from your mouse, this is terribly important, since Linux can't deal with high-data-rate, low-latency messages.

About the only use case for Wayland is so that you can have a nice graphical transition between multiple X servers running on a single monitor. I think that's definitely worth giving up network transparency for!

Wayland also seems to incite blatantly disingenuous claims from people who should know better, like "oh, you will be able to run Xorg on top of Wayland". This ignores, first, the fact that new Wayland-only programs won't have remote networking, and second, that on every other system which does this, X11 is very much a second-class citizen and the programs don't integrate properly with the native system.

Oh, apparently the BEST thing about Wayland is that it no longer has the 1980s-style graphics primitives. This means that X is old and unfashionable. It also means that the Wayland developers have apparently never heard of software modularity, where a bunch of rarely used function calls can sit off to the side in a different source file and not clutter up the main body of code.

> Wayland won't feature remote windowing. The best we can hope for is a pixel-scraper which dumps compressed bitmaps over the network.

You people need to get over your whole "X11 can run over the network" thing... I don't care what the theory is, nothing beats RDP (Windows Remote Desktop) for running applications. I've used them all: RDP, VNC, ssh -X, NX... nothing comes close to RDP, and if you guys took off the "Windows sucks" blinders you would admit that too. Whatever advantages Xorg might have, running applications remotely is absolutely NOT one of them.

I've used them all as well, and I usually prefer ssh -X. Of course, I'm working within a fast internal network; over links with more latency, other tools may be better suited. On a daily basis I run my email client on my Linux laptop and pipe the display to a rootless X display running under Cygwin on my Windows desktop. I commonly forget that the app is not just another Windows program.

Can RDP be set to mix remote applications with locally running ones? I like the idea of switching with just an alt-tab an

Windows (with either a terminal server or the newer Remote Desktop Services) can do this beautifully. I use it all the time for various programs and forget that they're running in a data center far away. Lots of times they run faster if I'm on a slower laptop or working remotely. For example, a line-of-business app that needs to hit a database server that's in the data center. It's pretty nice.

Meh. The plural of anecdote is not data. Nothing beats X11 over a LAN, and the proper integration between local and remote windows is simply fantastic. IME, NX handily beats RDP over the internet. Perhaps if you took off your "X11 sucks" blinders, you would see that too.

Which is what we've got now with Xorg + any non-trivial widget set, no?

No, not exactly. The X server has been extended with Porter-Duff compositing, and fonts are usually uploaded as pixmaps. Once uploaded, they can be pasted down into a rendering of a string by effectively sending a list of pixmap IDs. That's much faster.

Also, GLX is capable of serializing the OpenGL stream and sending that over the network.

> Wayland does reduce the latency for compositing windowmanagers by removing a number of program->xorg->WM->xorg messages

Only with the current implementation: nothing in the X protocol prevents having an X server with a compositor and a window manager in the same process, if the performance cost of X.org's modular design is really an issue. Also, with client-side decoration, I think that Wayland *increases* the number of messages between the client and server when you move a window.

> Wayland seems to feature client-side decorations. This has the advantage that every toolkit will give subtly different window decorations, hung applications will have immovable windows and it will be difficult to implement global policies such as snap-to-window or snap-to-edge etc.

So then are you calling Mac OS X more like X Windows, even though the 'server' runs on the same machine?

If an app hangs, you can still move the windows. (In the vast majority of cases. Very old legacy apps don't have movable windows

Great post, but there are things that X11 needs to fix. The whole "visuals" bit, and the capturing of the mouse? Xlib is a mess to program against, and the GUI toolkits try to hide that, but the overhead still exists.

Now, having said all of that, I would rather have a push to streamline X11 while keeping a strong window-manager separation (this is actually important for security in addition to usability) and the remotable constructs. X11 has drawing primitives that are better than bitmaps (Wayland), but not re

It will be nice if people start to realize that their code needs to compile with things other than GCC. These days you can't even compile a lot of software if you have a different version of GCC than the author did.

Well, GCC started out life as a victim of that stuff, and it claimed for a long time to be the universal solution to that problem. Does GCC still not flag all nonstandard extensions when the -pedantic switch is turned on? It used to have that problem, which made it a code trap: write it for GCC and it won't build anywhere else.

Note that clang is quite gcc-compatible (by design), so a lot of "gcc only" code works fine with it. Thus it's probably not going to do so much to reduce the popularity of gcc extensions.

However, although it largely implements the same interface as gcc, because it's an entirely separate implementation, it is very useful as a way to detect inadvertent dependencies on gcc quirks/bugs (compile and test your project with both gcc and clang).

Why is Wayland a 'threat'? Open source is evolution. Let Wayland come - if users go for it, they go for it and it becomes the new thing. If not, it was a try and I'm sure some good ideas can be harvested from it still.
Where's the love, brah?

> Why is Wayland a 'threat'? Open source is evolution. Let Wayland come - if users go for it, they go for it and it becomes the new thing

The problem is not "if the users go for it". The problem is "if every major distribution tries to cram it down everybody's throat", with no alternatives or making it very hard to choose an alternative.

If that happens, open-source will route around it - it does that. Something else will come out on the other side that discontent people will like and gravitate towards. Or, Linux will wither away and die. Which outcome do you think is more likely?

Is there any particular reason why Wayland couldn't be ported to run on the Minix 3.x microkernel? Yeah, it would run in user space, but is there anything that would stop it from running on Minix, when X already runs well? That way, Minix 3.x could also have KDE5 at least running on it, when it's available.

Great for whom? What do users gain with the alternatives to the GNU tools? I like being able to fix bugs in the software running on my routers, or to upgrade my Android phone even after its manufacturer stopped supporting it, and it's only possible because of the GPL.

> It's the only logical choice for embedded systems due to licensing.

The vast majority of the embedded systems around me are running GPL code (Linux, BusyBox) and they seem to be doing fine.

I wonder: what is the real-world usability of microkernels? As far as I know, there are only three serious open-source projects developing an actually usable microkernel for PC-ish hardware (namely: Minix, Hurd and, shoot me, ReactOS). How does Minix compare to Hurd? Which of those two projects would be likely to be a serious ('production'-ready) alternative to Linux?

At first sight, it seems Hurd is a few steps further along: Debian has delivered an experimental distro around a Hurd kernel (comparable to the Debian/FreeBSD project) for a few years now, whereas Minix just adopted NetBSD's userland with this release. On the other hand, news on Hurd has been steadily stale for a decade or two.

If our future held easily selectable kernels (Linux/Hurd/Minix) and userlands (GNU/*BSD), in any combination of our liking and/or best suited to our goals, then I'd welcome it, but I'm quite sure this is an oversimplification of current reality, and probably of the future, especially given current *BSD vs. Linux development (partly caused by licensing issues). Maybe some expert on the matter could enlighten us with end-user-understandable technical details and a comparison of those projects, please.

Linux is a pure macrokernel, in the sense that it isn't even slightly microkernel-ish. Of course, FUSE might confuse the issue a little bit. I am not sure where such facilities fall in the macro/microkernel debate.

That's exactly what I meant: commercial, not open source, and secondly, aimed at embedded systems.

And indeed, most current kernels are some hybrid between monolithic and micro. That's what makes Hurd and Minix so interesting: pure userland drivers/servers/or whatever you call them; sshfs and FUSE pale next to that. From a programmer's point of view, it narrows the gap between database (in the broad sense), filesystem and IO, allowing far more efficient approaches to dealing with data and information services

A full-featured kernel and userland that allows you to tinker with a microkernel-based system. Linux and the BSDs are all monolithic kernels, even where they offer module support (Darwin, the core of OS X, isn't a true microkernel-based system either).

Actually, Minix is trying to make advances in a few areas:

1. Embedded. Minix is trying to be smaller, lighter, and more modular than Linux, which, to be honest, is pretty dang good.
2. Security. Being a microkernel with drivers running in user space, the goal is that a security exploit in a driver will not lead to a global (aka root-level) exploit.
3. Reliability. With the microkernel, if a driver crashes, then instead of all of Minix going down, the driver can be restarted.

Are there any plans to add real-time extensions to MINIX? I know that ARM support is in the works - with that and hard (or even soft) real-time extensions, it could sweep the embedded world in a big way.

In principle, instrumenting significant parts of a microkernel such as Minix is much easier than with a monolithic system like Linux. How true that is in practice is not something I can answer - I've only got the second edition of Tanenbaum's "Operating Systems" book, but it describes a much older and less sophisticated version of Minix.

It's an educational tool, not a training tool. Education is learning how stuff works; training is vo-tech. Almost no one in the world will ever be hired because of Minix on a resume. It is helpful for learning how OSes work. Another way to put it: education gives you something interesting to think about and makes life worth living; training gives you a way to make money to afford the contemplative life of an educated person. It's an educational tool.

MINIX was and is used in many computer science curricula. Millions of people over the last 24+ years have learned about operating systems with the help of Minix. You might even have heard of a guy named Linus who used MINIX as scaffolding and a teaching tool while writing his own kernel.

Seconded. Aside from a purely nostalgic standpoint, not sure how relevant MINIX is in this day and age, given the hardware and OS choices available. Still, I guess people have the right to work on whatever the hell they want to.

Perhaps it's being used for educational purposes. Linux is a bit huge to use as a learning tool for the various aspects of how operating systems work. I speculate that pulling in code from NetBSD makes sense to provide more up-to-date examples of modern OS architecture issues.

Linux can be configured to run in 2MB of RAM and 2MB of flash or less. It can run in 4MB RAM with a full network stack, busybox, and several hundred K remaining for apps.

There is no other full featured free unix like kernel which can do that. Certainly none of the free BSDs.

I'll take the bait. Care to show a reference to running a modern Linux kernel with 2 MB RAM, or 4 MB RAM with BusyBox, on i386 or ARM? Busybox [busybox.net] can do wonders for storage requirements (e.g. for NAND flash), but it doesn't help with RAM at all! I found 8 MB difficult enough (!), last I tried uClibc [uclibc.org] and BusyBox on i386.

Just as a point of discussion, generic NetBSD is smaller than generic Linux (e.g. Debian) on the ARM platforms I've been using. A line from top shows the latest (NetBSD 5.1.2) kernel R

Thanks, appreciate the link. But it sorta makes my point:

- An allusion to a vapor product with a 3 MB RAM goal is far from showing a dmesg. :)
- The linked "TLK" project reads nicely but has more aspirations than code, AFAICT.
- The included 'web browser' was a misstatement, clarified in a subsequent post.

I'm aware of (uc)Linux's lovely support for MMU-less systems. It was a considerable kernel fork; what I'm impressed with is how much of it has been integrated back into mainline. It's a pity that someone

Where I started in this thread was posting a 1.2 MB kernel RAM footprint in vanilla NetBSD/ARM. This is with UFS/ext2/msdos filesystems, tcp/ip networking, NIC, USB standard devices (bulk storage, audio, etc.) loaded. It doesn't sound terribly far off at all. Outside of the XIP and nommu advantages, which are very significant, I'm actually curious whether it would boot in 2 MB with a minimal userland. The SoC hardware has 32 MB, so I've never bothered

Of the various microkernels that have existed at different times, Mach 3 was less than satisfactory, Chorus ended up digested by Sun, and Amoeba today stands discontinued. L4 is a second-generation microkernel that has been tried in some projects, including a Linux project called L4Linux as well as an OS/2 successor called osFree. There are some other microkernels, such as Coyotos and Viengoos, that have in the past been tried by the Hurd project.

I'd think that something like Minix 3.0 (not 3.2) would be a good microkernel to base Hurd on. Given the licensing differences, the Hurd guys may need to fork Minix anyway in order to get a microkernel that has everything that Hurd needs. If they get that, they can then continue on the rest of the project, and finally have the GNU's own kernel (which ain't Linux).

On another note, I wonder why the Minix guys chose the NetBSD userland, since NetBSD is the least used of the big three BSDs. They could simply have gone with FreeBSD, which would have given them a range of targeting options, allowing them to borrow from PC-BSD for netbooks, pfSense for routers/firewalls, FreeNAS for storage, and so on.

And finally, I do hope they get an ARM version sometime. Another suggestion: they might want to get a Raspberry Pi and port the ARM Minix to that platform, making it the initial target platform.

Remember it for what it was originally made for... an operating system to learn from while coding. You might not remember those days, but when you have a working operating system that is minimal in code size, it's easier to grasp.

I'm just a little disoriented by the need to advance it, unless it's a minimal codebase of the NetBSD variety. Then again, they did say it was "pulled" from NetBSD, so in my mind that'd mean it's not minimal... which nullifies that... and we're back to square one.

> Remember it for what it was originally made for... an operating system to learn from while coding. You might not remember those days, but when you have a working operating system that is minimal in code size, it's easier to grasp.
>
> I'm just a little disoriented by the need to advance it, unless it's a minimal codebase of the NetBSD variety. Then again, they did say it was "pulled" from NetBSD, so in my mind that'd mean it's not minimal... which nullifies that... and we're back to square one.

I think it's the userspace side that's evolving, while Minix proper is the kernel side. The Minix userspace suffered and got crufty; people couldn't use what they already knew, and it started looking a little barren.

There are lots of teaching operating systems out there. Most are just applications that run on another OS to teach basic concepts like multithreading, locking and such. Minix appears to be the other end of the spectrum: an OS that can be taught running on real hardware. A userspace revamp makes it feel modern and not like a toy. And being able to go and compile some program you wrote on an OS you're tinkering with can trigger excitement among students.

And there's probably a ton of OS research going on with Minix as well, and not having to put up with limitations on tools because the userspace can't run them is helpful.

Oh, I remember that all right. What I remember is that circa 1989, the vi editor Minix had could not handle text files larger than 32k! Our first assignment was to hack on some source that was, of course, in a file larger than 32k, so we had to use split to break it into pieces, then cat to join everything back together. Compiling might fail because you ran out of hard drive space, or memory, or file handles, process-table entries, or who knows what. Over and over, Minix told its users that their cheesy cons