Posted by samzenpus on Monday February 10, 2014 @12:55PM from the now-even-better dept.

jones_supa writes "At FOSDEM 2014 some recent developments of GNU Hurd were discussed (PDF slides). In the name of freedom, GNU Hurd now has the ability to run device drivers from user-space via the project's DDE layer. Among the mentioned use-cases for the GNU Hurd DDE are allowing VPN traffic to just one application, mounting one's own files, redirecting a user's audio, and more flexible hardware support. You can also run Linux kernel drivers in Hurd's user-space. Hurd developers also have working IDE support, X.Org / graphics support, an AHCI driver for Serial ATA, and a Xen PV DomU. Besides 64-bit support not being in a usable state, USB and sound support are still missing. As some other good news for GNU Hurd, around 79% of the Debian archive is now building for GNU Hurd, including the Xfce desktop (GNOME and KDE soon) and the Firefox web browser."

Having a project like HURD reflects poorly on Open Source/Free software. It's kind-of emblematic of the major problem with non-commercial software projects: namely, without a central guiding force and a *real* budget, big software projects have a very difficult time getting finished.

Having a project like HURD reflects poorly on Open Source/Free software.

Rubbish. It's a hobby/research project for a few people at the moment. This no more reflects poorly on Open Source software than my crap github account.

Stallman should just kill it. It's pointless.

Tell you what, I'll petition Stallman to ask them to stop (RMS doesn't have the power to tell people what to do in their spare time, and I doubt he'd wield it if he did) if you agree to cease all your hobbies, since you're not pro-level at any of them.

That aside, it's actually beginning to get to the interesting stage. Things are beginning to work quite well. It's soon going to be able to use all of the Linux drivers (i.e. no hardware support problems). The capabilities are interesting because it can do all of this without hacks or root. This gives a much smaller attack surface.

It's also interesting because the difficult, complex, security-conscious, important system code can be written in something other than C very easily. In fact you can cobble together all sorts of stuff in all sorts of languages if desired.

I'm not a developer, but if a Linux driver is written to sit within a kernel, the way it is in Linux, how does one have that driver run in user mode in any OS? I'm not quite getting that.

There's a Device Driver Environment [gnu.org] that emulates parts of Linux as calls to other servers and Mach. Slides 22-25 [fosdem.org] have a bit of info on the port from running inside Mach to userspace.

I don't think it reflects badly on FOSS as a whole, it just shows that Stallman doesn't seem to know when he is in over his head, and is too difficult to work with to attract a sizable developer base for support. It is true that Open Source projects need a sensible leader just as much as regular companies do, but that shouldn't be a surprise. I guess there is one problem in FOSS where, especially on non-megahuge projects, when the original leader steps down there is no meta-organization (like some sort of

just shows that Stallman doesn't seem to know when he is in over his head,

Yay. Today is "make shit up about Stallman day" just like every other day.

Let's go for an RMS quote on the HURD:

"finishing it is not crucial" for the GNU system because a free kernel already existed (Linux), and completing Hurd would not address the main remaining problem for a free operating system: device support.

But Linux *didn't* exist when Hurd was started. It was the moribund state of development 23 years ago which motivated Linus to write his own kernel. So in a sense we should thank Hurd for being so badly mismanaged, mired in politics and kernel correctness that it drove someone to produce something better and more useful. Pragmatism won the day.

I'm not sure what your point is or if "moribund" is even right. The Hurd was only started a year before Linux.

The 3rd definition of moribund on dictionary.com is what I meant - not progressing or advancing; stagnant. Of course perhaps my frame of reference was too short. Maybe I should have measured progress across decades.

Do you have any citations that the Hurd was mired in politics and mismanagement in late 1990/early 1991, before Linux was released?

I didn't say it was. Linux didn't become better and more useful the instant that 0.01 came out. But it was the utter lack of progress with Hurd which meant Linux gained critical mass. Hurd couldn't even bootstrap until mid-1994 and a dist only crawled out into the light of day in 1997. It was too

Perhaps you should actually read the definition rather than regurgitate it. People were working on it. Not producing usable results is not the same as stagnant. It was a year old, people were working on it, and it wasn't ready because it was a much harder problem than expected. It became moribund for a while after Linux came onto the scene as developers left.

Perhaps you should actually read the definition rather than regurgitate it. People were working on it.

Er what? I know what moribund means and I used it without expecting someone to start nitpicking. The definition is there. It applies.

Oh my bad. You see, when you said: " So in a sense we should thank Hurd for being so badly mismanaged, mired in politics", I thought you meant that the Hurd was being mismanaged and mired in politics. Looking back at your post it seems obvious now and I can't imagine how I misunderstood you.

Ah yes, it was my fault that you misinterpreted what I said. I see now. It's so obvious.

Clearly it was. Linus was itching to produce a kernel that ran on his 386 hardware but was more Unix-like than Minix. And Hurd wasn't scratching that itch. It seems it didn't scratch anybody's itch for that matter.

> So in a sense we should thank Hurd for being so badly mismanaged, mired in politics and kernel correctness that it drove someone to produce something better and more useful.

I don't disagree with how it is often ironic for some software to be the father of a good idea, only for someone else to come along and use that as motivation to produce something even better. The history of computer software is littered with examples, e.g. closed-source compilers -> (the originally) open source gcc, Xerox PARC, the Mac GUI, V

"Your signature project has been in development hell for over 20 years, how do you respond?"

You're making stuff up. Making up things is generally known as lying. I'll be generous and assume that you're merely staggeringly ignorant and prefer to regurgitate anti-GNU talking points you've culled from various message boards, and have never bothered to actually find out much about the GNU project yourself.

The HURD kernel is not and has never been the "signature" project. The project is the GNU project (est. 1983) and has been progressing quite nicely. The kernel was not worked on until about 1990. When Linux came along in '91, it was rapidly adopted as the GNU kernel of choice, since it is under an appropriate license.

The goal is to be able to run computers entirely from copyleft software. The fact that some of these were achieved externally is neither here nor there. The GNU project has in fact achieved its major goal: you can now run a computer on completely copyleft software.

I'm not sure what your point is. It's not an embarrassment to the FSF and it's not a flagship project. It's yet-another-kernel to go with all the yet-another-language or yet-another-shell or yet-another-editor projects. Basically at this point (as has been the case since the early 90's) it's the pet research project of a small group of people.

The goal is to be able to run computers entirely from copyleft software. The fact that some of these were achieved externally is neither here nor there. The GNU project has in fact achieved its major goal: you can now run a computer on completely copyleft software.

If I understand it correctly, that was not the original goal of the GNU project. The original goal was to provide a free operating environment. The goal was not to replace all commercial applications or to nit-pick free-to-use-but-commerci

Well, then that's where I do have a problem with the GNU manifesto. Commercial software pays the salaries of developers and ensures that a significant number of developers are trained to produce a pool that can work on open source. Without at least some form of personal profit, I don't see that many people receiving degrees in computer science or an associated field.

HURD was the signature project for GNU. The whole point of the GNU project was a Free Open Source Unix-like operating system. GNU stands for GNU's Not Unix. I don't have a problem with HURD as a project. I am all for grand experiments, and now that we have Linux as a production FOSS operating system, HURD can fill that role. HURD may not now be GNU's signature project, but at one time it really was. I will add that I find Minix 3.0 to be a very interesting project that is not getting the attention that it deserves

Yes, that little gem cracked me up at the time. That's when RMS went from pretending Linux did not exist (the "never HURD of it" joke in every interview or appearance, sometimes multiple times in one interview, for YEARS) to hinting at ownership "in the good cause of promoting GNU" by pushing the stupid LiGnuX name and then the GNU/Linux name. Linux is not a GNU project.

I read the GNU newsletter that you should take a look at. It was an "ends justify the means" thing of advertising GNU by painting their name on someone else's wagon. LiGnuX was the first name suggested, but it didn't get any traction.

attributable to a hero figurehead "Linus Torvalds"

Ah yes - RMS was jealous of someone that was respected more than himself and rolled that one out. Linus is no hero figurehead and has tried far less than RMS t

I am no fan of RMS, but to be fair to him, he abandoned HURD years ago. His chosen OS is GNU/'libre'-Linux, i.e. Linux w/o any 'binary blobs', whatever that is. His preferred distro is gNewSense, running on his Lemote Yeeloong laptop, based on the Loongson CPU.

HURD is now not so much a GNU project as a project that some FOSS organizations, like Debian & Arch, are interested in. Personally, I think that they should use Minix for their microkernel (fork it to GPL3 if they want) and combine

Got to love the idea of someone so concerned about freedom using an all-Chinese laptop. Buying an all-Chinese computer because you are worried about US spying is like fleeing the US to Nazi Germany because you were worried about anti-Semites.

The thing is, it's very hard to determine whether a project is or isn't pointless. There are always non-obvious reasons to have it around: maybe as a research project, maybe it'll end up a Linux killer after all, maybe it will just result in some ideas being reused in other kernel projects. There's no way YOU would know better than the people actually contributing to it. When you say that it's pointless you're just randomly guessing.

Commercial software projects have their own ways of stagnating. Needed features disappear due to pressure from marketing's desire to segment the market, so customers pay more for less with each new version. Security problems are routinely covered up because the vendor is more worried about image than the actual quality of their product. Large budgets can hinder as well as help development; Windows 8 is a good example where Microsoft just threw money/people at the problem instead of focusing on design cohe

I think that HURD itself is interesting - all the services that sit ON TOP of the microkernel, but I agree w/ you that Mach is an outdated platform for this one. Instead, they should look at Minix 3, which is one of the smallest FOSS microkernels around, and use that as the basis for HURD.

As a rule, I support the idea of making a new OS just for the sake of it. But the important thing to realize is that most of these will never get very far in terms of market share.

Linux's success was down to luck. It came out when BSD had a lot of serious licensing issues and there was big demand for something free; it was developed to the point of being useful fairly rapidly and got a lot of attention. At the same time, 32-bit computers for home users were becoming available, and people were jumping on getting a Real OS to do real work on. MS-DOS and Windows 3.1 weren't a good option for real work, and other solutions just cost way too much money.

Hurd was made during the same time BSD was having its issues; however, it was a more ambitious project, and couldn't get in during that opening the way Linux did.

Now BSD, with Free/Open/Net being based on original Unix code, came out of the licensing mess as an open solution, though with some bad taste still in people's mouths. However, it came out a bit more stable than Linux was at that time, and xBSD was being used in business production settings for a long time while Linux matured and took over.

There are a lot of flamewars about the GNU license being superior to the new BSD license, with Linux held up as proof of this. I would disagree: GNU and BSD are both open enough licenses for general adoption, and Linux's success was based on getting in at the right time. Otherwise you would expect HURD to be nearly as popular as BSD is now.

I agree it's basically a confluence of circumstances. Fwiw, while GNU's kernel project was pretty unsuccessful, I do think their more general project of trying to put together all the parts of a free Unix-like was quite useful, and one part of the circumstantial confluence. With BSD tied up in licensing issues at the time, Linus was able to basically grab the GNU compiler, libc, userland, etc. and make a working system. GNU's efforts were less essential to the BSDs after the lawsuit was resolved, but still fairly important in the early years to get something up and running: the lawsuit resolution resulted in ripping out the AT&T-licensed code from BSD, a bunch of which was replaced by GNU utilities as drop-in replacements. These have since been re-replaced in most of the BSDs ('grep' was one of the last GNU utilities to be phased out), but served as a pretty useful 20-year stopgap. And of course GCC had replaced the traditional CC much earlier (GCC appears in 4.3BSD).

One missing bit of this soup that's a real shame, imo: The very late open-sourcing of Plan9 led to a bunch of good stuff that could've been pulled in being ignored. If at least parts of Plan9 had been available in the early '90s when this GNU/BSD/Linux code was coalescing into free operating systems, Plan9's code could've contributed usefully.

Linux won because it was GPL. Up until that point no company would contribute to open source because their rivals could take their inventions, improve them, and not share them back. This is the same thing that caused UNIX's original fragmentation. The GPL prevented that: it gave any contributing company a guarantee that anything they put in that someone else improves, they get to use as well.

IMO it's only the success of GPL that showed companies that forking was unprofitable that led some of those same companies

If it were under the GPL these companies would have avoided it, and given nothing back. If you want to scare a business whose main business model is selling software licenses, try to convince them to use a GPL alternative.

From what I remember, NT 4 didn't support USB very well, if at all. Windows 2000, however, did work reasonably well with USB. So unless they had upgraded to Windows 2000 or XP, they would still be stuck with PS/2-connected devices.

Interestingly enough, the summary indicates that Hurd still doesn't support USB... that does limit the selection of useful hardware.

Last I checked, I don't think it supported anything other than IDE either, but admittedly that was a few years ago. I just remember being curious to try it, looking at the supported hardware list, thinking "I don't have anything that ancient anymore" and moving on...

I'm not saying that Linux is the be-all, end-all of Free Operating Systems, but after 24 years I think Hurd meets the definition of a failed software project. (And you think Duke Nukem Forever was in development for a while!)

If the developers want to continue developing it, great. But I hope the project is not siphoning off any resources from the FSF's productive work. But I have my doubts as long as the FSF webpages continue to treat Linux as some sort of temporary work-around to Hurd not being available. (And please, just please, let go of the whole GNU/Linux thing... that ship sailed about fifteen years ago.)

Yes, if you don't mind getting the performance of a Pentium 3 out of your Core i7 system whenever that driver becomes the I/O bottleneck. All that extra context switching basically guarantees it will happen regularly.

I think you confused the purpose of the IOMMU. It's for restricting the device's memory access. Without an IOMMU, any firmware running on the device's coprocessor can access main memory unrestricted, meaning that a hacked firmware can root the machine. The IOMMU virtualizes the device's access to main memory so that doesn't happen. On a machine without an IOMMU, you can still run device drivers in user space as long as the kernel sets up the correct memory mapping for the device's PCI address space. Th

MMIO is slow. To me the IOMMU is as important as the MMU was in the past to ensure system stability and security. You certainly do not want a malicious or buggy piece of hardware writing crap all over the rest of your system any more than you would want someone's program to do the same in a system without memory protection.

I can see corner cases for microkernels, like single-purpose/banking/ATM machines where security is paramount. SCADA (though latency might be an issue) might be another good application. However, I bet there isn't hardware fast enough to compensate for them in performance-critical systems like stock markets.

High-frequency trading systems are not a typical example of anything, and shouldn't be held up as a typical use case for any reasonable operating system. They do things like use custom network hardware which plays fast and loose with the standards, and could easily be compromised if they weren't on a fairly trusted network.

With multiple cores, and a lack of applications that can thoroughly stress them, one could run the ring 0 kernel-mode stuff on one CPU, ring 1 on a second, ring 2 on a third, and ring 3 on a fourth. So performance doesn't necessarily have to take a hit.

I'd think that initially very few people would be running multimedia, databases, or simulation programs on HURD. Compilers/dev, yep. But in real life very few programs are embarrassingly parallel so as to easily stretch systems no matter how many cores are tossed at them.

I'm sorry, but what? Running drivers as part of the kernel in ring 0 -- which is the Linux model, since it's a monolithic kernel -- is a better security model than user-space drivers? How about running as root and writing directly to /dev/mem for memory-mapped devices? Is that a better security model, too?

A "context switch" in a macrokernel OS (on Intel hardware; architectures which support tagged TLBs have a different tradeoff) is a single thing. In L4, the various parts of a context switch are decoupled and the kernel tries very hard to only do as much as it needs to. For commonly-accessed drivers, for example, an IPC round-trip requires only a selector switch (which a macrokernel OS does anyway when it enters and exits kernel mode) and avoids

You've missed that the "beige box is the hard drive" people don't consider something an operating system until it has a GUI solitaire card game in userspace. To them that card game is part of the operating system simply because it came with the computer. Anything with less than that they consider too archaic to have an operating system. Text screen on a Linux server? "How quaint, obviously far less advanced than a TV program recorder," they think. Thus it's better ignoring them instead of trying to fee

I really think you're wrong. QNX, for example, is an amazing, fast operating system. Microkernels make certain things difficult, but for all of those difficulties there are technical solutions. That HURD can't implement these is not the fault of the microkernel architecture.

An interesting historical tidbit about QNX is that it was started more or less on the basis of a textbook implementation of a microkernel with real-time features. In the literal sense that the company's co-founders did a class project where they implemented a basic realtime microkernel in an OS class, wondered why there wasn't something similar in the marketplace, and founded a company to sell it.

It used to be slow on single-core CPUs. But does that have to be the case with today's multi-cores? I would think that on x64, each ring could have its own CPU. They could run the microkernel on one core in ring 0, the OS management on a second in ring 1, the user-mode programs on a third in ring 2, and any VMs in ring 3.

My understanding is that inter-core communications, while fast, are not as fast as the things a single core can do by itself when only interacting with the level 1 cache. Since the rings of a microkernel would be communicating very frequently and as fast as possible, I'm not sure it would work better.

But more importantly, free software is full of tens of thousands of experiments that didn't seem to make sense at a start. Most wither and die, a few become very big and hugely popular, and even the ones

The Amiga microkernel was fast because there was no memory protection. "Kernel" entry consisted of pushing a few registers to the stack and doing a jump. Context switches were similar.

Practically no one is willing to do without memory protection today, and it is likely that achieving Amiga-like context switch times while retaining some kind of memory protection would require significant hardware changes.

Your answer sounds like nothing more than the regurgitated result of the Torvalds-Tanenbaum debate. Basically it was an argument between the creator of Minix (Tanenbaum) and Torvalds, who was inspired to write Linux after playing around with Minix. Torvalds outright called Tanenbaum an idiot, and since then we have this single argument as some sort of proof that macrokernels are the holy grail of OS design. And this was over 20 years ago. Though in the end the Linux kernel won because it was available