mjasay writes "Linus Torvalds, founder of the Linux kernel, made a somewhat surprising comment at LinuxCon in Portland, Ore., on Monday: 'Linux is bloated.' While the open-source community has long pointed the finger at Microsoft's Windows as bloated, it appears that with success has come added heft, heft that makes Linux 'huge and scary now,' according to Torvalds." TuxRadar provides a small capsule of his remarks as well, as does The Register.

"Okay, so the summary of this is that you expect that 12 per cent to be back to where it should be next year, and you expect someone else to come up with a plan to do it," joked Bottomley. "That's open source."

That is also the problem. Everyone adds pieces and eventually it starts to become a mess. Then someone else should fix it.

But when it's open source, it's easier to think, "Maybe I can't be bothered to look at this now; someone else can do it." When it's proprietary software and you get the assignment to look at it, you pretty much have to do it.

How does that work? In a proprietary project if your boss says "do this" you either do it or find another job. In an open source project you could just flame the hell out of the guy that told you on the public mailing list and carry on working on something else.

And in a proprietary project, if customers want something fixed they can threaten not to pay, which even in the most incompetent company will tend to make your boss tell you to fix it. In open source that mechanism does not exist.

In FreeBSD, you choose to accept a project. If you fail to perform, you are replaced with another volunteer. It doesn't matter if you're a core committer or a port maintainer; it all works that way. There are occasional problems, but overall it's a successful approach. Many other open source projects do the same. That's why hierarchies work in open source--they hold people accountable just like in a proprietary project.

It gets done because ultimately somebody says "Fuck this, I can't work on this bloated codebase any longer. We're refactoring, guys!"

Then, if the old lead dev / maintainer / admin doesn't like it, a fork happens...

Projects where this has happened before: the kernel itself, several times (as well as various subsystems, again several times), X (XFree86 to X.Org), KDE (2-3, 3-4), Amarok (1.x to 2.x), SodiPodi -> Inkscape, Firefox from 2 to 3... These are off the top of my head, of course - there are lots more.

Of course, there are some cases where this process has failed. I don't think the failure rate is any higher (or lower) than proprietary projects, though...

Precisely. The grandparent is forgetting that, in the proprietary world, the scenario you described can't happen. I can't go to my boss and tell him, "Screw this, I'm going to spend the next month refactoring our messy code, rather than adding new functionality." However, I can do that in an open-source project.

The same way people in a raid guild do what they're supposed to in raids even though it's only a game and raid officers can't really do anything to you; or members of Civil Air Patrol follow military customs and courtesies toward their officers despite those officers having no actual UCMJ authority; or people in the SCA listen to the nobles of their "Baronies" despite those people not having any real-world authority. When you join a group or a project, you agree to abide by the rules of the group or project. If you eventually find that you can't, you generally either leave or are forced out. If the project lead on a properly managed project asks you to do some boring grunt work, you either do it or find a new project, and someone else will be asked to do the work.

If the project is generally fun or personally beneficial for you to work on, you'll do the grunt tasks you're asked to do, because otherwise you'll eventually be off the project. If the project wants to keep its user base (and most do), it'll fix as many problems as it can to keep the users happy.

How does that work? In a proprietary project if your boss says "do this" you either do it or find another job

You don't work in software, do you? I've worked at 5 different companies as a software engineer, and in none of those jobs did my boss ever tell me to fix the crappy parts of the software I was assigned to work on. In fact, at none of them did my boss even take the time to look at the code itself. It was always "[we | customer x] need [feature | bugfix] y within z [hours | weeks | days]. Make it happen."

How does that work? In a proprietary project if your boss says "do this" you either do it or find another job.

Sure... You're given an assignment and you basically have to do it. But somewhere along the line somebody has to decide what is a priority and what isn't. Somebody decides what actually gets done. And it doesn't really matter if it's a proprietary project or not - stuff slips through the cracks.

You think a company is going to drop everything to refactor some code just because it's getting a little long in the tooth? Even though everything works? You think a company is going to put a whole lot of time

That's false, of course: 1) the deciding factor for project management is the non-commercial/commercial status of a project, not the closed/open state of the source.

2) for non-commercial projects, both developers' goodwill and proper management are needed to avoid bloat; whereas for a commercial project only proper management is needed (as the management decides where the money will go).

Note that the Linux kernel is a blend of non-commercial and commercial projects as many developers are paid to work on the Linux kernel and many aren't.

When it's proprietary software, management will be too busy handing out assignments to add new sales fodder, excuse me, features to worry about actually doing anything proactive to improve the code base. Having a slimmed-down code base may be good in the long run, but doesn't do anything towards getting the next bonus.

The people who hate messes are the developers who have to look at it day after day. Cleaning up code is never something managers care about; it's always driven by developers with a sense of order and simplicity.

That means that Open Source software has a higher chance of getting cleaned up than proprietary software, because there you have a higher percentage of truly motivated developers and no managers to bug them. Sigh...

If only there were somebody at the top deciding what to let in/reject in such a way as to keep the bloat out! While I am a linux/gpl fanboi, I think the BSD distros don't have this problem because they have much stricter people at the top of their kernels, and I think this is yet another sign that Linus should not be the only one running the show. If Linus isn't producing the kernel desktop users need (it's bloated, has the wrong scheduler, etc.), then distros should step up and work around the problem. Git makes it very easy for them to start elsewhere (their previous release tree, the -mm tree, etc.) and add the patches they require!

Before you jump at me and say that this will ruin Linux by duplicating work: it will still be (essentially) the same code that goes into the pool; it's just the administration that changes. And producing incompatible distros isn't a problem, as the userspace API is fairly stable, and changes to the ABI for prop drivers can be agreed on by the major players (or they can just follow Linus's changes to them, or go crazy and stabilise the ABI so that the prop drivers work).

Keeping the bloat out is not just about rejecting patches, it's about encouraging code reuse. In the BSD kernels, for example, the WiFi drivers are very small and all use the same code for everything that is not hardware-specific. I believe this is the case in Linux now, but for a while Intel had their own (almost) complete WiFi stack for their drivers and no one else used any of that code. This is a pretty endemic problem in Linux. It gets even worse when you stray a little way from x86, and find that everyone is implementing their own, incompatible, code for platform-specific features without realising that a lot of it ought to be shared everywhere above the very lowest layer.

You could tweak your driver and improve its code instead of spending all day chasing the latest KBI changes.

I've written a few proprietary kernel modules, and I don't think this problem is as significant as you believe. I found that it was pretty easy to take a stock kernel, build my driver to target it, and then move forward and build a set of version-dependent macros for the different KBI changes as they crop up. It's not like they change the entire KBI every day, and unless you're par

The BSD distros do not have this problem, but it's not just the strict top-down management.

It's the users.

Linux is trying to court three major user groups with the exact same kernel, and trying to be all things to all people. The big corporations who make up most of the Linux coding/funding/purchasing want better server performance (more processors, more RAM, etc). The desktop guys want better desktop, laptop, and netbook experiences (3D graphics, sound cards, processor power scaling). The third are the end-users who contribute almost nothing but want the system to be easy and simple.

BSD, however, really only has one user base, and they largely want the same thing: stability, security, and performance. So all the cute little desktop-friendly stuff that Linux keeps adding and all the server-specific stuff that Linux keeps adding aren't there. There's just the one major direction.

Why should Linus focus on Desktop Linux? It is a dead horse... Deal with it. It's a dying market. Let Microsoft go down with that ship. Linux should be designed better for cloud/distributed processing and server stuff. We are getting to a point where we don't need desktops; we need a thin client that can connect to the network. And let someone else do the work.

The point is that the "perfect kernel" from a system developer's POV is a "piece of useless junk" from the POV of application developers.

The sound interface is at best dysfunctional. Video acceleration is constantly "under construction" (being redone for the 5th time now or so). Real-time timers required for smooth multimedia and games are still nowhere to be found.

Just look at Apple's Mac OS X for how problems have to be handled. Instead of debating about what should/shouldn't be in the "perfect kernel," people concentrate their work on areas which are actually relevant to application developers and of benefit to end users. Apple took the line "if we don't do it, who else will," while Linus's official line is something like "do it in userspace" or "I don't care. I don't have to. I'm a system programmer."

by Jurily (900488) on Tuesday September 22, @09:38AM (#29503123)
While I am a linux/gpl fanboi, i think the bsd distros don't have this problem because they have much stricter people at the top of their kernels, and i think this is yet another sign that Linus should not be the only one running the show.
Heh. BSD doesn't have this problem because nobody cares enough about them to contribute enough code. You don't really have to think about feature creep at 3 patches per week.

More FUD, thank you. BSD has a large and dedicated fan base, and rather than just putting any code willy-nilly into the kernel, it is carefully evaluated. FreeBSD powers the root nameservers, and OpenBSD is arguably the most secure operating system in the world. With reputations like this to uphold, state-of-the-art features are often not added, in order to maintain stability and security. No need to start a flamewar. Both BSD and Linux would be better off with cooperation, rather than conflict. Especially because Li

Clearly whoever modded you up has never tried what you are suggesting. I can only name a handful of open source projects that backport security fixes to old versions, and of those, they only backport to versions a few years old.

In fact, I'd say the longest lived "old version" is probably Apache 1.3. The 2.x series has been out for, what, forever and yet they continue to push out fixes for 1.3 (last was Jan. 2008).

I'd wager the biggest complaint I have with most open source is that they a) don't understand what true stability means and, as a result, b) rarely support old versions. It was one of the prime reasons I switched to FreeBSD. If I install FreeBSD 6.2 today, I know I'll get security fixes for at least a good half decade, and probably a bit more if I track the 6.x series.

Erm, actually it's quite the opposite: Windows XP got security patches for years; I doubt you'll find a safe 2.6.8 (~2004) kernel about. Even "slow" distros like Debian only backport security fixes for 3 years; after that you have to upgrade, or start maintaining your own kernel.

Yeah, I was rooting around in your system just the other day. You seem to be completely frugal when it comes to purchasing new software. On the plus side, your system is completely useless to me outside of just exploring around it a little. Good call on staying arcane.

I see where you are coming from, but I'll offer that bloat isn't necessarily *bad*. Personally, I've thought of Linux as somewhat to rather bloated for 5 or 6 years.

It just means there are a lot of available features. Many of which people need.

Bloat isn't a problem. In software, it's in a lot of places because that's what you need in many (but not all) cases that target a wide audience. The problems come in two flavors: 1) the inability for an individual to turn off the bits he or she doesn't need, and 2) lack

I can easily compile a linux kernel that runs in very little space on a super slow processor and it screams.

The problem is that the "bloat" Linus is talking about is simply plain old kludgy coding done to get it out the door faster. Adding features needs to stop, and all kernel coders need to work on cleaning things up. It's the sucky part of the job that nobody wants to do, but it needs to be done. I've seen the insides of some kernel modules that will make your toes curl in fear; they are early prototypes, pre-alphas at best.

Until it causes system instability, slow performance, or increases the size of the code without adding any new features or fixing a problem. Bloat can become a problem, but it doesn't have to be. I thought I would just point that difference out because "isn't" seems to be an absolute which it shouldn't be.

Often the term bloated is misused meaning the speaker is at a point where he/she personally starts to find a technology confusing to wade through. Different people perceive different "bloat" points, so it's often relative. When it comes down to it, bloat is just software. As long as the pieces are loaded and run efficiently enough that the end-user, sysadmin, etc is happy then bloat is often a moot point and each person only needs to understand their own role and related facets of the software. We work as a

Torvalds' use of the term "Bloated" in this case refers specifically to a loss of performance and an increase in size and memory usage, not of confusion.

I think there are two (competing) goals for the Linux kernel as a whole (well, there are as many goals as there are developers, of course, so the two competing goals are more of a continuum).

On one side, there is a desire for the Linux kernel to support more features so distros can be built to be more like popular mainstream operating systems like Windows and Mac. Ease-of-use, a pleasant user experience, separation/insulation from the dreaded Command Line, pretty graphics, massive hardware support, and support for more "oddball" configurations like multiple screens, etc. So it's desirable to have lots of driver support and lots of hooks into the operating system to support fancy stuff.

On the other, there is a desire for Linux to be small, sleek, and fast, particularly for embedded projects.

The former has been running the show for a while, and I think that's healthy and positive, but the kernel has gotten larger and slower at its basic job. For desktop users, this is good news since a lot of things that had to be done at "higher" levels can now be accomplished directly in the kernel, so they might actually have a faster user experience, and they've got resources to burn since most PCs are specced out for Windows, so Linux has a lot of spare growing room in that hardware.

But for embedded/minimalist supporters, it means they need to add more hardware to their machines to support the now-larger kernel, chock full of features they'll never need or want.

On one side, there is a desire for the Linux kernel to support more features so distros can be built to be more like popular mainstream operating systems like Windows and Mac. Ease-of-use, a pleasant user experience, separation/insulation from the dreaded Command Line, pretty graphics, massive hardware support, and support for more "oddball" configurations like multiple screens, etc

I risk sounding like Stallman here, but in this case the distinction actually matters. We're discussing the kernel, not the OS.

Things like device drivers can be easily diked out. When it comes to stuff like memory managers, VFS, CPU schedulers, basic networking, so on and so forth, I imagine that the bloat hurts the embedded guys more.

15 years ago you'd install linux and get a CLI, right? So you'd have a little blinking underline and that's it.

Today you boot, with most distros, into a fully functional GUI with support for 100s of devices.

You generally can't have both "unbloated" and "desktop ready" at the same time. About the only way to do that would be for the linux devs to first insist on a CLI and to also design the hardware from the transistors all the way up to the DLLs. A lot like Apple IF Apple booted to a CLI. Then you'

> The BIOS - take a look at the LinuxBIOS or OpenBIOS work to see where that can be improved.

But oh, my dear goodness, it can be improved.

> Incredible masses of new hardware that do need detection and configuration at boot time.

That's been a sore point: it takes time to scan for all that hardware, and you can optimize it by leaving out tools, but people do like having their network cards and USB drives and graphics tablets work automatically at boot time. Tha

init scripts especially are rather idiotic, and it's a testament to how much crap Windows is doing that Linux distros manage to load in roughly the same time.

It's especially dumb when things that could start after the system has finished booting, like samba and ssh, instead start first.

Likewise, driver detection. Um, no, you don't do that on startup, unless it's a first-time boot. You do that when the system is running, which means the very first time someone boots with that fancy new sound card the start

I'm afraid that hardware detection may well be required, because critical services (such as NFS exports or MySQL) which rely on mounted partitions must, in most large-scale environments, have those directories already mounted before running 'exportfs' or before starting the relevant services, or they can create incredible chaos. And the flushing of /tmp/ is tricky: it's much safer to do at a well-defined init step, before the other services are running, and not potentially scrub weird components out from unde

The difference is make menuconfig & modprobe -r. Bloat in the Windows kernel is compulsory! Bloat in the Linux kernel is optional, and much of it can be removed at runtime. Of course, if the whole kernel is getting worse every release, then that is bad. So before making comparisons to Windows, it's important to remember that an extra 10% of something small (once you trim the crap you don't need) is less than an extra 10% of something big (because you can't).

Linus's approach has always been "What the hell, throw it in the kernel". The result is that if you try running Linux on something like a Nokia N800 or N810, where there's only 128MB or 256MB of RAM, it crawls and thrashes even with the swap on flash memory.

Of course nobody refers to Windows' kernel when people call it bloatware. Linus however is not talking about Linux as a distro or an operating system, it's just the kernel that's too bloated in his view. And with over 11 million lines of code, it's hardly even a flame.

QNX, on the other hand, offers a faster deploy time: you don't have to spend time wrapping your own embedded distro for your product, just pay the QNX license fee and you're off.

Four years back I proved that by making my own Linux install for a company product and kicking out the QNX system. It ran far faster, but they did not want to pay to support a custom OS, and they had already paid for the QNX licensing, so we stuck with QNX.

Well, if you just take a look at this monster [makelinux.net], I think you'll quickly come to the conclusion that even providing the most basic functionality can lead to something quite complicated. And of course, "basic functionality" in 2009 means something else entirely when compared to 1991, when Linux started out.

It should be noted that the module system, of course, works pretty well to keep things organised, so no developer needs to dig through millions of lines of code to make a few tweaks. But it's a monster nonetheless.

I don't know anything about kernels, but shouldn't they only contain the absolute minimum necessary functions of an operating system? What are the things that can make an OS kernel bloat up to 11 million lines? Is everything that is in the kernel truly necessary, or could you move some of it to a driver or something?

er... um... drivers are distributed with the kernel and are probably counted in the kernel 11MLOC metric.

What are the things that can make an OS kernel bloat up to 11 million lines?

Mostly drivers. Which are kind of irrelevant with regard to bloat, because if you so desire, you can build a kernel that only contains the drivers you need. I realize that no distro can realistically do this with their pre-compiled kernels; however, no one is going to compile support for everything the Linux kernel is capable of supporting into a single kernel either.

I still think it is funny that Linux is considered "bloatware" when Windows will still use several times the resources that Linux does. For instance, take any desktop distro (Ubuntu, Fedora, etc...) and a complete installation including multiple desktop environments, browsers, office suites, etc... still takes up less disk space, memory and CPU than does a bare installation of Windows Vista/7.

For instance, take any desktop distro (Ubuntu, Fedora, etc...) and a complete installation including multiple desktop environments, browsers, office suites, etc... still takes up less disk space, memory and CPU than does a bare installation of Windows Vista/7.

I'm sorry, but you seem to be severely misinformed regarding the performance of modern Linux distributions vs Windows 7 on modern hardware.
Yes, sure, you can use something like Debian and it will run faster than Windows 7 out of the box, but at what cost?

I can tell you this: Vista (!!!) appears to run smoother and with a more-responsive UI on my laptop than when I try a default ubuntu install on the thing (for example, flash just crawls when I am viewing it thru firefox in ubuntu).

It has been my experience in the past that every time I install linux, it runs slower (or at least appears to run slower) than the windows install on the same machine.

I'm not trying to troll. Maybe someone could explain this phenomenon to me. I actually *want* to switch, but I can't if the alternative is providing a degraded experience.

Uh, I'd love to say we have a plan. I mean, sometimes it's a bit sad that we are definitely not the streamlined, small, hyper-efficient kernel that I envisioned 15 years ago... The kernel is huge and bloated, and our icache footprint is scary. I mean, there is no question about that. And whenever we add a new feature, it only gets worse.

And also:

He maintains, however, that stability is not a problem. "I think we've been pretty stable," he said. "We are finding the bugs as fast as we're adding them -- even though we're adding more code." Bottomley took this to mean that Torvalds views the current level of integration as acceptable under those terms. But Mr. Linux corrected him. "No. I'm not saying that," Torvalds answered. "Acceptable and avoidable are two different things. It's unacceptable but it's also probably unavoidable."

I think that's very important to note. His quote by itself is very self-loathing, but adding that it's unavoidable really says a lot. You want to be popular? You have to satisfy more people, and in doing so you become more bloated. He does maintain that Linux remains stable, and that's usually the biggest problem I have with bloat: it decreases stability. I don't think there's any reason to get excited about level-headed rationale and reflection.

Funny, I find Open Office to be bloated compared to MS Office, and KDE/Gnome to be bloated compared to XP.

That's why I use the best tools for me: MS Office and XP (in that order)

It's not perfect, far from it, but it works the best for me. KDE, Gnome, and OO just feel like molasses every time I try them, and don't misunderstand: I've spent years under KDE, but given up on it every time after spending ungodly hours fixing what should work out of the box. OO has an awful UI. I can't use it. It feels like a program from the early 90's which you

It's mostly because Linus isn't talking about the "Linux" you're talking about -- that is, a whole Linux distribution, as compared to other OSes.

He's talking about Linux itself, compared to what he thought it would be.

Basically, the original plan for Linux was never to be an OS in its own right, but to be just another POSIX kernel, one highly-tuned for the then state-of-the-art 386 chip. Even porting to PowerPC was never part of the plan. The fact that this kernel is so flexible and featureful -- that it has drivers for damned-near everything, that it runs on everything from cell phones to mainframes, from set-top boxes to thousand-machine clusters, from wristwatches to... Yeah, all that portability necessarily makes it bigger than what would strictly be needed for one architecture and a limited set of hardware.

It's also got to do with things like multiple schedulers, and it explains something of why Linus wanted one scheduler to rule them all -- the idea of pluggable schedulers is ludicrous, compared to the original idea of one kernel per platform, where you wouldn't have a Linux app, you'd have a Posix app that would run on Linux on x86, and on something entirely different on PPC, and yet another kernel on ARM. If it had been done that way, at least in theory, all of those kernels combined should've still been smaller than Linux currently is.

About two years ago I tested whether my Gentoo kernel was really faster. Disabling 3/4 of the options really just improved boot time and memory footprint, but not overall performance that much, and certainly not by anything like 12%. Compared to a modularized kernel with just the needed stuff loaded, the difference was negligible. I'm not sure if Torvalds is telling the truth about the reasons. To me it seems that the central, overall kernel architecture has degraded over time with regard to performance.

I always thought that building drivers into the kernel was going to be Linux's downfall. There is an un-ending supply of equipment that requires drivers and they can't all go into the kernel without some repercussions. Let alone being a black hole that continually sucks up stuff and never deletes it. This design may work well for a small system with limited hardware but is doomed to fail at some point when trying to scale it up for the real world.

I think the GP's concern is not about performance but about maintainability. Being a module doesn't really affect that. When the driver API changes, every driver has to be changed. The more drivers, the more work has to be done. What adds to this problem is that these APIs really do change in Linux.

What you write makes no sense whatsoever. The kernel provides interfaces between its core services and the drivers. It doesn't matter how many drivers exist, so long as they use the proper interfaces. All kernels work this way.

I have a laptop HD with my copy of Ubuntu running on it. I popped it into another model of laptop yesterday (from a Dell D630 to a Lenovo T400), and everything worked fine.

I plugged a printer in a week ago, and it worked fine. Connected my Canon camera, and it popped up and asked if I wanted to import the photos. I plugged in my wife's iPod, and it asked if I wanted to open Rhythmbox.

On windows, I would have had to go to countless websites, download drivers (or itunes) install, and troubleshoot. With linux, all of that just worked. On XP it was a pain in the ass to switch between AHCI and compatibility mode on my laptop. With linux, I can switch whenever I want.. it just works..

I guess that we all need to decide. Do we want to run an OS that supports all sorts of peripherals, has libraries for applications developed in many languages and has many additions that are useful for a particular set of users? Or do we want an architecturally neat, clean, and lean OS. If we want the former we go with Linux or Windows. If you want the latter then Minix 3 is pretty neat.

Bloat was always inevitable; if anything, it shows Linux is fostering a vibrant development community. The thing that separates us from the MS bloat is that we can do something about ours quickly and easily. Not all kernel hackers are master coders, so I'd speculate there is quite a bit of shoddy code (no offense) that can be streamlined by new members, or improved by the originals.

It has been a long time since I needed to compile my own kernel and modules, but I can't imagine things have changed that much over the years. Seems to me that when compiling the kernel, you can select out a LOT of hardware support and other options that aren't necessary for that particular installation. It would surprise me to find that the kernel still fits on a floppy disk though.

I still compile the kernel from time to time. It's not that different, and the core kernel compiles quickly. But the modules take ages if everything is enabled. Generally you can disable more than 70% on any given system; then compile time is much faster. With the make -j2 thing on a dual core, I wait less time with Slackware 13.0 than I did with Slackware 1.? on a 486. (Can't remember the kernel numbers.)

This is like the salesman's nightmare, where you take the guy from engineering to visit the customer. Things are going great, the engineer can answer all the customer's questions.

Then you realize, *the stupid bastard is answering the questions honestly*.

Honesty is a basic requirement to be a halfway decent engineer. Persistent and incurable dissatisfaction with how you did the last job is another. Even if you *know* you did a great job, deep inside part of you knows you could have done it *better*.

Then let's do like most other open source projects when they reach that point: analyze the current version, find the good things and the bad things, find possible improvements that were impossible because of breakage and legacy. Once the analysis is complete, start version 3.0 from scratch, implement the new stuff and improvements, then bring the current features in one by one. And don't tell me it can't be done; it has been. And don't tell me it wouldn't be supported: how much time did it take before the 2.6 line was adopted by industry and the critical distros?

Bloated? Of course. It happens in every walk of life. It starts out a lean, mean killing machine out of necessity; otherwise there is no success. Life is tough, and to be anything other than at the top of efficiency is a death sentence.

After achieving success, being fat and lazy is a luxury that is no longer fatal.

This happens everywhere: the jungle, the business world, your job, and governments. Evolution.

Version 5.0 of Microsoft's flagship spreadsheet program Excel came out in 1993. It was positively huge: it required a whole 15 megabytes of hard drive space. In those days we could still remember our first 20MB PC hard drives (around 1985) and so 15MB sure seemed like a lot... In 1993, given the cost of hard drives in those days, Microsoft Excel 5.0 took up about $36 worth of hard drive space. In 2000, given the cost of hard drives in 2000, Microsoft Excel 2000 takes up about $1.03 in hard drive space...

In fact there are lots of great reasons for bloatware. For one, if programmers don't have to worry about how large their code is, they can ship it sooner. And that means you get more features, and features make your life better (when you use them) and don't usually hurt (when you don't). If your software vendor stops, before shipping, and spends two months squeezing the code down to make it 50% smaller, the net benefit to you is going to be imperceptible. Maybe, just maybe, if you tend to keep your hard drive full, that's one more Duran Duran MP3 you can download. But the loss to you of waiting an extra two months for the new version is perceptible, and the loss to the software company that has to give up two months of sales is even worse.

Next year he's going to claim that Minix was doing it right all along. We've seen a lot of Linusisms to that effect... $X needs to be outside the kernel... $Y shouldn't happen the way I've been screaming for years... I told $Z to fuck off because he's stupid but he was right and we need to go do that yesterday... it's just how Linus is. He's an opinionated fat bastard, and then one day he realizes he's fucking wrong and just goes, "SHIT! Well let's do that then >:O"

I did RTFA and I must say the article was poorly written - so much so that the author felt he needed to publish a correction that summarily states (what open source power users already know) that the Linux kernel can be "trimmed or fattened up." It is immaterial that Linux has gotten more bloated, as the fundamental difference between it and Windows is that you as the consumer have the choice to "trim the fat." While I am an open source user, I am pragmatic: I believe Linux cannot be all things to all people, and Windows has some advantages over it. For example, the choices in Linux can be downright bewildering, and each distribution behaves differently with its own quirks. Windows is Windows. Even though distributions share a common kernel, they are really distinct OSes in their own right - applications run differently and have different behaviors. As the Samba developers will tell you, sometimes a compile succeeds on only three out of four large distros. In theory, they should all be compatible.

Let's take a look at the patch history of QNX. [qnx.com] QNX is a message passing microkernel mostly used for embedded systems. But it can be run with a full GUI, runs on multiprocessors, and can be run as a server. Millions of "headless" embedded systems have QNX inside. I used it in a DARPA Grand Challenge vehicle. BigDog, the legged robot, runs QNX.

Drivers are outside the kernel. All drivers. File systems are outside the kernel. Networking is outside the kernel. And they're all application programs, not some special kind of loadable kernel module.

There have been 14 patches to QNX in the last two years. Only one is an actual kernel patch: "This patch contains updates to the PPCBE version of the SMP kernel. You need this patch only for Freescale MPC8641D boards." Only one is security-related: "This patch updates npm-tcpip-v6.so to fix a Denial of Service vulnerability where receipt of a specially crafted network packet forces the io-net network manager to fault (terminate)." Neither Linux nor Windows comes close to that record.

There's little "churn" in a good microkernel. Since little code is going in, new bugs aren't going in. Good microkernels tend to slowly converge toward a zero-bugs state.

QNX generally has a "there's only one way to do it" approach, like Python. Linux supports three completely different driver placements: compiled into the kernel, loadable as a kernel module, or run as a user process. QNX supports only one: run as a user-process "resource manager". That simplifies things. A "one way to do it" approach means that the one best way is thoroughly exercised and tested. There are few seldom-used dark corners in critical code.

When QNX boots, it brings in an image with the kernel, a built-in process called "proc", any programs built into the boot image, and any shared objects ".so" wanted at boot. These last two run entirely in user space; they're just put in the boot image so they're there at startup. That's how drivers needed at startup get loaded. They don't have to be in the kernel. (In fact, you can put the whole boot image in ROM, and many embedded systems do this.)

A QNX "resource manager" is a program which has registered to receive messages for a certain portion of pathname space. The QNX kernel has no file systems; part of the initial "proc" process is a little program which keeps an in-memory table of "resource managers" and what part of pathname space they manage. This is similar to "mounting" a driver under Linux, but it doesn't require a file system to be up during boot. File systems are user programs which start up and ask for some pathname space, after which "open" messages are directed to that file system.
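The pathname-space table is easier to picture with a toy sketch. This is plain Python, not the QNX API; the class and names are made up, but the routing idea (register a prefix, send each "open" to the longest matching prefix) is the one described above:

```python
# Toy model of the in-memory table that QNX's "proc" keeps:
# a map from pathname prefixes to the resource managers serving them.

class PathnameSpace:
    def __init__(self):
        self.managers = {}  # prefix -> resource manager name

    def attach(self, prefix, manager):
        """A resource manager registers for a portion of pathname space."""
        self.managers[prefix] = manager

    def resolve(self, path):
        """Route an open() to the longest-prefix match, as proc does."""
        best = max((p for p in self.managers if path.startswith(p)),
                   key=len, default=None)
        return self.managers.get(best)

ns = PathnameSpace()
ns.attach("/", "fs-disk")          # a disk file system, a user program
ns.attach("/dev/ser", "devc-ser")  # a serial driver, also a user program

print(ns.resolve("/dev/ser1"))  # -> devc-ser
print(ns.resolve("/home/x"))    # -> fs-disk
```

Nothing here needs a file system to be mounted first; the table is just memory, which is why drivers and file systems can come up in any order.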

Another QNX simplification is that the kernel doesn't load programs. "exec" is implemented by a shared library. That library is loaded with the boot image, to allow things to start up. "exec" runs entirely in user space, with no special privileges, so if there's a bug in "exec" vulnerable to a mis-constructed executable, that program load fails and everything else goes on normally.

The price paid for this is some extra copying, since all I/O is done by message passing. This isn't much of a cost any more, because you're almost always copying from cache to cache. That's an important point. Message passing kernels used to be seen as expensive due to copying cost. But today, copying recently used material is cheap. On the other hand, some early microkernels (Mach comes to mind) worked very hard to mess with the MMU to avoid big copies, moving blocks from one address space to another by changing the MMU. This seems to be a loss on modern CPUs; the cache flushing required when you mess with the address space on recently used data hurts performance.

I used to pump uncompressed video through QNX message passing using 2% of a Pentium III class CPU. Message passing, done right, is not a major performance problem.
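The send/receive/reply pattern that makes this work can be sketched in miniature. This is plain Python with threads and queues, not the real QNX MsgSend/MsgReceive/MsgReply calls; the point is only the shape of the exchange: the client blocks until the server replies, and data moves by copying:

```python
# Toy model of synchronous message passing: one server, one client.
import queue
import threading

requests = queue.Queue()  # the server's receive channel

def server():
    # "MsgReceive": take one request, handle it, "MsgReply" on the
    # per-client reply queue. The payload is copied, not shared.
    payload, reply_q = requests.get()
    reply_q.put(payload.upper())

t = threading.Thread(target=server)
t.start()

reply_q = queue.Queue()
requests.put(("ping", reply_q))  # "MsgSend": send the request and...
result = reply_q.get()           # ...block until the reply arrives
t.join()
print(result)
```

In real QNX the client thread is descheduled until the reply comes back, so a round trip is two copies and two context switches; that, not the copying, is the cost being discussed above.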

And that is also utterly impossible while there is a single line of GPLv2-only code in it whose author doesn't give permission, or who is dead. There's quite a lot of code like that: a lot that can't be traced to an author, a lot of authors who won't give their permission, a lot who *can't* give it (employers, etc.), and there's so much of it that recreating it from scratch without reference to the original code would actually take longer than just starting a GPLv3 project from scratch.

The biggest problem with that is USB devices. Who knows what weirdo USB hardware you're going to want to plug into your computer in the next couple years. Using the stock Debian kernel, that's something I really don't need to worry about.