Has Linux lost its root vision?

When I first started using Linux it was fast. Even the most bloated of desktop environments, GNOME and KDE, were blindingly fast compared to other operating systems on the same hardware. This was because the unsung volunteer heroes of the open source world took pride in producing the best code they possibly could. It was their measure of status in the programming world. Producing better, faster and more efficient code elevated them in the eyes of their peers.

That was their motivation in programming open source code. They could throw out a challenge and say "Look at my code. I think it's neat. If you can improve it then go ahead." If others did improve that code, it was taken as a challenge to write even better code. This sort of friendly rivalry resulted in a lightning-fast and efficient operating system. In those times the root vision of Linux was to produce the cleanest and most efficient code possible.

Yet today it doesn't seem so. Today it seems that those who contribute to open source and Linux are focused more on adding features and new programming gadgets than on neat and efficient code. It seems to me that the quality of the code has taken a back seat in the race to be the first to bring out the next whiz-bang menu, piece of eye candy or killer program. The challenge now seems to be along the lines of "Look at what I created. It looks good. If you can make something look better then go ahead."

In short, once something has been created it is dropped like a hot potato and the next bauble on the horizon is chased after, leaving behind a littered landscape of inefficient and buggy code that end users struggle through. The attitude towards this so-called legacy code is that no one really wants to clean it up because it is boring and not glamorous. Sure, there are people who still take pride in cleaning up their code, and I applaud them. They are to me the real heroes of the open source world. All the rest I call gunners: they are gunner do this and gunner do that, and they never do anything properly.

What brought this on? A remark by a co-worker yesterday when he asked me about Linux. He said that he tried Linux once but it was slower than Windows, so he left it. This shocked me because throughout my Linux life I have believed that Linux runs faster than Windows. I still do. And yet...

A year or so ago I started looking at faster, lighter desktop environments. My beloved desktop environment, which I had used pretty much since I began with Linux, was seeming slower and slower. It was taking longer and longer for programs to start when I clicked on them. And on the rare occasions when I have to restart my computer, after a power failure or a kernel upgrade, the wait for a usable desktop was getting longer and longer. Changing distributions didn't change much, and every update seemed to add more to my waiting (dis)pleasure.

What do you long-time Linux users think? Has Linux slowed down? Are the contributors focusing more on bits and pieces instead of the nuts and bolts? Has the Linux focus changed from being efficient and fast to being colourful and eye-catching? You tell me and we will both know.

12 Comments

Jul 30, 2009

I would have to say that Linux may be getting slower. Despite what I hear of the miracle 27-second boot time on Ubuntu (ext4), and of course the one-second boot on an integrated system, it takes my system about a minute and twenty seconds to boot, and another forty until I have a usable X session on hand. Comparatively, this is something of a slow boot time. I could probably trim it down (I'm on ext3, so there's a bit of a clog in the system there), but honestly things haven't been as fast as one might expect.

Part of the problem is the huge amount of hardware support being added to the kernel. Another issue is the autodetection of the hardware each time you boot the computer. The "old" way stored the configuration of the major components (graphics card, monitor, keyboard, mouse and so on) during installation, and these settings were then used during subsequent boots. The "old" way, of course, has its own issues when hardware is upgraded or added.

What I'd like to see is an "optimise" process run during installation and each time the kernel is upgraded, where any unnecessary modules are disabled. This would also require an "Add hardware" option where new hardware is detected and enabled, in a similar manner to software installation through Synaptic/YUM/YaST/Portage/[insert favourite package manager]. This would shave a considerable amount of time off the boot process, especially for most desktops and notebooks where the hardware doesn't really change. Anyone who is constantly moving around to different projectors, printers and so on would just do a quick "Add hardware" and away you go. You could even look at saving "hardware profiles" if you were moving between a number of locations where the infrastructure stays the same.

Actually, for any hardware change there would probably be only three places to look: USB, PCI and /proc. A hash could be made of these locations at first boot and then compared at subsequent boots. If the hash (md5sum?) changes, then the add-hardware routine could be called. If not, then it's move along now, nothing more to see here.
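A very rough sketch of that idea in C, purely as an illustration: the paths checked, the state file /var/lib/hwhash and the cheap FNV-1a hash (standing in for md5sum) are all assumptions of mine, not anything from the comment above.

    #include <stdio.h>

    /* FNV-1a: a tiny stand-in for md5sum, good enough to notice changes */
    static unsigned long long fnv1a_file(const char *path, unsigned long long h)
    {
        FILE *f = fopen(path, "r");
        int c;
        if (!f)
            return h;                /* a missing file just leaves the hash as-is */
        while ((c = fgetc(f)) != EOF)
            h = (h ^ (unsigned long long)c) * 1099511628211ULL;
        fclose(f);
        return h;
    }

    int main(void)
    {
        /* hypothetical "places to look"; a real tool would enumerate more */
        const char *spots[] = { "/proc/bus/pci/devices",
                                "/proc/bus/input/devices" };
        unsigned long long h = 14695981039346656037ULL;  /* FNV offset basis */
        unsigned long long old = 0;
        FILE *state;
        size_t i;

        for (i = 0; i < sizeof spots / sizeof spots[0]; i++)
            h = fnv1a_file(spots[i], h);

        state = fopen("/var/lib/hwhash", "r");    /* hash saved at last boot */
        if (state) {
            if (fscanf(state, "%llu", &old) != 1)
                old = 0;
            fclose(state);
        }

        if (old == h) {
            puts("hardware unchanged: move along, nothing to see here");
        } else {
            puts("hardware changed: call the add-hardware routine");
            state = fopen("/var/lib/hwhash", "w");
            if (state) {
                fprintf(state, "%llu\n", h);      /* remember for next boot */
                fclose(state);
            }
        }
        return 0;
    }

Run early in the boot sequence, something like this would skip the full detection pass on the overwhelming majority of boots where nothing has changed.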

Jul 30, 2009

Linux doesn't really seem that slow. Look at Xandros on some of the EeePC netbooks: it has a pretty good boot time.

Even a lot of the installations from Live CDs still use less space than a default Windows XP installation.

Jul 30, 2009

Yes and no. Even the current vanilla kernel with autodetection enabled doesn't take long to boot. My typical medium mixed desktop/server configuration boots in less than 20 seconds (my reference is Arch, since that's the only system I use nowadays) on medium hardware. That's fairly good for a system capable of supporting about every device you attach and starting a bunch of server daemons. Less is only "needed" on, for example, netbooks with a more fixed hardware configuration.

Desktop environments and software running on top of the Linux kernel are, however, a mixed bag. Some pieces of software are becoming ridiculously slow compared to what they achieve; others, though, are as efficient and low on resources as in the old days. Unfortunately these changes widen the gap between the so-called power user and the lay user. Light applications have become the interest of the initiated, which is kind of unfortunate. There are many really good applications out there, running lightning fast.

A polished look will of course attract more users than will arguments about better code and hence better performance. I therefore hope that we've seen the end of the fancy-looks rat race; it should be enough to improve on what we have today. Good hardware is unfortunately also an excuse for bad code, or bad decisions about code, a pitfall the big proprietary colossus also fell into.

To sum up my view: Linux in itself isn't slow, but from there you can choose either a slow or fast path and we have examples of both.

Garry O regularly complains about how the Linux community isn't doing enough to challenge Windows.

It could be said, Locutus, that what you and others in this thread are describing is the result of Linux developers doing exactly that. Making Linux all-singing, all-dancing (i.e. things like automatic hardware detection) so that it represents a more complete, easy-to-use system (even an alternative to Windows!) comes with costs as well as benefits.

Don't get me wrong, I love the way that I can install Linux on almost any piece of hardware around and run just about any program I want or need. But if Linux being slower is a result, then that's unfortunate. I will live with it for now.

Hopefully attention will be paid to keeping code in all aspects of Linux lean and mean in the future - as I am sure much of it is now. That is just good practice, even with ever faster processors and more disk space. Let's not fall into the Windows trap.

Personally I find a few things on my Fedora desktop machine are slower than I would like, but there are none that are worse than my previous system with Windows.

Can't say this Ubuntu 9.04 on a single-core Athlon with 2 GB of RAM is slow: 60 seconds to boot (including login), and programs open fast enough. Given the choice between a fast boot with poor hardware support and a longer boot with good hardware support, the latter is preferable IMHO. It's no good getting to the desktop in 20 seconds if it takes you days of hassle and kernel compiling to be able to use your printer!

Two things that may be having an impact:
1) The hardware paradigm shift for desktop computers (the move to multi-core processors instead of single-core ones). Most desktop application software has been written exclusively for single-core processors, which means it uses only one of the cores on the chip. And the actual speed of an individual core on a multi-core processor is typically lower than that of a single-core chip.

Also, if a program has been optimized to run on multiprocessor or multi-core systems, it may run slower on a single-core system. Another unfortunate factor is that most application programmers are not accustomed to multi-core/multiprocessor programming. Previously a programmer could rely on a compiler to optimize his code, but a majority of the compilers out there were built for single-core systems. They don't optimize well for balancing operations across multiple cores.
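To make that concrete, here is a hypothetical illustration (mine, not the commenter's) of the kind of restructuring a compiler of that era would not do for you: splitting an independent loop across two POSIX threads so both cores do half the work.

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    static double data[N];

    struct span { int lo, hi; };

    /* each half of the array is independent, so two threads can work safely */
    static void *scale_half(void *arg)
    {
        struct span *s = arg;
        int i;
        for (i = s->lo; i < s->hi; i++)
            data[i] = data[i] * 2.0 + 1.0;
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        struct span a = { 0, N / 2 }, b = { N / 2, N };

        /* the single-core version is one loop over all of data[]; here
           the second half runs on another thread while main does the first */
        pthread_create(&t, NULL, scale_half, &b);
        scale_half(&a);
        pthread_join(t, NULL);

        printf("data[0] = %f, data[N-1] = %f\n", data[0], data[N - 1]);
        return 0;
    }

(Compile with gcc -pthread. On a single core the thread overhead makes this no faster, and possibly slower, than the plain loop, which is exactly the trade-off described above.)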

2) More and more open source programmers are not just enthusiasts. More programmers on a project means not just writing highly optimized code, but also writing more maintainable code. The following two C statements are equivalent:
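(The statements themselves appear to have been cut off when the comment was posted. The pair below is only my guess at the classic sort of example meant, with the bit-shift as the "first" statement and the multiply as the "second".)

    x = x << 1;   /* first: doubles x with a bit-shift, historically the fast way */
    x = x * 2;    /* second: plain multiply; a modern compiler emits the same code */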

The first statement could be considered far more efficient than the second, especially before compilers became as good at optimizing as they are now. One may even have seen a speed boost if the statement was inside a large loop. However, the second statement is easier to understand and maintain than the first. Large monolithic, linear programs (no loops or function calls) will run faster and more efficiently than modular programs with the same functionality, but when it comes to maintainability, modular programs win hands down.
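(Again a hypothetical illustration of mine, not from the comment: the same computation written monolithically and modularly.)

    #include <stdio.h>

    /* modular: the formula gets a name, and can be reused and tested */
    static double to_fahrenheit(double c)
    {
        return c * 9.0 / 5.0 + 32.0;
    }

    int main(void)
    {
        double c = 100.0;
        double f1 = c * 9.0 / 5.0 + 32.0;   /* monolithic: formula written inline */
        double f2 = to_fahrenheit(c);       /* modular: one call, clearer intent */
        printf("%.1f %.1f\n", f1, f2);
        return 0;
    }

The inline version avoids a function call; the named version is the one a maintainer five years later will thank you for.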

Aug 1, 2009

"Linux" is doing just fine. Considering it's developed primarily as a server kernel, it does a remarkably good job for the desktop. The next layer - glibc, gcc and all the other GNU stuff - is doing just fine too. The health of the next layer 'above' that - Xorg - is perhaps a little more worrying, but I'd say it's also coming along nicely (if a little slowly). I'd say the major GUI toolkits, GTK and Qt, are fundamentally sound too.

The rot does seem to have set in at the upper levels though. I think the plethora of slow, buggy, bloated, inefficient, crappily designed, shoddily and amateurishly written, ..., undocumented stuff that can make GNU/Linux desktop life such a misery these days is a consequence of its increasing popularity. It's an unholy combination of corporate expediency and rush, and an inevitable regression to the mean in the quality of the much larger population of developers.
