The reason I switched to Linux was that it seemed like a developer’s dream come true. Compilers are a package-manager operation away (and generally aren’t tied to OS versions, I’m looking at you Apple), everything can be recompiled to address any particular concern, and there is a crapload of weird languages to write your software in. This is especially awesome when compared to a typical proprietary software stack, where one can’t easily fix problems that involve multiple components due to licensing issues, due to not having the code available, or due to not having the development environment available. So one would think that Linux distributions hold the ultimate software power: an unlimited pass to modify their offering to their target audience’s content.

Unfortunately my view of the Linux ways is unrealistic. For example, when people do friggin’ awesome work knocking boot time down to 5 seconds by hacking and slashing their way through the entire software stack involved in boot-time delays, distributions claim that it’s not something they can seriously consider because they are too set in their ways of general-purpose (and generally bloated) init systems and stock kernels (worst case: why can’t we recompile the kernel on the user’s machine, or ship a couple of custom variants for the common hardware out there?). What’s the point of open source if we continue pretending that everything is a general-purpose black box that doesn’t like to play together? It’s been two weeks and I haven’t seen a single distro bite the bullet and list a 5-second boot among their goals. Come on guys, don’t you like to be challenged?

Hope someone proves me wrong.

This entry was posted on Thursday, October 9th, 2008 at 9:18 am and is filed under Uncategorized.

4 comments

I agree completely. However, in general I think that the distributors do a great job. I am more angry with large corporate FOSS projects that “don’t get it”.

Plugins are great, but insisting that the projects themselves must distribute them as binaries, rather than the Linux distributions, is a disease. Yes, it patches the general suckiness of Windows software management. But it also blocks the full benefit of FOSS.

Most distros exist to serve a particular market segment (RedHat the enterprise, Ubuntu desktop users), so I imagine you see a chilly reception to the things *you* specifically listed because while it’s interesting, it doesn’t sell more units of RHEL or get grandmothers to switch away from Win32.

In those cases, rock-solid stability and API compatibility, a la RHEL, and a better desktop experience, a la Ubuntu, are what people get paid to work on.

Mozilla is no different; for instance, I think the rewriting stuff you’re working on is pretty cool, and is going to provide a real benefit in terms of speed and code reduction.

So why doesn’t Mozilla “bite the bullet” and hold Firefox 3.1 until it’s all landed and ready to go? As you ask, “Don’t you like to be challenged?”

We’re not seeing this because we know there are other requirements that need to be taken into account, and many of those we may not personally care much about.

The beauty of open source is not that projects are “required” to do what others think they should; it’s that one has the ability *and* the freedom to do it themselves, and to defend the value of their plan and/or work to the community of developers and users who care about that particular software product.

If you tell RedHat you’ll buy 2 million units of RHEL if only it booted faster, or promise Ubuntu 2 million users for life if only it booted faster, their answers might be different.

In my opinion the problem is that the distros don’t see this as their priority. They don’t even have a “no regressions” policy on boot time: my boot time regressed by 10 seconds when switching from Hardy to Intrepid.
Unfortunately there is no real “upstream” for boot time. Only half of the work that Arjan did, like sReadahead or the kernel work, is going into projects shared by the distros.
Distros have to put more resources into removing their bloat. Not paying the price for something you don’t need is the first step in the right direction.

And well, if you want to customize the kernel and everything else on your system, you can still use Gentoo. But then do not complain if something breaks.

It hasn’t been pointed out yet, so I thought I’d mention that Ubuntu and Fedora both have releases coming within 3-5 weeks. I wouldn’t expect the Fedora Project or Canonical to make any big decisions about future plans until they are out of crunch time.

I also note from the article:

“At the conference roundup Friday, speaker Kyle McMartin announced that both Fedora and Ubuntu have fixed some delays in their boot process, and there was much applause.”

That sounds promising to me.

Optimizing boot time for a netbook with limited and known hardware is always going to be easier than making a widely-used (desktop, workstation, laptop, netbook, …) distro dance faster, but it sounds to me like ideas were exchanged and new things were tried by all. I dig that about Linux geeks.

I agree wholeheartedly that I want a system that boots in less time than it takes me to drum my fingers; I just don’t expect any announcements until the bigger players get a chance to play around with these new ideas a bit.

On a different note, didn’t Ubuntu and Fedora both move from init to Upstart to speed up boot times by parallelizing the process? Kinda funny that these Intel guys did all their heavy lifting with init…