
I use different distros for different tasks, because the distros themselves place different weights of importance on various factors.

For years, my servers have run on Debian plus the odd BSD box here and there. Rock solid reliability with very little maintenance overhead, but you don't get the latest and greatest stuff in the repositories.

I've got a couple of servers running Ubuntu with VMware Server on top for internal VPS work. Again, very few problems aside from a couple of issues related to kernel upgrades.

My laptop runs Ubuntu Desktop edition, which works great for me. I have almost no trouble with package management, even for cutting edge stuff, and the driver support is great.

I use a couple of live CD distros for repairing Windows systems when they get out of whack. The list goes on and on. It's kinda like programming languages; use the right tool for the job. While you *could* use most modern languages for just about any task, some are better for "X job" than others.

Did you know that Stallman sounds like the Swedish Stålman, literally "man of steel" (or, figuratively, Superman)? The nom de guerre Stalin means (more or less) man of steel in Russian.

Clearly Stallman has the right name and the requisite facial hair, and he can write GPL 4, 5 and 6 to enforce collectivisation [wikipedia.org] of Stallix and the crushing of kulaks [wikipedia.org] like Torvalds.

So either everyone learns what "apt-get" does (not to mention how to use a command line interface in the first place), or everyone runs commands and has no idea what they are doing. Then a hardware issue comes up with their video card. Oops.

Plus, why apt-get? Why did we decide to use Debian's deb format over RPM? Hmm.
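For readers who've never touched it, apt-get's steps can be sketched with its own lower-level commands. This is only an illustration: the package name "foo" is a placeholder, not a real package.

```shell
# Rough sketch of what "apt-get install foo" does, broken into visible steps.
# "foo" is a hypothetical package name; substitute a real one.
apt-get update                       # refresh the package index from the repositories
apt-cache depends foo                # show the dependency tree apt will resolve
apt-get install --download-only foo  # fetch foo and its dependencies into /var/cache/apt
apt-get install foo                  # unpack and configure everything via dpkg
```

On an RPM-based distro, yum (or zypper) performs the same resolve/download/install cycle with a different package format underneath, which is part of why the deb-vs-rpm question is more about tooling taste than capability.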

One problem, if it's a problem, with Linux is that those that have learned to use it (read: taken time) presume everyone else can learn, too (read: has time). That's not the case.

No, no responsibility exists at all, in any situation. I can produce either a free or a paid-for product, and I can happily walk away from it at any point, taking my tools and code with me; no obligation to support you exists at all. There is no way in hell you can tell me to keep working on something that I don't want to keep working on.

"Suppose someone creates a very minimalist linux distro which includes a very good package management system. Suppose this package management system includes nearly all popular linux software packages.

Now suppose it were rather easy for anyone to install any number of those packages, bundle them together into one meta-package keyword, and call that a distro."

We don't have to imagine. Thanks to the diversity of FOSS and the freedom to bundle and innovate at will, there is Gentoo Linux [gentoo.org] and OpenEmbedded [openembedded.net] (which is based on Gentoo's Portage [gentoo.org] software installation and management tool).
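The "bundle packages under one meta-package keyword" idea maps directly onto Portage's package sets. A hedged sketch, assuming a modern Portage with set support; the set name "mydistro" and the packages in it are hypothetical:

```shell
# Define a custom Portage set: a named bundle of packages.
mkdir -p /etc/portage/sets
cat > /etc/portage/sets/mydistro <<'EOF'
app-editors/vim
www-client/firefox
x11-wm/openbox
EOF

# Install the whole bundle with a single keyword.
emerge --ask @mydistro
```

Anyone who publishes such a set file has, in a small way, published a "distro" in the sense the quoted post imagines.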

The vanilla run-on-everything kernels are quite big, but they include support for all types of CPUs (for that architecture, of course), hardware, and even debugging stuff. Ever custom-compile a kernel for your running system, eliminating all but the drivers you need? It becomes very small. I can't remember the last time I did, but I remember it was under 10MB, though I could be wrong. The coreboot team has trimmed down the kernel to fit inside a 2MB BIOS chip, with a tiny X and BusyBox, to boot into a GUI with an X terminal. Now that is small.

I just downloaded and built 2.6.27.14, and the tar.gz source package was something like 61 megs. Out of that 61 megs of source code, the kernel image itself is 1.1 megs, and the modules I selected to include all installed to /lib/modules/2.6.27.14/* and weigh in at 10 megs.

Just because the kernel source is BIG does not mean the compiled/installed package is going to be big too; it depends on the builder. I build my own custom kernels, trimming the fat by not building support for hardware I don't own. Of course, distro builders will build in more support to handle as much hardware as feasible while still keeping a responsive and usable kernel, so the default/generic kernel in most distros will be considerably larger, but it will still run almost as well as any custom kernel.
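The trimming described above can be sketched roughly like this, assuming an unpacked kernel source tree in the current directory. The targets are real kernel Makefile targets (localmodconfig arrived in later 2.6 kernels, after the 2.6.27 mentioned above); sizes will vary with the configuration.

```shell
# Build a kernel tailored to the running machine's hardware.
make localmodconfig              # base the config on the modules currently loaded, drop the rest
make -j"$(nproc)"                # compile the image and the selected modules
ls -lh arch/x86/boot/bzImage     # the compressed kernel image, often just a few MB
sudo make modules_install        # installs modules under /lib/modules/<release>/
du -sh "/lib/modules/$(make -s kernelrelease)"   # total installed module size
```

Comparing that du output against a distro's generic kernel package makes the source-size vs installed-size distinction concrete.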

This is a really cool idea, but I foresee the implementation falling short.

In my experience, Linux packages have terrible names, non-descriptive names, or both, and usually a worthless description or none at all.

So you end up with several different packages that do similar or the same things, with no significantly distinguishing characteristics. For example: smartphone-system, smart-phone-system, smartphonesystem, dtmf-system, smart-phone, and smartphone. Then you'll have 5 different distros that use different but overlapping packages, with insufficient documentation to make a decision as to which you need.

So in practice, I usually have no idea what package(s) I need without extensive searching of the tubes, but maybe it's just my lack of experience.
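When the names are that ambiguous, about the best you can do is interrogate the package manager itself. A Debian/Ubuntu sketch; the package names here are the hypothetical ones from the example above, not real packages:

```shell
apt-cache search smartphone          # list packages whose name or description match
apt-cache show smart-phone-system    # read the long description before choosing
apt-file search dtmf                 # find which package ships a given file (needs apt-file installed)
dpkg -L smartphone-system            # once installed: list the files it put on disk
```

It still amounts to searching, but at least it's searching metadata the distro actually ships rather than the tubes.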

That defeats one of the primary benefits of the linux kernel for the end user: You plug a device in and it usually just works. No driver installers, no unhelpful dialog saying "Should I search for a driver for you?" and no unnecessary vendor crapware added to the system tray.

30MB is a fairly trivial amount of disk space, and these modules at worst add a small performance penalty to boot times and no performance penalty at runtime.

The kernel is not bloated; it's just that it comes with drivers for a shitload of hardware.

That _is_ the definition of bloated, since many people use a limited subset of PC-compatible hardware. It's nice that you can just stick in almost any piece of kit and Linux detects it and runs with it. It would be nicer if all that cruft was cut out of the base kernel and drivers were available for download on demand rather than shipped with it.

That said, I agree with Linus on the topic of multiple distros. On one occasion I had to patch the hell out of Linux/glibc just to make a distro that met our size, internationalization, and toolset-vendor-support requirements. There is a market for many distros for many things.

I also think the biggest thing holding Linux back is that there is no single "this is Linux" distro for the masses. Windows is the same no matter where you install it; things are in the same place on every XP install on the planet, for the most part. Why can't there be a single distribution that meets the needs of the average Joe desktop user, has good 3D hardware and gaming support, lets them achieve all they need without logging in as root, and "just works"?

Ubuntu comes close to this, and almost every conceivable software package for the average user exists in its software repository somewhere. I'm a power user/dev/sysadmin and I don't often need to stray outside the Ubuntu package system to find what I'm looking for.