
edesio writes with a snippet from debian-news.net, trumpeting an announcement from the ongoing DebConf10 in NYC: "Debian's release managers have announced a major step in the development cycle of the upcoming stable release Debian 6.0 'Squeeze': Debian 'Squeeze' has now been frozen. In consequence this means that no more new features will be added and all work will now be concentrated on polishing Debian 'Squeeze' to achieve the quality Debian stable releases are known for. The upcoming release will use Linux 2.6.32 as its default kernel in the installer and on all Linux architectures."

What a terrible attitude to have. The Open Source community is about shared effort for shared gain, not personal recognition. No matter the distribution that gets all the 'spotlight', it's Linux that reaps the reward, and the more ground Linux gains the better off everyone with a PC is.

The Open Source community is about shared effort for shared gain, not personal recognition.

Have you spent a moment in the "Open Source community"? The majority of contributions to Linux are from profit-making corporations. Most of the remainder take glory in advertising their contributions for CV and geek cred. Certain projects are so cliquish that a friendly attitude (read "sucking up") to the core team is a far better way of being welcomed as a contributor than technical expertise.

My original post included specific project examples, but since the most political organisations also have the most

Pretty much this. And just because everything is somewhat political, it doesn't mean every venture is as bad as every other

True that. I'm pretty sure Thomas Jefferson knew what politics was when he made it the basis of our political system... on purpose... as though it were going to solve problems we used to have and no longer do.

Like women and unlike wine, all man's endeavours grow more wrought with bitterness over time.

I had to learn this the hard way, back when, so pay heed: politeness is a social lubricant. It gets into the areas where different people's rough edges would otherwise rub and create friction, and it costs nothing to be polite.

For example, a few months ago I opened a bug report with $LIBRE_PROJECT asking for help making a Windows build, or whether they'd be kind enough to start releasing Windows builds of the stable tree, rather than an occasional build from an unstable branch. After a bit of back and forth - the guys who weren't involved in making the Windows build were a bit rude - they eventually pointed me to the non-obvious way of compiling their code, and eventually their Windows guy started releasing regular semi-stable builds (the Win build isn't quite there yet).

A little politeness as social lubricant, and I might have helped some other poor schmuck who wanted a free Windows program that does what $PROJECT does.

You're right. That's always come quite naturally to me, so I've got a history of being surprised at how nice people are in spheres where I've been told by others the only way to get ahead is to 'suck up' or be someone's bitch in an unspecified but theoretically humiliating way.

Individuals without a company and contributors with unknown affiliation add more to the Linux kernel than any _individual_ company, but that does not negate the statement that "the majority of contributions to Linux are from profit-making corporations". Red Hat, Novell, and IBM together make more Linux kernel contributions than all of the unaffiliated and unknown-affiliation contributors combined.

The document you appear to have misread even includes this sentence: "It is worth noting that, even if one assumes that all of the 'unknown' contributors were working on their own time, over 70% of all kernel development is demonstrably done by developers who are being paid for their work."

Morton works at Google, Viro pops up as basically an alias: http://en.wikipedia.org/wiki/User:Niels_Olson/Al_Viro [wikipedia.org], Miller works at Red Hat, Baechle at MIPS, etc. You just gave a list of corporations and the actual top developers working for those corporations. Thanks for reinforcing the prior point that the bulk of the kernel code is paid for, directly or indirectly, by corporations.

I really wonder why some people seem to hate the notion of companies paying developers to work on Linux.

Yes, Linux is an excellent example of how successful open source development can be. Especially in the sense GNU HURD isn’t.

The fact that most development comes from various companies should be counted as a success of Linux.
I mean, think about it. Unlike other operating systems, developed either by monopolists or by relatively small communities, Linux is now a result of joint effort of both numerous independent programmers and several large companies. All scratching their own itches, all working on making Linux better, all sharing their improvements with everybody else.
This is also the greatest success of GNU: without the GPL, there would have been no strong incentive for everyone to share their improvements (even though it would be a good long-term strategy; the modern corporate world is more interested in quarterly statements, it seems).

I guess because it doesn't attract the glamour-seekers, nor does it consider itself elite.

I think that Debian suffers from a different form of elitism; the elitism that says "if we release something that's broken to stable, we won't fix it because it's *STABLE*".

The problem, as I've seen it over the last 10 years as a Debian sysadmin, is that Debian is not run as a business; it doesn't have customers, it has users.

If you want to use Debian in the enterprise you NEED a really good engineering team; it's really risky to use Debian in a small/medium business, e.g. with a sole sysadmin, because when Debian releases something that's broken it STAYS broken, and you need an internal engineering team to fix, patch, and maintain the fixes.

This is why I am encouraging my employer to go with Red Hat instead: because Red Hat is run as a BUSINESS, they understand the needs of business. To Red Hat you are not just a user, you are a CUSTOMER, and that actually counts for something.

You might look at the php disaster in RHEL 5.x.

Basically, Rackspace is pleading with Red Hat to compile pcre with Unicode support, and Red Hat seems to be saying "wait until RHEL 6".

php in RHEL is so far behind that many open source and closed source php applications do not support the ancient version of php in RHEL because of its known security issues. (Yes, Red Hat claims to have backported security fixes, but that does not mean that the latest versions of your software support the version of php that ships in RHEL.)

FreeBSD upgrades without console access are not well supported, so I am not a big fan of using it on leased servers.

I'm not sure what you mean by this. I took a FreeBSD machine through every release between 4.7 and 6.2 without console access doing source updates. The newer freebsd-update tool makes it even easier - just run a single command and do a binary update. I don't think I've ever updated a FreeBSD system in a way that could not be done via SSH. What is the 'supported' update process that does require console access? It doesn't seem to be either of the ones that I found in the FreeBSD Handbook...

The majority of contributions to Linux are from profit-making corporations.

Does anybody still remember the times when corporations were like "we just hire people so that they can concentrate full time on what they already do"?

I can think of at least one major open source Unix distribution whose central developers seem to document their work so poorly, apparently deliberately, that getting up to speed on what they do well enough to make a positive contribution requires mentorship.

Red Hat? That was never really a secret. And they were the first to break the mold, moving from "people do what they already do" to "we pay the money, so we say what you do".

Though I'm not sure what you mean by the mentorship. Red Hat doesn't hire developers that easily. They spare themselves mentoring new hires by always trying to hire people who are already experts in the relevant area.

This is a mistaken view. Even if Ubuntu support was always effective, there is no weight taken off Debian. Every community has to deal with noobs.

In the real world (specifically, the IRC support channels), there's a chronic problem: a fresh Ubuntu user realizes that they're not getting help in #ubuntu, so they come to #debian, because, well, Ubuntu is based on Debian, so you #debian people know how to fix my problem, right? Right? Much time is lost trying to help them when their problem is particular to Ubuntu.

That's not exactly true. A lot of stuff Ubuntu does/fixes gets sent back to Debian. It's a mutual relationship that they both benefit from. The same is true for many other Debian-based distributions. And hey, it's open source; the people who make Debian want others to reap the benefits.

At a time when cutting-edge distros were all moving to Linux 2.6 and conservative distributions and ones that hadn't been updated lately were still using 2.4.x, the Debian installer was asking users if they wanted to try the "new" 2.2 kernel, which might not be totally ready for prime time yet, or stick with the tried and true 2.0 kernel.

You exaggerate. When 2.6 first came out, the current version of Debian stable was Woody, which offered either 2.2 or 2.4.

Still, I agree that Debian's longest release cycle ever came at about the worst possible time.

Anyway I would probably prefer the reverse: uFreeBSD/Linux + ports. But porting the ports collection would be a major hindrance.

So what you're looking for is something like Gentoo. It doesn't have the BSD userland, but it does have Portage which is comparable to ports but with even better package management tools (in my opinion).

My big problem with this is that FreeBSD is an operating system: kernel + userland. If you are just using the kernel and not the userland, don't call it FreeBSD. It's just like how OS X isn't FreeBSD, because it uses a BSD userland with a Mach kernel.

"Linux" is just a kernel. When combined with the GNU userland tools you end up with a complete OS typically known as "distros" such as Red Hat, SuSE, Ubuntu, etc., but it's quite possible to have Linux without the userland, i.e. many embedded uses of Linux.

BTW, your sig is wrong. BSD is free as in speech; some would argue much more so than the GPL, and I'm one of them. Quite frankly, trying to paint BSD as only free as in beer is asinine. Now, which OS was the first to campaign against binary blobs? I'll give you a hint: it has BSD in its name.

That is especially silly considering the price UC charged for BSD on tape. (FTP was free, of course.)

Hardly surprising about Debian Multimedia, as the FreeBSD kernel actually has a sound subsystem that doesn't suck (i.e. OSS 4 interfaces, in-kernel low-latency mixing, per-channel volume controls, and so on). It makes me chortle slightly whenever anyone mentions pain with PortAudio or whatever this week's sound daemon of choice is on Linux. When writing code to play sound on FreeBSD, I just open /dev/dsp and write audio data there, maybe with a couple of ioctl()s to set the sample rate, volume, and number of channels.

Well duh! Of course libc uses reserved identifiers for those. If it used non-reserved identifiers, it would conflict with valid user code.

Nope, sorry, not true. Parameter names never conflict with identifiers in any other scope. Identifiers beginning with an underscore are reserved for the 'implementation,' which can be interpreted as including the libc as well as the compiler; however, the GNU C standard reserves ones starting with a double underscore for the compiler, yet unistd.h (and other headers in glibc) are littered with parameters starting with double underscores. In particular, the __block parameter name means that you have to do hacky work-arounds if you want to compile code using blocks on a GNU platform. Meanwhile, this code works out of the box with any other libc implementation.

It requires one or more of the macros that, according to POSIX / SUS, the code needs to define.

Which would be fine, except that the glibc man pages don't say which functions are from which standard, so you need to hunt around looking for every symbol. If a function comes from 4BSD but was later adopted by POSIX and SUS, what do you define? If you define the POSIX macro, then you may find that you've suddenly hidden a load of other things that were working correctly. There are some really fun cases where no combination of the public macros expose all of the features that you want and you need to define some of the glibc internal ones.

On other platforms, the macros work in a much more sane way. Everything the libc supports is exposed by default, but if you are writing portable code then you can define a specific set of standard macros and it will disallow anything not in those standards.

Just kidding. I like debian but switched to Ubuntu years ago seeking more up-to-date packages. But I find all the config files etc in Ubuntu a little hard to work with (providing simplicity for the user makes things more complex behind the scenes, which isn't good if you like to fiddle around behind the scenes). Is debian any more up-to-date these days?

While Ubuntu is derived from Debian, that doesn't stop them from packaging newer stuff than Debian does. The big-name stuff is often newer in Ubuntu's development versions than in sid. More obscure stuff will generally be either at the same version, or newer in sid than in Ubuntu's development version.

Debian and Ubuntu have very different release cycles. Ubuntu makes a release every 6 months, and releases are prepared one at a time. This fast turnaround means more up-to-date software at release time, but also means little time for things to settle and bugs to get rooted out. Ubuntu won't delay a release unless there is a crippling issue with a package they consider particularly important.

Debian's release cycles on the other hand are generally on the order of two years these days and they tend to spend a large amount of time at the end of that release letting things stabilise and working on the bug count.

Things got particularly bad a few years back. The sarge development cycle was Debian's longest ever, and it came at a time when Linux in general was improving a lot for the desktop. But it still gets annoying near the end of a cycle.

Yeah, IIRC I got frustrated with Woody and went to Unstable before Sarge made it across the finish line. It also seemed like debian did not have any reasonable support for proprietary software (NVIDIA drivers, vmware... even mp3 files IIRC). dpkg on my Unstable system got hopelessly confused and the install was trashed.

I switched to gentoo since it had a lot of momentum (critical in staying both up-to-date and stable - lots of eyeballs and fingers at keyboards) and thinking local compilation would prov

Most of the time when Ubuntu needs to update a package they first check if Debian has an updated version, and most of the time it does. And if you compare the package counts of the distros, Debian's is higher. It happens, but is pretty rare, that Ubuntu adds some package that Debian doesn't have for some reason. You've probably come across a few of those.
You shouldn't be running experimental. Things that get put in experimental are things that are known to be very likely to break stuff. It's meant only for Debian developers and people who want to help test things and report bugs. And even they don't install all of experimental, just the packages they want to test.
Chances are you didn't run experimental unless you know a lot about how the package system works, as you have to specifically ask for stuff from experimental when you install or update a package; just adding it to the repos doesn't do it. It's pretty unlikely that you got a system working with no problems if you really did install all of experimental.

Most of the time when ubuntu needs to update a package they first check if debian has an updated version, and most of the time it has.

That's probably true for the more minor stuff, but the big-name stuff like glibc, GNOME, KDE, etc. is often newer in Ubuntu's development version than in Debian unstable, and sometimes newer than even experimental.

as you have to specifically specify that you want stuff from experimental when you install or update a package

You can pin the whole of experimental at the same level as unstable and thereby cause apt to install stuff from it automatically (you can even pin it higher, but that's a bad idea, because older versions often get left in experimental after unstable is updated). I've done it in a chroot but never tried it on an independent system.
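For anyone curious what that pinning looks like, a minimal sketch of an /etc/apt/preferences fragment: 500 matches unstable's default priority, but treat the number as illustrative.

```text
# /etc/apt/preferences -- give experimental the same priority as
# unstable, so apt will pick experimental versions automatically.
Package: *
Pin: release a=experimental
Pin-Priority: 500
```

As noted above, pinning it higher than unstable tends to backfire once unstable overtakes a stale package left in experimental.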

I haven't used either in years, but when I did use Debian experimental,

Ummmm... You used a Debian repository designed for Debian devs to use when integrating packages into Sid, and then you complain that it was incomplete?

Just for your information, Ubuntu takes a snapshot of Sid (unstable) and works on it for 6 months before it's released. During those 6 months Debian is adding new packages and new package versions to Sid on a regular basis. Packages move from Sid to testing after 10 days unless there are severe (release-critical) bugs.

Compared to a few years ago, yes, Debian is a lot more up to date. I'd recommend running testing, or unstable if you know what you're doing. Stable doesn't get updated after release except for critical fixes like security updates (which is the way it's supposed to be, so you can throw it on a server and not have to worry about a future update breaking things), but the quality of Debian's testing and unstable is higher than the stable of most distros.

The equivalent of X, networking, and email are built-in to Windows and security updates are provided automatically. Drivers are provided by 3rd parties, but Windows offers me options to get updates for those too.

None, but back when I ran Debian stable it was way too out of date to be useful. With Windows, most software comes with packaged installers and works with the stable version. With Debian, the choice was to either run old software, build it myself, or run testing.

You are running Debian stable, because you prefer the stable Debian tree. It runs great, there is just one problem: the software is a little bit outdated compared to other distributions. That is where backports come in.

Backports are recompiled packages from testing (mostly) and unstable (in a few cases only, e.g. security updates), so they will run without new libraries (wherever possible) on a stable Debian distribution. I recommend you pick out single backports that fit your needs rather than pulling in everything available.

With Windows, apps still support XP and often even 2K and 9x. Hardware vendors are also still providing drivers for XP, as long as you buy machines from their business ranges. This, along with Microsoft's security update policies, means that you can run the same version of Windows for a long time (several PCs' worth of time) while updating application software as desired.

With Linux, on the other hand, if you want to upgrade your application software you pretty much have to either upgrade the whole distribution or build the new versions yourself.

If you are talking about APT, there is either the FileHippo Update Checker or Secunia... take your pick. Both will keep all the third-party stuff up to date, and Windows Update takes care of the rest.

Since others have posted their linux rants, and I have more karma than I know what to do with, and hopefully some Linux devs might actually read this, here goes...WTF is it with you guys and the God damned CLI? Huh? It is fricking 2010 already! On a server, yes I'm all for it, use it myself, great for scripting or

Let us compare to my last Linux installation, shall we? First a bunch of questions about partitions that an average user would have fucked up royally; finally I get to the desktop and... WTF!!! NO sound, my wireless doesn't even exist according to it, my screen resolution is fucked, and the GUI settings for some reason won't stick.

Since you didn't post as AC (I would have known you're a troll):

1. Which year was that?
2. Which Linux distribution?
3. Which hardware?

I use and prefer Debian Stable, but if you place a high value on the latest packages, then Debian Stable is not for you, and never will be. I have used Debian Testing for a couple of years or so, and I have tried Ubuntu a few times, and from what I have seen, Debian Testing is slightly more up to date and more stable than Ubuntu. I agree that Debian is easier to configure.

Debian is always as up-to-date as you want it to be. It's just a question of which version you run.

Debian "stable" goes in cycles. Shortly after a release, it's fairly up to date. As time goes on, working towards the next release, packages get a little dated because they are intentionally not updated. Security and bug fixes are applied but no upgrades or new features -- this is why they call it "stable", because it doesn't change.

Debian "testing" is less cyclical and tends to stay fairly up to date all the time. The exception is during a freeze, like the one we just started. Since the current testing is being morphed into a new stable, it has just stopped receiving updates, and won't start again until the new stable version is released.

Debian "unstable" is always quite up to date. All new features and packages are introduced in unstable first. Don't let the name confuse you -- it's about as reliable as most distributions' released versions. It's "unstable" in the sense that it gets constant updates, which means that things are always changing. Every once in a blue moon, a change will actually seriously break something for a day or so. Maybe once every 3-4 years in my experience.

Debian "experimental" is more of a layer on top of "unstable", and it is what it sounds like: experimental. The Bleeding Edge.

In addition to those versions, you can mix-n-match a bit by running stable plus backports. That allows you to keep a very stable, consistent base platform, and just pull in newer versions of particular packages, as needed.

I switched from Debian to Ubuntu three years ago, but I'm very seriously considering switching back. My theory was that Ubuntu LTS releases were roughly equivalent to Debian stable, and that regular Ubuntu was somewhere between testing and unstable. The second half of that works out sort of okay, but using Ubuntu LTS as an alternative to Debian stable is a bad choice. The upgrade path from one LTS release to the next is horribly painful, because you have to upgrade to each intermediate release. And, in practice, I find the every-six-months big-bang upgrades more intrusive and problematic than the continual, incremental upgrades on Debian testing or unstable.

All in all, after giving Ubuntu a good try, I think I'm going back to Debian stable on my server, Debian stable+backports on my laptop and Debian unstable on my desktop.

Don't let the name confuse you -- it's about as reliable as most distributions' released versions. It's "unstable" in the sense that it gets constant updates, which means that things are always changing. Every once in a blue moon, a change will actually seriously break something for a day or so. Maybe once every 3-4 years in my experience.

While I agree with this, and have run unstable myself for the past 8 years or so, running it does require some degree of technical savvy when it comes to dependency resolution.

Rather than using apt-pinning to pull packages from testing/unstable into stable, I'd suggest using it to pull packages from the backports repositories. That way you'll get newer software that's built against the stable versions of the supporting libraries.
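A sketch of the stable-plus-backports setup being suggested. The repository line is the Lenny-era one and "some-package" is a placeholder; adjust both to whatever is current.

```text
# /etc/apt/sources.list -- add the backports repository:
deb http://backports.debian.org/debian-backports lenny-backports main

# Backports are pinned low by default, so nothing is pulled in unless
# you ask for it explicitly:
#   apt-get -t lenny-backports install some-package
```

The nice property is exactly what the comment describes: the base system stays on stable's library versions, and only the packages you name come from backports.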

Use Sid. First install Squeeze, then add the sid (unstable) repositories, # apt-get update && apt-get dist-upgrade. Have fun. Don't bother whining to Sid developers if you break your system. You could also try Sidux, based largely on Sid with such testing as is necessary. Don't use anything but apt-get to install packages or dist-upgrade; Sidux doesn't support any other package management system. Oh, and you should be in runlevel 3 to dist-upgrade. Works well. Sidux is Sid though, and sometimes things break.

Why don't you use Ubuntu? That's what they focus on. Some people who like Debian bitch about Ubuntu being this or that, but they should realize that Ubuntu is protecting Debian from people like you, who want to make it less stable and more experimental.

``I wish they'd just cut the bull and focus on unstable and testing.''

You are free to wish that, but I fervently hope they won't do that. I love Debian stable: install it, configure it, and it will keep working for years. You get security updates, but no new versions and no new configuration options that may break your working system, at least until the next version of stable is released. And then, Debian takes great care to make the upgrade as painless and automatic as possible. If you want stuff to keep working, stable is exactly what you want.

In mid June I set up my latest server based on Squeeze with the expectation that it would go stable this summer. For a while I thought perhaps I had jumped the gun and would be stuck with a relatively unstable system for a longer period, but I guess not.

In particular, I'm happy with Squeeze because I could use it to get my Kerberos-OpenLDAP-OpenAFS system working on both the file server and the workstations. Not that I've ever used any FOSS other than Debian for my server, but after my attempts to get the latest Ubuntu to run the necessary client software for this (unfortunately) uncommon but very capable distributed file system failed, I suspected that the same Debian version on the workstation represented my best chance of success. And sure enough: it worked straight away! Ubuntu may have certain benefits, but it seems that if you want a desktop system that is a little out of the ordinary, Debian is still your best bet.

Perhaps this is a duplicate post, but does anyone else find the version scheme for Debian (and Ubuntu) a little confusing? I use Debian on my laptop and encounter Ubuntu in my line of work; figuring out which version precedes/supersedes which is somewhat of a pain. Is there any a priori reason why Sarge is older or newer than Squeeze? What about a Koala vs. a Lynx?

Although the upgrade process itself was more difficult for, say, Slackware, figuring out when to upgrade was pretty easy -- "I'm running 10.0, and 10.1 is out."

Well for Ubuntu they're both numbered and named. The numbers are year.month (e.g. 9.10 is October 2009) and therefore go up in the expected manner. For the names, they're alphabetical (or at least have been for the last 5 years), so Intrepid came before Jaunty, which was followed by Karmic.

It's a little different, but this page [debian.org] gives you an ordered list of releases. More generally, though, if you see news about a new stable, it will be newer than the one you already installed. :-)

If you hear that Squeeze is stable and you're running something that isn't Squeeze, it's time to think about upgrading.

I do. I lose track of the releases when there's just one in every three years. I mean, I've used Woody for so long that Sarge always seems to be the new release code name......

But then, tell me why XP is older or newer than Vista? And why 2000 is older than 7?

As for figuring out when to upgrade... you'll know when to upgrade as you grow impatient while the world moves forward and you're still using antique versions of software from 3 years ago. Or, if you're perfectly happy to keep the old versions, you'll never need to upgrade at all.

... and they do. Except the point of the freeze is to focus on fixing problems so they can push out a solid, stable release.

What's the point of slipping a freeze date? (There's mailing list traffic from a while ago now saying that they were pushing back the freeze, so yeah, they slipped -- even if it's just from an internal date.)

Well, the first announced freeze date for Squeeze was part of an unpopular plan to sync up with Ubuntu by having a very short release cycle. That plan was abandoned pretty quickly (unfortunately, after the date had already been announced).

Aside from that, there are, AFAICT, a couple of reasons to delay a freeze.

A big reason is what are referred to as transitions. A transition is a group of package updates (usually a new major version of a library plus the various updates and rebuilds associated with it) that need to move from unstable to testing at the same time to leave testing in a consistent state (unstable is allowed to be in an inconsistent state; testing isn't). The release planners will have a set of transitions that they really want to get in for a given release. Transitions can easily get held up by build failures and other RC bugs, and the planners don't want to do too many at the same time, because then they become entangled, leaving the release team with one big transition that is even harder to migrate.

Also, they want to pick a good time to freeze. Freezing the application-level stuff while there are still big issues to fix in core packages won't affect the release date much, but it will mean releasing with older versions of the application-level stuff (which is the stuff most visible to users, and often the stuff that needs the most security updates).

The Debian project has decided to adopt a new policy of time-based development freezes for future releases, on a two-year cycle. Freezes will from now on happen in the December of every odd year, which means that releases will from now on happen sometime in the first half of every even year. To that effect the next freeze will happen in December 2009, with a release expected in spring 2010.

Yeah, that was never really an official policy. It shouldn't have gone out in the newsletter, or at least not worded as something definite. It was an idea one group had, and they thought they had enough support to do it, but they didn't.
