Posted
by
timothy
on Thursday November 25, 2010 @10:01AM
from the really-wrong dept.

Responding to yesterday's post indicating that Ubuntu might move to a rolling release schedule, reader ddfall writes
"This is wrong! Engineering Director of Ubuntu Rick Spencer says 'Ubuntu is not changing to a rolling release.' He goes on to say, 'We are confident that our customers, partners, and the FLOSS ecosystem are well served by our current release cadence. What the article was probably referring to was the possibility of making it easier for developers to use cutting edge versions of certain software packages on Ubuntu. This is a wide-ranging project that we will continue to pursue through our normal planning processes.'"

I personally like the idea of scheduled releases which have been somewhat reasonably tested. Giving developers a mechanism to deal with the cutting edge versions of each package is nice, but I'd rather not have those in the releases on my servers.

I agree. Rolling releases work for betas, but the idea that substantial changes could be rolled out in a daily update (as opposed to security updates) would kill any corporate use. Companies don't want changes that make users see something different appear without testing, training, etc. Many people like the LTS releases [ubuntu.com] for this reason.

Just for me, the biggie was the new Xorg 1.9 which broke almost every nvidia-produced driver out there. Mine (nvidia 96) was the last to get fixed. I had to put off upgrading for about a month until it was fixed.

I realize bitching about waiting a month extra for a new release makes me sound like a douchebag, but when it's something as high profile as display drivers I must take offense.

Not really; I lay it at the feet of Ubuntu. They knew early on that a major showstopper bug existed with Nvidia and Xorg 1.9, and they should have made the decision not to upgrade until the fix was done and tested.

Sounds familiar. The last kernel update just prior to 10.04 suddenly caused serious problems with RTL8194SE chips, leading to kernel panic reboots whenever a user tried to switch between wi-fi and cable connections. Mighty annoying and the first time it happened to me I was right in the middle of a company-internal presentation.

I don't think that rolling releases work well for an OS which is comprised of a kernel and 3rd-party contributed apps, especially since Ubuntu doesn't have much control over the kernel. To try and enforce a release schedule on 3rd-party developers would be foolhardy at best.

However, for OSes like FreeBSD where the entirety of the userland is maintained and controlled by the project they probably could get away with it. And in a sense they do with -STABLE.

Similarly, I run Arch Linux [archlinux.org], and have found its rolling release to be at least as bombproof as Ubuntu's cadenced release. The difference is simply that your upgrade cycle happens at a time when you and the individual program developers are ready for it, and you can be as selective as you like.

Plus, it is very easy to roll back to the state before an upgrade if the upgrade did not work. And if you use btrfs, the snapshotting comes in very, very handy: just take a snapshot, upgrade, and if problems come up, revert to the snapshot.
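A minimal sketch of that snapshot-and-revert flow, assuming a btrfs root and a hypothetical /.snapshots location (exact set-default syntax varies with the btrfs-progs version):

```shell
# Take a read-only snapshot of the root subvolume before upgrading
# (the /.snapshots path is just an example location).
btrfs subvolume snapshot -r / /.snapshots/pre-upgrade
# ...run the upgrade...
# If something breaks, list subvolumes to find the snapshot's ID,
# make it the default, and reboot into it:
btrfs subvolume list /
btrfs subvolume set-default <subvol-id> /
```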

I have found rolling release distributions (like Arch Linux) to be more stable and more pleasant to use than distributions on a six-month release schedule, and definitely nicer than Debian's Stable or Canonical's LTS, which is based on Debian's testing branch. On servers the situation is

Rolling releases probably work just fine when you're only running them on your personal laptop or desktop. It's a very different matter when you have a site installation on a large number of machines, where installations and upgrades are a bit more complex than inserting the CD and clicking Next a few times. It is in those environments that you appreciate being able to come in one day and find that things still work consistently with how they did yesterday.

I'm not trying to troll, but before someone less noble says it: why are you running Ubuntu on a server in the first place? I'd like to know why you would choose Ubuntu over something like CentOS.

I'll be perfectly transparent here; I'm just as bad: I run Gentoo on my servers. So don't be shy to profess your love for the easy-to-use distro, I'm not here to judge. :) I use Gentoo because I have zero patience for binary package "management" and the dependency hell / obsolete libraries that come along with it.

My experience with CentOS is that it's very stable. Which for me is a euphemism for "antiquated".

For example, my organization has a contract with CollabNet for server hosting on their TeamForge platform. In addition to the usual forge servers, there are hosted servers (both virtual and "real", with lights-out management) in the back for running build services, etc. The provided OS build on most of these servers is CentOS 5.0.

Now, CollabNet are very big on Subversion, and on selling services related to Subversion.

I work on CollabNet's engineering team for TeamForge. CollabNet does provide a yum server for updates and current versions of Subversion for TeamForge users. While CentOS (what our VMware image uses) is at 5.x, we stay with that version so that companies get the benefits of a stable release (as far as underlying software versions go) with security updates (through the upstream).

Feel free to email me if you have any questions, or any additional feedback about our installer or the product in general.

I'll bite. In my instance, here's why I have at times chosen Ubuntu over CentOS:

1) Specific daemons and libraries (if I'm using certain commercial software that relies upon them) are provided in the distribution's repositories where CentOS offered nothing, removing the need for me to package my own versions of the software. This removes the issue of having to monitor for security vulnerabilities and needing to back-port code should a new library be incompatible.

Rolling release is the reason I love Arch, and half the reason I'm planning to put it on a server I'll be building soon. Between Arch, Mandriva, Ubuntu, Slackware, and Fedora, Arch is the most stable distro I've ever used. It's not like the packages they distribute are alpha quality or anything; they're stable versions, just the _newest_ stable versions, meaning they've hopefully fixed the major bugs from previous releases. Plus, Arch is rather minimal... which I think any rolling release distro woul

Last year in his speech at the Open World Forum in Paris, Mark was trying to convince people that more open source projects should get in lockstep with the Ubuntu six-month release cycle. I would be surprised if he had changed his mind so soon.

He wasn't saying the world should revolve around Ubuntu, but rather that everyone should work together. A little different, don't you think? If everyone agreed to work in cadence to a different cycle than Ubuntu's, I think he would have still called it a success.

Mark has never listened to the open source community when it says it is already working together. Mark believes the magical fix would be to tie everyone to the same schedule, as if everyone worked at the same corporation, in the same building, in the same room, with the same working hours, everyone getting paid for an 8-to-16 workday.

Open source has worked wonderfully ever since the first mainframes were started up at universities in the '50s and '60s. And the Linux community (a big part of the OSS community) has proved that cu

"Ubuntu founder Mark Shuttleworth said during an Ubuntu 10.10 conference call last month that a move to daily updates would help the popular Linux distro keep pace with an increasingly complex software and platform ecosystem...Today we have a six-month release cycle," Shuttleworth said. "In an internet-oriented world, we need to be able to release something every day....That's an area we will put a lot of work into in the next five years. The small steps

From what I can see, Mark is basically saying "backports might be something worth looking into"; then the media, being the media, blow it out of all proportion into "Mark Shuttleworth declares that every Ubuntu package will be bleeding edge tomorrow".

I wonder what it's like for the poor guy, any time he mentions anything, in any context, people take it to the extreme then claim that that is what Ubuntu will do next...

When Shuttleworth talks about the Ubuntu Software Center, it makes me think he's talking about daily updates to user software. So software like OpenOffice, Firefox/Chromium, Pidgin/Empathy, GIMP, etc. would get version updates between releases. I don't see this as being a bad thing; I'm sure they can make it work without creating problems. They already have a mechanism for this, the -updates repository; they just need to iterate at a faster pace.

My distribution of choice, Arch Linux, uses a rolling release schedule, which has its good and bad points. I suppose the worst part of it is that with Arch Linux, old versions of software are not retained in the repositories and the package management tools don't make it easy to go back to a prior version of the software in the event of a problem. As a result, upgrading is a bit of a 'cross your fingers' endeavor and more often than not, I've regretted a full system upgrade.

I think that rolling release can work well but only if the package management system is designed to, and the repositories are set up to, allow easy rolling forward and backward on software versions as necessary. It's my number one wish for Arch Linux, which otherwise is the best distribution I've used.

Since pacman caches any package you download, downgrading is in my opinion pretty easy (except when the package depends on some library of a certain version). All you have to do is install the previous version of the faulty package from the cache directory and have pacman ignore the package for future updates.
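A sketch of that cache-based downgrade, using the default cache path; the package name "foo" and its version string are hypothetical:

```shell
# Old versions accumulate in pacman's cache directory after upgrades.
ls /var/cache/pacman/pkg/
# Reinstall the previous version straight from the cache:
pacman -U /var/cache/pacman/pkg/foo-1.2.3-1-x86_64.pkg.tar.xz
# Then hold it back by adding it to IgnorePkg in /etc/pacman.conf:
#   IgnorePkg = foo
```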

Actually, I don't really like rolling releases, but I tolerate it because it's what Arch Linux uses. I prefer Arch Linux's extreme simplicity over Debian's incredible complexity which is why I use it. Just having to keep track of the 10 programs you need to just manage packages on Debian gives me migraines, not to mention the convoluted system configuration setup on Debian. Arch Linux is *dead* simple which is why I use it. The shortfalls of Arch Linux are:

How big is your pacman cache? Mine is 12 GB, and since installation (over 11 months) I have only used 5.5 GB of it. I could even roll back to the base state I had right after installation. It is not quite "press this button" easy, but it's easier than doing a fresh install from an Ubuntu install image.

And do you know what you would gain by joining the filesystems' snapshot features with LVM?
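On the LVM side, the usual trick is a snapshot volume taken just before the upgrade; a sketch, assuming a hypothetical volume group vg0 with free extents:

```shell
# Assumes root lives on a hypothetical LVM volume vg0/root and the
# volume group has free extents for the snapshot's copy-on-write data.
lvcreate --size 5G --snapshot --name pre-upgrade /dev/vg0/root
# ...run the upgrade...
# If it goes wrong, merge the snapshot back over the origin; the merge
# completes the next time the volume is activated (e.g. after reboot).
lvconvert --merge /dev/vg0/pre-upgrade
```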

I upgrade the system now and then (usually every 2-3 weeks) if I don't otherwise run into bugs. And so far I have not yet needed to

I use SSDs exclusively (I will never buy a spinning-platter drive again), and I would prefer the old packages to be hosted on a server somewhere instead of having to be cached on my drive. It seems more efficient for 12 GB on a server to serve hundreds of thousands of users than for each of those users to spend 12 GB caching their own packages.

That being said, I have never deleted anything from my pacman package cache so I could probably use the technique that you described. There are cases wher

Come off it, it's not *THAT* hard to run multiple versions of Python at once. In fact, it's mostly pretty easy. This is one reason why I use a proper IDE for my Python work now; it tends to handle multiple Python versions very well.
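As a rough illustration (a modern sketch using the venv module; which interpreters are installed is up to your system):

```shell
# Each interpreter reports its own version, so scripts can call a
# specific one instead of whatever "python" happens to resolve to.
python3 --version
# An isolated per-project environment bound to that interpreter
# (--without-pip keeps the sketch self-contained):
python3 -m venv --without-pip /tmp/py-demo-env
/tmp/py-demo-env/bin/python --version
```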

Clearly you have not tried what I said and you have no idea how Debian and Ubuntu repositories work.

It's more or less like this:

Maverick is released

The day after Maverick was released, the Natty repository was created. It contained an exact copy of Maverick.

New packages are imported from Debian and added to the Natty repository. These packages show up in the repository as they are added: 5 new packages today, 20 new versions tomorrow, a new kernel in 2 months, etc.

By replacing 'maverick' with 'natty' in your sources.list, you get updates daily, not just when natty is finally released (in fact the day natty is released you will not get any new update if you have been updating every day since the maverick release).
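As a concrete sketch of that sources.list switch (done on a throwaway copy under /tmp; a real switch edits /etc/apt/sources.list in place, and the mirror URL here is just the usual archive):

```shell
# Build a demo sources.list with two maverick entries.
mkdir -p /tmp/apt-demo
cat > /tmp/apt-demo/sources.list <<'EOF'
deb http://archive.ubuntu.com/ubuntu maverick main restricted
deb http://archive.ubuntu.com/ubuntu maverick-updates main restricted
EOF
# Point every maverick entry at natty instead.
sed -i 's/maverick/natty/g' /tmp/apt-demo/sources.list
cat /tmp/apt-demo/sources.list
# From here, "apt-get update && apt-get dist-upgrade" would track the
# natty development repository day by day.
```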

Do you understand that that is not a rolling release? That is development and testing.

In a rolling release you do not put alpha/beta/RC software there; you keep the latest stable versions of the software there. You get upgrades all the time: usually just fixes as the upstream adds them, and now and then the newest version when the upstream releases one.

Then there are the totally different [testing] and [unstable] branches in rolling release schedules as well. From those you get the GIT/SVN versions from the upstream. Th

If I understand correctly, debian unstable is usually the most recently *released* version from upstream. So, if you want the latest *stable* version from upstream, you need debian / ubuntu *unstable*. I use this in practice on a debian (stable) web server, with a few select web apps such as wordpress pegged to unstable. It's the only way to ensure wordpress / drupal etc are up to date without installing by hand. (In theory, debian p
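Pegging a package to unstable like that is normally done with apt pinning; a sketch, writing a demo preferences file (a real setup targets /etc/apt/preferences or a file under /etc/apt/preferences.d/), with "wordpress" as the example package:

```shell
# Write a demo apt preferences file: track stable by default,
# but let the wordpress package follow unstable.
cat > /tmp/apt-pin-demo <<'EOF'
Package: *
Pin: release a=stable
Pin-Priority: 900

Package: wordpress
Pin: release a=unstable
Pin-Priority: 901
EOF
cat /tmp/apt-pin-demo
```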

use this in practice on a debian (stable) web server, with a few select web apps such as wordpress pegged to unstable

For those reading along, be aware that while this may be workable for webapps (which are usually written in scripting languages), it can be a poor strategy for packages in general, because apps in unstable often pick up dependencies on unstable's versions of key libraries (this isn't as bad as it used to be due to the introduction of symbols files, but it's still an issue).


new versions of windows come out every 7-8 years

While your point in general is correct, you are exaggerating. Looking at the years of Windows releases (I could look at the months, but I CBA to and it doesn't change the overall point) and ignoring server releases:

Conventional series:
1.0: 1985
2.0: 1988: 3 years from previous release
3.0: 1990: 2 years from previous release
3.1: 1992: 2 years from previous release
95: 1995: 3 years from previous release
98: 1998: 3 years from previous release
ME: 2000: 2 years from previous release

In my personal opinion, a half-rolling release model would be a great idea. I want my base system (xorg/kernel/gnome or kde) to be as stable as possible. But why would anyone need to wait six months or use some PPA to get the latest version of Firefox/Chrome/GIMP/whatever? I was taking a look at Chakra (a KDE-oriented distro with Arch Linux roots) a few days ago and found their half-rolling release model to be extremely good. I hope to see something similar in other distros in the future.

The Ubuntu "rolling release" issue is critical for servers and corporate users, but not for individuals. For people with a handful of machines, a simple weekly or monthly cron job with aptitude or apt-get (i.e., with Debian or Ubuntu) will do. Besides, I don't think most standalone users will notice slight incremental changes in the kernel. Waiting for a cron job to pick up an incremental upgrade a few days after the fact won't matter at all for most individual users.
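Such a cron job might look like the following sketch; the script name and schedule are hypothetical, and the demo writes to /tmp rather than the real cron directory:

```shell
# Write a minimal upgrade script; on Debian/Ubuntu, dropping it into
# /etc/cron.weekly/ runs it once a week. (Demo path under /tmp only.)
cat > /tmp/cron-demo-apt-upgrade <<'EOF'
#!/bin/sh
# Refresh package indexes, then apply pending upgrades non-interactively.
apt-get update -qq
apt-get -y upgrade
EOF
chmod +x /tmp/cron-demo-apt-upgrade
```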