Even though the idea of formal alignment between the freezes of Debian and Ubuntu didn’t hold, there has been some good practical collaboration between the maintainers of key subsystems. There are real benefits to this, because maintainers have a much more fruitful basis for sharing patches when they are looking at the same underlying version.

Harmonization for Ubuntu 10.04 LTS and Debian Squeeze

I think this is where we stand now:

Component                  Ubuntu            Debian            RHEL      SLES
Kernel                     2.6.32 + drm-33   2.6.32 + drm-33   2.6.32    2.6.32
GCC                        4.4               4.4
Python                     2.6               2.6
OpenOffice.org             3.2               3.2
Perl                       5.10.1            5.10.1
Boost                      1.40              1.40
X Server                   1.7               1.7
Mesa                       7.7               7.7
Mono (thanks Jo Shields)   2.4-snapshot      2.4-snapshot

I’m sure there are inaccuracies, please help me keep this up to date, sabdfl on freenode is the best way to reach me. The RHEL and SLES numbers are third-hand, so up-to-date information would be appreciated.

The actual release dates of Ubuntu LTS and Debian will vary of course, because of different priorities. And there’s no requirement that the same base version be used for every major component – there may well be differences allowing for different approaches. But where we do have it, we’ll be able to collaborate much more effectively on bug fixes for key upstream pieces. If a lot of distributions pick the same base upstream version, it greatly increases the value of extended shared maintenance and point releases of that upstream.

Why every two years?

Two years is a compromise between those who want 1 year releases for better support of cutting-edge hardware and those who want 7 year releases so their software stack doesn’t change before their job description does ;-).

A whole-year multiple has several advantages. It means we can schedule the processes that are needed for collaboration at the same time of year whenever we need them – unlike 1.5 or 2.5 year cycles. Three years was felt to be too long for hardware support. Two years is perceived to be the Goldilocks Cadence – just right.

What are the criteria for choosing a common base version?

In both the Ubuntu and Debian cases, we’ll be making a release that we support for many years. So we looked for versions of key upstreams that will pass the test of time. Sometimes, that means they can’t be too old, because they’ll be completely obsolete or unmaintainable within the life of the release. And sometimes that means they can’t be too young. In general, it would be better to be reviewing code that is already out there. But there are also lots of upstreams that do a credible job of release management, so we could commit to shipping a version that is not yet released, based on the reputation of the community it’s coming from.

What if there’s no agreement on a particular kernel, or X or component-foo?

We will almost certainly diverge on some components, and that’s quite OK. This is about finding opportunities to do a better job for upstreams and for users, not about forcing any distro to make a particular choice. If anyone feels it’s more important to them to use one particular version than another, they’ll do that.

Open invitations

It’s really helpful to have upstreams and other distributions participate in this process.

If you’re an upstream, kick off a thread in your mailing list or forums about this. Upstreams don’t need to do anything different if they don’t want to; we’ll still just make the best choices we can. But embracing a two year cadence is the best way you have to be sure which versions of your software are going to be in millions of hands in the future – it’s a great opportunity to influence how your users will experience your work.

Of course, we’d also like to have more distributions at the table. There’s no binding commitment needed – collaboration is opportunistic. But without participating in the conversation one can’t spot those opportunities! If you represent a distribution and are interested, then please feel free to contact me, or Matt Zimmerman, or anyone on the Debian release management team about it.

I think this is a big win for the free software community. Many upstreams have said “we’d really like to help deliver a great stable release, but which distro should we arrange that around?” Upstreams should not have to play favourites with distributions, and it should be no more work to support 10 distributions than to support one. If we can grow the number of distributions that embrace this cadence, the question becomes moot – upstreams can plan around that cycle knowing that many distributions will deliver their work straight to users.

This entry was posted on Monday, March 15th, 2010 at 9:08 am and is filed under free software, ubuntu.

35 Responses to “2 year cadence for major releases: some progress”

I love the cadence idea. I think it would be really beneficial, but I also know that there are a lot of people who are more likely to ship bad software, and not advance Free Software as a whole, than to help Ubuntu.

This should provide maintainers with a simple, easy target to aim for with stable versions – seems like an excellent idea, but one which will take a long time to truly get off the ground.

On a side note, isn’t the kernel updated regularly in LTS, like it is in other releases? In which case is stating the exact version number at release actually very useful? Surely it’s a target that moves throughout the life of the release?

Cadence is the idea that different distros ship the same version so they work together on bugs. This is a good backup idea to the efficiencies gained from having a unified team maintain a package.

Tom, I disagree with your idea that people despise Ubuntu for no reason. We are fighting about important issues like freedom. And Ubuntu has suddenly become a wild success and the face of the Linux community so its decisions now matter.

I’m personally bummed that there is no Debian-Testing type tree in Ubuntu. I upgrade every 6 months and yet I’m always out of date.

I’m excited that the idea of a shared cadence has started to come to fruition–particularly because it’s completely opportunistic and optional. I think this has the possibility to really improve not just each distro but also the participating upstream projects as well. This is such a simple way to increase collaboration that I really hope it gains traction when possible or practical.

The problem is that you have two sets of users: those who scream their lungs out if you don’t ship the latest $VCS_DU_JOUR tip of everything (even if that breaks random stuff one day out of two), and those who don’t want anything to change, ever, even if it means cohabiting for 5 or 10 years with the selfsame bugs. There is a reason why there are different distributions, from “rolling release” like Arch, to Red Hat Enterprise Linux and SUSE Linux Enterprise Server with multi-year schedules, Fedora with its 6-month timing, and those with irregular releases like Debian. Your upstreams mostly work on a “we ship when we deem it ready” schedule (sure, larger components like Gnome and KDE do have a more timed schedule, but the bulk is smallish projects with a release branch only, not “legacy,” “release,” and “testing”). Synchronizing all of those is like herding cats. Plus, open source is still in large part the work of hobbyists, so imposing schedules (let alone one across the board) is difficult.

Sure, 6 months is on the short side to squeeze in testing of relevant changes, and 5 years means guaranteed obsolescence (and a mounting load of hard-to-do-right, and thus expensive, boring (back)porting of fixes). But each distribution and upstream package has its own agenda, which will determine its timing. The best that can be hoped for is a rough lockstep between major components for a given stretch of time. Live with it; nobody will be able to herd this particular set of cats.

Andy, the Ubuntu kernel has never received a full point upgrade after release. In other words, bug fixes may progress the version from 2.6.32-16 to 2.6.32-17 or 2.6.32-24 but not to 2.6.33. However, I believe the Ubuntu kernel team is considering changing that for Lucid based on testing, at least in backports.

There is a comment about “some alarm bells going off” at http://lwn.net/Articles/378600/. I think that perception has some skeleton behind it. I’ve been using Ubuntu at home and work for 3+ years, and it was the best thing I ever did for my laptop. But when I look at kernel changelogs, I see emails like @redhat.com, @novell.com, @ibm.com, etc. So until the big dogs agree on this, I doubt there will be any difference (the current pace of FOSS technology is already very good, I think; also, I hope they do agree ASAP). So I think the FOSS market is, at the end of the day, a market, and companies/groups will see you as much as your bone and meat in the market. I’m no kernel hacker, but for instance, until Ubuntu/Debian devs are among the top 7-8 kernel committers/hackers in total, we (Ubuntu/Debian users and devs) will mostly be followers in the kernel. So realistically there is some time before Ubuntu/Debian can initiate such cadence cooperations. I hope I will be proven wrong ASAP. Best wishes…


A Windows or Mac user is not stuck with an out-of-date version of OpenOffice just because they run an old version of Windows. So why do distributions force old versions of programs on people running old versions of the distribution? Even a newly released distribution can be versions out of date. That is not how it should be.

Question: why do distributions have to work on a timed release pattern at all for most applications? Release-when-ready is far more sane.

Think about how many packages could be packaged directly by the upstreams if there existed a common repo system between distributions. Security could still be handled.

Basically, why have alignment only once every 2 years? It makes no sense. Either you will have a usable ABI and install system between distributions for upstreams, or you will not.

Packaging is one area where distributions have refused to get along.

Part of the issue here for people needing new hardware support is that they need the new kernel and the parts that go with it. I have swapped kernels and X11 servers many times on Linux and the system still works.

The complete problem with distributions is the stupid idea that one size will fit all. Can you truly explain why Ubuntu could not ship with both the 2.6.32 and 2.6.33 kernels, with a system that lets the user choose which one works with their hardware? That would save stacks of back-porting of features from 2.6.33 to 2.6.32.

When 2.6.34 gets released another stack of hardware drivers is added, so again another group of users is locked out. If you say it might be unstable: true. But if the distribution does not run at all, it’s a paperweight. Multiple kernels side by side allow a gradual transition: try the new kernel; if it doesn’t work, report the bug and go back to the old one. The user still has an operational machine.

This applies across the complete software stack. Can I install multiple versions of Firefox from the package manager side by side? Can I install multiple versions of OpenOffice side by side? I can under Windows and Mac OS, but I cannot under Ubuntu and most Linux distributions. So when migrating, users are getting jammed. Say OpenOffice has a bug and will not open old versions of documents for some reason, yet the old version will not open the new ones. How do you cure that with your current package management? Users are forced to install manually; the current package management system fails.

Sort out package management and being synced disappears as an issue. Part of it might be making ld.so smarter as well, i.e. this program is this age, so it needs this version of the library.

Basically, if installed programs see all the correct libraries for them, the ones they have been tested with, they will work right. Does it really matter, other than disk space, if multiple different versions of libraries are installed so users can use their software? Of course, once applications are tested and proven to work with newer versions of the libs, the older ones could be done away with.

To make this work needs a major overhaul of the package system and ld.so: splitting dependencies away from packages so that dependencies can be updated without updating the package itself, and ld.so being able to process an application’s dependency list and use only what is on that list. That effectively does away with ln -s libc-2.xx.xx.so libc-6.so and the like in all lib directories, because ld.so would create associations to libs based on the application’s own dependencies. Of course a generic default can exist for unknown applications. Heck, a simple list format would do:

“libc-6.so” “libc-2.xx.xx.so”

Basically, static dependencies for applications based on what they have been tested with, ending the complete need to be perfectly in step. None of this automatically loading a newer lib and praying that it works.

Of course, some applications would have to be altered to allow being installed side by side with older versions.
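The per-application mapping described in this comment can be prototyped today without modifying ld.so, because the dynamic loader already searches LD_LIBRARY_PATH before the system library directories. A minimal sketch under that assumption (the application name, library, and paths below are all hypothetical):

```shell
#!/bin/sh
# Per-application library pinning, sketched with symlinks: each app gets
# a private directory whose symlinks point at the exact library versions
# it was tested with, and that directory is searched first at load time.

APP=myapp                                    # hypothetical application
PINDIR="${TMPDIR:-/tmp}/libpins-demo/$APP"   # per-app symlink directory

mkdir -p "$PINDIR"

# Map the generic soname to a specific tested version, in the spirit of
# the "libc-6.so" "libc-2.xx.xx.so" list format above. libfoo is made up;
# the symlink target need not exist for the mapping itself to be recorded.
ln -sf /usr/lib/libfoo.so.1.2.3 "$PINDIR/libfoo.so.1"

# Launching the app with its pinned libraries taking precedence would be:
echo "LD_LIBRARY_PATH=$PINDIR $APP"
```

A real implementation of the idea would still need ld.so itself to consult a per-application dependency list so the pinning survives outside a wrapper script; this sketch only shows that the loader’s existing search order gets part of the way there.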

I hope it’s not just wishful thinking. I got the impression that only Ubuntu wants to be part of this, or at least is the only one that tries. It’s really frustrating that every major distribution has different versions of the same software. So basically everyone in the Linux community is doing the same work twice. This especially goes for Ubuntu/Debian.
It would be fantastic if Debian freezes and the Ubuntu versions of the components most important to a good desktop were synchronized. That way we could get really stable LTS versions of Ubuntu, because each would basically be the same version as Debian Stable (once it graduates from testing).
The most important feature of a desktop system is the stability of the desktop itself. I really don’t care that much if Rhythmbox crashes (well, I do, but it’s a minor inconvenience), but I’m quite frustrated if Xorg starts using 100% of the CPU, or if the screensaver appears while I’m watching movies, or when my girlfriend switches to her account and, when I try to switch user back, a white screen sometimes appears.
These are the things that need to be stable, the basic desktop environment, and if possible at least the LTS versions should be synchronized with the Debian stable distribution when it freezes its packages for a new version.
I’m not a professional, but it seems to me that 3 components need to be synchronized if possible:
– xorg (most people really don’t care if it’s version 1.8.2 or 1.7.4; they need it stable)
– gnome
– stable nvidia/ATI drivers

And there is software that ALWAYS needs to be up to date, such as Firefox. It’s really frustrating using version 3.0 when 3.6 is available, for example. The browser is the most used software on any desktop system, and the improvements in newer versions are quite often major. Obviously you shouldn’t use the same version as the Debian freeze/stable here.

As for the rest of the software, I think the current policy is fine.
I haven’t used Windows at all for the last two years, but one major advantage is that you can pick an exe file and install the newest software version on your system without any problems.
I don’t know what the solution is, since I don’t quite understand what the problem is, but… if I could use an LTS version of Ubuntu and 2 years later install the newest version of some software that I use, that would make Ubuntu the perfect desktop.

To summarize, for a perfect desktop we (or at least I) need a very stable BASIC system/desktop, the newest browser, and the possibility to choose versions to our liking for the rest of the software.
I’m not sure if the third is possible, but the first two should be.

As for the server version: stability of software is most important, so a more conservative approach would be preferable. But I think that is more or less covered by LTS releases.
Also, ufw is great and easy to use, but it urgently needs an easy port-forwarding implementation (why is it not possible to use Firestarter code for that?).

Mark Shuttleworth: With more distributions participating, the incentive for Django to hit its deadlines would be that much greater. It’s no guarantee, but it certainly is motivational for upstreams when they realise that they can concentrate their energy into getting something done and get multiple hits of love from multiple distributions for the same effort.

SLES 11 runs 2.6.27. SLES 12 (which is still in the future) will have at least 2.6.33.
I personally don’t like the synchronization you are proposing. It tends to impose a “develop; release; rinse; repeat” cycle on development, and people will cut back on the combinations they keep working. In other words, if the major $nominal_distros (Debian, Fedora, SUSE, you name it) shipped, say, glibc 2.11, developers get lazy and don’t ensure it works with glibc 2.9 from some $enterprise_distro, for example. This is especially bad with binary-only releases made by some commercial vendors.

Another option could be a return to an “interesting” idea from before the Caldera threat: United Linux. On the cadence topic, opening the collaboration to the other “debuntu” derivatives might help, at least to align all Debian-based distros in a common effort.

In the case of Red Hat, given the patched source code and the open secret that RHEL is compiled with different tools than CentOS uses (does RHEL use free compilers? mmmm), I don’t see Red Hat or Fedora working together with the Ubuntu staff…

It would probably be more useful not to repeat your invitations to “unify the community” each time, but instead to work on a project on Launchpad to unify efforts, and let the community participate directly in those package issues. The community sometimes only reads, and waits for something more concrete, including Canonical’s expectations when a new version comes…

There’s no doubt that your wish is focused on not reinventing the wheel, and that’s an important challenge we must solve, as I said in my last article. I think the “no unification” process has two poles: the “ego” of being at an advantage, for some people, and the “need” to be solid in the mainstream support flow.

Another topic, moving into the desktop arena, is the “disruptive innovation” chasing a personal “look and feel”. The fingers of one hand are enough to count the truly “desktop oriented” distros (Xubuntu, Mandriva, PCLinuxOS, Unity, Sabayon, etc.). I think that going back to basics on that point is a great place to start with inter-distro collaboration: working toward the same base functions on a desktop, as Ene said.

The common collaboration must work as a standard, taking a common base not only in package versions; this common base must extend into the desktop interface, end-user usability, etc.

Like the LSB, there could be a PMSB (package maintenance standard base) to work on common problems across different distros and situations.