Posted
by
Unknown Lamer
on Monday May 23, 2011 @05:00PM
from the linux-millenium-edition dept.

An anonymous reader writes "With the Linux 2.6 kernel set to begin its 40th development cycle and the Linux kernel nearing its 20th anniversary, Linus Torvalds has expressed interest today in moving away from the Linux 2.6.x kernel version. Instead he's looking to change things up by releasing the next kernel as Linux version 2.8 or 3.0."

I know you're joking, but things like that have happened before, and I'm sure they'll happen again. As an example, back when Microsoft brought out IE 5, Earthlink's connection software jumped from Total Access 2 all the way to Total Access 5. I'm sure the marketdroids were highly impressed, but nobody else was, especially tech support.

No, it was done because the NT kernel in 7 is hardly different from that in Vista, so technically it was just a .1 increase. While they could have artificially inflated it, they actually did the right thing by making it 6.1.

2.6.x: still a stable kernel, but accept bigger changes leading up to it (timeframe: a month or two).

2.x: aim for big changes that may destabilize the kernel for several releases (timeframe: a year or two).

x: Linus went crazy, broke absolutely everything, and rewrote the kernel to be a microkernel using a special message-passing version of Visual Basic. (timeframe: "we expect that he will be released from the mental institution in a decade or two").

GNU Emacs went from 1.12 directly to 13 since the major number wasn't expected to change. Linux can probably do one better and go from 2.6.41 to 42, considering it is the ultimate answer to life, the universe and everything.

I generally change the minor number when something important has happened but things are still compatible. And after all of the effort that they've gone through to finally remove the Big Kernel Lock, I think they deserve a new minor version number. It really is a very different architecture inside the kernel now as compared to the start of the 2.5 series.

Because there is overlap in kernel development. 2.4 continued to be actively supported and developed long after 2.6 was released. If you went by release date, a 2.4.36 would look like a newer kernel than a 2.6.20.

Since 2004, after version 2.6.0 was released, the kernel developers held several discussions... ultimately Linus Torvalds and others decided that a much shorter release cycle would be beneficial. Since then, the version has been composed of three or four numbers. The first two numbers became largely irrelevant, and the third number is the actual version of the kernel. The fourth number accounts for bug and security fixes (only) to the kernel version.

The first use of the fourth number occurred when a grave error, which required immediate fixing, was encountered in 2.6.8's NFS code. However, there were not enough other changes to legitimize the release of a new minor revision (which would have been 2.6.9). So, 2.6.8.1 was released, with the only change being the fix of that error. With 2.6.11, this was adopted as the new official versioning policy. Later it became customary to continuously back-port major bug-fixes and security patches to released kernels and indicate that by updating the fourth number.
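The ordering described above can be sketched with a simple version parser (a hypothetical helper for illustration, not anything from the kernel tree): comparing the numeric components as tuples makes 2.6.8.1 sort after its base release but before the next minor revision.

```python
def parse_version(v):
    """Split a kernel version string like '2.6.8.1' into a tuple of ints.
    (A sketch: real kernel versions can carry suffixes like '-rc3',
    which this deliberately ignores.)"""
    return tuple(int(part) for part in v.split("."))

# Tuple comparison gives the expected ordering: the fourth number
# (bug and security fixes only) places a release between its base
# version and the next minor revision.
assert parse_version("2.6.8") < parse_version("2.6.8.1") < parse_version("2.6.9")
```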

Additionally, when you change the first (major) version number it usually means a significant rewrite, whereas changing the second version number means it's still mostly the same code-base, but with major features added/removed/rewritten.

Take from this what you will, but to say the version numbers are arbitrary is just plain ignorant.

When you write a kernel and have it installed on everything from phones to mainframes, you can decide what the version numbering means. Linus can decide tomorrow to call it Linux 666 and it will still be used.

Sure you describe a fairly typical situation, but not one that is anywhere near universal.

In some senses, you can also use a version # as "ye who shall go before this version number beware of a Grue". After 36 increments of the 2.6 series, I'm sorta for the "freshness" of a 3.0 series. Just to say that "here is our dividing line, we've made all this progress, let's lock it in."

I know there will be a little fuss, but thinking like a five-year plan, it's good sometimes to draw dividing lines marking that progress has been achieved.

With both RHEL6 and Debian Squeeze on their own versions of 2.6.32, as well as the last Ubuntu LTS 10.04, that version will effectively be the end of the 2.6 line for many places if version numbers jump like this. The kernel versions actively targeted by the -stable team [linux.com] are the only ones some people (including me) are interested in, and this cluster of distributions on 2.6.32 is a good thing in my book. The main thing in newer kernels I'm concerned about getting backported is the continued stream of fixes for weird ext4 bugs. Keep those coming, backport drivers for the most common hardware out there, and the rest of kernel development can march along without me being so worried about it. (Context disclaimer: I worry about PostgreSQL database servers for a living, so my customers are more paranoid about stability than most.)

The eventual release of btrfs is one of the things I'd be glad to see happen only in a kernel that's clearly labeled part of new, less stable development. New filesystems are one of the hardest things to get right, and there's no other class of bugs as likely to lead to major loss of data.

The eventual release of btrfs is one of the things I'd be glad to see happen only in a kernel that's clearly labeled part of new, less stable development.

Linus is not proposing to create a less-stable development branch. He's not proposing to change the kernel development process at all, just to change the numbering because the major number has become completely useless, the minor number has become somewhat useless and the sub-minor number (where all the action happens) is getting awfully big. Every kernel release will still be considered basically-stable, with the distros being left to do whatever final stabilization is needed.

Yes, that's what Linus will say. I was commenting on how businesses will perceive things, regardless of the technical message that comes along with it. "2.8.1" or even worse "3.0.1" will be considered toxic no matter how it's presented to business people. And with so many distributions lined up with the same kernel version right now, it's a decent time to do it; I think the sort of places that think like that will be happy with the currently available options until enough Linux version bumps have gone by that the new numbering no longer looks untested.

Btrfs is already in the kernel. Removing the experimental tag in the config item won't in itself cause any more instability.

The only reason I could see forking a new branch is if integrating btrfs required making changes to a higher layer in the kernel (which the filesystem drivers plug into). Changes contained within the filesystem driver don't impact people who don't use that filesystem, or enable support for it when building their kernel.

We use a dating system due to the large number of minor revisions, but keep the major revision number - so for instance,

1.2011.03.23.1 := the first minor revision of version 1 of the software, made on 23rd March 2011. It gives us the best of both worlds. We spent way too long coming to that solution, but it suits us well.
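A minimal sketch of generating such a version string (the function name and signature are made up for illustration):

```python
import datetime

def make_version(major, release_date, minor):
    """Build a version string like '1.2011.03.23.1': the major version,
    the release date, then the minor revision counter for that day."""
    return f"{major}.{release_date:%Y.%m.%d}.{minor}"

print(make_version(1, datetime.date(2011, 3, 23), 1))  # 1.2011.03.23.1
```

One nice property of this layout: because the date fields are zero-padded, plain string sorting orders releases chronologically within a major version.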

Maybe for Linux it would make sense to move the first minor revision to the left of the revision date. Outside of that, subdivisions become pretty meaningless anyhow.

I dated a cute minor revision once; she was sweet and all but had an inferiority complex. Then after she upgraded (plastic surgery), there were major incompatibilities ;). So how does that minor-revision dating work, do you have a dating web site? Or, when you have rules, because you do it quicker, is it called speed dating? And the major question: is Mark Zuckerberg involved in this version social network?

Those revision numbers of yours are huge. I think it'd be easier to remember them if you used alphabetics for the month (e.g. 1.2011.Mar.23.1), but they're still unwieldy. If you want to use a date, I'd much prefer an Ubuntu-style convention. Whether or not you like Ubuntu, the version numbering is very sensible -- enough digits to tell you what you need to know, but few enough that it's very easy to remember. Actually, it might be even better just to use YY.V, where V is incremented with each release.

If they go with 3.0, I hope they include major changes in it. Otherwise, what's the point?

Well, think of how far the kernel's come since 2.6.0 (let alone the original 2.0, fifteen years ago) and a big jump may make some sense.

There have been other changes as well. For instance, they abandoned the even/odd scheme for "stable" vs. "development" kernels when they started v2.6. So this next big increment to the version number will be the first "stable" version without a dedicated "development" version at a neighboring number. A change in the version numbering scheme is also a good reason to bump up the major number.

Be careful: when HAL reached version 9000, it became sentient... we want to go slowly with the version numbering... that said, I welcome our omniscient Linux overlords as long as they don't kick my butt, but act nice and kiss it instead :)

When I said there won't be any major changes, by the way, I meant no more than in every single new release.

The kernel changes immensely every release, we just don't notice it because the version number change seems so minor, and because it remains so thoroughly backwards-compatible. But each release of Linux includes probably 20,000 patches.

Noooo! Fix your damn filesystem! It's bloody annoying when a developer on Windows commits a file in the wrong case, which of course works on NTFS. Then follows the merry-go-round of deleting the file from revision control and re-adding it under the correct case, e.g. certain MS software saving extensions in all uppercase.

As much as I dislike Windows... what purpose does a case-sensitive file system serve? It just confuses people.

Well, for starters it would allow the OS to be properly compatible with systems and software that use case-sensitive file storage. :) (Yeah, kind of circular logic there.)

I think you have a reasonable point there - but it's mostly something that can be dealt with at the application level. Like if you're typing a filename in a file dialog, the UI can do a case-insensitive match regardless of the underlying filesystem. The OS doesn't need to prevent creation of files whose names differ only in case to provide that.
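A sketch of that idea, assuming a hypothetical file-dialog helper: the UI matches a typed name against the directory listing case-insensitively, while the underlying filesystem stays strictly case-sensitive.

```python
def match_filename(typed, names):
    """Return the entry in `names` that equals `typed` ignoring case,
    preferring an exact (case-sensitive) match if one exists.
    A UI-layer sketch; the filesystem itself never sees this logic."""
    if typed in names:
        return typed
    lowered = typed.lower()
    for name in names:
        if name.lower() == lowered:
            return name
    return None

# The dialog can resolve 'readme.TXT' to the real file without the OS
# ever forbidding names that differ only in case.
print(match_filename("readme.TXT", ["README.txt", "notes.md"]))  # README.txt
```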

There's also a much larger issue: simply treating uppercase as equivalent to lowercase is fine for English, but for international languages, providing that kind of feature gets you into issues of Unicode normalization. Japanese gets you a good collection of degenerate cases: for instance distinguishing between filenames in hiragana, katakana, half-width katakana, kanji (of which there may be multiple equivalents)... I expect other East Asian languages contain similar challenges. I don't know about other languages... But shouldn't all those filenames be equivalent, too? Is that a problem that's not solved just because it's harder to solve? Isn't that disparity a bit awkward?
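The katakana case above can be demonstrated with Python's standard library: NFKC compatibility normalization folds half-width katakana into the full-width forms, but deliberately does not unify hiragana with katakana, even though a user might consider both spellings "the same name". That asymmetry is exactly the slipperiness being described.

```python
import unicodedata

halfwidth = "ｶﾀｶﾅ"    # half-width katakana (U+FF76 etc.)
fullwidth = "カタカナ"  # full-width katakana
hiragana = "かたかな"   # hiragana spelling of the same word

# NFKC folds half-width katakana into the full-width forms...
assert unicodedata.normalize("NFKC", halfwidth) == fullwidth
# ...but hiragana and katakana remain distinct scripts to Unicode,
# so no normalization form treats these two spellings as equal.
assert unicodedata.normalize("NFKC", hiragana) != fullwidth
```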

Again, it seems to me that the place to address the issue is in UI, in response to user input - not at the underlying file handling calls. If the user searches for a filename - it's fine if there's multiple matches, and appropriate to return matches based on what the software "thinks" the user intended. And UI already does this to some extent. If you're typing in a filename to load, the UI can display approximate matches. File dialogs for save are very similar to those for loading - so again, you'll see, as you're typing, if there's a naming clash that could confuse you later. So why the ham-fisted rule of "no filenames which differ only in case"?

To take it a step further - do filenames even need to be unique any more? Windows UI has hidden filename extensions by default for years. So you could have two files "with the same name" (apparently, anyway) in a single directory already. If you're going to do that, I think it may be time to start letting go of the idea that filenames are unique. There's been a trend toward identifying files by metadata anyway - content indexing, tagging, and so on. Certainly traditional filesystem calls depend on filename uniqueness - but at the UI level, is it really still an appropriate restriction?

If you can run Crysis on a Linux system, detail exactly what hardware: video, RAM, etc., by UPC number. The exact flavor of Linux (strawberry, pony, salmon), with detailed configuration, and what fucking technogod you prayed to. Saint Vidicon just ain't cuttin' it at my end. ;)

so many that there are several commits that have exactly the same SHA1 hash, so we're hitting SHA1 collisions.

Really? That seems somewhat wrong to me. I assumed that the hash associated with a commit was based on its content and the commit comment, so for there to be a collision more than one user must have pushed *exactly* the same commit to a repo with exactly the same comment. Otherwise a SHA1 collision is mathematically very unlikely. Unless you are referring to the shortened forms often used (I've seen people quoting as little as the first six hex digits of a commit when talking about the history of a small repo).
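Git's object IDs really are derived purely from content: the SHA-1 is taken over a small type/size header plus the object's bytes, so identical content always yields the identical hash. The "collisions" people notice are almost always abbreviated IDs sharing a short hex prefix, not the full 40 digits. A quick check against the well-known hash of the empty blob:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """Hash bytes the way git hashes a blob object: SHA-1 over
    the header 'blob <size>\\0' followed by the content."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The famous ID of an empty file, as reported by `git hash-object`:
print(git_blob_hash(b""))  # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391

# Abbreviated IDs like this one are where apparent "collisions" come from.
print(git_blob_hash(b"")[:7])  # e69de29
```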