Posted
by
timothyon Sunday November 03, 2013 @10:22PM
from the in-time-for-guy-fawkes-day dept.

An anonymous reader writes "Linus Torvalds announced the Linux 3.12 kernel release with a large number of improvements through many subsystems including new EXT4 file-system features, AMD Berlin APU support, a major CPUfreq governor improvement yielding impressive performance boosts for certain hardware/workloads, new drivers, and continued bug-fixing. Linus also took the opportunity to share possible plans for Linux 4.0. He's thinking of tagging Linux 4.0 following the Linux 3.19 release in about one year and is also considering the idea of Linux 4.0 being a release cycle with nothing but bug-fixes. Does Linux really need an entire two-month release cycle with nothing but bug-fixing? It's still to be decided by the kernel developers."

There have been so many fast and furious features added over the last couple of releases, not only to the kernel but also to the various and sundry major components (like systemd), that taking a breather isn't going to hurt anything. There is nothing huge waiting in the wings that everyone needs next week.

I don't know how you can honestly say that there's "nothing huge waiting in the wings that everyone needs next week." You must not understand the current operating system market.

THERE IS BALLS TO THE WALL COMPETITION RIGHT NOW!

The moment the Linux community rests on its laurels, even if just to fix some "bugs" that don't even exist, the competition from Windows and OS X will intensify to an extent that we haven't seen in ages.

Look, Windows 8.1 was just released, and it's a game-changer. It makes the Windows 8 stream a viable option for businesses and home users alike. Windows 8.0 was like Vista was; Windows 8.1 is like Windows 7. Windows 8.0 tried some things out, and some of those were mistakes. Windows 8.1 remedies these, and the result is a powerful, usable operating system.

OS X 10.9 Mavericks was just released recently, too. It took what was perhaps the most popular and widely used Unix-like system and made it even more efficient and powerful.

Then there's Linux. There are major changes underway as we speak. The Ubuntu and GNOME 3 communities, which were once among the largest and most appreciated, shat upon the faces of their users, causing them to seek refuge in other distributions and desktop environments. Now we have Wayland on the way, and it's going to bring so much disruption that there may in fact be a civil war of sorts within the Linux community. X is not going to die easily! And then there's LLVM and Clang, which are kicking the living shit out of GCC. In fact, this is a revolution that we haven't seen the likes of in years.

With so much turmoil in the userland software, it's now up to the kernel to pick up the slack. We're going to need to see the kernel team at least double their efforts to make up for the stupidity of the GNOME crew, for example. We're going to need to see a kernel that offers greater power efficiency on modern systems. We need to see a kernel that'll offer even better process and thread scheduling. We'll need to see a kernel that can scale from the smallest cell phones to the largest clusters. We need to see the completion of Btrfs.

Never forget that when it comes to operating systems, the BALLS ARE TO THE WALL! This is more true today than ever before. The competition is fierce, and prisoners will not be taken. When there is BALLS TO THE WALL competition, everybody involved needs to bring their best. This includes the Linux kernel developers. They need to be the best they've ever been. This is no ordinary situation; this is a BALLS TO THE WALL situation. And don't you ever forget that!

Not really. Mavericks did some really cool stuff under the hood. Timer-coalescing, "App Nap", and compressed memory are all pretty big. Take a look at the relevant sections of the Ars review to see what I mean.

Linux has had compressed memory for quite some time, originally as Compcache and now as ZRAM. I have managed to use it on low-memory systems even today to get more work done faster. I'm not saying this to attack OS X, but rather to point out that equivalents already do exist. Also, I remember when a company (Quarterdeck?) offered a product for DOS/Windows called "RAM Doubler" that did the same kind of thing.
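For anyone curious, this is roughly what setting up a zram swap device looks like on a reasonably recent kernel. The device name, size, and swap priority here are illustrative, and the commands require root and a kernel built with zram support:

```shell
# Load the zram module and create one compressed block device
modprobe zram num_devices=1

# Give the compressed device 512 MB of (uncompressed) capacity
echo 512M > /sys/block/zram0/disksize

# Format it as swap and enable it at high priority, so the kernel
# prefers compressed RAM over any disk-backed swap
mkswap /dev/zram0
swapon -p 100 /dev/zram0
```

On a low-memory box this means the kernel compresses cold pages in RAM before it ever touches the disk.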

No one is claiming that compressed memory is unique to Mac OS X. However, it is a new feature that it didn't have before, and it has drastically improved the performance and power consumption for OS X users.

Essentially it means that the OS can keep from using swap. Spinning the drive up and remaining awake to page in and out takes a lot longer (and thus the CPU, disk, etc. must be powered up and in an active state for far longer) than compressing the memory and staying off swap. The compressed memory isn't the only power saving feature Mavericks has obviously, but it does contribute to keeping parts of the system asleep as much as possible to save power.
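The scale of the win is easy to demonstrate: idle or redundant pages compress extremely well, so keeping them compressed in RAM costs almost nothing compared with a round trip to storage. A quick shell illustration, using gzip as a stand-in for the kernel's much faster in-memory compressor:

```shell
# Compress 1 MiB of zero-filled "pages" and count the output bytes.
# Highly redundant memory shrinks by orders of magnitude, which is why
# compressing it in place beats waking the disk to page it out.
head -c 1048576 /dev/zero | gzip -c | wc -c
```

Real memory is less compressible than zeros, of course, but lz-class compressors still routinely halve typical anonymous pages at memory-like speeds.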

Oh, and by "spinning the drive up" I include SSDs. An SSD obviously has no moving parts, but waking it up, reading/writing data, and remaining awake while waiting for the SSD (which is hundreds of times slower than RAM) still uses power.

RAM Doubler?! Are you talking about the program that added more swap and reported your real RAM plus swap as the total? The one that reverse engineering showed didn't even try to compress anything?

Yep, I used it for a few weeks and confirmed that it didn't really do anything.

No. Mavericks has a huge number of improvements with the VM subsystem (compressed memory to avoid swap at all costs for better performance and power consumption), timer coalescing, etc. I am seeing a "no bullshit" battery life improvement of 15-20 percent on my 2011 MacBook Pro 15" - and improved performance.

Mavericks is the biggest improvement in OS X performance since Snow Leopard.

No, it should be modded exactly what it is. The whole post is a rant about how competitive Desktop Linux has to be with OS X and Windows 8.1, but it fails to address how Linux kernel development relates to Desktop Linux in a way that makes it more competitive with OS X or Windows 8.1, or how fixing bugs in the kernel makes it any less competitive. If there were a "-1, Upvote Bait" mod, I'm sure that's what it would get, seeing as it is a mostly thoughtless dissenting opinion.

Userland desktop UI development has little to nothing to do with the kernel. What do you expect the kernel devs to do to make up for GNOME? Where is the kernel deficient on modern hardware? Btrfs is meant for large multi-disk arrays, hardly something you see on typical user desktops.

The reason you were downmodded is because you don't know what you're talking about.

It's still a shitbox. Windows 7 wasn't much different from Vista, but it magically got rave reviews. It makes me wonder about the validity of a lot of these review sites. Were they paid off, or did they just hop on the bandwagon to get hits?

iPhones are for hipsters. OSX is certified UNIX running on rock-solid, high-performance hardware. Don't confuse the two.

I used Linux exclusively for fifteen years. I've contributed to many open source projects, including the Linux kernel, and I'm the maintainer of Linux::LVM and other projects. In other words, I'm a fan of Linux. From one fan of Linux to another, don't dismiss OSX just because the same company makes overpriced toys as well. It's a solid UNIX which will run all of your favorite FOSS software, and do it well.

Pretty much that. I'm a Solaris, FreeBSD and Linux user, and OS X is "worth it", to have a well supported UNIX on really nice hardware that "just works". OS X does pretty much everything a Linux box can/will do, you can get right down into the technical low level stuff if you want, but if you just want to do something simple, it is simple to the point of being virtually automatic.

If you haven't spent much time with OS X and are just judging it based on the Aqua UI fluff, you're making a huge mistake.

It's not a UNIX if it doesn't ship with a C compiler. End of story. I mean, you can take a motorcycle and add a roof and some gyro-stabilizing stuff and have it certified by the NTSB/whatever as a car, but it doesn't meet people's expectation of what a car is, and that's the only definition that matters in the real world.

Add in the mess that is ports and the hours you have to spend to get a decent environment for almost any programming language, and it's pretty far off fitting my definition of a UNIX at all.

Q0. What is the Single UNIX Specification?
The Single UNIX Specification is a set of open, consensus specifications that define the requirements for a conformant UNIX system. The standardized programming environment provides a broad-based functional set of interfaces to support the porting of existing UNIX applications and the development of new applications. The environment also supports a rich set of tools for application development.

Could you please explain to me again how failing to include a C compiler fits with the above statement? I realize that OSX has gotten the stamp of approval (TM) of the Open Group, and that it is technically a UNIX. But it doesn't fit with the common expectation of what a UNIX does.

To be fair, Python 3 isn't quite mature enough to switch over from Python 2. It is now better with Python 3.3 and the reintroduction of some language features that make it easier to port the Python 2 programs up to Python 3. I wouldn't make much of an argument about Apple not moving to Python 3 by default. In fact, I would have been more concerned if they had.

Python 3 can be downloaded from the webs or even better installed with Homebrew.

OSX is indeed a real Unix... but the user world has moved on to Linux, where things don't just work but are also easy to set up and control.

If you are used to a Linux install, going to OSX will be a shock. For one, there is no native package system; you will have to install one of several on your own, and they are not nearly as reliable as, say, the Debian system. It is doable, but for younger people it pays to remember that there is a reason UNIX never took off. That BSD never took off. Linux (the whole ecosystem) did far more than make a Unix-compatible system; it made Unix usable for the average geek. The "real" Unixes were bastards to work on, with each system just different enough to not be compatible at all. Not like the way you can google a Red Hat fix and apply it to Ubuntu, or the way an Arch Linux wiki page is useful to a Gentoo user.

OSX made Unix usable for the average hipster, but crossing from geek Linux to once-touched-a-girl OSX will be a shock: just how many things are different, and just how much of OSX overrides the Unix way of doing things.

You can run FOSS software, but it is NOT as easy as with a Debian system. Before you buy an OSX machine to replace your Ubuntu install, get an OSX user to showcase their FOSS capabilities. Let them show you how they install an Apache upgrade not yet released by Apple. Then go and hug your Ubuntu box and swear you will never ever look at another system again.

You must be insane. Running your own web server from a Mac is as easy as going to System Preferences and activating Web Sharing. While you are at it, I honestly beg you to try the Internet Sharing checkbox and then choose to broadcast the Wi-Fi signal you are using over Bluetooth to another device. Try to do the same in Linux and let me know if it is a walk in the park. Hell, forget about that: using Ubuntu, try to install a Wi-Fi card that does not have a driver precompiled by Ubuntu and, again, let me know how that goes.

I think Ubuntu has done a fantastic job at making Linux easier, but don't kid yourself: the closed-source OSX is vastly superior, and just taking a look at their App Store should remind you of that. That being said, I will always use Linux for work stuff, especially servers, where it really excels, but even in that field Linux is getting a run for its money from OpenBSD. That BSD that you said never took off. Once you realize how awful iptables is compared to what OpenBSD gives you, I believe you will stop bullshitting yourself about the supposed virtues of a holy grail system.

TBH the biggest problem I'm seeing in the wild with the latest software from Apple, Microsoft and Google is the lack of sensible exception handling.

In the old days, if something broke you got an error message telling you that something broke and giving you enough information to figure out what (hell, even if it was just "Error 2312 happened" you could at least look it up). Then they (primarily Apple, it seems, but the others are not blameless) decided that telling people what broke isn't user friendly, so you got totally unhelpful "something broke" error messages with no indication as to what. Many times I've had to trawl through a tcpdump capture to figure out what went wrong, and often it's that the remote server returned an error message. Giving the user an easy way to see that error message would be really good!

Now, increasingly, I'm seeing new software simply not producing any error messages at all. It just sits there looking like it's waiting on a remote server or something, when in fact it's doing nothing because the remote server threw an error back. Add to that the fact that a lot of software is now becoming an asynchronous background service, and you don't even know *when* it's trying and failing; all you know is it just isn't working (stuff like iCloud: all you know is that your calendars / files / whatever aren't syncing, with no indication as to why or when it failed).

I get that the majority of people aren't going to *personally* find debugging information useful, but when they take it to a professional to figure out why it isn't working, it would be damned helpful for the professional to be able to get at some information about what's going on. If you want to keep the error dialogue boxes tidy, just hide the debugging information behind an "advanced" button.

It's a disgrace. I also can't believe that Microsoft still haven't given us a way to copy and paste error messages from dialog boxes when they do bother to produce an error message.

My favorite is something along the lines of "An error occurred, please contact your system administrator" and I'm left thinking "ok, I am my system administrator and I have *no clue* what the error is".

I like the Windows Update ones where it gives you a hex code with the message "an unknown error has occurred". If you know enough about it to give it a code then how can it possibly be an "unknown error"? My first senior programmer would have beaten me with a deck of punchcards for doing something like that. Lazy kids today.

It could be very useful to have the code stabilize for a bit, put it through regression tests, do some auditing, maximize use of static code checkers, and fix the problems. I hope they seriously consider it.

I wish KDE and Gone would do exactly the same thing, and ideally at the same time. In general, everything's pretty stable, but there's always one little bug that everybody knows about that interferes with their workflow. Imagine if we got to a state where almost all of those were gone.

I'm thinking you meant to say "Gnome", not "Gone", but I have to admit, as a typo, it makes one hell of a Freudian slip. I won't say I wish Gnome was Gone, but I do wish the Gnome team would restore some user option control, and even extend it. Gnome has pruned a lot with version 3 and the 3.X's, and I would even think joining in the big bug-fix movement should be a lower priority for Gnome than for KDE or any of the others. A massively less buggy version of a still heavily restricted Gnome would still be a heavily restricted Gnome.

All projects slowly accumulate those hard-to-fix bugs, or the "maybe later" bugs, or the "not interesting right now" bugs. Periodically, every project needs to have that cruft cleaned up.

Spending two months fixing those bugs might be a minor annoyance to some of the kernel maintainers but would be a godsend to people who have been waiting a very long time for low priority and low interest kernel bug fixes.

All projects slowly accumulate those hard-to-fix bugs, or the "maybe later" bugs, or the "not interesting right now" bugs. Periodically, every project needs to have that cruft cleaned up.

In my experience, many of those are esoteric bugs that affect one or two people in weird situations, perhaps with a custom kernel patch applied (i.e. method works correctly unless you mod calling code to pass an otherwise invalid parameter). I wonder what the breakdown is between bug types and severity.

Spending two months fixing those bugs might be a minor annoyance to some of the kernel maintainers but would be a godsend to people who have been waiting a very long time for low priority and low interest kernel bug fixes.

I agree, sometimes it is good to clean up even the low-priority bugs which impact a small number of total use cases but could be huge: imagine if there were some "minor" bugs which impact embedded devices such as cable routers. For my home file server the bug is nothing, but it could cause a security nightmare for someone who runs custom router software. Linux is too useful in too many places to ignore this many bug reports.

The other issue with "small number of users" bugs is that it's hard to determine how small they are. The bugs you see are just the ones someone could be bothered to report (or in the kernel's case, were eventually percolated up from users through distros as kernel issues).

Develop Linux like Intel develops CPUs: first you make a new shiny, then you do an entire release on improving that shiny. Rinse and repeat ad infinitum. Even better if you have two competing teams working on it. Whichever team comes up with the better product by launch time gets the nod.

Pretty much the way FreeBSD does it. You have -CURRENT (at the moment, v10), which is the bleeding edge; you have the current -STABLE, which is where most of the stability/bugfix stuff shakes out; and then you have the previous -STABLE release, which is for those who are extremely conservative.

Develop Linux like Intel develops CPUs: first you make a new shiny, then you do an entire release on improving that shiny. Rinse and repeat ad infinitum.

So, how it used to work in the 2.2/2.4 days? And they rejected that?

Even better if you have two competing teams working on it. Whichever team comes up with the better product by launch time gets the nod.

Ah, internal competition: a fine strategy from the management manual, but a terrible, terrible idea in practice that fosters resentment and animosity and stops cooperation. What do you think the team that fails is going to do? Say "ah, never mind", or get frustrated, go off and do something else with their lives, and never contribute again?

RHEL cherry-picks bug fixes and even features into their "stable" kernel. Last time I checked, they applied ~100MB of patches compared to vanilla for their version. I'd much rather have a vanilla upstream than the potential mess that they're introducing. You run into kernel issues that none of the other (non-RHEL-based) distros do.

A valid point. However, the company that absolutely had to turn its nose up at APT and develop a creature that demands updates from the Internet for a simple repository Search operation (and boy is it painful to get out of if you're not connected...) has seen fit to conserve kernel version numbers for posterity.

All released Linux versions tried to be bug-free; that alone is nothing big enough to deserve a whole new version number for 4.0. But probably this "bug fix" goes beyond the normal scope. It must not just work, but work in a hostile environment where governments with plenty of resources try to exploit any "more or less works" vulnerability to plant backdoors and snoop, and where hardware, firmwares (the methods that #badBIOS [erratasec.com] could use to spread are one example), internet protocols, and encryption algorithms are not so trustworthy anymore.

One of the most frustrating things for me is that the frenzy over the past six or seven years has led to some serious annoyances with the kernel's behavior:
1. Linux kernels for i386/x86 can't boot in less than roughly 28MB of RAM. I have tried to make it happen, but the features added along the way don't allow it. Perhaps it's the change to ELF? I'm not sure.
2. Linux x86 can't have the perf subsystem removed. It's sort of pointless for a Turion 64 X2 or a Core i3, but for systems with weaker processors (netbooks, embedded, etc.) every single evicted cache line counts.
3. Some parts of the kernel seem to be dependent on other parts almost arbitrarily. I once embarked on a quest to see what it took to discard the entire cryptographic subsystem. Long story short: good luck. I was surprised at how many different hashing and crypto algorithms were required to make use of common hardware and filesystems and network protocols. Are all of these interdependencies really necessary?
4. The help text for lots of kernel configuration options is in SEVERE need of updating and clarification. Most of the network drivers still say roughly the same thing, and some of the help text sounds pretty silly at this point.
5. Speaking of help text, why doesn't the kernel show me what options are forcing the mandatory selection of a particular option? For some, it's simple, but try hitting the question mark on CRC32c and you get a disastrous and impossible to read list of things that force the selection of that option. The help screen should show an option dependency tree that explains how the option in question was forced.
6. ARM is still a disaster. I have a Motorola Triumph I don't use anymore, but I wanted to build a custom system for. It uses a Snapdragon SoC and the only kernel I can use with it is a 2.6 series kernel from Motorola (or derivatives based on that code base) with lots of nasty deviations from the mainline kernel tree that will never make it into said mainline tree. I have a WonderMedia WM8650-based netbook that originally came with an Android 2.3 port and I can't build anything but the WonderMedia GPL compliance kernel release if I want to use most of the hardware in the netbook, even though general WM8650 support exists in mainline. Something needs to change to make it easier for vendors to bring their drivers and SoC specifics to mainline so that ARM devices aren't permanently stuck with the kernel version that they originally shipped with.
I'm still using a VIA C7-M netbook which suffers heavily due to the tiny on-chip caches. I also have a Fujitsu P2110 with a Transmeta TM5800 CPU that makes my VIA look like an i7. I also own Phenom II servers, AMD A8 laptops, MIPS routers, a Raspberry Pi, and many Android devices I've collected over the years.

What I've seen is that the mad rush to develop for every new thing and every new idea results in old hardware being tossed by the wayside and ignored, especially when that hardware isn't based on an x86 processor. Even then, I'm sure that this frenetic, rapid development process has resulted in a lot of unnecessary bloat and a pile of little unnoticed security holes. It may be time to step back and stop adding new features. I would like to see the existing mainline kernel become much more heavily optimized and cleaned up, and then see the inclusion of support for at least some of the embedded platforms that never managed to make it back into mainline. I know that this is an unrealistically broad set of "wants," but I also know that these are the big nasty unspoken problems in the Linux world that there are no easy answers for.

I once embarked on a quest to see what it took to discard the entire cryptographic subsystem. Long story short: good luck. I was surprised at how many different hashing and crypto algorithms were required to make use of common hardware and filesystems and network protocols. Are all of these interdependencies really necessary?

Rather than just asking if they are necessary, the better question to ask is what are they using the cryptographic subsystem for? For example, BTRFS does checksumming and offers compression. EXT4 uses CRC32 as well. And that use isn't arbitrary, they use it to protect data integrity and, in the case of BTRFS, maximize use of disk space. The TCP/IP stack offers encryption. These requirements aren't arbitrary, they pull it in to accomplish a specific goal and avoid duplicating code.
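The dependencies aren't hidden, either: the kernel's Kconfig files record every forced selection, so in a source tree you can run something like grep -rn "select CRYPTO_CRC32C" --include="Kconfig*" . to list every subsystem that drags a crypto option in. A self-contained sketch of the same search, run against a toy entry modeled on ext4's Kconfig (the sample path and contents here are illustrative, not copied from a real tree):

```shell
# Write a toy Kconfig entry modeled on fs/ext4/Kconfig
cat > /tmp/Kconfig.sample <<'EOF'
config EXT4_FS
    tristate "The Extended 4 (ext4) filesystem"
    select CRC32
    select CRYPTO
    select CRYPTO_CRC32C
EOF

# The same search you would run across a real kernel tree:
#   grep -rn "select CRYPTO_CRC32C" --include="Kconfig*" .
grep -n "select" /tmp/Kconfig.sample
```

Each "select" line is a hard dependency: enable the filesystem and the crypto code comes with it, which is exactly why ripping the crypto subsystem out is so hard.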

ARM is still a disaster.

And it will continue to be so long as every ARM device is its own unique thing. There might be forward progress with AArch64.

I have a Motorola Triumph I don't use anymore, but I wanted to build a custom system for. It uses a Snapdragon SoC and the only kernel I can use with it is a 2.6 series kernel from Motorola (or derivatives based on that code base) with lots of nasty deviations from the mainline kernel tree that will never make it into said mainline tree.

Probably lots of board-specific details (the board support package) that have no relevance in the mainline kernel. x86(-64) and other architectures have the advantage that once processor support is added, support for every motherboard that CPU gets plugged into is virtually guaranteed. x86 would have the same problem as ARM if not for the use of things like ACPI, PCI, and the various hardware-reporting formats supplied by legacy BIOS/UEFI.

I have a WonderMedia WM8650-based netbook that originally came with an Android 2.3 port and I can't build anything but the WonderMedia GPL compliance kernel release if I want to use most of the hardware in the netbook, even though general WM8650 support exists in mainline.

You'll have to blame WonderMedia. Barnes & Noble, Amazon, etc. all do the same thing: a baseline GPL-compliance release. Chip vendors do the same, releasing only what is necessary and not bothering to integrate upstream. This is no small part of why vendors abandon Android devices so rapidly.

Something needs to change to make it easier for vendors to bring their drivers and SoC specifics to mainline so that ARM devices aren't permanently stuck with the kernel version that they originally shipped with.

Something does need to change, however that something is not in the kernel.

I also have a Fujitsu P2110 with a Transmeta TM5800 CPU that makes my VIA look like an i7. I also own Phenom II servers, AMD A8 laptops, MIPS routers, a Raspberry Pi, and many Android devices I've collected over the years. What I've seen is that the mad rush to develop for every new thing and every new idea results in old hardware being tossed by the wayside and ignored, especially when that hardware isn't based on an x86 processor.

And virtually all of that is still supported, with the ARM caveat noted above. Even the Transmeta CPU is still supported. What ends up happening is that the world moves on, and older hardware passes into history and receives less attention.

4.0 should consist of the following: the ability to figure out the hardware it is installed on, and then an automated optimization and recompiling process for that hardware, à la Gentoo, with a bloated fallback option in case of failure. Realistically, how often have ANY of you ever changed a bus, processor, network card, drive controller, or other hardware - especially on boards with much of that built in?

Just compile drivers/extra features as loadable modules and get on with your life? The whole obsession with recompiling the kernel and stripping things (rather than just building them as loadable modules) is, for 99% of users, just making work for yourself for the day you discover that "oh, crap, this software I'm trying to use needs the frumble-mumbo kernel feature".

I've run into this before, and I've gotten modern (late 2.6) kernels running on systems with 8MB of RAM. I have not tried with 3.x, and it's difficult to get the kernel size under 3 or 4MB these days. Under "Processor type and features", try disabling the "Build a relocatable kernel" option and setting CONFIG_PHYSICAL_START (shown in menuconfig as "Physical address where the kernel is loaded") to a value less than the default 0x1000000 (16MB). This is a "worked for me" solution.
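In .config terms, those two settings look roughly like this (the load address is just an example; treat this as a sketch, not a tested configuration):

```
# Don't build a relocatable kernel; place it low in physical memory
# CONFIG_RELOCATABLE is not set
CONFIG_PHYSICAL_START=0x400000
```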

You're complaining that it's not easy to compile your own kernel? I am simultaneously both kind of sympathetic, and not. What is the use case that the average-to-slightly-power-user needs to compile their own kernel for, anyway? (I am actually curious. Hardware support?) And if you're a legit power-user, shouldn't you already know more or less how to do it?

On the other hand, documentation always sucks. ALWAYS. Which is NOT to say that we shouldn't try to make it better.

Nowadays it is extremely difficult to compile your own custom kernel without tripping into a clusterfuck of dependency hell (usually through no fault of your own; i.e., I know what I am doing).

If you end up there, then I doubt the "I know what I am doing" bit. If you're building your own kernel, the best ways to do it are either make oldconfig if you have a known-good config, or make localmodconfig if you're running a pre-built kernel and want to enable only what's currently loaded. I'm not sure how you end up in "dependency hell" when building the kernel, because Kconfig will autocorrect missing dependencies.
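For reference, the usual starting points look something like this (the commands assume you are in a kernel source tree; the /boot config path is typical for major distros but not universal):

```shell
# Start from the running kernel's known-good configuration
cp /boot/config-"$(uname -r)" .config
make oldconfig        # prompts only for options new to this tree
                      # (or `make olddefconfig` to accept the defaults)

# Alternatively, keep only drivers for modules currently loaded
make localmodconfig
```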

The poster above made good points with regard to nonsensical feature dependencies in make menuconfig.

You do realize that there are lots of "switches" that turn on simply by virtue of the option CONFIG_X86=y, right? The DS booting (an older 2.6 uClinux kernel) with 4MB and an ARM chip is irrelevant. I am aware that Linux can boot on MIPS and ARM routers with 8MB of RAM, but the relevance is nil when compared to x86. In fact, I dare you: compile an x86 kernel with almost nothing in it but console drivers and whatnot (I've built gzip-compressed kernels at ~800K compressed), make a minimalist BusyBox+uClibc initramfs, fire up QEMU/KVM with the "-m 16" option, and boot your kernel. It won't work. Someone here suggested changing an advanced setting I haven't tried yet, so that might make a difference, but it doesn't change the fact that ARM and x86 are two different worlds, and a lot is forced on in x86 that is optional on ARM.
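For anyone who wants to reproduce the experiment, a typical invocation looks like this (the kernel and initramfs paths are whatever you just built):

```shell
# Boot a freshly built minimal kernel with 16 MB of RAM and a serial console
qemu-system-i386 -m 16 -nographic \
    -kernel arch/x86/boot/bzImage \
    -initrd initramfs.cpio.gz \
    -append "console=ttyS0"
```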

Also, perhaps you should consider being less of an asshole when you fire off a knee-jerk response like this one. You are capable of questioning information without being condescending.

Linus's stated reason for not wanting the numbers to go too high seems to be based on a feeling or a personal dislike of high numbers.

Two questions.

1. What happens when there are major changes in the Linux kernel? How are they now represented in selection of version number?

2. What happens when the major digit begins to resemble Firefox's / Chrome's out-of-control version madness? How many years before Linux 19.4?

It used to be that version numbers actually meant something and conveyed some useful hint of the scope or amount of change between versions.

I'm not sure dumping this concept for the sake of political games and/or OCD pedantry is worth the opportunity cost to the user, when contrasted with a structured, predictable scheme based on commonly agreed and understood guidelines.

2. What happens when the major digit begins to resemble Firefox's / Chrome's out-of-control version madness? How many years before Linux 19.4?

3.0 was released on 21 Jul 2011. Given the expected timeframe for 4.0 (if he decides to go through with this proposal, of course), that's roughly 3.25 years per major version. So the answer to your question would be sometime around 2063.
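Working the parent's numbers through: 3.0 shipped in mid-2011, and 4.0 is expected roughly 3.25 years later, so a straight-line extrapolation puts major version 19 in the early 2060s. A quick back-of-the-envelope sketch (all figures rough, dates treated as fractional years):

```python
# Linear extrapolation of Linux major version numbers, using the pace
# implied in the thread: 3.0 in July 2011 (~2011.5), and each major
# version taking about 3.25 years. Purely illustrative arithmetic.
def projected_year(target_major, base_major=3, base_year=2011.5,
                   years_per_major=3.25):
    """Return the projected calendar year for a given major version."""
    return base_year + (target_major - base_major) * years_per_major

print(int(projected_year(4)))   # ~2014, matching the "in about one year" plan
print(int(projected_year(19)))  # ~2063
```

Of course, this assumes the one-major-per-3.25-years pace holds forever, which the 2.x era (eight years on one major number) suggests it won't.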

It used to be that version numbers actually meant something and conveyed some useful hint of the scope or amount of change between versions.

With this proposal, it does mean something: a 4.0 release is the result of focused testing and bugfixing of the changes and features added over the 3.x series. If the model works, then 5.0 would probably be the culmination of the work put into the 4.x series. Sure, the meaning is different from the one most projects use, but that doesn't make it worse.

2. What happens when the major digit begins to resemble Firefox's and Chrome's out-of-control version madness? How many years before Linux 19.4?

3.0 was released on 21 Jul 2011. Given the expected timeframe for 4.0 (if he decides to go through with this proposal, of course), that's roughly 3.25 years per major version. So the answer to your question would be sometime around 2063.

I was going to post exactly the same thing. You would think that after 20 years we could go from 3 to 4 without someone whining about it.

There is such a great contrast between the slow, steady, improvement-laden releases of Linux and the article that precedes this one, on Windows 8.1, which can't even get its mouse to work. You'd think Microsoft was trying to push out Windows 95!

Overall, it speaks to a simple fact: if the agenda is to improve things rather than to make money, the improvements are what make money in the long run.

Haven't seen the mouse problem in 8.1 yet. Apparently it occurs in some games, but I've not seen it in the 80 I have installed. But I'm willing to bet that the number of games the problem does NOT occur in is equal to or greater than the entire Linux game library. And yeah, I run a bunch of stuff too: OS X, Win7, Win8.1, Server 2003, 2008 R2, FreeBSD, and Linux.

Now, it's not that I bump up against many bugs, but this is a very smart move. So many times you see feature upon feature added, maybe a crash here and there, blah blah. But sometimes you just have to stop, take a deep breath, and fix what is there rather than pile on new stuff. A brave decision, but essential for an OS, which must be rock solid above all else.

It opens up the possibility of supporting the kernel for significantly longer periods; essentially, it could act as an LTS kernel for distributions. Linux is not that stable at this time, and the experience is still very much hit or miss across systems. Whilst things are certainly better than they used to be, there are still many cases where I come across systems which should work, but don't (i.e., they might stutter a lot, sometimes occasionally ker

I think for this to work he has to say something like "We won't move on and merge new features until X bugs have been fixed." In other words, if you want the merge window to reopen for features, fix some bugs. X has to be high enough that a good many developers have to work at it. Kinda like making sure you hit your target heart rate before getting off the treadmill.

I'm still pissed that Linus moved away from the traditional development model: even-numbered x.Y releases were the stable branch, and odd-numbered ones were testing/development.

Linux moved away from that model because of the problems that it caused. There were very long (compared to today) cycles where the current "stable" kernel series was basically in maintenance, and the development kernel was diverging further and further from the stable kernel. So if you wanted to use a kernel with new features, you were stuck using the development branch -- and if you waited until there was a new stable series, then there was a big jump from the kernel you were on up to the new one.

Once Linus decided to change the development model, there was no point in keeping the old format for the version number. The version numbers should be determined based on the development policy, not the other way around.

Well, when I was naive, I got pissed off a lot too. By the time I had about ten years of code under my belt, a major version number in my codebases indicated a complete rewrite / major design overhaul with API breakage as far as the eye could see. That same reasoning was what Linus was going by when he said there'd never be a 3.x.x release -- v3 would mean he'd gone insane and rewritten the whole thing in a message-passing version of VB; I'm paraphrasing. [wikiquote.org]

What's interesting is that I follow the Unix Way(tm): "Do one thing and do it well." So my "Applications" are actually just that: applications of multiple smaller modules, each with its own name/codename and version number. The editor application "Sledge 0.4.x" is a UI layer stack provided by Core v3.0.x, leveraging Sterling v1.6.x for rendering, Vaporworks v1.13.x for a scripting VM, CFG9000 v5.2.x for INI/.conf persistence, etc. Git submodules make it simple to build other programs that target disparate points in the independent module versions. E.g., a server providing an HTTP interface to other game engines/servers via remote console utilizes Core, Vaporworks, and CFG9k. My code editor, audio assemblers, etc. use a different group of modules, but the same common codebase. So the major version of an application may not change even if I use a different subsystem or rewrite a module (e.g., to get my rendering engine using Wayland natively); major module version changes translate to minor application version changes.

Each of the modules is like a library with its own test suite, but each also provides a small set of associated (terminal) tools. E.g., my "Core" library provides a platform abstraction layer and a virtual file/network system where local / remote / archived paths can be mounted and mapped onto the installed system, allowing me to "cd", "cp", and "mv" across network and OS barriers; Vaporworks provides a scripting environment, but also a compiler / bytecode translator and debugger / profiler tools. For these individual modules and their smaller tools, the "major version change = rewrite" method makes sense.

However, with larger applications (say, a distributed, versioned 3D game development environment), or a browser, or a monolithic kernel, full or majority code rewrites aren't occurring. So after having created some sprawling and immense applications, I came around to the idea that it doesn't make sense to require the same level of change for a major version number in the application as in a module -- why even have a major version number if it never changes? The game dev studio always has the same interface: it must always interface at the human / machine level. E.g., there are a few ways to create a multi-threaded event pump, but the API for them all will be the same. There are different ways to handle pointer input (especially on Win32 vs X11 vs Wayland, to reduce input latency), but the pointer API is not going to change (it did have to change years ago to support multiple pointers / multi-touch, and that was a major version bump in Core.UI and in the apps that use it). It's not like I scrapped pointers for eye tracking, context awareness, and vocalizations or gestures... yet, but that was a substantial addition to the system.
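The rule described above can be sketched in a few lines. Note this is purely a hypothetical illustration of the poster's personal convention (the module and app names are theirs, the function is mine), not any standard versioning scheme:

```python
# Sketch of the rule: a MAJOR version bump in a dependency module
# (e.g. "Core" going 2.x -> 3.x, a rewrite) surfaces as only a MINOR
# bump in the application built on it; smaller module changes surface
# as a patch bump. Hypothetical helper, not a standard scheme.
def bump_app_version(app_version, module_bump):
    """app_version is (major, minor, patch); module_bump is 'major',
    'minor', or 'patch', describing what changed in a dependency."""
    major, minor, patch = app_version
    if module_bump == "major":       # module rewrite -> app minor bump
        return (major, minor + 1, 0)
    return (major, minor, patch + 1) # lesser module change -> patch bump

# "Sledge 0.4.2" after a major overhaul of Core becomes 0.5.0:
print(bump_app_version((0, 4, 2), "major"))  # (0, 5, 0)
```

The point being that the application's own major number is reserved for changes at the human/machine interface, which almost never happen.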

The Linux kernel is in the same boat. It's to the point now that it has to provide largely the same interface to its users, i.e. programs; ergo, ABI stability. There's not going to be a full rewrite, because that would be death -- it wouldn't be "Linux" anymore. Nothing that depends on it would be able to function, and all the dependent applications / modules / systems -- a huge chunk of the ecosystem -- would have to be rewritten, given the level of change that warrants a rewrite. Especially if we actually want to improve on operating systems -- say, eschew POSIX in favor of an agent-oriented operating environment with byte-code program modules linkable into machine code at install time, or runnable via VM if untrusted (sandboxing that actually works).

The Linux Colonel stayed in the 2.x numbers for many years. I even remember a post by Linus Torvalds on the mailing list saying that there would never, ever be a version 3.0. At the time I thought that was pretty weird. I mean, things are going to get a little strange when you get to version 2.99.99.99.99.99.

So, obviously, he changed his mind and not only went to 3.0 but apparently is bored with 3.x and wants to jump from 3.19 directly to 4.0.

Maybe he's jealous of Firefox and Chrome and is trying to catch up to them.

Onto a totally different topic: we're getting to release numbers where
I have to take off my socks to count that high again. I'm ok with
3.&lt;low teens&gt;, but I don't want us to get to the kinds of crazy
numbers we had in the 2.x series, so at some point we're going to cut
over from 3.x to 4.x, just to keep the numbers small and easy to
remember. We're not there yet, but I would actually prefer to not go
into the twenties, so I can see it happening in a year or so, and
we'll have 4.0 follow 3.19 or something like that.

These days, kids will associate every increase of the first number before the dot with Chrome and Firefox.

Quite honestly, their versioning schemes wouldn't even be all that bad for Linux; the "3" or "4" is a totally meaningless number anyway. At the same time, it provides some buffer zone for people who expect X.Y schemes to represent significant new versions whenever Y is increased.