Posted
by
Unknown Lamer
on Tuesday August 16, 2011 @05:04PM
from the linux-gets-a-day-job dept.

jfruhlinger writes "'-stable' is the term for the current Linux release most suitable for general use; but as Linux moves into more and more niches, there's a need for a kernel more stable than -stable, which is updated fairly regularly. Both enterprise and embedded systems in particular need a longer horizon of kernel stability, which prompted Greg Kroah-Hartman, then at SuSE, to establish a -longterm kernel, which will remain stable for up to two years. Now there are moves to get this schedule formalized — moves that are a good sign of Linux's long-term health."

Indeed. When was the last time you did a kernel update on your washer or car? Yet, the manufacturer must be able to do so if a serious flaw is discovered down the road.

2 years is laughable. In the embedded world, 10 years would be more like it.

Also consider the need for long-term stable kernels outside the embedded world. Red Hat Enterprise Linux is supported for 7-10 years with a support agreement. RHEL5 is still at 2.6.18, and will stay so for years. Maintaining compatibility is paramount to many businesses.

Long time ago: it is possible to save the settings to a file, at least on mine (which is a Linksys WAP54G, so technically an access point; my router is a net5501-70 running OpenBSD). The file sits on the file server (which is backed up nightly). I don't think my wireless router's configuration has changed significantly since that backup.

Not that that backup will help much; if the WAP fails, it will most likely be a hardware failure. I've had it since May 2005 and unless it stops working, I have no r

Wow, impressive that you dared to talk about system updates with us...does Windows Update now also update the whole system including installed client applications? Does it maintain a complete repository of software, easy to install with just a handful of clicks? Does it cleanly remove installed software? Does it automatically pull in needed dependencies? No? Such a pity...come back when you've learned the difference between a "Package Management System" and "Windows Updates".

Wow, impressive that you dared to talk about system updates with us...does Windows Update now also update the whole system including installed client applications?

No Linux distro does this - unless you limit yourself to applications provided by that distro. Who does that? If you limit yourself only to stuff that comes with Windows, then yes, it would update everything.

Does it maintain a complete repository of software, easy to install with just a handful of clicks?

Yes.

Does it cleanly remove installed software?

Yes.

Does it automatically pull in needed dependencies?

Yes.

No? Such a pity...come back when you've learned the difference between a "Package Management System" and "Windows Updates".
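The dependency question is the one with real machinery behind it. A toy sketch of what a package manager does when it "automatically pulls in needed dependencies" - the package names and dependency graph here are made up for illustration, not any distro's real metadata:

```python
# Toy dependency resolver: given a package -> dependencies map,
# return an install order where every dependency precedes its dependent
# (a depth-first topological sort, ignoring version constraints).

def install_order(pkg, deps, seen=None, order=None):
    if seen is None:
        seen, order = set(), []
    if pkg in seen:
        return order
    seen.add(pkg)
    for dep in deps.get(pkg, []):
        install_order(dep, deps, seen, order)
    order.append(pkg)          # only after all dependencies are placed
    return order

# Hypothetical dependency graph:
deps = {
    "gimp": ["gtk", "libpng"],
    "gtk": ["glib", "libpng"],
    "glib": [],
    "libpng": ["zlib"],
    "zlib": [],
}

print(install_order("gimp", deps))
```

Real package managers layer version constraints, conflicts, and provides/virtual packages on top of this, which is why resolution can fail or pick surprising candidates - but the core "install the leaves first" idea is the same.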

Look, I realize you just discovered Linux last week and now you want to trumpet its virtues loudly and feel superior - that's a pretty common reaction to people who've discovered new technologies. "Look at me, I use *LINUX* and I am superior!" Yeah, yeah - but when you get out of

No Linux distro does this - unless you limit yourself to applications provided by that distro. Who does that? If you limit yourself only to stuff that comes with Windows, then yes, it would update everything.

Limit? Okay, we'll assume that you're a video/music/image editor, software developer, or engineer, and therefore need software which is not available via the repository. But for everyone else (especially the so-called 'average' user), 'limit' is the wrong term; they'll barely scratch the surface of the available applications. And when something is missing, additional repositories can easily be added to the system.

Does it maintain a complete repository of software, easy to install with just a handful of clicks?

Yes.

Okay, I've got, let me check, 2,500 applications ready to install. How much are you offering?

It's about your wifi routers, ADSL modems, cable modems, electric toasters, and everything else that has Linux embedded these days, many millions of which are attached directly to the net, serving as your first line of defense.

Not one in a hundred wifi routers gets updated over its life span.

I have servers running ancient Linux (embarrassed to say just HOW old). They do specific tasks and have no user accounts, and they reside on the local net, but still, any disgruntled employee could own them if they tried. There is no patch source for these old installations, and trying to backport security patches is simply a non-starter.

Two years is not enough. 5 years is marginal. Even then, I want nothing but security patches. If I need the next version of something I'll upgrade, but for embedded devices or single purpose servers, all I need is security fixes.

Why not use Caldera, or Corel Linux, or something as old from companies that are either no longer around, or who no longer do Linux? If it's a Debian based distro, patch it up w/ the latest from Debian, but otherwise, it seems to meet your requirements - much more than 5 years. Chances are that even Linux crackers won't be interested in those, and you can build it into anything - your garage, your car, your home security system, anything!

Because you can't put Caldera or Corel Linux on a system with 16MB NOR and 32MB RAM.
We are talking about OpenWrt and similar distros.
Chances are that crackers will go first for your router, where most users have their DNS/DHCP and the termination point for all their network devices.

Not one in a hundred wifi routers gets updated over its life span because its owners are ignorant about their own security. If your router had some kind of auto-update (a lot of providers deliver devices with this functionality), then the difference between -stable and -longterm could be significant (provided, of course, that the router manufacturer implements the newest security patches in firmware updates).

That entirely depends on how much you drive, no? Mine is scheduled for maintenance every 15,000 km or every year. Given I drive 20,000 km/year, maintenance is about every nine months. My car is old (11.5 years) and has a gas engine. I know that many newer diesel engines only require maintenance every 30,000 km or every two years.
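The arithmetic above, spelled out (the figures are this poster's, not general):

```python
# Maintenance falls due at whichever comes first: 15,000 km or 12 months.
# At 20,000 km/year, the distance limit is hit before the time limit.
km_per_interval = 15_000
km_per_year = 20_000

months = km_per_interval / km_per_year * 12
print(months)  # 9.0 - the "about every nine months" above
```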

Well, of course you could still define "few months" as "less than twelve months".

You don't frequently install or update anything in your car either (I cannot think of any other way in which it could break every few months). I am sick of the comparison between the car and desktop-computer industries.

Yeah, but according to Greg KH, distributions are not the only users of the Linux kernel who need long-term support. Consumer devices and other embedded systems that just need the kernel without the "GNU" part could do with support too.

This does not help RH's most direct competitor, SUSE. To me this looks like it could be a SUSE ploy to get a stable kernel to compete with RH while not having to do all the work. Debian would like this as well. Or, being less cynical, they are making something that other distros can use, like OBS and zypper ended up being used by MeeGo.

Why not? All of the Red Hat kernel source is available. They stopped doing the 1000s of patches in the SRPM thing, but it's still source. (Oh the horror of that old patch structure... maybe it was handy if you were trying to undo Red Hat's changes, or just "lift" their changes into a different kernel version. But if all you wanted was to work around the bug in your stupid TSSTCorp DVD burner....)

Or Red Hat's competitors could put the same effort into maintaining their kernel snapshot... but at some poin

Hello, constant updates are not a sign of stability! The problem is there isn't much need for commercial support for something that doesn't break all the time.

I have used RedHat in a server farm of over 1000 systems, and I have used FreeBSD in server systems that were a little smaller.

BSD generally runs behind in code version on the application side, but it is more stable and not constantly pushing the bleeding edge. It's used inside routers and big server farms, and so tends to be better on the network side.

With Red Hat we had so many problems with the BNX/BNX2 10Gb Ethernet drivers; it was a nightmare scenario with over $500,000 in blade servers constantly crashing. There were the HP vendor drivers, the RH drivers, and the Linux mainline drivers, which we ended up building and using till RH caught up.

FreeBSD is hardly dead. Some of the fastest network drivers exist in FreeBSD. At this point the BSDs are almost a flavor of Linux. There is a Linux compatibility layer as well.

I have written drivers for both BSD and Linux. BSD drivers are generally much cleaner and more straightforward, and it's because of them that many HW vendors bring up a BSD driver first, even if they choose never to share it.

With Red Hat we had so many problems with the BNX/BNX2 10Gb Ethernet drivers; it was a nightmare scenario with over $500,000 in blade servers constantly crashing. There were the HP vendor drivers, the RH drivers, and the Linux mainline drivers, which we ended up building and using till RH caught up.

Next time, build a test lab so that your QA group within IT can validate the software you're about to deploy.

Lack of preparation on your part doesn't constitute an emergency on anybody else's.

Next time, please sign your comment with your company so I can validate that I do not have any working relationship or consume your company's products.

Yeah, you say that till you have driver issues on your brand spanking new Dell running Windows Server. HockeyPuck has a point that you should lab-test the performance of the box before deploying a bunch of blades. The kind of money you have to put down for that! It's always wise to make sure they're going to work.

But who hasn't been there: you trust the vendors just a little too much, purchase a bunch of new gear, and the OS and the hardware simply don't mesh.

Both enterprise and embedded systems in particular need a longer horizon of kernel stability, which prompted Greg Kroah-Hartman, then at SuSE, to establish a -longterm kernel, which will remain stable for up to two years.

Have you ever taken a Kroah-Hartman test? It's a test designed to provoke an emotional response.

Hartman: You're in a repository, compiling a kernel, when all of a sudden you look down.
Dotzler: What version?
Hartman: What?
Dotzler: What version?
Hartman: It doesn't make any difference what version - it's completely hypothetical.
Dotzler: That's what I've been trying to convince the world all week!
Hartman: Maybe you're fed up. Maybe you want to be by yourself. Who knows? You look down at the screen and see the codebase in TortoiseGIT. It's crawling toward release.
Dotzler: TortoiseGIT? What's that?
Hartman: You know what TortoiseSVN was?
Dotzler: Of course!
Hartman: Same thing.
Dotzler: I've never seen a stable UI. But I understand what you mean.
Hartman: You merge some code down, change the UI, and increment the release number just for the hell of it, Asa.
Dotzler: Do you make up these questions, Mr. Hartman? Or do Slashdotters just write cheap pop culture parodies instead of working?
Hartman: The project lays on its back, its belly baking in the white-hot flames of a thousand angry users, beating its legs trying to make itself stable but it can't. Not without your help. But you're not helping.
Dotzler: What do you mean I'm not helping?
Hartman: I mean you're not helping! Why is that, Asa? (pause) They're just questions, Asa. In answer to your query, it was either this or a filk based on a Rob Zombie song. It's a test, designed to provoke an emotional response. Shall we continue?
Dotzler: Nothing is worse than having an itch you can never scratch!
Hartman: Describe in single words only the good things that come into your mind about your mother.
Dotzler: My mother?
Hartman: Yeah.
Dotzler: Let me tell you about my mother... *BLAM BLAM BLAM*

Do you like our kernel?
It's unstable?
Of course it is.
Must crash a lot.
Often. It seems you feel our work is not a benefit to the public.
Kernels are like any other software - they're either a benefit or a hazard. If they're a benefit, it's not my problem.

Since the -longterm is going to have to be based off of a -stable release and be maintained off that branch, we end right back where we were, with four version numbers, each level denoting the number of rounds of fixes applied to the number to the left. Only there's now going to be increased stagger, since stable will lag behind the release and longterm will lag behind stable. (They have to.)

If we're going to have lots of version numbers, then going back to the odd/even minor digit makes more sense than to do rapid increments. Yes, this pushes us out to five digits, which is borderline insane, but it is then five digits that carry specific pieces of discrete information rather than four digits where two don't necessarily convey a whole lot.
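The odd/even convention referred to above: under the old Linux numbering, an even second digit marked a stable series (2.4.x, 2.6.x) and an odd one a development series (2.3.x, 2.5.x). A minimal sketch of reading that bit of information back out of a version string:

```python
# Classify a kernel version string under the pre-3.0 odd/even scheme:
# even minor number -> stable series, odd minor number -> development series.

def series(version):
    parts = [int(p) for p in version.split(".")]
    minor = parts[1]
    return "stable" if minor % 2 == 0 else "development"

print(series("2.6.32"))  # stable
print(series("2.5.75"))  # development
```

That parity bit is exactly the kind of "discrete information per digit" the comment is arguing for; the post-3.0 scheme dropped it in favor of plain sequential numbering.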

"this guy" is Greg K-H, second-in-command to Linus and the maintainer of the -stable tree. His arguments were one of the main reasons Linus changed the 3.0 numbering. Greg is just proposing that he maintains another tree officially, not a "fork".As for version numbering, I think there will be 3 numbers - first two for mainline releases, and one more for stable/longterm patch level. I don't think -longterm will be needing an extra number.

Yes, a static baseline is great for certification programs such as EAL [commoncriteriaportal.org] and FAA approval [lynuxworks.com], but it's not the only sort of "stable" that you want. Data centres want a "carrier-grade" OS (which means five nines reliability). They don't necessarily care if they have to patch, since you can now hot-patch the kernel without taking it down, but they absolutely do not want the software to show any unreliability whatsoever. They'd likely get upset at having to patch more than once a year, since in-situ patching isn't always safe, but if you're limited to a few minutes downtime a year on a server as an absolute maximum (this is ignoring failover, etc, that's a whole different issue than a specific physical or virtual server instance being five nines) then I could see it being tolerated a whole lot more than a blind kernel upgrade at year's end.

(This assumes that the hot upgrades can be made fault-tolerant enough that a brown-paper-bag release - you know they're going to happen on any tree eventually - can be backed out without violating five nines.)
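"Five nines" is a concrete budget, and the "few minutes downtime a year" above works out like this:

```python
# 99.999% availability leaves 0.001% of the year as allowed downtime.
availability = 0.99999
minutes_per_year = 365.25 * 24 * 60      # ~525,960 minutes in a year

downtime = (1 - availability) * minutes_per_year
print(round(downtime, 2), "minutes/year")  # ~5.26 minutes/year
```

So a single botched in-situ patch that forces even one reboot cycle can burn most of the year's budget, which is why the comment treats "can be backed out without violating five nines" as the hard requirement.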

Who is Mozilla targeting? If they are not going after the enterprise are they going after the basement hobbyist? Or the firefox developer? Surely grandma would like to provide an easy answer to the request "Grandma, click on Help then About Firefox, and tell me the number next to Version..."

By definition a stable system has to be running older code that's been fixed and is well understood, rather than "the latest" updated code.

If you're constantly churning and updating, you cannot be stable.

Red Hat runs behind the mainline Linux kernel to get added stability.

But FreeBSD, which seems old and stodgy, is like that because of the emphasis on stability over features and improvement. It's also simpler under the hood, which is also important for stability.

But it all depends on what you're trying to do: GUI vs. server. For a server I'd go with BSD. For GUI I'd go with Windows, Apple OS X (a BSD variant), maybe Android (haven't developed on it yet); X Windows just sucks. For embedded, I'd go with whatever the eval boards ship with - usually Linux these days. (Certainly not pSOS or QNX.)

At this point I can compile the same code on all of these using GCC and run them equally well. They are all POSIX compliant. SDL runs on all of them. Java also runs on them. So do Flash, LLVM, Tcl, Perl, Ruby, Python, or whatever language du jour.

Let's end the religious wars over OSes; it's about getting your work done. The OS is just a platform for the language you want your code to run on.

VxWorks or Microware OS/9 still kick Linux's butt in the RTOS world for reliability and strength/stability of codebase. Just sayin'... if you're building missile systems, you're probably reaching for one of those.

VxWorks and Microware, yuck. I am a video specialist and love doing real-time control stuff and embedded systems. I have yet to understand what they are talking about with an RTOS. I can do microsecond-accurate timing now in vanilla BSD or Linux. Yes, 1/1,000,000 second timing, verifiable on an oscilloscope, from user space or in drivers.
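Measuring what your platform's monotonic clock can actually resolve is easy enough. A sketch (from Python, so this demonstrates clock granularity only - interpreter overhead means it says nothing about the hard-deadline scheduling guarantees an RTOS advertises):

```python
import time

# Sample the monotonic clock back-to-back and record the smallest
# non-zero step seen: a rough lower bound on usable timer resolution.
def min_tick_ns(samples=100_000):
    best = None
    prev = time.monotonic_ns()
    for _ in range(samples):
        now = time.monotonic_ns()
        if now != prev and (best is None or now - prev < best):
            best = now - prev
        prev = now
    return best

print(min_tick_ns(), "ns")  # smallest observed clock step, in nanoseconds
```

The oscilloscope test in the comment is the honest version of this: the clock reading sub-microsecond steps is necessary but not sufficient; what an RTOS adds is a bound on how *late* your code can be, not how finely it can read the time.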

Overall I think the open source is the important part of stability. The more eyeballs looking at code, the more solid it will be. This is why new code should be treated with some s

I'm the dude who envisioned, designed, developed and shipped both autoexec.bat and Clippy, so clearly I know my shit. What the hell have YOU ever done? You know what, I don't have time for your jibber jabber, I have to finish the bootware for the Kin 3. It's gonna be the iPhone killer.

Debian security support lasts more than 2 years. So if you say "more than 2 years", I'd say that's what we already get with any Debian release. So I hope the plan is to support it for longer, otherwise it's YASM (Yet Another Suse Marketing...). All signs are that 2.6.32 will be maintained for a long, long, very long, extremely long time, since so many distros are using it.

But this is already what's happening! Almost all distros are currently using kernel 2.6.32, and any security patch that a given distro makes can be shared with the others. So I don't really see the point here.

Linux could have dominated if there were some sort of stable API for third-party developers. Developing for the Linux platform quickly becomes an experience in insanity once you start doing compatibility testing and the test matrix just explodes.

I'd say, if it is too hard to keep the API stable across all versions of Linux, maybe we should at least have a stable API across all minor versions of, say, 2.6.x?

I know all the arguments for moving faster, for keeping a cleaner code base, etc. But hell, what good is a shiny kernel if the apps can't keep up?

...it's the external drivers - as you point out in your post, it was a kernel module that caused you massive grief. Most applications don't depend on out-of-tree kernel drivers and thus are insulated from kernel changes (this is why you can often get away with running a hand-rolled modern kernel on "old" distributions).

However, the moment you touch out-of-tree drivers, your experience is going to be ongoing pain, and it generally doesn't matter which out-of-tree driver you are using. If you are e

If the target for a long-term stable kernel is embedded systems, then I would suggest having some sort of arrangement with the real-time kernel patches [kernel.org] which typically don't release with every kernel.

If, for example, 2.6.39 were chosen as a -longterm, it would be unattractive to many embedded developers without the option of the -rt patches.

This is pretty much one of the two major services Red Hat offers (the other is support). They backport all security fixes (and most bug fixes as well) while leaving everything else as stable as possible. That way you can continue to run your machines without worrying about some new whiz-bang update breaking everything.

Kroah-Hartman says "Consumer devices have a 1-2 year lifespan" - this is a sign of our times. Just make junk that lasts a couple of years at best, and then chuck it. It would be far better to create devices that last twenty years and can be updated and repaired. This is why I like 'dumb phones' - cellphones that are less likely to be pwn3d, last longer, are cheaper, tougher, and easier to use. Ah, I am going to miss you, Nokia, and Motorola, and Siemens, and...

We are being driven by software, which then drives the hardware, which then drives more software. Sanity would seem to say that a computer which works should continue to work, and continue to be supported. No, it's not a good business model if you're a gigantic company trying to steal all the money you can until someone else puts you out of business.
Why aren't there any companies picking up old software and hardware service? The best I can see is Wary 5 Puppy, which goes back to the last kernel which cou

Solaris got the 'slow-aris' nickname because lots of things in the kernel were designed for scalability at the expense of speed. Run it on a 1-2 processor machine, and that's overhead. Run it on a 64-processor machine, and you really appreciate it. The Solaris kernel was split into lots of largely independent threads back when Linux was still using a single lock for everything.