
climenole writes "Phoronix recently published an article regarding a ~200 line Linux Kernel patch that improves responsiveness under system strain. Well, Lennart Poettering, a Red Hat developer, replied to Linus Torvalds on a mailing list with an alternative to this patch that does the same thing yet all you have to do is run 2 commands and paste 4 lines in your ~/.bashrc file."
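The summary doesn't show the actual lines. Reconstructed from the LKML discussion, the userspace hack looks roughly like this; the cgroup mount point and group layout below are assumptions about a cgroup-v1 era system, so treat it as a sketch rather than the canonical recipe:

```shell
# Sketch of the hack: "2 commands" run once as root to expose the cpu
# controller and create a world-writable parent group (paths assumed):
#   mount -t cgroup -o cpu cgroup /sys/fs/cgroup/cpu
#   mkdir -m 0777 /sys/fs/cgroup/cpu/user
#
# ...and the "4 lines" pasted into ~/.bashrc: every interactive shell puts
# itself (and hence everything launched from it) into its own scheduling group.
if [ "$PS1" ] ; then
    mkdir -m 0700 /sys/fs/cgroup/cpu/user/$$
    echo $$ > /sys/fs/cgroup/cpu/user/$$/tasks
fi
```

Since `$$` is the shell's own PID, each new terminal gets a distinct group, mirroring what the kernel patch does per TTY.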

I've done some tests and the result is that Lennart's approach seems to work best. It also _feels_ better interactively compared to the vanilla kernel and in-kernel cgroups on my machine. Also it's really nice to have an interface to actually see what is going on. With the kernel patch you're totally in the dark about what is going on right now.

1) There isn't a difference between the kernel patch and the command line hack. They are equivalent. The command line bit was known beforehand because that was the method used to figure out if this kernel hack would be a good idea. The kernel hack just makes the process transparent.

Linus says: Right. And that's basically how this "patch" was actually tested originally - by doing this by hand, without actually having a patch in hand. I told people: this seems to work really well.

2) Linus recommends the kernel patch:

Linus also says: Put another way: if we find a better way to do something, we should _not_ say "well, if users want it, they can do this *technical thing here*". If it really is a better way to do something, we should just do it. Requiring user setup is _not_ a feature.

It seems like a kernel command line option would be a great solution -- it would "just work" for the normal user, and the user with specific needs / servers / whatever could just append the appropriate few characters to the bootloader config.

There is no single magic bullet where you do XYZ and desktop usage becomes super awesome responsive. That is because there are different situations and conditions that can affect performance.

This specific patch is to handle the case where running background tasks (updating, backup, searching the filesystem to index files and other things the computer can do) that eat up CPU causes the system to become unresponsive (especially on lower spec machines that don't have enough CPUs to handle moderately complex workloads). The reason the "make -j64" was used was not to say that this is great for developers or people building stuff in the background while watching video (which it will be), but to simulate the system under stress.

The difference is the kernel patch is 200 lines of C code, which compiles to several kilobytes of machine code. The shell code needs to spawn a bash process upon startup of every other process, that's several megabytes of RAM and interpreting contents of text scripts that perform the operations.

The final effect may be the same but the overhead of performing the operation is much smaller with the kernel patch.

No, incorrect. This is a modification to your .bashrc, which is (already) run every time you start a bash process, within that process (i.e., not a new process). Nothing needs to be spawned on every single process.

Admittedly the bash script does spawn some processes, but a) that's the way .bashrc works, and you have dozens of those in there, and b) it's only one process, a mkdir. The echo and the conditional run within bash itself.

The way that the configuration works, whether done in the kernel or in your .bashrc, is to associate all processes spawned from a single bash shell with a single new scheduling group. This gets you better performance when you're running processes from terminals, by associating logically-similar groups of processes in the kernel instead of letting it see all the processes as a giant pile.

The intended use case, which is pretty clear from the LKML discussion, is to make performance between something intensive (like a compilation) in a terminal and something non-terminal-associated (like watching a movie) better-balanced.

One never sets PS1 for non-interactive shells, and it's the primary way the shell tells the user's startup scripts whether they're interactive. There's a good chance the PS1 method spares a system call, too :-) It's also what the documentation says to do.

Your [ -t 0 ] approach also fails in cases where an interactive shell is being run on a non-tty. Although almost any shell since about 1990 tends to complain in such cases, at least the PS1 method will still run the right .profile code, and the -t method will not.
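A minimal illustration of the two tests being compared; the function names are mine, for illustration only:

```shell
# Two ways a .bashrc/.profile might decide whether the shell is interactive.
# PS1 is set by the shell itself only when it is interactive, so testing it
# tracks interactivity; [ -t 0 ] instead asks whether stdin is a terminal,
# which can disagree (e.g. an interactive shell run on a non-tty).

is_interactive_ps1() {
    [ -n "$PS1" ]     # succeeds only if the shell set a prompt
}

is_interactive_tty() {
    [ -t 0 ]          # succeeds only if file descriptor 0 is a terminal
}
```

A startup script would typically guard the cgroup lines with the PS1 form, exactly as the thread describes.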

I've read your comment several times and each time I hear the voice of comic book guy from the simpsons in my head. I'm not trying to be rude so I'm hoping you're not offended by this. I think it has to do with the "no, incorrect" part that your comment starts with.

Users won't need to recompile or reconfigure anything -- they'll get the updated system installed for them by the distro packagers in upcoming versions. You only need to do anything if you want to enable this *right now by yourself*, and there are indeed a few different ways to do it.

The differences between the change to the kernel and the shell script are basically two: one, they apparently have slightly different algorithms for choosing how to group the processes. That's not due to it being in-kernel vs out-of-kernel though -- that's just because they are slightly different. Both can be implemented in both ways, and both work with the same actual implementation mechanism -- simply one works from userspace through the interfaces and one's built-in to the kernel.

Auto-tuning behavior that's built in will probably be the most reliable, easiest, and best-performing way to do this, rather than requiring every Linux distribution to ensure that they're running the same extra scripts and keeping the userspace stuff in sync. Do it once and leave it built-in to the kernel.

However, I disagree with the conclusion that the patch should therefore be merged into the kernel. First, instead of pasting some lines into bashrc and running some commands, the user now has to recompile the kernel to benefit from the change. That's a lot less user friendly. Secondly, if one really wants to push user friendliness, one should convince distributions to update their init scripts to run those cgroup commands automatically. Since all the software users run goes through distros anyway, it should be the distros' job to ensure user friendliness.

Umm, no. It will be in the default kernel eventually, and that works out of the box. The idea that user friendliness is "pasting some lines to bashrc and running some commands" and that "user friendliness" should be left up to distros rather than the mainline for Linux is pretty much one of the reasons Linux has never really mattered on the desktop and why 95% of computer users prefer Windows or Macs.

One requires a kernel patch. One uses functionality already present in the kernel to do the same thing. Testing reveals the one that doesn't require a kernel patch is more responsive. You tell me which is best.

The stock .bashrc on most Linux systems includes /etc/bashrc. On the recent systems I've looked at, this then includes /etc/bash.d/* (or something similar). You can get the same effect as adding this to every single user's .bashrc by simply dropping a file with the magic lines into /etc/bash.d/. A package can do this, and you don't even need to reboot for it to take effect. Best of all, it's trivial to turn off for systems where it's not applicable.
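A sketch of that drop-in mechanism, with a temp directory standing in for /etc/bash.d since the real path needs root; the directory name and sourcing loop are illustrative, not verbatim from any distro:

```shell
# Simulate the drop-in directory: a package installs one file, and the stock
# /etc/bashrc-style loop picks it up for every shell without touching any
# user's .bashrc.
bashd=$(mktemp -d)    # stand-in for /etc/bash.d

cat > "$bashd/zz-cgroup.sh" <<'EOF'
# the "magic lines" would go here; this stand-in just records that it ran
CGROUP_SNIPPET_RAN=yes
EOF

# what the stock bashrc effectively does on such systems:
for f in "$bashd"/*.sh; do
    [ -r "$f" ] && . "$f"
done

echo "snippet ran: $CGROUP_SNIPPET_RAN"   # prints "snippet ran: yes"
rm -rf "$bashd"
```

Disabling it system-wide is then just deleting (or chmod -r'ing) one file, as the comment says.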

The kernel patch is the hackish way to do it. They're hard-coding policy settings into a kernel patch. Dumb. The kernel is there to provide the knobs, not to twiddle them for you.

Lennart's argument is that policy should not be hard-coded into the kernel. He's not saying "everyone should do this in a bash script". He's saying "leave policy settings to userspace mechanisms that can handle them better." Say, systemd for instance.

Users would be better served by Lennart's approach, I think.

Funny thing is, most desktop users will not see the benefits of the patch, since most of them never use the terminal to run cpu-hogging kernel builds. All desktop apps share the same cgroup.

That won't stop hordes of n00bs from claiming ZOMG MAI SYSTEM IS SO MUCH FA$TER NOW OMG!

But I was under the impression that Android devices already utilized a different scheduler. In addition, phones have different requirements - for example, some might require a real-time kernel in order to operate on a cell network. Long story short, messing with the scheduler could have serious repercussions on an Android device. Only those who really know what they are doing should attempt it. The rest of us should simply wait - unless you don't care if your phone is reliable or not.

It makes every process spawned by the user that passes through the bash shell add its process ID to a per-user task control group. See the documentation on control groups [mjmwired.net] for more information about exactly what that means, and what some of the commands involved aim to do. I'm not sure if this has exactly the same impact as the kernel-level patch, which aimed at per-TTY control groups. That might include some processes that don't pass through something that executes the .bashrc file along the way.

The kernel has a mechanism to schedule groups of processes, and it has for years. By grouping tasks together, you can make one process (video playing) get the same cpu share as a group of processes put together (compiling code). By doing this (instead of the video processing being equal to just one of the compiling processes), everything feels more interactive, even though it's actually slightly slower.

No one uses scheduling groups because they have to be set up by root and it's not the easiest thing in the world (you have to write stuff into sysfs, I think). No distributions set them up.
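For the curious, the manual setup being alluded to looks roughly like this; paths and values are assumptions about a cgroup-v1 era system, and every line needs root:

```shell
# Hand-rolled scheduling group, the part "no distributions set up":
mkdir /sys/fs/cgroup/cpu/mygroup                   # create a scheduling group
echo 1234 > /sys/fs/cgroup/cpu/mygroup/tasks       # move PID 1234 into it
echo 512 > /sys/fs/cgroup/cpu/mygroup/cpu.shares   # halve its weight (default 1024)
```

The kernel patch and the .bashrc hack both just automate creating such groups, one per TTY or per shell.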

The magic kernel patch just adds a simple rule to the scheduler. When a process starts, it goes into a group with the rest of the processes on that TTY (virtual terminal). This means the user doesn't have to do anything and the groups are set up automatically.

Poettering thinks this is somewhat hackish, and that things shouldn't be based on what TTY a process is started on. He made the little script to prove that this can easily be done in userspace.

Linus has rejected this, basically saying that we've had years for people to make something like this and no one did until the kernel patch came along. The patch is simple, reasonable, and doesn't require distributors to ship updated userland files to put processes in groups.

I should note that my understanding comes from LWN [lwn.net], which has had excellent coverage of this on their kernel page, as always. You'll be able to see their articles in two weeks if you're not a member (which is worth it if you like this kind of stuff).

Thanks for the nice words about LWN! Here's a special link to the LWN article [lwn.net] on per-tty group scheduling for Slashdot folks. Hopefully a few of you will like what you see and decide to subscribe.

In fact, they still make turbo buttons, though as on-screen controls instead of physical switches. There's a turbo button on my laptop's taskbar, called "CPU Frequency Scaling Monitor". I can turn it on, but then I get less battery life. Ordinarily, an OS is set to turn turbo on when the machine is plugged in and turn it off when running on battery power.

this is definitely one of those things that I add now, then forget about later, and it becomes a condition that may or may not work when I apply upgrades & patches in the future. Whether or not the .bashrc approach is faster, I think that going down the kernel route makes it more consistently usable.

True, though it could be done at the distro level, which appears to be the author's plan (the person who wrote this script works for Red Hat, and discussed elsewhere in the thread what Red Hat's plans are for rolling out systemd [freedesktop.org], which will handle this). Then things would be appropriately updated by the maintainers rather than relying on users to keep their .bashrc synced with infrastructure changes.

True, though it could be done at the distro level, which appears to be the author's plan (the person who wrote this script works for Red Hat, and discussed elsewhere in the thread what Red Hat's plans are for rolling out systemd [freedesktop.org], which will handle this).

Indeed. "Should we be punting this for userspace tools to handle?" isn't the same question as "should we punt it to the user?".

True, though it could be done at the distro level, which appears to be the author's plan (the person who wrote this script works for Red Hat, and discussed elsewhere in the thread what Red Hat's plans are for rolling out systemd [freedesktop.org], which will handle this). Then things would be appropriately updated by the maintainers rather than relying on users to keep their .bashrc synced with infrastructure changes.

I understand what you're saying and agree. The problem I have is with your userid. 597. Users with IDs as low as this are mythical. Kind of like unicorns or maybe even grues; they are creatures of the imagination. Users with sub-1000 user ids are DANGEROUS. They say stuff that most often makes sense and this can be mesmerising. They do this to lure us into the trees to have intercourse with sirens of the forest, I have heard. Your post is an incredible example of the delirium that can ensue when magical bei

My understanding of the original kernel patch is that it just puts stuff from different ttys into different groups for scheduling purposes so that they're less able to hog each other's resources. This alternative just makes your shell sort it out itself when it starts i.e. when you're running a new terminal. So this should basically be equivalent.

The comment here is very important to remember though: http://linux.slashdot.org/comments.pl?sid=1870628&cid=34241622 [slashdot.org]. Another comment on that article (which I can't now find - anybody know where it is?) basically said that the patch suits Linus's own use of compiling kernels whilst surfing the web. Sounds like a reasonably accurate assessment really, so for now it's far from the magical boost to general interactivity some may have hoped for. In some sense there's no such thing anyhow.

Nonetheless the comment linked above also has Linus talking about increasing the scope of the automatic grouping heuristics in the future so hopefully the "just works" nature of this should become available to more people eventually.

The original kernel patch (and this alternative) aren't magically making everything respond better; they just improve certain use cases.

except some people (*cough*unbuntufolks*cough*) don't like to use the terminal... so the kernel patch might be better, although wouldn't all gui apps have the same [p]tty?

Exactly. Gui apps (usually) don't have a controlling terminal, so would all end up in the same scheduling group, making the patch ineffective.

However, with user-space managed cgroups, the window manager (or whatever starts up the GUI apps) could do its own thing (the .bashrc hack doesn't work as-is either, because the window manager doesn't usually invoke apps via the shell).
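A hypothetical sketch of what "its own thing" could look like: a launcher wrapper that gives each app its own cpu cgroup. The CGROUP_ROOT variable, the apps/ subhierarchy, and the function name are all my own assumptions, not anything from the thread:

```shell
# Hypothetical launcher: put the current process into a fresh cgroup, then
# exec the app, so the app and all its children land in that group.
# CGROUP_ROOT can be overridden (handy for trying it out without root).
launch_in_cgroup() {
    local root="${CGROUP_ROOT:-/sys/fs/cgroup/cpu}"
    local name="$1"; shift
    mkdir -p "$root/apps/$name"
    echo $$ > "$root/apps/$name/tasks"   # move ourselves into the group...
    exec "$@"                            # ...then become the app
}
```

A window manager or desktop launcher would then run something like `launch_in_cgroup firefox firefox` instead of plain `firefox`.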

Poettering wants scheduling to be handled by his "systemd", a replacement to init/upstart. This, by the way, is the developer of Pulseaudio, so those of you who've experienced broken sound in recent years can now look forward to broken system initialization, coming soon to a Linux distribution near you...

Has Linux ever had a stable sound system? My recollection is a neverending series of different sound-related components (OSS, ALSA, ESD, aRts, Jack etc.) of which the best you could say is that they worked most of the time but invariably didn't behave very well in certain cases.

Lennart seems to cop a lot of crap over Pulseaudio but as far as I can tell it's a positive contribution in an area with a lot of historical and legacy issues.

No multiseat support kills pulseaudio for me. I need 3 seats all working at the same time: the htpc/wife's instance, mpd (a server run as a separate user so that not everyone has the ability to play with shit they shouldn't), and my desktop instance.

Until he gets his head out of his ass, and starts supporting the systemwide server, I'm not really going to touch it.

You don't need Pulseaudio if your machine has a single set of speakers and a single input device or maybe a couple of devices that never change.

As soon as you add things like bluetooth or USB headsets into the mix and want to do things like move audio streams between output devices without stopping them (play the sound from the DVD I am watching on the main speakers, unless I turn on the bluetooth headset) you either need to modify each and every application to understand all these devices or else you need some kind of sound server.

You want to see some real, over-the-top rudeness? Go to a forum designed to help Linux newbs.

I've been tooling with Linux for 15 years now, so used to the arrogant "help" found in many forums and groups, but I've had non-Linux friends check some out and were completely amazed at the average level of rudeness in the average "help" reply. It certainly didn't make them want to jump into Linux when that is the average help. For obvious reasons, half the admin-types on Linux forums remind me of the comic book

Ever seen a 50-year-old ER nurse? 90% of the time, they are callused to the suffering around them. It comes with repeated exposure to the environment, and although their demeanor may seem rough to others, they are extremely efficient and skilled.

Sometimes, I think what some mistake for IT snobbishness is just a natural consequence of exposure to the lifestyle.

I thought it would be fun to post some things in answers.yahoo.com in the IT-ish categories... after a while you realize that the REALLY good questions are drowned out by people who REALLY just need to GTFO and RTFSomething.

I work in public ed IT, and can say with NO uncertainty that most people don't want the right answers, they want the nice answers. It's hard not to be rude in some cases.... it just comes out your pores after enough exposure to the environment.

You are right in that many people are inadvertently (or apathetically) rude for the purpose of efficiency. "I don't have time to be nice, I'm busy helping sick people, and being nice slows me down." While it makes them efficient and effective at the technical skills (things that CEOs love), it doesn't necessarily make them the best care givers. Outside of actual life and death emergencies (and your ER example would be exempted), how care is given is as important as the ca

Ever seen a 50-year-old ER nurse? 90% of the time, they are callused to the suffering around them. It comes with repeated exposure to the environment

So the people who are good at Linux treat the noobs like shit because they're calloused from all the pain they've suffered? That's hardly a ringing endorsement for Linux if using it long enough to become a proficient user makes you as shellshocked and numb as someone working in a triage unit.

I'm on a few mailinglists, and while I do my best to provide clear, concise, correct and helpful answers to questions, I keep being amazed at how some people simply don't bother to do the basics first. Like, you know, even looking in the general direction of the manual.

One of the lists I'm most active on these days is the MySQL one, given that I'm almost fulltime DBAing these days. Note that MySQL has excellent and comprehensive online manuals for every version you care to run.

I've seen people actually think that list is populated by MySQL employees who are paid to answer their every stupid question, and get impatient and testy if they haven't seen an answer in ten minutes.

I've seen people spam the list with first their inane RTFM questions, followed by a great big stream of "insights" on how they solved it and the most obvious straight-from-the-manual statements.

I've seen people who seem to think that the list is there to write their queries for them. After I got rather miffed and wrote a bit of a sharp mail at one of them about basic manners, what the list is for, how to ask questions properly and what to do before you even think of asking the list; that guy now consistently does his homework, tries to work it out for himself and if he really doesn't find it or wants another opinion, politely puts the question to the list in a clear and concise way. Of course he now gets all the help he needs, and he's even put a few quite interesting things to the list in the mean time. He has, interestingly, also taken to calling me 'Sir' whenever he asks me a question. I never asked for it, but I have to admit I kinda like it:-)

Sometimes it's necessary to make it quite clear to people what they can and cannot expect from online help, and how to behave there. This used to be perfectly acceptable, but since Eternal September began, the flood of ignorance has gotten so vast that I can fully imagine it can sometimes be hard to remember actually helping people in the middle of all the stupidity.

I get rude when people expect Linux to be Windows after about the third time they complain about where the control panel is (among other things) and when they're simply trolling. It's amazing the amount of trolling going on in the help forums. Sometimes it extends to irc.

Car analogy follows:

"Gee, this BMW is nothing like my Ford." "Why is the battery in back again?" "They should put the battery in the engine compartment" ("but it's there for weight distribution") "I don't care about that, it's stupid"

It's not Windows. It's not a cheap Windows. It's not anything like Windows. Stop expecting it to be Windows. Once you do that, a lot of things become _much_ easier.

I've been using Linux for 15 years too and I have never seen that. I don't know where you people go to have such bad experiences. OTOH, the Windows forums I sometimes stumble into via Google are usually full of clueless guys and devoid of actual help. Granted, Windows is opaque, but still.

This was fixed in 2004, with Ubuntu's Code of Conduct. Telling people RTFM is forbidden: either you help or shut up. People sometimes wonder what's the big deal with Ubuntu, and I'm positive this is one of the main reasons. You can check the forum http://ubuntuforums.org/ [ubuntuforums.org] or hop to Freenode's #ubuntu channel to see this policy in action. No matter how repeated or simple a question is, it is allowed, and if you reply, it is to help, even if that's pointing someone to a well written help page (like the many at h

And thus, the cycle perpetuates. Better yet, if you are going to be an ass in your reply - just don't reply. That means the user might NOT get help - and that may be a concern for them - but it's better than getting attacked - which will definitely be a concern for them.

I prefer to be friendly to the folks I like and not worry about the rest. I tend to be pretty easy going, and have helped my fair share of folks, including mailing other slashdotters hardware. I have zero problem showing people how to do something or where they can find more information on a topic.

Complaining about stuff you get for free just irks me, it is the ultimate rudeness.

You know, if you see a homeless person on the street begging for money, and you decide to give them a very generous $10, but you do so by pulling out a huge wad of bills, taking out that $10, crumpling it up, and throwing it on the ground where the man has to scramble for it before the wind takes it away... you're a better man if you instead decide not to give him the money.

"Beggars can't be choosers" means that because the homeless person is in such dire straits, he will probably tak

The linked web page doesn't really explain what's going on. For someone who uses Unix but is far from an expert, can someone describe what's going on and why this is cool? Does it even matter if you're not into deep kernel stuff?

Imagine you have an app that launches just one process, like a music player, and an app that launches 3 (for example, Firefox, which launches a new one for each plugin). Since each process has the same priority, the second app - Firefox - will effectively have 3x more CPU time than the media player, and possibly stutter the music.

The kernel has something called cgroups, which enables more than one process to be grouped, with each group getting the same CPU time. So the group (Firefox+plugins) would have the same CPU time as the media player.

This kernel patch and the terminal code enable each terminal you launch to have a different group, so if you launch Firefox from one terminal and the music player from another, they'll have different groups.

Sure. Hopefully you know what a TTY is, but in case you don't, it is a virtual or real terminal. When you open up an xterm you create one. If you don't have X installed, the console you log into is one, etc.

Well, Linus had an idea about using grouping functionality that was already in the kernel to allow all the processes (technically, all the kernel threads) running from one TTY to be grouped together for scheduling.

The result of that is that if you are running 99 processes in one xterm that could consume all of your CPU, and you open another xterm with just one process that wants 100% CPU, each xterm's processes get 50% of the CPU, rather than one getting 99% and the other getting only 1%.
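The arithmetic behind that example, worked through (the numbers follow the comment's 99-vs-1 scenario):

```shell
# 99 CPU-hungry processes in one xterm, 1 in another.
# Without grouping the scheduler sees 100 equal peers; with per-tty grouping
# it sees 2 equal groups.
ungrouped=$(awk 'BEGIN { printf "%.1f", 100 / (99 + 1) }')  # % per process, no grouping
per_group=$(awk 'BEGIN { printf "%.1f", 100 / 2 }')         # % per tty group, with grouping
echo "without grouping: ${ungrouped}% each; with grouping the lone process gets ${per_group}%"
# prints: without grouping: 1.0% each; with grouping the lone process gets 50.0%
```

Inside the big group, each of the 99 compile processes still gets its roughly 50/99 ≈ 0.5% share, but the lone interactive process is no longer drowned out.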

But let's say you only had that first xterm. Since each of those processes is not getting nearly the processor time it desires, normally the scheduler sees them as nearly starved, and the next process that only wants 5% of CPU does not get much preferential treatment for giving up most of its time. However, with the grouping, the scheduler can see that those 99 processes are related, and they are not really starved, since as a group they are getting 100%. So now when this other app that wants only 5% comes along, the scheduler might give it pretty much all of that 5% rather than the mere 1% it would have been getting before, and so that app (probably a web browser or something) remains nice and responsive.

That is not 100% accurate, since I've simplified some things a little, especially with regard to the working of the scheduler, but it should give you the idea. Eventually, more heuristics might be added, so that a GUI application that launches a bunch of threads and hogs the CPU might have all its threads grouped, so they don't hurt the responsiveness of interactive apps either.

In theory you could alter the 'launch' process for running software & check a database for 'nice' priorities so that they automatically launch with a preset 'nice' rating.

Currently, the kernel is very egalitarian - everything runs at 'nice 0' unless the user wants something different. If YOU think that extinguish_fire should have more of a priority than watch_tv, then YOU should handle the issue.

However, that isn't the issue addressed by either the patch or the userspace scripts. While adjusting niceness may help in a gross sense, it's not going to handle proper timeslicing of software that's spawning a huge number of threads and lagging other applications.

As an example, we need to run extinguish_fire and evacuate_building at the same time. extinguish_fire spawns a thread for each bucket in the brigade, while evacuate_building only spawns a thread for each escape route. Now, if there are 96 buckets & 4 escape routes, extinguish_fire will consume 96% of the CPU & choke out the evacuate_building threads.

You could try to guess the appropriate level of 'nice' for each program when you launch it, but it's not going to be pretty. To get even timing, you would be pushing evacuate_building to nice -19 - an act that would make it next to impossible to establish any control over the bucket brigades.

By grouping all of the threads from a program, extinguish_fire and evacuate_building get equal footing regardless of the number of threads they spawn. Both of them remain responsive to commands without taking the huge hits you get from drastic nice levels. If both processes aren't running smoothly, you can renice the group rather than take the nice hit 'threadcount' times.

An early comment on LWN [lwn.net] captured the technical argument best, I think, which I guess illustrates both the quality of the articles and the posters on LWN. The background to this is that we are discussing CPU scheduling. If you don't know what CPU scheduling is, think of it as a form of mind reading. I'll illustrate.

Let's say you have asked your computer to do several things, in fact so many that if it follows the usual method of simply dividing its time equally between them, it is going to annoy you. The video you're watching might start flickering, or the music you're listening to will drop out. So obviously the computer must now give more CPU time to playing your movie and less to whatever background task you started, such as that MP3 transcode of your 20,000-song library. Except how is the computer supposed to know this? This is how we get to mind reading.

The hack we are discussing is essentially the discovery of a way to read the minds of one particular type of computer user - the Linux kernel developer. The Linux kernel developer is in the habit of starting huge background jobs called kernel compiles. These kernel compiles take a looong while, so the kernel developers, being very clever people, have invented all sorts of ways of speeding them up. One of those ways is to divide the task into lots of little bits, and then fire off separate tasks to do each. This takes maximum advantage of available CPU cores, soaking up every skerrick of available CPU time. Naturally enough, that leaves none left over for other important tasks, like watching a movie while waiting for your kernel compile. In this particular case the default CPU scheduling strategy of giving each task an equal share of CPU is woefully poor, because there might be 20 kernel compile tasks and just one movie-watching task, so the movie player ends up with 1/20 of the available CPU time. This isn't enough to play a movie.

The mind-reading trick discovered boils down to this: Linux kernel developers use the command line to fire off the kernel compile. And it turns out that for years now the kernel has been able to group the tasks started from a command line and give that group a single portion of CPU time, as opposed to an equal portion to each task in the group. Thus you only have to split the CPU time in two, one portion going to the kernel compile group and the other going to the movie player. Naturally enough the movie player works really well with a 50% allocation of CPU, and so we have a happy kernel developer.

Now we come to the merits of the two hacks. They both do the job I just described equally well. The difference between them is that one, the kernel patch, is automagic, meaning it happens automatically without anybody having to lift a finger. But it comes at the expense of bloating the Linux kernel a tiny bit, even for users who won't benefit from it. The other currently has to be applied manually, using a process the vast majority of Linux users will at best find difficult, tedious and error-prone.

Seems like a simple decision, eh - let's take the tiny bloat hit and not inflict yet another user-unfriendly Linux idiosyncrasy on our long-suffering desktop users. But here is the rub: it doesn't help them. In fact, for some it might have a negative impact (a gstreamer pipeline started from the command line springs to mind). The people who will benefit from this are the ones who use the command line heavily and regularly. People like Linus. Which is why he liked it so much, I guess. But these are precisely the people who will have absolutely no trouble doing it the manual way.

This isn't a nasty hack like some userspace bodges round kernel problems can be. The functionality to schedule the CPU in controlled ways to different groups of processes has been in the kernel for some time now and simply needs configuring from userspace. The 200 line patch adds some default configuration of this mechanism to the kernel; this alternative uses the existing functionality to do the same thing. The same kernel mechanism should end up handling it.

It seems to be creating a file in a group registry that tells the kernel that your shell, which is interactive, is a group. What that means I don't know, but I'm sure it's something that can be built right into the shell, which can manage that for you. The code that does something useful with a group once it's identified is clearly already part of the kernel, and runs better than the kernel hack that's proposed.

That was the case a couple of years ago, but have you tried it recently? I haven't actually had a single audio problem since switching from debian/alsa to ubuntu 10.04/PA, and I now have a ton of useful features on top:-) (per-app volume, per-app output devices, network streaming, seamless switching between headphones and HDMI, etc)

You were mod'ed "funny", but seriously, I've been using tcsh (interactively) since the 80s and prefer it to bash. I also tend to write scripts in ksh as that's been more portable and available (native) than bash on non-Linux systems - though that's changing.

Why should a user have to bother doing this in the first place just to have a responsive desktop system?

They don't. It's helpful if you are doing certain things that show up behaviour that the patch or the commands described above can counteract. If you don't do that (and it appears to be most helpful at speeding up your desktop if you are *at the same time* compiling huge programs, and similar work, which most desktop users don't do) it won't make much difference.

I see the GP is well on his way to earning the elusive (Score:5, Troll) achievement which is one of the rarest drops on Slashdot (BTW, when did Slashdot turn into XBox Live with achievements? Now, get off my lawn...)

Just an FYI, your ad hominem attack does not detract from his legitimate point. I am well aware of the technical issues involved, but at some point you have to stop giving Linux & X Windows a pass just because Unix/X was crappily designed in this regard back in the 70's (tty's and client/

The problem is, 99% of processes run by a heavy GUI desktop user have the same TTY as well. They only change if you start something from a terminal.

Would be nice if DEs could add processes to different cgroups. Then each application and its children would be in a different cgroup, like (Firefox+plugin-container), (mplayer), (terminal+make+gcc+ld), etc.