>> less I throttled that, the less effective the antistarvation was. However
>> this is clearly a problem without using up full timeslices. I originally
>> thought they weren't trying to schedule lots because of the drop in ctx
>> during starvation, but I forgot that rescheduling the same task doesn't
>> count as a ctx.
>
> Hmm. In what way did it hurt interactivity? I know that if you pass
> the cpu off to a non-gui task who's going to use his full 100 ms slice,
> you'll definitely feel it. (made workaround, will spare delicate
> tummies ;) If you mean that, say, X releases the cpu and has only a
> couple of ms left on its slice and is alone in its queue, that
> preempting it at the end of its slice after having had the cpu
> for such a short time after wakeup hurts, you can qualify the preempt
> decision with a cpu possession time check.

I wish I could get mm3 running so I could evaluate those interactivity statements. I can't imagine it being worse than what I'm experiencing now:

That would be compiling the kernel, bunzipping a file, and some cron mandb thing running gzip niced in the background. Plus X and Mozilla, which probably starts the problem. At the end there, you can see things calm down. That's also the way it starts out; then something sets off the "priority inversion" and the machine becomes completely worthless. Even the tasks that are running aren't really accomplishing anything. So the load goes from around 4 or 5 into the teens, and the context switching makes a corresponding jump. And then both interactivity AND throughput fall through the floor.

I can't imagine any interactivity regressions that are worse than this behavior...

And this happens with just X and Mozilla running. It happens less often without X, but it still happens. Even at a VT, it can take 5-6 seconds for my text to appear after I type. It occurs regularly, about once every few minutes, and correlates with a simultaneous increase in context switches and load.

>> Also I recall that winegames got much better in O10 when everything was
>> charged at least one jiffy (pre nanosecond timing) suggesting those
>> that were waking up for minute amounts of time repeatedly were being
>> penalised; thus taking out the possibility of the starving task staying
>> high priority for long.
>
> (unsure what you mean here)

Can you set a cutoff point where, if a process uses less than X percent of the max timeslice, it is penalized? I don't know if it's possible to time a loop of empty spins at some point to find out what the cutoff should be... otherwise I imagine it would need to be tuned for every processor speed. Could you use the bogomips figure to gauge the speed of the machine and use that to determine the minimum timeslice? From what I understand above, that would be more selective than just penalizing every process and should have a positive effect on everything... of course, I'm open to the possibility that I have it all wrong ;-)