Well, is there any way to check that indeed the BFS scheduler has been applied?
I don't see any difference in my system in comparison with the previous kernel.
No speedups, no slowdowns, no hangs, nothing... while I expected serious problems, since I use reiserfs, which causes problems with BFS according to Kolivas.

Well, if you had no interactivity problems before, then there's nothing to "improve" in the first place. If you don't have problems like those described here:

No, I didn't have any of these problems. Also, I have a single-core processor, while BFS shines with multicores according to its creator. But not even a single regression or problem with the new scheduler? Strange...

Five pages of posts since I was last here and only ten of them are BFS-related?

Anyway, I've been thinking more about a benchmark for responsiveness. Using cyclictest from the RT Linux Wiki, create threads that sleep for an interval that isn't an even multiple of the timer tick period (1/HZ), and measure how long it takes for them to actually wake up. Several threads would be created at different SCHED_FIFO priority levels, plus several threads at SCHED_ISO on BFS and SCHED_OTHER on both schedulers. Gather all the delay statistics from the threads (including a histogram of latencies), and plot them on a 3D bar graph with the x axis being thread priority grouped by scheduling class (i.e. FIFO, RR, ISO, OTHER), the y axis being latency, and the z axis being the frequency of that latency for that thread. Each scheduler would have a graph plotted for no load, medium load, and heavy load, resulting in six graphs which could be visually compared. Then, the minimum, mean, maximum, and standard deviation of latencies would be plotted for the two schedulers and three loads, giving another graph with a line per load level and a shaded stripe indicating standard deviation around each mean line.

I don't have time to implement this, but it would be really helpful to have something like this in PTS. Any takers? Please?

P.S. I mostly disagree with the way I was quoted by kebabbert. I only think it would be a Good Thing if there were a single point release devoted to optimization, kind of like Snow Leopard. Instead of the usual merge window where everyone is bombarding LKML with new features and drivers, there would be a shorter release cycle where all the subsystem maintainers engage in a virtuous and heroic quest to seek out latencies and hidden bugs in their respective domains. Yes, I know it's just a romantic way of describing a code audit, but marketing works, you know? I didn't intend to suggest that bug fixing doesn't happen.

Plus, as a kernel developer*, I would like to have a subset of the kernel API that I know won't change for X years, to reduce my maintenance costs and let me focus on cool new ideas.

*I'm a kernel developer in the sense that I write code that runs in the kernel, not in the sense that I participate in LKML and influence mainline.