A development blog of what Con Kolivas is doing with code at the moment with the emphasis on linux kernel, MuQSS, BFS and -ck.

Monday, 27 August 2018

linux-4.18-ck1, MuQSS version 0.173 for linux-4.18

Announcing
a new -ck release, 4.18-ck1 with the latest version of the Multiple
Queue Skiplist Scheduler, version 0.173. These are patches designed to
improve system responsiveness and interactivity with specific emphasis
on the desktop, but configurable for any workload.

EDIT: It turns out it won't build with full dynticks enabled. I've committed a small change to the respective git trees for anyone who wants to configure it that way (though I'd usually recommend against it).
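For reference, "full dynticks" corresponds to the following kernel configuration symbol (a sketch of the relevant .config lines; check your own kernel version's options):

```
# Full dynticks ("nohz_full") mode, the configuration that triggered the build failure:
CONFIG_NO_HZ_FULL=y
# Most desktop configurations use idle dynticks instead:
# CONFIG_NO_HZ_IDLE=y
```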

Con, would you ever consider adding Kconfig options for the (default) values of yield_type, rr_interval, interactive to the project?

I've been running a specific combination of those and the experience is so smooth as well as performant that I'm considering maybe sharing it with the community (via PPA/AUR or some such). Obviously I could just run with my own patch set for it but for consistency I think having Kconfig options for them might be better anyhow.
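A Kconfig fragment along the lines being requested might look like the following. This is purely illustrative: the symbol names, defaults and help texts are invented here, not part of MuQSS; the underlying tunables (kernel.rr_interval, kernel.yield_type, kernel.interactive) are normally adjusted at runtime via sysctl.

```
config SCHED_MUQSS_RR_INTERVAL
	int "Default round-robin interval in milliseconds"
	default 6
	help
	  Boot-time default for the kernel.rr_interval sysctl.

config SCHED_MUQSS_INTERACTIVE
	bool "Enable interactive mode by default"
	default y
	help
	  Boot-time default for the kernel.interactive sysctl.
```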

I am using a very specific set of settings for MuQSS as well as for Kconfig.

To be precise, I took the kernel source clean off kernel.org, then applied the Ubuntu low-latency configuration (the most noteworthy settings there being hard preempt and threaded IRQs), set the timer frequency to 100Hz and compiled that.
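Assuming the settings named above, the relevant .config excerpt for such a build would be roughly the following (a sketch; verify the exact symbol names against your kernel version, and note that forced IRQ threading additionally requires the threadirqs boot parameter):

```
CONFIG_PREEMPT=y                # "hard" (full) preemption
CONFIG_IRQ_FORCED_THREADING=y   # threaded IRQs (enabled at boot with threadirqs)
CONFIG_HZ_100=y                 # 100Hz timer frequency
CONFIG_HZ=100
```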

For the IO scheduler I use cfq, since bfq is not production ready and the other schedulers suited to rotational disks (deadline, noop) simply do not perform well enough under extreme workloads (such as kernel compilation).
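Selecting cfq per device is typically done through sysfs; a udev rule such as the following (the rule file path here is a hypothetical example) makes it persistent for rotational disks:

```
# /etc/udev/rules.d/60-iosched.rules (hypothetical path)
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
```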

Governor is set to schedutil. If I had had an Intel, I would've also disabled intel_pstate. But, fortunately, I do not.

And the boot commandline has rqshare set to smp. For most of the workloads I use this machine for (kernel compilation, gaming, some coding and general Internet use) I found that having multiple queues simply performs worse, and rqshare=smp forces a single queue regardless of the hardware configuration.
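On an Ubuntu system, the rqshare setting (and, on Intel hardware, disabling intel_pstate so that schedutil takes over frequency scaling) would typically go into /etc/default/grub, for example:

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rqshare=smp"
# On an Intel CPU one would additionally append intel_pstate=disable
```

followed by running update-grub and rebooting.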

Note that the CPU I am using is a quad-core without HT.

Anyhow, that's my configuration in a nutshell.

Out of all the possible kernels available to me on this Ubuntu MATE installation (Ubuntu's generic kernel, Ubuntu's low latency kernel, Con's complete ck set and liquorix) I found this personal configuration the most responsive as well as the most performant.

This patch seems to apply with no errors against 4.18 as well as 4.18.12. However, going with the default of allowing CPUs on the same NUMA node to share the same scheduler runqueue, the system froze on boot. It crashed before anything was written to disk, but the last line was something about CPU 10 being assigned to the same runqueue as CPU 0, which would make sense as they are on the same NUMA node according to 'lscpu':