Monday 27 August 2018

linux-4.18-ck1, MuQSS version 0.173 for linux-4.18

Announcing a new -ck release, 4.18-ck1, with the latest version of the Multiple Queue Skiplist Scheduler, version 0.173. These are patches designed to improve system responsiveness and interactivity, with specific emphasis on the desktop, but configurable for any workload.

-ck1 patches:
Git tree:
MuQSS only:
Git tree:


This is just a resync from 4.17 MuQSS and -ck patches.


EDIT: It turns out it won't build with full dynticks enabled. I've committed a small change to the respective git trees for anyone who wants to configure it that way (I'd usually recommend against it.)
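For reference, "full dynticks" corresponds to this mainline kernel config symbol (found under "Timer tick handling" in menuconfig):

```shell
# Full dynticks system (tickless) -- the option the build fix above targets:
CONFIG_NO_HZ_FULL=y
```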


  1. Builds and boots, and no issues so far.
    Thanks for the update again.

  2. Thanks so much.

  3. 0011-Make-hrtimer-granularity-and-minimum-hrtimeout-confi.patch fails on 4.18.5:

    patch -p1 < /usr/src/4.18-ck1-patches/0011-Make-hrtimer-granularity-and-minimum-hrtimeout-confi.patch
    patching file kernel/sysctl.c
    Hunk #1 FAILED at 134.
    Hunk #2 succeeded at 1036 with fuzz 1 (offset -46 lines).
    1 out of 2 hunks FAILED -- saving rejects to file kernel/sysctl.c.rej
    patching file kernel/time/clockevents.c
    Hunk #1 FAILED at 198.
    1 out of 1 hunk FAILED -- saving rejects to file kernel/time/clockevents.c.rej
    patching file kernel/time/hrtimer.c

    1. OK, works fine now. The error was on my side. Obviously I am too stupid to patch correctly.

    2. So, what was the error you made? Running into failed hunks as well when trying to apply just MuQSS to 4.18.5.

      What was your error? Maybe it'll be mine as well.

    3. I think I had a typo in the first patch so it didn't apply, so the subsequent ones of course failed.
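      For what it's worth, a small wrapper like the following guards against exactly that cascade: it dry-runs each patch before applying it, so the series stops at the first real failure instead of producing misleading rejects in later patches (the function and paths are my own sketch, not part of -ck):

```shell
# Apply every *.patch in directory $2 (in lexical order) to the source
# tree in $1, aborting on the first patch that does not apply cleanly.
# Runs in a subshell so the caller's working directory is untouched.
apply_series() (
    cd "$1" || exit 1
    for p in "$2"/*.patch; do
        # --dry-run first: a failing patch then leaves no half-applied hunks
        patch -p1 -s --dry-run < "$p" || { echo "FAILED: $p" >&2; exit 1; }
        patch -p1 -s < "$p"
    done
)

# e.g. apply_series linux-4.18 /usr/src/4.18-ck1-patches
```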

  4. Nope, I'm getting an actual unresolvable issue in schedutil.c here. Code that cannot even be found. Odd.

    1. Looks good here:

    2. The error was mine: I tried applying it to the Ubuntu kernel sources, which apparently does not work flawlessly. So I based them off the vanilla sources instead.

      Working fine now. Toying around with my own spun kernels now. Never done that before.

      Happy to finally run MuQSS + CFQ, as I don't trust BFQ one bit in its current state.

    3. Sounds wise. BFQ hasn't been impressing me much of late.

    4. Yeah, Ubuntu kernels are heavily patched. I use mq-deadline; the system seems to be a tad more responsive using it, although I did no disk benchmark comparisons.

    5. None of the mq schedulers are viable for me; I'm on a rotational disk and blk-mq (the backend of the mq infrastructure) stalls during boot, for me anyhow.

      Only sq will simply continue booting without stalling. Not sure why that is.

      If I had been on an SSD, I think I would've just opted for noop instead, since SSDs are so fast that you basically want the simplest possible IO scheduler.
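      As a side note, the active elevator can be checked from sysfs, where the kernel marks the scheduler currently in use with brackets; a small helper (my own, though the sysfs path is standard) makes that scriptable:

```shell
# Print the active IO scheduler from a queue/scheduler file; the kernel
# lists all available elevators there and brackets the active one,
# e.g. "noop deadline [cfq]".
current_scheduler() {
    sed -n 's/.*\[\([^]]*\)\].*/\1/p' "$1"
}

# Usage (device name is an example):
#   current_scheduler /sys/block/sda/queue/scheduler
```

      On kernels of this era the single-queue path can also be forced for SCSI devices at boot with scsi_mod.use_blk_mq=0 on the kernel command line.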

  5. MuQSS + hrtimeout patches (from -ck) on top of the latest Ubuntu source (linux_4.18.0-7.8) works for me.

  6. A needed buildfix:

    CK patchset broke force_irqthreads export

    Fix that up.

    Signed-off-by: Thomas Backlund

    --- linux-4.18/kernel/irq/manage.c.orig
    +++ linux-4.18/kernel/irq/manage.c
    @@ -27,8 +27,8 @@
    __read_mostly bool force_irqthreads = true;
    __read_mostly bool force_irqthreads;
    static int __init setup_noforced_irqthreads(char *arg)
    force_irqthreads = false;

  7. Still applying perfectly to the recently released 4.18.6.

  8. Con, would you ever consider adding Kconfig options for the (default) values of yield_type, rr_interval and interactive to the project?

    I've been running a specific combination of those and the experience is so smooth as well as performant that I'm considering maybe sharing it with the community (via PPA/AUR or some such). Obviously I could just run with my own patch set for it but for consistency I think having Kconfig options for them might be better anyhow.

    Mind you, it's just a question. Don't feel compelled in any way.
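    To make the question concrete, such options might look roughly like this. These entries are hypothetical (they do not exist in MuQSS), and the defaults shown just mirror the shipped tunable values as I recall them:

```
# Hypothetical Kconfig sketch only -- MuQSS has no such entries today.
config SCHED_MUQSS_RR_INTERVAL
	int "Default rr_interval in milliseconds"
	default 6

config SCHED_MUQSS_INTERACTIVE
	bool "Enable interactive mode by default"
	default y

config SCHED_MUQSS_YIELD_TYPE
	int "Default yield_type (0-2)"
	default 1
```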

    1. I played with that a while ago, but it hardly seems worth it since they're runtime-configurable and can just be added to sysctl.conf.

    2. True but you know how it is, people like fire-and-forget things. Anyhow, good to know the thought had also crossed your mind.

      I'll see what I can do to facilitate things on my end.

      Thanks for getting back to me so promptly. It's appreciated.

    3. What values are you using? Always interesting testing for smoothness and performance :)

    4. I am using a very specific set of settings for MuQSS as well as for Kconfig.

      To be precise, I started from clean kernel sources, applied the Ubuntu low-latency configuration (the most noteworthy settings there being hard preempt and threaded IRQs), set the timer frequency to 100Hz and compiled that.
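      In .config terms, the build side of that recipe (hard preempt plus a 100Hz tick) corresponds to settings like these (mainline symbol names):

```shell
# Kernel config symbols for full ("hard") preemption and a 100Hz tick:
CONFIG_PREEMPT=y
CONFIG_HZ_100=y
CONFIG_HZ=100
```

      Threaded IRQs can also be forced on a stock kernel by booting with the threadirqs parameter.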

      Then, in /etc/sysctl.conf, I have:

      net.core.default_qdisc = fq_codel
      net.ipv4.tcp_congestion_control = bbr
      kernel.interactive = 1
      kernel.rr_interval = 1
      kernel.yield_type = 2

      For IO scheduler I use cfq, since bfq is not production ready and the other rotational schedulers (deadline, noop) simply do not perform well enough while under extreme workload (such as kernel compilation).

      Governor is set to schedutil. If I had an Intel CPU, I would've also disabled intel_pstate. But, fortunately, I do not.

      And the boot command line has rqshare set to smp, since for most workloads I use this machine for (kernel compilation, gaming, some coding and general Internet use) I found that having multiple queues simply performs worse; rqshare=smp forces a single queue no matter the hardware configuration.
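      On Ubuntu that boot parameter would typically go in /etc/default/grub, followed by running update-grub, e.g.:

```shell
# /etc/default/grub fragment (quiet/splash are Ubuntu's stock defaults):
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rqshare=smp"
```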

      Note that the machine I am using has a quad-core CPU without HT.

      Anyhow, that's my configuration in a nutshell.

      Out of all the possible kernels available to me on this Ubuntu MATE installation (Ubuntu's generic kernel, Ubuntu's low latency kernel, Con's complete ck set and liquorix) I found this personal configuration the most responsive as well as the most performant.

    5. Forgot to mention that I obviously also first applied the MuQSS patch to the sources.

    6. And I compiled it with GCC 8.2, since the few benchmarks I did find comparing GCC 7 to 8 show that 8 generates faster binaries.

    7. Further development: I left 100Hz by the wayside. It caused various issues (including a very annoying sound-related issue in WINE) that were solved instantly at 1000Hz.

      So, same configuration but 1000 Hz vs 100 Hz.

    8. 100Hz is only useful when used in conjunction with the highres timer patches in -ck.

  9. This patch seems to apply with no errors against 4.18 as well as 4.18.12. However, going with the default of allowing CPUs on the same NUMA node to share a scheduler queue, the system froze on boot. It crashed before the log made it to disk, but the last line said something about CPU 10 being assigned to the same runqueue as CPU 0, which would make sense as they are on the same NUMA node according to 'lscpu':

    NUMA node0 CPU(s): 0,2,4,6,8,10
    NUMA node1 CPU(s): 1,3,5,7,9,11

    But I wanted to know: how does the MuQSS-only patch differ from the full 4.18-ck patch?

    I think I only grabbed the muqss-patch.

    Is it the -ck patch that has SCHED_IDLEPRIO and SCHED_ISO?



  10. Here is the 4.19 synced ck patch.
    I just got it to compile and it seems to be working OK, but no guarantees.

  11. Con, are you ok? Did you quit?

    1. Fine. No, just distracted.

    2. OK, alright. Thanks.