Monday, 27 August 2018

linux-4.18-ck1, MuQSS version 0.173 for linux-4.18

Announcing a new -ck release, 4.18-ck1, with the latest version of the Multiple Queue Skiplist Scheduler, version 0.173. These are patches designed to improve system responsiveness and interactivity, with specific emphasis on the desktop, but configurable for any workload.

linux-4.18-ck1:
-ck1 patches:
Git tree:
MuQSS only:
Download:
Git tree:


Web: http://kernel.kolivas.org


This is just a resync from 4.17 MuQSS and -ck patches.


Enjoy!
お楽しみ下さい
-ck

EDIT: It turns out it won't build with full dynticks enabled. I've committed a small change to the respective git trees for anyone who wants to configure it that way (I'd usually recommend against it.)
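To check whether a config is affected before building, a quick sketch (the function name, paths and messages here are my own, not part of the patch set):

```shell
# Sketch: warn if full dynticks (NO_HZ_FULL) is enabled in a kernel .config,
# since 4.18-ck1 originally failed to build with it enabled.
# check_dynticks and its messages are illustrative only.
check_dynticks() {
    config_file=$1
    if grep -q '^CONFIG_NO_HZ_FULL=y' "$config_file"; then
        echo "NO_HZ_FULL enabled: pull the updated -ck/MuQSS git trees first"
    else
        echo "NO_HZ_FULL not enabled: safe to build"
    fi
}

# usage (path is an example):
# check_dynticks /usr/src/linux-4.18/.config
```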

23 comments:

  1. Builds and boots, and no issues so far.
    Thanks for the update again.

  2. Thanks so much.

  3. 0011-Make-hrtimer-granularity-and-minimum-hrtimeout-confi.patch fails on 4.18.5:

    patch -p1 < /usr/src/4.18-ck1-patches/0011-Make-hrtimer-granularity-and-minimum-hrtimeout-confi.patch
    patching file kernel/sysctl.c
    Hunk #1 FAILED at 134.
    Hunk #2 succeeded at 1036 with fuzz 1 (offset -46 lines).
    1 out of 2 hunks FAILED -- saving rejects to file kernel/sysctl.c.rej
    patching file kernel/time/clockevents.c
    Hunk #1 FAILED at 198.
    1 out of 1 hunk FAILED -- saving rejects to file kernel/time/clockevents.c.rej
    patching file kernel/time/hrtimer.c

    1. Ok works fine now. Error on my side. Obviously I am too stupid to patch correctly.

    2. So, what was the error you made? Running into failed hunks as well when trying to apply just MuQSS to 4.18.5.

      What was your error? Maybe it'll be mine as well.

3. I think I had a typo in the first patch so it didn't apply, so the subsequent ones failed, of course.
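Since the series is numbered, one way to avoid this class of mistake is to apply the patches in sorted order and stop at the first failure. A sketch (the function and paths are examples, not part of the -ck release):

```shell
# Sketch: apply a numbered patch series in sorted order, stopping at the
# first failure so later patches don't fail spuriously against a
# half-patched tree. apply_series is illustrative only.
apply_series() {
    dir=$1
    for p in "$dir"/*.patch; do
        echo "applying ${p##*/}"
        patch -p1 -s < "$p" || { echo "failed at ${p##*/}" >&2; return 1; }
    done
}

# usage: cd /usr/src/linux-4.18 && apply_series /usr/src/4.18-ck1-patches
```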

  4. Nope, getting an actual unresolvable issue in schedutil.c here. Code that cannot even be found. Odd.

    1. Looks good here: https://pastebin.com/mn3G4qP5

2. The error was mine: I tried applying it to the Ubuntu kernel sources, which apparently does not work flawlessly. So I based them off the kernel.org sources instead.

      Working fine now. Toying around with my own spun kernels now. Never done that before.

      Happy to finally run MuQSS + CFQ, as I don't trust BFQ one bit in its current state.

    3. Sounds wise. BFQ hasn't been impressing me much of late.

4. Yeah, Ubuntu kernels are heavily patched. I use mq-deadline; the system seems a tad more responsive with it, although I did no disk benchmark comparisons.

5. None of the mq schedulers are viable for me: I'm on a rotational disk, and blk-mq (the backend of the mq infrastructure) stalls during boot, for me anyhow.

      Only sq will simply continue booting without stalling. Not sure why that is.

If I had been on an SSD, I think I personally would've just opted for noop instead, since SSDs are so fast that you basically want the simplest possible IO scheduler.
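One way to make a scheduler choice like this stick across boots (assuming the legacy single-queue blk layer is in use, as described above) is a udev rule; the file name and matching below are my own example, not from this thread:

```
# /etc/udev/rules.d/60-iosched.rules (hypothetical example):
# pin rotational disks to the single-queue cfq elevator at boot.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", \
  ATTR{queue/scheduler}="cfq"
```

Note that cfq is only offered when the kernel is not using blk-mq for the device, so this assumes booting without the mq path, as the commenter does.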

5. MuQSS + hrtimeout patches (from ck) on top of the latest Ubuntu source (linux_4.18.0-7.8) works for me.

  6. A needed buildfix:

    CK patchset broke force_irqthreads export

    Fix that up.

Signed-off-by: Thomas Backlund

    --- linux-4.18/kernel/irq/manage.c.orig
    +++ linux-4.18/kernel/irq/manage.c
    @@ -27,8 +27,8 @@
     __read_mostly bool force_irqthreads = true;
     #else
     __read_mostly bool force_irqthreads;
    -EXPORT_SYMBOL_GPL(force_irqthreads);
     #endif
    +EXPORT_SYMBOL_GPL(force_irqthreads);
     static int __init setup_noforced_irqthreads(char *arg)
     {
     	force_irqthreads = false;

  7. Still applying perfectly to the recently released 4.18.6.

  8. Con, would you ever consider adding Kconfig options for the (default) values of yield_type, rr_interval, interactive to the project?

    I've been running a specific combination of those and the experience is so smooth as well as performant that I'm considering maybe sharing it with the community (via PPA/AUR or some such). Obviously I could just run with my own patch set for it but for consistency I think having Kconfig options for them might be better anyhow.

    Mind you, it's just a question. Don't feel compelled in any way.

1. I played with that a while ago, but it hardly seems worth it since they're run-time configurable and can just be added to sysctl.conf.

    2. True but you know how it is, people like fire-and-forget things. Anyhow, good to know the thought had also crossed your mind.

      I'll see what I can do to facilitate things on my end.

      Thanks for getting back to me so promptly. It's appreciated.

    3. What values are you using? Always interesting testing for smoothness and performance :)

    4. I am using a very specific set of settings for MuQSS as well as for Kconfig.

      To be precise, I took the kernel source clean off kernel.org, then applied the Ubuntu low latency configuration (the most noteworthy settings there would be hard preempt and threaded_irqs), set the timer frequency to 100Hz and compiled that.

      Then, in /etc/sysctl.conf, I have:

      net.core.default_qdisc = fq_codel
      net.ipv4.tcp_congestion_control = bbr
      kernel.interactive = 1
      kernel.rr_interval = 1
      kernel.yield_type = 2

      For IO scheduler I use cfq, since bfq is not production ready and the other rotational schedulers (deadline, noop) simply do not perform well enough while under extreme workload (such as kernel compilation).

Governor is set to schedutil. If I had an Intel CPU, I would've also disabled intel_pstate. But, fortunately, I do not.

And the boot command line has rqshare set to smp, since I found, for most of the workloads I use this machine for (kernel compilation, gaming, some coding and general Internet use), that having multiple queues simply performs worse; rqshare=smp forces a single shared queue, no matter the hardware configuration.

Note that the CPU I am using is a quad-core part without hyper-threading.

      Anyhow, that's my configuration in a nutshell.

      Out of all the possible kernels available to me on this Ubuntu MATE installation (Ubuntu's generic kernel, Ubuntu's low latency kernel, Con's complete ck set and liquorix) I found this personal configuration the most responsive as well as the most performant.
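For reference, rqshare is a boot parameter, so it goes on the kernel command line; a sketch of how it might be wired up on an Ubuntu-style system (the file and the surrounding flags are examples, not from this comment):

```
# /etc/default/grub (example), followed by running `sudo update-grub`:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rqshare=smp"
```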

    5. Forgot to mention that I obviously also first applied the MuQSS patch to the kernel.org sources.

6. And I compiled it with GCC 8.2, since the few benchmarks I found comparing GCC 7 to 8 show that 8 generates faster binaries.
