Monday, 22 July 2019

linux-5.2-ck1, MuQSS version 0.193 for linux-5.2

Announcing a new -ck release, 5.2-ck1, with the latest version of the Multiple Queue Skiplist Scheduler, version 0.193. These are patches designed to improve system responsiveness and interactivity, with specific emphasis on the desktop, but configurable for any workload.

linux-5.2-ck1:
-ck1 patches:
Git tree:
MuQSS only:
Download:
Git tree:


Web: http://kernel.kolivas.org


This is mostly a resync from 5.1-ck1. A reminder if you're new to using my patches, MuQSS performs best when in combination with the full -ck patchset as they're all complementary changes.

Enjoy!
-ck

14 comments:

  1. Thank you!!! The broken-out set applied cleanly to a bare kernel, as did the 5.2.1 incremental kernel patch.

    Can you explain the difference between the broken out patchset vs. the regular patchset?

    ReplyDelete
    Replies
    1. The broken-out set is just all the incremental patches that make up the whole -ck1 patch, in case people want to audit it or select individual parts of the patchset.

      Delete
  2. @ck:
    BTW, your last blog entry/thread "linux-5.1-ck1, MuQSS version 0.192 for linux-5.1" completely disappeared with your new announcement for kernel 5.2.
    Any chance to get it back?

    Best regards,
    Manuel

    ReplyDelete
  3. Runs great!
    Thanks.

    ReplyDelete
  4. With regard to reducing the timer frequency to 100, would it be unwise to manually patch it down further, to HZ=10? I've already brought CONFIG_RCU_BOOST_DELAY under 100.

    ReplyDelete
    Replies
    1. Probably wouldn't make any demonstrable improvement, but it might break code that expects HZ to be 100 or higher.

      Delete
  5. Runs amazing on a Pentium N3700.
    Thank you.

    ReplyDelete
  6. Can you please check whether statistics for PSI (especially memory) are collected correctly?
    HZ=1000, idle dynticks.

    cat /proc/pressure/*
    [cpu]    some avg10=99.00 avg60=99.00 avg300=98.94 total=2059480257
    [io]     some avg10=13.37 avg60=14.76 avg300=9.24 total=231227064
    [io]     full avg10=0.00 avg60=0.00 avg300=0.00 total=1985
    [memory] some avg10=49.00 avg60=49.00 avg300=48.94 total=933291484
    [memory] full avg10=0.00 avg60=0.00 avg300=0.00 total=325

    ReplyDelete
  7. Con, I'd like to ask your opinion on MuQSS + LLC / runqueues.
    I've checked the code (which I don't understand much of), and you check whether CPUs share cache: when it's the same CPU, locality is set to 0; when the CPUs are SMT siblings, locality is set to 1; when the CPUs share cache, locality is set to 2. The remaining levels seem unimportant for desktop CPUs.

    On Intel desktop CPUs there is one big L3, or LLC (Last Level Cache), available to all CPUs, so the topology has two levels. Ryzen, however, has multiple LLCs, each shared by a core complex (a group of cores), much like some Xeon CPUs, which makes the topology three levels. So it seems to me (and I could be wrong here) that this is not taken into account.
    I assume it's debatable whether considering multiple LLCs actually helps; I'd guess it helps more on Ryzen (due to Infinity Fabric latency) than on Intel, but again, I could be wrong here as well.

    So the questions are: could it be beneficial to take multiple LLCs into account when finding the best CPU to migrate (schedule) a task to? Is the three-level topology the reason why, on Ryzen (or on Xeon as well? I can't verify), even with rqshare=all (where the code says "/* This should only ever read 1 */"), we get an odd number of runqueues? Is the odd rq count just a cosmetic issue? Would it be hard to try out multiple-LLC support / fix the rq count?

    Thanks.

    ReplyDelete
    Replies
    1. Doubt it will make a demonstrable difference on MuQSS. No idea why there's an uneven number; I suspect something fundamental is actually broken, as it really shouldn't happen. I just don't have the time and hardware to investigate.

      Delete
    2. Sounds like it might be beneficial to start bringing more people on board, to diversify the test hardware and accelerate the investigation.

      Delete
    3. And where exactly do you propose I get these people? I certainly wouldn't turn down patches but it's not like anyone's offering to help.

      Delete