Wednesday 27 June 2018

linux-4.17-ck1, MuQSS version 0.172 for linux-4.17

Announcing a new -ck release, 4.17-ck1, with the latest version of the Multiple Queue Skiplist Scheduler, version 0.172. These are patches designed to improve system responsiveness and interactivity, with specific emphasis on the desktop, but they are configurable for any workload.
linux-4.17-ck1:
-ck1 patches:
Git tree:
MuQSS only:
Download:
Git tree:


Web: http://kernel.kolivas.org


This is just a resync of the 4.16 MuQSS and -ck patches against 4.17.


Enjoy!
お楽しみ下さい (Please enjoy)
-ck

32 comments:

  1. Thanks a lot!

  2. Thank you very much!

  3. Thanks.
    I also love the tunables.
    rqshare=mc
    echo 1 > /proc/sys/kernel/hrtimeout_min_us
    echo 15 > /proc/sys/kernel/hrtimer_granularity_us
    echo 0 > /proc/sys/kernel/yield_type
    Perfection. (A sketch for making these settings persistent follows below.)

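    For anyone who wants the tunables above to survive a reboot, here is a minimal sketch. It assumes a distro that reads /etc/sysctl.d/ at boot and a GRUB setup using /etc/default/grub; the file name and the rqshare value are just examples.

      # /etc/sysctl.d/90-muqss.conf - reapplied at every boot by systemd-sysctl
      kernel.hrtimeout_min_us = 1
      kernel.hrtimer_granularity_us = 15
      kernel.yield_type = 0

      # rqshare is a boot parameter rather than a sysctl, so it goes on the
      # kernel command line instead, e.g. in /etc/default/grub:
      #   GRUB_CMDLINE_LINUX_DEFAULT="... rqshare=mc"
      # then regenerate the bootloader config:
      #   update-grub                              # Debian/Ubuntu
      #   grub-mkconfig -o /boot/grub/grub.cfg     # most other distros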
  4. I get an unbootable system when attempting to boot a 4.17-ck1-patched kernel. Coming out of grub2, I see only "Loading initial ramdisk ..." and then a freeze.

    Replies
    1. You could compile in the drivers for your hard disk controller, filesystem(s), USB, video, etc. and go without an initial ramdisk (sketch below).

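      As a rough illustration of the no-initramfs suggestion above: build the storage, filesystem and device-node support into the kernel instead of as modules, then boot without an initrd. A sketch using the kernel tree's scripts/config helper; the source path and the exact config symbols are examples and depend entirely on your hardware.

        cd /usr/src/linux-4.17-ck1                   # example path to the patched tree
        ./scripts/config --enable BLK_DEV_SD         # SCSI/SATA disk support built in
        ./scripts/config --enable SATA_AHCI          # typical SATA controller
        ./scripts/config --enable EXT4_FS            # root filesystem (adjust to yours)
        ./scripts/config --enable DEVTMPFS --enable DEVTMPFS_MOUNT
        make olddefconfig && make -j"$(nproc)"
        # then drop the initrd line from the bootloader entry and pass the root
        # device explicitly, e.g. root=/dev/sda2 or root=UUID=...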
    2. I've heard of people running into problems with kernels > 4.16.x while using xz kernel compression. Their problem went away after switching back to the old gzip (sketch below).

      Source:
      http://blog.fefe.de/?ts=a5bcaff9 (site is in German)

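      If you want to test the gzip-versus-xz theory above, the compression method is just a kernel config choice ("Kernel compression mode" in menuconfig). A minimal sketch using scripts/config, run from the top of the kernel tree:

        ./scripts/config --disable KERNEL_XZ
        ./scripts/config --enable KERNEL_GZIP
        make olddefconfig && make -j"$(nproc)"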
    3. I had a problem with >4.16 (I think because of the new Intel microcode) when running with anything above HZ_300.

    4. OK. What problem exactly?

  5. I suspect there may be a problem in the interaction between MuQSS and recent developments in BFQ (either SQ or MQ).

    - CFQ + CFS = Fine
    - BFQ + CFS = Fine
    - BFQ + MuQSS = Kernel panics, randomly.

    Haven't tried CFQ + MuQSS yet, though. Building the kernel on this box would be dreadful, and Liquorix has resorted to dropping all non-BFQ I/O schedulers.

    I can't be entirely certain but I think it may somehow be related to cgroups.

    - Kyber + MuQSS also seems fine so it's probably not the blk-mq infrastructure itself.

    Replies
    1. +1
      I have a similar situation, only I don't get a panic - the computer just hangs under heavy input/output activity.

    2. Actually, they probably are kernel panics. For the longest time I wasn't sure what it was, as the window manager and desktop environment would just completely die.

      But then, one time, I got lucky and it happened while I was in a virtual TTY (CTRL+ALT+Fx). So, I could finally see what was happening. Kernel panic in BFQ.

    3. Why don't you all post the call traces of those panics?

    4. Because 99% of them happen with the Window Manager and DE having focus/control. So, nothing to write down. Literally, all control I have left after these crashes... is physically pulling the plug.

      Literally, the PC is dead. Cutting power is all that works.

      And with it being a kernel panic, no log is actually written (for obvious reasons).

      So, it would have to be by hand, by definition. And the one time I did see the panic, well, I just didn't think about writing down every little bit of a console screen full of information.

      Shoot me, I'd never seen a kernel panic before. That's part of the reason why I like Linux so much: it almost never does this. Unlike Windows.

    5. Valid point - if only I had access to a second PC or anything else that could be properly configured as a receiver. Which I don't.

      Closest candidate would be the modem/router but it's a proprietary ISP box so, not enough low level access.

      Looks like no dice there.

      Besides, the panics are very random in nature. It can go for an entire day without it crashing but it can also crash back to back. So, no longer using the combination of MuQSS and BFQ.

      I wish I could help in pinpointing the issue more precisely but for now, I am forced to simply conclude that in my case it is not production ready and I do kind of need this machine to actually work.

      Again, it's a perfectly valid point you're making. Just not one that is available to me at this time. Sorry.

    6. You can use any remote box with a public IP and the ability to receive UDP packets on some port. Or you can ask someone to accept those packets for you. (Sketch below.)

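      Capturing panic output over the network as suggested above is usually done with netconsole. A sketch with made-up addresses: 192.168.1.10 is the crashing box, 192.168.1.20 the receiver, both reachable on eth0.

        # on the receiver: listen for the UDP log stream
        nc -u -l 6666        # or 'nc -u -l -p 6666', depending on the netcat flavour

        # on the crashing box: load netconsole pointed at the receiver
        modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.20/aa:bb:cc:dd:ee:ff
        # (the MAC address is the receiver's, or the gateway's if it sits on another subnet)

        # or put the same netconsole=... string on the kernel command line so
        # logging is active from early boot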
    7. Looking into options there. In the meanwhile, using BFQ + CFS for well over a week now without seeing a single issue.

      Definitely related to the specific combination of BFQ + MuQSS. Again, looking into available netconsole options.

      Kind of new to kernel panics. Never experienced this kind of behaviour from the kernel before.

    8. Still no real luck finding a good, reliable method of getting that panic information out there.

      But on an interesting note: I decided to give MuQSS + BFQ-SQ another go and, so far, it's been stable.

      The sole difference -- the MuQSS knobs. Previously I had kernel.interactive and kernel.yield_type set to 0.

      Now, with kernel.interactive 1, kernel.yield_type 2 and kernel.rr_interval 1 it is perfectly stable.

      Not too sure what to make of that but, with BFQ being the complex scheduler it is, there might be something there.

    9. OK, never mind... it crashed again. And I'm coming to the realization that it can't be MuQSS. It has never given me any trouble before.

      It has to be BFQ itself. It simply is not production ready.

      This is now across 2 different HDDs, so definitely not hardware. Has to be software and has to be BFQ. OK... maybe I'll need to start thinking about spinning my own kernel, to have MuQSS + CFQ.

    10. In my experience, once you get MuQSS to boot it is stable,
      although sometimes it will hang/crash at boot in a VirtualBox VM.

    11. MuQSS isn't the issue for me. MuQSS is fine and has been fine for me for quite some time now.

      I was assuming originally that it was somehow some form of interaction between MuQSS and BFQ but I'm getting more and more signals it simply is just BFQ itself.

      Currently running MuQSS + noop (because Liquorix removed all non-BFQ schedulers from the build, something I am not happy with) and it's fine.

      I did take a look at spinning my own kernel, just to get MuQSS + CFQ going, but with my hardware a kernel compile could take anywhere from 90 to 150 minutes. And if I then happen to make a mistake somewhere, I get to restart that entire process.

      No, I think I'll have to accept noop for some time. Until the BFQ people figure out how to actually get their IO scheduler running properly.

      I literally cannot believe it was mainlined in this state. It boggles the mind. Has to be a political decision, not a technological one. OK, that was off-topic, sorry.

  6. using mq-deadline, no problems...

  7. Don't know if this is the right place to post this, but is there an issue with CUDA working on this kernel right now? I have a few people experiencing the same problem on Arch Linux right now.

    Here's the forum post: https://bbs.archlinux.org/viewtopic.php?id=239174

    Replies
    1. Enabling NUMA in the kernel config seems to have fixed the issue above; a sketch follows below.

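      For anyone hitting the same CUDA problem, the fix described above is a one-line config change. A sketch, again using scripts/config from the top of the kernel tree:

        grep 'CONFIG_NUMA[ =]' .config       # shows whether NUMA is currently enabled or "is not set"
        ./scripts/config --enable NUMA
        make olddefconfig && make -j"$(nproc)"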
  8. Will this work on 4.18?

  9. Con, are you okay? You seem to have gone missing.

  10. Latest 4.17 kernels fail to build or patch cleanly.
    For 4.17.12 I deleted the cpufreq_schedutil.c part from the MuQSS patch (and excluded schedutil from the config).
    For 4.17.15 I added a missing #include to kernel/sched/MuQSS.c.

  11. Hi Con, thanks and please come back.

  13. kernel.yield_type = 2
    kernel.rr_interval = 1
    kernel.interactive = 1
    elevator = noop

    Silky smooth. Even on a rotational disk. Silky, silky smooth. MuQSS loves noop or noop loves MuQSS. Not sure which but it's amazing. (A sketch for setting the elevator follows below.)

    Replies
    1. Forgot to mention: rqshare = smp

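      A footnote on the elevator = noop setting above: the I/O scheduler can be switched per device at runtime or set globally at boot. A sketch, with /dev/sda standing in for whichever disk you actually use; on blk-mq queues the noop equivalent is called "none".

        cat /sys/block/sda/queue/scheduler            # lists available schedulers, current one in [brackets]
        echo noop > /sys/block/sda/queue/scheduler    # legacy block layer
        echo none > /sys/block/sda/queue/scheduler    # blk-mq equivalent

        # to make it the default for legacy-queue devices, boot with elevator=noop;
        # the kernel.* knobs above can go in /etc/sysctl.d/ as in the earlier sketch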