Tuesday, 31 August 2021

5.14 and the future of MuQSS and -ck once again

 Having missed the update for the 5.13 kernel entirely, I thought I'd just skip ahead to merge up with 5.14 and started looking at and working on it today. The size of the changes is depressingly large, and whilst they're mostly trivial changes and features I wouldn't implement in MuQSS, I'm once again left wondering whether I should bother maintaining this patch-set at all, as I've mentioned before on this blog.

 The size of my user-base seems to be diminishing with time, and I'm getting further and further out of touch with what's happening in the linux kernel space, with countless other things preoccupying me in my spare time.

 As much as I still prefer running my own kernel on my hardware, I'm having trouble motivating myself after the last 18 months of world madness due to Covid19, and I feel that I should, sadly, bring this patch-set to a graceful end. My first linux kernel patches stretch back 20 years, and with almost no passion left for working on it any more, I feel this may be long overdue.

 Unfortunately I also do not have faith that there is anyone I can reliably hand the code over to as a successor, as almost all the forks I've seen of my work have been prone to problems I've tried hard to avoid myself.

 There is always the possibility that the mainline linux kernel will become so bad that I'll be forced to create a new kernel of my own out of disgust, which is how I got here in the first place, but that looks very unlikely. Many of you will have seen this coming after my last blog-post about motivation, but unless I can find the motivation to work on it again, or something comes up that gives me a meaningful reason to work on it, I will sadly have to declare 5.12-ck the last of the MuQSS and -ck patches.

Final word: if you want to get the most out of the mainline kernel without trying to port MuQSS, then using at least the hrtimer patches from -ck and a 1000Hz tick should make a significant difference.
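
 For anyone building their own kernel, a minimal sketch of what that means in practice, assuming you start from an existing .config in a kernel source tree with the hrtimer patches applied on top:

    # select the 1000Hz tick and high resolution timers
    scripts/config -d HZ_100 -d HZ_250 -d HZ_300 -e HZ_1000 -e HIGH_RES_TIMERS
    make olddefconfig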

-ck

22 comments:

  1. MuQSS is used by the liquorix kernel. Maybe they can keep it up. Thank you for all the work you've done!

    Replies
    1. That's correct, but the responsiveness under higher workloads is far below that of CK's MuQSS. Anyway, it's still better than almost any other scheduler.

    2. First of all Con, thanks for all the work on MuQSS. I think it's still the best scheduler for most people that use their system interactively. And IMO, it's very easy to underestimate the impact of your scheduler, but _a lot_ of people use it today. Maybe not directly through downloads on your site or community participation on your blog, but at least through Liquorix, Zen Kernel, and other custom kernels.

      As for porting, I'll see how far I can take MuQSS on Liquorix / Zen Kernel. I'm no C expert or kernel developer, but I'd say I have about 6-9 more months of runway before I need serious help from someone who's a more competent maintainer than I am. But I'm not too worried, it looks like I'm not the only one porting and we've already received fixes in Zen Kernel from community contributions, like here: https://github.com/ckolivas/linux/pull/24

      Not to mention, improvements to MuQSS' RQ sharing and core selection are work in progress to better support Ryzen, so MuQSS may be able to somewhat withstand the changes in CPU topology as the years go by: https://github.com/ckolivas/linux/pull/24

      Maybe by the time you revisit kernel development, you'll find that we're all still here! The need for MuQSS won't go away, but you also don't need to stress over it. I think it's the community's turn to take a proper stab at maintaining your patch set, no one should need to work on code they no longer have interest in for the rest of their life.

      Here's to another decade of great CPU scheduling!

    3. Just learned about this project via Phoronix. I suspect there are many people that could benefit from MuQSS but aren't aware of its existence. Perhaps this is a worthwhile project with many folks moving to Linux as a result of the forthcoming Windows 11 BS.

      I'm a fan and would love to have this scheduler in the current kernel or via a custom kernel. And hrtimer?

      If folks aren't using your patches, they are missing out.

      Hope you reconsider, and if not, Cheers and Best of Luck.

  2. MuQSS user for ~6-7 years here. I personally feel so sad about this; MuQSS is pure magic and has been my go-to CPU scheduler for everything on all my personal machines, but I totally understand and support your point. All I have to say is: thank you for everything you did maintaining this patchset, and best wishes for all your other projects. You are a hero.

    Regards,
    Ed

  3. venenux uses the ck patches in some user-compiled kernels and in its first versions...

    I hope the liquorix guys can handle this excellent work.

  4. Con, thanks for your effort to make desktop systems responsive! Your user-base will definitely miss you.
    I can fully understand your decision, and I hope someone will pick up and keep in sync at least part of the patch set.
    Best wishes and good luck in your future endeavours!

    BR,
    Eduardo

  5. It is very sad to hear that, Con! I will be happy to help and support, and maybe port some of CacULE or the Interactivity Score factor to MuQSS if you like. However, I won't do it as a separate fork; I would like to work with you and help port MuQSS to future kernel versions under your instruction and direction (as manpower support).

    You inspired me to get into kernel CPU scheduler hacking. I have made a humble modification to CFS to enhance interactivity, called CacULE, which can never be compared to the excellence of MuQSS. I would be happy to help.


    Hamad Al Marri

    Replies
    1. So... only Alfred is left for an alternative scheduler then, I guess...

  6. Best of luck to you Con. Been using -ck patches for a long time now, and hopefully the hrtimer patches will live on for a while longer.

    https://github.com/xanmod/linux-patches/tree/master/linux-5.13.y-xanmod/ck-hrtimer

    Xanmod uses it in his (popular) kernel, and mixed with other schedulers like CacULE and BMQ, I think it works quite nicely.

    Replies
    1. And here's a patch to disable/reduce sched_yield
      (not sure about yield_type=1 but at least I haven't seen any side-effects with it)

      diff -u a/kernel/sched/core.c b/kernel/sched/core.c > cfs__add_yield_type_tunable
      diff -u a/kernel/sysctl.c b/kernel/sysctl.c >> cfs__add_yield_type_tunable

      --- a/kernel/sched/core.c
      +++ b/kernel/sched/core.c
      @@ -80,6 +80,7 @@
      */
      int sysctl_sched_rt_runtime = 950000;

      +int sched_yield_type __read_mostly = 0;

      /*
      * Serialization rules:
      @@ -6638,10 +6639,15 @@
      struct rq_flags rf;
      struct rq *rq;

      + if (!sched_yield_type)
      + return;
      +
      rq = this_rq_lock_irq(&rf);

      schedstat_inc(rq->yld_count);
      - current->sched_class->yield_task(rq);
      +
      + if (sched_yield_type > 1)
      + current->sched_class->yield_task(rq);

      preempt_disable();
      rq_unlock_irq(rq, &rf);
      --- a/kernel/sysctl.c
      +++ b/kernel/sysctl.c
      @@ -142,6 +142,8 @@
      static int six_hundred_forty_kb = 640 * 1024;
      #endif

      +extern int sched_yield_type;
      +
      /* this is needed for the proc_doulongvec_minmax of vm_dirty_bytes */
      static unsigned long dirty_bytes_min = 2 * PAGE_SIZE;

      @@ -453,6 +455,15 @@
      .mode = 0644,
      .proc_handler = sched_rr_handler,
      },
      + {
      + .procname = "yield_type",
      + .data = &sched_yield_type,
      + .maxlen = sizeof (int),
      + .mode = 0644,
      + .proc_handler = proc_dointvec,
      + .extra1 = SYSCTL_ZERO,
      + .extra2 = &two,
      + },
      #ifdef CONFIG_UCLAMP_TASK
      {
      .procname = "sched_util_clamp_min",

  7. I am very sorry to hear that; your version has been like magic, allowing me to completely max out my server CPU with kernel builds and encoding simultaneously, all while STILL gaming in a VFIO virtual machine as if nothing was happening. Without MuQSS I never would have even considered VFIO. The forks are sadly just not as good, as you mentioned. CacULE can't even handle a single video encode without causing heavy audio skipping and lag on a basic desktop.

    I am sad to see you go, but I wish you all the best. Hopefully someday you'll find your motivation again and can continue MuQSS.

  8. Another long-time -ck user here. I've been enlarging your user-base with MuQSS preaching since day one. It's not as small as you think! Thanks for all the fish.

  9. ;((( windows again??? https://web.archive.org/web/20110707151924/http://apcmag.com/why_i_quit_kernel_developer_con_kolivas.htm

    cowid19 = flu haha

  10. I'll keep using 5.12 until you decide to release the next version. I love your work; I don't know if I can live without it...

  11. My saddest day as a linux user. Regardless, it's totally understandable. Thank you for all your hard work over the years in providing us a much better desktop experience.

  12. The utility of MuQSS is underestimated. In my experience it's the only scheduler that can run under 100% load and not choke, something that is basically impossible to achieve on the mainline kernel. I think there could be a future for it, provided it comes to the attention of the right people.

    It will be sorely missed. I'll probably keep using it until it is no longer possible.

  13. Comrade Kolivas - many thanks for all your work! It's a very, very great pity...

  14. I recently set up a low-powered Acer Cloudbook as a little local media server. On the mainline kernel it was hanging constantly when trying to do anything substantial. I truly could not believe how much of a difference your patch-set made! It just chugged along with a maxed-out CPU and heavy IO; night and day.

    I'll keep that system on your kernel as long as it's feasible.

    This is sad news. Many thanks for your work over the years.

  15. Con: This is really hard to read, but that's life, and we should respect your decision. Thanks for all your effort and the time involved in this patch-set. I hope someone can continue your amazing work. Best of luck, and regards to you. You will be missed...

  16. Great respect and many thanks, Con, for all of the hard work! I used your patch-set over many years on my main VDR media server; rock-solid and super-smooth. I very much hope that MuQSS can survive somehow.

    All the best Con!!
