Announcing a new -ck release, 4.18-ck1, with the latest version of the Multiple Queue Skiplist Scheduler, version 0.173. These are patches designed to improve system responsiveness and interactivity with specific emphasis on the desktop, but configurable for any workload.
linux-4.18-ck1:
-ck1 patches:
Git tree:
MuQSS only:
Download:
Git tree:
Enjoy!
-ck
EDIT: It turns out it won't build with full dynticks enabled. I've committed a small change to the respective git trees for anyone who wants to configure it that way (I'd usually recommend against it.)
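If you're not sure whether your config has full dynticks on, it's just the NO_HZ choice; a quick sketch, assuming the stock NO_HZ_FULL / NO_HZ_IDLE symbols are what's at issue here:
# Check whether full dynticks (NO_HZ_FULL) is enabled in the current config
grep -E 'CONFIG_NO_HZ_(FULL|IDLE)' .config
# Switch back to idle dynticks and refresh dependent options
scripts/config --disable NO_HZ_FULL --enable NO_HZ_IDLE
make olddefconfig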
Builds and boots, and no issues so far.
Thanks for the update again.
Thanks so much.
0011-Make-hrtimer-granularity-and-minimum-hrtimeout-confi.patch fails on 4.18.5:
patch -p1 < /usr/src/4.18-ck1-patches/0011-Make-hrtimer-granularity-and-minimum-hrtimeout-confi.patch
patching file kernel/sysctl.c
Hunk #1 FAILED at 134.
Hunk #2 succeeded at 1036 with fuzz 1 (offset -46 lines).
1 out of 2 hunks FAILED -- saving rejects to file kernel/sysctl.c.rej
patching file kernel/time/clockevents.c
Hunk #1 FAILED at 198.
1 out of 1 hunk FAILED -- saving rejects to file kernel/time/clockevents.c.rej
patching file kernel/time/hrtimer.c
OK, works fine now. The error was on my side. Obviously I am too stupid to patch correctly.
So, what was the error you made? Running into failed hunks as well when trying to apply just MuQSS to 4.18.5.
What was your error? Maybe it'll be mine as well.
I think I had some typo in the first patch so it didn't apply, so the subsequent ones failed, of course.
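For what it's worth, applying the split-out patches in order with a dry run first makes that failure mode obvious straight away; a rough sketch against a clean kernel.org tree, using the same patch directory as above:
cd linux-4.18
# Apply the numbered -ck1 patches in order, stopping at the first one that fails
for p in /usr/src/4.18-ck1-patches/*.patch; do
    echo "Applying $p"
    patch -p1 --dry-run < "$p" && patch -p1 < "$p" || { echo "FAILED: $p"; break; }
done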
Nope, getting an actual unresolvable issue in schedutil.c here. Code that cannot even be found. Odd.
Looks good here: https://pastebin.com/mn3G4qP5
The error was mine: I tried applying it to the Ubuntu kernel sources, which apparently does not work flawlessly. So I based them off the kernel.org sources instead.
Working fine now. Toying around with my own spun kernels now. Never done that before.
Happy to finally run MuQSS + CFQ, as I don't trust BFQ one bit in its current state.
Sounds wise. BFQ hasn't been impressing me much of late.
Yeah, Ubuntu kernels are heavily patched. I use mq-deadline; the system seems to be a tad more responsive using it, although I did no disk benchmark comparisons.
None of the mq schedulers are viable for me; I'm on a rotational disk and blk-mq (the backend of the mq infrastructure) stalls during boot, for me anyhow.
Only sq will simply continue booting without stalling. Not sure why that is.
If I had been on an SSD, I think I personally would've just opted for noop instead, since SSDs are so fast that you basically want the simplest possible IO scheduler.
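For anyone following along, the scheduler can be checked and switched per disk at runtime through sysfs; sda here is just a placeholder, and on blk-mq the names are none/mq-deadline/bfq rather than noop/deadline/cfq:
# Show the available schedulers for the disk; the active one is in brackets
cat /sys/block/sda/queue/scheduler
# Switch at runtime (needs root)
echo noop | sudo tee /sys/block/sda/queue/scheduler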
MuQSS + hrtimeout patches (from -ck) on top of the latest Ubuntu source (linux_4.18.0-7.8) works for me.
A needed buildfix:
CK patchset broke force_irqthreads export
Fix that up.
Signed-off-by: Thomas Backlund
--- linux-4.18/kernel/irq/manage.c.orig
+++ linux-4.18/kernel/irq/manage.c
@@ -27,8 +27,8 @@
 __read_mostly bool force_irqthreads = true;
 #else
 __read_mostly bool force_irqthreads;
-EXPORT_SYMBOL_GPL(force_irqthreads);
 #endif
+EXPORT_SYMBOL_GPL(force_irqthreads);
 static int __init setup_noforced_irqthreads(char *arg)
 {
 	force_irqthreads = false;
Still applying perfectly to the recently released 4.18.6.
And 4.18.7
Con, would you ever consider adding Kconfig options for the (default) values of yield_type, rr_interval, interactive to the project?
I've been running a specific combination of those, and the experience is so smooth as well as performant that I'm considering sharing it with the community (via PPA/AUR or some such). Obviously I could just run with my own patch set for it, but for consistency I think having Kconfig options for them might be better anyhow.
Mind you, it's just a question. Don't feel compelled in any way.
I played with that a while ago, but it hardly seems worth it since they're runtime configurable and can just be added to sysctl.conf.
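Something along these lines is all it takes on a running kernel (the values here are only illustrative):
# Tweak the MuQSS tunables at runtime
sysctl -w kernel.rr_interval=6
sysctl -w kernel.interactive=1
sysctl -w kernel.yield_type=1
# ...or put the same lines in /etc/sysctl.conf to make them persistent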
True, but you know how it is, people like fire-and-forget things. Anyhow, good to know the thought had also crossed your mind.
I'll see what I can do to facilitate things on my end.
Thanks for getting back to me so promptly. It's appreciated.
What values are you using? It's always interesting to test for smoothness and performance :)
I am using a very specific set of settings for MuQSS as well as for Kconfig.
To be precise, I took the kernel source clean off kernel.org, then applied the Ubuntu low-latency configuration (the most noteworthy settings there would be hard preempt and threaded irqs), set the timer frequency to 100 Hz and compiled that.
Then, in /etc/sysctl.conf, I have:
net.core.default_qdisc = fq_codel
net.ipv4.tcp_congestion_control = bbr
kernel.interactive = 1
kernel.rr_interval = 1
kernel.yield_type = 2
For the IO scheduler I use cfq, since bfq is not production-ready and the other rotational schedulers (deadline, noop) simply do not perform well enough under extreme workloads (such as kernel compilation).
The governor is set to schedutil. If I had an Intel CPU, I would also have disabled intel_pstate; but, fortunately, I do not.
And the boot command line has rqshare set to smp, since I found for most workloads I use this machine for (kernel compilation, gaming, some coding and general Internet use) that having multiple queues simply performs worse, and rqshare=smp forces a single queue no matter the hardware configuration.
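Concretely, that just means adding it to the kernel command line; on this Ubuntu setup it's a one-liner in the GRUB defaults (quiet splash are just the stock options here):
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rqshare=smp"
# then regenerate the config and reboot
sudo update-grub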
Note that the CPU I am using is a quad-core part without HT.
Anyhow, that's my configuration in a nutshell.
Out of all the kernels available to me on this Ubuntu MATE installation (Ubuntu's generic kernel, Ubuntu's low-latency kernel, Con's complete -ck set and Liquorix), I found this personal configuration the most responsive as well as the most performant.
Forgot to mention that I obviously also first applied the MuQSS patch to the kernel.org sources.
And I compiled it with GCC 8.2, since the few benchmarks I did find comparing GCC 7 to 8 do show that 8 generates faster binaries.
Further development: I left 100 Hz by the wayside. It caused various issues (including a very annoying sound-related issue in WINE) that were solved instantly on 1000 Hz.
So, same configuration but 1000 Hz instead of 100 Hz.
100 Hz is only useful when used in conjunction with the high-res timer patches in -ck.
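(Switching between the two is just the HZ choice in the config, roughly:)
# Pick 100Hz instead of 1000Hz in an existing .config, then fix up dependencies
scripts/config --disable HZ_1000 --enable HZ_100
make olddefconfig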
This patch seems to apply with no errors against 4.18 as well as 4.18.12. However, going with the default of allowing CPUs on the same NUMA node to use the same scheduler queue, the system froze on boot. The last line (it crashed before anything went to disk) was something about CPU 10 being assigned to the same runqueue scheduler as CPU 0, which would make sense as they are on the same NUMA node according to 'lscpu':
ReplyDeleteNUMA node0 CPU(s): 0,2,4,6,8,10
NUMA node1 CPU(s): 1,3,5,7,9,11
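I'm guessing (untested on my part) that I could work around it by overriding the runqueue sharing level on the boot command line, since MuQSS takes it as a boot parameter, something like:
# Kernel command line additions to try instead of the default sharing level
rqshare=smt    # share runqueues only between SMT siblings
rqshare=none   # one runqueue per CPU, no sharing at all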
But I wanted to know: how does the muqss-patch differ from the 4.18 patch?
I think I only grabbed the muqss-patch.
Is it the ck patch that has SCHED_IDLEPRIO + SCHED_ISO?
thanks!
---
Here is the 4.19 synced ck patch.
I just got it to compile and it seems to be working OK, but no guarantee.
https://jki.io/ck-4.19.patch.bz2
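(If anyone wants to try it, applying is just the usual routine; given the no-guarantee caveat, a dry run first is wise. The local path below is simply wherever you downloaded it to:)
# Test-apply against a clean 4.19 tree, then apply for real
cd linux-4.19
bzcat /path/to/ck-4.19.patch.bz2 | patch -p1 --dry-run
bzcat /path/to/ck-4.19.patch.bz2 | patch -p1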
Con, are you ok? Did you quit?
Fine. No, just distracted.
OK, alright. Thanks.
https://www.youtube.com/watch?v=8WQVb_nuKvs
lol