Announcing
a new -ck release, 4.16-ck1 with the latest version of the Multiple
Queue Skiplist Scheduler, version 0.171. These are patches designed to
improve system responsiveness and interactivity with specific emphasis
on the desktop, but configurable for any workload.
linux-4.16-ck1:
-ck1 patches:
Git tree:
MuQSS only:
Download:
Git tree:
Web: http://kernel.kolivas.org
This is mostly just a resync with 4.15 MuQSS and -ck patches. The only significant difference is that the default config for threaded IRQs is now set to disabled as this seems to be associated with boot failures when used in concert with runqueue sharing. I still include the patch in -ck that stops build warnings from making the kernel build fail, and I've added a single patch to aid building an evil out-of-kernel driver that many of us use.
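Since the threaded-IRQ default change is the one behavioural difference this release, it can be useful to know which way a running kernel is actually configured. A minimal sketch, assuming the mainline `threadirqs` boot parameter is the runtime mechanism in question (the -ck config option itself is a build-time default and is not probed here):

```shell
#!/bin/sh
# Report whether this kernel was booted with forced IRQ threading
# (the mainline "threadirqs" kernel command-line parameter).
if grep -qw threadirqs /proc/cmdline 2>/dev/null; then
    irq_mode="forced-on"
else
    irq_mode="default"
fi
echo "threaded IRQs: $irq_mode"
```

If you hit boot failures with runqueue sharing, checking this first can save a recompile.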
Enjoy!
お楽しみ下さい (Enjoy!)
-ck
Thanks for the resync.
Thanks. Much appreciated.
Great. Much appreciated, man.
:-)
Running great. Thanks very much.
Hello, great work. I've used -ck for a while but have missed it recently on Artix since the maintainer gave up on it.
So I compiled my own and tried to also use repo-ck, which drops my connection at about 2%, which is better than doing it at 99%.
Here is an article I put up for -ck http://sysdfree.wordpress.com/204
Thanks Con.
I did some throughput benchmarks.
https://docs.google.com/spreadsheets/d/163U3H-gnVeGopMrHiJLeEY1b7XlvND2yoceKbOvQRm4/edit?usp=sharing
Still the same consistent performance for MuQSS.
I'm willing to try benchmarking latencies again.
I've found this article :
https://lwn.net/Articles/725238/
Do you think any of those tools could be used?
Pedro
Maybe it would be appropriate in this case to use https://github.com/ckolivas/interbench ?
I already ran interbench with Linux 4.15. The results are rather difficult to understand.
Judging by the numbers, PDS seems to be the best, but some users noted slowdowns in the UI while using it. So there is more to it than that.
I'd like to find other tools to compare latencies.
Pedro
rt-tests, cyclictest.
Well, I've tried that one too with Linux 4.10 and MuQSS 152, and also bcc runqlat.
MuQSS latencies were higher than CFS's. Con commented that you can't directly compare CFS and MuQSS with this tool as it doesn't use the same functions.
Pedro
Any tools that hook into function calls in the kernel are simply not going to work as the function names and purposes are different in muqss. As for interbench results, it is a fairness test as well as a latency test so looking for just the lowest latency as some kind of perfect endpoint will give you the wrong conclusion.
Thanks for answering.
So could these tools be used for regression testing between MuQSS releases?
I don't know if the internals change that much between MuQSS and mainline releases.
Pedro
Most of the time, yes, though there is variance in results too, so repeating the tests is especially important if something suddenly looks much better or much worse.
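For anyone repeating those latency comparisons, the core idea behind cyclictest is simply to request a short timed sleep and measure how late the wakeup actually arrives. A crude shell sketch of that idea follows; it is in no way a substitute for the real rt-tests tool, which uses RT priorities and nanosecond-resolution clocks:

```shell
#!/bin/sh
# Crude wakeup-latency probe: ask for a 1 ms sleep and report how
# much extra time each wakeup took, in microseconds. Requires GNU
# date for the %N (nanoseconds) format.
for i in $(seq 1 10); do
    t0=$(date +%s%N)
    sleep 0.001
    t1=$(date +%s%N)
    # Elapsed time minus the requested 1000 us; includes process
    # spawn overhead, so treat it as an upper bound.
    overshoot_us=$(( (t1 - t0) / 1000 - 1000 ))
    echo "wakeup overshoot: ${overshoot_us} us"
done
```

The real measurement would be something like `cyclictest --smp -p 80 -i 1000 -l 100000 -m` from the rt-tests package, with the caveats Con mentions above about comparing schedulers with instrumentation that assumes mainline internals.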
Hi Con,
Compile error on 32-bit Pentium 4:
CC kernel/sched/MuQSS.o
In file included from kernel/sched/MuQSS.c:73:0:
kernel/sched/MuQSS.h:739:46: warning: ‘struct sched_domain’ declared inside parameter list will not be visible outside of this definition or declaration
unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
^~~~~~~~~~~~
kernel/sched/MuQSS.h: In function ‘arch_scale_cpu_capacity’:
kernel/sched/MuQSS.h:741:15: error: dereferencing pointer to incomplete type ‘struct sched_domain’
if (sd && (sd->flags & SD_SHARE_CPUCAPACITY) && (sd->span_weight > 1))
^~
kernel/sched/MuQSS.h:741:25: error: ‘SD_SHARE_CPUCAPACITY’ undeclared (first use in this function)
if (sd && (sd->flags & SD_SHARE_CPUCAPACITY) && (sd->span_weight > 1))
^~~~~~~~~~~~~~~~~~~~
kernel/sched/MuQSS.h:741:25: note: each undeclared identifier is reported only once for each function it appears in
The kernel is 4.16.8, with only your patch added. But the same happens with the Zen kernel.
The old 4.15 MuQSS was working fine.
Thanks.
Regards sysitos
Use maxcpus= kernel parameter, where x is # of real cores.
HT sucks anyway.
fail^ maxcpus=x
why are you giving bad advice?
bad advice?
the real number of cores is at most half the number of cores with HT, so the system would be almost twice as slow.
https://en.wikipedia.org/wiki/Hyper-threading
"For each processor core that is physically present, the operating system addresses two virtual (logical) cores and shares the workload between them when possible."
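The logical-versus-physical distinction being argued here can be checked directly from /proc/cpuinfo. A sketch (the topology field names assume an x86 /proc/cpuinfo layout; a fallback covers systems that omit them):

```shell
#!/bin/sh
# Count logical CPUs (HT siblings included) vs. physical cores.
logical=$(grep -c '^processor' /proc/cpuinfo)
# Physical cores = unique (physical id, core id) pairs.
physical=$(awk -F': ' '/^physical id/{p=$2} /^core id/{print p "-" $2}' \
    /proc/cpuinfo | sort -u | wc -l)
# Fall back when /proc/cpuinfo lacks topology fields (some VMs/arches).
[ "$physical" -gt 0 ] || physical=$logical
echo "logical=$logical physical=$physical"
```

Booting with `maxcpus=` set to the physical count brings up only that many CPUs; whether that helps or roughly halves your throughput is exactly the disagreement above.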
yes.
I was referring to "the system will be almost twice as slow." though.
Ever since this version of MuQSS I have been having complete lockups; initially I figured it was WINE, since it seemed to only occur when using that.
But just now it also happened after having completely removed WINE. Another theory was Chromium, but it also occurred at least once without a single instance of Chromium running.
Having discounted all other variables over time by now, all I can say is that the sole constant is this new version of MuQSS.
This is on an AMD A6-3650.
Was running rqshare=mc, going to try rqshare=none now to see if that resolves the issue. If not, I am going to try to see what happens if I run the stock kernel (so, no MuQSS at all).
I'd hate for it to truly be related to MuQSS though, it is such an amazing scheduler.
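As an aside for anyone trying the same experiment: it is worth confirming which runqueue-sharing mode was actually requested at boot. A sketch, assuming the MuQSS `rqshare=` kernel command-line parameter (values like `none` and `mc` as used above):

```shell
#!/bin/sh
# Show the rqshare mode requested on the kernel command line, if any.
mode=$(grep -o 'rqshare=[a-z]*' /proc/cmdline 2>/dev/null || true)
msg="${mode:-rqshare not set (built-in default applies)}"
echo "$msg"
```

A typo in the bootloader entry silently falls back to the compiled-in default, so a negative result from an `rqshare=none` test is only meaningful if the parameter really took effect.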
rqshare=none did nothing to resolve the issue; still experiencing complete deadlocks. And no mention of any warning or error or panic in any of the logs in /var/log. Not even in kern.log, which should be the one that is always being updated.
This implies the kernel is really deadlocking hard.
Trying a completely stock kernel and configuration now.
Good news: MuQSS is not the cause. Stock kernel and configuration also displayed this behaviour.
Bad news: Something is really wrong. Going to have to look elsewhere. I'm not inclined to point to a hardware issue though, as it mostly seems to occur when context switching between tasks, and putting memory pressure on the box by having an array of heavy applications open is no guarantee of triggering it. So, probably not memory. And if it had been the CPU, I'd have expected it to fail to boot. I'd be expecting kernel panics if it had been hardware.
Anyhow, nothing to see here, sorry to have cried wolf over... well, something that is unrelated to MuQSS.
If you have no swap and disabled the OOM killer, maybe you're running out of memory?
Just for reference purposes: the issue, as it turned out, is related to BFQ (the I/O scheduler).
100% confirmed.
It is simply not production ready, and I'm not sure it ever will be. It just keeps getting worse, actually. A year ago it performed just as well WITHOUT crashing.
@Enih -- As per Con's own entry for this version of MuQSS, threaded IRQs were defaulted to off. And besides, this problem never occurred before. I'd have expected a problem to occur long, long ago if that were the cause. The PC is years old and I've been using Linux on it for years as well. This problem never occurred until quite recently.
@Anonymous -- Running out of memory would not cause the entire system to completely seize up. The Linux kernel is a bit more sophisticated than that; at one point or another it will start aggressively killing applications just to keep running. No joke, it will.
Anyhow, traced the issue -- It's hardware. Specifically, it seemed to have been an overheating issue. It has been rather warm lately and apparently the PC just needed a good cleaning. Been running very stable for hours now after I gave it a quick once-over.
Just in case people were curious, here's another update:
It is not hardware at all... not even remotely. It's a bug in BFQ. I finally saw the kernel panic floating by (no log was ever recorded for it, though; I just happened to be in one of the virtual TTYs when it happened, so no desktop manager was in the way of me seeing the kernel panic).
"kernel panic in blahblahblah/bfq_sq/iosched.c" or words to that effect.
End of the line for Liquorix for me. End of the line for BFQ for me. Apparently it is still in too rough of a shape to be relied upon.
Back to stock. So, no more MuQSS either. Sadly. Because I really fell in love with it. It is lightyears ahead of CFS.
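For anyone wanting to rule BFQ in or out before going through the same debugging saga, the active I/O scheduler per block device is visible in sysfs; the name in brackets is the one currently in use. A sketch:

```shell
#!/bin/sh
# List the active I/O scheduler for each block device; the entry
# in brackets is current, e.g. "[bfq] mq-deadline none".
count=0
for f in /sys/block/*/queue/scheduler; do
    [ -r "$f" ] || continue
    dev=${f#/sys/block/}; dev=${dev%/queue/scheduler}
    printf '%s: ' "$dev"
    cat "$f"
    count=$((count + 1))
done
echo "block devices found: $count"
```

Switching a device away from BFQ at runtime is then e.g. `echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler` (sda here is just an example device), which lets you test the MuQSS-plus-different-I/O-scheduler combination without abandoning -ck entirely.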
CK, I have a static IP and domain with 5 megabit upload. I'd like to offer to mirror your material for free (I'm not hosting anything else). Are you interested? If so, send an SMS to 0438 470 680
ReplyDelete4.16.16-ck1 panics or hangs at "Sharing SMP runqueue from CPU 3 to CPU0" in virtualbox 5.2.12 vm.
ReplyDeleteMight boot after like 5 tries or more.
config https://pastebin.com/raw/dqKm9MvW
Hi Con,
First of all, thank you very much.
Linux has been way more fun for me because of your work.
Since 4.16 is EOL now, are there 4.17 patches in the making?
Best regards,
Anonymous
4.16.17 & 4.16.18 didn't boot here. Early crash within 0.3s. 4.16.16 working fine though.
ReplyDeleteWow really good information ! keep up the great work!
ReplyDeletePrediksi Bola
Bandar Datar Judi Togel
Situs Judi Roulette Terpercaya
Agen Casino SBOBET
Hey, I'm using a Linux server (CPU E3-1231 v3 @ 3.40GHz, 4 cores / 8 threads) to host a private game server, and my question is whether this kernel patch may help my server get better latency in gameplay?
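Not speaking for Con, but a first step is confirming you are actually running MuQSS and seeing its latency-related knobs. `rr_interval` is the sysctl documented for BFS/MuQSS, and `interactive` is a MuQSS addition; a sketch (file names assume a MuQSS kernel, and on mainline CFS they simply will not exist):

```shell
#!/bin/sh
# Probe for MuQSS-specific scheduler tunables; on a mainline (CFS)
# kernel these /proc files do not exist.
checked=0
for t in rr_interval interactive; do
    f="/proc/sys/kernel/$t"
    if [ -r "$f" ]; then
        echo "$t=$(cat "$f")"
    else
        echo "$t: not present (probably not a MuQSS kernel)"
    fi
    checked=$((checked + 1))
done
```

If the files exist, lowering `rr_interval` is the usual latency-over-throughput knob; whether MuQSS helps a specific game server is best settled by measuring, as discussed earlier in the thread.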