This is mainly a bugfix release for those who had boot failures, failures with the TOI (TuxOnIce) patches applied, and warnings. Otherwise it has only minor changes.
http://ck.kolivas.org/patches/4.0/4.8/4.8-ck3/
MuQSS version 0.115 by itself:
4.8-sched-MuQSS_115.patch
Git tree includes branches for MuQSS and -ck:
https://github.com/ckolivas/linux
EDIT: There is a regression in this release as well; you need to either grab the latest 4.8-ck git tree or add the two patches here:
http://ck.kolivas.org/patches/muqss/4.0/4.8/Pending/
Sorry; once enough other problems are fixed I'll release another version pretty soon too.
Enjoy!
-ck
Thanks for the fast update. I had issues with booting; this seemed to do the trick. Thanks again.
You're welcome; I can't stand leaving a known show-stopper bug out there. Thanks for reporting back.
Still good on x64 and i686 noSMP... have a good weekend Con!
Thanks for your work.
However, CPU frequency scaling seems to be broken again on Linux 4.8.3 + MuQSS 115 with intel_pstate, and with acpi-cpufreq to a lesser extent. I didn't test MuQSS 114 so I don't know when this regression appeared.
I don't know what to think about intel-pstate. It's supposed to be better than acpi-cpufreq but always seems to be broken...
Some quick numbers:
MuQSS112+intel-pstate powersave (linux 4.8.1)
make -j2 317.21
bz2 62.11
xz 104.78
MuQSS115+intel-pstate powersave (linux 4.8.3)
make -j2 742.47
bz2 142.34
xz 171.23
MuQSS115+intel-pstate performance (linux 4.8.3)
make -j2 339.70
bz2 57.62
xz 106.19
MuQSS115+acpi-cpufreq ondemand (linux 4.8.3)
make -j2 407.76
bz2 77.99
xz 105.86
Pedro
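When comparing numbers like these it helps to record which cpufreq driver and governor were actually active during each run. A minimal sketch (my own, not from the thread) that reads the standard cpufreq sysfs paths and falls back to a message when they are absent, e.g. inside a VM:

```shell
#!/bin/sh
# Print the active cpufreq driver and governor for each CPU, so benchmark
# results can be labelled unambiguously. Uses only the standard sysfs layout.
out=""
for d in /sys/devices/system/cpu/cpu[0-9]*/cpufreq; do
    [ -d "$d" ] || continue
    cpu=${d%/cpufreq}; cpu=${cpu##*/}
    out="$out$cpu: driver=$(cat "$d/scaling_driver" 2>/dev/null), governor=$(cat "$d/scaling_governor" 2>/dev/null)
"
done
# Fall back gracefully when no cpufreq policy exists (VM, unsupported HW).
[ -n "$out" ] || out="no cpufreq sysfs on this machine"
printf '%s\n' "$out"
```

Writing a governor name into `scaling_governor` (as root) switches governors without rebooting, which makes A/B runs like the ones above quicker to set up.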
Thanks Pedro. My guess is commit b2a4f8a05b6fdcec2742661ec24829a73bb0bd1c.
Thanks for the fast reply, but I won't be able to test this until tomorrow.
Pedro
With the two patches in Pending/ (0001 & 0002), which revert the commit you mentioned among other things, the regression is still there.
Pedro
And the regression is also there on MuQSS 114.
MuQSS114+intel-pstate powersave (linux 4.8.4)
bz2 143.88
xz 182.15
Do other users see this with intel_pstate? Or is intel_pstate + Ivy Bridge a bad combo?
@ck:
I'm also seeing strange behaviour, looking at the two gkrellm charts for each cpu core:
1. With two worldcommunitygrid clients running SCHED_IDLE, they only fill up to 50% of each CPU's load (on v0.114: ~99% each).
2. Adding a kernel "make -j2" leads to spiky charts for each core, only rarely going up to 99% (on v0.114: continuous ~98% each).
3. Removing the wcg clients then results in the kernel make filling up to only 33% on each core (on v0.114: continuous ~98% each); re-adding the wcg clients behaves like point 2 again.
Kernel compile time also increased, by approx. 10%. Hope this helps a bit. Let me know if you need more info.
BR, Manuel Krause
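For anyone wanting to reproduce the SCHED_IDLE part of this setup, util-linux's chrt can start a task under that policy and confirm it. A small sketch of mine (the `sleep` stands in for a wcg client; `chrt -i 0` requests SCHED_IDLE, which needs no special privileges):

```shell
#!/bin/sh
# Start a stand-in workload under SCHED_IDLE and query its policy back.
# Skips cleanly if util-linux's chrt is not installed.
if command -v chrt >/dev/null 2>&1; then
    chrt -i 0 sleep 5 &                 # -i 0 = SCHED_IDLE, priority 0
    pid=$!
    # chrt -p prints e.g. "pid N's current scheduling policy: SCHED_IDLE"
    policy=$(chrt -p "$pid" 2>/dev/null || echo "chrt query failed")
    kill "$pid" 2>/dev/null
else
    policy="chrt not available on this system"
fi
echo "$policy"
```

Comparing per-core charts with and without such SCHED_IDLE tasks running is exactly the kind of test that shows the 50%/33% capping described above.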
Feel free to git bisect to find when that happened. I rushed this release to fix the boot failures and had a few extra patches in there that were only lightly tested.
Only reverting the commit mentioned in Pedro's thread above, as a first attempt, doesn't change things for me.
Going to bisect more seriously now. BR, MK*
@ck:
I've done the bisect "manually" by fetching the commits as patches, but kept the series in order when reverting.
First known bad is the first commit after the v0.114 release: 6b45f1f3... "Remove the last remnants of the global runqueue...".
I was already thinking about reverting to 4.7.9/4.7.10 when seeing all these failing steps; not needed.
Btw., the full v0.115 seemed to have cured my TOI hibernation issue, but by now I have reverted every commit in the series that was supposed to make it work. :-(
Thank you in advance anyways,
BR, Manuel Krause
@ck:
O.k., I've quickly gone one step further and re-added the patches that are supposed to cure my system's issue (or are at least harmless for it) and are unlikely to depend on 6b45f1f3...
Re-applied on top of quasi-v0.114:
muqss114-0002.04bbfdc3164a52ae35bd0b7572a67689de2a8a62
muqss114-0003.98bd0d525be19905bf76e0555936ab92713dec78
muqss114-0005.7f9672079f5db72ba078f0e7cd1772b795ac0312
muqss114-0007.9f76ed8a80aa32d5feba3eab36a248bfc34d084f
So far, I've only collected 1/1 successful TOI hibernations, but testing continues.
BR, Manuel Krause
Thanks for the quick testing Manuel. That makes sense as there was something unnecessary done in that patch. I'll add a fix into git very shortly.
I'm now at 5/5 successful TOI hibernations, done with different memory/CPU loads, with the above-mentioned setup. Overall behaviour is fine.
It can --and should-- stay that way ;-)
BR, Manuel Krause
Btw., I have no HT on my two cores. MK*
Oh wow, so there's something else wrong... I'll look further.
For the time being I've reverted that patch and all dependent code, so the latest git should be equivalent to what you have now, Manuel.
Thank you for covering this anyway, and: no need to rush!
I see that the main advantages of v0.115 do work well, and I don't get error messages in dmesg.
BR, Manuel Krause
@ck: Mmmh, and how should I proceed? BR, MK*
Either grab the latest git for whatever branch you're using, or add the 2 pending patches in the Pending/ directory for 4.7/4.8 MuQSS.
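For the second option, the Pending/ patches are ordinary `-p1` diffs applied from the top of the kernel tree. A self-contained demo of the mechanism (the file and patch names here are invented; substitute the real 0001/0002 patches from Pending/):

```shell
#!/bin/sh
# Minimal demo of applying a -p1 patch the way the Pending/ patches are
# applied on top of a kernel tree. Everything here is a made-up stand-in.
set -e
mkdir -p demo-tree
printf 'old line\n' > demo-tree/kernel.c
cat > 0001-demo.patch <<'EOF'
--- a/kernel.c
+++ b/kernel.c
@@ -1 +1 @@
-old line
+new line
EOF
cd demo-tree
patch -p1 < ../0001-demo.patch       # -p1 strips the leading a/ and b/
cd ..
grep 'new line' demo-tree/kernel.c   # confirms the hunk applied
```

Applying 0001 and then 0002 in numeric order from inside the kernel source directory matches how the quilt-style series is meant to be used.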
It was already clear to me what I'd do. ^^
BR, Manuel Krause (MK* for short).
@ck:
O.k., everything is fine with those two new commits regarding the CPU core distribution (freshly rebuilt on top of v0.115 at kernel 4.7.10).
Regarding TOI, I want to do at least 9 more hibernations; only 1/1 tested atm.
BR, Manuel Krause
@ck:
Also after compiling in your third commit (f2374e15) on top of v0.115, everything is well here.
Thank you for your quick correspondence and fixes last night (in my timezone)!
Now you "only" need to solve Pedro's reported regression. :-)
Good luck and my best wishes,
BR, Manuel Krause
Hi!
New set of "sorta gaming" workloads, this time with mainline, Ubuntu and Liquorix thrown in for good measure; please see the page "Perf. (DRI3), SET 3":
https://docs.google.com/spreadsheets/d/1EayezAsGlJdXjZbS3b9m7YtvtRF-DJ3xrT3hYCvfymQ/edit?usp=sharing
Br, Eduardo
Added a fix for SMT to git for the last-minute regression, and put a link directly to the patch in the top post. This won't have any effect without hyperthreading.