
Monday, 24 October 2016

Interbench benchmarks for MuQSS 116

As mentioned in my previous post, I recently upgraded interbench, a benchmark application I invented/wrote to assess perceptible latency under various loads. The updates make the results meaningful on today's larger-RAM, multicore machines by scaling the load accordingly.

The results for mainline 4.8.4 and 4.8.4-ck4 on a multithreaded hexcore (init 1) can be found here:
 http://ck.kolivas.org/patches/muqss/Benchmarks/20161024/
and are copied below. I do not have swap on this machine so the "memload" test was not performed. This is a 3.6GHz hexcore with 64GB RAM and fast Intel SSDs, so showing any difference at all on such hardware is pleasing. To make the comparison easier, I've highlighted the results in colours similar to the throughput benchmarks I posted previously: blue means within 1% of each other, red means significantly worse and green significantly better.
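For what it's worth, the colour rule above can be written down as a tiny helper. This function and its threshold handling are my own sketch, not part of interbench; "better" here means lower latency:

```python
# Classify a result against a baseline per the colour scheme described
# above: within 1% = "blue" (noise), significantly worse = "red",
# significantly better = "green". Lower numbers are better.
def colour(baseline, result, noise_pct=1.0):
    if baseline == 0 and result == 0:
        return "blue"
    ref = max(abs(baseline), abs(result))
    diff_pct = abs(result - baseline) / ref * 100
    if diff_pct <= noise_pct:
        return "blue"
    return "green" if result < baseline else "red"

# e.g. max latency under Burn for the X benchmark below:
# 334.0 ms (cfs) vs 142.0 ms (muqss)
print(colour(334.0, 142.0))  # green
```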


Load set to 12 processors

Using 4008580 loops per ms, running every load for 30 seconds
Benchmarking kernel 4.8.4 at datestamp 201610242116
Comment: cfs

--- Benchmarking simulated cpu of Audio in the presence of simulated ---
Load Latency +/- SD (ms)  Max Latency   % Desired CPU  % Deadlines Met
None       0.1 +/- 0.1        0.1           100         100
Video      0.0 +/- 0.0        0.1           100         100
X          0.1 +/- 0.1        0.1           100         100
Burn       0.0 +/- 0.0        0.0           100         100
Write      0.1 +/- 0.1        0.1           100         100
Read       0.1 +/- 0.1        0.1           100         100
Ring       0.0 +/- 0.0        0.1           100         100
Compile    0.0 +/- 0.0        0.0           100         100

--- Benchmarking simulated cpu of Video in the presence of simulated ---
Load Latency +/- SD (ms)  Max Latency   % Desired CPU  % Deadlines Met
None       0.1 +/- 0.1        0.1           100         100
X          0.1 +/- 0.1        0.1           100         100
Burn      17.4 +/- 19.5      46.3            87        7.62
Write      0.1 +/- 0.1        0.1           100         100
Read       0.1 +/- 0.1        0.1           100         100
Ring       0.0 +/- 0.0        0.0           100         100
Compile   17.4 +/- 19.1      45.9          89.5        6.07

--- Benchmarking simulated cpu of X in the presence of simulated ---
Load Latency +/- SD (ms)  Max Latency   % Desired CPU  % Deadlines Met
None       0.0 +/- 0.1        1.0           100        99.3
Video     13.4 +/- 25.8      68.0          36.2        27.3
Burn      94.4 +/- 127.0    334.0          12.9        4.37
Write      0.1 +/- 0.4        4.0          97.4        96.4
Read       0.1 +/- 0.7        4.0          96.2        93.8
Ring       0.5 +/- 1.9        9.0          89.3        84.9
Compile   93.3 +/- 127.7    333.0          12.2         4.2

--- Benchmarking simulated cpu of Gaming in the presence of simulated ---
Load Latency +/- SD (ms)  Max Latency   % Desired CPU
None       0.0 +/- 0.2        2.2           100
Video      7.9 +/- 21.4      69.3          92.7
X          1.4 +/- 1.6        2.7          98.7
Burn     136.5 +/- 145.3    360.8          42.3
Write      1.8 +/- 2.0        4.4          98.2
Read      11.2 +/- 20.3      47.8          89.9
Ring       8.1 +/- 8.1        8.2          92.5
Compile  152.3 +/- 166.8    346.1          39.6
Load set to 12 processors

Using 4008580 loops per ms, running every load for 30 seconds
Benchmarking kernel 4.8.4-ck4+ at datestamp 201610242047
Comment: muqss116-int1

--- Benchmarking simulated cpu of Audio in the presence of simulated ---
Load Latency +/- SD (ms)  Max Latency   % Desired CPU  % Deadlines Met
None       0.0 +/- 0.0        0.0           100         100
Video      0.0 +/- 0.0        0.0           100         100
X          0.0 +/- 0.0        0.0           100         100
Burn       0.0 +/- 0.0        0.0           100         100
Write      0.0 +/- 0.0        0.1           100         100
Read       0.0 +/- 0.0        0.0           100         100
Ring       0.0 +/- 0.0        0.0           100         100
Compile    0.0 +/- 0.1        0.8           100         100

--- Benchmarking simulated cpu of Video in the presence of simulated ---
Load Latency +/- SD (ms)  Max Latency   % Desired CPU  % Deadlines Met
None       0.0 +/- 0.0        0.0           100         100
X          0.0 +/- 0.0        0.0           100         100
Burn       3.1 +/- 7.2       17.7           100        81.6
Write      0.0 +/- 0.0        0.5           100         100
Read       0.0 +/- 0.0        0.0           100         100
Ring       0.0 +/- 0.0        0.0           100         100
Compile   10.5 +/- 13.3      19.7           100        37.3

--- Benchmarking simulated cpu of X in the presence of simulated ---
Load Latency +/- SD (ms)  Max Latency   % Desired CPU  % Deadlines Met
None       0.0 +/- 0.1        1.0           100        99.3
Video      3.7 +/- 12.1      56.0            89        82.6
Burn      47.2 +/- 66.5     142.0          16.7        7.58
Write      0.1 +/- 0.5        5.0          97.7        95.7
Read       0.1 +/- 0.7        4.0          95.6        93.5
Ring       0.5 +/- 1.9       12.0          89.8          86
Compile   55.9 +/- 77.6     196.0          18.6        8.12

--- Benchmarking simulated cpu of Gaming in the presence of simulated ---
Load Latency +/- SD (ms)  Max Latency   % Desired CPU
None       0.0 +/- 0.1        0.5           100
Video      1.2 +/- 1.2        1.8          98.8
X          1.4 +/- 1.6        2.9          98.7
Burn     130.9 +/- 132.1    160.3          43.3
Write      2.4 +/- 2.5        7.0          97.7
Read       3.2 +/- 3.2        3.6          96.9
Ring       5.9 +/- 6.2       10.3          94.4
Compile  146.5 +/- 149.3    209.2          40.6

As you can see, on the only occasions mainline is better, the difference is less than 1%, which is within the margin of noise. MuQSS meets more deadlines, gives the benchmarked task more of its desired CPU, and has substantially lower max latencies.

I'm now reasonably confident that I've been able to maintain the interactivity people have come to expect from BFS in the transition to MuQSS, and the data above supports it.

Enjoy!
お楽しみ下さい
-ck

linux-4.8-ck4, MuQSS CPU scheduler v0.116

Yet another bugfix release for MuQSS and the -ck patchset, with one of the most substantial latency fixes yet. Everyone on a previous 4.8 patchset of mine should upgrade. Sorry about the frequency of these releases, but I just can't allow a known-buggy release to remain the latest version.

4.8-ck4 patchset:
http://ck.kolivas.org/patches/4.0/4.8/4.8-ck4/

MuQSS by itself for 4.8:
4.8-sched-MuQSS_116.patch

MuQSS by itself for 4.7:
4.7-sched-MuQSS_116.patch

I'm hoping this is the release that lets me avoid pushing any more -ck versions out until 4.9 is released, since it addresses all remaining issues I know about.

A lingering bug that had been troubling me for some time was leading to occasional massive latencies. Thanks to some detective work by Serge Belyshev, I was able to narrow it down to a single-line fix which dramatically improves measured worst-case latency, while throughput is virtually unchanged. The flow-on effect to other areas was also apparent: the bug sometimes caused unused CPU cycles and weird stalls on some workloads.

Sched_yield was reverted to the old BFS mechanism, which GPU drivers prefer, but which wasn't working previously on MuQSS because of the first bug. The difference is substantial now, and drivers (such as the proprietary nvidia one) and apps that call sched_yield a lot (such as the Folding@home client) behave much better.
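To illustrate the yield-heavy pattern in question, here is a toy sketch of my own (not code from any driver or client): a busy-wait loop that calls sched_yield() while polling for work, which is roughly what such apps do. How the scheduler treats this call is exactly what the reverted behaviour changes. Requires a Unix system for os.sched_yield:

```python
import os
import time

def spin_until(deadline):
    """Busy-wait until `deadline` (perf_counter seconds), yielding the
    CPU to other runnable tasks on every iteration."""
    yields = 0
    while time.perf_counter() < deadline:
        os.sched_yield()  # ask the scheduler to run someone else
        yields += 1
    return yields

n = spin_until(time.perf_counter() + 0.01)
print(f"yielded {n} times in 10 ms")
```

How quickly control returns from each sched_yield() call, and who runs in the meantime, depends entirely on the scheduler's yield semantics.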

The bugs that were introduced late into ck3/muqss115 were reverted.

Results now come up quite well with interbench (my latency-under-load benchmark), which I have recently updated so that it should give sensible values:

https://github.com/ckolivas/interbench

If you're baffled by interbench results, the most important number is % deadlines met, which should be as close to 100% as possible, followed by max latency, which should be as low as possible for each section. In the near future I'll announce an official new release version.
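If you want to eyeball a section programmatically rather than by hand, a parser along these lines works. The parser and the 95% threshold are my own sketch, assuming the column layout shown in the results above:

```python
# Parse one interbench section (header line plus data rows) and pull out
# the two numbers that matter most: max latency and % deadlines met.
SECTION = """\
Load Latency +/- SD (ms)  Max Latency   % Desired CPU  % Deadlines Met
None       0.1 +/- 0.1        0.1           100         100
Burn      17.4 +/- 19.5      46.3            87        7.62
"""

def parse_section(text):
    rows = {}
    for line in text.splitlines()[1:]:
        # columns: load, mean, '+/-', sd, max, %cpu, %deadlines
        load, mean, _, sd, mx, cpu, deadlines = line.split()
        rows[load] = {"max_latency": float(mx), "deadlines": float(deadlines)}
    return rows

for load, r in parse_section(SECTION).items():
    flag = "" if r["deadlines"] > 95 else "  <-- missing deadlines"
    print(f"{load:8s} max {r['max_latency']:6.1f} ms,"
          f" {r['deadlines']:6.2f}% deadlines{flag}")
```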

Pedro in the comments section was previously using runqlat from the bcc tools to test latencies as well, but after some investigation it became clear to me that the tool was buggy and did not work properly with BFS/MuQSS either, so I've provided a slightly updated version here which should work properly:

runqlat.py
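If you can't run bcc/BPF tools at all, a much cruder userspace probe of my own can still give a rough feel for wakeup latency: measure how late short sleeps return. This is not a substitute for runqlat's run-queue latency histogram, since it also includes timer resolution, but gross scheduler misbehaviour shows up in the worst-case number:

```python
import time

def wakeup_jitter(samples=200, sleep_s=0.001):
    """Sleep `samples` times and report (average, worst) oversleep in ms.
    Oversleep = how much later than requested the task actually woke."""
    worst = total = 0.0
    for _ in range(samples):
        t0 = time.perf_counter()
        time.sleep(sleep_s)
        late = time.perf_counter() - t0 - sleep_s
        worst = max(worst, late)
        total += late
    return total / samples * 1000, worst * 1000

avg_ms, worst_ms = wakeup_jitter()
print(f"avg oversleep {avg_ms:.3f} ms, worst {worst_ms:.3f} ms")
```

Run it on an otherwise idle machine, then again under load, and compare the worst-case values.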

Enjoy!
お楽しみ下さい
-ck

Tuesday, 18 October 2016

First MuQSS Throughput Benchmarks

The short version graphical summary:



Red = MuQSS 112 interactive off
Purple = MuQSS 112 interactive on
Blue = CFS

The detail:
http://ck.kolivas.org/patches/muqss/Benchmarks/20161018/

I went on a journey looking for meaningful benchmarks to assess the scalability aspect as far as I could on my own 12x machine, and was really quite depressed to see what the benchmark situation on Linux is like. Only old and completely invalid benchmarks (Reaim, aim7, dbench, volanomark, etc.) seem to still be hanging around and promoted on public sites, and none of them are useful scalability benchmarks. Even more depressing, the only ones with any reputation are commercial benchmarks costing hundreds of dollars.

This made me wonder out loud just how the heck mainline is even doing scalability improvements if there are precious few valid benchmarks for linux and no one's using them. The most promising ones, like mosbench, need multiple machines and quite a bit of set up to get them going.

I spent a day wading through the phoronix test suite (a site and suite not normally known for meaningful high-performance-computing discussion and benchmarks) looking for tests that could give meaningful results for multicore scalability assessment and were not too difficult to deploy, and came up with the following collection:

John The Ripper - a CPU bound application that is threaded to the number of CPUs and intermittently drops to one thread making for slightly more interesting behaviour than just a fully CPU bound workload.

7-Zip Compression - a valid real world CPU bound application that is threaded but rarely able to spread out to all CPUs making it an interesting light load benchmark.

ebizzy - This emulates a heavy content-delivery-server load that scales beyond the number of CPUs, approximating what goes on between an http server and a database.

Timed Linux Kernel Compilation - A perennial favourite because it is a real world case and very easy to reproduce. Despite numerous complaints about its validity as a benchmark, it is surprisingly consistent in its results and tests many facets of scalability, though it does not scale to use all CPUs at all times either.

C-Ray - A ray-tracing benchmark that spawns many threads per CPU and is completely CPU bound, overloading all CPUs.

Primesieve - A prime number generator that is threaded to the number of CPUs exactly, is fully CPU bound and is cache intensive.

PostgreSQL pgbench - A meaningful database benchmark that is done at 3 different levels - single threaded, normal loaded and heavily contended, each testing different aspects of scalability.

And here is a set of results comparing 4.8.2 mainline (labelled CFS), MuQSS 112 in interactive mode (MuQSS-int1) and MuQSS 112 in non-interactive mode (MuQSS-int0):

http://ck.kolivas.org/patches/muqss/Benchmarks/20161018/

It's worth noting that there is quite a bit of variance in these benchmarks, and in some the difference is bordering on noise. However there is a clear pattern: when the load is light, CFS outperforms MuQSS in throughput; when load is heavy, MuQSS outperforms CFS, and the heavier it gets the more so, especially in non-interactive mode. As a friend noted, for the workloads where you wouldn't be running MuQSS in interactive mode anyway, such as a web server or database, non-interactive mode is of clear performance benefit. So at least on the hardware available to me, a 12x machine, MuQSS scales better than mainline on these workloads as load increases.

The obvious question people will ask is why MuQSS doesn't perform better at light loads, and I have an explanation. Mainline tends to cling to the same CPUs much more, so if the machine is hovering at a low number of active processes, they all cluster on one CPU, or fewer CPUs, instead of being spread out everywhere. This means the CPU benefits more from the turbo modes virtually all newer CPUs have, but it comes at a cost: latency is greater because tasks are competing for CPU time on fewer busy CPUs rather than spreading out to idle cores or threads. It is a design decision in MuQSS, taken from BFS, to always spread out to any idle CPUs if they're available in order to minimise latency, and that's one of the reasons for the interactivity and responsiveness of MuQSS. Of course I am still investigating ways of closing that throughput gap further.
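The trade-off described above can be put into a deterministic toy model. The numbers here are purely illustrative assumptions of mine, not measurements: each task needs 10 ms of CPU, a clustered core gets a hypothetical 1.2x turbo boost, and spread tasks each get an idle core at base clock:

```python
def clustered_worst_latency(tasks, work_ms=10.0, turbo=1.2):
    """FIFO queue on one turbo-boosted core: the last task must wait
    for all the others to finish first."""
    return tasks * work_ms / turbo

def spread_worst_latency(tasks, work_ms=10.0):
    """One idle core per task at base clock: no queueing at all."""
    return work_ms

for n in (1, 2, 4, 8):
    print(f"{n} tasks: clustered {clustered_worst_latency(n):5.1f} ms,"
          f" spread {spread_worst_latency(n):5.1f} ms")
```

With a single task, clustering wins (8.3 ms vs 10 ms, the turbo/throughput benefit); with even two tasks queued, spreading already has lower worst-case latency, which is the side of the trade-off MuQSS chooses.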

Hopefully I can get some more benchmarks from someone with even bigger hardware, preferably with more than one physical package, since that's when things really start getting interesting. All in all I'm very pleased with the performance of MuQSS in terms of scalability in these results, especially assuming I'm able to maintain the interactivity of BFS, which were my dual goals.

There is MUCH more to benchmarking than pure CPU throughput, which is almost the only thing these benchmarks are checking, but that's what I'm interested in here. I hope that providing my list of easy-to-use benchmarks and the reasoning behind them can generate interest in some kind of meaningful standard set of benchmarks. I did originally start out in kernel development after writing benchmarks and being a benchmarker :P

To aid that, I'll give simple instructions here for how to imitate the benchmarks and get results like those I've produced above.

Download the phoronix test suite from here:
http://www.phoronix-test-suite.com/

The generic tar.gz is perfectly fine. Then extract it and install the relevant benchmarks like so:

tar xf phoronix-test-suite-6.6.1.tar.gz
cd phoronix-test-suite
./phoronix-test-suite install build-linux-kernel c-ray compress-7zip ebizzy john-the-ripper pgbench primesieve
./phoronix-test-suite default-run build-linux-kernel c-ray compress-7zip ebizzy john-the-ripper pgbench primesieve


Now obviously this is not ideal, since you shouldn't run benchmarks on a multiuser login with Xorg and all sorts of other crap running, so I always run benchmarks at init level 1.

Enjoy!
お楽しみ下さい
-ck