Wednesday 29 September 2010

Biasing for latency under load

One of the mantras of BFS is that it has very little in the way of tunables and should require no input on the part of the user to get both good latency and good throughput by default. The only tunable available is rr_interval, found in /proc/sys/kernel/: turning it up will improve throughput at the expense of latency, while turning it down will do the opposite (iso_cpu is also there, but that's more a permission tunable than a scheduler behavioural tunable). To give you the bottom line for those who want to tune: I suggest running 100Hz with an rr_interval of 300 if you are doing nothing but CPU intensive slow tasks (video encoding, folding etc.), and running 1000Hz with an rr_interval of 2 if you care only for latency at all costs.
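For example, the throughput-biased setting above could be applied at runtime like this (a sketch only; HZ itself is a compile-time kernel option, so only rr_interval can be changed on a running system, and the file only exists on a kernel patched with BFS):

```shell
# rr_interval lives in /proc/sys/kernel/ on a BFS kernel; HZ is set at
# build time (CONFIG_HZ), so it cannot be changed here.
RR=/proc/sys/kernel/rr_interval

if [ -e "$RR" ]; then
    # Throughput-biased setting for batch CPU work (pair with a 100Hz kernel):
    echo 300 > "$RR"
    # Latency-at-all-costs setting (pair with a 1000Hz kernel) would instead be:
    # echo 2 > "$RR"
    cat "$RR"
else
    echo "rr_interval not found: this kernel is not running BFS"
fi
```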

I've believed for a long time that it makes no sense to tune for ridiculously high loads on your machine if your primary use is a desktop; if you get into an overload situation, you should expect a slowdown. Trying to tune for such conditions always ends up costing you in other ways that just aren't worth it, since you spend 99.9999% of your time at "normal loads". What BFS does at high loads is progressively lengthen latency in proportion to the load while maintaining relatively regular throughput. So if you run, say, a make -j4 on a quad core machine, you shouldn't really notice anything going on, but if you run a make -j64 you should notice a dramatic slowdown, loss of fluid cursor movement and possibly audio skipping. What exactly is the point of doing a make -j64 on a quad core desktop? There is none, apart from as some kind of mindless test. However, a busy server spends most of its time at loads much greater than 1 per CPU. In that setting, maintaining reasonable latency while ensuring maximal throughput is optimal.

The mainline kernel developers seem intent on continually readdressing latency under load on a desktop as though that's some holy grail. Lately a make -j10 workload on a uniprocessor machine has been used as the benchmark. What they're finding, not surprisingly, is that the lower you aim your latencies, the smoother the desktop will continue to feel at higher loads, and so they're trying to find some "optimum" value where latency will still be good without sacrificing throughput too much. Why 10? Why not 100? How about 1000? Why choose some arbitrary upper figure to tune to? Why not just accept that overload is overload, that latency is going to suffer, and not damage throughput trying to contain it?

For my own response to this, here's a patch:
(edit, patch for bfs357)


The changelog, also in the patch itself, follows:

Make it possible to maintain latency as low as possible at high loads by shortening the timeslice the more loaded the machine gets. Do so by adding a tunable, latency_bias, which is disabled by default. Valid values are from 0 to 100, where higher values mean biasing more for latency as load increases. Note that this should still maintain fairness, but will sacrifice throughput, potentially dramatically, to try and keep latencies as low as possible. Hz will still be a limiting factor, so the higher Hz is, the lower the latencies maintainable.

The effect of enabling this tunable will be to ensure that very low CPU usage processes, such as mouse cursor movement, remain fluid no matter how high the load is. It's possible to have a smooth mouse cursor under massive loads, but the effect on throughput can be up to a 20% loss at ultra high loads. At meaningful loads, a value of one will have minimal impact on throughput and ensure that under the occasional overload condition the machine will still feel fluid.

This is to achieve the converse of what normally happens. You can choose to tune to maintain latency at high loads (set to 100), throughput (set to 0 for the current behaviour), or some balance in between (set to 1 or more). So I'm putting this out there for people to test and report back to see if they think it's worthwhile.
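As a sketch of how the new knob would be used: the changelog only names the tunable latency_bias, so the exact /proc path below is my assumption based on where rr_interval lives; check the patch for the real location.

```shell
# Assumed path: the changelog names the tunable "latency_bias", so the
# /proc location here is a guess modelled on rr_interval.
LB=/proc/sys/kernel/latency_bias

if [ -e "$LB" ]; then
    echo 1 > "$LB"      # mild bias: stays fluid under occasional overload
    # echo 100 > "$LB"  # maximum bias: latency at all costs
    # echo 0 > "$LB"    # default: current BFS behaviour
    cat "$LB"
else
    echo "latency_bias not available (patch not applied?)"
fi
```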
