[updated] BFS vs. mainline scheduler benchmarks and measurements

From: Ingo Molnar
Date: Thu Sep 10 2009 - 03:43:36 EST



* Ingo Molnar <mingo@xxxxxxx> wrote:

> OLTP performance (postgresql + sysbench)
> http://redhat.com/~mingo/misc/bfs-vs-tip-oltp.jpg

To everyone who might care about this, i've updated the sysbench
results to latest -tip:

http://redhat.com/~mingo/misc/bfs-vs-tip-oltp-v2.jpg

This double-checks the effects of the various interactivity fixlets
in the scheduler tree (whose interactivity effects have been
mentioned and documented in the various threads on lkml) in the
throughput space too - they also improved sysbench performance.

Con, i'd also like to thank you for raising general interest in
scheduler latencies once more by posting the BFS patch. It gave us
more bug reports upstream and gave us desktop users willing to test
patches, which in turn helps us improve the code. When users choose
to suffer in silence, that is never helpful.

BFS isn't particularly strong in this graph - from having looked at
the workload under BFS, my impression is that this is primarily due
to your having cut out much of the sched-domains SMP load-balancer
code. BFS 'insta-balances' very aggressively, which hurts
cache-affine workloads rather visibly.
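To illustrate the point, here is a minimal user-space sketch (not
kernel code - the names and numbers are invented for this example,
loosely modelled on the task_hot() / sched_migration_cost idea) of
the kind of cache-hotness check a sched-domains style balancer
applies before it migrates a task:

/*
 * Toy user-space model of the cache-hotness check a sched-domains
 * style balancer applies before migrating a task.  Illustrative
 * only: the real logic in the kernel is considerably more involved.
 */
#include <stdbool.h>
#include <stdio.h>

struct task {
    const char *name;
    unsigned long long last_ran_ns;  /* last ran on its current CPU */
};

/* Stand-in for the migration-cost idea: a task that ran within the
 * last 0.5 ms is assumed to still have warm cache state. */
static const unsigned long long migration_cost_ns = 500000ULL;

static bool task_is_cache_hot(const struct task *p,
                              unsigned long long now_ns)
{
    return (now_ns - p->last_ran_ns) < migration_cost_ns;
}

/* An 'insta-balancing' scheduler migrates unconditionally; a
 * cache-affine one skips tasks that are still hot on their CPU. */
static bool should_migrate(const struct task *p,
                           unsigned long long now_ns)
{
    if (task_is_cache_hot(p, now_ns)) {
        printf("%s: cache-hot, keep it where it is\n", p->name);
        return false;
    }
    printf("%s: cache is cold, migration is cheap\n", p->name);
    return true;
}

int main(void)
{
    struct task hot  = { "oltp-worker", 9900000ULL }; /* 0.1 ms ago */
    struct task cold = { "batch-job",   1000000ULL }; /* 9 ms ago   */
    unsigned long long now_ns = 10000000ULL;

    should_migrate(&hot, now_ns);
    should_migrate(&cold, now_ns);
    return 0;
}

The real check is of course far more nuanced, but it shows why an
unconditional 'insta-balance' throws away cache-warm state that a
cache-affine workload like sysbench/OLTP visibly benefits from.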

You might want to have a look at that design detail if you care -
load-balancing is in significant parts orthogonal to the basic
design of a fair scheduler.

For example, we kept much of the existing load-balancer when we went
to CFS in v2.6.23 - the fairness engine and the load-balancer are in
large parts independent units of code and can be improved/tweaked
separately.

There are interactions, but the concepts are largely separate.
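As a rough illustration of that split, here is a small user-space
sketch (all names here are invented for the example - this is not
the kernel's code) where the fairness engine only decides which task
runs next on one runqueue, and the load-balancer only decides how
tasks are spread across runqueues:

/*
 * Toy sketch of the separation: pick_next_fair() knows nothing about
 * other CPUs, load_balance() knows nothing about vruntime ordering.
 * Invented names, much simpler than the real scheduler.
 */
#include <stdio.h>

#define MAX_TASKS 8

struct toy_task {
    const char *name;
    unsigned long vruntime;  /* virtual runtime, CFS-like fairness key */
};

struct toy_rq {
    struct toy_task *tasks[MAX_TASKS];
    int nr;
};

/* Fairness engine: pick the runnable task with the smallest vruntime. */
static struct toy_task *pick_next_fair(struct toy_rq *rq)
{
    struct toy_task *best = NULL;

    for (int i = 0; i < rq->nr; i++)
        if (!best || rq->tasks[i]->vruntime < best->vruntime)
            best = rq->tasks[i];
    return best;
}

/* Load-balancer: if one runqueue is at least two tasks heavier than
 * the other, move one task over.  It never looks at vruntime. */
static void load_balance(struct toy_rq *busy, struct toy_rq *idle)
{
    if (busy->nr >= idle->nr + 2 && idle->nr < MAX_TASKS)
        idle->tasks[idle->nr++] = busy->tasks[--busy->nr];
}

int main(void)
{
    struct toy_task a = { "A", 100 }, b = { "B", 40 }, c = { "C", 70 };
    struct toy_rq cpu0 = { { &a, &b, &c }, 3 };
    struct toy_rq cpu1 = { .nr = 0 };

    load_balance(&cpu0, &cpu1);  /* spreads load; fairness untouched */
    printf("cpu0 runs %s\n", pick_next_fair(&cpu0)->name);
    printf("cpu1 runs %s\n", pick_next_fair(&cpu1)->name);
    return 0;
}

Either half can be replaced or tuned without rewriting the other -
which is the sense in which the load-balancer is orthogonal to the
fairness engine.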

Thanks,

Ingo