Re: [ANNOUNCE] BFS CPU scheduler version 0.420 AKA "Smoking" for linux kernel 3.3.0
From: Con Kolivas
Date: Sat Mar 24 2012 - 22:33:23 EST
On 25 March 2012 13:05, <Valdis.Kletnieks@xxxxxx> wrote:
> On Sat, 24 Mar 2012 05:53:32 -0400, Gene Heskett said:
>> I for one am happy to see this, Con. I have been running an earlier patch
>> as pclos applies it to 18.104.22.168, and I must say the desktop interactivity
>> is very much improved over the non-bfs version.
> I've always wondered what people are using to measure interactivity. Do we have
> some hard numbers from scheduler traces, or is it a "feels faster"? And if
> it's a subjective thing, how are people avoiding confirmation bias (where you
> decide it feels faster because it's the new kernel and *should* feel faster)?
> Anybody doing blinded boots, where a random kernel old/new is booted and the
> user grades the performance without knowing which one was actually running?
> And yes, this can be a real issue - anybody who's been a sysadmin for
> a while will have at least one story of scheduling an upgrade, scratching it
> at the last minute, and then having users complain about how the upgrade
> ruined performance and introduced bugs...
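The blinded-boot protocol described above could be sketched roughly as follows; the kernel names, session count, and rating scale are purely illustrative, not anything anyone has actually run:

```python
import random

# Sketch of a blinded A/B protocol: each session boots a randomly chosen
# kernel that the tester only sees as an anonymous session, the tester
# grades interactivity, and the mapping is revealed only afterwards.
# Kernel names below are illustrative placeholders.
KERNELS = ["3.3.0-mainline", "3.3.0-bfs"]

def blind_assignment(n_sessions, seed=None):
    """Return (session, hidden_kernel) picks, one per boot."""
    rng = random.Random(seed)
    return [(i, rng.choice(KERNELS)) for i in range(n_sessions)]

def unblind(assignments, ratings):
    """Average the subjective ratings per kernel after all grading is done."""
    totals = {k: [] for k in KERNELS}
    for (_session, kernel), rating in zip(assignments, ratings):
        totals[kernel].append(rating)
    return {k: sum(v) / len(v) for k, v in totals.items() if v}

# Example: six sessions graded 1-10 without knowing which kernel ran.
plan = blind_assignment(6, seed=42)
scores = unblind(plan, [7, 8, 6, 9, 7, 8])
```

The point of the random assignment is simply that the grade is recorded before anyone knows which scheduler was running, which is what defeats the confirmation bias being described.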
I would say the vast majority of -ck/BFS users rely purely on
subjective feeling. On the other hand, I have done numerous benchmarks
in the past trying to show that the bound latencies of BFS are better
than mainline's on regular workloads, which is not surprising since BFS
is deterministic with respect to its latencies whereas mainline is not
(except on uniprocessor). I also documented interbench numbers showing
that worst-case latencies are bound better with BFS, but since
interbench is a complicated benchmark that also displays fairness, most
people don't know how to read the values. Since I was never out to
displace the mainline scheduler, but to demonstrate alternatives and
provide a standard for comparison, I didn't bother with the benchmarks
much beyond the occasional one I've posted. As the main mailing list
seems distinctly uninterested in said results, I've only published the
throughput benchmarks as a kind of baseline regression point, to show
that BFS's throughput is not significantly adversely affected on the
commodity hardware that people are using it on.
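The kind of latency measurement involved can be illustrated with a rough sketch (not BFS-specific, and far cruder than cyclictest or interbench): sleep for a fixed interval and record how far past the deadline the process actually wakes, then look at the worst case as the observed latency bound:

```python
import time

# Illustration only: estimate scheduler wakeup latency by sleeping for a
# fixed interval and measuring how far past the deadline the process
# actually resumes. A "bound" latency means the worst case stays small
# and predictable; an unbound one means the tail can grow under load.
def wakeup_latency_us(interval_ms=1.0, samples=200):
    latencies = []
    for _ in range(samples):
        deadline = time.monotonic() + interval_ms / 1000.0
        time.sleep(interval_ms / 1000.0)
        overshoot = time.monotonic() - deadline
        latencies.append(max(0.0, overshoot) * 1e6)  # microseconds
    latencies.sort()
    return {
        "avg": sum(latencies) / len(latencies),
        "p95": latencies[int(0.95 * len(latencies))],
        "max": latencies[-1],  # worst case observed
    }

stats = wakeup_latency_us()
```

A deterministic scheduler is one where the "max" figure can be reasoned about in advance rather than merely observed.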
A comprehensive comparison of an earlier BFS against CFS and the old
O(1) scheduler, evaluating throughput and fairness, was in the
excellent thesis by Joseph T. Meehean entitled "Towards Transparent CPU
Scheduling".
A few of the latency benchmarks that still remain published on my site
can be found here:
Note how old they are. Not much has been done to repeat them since
then, but BFS's main design has not drastically changed in that time.
Some more may be found in old mailing list posts, but not a lot has
been documented with regard to this.
Some throughput benchmarks:
Yes, the results are from relatively simple benchmarks and limited in
scope. Yes, there is hardly a decent benchmark for either interactivity
or responsiveness (interbench and contest were my attempts to benchmark
both of those).
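The core idea interbench probes, measuring wakeup latency of a quasi-interactive task while background load competes for the CPU, can be sketched loosely like this (the hog threads here contend via Python's GIL rather than as genuinely CPU-bound processes, so this only illustrates the method, not interbench itself):

```python
import threading
import time

# Loose sketch of the interbench idea: time a periodic "interactive"
# loop while busy threads steal CPU, so the result reflects both raw
# responsiveness and how fairly the background load is treated.
def cpu_hog(stop):
    # Spin until told to stop, taking CPU time away from the measured loop.
    while not stop.is_set():
        pass

def worst_latency_under_load(n_hogs=2, samples=100, interval_ms=1.0):
    stop = threading.Event()
    hogs = [threading.Thread(target=cpu_hog, args=(stop,))
            for _ in range(n_hogs)]
    for h in hogs:
        h.start()
    try:
        worst = 0.0
        for _ in range(samples):
            deadline = time.monotonic() + interval_ms / 1000.0
            time.sleep(interval_ms / 1000.0)
            # How far past the intended wakeup did we actually run?
            worst = max(worst, (time.monotonic() - deadline) * 1e3)
        return worst  # worst observed wakeup latency, in milliseconds
    finally:
        stop.set()
        for h in hogs:
            h.join()

worst_ms = worst_latency_under_load()
```

Comparing this worst case with and without the hogs running is, in miniature, the fairness-plus-latency trade-off that makes interbench numbers hard to read at a glance.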
Here's the very brief summary, written many years ago, of the
difference between interactivity and responsiveness as I see it: