Re: [tbench regression fixes]: digging out smelly deadmen.

From: Ingo Molnar
Date: Mon Oct 27 2008 - 14:33:47 EST



* Alan Cox <alan@xxxxxxxxxxxxxxxxxxx> wrote:

> > To get the best possible dbench numbers in CPU-bound dbench runs,
> > you have to throw away the scheduler completely and do this
> > instead:
> >
> > - first execute all requests of client 1
> > - then execute all requests of client 2
> > ....
> > - execute all requests of client N
>
> Rubbish. [...]

I actually implemented that about a decade ago: I tracked down what
makes dbench tick and implemented the kernel heuristics to make dbench
scale linearly with the number of clients - just to be shot down by
Linus for my utterly rubbish approach ;-)
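For illustration only, here is a minimal user-space sketch of the two
orderings in question - fair round-robin interleaving across clients
versus draining one client at a time. The client/request loop and all
names in it are made up for the example; this is not the actual kernel
heuristic from back then.

/*
 * Toy user-space illustration (not kernel code) of the two request
 * orderings discussed above: fair round-robin interleaving across
 * clients versus running each client's requests back to back, which
 * is what maximizes dbench-style throughput numbers.
 */
#include <stdio.h>

#define NR_CLIENTS	4
#define NR_REQUESTS	3

static void run_request(int client, int request)
{
	printf("client %d: request %d\n", client, request);
}

/* Fair ordering: give every client one request per round. */
static void run_fair(void)
{
	int c, r;

	for (r = 0; r < NR_REQUESTS; r++)
		for (c = 0; c < NR_CLIENTS; c++)
			run_request(c, r);
}

/* "Best numbers" ordering: drain client 1 completely, then client 2, ... */
static void run_batched(void)
{
	int c, r;

	for (c = 0; c < NR_CLIENTS; c++)
		for (r = 0; r < NR_REQUESTS; r++)
			run_request(c, r);
}

int main(void)
{
	puts("fair (round-robin) ordering:");
	run_fair();
	puts("batched per-client ordering:");
	run_batched();
	return 0;
}

The batched ordering is exactly the kind of unfairness being argued
about: it inflates the benchmark number while starving every client
except the one currently being drained.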

> [...] If you do that you'll not get enough I/O in parallel to
> schedule the disk well (not that most of our I/O schedulers are
> doing the job well, and the vm writeback threads then mess it up and
> the lack of Arjan's ioprio fixes then totally screw you) </rant>

The best dbench results come from systems that have enough RAM to
cache the full working set, and from a filesystem intelligent enough
not to insert bogus IO serialization cycles (ext3 is not such a
filesystem).

The moment there's real IO it becomes harder to analyze, but the same
basic behavior remains: the more unfair the IO scheduler, the "better"
the dbench results we get.

Ingo