Steven Cole wrote:
>
> ...
> > If you want good dbench numbers:
> >
> > echo 70 > /proc/sys/vm/dirty_background_ratio
> > echo 75 > /proc/sys/vm/dirty_async_ratio
> > echo 80 > /proc/sys/vm/dirty_sync_ratio
> > echo 30000 > /proc/sys/vm/dirty_expire_centisecs
>
> That last one looks like the biggest cheat. Rather than optimizing for
> dbench, is there a set of pessimizing numbers which would optimally turn
> dbench into a semi-useful tool for measuring meaningful IO performance?
> Or is dbench really only useful for stress testing?
>
We tend to use dbench in two modes nowadays. One is the "RAM only"
mode, where the run completes before hitting disk at all. That's
a very useful and repeatable test for CPU efficiency and lock contention.
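(To illustrate, a minimal sketch of a "RAM only" run; the mount
point and client count here are made up, and the sysctl values are
just the ones quoted above, not a recommendation:

	# Push the dirty thresholds right up, as in the settings
	# quoted above, so nothing hits disk during the run.
	echo 70 > /proc/sys/vm/dirty_background_ratio
	echo 30000 > /proc/sys/vm/dirty_expire_centisecs

	# A modest client count on a box with plenty of RAM should
	# then complete entirely in pagecache.
	cd /mnt/test && dbench 8
)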
The other mode is of course when there are enough clients and
enough dirty data for the test to go to disk. As Rik says, this
tends to be subject to chaotic effects, and it is also extremely
non-linear: when the run slows down a little, it takes longer, so
more data becomes eligible for time-expiry-based writeback, which
causes more IO, which causes the run to take longer still, and so on.
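You can watch this feedback loop from the outside; assuming your
kernel exposes a Dirty: line in /proc/meminfo, something like this
during a run shows the dirty pool being drained once the time-based
writeback kicks in:

	# Sample the amount of dirty memory once a second.
	while true
	do
		grep Dirty /proc/meminfo
		sleep 1
	done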
Yes, one still tends to keep an eye on the "heavy" dbench
throughput, but I suspect that tuning for this workload is a bad
thing overall. This is because good dbench numbers come from
allowing a large amount of dirty data to float about in memory
(it will never get written out, since dbench deletes its own
files). But for real workloads, which don't delete their own
output 30 seconds later, we want to start writeback earlier, both
to use the disk bandwidth more smoothly and to decrease memory
allocation latency.
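That means something in the opposite direction from the numbers
quoted at the top; these particular values are only illustrative,
not tested recommendations:

	# Start background writeback early and expire dirty data
	# sooner, trading peak dbench throughput for smoother IO
	# and lower allocation latency.
	echo 10 > /proc/sys/vm/dirty_background_ratio
	echo 40 > /proc/sys/vm/dirty_async_ratio
	echo 50 > /proc/sys/vm/dirty_sync_ratio
	echo 3000 > /proc/sys/vm/dirty_expire_centisecs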