On Thu, Jan 11, 2007 at 10:13:55AM +1100, Nick Piggin wrote:
> David Chinner wrote:
> > On Wed, Jan 10, 2007 at 03:04:15PM -0800, Christoph Lameter wrote:
> > > On Thu, 11 Jan 2007, David Chinner wrote:
> > > > > > The performance and smoothness is fully restored on 2.6.20-rc3
> > > > > > by setting dirty_ratio down to 10 (from the default 40), so
> > > > > > something in the VM is not working as well as it used to....
> > > > > dirty_background_ratio is left as is at 10?
> > > > Yes.
> > > So you gain performance by switching off background writes via pdflush?
> > Well, pdflush appears to be doing very little on both 2.6.18 and
> > 2.6.20-rc3. In both cases kswapd is consuming 10-20% of a CPU and
> > all of the pdflush threads combined (I've seen up to 7 active at
> > once) use maybe 1-2% of cpu time. This occurs regardless of the
> > dirty_ratio setting.
> Hi David,
> Could you get /proc/vmstat deltas for each kernel, to start with?
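For reference, the knobs being discussed here are the standard procfs/sysctl writeback tunables. A minimal read-only sketch follows (reading is unprivileged; actually lowering dirty_ratio to 10 as described above requires root, so that part is shown commented out):

```shell
# Inspect the current writeback thresholds (percent of memory that
# may be dirty before throttling / background writeback kicks in).
cat /proc/sys/vm/dirty_ratio
cat /proc/sys/vm/dirty_background_ratio

# To reproduce the tuning from this thread (root only):
#   echo 10 > /proc/sys/vm/dirty_ratio
# or equivalently:
#   sysctl -w vm.dirty_ratio=10
```

Both files hold a single integer percentage, so they are easy to capture alongside other test output.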
Sure, but that doesn't really show how erratic the per-filesystem
throughput is, because the test I'm running is PCI-X bus limited in
its throughput at about 750MB/s. Each dm device is capable of about
340MB/s write, so when one slows down, the others will typically
speed up.
So, what I've attached is three files which contain both
'vmstat 5' output and 'iostat 5 |grep dm-' output.
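The /proc/vmstat deltas Nick asked for can be captured with a small script; this is one plausible sketch (the function name is my own, and it assumes a Linux /proc/vmstat — the exact counter names, e.g. nr_dirty and the pgscan_* family, vary by kernel version):

```shell
# vmstat_delta: print the per-counter change in /proc/vmstat over an
# interval of $1 seconds (default 5). Only counters that changed are
# printed, as "name delta" pairs.
vmstat_delta() {
    interval=${1:-5}
    before=$(mktemp) || return 1
    cat /proc/vmstat > "$before"
    sleep "$interval"
    # Join the second snapshot against the first on counter name and
    # print the difference for every counter that moved.
    awk 'NR==FNR { a[$1] = $2; next }
         ($1 in a) && ($2 != a[$1]) { print $1, $2 - a[$1] }' \
        "$before" /proc/vmstat
    rm -f "$before"
}
```

Running something like this over the test window on each kernel, and comparing the writeback and reclaim counters between 2.6.18 and 2.6.20-rc3, would show directly how much work kswapd and pdflush are each doing.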