Re: XFS/btrfs performance after IO-less dirty throttling
From: Dave Chinner
Date: Thu Dec 15 2011 - 19:32:02 EST
On Thu, Dec 15, 2011 at 09:31:37PM +0800, Wu Fengguang wrote:
> > The other big regressions happen in the XFS UKEY-thresh=100M cases.
>
> >                   3.1.0+                 3.2.0-rc3
> > ------------------------  ------------------------
> >                     4.17       -37.8%         2.59  fat/UKEY-thresh=100M/xfs-100dd-1-3.1.0+
> >                     4.14       -53.3%         1.94  fat/UKEY-thresh=100M/xfs-10dd-1-3.1.0+
> >                     6.30        +0.4%         6.33  fat/UKEY-thresh=100M/xfs-1dd-1-3.1.0+
>
> Here are more details for the 10dd case. The attached
> balance_dirty_pages-pause.png shows small pause times (mostly in the
> 10-50ms range) and small nr_dirtied_pause values (mostly < 5), which
> may be the root cause.
>
> The iostat graphs show very unstable throughput, and the IO size often
> drops to small values.
And it's doing shitloads more allocation work. IOWs, the delayed
allocation algorithms are being strangled by writeback, causing
fragmentation and hence not allowing enough data per thread to be
written at a time to maximise throughput.
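If you want to sanity-check that, counting the extents in the dd output
files after a run shows the fragmentation directly. A minimal sketch,
assuming the 10dd test leaves files named /mnt/xfs/dd-0 .. dd-9
(hypothetical paths) and that filefrag from e2fsprogs is installed
(xfs_bmap -v gives the same information in more detail):

#!/usr/bin/env python3
# Minimal sketch: count the extents in each dd output file to see how
# badly delayed allocation is being chopped up by writeback. The file
# names below are hypothetical - point them at the real test output.
import re
import subprocess

FILES = [f"/mnt/xfs/dd-{i}" for i in range(10)]   # hypothetical test files

for path in FILES:
    # filefrag prints a summary line like "path: 42 extents found"
    out = subprocess.run(["filefrag", path], capture_output=True, text=True)
    m = re.search(r"(\d+) extents? found", out.stdout)
    print(f"{path}: {m.group(1)} extents" if m else f"{path}: {out.stderr.strip()}")

Few large extents per file on 3.1.0+ versus many small ones on
3.2.0-rc3 would be consistent with the fragmentation explanation above.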
However, I'd argue that the performance of 10 concurrent writers to
a slow USB key formatted with XFS is so *completely irrelevant* that
I'd ignore it. Spend your time optimising writeback on XFS for high
throughputs (e.g. > 500MB/s), not for shitty $5 USB keys that are 2-3
orders of magnitude slower than the target market for XFS...
Cheers,
Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx