Re: [PATCH 0/7] Per-bdi writeback flusher threads v20

From: Chris Mason
Date: Mon Sep 21 2009 - 09:54:27 EST


On Sat, Sep 19, 2009 at 12:26:07PM +0800, Wu Fengguang wrote:
> On Sat, Sep 19, 2009 at 12:00:51PM +0800, Wu Fengguang wrote:
> > On Sat, Sep 19, 2009 at 11:58:35AM +0800, Wu Fengguang wrote:
> > > On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> > > > On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > > > >
> > > > > That would be good. Sorry for the late work. I'll allocate some time
> > > > > in mid next week to help review and benchmark recent writeback works,
> > > > > and hope to get things done in this merge window.
> > > >
> > > > Did you have some chance to get more work done on the your writeback
> > > > patches?
> > >
> > > Sorry for the delay, I'm now testing the patches with commands
> > >
> > > cp /dev/zero /mnt/test/zero0 &
> > > dd if=/dev/zero of=/mnt/test/zero1 &
> > >
> > > and the attached debug patch.
> > >
> > > One problem I found with ext3/4 is, redirty_tail() is called repeatedly
> > > in the traces, which could slow down the inode writeback significantly.
> >
> > FYI, it's this redirty_tail() called in writeback_single_inode():
> >
> > /*
> > * Someone redirtied the inode while were writing back
> > * the pages.
> > */
> > redirty_tail(inode);
>
> Hmm, this looks like an old fashioned problem get blew up by the
> 128MB MAX_WRITEBACK_PAGES.

I'm starting to rethink the 128MB MAX_WRITEBACK_PAGES. 128MB is the
right answer for the flusher thread on sequential IO, but definitely not
on random IO. We don't want the flusher to get bogged down on random
writeback and start ignoring every other file.

My btrfs performance branch has long had a change to bump
nr_to_write up based on the size of the delayed allocation that we're
doing. It helped, but not as much as I really expected it to, and a
similar patch from Christoph for XFS was good but not great.
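The idea can be sketched roughly like this (illustrative only; the
struct and helper names are assumptions, not the actual btrfs code).
The 8192-page cap matches the "full extent or 8192 pages, whichever is
smaller" behavior described further down:

```c
#include <stddef.h>

/* Illustrative stand-in for struct writeback_control. */
struct wbc_sketch {
	long nr_to_write;
};

/*
 * Hypothetical helper: when ->writepage finds it is starting a
 * delayed-allocation extent, raise the remaining budget so the whole
 * extent can go out in one pass instead of being split.
 */
static void bump_nr_to_write(struct wbc_sketch *wbc, long extent_pages)
{
	long want = extent_pages < 8192 ? extent_pages : 8192;

	if (wbc->nr_to_write < want)
		wbc->nr_to_write = want;
}
```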

It turns out the problem is in write_cache_pages. It processes a whole
pagevec at a time, something like this:

	while (!done) {
		for each page in the pagevec {
			writepage()
			if (wbc->nr_to_write <= 0)
				done = 1;
		}
	}

If the filesystem decides to bump nr_to_write to cover a whole
extent (or a max reasonable size), the new value of nr_to_write may
be ignored if nr_to_write had already gone down to zero.
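Here's a toy user-space model of the problem (not the kernel code; the
budget, extent size, and writepage behavior are made up for
illustration). A latched done flag finishes out the pagevec but ignores
a bump that arrives after the budget hit zero, while rechecking the
live nr_to_write before every page both honors bumps and stops exactly
when the budget is spent:

```c
#include <stdbool.h>

#define PAGEVEC_SIZE 14	/* pages handled per batch */

/* Illustrative stand-in for struct writeback_control. */
struct wbc_sketch {
	long nr_to_write;
};

/*
 * Toy ->writepage: consumes one page of budget; at the (made-up) page
 * where a new extent starts, it bumps nr_to_write by 32 to try to
 * cover the whole extent.
 */
static void toy_writepage(struct wbc_sketch *wbc, int page, int extent_start)
{
	wbc->nr_to_write--;
	if (page == extent_start)
		wbc->nr_to_write += 32;
}

/* The loop from above: 'done' latches, so a bump that arrives after
 * the budget already hit zero is ignored and the extent gets split. */
static int write_latched(long budget, int extent_start)
{
	struct wbc_sketch wbc = { .nr_to_write = budget };
	int page = 0;
	bool done = false;

	while (!done) {
		for (int i = 0; i < PAGEVEC_SIZE; i++) {
			toy_writepage(&wbc, page++, extent_start);
			if (wbc.nr_to_write <= 0)
				done = true;	/* never un-latched */
		}
	}
	return page;
}

/* Recheck the live nr_to_write before every page instead: a bump made
 * while budget remains extends the pass, and writeback stops exactly
 * when the budget is really spent. */
static int write_rechecked(long budget, int extent_start)
{
	struct wbc_sketch wbc = { .nr_to_write = budget };
	int page = 0;

	for (;;) {
		for (int i = 0; i < PAGEVEC_SIZE; i++) {
			if (wbc.nr_to_write <= 0)
				return page;
			toy_writepage(&wbc, page++, extent_start);
		}
	}
}
```

With a budget of 16 pages and a 32-page extent starting at page 17
(just after the budget runs out), the latched loop writes 28 pages,
cutting the extent off after 11 of its pages; the rechecking loop stops
cleanly at 16 and leaves the extent whole for the next pass. If the
extent instead starts at page 10, while budget remains, the rechecking
loop honors the bump and writes 48 pages in one pass.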

I fixed btrfs to recheck nr_to_write every time, and the results are
much smoother. This is what it looks like to write out all the .o files
in the kernel.

http://oss.oracle.com/~mason/seekwatcher/btrfs-nr-to-write.png

In this graph, Btrfs is writing the full extent or 8192 pages, whichever
is smaller. The write_cache_pages change is here, but it is local to
the btrfs copy of write_cache_pages:

http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable.git;a=commit;h=f85d7d6c8f2ad4a86a1f4f4e3791f36dede2fa76

I'd rather see a more formal use of hints from the FS about efficient IO
than a blanket increase of the writeback max. It's more work than
bumping a single #define, but even with the #define at 1GB, we're going
to end up splitting extents and seeking when nr_to_write does finally
get down to zero.

Btrfs currently only bumps nr_to_write when it creates the extent; I
need to change it to also bump it when it finds an existing extent.

-chris
