Re: [PATCH 11/12] vmscan: Write out dirty pages in batch

From: Nick Piggin
Date: Tue Jun 15 2010 - 01:12:57 EST


On Mon, Jun 14, 2010 at 09:37:32PM -0700, Andrew Morton wrote:
> On Tue, 15 Jun 2010 00:08:14 -0400 Rik van Riel <riel@xxxxxxxxxx> wrote:
>
> > On 06/14/2010 09:45 PM, Andrew Morton wrote:
> > > On Mon, 14 Jun 2010 21:16:29 -0400 Rik van Riel<riel@xxxxxxxxxx> wrote:
> > >
> > >> Would it be hard to add a "please flush this file"
> > >> way to call the filesystem flushing threads?
> > >
> > > Passing the igrab()bed inode into the flusher threads would fix the
> > > iput_final() problems, as long as the alloc_pages() caller never blocks
> > > indefinitely waiting for the work which the flusher threads are doing.
> > >
> > > Otherwise we get (very hard-to-hit) deadlocks where the alloc_pages()
> > > caller holds VFS locks and is waiting for the flusher threads while all
> > > the flusher threads are stuck under iput_final() waiting for those VFS
> > > locks.
> > >
> > > That's fixable by not using igrab()/iput(). You can use lock_page() to
> > > pin the address_space. Pass the address of the locked page across to
> > > the flusher threads so they don't try to lock it a second time, or just
> > > use trylocking on that writeback path or whatever.
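
For illustration, the locked-page handoff Andrew describes might look
roughly like the sketch below. The struct and function names are
invented, a real version would hang off the per-bdi flusher work list,
and the trylock walk is only there to show how the flusher skips the
page the reclaimer still holds locked rather than blocking on it:

/*
 * Hypothetical handoff from the reclaimer: @locked_page is left locked
 * by the reclaimer so that @mapping (and its inode) cannot be reclaimed
 * while the request is in flight -- no igrab()/iput(), so the flusher
 * never ends up in iput_final() under VFS locks.
 */
struct flush_mapping_work {
	struct address_space	*mapping;
	struct page		*locked_page;
};

static void flush_mapping_work_fn(struct flush_mapping_work *work)
{
	struct address_space *mapping = work->mapping;
	struct writeback_control wbc = {
		.sync_mode	= WB_SYNC_NONE,
		.nr_to_write	= LONG_MAX,
	};
	struct pagevec pvec;
	pgoff_t index = 0;

	pagevec_init(&pvec, 0);

	while (pagevec_lookup_tag(&pvec, mapping, &index,
				  PAGECACHE_TAG_DIRTY, PAGEVEC_SIZE)) {
		int i;

		for (i = 0; i < pagevec_count(&pvec); i++) {
			struct page *page = pvec.pages[i];

			/*
			 * Trylock, as suggested above: the page handed
			 * over by the reclaimer is already locked, so
			 * we skip it here instead of deadlocking on it.
			 */
			if (!trylock_page(page))
				continue;

			if (page->mapping == mapping && PageDirty(page) &&
			    clear_page_dirty_for_io(page)) {
				/* ->writepage unlocks the page itself */
				mapping->a_ops->writepage(page, &wbc);
			} else {
				unlock_page(page);
			}
		}
		pagevec_release(&pvec);
		cond_resched();
	}

	/* Finally drop the pin; the reclaimer left this to us. */
	unlock_page(work->locked_page);
}
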
> >
> > Any thread that does not have __GFP_FS set in its gfp_mask
> > cannot wait for the flusher to complete. This is regardless
> > of the mechanism used to kick the flusher.
>
> mm... kinda. A bare order-zero __GFP_WAIT allocation can still wait
> forever, afaict.
>
> > Then again, those threads cannot call ->writepage today
> > either, so we should be fine keeping that behaviour.
>
> I'm not sure. iput_final() can take a lot of locks, both VFS and
> heaven knows what within the individual filesystems. Is it the case
> that all allocations which occur under all of those locks are always
> !__GFP_FS? Hard to say...

__GFP_FS allocations are made with i_mutex held in places, and there is
nothing to prevent a filesystem from doing the same in its iput_final
paths, AFAIK.
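
(For reference, the "cannot call ->writepage today" gate Rik mentions is
roughly the check below, simplified from shrink_page_list(); the helper
name is mine:)

/*
 * A reclaimer that was not allowed __GFP_FS may already be holding fs
 * locks (i_mutex and friends), so it must not re-enter the filesystem,
 * neither via ->writepage nor by waiting on flusher work it kicked off.
 * Swap-backed pages only need the block layer, hence the __GFP_IO case.
 */
static bool reclaim_may_enter_fs(struct page *page, gfp_t gfp_mask)
{
	if (gfp_mask & __GFP_FS)
		return true;
	return PageSwapCache(page) && (gfp_mask & __GFP_IO);
}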


> > Threads that do have __GFP_FS in their gfp_mask can wait
> > for the flusher in various ways. Maybe the lock_page()
> > method can be simplified by having the flusher thread
> > unlock the page the moment it gets it, and then run the
> > normal flusher code?
>
> Well, _something_ has to pin the address_space. A single locked page
> will do.
>
> > The pageout code (in shrink_page_list) already unlocks
> > the page anyway before putting it back on the relevant
> > LRU list. It would be easy enough to skip that unlock
> > and let the flusher thread take care of it.
>
> Once that page is unlocked, we can't touch *mapping - its inode can be
> concurrently reclaimed. Although I guess the technique in
> handle_write_error() can be reused.

Nasty. That guy needs to be using lock_page_nosync().
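
Going from memory, the technique is just: retake the page lock and
re-check that the page is still attached to the mapping before touching
it. Sketch, with the nosync variant it should be using:

/*
 * Once the page has been unlocked, *mapping may already be gone, so
 * retake the page lock and verify the page still belongs to that
 * mapping before dereferencing it.  lock_page_nosync() matters because
 * plain lock_page() can call ->sync_page on the mapping, which the
 * caller here holds no reference against.
 */
static void handle_write_error(struct address_space *mapping,
			       struct page *page, int error)
{
	lock_page_nosync(page);
	if (page_mapping(page) == mapping)
		mapping_set_error(mapping, error);
	unlock_page(page);
}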
