Re: [PATCH 3.5 29/64] fs: buffer: move allocation failure loop into the allocator

From: Jan Kara
Date: Thu Oct 31 2013 - 10:48:56 EST


On Thu 31-10-13 10:00:08, Johannes Weiner wrote:
> On Mon, Oct 28, 2013 at 02:47:48PM +0000, Luis Henriques wrote:
> > 3.5.7.24 -stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Johannes Weiner <hannes@xxxxxxxxxxx>
> >
> > commit 84235de394d9775bfaa7fa9762a59d91fef0c1fc upstream.
> >
> > Buffer allocation has a very crude indefinite loop around waking the
> > flusher threads and performing global NOFS direct reclaim because it can
> > not handle allocation failures.
> >
> > The most immediate problem with this is that the allocation may fail due
> > to a memory cgroup limit, where flushers + direct reclaim might not make
> > any progress towards resolving the situation at all. Because unlike the
> > global case, a memory cgroup may not have any cache at all, only
> > anonymous pages but no swap. This situation will lead to a reclaim
> > livelock with insane IO from waking the flushers and thrashing unrelated
> > filesystem cache in a tight loop.
> >
> > Use __GFP_NOFAIL allocations for buffers for now. This makes sure that
> > any looping happens in the page allocator, which knows how to
> > orchestrate kswapd, direct reclaim, and the flushers sensibly. It also
> > allows memory cgroups to detect allocations that can't handle failure
> > and will allow them to ultimately bypass the limit if reclaim can not
> > make progress.
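
For reference, the "crude indefinite loop" being replaced here is the one in
__getblk_slow(); paraphrased (the fs/buffer.c of that era may differ in
detail), it is roughly:

	for (;;) {
		struct buffer_head *bh;
		int ret;

		bh = __find_get_block(bdev, block, size);
		if (bh)
			return bh;

		ret = grow_buffers(bdev, block, size);
		if (ret < 0)
			return NULL;
		if (ret == 0)
			/* wake the flushers, then do global NOFS direct reclaim */
			free_more_memory();
	}
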
So I was under the impression that __GFP_NOFAIL is going away, isn't it?
At least about a year ago there was an effort to remove its users, so we
ended up creating loops like the one above (and similar ones in jbd/jbd2) in
cases where handling the failure wasn't easily possible. And now it seems we
are going in the opposite direction... At least we have a steady flow of
patches guaranteed :)
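
Just to illustrate, the open-coded retries we ended up with look roughly
like the sketch below (purely illustrative; the names foo/foo_cache are
made up, and this is not the actual jbd/jbd2 code):

	/* Assumes <linux/slab.h> and <linux/backing-dev.h>; foo_cache is a
	 * hypothetical struct kmem_cache *. */
	static struct foo *foo_alloc_nofail(void)
	{
		struct foo *p;

		/* Hand-rolled "never fail": keep retrying outside the
		 * allocator, backing off briefly between attempts. */
		while (!(p = kmem_cache_zalloc(foo_cache, GFP_NOFS)))
			congestion_wait(BLK_RW_ASYNC, HZ/50);

		return p;
	}
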

Honza
> >
> > Reported-by: azurIt <azurit@xxxxxxxx>
> > Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> > Cc: Michal Hocko <mhocko@xxxxxxx>
> > Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> > Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> > Signed-off-by: Luis Henriques <luis.henriques@xxxxxxxxxxxxx>
> > ---
> > fs/buffer.c | 14 ++++++++++++--
> > mm/memcontrol.c | 2 ++
> > 2 files changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/fs/buffer.c b/fs/buffer.c
> > index 2c78739..2675e5a 100644
> > --- a/fs/buffer.c
> > +++ b/fs/buffer.c
> > @@ -957,9 +957,19 @@ grow_dev_page(struct block_device *bdev, sector_t block,
> >  	struct buffer_head *bh;
> >  	sector_t end_block;
> >  	int ret = 0;		/* Will call free_more_memory() */
> > +	gfp_t gfp_mask;
> >  
> > -	page = find_or_create_page(inode->i_mapping, index,
> > -		(mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE);
> > +	gfp_mask = mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS;
> > +	gfp_mask |= __GFP_MOVABLE;
> > +	/*
> > +	 * XXX: __getblk_slow() can not really deal with failure and
> > +	 * will endlessly loop on improvised global reclaim. Prefer
> > +	 * looping in the allocator rather than here, at least that
> > +	 * code knows what it's doing.
> > +	 */
> > +	gfp_mask |= __GFP_NOFAIL;
> > +
> > +	page = find_or_create_page(inode->i_mapping, index, gfp_mask);
> >  	if (!page)
> >  		return ret;
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 226b63e..953bf3c 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2405,6 +2405,8 @@ done:
> >  	return 0;
> >  nomem:
> >  	*ptr = NULL;
> > +	if (gfp_mask & __GFP_NOFAIL)
> > +		return 0;
> >  	return -ENOMEM;
> >  bypass:
> >  	*ptr = root_mem_cgroup;
> > --
> > 1.8.3.2
> >
--
Jan Kara <jack@xxxxxxx>
SUSE Labs, CR