Re: [RFC 1/1] fs/reiserfs/journal.c: Remove obsolete __GFP_NOFAIL
From: David Rientjes
Date: Tue Mar 25 2014 - 21:06:28 EST
On Sat, 22 Mar 2014, Dave Jones wrote:
> On Sat, Mar 22, 2014 at 10:55:24AM -0700, Andrew Morton wrote:
> > On Sat, 22 Mar 2014 13:32:07 -0400 tytso@xxxxxxx wrote:
> >
> > > On Sat, Mar 22, 2014 at 01:26:06PM -0400, tytso@xxxxxxx wrote:
> > > > > Well. Converting an existing retry-for-ever caller to GFP_NOFAIL is
> > > > > good. Adding new retry-for-ever code is not good.
> > >
> > > Oh, and BTW --- now that checkpatch.pl flags a warning whenever
> > > GFP_NOFAIL is used
> >
> > I don't know what the basis for this NOFAIL-is-going-away theory could
> > have been. What's the point in taking a centrally implemented piece of
> > logic and splattering its implementation out to tens of different
> > callsites?
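
(To make Andrew's point concrete: the "centrally implemented piece of
logic" is the difference between every caller open-coding a retry loop,
shown here with a made-up allocation, not any particular caller's code:

	/* open-coded retry-for-ever, invisible to the MM core */
	do {
		ptr = kmalloc(len, GFP_NOFS);
	} while (!ptr);

and a single annotated call whose looping lives inside the page
allocator:

	ptr = kmalloc(len, GFP_NOFS | __GFP_NOFAIL);

Both loop until the allocation succeeds; only the second tells the
allocator what the caller is actually doing.)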
>
> I wonder if some of that thinking came from this...
>
> commit dab48dab37d2770824420d1e01730a107fade1aa
> Author: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Date: Tue Jun 16 15:32:37 2009 -0700
>
> page-allocator: warn if __GFP_NOFAIL is used for a large allocation
>
> __GFP_NOFAIL is a bad fiction. Allocations _can_ fail, and callers should
> detect and suitably handle this (and not by lamely moving the infinite
> loop up to the caller level either).
>
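(What that commit added to the allocator slowpath is only a warning,
approximately the following; paraphrased from memory rather than quoted
verbatim from mm/page_alloc.c:

	if (unlikely(gfp_mask & __GFP_NOFAIL)) {
		/*
		 * __GFP_NOFAIL is not to be used in new code; callers
		 * should detect and handle allocation failures.  We
		 * especially don't want order > 1 allocations with
		 * __GFP_NOFAIL.
		 */
		WARN_ON_ONCE(order > 1);
	}

It warns; it does not make the flag go away.)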
It came from me pointing out that __GFP_NOFAIL requires __GFP_WAIT to
actually never fail in the page allocator's implementation. I wanted to
fix that; Andrew said nobody is currently doing
GFP_NOWAIT | __GFP_NOFAIL or GFP_ATOMIC | __GFP_NOFAIL, so let's warn
against new callers being added and hopefully eventually get rid of it.
In those cases we also don't invoke the oom killer, because we don't
have __GFP_FS, so we livelock.
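To make both failure modes concrete, the slowpath of that era behaves
roughly like this (a sketch from memory, not verbatim mm/page_alloc.c;
the two helpers are made up stand-ins):

	/* made-up stand-ins for direct reclaim and the oom killer */
	static struct page *reclaim_and_alloc(gfp_t gfp_mask, unsigned int order);
	static void invoke_oom_killer(gfp_t gfp_mask, unsigned int order);

	static struct page *slowpath_sketch(gfp_t gfp_mask, unsigned int order)
	{
		struct page *page;

		/* Without __GFP_WAIT we bail out before the retry loop
		 * ever looks at __GFP_NOFAIL, so NULL comes back anyway. */
		if (!(gfp_mask & __GFP_WAIT))
			return NULL;

		for (;;) {
			page = reclaim_and_alloc(gfp_mask, order);
			if (page)
				return page;

			/* Only __GFP_FS allocations may invoke the oom
			 * killer, which is what can actually free memory. */
			if (gfp_mask & __GFP_FS)
				invoke_oom_killer(gfp_mask, order);

			if (!(gfp_mask & __GFP_NOFAIL))
				return NULL;

			/* __GFP_NOFAIL without __GFP_FS: loop forever
			 * with nothing able to free memory, i.e. the
			 * livelock described above. */
		}
	}

So GFP_ATOMIC | __GFP_NOFAIL returns NULL despite the flag, and a
GFP_NOFS-style mask with __GFP_NOFAIL can spin forever if reclaim makes
no progress.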
The point is that new callers should not be added and that new code
should handle NULL correctly, not that we should run around converting
current users to open-coded infinite retries. Checkpatch should have
nothing to do with that.
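
To be explicit about what "handle NULL correctly" means, the pattern we
want from new code is just the ordinary one (illustrative; the struct
and function here are made up):

	struct foo {
		char *buf;
	};

	static int foo_setup(struct foo *f, size_t len)
	{
		f->buf = kmalloc(len, GFP_NOFS);
		if (!f->buf)
			return -ENOMEM;	/* propagate; don't loop, don't add __GFP_NOFAIL */
		return 0;
	}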