Re: [PATCH v2 2/4] mm/vmalloc: add support for __GFP_NOFAIL

From: Michal Hocko
Date: Thu Nov 25 2021 - 14:23:29 EST


On Thu 25-11-21 19:40:56, Uladzislau Rezki wrote:
> On Thu, Nov 25, 2021 at 9:48 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> >
> > On Wed 24-11-21 21:37:54, Uladzislau Rezki wrote:
> > > On Wed, Nov 24, 2021 at 09:43:12AM +0100, Michal Hocko wrote:
> > > > On Tue 23-11-21 17:02:38, Andrew Morton wrote:
> > > > > On Tue, 23 Nov 2021 20:01:50 +0100 Uladzislau Rezki <urezki@xxxxxxxxx> wrote:
> > > > >
> > > > > > On Mon, Nov 22, 2021 at 04:32:31PM +0100, Michal Hocko wrote:
> > > > > > > From: Michal Hocko <mhocko@xxxxxxxx>
> > > > > > >
> > > > > > > Dave Chinner has mentioned that some of the xfs code would benefit from
> > > > > > > kvmalloc support for __GFP_NOFAIL, because it has allocations that
> > > > > > > cannot fail and that do not fit into a single page.
> > > > >
> > > > > Perhaps we should tell xfs "no, do it internally". Because this is a
> > > > > rather nasty-looking thing - do we want to encourage other callsites to
> > > > > start using it?
> > > >
> > > > This is what xfs is likely going to do if we do not provide the
> > > > functionality. I just do not see why that would be a better outcome,
> > > > though. My long-term experience tells me that whenever we ignore
> > > > requirements from other subsystems, those requirements materialize in
> > > > some form in the end, in many cases done either suboptimally or
> > > > outright wrong. That might not be the case for xfs, as the quality of
> > > > the implementation there is high, but it is not the case in general.
> > > >
> > > > Even if people start using vmalloc(GFP_NOFAIL) out of laziness or for
> > > > any other stupid reason, then what? Is that something we should worry
> > > > about? Retrying within the allocator doesn't make things worse. In
> > > > fact, such abusers are easier to find by grep than custom retry loops
> > > > would be.
> > > >
> > > > [...]
> > > > > > > +	if (nofail) {
> > > > > > > +		schedule_timeout_uninterruptible(1);
> > > > > > > +		goto again;
> > > > > > > +	}
> > > > >
> > > > > The idea behind congestion_wait() is to prevent us from having to
> > > > > hard-wire delays like this. congestion_wait(1) would sleep for up to
> > > > > one millisecond, but will return earlier if reclaim events happened
> > > > > which make it likely that the caller can now proceed with the
> > > > > allocation event, successfully.
> > > > >
> > > > > However it turns out that congestion_wait() was quietly broken at the
> > > > > block level some time ago. We could perhaps resurrect the concept at
> > > > > another level - say by releasing congestion_wait() callers if an amount
> > > > > of memory newly becomes allocatable. This obviously asks for inclusion
> > > > > of zone/node/etc info from the congestion_wait() caller. But that's
> > > > > just an optimization - if the newly-available memory isn't useful to
> > > > > the congestion_wait() caller, they just fail the allocation attempts
> > > > > and wait again.
> > > >
> > > > vmalloc has two potential failure modes. Depleted memory and vmalloc
> > > > space. So there are two different events to wait for. I do agree that
> > > > schedule_timeout_uninterruptible is both ugly and very simple but do we
> > > > really need a much more sophisticated solution at this stage?
> > > >
> > > I would say there is at least one more. It occurs when users set their
> > > own range (start:end) in which to allocate. In that scenario we might
> > > never return to the user, because there might not be any free vmap
> > > space in the specified range.
> > >
> > > To address this, we could allow __GFP_NOFAIL only for the entire
> > > vmalloc address space, i.e. within VMALLOC_START:VMALLOC_END.
> >
> > How should we do that?
> >
> <snip>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index d2a00ad4e1dd..664935bee2a2 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3029,6 +3029,13 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
> 		return NULL;
> 	}
>
> +	if (gfp_mask & __GFP_NOFAIL) {
> +		if (start != VMALLOC_START || end != VMALLOC_END) {
> +			gfp_mask &= ~__GFP_NOFAIL;
> +			WARN_ONCE(1, "__GFP_NOFAIL is allowed only for entire vmalloc space.");
> +		}
> +	}

So the called function effectively ignores the flag, which could lead to
an actual failure, and that is something the caller has explicitly told
us not to do. I do not consider such an API great, to say the least.

> +
> 	if (vmap_allow_huge && !(vm_flags & VM_NO_HUGE_VMAP)) {
> 		unsigned long size_per_node;
> <snip>
>
> Or we could allow the __GFP_NOFAIL flag only in a high-level API, i.e.
> __vmalloc(), where a gfp mask can be passed. Because it uses the whole
> vmalloc address space, we do not need to check the range or other
> parameters like align, etc. This variant is preferable.
>
> But the problem is that there are internal functions, such as
> __vmalloc_node_range(), which are publicly available to kernel users.
> In that case we could add a big comment saying: the __GFP_NOFAIL flag
> can be used __only__ with the high-level API, i.e. __vmalloc().
>
> Any thoughts?

I dunno. I find it rather ugly. We can surely document that some APIs
shouldn't be used with __GFP_NOFAIL because they could result in an
endless loop, but changing the contract under the caller's feet is too
subtle and can cause other problems.

I am rather curious about other opinions, but at this moment this is
trying to handle a non-existent problem IMHO. vmalloc, and for that
matter other allocators, are not trying to be defensive in their APIs,
because we assume in-kernel users to be good citizens.
--
Michal Hocko
SUSE Labs