Re: [PATCH 1/3] kvfree_rcu: Allocate a page for a single argument

From: Paul E. McKenney
Date: Wed Feb 03 2021 - 14:38:29 EST


On Mon, Feb 01, 2021 at 12:47:55PM +0100, Michal Hocko wrote:
> On Fri 29-01-21 17:35:31, Uladzislau Rezki wrote:
> > On Fri, Jan 29, 2021 at 09:56:29AM +0100, Michal Hocko wrote:
> > > On Thu 28-01-21 19:02:37, Uladzislau Rezki wrote:
> > > [...]
> > > > From 0bdb8ca1ae62088790e0a452c4acec3821e06989 Mon Sep 17 00:00:00 2001
> > > > From: "Uladzislau Rezki (Sony)" <urezki@xxxxxxxxx>
> > > > Date: Wed, 20 Jan 2021 17:21:46 +0100
> > > > Subject: [PATCH v2 1/1] kvfree_rcu: Directly allocate page for single-argument
> > > > case
> > > >
> > > > Single-argument kvfree_rcu() must be invoked from sleepable contexts,
> > > > so we can directly allocate pages. Furthermore, the fallback in case
> > > > of page-allocation failure is the high-latency synchronize_rcu(), so it
> > > > makes sense to do these page allocations from the fastpath, and even to
> > > > permit limited sleeping within the allocator.
> > > >
> > > > This commit therefore allocates if needed on the fastpath using
> > > > GFP_KERNEL|__GFP_NORETRY.
> > >
> > > Yes, __GFP_NORETRY as a lightweight allocation mode should be fine. It
> > > is more robust than __GFP_NOWAIT against memory usage spikes. The caller
> > > is prepared to handle the failure, which is likely much less disruptive
> > > than the OOM killer or the potentially heavy reclaim of __GFP_RETRY_MAYFAIL.
> > >
> > > I cannot give you an ack as I am not familiar with the code, but this
> > > makes sense to me.
> > >
> > No problem, I can separate it. We can have a patch on top of what we have so
> > far. The patch only modifies the gfp_mask passed to __get_free_page():
> >
> > From ec2feaa9b7f55f73b3b17e9ac372151c1aab5ae0 Mon Sep 17 00:00:00 2001
> > From: "Uladzislau Rezki (Sony)" <urezki@xxxxxxxxx>
> > Date: Fri, 29 Jan 2021 17:16:03 +0100
> > Subject: [PATCH 1/1] kvfree_rcu: replace __GFP_RETRY_MAYFAIL by __GFP_NORETRY
> >
> > __GFP_RETRY_MAYFAIL is rather heavy from the reclaim point of
> > view and can therefore be time consuming. That is not optimal,
> > and there is no need to try so hard, because we have a fallback
> > path.
> >
> > __GFP_NORETRY, in its turn, can perform some light-weight
> > reclaim, but it fails quickly under high memory pressure or
> > low-memory conditions.
> >
> > In general there are four simple criteria we would like to
> > meet:
> > a) minimize hitting the fallback path;
> > b) avoid invoking the OOM killer;
> > c) issue a light-weight page request;
> > d) avoid dipping into the emergency reserves.
> >
> > Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
>
> Looks good to me. Feel free to add
> Acked-by: Michal Hocko <mhocko@xxxxxxxx>

Queued, thank you both!
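
To make the tradeoff concrete, here is a minimal sketch of the pattern being
tuned (the helper name and the surrounding structure are illustrative, not
the exact tree.c code): the single-argument path attempts a light-weight page
allocation, and if that attempt fails the caller ends up on the blocking
synchronize_rcu() fallback.

	/*
	 * Illustrative sketch only, not the actual kernel code path.
	 * The attempt is deliberately light-weight: limited direct
	 * reclaim (__GFP_NORETRY), no OOM killer, no emergency
	 * reserves (__GFP_NOMEMALLOC), and no warning on failure
	 * (__GFP_NOWARN).
	 */
	static struct kvfree_rcu_bulk_data *try_alloc_bulk_page(void)
	{
		return (struct kvfree_rcu_bulk_data *)
			__get_free_page(GFP_KERNEL | __GFP_NORETRY |
					__GFP_NOMEMALLOC | __GFP_NOWARN);
	}

	/* Caller side: a NULL result is acceptable. */
	bnode = try_alloc_bulk_page();
	if (!bnode)
		return false;	/* later falls back to synchronize_rcu() */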

Thanx, Paul

> > ---
> > kernel/rcu/tree.c | 14 +++++++++++++-
> > 1 file changed, 13 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 70ddc339e0b7..1e862120db9e 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -3489,8 +3489,20 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp,
> >  	bnode = get_cached_bnode(*krcp);
> >  	if (!bnode && can_alloc) {
> >  		krc_this_cpu_unlock(*krcp, *flags);
> > +
> > +		// __GFP_NORETRY - allows a light-weight direct reclaim,
> > +		// which helps minimize how often the fallback path is
> > +		// taken. It also forbids invoking the OOM killer, which
> > +		// is beneficial since we are about to release memory soon.
> > +		//
> > +		// __GFP_NOMEMALLOC - prevents dipping into the emergency
> > +		// memory reserves. Please note we have a fallback path.
> > +		//
> > +		// __GFP_NOWARN - an allocation is expected to fail under
> > +		// low-memory or high memory pressure scenarios, so do
> > +		// not warn about it.
> >  		bnode = (struct kvfree_rcu_bulk_data *)
> > -			__get_free_page(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOMEMALLOC | __GFP_NOWARN);
> > +			__get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
> >  		*krcp = krc_this_cpu_lock(flags);
> >  	}
> >
> > --
> > 2.20.1
> >
> > --
> > Vlad Rezki
>
> --
> Michal Hocko
> SUSE Labs
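
For completeness, the single-argument form being optimized here is used
roughly as follows (the struct and function names are illustrative only).
Because it may allocate a page and, on allocation failure, fall back to
synchronize_rcu(), it may only be called from a sleepable context:

	struct foo {
		int data;
		/* No rcu_head is needed for the single-argument form,
		 * which is why a page of pointers is allocated instead. */
	};

	static void destroy_foo(struct foo *fp)
	{
		/* Sleepable context required: may allocate and may block. */
		kvfree_rcu(fp);
	}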