Re: [RFC-PATCH 1/2] mm: Add __GFP_NO_LOCKS flag

From: Uladzislau Rezki
Date: Tue Aug 11 2020 - 05:18:15 EST


On Mon, Aug 10, 2020 at 09:25:25PM +0200, Michal Hocko wrote:
> On Mon 10-08-20 18:07:39, Uladzislau Rezki wrote:
> > > On Sun 09-08-20 22:43:53, Uladzislau Rezki (Sony) wrote:
> > > [...]
> > > > Limitations and concerns (Main part)
> > > > ====================================
> > > > The current memory-allocation interface presents the following
> > > > difficulties that this patch is designed to overcome:
> > > >
> > > > a) If built with CONFIG_PROVE_RAW_LOCK_NESTING, lockdep will
> > > > complain about a violation ("BUG: Invalid wait context") of the
> > > > nesting rules. It performs raw_spinlock vs. spinlock nesting
> > > > checks, i.e. it is not legal to acquire a spinlock_t while
> > > > holding a raw_spinlock_t.
> > > >
> > > > Internally, kfree_rcu() uses a raw_spinlock_t (in the rcu-dev branch),
> > > > whereas the "page allocator" internally uses a spinlock_t to
> > > > access its zones. The code can also be broken from a higher-level
> > > > point of view:
> > > > <snip>
> > > > raw_spin_lock(&some_lock);
> > > > kfree_rcu(some_pointer, some_field_offset);
> > > > <snip>
> > >
> > > Is there any fundamental problem to make zone raw_spin_lock?
> > >
> > Good point. Converting the regular spinlock to the raw_* variant could solve
> > the issue, and to me it seems partly reasonable. But there are other
> > questions if we do it:
> >
> > a) what to do with kswapd and the "wake-up path" that uses a sleepable lock:
> > wakeup_kswapd() -> wake_up_interruptible(&pgdat->kswapd_wait).
>
> If there is no RT friendly variant for waking up process from the atomic
> context then we might need to special case this for the RT tree.
>
I do not see one in the RT kernel. The waiting primitives (see wait.c)
use sleepable locks all over the file.
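
For the record, the chain I have in mind looks roughly like this (a
simplified sketch, the exact call path may differ slightly between trees):

<snip>
raw_spin_lock(&krcp->lock);            /* raw lock held around kfree_rcu() internals */
    __get_free_page(GFP_NOWAIT);
        -> wakeup_kswapd(zone, gfp_mask, order, classzone_idx);
            -> wake_up_interruptible(&pgdat->kswapd_wait);
                -> spin_lock_irqsave(&wq_head->lock, flags);  /* spinlock_t, sleeps on RT */
<snip>

i.e. even with a raw zone->lock, the wake-up path still ends up taking a
sleepable lock on RT.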

> > b) How will RT people react to it? I guess they will not be happy.
>
> zone->lock should be held for a very limited amount of time.
>
> > As I described before, calling __get_free_page(0) with 0 as the gfp argument
> > will solve (a). How correct is that? From my point of view, the logic
> > that bypasses the wakeup path should be explicitly defined.
>
> gfp_mask == 0 is GFP_NOWAIT (aka an atomic allocation request) which
> doesn't wake up kswapd. So if the wakeup is a problem then this would be
> a way to go.
>
What do you mean, Michal? A gfp_mask of 0 is not the same as GFP_NOWAIT:

#define GFP_NOWAIT (__GFP_KSWAPD_RECLAIM)

so GFP_NOWAIT does wake up kswapd. Or am I missing something? Please comment.
If we want to avoid waking kswapd, should we define something special?

#define GFP_NOWAKE_KSWAPD 0
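
As far as I understand it, the wakeup is keyed off that bit in the slow
path, roughly like this (a simplified sketch, not the exact mainline code):

<snip>
/* kswapd is only poked when the caller allows it */
if (gfp_mask & __GFP_KSWAPD_RECLAIM)
        wake_all_kswapds(order, gfp_mask, ac);
<snip>

so __get_free_page(0) would skip it, whereas GFP_NOWAIT would not.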

> > Or we can enter the allocator with (__GFP_HIGH|__GFP_ATOMIC), which bypasses
> > __GFP_KSWAPD_RECLAIM as well.
>
> This would be an alternative which consumes memory reserves. Is this
> really needed for the particular case?
>
No. That was just another example illustrating how to bypass
__GFP_KSWAPD_RECLAIM.
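
Something like this (illustrative only):

<snip>
/* no __GFP_KSWAPD_RECLAIM bit -> kswapd is not woken up,
 * but __GFP_HIGH|__GFP_ATOMIC dips into the memory reserves */
addr = __get_free_page(__GFP_HIGH | __GFP_ATOMIC);
<snip>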

> >
> > Any thoughts here? Please comment.
> >
> > Having the proposed flag will not hurt RT latency and will address all of the concerns.
> >
> > > > b) If built with CONFIG_PREEMPT_RT. Please note that in that case spinlock_t
> > > > is converted into a sleepable variant. Invoking the page allocator from
> > > > atomic contexts then leads to "BUG: scheduling while atomic".
> > >
> > > [...]
> > >
> > > > Proposal
> > > > ========
> > > > 1) Introduce a GFP_* flag that ensures the allocator returns NULL rather
> > > > than acquiring its own spinlock_t. Having such a flag will address the
> > > > limitations (a) and (b) described above. It will also make the kfree_rcu()
> > > > code common for the RT and regular kernels, cleaner, with less corner-case
> > > > handling, and will reduce the code size.
> > >
> > > I do not think this is a good idea. Single-purpose gfp flags that tend
> > > to heavily depend on the current implementation of the page allocator
> > > have turned out to be problematic. Users used to misunderstand their
> > > meaning, resulting in a lot of abuse which was not trivial to remove.
> > > This flag seems to fall into exactly this sort of category. If there is a
> > > nesting problem then that should be addressed rather than exporting a new
> > > flag, IMHO. If that is absolutely not possible for some reason then
> > > we can try to figure out what to do, but that really needs a very strong
> > > justification.
> > >
> > The problem that I see is that we cannot use the page allocator from atomic
> > contexts, which is our case:
> >
> > <snip>
> > local_irq_save(flags) or preempt_disable() or raw_spinlock();
> > __get_free_page(GFP_ATOMIC);
> > <snip>
> >
> > So if we can convert the page allocator to a raw_* lock, it will be appreciated,
> > at least from our side, IMHO, though not from the RT one. But, as I stated above,
> > we need to sort out the raised questions if the conversion is done.
> >
> > What is your view?
>
> To me it would make more sense to support atomic allocations also for
> the RT tree. Having both GFP_NOWAIT and GFP_ATOMIC which do not really
> work for atomic context in RT sounds subtle and wrong.
>
Same view on it.

Thank you for your comments!

--
Vlad Rezki