Re: [PATCH tip/core/rcu 14/15] rcu/tree: Allocate a page when caller is preemptible

From: Paul E. McKenney
Date: Wed Sep 30 2020 - 19:22:24 EST


On Wed, Sep 30, 2020 at 10:41:39AM +0200, Michal Hocko wrote:
> On Tue 29-09-20 18:53:27, Paul E. McKenney wrote:
> > On Tue, Sep 29, 2020 at 02:07:56PM +0200, Michal Hocko wrote:
> > > On Mon 28-09-20 16:31:01, paulmck@xxxxxxxxxx wrote:
> > > [...]
> >
> > Apologies for the delay, but today has not been boring.
> >
> > > > This commit therefore uses preemptible() to determine whether allocation
> > > > is possible at all for double-argument kvfree_rcu().
> > >
> > > This deserves a comment, because GFP_ATOMIC is possible in many
> > > !preemptible() contexts. It is raw_spin_lock, NMIs, and likely a few
> > > others that are the problem. You are taking a conservative approach,
> > > which is fine, but it would be good to articulate that explicitly.
> >
> > Good point, and so I have added the following as a header comment to
> > the add_ptr_to_bulk_krc_lock() function:
> >
> > // Record ptr in a page managed by krcp, with the pre-krc_this_cpu_lock()
> > // state specified by flags. If can_sleep is true, the caller must
> > // be schedulable and not be holding any locks or mutexes that might be
> > // acquired by the memory allocator or anything that it might invoke.
> > // If !can_sleep, then if !preemptible() no allocation will be undertaken,
> > // otherwise the allocation will use GFP_ATOMIC to avoid the remainder of
> > // the aforementioned deadlock possibilities. Returns true iff ptr was
> > // successfully recorded, else the caller must use a fallback.
>
> OK, not trivial to follow but at least verbose enough to understand the
> intention after some mulling. Definitely an improvement, thanks!

Glad it helped! With some luck, perhaps it will improve with time...

> [...]
> > > > -kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
> > > > +add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp,
> > > > +	unsigned long *flags, void *ptr, bool can_sleep)
> > > >  {
> > > >  	struct kvfree_rcu_bulk_data *bnode;
> > > > +	bool can_alloc_page = preemptible();
> > > > +	gfp_t gfp = (can_sleep ? GFP_KERNEL | __GFP_RETRY_MAYFAIL : GFP_ATOMIC) | __GFP_NOWARN;
> > >
> > > This is quite confusing IMHO, at least without further explanation.
> > > can_sleep is not so much about sleeping as it is about the reclaim
> > > recursion, AFAIU your changelog, right?
> >
> > No argument on it being confusing, and I hope that the added header
> > comment helps. But specifically, can_sleep==true is a promise by the
> > caller to be schedulable and not to be holding any lock, mutex, or
> > other resource that might be acquired by the memory allocator or by
> > anything that the allocator might invoke, including, to your point,
> > the reclaim logic as but one example.
> >
> > The only way that can_sleep==true is if this function was invoked due
> > to a call to single-argument kvfree_rcu(), which must be schedulable
> > because its fallback is to invoke synchronize_rcu().
>
> OK. I have to say that it is still not clear to me whether this call
> path can be invoked from the memory-reclaim context. If it can, then
> you need __GFP_NOMEMALLOC as well.

Right now the restriction is that single-argument (AKA can_sleep==true)
kvfree_rcu() cannot be invoked from memory reclaim context.

But would adding __GFP_NOMEMALLOC to the can_sleep==true GFP_ flags
allow us to remove this restriction? If so, I will queue a separate
patch making this change. The improved ease of use would be well
worth it, if I understand correctly (ha!!!).
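If it would, the change itself looks to be a one-liner in
add_ptr_to_bulk_krc_lock(). Something like the following, which is only
a sketch and assumes that the flag is needed only on the can_sleep==true
path:

	/* Sketch: forbid dipping into the emergency reserves on the
	 * sleepable path, so that reclaim-context callers cannot
	 * recurse into those reserves. */
	gfp_t gfp = (can_sleep
		     ? GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOMEMALLOC
		     : GFP_ATOMIC) | __GFP_NOWARN;

My understanding is that __GFP_NOMEMALLOC only forbids use of the
emergency reserves, so it should be harmless for callers that are
nowhere near reclaim.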

> [...]
>
> > > What is the point of calling kmalloc for a PAGE_SIZE object? Wouldn't
> > > using the page allocator directly be better?
> >
> > Well, you guys gave me considerable heat about abusing internal allocator
> > interfaces, and kmalloc() and kfree() seem to be about as non-internal
> > as you can get and still be invoking the allocator. ;-)
>
> alloc_pages(), or alternatively __get_free_pages(), is the normal page
> allocator interface to use for page-granular allocations. kmalloc() is
> for finer-grained allocations.

OK, in the short term, both work, but I have queued a separate patch
making this change and recording the tradeoffs. This is not yet a
promise to push this patch, but it is a promise not to lose this part
of the picture. Please see below.

You mentioned alloc_pages(). I reverted to __get_free_page(), as in the
patch below, but alloc_pages() of course looks nicer. What are the
tradeoffs between __get_free_page() and alloc_pages()?
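
To be sure that we are comparing the same things, here is the
distinction as I understand it, where the alloc_pages() variant is a
sketch rather than what is queued below:

	/* Queued below: returns the page's kernel virtual address
	 * directly, so no highmem, but also no conversion step. */
	bnode = (struct kvfree_rcu_bulk_data *)__get_free_page(gfp);

	/* Sketch of an alloc_pages() equivalent: returns a struct page
	 * that must be converted with page_address() before use. */
	struct page *page = alloc_pages(gfp, 0);

	bnode = page ? page_address(page) : NULL;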

Thanx, Paul

------------------------------------------------------------------------

commit 490b638d7c241ac06cee168ccf8688bb8b872478
Author: Paul E. McKenney <paulmck@xxxxxxxxxx>
Date:   Wed Sep 30 16:16:39 2020 -0700

    kvfree_rcu(): Switch from kmalloc/kfree to __get_free_page/free_page.

    The advantages of using kmalloc() and kfree() are a possible small speedup
    on CONFIG_SLAB=y systems, avoiding the allocation-side cast, and use of
    more-familiar API members. The advantages of using __get_free_page()
    and free_page() are a possible reduction in fragmentation and direct
    access to the buddy allocator.

    To help settle the question as to which to use, this commit switches
    from kmalloc() and kfree() to __get_free_page() and free_page().

    Suggested-by: Michal Hocko <mhocko@xxxxxxxx>
    Suggested-by: "Uladzislau Rezki (Sony)" <urezki@xxxxxxxxx>
    Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 2886e81..242f0f0 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3225,7 +3225,8 @@ static void kfree_rcu_work(struct work_struct *work)
 			bkvhead[i] = NULL;
 			krc_this_cpu_unlock(krcp, flags);
 
-			kfree(bkvhead[i]);
+			if (bkvhead[i])
+				free_page((unsigned long)bkvhead[i]);
 
 			cond_resched_tasks_rcu_qs();
 		}
@@ -3378,7 +3379,7 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp,
 		bnode = get_cached_bnode(*krcp);
 		if (!bnode && can_alloc_page) {
 			krc_this_cpu_unlock(*krcp, *flags);
-			bnode = kmalloc(PAGE_SIZE, gfp);
+			bnode = (struct kvfree_rcu_bulk_data *)__get_free_page(gfp);
 			*krcp = krc_this_cpu_lock(flags);
 		}