Re: [RFC-PATCH 2/4] mm: Add __rcu_alloc_page_lockless() func.
From: Uladzislau Rezki
Date: Tue Oct 06 2020 - 18:25:37 EST
On Mon, Oct 05, 2020 at 05:41:00PM +0200, Michal Hocko wrote:
> On Mon 05-10-20 17:08:01, Uladzislau Rezki wrote:
> > On Fri, Oct 02, 2020 at 11:05:07AM +0200, Michal Hocko wrote:
> > > On Fri 02-10-20 09:50:14, Mel Gorman wrote:
> > > > On Fri, Oct 02, 2020 at 09:11:23AM +0200, Michal Hocko wrote:
> > > > > On Thu 01-10-20 21:26:26, Uladzislau Rezki wrote:
> > > > > > >
> > > > > > > No, I meant going back to the idea of a new gfp flag, but adjusting the implementation
> > > > > > > in the allocator (different from what you posted in the previous version) so that it
> > > > > > > only looks at the flag after it tries to allocate from the pcplist and finds it
> > > > > > > empty. So, no inventing of new page allocator entry points or checks such as
> > > > > > > the one you wrote above, but adding the new gfp flag in a way that doesn't
> > > > > > > affect the existing fast paths.
> > > > > > >
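Just to be sure I read the idea correctly, below is a rough sketch of such a
check (assuming a hypothetical __GFP_NO_LOCKS bit and an extra gfp_mask
parameter; this is not the actual patch):

<snip>
static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
			unsigned int alloc_flags, struct per_cpu_pages *pcp,
			struct list_head *list, gfp_t gfp_mask)
{
	struct page *page;

	do {
		if (list_empty(list)) {
			/*
			 * The pcplist is empty. A lockless caller gives up
			 * here instead of refilling the list under zone->lock.
			 */
			if (gfp_mask & __GFP_NO_LOCKS)
				return NULL;

			pcp->count += rmqueue_bulk(zone, 0, pcp->batch, list,
					migratetype, alloc_flags);
			if (unlikely(list_empty(list)))
				return NULL;
		}

		page = list_first_entry(list, struct page, lru);
		list_del(&page->lru);
		pcp->count--;
	} while (check_new_pcp(page));

	return page;
}
<snip>

That way the new bit is only looked at once the per-cpu list is found empty,
and the existing fast path stays as it is.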
> > > > > > OK. Now I see. Please have a look at the patch below, so we fully understand
> > > > > > each other, and tell me whether it is close to your view or not:
> > > > > >
> > > > > > <snip>
> > > > > > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > > > > > index c603237e006c..7e613560a502 100644
> > > > > > --- a/include/linux/gfp.h
> > > > > > +++ b/include/linux/gfp.h
> > > > > > @@ -39,8 +39,9 @@ struct vm_area_struct;
> > > > > > #define ___GFP_HARDWALL 0x100000u
> > > > > > #define ___GFP_THISNODE 0x200000u
> > > > > > #define ___GFP_ACCOUNT 0x400000u
> > > > > > +#define ___GFP_NO_LOCKS 0x800000u
> > > > >
> > > > > Even if a new gfp flag gains sufficient traction and support, I am
> > > > > _strongly_ opposed to consuming another flag for that. Bit space is
> > > > > limited.
> > > >
> > > > That is definitely true. I'm not happy with the GFP flag at all; the
> > > > comment is at best a damage-limiting move. It would still be better for
> > > > a memory pool to be reserved and sized for critical allocations.
> > >
> > > Completely agreed. The only existing use case is so special cased that a
> > > dedicated pool is not only easier to maintain but should also be much
> > > better tuned for the specific workload. Something not really feasible
> > > with the allocator.
> > >
> > > > > Besides that we certainly do not want to allow craziness like
> > > > > __GFP_NO_LOCK | __GFP_RECLAIM (and similar), do we?
> > > >
> > > > That would deserve to be taken to a dumpster and set on fire. The flag
> > > > combination could be checked in the allocator, but the allocator fast
> > > > paths are bad enough already.
> > >
> > > If a new allocation/gfp mode is absolutely necessary then I believe that
> > > the most reasonable way forward would be
> > > #define GFP_NO_LOCK ((__force gfp_t)0)
> > >
> > Agreed, even though I see that some code would have to be adjusted for it. There are
> > a few users of __get_free_page(0), so that needs to be double checked:
>
> Yes, I believe I have pointed that out in the previous discussion.
>
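Right. Just to illustrate the concern with existing callers (an example of
the semantic change only, not an audit):

<snip>
	/* Today a caller may legitimately pass 0, i.e. "no special gfp bits": */
	unsigned long addr = __get_free_page(0);

	/*
	 * With GFP_NO_LOCK defined as ((__force gfp_t)0) the very same call
	 * would silently turn into a lockless, fail-if-the-pcplist-is-empty
	 * allocation, so such callers have to be found and converted first.
	 */
<snip>
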
OK. I spent more time on it. A passed gfp_mask can be adjusted on entry; that
adjustment depends on gfp_allowed_mask, which can change at run-time. For example,
during early boot it excludes the __GFP_RECLAIM|__GFP_IO|__GFP_FS flags, which is
exactly GFP_KERNEL. So GFP_KERNEL is adjusted on entry and becomes 0 during the
early boot phase.
Here is how to distinguish the two cases:
<snip>
+	/*
+	 * Check for an explicitly passed GFP_NO_LOCK (0) before applying
+	 * gfp_allowed_mask below: gfp_allowed_mask changes at run-time and
+	 * can make GFP_KERNEL become zero as well during early boot.
+	 */
+	if (!gfp_mask)
+		alloc_flags |= ALLOC_NO_LOCKS;
+
gfp_mask &= gfp_allowed_mask;
alloc_mask = gfp_mask;
if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
<snip>
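For illustration only (not part of the patch), this is what happens during the
early boot phase without that ordering, given that gfp_allowed_mask starts out
as GFP_BOOT_MASK:

<snip>
	/*
	 * Early boot: gfp_allowed_mask == GFP_BOOT_MASK, i.e. the
	 * __GFP_RECLAIM|__GFP_IO|__GFP_FS bits are not allowed yet.
	 */
	gfp_t gfp_mask = GFP_KERNEL;	/* __GFP_RECLAIM|__GFP_IO|__GFP_FS */

	gfp_mask &= gfp_allowed_mask;	/* becomes 0 during early boot */

	/*
	 * From here on a regular GFP_KERNEL request is indistinguishable
	 * from an explicitly passed GFP_NO_LOCK (0), which is why the
	 * ALLOC_NO_LOCKS check above is done before the masking.
	 */
<snip>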
> >
> > Apart from that, there is post_alloc_hook(), which is called from prep_new_page().
> > If "debug page alloc" is enabled, it maps a page for debug purposes by invoking kernel_map_pages().
> > __kernel_map_pages() is ARCH specific. For example, the powerpc variant uses sleepable locks,
> > which can easily be converted to a raw variant.
>
> Yes, there are likely more surprises like that. I am not sure about
> kasan, page owner (which depends on the stack unwinder) and others which
> hook into this path.
>
I have checked kasan_alloc_pages() and kernel_poison_pages(); both are OK,
at least I did not find any locking there. As for set_page_owner(), it
requires more attention, since it uses arch-specific unwind logic. Though
I have spent some time on it and so far have not noticed anything problematic.
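For reference, the call chain I have been looking at is (mm/page_alloc.c, just
a summary of the above, not a patch):

<snip>
	/*
	 * prep_new_page()
	 *   post_alloc_hook()
	 *     kernel_map_pages()     - __kernel_map_pages() is arch specific,
	 *                              e.g. powerpc takes sleepable locks
	 *     kasan_alloc_pages()    - no locking found so far
	 *     kernel_poison_pages()  - no locking found so far
	 *     set_page_owner()       - depends on the arch unwinder, needs a
	 *                              closer look
	 */
<snip>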
--
Vlad Rezki