Re: [PATCH v2] mm: hugetlb: optionally allocate gigantic hugepages using cma
From: Roman Gushchin
Date: Tue Mar 10 2020 - 13:58:25 EST
On Tue, Mar 10, 2020 at 06:39:51PM +0100, Michal Hocko wrote:
> On Tue 10-03-20 10:30:56, Roman Gushchin wrote:
> > On Tue, Mar 10, 2020 at 10:01:21AM +0100, Michal Hocko wrote:
> > > On Mon 09-03-20 17:25:24, Roman Gushchin wrote:
> > > [...]
> > > > 2) Run-time allocations of gigantic hugepages are performed using the
> > > > cma allocator and the dedicated cma area
> > >
> > > [...]
> > > > @@ -1237,6 +1246,23 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
> > > > {
> > > > unsigned long nr_pages = 1UL << huge_page_order(h);
> > > >
> > > > + if (IS_ENABLED(CONFIG_CMA) && hugetlb_cma[0]) {
> > > > + struct page *page;
> > > > + int nid;
> > > > +
> > > > + for_each_node_mask(nid, *nodemask) {
> > > > + if (!hugetlb_cma[nid])
> > > > + break;
> > > > +
> > > > + page = cma_alloc(hugetlb_cma[nid], nr_pages,
> > > > + huge_page_order(h), true);
> > > > + if (page)
> > > > + return page;
> > > > + }
> > > > +
> > > > + return NULL;
> > >
> > > Is there any strong reason why the allocation cannot fall back to the
> > > non-CMA allocator when the cma areas are depleted?
> >
> > The reason is that gigantic pages allocated using cma require special
> > handling on releasing. It's solvable by using an additional page flag,
> > but because the current code usually doesn't work except for a short
> > time just after the system starts, I don't think it's worth it.
>
> I am not deeply familiar with cma TBH, but cma_release() seems to be
> documented to return false if the page doesn't belong to the cma area,
> so the free path can try cma_release() and fall back to the regular
> free, no?
Good point! Then the fallback doesn't add too much complexity, so I'll
add it in the next version.
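
Something like the following untested sketch for the free path, relying
on cma_release() returning false when the page doesn't belong to the
given cma area (it also returns false for a NULL area, so nodes without
a hugetlb_cma[] area are handled too):

static void free_gigantic_page(struct page *page, unsigned int order)
{
	/*
	 * If the page wasn't allocated using the cma allocator,
	 * cma_release() returns false, and we fall back to the
	 * regular contig free.
	 */
	if (IS_ENABLED(CONFIG_CMA) &&
	    cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
		return;

	free_contig_range(page_to_pfn(page), 1 << order);
}

On the allocation side it should then be enough to drop the "return NULL"
above and fall through to the existing alloc_contig_pages() call when all
per-node cma areas are depleted.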
Thanks!