Re: [PATCH 2/2] mm/hugetlb: pass correct order_per_bit to cma_declare_contiguous_nid
From: Frank van der Linden
Date: Thu Apr 04 2024 - 18:02:54 EST
Rushing is never good, of course, but see my reply to David - while
hugetlb page sizes smaller than HUGETLB_PAGE_ORDER do exist, that's not
the issue in that particular code path.
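For reference, the relevant call in hugetlb_cma_reserve() ends up
looking roughly like this after the fix (paraphrased from the patch
description below, not the exact hunk):

	res = cma_declare_contiguous_nid(0, size, 0,
			PAGE_SIZE << HUGETLB_PAGE_ORDER, /* alignment */
			HUGETLB_PAGE_ORDER, /* order_per_bit, was 0 */
			false, name, &hugetlb_cma[nid], nid);

Both the alignment and (now) order_per_bit in this path are derived
from HUGETLB_PAGE_ORDER, independent of any smaller hugetlb page sizes
an architecture may also support.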
The only restriction for backports is, I think, that the two patches
need to go together.
I have backported them to 6.6 (a clean apply) and to 5.10, which
doesn't have hugetlb page demotion, so it can actually pass the full
1G order as order_per_bit. That works fine if you also apply the CMA
align check fix, but would fail otherwise.
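To put some rough numbers on the bitmap-size point from the commit
message (assuming x86-64 with 4k base pages): CMA needs one bitmap bit
per (1 << order_per_bit) pages, so a 1G area tracked with
order_per_bit = 0 needs 256Ki bits (32KiB) of bitmap, versus 512 bits
(64 bytes) with order_per_bit = HUGETLB_PAGE_ORDER (9), or a single
bit when the full 1G order (18) is used as in the 5.10 backport.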
- Frank
On Thu, Apr 4, 2024 at 1:52 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Thu, Apr 04, 2024 at 10:13:21PM +0200, David Hildenbrand wrote:
> > On 04.04.24 18:25, Frank van der Linden wrote:
> > > The hugetlb_cma code passes 0 in the order_per_bit argument to
> > > cma_declare_contiguous_nid (the alignment, computed using the
> > > page order, is correctly passed in).
> > >
> > > This causes a bit in the cma allocation bitmap to always represent
> > > a 4k page, making the bitmaps potentially very large, and slower.
> > >
> > > So, correctly pass in the order instead.
> > >
> > > Signed-off-by: Frank van der Linden <fvdl@xxxxxxxxxx>
> > > Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
> > > Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
> >
> > It might be suboptimal, but do we call it a "BUG" that needs
> > "fixing"? I know, controversial :)
>
> We probably should not rush a stable backport, especially given your
> next comment on page sizes on arm.