Re: [PATCH 2/2] mm/hugetlb: pass correct order_per_bit to cma_declare_contiguous_nid
From: Frank van der Linden
Date: Thu Apr 04 2024 - 17:58:35 EST
On Thu, Apr 4, 2024 at 1:17 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Thu, 4 Apr 2024 16:25:15 +0000 Frank van der Linden <fvdl@xxxxxxxxxx> wrote:
>
> > The hugetlb_cma code passes 0 in the order_per_bit argument to
> > cma_declare_contiguous_nid (the alignment, computed using the
> > page order, is correctly passed in).
> >
> > This causes a bit in the cma allocation bitmap to always represent
> > a 4k page, making the bitmaps potentially very large, and slower.
> >
> > So, correctly pass in the order instead.
>
> Ditto. Should we backport this? Can we somehow quantify "potentially
> very", and understand under what circumstances this might occur?
It would create bitmaps that are pretty big. E.g. with a 4k page
size on x86, hugetlb_cma=64G means a bitmap size of (64G / 4k) /
8 == 2M. With HUGETLB_PAGE_ORDER as order_per_bit, as intended, this
would be (64G / 2M) / 8 == 4k. So, that's quite a difference :)
Also, this restricted the hugetlb_cma area to ((PAGE_SIZE <<
MAX_PAGE_ORDER) * 8) * PAGE_SIZE (e.g. 128G on x86), since
bitmap_alloc uses normal page allocation, and is thus restricted by
MAX_PAGE_ORDER. Specifying anything above that would fail the CMA
initialization.
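The same kind of sketch for that cap (PAGE_SIZE == 4096 and
MAX_PAGE_ORDER == 10 are assumed x86 values; current kernels use 10 as
the default MAX_PAGE_ORDER):

#include <stdio.h>

#define PAGE_SIZE	4096ULL	/* assumed x86 value */
#define MAX_PAGE_ORDER	10	/* assumed current default */

int main(void)
{
	/* Largest bitmap one normal page allocation can hold, in bytes. */
	unsigned long long max_bitmap = PAGE_SIZE << MAX_PAGE_ORDER; /* 4M */
	/* With order_per_bit == 0, each bit covers PAGE_SIZE bytes. */
	unsigned long long max_area = max_bitmap * 8 * PAGE_SIZE;

	printf("%llu GiB\n", max_area >> 30);	/* prints 128 */
	return 0;
}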
- Frank