Re: [PATCH 2/2] mm/hugetlb: pass correct order_per_bit to cma_declare_contiguous_nid
From: Andrew Morton
Date: Thu Apr 04 2024 - 18:21:45 EST
On Thu, 4 Apr 2024 15:02:34 -0700 Frank van der Linden <fvdl@xxxxxxxxxx> wrote:
> Rushing is never good, of course, but see my reply to David - while
> smaller hugetlb page sizes than HUGETLB_PAGE_ORDER exist, that's not
> the issue in that particular code path.
>
> The only restriction for backports is, I think, that the two patches
> need to go together.
>
> I have backported them to 6.6 (which was just a clean apply), and
> 5.10, which doesn't have hugetlb page demotion, so it actually can
> pass the full 1G as order_per_bit. That works fine if you also apply
> the CMA align check fix, but would fail otherwise.
OK, thanks. I added cc:stable to both patches and added this:
: It would create bitmaps that would be pretty big. E.g. for a 4k page
: size on x86, hugetlb_cma=64G would mean a bitmap size of (64G / 4k) / 8
: == 2M. With HUGETLB_PAGE_ORDER as order_per_bit, as intended, this
: would be (64G / 2M) / 8 == 4k. So, that's quite a difference.
:
: Also, this restricted the hugetlb_cma area to ((PAGE_SIZE <<
: MAX_PAGE_ORDER) * 8) * PAGE_SIZE (e.g. 128G on x86), since
: bitmap_alloc uses normal page allocation, and is thus restricted by
: MAX_PAGE_ORDER. Specifying anything above that would fail the CMA
: initialization.
to the [2/2] changelog.
For extra test & review I'll leave them in mm-[un]stable so they go
into mainline for 6.10-rc1 which will then trigger the backporting
process. This can of course all be altered...