Re: [PATCH 2/2] mm/hugetlb: pass correct order_per_bit to cma_declare_contiguous_nid

From: David Hildenbrand
Date: Mon Apr 08 2024 - 04:15:56 EST


On 04.04.24 23:44, Frank van der Linden wrote:
On Thu, Apr 4, 2024 at 1:13 PM David Hildenbrand <david@xxxxxxxxxx> wrote:

On 04.04.24 18:25, Frank van der Linden wrote:
The hugetlb_cma code passes 0 in the order_per_bit argument to
cma_declare_contiguous_nid (the alignment, computed using the
page order, is correctly passed in).

This causes a bit in the cma allocation bitmap to always represent
a 4k page, making the bitmaps potentially very large, and slower to scan.
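As a rough, illustrative calculation (not part of the patch): assume a
hypothetical 1 GiB per-node hugetlb CMA area on x86-64, with 4 KiB base
pages and HUGETLB_PAGE_ORDER == 9 (2 MiB). The small userspace sketch
below only does the arithmetic for how order_per_bit scales the bitmap:

/* Illustration only: how order_per_bit scales the CMA bitmap. */
#include <stdio.h>

int main(void)
{
	unsigned long area_pages = (1UL << 30) / 4096;	/* 1 GiB in 4 KiB pages */

	/* order_per_bit == 0: one bit per 4 KiB page */
	printf("order_per_bit 0: %lu bits (%lu KiB)\n",
	       area_pages, area_pages / 8 / 1024);

	/* order_per_bit == 9: one bit per 2 MiB chunk */
	printf("order_per_bit 9: %lu bits (%lu bytes)\n",
	       area_pages >> 9, (area_pages >> 9) / 8);

	return 0;
}

That is 262144 bits (32 KiB) versus 512 bits (64 bytes) for this example;
for hugetlb_cma= reservations of tens of GiB per node the gap grows
proportionally.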

So, correctly pass in the order instead.

Signed-off-by: Frank van der Linden <fvdl@xxxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")

It might be suboptimal, but do we call it a "BUG" that needs "fixing"? I
know, controversial :)

---
mm/hugetlb.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 23ef240ba48a..6dc62d8b2a3a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7873,9 +7873,9 @@ void __init hugetlb_cma_reserve(int order)
 		 * huge page demotion.
 		 */
 		res = cma_declare_contiguous_nid(0, size, 0,
-						PAGE_SIZE << HUGETLB_PAGE_ORDER,
-						0, false, name,
-						&hugetlb_cma[nid], nid);
+						PAGE_SIZE << HUGETLB_PAGE_ORDER,
+						HUGETLB_PAGE_ORDER, false, name,
+						&hugetlb_cma[nid], nid);
 		if (res) {
 			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
 				res, nid);
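For readers without mm/cma.c open, the arguments of the changed call line
up with cma_declare_contiguous_nid()'s parameters roughly as annotated
below (annotation only, not part of the patch; parameter names taken from
the prototype in mm/cma.c at the time of writing):

/*
 * Argument -> parameter mapping for the call above:
 *
 *   0                                -> base (let CMA pick the range)
 *   size                             -> size
 *   0                                -> limit (no upper address bound)
 *   PAGE_SIZE << HUGETLB_PAGE_ORDER  -> alignment
 *   HUGETLB_PAGE_ORDER               -> order_per_bit (previously 0)
 *   false                            -> fixed
 *   name, &hugetlb_cma[nid], nid     -> name, res_cma, nid
 */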

... I'm afraid this is not completely correct.

For example, on arm64, HUGETLB_PAGE_ORDER is essentially PMD_ORDER.

... but we do support smaller hugetlb sizes than that (cont-pte hugetlb
size is 64 KiB, not 2 MiB -- PMD -- on a 4k kernel)

Right, smaller hugetlb page sizes exist. But, the value here is not
intended to represent the minimum hugetlb page size - it's the minimum
hugetlb page size that we can demote a CMA-allocated hugetlb page to.
See:

a01f43901cfb ("hugetlb: be sure to free demoted CMA pages to CMA")

So, this just restricts demotion of the gigantic, CMA-allocated pages
to HUGETLB_PAGE_ORDER-sized chunks.
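Put differently (a sketch of the invariant, not code from this series; the
helper name below is made up for illustration): CMA hands memory back at
(1 << order_per_bit)-page granularity, so order_per_bit must not exceed
the smallest order that is ever released back to the area. Since demotion
of CMA-allocated gigantic pages is capped at HUGETLB_PAGE_ORDER, that
smallest order is HUGETLB_PAGE_ORDER itself.

#include <stdbool.h>

/*
 * Sketch only, not kernel code: a region released back to a CMA area
 * can be represented in its bitmap only if it starts and ends on
 * (1 << order_per_bit) page boundaries. A demoted 2 MiB (order 9)
 * page therefore still frees cleanly with order_per_bit == 9, while
 * anything smaller could not be tracked.
 */
static bool release_fits_bitmap(unsigned long start_pfn,
				unsigned long nr_pages,
				unsigned int order_per_bit)
{
	unsigned long granule = 1UL << order_per_bit;

	return (start_pfn % granule == 0) && (nr_pages % granule == 0);
}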

Right, now my memory comes back. In v1 of that patch set, Mike did support demoting to any smaller hugetlb order.

And because that smallest hugetlb order is not known when we call cma_declare_contiguous_nid(), he used to pass "0" for the order.

In v2, he simplified that and limited it to HUGETLB_PAGE_ORDER. He didn't update the order here, though.

Acked-by: David Hildenbrand <david@xxxxxxxxxx>


Note that I don't think any of these patches are CC-stable material.

--
Cheers,

David / dhildenb