On Thu, Apr 4, 2024 at 1:13 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
On 04.04.24 18:25, Frank van der Linden wrote:
The hugetlb_cma code passes 0 in the order_per_bit argument to
cma_declare_contiguous_nid (the alignment, computed using the
page order, is correctly passed in).
This causes each bit in the CMA allocation bitmap to represent a 4K
page, making the bitmaps potentially very large and slower to operate
on. So, pass in the correct order instead.
Signed-off-by: Frank van der Linden <fvdl@xxxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
It might be suboptimal, but do we call it a "BUG" that needs "fixing"?
I know, controversial :)
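(Side note on the bitmap-size point above: in mm/cma.c, the bitmap
length is derived from order_per_bit as sketched below; the sizes in
the comment are illustrative.)

/*
 * Sketch, based on mm/cma.c: one bitmap bit covers 2^order_per_bit
 * base pages. For a 1 GiB per-node CMA area with order_per_bit == 0,
 * that is 262144 bits (32 KiB of bitmap); with order_per_bit ==
 * HUGETLB_PAGE_ORDER (9 on x86-64), it is 512 bits (64 bytes).
 */
static unsigned long cma_bitmap_maxno(struct cma *cma)
{
	return cma->count >> cma->order_per_bit;
}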
---
mm/hugetlb.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 23ef240ba48a..6dc62d8b2a3a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7873,9 +7873,9 @@ void __init hugetlb_cma_reserve(int order)
* huge page demotion.
*/
res = cma_declare_contiguous_nid(0, size, 0,
- PAGE_SIZE << HUGETLB_PAGE_ORDER,
- 0, false, name,
- &hugetlb_cma[nid], nid);
+ PAGE_SIZE << HUGETLB_PAGE_ORDER,
+ HUGETLB_PAGE_ORDER, false, name,
+ &hugetlb_cma[nid], nid);
if (res) {
pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
res, nid);
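(For mapping the arguments above: the prototype in include/linux/cma.h
is, roughly, the one below; the annotations are mine.)

int __init cma_declare_contiguous_nid(phys_addr_t base, phys_addr_t size,
			phys_addr_t limit,
			phys_addr_t alignment,	/* already PAGE_SIZE << HUGETLB_PAGE_ORDER */
			unsigned int order_per_bit,	/* was 0, now HUGETLB_PAGE_ORDER */
			bool fixed, const char *name,
			struct cma **res_cma, int nid);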
... I'm afraid this is not completely correct.
For example, on arm64, HUGETLB_PAGE_ORDER is essentially PMD_ORDER.
... but we do support smaller hugetlb sizes than that (with 4K base
pages, the cont-pte hugetlb size is 64 KiB, not the 2 MiB PMD size)
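(Illustrative orders, assuming typical arm64 4K-granule values:)

/*
 * arm64 with 4 KiB base pages (illustrative values):
 *   cont-PTE hugetlb:  64 KiB -> order 4
 *   PMD hugetlb:        2 MiB -> order 9 == HUGETLB_PAGE_ORDER
 * With order_per_bit == 9, one bitmap bit cannot track an order-4
 * hugetlb page on its own.
 */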
Right, smaller hugetlb page sizes exist. But the value here is not
intended to represent the minimum hugetlb page size; it's the minimum
hugetlb page size that a CMA-allocated hugetlb page can be demoted to.
See:
a01f43901cfb ("hugetlb: be sure to free demoted CMA pages to CMA")
So, this just restricts demotion of the gigantic, CMA-allocated pages
to HUGETLB_PAGE_ORDER-sized chunks.
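(A sketch of where that restriction is enforced, simplified from the
demote_size_store() path in mm/hugetlb.c; the exact form may differ by
kernel version:)

	/*
	 * Demotion targets smaller than HUGETLB_PAGE_ORDER are
	 * rejected, so pages freed back to CMA are never smaller
	 * than what one bitmap bit accounts for.
	 */
	demote_order = demote_hstate->order;
	if (demote_order < HUGETLB_PAGE_ORDER)
		return -EINVAL;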