[PATCH 5.10 566/593] mm/hugetlb: use helper huge_page_order and pages_per_huge_page
From: Greg Kroah-Hartman
Date: Mon Jul 12 2021 - 03:18:58 EST
From: Miaohe Lin <linmiaohe@xxxxxxxxxx>
[ Upstream commit c78a7f3639932c48b4e1d329fc80fd26aa1a2fa3 ]
Since commit a5516438959d ("hugetlb: modular state for hugetlb page
size"), we can use huge_page_order() to access hstate->order and
pages_per_huge_page() to fetch the number of pages per huge page. But
gather_bootmem_prealloc() forgot to use them.
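[For reference, both helpers are thin inline accessors living in
include/linux/hugetlb.h; a sketch of their shape (paraphrased, not a
verbatim copy of the 5.10 header):

static inline unsigned int huge_page_order(struct hstate *h)
{
	return h->order;
}

static inline unsigned long pages_per_huge_page(struct hstate *h)
{
	/* number of base pages backing one huge page of this hstate */
	return 1UL << huge_page_order(h);
}

So the conversion below is purely cosmetic: same values, but the
open-coded accesses to h->order go through the named helpers.]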
Link: https://lkml.kernel.org/r/20210114114435.40075-1-linmiaohe@xxxxxxxxxx
Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
mm/hugetlb.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d4f89c2f9544..991b5cd40267 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2500,7 +2500,7 @@ static void __init gather_bootmem_prealloc(void)
 		struct hstate *h = m->hstate;
 
 		WARN_ON(page_count(page) != 1);
-		prep_compound_huge_page(page, h->order);
+		prep_compound_huge_page(page, huge_page_order(h));
 		WARN_ON(PageReserved(page));
 		prep_new_huge_page(h, page, page_to_nid(page));
 		put_page(page); /* free it into the hugepage allocator */
@@ -2512,7 +2512,7 @@ static void __init gather_bootmem_prealloc(void)
 		 * side-effects, like CommitLimit going negative.
 		 */
 		if (hstate_is_gigantic(h))
-			adjust_managed_page_count(page, 1 << h->order);
+			adjust_managed_page_count(page, pages_per_huge_page(h));
 		cond_resched();
 	}
 }
--
2.30.2