Re: [PATCH 00/11] mm/hugetlb: Eliminate fake head pages from vmemmap optimization
From: Kiryl Shutsemau
Date: Tue Dec 09 2025 - 09:44:47 EST
On Tue, Dec 09, 2025 at 02:22:28PM +0800, Muchun Song wrote:
> The prerequisite is that the starting address of vmemmap must be aligned to
> 16MB boundaries (for 1GB huge pages). Right? We should add some checks
> somewhere to guarantee this (not at compile time, but at runtime, e.g. because of KASLR).
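For context, the 16MB figure above is the size of the struct page array that backs a single 1GB huge page in vmemmap. A minimal userspace sketch of that arithmetic, assuming a 4K base page size and a 64-byte struct page (illustrative assumptions, not taken from the series):

#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned long huge_page_size = 1UL << 30;	/* 1GB huge page */
	unsigned long base_page_size = 1UL << 12;	/* assumed 4K PAGE_SIZE */
	unsigned long struct_page_size = 64;		/* assumed sizeof(struct page) */

	/* one struct page per 4K base page inside the huge page */
	unsigned long span = huge_page_size / base_page_size * struct_page_size;

	/* 262144 struct pages * 64 bytes = 16MB, hence the alignment requirement */
	printf("vmemmap span per 1GB huge page: %lu MB\n", span >> 20);
	assert(span == 16UL << 20);
	return 0;
}

The same arithmetic gives 32KB for 2MB huge pages, which is why the check in the patch below computes the alignment per hstate.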
I have a hard time finding the right spot to put the check.
I considered something like the patch below, but it is probably too late
if huge pages are preallocated at boot.
I will dig into this more later, but if you have any suggestions, I would
appreciate them.
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 04a211a146a0..971558184587 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -886,6 +886,14 @@ static int __init hugetlb_vmemmap_init(void)
 	BUILD_BUG_ON(__NR_USED_SUBPAGE > HUGETLB_VMEMMAP_RESERVE_PAGES);
 
 	for_each_hstate(h) {
+		unsigned long size = huge_page_size(h) / PAGE_SIZE * sizeof(struct page);
+
+		/* vmemmap must be aligned to the size of the hstate's struct page array */
+		if (WARN_ON_ONCE(!IS_ALIGNED((unsigned long)vmemmap, size))) {
+			vmemmap_optimize_enabled = false;
+			continue;
+		}
+
 		if (hugetlb_vmemmap_optimizable(h)) {
 			register_sysctl_init("vm", hugetlb_vmemmap_sysctls);
 			break;
--
Kiryl Shutsemau / Kirill A. Shutemov