Re: [PATCHv3 10/15] mm/hugetlb: Remove fake head pages

From: Kiryl Shutsemau

Date: Fri Jan 16 2026 - 10:52:09 EST


On Fri, Jan 16, 2026 at 10:38:02AM +0800, Muchun Song wrote:
>
>
> > On Jan 16, 2026, at 01:23, Kiryl Shutsemau <kas@xxxxxxxxxx> wrote:
> >
> > On Thu, Jan 15, 2026 at 05:49:43PM +0100, David Hildenbrand (Red Hat) wrote:
> >> On 1/15/26 15:45, Kiryl Shutsemau wrote:
> >>> HugeTLB Vmemmap Optimization (HVO) reduces memory usage by freeing most
> >>> vmemmap pages for huge pages and remapping the freed range to a single
> >>> page containing the struct page metadata.
> >>>
> >>> With the new mask-based compound_info encoding (for power-of-2 struct
> >>> page sizes), all tail pages of the same order are now identical
> >>> regardless of which compound page they belong to. This means the tail
> >>> pages can be truly shared without fake heads.
> >>>
> >>> Allocate a single page of initialized tail struct pages per NUMA node
> >>> per order in the vmemmap_tails[] array in pglist_data. All huge pages
> >>> of that order on the node share this tail page, mapped read-only into
> >>> their vmemmap. The head page remains unique per huge page.
> >>>
> >>> This eliminates fake heads while maintaining the same memory savings,
> >>> and simplifies compound_head() by removing fake head detection.
> >>>
> >>> Signed-off-by: Kiryl Shutsemau <kas@xxxxxxxxxx>
> >>> ---
> >>> include/linux/mmzone.h | 16 ++++++++++++++-
> >>> mm/hugetlb_vmemmap.c | 44 ++++++++++++++++++++++++++++++++++++++++--
> >>> mm/sparse-vmemmap.c | 44 ++++++++++++++++++++++++++++++++++--------
> >>> 3 files changed, 93 insertions(+), 11 deletions(-)
> >>>
> >>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> >>> index 322ed4c42cfc..2ee3eb610291 100644
> >>> --- a/include/linux/mmzone.h
> >>> +++ b/include/linux/mmzone.h
> >>> @@ -82,7 +82,11 @@
> >>> * currently expect (see CONFIG_HAVE_GIGANTIC_FOLIOS): with hugetlb, we expect
> >>> * no folios larger than 16 GiB on 64bit and 1 GiB on 32bit.
> >>> */
> >>> -#define MAX_FOLIO_ORDER get_order(IS_ENABLED(CONFIG_64BIT) ? SZ_16G : SZ_1G)
> >>> +#ifdef CONFIG_64BIT
> >>> +#define MAX_FOLIO_ORDER (34 - PAGE_SHIFT)
> >>> +#else
> >>> +#define MAX_FOLIO_ORDER (30 - PAGE_SHIFT)
> >>> +#endif
> >>
> >> Where do these magic values stem from, and how do they related to the
> >> comment above that clearly spells out 16G vs. 1G ?
> >
> > This doesn't change the resulting value: 1UL << 34 is 16 GiB, 1UL << 30
> > is 1 GiB. Subtract PAGE_SHIFT to get the order.
> >
> > The change allows the value to be used to define NR_VMEMMAP_TAILS,
> > which is used to specify the size of the vmemmap_tails array.
>
> How about allocating the ->vmemmap_tails array dynamically? If the size
> of struct page is not a power of two, then we could optimize away this
> array entirely. Besides, the original MAX_FOLIO_ORDER definition would
> work as well.

This is tricky.

We need the vmemmap_tails array to be available early, in
hugetlb_vmemmap_init_early(). At that point, slab is not functional yet,
so we cannot allocate the array dynamically.

I think sizing the array at compile time is the best option.

--
Kiryl Shutsemau / Kirill A. Shutemov