Re: [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps

From: Lance Yang

Date: Wed Mar 04 2026 - 22:22:25 EST

On 2026/3/4 23:56, David Hildenbrand (Arm) wrote:
> There was recently some confusion around THPs and the interaction with
> KernelPageSize / MMUPageSize. Historically, these entries always
> corresponded to the smallest size we could encounter, not any current
> usage of transparent huge pages or larger sizes used by the MMU.
>
> Ever since we added THP support many, many years ago, these entries
> have kept reporting the smallest (fallback) granularity in a VMA.
>
> For this reason, they default to PAGE_SIZE for all VMAs except for
> VMAs where we have the guarantee that the system and the MMU will
> always use larger page sizes. hugetlb, for example, exposes a custom
> vm_ops->pagesize callback to handle that. Similarly, dax/device
> exposes a custom vm_ops->pagesize callback and provides similar
> guarantees.
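[Interjecting a small illustration of my own here, not part of the patch.]
The hugetlb callback mentioned above is essentially a one-liner; roughly,
as a non-compilable sketch of the kernel-internal code in mm/hugetlb.c:

```c
/* Sketch (kernel-internal, see mm/hugetlb.c): with ->pagesize set,
 * vma_kernel_pagesize() can report the real mapping granularity for
 * hugetlb VMAs instead of falling back to PAGE_SIZE. */
static unsigned long hugetlb_vm_op_pagesize(struct vm_area_struct *vma)
{
	return huge_page_size(hstate_vma(vma));
}

const struct vm_operations_struct hugetlb_vm_ops = {
	/* ... */
	.pagesize	= hugetlb_vm_op_pagesize,
};
```

Since hugetlb guarantees the whole VMA uses one fixed huge page size,
reporting it via the callback is always correct; ordinary VMAs make no
such guarantee, hence the PAGE_SIZE fallback described above.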

> Let's clarify the historical meaning of KernelPageSize / MMUPageSize,
> and point at "AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped"
> regarding PMD entries.
>
> While at it, document "FilePmdMapped", clarify what the "AnonHugePages"
> and "ShmemPmdMapped" entries really mean, and make it clear that there
> are no other entries for other THP/folio sizes or mappings.
>
> Link: https://lore.kernel.org/all/20260225232708.87833-1-ak@xxxxxxxxxxxxxxx/
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>
> Cc: Zi Yan <ziy@xxxxxxxxxx>
> Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
> Cc: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
> Cc: Nico Pache <npache@xxxxxxxxxx>
> Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
> Cc: Dev Jain <dev.jain@xxxxxxx>
> Cc: Barry Song <baohua@xxxxxxxxxx>
> Cc: Lance Yang <lance.yang@xxxxxxxxx>
> Cc: Jonathan Corbet <corbet@xxxxxxx>
> Cc: Shuah Khan <skhan@xxxxxxxxxxxxxxxxxxx>
> Cc: Usama Arif <usamaarif642@xxxxxxxxx>
> Cc: Andi Kleen <ak@xxxxxxxxxxxxxxx>
> Signed-off-by: David Hildenbrand (Arm) <david@xxxxxxxxxx>
> ---

Makes sense to me. Feel free to add:

Reviewed-by: Lance Yang <lance.yang@xxxxxxxxx>