Re: [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps

From: Lorenzo Stoakes (Oracle)

Date: Thu Mar 05 2026 - 05:47:14 EST


On Wed, Mar 04, 2026 at 04:56:36PM +0100, David Hildenbrand (Arm) wrote:
> There was recently some confusion around THPs and the interaction with
> KernelPageSize / MMUPageSize. Historically, these entries always
> correspond to the smallest size we could encounter, not any current
> usage of transparent huge pages or larger sizes used by the MMU.
>
> Ever since we added THP support many, many years ago, these entries
> would keep reporting the smallest (fallback) granularity in a VMA.
>
> For this reason, they default to PAGE_SIZE for all VMAs except for
> VMAs where we have the guarantee that the system and the MMU will
> always use larger page sizes. hugetlb, for example, exposes a custom
> vm_ops->pagesize callback to handle that. Similarly, dax/device
> exposes a custom vm_ops->pagesize callback and provides similar
> guarantees.
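
(Aside for anybody less familiar with this mechanism: the callback being
referred to is the ->pagesize hook in struct vm_operations_struct. Roughly,
and from my memory of the tree rather than from this patch, hugetlb's
implementation boils down to:

        /* include/linux/mm.h: the hook smaps consults (field excerpt) */
        unsigned long (*pagesize)(struct vm_area_struct *area);

        /* mm/hugetlb.c, simplified: the guarantee smaps can rely on */
        static unsigned long hugetlb_vm_op_pagesize(struct vm_area_struct *vma)
        {
                return huge_page_size(hstate_vma(vma));
        }

i.e. the reported size comes from the VMA's hstate and cannot change over the
VMA's lifetime, which is why smaps can report it unconditionally.)
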
>
> Let's clarify the historical meaning of KernelPageSize / MMUPageSize,
> and point at "AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped"
> regarding PMD entries.
>
> While at it, document "FilePmdMapped", clarify what the "AnonHugePages"
> and "ShmemPmdMapped" entries really mean, and make it clear that there
> are no other entries for other THP/folio sizes or mappings.
>
> Link: https://lore.kernel.org/all/20260225232708.87833-1-ak@xxxxxxxxxxxxxxx/
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>
> Cc: Zi Yan <ziy@xxxxxxxxxx>
> Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
> Cc: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
> Cc: Nico Pache <npache@xxxxxxxxxx>
> Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
> Cc: Dev Jain <dev.jain@xxxxxxx>
> Cc: Barry Song <baohua@xxxxxxxxxx>
> Cc: Lance Yang <lance.yang@xxxxxxxxx>
> Cc: Jonathan Corbet <corbet@xxxxxxx>
> Cc: Shuah Khan <skhan@xxxxxxxxxxxxxxxxxxx>
> Cc: Usama Arif <usamaarif642@xxxxxxxxx>
> Cc: Andi Kleen <ak@xxxxxxxxxxxxxxx>
> Signed-off-by: David Hildenbrand (Arm) <david@xxxxxxxxxx>

Overall this is great; various nits and comments below so we can tweak it.

Cheers, Lorenzo

> ---
> Documentation/filesystems/proc.rst | 37 ++++++++++++++++++++++--------
> 1 file changed, 27 insertions(+), 10 deletions(-)
>
> diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
> index b0c0d1b45b99..0f67e47528fc 100644
> --- a/Documentation/filesystems/proc.rst
> +++ b/Documentation/filesystems/proc.rst
> @@ -464,6 +464,7 @@ Memory Area, or VMA) there is a series of lines such as the following::
> KSM: 0 kB
> LazyFree: 0 kB
> AnonHugePages: 0 kB
> + FilePmdMapped: 0 kB
> ShmemPmdMapped: 0 kB
> Shared_Hugetlb: 0 kB
> Private_Hugetlb: 0 kB
> @@ -477,13 +478,25 @@ Memory Area, or VMA) there is a series of lines such as the following::
>
> The first of these lines shows the same information as is displayed for
> the mapping in /proc/PID/maps. Following lines show the size of the
> -mapping (size); the size of each page allocated when backing a VMA
> -(KernelPageSize), which is usually the same as the size in the page table
> -entries; the page size used by the MMU when backing a VMA (in most cases,
> -the same as KernelPageSize); the amount of the mapping that is currently
> -resident in RAM (RSS); the process's proportional share of this mapping
> -(PSS); and the number of clean and dirty shared and private pages in the
> -mapping.
> +mapping (size); the smallest possible page size allocated when
> +backing a VMA (KernelPageSize), which is the granularity in which VMA
> +modifications can be performed; the smallest possible page size that could
> +be used by the MMU (MMUPageSize) when backing a VMA; the amount of the

Is it worth retaining 'in most cases the same as KernelPageSize' here?

Ah wait, you dedicate a whole paragraph after this to that :)

> +mapping that is currently resident in RAM (RSS); the process's proportional
> +share of this mapping (PSS); and the number of clean and dirty shared and
> +private pages in the mapping.
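
As a purely illustrative aside from my side (not something the patch needs to
carry): a trivial userspace program along these lines dumps the two fields for
every mapping of the current process, and on a typical 4k system both read
"4 kB" for ordinary VMAs regardless of whether THP is currently in use:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
                FILE *f = fopen("/proc/self/smaps", "r");
                char line[256];

                if (!f)
                        return 1;
                while (fgets(line, sizeof(line), f)) {
                        /* Print only the two page-size fields per mapping. */
                        if (!strncmp(line, "KernelPageSize:", 15) ||
                            !strncmp(line, "MMUPageSize:", 12))
                                fputs(line, stdout);
                }
                fclose(f);
                return 0;
        }
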
> +
> +Historically, the "KernelPageSize" always corresponds to the "MMUPageSize",
> +except when a larger kernel page size is emulated on a system with a smaller

NIT: corresponds -> corresponded, as historically implies past tense.

But it's maybe better to say:

+Historically, the "KernelPageSize" has always corresponded to the "MMUPageSize",

And:

+except when a larger kernel page size is being emulated on a system with a smaller

> +page size used by the MMU, which was the case for PPC64 in the past.
> +Further, "KernelPageSize" and "MMUPageSize" always correspond to the

NIT: Further -> Furthermore

> +smallest possible granularity (fallback) that could be encountered in a

could be -> can be

Since we are really talking about the current situation, even if this is, in
effect, a legacy thing.

> +VMA throughout its lifetime. These values are not affected by any current
> +transparent grouping of pages by Linux (Transparent Huge Pages) or any

'transparent grouping of pages' reads a bit weirdly.

Maybe simplify to:

+These values are not affected by Transparent Huge Pages being in effect, or any...

> +current usage of larger MMU page sizes (either through architectural

NIT: current usage -> usage

> +huge-page mappings or other transparent groupings done by the MMU).

Again I think 'transparent groupings' is a bit unclear. Perhaps instead:

+huge-page mappings or other explicit or implicit coalescing of virtual ranges
+performed by the MMU).

?

> +"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" provide insight into
> +the usage of some architectural huge-page mappings.

Is 'some' necessary here? Seems to make it a bit vague.

>
> The "proportional set size" (PSS) of a process is the count of pages it has
> in memory, where each page is divided by the number of processes sharing it.
> @@ -528,10 +541,14 @@ pressure if the memory is clean. Please note that the printed value might
> be lower than the real value due to optimizations used in the current
> implementation. If this is not desirable please file a bug report.
>
> -"AnonHugePages" shows the amount of memory backed by transparent hugepage.
> +"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" show the amount of
> +memory backed by transparent hugepages that are currently mapped through
> +architectural huge-page mappings (PMD). "AnonHugePages" corresponds to memory

'mapped through architectural huge-page mappings (PMD)' reads a bit strangely
to me.

Perhaps 'mapped by transparent huge pages at a PMD page table level' instead?

> +that does not belong to a file, "ShmemPmdMapped" to shared memory (shmem/tmpfs)
> +and "FilePmdMapped" to file-backed memory (excluding shmem/tmpfs).
>
> -"ShmemPmdMapped" shows the amount of shared (shmem/tmpfs) memory backed by
> -huge pages.
> +There are no dedicated entries for transparent huge pages (or similar concepts)
> +that are not mapped through architectural huge-page mappings (PMD).

similarly, perhaps better as 'are not mapped by transparent huge pages at a PMD
page table level'?
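
To make the distinction concrete (again only an illustration from my side, not
something to fold into the patch): with a mapping set up roughly like the
below, and assuming the fault path or khugepaged actually backs the aligned
range with a PMD-mapped THP, that VMA's smaps entry shows
"AnonHugePages: 2048 kB" while KernelPageSize/MMUPageSize stay at the base
page size, which is exactly what the new paragraphs spell out:

        #include <stdint.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #define SZ_2M (2UL << 20)

        int main(void)
        {
                /* Map 4 MiB so a 2 MiB-aligned chunk is guaranteed inside. */
                char *p = mmap(NULL, 2 * SZ_2M, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                char *aligned;

                if (p == MAP_FAILED)
                        return 1;
                aligned = (char *)(((uintptr_t)p + SZ_2M - 1) & ~(SZ_2M - 1));

                madvise(aligned, SZ_2M, MADV_HUGEPAGE); /* best effort */
                memset(aligned, 0x5a, SZ_2M);           /* fault the range in */

                pause();        /* inspect /proc/<pid>/smaps for this VMA now */
                return 0;
        }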

>
> "Shared_Hugetlb" and "Private_Hugetlb" show the amounts of memory backed by
> hugetlbfs page which is *not* counted in "RSS" or "PSS" field for historical
> --
> 2.43.0
>