Re: [PATCH v3 2/2] mm: add docs for per-order mTHP split counters
From: Ryan Roberts
Date: Fri Jul 05 2024 - 05:17:07 EST
On 04/07/2024 02:29, Lance Yang wrote:
> This commit introduces documentation for mTHP split counters in
> transhuge.rst.
>
> Reviewed-by: Barry Song <baohua@xxxxxxxxxx>
> Signed-off-by: Mingzhe Yang <mingzhe.yang@xxxxxx>
> Signed-off-by: Lance Yang <ioworker0@xxxxxxxxx>
> ---
> Documentation/admin-guide/mm/transhuge.rst | 20 ++++++++++++++++----
> 1 file changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> index 1f72b00af5d3..0830aa173a8b 100644
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -369,10 +369,6 @@ also applies to the regions registered in khugepaged.
> Monitoring usage
> ================
>
> -.. note::
> - Currently the below counters only record events relating to
> - PMD-sized THP. Events relating to other THP sizes are not included.
> -
> The number of PMD-sized anonymous transparent huge pages currently used by the
> system is available by reading the AnonHugePages field in ``/proc/meminfo``.
> To identify what applications are using PMD-sized anonymous transparent huge
> @@ -514,6 +510,22 @@ file_fallback_charge
> falls back to using small pages even though the allocation was
> successful.
>
> +split
> + is incremented every time a huge page is successfully split into
> + smaller orders. This can happen for a variety of reasons, but a
> + common reason is that a huge page is old and is being reclaimed.
> + This action implies splitting any block mappings into PTEs.
nit: the block mappings will already be PTEs if starting with mTHP?
regardless:
Reviewed-by: Ryan Roberts <ryan.roberts@xxxxxxx>
> +
> +split_failed
> + is incremented if the kernel fails to split a huge page. This
> + can happen if the page was pinned by somebody.
> +
> +split_deferred
> + is incremented when a huge page is put onto the split
> + queue. This happens when a huge page is partially unmapped and
> + splitting it would free up some memory. Pages on the split queue
> + will be split under memory pressure.
> +
> As the system ages, allocating huge pages may be expensive as the
> system uses memory compaction to copy data around memory to free a
> huge page for use. There are some counters in ``/proc/vmstat`` to help
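For anyone wanting to eyeball the new counters, below is a minimal sketch of how
the per-order split stats could be read from userspace. It assumes the counters
are exposed under /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/
next to the other per-order stats; names and availability depend on the kernel
version, so treat it as illustrative only:

#!/usr/bin/env python3
# Minimal sketch: dump the per-order mTHP split counters for every
# hugepage size the kernel exposes. Assumes the counters sit under
# /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/ next
# to the other per-order stats; older kernels may not have them.
import glob
import os

BASE = "/sys/kernel/mm/transparent_hugepage"
COUNTERS = ("split", "split_failed", "split_deferred")

for stats in sorted(glob.glob(os.path.join(BASE, "hugepages-*kB", "stats"))):
    size = os.path.basename(os.path.dirname(stats))
    fields = [size]
    for name in COUNTERS:
        try:
            with open(os.path.join(stats, name)) as f:
                fields.append("%s=%s" % (name, f.read().strip()))
        except FileNotFoundError:
            # Counter not present on this kernel.
            fields.append("%s=n/a" % name)
    print("  ".join(fields))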