Re: [PATCH 7.2 v2 05/12] mm/khugepaged: remove READ_ONLY_THP_FOR_FS check in hugepage_pmd_enabled()
From: David Hildenbrand (Arm)
Date: Thu Apr 16 2026 - 04:50:44 EST
On 4/15/26 20:01, Zi Yan wrote:
> On 15 Apr 2026, at 5:21, Baolin Wang wrote:
>
>> On 4/15/26 4:00 PM, David Hildenbrand (Arm) wrote:
>>
>> My comments are in reply to Zi’s comment:
>>
>> "I think hugepage_global_enabled() should be enough to decide whether khugepaged should run or not. "
>>
>> I’m concerned that only relying on hugepage_global_enabled() to decide whether khugepaged should run would cause a regression for anonymous and shmem memory collapse, as it ignores per-size mTHP configuration.
>>
>>> The question is really which semantics we want.
>>>
>>> Right now, there is no way to disable khugepaged for anon pages, to just
>>> get them during page faults.
>>
>> Right.
>>
>>> And we are now talking about the same problem for FS: to only get them
>>> during page faults (like we did so far without CONFIG_READ_ONLY_THP_FOR_FS).
>>
>> OK. I’m fine with using hugepage_global_enabled() to determine whether khugepaged scans file folios.
>>
>> My concern is that for anonymous memory and shmem, the per-size mTHP settings should be considered.
>
> OK, I misunderstood the meaning of hugepage_global_enabled(), since the per-size
> mTHP settings can also enable khugepaged when the PMD-size entry is set.
>
> I will take willy’s original suggestion and turn khugepaged on whenever the
> global setting is enabled. Below is the new version of this patch. I moved the
> anon pmd huge page code into a separate anon_hpage_pmd_enabled(), like
> shmem_hpage_pmd_enabled(), and cleaned up the comment. Let me know your thoughts.
>
> Thanks.
>
> From 92b92f2b2ab41c70b41dd304ce648786ee6a1603 Mon Sep 17 00:00:00 2001
> From: Zi Yan <ziy@xxxxxxxxxx>
> Date: Wed, 15 Apr 2026 13:52:50 -0400
> Subject: [PATCH] mm/khugepaged: remove READ_ONLY_THP_FOR_FS check in
> hugepage_pmd_enabled()
>
> Remove the READ_ONLY_THP_FOR_FS check: khugepaged for file-backed
> pmd-sized hugepages is now enabled by the global transparent hugepage
> control alone. khugepaged can still be enabled by the per-size controls
> for anon and shmem when the global control is off.
>
> Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
> ---
> mm/khugepaged.c | 26 +++++++++++++++-----------
> 1 file changed, 15 insertions(+), 11 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index b8452dbdb043..586d27ce896e 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -406,18 +406,8 @@ static inline int collapse_test_exit_or_disable(struct mm_struct *mm)
> mm_flags_test(MMF_DISABLE_THP_COMPLETELY, mm);
> }
>
> -static bool hugepage_pmd_enabled(void)
> +static inline bool anon_hpage_pmd_enabled(void)
> {
> - /*
> - * We cover the anon, shmem and the file-backed case here; file-backed
> - * hugepages, when configured in, are determined by the global control.
> - * Anon pmd-sized hugepages are determined by the pmd-size control.
> - * Shmem pmd-sized hugepages are also determined by its pmd-size control,
> - * except when the global shmem_huge is set to SHMEM_HUGE_DENY.
> - */
> - if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
> - hugepage_global_enabled())
> - return true;
> if (test_bit(PMD_ORDER, &huge_anon_orders_always))
> return true;
> if (test_bit(PMD_ORDER, &huge_anon_orders_madvise))
> @@ -425,6 +415,20 @@ static bool hugepage_pmd_enabled(void)
> if (test_bit(PMD_ORDER, &huge_anon_orders_inherit) &&
> hugepage_global_enabled())
> return true;
> + return false;
> +}
> +
Works for me.
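As an aside, for anyone following along: the huge_anon_orders_{always,madvise,inherit} bitmaps that anon_hpage_pmd_enabled() tests are driven by the per-size sysfs knobs documented in Documentation/admin-guide/mm/transhuge.rst. A config sketch (the hugepages-2048kB path assumes a 2M PMD size; adjust for your architecture):

```shell
# Global control that hugepage_global_enabled() reflects:
cat /sys/kernel/mm/transparent_hugepage/enabled

# Per-size anon mTHP control for the PMD order; "inherit" makes it
# follow the global control, "always"/"madvise" enable it directly:
cat /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
echo inherit > /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
```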
> +static bool hugepage_pmd_enabled(void)
> +{
> + /*
> + * Anon, shmem and file-backed pmd-sized hugepages are all determined by
> + * the global control. If the global control is off, anon and shmem
> + * pmd-sized hugepages are also determined by their per-size controls.
> + */
> + if (hugepage_global_enabled())
> + return true;
> + if (anon_hpage_pmd_enabled())
> + return true;
> if (IS_ENABLED(CONFIG_SHMEM) && shmem_hpage_pmd_enabled())
BTW, can we please provide a stub for shmem_hpage_pmd_enabled() in
shmem_fs.h such that we can remove the IS_ENABLED here?
--
Cheers,
David