Re: [PATCH v3 6/6] mm: khugepaged: set to next mm direct when mm has MMF_DISABLE_THP_COMPLETELY
From: Lance Yang
Date: Sun Jan 04 2026 - 21:10:04 EST
On 2026/1/5 08:31, Wei Yang wrote:
On Sun, Jan 04, 2026 at 08:20:29PM +0800, Lance Yang wrote:
On 2026/1/4 13:41, Vernon Yang wrote:
When an mm with the MMF_DISABLE_THP_COMPLETELY flag is detected during
scanning, advance khugepaged_scan.mm_slot directly to the next mm_slot,
reducing redundant work.
Signed-off-by: Vernon Yang <yanglincheng@xxxxxxxxxx>
---
mm/khugepaged.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1ca034a5f653..d4ed0f397335 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2541,7 +2541,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
* Release the current mm_slot if this mm is about to die, or
* if we scanned all vmas of this mm.
*/
- if (hpage_collapse_test_exit(mm) || !vma) {
+ if (hpage_collapse_test_exit_or_disable(mm) || !vma) {
/*
* Make sure that if mm_users is reaching zero while
* khugepaged runs here, khugepaged_exit will find
Let's convert hpage_collapse_test_exit() in collect_mm_slot() as well;
otherwise the mm_slot would not be freed and would be scanned again ...
static void collect_mm_slot(struct mm_slot *slot)
{
struct mm_struct *mm = slot->mm;
lockdep_assert_held(&khugepaged_mm_lock);
if (hpage_collapse_test_exit(mm)) { <-
What if the user toggles the MMF_DISABLE_THP_COMPLETELY flag again?
Maybe it's fine :)
If the user sets MMF_DISABLE_THP_COMPLETELY, they probably would not
clear it soon, so keeping the slot just wastes memory.
If they do clear it later, page faults will trigger
do_huge_pmd_anonymous_page() -> khugepaged_enter_vma(), which
re-adds the mm.
Anyway, no strong opinion on that.
hash_del(&slot->hash);
list_del(&slot->mm_node);
mm_slot_free(mm_slot_cache, slot);
mmdrop(mm);
}
}