Re: [PATCH v2 2/3] ksm: perform a range-walk in break_ksm
From: David Hildenbrand (Red Hat)
Date: Mon Nov 03 2025 - 12:14:35 EST
On 31.10.25 18:46, Pedro Demarchi Gomes wrote:
Make break_ksm() receive an address range and change
break_ksm_pmd_entry() to perform a range-walk and return the address of
the first ksm page found.
This change allows break_ksm() to skip unmapped regions instead of
iterating every page address. When unmerging large sparse VMAs, this
significantly reduces runtime.
In a benchmark unmerging a 32 TiB sparse virtual address space where
only one page was populated, the runtime dropped from 9 minutes to less
than 5 seconds.
Suggested-by: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Pedro Demarchi Gomes <pedrodemargomes@xxxxxxxxx>
---
mm/ksm.c | 88 ++++++++++++++++++++++++++++++--------------------------
1 file changed, 48 insertions(+), 40 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 922d2936e206..64d66699133d 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -607,35 +607,55 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
return atomic_read(&mm->mm_users) == 0;
}
-static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
+struct break_ksm_arg {
+ unsigned long addr;
+};
Leftover? :)
+
+static int break_ksm_pmd_entry(pmd_t *pmdp, unsigned long addr, unsigned long end,
struct mm_walk *walk)
{
- struct folio *folio = NULL;
+ unsigned long *found_addr = (unsigned long *) walk->private;
+ struct mm_struct *mm = walk->mm;
+ pte_t *start_ptep, *ptep;
spinlock_t *ptl;
- pte_t *pte;
- pte_t ptent;
- int ret;
+ int found = 0;
Best to perform the ret -> found rename already in patch #1.
With both things
Acked-by: David Hildenbrand (Red Hat) <david@xxxxxxxxxx>
--
Cheers
David