[PATCH 1/1] mm/ksm: fix spurious soft-dirty bit on zero-filled page merging

From: Lance Yang

Date: Sun Sep 28 2025 - 00:52:17 EST


From: Lance Yang <lance.yang@xxxxxxxxx>

When KSM merges a zero-filled page with the shared zeropage, it uses
pte_mkdirty() to mark the new PTE so that KSM-placed zero pages can be
identified and accounted for later. However, on architectures that
implement soft-dirty tracking (such as x86), pte_mkdirty()
unconditionally sets the soft-dirty bit along with the hardware dirty
bit.

This causes false positives for userspace tools such as CRIU that rely
on the soft-dirty mechanism: a page whose contents have not changed
since soft-dirty was last cleared is reported as modified.

Fix this by reading the old PTE under the page table lock and
explicitly clearing the soft-dirty bit from the new PTE if the original
was not soft-dirty.

Fixes: 79271476b336 ("ksm: support unsharing KSM-placed zero pages")
Signed-off-by: Lance Yang <lance.yang@xxxxxxxxx>
---
mm/ksm.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/mm/ksm.c b/mm/ksm.c
index 04019a15b25d..e34516b8fbe4 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1403,6 +1403,9 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
* the dirty bit in zero page's PTE is set.
*/
newpte = pte_mkdirty(pte_mkspecial(pfn_pte(page_to_pfn(kpage), vma->vm_page_prot)));
+ if (!pte_soft_dirty(ptep_get(ptep)))
+ newpte = pte_clear_soft_dirty(newpte);
+
ksm_map_zero_page(mm);
/*
* We're replacing an anonymous page with a zero page, which is
--
2.49.0