Re: [PATCH v3 2/2] ksm: Optimize rmap_walk_ksm by passing a suitable address range
From: xu.xin16
Date: Tue Apr 07 2026 - 02:23:07 EST
> > From the current implementation of mremap, before it succeeds, it always calls
> > prep_move_vma() -> madvise(MADV_UNMERGEABLE) -> break_ksm(), which splits KSM pages
> > into regular anonymous pages, which appears to be based on a patch you introduced
> > over a decade ago, commit 1ff829957316 ("ksm: prevent mremap move poisoning"). Given this,
> > KSM pages should already be broken prior to the move, so they wouldn't remain as
> > mergeable pages after mremap. Could there be a scenario where this breaking mechanism
> > is bypassed, or am I missing a subtlety in the sequence of operations?
>
> I'd completely forgotten that patch by now! But it's dealing with a
> different issue; and note how it's intentionally leaving MADV_MERGEABLE
> on the vma itself, just using MADV_UNMERGEABLE (with &dummy) as an
> interface to CoW the KSM pages at that time, letting them be remerged after.
>
> The sequence in my testcase was:
>
> boot with mem=1G
> echo 1 >/sys/kernel/mm/ksm/run
> base = mmap(NULL, 3*PAGE_SIZE, PROT_READ|PROT_WRITE,
>             MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
> madvise(base, 3*PAGE_SIZE, MADV_MERGEABLE);
> madvise(base, 3*PAGE_SIZE, MADV_DONTFORK); /* in case system() used */
> memset(base, 0x77, 2*PAGE_SIZE);
> sleep(1); /* I think not required */
> mremap(base + PAGE_SIZE, PAGE_SIZE, PAGE_SIZE,
>        MREMAP_MAYMOVE|MREMAP_FIXED, base + 2*PAGE_SIZE);
> base2 = mmap(NULL, 512K, PROT_READ|PROT_WRITE,
>              MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
> madvise(base2, 512K, MADV_DONTFORK); /* in case system() used */
> memset(base2, 0x77, 512K);
> print pages_shared pages_sharing /* 1 1 expected, 1 1 seen */
> run something to mmap 1G anon, touch all, touch again, exit
> print pages_shared pages_sharing /* 0 0 expected, 1 1 seen */
> exit
>
> Those base2 lines were a late addition, to get the test without mremap
> showing 0 0 instead of 1 1 at the end; just as I had to apply that
> pte_mkold-without-folio_mark_accessed patch to the kernel's mm/ksm.c.
>
> Originally I was checking the testcase's /proc/pid/smaps manually
> before exit; then found printing pages_shared pages_sharing easier.
>
> Hugh
Following the idea of your test case, I wrote a similar test program,
using migration instead of swap to trigger the reverse-mapping (rmap) walk.
The results show that pages can still be migrated successfully after mremap.
See my testcase:
https://lore.kernel.org/all/20260407140805858ViqJKFhfmYSfq0FynsaEY@xxxxxxxxxx/
Therefore, I suspect the reason your test program did not swap out the
pages lies elsewhere, rather than in this optimization.
Thanks.