Re: [PATCH v3 2/2] ksm: Optimize rmap_walk_ksm by passing a suitable address range

From: David Hildenbrand (Arm)

Date: Thu Apr 09 2026 - 05:56:42 EST


On 4/9/26 11:41, David Hildenbrand (Arm) wrote:
> On 4/9/26 11:37, David Hildenbrand (Arm) wrote:
>> On 4/9/26 11:18, Lorenzo Stoakes wrote:
>>>
>>> anon_vma doesn't have a vma field :) it has anon_vma->rb_root which maps to all
>>> 'related' VMAs.
>>
>> Right, anon_vma_chain has. Dammit.
>>
>>>
>>> And we're already looking at what might be covered by the anon_vma by
>>> invoking anon_vma_interval_tree_foreach() on anon_vma->rb_root in [0,
>>> ULONG_MAX).
>>>
>>>
>>> One interesting thing here is in the anon_vma_interval_tree_foreach() loop
>>> we check:
>>>
>>> if (addr < vma->vm_start || addr >= vma->vm_end)
>>>         continue;
>>>
>>> Which is the same as saying 'hey we are ignoring remaps'.
>>>
>>> But... if _we_ got remapped previously (the unsharing is only temporary),
>>> then we'd _still_ have an anon_vma with an old index != addr >> PAGE_SHIFT,
>>> and would still not be able to figure out the correct pgoff after sharing.
>>>
>>> I wonder if we could just store the pgoff in the rmap_item though?
>>
>> That's what I said elsewhere and what I was trying to avoid here.
>>
>> It's 64 bytes, and adding a new member will increase it to 96 bytes IIUC.
>
> As we're using a dedicated kmem cache it might "only" add 8 bytes, not
> sure. Still an undesired increase given that we need that for each entry
> in the stable/unstable tree.
>

Hmm, maybe we could do the following. I think the other members are only
relevant for the unstable tree.

diff --git a/mm/ksm.c b/mm/ksm.c
index 7d5b76478f0b..0c6bfed280f7 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -191,12 +191,13 @@ struct ksm_stable_node {
  * @nid: NUMA node id of unstable tree in which linked (may not match page)
  * @mm: the memory structure this rmap_item is pointing into
  * @address: the virtual address this rmap_item tracks (+ flags in low bits)
- * @oldchecksum: previous checksum of the page at that virtual address
+ * @oldchecksum: previous checksum of the page at that virtual address (unstable tree)
  * @node: rb node of this rmap_item in the unstable tree
  * @head: pointer to stable_node heading this list in the stable tree
  * @hlist: link into hlist of rmap_items hanging off that stable_node
- * @age: number of scan iterations since creation
- * @remaining_skips: how many scans to skip
+ * @age: number of scan iterations since creation (unstable tree)
+ * @remaining_skips: how many scans to skip (unstable tree)
+ * @pgoff: pgoff into @anon_vma where the page is mapped (stable tree)
  */
 struct ksm_rmap_item {
 	struct ksm_rmap_item *rmap_list;
@@ -208,9 +209,14 @@ struct ksm_rmap_item {
 	};
 	struct mm_struct *mm;
 	unsigned long address;		/* + low bits used for flags below */
-	unsigned int oldchecksum;	/* when unstable */
-	rmap_age_t age;
-	rmap_age_t remaining_skips;
+	union {
+		struct {
+			unsigned int oldchecksum;
+			rmap_age_t age;
+			rmap_age_t remaining_skips;
+		};
+		pgoff_t pgoff;
+	};
 	union {
 		struct rb_node node;	/* when node of unstable tree */
 		struct {		/* when listed from stable tree */
@@ -1600,6 +1606,7 @@ static int try_to_merge_with_ksm_page(struct ksm_rmap_item *rmap_item,
 
 	/* Must get reference to anon_vma while still holding mmap_lock */
 	rmap_item->anon_vma = vma->anon_vma;
+	rmap_item->pgoff = linear_page_index(vma, rmap_item->address);
 	get_anon_vma(vma->anon_vma);
 out:
 	mmap_read_unlock(mm);
--
2.43.0

--
Cheers,

David