Re: [PATCH 02/10] mm/ksm: skip subpages of compound pages

From: David Hildenbrand
Date: Tue Jun 04 2024 - 04:12:27 EST


On 04.06.24 06:24, alexs@xxxxxxxxxx wrote:
From: "Alex Shi (tencent)" <alexs@xxxxxxxxxx>

When a folio isn't suitable for KSM, its subpages are unlikely to be either,
so skip checking the remaining pages of the folio to save some work.

Signed-off-by: Alex Shi (tencent) <alexs@xxxxxxxxxx>
---
mm/ksm.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 97e5b41f8c4b..e2fdb9dd98e2 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2644,6 +2644,8 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
goto no_vmas;
for_each_vma(vmi, vma) {
+ int nr = 1;
+
if (!(vma->vm_flags & VM_MERGEABLE))
continue;
if (ksm_scan.address < vma->vm_start)
@@ -2660,6 +2662,9 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
cond_resched();
continue;
}
+
+ VM_WARN_ON(PageTail(*page));
+ nr = compound_nr(*page);
if (is_zone_device_page(*page))
goto next_page;
if (PageAnon(*page)) {
@@ -2672,7 +2677,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
if (should_skip_rmap_item(*page, rmap_item))
goto next_page;
- ksm_scan.address += PAGE_SIZE;
+ ksm_scan.address += nr * PAGE_SIZE;
} else
put_page(*page);
mmap_read_unlock(mm);
@@ -2680,7 +2685,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
}
next_page:
put_page(*page);
- ksm_scan.address += PAGE_SIZE;
+ ksm_scan.address += nr * PAGE_SIZE;
cond_resched();
}
}

You might be jumping over pages that don't belong to that folio. What you would actually want is to somehow use folio_pte_batch() to verify that the PTEs really point at the same folio before skipping them. But that's not that easy when using follow_page() ...
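
Just to illustrate what that batching would have to look like (untested,
hand-rolled sketch; the helper name is made up and this is not the real
folio_pte_batch() interface): you may only skip as many pages as there are
consecutive PTEs that still map into that folio, and follow_page() does not
hand you the PTE pointer (under its PTL) you'd need for that check.

/*
 * Untested sketch only: given a mapped PTE pointer for the page that
 * follow_page() resolved, count how many consecutive PTEs still point
 * into the same folio.  Obtaining that PTE pointer (and its lock) is
 * exactly the part follow_page() does not give us.
 */
static int ksm_count_same_folio_ptes(struct folio *folio, unsigned long addr,
				     unsigned long vm_end, pte_t *ptep)
{
	unsigned long pfn = folio_pfn(folio);
	/* Don't walk past the VMA or off the end of this page table. */
	unsigned long end = pmd_addr_end(addr, vm_end);
	int nr = 0;

	for (; addr < end; addr += PAGE_SIZE, ptep++) {
		pte_t pte = ptep_get(ptep);

		if (!pte_present(pte))
			break;
		/* Does this PTE still point into the same folio? */
		if (pte_pfn(pte) < pfn ||
		    pte_pfn(pte) >= pfn + folio_nr_pages(folio))
			break;
		nr++;
	}
	return nr;
}

Only with something like that would "ksm_scan.address += nr * PAGE_SIZE" be
safe; with the patch as posted, a partially mapped folio could make the scan
skip PTEs that map a completely different folio.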

So I suggest dropping this change for now.

--
Cheers,

David / dhildenb