[PATCH v1 1/6] mm/rmap: drop stale comment in page_add_anon_rmap() and hugepage_add_anon_rmap()

From: David Hildenbrand
Date: Wed Sep 13 2023 - 08:52:11 EST


That comment was added in commit 5dbe0af47f8a ("mm: fix kernel BUG at
mm/rmap.c:1017!") to document why we can see vma->vm_end getting
adjusted concurrently due to a VMA split.

However, the optimized locking code was changed again in commit bf181b9f9d8
("mm anon rmap: replace same_anon_vma linked list with an interval tree.").

... and later, the comment was changed in commit 0503ea8f5ba7 ("mm/mmap:
remove __vma_adjust()") to talk about "vma_merge", although the original
issue was about VMA splitting.
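
For background, an illustrative userspace sketch (not kernel code; every
name in it is made up) of the kind of hazard the old comment described:
a reader that checks an address against vm_end without serializing
against a concurrent split can observe the VMA shrink under it, so the
address suddenly appears to belong to the next VMA.

  #include <stdio.h>

  /* Made-up stand-in for a VMA: just the range [vm_start, vm_end). */
  struct fake_vma {
  	unsigned long vm_start;
  	unsigned long vm_end;
  };

  /* Split @vma at @addr: @vma keeps [vm_start, addr), @next gets the rest. */
  static void split_fake_vma(struct fake_vma *vma, struct fake_vma *next,
  			   unsigned long addr)
  {
  	next->vm_start = addr;
  	next->vm_end = vma->vm_end;
  	vma->vm_end = addr;	/* an unsynchronized reader may see this early */
  }

  int main(void)
  {
  	struct fake_vma vma = { 0x1000, 0x9000 }, next;
  	unsigned long address = 0x5000;

  	/* Before the split: address lies inside vma. */
  	printf("before split: in vma? %d\n",
  	       address >= vma.vm_start && address < vma.vm_end);

  	split_fake_vma(&vma, &next, 0x4000);

  	/* After (or mid-) split: the same address now falls in next. */
  	printf("after split: in vma? %d, in next? %d\n",
  	       address >= vma.vm_start && address < vma.vm_end,
  	       address >= next.vm_start && address < next.vm_end);
  	return 0;
  }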

Let's just remove that comment. Nowadays, it's outdated, imprecise, and
confusing.

Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---
mm/rmap.c | 2 --
1 file changed, 2 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index ec7f8e6c9e48..ca2787c1fe05 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1245,7 +1245,6 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);

if (likely(!folio_test_ksm(folio))) {
- /* address might be in next vma when migration races vma_merge */
if (first)
__page_set_anon_rmap(folio, page, vma, address,
!!(flags & RMAP_EXCLUSIVE));
@@ -2536,7 +2535,6 @@ void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma,

BUG_ON(!folio_test_locked(folio));
BUG_ON(!anon_vma);
- /* address might be in next vma when migration races vma_merge */
first = atomic_inc_and_test(&folio->_entire_mapcount);
VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);
--
2.41.0