ksm_exit() may remove the mm from the ksm_scan between the unlocking of
the ksm_mmlist and the start of the VMA iteration. When that happens,
exit_mmap() tears down the VMAs and destroys the maple tree under the
mmap_lock. Since the destroyed tree no longer carries the flag telling
lockdep to check the external (mmap) lock, the VMA iteration triggers a
lockdep report that the mm isn't locked in the maple tree code, even
though mmap_read_lock() has been taken.
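
Roughly, the interleaving looks like this (a simplified sketch, not an
exact trace; call chains abbreviated, exit side runs through __mmput()):

  KSM unmerge                             task exit
  -----------                             ---------
  unmerge_and_remove_all_rmap_items()
    spin_unlock(&ksm_mmlist_lock)
                                          __mmput()
                                            ksm_exit(mm)
                                            exit_mmap(mm)
                                              mmap_write_lock(mm)
                                              free VMAs, __mt_destroy()
                                              mmap_write_unlock(mm)
    mmap_read_lock(mm)
    for_each_vma(vmi, vma)
      -> walks the destroyed maple tree; lockdep
         reports the mm as unlocked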

Fix the race by checking whether this mm is exiting, after taking the
mmap_read_lock() but before starting the VMA iteration. __ksm_exit()
uses the mmap lock to synchronize the freeing of an mm, so once that
check has passed it is safe to keep iterating over the VMAs without
re-checking on every iteration, even when the mm is about to be freed.
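
For reference, the tail end of __ksm_exit() that provides this
synchronization looks roughly like the following (abridged sketch of
mm/ksm.c, comments added here; see the source for the full context):

	if (easy_to_free) {
		mm_slot_free(mm_slot_cache, mm_slot);
		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
		mmdrop(mm);
	} else if (mm_slot) {
		/*
		 * The empty lock/unlock pair waits for anyone holding
		 * mmap_read_lock() on this mm (such as the unmerge
		 * iteration above) before exit continues.
		 */
		mmap_write_lock(mm);
		mmap_write_unlock(mm);
	}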

This change will slow down the mm exit when the race hits, since the
iteration no longer bails out early once the mm starts exiting, but it
speeds up the iteration over the VMA list in the non-race case by
dropping the per-VMA ksm_test_exit() check, and that case should be
much more common.

Reported-by: Pengfei Xu <pengfei.xu@xxxxxxxxx>
Link: https://lore.kernel.org/lkml/ZAdUUhSbaa6fHS36@xxxxxxxxxxxxxxxx/
Reported-by: syzbot+2ee18845e89ae76342c5@xxxxxxxxxxxxxxxxxxxxxxxxx
Link: https://syzkaller.appspot.com/bug?id=64a3e95957cd3deab99df7cd7b5a9475af92c93e
Cc: linux-mm@xxxxxxxxx
Cc: linux-kernel@xxxxxxxxxxxxxxx
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: heng.su@xxxxxxxxx
Cc: lkp@xxxxxxxxx
Cc: <Stable@xxxxxxxxxxxxxxx>
Fixes: a5f18ba07276 ("mm/ksm: use vma iterators instead of vma linked list")
Signed-off-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
---
 mm/ksm.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 525c3306e78b..723ddbe6ea97 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1044,9 +1044,10 @@ static int unmerge_and_remove_all_rmap_items(void)
 
 		mm = mm_slot->slot.mm;
 		mmap_read_lock(mm);
+		if (ksm_test_exit(mm))
+			goto mm_exiting;
+
 		for_each_vma(vmi, vma) {
-			if (ksm_test_exit(mm))
-				break;
 			if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
 				continue;
 			err = unmerge_ksm_pages(vma,
@@ -1055,6 +1056,7 @@ static int unmerge_and_remove_all_rmap_items(void)
 				goto error;
 		}
 
+mm_exiting:
 		remove_trailing_rmap_items(&mm_slot->rmap_list);
 		mmap_read_unlock(mm);
 