Hello Zhou,
Great catch.
On Thu, May 05, 2016 at 08:42:56PM +0800, Zhou Chengming wrote:
>  	remove_trailing_rmap_items(slot, ksm_scan.rmap_list);
> +	up_read(&mm->mmap_sem);
> 
>  	spin_lock(&ksm_mmlist_lock);
>  	ksm_scan.mm_slot = list_entry(slot->mm_list.next,
> @@ -1666,16 +1667,12 @@ next_mm:
>  		 */
>  		hash_del(&slot->link);
>  		list_del(&slot->mm_list);
> -		spin_unlock(&ksm_mmlist_lock);
> 
>  		free_mm_slot(slot);
>  		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
> -		up_read(&mm->mmap_sem);
>  		mmdrop(mm);

I thought the mmap_sem held for reading prevented a race between the
above clear_bit and a concurrent madvise(MADV_MERGEABLE), which takes
the mmap_sem for writing. After this change, can't __ksm_enter run
concurrently with the clear_bit above, introducing a different SMP
race condition?
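
To spell out the interleaving I'm worried about, here's a toy
user-space sketch (pure analogy, not kernel code: the two booleans
stand in for the MMF_VM_MERGEABLE bit and the hashed mm_slot, whose
agreement holding the mmap_sem used to guarantee):

#include <stdio.h>
#include <stdbool.h>

static bool mergeable;   /* stands in for MMF_VM_MERGEABLE in mm->flags */
static bool slot_hashed; /* stands in for the mm_slot hash entry */

int main(void)
{
	/* initial state: mm is mergeable and its slot is hashed */
	mergeable = true;
	slot_hashed = true;

	/* scanner: garbage collects the slot of a still-alive mm */
	slot_hashed = false;

	/*
	 * With the mmap_sem already released, madvise(MADV_MERGEABLE)
	 * can run right here: __ksm_enter() hashes a fresh slot and
	 * sets the flag...
	 */
	slot_hashed = true;
	mergeable = true;

	/* ...and then the scanner's delayed clear_bit() wipes it out */
	mergeable = false;

	/*
	 * Invariant broken: slot hashed but flag clear, so the next
	 * MADV_MERGEABLE would call __ksm_enter() again and insert a
	 * second mm_slot for the same mm.
	 */
	printf("slot_hashed=%d mergeable=%d\n", slot_hashed, mergeable);
	return 0;
}

It's sequential on purpose, just to show the ordering; on SMP the
madvise side really can slot into that window.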

> -	} else {
> -		spin_unlock(&ksm_mmlist_lock);
> -		up_read(&mm->mmap_sem);

The strict, obviously safe fix is just to invert the above two:
up_read first, then spin_unlock.
Then I found another instance of this same SMP race condition in
unmerge_and_remove_all_rmap_items() that you didn't fix.
Actually, for the other instance of the bug, the implementation above
that releases the mmap_sem early sounds safe, because in
unmerge_and_remove_all_rmap_items() it's ksm_test_exit() that gates
the clear_bit path, not merely the fact that we didn't find a vma
with VM_MERGEABLE set and we garbage collect the mm_slot while the
"mm" may still be alive. In the exit case the "mm" isn't alive
anymore, so the race with MADV_MERGEABLE shouldn't be able to
materialize.
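
(For reference, ksm_test_exit() only returns true once the last user
of the mm is gone; I'm quoting the helper from memory, so the exact
form in your tree may differ:)

static inline bool ksm_test_exit(struct mm_struct *mm)
{
	/* true only after mmput() dropped the last mm_users reference */
	return atomic_read(&mm->mm_users) == 0;
}

And madvise() can only run from a task that still holds an mm_users
reference, hence no race on that path.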
Could you fix it by just inverting the up_read/spin_unlock order in
the place you patched, and by adding this comment:
	} else {
		/*
		 * up_read(&mm->mmap_sem) first because after
		 * spin_unlock(&ksm_mmlist_lock) run, the "mm" may
		 * already have been freed under us by __ksm_exit()
		 * because the "mm_slot" is still hashed and
		 * ksm_scan.mm_slot doesn't point to it anymore.
		 */
		up_read(&mm->mmap_sem);
		spin_unlock(&ksm_mmlist_lock);
	}
And in unmerge_and_remove_all_rmap_items() do the same thing, except
there you can apply your up_read() early and just drop the up_read()
from the "else" clause; see the sketch below.
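
Something like this is the shape I have in mind for that function's
tail (a sketch from memory, not a tested patch, so double-check it
against your tree; note the "else" keeps its spin_unlock and only
loses the up_read):

	remove_trailing_rmap_items(mm_slot, &mm_slot->rmap_list);
	up_read(&mm->mmap_sem);

	spin_lock(&ksm_mmlist_lock);
	ksm_scan.mm_slot = list_entry(mm_slot->mm_list.next,
					struct mm_slot, mm_list);
	if (ksm_test_exit(mm)) {
		hash_del(&mm_slot->link);
		list_del(&mm_slot->mm_list);
		spin_unlock(&ksm_mmlist_lock);

		free_mm_slot(mm_slot);
		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
		mmdrop(mm);
	} else
		spin_unlock(&ksm_mmlist_lock);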