On Fri, May 06, 2016 at 11:27:36AM +0800, Zhou Chengming wrote:
> @@ -1650,16 +1647,22 @@ next_mm:
>  		 */
>  		hash_del(&slot->link);
>  		list_del(&slot->mm_list);
> -		spin_unlock(&ksm_mmlist_lock);
>  
>  		free_mm_slot(slot);
>  		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
>  		up_read(&mm->mmap_sem);
>  		mmdrop(mm);
>  	} else {
> -		spin_unlock(&ksm_mmlist_lock);
>  		up_read(&mm->mmap_sem);
>  	}
> +	/*
> +	 * up_read(&mm->mmap_sem) first because after
> +	 * spin_unlock(&ksm_mmlist_lock) runs, the "mm" may
> +	 * already have been freed under us by __ksm_exit()
> +	 * because the "mm_slot" is still hashed and
> +	 * ksm_scan.mm_slot doesn't point to it anymore.
> +	 */
> +	spin_unlock(&ksm_mmlist_lock);
>  
>  	/* Repeat until we've completed scanning the whole list */
>  	slot = ksm_scan.mm_slot;
Reviewed-by: Andrea Arcangeli <aarcange@xxxxxxxxxx>

While the above patch is correct, I would prefer that you update it
to keep releasing the ksm_mmlist_lock as before (I'm referring only
to the quoted hunk, not to the other, unquoted one), because it's
"stricter" and it better documents that the lock is only needed up
until:

	hash_del(&slot->link);
	list_del(&slot->mm_list);

It should also be a bit more scalable, but to me this is mostly about
keeping the implicit documentation of the locking by keeping it
strict. The fact that up_read happens right after clear_bit also kept
me from overlooking that it was really needed; the same goes for
releasing the ksm_mmlist_lock right after list_del. I'd like to keep
it there and just invert the order of spin_unlock; up_read in the
else branch, as sketched below.
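
In other words, something along these lines (only a sketch of the
idea, on top of the quoted code, not a tested patch):

 	} else {
-		spin_unlock(&ksm_mmlist_lock);
 		up_read(&mm->mmap_sem);
+		/*
+		 * up_read first: once ksm_mmlist_lock is dropped, the
+		 * "mm" may already have been freed under us by
+		 * __ksm_exit(), because the mm_slot is still hashed
+		 * and ksm_scan.mm_slot doesn't point to it anymore.
+		 */
+		spin_unlock(&ksm_mmlist_lock);
 	}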

That should be enough because after hash_del, get_mm_slot will return
NULL, so the mmdrop cannot happen anymore in __ksm_exit; this is made
more explicit by the code doing mmdrop itself just after up_read.
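
For reference, the relevant logic in __ksm_exit() (paraphrased from
memory of mm/ksm.c of that era, abbreviated, so take the details with
a grain of salt):

	void __ksm_exit(struct mm_struct *mm)
	{
		struct mm_slot *mm_slot;
		int easy_to_free = 0;

		spin_lock(&ksm_mmlist_lock);
		mm_slot = get_mm_slot(mm);	/* NULL once hash_del above has run */
		if (mm_slot && ksm_scan.mm_slot != mm_slot) {
			if (!mm_slot->rmap_list) {
				hash_del(&mm_slot->link);
				list_del(&mm_slot->mm_list);
				easy_to_free = 1;
			} else {
				list_move(&mm_slot->mm_list,
					  &ksm_scan.mm_slot->mm_list);
			}
		}
		spin_unlock(&ksm_mmlist_lock);

		if (easy_to_free) {
			free_mm_slot(mm_slot);
			clear_bit(MMF_VM_MERGEABLE, &mm->flags);
			mmdrop(mm);	/* the mmdrop that can no longer occur */
		} else if (mm_slot) {
			down_write(&mm->mmap_sem);
			up_write(&mm->mmap_sem);
		}
	}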

The SMP race condition is fixed by just the two-liner that reverses
the order of spin_unlock; up_read, without enlarging the spinlock
critical section for the ksm_scan.address == 0 case. That is also why
it wasn't reproducible in practice: the race window is only about one
instruction wide.
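
If I spell out the interleaving with the old ordering (roughly, for
the else branch; the exiting side is abbreviated):

	ksmd scanner				exiting task
	------------				------------
	spin_unlock(&ksm_mmlist_lock);
						__ksm_exit():
						  get_mm_slot(mm) finds the
						  slot still hashed, and
						  ksm_scan.mm_slot has moved
						  on, so it can free the slot
						  and drop the mm reference
						(mm can now be freed)
	up_read(&mm->mmap_sem);  <- touches freed memory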
Thanks!
Andrea