On Sun, Apr 02, 2023 at 06:19:20AM +0800, Rongwei Wang wrote:
> Without this modification, a core will wait (mostly) on
> 'swap_info_struct->lock' after completing
> 'del_from_avail_list(p)'. Immediately, other cores may soon
> call 'add_to_avail_list()' and add the same object again once
> they acquire the lock released by the former core. This is not
> the desired result, but it does happen. The case can be
> described as below:

This feels like a very verbose way of saying:
"The si->lock must be held when deleting the si from the
available list. Otherwise, another thread can re-add the
si to the available list, which can lead to memory corruption.
The only place we have found where this happens is in the
swapoff path."
> +++ b/mm/swapfile.c
> @@ -2610,8 +2610,12 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
>  		spin_unlock(&swap_lock);
>  		goto out_dput;
>  	}
> -	del_from_avail_list(p);
> +	/*
> +	 * Here lock is used to protect deleting and SWP_WRITEOK clearing
> +	 * can be seen concurrently.
> +	 */
>  	spin_lock(&p->lock);
> +	del_from_avail_list(p);
>  	if (p->prio < 0) {
>  		struct swap_info_struct *si = p;
>  		int nid;
> --
> 2.27.0

This comment isn't necessary. But I would add a lockdep assert inside
__del_from_avail_list() that p->lock is held.
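
Something like the below, say (an untested sketch on my side; it assumes
__del_from_avail_list() keeps its current body, and uses the stock
lockdep_assert_held() helper to spell the assert):

	static void __del_from_avail_list(struct swap_info_struct *p)
	{
		int nid;

		/* Catch callers that delete without holding si->lock. */
		lockdep_assert_held(&p->lock);
		for_each_node(nid)
			plist_del(&p->avail_lists[nid], &swap_avail_heads[nid]);
	}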

Rongwei Wang replied:

It looks better than mine. Sorry for my confusing description; it will
be fixed in the next version.