Re: [PATCH 0/2] mm/swap: fix missing locks in swap_reclaim_work()

From: Chris Li

Date: Mon Mar 09 2026 - 01:50:41 EST


On Fri, Mar 6, 2026 at 3:51 AM Hui Zhu <hui.zhu@xxxxxxxxx> wrote:
>
> From: Hui Zhu <zhuhui@xxxxxxxxxx>
>
> swap_cluster_alloc_table() assumes that the caller holds the
> following locks:
>   - ci->lock
>   - percpu_swap_cluster.lock
>   - si->global_cluster_lock (required for non-SWP_SOLIDSTATE devices)
>
> There are five call paths leading to swap_cluster_alloc_table():
>
> 1) swap_alloc_hibernation_slot -> cluster_alloc_swap_entry
>    -> alloc_swap_scan_list -> isolate_lock_cluster
>    -> swap_cluster_alloc_table
>
> 2) swap_alloc_slow -> cluster_alloc_swap_entry
>    -> alloc_swap_scan_list -> isolate_lock_cluster
>    -> swap_cluster_alloc_table
>
> 3) swap_alloc_hibernation_slot -> cluster_alloc_swap_entry
>    -> swap_reclaim_full_clusters -> isolate_lock_cluster
>    -> swap_cluster_alloc_table
>
> 4) swap_alloc_slow -> cluster_alloc_swap_entry
>    -> swap_reclaim_full_clusters -> isolate_lock_cluster
>    -> swap_cluster_alloc_table
>
> 5) swap_reclaim_work -> swap_reclaim_full_clusters
>    -> isolate_lock_cluster -> swap_cluster_alloc_table
>
> The first four paths correctly acquire the necessary locks before
> calling swap_cluster_alloc_table(), but the swap_reclaim_work() path
> fails to acquire percpu_swap_cluster.lock and, for non-SWP_SOLIDSTATE
> devices, si->global_cluster_lock.
>
> The first patch ensures swap_reclaim_work() correctly acquires
> percpu_swap_cluster.lock and si->global_cluster_lock before calling
> swap_reclaim_full_clusters(). Without these locks, the preconditions
> for swap_cluster_alloc_table() are not met.
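>
> A minimal sketch of the idea, assuming percpu_swap_cluster.lock is a
> local_lock, si->global_cluster_lock is a spinlock, and the work item
> is embedded in swap_info_struct as reclaim_work:
>
> static void swap_reclaim_work(struct work_struct *work)
> {
> 	struct swap_info_struct *si;
>
> 	si = container_of(work, struct swap_info_struct, reclaim_work);
>
> 	/* Hold the same locks the allocation paths hold. */
> 	local_lock(&percpu_swap_cluster.lock);
> 	if (!(si->flags & SWP_SOLIDSTATE))
> 		spin_lock(&si->global_cluster_lock);
>
> 	swap_reclaim_full_clusters(si, true);
>
> 	if (!(si->flags & SWP_SOLIDSTATE))
> 		spin_unlock(&si->global_cluster_lock);
> 	local_unlock(&percpu_swap_cluster.lock);
> }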
>
> The second patch adds lockdep assertions in swap_cluster_alloc_table()
> to help catch such locking inconsistencies early.
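>
> For reference, the assertions at the top of
> swap_cluster_alloc_table() could look roughly like this (a sketch
> only; the lock types are the same assumptions as above):
>
> 	lockdep_assert_held(&ci->lock);
> 	lockdep_assert_held(this_cpu_ptr(&percpu_swap_cluster.lock));
> 	if (!(si->flags & SWP_SOLIDSTATE))
> 		lockdep_assert_held(&si->global_cluster_lock);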
>
> I tried to reproduce this naturally, but the swap_reclaim_work path
> rarely hits the !cluster_table_is_alloced(found) condition. To verify
> the fix, I used GDB to force found->table to NULL, which triggered
> the following warning due to the missing locks:

As YoungJun and Kairui pointed out, isolate_lock_cluster() takes a
cluster from the full cluster list and returns it with ci->lock
already held. A cluster on that list should be full, with all of its
entries allocated, so its cluster table should already be allocated,
and ci->lock prevents the cluster from changing behind us. Forcing
the table to NULL in GDB is therefore not a valid way to demonstrate
the problem. If you still believe there is a bug, please demonstrate
it by providing a backtrace of the execution flow and explaining how
a race between different threads/CPUs causes it.
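To illustrate the invariant (a sketch only; the list and helper names
follow the cover letter and may not match the code exactly):

	ci = isolate_lock_cluster(si, &si->full_clusters);
	if (ci) {
		/*
		 * ci->lock is held from here on, so ci->table cannot
		 * change behind us. A cluster taken off the full list
		 * has every entry allocated, so its table must already
		 * exist and the allocation path should be unreachable.
		 */
		VM_WARN_ON_ONCE(!cluster_table_is_alloced(ci));
	}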
For debugging, you can also introduce arbitrary synchronization (e.g.
a spinlock or a forced delay) in the allocation code path to make the
threads wait for each other, especially if the race window is too
narrow to hit naturally.
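Something along these lines, for example (purely illustrative; the
toggle and its placement are made up):

	/* Hypothetical debug aid: hold the suspected window open so a
	 * second thread can be steered into it deterministically. */
	if (READ_ONCE(race_debug_enabled))
		mdelay(100);	/* widen the race window */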

Chris