Re: [PATCH v4 1/3] mm, swap: speed up hibernation allocation and writeout
From: YoungJun Park
Date: Tue Feb 24 2026 - 02:48:54 EST
On Mon, Feb 16, 2026 at 10:58:02PM +0800, Kairui Song via B4 Relay wrote:
> From: Kairui Song <kasong@xxxxxxxxxxx>
>
> Since commit 0ff67f990bd4 ("mm, swap: remove swap slot cache"),
> hibernation has been using the swap slot slow allocation path for
> simplification, which turns out might cause regression for some
> devices because the allocator now rotates clusters too often, leading to
> slower allocation and more random distribution of data.
...
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index c6863ff7152c..32e0e7545ab8 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1926,8 +1926,9 @@ void swap_put_entries_direct(swp_entry_t entry, int nr)
> /* Allocate a slot for hibernation */
> swp_entry_t swap_alloc_hibernation_slot(int type)
> {
> - struct swap_info_struct *si = swap_type_to_info(type);
> - unsigned long offset;
> + struct swap_info_struct *pcp_si, *si = swap_type_to_info(type);
> + unsigned long pcp_offset, offset = SWAP_ENTRY_INVALID;
> + struct swap_cluster_info *ci;
> swp_entry_t entry = {0};
>
> if (!si)
> @@ -1937,11 +1938,21 @@ swp_entry_t swap_alloc_hibernation_slot(int type)
> if (get_swap_device_info(si)) {
Hi Kairui :)
Reading through the patch, I have some review comments and thoughts on the
hibernation slot allocation logic that I'd like to discuss. (Apologies for
the length; several ideas came to mind.)
First, regarding the race with swapoff and refcounting.
The code identifies the swap type before allocation, so a swapoff could
occur in between. It seems safer to acquire the reference when identifying
the type (e.g., find_first_swap). Also, instead of repeating get/put for
every slot (allocation and free), could we hold the reference once during
the initial lookup and release it after the image load? This avoids
overhead since swapoff is effectively blocked once hibernation slots are
allocated.
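To make this concrete, here is a rough, untested pseudocode sketch of what
I mean (the holder variable and the two wrapper functions are made up;
find_first_swap(), swap_type_to_info(), get_swap_device_info() and
put_swap_device() are the existing helpers):

```c
/* hypothetical holder pinning the device for the whole hibernation run */
static struct swap_info_struct *hibernate_si;

int hibernate_acquire_swap(dev_t *dev)
{
	int type = find_first_swap(dev);
	struct swap_info_struct *si = swap_type_to_info(type);

	/* take the reference here, while resolving the type */
	if (!si || !get_swap_device_info(si))
		return -ENODEV;
	hibernate_si = si;
	return type;
}

void hibernate_release_swap(void)
{
	/* dropped once, after the image is fully written/loaded */
	put_swap_device(hibernate_si);
	hibernate_si = NULL;
}
```

With something like this, swap_alloc_hibernation_slot() and the free path
would not need a get/put pair per slot.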
> if (si->flags & SWP_WRITEOK) {
> /*
> - * Grab the local lock to be compliant
> - * with swap table allocation.
> + * Try the local cluster first if it matches the device. If
> + * not, try grab a new cluster and override local cluster.
> */
> local_lock(&percpu_swap_cluster.lock);
Second, regarding local_lock:
It seems mandatory now because the lock context is hard to distinguish
during swap table allocation (e.g., the allocation path assumes a
local-locked context rather than one where GFP_KERNEL would be safe).
Have you considered modifying the swap table allocation logic to handle
this case specifically? That might let us avoid holding the local_lock,
especially when the device is not SWP_SOLIDSTATE.
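Roughly what I have in mind (untested pseudocode; it assumes the swap
table allocation could be taught to work outside a local-locked context,
which is the hypothetical part):

```c
if (si->flags & SWP_SOLIDSTATE) {
	/* per-cpu cluster fast path really needs the local lock */
	local_lock(&percpu_swap_cluster.lock);
	offset = cluster_alloc_swap_entry(si, NULL);
	local_unlock(&percpu_swap_cluster.lock);
} else {
	/*
	 * Rotational device: no per-cpu cluster state to protect.
	 * Requires the swap table allocation below to cope with
	 * not being under the local lock (hypothetical change).
	 */
	offset = cluster_alloc_swap_entry(si, NULL);
}
```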
> - offset = cluster_alloc_swap_entry(si, NULL);
> + pcp_si = this_cpu_read(percpu_swap_cluster.si[0]);
> + pcp_offset = this_cpu_read(percpu_swap_cluster.offset[0]);
> + if (pcp_si == si && pcp_offset) {
> + ci = swap_cluster_lock(si, pcp_offset);
> + if (cluster_is_usable(ci, 0))
> + offset = alloc_swap_scan_cluster(si, ci, NULL, pcp_offset);
> + else
> + swap_cluster_unlock(ci);
> + }
> + if (!offset)
> + offset = cluster_alloc_swap_entry(si, NULL);
> local_unlock(&percpu_swap_cluster.lock);
> if (offset)
> entry = swp_entry(si->type, offset);
Third, regarding cluster allocation:
1. If hibernation targets a lower-priority device, the per-cpu cluster
usage might cause priority inversion (though minimal).
2. Have you considered treating clusters as a global resource for this
case? For instance, caching next_offset in si (using a union with
global_cluster, or a new field), or letting the allocator compute the
next value directly, rather than splitting clusters per CPU.
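As a concrete (hypothetical) illustration of point 2, the device itself
could carry a single shared allocation hint, so hibernation never touches
the per-cpu cluster cache at all (field name made up):

```c
/* hypothetical field in swap_info_struct, could union with global_cluster */
	unsigned int hibernate_offset;	/* next-slot hint, hibernation only */

/* allocator side, under the cluster lock: */
offset = alloc_swap_scan_cluster(si, ci, NULL, si->hibernate_offset);
if (offset)
	si->hibernate_offset = offset + 1;
```

Since hibernation is single-threaded, a plain per-device hint like this
should keep the allocation sequential without any per-cpu state.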
Finally, regarding readahead and freeing:
Hibernation slots might be read during cluster-based readahead. Can we
avoid this (e.g., by checking for a NULL fake shadow entry or adding a specific
check for hibernation slots)? If so, we could also avoid triggering
try_to_reclaim when freeing these slots.
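For instance (pseudocode; swap_slot_is_hibernation() is a made-up helper
that could be implemented via the NULL fake shadow entry check mentioned
above), the readahead loop could simply skip such slots:

```c
for (offset = start; offset <= end; offset++) {
	/* hibernation slots carry no shadow entry; don't read them ahead */
	if (swap_slot_is_hibernation(si, offset))
		continue;
	/* ... normal __read_swap_cache_async() path ... */
}
```

The same predicate could then short-circuit the reclaim attempt when these
slots are freed.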
Thanks for your work!
Youngjun Park