[RFC PATCH v2 4/4] mm/zswap: defer zs_free() in zswap_invalidate() path

From: Wenchao Hao

Date: Tue Apr 21 2026 - 08:20:25 EST


zswap_invalidate() is called on the same process exit path as
zram_slot_free_notify(). The zswap_entry_free() it calls internally
performs zs_free(), which is expensive due to zsmalloc's internal
locking. Unlike zram, which has a trylock fallback, zswap_invalidate()
always frees synchronously, so the latency impact is potentially worse.
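For context, the trylock fallback on the zram side is roughly the
following (paraphrased from mainline drivers/block/zram/zram_drv.c;
exact details vary by kernel version). When the slot lock is contended,
the free is skipped and counted as a miss instead of blocking the
caller:

static void zram_slot_free_notify(struct block_device *bdev,
				  unsigned long index)
{
	struct zram *zram = bdev->bd_disk->private_data;

	atomic64_inc(&zram->stats.notify_free);
	if (!zram_slot_trylock(zram, index)) {
		/* Contended: skip the free rather than stall the exit path. */
		atomic64_inc(&zram->stats.miss_free);
		return;
	}

	zram_free_page(zram, index);
	zram_slot_unlock(zram, index);
}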

As in zram, the expensive zs_free() here blocks the process exit path
and delays overall memory release. In addition, zswap_entry_free()
performs extra work beyond zs_free(): list_lru_del() (which takes its
own spinlock), obj_cgroup accounting, and a kmem_cache_free() of the
entry itself.
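For reference, the pre-patch zswap_entry_free() is reproduced below
with those costs annotated (reconstructed from the hunks in this patch
plus mainline mm/zswap.c; the annotations are mine):

static void zswap_entry_free(struct zswap_entry *entry)
{
	/* takes the list_lru per-node spinlock via list_lru_del() */
	zswap_lru_del(&zswap_list_lru, entry);
	/* the expensive part: zsmalloc internal locking, zspage teardown */
	zs_free(entry->pool->zs_pool, entry->handle);
	zswap_pool_put(entry->pool);
	if (entry->objcg) {
		/* obj_cgroup accounting */
		obj_cgroup_uncharge_zswap(entry->objcg, entry->length);
		obj_cgroup_put(entry->objcg);
	}
	/* kmem_cache_free() of the entry itself */
	zswap_entry_cache_free(entry);
	atomic_long_dec(&zswap_stored_pages);
}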

Use zs_free_deferred() in the zswap_invalidate() path to defer the
expensive zsmalloc handle freeing to a workqueue, allowing the exit
path to release memory faster. All other callers (zswap_load(),
zswap_writeback_entry(), and the zswap_store() error paths) run in
process context and continue to use the synchronous zs_free().
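zs_free_deferred() itself is added earlier in this series; for readers
reviewing this patch in isolation, a minimal sketch of the shape such a
helper could take is below. It is illustrative only: the struct name,
the pool fields (free_list, free_work), and the GFP_ATOMIC fallback are
my assumptions, not the actual implementation.

struct zs_deferred_handle {
	struct llist_node node;
	unsigned long handle;
};

void zs_free_deferred(struct zs_pool *pool, unsigned long handle)
{
	struct zs_deferred_handle *d = kmalloc(sizeof(*d), GFP_ATOMIC);

	if (!d) {
		/* On allocation failure, fall back to a synchronous free. */
		zs_free(pool, handle);
		return;
	}

	d->handle = handle;
	llist_add(&d->node, &pool->free_list);
	schedule_work(&pool->free_work);
}

static void zs_deferred_free_work(struct work_struct *work)
{
	struct zs_pool *pool = container_of(work, struct zs_pool, free_work);
	struct llist_node *list = llist_del_all(&pool->free_list);
	struct zs_deferred_handle *d, *tmp;

	/* Runs in process context, so the zs_free() cost is off the exit path. */
	llist_for_each_entry_safe(d, tmp, list, node) {
		zs_free(pool, d->handle);
		kfree(d);
	}
}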

Signed-off-by: Wenchao Hao <haowenchao@xxxxxxxxxx>
---
mm/zswap.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 0823cadd02b6..7291f6deb5b6 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -713,11 +713,16 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
 /*
  * Carries out the common pattern of freeing an entry's zsmalloc allocation,
  * freeing the entry itself, and decrementing the number of stored pages.
+ * When @deferred is true, the zsmalloc handle is queued for async freeing
+ * instead of being freed immediately.
  */
-static void zswap_entry_free(struct zswap_entry *entry)
+static void __zswap_entry_free(struct zswap_entry *entry, bool deferred)
 {
 	zswap_lru_del(&zswap_list_lru, entry);
-	zs_free(entry->pool->zs_pool, entry->handle);
+	if (deferred)
+		zs_free_deferred(entry->pool->zs_pool, entry->handle);
+	else
+		zs_free(entry->pool->zs_pool, entry->handle);
 	zswap_pool_put(entry->pool);
 	if (entry->objcg) {
 		obj_cgroup_uncharge_zswap(entry->objcg, entry->length);
@@ -729,6 +734,11 @@ static void zswap_entry_free(struct zswap_entry *entry)
 	atomic_long_dec(&zswap_stored_pages);
 }
 
+static void zswap_entry_free(struct zswap_entry *entry)
+{
+	__zswap_entry_free(entry, false);
+}
+
 /*********************************
 * compressed storage functions
 **********************************/
@@ -1655,7 +1665,7 @@ void zswap_invalidate(swp_entry_t swp)

 	entry = xa_erase(tree, offset);
 	if (entry)
-		zswap_entry_free(entry);
+		__zswap_entry_free(entry, true);
 }
 
 int zswap_swapon(int type, unsigned long nr_pages)
--
2.34.1