Re: [PATCH v13 19/22] mm: zswap: Per-CPU acomp_ctx resources exist from pool creation to deletion.
From: Yosry Ahmed
Date: Thu Dec 11 2025 - 21:47:33 EST
December 11, 2025 at 5:58 PM, "Sridhar, Kanchana P" <kanchana.p.sridhar@xxxxxxxxx> wrote:
>
> >
> > -----Original Message-----
> > From: Yosry Ahmed
> > Sent: Thursday, December 11, 2025 5:06 PM
> > To: Sridhar, Kanchana P
> > Cc: hannes@xxxxxxxxxxx; usamaarif642@xxxxxxxxx;
> > ying.huang@xxxxxxxxxxxxxxxxx; senozhatsky@xxxxxxxxxxxx;
> > linux-crypto@xxxxxxxxxxxxxxx; davem@xxxxxxxxxxxxx;
> > ebiggers@xxxxxxxxxx; Accardi, Kristen C; Gomes, Vinicius;
> > Feghali, Wajdi K; Gopal, Vinodh
> > Subject: Re: [PATCH v13 19/22] mm: zswap: Per-CPU acomp_ctx resources
> > exist from pool creation to deletion.
> >
> > On Fri, Dec 12, 2025 at 12:55:10AM +0000, Sridhar, Kanchana P wrote:
> >
> > > -----Original Message-----
> > > From: Yosry Ahmed
> > > Sent: Thursday, November 13, 2025 12:24 PM
> > > To: Sridhar, Kanchana P
> > > Cc: chengming.zhou@xxxxxxxxx; Accardi, Kristen C; Gomes, Vinicius;
> > > Feghali, Wajdi K; Gopal, Vinodh
> > > Subject: Re: [PATCH v13 19/22] mm: zswap: Per-CPU acomp_ctx
> > > resources exist from pool creation to deletion.
> > >
> > > On Tue, Nov 04, 2025 at 01:12:32AM -0800, Kanchana P Sridhar wrote:
> > >
> > > The subject can be shortened to:
> > >
> > > "mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool"
> > >
> > > > This patch simplifies the zswap_pool's per-CPU acomp_ctx resource
> > > > management. Similar to the per-CPU acomp_ctx itself, the per-CPU
> > > > acomp_ctx's resources' (acomp, req, buffer) lifetime will also be from
> > > > pool creation to pool deletion. These resources will persist through CPU
> > > > hotplug operations instead of being destroyed/recreated. The
> > > > zswap_cpu_comp_dead() teardown callback has been deleted from the
> > > > call to cpuhp_setup_state_multi(CPUHP_MM_ZSWP_POOL_PREPARE). As a
> > > > result, CPU offline hotplug operations will be no-ops as far as the
> > > > acomp_ctx resources are concerned.
> > >
> > > Currently, per-CPU acomp_ctx are allocated on pool creation and/or CPU
> > > hotplug, and destroyed on pool destruction or CPU hotunplug. This
> > > complicates lifetime management just to save memory while a CPU is
> > > offline, which is not a common situation.
> > >
> > > Simplify lifetime management by allocating per-CPU acomp_ctx once on
> > > pool creation (or CPU hotplug for CPUs onlined later), and keeping them
> > > allocated until the pool is destroyed.
> > >
> > > >
> > > > This commit refactors the code from zswap_cpu_comp_dead() into a
> > > > new function acomp_ctx_dealloc() that is called to clean up acomp_ctx
> > > > resources from:
> > > >
> > > > 1) zswap_cpu_comp_prepare() when an error is encountered,
> > > > 2) zswap_pool_create() when an error is encountered, and
> > > > 3) from zswap_pool_destroy().
> > >
> > >
> > > Refactor cleanup code from zswap_cpu_comp_dead() into
> > > acomp_ctx_dealloc() to be used elsewhere.
> > >
> > > >
> > > > The main benefit of using the CPU hotplug multi state instance startup
> > > > callback to allocate the acomp_ctx resources is that it prevents the
> > > > cores from being offlined until the multi state instance addition call
> > > > returns.
> > > >
> > > > From Documentation/core-api/cpu_hotplug.rst:
> > > >
> > > > "The node list add/remove operations and the callback invocations are
> > > > serialized against CPU hotplug operations."
> > > >
> > > > Furthermore, zswap_[de]compress() cannot contend with
> > > > zswap_cpu_comp_prepare() because:
> > > >
> > > > - During pool creation/deletion, the pool is not in the zswap_pools
> > > > list.
> > > >
> > > > - During CPU hot[un]plug, the CPU is not yet online, as Yosry pointed
> > > > out. zswap_cpu_comp_prepare() will be run on a control CPU,
> > > > since CPUHP_MM_ZSWP_POOL_PREPARE is in the PREPARE section of
> > > > "enum cpuhp_state". Thanks Yosry for sharing this observation!
> > > >
> > > > In both these cases, any recursions into zswap reclaim from
> > > > zswap_cpu_comp_prepare() will be handled by the old pool.
> > > >
> > > > The above two observations enable the following simplifications:
> > > >
> > > > 1) zswap_cpu_comp_prepare(): CPU cannot be offlined. Reclaim cannot
> > > > use the pool. Considerations for mutex init/locking and handling
> > > > subsequent CPU hotplug online-offline-online:
> > > >
> > > > Should we lock the mutex of the current CPU's acomp_ctx from start
> > > > to end? It doesn't seem like this is required. The multi state
> > > > instance add/remove operations acquire a "cpuhp_state_mutex" before
> > > > proceeding, hence they are serialized against CPU hotplug operations.
> > > >
> > > > If the process gets migrated while zswap_cpu_comp_prepare() is
> > > > running, it will complete on the new CPU. In case of failures, we
> > > > pass the acomp_ctx pointer obtained at the start of
> > > > zswap_cpu_comp_prepare() to acomp_ctx_dealloc(), which, at worst,
> > > > can likewise be migrated to another CPU. There appear to be no
> > > > contention scenarios that might cause inconsistent values of the
> > > > acomp_ctx's members. Hence, there is no need for
> > > > mutex_lock(&acomp_ctx->mutex) in zswap_cpu_comp_prepare().
> > > >
> > > > Since the pool is not yet on zswap_pools list, we don't need to
> > > > initialize the per-CPU acomp_ctx mutex in zswap_pool_create(). This
> > > > has been restored to occur in zswap_cpu_comp_prepare().
> > > >
> > > > zswap_cpu_comp_prepare() checks upfront if acomp_ctx->acomp is
> > > > valid. If so, it returns success. This should handle any CPU
> > > > hotplug online-offline transitions after pool creation is done.
> > > >
> > > > 2) CPU offline vis-a-vis zswap ops: Let's suppose the process is
> > > > migrated to another CPU before the current CPU is dysfunctional. If
> > > > zswap_[de]compress() holds the acomp_ctx->mutex lock of the offlined
> > > > CPU, that mutex will be released once it completes on the new
> > > > CPU. Since there is no teardown callback, there is no possibility of
> > > > UAF.
> > > >
> > > > 3) Pool creation/deletion and process migration to another CPU:
> > > >
> > > > - During pool creation/deletion, the pool is not in the zswap_pools
> > > > list. Hence it cannot contend with zswap ops on that CPU. However,
> > > > the process can get migrated.
> > > >
> > > > Pool creation --> zswap_cpu_comp_prepare()
> > > > --> process migrated:
> > > > * CPU offline: no-op.
> > > > * zswap_cpu_comp_prepare() continues
> > > > to run on the new CPU to finish
> > > > allocating acomp_ctx resources for
> > > > the offlined CPU.
> > > >
> > > > Pool deletion --> acomp_ctx_dealloc()
> > > > --> process migrated:
> > > > * CPU offline: no-op.
> > > > * acomp_ctx_dealloc() continues
> > > > to run on the new CPU to finish
> > > > de-allocating acomp_ctx resources
> > > > for the offlined CPU.
> > > >
> > > > 4) Pool deletion vis-a-vis CPU onlining:
> > > > The call to cpuhp_state_remove_instance() cannot race with
> > > > zswap_cpu_comp_prepare() because of hotplug synchronization.
> > > >
> > > > This patch deletes acomp_ctx_get_cpu_lock()/acomp_ctx_put_unlock().
> > > > Instead, zswap_[de]compress() directly call
> > > > mutex_[un]lock(&acomp_ctx->mutex).
> > >
> > > I am not sure why all of this is needed. We should just describe why
> > > it's safe to drop holding the mutex while initializing per-CPU
> > > acomp_ctx:
> > >
> > > It is no longer possible for CPU hotplug to race against allocation or
> > > usage of per-CPU acomp_ctx, as they are only allocated once before the
> > > pool can be used, and remain allocated as long as the pool is used.
> > > Hence, stop holding the lock during acomp_ctx initialization, and drop
> > > acomp_ctx_get_cpu_lock()/acomp_ctx_put_unlock().
> >
> > Hi Yosry,
> >
> > Thanks for these comments. IIRC, there was quite a bit of technical
> > discussion analyzing various what-ifs that we were able to answer
> > adequately. The above is a nice summary of the outcome; however,
> > I think it would help, the next time this topic is revisited, to have
> > a log of the "why" and of how the race/UAF scenarios are considered
> > and addressed by the solution. Does this sound OK?
> >
> > How about using the summarized version in the commit log and linking to
> > the thread with the discussion?
> >
> Capturing just enough detail from those discussion threads in this
> commit log seems valuable, as opposed to relying on long, deeply
> indented email threads as the sole source of context.
>
>
If you feel strongly about it, then sure, but try to keep it as concise as possible, thanks.