RE: [PATCH v13 22/22] mm: zswap: Batched zswap_compress() with compress batching of large folios.

From: Sridhar, Kanchana P

Date: Fri Dec 19 2025 - 14:03:19 EST



> -----Original Message-----
> From: Yosry Ahmed <yosry.ahmed@xxxxxxxxx>
> Sent: Friday, December 19, 2025 7:26 AM
> To: Sridhar, Kanchana P <kanchana.p.sridhar@xxxxxxxxx>
> Cc: linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> hannes@xxxxxxxxxxx; nphamcs@xxxxxxxxx; chengming.zhou@xxxxxxxxx;
> usamaarif642@xxxxxxxxx; ryan.roberts@xxxxxxx; 21cnbao@xxxxxxxxx;
> ying.huang@xxxxxxxxxxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx;
> senozhatsky@xxxxxxxxxxxx; sj@xxxxxxxxxx; kasong@xxxxxxxxxxx;
> linux-crypto@xxxxxxxxxxxxxxx; herbert@xxxxxxxxxxxxxxxxxxx;
> davem@xxxxxxxxxxxxx; clabbe@xxxxxxxxxxxx; ardb@xxxxxxxxxx;
> ebiggers@xxxxxxxxxx; surenb@xxxxxxxxxx; Accardi, Kristen C
> <kristen.c.accardi@xxxxxxxxx>; Gomes, Vinicius <vinicius.gomes@xxxxxxxxx>;
> Feghali, Wajdi K <wajdi.k.feghali@xxxxxxxxx>; Gopal, Vinodh
> <vinodh.gopal@xxxxxxxxx>
> Subject: Re: [PATCH v13 22/22] mm: zswap: Batched zswap_compress() with
> compress batching of large folios.
>
> On Fri, Dec 19, 2025 at 02:29:15AM +0000, Sridhar, Kanchana P wrote:
> >
> > > -----Original Message-----
> > > From: Yosry Ahmed <yosry.ahmed@xxxxxxxxx>
> > > Sent: Thursday, November 13, 2025 4:46 PM
> > > To: Sridhar, Kanchana P <kanchana.p.sridhar@xxxxxxxxx>
> > > Cc: linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> > > hannes@xxxxxxxxxxx; nphamcs@xxxxxxxxx; chengming.zhou@xxxxxxxxx;
> > > usamaarif642@xxxxxxxxx; ryan.roberts@xxxxxxx; 21cnbao@xxxxxxxxx;
> > > ying.huang@xxxxxxxxxxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx;
> > > senozhatsky@xxxxxxxxxxxx; sj@xxxxxxxxxx; kasong@xxxxxxxxxxx;
> > > linux-crypto@xxxxxxxxxxxxxxx; herbert@xxxxxxxxxxxxxxxxxxx;
> > > davem@xxxxxxxxxxxxx; clabbe@xxxxxxxxxxxx; ardb@xxxxxxxxxx;
> > > ebiggers@xxxxxxxxxx; surenb@xxxxxxxxxx; Accardi, Kristen C
> > > <kristen.c.accardi@xxxxxxxxx>; Gomes, Vinicius <vinicius.gomes@xxxxxxxxx>;
> > > Feghali, Wajdi K <wajdi.k.feghali@xxxxxxxxx>; Gopal, Vinodh
> > > <vinodh.gopal@xxxxxxxxx>
> > > Subject: Re: [PATCH v13 22/22] mm: zswap: Batched zswap_compress() with
> > > compress batching of large folios.
> > [...]
> > > > > > Architectural considerations for the zswap batching framework:
> > > > > > ===============================================================
> > > > > > We have designed the zswap batching framework to be
> > > > > > hardware-agnostic. It has no dependencies on Intel-specific
> > > > > > features and can be leveraged by any hardware accelerator or
> > > > > > software-based compressor. In other words, the framework is open
> > > > > > and inclusive by design.
> > > > > > Other ongoing work that can use batching:
> > > > > > =========================================
> > > > > > This patch-series demonstrates the performance benefits of
> > > > > > compress batching when used in zswap_store() of large folios.
> > > > > > shrink_folio_list() "reclaim batching" of any-order folios is
> > > > > > the major next work that uses the zswap compress batching
> > > > > > framework: our testing of kernel_compilation with writeback and
> > > > > > the zswap shrinker indicates 10X fewer pages get written back
> > > > > > when we reclaim 32 folios as a batch, as compared to one folio
> > > > > > at a time: this is with deflate-iaa and with zstd. We expect to
> > > > > > submit a patch-series with this data and the resulting
> > > > > > performance improvements shortly. Reclaim batching relieves
> > > > > > memory pressure faster than reclaiming one folio at a time,
> > > > > > hence alleviates the need to scan slab memory for writeback.
> > > > > >
> > > > > > Nhat has given ideas on using batching with the ongoing
> > > > > > kcompressd work, as well as beneficially using decompression
> > > > > > batching & block IO batching to improve zswap writeback
> > > > > > efficiency.
> > > > > >
> > > > > > Experiments that combine zswap compress batching, reclaim
> > > > > > batching, swapin_readahead() decompression batching of
> > > > > > prefetched pages, and writeback batching show that 0 pages are
> > > > > > written back with deflate-iaa and zstd. For comparison, the
> > > > > > baselines for these compressors see 200K-800K pages written to
> > > > > > disk (kernel compilation 'allmod' config).
> > > > > > To summarize, these are future clients of the batching framework:
> > > > > >
> > > > > > - shrink_folio_list() reclaim batching of multiple folios:
> > > > > >   Implemented, will submit patch-series.
> > > > > > - zswap writeback with decompress batching:
> > > > > >   Implemented, will submit patch-series.
> > > > > > - zram:
> > > > > >   Implemented, will submit patch-series.
> > > > > > - kcompressd:
> > > > > >   Not yet implemented.
> > > > > > - file systems:
> > > > > >   Not yet implemented.
> > > > > > - swapin_readahead() decompression batching of prefetched pages:
> > > > > >   Implemented, will submit patch-series.
> > > > > >
> > > > > > Additionally, any place we have folios that need to be
> > > > > > compressed can potentially be parallelized.
> >
> > [...]
> >
> > > For example, you should remove mentions of ongoing work and future
> > > work, simply because things change and they may not land. Just
> > > briefly mentioning that there are future use cases (with maybe an
> > > example) is sufficient.
> >
> > Hi Yosry,
> >
> > The mentions of ongoing/future work were included as per Andrew's
> > suggestion. Hence, I would like to keep these in the commit log. Hope
> > this is Ok with you?
>
> We can keep them, but not in the detail they are currently in, and
> avoiding mentioning what is implemented or not implemented yet because
> it's not very relevant to the patch imo.
>
> So maybe focus on the fact that the compression batching can be used for
> other use cases like batching decompression in zswap writeback, batching
> compression in zram, batch compression of different folios during
> reclaim, etc -- without going too much into detail because these details
> will probably change when these extensions are proposed.

Sure, this sounds good, thanks!

>
>
> >
> > Thanks,
> > Kanchana
> >