RE: [PATCH v13 22/22] mm: zswap: Batched zswap_compress() with compress batching of large folios.
From: Sridhar, Kanchana P
Date: Sun Dec 07 2025 - 23:17:42 EST
> -----Original Message-----
> From: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
> Sent: Sunday, December 7, 2025 7:24 PM
> To: Sridhar, Kanchana P <kanchana.p.sridhar@xxxxxxxxx>
> Cc: Yosry Ahmed <yosry.ahmed@xxxxxxxxx>; SeongJae Park <sj@xxxxxxxxxx>;
> linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx; hannes@xxxxxxxxxxx;
> nphamcs@xxxxxxxxx; chengming.zhou@xxxxxxxxx;
> usamaarif642@xxxxxxxxx; ryan.roberts@xxxxxxx; 21cnbao@xxxxxxxxx;
> ying.huang@xxxxxxxxxxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx;
> senozhatsky@xxxxxxxxxxxx; kasong@xxxxxxxxxxx; linux-
> crypto@xxxxxxxxxxxxxxx; davem@xxxxxxxxxxxxx; clabbe@xxxxxxxxxxxx;
> ardb@xxxxxxxxxx; ebiggers@xxxxxxxxxx; surenb@xxxxxxxxxx; Accardi,
> Kristen C <kristen.c.accardi@xxxxxxxxx>; Gomes, Vinicius
> <vinicius.gomes@xxxxxxxxx>; Feghali, Wajdi K <wajdi.k.feghali@xxxxxxxxx>;
> Gopal, Vinodh <vinodh.gopal@xxxxxxxxx>
> Subject: Re: [PATCH v13 22/22] mm: zswap: Batched zswap_compress() with
> compress batching of large folios.
>
> On Wed, Nov 26, 2025 at 08:05:40PM +0000, Sridhar, Kanchana P wrote:
> >
> > Herbert, to make sure I understand, will you be implementing all of these
> > features in crypto_acomp for software compressors? I would appreciate it
> > if you can clarify:
> >
> > 1) Error & compressed length propagation to the dst sg->length only for
> > non-batching compressors.
> > a) For batching compressors, this wouldn't apply since errors could occur
> > for any page in the batch, and the first page (dst sg->length) could have
> > successfully compressed.
>
> This would be the first step.
Hi Herbert,
Thanks for these clarifications! This sounds like a great first step.
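Just to illustrate my reading of point (1) with a toy userspace model (all
names here are hypothetical stand-ins, not the real crypto_acomp or
scatterlist API): for a non-batching compressor, the return code can carry
the error while the single dst entry's length carries the compressed size on
success; for a batch, the first entry's length alone cannot encode a failure
of a later page, which is why (1a) excludes batching compressors.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical simplified stand-in for a dst scatterlist entry. */
struct toy_sg {
	unsigned int length;	/* on success: compressed length of the page */
};

/*
 * Fake single-page compressor: the error travels via the return code,
 * and the compressed length via the dst entry, mirroring the proposed
 * convention for non-batching compressors.
 */
static int toy_compress_one(int incompressible, struct toy_sg *dst)
{
	if (incompressible)
		return -EINVAL;	/* page failed; dst->length is not valid */
	dst->length = 2048;	/* pretend the 4K page compressed to 2K */
	return 0;
}
```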
>
> > 2) Will you also be handling the case where zswap can send an SG list batch
> > with multiple pages to a non-batching compressor, and the crypto_acomp
> > API will internally compress each page sequentially, propagate
> > errors/compress lengths before returning?
> >
> > If so, this would really standardize the code in zswap for batching and
> > non-batching compressors.
>
> Yes this will be done as the next step. My understanding is that
> your patch-set doesn't require this yet as all non-batching compressors
> will have a batch size of 1.
I see. So the way my patch-set standardizes batching in zswap_compress()
is to call it with a batch of 8 pages, regardless of whether the compressor
supports batching. Within zswap_compress(), I presently iterate through the
batch in "batch_size" steps: for non-batching compressors, whose batch size
is 1, each page is compressed sequentially; for batching compressors, the
loop runs just once and the whole batch is compressed in a single call to
crypto_acomp_compress().
Once the next step is ready, I will no longer need this for loop that
iterates over the batch in "batch_size" increments. If Yosry and Nhat are
OK with staging it as you've described, this should all be good.
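For concreteness, here is a minimal userspace sketch of the loop described
above (the compress_batch() helper and call counting are hypothetical
simulations, not the real crypto_acomp interface): with batch_size == 1 the
loop makes 8 sequential calls, and with batch_size == 8 it makes one call
covering the whole batch.

```c
#include <assert.h>
#include <stddef.h>

#define ZSWAP_MAX_BATCH 8	/* pages handed to zswap_compress() at once */

static int compress_calls;	/* counts simulated compress invocations */

/* Hypothetical stand-in for one crypto_acomp_compress() invocation. */
static int compress_batch(size_t first, size_t nr)
{
	(void)first;
	(void)nr;
	compress_calls++;
	return 0;	/* pretend every page compressed successfully */
}

/*
 * Sketch of the zswap_compress() iteration: walk the 8-page batch in
 * "batch_size" steps. batch_size == 1 models a non-batching compressor;
 * batch_size == 8 models a batching compressor.
 */
static int zswap_compress_sketch(size_t nr_pages, size_t batch_size)
{
	size_t i;

	compress_calls = 0;
	for (i = 0; i < nr_pages; i += batch_size) {
		size_t nr = nr_pages - i < batch_size ?
			    nr_pages - i : batch_size;
		int err = compress_batch(i, nr);

		if (err)
			return err;	/* stop on the first failed sub-batch */
	}
	return 0;
}
```

Once the acomp layer internally handles sequential processing for
non-batching compressors, this outer stepping collapses to a single call.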
Also, I have incorporated your suggestion to implement batching within
iaa_crypto in a manner that adheres to the acomp API. I was planning to
start on an updated patch-set incorporating this. Please let me know whether
it would be better to wait and sync with the first step you are working on
before submitting the updated patch-set. Thanks for the collaboration!
>
> But yes this certainly will be extended, not just with sequential
> processing, but we could also use pcrypt/cryptd to parallelise the
> compression across CPUs.
Sounds great!
Best regards,
Kanchana
>
> Cheers,
> --
> Email: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt