RE: [PATCH v7 00/15] zswap IAA compress batching
From: Sridhar, Kanchana P
Date: Fri Feb 28 2025 - 20:18:24 EST
> -----Original Message-----
> From: Yosry Ahmed <yosry.ahmed@xxxxxxxxx>
> Sent: Friday, February 28, 2025 5:13 PM
> To: Sridhar, Kanchana P <kanchana.p.sridhar@xxxxxxxxx>
> Cc: linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> hannes@xxxxxxxxxxx; nphamcs@xxxxxxxxx; chengming.zhou@xxxxxxxxx;
> usamaarif642@xxxxxxxxx; ryan.roberts@xxxxxxx; 21cnbao@xxxxxxxxx;
> ying.huang@xxxxxxxxxxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx; linux-
> crypto@xxxxxxxxxxxxxxx; herbert@xxxxxxxxxxxxxxxxxxx;
> davem@xxxxxxxxxxxxx; clabbe@xxxxxxxxxxxx; ardb@xxxxxxxxxx;
> ebiggers@xxxxxxxxxx; surenb@xxxxxxxxxx; Accardi, Kristen C
> <kristen.c.accardi@xxxxxxxxx>; Feghali, Wajdi K <wajdi.k.feghali@xxxxxxxxx>;
> Gopal, Vinodh <vinodh.gopal@xxxxxxxxx>
> Subject: Re: [PATCH v7 00/15] zswap IAA compress batching
>
> On Sat, Mar 01, 2025 at 01:09:22AM +0000, Sridhar, Kanchana P wrote:
> > Hi All,
> >
> > > Performance testing (Kernel compilation, allmodconfig):
> > > =======================================================
> > >
> > > The kernel compilation experiments, 32 threads in tmpfs, use the
> > > "allmodconfig" config, which takes ~12 minutes and has considerable
> > > swapout/swapin activity. The cgroup's memory.max is set to 2G.
> > >
> > >
> > > 64K folios: Kernel compilation/allmodconfig:
> > > ============================================
> > >
> > > -------------------------------------------------------------------------------
> > > mm-unstable v7 mm-unstable v7
> > > -------------------------------------------------------------------------------
> > > zswap compressor deflate-iaa deflate-iaa zstd zstd
> > > -------------------------------------------------------------------------------
> > > real_sec 775.83 765.90 769.39 772.63
> > > user_sec 15,659.10 15,659.14 15,666.28 15,665.98
> > > sys_sec 4,209.69 4,040.44 5,277.86 5,358.61
> > > -------------------------------------------------------------------------------
> > > Max_Res_Set_Size_KB 1,871,116 1,874,128 1,873,200 1,873,488
> > > -------------------------------------------------------------------------------
> > > memcg_high 0 0 0 0
> > > memcg_swap_fail 0 0 0 0
> > > zswpout 107,305,181 106,985,511 86,621,912 89,355,274
> > > zswpin 32,418,991 32,184,517 25,337,514 26,522,042
> > > pswpout 272 80 94 16
> > > pswpin 274 69 54 16
> > > thp_swpout 0 0 0 0
> > > thp_swpout_fallback 0 0 0 0
> > > 64kB_swpout_fallback 494 0 0 0
> > > pgmajfault 34,577,545 34,333,290 26,892,991 28,132,682
> > > ZSWPOUT-64kB 3,498,796 3,460,751 2,737,544 2,823,211
> > > SWPOUT-64kB 17 4 4 1
> > > -------------------------------------------------------------------------------
> > >
> > > [...]
> > >
> > > Summary:
> > > ========
> > > The performance testing data with the usemem 30-process and kernel
> > > compilation tests show 61%-73% throughput gains and 27%-37% sys time
> > > reduction (usemem30), and 4% sys time reduction (kernel compilation),
> > > with zswap_store() of large folios using IAA compress batching as
> > > compared to IAA sequential. There is no performance regression for
> > > zstd/usemem30, and a slight 1.5% zstd sys time regression with the
> > > kernel compilation allmodconfig test.
> >
> > I think I know why kernel_compilation with zstd shows a regression whereas
> > usemem30 does not. It is because I lock/unlock the acomp_ctx mutex once
> > per folio. This can cause decomp jobs to wait for the mutex, which can
> > cause more compressions, and this repeats. kernel_compilation has 25M+
> > decomps with zstd, whereas usemem30 has practically no decomps but is
> > compression-intensive, because of which it benefits from the once-per-folio
> > lock acquire/release.
> >
> > I am testing a fix where I revert to having zswap_compress() do the mutex
> > lock/unlock, and expect to post v8 by end of the day. I would appreciate it
> > if you could hold off on reviewing only the zswap patches [14, 15] in my v7
> > and instead review the v8 versions of these two patches.
>
> I was planning to take a look at v7 next week, so take your time, no
> rush to post it on a Friday afternoon.
>
> Anyway, thanks for the heads up, I appreciate you trying to save
> everyone's time.
Thanks Yosry!
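
For anyone following along, the locking-granularity difference discussed above
can be sketched in userspace with pthread stand-ins. This is only an
illustration of why holding a per-CPU context lock across a whole folio can
starve concurrent decompressions, not the actual zswap code; all names here
(acomp_ctx_sketch, store_folio_coarse, etc.) are hypothetical.

```c
#include <pthread.h>

/* Stand-in for zswap's per-CPU acomp_ctx: a mutex guarding a shared
 * scratch buffer that both compress and decompress paths need. */
struct acomp_ctx_sketch {
	pthread_mutex_t mutex;
	int buffer;	/* stands in for the dst scratch buffer */
};

/*
 * v7-style (sketch): the mutex is taken once per folio, so a concurrent
 * decompression on the same CPU must wait for every page in the batch.
 */
static int store_folio_coarse(struct acomp_ctx_sketch *ctx,
			      const int *pages, int nr_pages)
{
	int i, stored = 0;

	pthread_mutex_lock(&ctx->mutex);
	for (i = 0; i < nr_pages; i++) {
		ctx->buffer = pages[i];	/* "compress" into shared buffer */
		stored++;
	}
	pthread_mutex_unlock(&ctx->mutex);
	return stored;
}

/*
 * Proposed fix (sketch): lock/unlock inside the per-page compress step,
 * so waiters (e.g. decompressions) can slip in between pages.
 */
static int compress_one(struct acomp_ctx_sketch *ctx, int page)
{
	pthread_mutex_lock(&ctx->mutex);
	ctx->buffer = page;
	pthread_mutex_unlock(&ctx->mutex);
	return 1;
}

static int store_folio_fine(struct acomp_ctx_sketch *ctx,
			    const int *pages, int nr_pages)
{
	int i, stored = 0;

	for (i = 0; i < nr_pages; i++)
		stored += compress_one(ctx, pages[i]);
	return stored;
}
```

Both variants store the same pages; the difference is purely how long the
mutex is held, which matters for decompression-heavy workloads like
kernel_compilation and not for compression-only ones like usemem30.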