Re: [PATCH v3 00/20] Virtual Swap Space
From: Nhat Pham
Date: Tue Feb 17 2026 - 18:38:40 EST
On Tue, Feb 10, 2026 at 11:11 AM Nhat Pham <nphamcs@xxxxxxxxx> wrote:
>
> On Tue, Feb 10, 2026 at 10:00 AM Kairui Song <ryncsn@xxxxxxxxx> wrote:
> > # free -m
> >                total        used        free      shared  buff/cache   available
> > Mem:           31582         909       26388           8        4284       29989
> > Swap:          40959          41       40918
> >
> > The swap setup follows the recommendation from Huang
> > (https://lore.kernel.org/linux-mm/87ed474kvx.fsf@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/).
> >
> > Test (average of 18 test run):
> > vm-scalability/usemem --init-time -O -y -x -n 1 56G
> >
> > 6.19:
> > Throughput: 618.49 MB/s (stdev 31.3)
> > Free latency: 5754780.50us (stdev 69542.7)
> >
> > swap-table-p3 (3.8%, 0.5% better):
> > Throughput: 642.02 MB/s (stdev 25.1)
> > Free latency: 5728544.16us (stdev 48592.51)
> >
> > vswap (3.2%, 244% worse):
> > Throughput: 598.67 MB/s (stdev 25.1)
> > Free latency: 13987175.66us (stdev 125148.57)
> >
> > That's a huge regression with freeing. I have a vm-scalability test
> > matrix; not every setup has such a significant >200% regression, but on
> > average the freeing time is at least 15 - 50% slower (for
> > example /data/vm-scalability/usemem --init-time -O -y -x -n 32 1536M
> > the regression is about 2583221.62us vs 2153735.59us). Throughput is
> > all lower too.
Hi Kairui - a quick update.
It took me a while to get a host that matches your memory spec:
free -m
               total        used        free      shared  buff/cache   available
Mem:           31609        5778        7634          20       18664       25831
Swap:          65535           1       65534
I think I managed to reproduce your observations (average over 5 runs):
Baseline (6.19)
real: mean: 191.19s, stdev: 4.53s
user: mean: 46.98s, stdev: 0.15s
sys: mean: 127.97s, stdev: 3.95s
average throughput: 382057 KB/s
average free time: 8179978 usecs
Vswap:
real: mean: 199.85s, stdev: 6.09s
user: mean: 46.51s, stdev: 0.25s
sys: mean: 137.24s, stdev: 6.46s
average throughput: 367437 KB/s
average free time: 9887107.6 usecs
(command is time ./usemem --init-time -w -O -s 10 -n 1 56g)
I think I figured out where the bulk of the regression lies - it's in
the PTE zapping path. In a nutshell, we are not batching in the case
where the PTEs are backed by virtual swap entries with zswap backends
(even though there is no good reason not to batch), and we are
performing unnecessary xarray lookups to resolve the backend for some
superfluous checks (two xarray lookups for every PTE - wasted work,
since, as noted earlier, we end up not batching anyway).
Just by fixing this issue, the gap is much closer:
real: mean: 192.24s, stdev: 4.82s
user: mean: 46.42s, stdev: 0.27s
sys: mean: 129.84s, stdev: 4.59s
average throughput: 380670 KB/s
average free time: 8583381.4 usecs
I also discovered a couple more inefficiencies in the vswap free path.
Hopefully once we fix those, the gap will disappear entirely.