Re: [PATCH RFC 00/15] mm, swap: swap table phase IV with dynamic ghost swapfile

From: Kairui Song

Date: Mon Feb 23 2026 - 21:11:30 EST


On Tue, Feb 24, 2026 at 1:00 AM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> On Fri, Feb 20, 2026 at 07:42:01AM +0800, Kairui Song via B4 Relay wrote:
> > - 8 bytes per slot memory usage, when using only plain swap.
> > - And the memory usage can be reduced to 3 or only 1 byte.
> > - 16 bytes per slot memory usage, when using ghost / virtual zswap.
> > - Zswap can just use ci_dyn->virtual_table to free up it's content
> > completely.
> > - And the memory usage can be reduced to 11 or 8 bytes using the same
> > code above.
> > - 24 bytes only if including reverse mapping is in use.
>
> That seems to tie us pretty permanently to duplicate metadata.
>
> For every page that was written to disk through zswap, we have an
> entry in the ghost swapfile, and an entry in the backend swapfile, no?

No, there is only one entry, in the ghost swapfile (xswap or virtual
swapfile, either way it's just a name). The entry in the physical
swapfile is a reverse-mapping entry: it records which slot in the
ghost swapfile is pointing to the physical slot, so swapoff /
migration of a physical slot can be done in O(1) time.

So there is zero duplication of data.

>
> > - Minimal code review or maintenance burden. All layers are using the exact
> > same infrastructure for metadata / allocation / synchronization, making
> > all API and conventions consistent and easy to maintain.
> > - Writeback, migration and compaction are easily supportable since both
> > reverse mapping and reallocation are prepared. We just need a
> > folio_realloc_swap to allocate new entries for the existing entry, and
> > fill the swap table with a reserve map entry.
> > - Fast swapoff: Just read into ghost / virtual swap cache.
>
> Can we get this for disk swap as well? ;)
>
> Zswap swapoff is already fairly fast, albeit CPU intense. It's the
> scattered IO that makes swapoff on disks so terrible.

I am talking about disk swap here, not zswap. Swapoff of a physical
entry just loads the swapped data into the virtual slot indicated by
the reverse-mapping entry.

> > free -m
> > total used free shared buff/cache available
> > Mem: 1465 250 927 1 356 1215
> > Swap: 15269887 0 15269887
>
> I'm not a fan of this. This makes free(1) output kind of useless, and
> very misleading. The swap space presented here has nothing to do with
> actual swap capacity, and the actual disk swap capacity is obscured.
>
> And how would a user choose this size? How would a distribution?

It can be dynamic (just si->max += 2M on every cluster allocation,
since it's really just a number now). It can be hidden, and it can
have an infinite size. That's just an interface design decision that
can be changed flexibly.

For example, if we set this to a very large value and hide it, it will
look identical to vss from the userspace perspective, but stay
optional and zero-overhead for existing ZRAM or plain swap users.

> The only limit is compression ratio, and you don't know this in
> advance. This restriction seems pretty arbitrary and avoidable.

Just as a reference: in practice we limit our ZRAM setup to 1/4 of, or
at most 1:1 with, total RAM, to keep the machine from going into
endless reclaim without ever reaching OOM.

But with this series, we can now also have an infinite-size zswap.