Re: [PATCH 0/2] minimize swapping on zswap store failure

From: Nhat Pham
Date: Fri Apr 04 2025 - 11:31:22 EST


On Fri, Apr 4, 2025 at 7:06 AM Joshua Hahn <joshua.hahnjy@xxxxxxxxx> wrote:
>
> On Fri, 4 Apr 2025 10:46:22 +0900 Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx> wrote:
>
> > On (25/04/03 13:38), Nhat Pham wrote:
> > > > Ultimately the goal is to prevent an incompressible page from repeatedly
> > > > tying up the compressor across multiple reclaim attempts, but if we are
> > > > spending more time by allocating new pages... maybe this isn't the correct
> > > > approach :(
> > >
> > > Hmmm, IIUC this problem also exists with zram, since zram allocates a
> > > PAGE_SIZE-sized buffer to hold the original page's content. I will
> > > note though that zram seems to favor these kinds of pages for
> > > writeback :) Maybe this is why...?
> >
> > zram is a generic block device; it must store whatever comes in,
> > compressible or incompressible. E.g., when we have, say, ext4
> > running atop the zram device, we cannot reject page stores.
> >
> > And you are right: when we use zram for swap, there is some benefit
> > in storing incompressible pages. First, those pages are candidates
> > for zram writeback, which still achieves the goal of removing the
> > page from RAM; with the "return it back to the LRU" approach we give
> > up on reclaiming incompressible pages entirely. Second, on some zram
> > setups we do recompression (with a slower but more efficient
> > algorithm), and in a certain number of cases what is incompressible
> > with the primary (fast) algorithm is compressible with the secondary
> > algorithm.
>
> Hello Sergey,
>
> Thank you for your insight; I did not know this was how zram handled
> incompressible pages. In the case of this prototype, I expected to see the most
> gains from storing incompressible pages in the zswap LRU when writeback was
> disabled (if writeback is enabled, we expect to see less of a difference
> versus just writing the page back).
>
> On the note of trying a second compression algorithm -- do you know how many
> of the initially incompressible pages get compressed later? I can certainly
> imagine that trying different compression algorithms makes a difference; I am
> wondering if zswap should attempt this as well, or if it is not worth spending
> even more CPU trying to re-compress the page.

It wouldn't help us :) The algorithm we use, zstd, is usually already
the slow algorithm in this context. We can try higher levels of zstd,
but there are always data that are simply incompressible - think
random values, or memory already compressed by userspace.
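
To put a number on it, here is a tiny userspace sketch (my own
illustration with libzstd, not kernel code; "cc demo.c -lzstd" as the
build line is an assumption about your setup) showing that a page of
random bytes does not shrink even when we jump from a fast level to a
much slower one:

#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

#define PAGE_SIZE 4096

int main(void)
{
        unsigned char src[PAGE_SIZE];
        size_t bound = ZSTD_compressBound(PAGE_SIZE);
        unsigned char *dst = malloc(bound);
        size_t i;
        int level;

        if (!dst)
                return 1;
        srand(42);
        for (i = 0; i < PAGE_SIZE; i++)
                src[i] = rand() & 0xff; /* random bytes: nothing to exploit */

        /* Compare a fast level against a much slower one; for random
         * input both results stay above PAGE_SIZE, so the store would
         * be rejected either way. */
        for (level = 3; level <= 19; level += 16) {
                size_t n = ZSTD_compress(dst, bound, src, PAGE_SIZE, level);

                if (ZSTD_isError(n))
                        return 1;
                printf("zstd level %2d: %d -> %zu bytes\n",
                       level, PAGE_SIZE, n);
        }
        free(dst);
        return 0;
}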

Yeah, we can target them for writeback to swap in zswap as well. It
wouldn't help your (micro)benchmark though, because IIRC you don't do
writeback, and/or the page is faulted back in before writeback
happens :)
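
For completeness, a rough sketch of what "keep the incompressible page
in the pool and prefer it for writeback" could look like. This is
purely hypothetical userspace C: struct sketch_entry, the
incompressible flag, and compress_stub() are names made up for
illustration, not the actual mm/zswap.c interfaces:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

struct sketch_entry {
        unsigned char data[PAGE_SIZE];  /* pool copy: compressed or raw */
        size_t len;
        bool incompressible;    /* hint: prefer this entry for writeback */
};

/* Stand-in for the real compressor: returns the compressed length,
 * or PAGE_SIZE when the page does not compress. Here it always
 * pretends the page is incompressible. */
static size_t compress_stub(const unsigned char *src, unsigned char *dst)
{
        memcpy(dst, src, PAGE_SIZE);
        return PAGE_SIZE;
}

/* On compression failure, keep the page in the pool uncompressed and
 * flag it, instead of failing the store and bouncing the page back to
 * the LRU for another round of reclaim + compression. */
static void sketch_store(const unsigned char *page, struct sketch_entry *e)
{
        unsigned char buf[PAGE_SIZE];
        size_t dlen = compress_stub(page, buf);

        e->incompressible = (dlen >= PAGE_SIZE);
        if (e->incompressible) {
                memcpy(e->data, page, PAGE_SIZE);
                e->len = PAGE_SIZE;
        } else {
                memcpy(e->data, buf, dlen);
                e->len = dlen;
        }
}

int main(void)
{
        unsigned char page[PAGE_SIZE] = { 0 };
        struct sketch_entry e;

        sketch_store(page, &e);
        printf("stored %zu bytes, writeback candidate: %d\n",
               e.len, e.incompressible);
        return 0;
}

The sketch only models the policy decision; the real change would live
in zswap's store failure path and pool/LRU plumbing.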

>
> Thank you again for your response! Have a great day :-)
> Joshua
>
> Sent using hkml (https://github.com/sjp38/hackermail)
>