Re: [PATCH 0/2] minimize swapping on zswap store failure
From: Sergey Senozhatsky
Date: Mon Apr 07 2025 - 23:33:57 EST
Hi,
Sorry for the delay.
On (25/04/04 07:06), Joshua Hahn wrote:
> On Fri, 4 Apr 2025 10:46:22 +0900 Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx> wrote:
>
> > On (25/04/03 13:38), Nhat Pham wrote:
> > > > Ultimately the goal is to prevent an incompressible page from hoarding the
> > > > compression algorithm on multiple reclaim attempts, but if we are spending
> > > > more time by allocating new pages... maybe this isn't the correct approach :(
> > >
> > > Hmmm, IIUC this problem also exists with zram, since zram allocates a
> > > PAGE_SIZE sized buffer to hold the original page's content. I will
> > > note though that zram seems to favor these kinds of pages for
> > > writeback :) Maybe this is why...?
> >
> > zram is a generic block device: it must store whatever comes in,
> > compressible or incompressible. E.g. when we have, say, ext4
> > running atop the zram device, we cannot reject page stores.
> >
> > And you are right, when we use zram for swap, there is some benefit
> > in storing incompressible pages. First, those pages are candidates
> > for zram writeback, which achieves the goal of removing the page
> > from RAM after all; with the "return it back to the LRU" approach we
> > give up on reclaiming the incompressible page entirely. Second, on
> > some zram setups we do re-compression (with a slower but more
> > efficient algorithm), and in a certain number of cases what is
> > incompressible with the primary (fast) algorithm is compressible
> > with the secondary algorithm.
>
> Hello Sergey,
>
> Thank you for your insight; I did not know this was how zram handled
> incompressible pages.
Well, yes, zram doesn't have the freedom to reject writes; to the
fs/VFS a rejected write would look like a block device error.
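
FWIW, here is a simplified userspace sketch of what I mean (this is
*not* the actual zram code; zlib level 1 stands in for the fast
primary algorithm, and struct slot / SLOT_HUGE are made-up names for
illustration). The point is that the store path never fails, it just
flags the pages it could not compress:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define PAGE_SIZE 4096

enum { SLOT_HUGE = 1 };                 /* page stored uncompressed */

struct slot {
        unsigned char data[PAGE_SIZE];  /* compressed or raw bytes */
        unsigned long len;              /* stored length */
        int flags;
};

/* Store one page; unlike a zswap store, this never fails. */
static void store_page(struct slot *s, const unsigned char *page)
{
        unsigned char cbuf[PAGE_SIZE + 128];  /* > compressBound(4096) */
        uLongf clen = sizeof(cbuf);

        if (compress2(cbuf, &clen, page, PAGE_SIZE, 1) == Z_OK &&
            clen < PAGE_SIZE) {
                memcpy(s->data, cbuf, clen);
                s->len = clen;
                s->flags = 0;
                return;
        }
        /*
         * Incompressible: keep the raw page and flag the slot so that
         * writeback/re-compression can find it later.  A block device
         * cannot hand the page back to the LRU the way zswap can.
         */
        memcpy(s->data, page, PAGE_SIZE);
        s->len = PAGE_SIZE;
        s->flags = SLOT_HUGE;
}

int main(void)
{
        static const unsigned char zeros[PAGE_SIZE];  /* compressible */
        unsigned char noise[PAGE_SIZE];               /* incompressible */
        struct slot a, b;
        size_t i;

        for (i = 0; i < PAGE_SIZE; i++)
                noise[i] = (unsigned char)rand();

        store_page(&a, zeros);
        store_page(&b, noise);
        printf("zeros: len=%lu huge=%d\n", a.len, !!(a.flags & SLOT_HUGE));
        printf("noise: len=%lu huge=%d\n", b.len, !!(b.flags & SLOT_HUGE));
        return 0;
}

(builds with `cc sketch.c -lz`)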
[..]
> On the note of trying a second compression algorithm -- do you know
> how many of the initially incompressible pages get compressed later?
So I don't recall the exact numbers, but, if I'm not mistaken, in my
tests (on ChromeOS) I saw something like a 20+% success rate (a
little over 20% of the pages were successfully re-compressed with the
secondary algorithm). But, like you said, this is very data-pattern
specific.
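
Just to illustrate the mechanism (again a made-up sketch, not the
real zram code -- on zram this is driven via the recomp_algorithm and
recompress sysfs knobs): re-compression retries only the flagged
slots, with zlib level 9 standing in for the slower secondary
algorithm, and keeps the result only when it actually wins.
Continuing the sketch above:

/* Retry a flagged slot with a slower, stronger algorithm. */
static int recompress_slot(struct slot *s)
{
        unsigned char cbuf[PAGE_SIZE + 128];
        uLongf clen = sizeof(cbuf);

        if (!(s->flags & SLOT_HUGE))
                return 0;               /* nothing to do */

        if (compress2(cbuf, &clen, s->data, s->len, 9) != Z_OK ||
            clen >= PAGE_SIZE)
                return -1;              /* still incompressible */

        memcpy(s->data, cbuf, clen);    /* secondary algorithm won */
        s->len = clen;
        s->flags &= ~SLOT_HUGE;
        return 0;
}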
> Thank you again for your response! Have a great day :-)
Thanks, you too!