Re: [PATCH v5 02/12] crypto: acomp - Define new interfaces for compress/decompress batching.

From: Barry Song
Date: Sat Feb 22 2025 - 11:24:25 EST


On Sun, Feb 23, 2025 at 1:31 AM Sergey Senozhatsky
<senozhatsky@xxxxxxxxxxxx> wrote:
>
> On (25/02/22 19:26), Barry Song wrote:
> > After reviewing the zRAM code, I don't see why zram_write_page()
> > needs to rely on comp_len to call write_incompressible_page().
>
> [..]
>
> > zram_write_page()
> > {
> >         ret = zcomp_compress(zram->comps[ZRAM_PRIMARY_COMP], zstrm,
> >                              mem, &comp_len);
> >         kunmap_local(mem);
> >
> >         if (unlikely(ret && ret != -ENOSPC)) {
> >                 zcomp_stream_put(zstrm);
> >                 pr_err("Compression failed! err=%d\n", ret);
> >                 return ret;
> >         }
> >
> >         if (comp_len >= huge_class_size || ret) {
> >                 zcomp_stream_put(zstrm);
> >                 return write_incompressible_page(zram, page, index);
> >         }
> > }
>
> Sorry, I'm slower than usual now, but why should we? Shouldn't compression
> algorithms just never fail, even on 3D videos, because otherwise they won't
> be able to validate their Weissman score or something :)
>
> On a serious note - what is the use-case here? Is the failure here due to
> some random "cosmic rays" that taint the compression H/W? If so, then what
> makes us believe that it's uni-directional? What if it's decompression
> that gets busted, and then you can't decompress anything previously
> compressed and stored in zsmalloc? Wouldn't it be better in this case
> to turn the computer off and on again?
>
> The idea behind zram's code is that incompressible pages are not unusual;
> they are quite usual, in fact. It's not necessarily that the data grew
> in size after compression; the data is incompressible from zsmalloc's PoV.
> That is, the algorithm wasn't able to compress a PAGE_SIZE buffer to an
> object smaller than zsmalloc's huge-class watermark (around 3600 bytes,
> depending on zspage chain size). That's why we look at the comp_len.
> Anything else is an error, perhaps a pretty catastrophic error.
>
> > As long as crypto drivers consistently return -ENOSPC or a specific error
> > code for dst_buf overflow, we should be able to eliminate the
> > 2 * PAGE_SIZE buffer.
> >
> > My point is:
> > 1. All drivers must be capable of handling dst_buf overflow.
> > 2. All drivers must return a consistent and dedicated error code for
> > dst_buf overflow.
>
> Sorry, where do these rules come from?
>
> > +Minchan, Sergey,
> > Do you think we can implement this change in zRAM by using PAGE_SIZE instead
> > of 2 * PAGE_SIZE?
>
> Sorry again, what problem are you solving?

The context is that both zswap and zRAM currently use a destination buffer of
2 * PAGE_SIZE instead of just PAGE_SIZE. Herbert, Chengming, and Yosry have
been questioning why it hasn't been reduced to a single PAGE_SIZE, and some
attempts have been made to do so [1][2].
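
For reference, this is roughly where the doubled buffers are allocated
today (a simplified sketch from memory of current mainline; the exact
call sites and flags may differ):

        /* drivers/block/zram/zcomp.c: per-CPU stream setup */
        zstrm->buffer = vzalloc(2 * PAGE_SIZE);

        /* mm/zswap.c: per-CPU acomp context setup */
        acomp_ctx->buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL,
                                         cpu_to_node(cpu));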

The two rules are what I believe would be needed to make it feasible to
reduce the buffer to a single PAGE_SIZE; a driver-side sketch follows below.
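
To make that concrete, the contract I have in mind would look roughly
like this for an lz4-style scomp driver (an illustrative sketch, not
actual mainline code; today such drivers typically return -EINVAL on any
failure, including when dst is too small):

        static int lz4_scompress(struct crypto_scomp *tfm, const u8 *src,
                                 unsigned int slen, u8 *dst,
                                 unsigned int *dlen, void *ctx)
        {
                /*
                 * The kernel's LZ4_compress_default() returns 0 when the
                 * compressed output does not fit in *dlen bytes.
                 */
                int out_len = LZ4_compress_default(src, dst, slen,
                                                   *dlen, ctx);

                if (!out_len)
                        return -ENOSPC; /* dedicated dst_buf overflow code */

                *dlen = out_len;
                return 0;
        }

With such a contract in place, zram_write_page() could treat -ENOSPC the
same way it treats comp_len >= huge_class_size (as in the snippet quoted
above), and the 2 * PAGE_SIZE destination buffer could shrink to PAGE_SIZE.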

[1] https://lore.kernel.org/linux-mm/Z7F1B_blIbByYBzz@xxxxxxxxxxxxxxxxxxx/
[2] https://lore.kernel.org/lkml/20231213-zswap-dstmem-v4-1-f228b059dd89@xxxxxxxxxxxxx/

Thanks
Barry