Re: [PATCH 5/8] pstore: Fix long-term implicit conversions in the compression routines
From: Ard Biesheuvel
Date: Mon Oct 10 2022 - 03:31:20 EST
On Sat, 8 Oct 2022 at 21:44, Kees Cook <keescook@xxxxxxxxxxxx> wrote:
>
> On Sat, Oct 08, 2022 at 07:52:48PM +0200, Ard Biesheuvel wrote:
> > Again, please correct me if I am missing something here (Kees?). Are
> > there cases where we compress data that may be compressed already?
>
> Nope, for dmesg, it should all only be text. I'd agree -- the worst case
> seems a weird thing to need.
>
I've spent a bit more time looking into this, and it seems the pstore
code already handles compression failures gracefully, so we should be
able to simply use the uncompressed size as the upper bound for the
compressed size, without needing any elaborate worst-case support in
the crypto API.
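
For illustration, the fallback I have in mind looks roughly like the
below (a paraphrase, not the literal fs/pstore/platform.c code;
try_compress() is a stand-in for whatever compression helper ends up
being used):

/*
 * Sketch of a best-effort compression path: if compression fails or
 * the result would not fit, the record is stored as plain text and
 * marked as uncompressed, and the read side checks the flag.
 */
#include <linux/minmax.h>
#include <linux/string.h>
#include <linux/types.h>

struct demo_record {
	bool	compressed;
	size_t	size;
	void	*buf;
};

static int try_compress(void *dst, const void *src, size_t slen,
			size_t dlen);	/* hypothetical helper */

static size_t fill_record(struct demo_record *rec, const void *src,
			  size_t len, size_t bufsize)
{
	int zipped = try_compress(rec->buf, src, len, bufsize);

	if (zipped > 0) {
		rec->compressed = true;
		rec->size = zipped;
	} else {
		/* Compression failed: fall back to the plain text. */
		rec->compressed = false;
		rec->size = min(len, bufsize);
		memcpy(rec->buf, src, rec->size);
	}
	return rec->size;
}
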
Then I wondered whether we still need the big_oops_buf, or whether we
could just compress in place. After a bit more digging, I found out
that we can, but only because the scompress backend of the acompress
API we intend to use allocates 256 KiB of scratch buffers *per CPU*
(which, at 128 CPUs, amounts to 32 MB of DRAM permanently tied up in
the kernel on my 32x4 ThunderX2 box).
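
To make that cost concrete, the allocation pattern is roughly the
following (a simplified sketch of what the scomp backend does, not the
literal crypto/scompress.c code; names and sizes are approximate):

/*
 * Per-CPU scratch allocation, scomp style: a 128 KiB source buffer
 * plus a 128 KiB destination buffer for every possible CPU, allocated
 * up front and kept around for the lifetime of the kernel.
 */
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/topology.h>
#include <linux/vmalloc.h>

#define SCRATCH_SIZE	(128 * 1024)

static void *scratch_src[NR_CPUS];
static void *scratch_dst[NR_CPUS];

static int alloc_scomp_style_scratches(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		scratch_src[cpu] = vmalloc_node(SCRATCH_SIZE, cpu_to_node(cpu));
		scratch_dst[cpu] = vmalloc_node(SCRATCH_SIZE, cpu_to_node(cpu));
		if (!scratch_src[cpu] || !scratch_dst[cpu])
			return -ENOMEM;
	}
	/* 256 KiB x nr_possible_cpus is now pinned, whether used or not. */
	return 0;
}
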
So then I wondered why we are using this terrible API in the first
place, and what the added value is of having 6 different algorithms
available to compress a small amount of ASCII text, which only happens
on an ice-cold code path to begin with.
So can we rip out this layer and just use GZIP directly (or LZMA if we
are obsessed with compression ratio)? I am aware that we will need some
kind of transition period during which we keep supporting existing
records compressed with the other algorithms, but I don't think that is
something we'd need to sustain for more than a year, right?
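
For the sake of discussion, compressing the dmesg buffer with the
kernel's built-in zlib deflate code directly could look something like
the below (an illustrative sketch, not tested, and not what pstore does
today):

/*
 * Compress a buffer by calling the kernel's zlib deflate code
 * directly, without going through the crypto API.  Returns the
 * compressed length, or a negative errno.
 */
#include <linux/errno.h>
#include <linux/vmalloc.h>
#include <linux/zlib.h>

static int deflate_dmesg(void *out, size_t outlen, void *in, size_t inlen)
{
	struct z_stream_s stream = {};
	int ret;

	stream.workspace = vmalloc(zlib_deflate_workspacesize(MAX_WBITS,
							       MAX_MEM_LEVEL));
	if (!stream.workspace)
		return -ENOMEM;

	ret = zlib_deflateInit2(&stream, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
				MAX_WBITS, MAX_MEM_LEVEL, Z_DEFAULT_STRATEGY);
	if (ret != Z_OK) {
		ret = -EINVAL;
		goto out;
	}

	stream.next_in = in;
	stream.avail_in = inlen;
	stream.next_out = out;
	stream.avail_out = outlen;

	ret = zlib_deflate(&stream, Z_FINISH);
	zlib_deflateEnd(&stream);

	/* Z_STREAM_END means the entire input fit into the output buffer. */
	ret = (ret == Z_STREAM_END) ? stream.total_out : -EIO;
out:
	vfree(stream.workspace);
	return ret;
}
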