Re: [PATCH 8/8] pstore/ram: Correctly calculate usable PRZ bytes

From: Kees Cook
Date: Mon Nov 05 2018 - 12:04:21 EST


On Sun, Nov 4, 2018 at 8:42 PM, Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> Dumping the magic bytes of the non-decompressible .enc.z files, I get the
> following, which shows a valid zlib compressed header:
>
> Something like:
> 48 89 85 54 4d 6f 1a 31
>
> The 0b1000 in the low nibble of the first byte means it is "deflate". The
> file(1) tool indeed reports "zlib compressed data", and I did the math for
> the header and it is indeed valid. So I don't think the data is insane. The
> buffer has enough room, because even the very small dumps are not
> decompressible.

Interesting. So the kernel wouldn't decompress it even though it's the
right algo and untruncated? That seems worth fixing.

> At this point we can park this issue I guess, but a scenario that is still
> broken is: say someone crashes the system with compression algo X and then
> recompiles with compression algo Y; the decompression would then fail, no?
>
> One way to fix that is to store the compression method in the buffer as
> well, then initialize all the algorithms at boot and ideally choose the
> right one based on the buffer. Otherwise we should at least print a message
> saying "buffer is encoded with algo X but compression selected is Y" or
> something. But I agree it's a very low-priority "doctor, it hurts when I do
> this" kind of issue :)

Right, this is fine: if algos change across kernel versions, I'm fine
with it failing. pstore isn't expected to work sanely outside of a
pretty narrow set of use cases.
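
That said, if someone did want to handle the mismatch gracefully, one rough
sketch would be to record the algorithm name alongside each compressed dump
and compare it at read time, printing the kind of message you suggest on
mismatch. Hypothetical layout and helper, not the actual pstore record
format:

/* Illustrative only: a made-up per-dump header carrying the algo name. */
#include <stdio.h>
#include <string.h>

struct dump_hdr {
	char algo[8];		/* e.g. "deflate", "lz4", "zstd" */
};

static int check_algo(const struct dump_hdr *hdr, const char *current)
{
	if (!strncmp(hdr->algo, current, sizeof(hdr->algo)))
		return 0;
	fprintf(stderr, "buffer encoded with %.8s but compression selected is %s\n",
		hdr->algo, current);
	return -1;
}

int main(void)
{
	struct dump_hdr hdr = { .algo = "deflate" };

	/* Simulate a kernel rebuilt with a different algorithm selected. */
	return check_algo(&hdr, "lz4") ? 1 : 0;
}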

-Kees

--
Kees Cook