Re: [PATCH v3] module: add in-kernel support for decompressing

From: Luis Chamberlain
Date: Tue Jan 11 2022 - 10:42:14 EST


On Sun, Jan 02, 2022 at 06:58:15PM -0800, Dmitry Torokhov wrote:
> OK, so I finally got around to doing it and the differences are pretty
> much noise, as I expected:
>
> 5.16.0-rc7: Startup finished in 5.022s (firmware) + 6.106s (loader) + 1.370s (kernel) + 5.685s (initrd) + 10.842s (userspace) = 29.026s
> 5.16.0-rc7-patched: Startup finished in 4.958s (firmware) + 6.701s (loader) + 1.382s (kernel) + 5.278s (initrd) + 10.822s (userspace) = 29.145s
> 5.16.0-rc7-patched: Startup finished in 4.953s (firmware) + 5.912s (loader) + 1.385s (kernel) + 5.327s (initrd) + 10.457s (userspace) = 28.036s
>
> Also see attached.

If kmod didn't do the decompression I suspect things might be slightly
different, but I agree that, with decompression now being done in the
kernel, removing userspace decompression might just be noise as
well.

> > > We are still reading and uncompressing the
> > > file in kmod (to make sure the format is valid)
> >
> > I don't understand, that seems wasteful.
>
> This way we can make sure we are not feeding the kernel garbage and can
> abort early. Yes, we could just check the signature and hope that the
> data is good (and if it is not, the kernel will reject it), but this is
> not a hot path at all and the amount of data we decompress is relatively
> small, so I do not think trying to optimize this makes much sense (as
> shown by the numbers above).

Sure. And if an LSM is used, one would assume the LSM does its own
finit_module() checks.
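For what it's worth, the decompress-then-validate step being discussed
can be sketched roughly like this (a hedged illustration of the idea,
not kmod's actual implementation; the xz/lzma format and the ELF-magic
check are assumptions for the example):

```python
import lzma

def decompress_and_validate(compressed: bytes) -> bytes:
    """Decompress a module image in userspace and sanity-check it,
    so obvious garbage is rejected before it reaches the kernel.

    Illustrative sketch only, not kmod's code; assumes an xz-compressed
    payload (e.g. a .ko.xz module).
    """
    image = lzma.decompress(compressed)
    # Every kernel module is an ELF object, so reject anything that
    # does not start with the ELF magic and abort early.
    if image[:4] != b"\x7fELF":
        raise ValueError("not an ELF object, refusing to load")
    return image  # a real loader would now hand this to init_module(2)
```

With in-kernel decompression the same hand-off would instead pass the
still-compressed file descriptor to finit_module(2) and let the kernel
do the decompression and validation itself.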

> > > and we can uncompress
> > > using large buffers (we are not concerned with using unswappable kernel
> > > memory).
> > >
> > > Maybe in the future when we have streaming and accelerated in-kernel
> > > decompression API we could optimize for that in kmod and see some
> > > savings on very large modules.
> >
> > That would be very nice.
>
> Again, the practical benefit of doing this is pretty much zero in this
> particular case.

Based on what is observed so far, I agree.

Luis