Re: [PATCH] mm: init_mlocked_on_free_v3

From: Yuanchu Xie
Date: Mon Apr 01 2024 - 18:35:15 EST


On Fri, Mar 29, 2024 at 7:56 AM York Jasper Niebuhr
<yjnworkstation@xxxxxxxxx> wrote:
>
> Implements the "init_mlocked_on_free" boot option. When this boot option
> is enabled, any mlock'ed pages are zeroed on free. If
> the pages are munlock'ed beforehand, no initialization takes place.
> This boot option is meant to combat the performance hit of
> "init_on_free" as reported in commit 6471384af2a6 ("mm: security:
> introduce init_on_alloc=1 and init_on_free=1 boot options"). With

I understand the intent of the init_on_alloc and init_on_free options,
but what's the idea behind special-casing on mlock?
Is the idea that mlocking implies something other than "preventing
memory from being swapped out"?

> "init_mlocked_on_free=1" only relevant data is freed while everything
> else is left untouched by the kernel. Correspondingly, this patch
> introduces no performance hit for unmapping non-mlock'ed memory. The
> unmapping overhead for purely mlocked memory was measured to be
> approximately 13%. Realistically, most systems mlock only a fraction of
> the total memory so the real-world system overhead should be close to
> zero.
>
> Optimally, userspace programs clear any key material or other
> confidential memory before exit and munlock the corresponding memory
> regions. If a program crashes, userspace key managers fail to do this
> job. Accordingly, no munlock operations are performed, so the data is
> caught and zeroed by the kernel. Should the program not crash, all
> memory will ideally be munlocked so no overhead is caused.
>
> CONFIG_INIT_MLOCKED_ON_FREE_DEFAULT_ON can be set to enable
> "init_mlocked_on_free" by default.
>
> Signed-off-by: York Jasper Niebuhr <yjnworkstation@xxxxxxxxx>
FYI, git format-patch takes a -v parameter to specify the version of
the patch series, and scripts/checkpatch.pl should catch some of the
formatting and style issues.

I also accidentally forgot to reply all, sorry about the noise York.

Thanks,
Yuanchu Xie