Re: [PATCH 2/2] arm64: remove page granularity limitation from KFENCE
From: Jisheng Zhang
Date: Mon May 24 2021 - 06:08:37 EST
On Mon, 24 May 2021 12:04:18 +0200 Marco Elver wrote:
>
>
> +Cc Mark
>
> On Mon, 24 May 2021 at 11:26, Jisheng Zhang <Jisheng.Zhang@xxxxxxxxxxxxx> wrote:
> >
> > KFENCE requires the linear map to be mapped at page granularity, so
> > that it is possible to protect/unprotect single pages in the KFENCE
> > pool. Currently, if KFENCE is enabled, arm64 maps the entire linear
> > map at page granularity, which is overkill. In fact, we only need to
> > map the pages in the KFENCE pool itself at page granularity. We
> > achieve this by allocating the KFENCE pool before paging_init() so
> > that we know the KFENCE pool address, then taking care to map the
> > pool at page granularity during map_mem().
> >
> > Signed-off-by: Jisheng Zhang <Jisheng.Zhang@xxxxxxxxxxxxx>
> > ---
> > arch/arm64/kernel/setup.c | 3 +++
> > arch/arm64/mm/mmu.c | 27 +++++++++++++++++++--------
> > 2 files changed, 22 insertions(+), 8 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> > index 61845c0821d9..51c0d6e8b67b 100644
> > --- a/arch/arm64/kernel/setup.c
> > +++ b/arch/arm64/kernel/setup.c
> > @@ -18,6 +18,7 @@
> > #include <linux/screen_info.h>
> > #include <linux/init.h>
> > #include <linux/kexec.h>
> > +#include <linux/kfence.h>
> > #include <linux/root_dev.h>
> > #include <linux/cpu.h>
> > #include <linux/interrupt.h>
> > @@ -345,6 +346,8 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
> >
> > arm64_memblock_init();
> >
> > + kfence_alloc_pool();
> > +
> > paging_init();
> >
> > acpi_table_upgrade();
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index 89b66ef43a0f..12712d31a054 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -13,6 +13,7 @@
> > #include <linux/init.h>
> > #include <linux/ioport.h>
> > #include <linux/kexec.h>
> > +#include <linux/kfence.h>
> > #include <linux/libfdt.h>
> > #include <linux/mman.h>
> > #include <linux/nodemask.h>
> > @@ -515,10 +516,16 @@ static void __init map_mem(pgd_t *pgdp)
> > */
> > BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end));
> >
> > - if (rodata_full || crash_mem_map || debug_pagealloc_enabled() ||
> > - IS_ENABLED(CONFIG_KFENCE))
> > + if (rodata_full || crash_mem_map || debug_pagealloc_enabled())
> > flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> >
> > + /*
> > > + * KFENCE requires the linear map to be mapped at page granularity,
> > > + * so temporarily skip mapping __kfence_pool in the following
> > > + * for loop.
> > + */
> > + memblock_mark_nomap(__pa(__kfence_pool), KFENCE_POOL_SIZE);
> > +
>
> Did you build this with CONFIG_KFENCE unset? I don't think it builds.
>
Oops, nice catch! I will fix it in v2.
Thanks for your review.
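
For the record, something along these lines should keep map_mem() building
when CONFIG_KFENCE is unset (untested sketch; the helper name is only
illustrative, the actual v2 may structure this differently):

#ifdef CONFIG_KFENCE
/* __kfence_pool and KFENCE_POOL_SIZE are only declared in
 * <linux/kfence.h> when CONFIG_KFENCE is enabled, so keep all
 * references to them behind the ifdef.
 */
static void __init arm64_kfence_mark_pool_nomap(void)
{
	/*
	 * KFENCE requires the linear map to be mapped at page granularity,
	 * so temporarily exclude __kfence_pool from the block-mapping loop
	 * in map_mem(); the pool is then mapped at page granularity
	 * afterwards.
	 */
	if (__kfence_pool)
		memblock_mark_nomap(__pa(__kfence_pool), KFENCE_POOL_SIZE);
}
#else
static inline void arm64_kfence_mark_pool_nomap(void) { }
#endif

map_mem() would then call arm64_kfence_mark_pool_nomap() unconditionally,
so no KFENCE symbols are referenced in the !CONFIG_KFENCE build.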