Re: [RFC PATCH v2 13/15] arm64: mm: Guard page table writes with kpkeys

From: Qi Zheng
Date: Thu Jan 09 2025 - 02:17:57 EST


Hi Kevin,

On 2025/1/8 18:32, Kevin Brodsky wrote:
> When CONFIG_KPKEYS_HARDENED_PGTABLES is enabled, page tables (both
> user and kernel) are mapped with a privileged pkey in the linear
> mapping. As a result, they can only be written under the
> kpkeys_hardened_pgtables guard, which sets POR_EL1 appropriately to
> allow such writes.
>
> Use this guard wherever page tables genuinely need to be written,
> keeping its scope as small as possible (so that POR_EL1 is reset as
> fast as possible). Where atomics are involved, the guard's scope
> encompasses the whole loop to avoid switching POR_EL1 unnecessarily.
>
> This patch is a no-op if CONFIG_KPKEYS_HARDENED_PGTABLES is disabled
> (default).
>
> Signed-off-by: Kevin Brodsky <kevin.brodsky@xxxxxxx>
> ---
>  arch/arm64/include/asm/pgtable.h | 19 +++++++++++++++++--
>  arch/arm64/mm/fault.c            |  2 ++
>  2 files changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index f8dac6673887..0d60a49dc234 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -39,6 +39,7 @@
>  #include <linux/mm_types.h>
>  #include <linux/sched.h>
>  #include <linux/page_table_check.h>
> +#include <linux/kpkeys.h>
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
> @@ -314,6 +315,7 @@ static inline pte_t pte_clear_uffd_wp(pte_t pte)
>  static inline void __set_pte_nosync(pte_t *ptep, pte_t pte)
>  {
> +	guard(kpkeys_hardened_pgtables)();
>  	WRITE_ONCE(*ptep, pte);
>  }
> @@ -758,6 +760,7 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>  	}
>  #endif /* __PAGETABLE_PMD_FOLDED */
>
> +	guard(kpkeys_hardened_pgtables)();
>  	WRITE_ONCE(*pmdp, pmd);
>
>  	if (pmd_valid(pmd)) {
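
(The commit message also mentions keeping the guard around the whole
loop where atomics are involved. I imagine that ends up looking roughly
like the sketch below, using the existing __ptep_test_and_clear_young()
on arm64 as an example; this is only my reading of the description, not
code taken from the patch.)

static inline int __ptep_test_and_clear_young(struct vm_area_struct *vma,
					      unsigned long address,
					      pte_t *ptep)
{
	pte_t old_pte, pte;

	/* Keep the guard held across the whole cmpxchg() retry loop. */
	guard(kpkeys_hardened_pgtables)();

	pte = __ptep_get(ptep);
	do {
		old_pte = pte;
		pte = pte_mkold(pte);
		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
					       pte_val(old_pte),
					       pte_val(pte));
	} while (pte_val(pte) != pte_val(old_pte));

	return pte_young(old_pte);
}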

I noticed a long time ago that set_pte()/set_pmd()/... are implemented
separately by each architecture, without a unified entry point. This
makes it difficult to add hooks for them.

Taking set_pte() as an example, would it be possible to do the following:

1) add a generic set_pte() in include/asm-generic/tlb.h (or another,
more appropriate file):

static inline void set_pte(pte_t *ptep, pte_t pte)
{
	arch_set_pte(ptep, pte);
}

2) let each architecture include this file and rename the original
set_pte() to arch_set_pte().

3) then we can add hooks to the generic set_pte():

static inline void set_pte(pte_t *ptep, pte_t pte)
{
	guard(kpkeys_hardened_pgtables)();
	arch_set_pte(ptep, pte);
}

4) in this way, an architecture that supports
ARCH_HAS_KPKEYS_HARDENED_PGTABLES only needs to implement the
kpkeys_hardened_pgtables() guard; on all other architectures it becomes
a no-op (a rough sketch of such a fallback follows below).
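
For the no-op case, I imagine the generic header could simply provide a
fallback definition of the guard when the feature is not selected,
along these lines (just a rough sketch; the exact config symbol and
location are my assumption, reusing DEFINE_LOCK_GUARD_0() from
<linux/cleanup.h>):

#ifndef CONFIG_KPKEYS_HARDENED_PGTABLES
/*
 * No-op fallback: the guard's "lock" and "unlock" steps do nothing, so
 * guard(kpkeys_hardened_pgtables)() in the generic set_pte() above
 * compiles down to a plain arch_set_pte() call.
 */
DEFINE_LOCK_GUARD_0(kpkeys_hardened_pgtables, (void)0, (void)0)
#endif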

Just some rough ideas; the related set/clear interfaces are currently
quite messy. ;)

Of course, this does not affect the feature implemented by this patch
series.

Thanks!