Re: [RFC PATCH -next V2 5/7] arm64: add get_user to machine check safe

From: Tong Tiangen
Date: Thu Apr 07 2022 - 10:38:57 EST




On 2022/4/6 19:22, Mark Rutland wrote:
On Wed, Apr 06, 2022 at 09:13:09AM +0000, Tong Tiangen wrote:
Add get_user() to the machine check safe scenarios. The handling of
EX_TYPE_UACCESS_ERR_ZERO and EX_TYPE_UACCESS_ERR_ZERO_UCE_RECOVERY is the
same: both return -EFAULT.

Which uaccess cases do we expect to *not* be recoverable?

Naively I would assume that if we're going to treat a memory error on a uaccess
as fatal to userspace we should be able to do that for *any* uaccess.

The commit message should explain why we need the distinction between a
recoverable uaccess and a non-recoverable uaccess.

Thanks,
Mark.


Currently, any memory error consumed in kernel mode will lead to a panic
(do_sea()).
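
For illustration, a rough sketch of that current behaviour (this is not
the actual do_sea() code in arch/arm64/mm/fault.c, and
send_sigbus_to_current() is just a placeholder name):

/* Simplified sketch of today's behaviour, not the real do_sea(). */
static int do_sea_sketch(unsigned long far, unsigned long esr,
			 struct pt_regs *regs)
{
	if (user_mode(regs)) {
		/* Error consumed in user mode: signal the task, system survives. */
		send_sigbus_to_current(far);	/* placeholder helper */
		return 0;
	}

	/* Error consumed in kernel mode: currently always fatal. */
	panic("Uncorrected memory error consumed in kernel mode");
}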

My idea is that not all memory errors consumed in kernel mode are fatal.
For example, copy_from_user()/get_user() consume memory errors while
reading user data in process context. In that case we do not need to
panic the kernel; we can just kill the affected process without
disturbing the operation of the rest of the system.
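
Roughly, the direction is something like the following sketch
(placeholder names only, e.g. fixup_mc_exception(); not the actual code
in this series):

/*
 * Illustrative sketch of the kernel-mode path with machine-check-safe
 * fixups. Assume a helper that only applies extable fixups whose type
 * is machine check safe (IS_EX_TYPE_MC), such as the new
 * EX_TYPE_UACCESS_ERR_ZERO_MC entries emitted for get_user.
 */
static int kernel_sea_sketch(unsigned long esr, struct pt_regs *regs)
{
	if (fixup_mc_exception(regs, esr)) {	/* placeholder helper */
		/* get_user() returns -EFAULT; kill only the consuming task. */
		force_sig(SIGBUS);
		return 0;
	}

	/* No machine-check-safe fixup: the error remains fatal. */
	panic("Uncorrected memory error consumed in kernel mode");
}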

However, not all uaccess can be recovered without affecting the normal
operation of the system. The key point is not whether the access is a
uaccess, but whether the page being read contains data that the system
needs in order to keep running normally.

Thanks,
Tong.



Signed-off-by: Tong Tiangen <tongtiangen@xxxxxxxxxx>
---
arch/arm64/include/asm/asm-extable.h | 14 +++++++++++++-
arch/arm64/include/asm/uaccess.h | 2 +-
arch/arm64/mm/extable.c | 1 +
3 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 74d1db74fd86..bfc2d224cbae 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -10,8 +10,11 @@
/* _MC indicates that can fixup from machine check errors */
#define EX_TYPE_FIXUP_MC 5
+#define EX_TYPE_UACCESS_ERR_ZERO_MC 6
-#define IS_EX_TYPE_MC(type) (type == EX_TYPE_FIXUP_MC)
+#define IS_EX_TYPE_MC(type) \
+ (type == EX_TYPE_FIXUP_MC || \
+ type == EX_TYPE_UACCESS_ERR_ZERO_MC)
#ifdef __ASSEMBLY__
@@ -77,6 +80,15 @@
#define EX_DATA_REG(reg, gpr) \
"((.L__gpr_num_" #gpr ") << " __stringify(EX_DATA_REG_##reg##_SHIFT) ")"
+#define _ASM_EXTABLE_UACCESS_ERR_ZERO_MC(insn, fixup, err, zero) \
+ __DEFINE_ASM_GPR_NUMS \
+ __ASM_EXTABLE_RAW(#insn, #fixup, \
+ __stringify(EX_TYPE_UACCESS_ERR_ZERO_MC), \
+ "(" \
+ EX_DATA_REG(ERR, err) " | " \
+ EX_DATA_REG(ZERO, zero) \
+ ")")
+
#define _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, zero) \
__DEFINE_ASM_GPR_NUMS \
__ASM_EXTABLE_RAW(#insn, #fixup, \
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index e8dce0cc5eaa..24b662407fbd 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -236,7 +236,7 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
asm volatile( \
"1: " load " " reg "1, [%2]\n" \
"2:\n" \
- _ASM_EXTABLE_UACCESS_ERR_ZERO(1b, 2b, %w0, %w1) \
+ _ASM_EXTABLE_UACCESS_ERR_ZERO_MC(1b, 2b, %w0, %w1) \
: "+r" (err), "=&r" (x) \
: "r" (addr))
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index f1134c88e849..7c05f8d2bce0 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -95,6 +95,7 @@ bool fixup_exception(struct pt_regs *regs, unsigned int esr)
case EX_TYPE_BPF:
return ex_handler_bpf(ex, regs);
case EX_TYPE_UACCESS_ERR_ZERO:
+ case EX_TYPE_UACCESS_ERR_ZERO_MC:
return ex_handler_uaccess_err_zero(ex, regs);
case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
return ex_handler_load_unaligned_zeropad(ex, regs);
--
2.18.0.huawei.25

