Re: [PATCH v2 6/7] x86/mce: Recover from poison found while copying from user space
From: Borislav Petkov
Date: Mon Oct 05 2020 - 12:32:54 EST
On Wed, Sep 30, 2020 at 04:26:10PM -0700, Tony Luck wrote:
> From: Youquan Song <youquan.song@xxxxxxxxx>
>
> Existing kernel code can only recover from a machine check on code that
> is tagged in the exception table with a fault handling recovery path.
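
For readers less familiar with that mechanism: "tagged in the exception
table" means the potentially-faulting instruction has an extable entry
pointing at fixup code. A minimal sketch, assuming the v5.9-era layout of
arch/x86/include/asm/extable.h; the comment is only an illustration of how
the #MC handler consumes such an entry:

	/* Offsets are relative to the entry itself. */
	struct exception_table_entry {
		int insn, fixup, handler;
	};

	/*
	 * On a fault (#PF, or #MC here) the kernel searches the sorted
	 * __ex_table for an entry covering regs->ip; the entry's handler
	 * then redirects execution to the fixup (recovery) code. That is
	 * what the fixup_exception(regs, X86_TRAP_MC, 0, 0) call in the
	 * diff further down relies on.
	 */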
>
> Add two new fields to the task structure to pass information from the
> machine check handler to the "task_work" function that is queued to run
> before the task returns to user mode:
>
> + mce_vaddr: will be initialized to the user virtual address of the fault
>   in the case where the fault occurred while the kernel was copying data
>   from a user address. This is so that kill_me_maybe() can provide that
>   information to the user-mode SIGBUS handler.
>
> + mce_kflags: a copy of struct mce.kflags, needed by kill_me_maybe() to
>   determine whether mce_vaddr is applicable to this error.
>
> Add code to recover from a machine check while copying data from user
> space to the kernel. The action for this case is the same as if the user
> had touched the poison directly: unmap the page and send a SIGBUS to the
> task.
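
To make the plumbing concrete, here is a rough sketch of the consumer side,
i.e. what the queued task_work could do with the two new fields.
kill_me_maybe() is the existing callback; this body is only a guess at what
it ends up looking like once the series is complete, built from the existing
memory_failure(), force_sig_mceerr() and force_sig() helpers:

	static void kill_me_maybe(struct callback_head *cb)
	{
		struct task_struct *p = container_of(cb, struct task_struct, mce_kill_me);
		int flags = MF_ACTION_REQUIRED;

		if (!p->mce_ripv)
			flags |= MF_MUST_KILL;

		/* Try to unmap/offline the poisoned page first. */
		if (!memory_failure(p->mce_addr >> PAGE_SHIFT, flags) &&
		    !(p->mce_kflags & MCE_IN_KERNEL_COPYIN))
			return;

		if (p->mce_kflags & MCE_IN_KERNEL_COPYIN) {
			/*
			 * The poison was hit while copying from user space:
			 * report the user virtual address saved in mce_vaddr.
			 */
			force_sig_mceerr(BUS_MCEERR_AR, p->mce_vaddr, PAGE_SHIFT);
		} else {
			pr_err("Memory error not recovered");
			force_sig(SIGBUS);
		}
	}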
>
> Refactor the recovery code path to share common code between the "fault
> in user mode" case and the "fault while copying from user" case.
>
> The new code paths will be activated by the next patch, which sets
> MCE_IN_KERNEL_COPYIN.
>
> Signed-off-by: Youquan Song <youquan.song@xxxxxxxxx>
> Signed-off-by: Tony Luck <tony.luck@xxxxxxxxx>
> ---
> arch/x86/kernel/cpu/mce/core.c | 33 +++++++++++++++++++++------------
> include/linux/sched.h | 2 ++
> 2 files changed, 23 insertions(+), 12 deletions(-)
Isn't that just simpler?
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 4d2cf08820af..dc6c83aa2ec1 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1261,6 +1261,21 @@ static void kill_me_maybe(struct callback_head *cb)
 	kill_me_now(cb);
 }
 
+static inline void queue_task_work(struct mce *m, int kill_it)
+{
+	current->mce_addr = m->addr;
+	current->mce_kflags = m->kflags;
+	current->mce_ripv = !!(m->mcgstatus & MCG_STATUS_RIPV);
+	current->mce_whole_page = whole_page(m);
+
+	if (kill_it)
+		current->mce_kill_me.func = kill_me_now;
+	else
+		current->mce_kill_me.func = kill_me_maybe;
+
+	task_work_add(current, &current->mce_kill_me, true);
+}
+
 /*
  * The actual machine check handler. This only handles real
  * exceptions when something got corrupted coming in through int 18.
@@ -1402,13 +1417,8 @@ noinstr void do_machine_check(struct pt_regs *regs)
 		/* If this triggers there is no way to recover. Die hard. */
 		BUG_ON(!on_thread_stack() || !user_mode(regs));
 
-		current->mce_addr = m.addr;
-		current->mce_ripv = !!(m.mcgstatus & MCG_STATUS_RIPV);
-		current->mce_whole_page = whole_page(&m);
-		current->mce_kill_me.func = kill_me_maybe;
-		if (kill_it)
-			current->mce_kill_me.func = kill_me_now;
-		task_work_add(current, &current->mce_kill_me, true);
+		queue_task_work(&m, kill_it);
+
 	} else {
 		/*
 		 * Handle an MCE which has happened in kernel space but from
@@ -1423,6 +1433,9 @@ noinstr void do_machine_check(struct pt_regs *regs)
 			if (!fixup_exception(regs, X86_TRAP_MC, 0, 0))
 				mce_panic("Failed kernel mode recovery", &m, msg);
 		}
+
+		if (m.kflags & MCE_IN_KERNEL_COPYIN)
+			queue_task_work(&m, kill_it);
 	}
 out:
 	mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index e7f65573dde3..d383cf09e78f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1308,6 +1308,8 @@ struct task_struct {
 #endif
 
 #ifdef CONFIG_X86_MCE
+	void __user			*mce_vaddr;
+	__u64				mce_kflags;
 	u64				mce_addr;
 	__u64				mce_ripv : 1,
 					mce_whole_page : 1,
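
One more note for completeness: the second queue_task_work() call above only
fires once something sets MCE_IN_KERNEL_COPYIN, which per the commit message
is the job of the next patch. A very rough guess at the shape of that,
somewhere in the severity code: error_context(), mc_recoverable(),
IN_KERNEL_RECOV and the MCE_IN_KERNEL_* kflags are existing names, while
mce_is_copy_from_user() and mce_copy_fault_vaddr() are made-up placeholders
for however the access and its user virtual address actually get decoded:

	/* Hypothetical sketch, inside error_context() in mce/severity.c: */
	if (mc_recoverable(m->mcgstatus) && mce_is_copy_from_user(regs)) {
		/* Remember the user virtual address for the later SIGBUS. */
		current->mce_vaddr = mce_copy_fault_vaddr(regs);
		m->kflags |= MCE_IN_KERNEL_COPYIN | MCE_IN_KERNEL_RECOV;
		return IN_KERNEL_RECOV;
	}
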
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette