[PATCH 0/5] x86/entry: simplify stack switching on userspace exceptions

From: Lai Jiangshan
Date: Wed May 27 2020 - 03:31:17 EST

Commit 7f2590a110b8 ("x86/entry/64: Use a per-CPU trampoline stack for
IDT entries") resulted in the kernel (error_entry) always pushing
pt_regs to the entry stack (sp0) when an exception occurs in userspace,
and then copying it to the kernel stack.

This is a hot path (page faults, for example), whereas interrupt_entry
switches to the kernel stack directly and pushes pt_regs there.
error_entry should do the same. This is the job of patches 1-2.

Patches 3-5 simplify the stack switching for .Lerror_bad_iret by doing
all the work in one function (fixup_bad_iret()).

The patch set is based on tip/master (c021d3d8fe45) (Mon May 25).

The diffstat is "66 insertions(+), 66 deletions(-)", but in fact it
mainly adds comments and removes code.

Cc: Andy Lutomirski <luto@xxxxxxxxxx>,
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>,
Cc: Ingo Molnar <mingo@xxxxxxxxxx>,
Cc: Borislav Petkov <bp@xxxxxxxxx>,
Cc: x86@xxxxxxxxxx,
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>,
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>,
Cc: Alexandre Chartre <alexandre.chartre@xxxxxxxxxx>,
Cc: "Eric W. Biederman" <ebiederm@xxxxxxxxxxxx>,
Cc: Jann Horn <jannh@xxxxxxxxxx>,
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>

Lai Jiangshan (5):
x86/entry: introduce macro idtentry_swapgs_and_switch_to_kernel_stack
x86/entry: avoid calling into sync_regs() when entering from userspace
x86/entry: directly switch to kernel stack when .Lerror_bad_iret
x86/entry: remove unused sync_regs()
x86/entry: don't copy to tmp in fixup_bad_iret

arch/x86/entry/entry_64.S | 89 ++++++++++++++++++++----------------
arch/x86/include/asm/traps.h | 1 -
arch/x86/kernel/traps.c | 42 +++++++----------
3 files changed, 66 insertions(+), 66 deletions(-)