From: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
Commit 7f2590a110b8 ("x86/entry/64: Use a per-CPU trampoline stack for
IDT entries") made the kernel (error_entry), on an exception from
userspace, always push pt_regs onto the entry stack (sp0) and then copy
it to the kernel stack.
Recent x86/entry work made interrupts use idtentry as well, so all
interrupt code now also saves pt_regs on the sp0 stack and then copies
it to the thread stack, just like exceptions.
These are hot paths (page fault, IPI), so this overhead should be
avoided. The original interrupt_entry switched directly to the kernel
stack and pushed pt_regs there; error_entry should do the same.
That is the job of patch 1.
Patches 2 and 3 simplify the stack switching for .Lerror_bad_iret by
doing all the work in one function (fixup_bad_iret()).
The patch set is based on v5.9-rc1.
Changed from V1:
based on tip/master -> based on tip/x86/entry
Patch 1 replaces patches 1 and 2 of V1; it borrows the
original interrupt_entry's code into error_entry.
Patches 2-4 are V1's patches 3-5, unchanged (but rebased).
Changed from V2:
(re-)based on v5.9-rc1
dropped patch 4 of the V2 patch set
Cc: Andy Lutomirski <luto@xxxxxxxxxx>,
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>,
Cc: Ingo Molnar <mingo@xxxxxxxxxx>,
Cc: Borislav Petkov <bp@xxxxxxxxx>,
Cc: x86@xxxxxxxxxx,
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>,
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>,
Cc: Alexandre Chartre <alexandre.chartre@xxxxxxxxxx>,
Cc: "Eric W. Biederman" <ebiederm@xxxxxxxxxxxx>,
Cc: Jann Horn <jannh@xxxxxxxxxx>,
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Lai Jiangshan (3):
x86/entry: avoid calling into sync_regs() when entering from userspace
x86/entry: directly switch to kernel stack when .Lerror_bad_iret
x86/entry: remove unused sync_regs()
arch/x86/entry/entry_64.S | 52 +++++++++++++++++++++++-------------
arch/x86/include/asm/traps.h | 1 -
arch/x86/kernel/traps.c | 22 +++------------
3 files changed, 38 insertions(+), 37 deletions(-)