[PATCH v2 00/10] x86: Rewrite 64-bit syscall code
From: Andy Lutomirski
Date: Thu Jan 28 2016 - 18:14:43 EST
This is broadly the same treatment that the 32-bit and compat entry
code already received, except that this time I preserved the syscall
fast path. I was unable to measure any significant fast-path
performance change on my laptop. The last patch moves the 64-bit
syscall slow path itself into C; a rough sketch of what that dispatch
ends up looking like follows the patch list below.
Changes from v1:
- Various tidying up.
- Removed the duplicate tables (that change is folded in, so the separate fast-path table isn't part of this series).
- Rebased onto 4.5-rc1.
- Removed the enter_from_user_mode changes -- let's get the basics in first.
Andy Lutomirski (10):
selftests/x86: Extend Makefile to allow 64-bit-only tests
selftests/x86: Add check_initial_reg_state
x86/syscalls: Refactor syscalltbl.sh
x86/syscalls: Remove __SYSCALL_COMMON and __SYSCALL_X32
x86/syscalls: Move compat syscall entry handling into syscalltbl.sh
x86/syscalls: Add syscall entry qualifiers
x86/entry/64: Always run ptregs-using syscalls on the slow path
x86/entry/64: Call all native slow-path syscalls with full pt-regs
x86/entry/64: Stop using int_ret_from_sys_call in ret_from_fork
x86/entry/64: Migrate the 64-bit syscall slow path to C
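As a rough sketch of where the last patch ends up (this is
illustrative, not a quote from the patch; the names do_syscall_64,
syscall_return_slowpath, and the exact table call signature are
stand-ins), the C slow path looks something like this:

/*
 * Sketch of a pt_regs-based 64-bit syscall dispatcher in C.  The asm
 * entry stub saves full pt_regs and calls here; this function picks
 * the handler out of sys_call_table and stores the return value in
 * regs->ax.  Names are illustrative, not the literal patch.
 */
__visible void do_syscall_64(struct pt_regs *regs)
{
	unsigned long nr = regs->orig_ax;

	local_irq_enable();

	if (likely(nr < NR_syscalls)) {
		/* x86-64 syscall argument order: rdi, rsi, rdx, r10, r8, r9 */
		regs->ax = sys_call_table[nr](regs->di, regs->si,
					      regs->dx, regs->r10,
					      regs->r8, regs->r9);
	} else {
		regs->ax = -ENOSYS;
	}

	/* Exit work (tracing, signal delivery, etc.) happens on the way out. */
	syscall_return_slowpath(regs);
}

The fast path stays in asm; only syscalls that need full pt_regs or
that hit slow-path work on entry or exit go through this route.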
arch/x86/entry/common.c | 26 ++
arch/x86/entry/entry_64.S | 271 +++++++--------------
arch/x86/entry/syscall_32.c | 10 +-
arch/x86/entry/syscall_64.c | 13 +-
arch/x86/entry/syscalls/syscall_64.tbl | 18 +-
arch/x86/entry/syscalls/syscalltbl.sh | 58 ++++-
arch/x86/kernel/asm-offsets_32.c | 2 +-
arch/x86/kernel/asm-offsets_64.c | 10 +-
arch/x86/um/sys_call_table_32.c | 4 +-
arch/x86/um/sys_call_table_64.c | 7 +-
arch/x86/um/user-offsets.c | 6 +-
tools/testing/selftests/x86/Makefile | 14 +-
.../selftests/x86/check_initial_reg_state.c | 109 +++++++++
13 files changed, 317 insertions(+), 231 deletions(-)
create mode 100644 tools/testing/selftests/x86/check_initial_reg_state.c
--
2.5.0