[PATCH v5 00/10] x86/alternative: text_poke() fixes
From: Nadav Amit
Date: Tue Nov 13 2018 - 15:25:45 EST
This patch-set addresses some issues that might affect the security and
the correctness of code patching.
The main issue the patches deal with is that the fixmap PTEs used for
patching are accessible from other cores and might be exploited. They
are not even flushed from the TLBs of remote cores, so the risk is even
higher. This set addresses the issue by introducing a temporary mm that
is only used during patching. To do so, text_poke() must not be used
before the poking mm is initialized; text_poke_early() is used instead.
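For illustration, here is a minimal sketch of the temporary-mm idea.
The names (poking_mm, use_temporary_mm(), unuse_temporary_mm()) are
illustrative and may differ from the actual patches; only
switch_mm_irqs_off() and the lockdep/tlbstate helpers are existing
kernel interfaces:

#include <asm/mmu_context.h>	/* switch_mm_irqs_off() */
#include <asm/tlbflush.h>	/* cpu_tlbstate */

static struct mm_struct *poking_mm;	/* mm used only for patching */

typedef struct {
	struct mm_struct *prev;
} temp_mm_state_t;

/* Switch this CPU to the patching mm; IRQs must be disabled. */
static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
{
	temp_mm_state_t temp_state;

	lockdep_assert_irqs_disabled();
	temp_state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
	switch_mm_irqs_off(NULL, mm, current);
	return temp_state;
}

/* Restore the previous mm; the patching PTEs become unreachable. */
static inline void unuse_temporary_mm(temp_mm_state_t prev)
{
	lockdep_assert_irqs_disabled();
	switch_mm_irqs_off(NULL, prev.prev, current);
}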
During v3 of this set, Andy & Thomas suggested that early patching of
modules can be improved by simply writing to the memory. This actually
raises a security concern: there should not be any W+X mappings at any
given moment, and module loading breaks this protection for no good
reason. This patch-set therefore also addresses that issue, while
(presumably) improving patching speed, by making module memory RW(+NX)
initially and changing it to RO(+X) before execution.
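As an illustrative sketch of that transition (the helper name below is
hypothetical; set_memory_ro()/set_memory_x() are the existing
interfaces used to flip the permissions):

#include <linux/pfn.h>		/* PFN_UP() */
#include <asm/set_memory.h>	/* set_memory_ro(), set_memory_x() */

/*
 * Module text is written while it is still mapped RW(+NX). Before it
 * can be executed, write permission is dropped and only then execute
 * permission is granted, so no W+X mapping exists at any point.
 */
static void module_text_make_rox(void *addr, unsigned long size)
{
	unsigned long start = (unsigned long)addr;
	int npages = PFN_UP(size);

	set_memory_ro(start, npages);
	set_memory_x(start, npages);
}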
In addition, the patch-set addresses various other issues related to
code patching and does some cleanup. In this version I removed some
Tested-by and Reviewed-by tags due to extensive changes in some of the
patches.
v4->v5:
- Fix Xen breakage [Damian Tometzki]
- BUG_ON() when poking_mm initialization fails [PeterZ]
- Better comments on "x86/mm: temporary mm struct"
- Cleaner removal of the custom poker
v3->v4:
- Setting modules as RO when loading [Andy, tglx]
- Adding text_poke_kgdb() to keep the text_mutex assertion [tglx]
- Simpler logic to decide when to use early-poking [peterZ]
- More cleanup
v2->v3:
- Remove the fallback path in text_poke() [peterZ]
- poking_init() was broken due to the local variable poking_addr
- Preallocate tables for the temporary-mm to avoid sleep-in-atomic
- Prevent KASAN from yelling at text_poke()
v1->v2:
- Partial revert of 9222f606506c added to 1/6 [masami]
- Added Masami's reviewed-by tag
RFC->v1:
- Added handling of error in get_locked_pte()
- Remove lockdep assertion, clarify text_mutex use instead [masami]
- Comment fix [peterz]
- Removed remainders of text_poke return value [masami]
- Use __weak for poking_init instead of macros [masami]
- Simplify error handling in poking_init [masami]
Andy Lutomirski (1):
x86/mm: temporary mm struct
Nadav Amit (9):
Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
x86/jump_label: Use text_poke_early() during early init
fork: provide a function for copying init_mm
x86/alternative: initializing temporary mm for patching
x86/alternative: use temporary mm for text poking
x86/kgdb: avoid redundant comparison of patched code
x86: avoid W^X being broken during modules loading
x86/jump-label: remove support for custom poker
x86/alternative: remove the return value of text_poke_*()
arch/x86/include/asm/fixmap.h | 2 -
arch/x86/include/asm/mmu_context.h | 32 +++++
arch/x86/include/asm/pgtable.h | 3 +
arch/x86/include/asm/text-patching.h | 9 +-
arch/x86/kernel/alternative.c | 208 +++++++++++++++++++++------
arch/x86/kernel/jump_label.c | 19 ++-
arch/x86/kernel/kgdb.c | 19 +--
arch/x86/kernel/module.c | 2 +-
arch/x86/mm/init_64.c | 35 +++++
arch/x86/xen/mmu_pv.c | 2 -
include/linux/filter.h | 6 +
include/linux/sched/task.h | 1 +
init/main.c | 3 +
kernel/fork.c | 24 +++-
kernel/module.c | 10 ++
15 files changed, 292 insertions(+), 83 deletions(-)
--
2.17.1