[tip: core/entry] context_tracking: Introduce HAVE_CONTEXT_TRACKING_OFFSTACK
From: tip-bot2 for Frederic Weisbecker
Date: Fri Nov 20 2020 - 07:45:01 EST
The following commit has been merged into the core/entry branch of tip:
Commit-ID: 83c2da2e605c73aafcc02df04b2dbf1ccbfc24c0
Gitweb: https://git.kernel.org/tip/83c2da2e605c73aafcc02df04b2dbf1ccbfc24c0
Author: Frederic Weisbecker <frederic@xxxxxxxxxx>
AuthorDate: Tue, 17 Nov 2020 16:16:33 +01:00
Committer: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
CommitterDate: Thu, 19 Nov 2020 11:25:41 +01:00
context_tracking: Introduce HAVE_CONTEXT_TRACKING_OFFSTACK
Historically, context tracking had to deal with fragile entry code paths,
ie: code running before user_exit() is called or after user_enter() is
called, in case some of those spots would call schedule() or use RCU. In
such cases, the site had to be bracketed by exception_enter() and
exception_exit(), which save the context tracking state on the task stack.

Such sleepable fragile code paths had many different origins: tracing,
exceptions, early or late calls to context tracking on syscalls...
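
For reference, the workaround pattern looked roughly like this (a
simplified sketch of the exception_enter()/exception_exit() usage, not a
quote of any particular call site):

    enum ctx_state prev_state;

    /* Force CONTEXT_KERNEL and remember the previous tracking state. */
    prev_state = exception_enter();

    /* Code here may call schedule() or use RCU safely. */

    /* Restore the saved state, possibly back to CONTEXT_USER. */
    exception_exit(prev_state);
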
Aside from not being pretty, saving the context tracking state on the
task stack forces us to run context tracking on all CPUs, including
housekeepers, and prevents us from completely shutting down nohz_full at
runtime on a CPU in the future, as context tracking and its overhead
would still need to run system-wide.

Thanks to the extensive efforts to sanitize the x86 entry code, those
conditions have now been removed and we can get rid of these workarounds
on that architecture.

Create a Kconfig feature to express this achievement.
Signed-off-by: Frederic Weisbecker <frederic@xxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Link: https://lkml.kernel.org/r/20201117151637.259084-2-frederic@xxxxxxxxxx
---
arch/Kconfig | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/arch/Kconfig b/arch/Kconfig
index 56b6ccc..090ef35 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -618,6 +618,23 @@ config HAVE_CONTEXT_TRACKING
protected inside rcu_irq_enter/rcu_irq_exit() but preemption or signal
handling on irq exit still need to be protected.
+config HAVE_CONTEXT_TRACKING_OFFSTACK
+ bool
+ help
+ Architecture neither relies on exception_enter()/exception_exit()
+ nor on schedule_user(). Also preempt_schedule_notrace() and
+ preempt_schedule_irq() can't be called in a preemptible section
+ while context tracking is CONTEXT_USER. This feature reflects a sane
+ entry implementation where the following requirements are met on
+ critical entry code, ie: before user_exit() or after user_enter():
+
+ - Critical entry code isn't preemptible (or better yet:
+ not interruptible).
+ - No use of RCU read side critical sections, unless rcu_nmi_enter()
+ got called.
+ - No use of instrumentation, unless instrumentation_begin() got
+ called.
+
config HAVE_TIF_NOHZ
bool
help
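
For illustration, an architecture advertising HAVE_CONTEXT_TRACKING_OFFSTACK
is expected to have its entry code call the context tracking hooks directly
with interrupts disabled, roughly along these lines (a hypothetical sketch;
arch_enter_from_user_mode() is an illustrative name, not the actual x86
implementation):

    /* Hypothetical entry-side hook meeting the constraints listed above. */
    noinstr void arch_enter_from_user_mode(void)
    {
            /* IRQs are off: this critical entry section is not preemptible. */
            user_exit_irqoff();        /* leave CONTEXT_USER, no state to save */

            instrumentation_begin();   /* instrumentation allowed past this point */
            /* ... regular, preemptible kernel work may start here ... */
            instrumentation_end();
    }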