[tip: core/rcu] preempt: Make preempt count unconditional
From: tip-bot2 for Thomas Gleixner
Date: Fri Oct 09 2020 - 13:01:48 EST
The following commit has been merged into the core/rcu branch of tip:
Commit-ID: 7681205ba49d8b0dcb3a0f55d97f71e1da93e972
Gitweb: https://git.kernel.org/tip/7681205ba49d8b0dcb3a0f55d97f71e1da93e972
Author: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
AuthorDate: Mon, 14 Sep 2020 19:18:06 +02:00
Committer: Paul E. McKenney <paulmck@xxxxxxxxxx>
CommitterDate: Mon, 28 Sep 2020 16:02:49 -07:00
preempt: Make preempt count unconditional
The handling of preempt_count() is inconsistent across kernel
configurations. On kernels with PREEMPT_COUNT=n,
preempt_disable/enable() and the lock/unlock functions do not affect
the preempt count; only local_bh_disable/enable(), the _bh variants of
locking, soft interrupt delivery, hard interrupt and NMI context affect it.
It's therefore impossible to have a consistent set of checks which provide
information about the context in which a function is called. In many cases
it makes sense to have separate functions for separate contexts, but there
are valid reasons to avoid that and handle different calling contexts
conditionally.
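For illustration, on a PREEMPT_COUNT=n kernel preempt_disable/enable()
compile down to nothing but compiler barriers. A simplified sketch of the
include/linux/preempt.h definitions (not the verbatim header):

	#ifdef CONFIG_PREEMPT_COUNT
	#define preempt_disable() \
	do { \
		preempt_count_inc(); \
		barrier(); \
	} while (0)
	#else
	/* PREEMPT_COUNT=n: no accounting at all, just a compiler barrier */
	#define preempt_disable()	barrier()
	#define preempt_enable()	barrier()
	#endif

So on such kernels preempt_count() reflects only BH, hardirq and NMI
nesting, never preempt_disable() sections or held spinlocks.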
The lack of such indicators which work on all kernel configurations is a
constant source of trouble because developers either do not understand the
implications or try to work around this inconsistency in weird
ways. Nor do these issues seem to be caught by reviewers or testing.
Recently merged code does:
gfp = preemptible() ? GFP_KERNEL : GFP_ATOMIC;
Looks obviously correct, except for the fact that preemptible() is
unconditionally false for CONFIG_PREEMPT_COUNT=n, i.e. all allocations in
that code use GFP_ATOMIC on such kernels.
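The reason is visible directly in include/linux/preempt.h; roughly
(simplified sketch):

	#ifdef CONFIG_PREEMPT_COUNT
	#define preemptible()	(preempt_count() == 0 && !irqs_disabled())
	#else
	/* Without a preempt count there is nothing to test; hardwired false */
	#define preemptible()	0
	#endif

With PREEMPT_COUNT=n the ternary above can therefore never select
GFP_KERNEL.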
Attempts to make preempt count unconditional and consistent have been
rejected in the past with handwaving performance arguments.
Freshly conducted benchmarks did not reveal any measurable impact from
enabling the preempt count unconditionally. On kernels with
CONFIG_PREEMPT_NONE or CONFIG_PREEMPT_VOLUNTARY the preempt count is only
incremented and decremented, but the result of the decrement is not
tested. In contrast, enabling CONFIG_PREEMPT, which does test the result,
has a small but measurable impact due to the conditional branch/call.
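The difference is on the preempt_enable() side. A simplified sketch of the
two variants, assuming the usual include/linux/preempt.h layout (not the
verbatim header):

	#ifdef CONFIG_PREEMPTION
	/* Full preemption: test the decrement and reschedule if it hits zero */
	#define preempt_enable() \
	do { \
		barrier(); \
		if (unlikely(preempt_count_dec_and_test())) \
			__preempt_schedule(); \
	} while (0)
	#else
	/* PREEMPT_NONE/VOLUNTARY with a preempt count: decrement only, no test */
	#define preempt_enable() \
	do { \
		barrier(); \
		preempt_count_dec(); \
	} while (0)
	#endif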
It's about time to make essential functionality of the kernel consistent
across the various preemption models.
Enable CONFIG_PREEMPT_COUNT unconditionally. Follow-up changes will remove
the #ifdeffery and remove the config option at the end.
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
---
kernel/Kconfig.preempt | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
index bf82259..3f4712f 100644
--- a/kernel/Kconfig.preempt
+++ b/kernel/Kconfig.preempt
@@ -75,8 +75,7 @@ config PREEMPT_RT
endchoice
config PREEMPT_COUNT
- bool
+ def_bool y
config PREEMPTION
bool
- select PREEMPT_COUNT