[tip: core/core] signal: Don't disable preemption in ptrace_stop() on PREEMPT_RT
From: tip-bot2 for Sebastian Andrzej Siewior
Date: Tue Sep 19 2023 - 16:16:56 EST
The following commit has been merged into the core/core branch of tip:
Commit-ID: 1aabbc532413ced293952f8e149ad0a607d6e470
Gitweb: https://git.kernel.org/tip/1aabbc532413ced293952f8e149ad0a607d6e470
Author: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
AuthorDate: Thu, 03 Aug 2023 12:09:32 +02:00
Committer: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
CommitterDate: Tue, 19 Sep 2023 22:08:29 +02:00
signal: Don't disable preemption in ptrace_stop() on PREEMPT_RT
On PREEMPT_RT, keeping preemption disabled during the invocation of
cgroup_enter_frozen() is a problem: the function acquires css_set_lock,
which is a sleeping lock on PREEMPT_RT and must not be taken with
preemption disabled.
The preempt-disabled section exists only as a performance optimisation
and can be avoided.
Extend the comment and don't disable preemption before scheduling on
PREEMPT_RT.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Acked-by: Oleg Nesterov <oleg@xxxxxxxxxx>
Link: https://lore.kernel.org/r/20230803100932.325870-3-bigeasy@xxxxxxxxxxxxx
---
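For readers who want to see the failure mode in isolation, below is a minimal
userspace sketch (plain C, not kernel code) of why the old preempt_disable()
bracketing breaks on PREEMPT_RT: with CONFIG_PREEMPT_RT, spinlock_t is
substituted by a sleeping rt_mutex based lock, so acquiring css_set_lock
inside cgroup_enter_frozen() from a preempt-disabled region is a
sleeping-while-atomic bug. All helpers here (preempt_count,
sleeping_spin_lock(), the stubbed cgroup_enter_frozen()) are hypothetical
stand-ins that only model the relevant behaviour; they are not the kernel
implementations.

/* Minimal sketch, userspace C: the stubs below stand in for kernel
 * primitives and only model their behaviour under PREEMPT_RT. */
#include <assert.h>
#include <stdio.h>

static int preempt_count;                       /* stand-in for the preemption counter */

static void preempt_disable(void)               { preempt_count++; }
static void preempt_enable_no_resched(void)     { preempt_count--; }

/* On PREEMPT_RT a spinlock_t may sleep, so taking one with preemption
 * disabled must be rejected (modelled after the kernel's atomic-context check). */
static void sleeping_spin_lock(const char *name)
{
	assert(preempt_count == 0 && "sleeping lock taken in atomic context");
	printf("acquired %s\n", name);
}

static void cgroup_enter_frozen(void)           /* sketch: takes css_set_lock */
{
	sleeping_spin_lock("css_set_lock");
}

int main(void)
{
	preempt_disable();                      /* old bracketing ... */
	cgroup_enter_frozen();                  /* ... trips the assertion, like the RT splat */
	preempt_enable_no_resched();
	return 0;
}

Dropping the preempt_disable()/preempt_enable_no_resched() pair for the RT
case, as the hunk below does, runs the same call sequence from a preemptible
context, so the atomic-context check no longer applies.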
kernel/signal.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/kernel/signal.c b/kernel/signal.c
index 3035beb..f2a5578 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -2345,11 +2345,22 @@ static int ptrace_stop(int exit_code, int why, unsigned long message,
 	 * will be no preemption between unlock and schedule() and so
 	 * improving the performance since the ptracer will observe that
 	 * the tracee is scheduled out once it gets on the CPU.
+	 *
+	 * On PREEMPT_RT locking tasklist_lock does not disable preemption.
+	 * Therefore the task can be preempted after do_notify_parent_cldstop()
+	 * before unlocking tasklist_lock so there is no benefit in doing this.
+	 *
+	 * In fact disabling preemption is harmful on PREEMPT_RT because
+	 * the spinlock_t in cgroup_enter_frozen() must not be acquired
+	 * with preemption disabled due to the 'sleeping' spinlock
+	 * substitution of RT.
 	 */
-	preempt_disable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_disable();
 	read_unlock(&tasklist_lock);
 	cgroup_enter_frozen();
-	preempt_enable_no_resched();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_enable_no_resched();
 	schedule();
 	cgroup_leave_frozen(true);
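A side note on the IS_ENABLED(CONFIG_PREEMPT_RT) guard used above: it
evaluates to a compile-time constant, so a !PREEMPT_RT build keeps exactly
the preempt-disabled window it had before, while the dead branch is removed
on RT. The standalone sketch below illustrates that with a simplified
stand-in macro (a plain #ifdef, not the kernel's <linux/kconfig.h>
definition) and stubbed helpers; everything here is illustrative only.

#include <stdio.h>

/* #define CONFIG_PREEMPT_RT 1 */              /* uncomment to mimic an RT .config */

/* Simplified stand-in for IS_ENABLED(CONFIG_PREEMPT_RT); the kernel's
 * macro in <linux/kconfig.h> reaches the same 0/1 result differently. */
#ifdef CONFIG_PREEMPT_RT
# define PREEMPT_RT_ENABLED 1
#else
# define PREEMPT_RT_ENABLED 0
#endif

static void preempt_disable(void)              { puts("preempt_disable()"); }
static void preempt_enable_no_resched(void)    { puts("preempt_enable_no_resched()"); }
static void critical_path(void)                { puts("read_unlock + cgroup_enter_frozen"); }

int main(void)
{
	/* The guard is a constant, so the compiler drops the unused branch:
	 * !RT behaves exactly as before the patch, RT stays preemptible. */
	if (!PREEMPT_RT_ENABLED)
		preempt_disable();
	critical_path();
	if (!PREEMPT_RT_ENABLED)
		preempt_enable_no_resched();
	return 0;
}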