[PATCH v2] sched/rt: Document why has_pushable_tasks() isn't called with a runqueue lock
From: Steven Rostedt
Date: Thu Mar 02 2017 - 20:08:39 EST
From: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
While reviewing the RT scheduling IPI logic, I thought it was a bug that
has_pushable_tasks(rq) was not called under the runqueue lock. But then
I realized that there is no case where a race could cause a problem,
because any update that makes has_pushable_tasks() return true also
triggers a push_rt_task() call from the CPU doing the update.
This subtle logic deserves a comment.
Signed-off-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
---
Changes from v1:
Removed pronouns that caused confusion, and added a statement that
push_rt_task() is performed elsewhere when has_pushable_tasks() is
set someplace else.
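For reference, the pattern being documented can be sketched in a minimal
user-space form. This is only an illustration of the "unlocked check,
lock only when work is pending" idea, not kernel code: the names (struct
queue, has_pushable, try_to_push) are hypothetical, pthread mutexes stand
in for the raw runqueue spinlock, and the real kernel guarantee (the
setter doing the push itself) is modeled by re-checking under the lock.

```c
#include <pthread.h>
#include <stdbool.h>

/*
 * Hypothetical miniature of the pattern: the flag is read without
 * the lock, and the lock is taken only when the flag is set. A
 * racing setter performs the push from its own CPU, so a missed
 * read here is harmless; skipping the lock avoids contention.
 */
struct queue {
	pthread_mutex_t lock;
	bool has_pushable;	/* set by wakeups, cleared by push */
	int pushed;		/* work counter, for illustration */
};

static void try_to_push(struct queue *q)
{
	/*
	 * Unlocked read: if false, any concurrent setter pushes on
	 * its own CPU, so returning early here is safe.
	 */
	if (!q->has_pushable)
		return;

	pthread_mutex_lock(&q->lock);
	if (q->has_pushable) {	/* re-check under the lock */
		q->has_pushable = false;
		q->pushed++;
	}
	pthread_mutex_unlock(&q->lock);
}
```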
Index: linux-trace.git/kernel/sched/rt.c
===================================================================
--- linux-trace.git.orig/kernel/sched/rt.c
+++ linux-trace.git/kernel/sched/rt.c
@@ -1976,6 +1976,18 @@ static void try_to_push_tasks(void *arg)
src_rq = rq_of_rt_rq(rt_rq);
again:
+ /*
+ * Normally, has_pushable_tasks() would be checked with the
+ * runqueue lock held. But if has_pushable_tasks() is false
+ * when this hard interrupt handler function is entered, then
+ * making it true would require a wake up. A wake up of an RT
+ * task will either cause a schedule if the woken task is higher
+ * priority than the running task, or it will try to do a push
+ * from the CPU doing the wake up. In either case push_rt_task()
+ * would be performed there, and missing it here would not be an
+ * issue. Grabbing the runqueue lock in such a case would more
+ * likely just cause unnecessary contention.
+ */
if (has_pushable_tasks(rq)) {
raw_spin_lock(&rq->lock);
push_rt_task(rq);