[PATCH v2 2/2 sched_ext/for-6.12] sched_ext: Use sched_clock_cpu() instead of rq_clock_task() in touch_core_sched()
From: Tejun Heo
Date: Fri Aug 30 2024 - 13:54:49 EST
Since 3cf78c5d01d6 ("sched_ext: Unpin and repin rq lock from
balance_scx()"), sched_ext's balance path terminates rq_pin in the outermost
function. This is simpler and in line with what other balance functions are
doing, but it loses control over rq->clock_update_flags, which makes
assert_clock_updated() trigger if another CPU pins the rq lock.
The only place this matters is touch_core_sched(), which uses the timestamp
to order tasks from sibling rqs. Switch it to sched_clock_cpu(). Longer
term, a per-core dispatch sequence number may be a better fit.
v2: Use sched_clock_cpu() instead of ktime_get_ns() per David.
Signed-off-by: Tejun Heo <tj@xxxxxxxxxx>
Fixes: 3cf78c5d01d6 ("sched_ext: Unpin and repin rq lock from balance_scx()")
Cc: David Vernet <void@xxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
---
kernel/sched/ext.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -1453,13 +1453,18 @@ static void schedule_deferred(struct rq
*/
static void touch_core_sched(struct rq *rq, struct task_struct *p)
{
+ lockdep_assert_rq_held(rq);
+
#ifdef CONFIG_SCHED_CORE
/*
* It's okay to update the timestamp spuriously. Use
* sched_core_disabled() which is cheaper than enabled().
+ *
+ * As this is used to determine ordering between tasks of sibling CPUs,
+ * it may be better to use per-core dispatch sequence instead.
*/
if (!sched_core_disabled())
- p->scx.core_sched_at = rq_clock_task(rq);
+ p->scx.core_sched_at = sched_clock_cpu(cpu_of(rq));
#endif
}
@@ -1476,7 +1481,6 @@ static void touch_core_sched(struct rq *
static void touch_core_sched_dispatch(struct rq *rq, struct task_struct *p)
{
lockdep_assert_rq_held(rq);
- assert_clock_updated(rq);
#ifdef CONFIG_SCHED_CORE
if (SCX_HAS_OP(core_sched_before))