On Mon, 2024-11-04 at 08:05 -0500, Phil Auld wrote:
> On Sat, Nov 02, 2024 at 05:32:14AM +0100 Mike Galbraith wrote:
> > The buddy being preempted certainly won't be wakeup migrated...
>
> Not the waker who gets preempted, but the wakee may be a bit more
> sticky on his current CPU and thus stack more, since he's still
> in that runqueue.
Ah, indeed: if wakees don't get scraped off before being awakened, they
can and do miss chances at an idle CPU, according to trace_printk().
I'm undecided whether, overall, that's boon, bane, or even matters, as
there is still an ample supply of wakeup migration, but it seems it can
indeed inject wakeup latency needlessly, so <sharpens stick>...
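
For context, the stock path revives a delayed wakee in place, so
select_task_rq() never gets a say on its behalf. Roughly (paraphrased
from kernel/sched/core.c, details elided, not a verbatim excerpt):

static int ttwu_runnable(struct task_struct *p, int wake_flags)
{
	struct rq_flags rf;
	struct rq *rq;
	int ret = 0;

	rq = __task_rq_lock(p, &rf);
	if (task_on_rq_queued(p)) {
		update_rq_clock(rq);
		/*
		 * A delayed-dequeue wakee is still queued on its old
		 * runqueue, so it is simply revived where it sits; no
		 * idle-CPU search is performed on its behalf.
		 */
		if (p->se.sched_delayed)
			enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
		if (!task_on_cpu(rq, p))
			wakeup_preempt(rq, p, wake_flags);
		ttwu_do_wakeup(p);
		ret = 1;
	}
	__task_rq_unlock(rq, &rf);
	return ret;
}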
My box booted and has neither become exceptionally noisy nor fallen
inexplicably silent in... oh, minutes now, so surely yours will be
perfectly fine.
After one minute of browsing on a lightly loaded box, trace_printk() said:

  645 - racy peek says there is a room available
   11 - cool, reserved room is free
  206 - no vacancy or wakee pinned
38807 - SIS accommodates room seeker
The below should improve the odds, but given the ratio above, a high
return seems unlikely.
---
kernel/sched/core.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3790,7 +3790,13 @@ static int ttwu_runnable(struct task_str
 	rq = __task_rq_lock(p, &rf);
 	if (task_on_rq_queued(p)) {
 		update_rq_clock(rq);
-		if (p->se.sched_delayed)
+		/*
+		 * A mobile wakee whose room is occupied may try to migrate.
+		 */
+		if (p->se.sched_delayed && rq->nr_running > 1 && cpumask_weight(p->cpus_ptr) > 1) {
+			dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED | DEQUEUE_NOCLOCK);
+			goto out_unlock;
+		} else if (p->se.sched_delayed)
 			enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
 		if (!task_on_cpu(rq, p)) {
 			/*
@@ -3802,6 +3808,7 @@ static int ttwu_runnable(struct task_str
 		ttwu_do_wakeup(p);
 		ret = 1;
 	}
+out_unlock:
 	__task_rq_unlock(rq, &rf);
 
 	return ret;
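
FWIW, why the early dequeue enables migration: ttwu_runnable() now
returns 0 for the fully dequeued wakee, so try_to_wake_up() falls
through to the normal placement path, roughly (paraphrased from
mainline, details elided):

	/* try_to_wake_up(), paraphrased: ttwu_runnable() returning 0
	 * means the wakee was not revived in place, so it gets a
	 * fresh placement pass like any other sleeper.
	 */
	if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
		break;

	/* ... */

	cpu = select_task_rq(p, p->wake_cpu, &wake_flags);
	if (task_cpu(p) != cpu) {
		wake_flags |= WF_MIGRATED;
		set_task_cpu(p, cpu);
	}

	ttwu_queue(p, cpu, wake_flags);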