Re: [PATCH v4 3/3] sched/core: Prevent wakeup of ksoftirqd during idle load balance

From: K Prateek Nayak
Date: Sun Nov 10 2024 - 23:42:26 EST


Hello Sebastian,

On 11/8/2024 5:47 PM, Sebastian Andrzej Siewior wrote:
On 2024-10-30 07:15:57 [+0000], K Prateek Nayak wrote:
Scheduler raises a SCHED_SOFTIRQ to trigger a load balancing event
from the IPI handler on the idle CPU. Since the softirq can be raised
from flush_smp_call_function_queue(), it can end up waking up
ksoftirqd, which can give the illusion of the idle CPU being busy
when doing an idle load balance.

Adding a trace_printk() in nohz_csd_func() at the spot of raising
SCHED_SOFTIRQ, and enabling trace events for sched_switch, sched_wakeup,
and softirq_entry (for the SCHED_SOFTIRQ vector alone), helps observe
the current behavior:

<idle>-0 [000] dN.1.: nohz_csd_func: Raising SCHED_SOFTIRQ from nohz_csd_func
<idle>-0 [000] dN.4.: sched_wakeup: comm=ksoftirqd/0 pid=16 prio=120 target_cpu=000
<idle>-0 [000] .Ns1.: softirq_entry: vec=7 [action=SCHED]
<idle>-0 [000] .Ns1.: softirq_exit: vec=7 [action=SCHED]
<idle>-0 [000] d..2.: sched_switch: prev_comm=swapper/0 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=ksoftirqd/0 next_pid=16 next_prio=120
ksoftirqd/0-16 [000] d..2.: sched_switch: prev_comm=ksoftirqd/0 prev_pid=16 prev_prio=120 prev_state=S ==> next_comm=swapper/0 next_pid=0 next_prio=120
...
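
For reference, the debug hook behind the "Raising SCHED_SOFTIRQ ..."
line above amounts to roughly the following (illustrative only, not
part of the patch; placed just before the softirq is raised in
nohz_csd_func()):

	if (rq->idle_balance) {
		rq->nohz_idle_balance = flags;
		/* Debug-only: mark the spot where SCHED_SOFTIRQ is raised */
		trace_printk("Raising SCHED_SOFTIRQ from nohz_csd_func\n");
		raise_softirq_irqoff(SCHED_SOFTIRQ);
	}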

ksoftirqd is woken up before the idle thread calls
do_softirq_post_smp_call_flush(), which can make the runqueue appear
busy and prevent the idle load balancer from pulling tasks from an
overloaded runqueue towards itself [1].

Since the softirq raised is guaranteed to be serviced in irq_exit() or
via do_softirq_post_smp_call_flush(), set SCHED_SOFTIRQ without checking
the need to wake up ksoftirqd for idle load balancing.
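
For readers unfamiliar with the distinction, the two helpers differ
roughly as follows (simplified sketch of kernel/softirq.c; lockdep and
tracepoint details elided, exact shape may vary by kernel version):

	void __raise_softirq_irqoff(unsigned int nr)
	{
		/* Only marks the softirq as pending; no wakeup decision */
		or_softirq_pending(1UL << nr);
	}

	void raise_softirq_irqoff(unsigned int nr)
	{
		__raise_softirq_irqoff(nr);

		/*
		 * Outside hard/soft interrupt context, wake ksoftirqd so
		 * the pending softirq is serviced soon. This is the wakeup
		 * seen in the trace above when the CSD is flushed from the
		 * idle loop rather than from a hard IRQ.
		 */
		if (!in_interrupt() && should_wake_ksoftirqd())
			wakeup_softirqd();
	}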

Following are the observations with the changes when enabling the same
set of events:

<idle>-0 [000] dN.1.: nohz_csd_func: Raising SCHED_SOFTIRQ for nohz_idle_balance
<idle>-0 [000] dN.1.: softirq_raise: vec=7 [action=SCHED]
<idle>-0 [000] .Ns1.: softirq_entry: vec=7 [action=SCHED]

No unnecessary ksoftirqd wakeups are seen from the idle task's context
to service the softirq.

| Use __raise_softirq_irqoff() to raise the softirq. The SMP function call
| is always invoked on the requested CPU in an interrupt handler. It is
| guaranteed that soft interrupts are handled at the end.

You could extend it

| If the SMP function is invoked from an idle CPU via
| flush_smp_call_function_queue() then the HARD-IRQ flag is not set and
| raise_softirq_irqoff() needlessly wakes ksoftirqd because soft
| interrupts are handled before ksoftirqd gets on the CPU.

I'll reword the log as suggested in the next version.


This on its own is a reasonable optimisation. A different question would
be whether flush_smp_call_function_queue() should pretend to be in-IRQ
like a regular IPI but…

I thought about it initially, but seeing the optimizations and checks
around "hardirq_stack", and the checks to reuse it in certain contexts,
led me to believe that there may be more nuances that I do not have the
full picture of, so I went ahead with this simpler solution.
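
For completeness, the guarantee relied on here can be seen in the flush
path itself, which services any softirq raised by the CSD handlers right
away (simplified sketch of kernel/smp.c; details elided):

	void flush_smp_call_function_queue(void)
	{
		unsigned int was_pending;
		unsigned long flags;

		if (llist_empty(this_cpu_ptr(&call_single_queue)))
			return;

		local_irq_save(flags);
		/* Softirqs already pending before the flush */
		was_pending = local_softirq_pending();
		__flush_smp_call_function_queue(true);

		/*
		 * Anything raised by the CSD handlers (e.g. SCHED_SOFTIRQ
		 * from nohz_csd_func()) is handled here, so no ksoftirqd
		 * wakeup is needed for it.
		 */
		if (local_softirq_pending())
			do_softirq_post_smp_call_flush(was_pending);

		local_irq_restore(flags);
	}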


Reviewed-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>

Thank you for the review!


Fixes: b2a02fc43a1f ("smp: Optimize send_call_function_single_ipi()")
Reported-by: Julia Lawall <julia.lawall@xxxxxxxx>
Closes: https://lore.kernel.org/lkml/fcf823f-195e-6c9a-eac3-25f870cb35ac@xxxxxxxx/ [1]
Suggested-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Signed-off-by: K Prateek Nayak <kprateek.nayak@xxxxxxx>
---
v3..v4:

o New patch based on Sebastian's suggestion.
---
kernel/sched/core.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index aaf99c0bcb49..2ee3621d6e7e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1244,7 +1244,18 @@ static void nohz_csd_func(void *info)
rq->idle_balance = idle_cpu(cpu);
if (rq->idle_balance) {
rq->nohz_idle_balance = flags;
- raise_softirq_irqoff(SCHED_SOFTIRQ);
+
+ /*
+ * Don't wakeup ksoftirqd when raising SCHED_SOFTIRQ
+ * since the idle load balancer may mistake wakeup of
+ * ksoftirqd as a genuine task wakeup and bail out from
+ * load balancing early. Since it is guaranteed that
+ * pending softirqs will be handled soon, either on
+ * irq_exit() or via do_softirq_post_smp_call_flush(),
+ * raise SCHED_SOFTIRQ without checking the need to
+ * wakeup ksoftirqd.
+ */

/*
* This is always invoked from an interrupt handler, simply raise the
* softirq.
*/

should be enough IMHO. But *I* would even skip that, since it is
obvious.

I'll remove it in the subsequent version. I'll wait a bit before sending
it out to see if folks have any suggestions on the parallel thread
regarding handling SCHED_SOFTIRQ from ksoftirqd.


+ __raise_softirq_irqoff(SCHED_SOFTIRQ);
}
}

Sebastian

--
Thanks and Regards,
Prateek