Re: [PATCH v3 5/5] softirq: Avoid unnecessary wakeup of ksoftirqd when a call to do_softirq() is pending

From: K Prateek Nayak
Date: Mon Oct 28 2024 - 00:24:14 EST


Hello Sebastian,

Thank you for reviewing the series!

On 10/25/2024 10:33 PM, Sebastian Andrzej Siewior wrote:
> On 2024-10-14 09:03:39 [+0000], K Prateek Nayak wrote:
>> Since commit b2a02fc43a1f4 ("smp: Optimize
>> send_call_function_single_ipi()"), sending an actual interrupt to an
>> idle CPU in TIF_POLLING_NRFLAG mode can be avoided by queuing the SMP
>> call function on the call function queue of the CPU and setting the
>> TIF_NEED_RESCHED bit in the idle task's thread info. The call function
>> is handled in the idle exit path when do_idle() calls
>> flush_smp_call_function_queue().
>>
>> However, since flush_smp_call_function_queue() is executed in the idle
>> thread's context, the in_interrupt() check within a call function will
>> return false. raise_softirq() uses this check to decide whether to wake
>> ksoftirqd, since a softirq raised from an interrupt context will be
>> handled at irq exit. In all other cases, raise_softirq() wakes up
>> ksoftirqd to handle the softirq on a !PREEMPT_RT kernel.
>
> Stupid question. You talk about the invocation from nohz_csd_func(),
> right?
> Given that this is an IPI and always invoked from an IRQ, the softirq
> is invoked on IRQ-exit.

Yes, there are no issues in that case.
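
To spell out the distinction: the wakeup decision in
raise_softirq_irqoff() looks roughly like the below (simplified from
kernel/softirq.c):

	inline void raise_softirq_irqoff(unsigned int nr)
	{
		__raise_softirq_irqoff(nr);

		/*
		 * In interrupt context the pending softirq runs at irq
		 * exit; otherwise ksoftirqd is woken to handle it soon.
		 */
		if (!in_interrupt() && should_wake_ksoftirqd())
			wakeup_softirqd();
	}

When the IPI actually fires, in_interrupt() is true and the wakeup is
skipped. Only the flush from the idle-exit path takes the wakeup branch.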

> If it is flushed from
> flush_smp_call_function_queue() then the softirq is handled via
> do_softirq_post_smp_call_flush(). In that case couldn't you just tell
> nohz_csd_func() to use __raise_softirq_irqoff(SCHED_SOFTIRQ)? This
> should solve this, right?

I cannot think of any reason why it wouldn't work. Let me check real
quick and update the series if it works. Thanks a ton for the
suggestion!
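
For reference, I'm thinking of something along the lines of the below
(untested, and quoting nohz_csd_func() from memory):

 	rq->idle_balance = idle_cpu(cpu);
 	if (rq->idle_balance) {
 		rq->nohz_idle_balance = flags;
-		raise_softirq_irqoff(SCHED_SOFTIRQ);
+		/*
+		 * The softirq is either handled at irq exit (IPI case) or
+		 * via do_softirq_post_smp_call_flush() when flushed from
+		 * flush_smp_call_function_queue(), so the ksoftirqd
+		 * wakeup heuristics in raise_softirq_irqoff() can be
+		 * skipped.
+		 */
+		__raise_softirq_irqoff(SCHED_SOFTIRQ);
 	}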


>> diff --git a/kernel/softirq.c b/kernel/softirq.c
>> index 0730c2b43ae4..3a6b3e67ea24 100644
>> --- a/kernel/softirq.c
>> +++ b/kernel/softirq.c
>> @@ -99,6 +99,10 @@ EXPORT_PER_CPU_SYMBOL_GPL(hardirq_context);
>>   *
>>   * The per CPU counter prevents pointless wakeups of ksoftirqd in case that
>>   * the task which is in a softirq disabled section is preempted or blocks.
>> + *
>> + * The bottom bits of softirq_ctrl::cnt are used to indicate an impending call
>> + * to do_softirq() to prevent pointless wakeups of ksoftirqd since the CPU
>> + * promises to handle softirqs soon.
>>   */

> The comment that you are extending and the comment regarding
> SOFTIRQ_OFFSET were nearby. I don't like that those two are now far
> apart.

Noted. If the above suggestion doesn't work, I'll rearrange this bit and
refresh the series.


>>  struct softirq_ctrl {
>>  	local_lock_t	lock;
>> @@ -109,6 +113,16 @@ static DEFINE_PER_CPU_ALIGNED(struct softirq_ctrl, softirq_ctrl) = {
>>  	.lock	= INIT_LOCAL_LOCK(softirq_ctrl.lock),
>>  };
>> +inline void set_do_softirq_pending(void)
>> +{
>> +	__this_cpu_inc(softirq_ctrl.cnt);
>> +}
>> +
>> +inline void clr_do_softirq_pending(void)

> There should be no inline here.

Ack. Will fix in the subsequent version if the alternate approach
doesn't work.
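
i.e., if this approach is still needed after testing the suggestion
above, the helpers would become plain definitions:

	void set_do_softirq_pending(void)
	{
		__this_cpu_inc(softirq_ctrl.cnt);
	}

	void clr_do_softirq_pending(void)
	{
		__this_cpu_dec(softirq_ctrl.cnt);
	}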


>> +{
>> +	__this_cpu_dec(softirq_ctrl.cnt);
>> +}
>> +
>>  static inline bool should_wake_ksoftirqd(void)
>>  {
>>  	return !this_cpu_read(softirq_ctrl.cnt);

> Sebastian
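
For anyone skimming the thread: the intent of the helpers above is to
bracket the flush of the SMP call function queue so that
should_wake_ksoftirqd() sees a non-zero softirq_ctrl.cnt and
raise_softirq() skips the wakeup. A rough sketch of the intended usage
(the actual call site is in the earlier patches of this series):

	unsigned int was_pending;

	/* Sketch only; not the exact hunk from this series. */
	was_pending = local_softirq_pending();
	set_do_softirq_pending();
	__flush_smp_call_function_queue(true);
	clr_do_softirq_pending();

	/* The promised handling of any softirqs raised above: */
	if (local_softirq_pending())
		do_softirq_post_smp_call_flush(was_pending);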

--
Thanks and Regards,
Prateek