[PATCH] latency improvement in __smp_call_single_queue
From: George Prekas
Date: Wed Sep 23 2020 - 11:00:56 EST
If an interrupt arrives between llist_add and
send_call_function_single_ipi in the following code snippet, then the
remote CPU will not receive the IPI in a timely manner and subsequent
SMP calls even from other CPUs for other functions will be delayed:
	if (llist_add(node, &per_cpu(call_single_queue, cpu)))
		send_call_function_single_ipi(cpu);
Note: llist_add returns true if the list was empty before the addition.
  CPU 0                          | CPU 1                          | CPU 2
  -------------------------------+--------------------------------+------
  __smp_call_single_queue(2, f1) | __smp_call_single_queue(2, f2) |
  llist_add returns 1            |                                |
  interrupted                    | llist_add returns 0            |
  ...                            | branch not taken               |
  ...                            |                                |
  resumed                        |                                |
  send_call_function_single_ipi  |                                |
                                 |                                | f1
                                 |                                | f2
The call from CPU 1 for function f2 is delayed until CPU 0 resumes and
sends the IPI, even though CPU 1 finished its llist_add long before.
Signed-off-by: George Prekas <prekageo@xxxxxxxxxx>
---
kernel/smp.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/smp.c b/kernel/smp.c
index aa17eedff5be..9dc679466cf0 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -135,6 +135,8 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(call_single_data_t, csd_data);
void __smp_call_single_queue(int cpu, struct llist_node *node)
{
+ unsigned long flags;
+
/*
* The list addition should be visible before sending the IPI
* handler locks the list to pull the entry off it because of
@@ -146,8 +148,10 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
* locking and barrier primitives. Generic code isn't really
* equipped to do the right thing...
*/
+ local_irq_save(flags);
if (llist_add(node, &per_cpu(call_single_queue, cpu)))
send_call_function_single_ipi(cpu);
+ local_irq_restore(flags);
}
/*
--
2.16.6