[PATCH 05/10] smp: Fast path check on IPI list

From: Frederic Weisbecker
Date: Fri Jul 18 2014 - 20:47:28 EST


When we enqueue a remote irq work, we trigger the same IPI as the one
raised by the smp_call_function_*() family.

So when we receive such an IPI, we check both the irq_work and the
smp_call_function queues. Thus if we trigger a remote irq work, we'll
likely find the smp_call_function queue empty, unless we collide with
concurrent enqueuers, but the probability of that is low.

Meanwhile, checking the smp_call_function queue can be costly because
we use llist_del_all(), which relies on cmpxchg().
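
For illustration, here is a stand-alone user-space sketch with C11 atomics
(this is not the kernel's <linux/llist.h>, just a model of the two operations
discussed above): llist_empty() boils down to a plain load of the head
pointer, while llist_del_all() needs an atomic read-modify-write on it, which
is the cost we want to skip when the list is empty.

/* User-space sketch only: mirrors the shape of the kernel's llist API. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct llist_node {
	struct llist_node *next;
};

struct llist_head {
	_Atomic(struct llist_node *) first;
};

/* Cheap: a single plain load, no atomic read-modify-write, no barrier. */
static bool llist_empty(struct llist_head *head)
{
	return atomic_load_explicit(&head->first, memory_order_relaxed) == NULL;
}

/* Enqueue with a compare-and-swap loop, like the kernel's llist_add(). */
static bool llist_add(struct llist_node *node, struct llist_head *head)
{
	struct llist_node *first = atomic_load(&head->first);

	do {
		node->next = first;
	} while (!atomic_compare_exchange_weak(&head->first, &first, node));

	return first == NULL;	/* true if the list was previously empty */
}

/* Detach the whole list: one atomic read-modify-write on the shared head. */
static struct llist_node *llist_del_all(struct llist_head *head)
{
	return atomic_exchange(&head->first, NULL);
}

int main(void)
{
	struct llist_head queue = { NULL };
	struct llist_node csd;

	printf("empty before add: %d\n", llist_empty(&queue));
	llist_add(&csd, &queue);
	printf("empty after add:  %d\n", llist_empty(&queue));
	printf("del_all got csd:  %d\n", llist_del_all(&queue) == &csd);
	return 0;
}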

We can reduce this overhead by doing a fast path check with llist_empty().
Given the implicit IPI ordering:

    Enqueuer                     Dequeuer
    ---------                    --------
    llist_add(csd, queue)        get_IPI() {
    send_IPI()                       if (llist_empty(queue)
                                     ...

When the IPI is sent, we are guaranteed that the IPI receiver will
see the new csd.
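
As a rough analogy of that guarantee (not kernel code: pthreads stand in for
the two CPUs, an atomic release/acquire flag stands in for the IPI, and a
single pointer stands in for the queue), once the receiver has observed the
"IPI", the enqueue that preceded it is visible too, so an empty check on the
receiving side cannot miss it:

/* Thread-based model of the enqueuer/dequeuer ordering; not kernel code. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static _Atomic(int *) queue;	/* stands in for call_single_queue */
static atomic_bool ipi_pending;	/* stands in for the IPI being raised */
static int csd = 42;		/* stands in for the queued csd */

static void *enqueuer(void *arg)
{
	(void)arg;
	/* "llist_add(csd, queue)" */
	atomic_store_explicit(&queue, &csd, memory_order_relaxed);
	/* "send_IPI()": the release makes the enqueue visible to the acquirer */
	atomic_store_explicit(&ipi_pending, true, memory_order_release);
	return NULL;
}

static void *dequeuer(void *arg)
{
	int *node;

	(void)arg;
	/* "get_IPI()": the acquire pairs with the sender's release */
	while (!atomic_load_explicit(&ipi_pending, memory_order_acquire))
		;
	/* The llist_empty()-style fast check cannot miss the queued csd */
	node = atomic_load_explicit(&queue, memory_order_relaxed);
	if (node)
		printf("queue non-empty, csd payload = %d\n", *node);
	else
		printf("queue empty, fast path taken\n");	/* never reached */
	return NULL;
}

int main(void)
{
	pthread_t sender, receiver;

	pthread_create(&receiver, NULL, dequeuer, NULL);
	pthread_create(&sender, NULL, enqueuer, NULL);
	pthread_join(sender, NULL);
	pthread_join(receiver, NULL);
	return 0;
}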

So let's do the fast path check to optimize jobs that are not related to
smp_call_function().

Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Viresh Kumar <viresh.kumar@xxxxxxxxxx>
Signed-off-by: Frederic Weisbecker <fweisbec@xxxxxxxxx>
---
kernel/smp.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index a1812d1..34378d4 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -184,11 +184,19 @@ static int generic_exec_single(int cpu, struct call_single_data *csd,
  */
 void generic_smp_call_function_single_interrupt(void)
 {
+	struct llist_head *head = &__get_cpu_var(call_single_queue);
 	struct llist_node *entry;
 	struct call_single_data *csd, *csd_next;
 	static bool warned;
 
-	entry = llist_del_all(&__get_cpu_var(call_single_queue));
+	/*
+	 * Fast check: in case of irq work remote queue, the IPI list
+	 * is likely empty. We can spare the expensive llist_del_all().
+	 */
+	if (llist_empty(head))
+		goto irq_work;
+
+	entry = llist_del_all(head);
 	entry = llist_reverse_order(entry);
 
 	/*
@@ -212,6 +220,7 @@ void generic_smp_call_function_single_interrupt(void)
 		csd_unlock(csd);
 	}
 
+irq_work:
 	/*
 	 * Handle irq works queued remotely by irq_work_queue_on().
 	 * Smp functions above are typically synchronous so they
--
1.8.3.1
