Re: call_function_many: fix list delete vs add race

From: Mike Galbraith
Date: Mon Jan 31 2011 - 02:21:42 EST

On Fri, 2011-01-28 at 18:20 -0600, Milton Miller wrote:
> Peter pointed out there was nothing preventing the list_del_rcu in
> smp_call_function_interrupt from running before the list_add_rcu in
> smp_call_function_many. Fix this by not setting refs until we have put
> the entry on the list. We can use the lock acquire and release instead
> of a wmb.

Wondering if a final sanity check makes sense. I've got a perma-spin
bug where what the comment below describes apparently happened: another
CPU diddling the mask may make this CPU do horrible things to itself as
it's setting up to IPI others with that mask.

kernel/smp.c | 3 +++
1 file changed, 3 insertions(+)

Index: linux-2.6.38.git/kernel/smp.c
--- linux-2.6.38.git.orig/kernel/smp.c
+++ linux-2.6.38.git/kernel/smp.c
@@ -490,6 +490,9 @@ void smp_call_function_many(const struct
 	cpumask_and(data->cpumask, mask, cpu_online_mask);
 	cpumask_clear_cpu(this_cpu, data->cpumask);
 
+	/* Did you pass me a mask that can be changed/emptied under me? */
+	BUG_ON(cpumask_empty(data->cpumask));
+
 	/*
 	 * We reuse the call function data without waiting for any grace
 	 * period after some other cpu removes it from the global queue.
