Re: Buggy IPI and MTRR code on low memory

From: Steven Rostedt
Date: Wed Jan 28 2009 - 17:48:15 EST



On Wed, 28 Jan 2009, Andrew Morton wrote:
>
> So if we're going to use per-cpu data then we'd need to protect it with
> a lock. We could (should?) have a separate lock for each destination
> CPU.
>
> We could make smp_call_function_single() block until the IPI handler
> has consumed the call_single_data, in which case we might as well put
> the call_single_data, onto the caller's stack, as you've done.
>
> Or we could take the per-cpu spinlock in smp_call_function_single(),
> and release it in the IPI handler, after the call_single_data has been
> consumed, which is a bit more efficient. But I have a suspicion that
> this is AB/BA deadlockable.
>
> <tries to think of any scenarios>
>
> <fails>
>
> So we have
>
> smp_call_function_single(int cpu)
> {
>         spin_lock(per_cpu(cpu, locks));
>         per_cpu(cpu, call_single_data) = <stuff>
>         send_ipi(cpu);
>         return;
> }
>
> ipi_handler(...)
> {
>         int cpu = smp_processor_id();
>         call_single_data csd = per_cpu(cpu, call_single_data);
>
>         spin_unlock(per_cpu(cpu, locks));
>         use(csd);
> }
>
> does that work?

I don't think so. With ticket spinlocks and such, that just looks like it
is destined to crash. Also, spin_lock() disables preemption, and we would
need to enable it again somewhere; otherwise we have a dangling
preempt_disable.
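
Just to illustrate the preemption problem (pseudo code only, made-up
names, not meant to compile):

	smp_call_function_single(int cpu)
	{
		/* spin_lock() does preempt_disable() on the calling CPU */
		spin_lock(&per_cpu(csd_lock, cpu));
		per_cpu(csd_data, cpu) = <stuff>;
		send_ipi(cpu);
		return;	/* caller returns with preemption still disabled */
	}

	ipi_handler(...)
	{
		struct call_single_data data = __get_cpu_var(csd_data);

		/*
		 * spin_unlock() does preempt_enable() on the *target* CPU,
		 * whose preempt count was never raised by this lock.
		 */
		spin_unlock(&__get_cpu_var(csd_lock));
		use(data);
	}

The caller leaks a preempt_disable() and the target gets an unbalanced
preempt_enable().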

>
> Dunno if it's any better than what you have now. It does however
> remove the unpleasant "try kmalloc and if that failed, try something
> else" mess.

We could have a percpu area for the single-call data and still use the
lock. Keep the method of the IPI handler signaling to the caller that it
has copied the data to its stack. Then the caller could release the lock.

We can keep this only for the non-wait case. Most users set the wait bit,
so it would only slow down the non-waiters.

Of course, this only works for the single-CPU function callers. I think
the all-CPU function callers still need to alloc.
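
Roughly what I have in mind (completely untested sketch; the names, the
CSD_FLAG_COPIED bit and send_ipi() are made up, and barriers/lock init
are omitted):

	static DEFINE_PER_CPU(struct call_single_data, csd_data);
	static DEFINE_PER_CPU(spinlock_t, csd_lock);

	/* hypothetical flag: "the handler has copied the data" */
	#define CSD_FLAG_COPIED	0x04

	void smp_call_function_single_nowait(int cpu,
					     void (*func)(void *), void *info)
	{
		struct call_single_data *csd = &per_cpu(csd_data, cpu);

		spin_lock(&per_cpu(csd_lock, cpu));
		csd->func = func;
		csd->info = info;
		csd->flags = 0;

		send_ipi(cpu);		/* arch specific IPI */

		/* only wait until the handler has copied the data */
		while (!(csd->flags & CSD_FLAG_COPIED))
			cpu_relax();

		spin_unlock(&per_cpu(csd_lock, cpu));
	}

	void ipi_handler(void)
	{
		struct call_single_data *csd = &__get_cpu_var(csd_data);
		struct call_single_data data = *csd;	/* copy to our stack */

		csd->flags |= CSD_FLAG_COPIED;	/* lets the caller drop the lock */
		data.func(data.info);
	}

The lock and the data are percpu per destination, the lock is both taken
and released by the caller, and the caller only spins until the copy is
done, not until the function has run.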

-- Steve
