Re: [PATCH -v3] use per cpu data for single cpu ipi calls

From: Peter Zijlstra
Date: Fri Jan 30 2009 - 07:49:22 EST


On Fri, 2009-01-30 at 13:38 +0100, Jens Axboe wrote:

> > if (!wait) {
> > + /*
> > + * We are calling a function on a single CPU
> > + * and we are not going to wait for it to finish.
> > + * We first try to allocate the data, but if we
> > + * fail, we fall back to using per cpu data to pass
> > + * the information to that CPU. Since all callers
> > + * of this code will use the same data, we must
> > + * synchronize the callers to prevent a new caller
> > + * from corrupting the data before the callee
> > + * can access it.
> > + *
> > + * The CSD_FLAG_LOCK is used to let us know when
> > + * the IPI handler is done with the data.
> > + * The first caller will set it, and the callee
> > + * will clear it. The next caller must wait for
> > + * it to clear before we set it again. This
> > + * will make sure the callee is done with the
> > + * data before a new caller will use it.
> > + * We use spinlocks to manage the callers.
> > + */
>
> That last sentence appears stale now, since there's no locking involved
> for the per-cpu csd.
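
Roughly, the path that comment documents comes out like this (a sketch
only; acquire_csd is an illustrative helper, and csd_data and the flag
handling are from memory rather than kernel/smp.c verbatim):

	/* Sketch of the allocation/fallback path described in the
	 * comment above; not the literal kernel/smp.c code. */
	static struct call_single_data *acquire_csd(void)
	{
		struct call_single_data *data;

		data = kmalloc(sizeof(*data), GFP_ATOMIC);
		if (data) {
			/* Callee knows to kfree() this one when done. */
			data->flags = CSD_FLAG_ALLOC;
			return data;
		}

		/* Allocation failed: fall back to this CPU's static csd. */
		data = &__get_cpu_var(csd_data);

		/* Spin until the previous callee clears CSD_FLAG_LOCK ... */
		while (data->flags & CSD_FLAG_LOCK)
			cpu_relax();

		/* ... then claim it; the IPI handler clears the flag once
		 * it is done with the data, which is what makes reuse of
		 * the per cpu csd safe. */
		data->flags = CSD_FLAG_LOCK;
		return data;
	}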

Gah, I edited that, and generated the patch using

# git diff --stat -p 4f4b6c1a94a8735bbdc030a2911cf395495645b6.. kernel/smp.c | xclip

on Ingo's tip/master. However, it seems such a diff does not include
uncommitted changes, even though a regular git diff does:

# git diff kernel/smp.c
diff --git a/kernel/smp.c b/kernel/smp.c
index 9eead6c..bbedbb7 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -251,7 +251,6 @@ int smp_call_function_single(int cpu, void (*func) (void *info), void *info,
* it to clear before we set it again. This
* will make sure the callee is done with the
* data before a new caller will use it.
- * We use spinlocks to manage the callers.
*/
data = kmalloc(sizeof(*data), GFP_ATOMIC);
if (data)

What is the recommended way to do such a diff?
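
My guess, if I read the git-diff range syntax right, is to simply drop
the "..": "git diff A.." is shorthand for "git diff A..HEAD", a
commit-to-commit diff that never looks at the working tree, whereas
"git diff A" diffs commit A against the working tree and so picks up
uncommitted changes. I.e.:

# git diff --stat -p 4f4b6c1a94a8735bbdc030a2911cf395495645b6 kernel/smp.c | xclip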
