Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory barrier (v5)

From: Mathieu Desnoyers
Date: Thu Jan 21 2010 - 11:12:41 EST

* Peter Zijlstra (peterz@xxxxxxxxxxxxx) wrote:
> On Tue, 2010-01-19 at 20:06 +0100, Peter Zijlstra wrote:
> >
> > We could possibly look at placing that assignment in context_switch()
> > between switch_mm() and switch_to(), which should provide a mb before
> > and after I think, Ingo?
> Right, just found out why we cannot do that: the first thing
> context_switch() does is prepare_task_switch(), which includes
> prepare_lock_switch(), which on __ARCH_WANT_UNLOCKED_CTXSW machines
> drops the rq->lock, and we have to have rq->curr assigned by then.


One efficient way to meet the requirements of sys_membarrier() would be
to create spin_lock_mb()/spin_unlock_mb(), which would have full memory
barriers rather than acquire/release semantics. These could be used
within schedule() execution. On UP, they would turn into preempt
disable/enable and a compiler barrier, just like normal spinlocks.

On architectures like x86, the atomic instructions already imply a full
memory barrier, so we get a direct mapping with no added overhead. On
architectures where the spinlock only provides acquire semantics (e.g.
powerpc, using lwsync and isync), we would have to create an alternate
implementation using "sync".

We can even create a generic fallback with the following kind of code in
the meantime:

static inline void spin_lock_mb(spinlock_t *lock)
{
        spin_lock(lock);
        smp_mb();
}

static inline void spin_unlock_mb(spinlock_t *lock)
{
        smp_mb();
        spin_unlock(lock);
}

How does that sound?


Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68