Re: [patch 00/21] mutex subsystem, -V14
From: Linus Torvalds
Date: Thu Jan 05 2006 - 11:20:52 EST
On Thu, 5 Jan 2006, Ingo Molnar wrote:
>
> the patch below adds the barriers to the asm-generic mutex routines, so
> it's not like i'm lazy ;), but i really think this is unnecessary.
> Adding this patch would add a second, unnecessary barrier for all the
> arches that have barrier-less atomic ops.
>
> it also makes sense: the moment you are interested in the 'previous
> value' of the atomic counter in an atomic fashion, you very likely want
> to use it for a critical section. (e.g. all the put-the-resource ops
> that use atomic_dec_and_test() rely on this implicit barrier.)
Ok, fair enough. However, that still leaves the question of which way the
barrier works. Traditionally, we have only cared about one thing: that all
preceding writes have finished, because the "atomic_dec_return" thing is
used as a _reference_counter_, and we're going to release the thing.
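To make the refcounting case concrete, this is roughly the pattern a put
path relies on (a sketch with made-up names, not code from any real
driver):

	struct foo {
		atomic_t	refcount;
		int		status;
	};

	void foo_put(struct foo *f)
	{
		/* final write to the object before dropping our reference */
		f->status = 0;

		/*
		 * The store to ->status must be visible before the counter
		 * can be seen to hit zero, otherwise another CPU could free
		 * or reuse the object while our write is still in flight.
		 * That's why the ordering matters _before_ the atomic op in
		 * the refcounting case.
		 */
		if (atomic_dec_and_test(&f->refcount))
			kfree(f);
	}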
However, that's not the case in a mutex. A mutex locking operation works
exactly the other way around: it doesn't really care about the previous
writes at all, since those operations were unlocked. It cares about the
_subsequent_ writes, since those have to be seen by others as being in the
critical region, and never be seen as happening before the lock.
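Roughly (illustration only, with a toy atomic "lock" - not the actual
mutex code - just to show which reorderings each side has to forbid):

	void lock_example(atomic_t *count, int *protected)
	{
		/* lock: count starts at 1, negative means contended */
		if (atomic_dec_return(count) < 0)
			return;			/* slowpath elided */

		/*
		 * Accesses inside the critical section must not be
		 * reordered _above_ the decrement: they must only become
		 * visible after the lock is seen as taken.
		 */
		*protected = 1;

		/*
		 * And they must not be reordered _below_ the increment
		 * that releases the lock, or another CPU could take the
		 * lock and still see the old value of *protected.
		 */
		atomic_inc(count);
	}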
So I _think_ your argument is bogus, and your patch is bogus. The use of
"atomic_dec_return()" in a mutex is _not_ the same barrier as using it for
reference counting. Not at all. Memory barriers aren't just one thing:
they are semi-permeable, in either direction and for either kind of
access (reads or writes), which is why there are several different kinds
of them.
> #define __mutex_fastpath_lock(count, fail_fn)			\
> do {									\
> +	smp_mb__before_atomic_dec();					\
> 	if (unlikely(atomic_dec_return(count) < 0))			\
> 		fail_fn(count);						\
> } while (0)
So I think the barrier has to come _after_ the atomic decrement (or
exchange).
As it is written now, any writes in the locked region could percolate up
to just before the atomic decrement - i.e., _outside_ the region. That
defeats the whole point of a lock: it would allow another CPU to see the
write even before it sees that the lock was taken, as far as I can tell.
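IOW, if the asm-generic fallback wants an explicit barrier at all, I'd
expect it on the other side of the atomic op, something like this (sketch
only, not a tested patch):

	#define __mutex_fastpath_lock(count, fail_fn)			\
	do {								\
		if (unlikely(atomic_dec_return(count) < 0))		\
			fail_fn(count);					\
		/* critical section must stay below the decrement */	\
		smp_mb__after_atomic_dec();				\
	} while (0)

(Whether the barrier matters in the failure path is up to fail_fn, which
has to do its own ordering in the slowpath anyway.)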
But memory ordering is subtle, so maybe I'm missing something..
Linus