Re: [PATCHv4 2/2] memory barrier: adding smp_mb__after_lock

From: Eric Dumazet
Date: Thu Jul 02 2009 - 02:55:19 EST


Jiri Olsa wrote:
> Adding smp_mb__after_lock define to be used as a smp_mb call after
> a lock.
>
> Making it nop for x86, since {read|write|spin}_lock() on x86 are
> full memory barriers.
>
> wbr,
> jirka
>
>
> Signed-off-by: Jiri Olsa <jolsa@xxxxxxxxxx>


Maybe we should note that sk_has_sleeper() is always called
right after a call to read_lock(), as in:

read_lock(&sk->sk_callback_lock);
if (sk_has_sleeper(sk))
        wake_up_interruptible_all(sk->sk_sleep);

Signed-off-by: Eric Dumazet <eric.dumazet@xxxxxxxxx>

Thanks Jiri

>
> ---
> arch/x86/include/asm/spinlock.h | 3 +++
> include/linux/spinlock.h | 5 +++++
> include/net/sock.h | 2 +-
> 3 files changed, 9 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> index b7e5db8..39ecc5f 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -302,4 +302,7 @@ static inline void __raw_write_unlock(raw_rwlock_t *rw)
> #define _raw_read_relax(lock) cpu_relax()
> #define _raw_write_relax(lock) cpu_relax()
>
> +/* The {read|write|spin}_lock() on x86 are full memory barriers. */
> +#define smp_mb__after_lock() do { } while (0)
> +
> #endif /* _ASM_X86_SPINLOCK_H */
> diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> index 252b245..ae053bd 100644
> --- a/include/linux/spinlock.h
> +++ b/include/linux/spinlock.h
> @@ -132,6 +132,11 @@ do { \
> #endif /*__raw_spin_is_contended*/
> #endif
>
> +/* The lock does not imply full memory barrier. */
> +#ifndef smp_mb__after_lock
> +#define smp_mb__after_lock() smp_mb()
> +#endif
> +
> /**
> * spin_unlock_wait - wait until the spinlock gets unlocked
> * @lock: the spinlock in question.
> diff --git a/include/net/sock.h b/include/net/sock.h
> index 4eb8409..b3e96a4 100644
> --- a/include/net/sock.h
> +++ b/include/net/sock.h
> @@ -1280,7 +1280,7 @@ static inline int sk_has_sleeper(struct sock *sk)
> *
> * This memory barrier is paired in the sock_poll_wait.
> */
> - smp_mb();
> + smp_mb__after_lock();
> return sk->sk_sleep && waitqueue_active(sk->sk_sleep);
> }
>
> --
