Re: [PATCH/RFC 6/7] arm64: Replace ACCESS_ONCE for spinlock code with barriers
From: Christian Borntraeger
Date: Mon Nov 24 2014 - 13:50:46 EST
On 24.11.2014 14:03, Christian Borntraeger wrote:
> ACCESS_ONCE does not work reliably on non-scalar types. For
> example, gcc 4.6 and 4.7 might drop the volatile qualifier for such
> accesses during the SRA (scalar replacement of aggregates) step
> (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145).
>
> Change the spinlock code to access the lock with a barrier.
>
> Signed-off-by: Christian Borntraeger <borntraeger@xxxxxxxxxx>
> ---
> arch/arm64/include/asm/spinlock.h | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
> index c45b7b1..f72dc64 100644
> --- a/arch/arm64/include/asm/spinlock.h
> +++ b/arch/arm64/include/asm/spinlock.h
> @@ -99,12 +99,15 @@ static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
>
> static inline int arch_spin_is_locked(arch_spinlock_t *lock)
> {
> - return !arch_spin_value_unlocked(ACCESS_ONCE(*lock));
> + arch_spinlock_t lockval = *lock;
> + barrier();
> + return !arch_spin_value_unlocked(lockval);
> }
>
> static inline int arch_spin_is_contended(arch_spinlock_t *lock)
> {
> - arch_spinlock_t lockval = ACCESS_ONCE(*lock);
> + arch_spinlock_t lockval = *lock;
> + barrier();
> return (lockval.next - lockval.owner) > 1;
> }
> #define arch_spin_is_contended arch_spin_is_contended
>
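For completeness: the barrier() variant works because the asm memory
clobber stops the compiler from caching the lock value across it, so a
spin loop re-reads the lock on every iteration even though the load
itself is not volatile. A minimal, untested illustration of the
pattern outside the arm64 headers (all names made up):

#define barrier() __asm__ __volatile__("" ::: "memory")

struct ticket { unsigned short owner, next; };

static inline int ticket_is_locked(struct ticket *lock)
{
	struct ticket val = *lock;	/* plain, non-volatile copy */

	barrier();	/* compiler must not reuse a cached *lock after this */
	return val.owner != val.next;	/* ticket lock held if next != owner */
}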
FWIW, we could also do this with ACCESS_ONCE, but that would require
changing the definition of arch_spinlock_t for arm64 to a union. I am
a bit reluctant to make that change without being able to test it.
Let me know if that approach is preferred and whether somebody else
can test.
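Roughly something like this (completely untested, little-endian field
order only - the big-endian case would swap owner/next as the current
struct does - and every other user of ->owner/->next would then need
the .tickets. prefix):

typedef union {
	u32 slock;
	struct {
		u16 owner;
		u16 next;
	} tickets;
} __aligned(4) arch_spinlock_t;

static inline int arch_spin_is_locked(arch_spinlock_t *lock)
{
	/* scalar load, so ACCESS_ONCE keeps its volatile cast */
	arch_spinlock_t lockval = { .slock = ACCESS_ONCE(lock->slock) };

	return !arch_spin_value_unlocked(lockval);
}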
Christian