Re: arm64 tools build failure wrt smp_load_{acquire,release} expansion on gcc version 5.4.0 20160609 (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.9)
From: Daniel Borkmann
Date: Wed Oct 31 2018 - 14:08:37 EST
On 10/31/2018 06:44 PM, Will Deacon wrote:
> Hi Arnaldo,
>
> On Wed, Oct 31, 2018 at 12:45:50PM -0300, Arnaldo Carvalho de Melo wrote:
>> So I noticed the following build failure that points to:
>>
>> commit 09d62154f61316f7e97eae3f31ef8770c7e4b386
>> Author: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
>> Date: Fri Oct 19 15:51:02 2018 +0200
>>
>> tools, perf: add and use optimized ring_buffer_{read_head, write_tail} helpers
>>
>> -------------------------
>>
>> 50 ubuntu:16.04-x-arm64 : FAIL aarch64-linux-gnu-gcc (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
>>
>> Works well with:
>>
>> 59 ubuntu:18.04-x-arm64 : Ok aarch64-linux-gnu-gcc (Ubuntu/Linaro 7.3.0-27ubuntu1~18.04) 7.3.0
>>
>> And all the other environments I test build :-)
Urgh, sorry about that; I tested the patch with arm64 on Ubuntu 18.04
back then, but not 16.04. :(
> Cheers for reporting this. I managed to reproduce the build failure with
> gcc version 6.3.0 20170516 (Debian 6.3.0-18+deb9u1).
>
> The code in question is the arm64 versions of smp_load_acquire() and
> smp_store_release(). Unlike other architectures, these are not built
> around READ_ONCE() and WRITE_ONCE() since we have instructions we can
> use instead of fences. Bringing our macros up to date with those (i.e.
> tweaking the union initialisation and using the special "__uXX_alias_t"
> types) appears to fix the issue for me.
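[ For readers following along: the __uXX_alias_t types used in the diff
come from tools/include/linux/compiler.h. If memory serves, they are
plain may_alias typedefs, roughly:

	/*
	 * Sketch of tools/include/linux/compiler.h; check your tree for
	 * the exact definitions. The __may_alias__ attribute lets these
	 * integer types alias objects of any other type, which is what
	 * makes the (__uXX_alias_t *)__u.__c casts through the union's
	 * char buffer well-defined.
	 */
	typedef __u8  __attribute__((__may_alias__))  __u8_alias_t;
	typedef __u16 __attribute__((__may_alias__)) __u16_alias_t;
	typedef __u32 __attribute__((__may_alias__)) __u32_alias_t;
	typedef __u64 __attribute__((__may_alias__)) __u64_alias_t;
]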
>
> Diff below...
I just gave this a spin on 16.04 and it resolves the build issue; thanks
for the quick fix, Will!
Tested-by: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
> Will
>
> --->8
>
> diff --git a/tools/arch/arm64/include/asm/barrier.h b/tools/arch/arm64/include/asm/barrier.h
> index 12835ea0e417..378c051fa177 100644
> --- a/tools/arch/arm64/include/asm/barrier.h
> +++ b/tools/arch/arm64/include/asm/barrier.h
> @@ -14,74 +14,75 @@
> #define wmb() asm volatile("dmb ishst" ::: "memory")
> #define rmb() asm volatile("dmb ishld" ::: "memory")
>
> -#define smp_store_release(p, v) \
> -do { \
> - union { typeof(*p) __val; char __c[1]; } __u = \
> - { .__val = (__force typeof(*p)) (v) }; \
> - \
> - switch (sizeof(*p)) { \
> - case 1: \
> - asm volatile ("stlrb %w1, %0" \
> - : "=Q" (*p) \
> - : "r" (*(__u8 *)__u.__c) \
> - : "memory"); \
> - break; \
> - case 2: \
> - asm volatile ("stlrh %w1, %0" \
> - : "=Q" (*p) \
> - : "r" (*(__u16 *)__u.__c) \
> - : "memory"); \
> - break; \
> - case 4: \
> - asm volatile ("stlr %w1, %0" \
> - : "=Q" (*p) \
> - : "r" (*(__u32 *)__u.__c) \
> - : "memory"); \
> - break; \
> - case 8: \
> - asm volatile ("stlr %1, %0" \
> - : "=Q" (*p) \
> - : "r" (*(__u64 *)__u.__c) \
> - : "memory"); \
> - break; \
> - default: \
> - /* Only to shut up gcc ... */ \
> - mb(); \
> - break; \
> - } \
> +#define smp_store_release(p, v) \
> +do { \
> + union { typeof(*p) __val; char __c[1]; } __u = \
> + { .__val = (v) }; \
> + \
> + switch (sizeof(*p)) { \
> + case 1: \
> + asm volatile ("stlrb %w1, %0" \
> + : "=Q" (*p) \
> + : "r" (*(__u8_alias_t *)__u.__c) \
> + : "memory"); \
> + break; \
> + case 2: \
> + asm volatile ("stlrh %w1, %0" \
> + : "=Q" (*p) \
> + : "r" (*(__u16_alias_t *)__u.__c) \
> + : "memory"); \
> + break; \
> + case 4: \
> + asm volatile ("stlr %w1, %0" \
> + : "=Q" (*p) \
> + : "r" (*(__u32_alias_t *)__u.__c) \
> + : "memory"); \
> + break; \
> + case 8: \
> + asm volatile ("stlr %1, %0" \
> + : "=Q" (*p) \
> + : "r" (*(__u64_alias_t *)__u.__c) \
> + : "memory"); \
> + break; \
> + default: \
> + /* Only to shut up gcc ... */ \
> + mb(); \
> + break; \
> + } \
> } while (0)
>
> -#define smp_load_acquire(p) \
> -({ \
> - union { typeof(*p) __val; char __c[1]; } __u; \
> - \
> - switch (sizeof(*p)) { \
> - case 1: \
> - asm volatile ("ldarb %w0, %1" \
> - : "=r" (*(__u8 *)__u.__c) \
> - : "Q" (*p) : "memory"); \
> - break; \
> - case 2: \
> - asm volatile ("ldarh %w0, %1" \
> - : "=r" (*(__u16 *)__u.__c) \
> - : "Q" (*p) : "memory"); \
> - break; \
> - case 4: \
> - asm volatile ("ldar %w0, %1" \
> - : "=r" (*(__u32 *)__u.__c) \
> - : "Q" (*p) : "memory"); \
> - break; \
> - case 8: \
> - asm volatile ("ldar %0, %1" \
> - : "=r" (*(__u64 *)__u.__c) \
> - : "Q" (*p) : "memory"); \
> - break; \
> - default: \
> - /* Only to shut up gcc ... */ \
> - mb(); \
> - break; \
> - } \
> - __u.__val; \
> +#define smp_load_acquire(p) \
> +({ \
> + union { typeof(*p) __val; char __c[1]; } __u = \
> + { .__c = { 0 } }; \
> + \
> + switch (sizeof(*p)) { \
> + case 1: \
> + asm volatile ("ldarb %w0, %1" \
> + : "=r" (*(__u8_alias_t *)__u.__c) \
> + : "Q" (*p) : "memory"); \
> + break; \
> + case 2: \
> + asm volatile ("ldarh %w0, %1" \
> + : "=r" (*(__u16_alias_t *)__u.__c) \
> + : "Q" (*p) : "memory"); \
> + break; \
> + case 4: \
> + asm volatile ("ldar %w0, %1" \
> + : "=r" (*(__u32_alias_t *)__u.__c) \
> + : "Q" (*p) : "memory"); \
> + break; \
> + case 8: \
> + asm volatile ("ldar %0, %1" \
> + : "=r" (*(__u64_alias_t *)__u.__c) \
> + : "Q" (*p) : "memory"); \
> + break; \
> + default: \
> + /* Only to shut up gcc ... */ \
> + mb(); \
> + break; \
> + } \
> + __u.__val; \
> })
>
> #endif /* _TOOLS_LINUX_ASM_AARCH64_BARRIER_H */
>
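
For completeness, the helpers from the offending commit that actually
exercise these macros live in tools/include/linux/ring_buffer.h; modulo
the exact fallback ifdefs, the optimized path boils down to something
like this:

	/*
	 * Sketch of the optimized path added by commit 09d62154f613
	 * ("tools, perf: add and use optimized ring_buffer_{read_head,
	 * write_tail} helpers"); see the commit for the full version,
	 * including the generic READ_ONCE()/rmb() fallback used on
	 * architectures without these barrier definitions.
	 */
	static inline u64 ring_buffer_read_head(struct perf_event_mmap_page *base)
	{
		/*
		 * Acquire pairs with the kernel's release store to
		 * data_head: everything the kernel published before
		 * updating the head is visible once we observe the
		 * new head value.
		 */
		return smp_load_acquire(&base->data_head);
	}

	static inline void ring_buffer_write_tail(struct perf_event_mmap_page *base,
						  u64 tail)
	{
		/*
		 * Release orders all our reads of ring buffer data
		 * before the tail update becomes visible to the kernel,
		 * so the kernel cannot overwrite records we have not
		 * consumed yet.
		 */
		smp_store_release(&base->data_tail, tail);
	}

So on arm64 a perf-tool ring buffer poll compiles down to a single
ldar/stlr pair instead of full dmb fences, which is what the commit was
after in the first place.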