RE: [PATCH v5] x86: bitops: fix build regression

From: David Laight
Date: Sun May 10 2020 - 09:54:19 EST


From: Nick Desaulniers
> Sent: 08 May 2020 19:32
..
> It turns out that if your config tickles __builtin_constant_p via
> differences in choices to inline or not, these statements produce
> invalid assembly:
...
> diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
> index b392571c1f1d..35460fef39b8 100644
> --- a/arch/x86/include/asm/bitops.h
> +++ b/arch/x86/include/asm/bitops.h
> @@ -52,9 +52,9 @@ static __always_inline void
>  arch_set_bit(long nr, volatile unsigned long *addr)
>  {
>  	if (__builtin_constant_p(nr)) {
> -		asm volatile(LOCK_PREFIX "orb %1,%0"
> +		asm volatile(LOCK_PREFIX "orb %b1,%0"
>  			: CONST_MASK_ADDR(nr, addr)
> -			: "iq" (CONST_MASK(nr) & 0xff)
> +			: "iq" (CONST_MASK(nr))
>  			: "memory");

What happens if CONST_MASK() is changed to:
#define CONST_MASK_(n) (n == 0 ? 1 : n == 1 ? 2 : n ....)
#define CONST_MASK(n) CONST_MASK_(((n) & 7))

and a separate definition for the inverse mask.

The lack of arithmetic promotion may mean that only the "i"
constraint is needed.
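
Something along these lines, as a sketch of one possible expansion (the
exact spelling of the chain and the CONST_MASK_NOT name are only
illustrative):

	#define CONST_MASK_(n)	((n) == 0 ? 0x01 : (n) == 1 ? 0x02 : \
				 (n) == 2 ? 0x04 : (n) == 3 ? 0x08 : \
				 (n) == 4 ? 0x10 : (n) == 5 ? 0x20 : \
				 (n) == 6 ? 0x40 : 0x80)
	#define CONST_MASK(nr)		CONST_MASK_((nr) & 7)
	/* hypothetical separate definition for the inverse mask */
	#define CONST_MASK_NOT(nr)	(CONST_MASK(nr) ^ 0xff)

	/* quick self-checks: bit 11 lives in bit 3 of its byte */
	_Static_assert(CONST_MASK(11) == 0x08, "");
	_Static_assert(CONST_MASK_NOT(11) == 0xf7, "");

Every expansion is then a constant in 0..0xff, so the orb/andb immediate
already fits in a byte without any cast or "& 0xff".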

David
