Re: [RFC PATCH 6/8] READ_ONCE: Drop pointer qualifiers when reading from scalar types
From: Will Deacon
Date: Mon Jan 13 2020 - 10:00:03 EST
On Fri, Jan 10, 2020 at 10:54:27AM -0800, Linus Torvalds wrote:
> On Fri, Jan 10, 2020 at 8:56 AM Will Deacon <will@xxxxxxxxxx> wrote:
> >
> > +/* Declare an unqualified scalar type. Leaves non-scalar types unchanged. */
> > +#define __unqual_scalar_typeof(x) typeof( \
>
> Ugh. My eyes. That's horrendous.
>
> I can't see any better alternatives, but it does make me go "Eww".
I can't disagree with that, but the only option we've come up with so far
that solves this in the READ_ONCE() macro itself is the thing from PeterZ:
// Insert big fat comment here
#define unqual_typeof(x) typeof(({_Atomic typeof(x) ___x __maybe_unused; ___x; }))
That apparently *requires* GCC 4.8, but I think the question is more about
whether it's easier to stomach the funny use of _Atomic or the nested
__builtin_choose_expr() I have here. I'm also worried about how reliable
the _Atomic thing is, or whether it's just an artifact of how GCC happens
to work today.
> Well, I do see one possible alternative: just re-write the bitop
> implementations in terms of "atomic_long_t", and just avoid the issue
> entirely.
>
> IOW, do something like the attached (but fleshed out and tested - this
> patch has not seen a compiler, much less any thought at all).
The big downside of this approach compared with the proposal here is that, as
long as we've got volatile-qualified pointer arguments describing shared
memory, I fear we'll be playing a constant game of whack-a-mole adding
non-volatile casts like the one you add below. The same problem manifests for
the acquire/release accessors, which is why having something like
__unqual_typeof() would be beneficial: at least the awfulness is contained in
one place.
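To make that concrete: with the qualifier-stripping typeof, the temporary
inside READ_ONCE() (and the acquire/release accessors) can be declared with
the unqualified type, so callers can keep passing volatile-qualified pointers
without any casts at the call site. Roughly this shape (a sketch of the idea,
not the literal patch; test_bit_sketch() is just an illustrative caller using
the usual BIT_WORD()/BIT_MASK() helpers):

/*
 * The temporary is declared with the unqualified type, but the access
 * itself is still performed through a (const) volatile pointer.
 */
#define READ_ONCE_SKETCH(x)						\
({									\
	__unqual_scalar_typeof(x) __x =					\
		*(const volatile __unqual_scalar_typeof(x) *)&(x);	\
	__x;								\
})

static inline int test_bit_sketch(unsigned int nr,
				  const volatile unsigned long *p)
{
	/* No (atomic_long_t *) or non-volatile cast needed at the call site. */
	return !!(READ_ONCE_SKETCH(p[BIT_WORD(nr)]) & BIT_MASK(nr));
}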
So I suppose my question is: how ill does this code really make you feel?
The disassembly is really nice!
Will
> include/asm-generic/bitops/lock.h | 12 +++++++++---
> 1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h
> index 3ae021368f48..071d8bfd86e5 100644
> --- a/include/asm-generic/bitops/lock.h
> +++ b/include/asm-generic/bitops/lock.h
> @@ -6,6 +6,12 @@
> #include <linux/compiler.h>
> #include <asm/barrier.h>
>
> +/* Drop the volatile, we will be doing READ_ONCE by hand */
> +static inline atomic_long_t *atomic_long_bit_word(unsigned int nr, volatile unsigned long *p)
> +{
> + return BIT_WORD(nr) + (atomic_long_t *)p;
> +}
> +
> /**
> * test_and_set_bit_lock - Set a bit and return its old value, for lock
> * @nr: Bit to set
> @@ -20,12 +26,12 @@ static inline int test_and_set_bit_lock(unsigned int nr,
> {
> long old;
> unsigned long mask = BIT_MASK(nr);
> + atomic_long_t *loc = atomic_long_bit_word(nr, p);
>
> - p += BIT_WORD(nr);
> - if (READ_ONCE(*p) & mask)
> + if (atomic_read(loc) & mask)
> return 1;
>
> - old = atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p);
> + old = atomic_long_fetch_or_acquire(mask, loc);
> return !!(old & mask);
> }
>