Re: [PATCH v3] skbuff: fix a data race in skb_queue_len()
From: Jason A. Donenfeld
Date: Thu Feb 06 2020 - 13:43:45 EST
On Thu, Feb 06, 2020 at 10:22:02AM -0800, Eric Dumazet wrote:
> On 2/6/20 10:12 AM, Jason A. Donenfeld wrote:
> > On Thu, Feb 6, 2020 at 6:10 PM Eric Dumazet <eric.dumazet@xxxxxxxxx> wrote:
> >> Unfortunately we do not have ADD_ONCE() or something like that.
> >
> > I guess normally this is called "atomic_add", unless you're thinking
> > instead about something like this, which generates the same
> > inefficient code as WRITE_ONCE:
> >
> > #define ADD_ONCE(d, s) *(volatile typeof(d) *)&(d) += (s)
> >
>
> Dmitry Vyukov had a nice suggestion a few months back on how to implement this.
>
> https://lkml.org/lkml/2019/10/5/6
That trick appears to work well with clang but not with gcc:
#define ADD_ONCE(d, i) ({ \
	typeof(d) *__p = &(d); \
	__atomic_store_n(__p, (i) + __atomic_load_n(__p, __ATOMIC_RELAXED), __ATOMIC_RELAXED); \
})
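
The listings below correspond to a small test function roughly like
this (the exact harness isn't shown here; the padding is just to put
the field at offset 0x10, matching the disassembly):

/* Hypothetical harness -- not from the original mail. The padding
 * places qlen at offset 0x10, as seen in the listings below. */
struct queue {
	int pad[4];
	int qlen;
};

void dec_qlen(struct queue *q)
{
	ADD_ONCE(q->qlen, -1);
}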
gcc 9.2 gives:
 0:	8b 47 10             	mov    0x10(%rdi),%eax
 3:	83 e8 01             	sub    $0x1,%eax
 6:	89 47 10             	mov    %eax,0x10(%rdi)
clang 9.0.1 gives:
 0:	81 47 10 ff ff ff ff 	addl   $0xffffffff,0x10(%rdi)
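
Something along these lines should reproduce the listings (the exact
flags are a guess):

	gcc-9 -O2 -c add_once.c && objdump -d add_once.o
	clang-9 -O2 -c add_once.c && objdump -d add_once.o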
But actually, clang does equally well with:
#define ADD_ONCE(d, i) *(volatile typeof(d) *)&(d) += (i)
And testing further back in the thread, clang generates the same code
with your original WRITE_ONCE.
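
For comparison, the WRITE_ONCE variant would be roughly this (a
sketch, assuming the kernel's WRITE_ONCE from <linux/compiler.h> and
the same test struct as above):

void dec_qlen_write_once(struct queue *q)
{
	WRITE_ONCE(q->qlen, q->qlen - 1);
}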
If clang's optimization here is technically correct, maybe we should go
talk to the gcc people about catching this case?