Re: [PATCH 2/3] asm-generic, x86: wrap atomic operations
From: Dmitry Vyukov
Date: Tue Mar 28 2017 - 05:57:12 EST
On Tue, Mar 28, 2017 at 11:51 AM, Ingo Molnar <mingo@xxxxxxxxxx> wrote:
>
> * Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
>> On Tue, Mar 28, 2017 at 09:52:32AM +0200, Ingo Molnar wrote:
>>
>> > No, regular C code.
>> >
>> > I don't see the point of generating all this code via CPP - it's certainly not
>> > making it more readable to me. I.e. this patch I commented on is a step backwards
>> > for readability.
>>
>> Note that much of the atomic stuff we have today is all CPP already.
>
> Yeah, but there it's implementational: we pick up arch primitives depending on
> whether they are defined, such as:
>
> #ifndef atomic_read_acquire
> # define atomic_read_acquire(v) smp_load_acquire(&(v)->counter)
> #endif
>
>> x86 is the exception because it's 'weird', but most other archs are
>> almost pure CPP -- check Alpha for example, or asm-generic/atomic.h.
>
> include/asm-generic/atomic.h looks pretty clean and readable overall.
>
>> Also, look at linux/atomic.h, it's a giant maze of CPP.
>
> Nah, that's OK, much of it is essentially __weak inlines implemented via CPP -
> i.e. CPP is filling in a missing compiler feature.
>
> But this patch I replied to appears to add instrumentation wrappery via CPP, which
> looks like excessive and avoidable obfuscation to me.
>
> If it's much more readable and much more compact than the C version then maybe,
> but I'd like to see the C version first and see ...
>
>> The CPP helps us generate functions, reduces endless copy/paste (which induces
>> random differences -- read: bugs) and constructs variants depending on the
>> architecture input.
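>>
>> For instance, asm-generic/atomic.h builds its operations from a template
>> roughly like this (a sketch of the UP variant, from memory, so names and
>> details may differ slightly):
>>
>> #define ATOMIC_OP(op, c_op)                                    \
>> static inline void atomic_##op(int i, atomic_t *v)             \
>> {                                                              \
>>         unsigned long flags;                                   \
>>                                                                \
>>         raw_local_irq_save(flags);                             \
>>         v->counter = v->counter c_op i;                        \
>>         raw_local_irq_restore(flags);                          \
>> }
>>
>> ATOMIC_OP(add, +)
>> ATOMIC_OP(sub, -)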
>>
>> Yes, the CPP is a pain, but writing all that out explicitly is more of a
>> pain.
>
> So I'm not convinced that it's true in this case.
>
> Could we see the C version and compare? I could be wrong about it all.
Here it is (without instrumentation):
https://gist.github.com/dvyukov/e33d580f701019e0cd99429054ff1f9a
Instrumentation will then add a check to each function, e.g.:
 static __always_inline void atomic64_set(atomic64_t *v, long long i)
 {
+        kasan_check_write(v, sizeof(*v));
         arch_atomic64_set(v, i);
 }
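
Read operations would get the matching check, e.g. (same scheme, using
kasan_check_read() from <linux/kasan-checks.h>):

 static __always_inline long long atomic64_read(const atomic64_t *v)
 {
+        kasan_check_read(v, sizeof(*v));
         return arch_atomic64_read(v);
 }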