Re: [PATCH] locking/atomics: don't alias ____ptr
From: Thomas Gleixner
Date: Wed Jun 28 2017 - 09:21:11 EST
On Wed, 28 Jun 2017, Sebastian Andrzej Siewior wrote:
> On 2017-06-28 14:15:18 [+0300], Andrey Ryabinin wrote:
> > The main problem here is that arch_cmpxchg64_local() calls cmpxchg_local() instead of using arch_cmpxchg_local().
> >
> > So, the patch below should fix the problem, also this will fix double instrumentation of cmpxchg64[_local]().
> > But I haven't tested this patch yet.
>
> tested, works. Next step?
Check all other implementations in every architecture whether there is a
similar problem .....
But this really wants a proper cleanup unless we want to waste the time
over and over again with the next hard-to-figure-out macro expansion fail.
First of all, cmpxchg64[_local]() can be implemented as inlines right away.
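To illustrate the point, a minimal userspace sketch of what such an inline
could look like. sketch_cmpxchg64() is a hypothetical name, and the GCC
__atomic builtin merely stands in for the real arch_cmpxchg64(); in the
kernel the instrumentation hook would sit where the comment is:

```c
#include <stdint.h>

/* Sketch: cmpxchg64() as a proper inline instead of a macro.
 * __atomic_compare_exchange_n() models arch_cmpxchg64() here;
 * the real kernel version would call the arch primitive. */
static inline uint64_t sketch_cmpxchg64(uint64_t *ptr, uint64_t old,
					uint64_t new)
{
	/* kasan_check(ptr, sizeof(*ptr)); -- instrumentation goes here,
	 * exactly once, with no macro aliasing of ptr. */
	__atomic_compare_exchange_n(ptr, &old, new, 0,
				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
	/* On success 'old' is untouched, on failure it holds the observed
	 * value; either way it is the previous *ptr, i.e. cmpxchg semantics. */
	return old;
}
```

Because it is a real function, the argument is evaluated once and the type
is checked by the compiler, which is exactly what the macro maze keeps
getting wrong.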
For cmpxchg*(), the situation is slightly different, but the sizeof()
evaluation should be done at the topmost level, even if we do it once more
further down in the low-level arch/asm-generic implementation.
Something along the lines of:
static inline unsigned long cmpxchg_varsize(void *ptr, unsigned long old,
					    unsigned long new, int size)
{
	switch (size) {
	case 1:
	case 2:
	case 4:
		break;
	case 8:
		if (sizeof(unsigned long) == 8)
			break;
	default:
		BUILD_BUG_ON(1);
	}
	kasan_check(ptr, size);
	return arch_cmpxchg(ptr, old, new);
}

#define cmpxchg(ptr, o, n)						\
({									\
	((__typeof__(*(ptr)))cmpxchg_varsize((ptr), (unsigned long)(o),	\
				(unsigned long)(n), sizeof(*(ptr))));	\
})
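A compile-and-run userspace model of the same scheme, for anyone who wants
to poke at it: model_cmpxchg_varsize() and mock_arch_cmpxchg() are made-up
names, the mock is non-atomic, and a runtime assert() stands in for
BUILD_BUG_ON() since the compile-time trick needs the kernel's headers:

```c
#include <assert.h>

/* Non-atomic stand-in for arch_cmpxchg(), already taking 'size' as the
 * last paragraph suggests. Illustration only, not safe for real use. */
static inline unsigned long mock_arch_cmpxchg(void *ptr, unsigned long old,
					      unsigned long new, int size)
{
	switch (size) {
	case 1: {
		unsigned char *p = ptr, cur = *p;
		if (cur == (unsigned char)old)
			*p = (unsigned char)new;
		return cur;
	}
	case 2: {
		unsigned short *p = ptr, cur = *p;
		if (cur == (unsigned short)old)
			*p = (unsigned short)new;
		return cur;
	}
	case 4: {
		unsigned int *p = ptr, cur = *p;
		if (cur == (unsigned int)old)
			*p = (unsigned int)new;
		return cur;
	}
	default: {
		unsigned long *p = ptr, cur = *p;
		if (cur == old)
			*p = new;
		return cur;
	}
	}
}

static inline unsigned long model_cmpxchg_varsize(void *ptr, unsigned long old,
						  unsigned long new, int size)
{
	/* assert() models BUILD_BUG_ON(): only 1/2/4-byte objects, plus
	 * 8-byte ones on 64-bit, may reach the arch primitive. */
	assert(size == 1 || size == 2 || size == 4 ||
	       (size == 8 && sizeof(unsigned long) == 8));
	return mock_arch_cmpxchg(ptr, old, new, size);
}

#define model_cmpxchg(ptr, o, n)					\
({									\
	((__typeof__(*(ptr)))model_cmpxchg_varsize((ptr),		\
			(unsigned long)(o), (unsigned long)(n),		\
			sizeof(*(ptr))));				\
})
```

Note that sizeof(*(ptr)) is computed exactly once, at the top, and the
inline body never re-expands 'ptr', so the ____ptr aliasing problem cannot
reappear.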
That's the first step to cure the actual mess.
Ideally we get rid of that whole macro maze and convert everything to
proper inlines with actual cmpxchg8/16/32/64() variants, but that's going
to take some time. As an intermediate step we can at least propagate 'size'
to arch_cmpxchg(), which is not that much of an effort.
Thanks,
tglx