Re: [PATCH 06/20] arch,avr32: Fold atomic_ops

From: Peter Zijlstra
Date: Fri May 09 2014 - 16:43:27 EST


On Fri, May 09, 2014 at 08:32:41PM +0200, Hans-Christian Egtvedt wrote:
> Around Thu 08 May 2014 15:58:46 +0200 or thereabout, Peter Zijlstra wrote:
> > Many of the atomic op implementations are the same except for one
> > instruction; fold the lot into a few CPP macros and reduce LoC.
>
> The add and sub atomic operations are not 100% the same. Sub has more
> constraints on the integer size than add: sub only takes a signed 21-bit
> integer, while add can do full 32-bit additions, if I recall the AVR32
> instructions correctly.
>
> This is why you see in atomic_sub_return() that i is constrained as "rKs21",
> while in atomic_add_return(), i is constrained as "r".
>
> Your change limits both atomic operations to work only on signed 21-bit
> integers.

Urgh, fail on me for not seeing that.
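
For my own reference, the relevant bit is roughly this (quoting
arch/avr32/include/asm/atomic.h from memory, so details may be off):

	static inline int atomic_sub_return(int i, atomic_t *v)
	{
		int result;

		asm volatile(
			"/* atomic_sub_return */\n"
			"1:	ssrf	5\n"
			"	ld.w	%0, %2\n"
			"	sub	%0, %3\n"
			"	stcond	%1, %0\n"
			"	brne	1b"
			: "=&r"(result), "=o"(v->counter)
			: "m"(v->counter), "rKs21"(i)	/* register or signed 21-bit immediate */
			: "cc", "memory");

		return result;
	}

with atomic_add_return() differing only in using "add %0, %3" and a plain
"r"(i) constraint, IIRC.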


> > - if (__builtin_constant_p(i) && (i >= -1048575) && (i <= 1048576))
> > - result = atomic_sub_return(-i, v);
>
> I do not recall any more why we did it like this; I would assume both sub
> and add to be single-cycle instructions.

Right, and if it's a constant the negate happens at compile time too.

OK, so if I only generate add and provide inline stubs that implement sub
with add, this should be good again, right?
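
Concretely, I'm thinking of something like this, so the "rKs21" restriction
never gets folded into the shared macro (just a sketch of the idea, not the
actual patch):

	/* Sketch: keep the generated asm for add only, build sub on top of it. */
	static inline void atomic_sub(int i, atomic_t *v)
	{
		atomic_add(-i, v);
	}

	static inline int atomic_sub_return(int i, atomic_t *v)
	{
		return atomic_add_return(-i, v);
	}

If i is a compile-time constant the negate folds away entirely, and otherwise
it should only cost a single extra instruction, which ought to be in the noise.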

Any other instructions I should be careful with? I take it the bit ops
are full 32 bits again?