Re: [PATCH 1/2] WorkStruct: Add assign_bits() to give an atomic-bitops safe assignment

From: Russell King
Date: Tue Dec 12 2006 - 17:55:19 EST


On Tue, Dec 12, 2006 at 08:11:12PM +0000, David Howells wrote:
> diff --git a/include/asm-arm/bitops.h b/include/asm-arm/bitops.h
> index b41831b..5932134 100644
> --- a/include/asm-arm/bitops.h
> +++ b/include/asm-arm/bitops.h
> @@ -117,6 +117,32 @@ ____atomic_test_and_change_bit(unsigned
> 	return res & mask;
> }
>
> +#if __LINUX_ARM_ARCH__ >= 6 && defined(CONFIG_CPU_32v6K)
> +static inline void assign_bits(unsigned long v, unsigned long *addr)
> +{
> +	unsigned long tmp;
> +
> +	__asm__ __volatile__("@ atomic_set\n"
> +"1:	ldrex	%0, [%1]\n"
> +"	strex	%0, %2, [%1]\n"
> +"	teq	%0, #0\n"
> +"	bne	1b"
> +	: "=&r" (tmp)
> +	: "r" (addr), "r" (v)
> +	: "cc");
> +}

This seems to be a very silly question (and I'm bound to be utterly
wrong, as proven in my last round), but why are we implementing a new
set of atomic primitives which effectively do the same thing as our
existing set?

Why can't we just use atomic_t for this?
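
For the sake of argument, a minimal sketch of what I mean, assuming the
management word fits in an atomic_t (the struct and helper names below
are made up for illustration, they are not taken from the patch):

	#include <asm/atomic.h>

	struct flags_holder {
		atomic_t flags;		/* management bits packed into one word */
	};

	static inline void flags_assign(struct flags_holder *h, int v)
	{
		atomic_set(&h->flags, v);	/* atomic whole-word assignment */
	}

	static inline int flags_fetch(struct flags_holder *h)
	{
		return atomic_read(&h->flags);
	}

The quoted assembly is even labelled "@ atomic_set", which rather
suggests it is the same ldrex/strex loop the existing primitive
already provides.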

--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: