Re: [PATCH 2/5] bitops: compile time optimization for hweight_long(CONSTANT)

From: Brian Gerst
Date: Thu Feb 04 2010 - 10:39:39 EST


2010/2/4 Borislav Petkov <bp@xxxxxxxxx>:
> On Wed, Feb 03, 2010 at 11:49:54AM -0800, H. Peter Anvin wrote:
>> On 02/03/2010 10:47 AM, Peter Zijlstra wrote:
>> > On Wed, 2010-02-03 at 19:14 +0100, Borislav Petkov wrote:
>> >
>> >> alternative("call hweightXX", "popcnt", X86_FEATURE_POPCNT)
>> >
>> > Make sure to apply a 0xff bitmask to the popcnt r16 call for hweight8(),
>> > and hweight64() needs a bit of magic for 32bit, but yes, something like
>> > that ought to work nicely.
>> >
>>
>> Arguably the "best" option is to have the alternative being a jump to an
>> out-of-line stub which does the necessary parameter marshalling before
>> calling a stub.  This technique is already used in a few other places.
>
> Ok, here's a first alpha prototype, completely untested. The asm
> output looks ok though. I've added separate 32-bit and 64-bit helpers in
> order to dispense with the if-else tests. The hw-popcnt versions are the
> opcodes for "popcnt %eax, %eax" and "popcnt %rax, %rax", respectively,
> so %rAX has to be preloaded with the bitmask and the computed value
> has to be retrieved from there afterwards. And yes, it doesn't look that
> elegant, so I'm open to suggestions.
>
> The good thing is, this should work on any toolchain since we don't rely
> on the compiler to know about popcnt, and we're protected by the CPUID flag
> so that the hw-popcnt version is used only on processors which support
> it.
>
> Please take a good look and let me know what you guys think.
>
> Thanks.
>
> --
>  arch/x86/include/asm/bitops.h |    4 ++
>  arch/x86/lib/Makefile         |    2 +-
>  arch/x86/lib/popcnt.c         |   62 +++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 67 insertions(+), 1 deletions(-)
>  create mode 100644 arch/x86/lib/popcnt.c
>
> diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
> index 02b47a6..deb5013 100644
> --- a/arch/x86/include/asm/bitops.h
> +++ b/arch/x86/include/asm/bitops.h
> @@ -434,6 +434,10 @@ static inline int fls(int x)
>  #endif
>  	return r + 1;
>  }
> +
> +
> +extern int arch_hweight_long(unsigned long);
> +
>  #endif /* __KERNEL__ */
>
> Â#undef ADDR
> diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
> index cffd754..c03fe2d 100644
> --- a/arch/x86/lib/Makefile
> +++ b/arch/x86/lib/Makefile
> @@ -22,7 +22,7 @@ lib-y += usercopy_$(BITS).o getuser.o putuser.o
>  lib-y += memcpy_$(BITS).o
>  lib-$(CONFIG_KPROBES) += insn.o inat.o
>
> -obj-y += msr.o msr-reg.o msr-reg-export.o
> +obj-y += msr.o msr-reg.o msr-reg-export.o popcnt.o
>
>  ifeq ($(CONFIG_X86_32),y)
>          obj-y += atomic64_32.o
> diff --git a/arch/x86/lib/popcnt.c b/arch/x86/lib/popcnt.c
> new file mode 100644
> index 0000000..179a6e8
> --- /dev/null
> +++ b/arch/x86/lib/popcnt.c
> @@ -0,0 +1,62 @@
> +#include <linux/kernel.h>
> +#include <linux/bitops.h>
> +
> +int _hweight32(void)
> +{
> +	unsigned long w;
> +
> +	asm volatile("" : "=a" (w));
> +
> +	return hweight32(w);
> +
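For reference, a minimal sketch of the dispatch described above (operand preloaded into
%rax, call to the software fallback patched to the raw popcnt opcode when
X86_FEATURE_POPCNT is set) could look roughly like this; the 64-bit case is shown, and
the helper name plus the assumption that the fallback preserves every register except
%rax are illustrative rather than taken from the posted patch:

#include <asm/alternative.h>
#include <asm/cpufeature.h>

static inline unsigned long arch_hweight_long(unsigned long w)
{
	/*
	 * The operand is preloaded into %rax; alternative() patching
	 * replaces the call to the software fallback with the raw opcode
	 * for "popcnt %rax,%rax" on CPUs that advertise POPCNT.
	 * Assumes the fallback clobbers nothing except %rax.
	 */
	alternative_io("call _hweight64",			/* software fallback */
		       ".byte 0xf3,0x48,0x0f,0xb8,0xc0",	/* popcnt %rax,%rax */
		       X86_FEATURE_POPCNT,
		       "=a" (w),				/* result read back from %rax */
		       "a" (w));				/* bitmask preloaded into %rax */
	return w;
}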

You should just use a normal parameter here (use %rdi for 64-bit).
The way you have it written isn't guaranteed to work. GCC could
clobber %eax/%rax before you read it, although it is highly unlikely.
For a good example of how this should be written, look at
__mutex_fastpath_lock().
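A rough sketch of that suggestion, with the operand pinned to %rdi by an input
constraint in the spirit of __mutex_fastpath_lock(), might look like the following; the
function name and clobber list are illustrative, and the alternative() patching
discussed above would still be layered on top of the call:

#include <linux/bitops.h>

/*
 * Out-of-line helper: the argument arrives as a normal C parameter,
 * which the x86-64 ABI passes in %rdi, so no register games are needed.
 */
int _hweight64(unsigned long w)
{
	return hweight64(w);
}

static inline int arch_hweight_long(unsigned long w)
{
	int res;

	asm volatile("call _hweight64"
		     : "=a" (res)		/* result comes back in %rax */
		     : "D" (w)			/* operand pinned to %rdi */
		     : "rsi", "rdx", "rcx", "r8", "r9", "r10", "r11",
		       "memory");		/* remaining caller-saved registers */

	return res;
}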

--
Brian Gerst