Re: [PATCH 2/5] bitops: compile time optimization forhweight_long(CONSTANT)

From: Peter Zijlstra
Date: Thu Feb 04 2010 - 10:14:21 EST


On Thu, 2010-02-04 at 16:10 +0100, Borislav Petkov wrote:
> On Wed, Feb 03, 2010 at 11:49:54AM -0800, H. Peter Anvin wrote:
> > On 02/03/2010 10:47 AM, Peter Zijlstra wrote:
> > > On Wed, 2010-02-03 at 19:14 +0100, Borislav Petkov wrote:
> > >
> > >> alternative("call hweightXX", "popcnt", X86_FEATURE_POPCNT)
> > >
> > > Make sure to apply a 0xff bitmask to the popcnt r16 call for hweight8(),
> > > and hweight64() needs a bit of magic for 32bit, but yes, something like
> > > that ought to work nicely.
> > >
> >
> > Arguably the "best" option is to have the alternative being a jump to an
> > out-of-line stub which does the necessary parameter marshalling before
> > calling a stub. This technique is already used in a few other places.
>
> Ok, here's a first alpha prototype and completely untested. The asm
> output looks ok though. I've added separate 32-bit and 64-bit helpers in
> order to dispense with the if-else tests. The hw-popcnt versions are the
> opcodes for "popcnt %eax, %eax" and "popcnt %rax, %rax", respectively,
> so %rAX has to be preloaded with the bitmask and the computed value
> has to be retrieved from there afterwards. And yes, it looks not that
> elegant so I'm open for suggestions.
>
> The good thing is, this should work on any toolchain since we don't rely
> on the compiler to know about popcnt and we're protected by CPUID flag
> so that the hw-popcnt version is used only on processors which support
> it.
>
> Please take a good look and let me know what do you guys think.

> +int arch_hweight_long(unsigned long w)
> +{
> + if (sizeof(w) == 4) {
> + asm volatile("movl %[w], %%eax" :: [w] "r" (w));
> + alternative("call _hweight32",
> + "call _popcnt32",
> + X86_FEATURE_POPCNT);
> + asm volatile("" : "=a" (w));
> +
> + } else {
> + asm volatile("movq %[w], %%rax" :: [w] "r" (w));
> + alternative("call _hweight64",
> + "call _popcnt64",
> + X86_FEATURE_POPCNT);
> + asm volatile("" : "=a" (w));
> + }
> + return w;
> +}

hweight_long() isn't an arch primitive, only __arch_hweight{8,16,32,64}
are.


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/