Re: [PATCH -tip 2/2] x86/hweight: Use POPCNT when available with X86_NATIVE_CPU option

From: Uros Bizjak
Date: Sun Mar 30 2025 - 03:49:59 EST


On Sat, Mar 29, 2025 at 12:00 PM David Laight
<david.laight.linux@xxxxxxxxx> wrote:
>
> On Sat, 29 Mar 2025 10:19:37 +0100
> Uros Bizjak <ubizjak@xxxxxxxxx> wrote:
>
> > On Tue, Mar 25, 2025 at 10:56 PM Ingo Molnar <mingo@xxxxxxxxxx> wrote:
> > >
> > >
> > > * Uros Bizjak <ubizjak@xxxxxxxxx> wrote:
> > >
> > > > Emit naked POPCNT instruction when available with X86_NATIVE_CPU
> > > > option. The compiler is not bound by ABI when emitting the instruction
> > > > without the fallback call to __sw_hweight{32,64}() library function
> > > > and has much more freedom to allocate input and output operands,
> > > > including memory input operand.
> > > >
> > > > The code size of x86_64 defconfig (with X86_NATIVE_CPU option)
> > > > shrinks by 599 bytes:
> > > >
> > > > add/remove: 0/0 grow/shrink: 45/197 up/down: 843/-1442 (-599)
> > > > Total: Before=22710531, After=22709932, chg -0.00%
> > > >
> > > > The asm changes from e.g.:
> > > >
> > > > 3bf9c: 48 8b 3d 00 00 00 00 mov 0x0(%rip),%rdi
> > > > 3bfa3: e8 00 00 00 00 call 3bfa8 <...>
> > > > 3bfa8: 90 nop
> > > > 3bfa9: 90 nop
> > > >
> > > > with:
> > > >
> > > > 34b: 31 c0 xor %eax,%eax
> > > > 34d: f3 48 0f b8 c7 popcnt %rdi,%rax
> > > >
> > > > in the .altinstr_replacement section
> > > >
> > > > to:
> > > >
> > > > 3bfdc: 31 c0 xor %eax,%eax
> > > > 3bfde: f3 48 0f b8 05 00 00 popcnt 0x0(%rip),%rax
> > > > 3bfe5: 00 00
> > > >
> > > > where there is no need for an entry in the .altinstr_replacement
> > > > section, shrinking all text sections by 9476 bytes:
> > > >
> > > > text data bss dec hex filename
> > > > 27267068 4643047 814852 32724967 1f357e7 vmlinux-old.o
> > > > 27257592 4643047 814852 32715491 1f332e3 vmlinux-new.o
> > >
> > > > +#ifdef __POPCNT__
> > > > + asm_inline (ASM_FORCE_CLR "popcntl %[val], %[cnt]"
> > > > + : [cnt] "=&r" (res)
> > > > + : [val] ASM_INPUT_RM (w));
> > > > +#else
> > > > asm_inline (ALTERNATIVE(ANNOTATE_IGNORE_ALTERNATIVE
> > > > "call __sw_hweight32",
> > > > ASM_CLR "popcntl %[val], %[cnt]",
> > > > X86_FEATURE_POPCNT)
> > > > : [cnt] "=a" (res), ASM_CALL_CONSTRAINT
> > > > : [val] REG_IN (w));
> > >
> > > So a better optimization I think would be to declare and implement
> > > __sw_hweight32 with a different, less intrusive function call ABI that
> >
> > With an external function, the ABI specifies the location of input
> > argument and function result. Unless we want to declare the whole
> > function as asm() inline function (with some 20 instructions), we have
> > to specify the location of function arguments and where the function
> > result is to be found in the asm() that calls the external function.
> > Register allocator then uses this information to move arguments to the
> > right place before the call.
> >
> > The above approach, when used to emulate an insn, has a drawback.
> > When the instruction is available as an alternative, it still has
> > fixed input and output registers, forced by the ABI of the function
> > call. Register allocator has to move registers unnecessarily to
> > satisfy the constraints of the function call, not the instruction
> > itself.
>
> Forcing the argument into a fixed register won't make much difference
> to execution time.
> Just a bit more work for the instruction decoder and a few more bytes
> of I-cache.
> (Register-register moves can be zero clocks.)
> In many cases (but not as many as you might hope for) the compiler
> back-tracks the input register requirement to the instruction that
> generates the value.

I'm afraid I don't fully understand what you mean by "back-tracking
the input register requirement". However, with:

asm("insn %0, %1" : "=r" (out) : "r" (in));

the compiler is not obliged to match the input with the output,
although it often does so (especially when the input argument is
dead). To avoid a false dependency on the output register, we should
force the compiler to always match input and output:

asm("insn %0, %1" : "=r" (out) : "0" (in));

and this resolves the false dependency (the input register obviously
has to be ready before the insn executes) at the expense of an extra
move instruction in front of the insn when the input is not dead.
This is unfortunately not possible when one of the alternatives is a
function call, where the locations of the input and output arguments
are dictated by the ABI.

> In this case the called function needs two writeable registers.
> I think you can tell gcc the input is invalidated and the output
> is 'early clobber' so that the register are different.

Yes, my first patch used this approach, where the output operand is cleared first:

asm("xorl %0, %0; popcntl %1, %0" : "=&r" (out) : "rm" (in));

Please note that an "earlyclobbered" output register can't be matched
with the input register, or with any register that forms a memory
address.

> > The proposed solution builds on the fact that with -march=native (and
> > also when -mpopcnt is specified on the command line), the compiler
> > signals the availability of certain ISA by defining the corresponding
> > definition. We can use this definition to relax the constraints to fit
> > the instruction, not the ABI of the fallback function call. On x86, we
> > can also access memory directly, avoiding clobbering a temporary input
> > register.
> >
> > Without the fix for (obsolete) false dependency, the change becomes simply:
> >
> > #ifdef __POPCNT__
> > asm ("popcntl %[val], %[cnt]"
> > : [cnt] "=r" (res)
> > : [val] ASM_INPUT_RM (w));
> > #else
> >
> > and besides the reported savings of 600 bytes in the .text section
> > also allows the register allocator to schedule registers (and input
> > arguments from memory) more optimally, not counting additional 9k
> > saved space in the alternative section.
> >
> > The patch is also an example, how -march=native enables further
> > optimizations involving additional ISAs.
>
> To my mind it would be better to be able to specify oldest cpu
> type the build should support.
> Either by actual cpu type (eg 'skylake' or 'zen2') or maybe by
> a specific instruction (eg popcnt).
> The scripts would then determine the appropriate compiler flags
> and any extra -Dvar to generate appropriate code.

Please note that with -march=native the compiler driver ("gcc") does
this for you. -march=native expands to a series of -m compile flags
(you can see them by passing -### to gcc), and each flag, when set,
defines the corresponding ISA macro. E.g., passing -mpopcnt defines
the __POPCNT__ macro. These macros can be used instead of -Dvar for
conditional compilation that depends on the -m ISA flags passed to
the compiler proper ("cc1").

> The arch/x86/Kconfig.cpu seems to be missing options to select
> between 64bit cpus.
> That would also be the place to add CONFIG defines that mirror the
> X86_FEATURE_xxx flags.

While -march=native is expanded in the compiler driver, setting e.g.
-march=skylake enables the corresponding CPU capabilities in the
compiler itself. However, using -march=...cpu... also sets the
corresponding ISA macros, so the proposed approach does not exclude
Kconfig.cpu options. The automatically defined ISA macros can be used
instead of X86_FEATURE_xxx flags for maximum flexibility.

Thanks,
Uros.