Re: [PATCH 1/1] bitfield.h: Ensure FIELD_PREP_CONST() is constant

From: Yury Norov

Date: Mon Apr 13 2026 - 12:52:28 EST


On Sat, Apr 11, 2026 at 11:54:15AM +0100, David Laight wrote:
> On Sat, 11 Apr 2026 00:24:28 -0400
> Yury Norov <ynorov@xxxxxxxxxx> wrote:
>
> > On Fri, Apr 10, 2026 at 07:45:38PM +0100, David Laight wrote:
> > > On Fri, 10 Apr 2026 12:55:25 -0400
> > > Yury Norov <ynorov@xxxxxxxxxx> wrote:
> >
> > ...
> >
> > > > > Note that when 'val' is a variable 'val << constant' is likely
> > > > > to execute faster than 'val * (1 << constant)'.
> > > > > So the normal FIELD_PREP() is best left alone.
> > > >
> > > > Do you have any numbers? I'd prefer to have the codebase consistent
> > > > when possible.
> > >
> > > I think the multiply instruction will have a higher latency than the shift.
> > > So you are talking about a very small number of clocks if the expression
> > > is in the critical register dependency path.
> > > However FIELD_GET() would need to use a divide - and that would be a lot
> > > worse.
> > >
> > > Having written that, ISTR that 'mask' is required to be a constant.
> > > So the compiler may use a shift anyway - if the divide is unsigned.
> > > But for non-constant mask you definitely want a 'shift right'.
> >
> > Non-constant masks are handled with __field_get(), which doesn't use
> > __bf_shf().
> >
> > > While you might think that it only makes sense to use unsigned values,
> > > I've found one piece of code (IIRC in the x86 fault handler) that
> > > passes a signed value to FIELD_GET() and needs the result sign extended.
> > > So, unless that is changed, FIELD_GET() must use an explicit right shift.
> > > (Of course, right shift of negative values is implementation-defined
> > > rather than UB, but still not something to rely on...)
> >
> > FIELD_GET() is quite fine with the change:
> >
> > #define __FIELD_GET(mask, reg, pfx) \
> > ({ \
> > __BF_FIELD_CHECK_MASK(mask, 0U, pfx); \
> > - (typeof(mask))(((reg) & (mask)) >> __bf_shf(mask)); \
> > + (typeof(mask))(((reg) & (mask)) / __bf_low_bit(mask)); \
> > })
> >
> > void my_test(void)
> > {
> > f3 0f 1e fa endbr64
> > 48 83 ec 08 sub $0x8,%rsp
> > volatile int i = -1;
> >
> > pr_err("%lx\n", FIELD_GET(GENMASK(10,5), i));
> > 48 c7 c7 13 e3 51 82 mov $0xffffffff8251e313,%rdi
> > volatile int i = -1;
> > c7 44 24 04 ff ff ff movl $0xffffffff,0x4(%rsp)
> > ff
> > pr_err("%lx\n", FIELD_GET(GENMASK(10,5), i));
> > 8b 74 24 04 mov 0x4(%rsp),%esi
> >
> > }
> > 48 83 c4 08 add $0x8,%rsp
> > pr_err("%lx\n", FIELD_GET(GENMASK(10,5), i));
> > 81 e6 e0 07 00 00 and $0x7e0,%esi
> > 48 c1 ee 05 shr $0x5,%rsi
> > e9 32 aa b9 ff jmp <_printk>
>
> There is a subtle difference between (https://www.godbolt.org/z/KM7MesPWM):
>
> int a(int x)
> {
> return x >> __bf_shf(0xf0u);
> }
>
> int b(int x)
> {
> return x / __bf_low_bit(0xf0);
> }
>
> int c(int x)
> {
> return x / __bf_low_bit(0xf0u);
> }
>
> a:
> movl %edi, %eax
> sarl $4, %eax
> ret
> b:
> testl %edi, %edi
> leal 15(%rdi), %eax
> cmovns %edi, %eax
> sarl $4, %eax
> ret
> c:
> movl %edi, %eax
> shrl $4, %eax
> ret
>
> A while ago I did a compile-test for negative values and found one
> place that requires the sign-replicating right shift.

Again, please be more specific. When was that? Which compiler did you
use? Which place did you find, and does it still exist?

>
> So you'd need that check and to fixup the caller.

None of them uses the expensive DIV or MUL instructions, which was your
original concern. If you're concerned about the code generation in (b),
you can cast the value to unsigned with __bf_cast_unsigned(). I also
think that __bf_low_bit() is a bad name - the low bit is always bit #0.
Maybe ffs_mask(), lsb_mask() or the more wordy least_set_bit_mask()?

Altogether, IMO this would be:

#define ffs_mask(val)	(__bf_cast_unsigned(val, val) & \
			 (~__bf_cast_unsigned(val, val) + 1))

--

Regardless of the __bf_low_bit() discussion, __bf_shf() needs to get
fixed for gcc <= 14, because it's a public API with over 100 users in
the kernel. So, Matt, you're very welcome to submit a v2.

Thanks,
Yury