Re: [PATCH V3 1/2] uapi: Define GENMASK_U128

From: Arnd Bergmann
Date: Mon Aug 19 2024 - 03:14:05 EST


On Fri, Aug 16, 2024, at 08:28, Anshuman Khandual wrote:
>
> This is caused by ((unsigned __int128)(1) << (128)), which is generated
> via the (h + 1) term in __GENMASK_U128().
>
> #define _BIT128(x) ((unsigned __int128)(1) << (x))
> #define __GENMASK_U128(h, l) \
> ((_BIT128((h) + 1)) - (_BIT128(l)))

Right, makes sense.
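
For reference, a minimal userspace reproducer of the failure mode
(just a sketch to illustrate the point, reusing the names from the
uapi header) would be something like:

#define _BIT128(x) ((unsigned __int128)(1) << (x))
#define __GENMASK_U128(h, l) ((_BIT128((h) + 1)) - (_BIT128(l)))

/* h == 127 turns (h) + 1 into a shift by the full 128-bit width */
unsigned __int128 mask = __GENMASK_U128(127, 0);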

>
> The most significant bit in the generated mask can be added separately,
> thus avoiding that extra shift. The following patch solves the build
> problem.
>
> diff --git a/include/uapi/linux/bits.h b/include/uapi/linux/bits.h
> index 4d4b7b08003c..4e50f635c6d9 100644
> --- a/include/uapi/linux/bits.h
> +++ b/include/uapi/linux/bits.h
> @@ -13,6 +13,6 @@
> (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h))))
>
> #define __GENMASK_U128(h, l) \
> - ((_BIT128((h) + 1)) - (_BIT128(l)))
> + (((_BIT128(h)) - (_BIT128(l))) | (_BIT128(h)))

This could probably use a comment then, as it's less intuitive.
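
Maybe something along these lines (just a sketch of possible wording,
feel free to rephrase):

/*
 * Avoid the invalid shift by (h) + 1 when h == 127: build the mask
 * for bits [h-1:l] first and OR in bit h separately.
 */
#define __GENMASK_U128(h, l) \
(((_BIT128(h)) - (_BIT128(l))) | (_BIT128(h)))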

Another solution might be to use a double shift, as in

#define __GENMASK_U128(h, l) \
((_BIT128((h)) << 1) - (_BIT128(l)))

but I have not checked if this is correct for all inputs
or if it avoids the warning. Your version looks fine to
me otherwise.
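
In case it helps, a quick userspace comparison of the two variants
could look something like this (just a sketch showing how one might
check the boundary cases; the MASK_OR/MASK_SHIFT names are only for
the example):

#include <stdio.h>

#define _BIT128(x) ((unsigned __int128)(1) << (x))
/* variant from the patch above */
#define MASK_OR(h, l)    (((_BIT128(h)) - (_BIT128(l))) | (_BIT128(h)))
/* double-shift variant */
#define MASK_SHIFT(h, l) ((_BIT128(h) << 1) - (_BIT128(l)))

static void check(int h, int l)
{
	unsigned __int128 a = MASK_OR(h, l);
	unsigned __int128 b = MASK_SHIFT(h, l);

	printf("h=%3d l=%3d %s\n", h, l, a == b ? "match" : "DIFFER");
}

int main(void)
{
	check(127, 0);
	check(127, 64);
	check(63, 0);
	check(0, 0);
	return 0;
}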

Arnd