Re: [tip:x86/mce] x86/bitops: Move BIT_64() for a wider use

From: H. Peter Anvin
Date: Wed May 23 2012 - 12:51:04 EST


On 05/23/2012 09:47 AM, H. Peter Anvin wrote:
>
> BIT(0), okay. I thought we were talking about BIT_64() here...
>
> Any reason we can't just tell people to use BIT() for a native "unsigned
> long" type (32/64 bits) and BIT_64() if they really want a 64-bit result?
>
> There are good reasons for the latter. Consider, for example:
>
> u64 msr;
> ...
> msr &= ~BIT_64(1);
>
> This *better* not be an unsigned 32-bit value, or we just chopped off
> the upper half. In this case and similar ones the 64-bitness of the
> result really matters.
>
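
For reference, here is roughly what the two macros expand to: BIT()
from include/linux/bitops.h and the BIT_64() this patch makes more
widely available (spellings from memory, so treat them as a sketch
rather than a quote of the headers):

#define BIT(nr)		(1UL << (nr))		/* width of unsigned long */
#define BIT_64(_n)	(U64_C(1) << (_n))	/* always a 64-bit result */

On a 64-bit kernel the two behave the same; on a 32-bit kernel BIT()
is only 32 bits wide, which is where the truncation below comes in.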

To clarify my concern: if we make BIT() a DWIM (do-what-I-mean) type,
it will appear to work in most situations. As a result, we'll see
things in headers like:

#define MSR_BLAH_FOO BIT(31)
#define MSR_BLAH_BAR BIT(32)

... and *almost all the time* the above will work. But if you use
MSR_BLAH_FOO inverted against a 64-bit quantity, you get truncation.
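
To make the failure mode concrete, here is a minimal user-space sketch
(plain C, not kernel code; BIT_32() below stands in for what BIT()
evaluates to on a 32-bit build, and the MSR value is made up):

#include <stdint.h>
#include <stdio.h>

#define BIT_32(nr)	(UINT32_C(1) << (nr))	/* 32-bit-wide BIT() stand-in */
#define BIT_64(nr)	(UINT64_C(1) << (nr))	/* always a 64-bit result */

int main(void)
{
	uint64_t msr = 0xdeadbeefcafef00dULL;	/* made-up MSR contents */

	/* ~BIT_32(31) is only 0x7fffffff; zero-extending it to 64 bits
	 * before the AND silently clears the upper half of msr. */
	uint64_t truncated = msr & ~BIT_32(31);

	/* ~BIT_64(31) keeps bits 32..63 set, so only bit 31 is cleared. */
	uint64_t correct = msr & ~BIT_64(31);

	printf("truncated: 0x%016llx\n", (unsigned long long)truncated);
	printf("correct:   0x%016llx\n", (unsigned long long)correct);
	return 0;
}

The first AND leaves 0x000000004afef00d behind, the second
0xdeadbeef4afef00d, which is exactly the "chopped off the upper half"
case from above.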

-hpa
