Dmitry Torokhov wrote:
> On 2/23/07, Richard Knutsson <ricknu-0@xxxxxxxxxxxxxx> wrote:
>> Dmitry Torokhov wrote:
>>>
>>> Hm, I thought I was clear, but apparently I messed up explaining my
>>> position:
>>>
>>> 1. I don't like the BITWRAP name at all and I don't want anything like
>>> that near input code. I think BIT is just fine.
>> Oh, I think I understand now. So the (in input.h):
>> #undef BIT
>> #define BIT(...
>> business is what you want to do? Well, that I will not object to.
> No, #undefs may be barely tolerable in .c files but they are not
> acceptable in core subsystem interfaces. If you do that you will never
> know which version of BIT a particular module is using.

Yes, kinda. But wouldn't the compiler complain if you included both
input.h and bitops.h (multiple definitions of BIT)?
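Something like this toy example is what I mean; it is not the real
headers, just the two competing definitions pasted into one file to show
the include-order problem:

#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* the bitops.h flavour */
#define BIT(nr) (1UL << (nr))

/* the input.h flavour, which silently replaces the one above */
#undef BIT
#define BIT(nr) (1UL << ((nr) % BITS_PER_LONG))

int main(void)
{
	/* Which definition this expands to depends only on which
	 * header happened to come last in the include order; swap
	 * the two blocks above and you get a redefinition warning
	 * instead. */
	printf("BIT(200) = %#lx\n", BIT(200));
	return 0;
}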
>> Sorry about that. I am surprised I didn't notice it earlier... Your
>> patch with:
>> +#define BIT(nr) (1UL << (nr))
>> +#define LLBIT(nr) (1ULL << (nr))
>> +#define BITWRAP(nr) (1UL << ((nr) % BITS_PER_LONG))
>> in bitops.h made me believe the #undef in input.h was just a temporary
>> thing.
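Just to be sure I am reading those three right: with a 32-bit long they
would behave roughly like this (my own sketch, not part of any patch):

#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

#define BIT(nr) (1UL << (nr))
#define LLBIT(nr) (1ULL << (nr))
#define BITWRAP(nr) (1UL << ((nr) % BITS_PER_LONG))

int main(void)
{
	/* With a 32-bit long, BIT(35) would shift past the type width
	 * (undefined behaviour), while the other two stay defined:
	 * BITWRAP(35) wraps around to bit 3, LLBIT(35) uses 64 bits. */
	printf("BITWRAP(35) = %#lx\n", BITWRAP(35));
	printf("LLBIT(35)   = %#llx\n", LLBIT(35));
	return 0;
}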
> No. There is no "my patch". You are confusing me with Milind
> Choudhary. I am saying that IMO input's BIT definition should be
> adequate for 99% of potential users and that I would be OK with moving
> said BIT definition from input.h to bitops.h and maybe supplementing
> it with LLBIT. I am also saying that I do not want BITWRAP, BITSWAP
> (what swap btw?) nor BIT(x % BITS_PER_LONG) in input drivers.

Is the reason for the modulo to put a bitmask larger than the variable
into an array? I did just a quick 'grep' for "BIT(" in drivers/input/
and from what I saw, most (or all?) of the values are defined constants
and those in input.h were nowhere near the limits of a 'long'.
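Looking at input.h again, I guess the answer to my own question is the
LONG()/BIT() pair: the event bitmaps are arrays of longs, so LONG()
picks the word and the modulo in BIT() picks the bit within it. A
standalone sketch of that pattern (constants simplified, not the real
struct input_dev):

#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* the pair as (I believe) input.h defines them */
#define BIT(x)  (1UL << ((x) % BITS_PER_LONG))
#define LONG(x) ((x) / BITS_PER_LONG)

#define BTN_0   0x100	/* 256: well past a single long */
#define KEY_MAX 0x1ff

int main(void)
{
	/* a toy stand-in for the keybit[] bitmap in struct input_dev */
	unsigned long keybit[KEY_MAX / BITS_PER_LONG + 1] = { 0 };

	/* LONG() selects the array word, BIT()'s modulo selects the
	 * bit inside that word, so codes above BITS_PER_LONG still
	 * land somewhere sane instead of shifting out of range. */
	keybit[LONG(BTN_0)] |= BIT(BTN_0);

	printf("keybit[%lu] = %#lx\n",
	       (unsigned long)LONG(BTN_0), keybit[LONG(BTN_0)]);
	return 0;
}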