Re: [PATCH v3 1/2] perf, x86: Implement event scheduler helper functions

From: Peter Zijlstra
Date: Wed Nov 16 2011 - 11:02:42 EST


On Mon, 2011-11-14 at 18:51 +0100, Robert Richter wrote:
> @@ -22,8 +22,14 @@ extern unsigned long __sw_hweight64(__u64 w);
> #include <asm/bitops.h>
>
> #define for_each_set_bit(bit, addr, size) \
> - for ((bit) = find_first_bit((addr), (size)); \
> - (bit) < (size); \
> + for ((bit) = find_first_bit((addr), (size)); \
> + (bit) < (size); \
> + (bit) = find_next_bit((addr), (size), (bit) + 1))
> +
> +/* same as for_each_set_bit() but use bit as value to start with */
> +#define for_each_set_bit_cont(bit, addr, size) \
> + for ((bit) = find_next_bit((addr), (size), (bit)); \
> + (bit) < (size); \
> (bit) = find_next_bit((addr), (size), (bit) + 1))

So my version has the +1 for the first iteration as well; this follows from
the assumption that the bit passed in has already been dealt with and should
not be visited again, i.e. continue _after_ @bit instead of continuing _at_
@bit.
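
To make the difference concrete, here is a quick userspace sketch (not the
kernel code itself; find_next_bit_ul() is a simplified, hypothetical stand-in
for find_next_bit(), limited to a single word) contrasting the posted
cont-_at_ semantics with the cont-_after_ variant suggested above:

#include <stdio.h>

/* Hypothetical single-word stand-in for the kernel's find_next_bit(). */
static unsigned long find_next_bit_ul(unsigned long word,
				      unsigned long size,
				      unsigned long offset)
{
	for (; offset < size; offset++)
		if (word & (1UL << offset))
			return offset;
	return size;
}

/* As posted: resume _at_ @bit, revisiting it if it is still set. */
#define for_each_set_bit_cont_at(bit, word, size)			\
	for ((bit) = find_next_bit_ul((word), (size), (bit));		\
	     (bit) < (size);						\
	     (bit) = find_next_bit_ul((word), (size), (bit) + 1))

/* Variant suggested here: resume _after_ @bit, skipping it. */
#define for_each_set_bit_cont_after(bit, word, size)			\
	for ((bit) = find_next_bit_ul((word), (size), (bit) + 1);	\
	     (bit) < (size);						\
	     (bit) = find_next_bit_ul((word), (size), (bit) + 1))

int main(void)
{
	unsigned long mask = 0x2d;	/* bits 0, 2, 3, 5 set */
	unsigned long bit;

	bit = 2;
	printf("cont at bit 2:");
	for_each_set_bit_cont_at(bit, mask, 6)
		printf(" %lu", bit);	/* prints: 2 3 5 */

	bit = 2;
	printf("\ncont after bit 2:");
	for_each_set_bit_cont_after(bit, mask, 6)
		printf(" %lu", bit);	/* prints: 3 5 */

	printf("\n");
	return 0;
}

With the cont-_after_ semantics the caller can leave @bit wherever the
previous iteration stopped and simply re-enter the loop without revisiting
that bit.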

This also seems consistent with the list_*_continue primitives, which start
with the element after (or, for the _reverse variants, before) the given
position.

Thoughts?