Re: [PATCH] Optimize bitmap_weight

From: Andrew Morton
Date: Fri May 11 2012 - 18:48:40 EST


On Fri, 11 May 2012 23:10:14 +0900
Akinobu Mita <akinobu.mita@xxxxxxxxx> wrote:

> The current implementation of bitmap_weight simply evaluates the
> population count for each long word of the array, and adds.
>
> The subsection "Counting 1-bits in an Array" in the revisions to
> the book 'Hacker's Delight' describes methods superior to this
> naive approach.
>
> http://www.hackersdelight.org/revisions.pdf
> http://www.hackersdelight.org/HDcode/newCode/pop_arrayHS.c.txt
>
> My benchmark results on an Intel Core i3 CPU with a 32-bit kernel
> showed a 50% speedup for an 8192-bit bitmap. However, it is not
> faster than the naive method for small bitmaps (< BITS_PER_LONG * 8),
> so if the bitmap size is known at compile time to be small, the
> naive method is used.
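For reference, the naive per-word scheme described above can be sketched as
stand-alone C. This is illustrative only, not kernel code: popcountl() stands
in for the kernel's hweight_long(), and the sketch counts whole words, ignoring
the partial-last-word masking (BITMAP_LAST_WORD_MASK) the real bitmap_weight()
performs.

```c
#include <assert.h>

/*
 * Naive array weight: popcount each long word and add.
 * popcountl() is an illustrative stand-in for hweight_long().
 */
static int popcountl(unsigned long w)
{
	int n = 0;

	while (w) {
		w &= w - 1;	/* clear the lowest set bit */
		n++;
	}
	return n;
}

static int naive_bitmap_weight(const unsigned long *src, int nwords)
{
	int i, weight = 0;

	for (i = 0; i < nwords; i++)
		weight += popcountl(src[i]);
	return weight;
}
```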
>
> ...
>
> extern void bitmap_clear(unsigned long *map, int start, int nr);
> @@ -277,7 +278,9 @@ static inline int bitmap_weight(const unsigned long *src, int nbits)
> {
> if (small_const_nbits(nbits))
> return hweight_long(*src & BITMAP_LAST_WORD_MASK(nbits));

Why do we require a constant_p `nbits' for this case?

> - return __bitmap_weight(src, nbits);
> + else if (__builtin_constant_p(nbits) && (nbits) < BITS_PER_LONG * 8)
> + return __bitmap_weight(src, nbits);
> + return __bitmap_weight_fast(src, nbits);
> }

BITS_PER_LONG*8 sounds like a large bitmap: 256 or 512 entries. Will
the kernel call __bitmap_weight_fast() sufficiently often to make this
extra code worth merging?
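For context, the carry-save ("Harley-Seal") technique from the cited Hacker's
Delight material works roughly as sketched below; this is my illustration of
the general idea, not the code from the patch. A carry-save adder folds three
words into a sum word and a carry word, so only one popcount is needed per
group of four input words instead of four:

```c
#include <assert.h>

static int popcountl(unsigned long w)
{
	int n = 0;

	while (w) {
		w &= w - 1;	/* clear the lowest set bit */
		n++;
	}
	return n;
}

/* Carry-save adder: pop(a) + pop(b) + pop(c) == 2*pop(*h) + pop(*l) */
static void csa(unsigned long *h, unsigned long *l,
		unsigned long a, unsigned long b, unsigned long c)
{
	unsigned long u = a ^ b;

	*h = (a & b) | (u & c);
	*l = u ^ c;
}

static int harley_seal_weight(const unsigned long *d, int nwords)
{
	unsigned long ones = 0, twos = 0;
	unsigned long twosa, twosb, fours;
	int i = 0, tot = 0;

	for (; i + 4 <= nwords; i += 4) {
		csa(&twosa, &ones, ones, d[i], d[i + 1]);
		csa(&twosb, &ones, ones, d[i + 2], d[i + 3]);
		csa(&fours, &twos, twos, twosa, twosb);
		tot += 4 * popcountl(fours);	/* each bit counts 4 here */
	}
	tot += 2 * popcountl(twos) + popcountl(ones);
	for (; i < nwords; i++)			/* leftover tail words */
		tot += popcountl(d[i]);
	return tot;
}
```

The per-iteration cost drops from four popcounts to one plus a handful of
logical operations, which is why the win only appears on large bitmaps.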
