Re: [PATCH v4] binder: use bitmap for faster descriptor lookup

From: Carlos Llamas
Date: Thu Jun 06 2024 - 14:50:27 EST


On Tue, Jun 04, 2024 at 04:06:16PM +0200, Greg Kroah-Hartman wrote:
> On Fri, May 17, 2024 at 03:28:27AM +0000, Carlos Llamas wrote:
> > diff --git a/drivers/android/dbitmap.h b/drivers/android/dbitmap.h
> > new file mode 100644
> > index 000000000000..2cf470702bbb
> > --- /dev/null
> > +++ b/drivers/android/dbitmap.h
> > @@ -0,0 +1,139 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */
> > +#ifndef _LINUX_DBITMAP_H
> > +#define _LINUX_DBITMAP_H
> > +#include <linux/bitmap.h>
>
> No copyright line for a new file? Somehow I doubt that's what your
> corporate policy is :(

It is not; sorry, I'll fix it.

>
>
> > +
> > +#define NBITS_MIN BITS_PER_TYPE(unsigned long)
> > +
> > +struct dbitmap {
> > + unsigned int nbits;
> > + unsigned long *map;
> > +};
>
> Some documentation about how this all works would be nice so we can
> verify / validate it is doing what it should be doing.
>
> And maybe a test?
>
> > +
> > +static inline int dbitmap_enabled(struct dbitmap *dmap)
> > +{
> > + return dmap->map != NULL;
> > +}
> > +
> > +static inline void dbitmap_free(struct dbitmap *dmap)
> > +{
> > + dmap->nbits = 0;
> > + kfree(dmap->map);
> > + dmap->map = NULL;
>
> Why are you setting this to NULL after freeing it? What does that
> signal?

The NULL is the signal that dbitmap_enabled() checks for. I'll either add
a comment here or create a dbitmap_disable() helper to pair with it in
the next version.

>
> > +}
> > +
> > +static inline unsigned int dbitmap_shrink_nbits(struct dbitmap *dmap)
> > +{
> > + unsigned int bit;
> > +
> > + if (dmap->nbits <= NBITS_MIN)
> > + return 0;
> > +
> > + bit = find_last_bit(dmap->map, dmap->nbits);
> > + if (unlikely(bit == dmap->nbits))
> > + return NBITS_MIN;
> > +
> > + if (unlikely(bit < (dmap->nbits >> 2)))
> > + return dmap->nbits >> 1;
>
> And these unlikely() markings actually work better than not having them?
> Please document that if so.
>
>
> > +
> > + return 0;
> > +}
> > +
> > +static inline void
> > +dbitmap_replace(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
> > +{
> > + bitmap_copy(new, dmap->map, min(dmap->nbits, nbits));
> > + kfree(dmap->map);
> > + dmap->map = new;
> > + dmap->nbits = nbits;
> > +}
> > +
> > +static inline void
> > +dbitmap_shrink(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
> > +{
> > + if (unlikely(!new))
> > + return;
>
> All unlikely/likely needs to be "proven" to be needed, otherwise the
> compiler and cpu almost always does a better job over time.

Ok, I'll drop all these unlikely/likely and let the compiler and cpu do
their thing.

>
> > +
> > + /*
> > + * Make sure we can still shrink to the requested nbits as
> > + * this call might have raced with another shrink or more
> > + * bits have been assigned. In such case, release the @new
> > + * bitmap and move on.
> > + */
> > + if (unlikely(!dbitmap_enabled(dmap) ||
> > + dbitmap_shrink_nbits(dmap) != nbits)) {
> > + kfree(new);
> > + return;
> > + }
> > +
> > + dbitmap_replace(dmap, new, nbits);
> > +}
> > +
> > +static inline unsigned int
> > +dbitmap_expand_nbits(struct dbitmap *dmap)
> > +{
> > + return dmap->nbits << 1;
> > +}
> > +
> > +static inline void
> > +dbitmap_expand(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
> > +{
> > + /*
> > + * Determine if the expand is still valid as it might have
> > + * raced with another expand or free. In such case, release
> > + * the @new bitmap and move on.
>
> Shouldn't locks protect any race? otherwise what happens if it changes
> right after you check for this?

Yes, there are locks protecting this call. However, these are spinlocks
so the bitmap gets alloc'ed outside of them. e.g.

nbits = dbitmap_expand_nbits(dmap);
spin_unlock(&proc->lock);
new = bitmap_zalloc(nbits, GFP_KERNEL);
spin_lock(&proc->lock);
dbitmap_expand(dmap, new, nbits);

That's why we need to check whether another task beat us to the expand.
Sorry, I now realize this needs to be documented.

>
>
> > + */
> > + if (unlikely(!dbitmap_enabled(dmap) || nbits <= dmap->nbits)) {
> > + kfree(new);
> > + return;
> > + }
> > +
> > + /*
> > + * ENOMEM is checked here as we can now discard a potential
> > + * race with another successful expand. In such case, disable
> > + * the dbitmap and fallback to slow_desc_lookup_olocked().
> > + */
> > + if (unlikely(!new)) {
>
> As you control the callers, how can this happen?

Delaying this check has the advantage that we might be able to rely on
another task's successful expand instead, in which case the "race" is
actually good for us. The comment above the check has these details.

>
> > + dbitmap_free(dmap);
> > + return;
> > + }
> > +
> > + dbitmap_replace(dmap, new, nbits);
> > +}
> > +
> > +static inline int
> > +dbitmap_acquire_first_zero_bit(struct dbitmap *dmap, unsigned long *bit)
> > +{
> > + unsigned long n;
> > +
> > + n = find_first_zero_bit(dmap->map, dmap->nbits);
> > + if (unlikely(n == dmap->nbits))
> > + return -ENOSPC;
> > +
> > + *bit = n;
> > + set_bit(n, dmap->map);
> > +
> > + return 0;
> > +}
> > +
> > +static inline void
> > +dbitmap_clear_bit(struct dbitmap *dmap, unsigned long bit)
> > +{
> > + clear_bit(bit, dmap->map);
> > +}
> > +
> > +static inline int dbitmap_init(struct dbitmap *dmap)
> > +{
> > + dmap->map = bitmap_zalloc(NBITS_MIN, GFP_KERNEL);
> > + if (!dmap->map) {
> > + dmap->nbits = 0;
> > + return -ENOMEM;
> > + }
> > +
> > + dmap->nbits = NBITS_MIN;
> > + /* 0 is reserved for the context manager */
> > + set_bit(0, dmap->map);
>
> Yeah, this all needs to be documented somewhere please :)

Yeah ok, I'll add the documentation.

>
> thanks,
>
> greg k-h

Thanks,
Carlos Llamas