Re: [PATCH 20/24] irqchip/gic-v5: Add GICv5 LPI/IPI support

From: Lorenzo Pieralisi
Date: Fri Apr 11 2025 - 05:26:24 EST


[+Liam, Alexei, since I mentioned this maple tree "usage" to them
in the past, in case they see a point in chiming in]

On Tue, Apr 08, 2025 at 12:50:19PM +0200, Lorenzo Pieralisi wrote:

[...]

> The LPI INTID namespace is managed using a maple tree which encodes
> ranges, add code to alloc/free LPI INTIDs.
>
> Maple tree entries are not used by the driver, only the range tracking
> is required - therefore the driver first finds an empty area large
> enough to contain the required number of LPIs, then checks the
> adjacent (and possibly occupied) LPI ranges and tries to merge them
> together, reducing maple tree slots usage.

The maple tree usage for this purpose is an RFC at this stage.

Added Alexei because I know the BPF arena used the maple tree in
a similar way in the past and moved to a range tree, since the
BPF arena requires a special-purpose memory allocator.

As Thomas already pointed out, a plain bitmap could do the job, even
though it requires preallocating up to 2MB of memory to cover the
full 2^24 INTID space (or we could grow the bitmap dynamically).
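
For reference, here is a minimal sketch of what the bitmap variant
could look like - the lpi_bitmap_* names and the locking choice are
mine, not taken from the patch:

#include <linux/bitmap.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/spinlock.h>

/* Hypothetical bitmap-based LPI INTID allocator - not part of the patch. */
static unsigned long *lpi_bitmap;
static u32 lpi_bitmap_bits;
static DEFINE_RAW_SPINLOCK(lpi_bitmap_lock);

static int __init lpi_bitmap_init(u32 lpis)
{
        /* 2^24 LPIs -> 2^24 bits -> 2MB in the worst case */
        lpi_bitmap = bitmap_zalloc(lpis, GFP_KERNEL);
        if (!lpi_bitmap)
                return -ENOMEM;
        lpi_bitmap_bits = lpis;
        return 0;
}

static int lpi_bitmap_alloc_range(u32 lpis, u32 *base)
{
        unsigned long start;

        raw_spin_lock(&lpi_bitmap_lock);
        start = bitmap_find_next_zero_area(lpi_bitmap, lpi_bitmap_bits,
                                           0, lpis, 0);
        if (start >= lpi_bitmap_bits) {
                raw_spin_unlock(&lpi_bitmap_lock);
                return -ENOSPC;
        }
        bitmap_set(lpi_bitmap, start, lpis);
        raw_spin_unlock(&lpi_bitmap_lock);

        *base = start;
        return 0;
}

static void lpi_bitmap_release_range(u32 base, u32 lpis)
{
        raw_spin_lock(&lpi_bitmap_lock);
        bitmap_clear(lpi_bitmap, base, lpis);
        raw_spin_unlock(&lpi_bitmap_lock);
}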

We could allocate IDs with an IDA as well, although that hands them
out one by one. In practice we allocate LPI INTIDs one at a time
anyway - mostly upon MSI allocation - so an IDA could do the job
(AFAIU it covers 0..INT_MAX, and we need 0..2^24 in the worst case).

I don't want to abuse the maple tree for this purpose; opinions are
welcome. The point is just to understand whether there are core code
structures that can be reused.

ID allocation snippet below.

[...]

> +static struct maple_tree lpi_mt;
> +static u32 num_lpis;
> +
> +void __init gicv5_init_lpi(u32 lpis)
> +{
> +        mt_init_flags(&lpi_mt, MT_FLAGS_ALLOC_RANGE);
> +        num_lpis = lpis;
> +}
> +
> +void __init gicv5_free_lpi(void)
> +{
> +        mtree_destroy(&lpi_mt);
> +        num_lpis = 0;
> +}
> +
> +#define MT_ENTRY ((void *)&lpi_mt) /* Unused - just a valid pointer */
> +
> +static int alloc_lpi_range(u32 lpis, u32 *base)
> +{
> +        int ret;
> +        void *entry;
> +        unsigned long lpi_base, startp, lastp;
> +
> +        MA_STATE(mas, &lpi_mt, 0, 0);
> +
> +        if (!num_lpis)
> +                return -ENODEV;

s/ENODEV/ENOSPC

> +
> +        mtree_lock(&lpi_mt);
> +        ret = mas_empty_area(&mas, 0, num_lpis - 1, lpis);
> +        if (ret) {

// Fixed - the error path was missing the unlock:
+                mtree_unlock(&lpi_mt);

> +                pr_err("Failed to perform a dynamic alloc in the LPI MT!\n");
> +                return ret;
> +        }
> +
> +        lpi_base = mas.index;
> +
> +        /*
> +         * We don't really care about the entry itself, only about
> +         * allocating maple tree ranges that describe in-use LPIs.
> +         * That's why, upon allocation, we try to merge the slots adjacent
> +         * to the empty one we are allocating, to minimize the number
> +         * of slots we take from maple tree nodes for nothing; all
> +         * we need to keep track of is the in-use ranges.
> +         */
> +        startp = mas.index;
> +        lastp = mas.last;
> +
> +        entry = mas_next(&mas, num_lpis - 1);
> +        if (entry && mas.index == lastp + 1)
> +                lastp = mas.last;
> +
> +        entry = mas_prev(&mas, 0);
> +        if (entry)
> +                startp = mas.index;
> +        mas_set_range(&mas, startp, lastp);
> +        mas_store_gfp(&mas, MT_ENTRY, GFP_KERNEL);
> +        mtree_unlock(&lpi_mt);
> +
> +        // lpi_base is the index at which we allocated, i.e. the base LPI.
> +        *base = lpi_base;
> +
> +        return 0;
> +}
> +
> +// Drop entries between min and max (inclusive).
> +static int release_lpi_range(u32 min, u32 max)
> +{
> +        return mtree_store_range(&lpi_mt, min, max, NULL, GFP_KERNEL);
> +}
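
For completeness, this is how I would expect the two helpers to pair
up from a caller (purely illustrative, the gicv5_its_* names are made
up and not part of the patch set):

static int gicv5_its_alloc_lpis(u32 nr_lpis, u32 *base)
{
        /* alloc_lpi_range() takes a count and returns the base INTID */
        return alloc_lpi_range(nr_lpis, base);
}

static void gicv5_its_free_lpis(u32 base, u32 nr_lpis)
{
        /* release_lpi_range() takes an inclusive [min, max] range */
        release_lpi_range(base, base + nr_lpis - 1);
}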

Thanks,
Lorenzo