Re: PCI MSI issue for maxcpus=1

From: Marc Zyngier
Date: Mon Jan 24 2022 - 06:22:49 EST


On Mon, 17 Jan 2022 11:59:58 +0000,
John Garry <john.garry@xxxxxxxxxx> wrote:
>
> On 17/01/2022 09:14, Marc Zyngier wrote:
> >> I guess that for managed interrupts, it shouldn't matter, as these
> >> interrupts should only be used when the relevant CPUs come online.
> >>
> >> Would something like below help? Totally untested, as I don't have a
> >> Multi-MSI capable device that I can plug in a GICv3 system (maybe I
> >> should teach that to a virtio device...).
>
> JFYI, NVMe PCI uses the same API (pci_alloc_irq_vectors_affinity()),
> but does not suffer from this issue - for maxcpus=1 the driver appears
> to request only a single vector.
>
> > Actually, if the CPU online status doesn't matter for managed affinity
> > interrupts, then the correct fix is this:
> >
> > diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> > index d25b7a864bbb..af4e72a6be63 100644
> > --- a/drivers/irqchip/irq-gic-v3-its.c
> > +++ b/drivers/irqchip/irq-gic-v3-its.c
> > @@ -1624,7 +1624,7 @@ static int its_select_cpu(struct irq_data *d,
> >  		cpu = cpumask_pick_least_loaded(d, tmpmask);
> >  	} else {
> > -		cpumask_and(tmpmask, irq_data_get_affinity_mask(d), cpu_online_mask);
> > +		cpumask_copy(tmpmask, irq_data_get_affinity_mask(d));
> >
> >  		/* If we cannot cross sockets, limit the search to that node */
> >  		if ((its_dev->its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_23144) &&
>
> That produces a warn:
>
> [ 7.833025] ------------[ cut here ]------------
> [ 7.837634] WARNING: CPU: 0 PID: 44 at
> drivers/irqchip/irq-gic-v3-its.c:298 valid_col+0x14/0x24
> [ 7.846324] Modules linked in:
> [ 7.849368] CPU: 0 PID: 44 Comm: kworker/0:3 Not tainted 5.16.0-dirty #119
> [ 7.856230] Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI
> RC0 - V1.16.01 03/15/2019
> [ 7.864740] Workqueue: events work_for_cpu_fn
> [ 7.869088] pstate: 804000c9 (Nzcv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> [ 7.876037] pc : valid_col+0x14/0x24
> [ 7.879600] lr : its_build_mapti_cmd+0x84/0x90

Ah, of course. The CPU hasn't booted yet, so its collection isn't
mapped. I was hoping that the core code would keep the interrupt in
shutdown state, but it doesn't seem to be the case...
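Roughly what I had in mind is the check below (purely an illustrative
sketch with a made-up helper name, not the actual kernel/irq core
code): if a managed interrupt is started while none of the CPUs in
its affinity mask are online, the core would leave it in shutdown
state and only bring it up later from the hotplug path.

/*
 * Illustrative sketch only. If something like this returned true at
 * activation time, the ITS driver would never be asked to pick a
 * target from an all-offline managed affinity mask in the first place.
 */
static bool managed_irq_should_stay_shutdown(struct irq_data *d)
{
	const struct cpumask *aff = irq_data_get_affinity_mask(d);

	/* Only managed-affinity interrupts get this treatment */
	if (!irqd_affinity_is_managed(d))
		return false;

	/* No online CPU in the managed mask: defer the startup */
	return cpumask_any_and(aff, cpu_online_mask) >= nr_cpu_ids;
}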

> Apart from this, if another CPU in the affinity mask comes online
> later, I would figure that we want to retarget the irq to that CPU
> (which I think we would not do here).

That's probably also something that should come from core code, as
we're not really in a position to decide this in the ITS driver.
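Something along these lines on the CPU online path would be the
logical counterpart (again only a sketch with invented naming, not
the actual cpuhotplug code, and it assumes the descriptor lock is
already held):

/*
 * Illustrative sketch only: when a CPU comes online, a managed
 * interrupt whose affinity mask contains that CPU can be retargeted
 * now that a valid destination exists.
 */
static void managed_irq_cpu_online(struct irq_desc *desc, unsigned int cpu)
{
	struct irq_data *d = irq_desc_get_irq_data(desc);

	/* Only managed interrupts are (re)targeted automatically */
	if (!irqd_affinity_is_managed(d))
		return;

	/* The newly onlined CPU is not a target for this interrupt */
	if (!cpumask_test_cpu(cpu, irq_data_get_affinity_mask(d)))
		return;

	/*
	 * Re-run the affinity selection so the irqchip (the ITS in
	 * this case) can pick a target from the now-valid mask.
	 */
	irq_set_affinity_locked(d, irq_data_get_affinity_mask(d), false);
}

That would keep the policy in core code and leave the ITS driver to
simply honour whatever mask it is handed.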

M.

--
Without deviation from the norm, progress is not possible.