irq domain hierarchy vs. chaining w/ PCI MSI-X...

From: David Daney
Date: Thu Jan 12 2017 - 17:36:21 EST


Hi Thomas,

I am trying to figure out how to handle this situation:

                 handle_level_irq()
                +---------------+              handle_fasteoi_irq()
                | PCIe hosted   |                 +-----------+     +-----+
--level_gpio---->| GPIO to MSI-X |--MSI_message--+>| gicv3-ITS |---> | CPU |
                | widget        |               | +-----------+     +-----+
                +---------------+               |
                                                |
+-------------------+                           |
| other PCIe device |---MSI_message-------------+
+-------------------+


The question is how to structure the interrupt handling. My initial
attempt was a chaining arrangement where the GPIO driver does
request_irq() for the appropriate MSI-X vector, and the handler calls
back into the irq system like this:


static irqreturn_t thunderx_gpio_chain_handler(int irq, void *dev)
{
        struct thunderx_irqdev *irqdev = dev;
        int chained_irq;
        int ret;

        /* Look up the irq mapped to this line in the GPIO irqdomain. */
        chained_irq = irq_find_mapping(irqdev->gpio->chip.irqdomain,
                                       irqdev->line);
        if (!chained_irq)
                return IRQ_NONE;

        /* Run the GPIO irq's flow handler (handle_level_irq). */
        ret = generic_handle_irq(chained_irq);

        return ret ? IRQ_NONE : IRQ_HANDLED;
}

This way the proper GPIO irq_chip functions get called to manage the
level-triggered semantics.
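
For reference, the registration side of this chaining arrangement looks
roughly like this (a sketch only: thunderx_gpio_setup_chain() is a
made-up helper name, one MSI-X vector per GPIO line is assumed, and
error unwinding is elided):

#include <linux/interrupt.h>
#include <linux/pci.h>

static int thunderx_gpio_setup_chain(struct pci_dev *pdev,
                                     struct thunderx_irqdev *irqdev,
                                     unsigned int line)
{
        /* Linux irq number of the MSI-X vector backing this GPIO line. */
        int msix_irq = pci_irq_vector(pdev, line);

        if (msix_irq < 0)
                return msix_irq;

        /* Hand the per-line irqdev to the chain handler as its cookie. */
        irqdev->line = line;
        return request_irq(msix_irq, thunderx_gpio_chain_handler, 0,
                           "thunderx-gpio", irqdev);
}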

The drawbacks of this approach are that two irqs are then associated
with each GPIO line (the base MSI-X irq and the chained GPIO irq), and
that there can be up to 80-100 of these widgets, so we can potentially
consume twice that many irq numbers.

Linus Walleij suggested that an irq domain hierarchy might be a better
approach. However, I cannot figure out how that would work here: the
gicv3-ITS needs to use handle_fasteoi_irq(), while the level-triggered
GPIO lines need handle_level_irq(). Getting the proper irq_chip
functions called in a hierarchical configuration doesn't seem doable
given these heterogeneous flow handlers.
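
To make the difficulty concrete, here is roughly what I imagine the
GPIO domain's hierarchical .alloc would have to look like (everything
here is invented for illustration, and I am assuming the parent
ITS/MSI domain could consume the same fwspec):

#include <linux/irq.h>
#include <linux/irqdomain.h>

/* Illustrative irq_chip that forwards the low-level operations to the
 * ITS parent. */
static struct irq_chip thunderx_gpio_irq_chip = {
        .name       = "GPIO",
        .irq_mask   = irq_chip_mask_parent,
        .irq_unmask = irq_chip_unmask_parent,
        .irq_eoi    = irq_chip_eoi_parent,
};

static int thunderx_gpio_domain_alloc(struct irq_domain *d,
                                      unsigned int virq,
                                      unsigned int nr_irqs, void *arg)
{
        struct irq_fwspec *fwspec = arg;
        irq_hw_number_t hwirq = fwspec->param[0];
        int ret;

        /* First allocate the underlying MSI-X/ITS interrupt. */
        ret = irq_domain_alloc_irqs_parent(d, virq, nr_irqs, arg);
        if (ret)
                return ret;

        /*
         * Then install the GPIO chip at this level.  The flow handler
         * is per irq_desc, though, so selecting handle_level_irq here
         * is exactly what seems to collide with the handle_fasteoi_irq
         * flow that the gicv3-ITS wants.
         */
        irq_domain_set_info(d, virq, hwirq, &thunderx_gpio_irq_chip,
                            d->host_data, handle_level_irq, NULL, NULL);
        return 0;
}

Even with the chip operations forwarded through the *_parent helpers, I
don't see how the single per-descriptor flow handler can satisfy both
levels of the hierarchy.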

Can you think of a better way of structuring this than chaining from
the MSI-X handler, as I outlined above?

Thanks in advance for any insight,
David Daney