Re: [PATCH 04/13] irqchip: Add driver for Loongson-3 I/O interrupt controller

From: Jiaxun Yang
Date: Wed Aug 28 2019 - 11:31:55 EST



On 2019/8/28 2:59, Marc Zyngier wrote:
On Wed, 28 Aug 2019 08:27:05 +0800
Jiaxun Yang <jiaxun.yang@xxxxxxxxxxx> wrote:

On 2019/8/28 12:45, Marc Zyngier wrote:
On 27/08/2019 09:52, Jiaxun Yang wrote:
+ chained_irq_enter(chip, desc);
+
+ pending = readl(priv->intc_base + LS3_REG_INTC_EN_STATUS) &
+           readl(priv->intc_base + LS3_REG_INTC_STATUS);
Reading the enabled status from the HW on each interrupt? I'm sure
that's pretty cheap...
Seems expensive, but it's worth it to deal with buggy hardware.
How broken is it? You very much seem to rely on the HW being correct
here, since you trust it exclusively. I'd expect the enable mask to be
a SW construct if you didn't blindly trust it
Hi Marc

Thanks for your answer.

The vendor code did this and said there is a HW issue, so I just don't have the guts to remove this check.
It seems that sometimes a masked interrupt can wrongly get its ISR bit set.
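
(For reference, below is a rough sketch of the software-maintained enable mask being suggested above, as an alternative to reading LS3_REG_INTC_EN_STATUS back on every interrupt. Only the two register names come from the quoted hunk; the struct layout, the other symbol names and the unmask callback are illustrative guesses, not the actual driver code.)

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/spinlock.h>

#define LS3_REG_INTC_STATUS     0x20    /* placeholder offset; take the real one from the patch */

/* Hypothetical driver state; only intc_base mirrors the quoted code */
struct ls3_intc_priv {
        void __iomem            *intc_base;
        raw_spinlock_t          lock;
        u32                     sw_enable;      /* software copy of the enable mask */
        struct irq_domain       *domain;
};

static void ls3_intc_unmask(struct irq_data *d)
{
        struct ls3_intc_priv *priv = irq_data_get_irq_chip_data(d);
        unsigned long flags;

        raw_spin_lock_irqsave(&priv->lock, flags);
        priv->sw_enable |= BIT(d->hwirq);
        /* ... also set the bit in the hardware enable register here;
         *     the mask callback would clear it symmetrically ... */
        raw_spin_unlock_irqrestore(&priv->lock, flags);
}

static void ls3_intc_dispatch(struct irq_desc *desc)
{
        struct irq_chip *chip = irq_desc_get_chip(desc);
        struct ls3_intc_priv *priv = irq_desc_get_handler_data(desc);
        u32 pending;

        chained_irq_enter(chip, desc);

        /* Qualify the raw status with the cached mask instead of a second MMIO read */
        pending = readl(priv->intc_base + LS3_REG_INTC_STATUS) & priv->sw_enable;

        while (pending) {
                int bit = __ffs(pending);

                generic_handle_irq(irq_find_mapping(priv->domain, bit));
                pending &= ~BIT(bit);
        }

        chained_irq_exit(chip, desc);
}

(The cached mask filters a spuriously-set ISR bit just as well, at the cost of keeping the software copy in sync with what was actually programmed into the hardware; whether the extra readl is really needed is exactly the question raised above.)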
And if this is truly the right way to do it, please document the
various problems with the controller so that we don't break it at a
later time.
Thanks, will do.
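
(As a side note, the documentation being asked for could be as small as a comment above the status read; the wording below is only a guess based on what is said in this thread.)

/*
 * Hypothetical wording: even when a source is masked, the Loongson-3
 * I/O interrupt controller may still set its bit in the ISR
 * (LS3_REG_INTC_STATUS), so the raw status has to be qualified with
 * the enable status (or a software copy of the enable mask) before
 * dispatching.
 */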

Then how come this comes from the irqchip's DT node? This should be
part of the endpoint's interrupt specifier.

In theory it should be. However, if we set different interrupt lines/cores on that controller, interrupts may get lost. That means we can only have a single parent core/interrupt.

So I'd prefer to just set them uniformly via the controller's dt-binding to prevent confusion.

Its parent IRQ (mti,cpu-interrupt-controller) is a percpu IRQ.
But then why is that interrupt described using the "core" property? It
should be an interrupt specifier, just like any other interrupt.
Same reason as above.

By design, it allows us to decide affinity at runtime, but in practice the hardware is seriously broken.
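
(For illustration, taking the parent interrupt from a standard specifier rather than a custom "core"/line property would look roughly like the init sketch below, reusing the hypothetical names from the earlier sketch. The compatible string is made up, the error handling is trimmed, and this does not by itself answer the percpu/affinity question.)

#include <linux/irq.h>
#include <linux/irqchip.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/slab.h>

static int __init ls3_intc_of_init(struct device_node *node,
                                   struct device_node *parent)
{
        struct ls3_intc_priv *priv;     /* hypothetical state from the sketch above */
        int parent_irq;

        /* The parent IRQ comes from the node's own "interrupts" specifier */
        parent_irq = irq_of_parse_and_map(node, 0);
        if (!parent_irq)
                return -ENODEV;

        priv = kzalloc(sizeof(*priv), GFP_KERNEL);
        if (!priv)
                return -ENOMEM;

        raw_spin_lock_init(&priv->lock);

        priv->intc_base = of_iomap(node, 0);
        if (!priv->intc_base) {
                kfree(priv);
                return -ENOMEM;
        }

        /* ... create the irq_domain and register the irq_chip callbacks ... */

        irq_set_chained_handler_and_data(parent_irq, ls3_intc_dispatch, priv);

        return 0;
}
IRQCHIP_DECLARE(ls3_io_intc, "loongson,ls3-io-intc", ls3_intc_of_init);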
I understand the HW is terrible. But the binding looks pretty bad too.
This needs fixing.

M.
--
Jiaxun Yang