Re: [PATCH v3] genirq: avoid long loops in handle_edge_irq
From: Thomas Gleixner
Date: Thu Oct 19 2023 - 04:29:05 EST
On Thu, Oct 19 2023 at 15:02, Wei Gong wrote:
> On Fri, Oct 13, 2023 at 10:44:36AM +0200, Thomas Gleixner wrote:
>> > By maintaining the original loop exit condition, if a mask mismatch is
>> > detected within the loop, we will not perform the unmask_irq operation.
>> > Instead, we will wait until the loop exits before executing unmask_irq.
>> > Could this approach potentially solve the issue of lost interrupts?
>>
>> How so exactly?
>>
> diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
> index 8f8f1d62f..b846659ce 100644
> --- a/kernel/irq/chip.c
> +++ b/kernel/irq/chip.c
> @@ -823,7 +823,9 @@ void handle_edge_irq(struct irq_desc *desc)
>  		 */
>  		if (unlikely(desc->istate & IRQS_PENDING)) {
>  			if (!irqd_irq_disabled(&desc->irq_data) &&
> -			    irqd_irq_masked(&desc->irq_data))
> +			    irqd_irq_masked(&desc->irq_data) &&
> +			    cpumask_test_cpu(smp_processor_id(),
> +					     irq_data_get_effective_affinity_mask(&desc->irq_data)))
>  				unmask_irq(desc);
>  		}
>
> @@ -832,6 +834,10 @@ void handle_edge_irq(struct irq_desc *desc)
>  	} while ((desc->istate & IRQS_PENDING) &&
>  		 !irqd_irq_disabled(&desc->irq_data));
> +
> +	if (!irqd_irq_disabled(&desc->irq_data) &&
> +	    irqd_irq_masked(&desc->irq_data))
> +		unmask_irq(desc);
What is this supposed to solve? The last interrupt has been delivered
already. It's the one which sets the PENDING bit.
Again:
  CPU 0                                 CPU 1

  interrupt #1
    set IN_PROGRESS
    do {
                                        change_affinity_to(CPU1);
      handle_irq_event()
        ack_in_device(interrupt #1)
                                        interrupt #2
                                          set PENDING
                                          mask();
    } while (COND)

    unmask();
The unmask does not make interrupt #2 magically reappear. This is
edge, which is a fire-and-forget mechanism. That's why we have the
PENDING bit logic for edge to ensure that no interrupt is lost.
With your change interrupt #2 is lost forever.
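
For reference, the flow in question looks roughly like this (abridged
sketch of handle_edge_irq() from kernel/irq/chip.c; error paths, stats
and the disabled/no-action checks are trimmed, so don't take it as a
verbatim copy):

	void handle_edge_irq(struct irq_desc *desc)
	{
		raw_spin_lock(&desc->lock);

		/*
		 * Handler already running on another CPU: latch the edge
		 * in software via IRQS_PENDING, mask and ack it at the
		 * irqchip level and bail out.  That PENDING bit is the
		 * only remaining record of interrupt #2 - the hardware
		 * edge itself is gone.
		 */
		if (!irq_may_run(desc)) {
			desc->istate |= IRQS_PENDING;
			mask_ack_irq(desc);
			goto out_unlock;
		}

		desc->irq_data.chip->irq_ack(&desc->irq_data);

		do {
			/*
			 * A concurrent arrival masked the line; reenable
			 * it unless it was disabled in the meantime.
			 */
			if (unlikely(desc->istate & IRQS_PENDING)) {
				if (!irqd_irq_disabled(&desc->irq_data) &&
				    irqd_irq_masked(&desc->irq_data))
					unmask_irq(desc);
			}

			/* This replay is what actually delivers #2 */
			handle_irq_event(desc);

		} while ((desc->istate & IRQS_PENDING) &&
			 !irqd_irq_disabled(&desc->irq_data));

	out_unlock:
		raw_spin_unlock(&desc->lock);
	}

handle_irq_event() is also what clears IRQS_PENDING again, so under
normal operation the latched edge is only ever delivered through this
replay loop.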
So if the device depends on ack_in_device() for being able to send the
next interrupt, then the lack of ack_in_device() for interrupt #2 makes
it stale.
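
To see that failure mode in isolation, here is a tiny user-space toy
model (purely illustrative; struct toy_dev and its helpers are invented
for this sketch and have nothing to do with any real driver or with the
genirq code) of a device that raises the next edge only after the
previous event was acked in the device:

	#include <stdbool.h>
	#include <stdio.h>

	struct toy_dev {
		int  events;	/* events the device still wants to signal */
		bool wait_ack;	/* previous event not yet acked in device  */
	};

	/* Assert an edge for the next event, if the device may do so. */
	static bool toy_raise_edge(struct toy_dev *d)
	{
		if (!d->events || d->wait_ack)
			return false;
		d->events--;
		d->wait_ack = true;
		return true;
	}

	/* The driver's handler, i.e. the ack_in_device() step. */
	static void toy_handle(struct toy_dev *d)
	{
		d->wait_ack = false;
	}

	int main(void)
	{
		struct toy_dev d = { .events = 3, .wait_ack = false };

		/* interrupt #1: delivered, handled, acked in the device */
		if (toy_raise_edge(&d))
			toy_handle(&d);

		/* interrupt #2: delivered, but only latched as PENDING
		 * and never passed to the handler: no ack_in_device() */
		toy_raise_edge(&d);

		/* the device is now stale: event #3 can never be signalled */
		printf("edge for next event raised: %s, events stuck: %d\n",
		       toy_raise_edge(&d) ? "yes" : "no", d.events);
		return 0;
	}

Unmasking the line changes nothing in this model, because the device is
waiting for the ack in the device, not for the unmask.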
It may well be that your particular device does not need the
ack_in_device() sequence, but this is generic core code which has to
work for everything.
I'm still having a hard time understanding why this is such a big
issue at all. What's the practical impact of processing a bunch of
interrupts on CPU0 after changing the affinity to CPU1?
It's obviously suboptimal in terms of locality, but that's a temporary
problem which resolves itself. It's suboptimal, but correct behaviour.
Your attempts at "fixing" it at the core edge handler level result in a
fundamental correctness problem.
There are certainly ways to solve it at that level, but for that you
have to fully understand and accept the fundamental properties and
intricacies of edge triggered interrupts in the first place.
Whether the resulting complexity is worth it is a different
question. As long as there is not a compelling reason to do so, the
answer is simply no.
Thanks,
tglx