Currently the cpus allowed mask for the threaded part of a threaded irq
handler is set to the effective affinity of the hard irq.

Typically the effective affinity of the hard irq covers only a single
cpu, so the threaded handler always runs on the same cpu as the hard
irq.

We have seen scenarios in high data-rate throughput testing where the
cpu handling the interrupt becomes totally saturated handling both the
hard interrupt and the threaded handler parts, limiting throughput.

When the interrupt is managed, allow the threaded part to run on all
cpus in the irq affinity mask.
Signed-off-by: John Garry <john.garry@xxxxxxxxxx>
---
kernel/irq/manage.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 1753486b440c..8e7f8e758a88 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -968,7 +968,11 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
 	if (cpumask_available(desc->irq_common_data.affinity)) {
 		const struct cpumask *m;
 
-		m = irq_data_get_effective_affinity_mask(&desc->irq_data);
+		if (irqd_affinity_is_managed(&desc->irq_data))
+			m = desc->irq_common_data.affinity;
+		else
+			m = irq_data_get_effective_affinity_mask(
+					&desc->irq_data);
 		cpumask_copy(mask, m);
 	} else {
 		valid = false;