On 10/13/2016 11:31 PM, Marc Zyngier wrote:
On Thu, 13 Oct 2016 18:57:14 +0800
Cheng Chao <cs.os.kernel@xxxxxxxxx> wrote:
The GIC can distribute an interrupt to more than one CPU,
but currently gic_set_affinity routes each interrupt to only a single CPU.
What makes you think this is a good idea? What purpose does it serve?
I can only see drawbacks to this: You're waking up more than one CPU,
wasting power, adding jitter and clobbering the cache.
I assume you see a benefit to that approach, so can you please spell it out?
OK, you are right, but performance is another point we should consider.
We use an E1 device to transmit/receive video streams, and we find that the
E1's interrupts land on only one CPU, driving that CPU's usage to almost 100%
while the other CPUs carry a much lower load, so performance suffers.
The CPU is a 4-core part.
It looks to me like you're barking up the wrong tree. We have
NAPI-enabled network drivers for this exact reason, and adding more
interrupts to an already overloaded system doesn't strike me as going in
the right direction. May I suggest that you look at integrating NAPI
into your E1 driver?
So would adding CONFIG_ARM_GIC_AFFINITY_SINGLE_CPU be better?
That way we could make a trade-off between performance and power, etc.
No, that's pretty horrible, and I'm not even going to entertain the idea.
I suggest you start investigating how to mitigate your interrupt
rate instead of just taking more of them.
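Besides NAPI, a common way to mitigate interrupt rate is hardware interrupt coalescing, configured via ethtool. The fragment below is illustrative only: eth0 is a placeholder device name, and whether an E1 framer driver exposes these coalescing knobs at all is an assumption.

```
# Fire the RX interrupt at most once per 100 us or per 32 frames,
# whichever comes first (driver support required)
ethtool -C eth0 rx-usecs 100 rx-frames 32

# Inspect the current coalescing settings
ethtool -c eth0
```

Coalescing trades a bounded amount of latency for far fewer interrupts, which attacks the 100%-CPU problem at its source rather than spreading it across cores.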