Re: [PATCH RFC 1/1] genirq: Make threaded handler use irq affinity for managed interrupt

From: John Garry
Date: Mon Dec 16 2019 - 09:17:46 EST


Hi Marc,


I'm just wondering if non-managed interrupts should be included in
the load balancing calculation? Couldn't irqbalance (if active) start
moving non-managed interrupts around anyway?

But they are, aren't they? See what we do in irq_set_affinity:

+        atomic_inc(per_cpu_ptr(&cpu_lpi_count, cpu));
+        atomic_dec(per_cpu_ptr(&cpu_lpi_count,
+                               its_dev->event_map.col_map[id]));

We don't try to "rebalance" anything based on that though, not that
I think we should.

Ah sorry, I meant to ask whether they should be excluded. In its_irq_domain_activate(), we increment the per-CPU LPI count and also use its_pick_target_cpu() to find the least loaded CPU. I am asking whether we should just stick with the old policy for non-managed interrupts here.
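
For reference, my understanding of that selection is roughly the following
(an illustrative sketch only, not the actual ITS driver code - the helper
name is made up, and it assumes the per-CPU cpu_lpi_count counters from the
patch):

static int pick_least_loaded_cpu(const struct cpumask *mask)
{
        unsigned int cpu, best_cpu = nr_cpu_ids;
        unsigned int best_count = UINT_MAX;

        /* Pick the CPU in the mask with the fewest LPIs
         * currently targeting it. */
        for_each_cpu(cpu, mask) {
                unsigned int count =
                        atomic_read(per_cpu_ptr(&cpu_lpi_count, cpu));

                if (count < best_count) {
                        best_count = count;
                        best_cpu = cpu;
                }
        }

        return best_cpu;
}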

After checking on D05, I see a very significant performance hit to the SAS controller - a ~40% throughput reduction.

With this patch, the effective affinity now targets seemingly "random" CPUs, as opposed to everything just using CPU0. This affects performance.

The difference is that when we use managed interrupts - as for NVMe or the D06 SAS controller - the irq CPU affinity mask matches the CPUs which enqueue requests to the queue associated with the interrupt. So there is an efficiency in enqueueing and dequeueing on the same CPU group - all related to blk multi-queue. This is not the case for non-managed interrupts.
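
For context, the managed case typically comes from the driver asking for
affinity-managed vectors; a rough sketch (assuming a PCI device - pdev and
max_queues here are placeholders):

        /* One non-queue vector (e.g. admin); the rest are spread
         * across CPUs by the core with kernel-managed affinity. */
        struct irq_affinity affd = {
                .pre_vectors = 1,
        };
        int nvecs;

        nvecs = pci_alloc_irq_vectors_affinity(pdev, 2, max_queues + 1,
                                               PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
                                               &affd);

blk-mq then derives its queue-to-CPU mapping from those same affinity masks
(via blk_mq_pci_map_queues()), which is what gives the enqueue/dequeue
locality described above.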


Please give this new patch a shot on your system (my D05 doesn't have
any managed devices):

We could consider supporting platform msi managed interrupts, but I
doubt the value.

It shouldn't be hard to do, and most of the existing code could be
moved to the generic level. As for the value, I'm not convinced
either. For example, D05 uses the MBIGEN as an intermediate interrupt
controller, so the MSIs come from the MBIGEN and not from the SAS device
attached to it. Not the best design...

JFYI, I did raise this topic before, but that's as far as I got:

https://marc.info/?l=linux-block&m=150722088314310&w=2

https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/commit/?h=irq/its-balance-mappings&id=1e987d83b8d880d56c9a2d8a86289631da94e55a


I quickly tested that in my NVMe env, and I saw a performance boost
of 1055K -> 1206K IOPS. Results at bottom.

OK, that's encouraging.

Here's the irq mapping dump:

[...]

Looks good.

I'm still getting the CPU lockup (even on CPUs which have a single
NVMe completion interrupt assigned), which taints these results. That
lockup needs to be fixed.

Is this interrupt screaming to the point where it prevents the completion
thread from making forward progress? What if you don't use threaded
interrupts?

Yeah, just switching to threaded interrupts solves it (the nvme driver has a switch for this). There was a big discussion on this topic a while ago:

https://lkml.org/lkml/2019/8/20/45 (couldn't find this on lore)
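
(For anyone wanting to reproduce this: the switch I mean is the nvme-pci
module parameter, i.e. something like

        modprobe nvme use_threaded_interrupts=1

at module load time.)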

The conclusion there was to switch to irq_poll, but Ming Lei thought that it was another issue - see this earlier mail:

https://lore.kernel.org/lkml/20191210014335.GA25022@xxxxxxxxxx/
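
For reference, the irq_poll alternative discussed there looks roughly like
this (a sketch against the lib/irq_poll.c API; the my_* names are made up):

#include <linux/irq_poll.h>

static struct irq_poll my_iop;

/* Poll callback: consume up to 'budget' completions; tell irq_poll
 * when the queue is drained so the softirq stops rescheduling us. */
static int my_poll(struct irq_poll *iop, int budget)
{
        int done = my_process_completions(budget);      /* made-up helper */

        if (done < budget)
                irq_poll_complete(iop);

        return done;
}

/* Hard IRQ handler: defer the completion work to irq_poll instead
 * of a threaded handler. */
static irqreturn_t my_isr(int irq, void *data)
{
        irq_poll_sched(&my_iop);
        return IRQ_HANDLED;
}

/* Setup, e.g. at probe time: irq_poll_init(&my_iop, 32, my_poll); */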


We'll check our SAS env also. I had already hacked up something
similar to your change, and again we saw a boost there.

OK. Please keep me posted. If the result is overall positive, I'll
push this into -next for some soaking.


OK, thanks

John