Re: [PATCH RFC v2 02/18] irq/dev-msi: Add support for a new DEV_MSI irq domain

From: Thomas Gleixner
Date: Fri Aug 07 2020 - 11:22:48 EST


Jason,

Jason Gunthorpe <jgg@xxxxxxxxxx> writes:
> On Thu, Aug 06, 2020 at 10:21:11PM +0200, Thomas Gleixner wrote:
>
>> Optionally? Please tell the hardware folks to make this mandatory. We
>> have enough pain with non maskable MSI interrupts already so introducing
>> yet another non maskable interrupt trainwreck is not an option.
>
> Can you elaborate on the flows where Linux will need to trigger
> masking?

1) disable/enable_irq() obviously needs masking

2) Affinity changes are preferably done with masking to avoid a
boatload of nasty side effects. We have a "fix" for 32bit addressing
mode which works by chance due to the layout, but it would fail
miserably with 64bit addressing mode. 64bit addressing mode is only
relevant for more than 256 CPUs, which requires X2APIC, which in turn
requires interrupt remapping. Interrupt remapping saves us here
because the interrupt can be disabled at the remapping level.

3) The ability to shut down an irq at the interrupt level in case of
malfunction. Of course that's pure paranoia because devices are
perfect and never misbehave :)

So it's nowhere in the hot path of interrupt handling itself.

> I expect that masking will be available in our NIC HW too - but it
> will require a spin loop if masking has to be done in an atomic
> context.

Yes, it's all in atomic context.

We have functionality in the interrupt core to do #1 and #2 from task
context (requires the caller to be in task context as well). #3 not so
much.

>> It's more than a decade now that I tell HW people not to repeat the
>> non-maskable MSI failure, but obviously they still think that
>> non-maskable interrupts are a brilliant idea. I know that HW folks
>> believe that everything they omit can be fixed in software, but they
>> have to finally understand that this particular issue _cannot_ be fixed
>> at all.
>
> Sure, the CPU should always be able to shut off an interrupt!

Oh yes!

> Maybe explaining the goals would help understand the HW perspective.
>
> Today HW can process > 100k queues of work at once. Interrupt delivery
> works by having a MSI index in each queue's metadata and the interrupt
> indirects through a MSI-X table on-chip which has the
> addr/data/mask/etc.
>
> What IMS proposes is that the interrupt data can move into the queue
> meta data (which is not required to be on-chip), eg along side the
> producer/consumer pointers, and the central MSI-X table is not
> needed. This is necessary because the PCI spec has very harsh design
> requirements for a MSI-X table that make scaling it prohibitive.

I know.

> So an IRQ can be silenced by deleting or stopping the queue(s)
> triggering it.

We cannot do that from the interrupt layer without squaring the
circle and violating all locking and layering rules in one go.

> It can be masked by including masking in the queue metadata. We can
> detect pending by checking the producer/consumer values.
>
> However synchronizing all the HW and all the state is now more
> complicated than just writing a mask bit via MMIO to an on-die memory.

That's one of the reasons why I think that the IMS handling has to be a
per device irqdomain with its own interrupt chip, because the way IMS
is managed is completely device specific.

There is certainly opportunity for sharing some of the functionality and
code, but not by creating a pseudo-shared entity which is customized per
device with indirections and magic storage plus device specific IMS slot
management glued onto it as a wart. Such concepts fall apart in no time
or end up as a completely unmaintainable mess.

Coming back to mask/unmask. We could lift that requirement if and only
if irq remapping is mandatory to make use of those magic devices because
the remapping unit allows us to do the masking. That still would not
justify the pseudo-shared irqdomain because the IMS slot management
still stays per device.

Thanks,

tglx