Re: [PATCH RFC v2 02/18] irq/dev-msi: Add support for a new DEV_MSI irq domain

From: Jason Gunthorpe
Date: Fri Aug 07 2020 - 09:34:38 EST


On Fri, Aug 07, 2020 at 02:38:31PM +0200, gregkh@xxxxxxxxxxxxxxxxxxx wrote:
> On Fri, Aug 07, 2020 at 09:06:50AM -0300, Jason Gunthorpe wrote:
> > On Thu, Aug 06, 2020 at 10:21:11PM +0200, Thomas Gleixner wrote:
> >
> > > Optionally? Please tell the hardware folks to make this mandatory. We
> > > have enough pain with non-maskable MSI interrupts already, so introducing
> > > yet another non-maskable interrupt trainwreck is not an option.
> >
> > Can you elaborate on the flows where Linux will need to trigger
> > masking?
> >
> > I expect that masking will be available in our NIC HW too - but it
> > will require a spin loop if masking has to be done in an atomic
> > context.
> >
> > > It's more than a decade now that I tell HW people not to repeat the
> > > non-maskable MSI failure, but obviously they still think that
> > > non-maskable interrupts are a brilliant idea. I know that HW folks
> > > believe that everything they omit can be fixed in software, but they
> > > have to finally understand that this particular issue _cannot_ be fixed
> > > at all.
> >
> > Sure, the CPU should always be able to shut off an interrupt!
> >
> > Maybe explaining the goals would help understand the HW perspective.
> >
> > Today HW can process > 100k queues of work at once. Interrupt delivery
> > works by having an MSI index in each queue's metadata and the interrupt
> > indirects through an on-chip MSI-X table which has the
> > addr/data/mask/etc.
> >
> > What IMS proposes is that the interrupt data can move into the queue
> > metadata (which is not required to be on-chip), e.g. alongside the
> > producer/consumer pointers, and the central MSI-X table is not
> > needed. This is necessary because the PCI spec has very harsh design
> > requirements for an MSI-X table that make scaling it prohibitive.
> >
> > So an IRQ can be silenced by deleting or stopping the queue(s)
> > triggering it. It can be masked by including masking in the queue
> > metadata. We can detect pending by checking the producer/consumer
> > values.
> >
> > However, synchronizing all the HW and all the state is now more
> > complicated than just writing a mask bit via MMIO to on-die memory.
>
> Because doing all of the work that used to be done in HW in software is
> so much faster and scalable? Feels really wrong to me :(

Yes, it is more scalable. The problem with MSI-X is that you need
actual physical silicon for each and every vector. This really limits
the number of vectors.
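As a rough sketch (mine, not from the patch set) of why each vector
costs silicon: the PCI spec lays out the MSI-X table as a fixed array
of 16-byte entries in a device BAR, one entry per vector:

#include <linux/types.h>

/* One MSI-X table entry as the PCI spec defines it: 16 bytes per vector */
struct msix_table_entry {
	u32 addr_lo;	/* Message Address, low 32 bits  */
	u32 addr_hi;	/* Message Address, high 32 bits */
	u32 data;	/* Message Data                  */
	u32 ctrl;	/* Vector Control, bit 0 = mask  */
};

Scaling that to the >100k queues mentioned above would need ~1.6MB of
dedicated on-die table space plus the pending bit array, and the spec
caps the table at 2048 entries anyway.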

Placing the vector metadata with the queue means it can potentially
live in system memory, which is significantly more scalable.
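As a purely illustrative sketch (field names invented, not taken from
any real device), the point is that the per-queue interrupt storage can
sit in the same context structure as the ring state, which may live in
host memory:

#include <linux/types.h>

/*
 * Hypothetical queue context in system memory. The interrupt addr/data
 * pair lives alongside the ring state instead of in an on-chip MSI-X
 * table entry.
 */
struct hypothetical_queue_ctx {
	u64 ring_base;
	u32 producer_idx;
	u32 consumer_idx;
	u64 irq_addr;	/* message address the device writes */
	u32 irq_data;	/* message data */
	u32 irq_flags;	/* e.g. a per-queue mask bit */
};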

Setup/mask/unmask will be slower, and the driver might have more
complexity, but those are not on the performance path, right?

I don't think it is wrong or right. IMHO the current design, where the
addr/data is hidden inside the platform, is an artifact of x86's
compatibility legacy from back when there was no such thing as message
interrupts.

If you were starting from a green field I don't think a design would
include the IOAPIC/MSI/MSI-X indirection tables.

> Do you all have a pointer to the spec for this newly proposed stuff
> anywhere to try to figure out how the HW wants this to all work?

Intel's SIOV document is an interesting place to start:

https://software.intel.com/content/www/us/en/develop/download/intel-scalable-io-virtualization-technical-specification.html

Though it is more of a rationale and a cookbook on how to combine
existing technology pieces (e.g. PASID, platform_msi, etc.).

The basic approach of SIOV's IMS is that there is no longer a generic
interrupt indirection from numbers to addr/data pairs
(IOAPIC/MSI/MSI-X) owned by the common OS code.

Instead the driver itself is responsible for setting the addr/data pair
in the device in a device-specific way, dealing with masking, etc.
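To make that concrete, here is a rough sketch of what the
device-specific side could look like using the existing platform_msi
write_msi_msg callback; my_dev, my_dev_queue_ctx() and the hypothetical
queue context above are invented for illustration:

#include <linux/device.h>
#include <linux/msi.h>

static void my_ims_write_msg(struct msi_desc *desc, struct msi_msg *msg)
{
	struct my_dev *mdev = dev_get_drvdata(msi_desc_to_dev(desc));
	struct hypothetical_queue_ctx *ctx =
		my_dev_queue_ctx(mdev, desc->platform.msi_index);

	/* Write into driver-owned storage, not a generic MSI-X entry */
	ctx->irq_addr = ((u64)msg->address_hi << 32) | msg->address_lo;
	ctx->irq_data = msg->data;
}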

This lets the device use an implementation that is not limited by the
harsh MSI-X semantics.

In Linux we already have 'IMS'; it is called platform_msi, and a few
embedded drivers already work like this. The idea here is to bring it
to PCI.
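For example, a simplified sketch of how today's embedded users wire it
up (error handling trimmed, names invented): the driver allocates its
vectors from the platform MSI domain and supplies the callback above:

static int my_setup_queue_irqs(struct device *dev, unsigned int nr_queues)
{
	int ret;

	ret = platform_msi_domain_alloc_irqs(dev, nr_queues,
					     my_ims_write_msg);
	if (ret)
		return ret;

	/* Each allocated msi_desc now has an irq usable with request_irq() */
	return 0;
}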

Jason