Re: [PATCH] PCI: Fix devres regression in pci_intx()
From: Andy Shevchenko
Date: Wed Sep 04 2024 - 16:25:08 EST
Wed, Sep 04, 2024 at 12:07:21PM -0600, Alex Williamson wrote:
> On Wed, 04 Sep 2024 15:37:25 +0200
> Philipp Stanner <pstanner@xxxxxxxxxx> wrote:
> > On Wed, 2024-09-04 at 17:25 +0900, Damien Le Moal wrote:
...
> > If vfio-pci can get rid of pci_intx() altogether, that might be a good
> > thing. As far as I understood Andy Shevchenko, pci_intx() is outdated.
> > There's only a handful of users anyway.
>
> What's the alternative?
From an API perspective, pci_alloc_irq_vectors() & Co should be used.
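
A minimal sketch of that flow (my own illustration; the single-vector
assumption and the handler/function names are hypothetical):

#include <linux/interrupt.h>
#include <linux/pci.h>

/* Hypothetical handler, purely for illustration. */
static irqreturn_t my_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int my_driver_request_irq(struct pci_dev *pdev)
{
	int nvec, ret;

	/* One vector is enough here; try MSI-X, then MSI, then INTx. */
	nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
	if (nvec < 0)
		return nvec;

	/* pci_irq_vector() maps vector 0 to a Linux IRQ number. */
	ret = request_irq(pci_irq_vector(pdev, 0), my_handler, 0,
			  "my_driver", pdev);
	if (ret)
		pci_free_irq_vectors(pdev);

	return ret;
}
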
> vfio-pci has a potentially unique requirement
> here: we don't know how to handle the device interrupt, we only forward
> it to the userspace driver. As a level-triggered interrupt, INTx will
> continue to assert until that userspace driver handles the device.
> That's obviously unacceptable from a host perspective, so INTx is
> masked at the device via pci_intx() where available, or at the
> interrupt controller otherwise. The API with the userspace driver
> requires that driver to unmask the interrupt, again resulting in a call
> to pci_intx() or unmasking the interrupt controller, in order to receive
> further interrupts from the device. Thanks,
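
(For context, the masking pattern described above looks roughly like the
sketch below. This is a simplified illustration of mine, not vfio-pci's
actual code; the structure, fields and helper names are made up.)

#include <linux/interrupt.h>
#include <linux/pci.h>

/* Hypothetical per-device state; vfio-pci's real structures differ. */
struct my_vdev {
	struct pci_dev *pdev;
	bool pci_2_3;		/* device honours the DisINTx bit */
};

static irqreturn_t my_intx_handler(int irq, void *data)
{
	struct my_vdev *vdev = data;

	if (vdev->pci_2_3)
		pci_intx(vdev->pdev, 0);	/* mask at the device */
	else
		disable_irq_nosync(irq);	/* mask at the irqchip */

	/* ... signal the userspace driver, e.g. via an eventfd ... */

	return IRQ_HANDLED;
}

/* Called when the userspace driver unmasks the interrupt. */
static void my_intx_unmask(struct my_vdev *vdev, int irq)
{
	if (vdev->pci_2_3)
		pci_intx(vdev->pdev, 1);	/* re-enable INTx */
	else
		enable_irq(irq);
}
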
I briefly read the discussion, and if I understand it correctly the problem here
is in the flow, i.e. when the above-mentioned API is being called. Hence it's a
design (or architectural) level issue, and changing a call from foo() to bar()
won't magically make the problem go away. But I might be mistaken.
--
With Best Regards,
Andy Shevchenko