[tip: timers/urgent] PCI/MSI: Handle the NOMASK flag correctly for all PCI/MSI backends
From: tip-bot2 for Thomas Gleixner
Date: Wed Mar 26 2025 - 10:42:54 EST
The following commit has been merged into the timers/urgent branch of tip:
Commit-ID: 3ece3e8e5976c49c3f887e5923f998eabd54ff40
Gitweb: https://git.kernel.org/tip/3ece3e8e5976c49c3f887e5923f998eabd54ff40
Author: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
AuthorDate: Wed, 26 Mar 2025 13:05:35 +01:00
Committer: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
CommitterDate: Wed, 26 Mar 2025 15:28:43 +01:00
PCI/MSI: Handle the NOMASK flag correctly for all PCI/MSI backends
The conversion of the XEN-specific global variable pci_msi_ignore_mask to an
MSI domain flag missed two facts:
1) Legacy architectures do not provide an interrupt domain
2) Parent MSI domains do not necessarily have a domain info attached
Both cases result in an unconditional NULL pointer dereference. This was
unfortunately missed in review, and testing revealed it only late.
Cure this by using the existing pci_msi_domain_supports() helper, which
handles all possible cases correctly.
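For illustration only, below is a minimal sketch of the kind of NULL-safe
lookup such a helper has to perform. The function name
msi_flag_no_mask_supported() is hypothetical and the body is an assumption
based on the described failure modes; the real pci_msi_domain_supports()
additionally takes an ALLOW_LEGACY/DENY_LEGACY mode argument and is the
authoritative implementation.

#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <linux/pci.h>

/*
 * Illustrative sketch only: the removed "d->host_data" pattern blindly
 * dereferences both the domain pointer and its info, which is exactly
 * what blows up on legacy architectures and on parent MSI domains.
 */
static bool msi_flag_no_mask_supported(struct pci_dev *pdev)
{
	struct irq_domain *domain = dev_get_msi_domain(&pdev->dev);
	const struct msi_domain_info *info;

	/* 1) Legacy architectures: no interrupt domain at all.
	 * Treating "no domain" as unsupported matches DENY_LEGACY semantics.
	 */
	if (!domain)
		return false;

	/* 2) Parent MSI domains: flags live in msi_parent_ops, not host_data */
	if (irq_domain_is_msi_parent(domain))
		return !!(domain->msi_parent_ops->supported_flags & MSI_FLAG_NO_MASK);

	info = domain->host_data;
	if (!info)
		return false;

	return !!(info->flags & MSI_FLAG_NO_MASK);
}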
Fixes: c3164d2e0d18 ("PCI/MSI: Convert pci_msi_ignore_mask to per MSI domain flag")
Reported-by: Daniel Gomez <da.gomez@xxxxxxxxxx>
Reported-by: Borislav Petkov <bp@xxxxxxxxx>
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Reviewed-by: Juergen Gross <jgross@xxxxxxxx>
Tested-by: Marek Szyprowski <m.szyprowski@xxxxxxxxxxx>
Tested-by: Borislav Petkov <bp@xxxxxxxxx>
Tested-by: Daniel Gomez <da.gomez@xxxxxxxxxx>
Link: https://lore.kernel.org/all/87iknwyp2o.ffs@tglx
Closes: https://lore.kernel.org/all/qn7fzggcj6qe6r6gdbwcz23pzdz2jx64aldccmsuheabhmjgrt@tawf5nfwuvw7
---
drivers/pci/msi/msi.c | 18 ++++++------------
1 file changed, 6 insertions(+), 12 deletions(-)
diff --git a/drivers/pci/msi/msi.c b/drivers/pci/msi/msi.c
index d741628..7058d59 100644
--- a/drivers/pci/msi/msi.c
+++ b/drivers/pci/msi/msi.c
@@ -285,8 +285,6 @@ static void pci_msi_set_enable(struct pci_dev *dev, int enable)
static int msi_setup_msi_desc(struct pci_dev *dev, int nvec,
struct irq_affinity_desc *masks)
{
- const struct irq_domain *d = dev_get_msi_domain(&dev->dev);
- const struct msi_domain_info *info = d->host_data;
struct msi_desc desc;
u16 control;
@@ -297,7 +295,7 @@ static int msi_setup_msi_desc(struct pci_dev *dev, int nvec,
/* Lies, damned lies, and MSIs */
if (dev->dev_flags & PCI_DEV_FLAGS_HAS_MSI_MASKING)
control |= PCI_MSI_FLAGS_MASKBIT;
- if (info->flags & MSI_FLAG_NO_MASK)
+ if (pci_msi_domain_supports(dev, MSI_FLAG_NO_MASK, DENY_LEGACY))
control &= ~PCI_MSI_FLAGS_MASKBIT;
desc.nvec_used = nvec;
@@ -604,20 +602,18 @@ static void __iomem *msix_map_region(struct pci_dev *dev,
*/
void msix_prepare_msi_desc(struct pci_dev *dev, struct msi_desc *desc)
{
- const struct irq_domain *d = dev_get_msi_domain(&dev->dev);
- const struct msi_domain_info *info = d->host_data;
-
desc->nvec_used = 1;
desc->pci.msi_attrib.is_msix = 1;
desc->pci.msi_attrib.is_64 = 1;
desc->pci.msi_attrib.default_irq = dev->irq;
desc->pci.mask_base = dev->msix_base;
- desc->pci.msi_attrib.can_mask = !(info->flags & MSI_FLAG_NO_MASK) &&
- !desc->pci.msi_attrib.is_virtual;
- if (desc->pci.msi_attrib.can_mask) {
+
+ if (!pci_msi_domain_supports(dev, MSI_FLAG_NO_MASK, DENY_LEGACY) &&
+ !desc->pci.msi_attrib.is_virtual) {
void __iomem *addr = pci_msix_desc_addr(desc);
+ desc->pci.msi_attrib.can_mask = 1;
desc->pci.msix_ctrl = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL);
}
}
@@ -715,8 +711,6 @@ static int msix_setup_interrupts(struct pci_dev *dev, struct msix_entry *entries
static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
int nvec, struct irq_affinity *affd)
{
- const struct irq_domain *d = dev_get_msi_domain(&dev->dev);
- const struct msi_domain_info *info = d->host_data;
int ret, tsize;
u16 control;
@@ -747,7 +741,7 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
/* Disable INTX */
pci_intx_for_msi(dev, 0);
- if (!(info->flags & MSI_FLAG_NO_MASK)) {
+ if (!pci_msi_domain_supports(dev, MSI_FLAG_NO_MASK, DENY_LEGACY)) {
/*
* Ensure that all table entries are masked to prevent
* stale entries from firing in a crash kernel.