Re: [EXT] Re: [PATCH] bus: fsl-mc: Add ACPI support for fsl-mc

From: Robin Murphy
Date: Fri Jan 31 2020 - 08:39:23 EST


On 2020-01-31 1:11 pm, Jon Nettleton wrote:
On Fri, Jan 31, 2020 at 1:48 PM Robin Murphy <robin.murphy@xxxxxxx> wrote:

On 2020-01-31 12:28 pm, Jon Nettleton wrote:
On Fri, Jan 31, 2020 at 1:02 PM Ard Biesheuvel
<ard.biesheuvel@xxxxxxxxxx> wrote:

On Fri, 31 Jan 2020 at 12:06, Marc Zyngier <maz@xxxxxxxxxx> wrote:

On 2020-01-31 10:35, Makarand Pawagi wrote:
-----Original Message-----
From: Lorenzo Pieralisi <lorenzo.pieralisi@xxxxxxx>
Sent: Tuesday, January 28, 2020 4:39 PM
To: Makarand Pawagi <makarand.pawagi@xxxxxxx>
Cc: netdev@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; linux-arm-
kernel@xxxxxxxxxxxxxxxxxxx; linux-acpi@xxxxxxxxxxxxxxx;
linux@xxxxxxxxxxxxxxx;
jon@xxxxxxxxxxxxx; Cristi Sovaiala <cristian.sovaiala@xxxxxxx>;
Laurentiu
Tudor <laurentiu.tudor@xxxxxxx>; Ioana Ciornei
<ioana.ciornei@xxxxxxx>;
Varun Sethi <V.Sethi@xxxxxxx>; Calvin Johnson
<calvin.johnson@xxxxxxx>;
Pankaj Bansal <pankaj.bansal@xxxxxxx>; guohanjun@xxxxxxxxxx;
sudeep.holla@xxxxxxx; rjw@xxxxxxxxxxxxx; lenb@xxxxxxxxxx;
stuyoder@xxxxxxxxx; tglx@xxxxxxxxxxxxx; jason@xxxxxxxxxxxxxx;
maz@xxxxxxxxxx; shameerali.kolothum.thodi@xxxxxxxxxx; will@xxxxxxxxxx;
robin.murphy@xxxxxxx; nleeder@xxxxxxxxxxxxxx
Subject: [EXT] Re: [PATCH] bus: fsl-mc: Add ACPI support for fsl-mc


On Tue, Jan 28, 2020 at 01:38:45PM +0530, Makarand Pawagi wrote:
ACPI support is added in the fsl-mc driver. The driver will parse the MC DSDT
table to extract memory and other resources.

Interrupt (GIC ITS) information will be extracted from the MADT table by
drivers/irqchip/irq-gic-v3-its-fsl-mc-msi.c.

The IORT table will be parsed to configure DMA.

Signed-off-by: Makarand Pawagi <makarand.pawagi@xxxxxxx>
---
drivers/acpi/arm64/iort.c | 53 +++++++++++++++++++++
drivers/bus/fsl-mc/dprc-driver.c | 3 +-
drivers/bus/fsl-mc/fsl-mc-bus.c | 48 +++++++++++++------
drivers/bus/fsl-mc/fsl-mc-msi.c | 10 +++-
drivers/bus/fsl-mc/fsl-mc-private.h | 4 +-
drivers/irqchip/irq-gic-v3-its-fsl-mc-msi.c | 71 ++++++++++++++++++++++++++++-
include/linux/acpi_iort.h | 5 ++
7 files changed, 174 insertions(+), 20 deletions(-)

diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
index 33f7198..beb9cd5 100644
--- a/drivers/acpi/arm64/iort.c
+++ b/drivers/acpi/arm64/iort.c
@@ -15,6 +15,7 @@
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/pci.h>
+#include <linux/fsl/mc.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

@@ -622,6 +623,29 @@ static int iort_dev_find_its_id(struct device *dev, u32 req_id,
 }

 /**
+ * iort_get_fsl_mc_device_domain() - Find MSI domain related to a device
+ * @dev: The device.
+ * @mc_icid: ICID for the fsl_mc device.
+ *
+ * Returns: the MSI domain for this device, NULL otherwise
+ */
+struct irq_domain *iort_get_fsl_mc_device_domain(struct device *dev,
+						 u32 mc_icid)
+{
+	struct fwnode_handle *handle;
+	int its_id;
+
+	if (iort_dev_find_its_id(dev, mc_icid, 0, &its_id))
+		return NULL;
+
+	handle = iort_find_domain_token(its_id);
+	if (!handle)
+		return NULL;
+
+	return irq_find_matching_fwnode(handle, DOMAIN_BUS_FSL_MC_MSI);
+}

NAK

I am not willing to take platform-specific code in the generic IORT
layer.

ACPI on ARM64 works on platforms that comply with SBSA/SBBR guidelines:

https://developer.arm.com/architectures/platform-design/server-systems

Deviating from those requires butchering ACPI specifications (i.e. IORT)
and related kernel code, which goes totally against what ACPI is meant
for on ARM64 systems, so there is no upstream pathway for this code,
I am afraid.

The reason for adding this platform-specific function in the generic
IORT layer is that iort_get_device_domain() only deals with the PCI bus
(DOMAIN_BUS_PCI_MSI).

When fsl-mc objects are probed, they need to find the irq_domain
associated with the fsl-mc bus (DOMAIN_BUS_FSL_MC_MSI). That will not be
possible without this function, because the IORT layer exports no other
suitable APIs to do the job.

I think we all understood the patch. What both Lorenzo and I are saying
is that we do not want non-PCI support in IORT.


IORT supports platform devices (aka named components) as well, and
there is some support for platform MSIs in the GIC layer.

So it may be possible to hide your exotic bus from the OS entirely,
and make the firmware instantiate a DSDT with device objects and
associated IORT nodes that describe whatever lives on that bus as
named components.

That way, you will not have to change the OS at all, so your hardware
will not only be supported in linux v5.7+, it will also be supported
by OSes that commercial distro vendors are shipping today. *That* is
the whole point of using ACPI.

If you are going to bother modifying the OS anyway, you lose this
advantage, and ACPI gives you no benefit over DT at all.

You beat me to it, but thanks for the clarification, Ard. Nowhere in
the SBSA spec that I have read does it state that only PCIe devices
are supported by the SMMU. It uses PCIe devices as an example, but
the SMMU section is very generic in its terms and only says "devices".

I feel the SBSA spec's omission of SerDes best practices is an oversight
and something that probably needs to be revisited. Forcing high-speed
networking interfaces to be hung off a bus just for the sake of having a
"standard" PCIe interface seems like a step backward in this regard. I
would much rather have the spec include a common standard that could be
exposed in a consistent manner. But this is a conversation for a
different place.

Just to clarify further, it's not about serdes or high-speed networking
per se - describing a fixed-function network adapter as a named
component is entirely within scope. The issue is when the hardware is
merely a pool of accelerator components that can be dynamically
configured at runtime into something that looks like one or more
'virtual' network adapters - there is no standard interface for *that*
for SBSA to consider.

Robin.


I will work with NXP and find a better way to implement this.

-Jon


But by design SFP, SFP+, and QSFP cages are not fixed function network
adapters. They are physical and logical devices that can adapt to
what is plugged into them. How the devices are exposed should be
irrelevant to this conversation; it is about the underlying
connectivity.

Apologies - I was under the impression that SFP and friends were a physical-layer thing, and that a MAC in the SoC would still be fixed such that its DMA and interrupt configuration could be statically described regardless of what transceiver was plugged in (even if some configurations might not use every interrupt/stream ID/etc.). If that isn't the case I shall go and educate myself further.

For instance, if this were an accelerator block on a
PCIe card then we wouldn't be having this discussion, even if it did
run a firmware and have a third party driver that exposed virtual
network interfaces.

Right, because in that case the interrupts and DMA have to travel through the PCIe layer, and thus generic code only needs to worry about things from the point of the PCI host bridge. That's rather the point of having an industry-standard interface.

Robin.