Hi Martin,
Almost all the drivers are following the same way; I don't see any issue with this approach.
Thanks for your valuable comments. I reply to some of them below.
Regards,
Chuanhua
On 8/25/2019 5:03 AM, Martin Blumenstingl wrote:
Hi Dilip,
first of all: thank you for submitting this upstream!
I hope that we can use this driver to replace the out-of-tree PCIe
driver that's used in OpenWrt for the Lantiq VRX200 SoCs.
a small disclaimer: I don't have access to any Lantiq, Intel or
DesignWare datasheets. so everything I write below is based on my own
understanding of the Tegra public datasheet (which describes the
DesignWare PCIe registers) as well as the public Lantiq UGW code drops
with out-of-tree drivers for an older version of this PCIe IP.
thus some of my statements below may be wrong - in this case I would
appreciate an explanation of how the hardware really works.
please keep me CC'ed on further revisions of this series. I am highly
interested in making this driver work on the Lantiq VRX200 SoCs once
the initial version has landed in linux-next.
+config PCIE_INTEL_AXI
+	bool "Intel AHB/AXI PCIe host controller support"
I believe that this is mostly the same IP block as the one used in the Lantiq
(xDSL) VRX200 SoCs (with MIPS cores), which was introduced in 2010
(before Intel acquired Lantiq).
This is why I would have personally called the driver PCIE_LANTIQ.
The VRX200 SoC (internally called VR9) was the first PCIe SoC product; it used the Synopsys
controller v3.30a. It only supports PCIe Gen1.1/1.0. The PHY is an internal PHY from Infineon.
After that, we had other PCIe 1.1/1.0 products such as ARX300/390. However, these products are EOL,
which is why we don't put effort into supporting VRX200/ARX300/390.
PCIE_LANTIQ was also a name used internally in the product, based on linux-3.10.x.
no need for IOP
[...]
+#define PCIE_CCRID				0x8
+
+#define PCIE_LCAP				0x7C
+#define PCIE_LCAP_MAX_LINK_SPEED		GENMASK(3, 0)
+#define PCIE_LCAP_MAX_LENGTH_WIDTH		GENMASK(9, 4)
+
+/* Link Control and Status Register */
+#define PCIE_LCTLSTS				0x80
+#define PCIE_LCTLSTS_ASPM_ENABLE		GENMASK(1, 0)
+#define PCIE_LCTLSTS_RCB128			BIT(3)
+#define PCIE_LCTLSTS_LINK_DISABLE		BIT(4)
+#define PCIE_LCTLSTS_COM_CLK_CFG		BIT(6)
+#define PCIE_LCTLSTS_HW_AW_DIS			BIT(9)
+#define PCIE_LCTLSTS_LINK_SPEED			GENMASK(19, 16)
+#define PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH	GENMASK(25, 20)
+#define PCIE_LCTLSTS_SLOT_CLK_CFG		BIT(28)
+
+#define PCIE_LCTLSTS2				0xA0
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED		GENMASK(3, 0)
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED_25GT	0x1
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED_5GT	0x2
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED_8GT	0x3
+#define PCIE_LCTLSTS2_TGT_LINK_SPEED_16GT	0x4
+#define PCIE_LCTLSTS2_HW_AUTO_DIS		BIT(5)
+
+/* Ack Frequency Register */
+#define PCIE_AFR				0x70C
+#define PCIE_AFR_FTS_NUM			GENMASK(15, 8)
+#define PCIE_AFR_COM_FTS_NUM			GENMASK(23, 16)
+#define PCIE_AFR_GEN12_FTS_NUM_DFT		(SZ_128 - 1)
+#define PCIE_AFR_GEN3_FTS_NUM_DFT		180
+#define PCIE_AFR_GEN4_FTS_NUM_DFT		196
+
+#define PCIE_PLCR_DLL_LINK_EN			BIT(5)
+#define PCIE_PORT_LOGIC_FTS			GENMASK(7, 0)
+#define PCIE_PORT_LOGIC_DFT_FTS_NUM		(SZ_128 - 1)
+
+#define PCIE_MISC_CTRL				0x8BC
+#define PCIE_MISC_CTRL_DBI_RO_WR_EN		BIT(0)
+
+#define PCIE_MULTI_LANE_CTRL			0x8C0
+#define PCIE_UPCONFIG_SUPPORT			BIT(7)
+#define PCIE_DIRECT_LINK_WIDTH_CHANGE		BIT(6)
+#define PCIE_TARGET_LINK_WIDTH			GENMASK(5, 0)
+
+#define PCIE_IOP_CTRL				0x8C4
+#define PCIE_IOP_RX_STANDBY_CTRL		GENMASK(6, 0)
are you sure that you need any of the registers above?
as far as I can tell most (all?) of them are part of the DesignWare
register set. why doesn't pcie-designware-host initialize these?
I don't have any datasheet or register documentation for the DesignWare
PCIe core. In my own experiment from a month ago on the Lantiq VRX200
SoC I didn't need any of this: [0]
As I mentioned, VRX200 was a very old PCIe Gen1.1 product. In our latest SoC, Lightning
Mountain, we are using the Synopsys controller v5.20/5.50a. We support Gen2 (XRX350/550),
Gen3 (PRX300) and Gen4 (x86-based SoC). We also support dual-lane and single-lane configurations.
Some of the above registers are needed to control FTS, link width and link speed.
this also makes me wonder if various functions below like
intel_pcie_link_setup() and intel_pcie_max_speed_setup() (and probably
more) are really needed or if it's just cargo cult / copy paste from
an out-of-tree driver.
intel_pcie_link_setup() records the maximum speed and link width. We need these
to change the speed and link width on the fly, which is not supported by the common
dwc driver code.
There are two major purposes:
1. For cable applications, they have a battery support mode. In this case, the link has to
switch from x2/Gen4 to x1/Gen1 on the fly.
2. Some customers have high EMI issues. They can try switching to a lower speed and
lower link width on the fly to check. Otherwise, they have to change the device tree
and reboot the system.
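To make the idea concrete, here is a minimal sketch of what the on-the-fly speed change boils down to. It reuses pcie_rc_cfg_wr_mask() and the PCIE_LCTLSTS2 defines from this patch, but the helper name and the exact retrain sequence are assumptions, not the actual implementation:

#include <linux/bitfield.h>

/* Sketch only: program a new target link speed; the existing
 * speed-change disable/enable helpers would then retrain the link. */
static void intel_pcie_set_target_speed(struct intel_pcie_port *lpp, u32 gen)
{
	pcie_rc_cfg_wr_mask(lpp, PCIE_LCTLSTS2_TGT_LINK_SPEED,
			    FIELD_PREP(PCIE_LCTLSTS2_TGT_LINK_SPEED, gen),
			    PCIE_LCTLSTS2);
}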
Agree.
+/* APP RC Core Control Register */
+#define PCIE_RC_CCR				0x10
+#define PCIE_RC_CCR_LTSSM_ENABLE		BIT(0)
+#define PCIE_DEVICE_TYPE			GENMASK(7, 4)
+#define PCIE_RC_CCR_RC_MODE			BIT(2)
+
+/* PCIe Message Control */
+#define PCIE_MSG_CR				0x30
+#define PCIE_XMT_PM_TURNOFF			BIT(0)
+
+/* PCIe Power Management Control */
+#define PCIE_PMC				0x44
+#define PCIE_PM_IN_L2				BIT(20)
+
+/* Interrupt Enable Register */
+#define PCIE_IRNEN				0xF4
+#define PCIE_IRNCR				0xF8
+#define PCIE_IRN_AER_REPORT			BIT(0)
+#define PCIE_IRN_PME				BIT(2)
+#define PCIE_IRN_HOTPLUG			BIT(3)
+#define PCIE_IRN_RX_VDM_MSG			BIT(4)
+#define PCIE_IRN_PM_TO_ACK			BIT(9)
+#define PCIE_IRN_PM_TURNOFF_ACK			BIT(10)
+#define PCIE_IRN_LINK_AUTO_BW_STATUS		BIT(11)
+#define PCIE_IRN_BW_MGT				BIT(12)
+#define PCIE_IRN_WAKEUP				BIT(17)
+#define PCIE_IRN_MSG_LTR			BIT(18)
+#define PCIE_IRN_SYS_INT			BIT(28)
+#define PCIE_IRN_SYS_ERR_RC			BIT(29)
+
+#define PCIE_IRN_IR_INT	(PCIE_IRN_AER_REPORT | PCIE_IRN_PME | \
+			 PCIE_IRN_RX_VDM_MSG | PCIE_IRN_SYS_ERR_RC | \
+			 PCIE_IRN_PM_TO_ACK | PCIE_IRN_LINK_AUTO_BW_STATUS | \
+			 PCIE_IRN_BW_MGT | PCIE_IRN_MSG_LTR)
I would rename all of the APP register #defines to match the pattern
PCIE_APP_*.
That makes it easy to differentiate the PCIe (DBI) registers from the
APP registers.
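For illustration, the rename would look like this (names made up following that pattern):

#define PCIE_APP_RC_CCR			0x10
#define PCIE_APP_RC_CCR_LTSSM_ENABLE	BIT(0)
#define PCIE_APP_IRNEN			0xF4
#define PCIE_APP_IRNCR			0xF8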
[...]
+static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
+{
+	return readl(lpp->app_base + ofs);
+}
+
+static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 val, u32 ofs)
+{
+	writel(val, lpp->app_base + ofs);
+}
+
+static void pcie_app_wr_mask(struct intel_pcie_port *lpp,
+			     u32 mask, u32 val, u32 ofs)
+{
+	pcie_update_bits(lpp->app_base, mask, val, ofs);
+}
do you have plans to support the MIPS SoCs (VRX200, ARX300, XRX350,
XRX550)?
These will need register writes in big endian. in my own experiment [0]
I simply used the regmap interface which will default to little endian
register access but switch to big endian when the devicetree node is
marked with big-endian.
[...]
We can support up to XRX350/XRX500/PRX300 for the MIPS SoCs since we still
sell these products. However, we put no effort into supporting EOL products
such as VRX200, which also make the driver quite complex because of the glue
logic (reset, clock and endianness). For MIPS-based platforms, we have
endianness control in the device tree, such as inbound_swap and outbound_swap.
For VRX200, we have another big concern: PCI and PCIe are coupled
for endianness, which makes things very bad.
However, if you are interested in supporting VRX200, it is highly appreciated.
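For reference, the regmap-based access Martin describes would look roughly like this. This is only a sketch: the app_regmap member is assumed, and it relies on regmap-mmio honoring a "big-endian" devicetree property when the config leaves the endianness at its default:

#include <linux/regmap.h>

static const struct regmap_config intel_pcie_app_regmap_cfg = {
	.reg_bits	= 32,
	.val_bits	= 32,
	.reg_stride	= 4,
	/* endianness left at default so a "big-endian" DT property wins */
};

/* app_base ioremapped earlier, e.g. via devm_platform_ioremap_resource() */
lpp->app_regmap = devm_regmap_init_mmio(dev, app_base,
					&intel_pcie_app_regmap_cfg);
if (IS_ERR(lpp->app_regmap))
	return PTR_ERR(lpp->app_regmap);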
+static int intel_pcie_bios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+{
+	struct pcie_port *pp = dev->bus->sysdata;
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct intel_pcie_port *lpp = dev_get_drvdata(pci->dev);
+	struct device *pdev = lpp->pci->dev;
+	u32 irq_bit;
+	int irq;
+
+	if (pin == PCI_INTERRUPT_UNKNOWN || pin > PCI_NUM_INTX) {
+		dev_warn(pdev, "WARNING: dev %s: invalid interrupt pin %d\n",
+			 pci_name(dev), pin);
+		return -1;
+	}
+	irq = of_irq_parse_and_map_pci(dev, slot, pin);
+	if (!irq) {
+		dev_err(pdev, "trying to map irq for unknown slot:%d pin:%d\n",
+			slot, pin);
+		return -1;
+	}
+	/* Pin to irq offset bit position */
+	irq_bit = BIT(pin + PCIE_INTX_OFFSET);
+
+	/* Clear possible pending interrupts first */
+	pcie_app_wr(lpp, irq_bit, PCIE_IRNCR);
+
+	pcie_app_wr_mask(lpp, irq_bit, irq_bit, PCIE_IRNEN);
+	return irq;
+}
my interpretation is that there's an interrupt controller embedded into
the APP registers. The integrated interrupt controller takes 5
interrupts and provides the legacy PCI_INTA/B/C/D interrupts as well as
a WAKEUP IRQ. Each of its own interrupts is tied to one of the parent
interrupts.
For MIPS-based SoCs, there is no interrupt controller for such APP registers.
They are directly connected to the centralized PIC (ICU for VRX200/ARX300, GIC
for XRX500/PRX300, IOAPIC for Lightning Mountain). That is why we don't
implement an interrupt controller here.
my solution (in the experiment on the VRX200 SoC [1]) was to implement an
interrupt controller and have it as a child devicetree node. then I used
interrupt-map to route the interrupts to the PCIe interrupt controller.
with that I didn't have to overwrite .map_irq.
(note that this comment is only related to the PCI_INTx and WAKE
interrupts but not the other interrupt configuration bits, because as
far as I understand the other ones are only related to the controller)
+static void intel_pcie_bridge_class_code_setup(struct intel_pcie_port *lpp)
+{
+	pcie_rc_cfg_wr_mask(lpp, PCIE_MISC_CTRL_DBI_RO_WR_EN,
+			    PCIE_MISC_CTRL_DBI_RO_WR_EN, PCIE_MISC_CTRL);
+	pcie_rc_cfg_wr_mask(lpp, 0xffffff00, PCI_CLASS_BRIDGE_PCI << 16,
+			    PCIE_CCRID);
+	pcie_rc_cfg_wr_mask(lpp, PCIE_MISC_CTRL_DBI_RO_WR_EN, 0,
+			    PCIE_MISC_CTRL);
+}
in my own experiments I didn't need this - have you confirmed that it's
required? and if it is required: why is that?
if others are curious as well then maybe add the explanation as a comment
to the driver
[...]
This is needed. In the old driver, we fixed this with a fixup. The original comment was as follows:
/*
 * The root complex has a hardwired class of PCI_CLASS_NETWORK_OTHER or
 * PCI_CLASS_BRIDGE_HOST; when it is operating as a root complex this
 * needs to be switched to PCI_CLASS_BRIDGE_PCI
 */
+static const char *pcie_link_gen_to_str(int gen)
+{
+	switch (gen) {
+	case PCIE_LINK_SPEED_GEN1:
+		return "2.5";
+	case PCIE_LINK_SPEED_GEN2:
+		return "5.0";
+	case PCIE_LINK_SPEED_GEN3:
+		return "8.0";
+	case PCIE_LINK_SPEED_GEN4:
+		return "16.0";
+	default:
+		return "???";
+	}
+}
why duplicate PCIE_SPEED2STR from drivers/pci/pci.h?
Great! Even link_device_setup can be reduced since pcie_get_speed_cap()
implements similar functionality.
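A sketch of what that reduction could look like, assuming the driver includes the PCI core's private header (as several controller drivers already do) and that pcie_link_speed[] is used to map the register field to an enum pci_bus_speed; this is an assumption, not the final code:

#include "../../pci.h"	/* PCIE_SPEED2STR(), pcie_link_speed[] */

/* gen is the 4-bit current link speed field read from PCIE_LCTLSTS */
static const char *intel_pcie_speed_str(u32 gen)
{
	return PCIE_SPEED2STR(pcie_link_speed[gen]);
}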
+static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
+{
+	struct device *dev = lpp->pci->dev;
+	int ret = 0;
+
+	lpp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
+	if (IS_ERR(lpp->reset_gpio)) {
+		ret = PTR_ERR(lpp->reset_gpio);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to request PCIe GPIO: %d\n", ret);
+		return ret;
+	}
+	/* Make initial reset last for 100ms */
+	msleep(100);
why is there lpp->rst_interval when you hardcode 100ms here?
They serve different purposes. rst_interval is purely for the asserted reset pulse.
The 100ms here is to make sure the initial state is kept for at least 100ms; after that we
can reset.
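In other words, rst_interval only controls the width of the asserted reset pulse, roughly like this (a sketch; the helper name is assumed, not taken from the patch):

static void intel_pcie_device_rst_pulse(struct intel_pcie_port *lpp)
{
	gpiod_set_value_cansleep(lpp->reset_gpio, 1);	/* assert PERST# */
	msleep(lpp->rst_interval);			/* configurable pulse width */
	gpiod_set_value_cansleep(lpp->reset_gpio, 0);	/* deassert PERST# */
}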
[...]
+static int intel_pcie_setup_irq(struct intel_pcie_port *lpp)
+{
+	struct device *dev = lpp->pci->dev;
+	struct platform_device *pdev;
+	char *irq_name;
+	int irq, ret;
+
+	pdev = to_platform_device(dev);
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(dev, "missing sys integrated irq resource\n");
+		return irq;
+	}
+
+	irq_name = devm_kasprintf(dev, GFP_KERNEL, "pcie_misc%d", lpp->id);
+	if (!irq_name) {
+		dev_err(dev, "failed to alloc irq name\n");
+		return -ENOMEM;
you are only requesting one IRQ line for the whole driver. personally
I would drop the custom irq_name and pass NULL to devm_request_irq
because that will then fall-back to auto-generating an IRQ name based
on the devicetree node. I assume it's the same for ACPI but I haven't
tried that yet.
Not sure I understand what you mean. As you can see from the code, we have lpp->id, which means
we have multiple Root Complex instances (1, 2, 3, 4, 8), so we need this for identification.
In the old products this irq was called ir (integrated interrupt or misc interrupt).
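For clarity, what Martin suggests is roughly this (a sketch; the handler name is made up, and devm_request_irq() falls back to dev_name(dev) when the name argument is NULL):

ret = devm_request_irq(dev, irq, intel_pcie_core_irq_handler,
		       0, NULL, lpp);
if (ret)
	return ret;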
+static void intel_pcie_disable_clks(struct intel_pcie_port *lpp)
+{
+	clk_disable_unprepare(lpp->core_clk);
+}
+
+static int intel_pcie_enable_clks(struct intel_pcie_port *lpp)
+{
+	int ret = clk_prepare_enable(lpp->core_clk);
+
+	if (ret)
+		dev_err(lpp->pci->dev, "Core clock enable failed: %d\n", ret);
+
+	return ret;
+}
you have some functions (using these two as an example) which are only
used once. they add some boilerplate and (in my opinion) make the code
harder to read.
Yes. If we plan to support the old MIPS SoCs, we have a lot of clocks. The
code comes from the old driver. We can remove these wrappers for the new SoC and
add them back later.
Agree.
+static int intel_pcie_get_resources(struct platform_device *pdev)
+{
+	struct intel_pcie_port *lpp;
+	struct device *dev;
+	int ret;
+
+	lpp = platform_get_drvdata(pdev);
+	dev = lpp->pci->dev;
+
+	lpp->core_clk = devm_clk_get(dev, NULL);
+	if (IS_ERR(lpp->core_clk)) {
+		ret = PTR_ERR(lpp->core_clk);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to get clks: %d\n", ret);
+		return ret;
+	}
+
+	lpp->core_rst = devm_reset_control_get(dev, NULL);
+	if (IS_ERR(lpp->core_rst)) {
+		ret = PTR_ERR(lpp->core_rst);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to get resets: %d\n", ret);
+		return ret;
+	}
+
+	ret = device_property_match_string(dev, "device_type", "pci");
+	if (ret) {
+		dev_err(dev, "failed to find pci device type: %d\n", ret);
+		return ret;
+	}
+
+	if (device_property_read_u32(dev, "intel,rst-interval",
+				     &lpp->rst_interval))
+		lpp->rst_interval = RST_INTRVL_DFT_MS;
+
+	if (device_property_read_u32(dev, "max-link-speed", &lpp->link_gen))
+		lpp->link_gen = 0; /* Fallback to auto */
+
+	lpp->app_base = devm_platform_ioremap_resource(pdev, 2);
I suggest using platform_get_resource_byname(pdev, "app") because
pcie-designware uses named resources for the "config" space already.
that said, devm_platform_ioremap_resource_byname would be a great
addition in my opinion ;)
Agree.
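The named-resource variant would look roughly like this (a sketch; the resource name "app" is whatever the binding ends up defining):

struct resource *res;

res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app");
lpp->app_base = devm_ioremap_resource(dev, res);
if (IS_ERR(lpp->app_base))
	return PTR_ERR(lpp->app_base);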
+	if (IS_ERR(lpp->app_base))
+		return PTR_ERR(lpp->app_base);
+
+	ret = intel_pcie_ep_rst_init(lpp);
+	if (ret)
+		return ret;
+
+	lpp->phy = devm_phy_get(dev, "phy");
I suggest to use "pcie" as phy-name, otherwise the binding looks odd:
  phys = <&pcie0_phy>;
  phy-names = "phy";
versus:
  phys = <&pcie0_phy>;
  phy-names = "pcie";
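The driver side would then simply become (sketch):

lpp->phy = devm_phy_get(dev, "pcie");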
+static ssize_t
+pcie_link_status_show(struct device *dev, struct device_attribute *attr,
+		      char *buf)
+{
+	u32 reg, width, gen;
+	struct intel_pcie_port *lpp;
+
+	lpp = dev_get_drvdata(dev);
+
+	reg = pcie_rc_cfg_rd(lpp, PCIE_LCTLSTS);
+	width = FIELD_GET(PCIE_LCTLSTS_NEGOTIATED_LINK_WIDTH, reg);
+	gen = FIELD_GET(PCIE_LCTLSTS_LINK_SPEED, reg);
+	if (gen > lpp->max_speed)
+		return -EINVAL;
+
+	return sprintf(buf, "Port %2u Width x%u Speed %s GT/s\n", lpp->id,
+		       width, pcie_link_gen_to_str(gen));
+}
+static DEVICE_ATTR_RO(pcie_link_status);
"lspci -vv | grep LnkSta" already shows the link speed and width.
why do you need this?
In most embedded packages, lspci from busybox only shows the device ID and vendor ID.
They don't install the full lspci utilities.
On-the-fly change, as I mentioned in the beginning.
+static ssize_t pcie_speed_store(struct device *dev,
+				struct device_attribute *attr,
+				const char *buf, size_t len)
+{
+	struct intel_pcie_port *lpp;
+	unsigned long val;
+	int ret;
+
+	lpp = dev_get_drvdata(dev);
+
+	ret = kstrtoul(buf, 10, &val);
+	if (ret)
+		return ret;
+
+	if (val > lpp->max_speed)
+		return -EINVAL;
+
+	lpp->link_gen = val;
+	intel_pcie_max_speed_setup(lpp);
+	intel_pcie_speed_change_disable(lpp);
+	intel_pcie_speed_change_enable(lpp);
+
+	return len;
+}
+static DEVICE_ATTR_WO(pcie_speed);
this is already configurable via devicetree, why do you need this?
On the fly change as I mentioned in the beginning.
+/*
+ * Link width change on the fly is not always successful.
+ * It also depends on the partner.
+ */
+static ssize_t pcie_width_store(struct device *dev,
+				struct device_attribute *attr,
+				const char *buf, size_t len)
+{
+	struct intel_pcie_port *lpp;
+	unsigned long val;
+
+	lpp = dev_get_drvdata(dev);
+
+	if (kstrtoul(buf, 10, &val))
+		return -EINVAL;
+
+	if (val > lpp->max_width)
+		return -EINVAL;
+
+	lpp->lanes = val;
+	intel_pcie_max_link_width_setup(lpp);
+
+	return len;
+}
+static DEVICE_ATTR_WO(pcie_width);
is it needed because of a bug somewhere? who do you expect to use this
sysfs attribute and when (under which condition) do you expect people to use
this?
[...]
+static void __intel_pcie_remove(struct intel_pcie_port *lpp)
+{
+	pcie_rc_cfg_wr_mask(lpp, PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER,
+			    0, PCI_COMMAND);
I expect logic like this to be part of the PCI subsystem in Linux.
why is this needed?
[...]
We use this in the bind/unbind case. For extreme cases, we use unbind and bind to reset
PCI instead of rebooting.
+int intel_pcie_msi_init(struct pcie_port *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+
+	dev_dbg(pci->dev, "MSI is handled in x86 arch\n");
controller"
on the VRX200 SoC there's also a MSI interrupt controller and it seems
that "x86" has this as well (even though it might be two different MSI
IRQ IP blocks).
[...]
Agree. For x86/64, MSI is handled by the x86 arch code. We don't need to
implement another MSI controller separately.
For MIPS-based SoCs, we don't use the Synopsys MSI controller. The MSI still
connects to the central interrupt controller with MSI decoding (we can call it
an MSI controller).
Dilip should explain this.
+static int intel_pcie_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	const struct intel_pcie_soc *data;
+	struct intel_pcie_port *lpp;
+	struct pcie_port *pp;
+	struct dw_pcie *pci;
+	int ret;
+
+	lpp = devm_kzalloc(dev, sizeof(*lpp), GFP_KERNEL);
+	if (!lpp)
+		return -ENOMEM;
+
+	pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
+	if (!pci)
+		return -ENOMEM;
other drivers have a "struct dw_pcie pci" struct member (I assume
that it's to prevent memory fragmentation). why not do the same here
and embed it into struct intel_pcie_port?
Agree.
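A sketch of what embedding it would look like (field layout is illustrative only, and the conversion macro name is made up):

struct intel_pcie_port {
	struct dw_pcie		pci;		/* embedded, not separately allocated */
	void __iomem		*app_base;
	/* ... */
};

/* convert back from the dwc core's struct dw_pcie pointer */
#define dw_pcie_to_intel_pcie(x)	container_of(x, struct intel_pcie_port, pci)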
+	platform_set_drvdata(pdev, lpp);
+	lpp->pci = pci;
+	pci->dev = dev;
+	pp = &pci->pp;
+
+	ret = device_property_read_u32(dev, "linux,pci-domain", &lpp->id);
+	if (ret) {
+		dev_err(dev, "failed to get domain id, errno %d\n", ret);
+		return ret;
+	}
+
+	pci->dbi_base = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(pci->dbi_base))
+		return PTR_ERR(pci->dbi_base);
as stated above I would use the _byname variant
+
Martin
[0] https://github.com/xdarklight/linux/blob/lantiq-pcie-driver-next-20190727/drivers/pci/controller/dwc/pcie-lantiq.c
[1] https://github.com/xdarklight/linux/blob/lantiq-pcie-driver-next-20190727/Documentation/devicetree/bindings/pci/lantiq%2Cdw-pcie.yaml