On Sun, May 12, 2024 at 01:58:56PM +0530, devi priya wrote:
Add the PCIe0, PCIe1, PCIe2 and PCIe3 (and corresponding PHY) devices
found on the IPQ9574 platform. PCIe0 & PCIe1 are 1-lane Gen3
hosts, whereas PCIe2 & PCIe3 are 2-lane Gen3 hosts.
Signed-off-by: devi priya <quic_devipriy@xxxxxxxxxxx>
---
Changes in V5:
- Dropped the anoc and snoc lane clocks from the PHY nodes and enabled them
  via the interconnect.
- Dropped msi-parent as it is handled via msi IRQ
arch/arm64/boot/dts/qcom/ipq9574.dtsi | 365 +++++++++++++++++++++++++-
1 file changed, 361 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/boot/dts/qcom/ipq9574.dtsi b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
index 5b3e69379b1f..da6418c9d52b 100644
--- a/arch/arm64/boot/dts/qcom/ipq9574.dtsi
+++ b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
[...]
+ pcie1: pci@10000000 {
'pcie@' since this is a PCIe controller.
okay
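i.e. only the node name changes; a minimal sketch, with the properties
kept exactly as quoted below:

	pcie1: pcie@10000000 {
		...
	};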
+ compatible = "qcom,pcie-ipq9574";
+ reg = <0x10000000 0xf1d>,
+ <0x10000F20 0xa8>,
Please use lower case for hex everywhere.
okay
+ <0x10001000 0x1000>,
+ <0x000F8000 0x4000>,
+ <0x10100000 0x1000>;
+ reg-names = "dbi", "elbi", "atu", "parf", "config";
+ device_type = "pci";
+ linux,pci-domain = <2>;
+ bus-range = <0x00 0xff>;
+ num-lanes = <1>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+
+ ranges = <0x01000000 0x0 0x00000000 0x10200000 0x0 0x100000>, /* I/O */
+ <0x02000000 0x0 0x10300000 0x10300000 0x0 0x7d00000>; /* MEM */
+
+ interrupts = <GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "msi";
Are you sure that this platform has only a single MSI SPI IRQ?
It has 8 MSI SPI IRQs, will define all of them.
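For illustration only, the eight vectors would presumably follow the
msi0..msi7 naming used by other Qualcomm DWC PCIe controller nodes; SPI 26
below is taken from this patch, while the remaining SPI numbers are
placeholders that would have to come from the SoC interrupt map:

	interrupts = <GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,  /* msi0, from this patch */
		     ...;  /* msi1..msi7: SPI numbers from the SoC interrupt map */
	interrupt-names = "msi0", "msi1", "msi2", "msi3",
			  "msi4", "msi5", "msi6", "msi7";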
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0x7>;
+ interrupt-map = <0 0 0 1 &intc 0 0 35 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+ <0 0 0 2 &intc 0 0 49 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+ <0 0 0 3 &intc 0 0 84 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+ <0 0 0 4 &intc 0 0 85 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+
+ /* clocks and clock-names are used to enable the clock in CBCR */
This comment is redundant.
+ clocks = <&gcc GCC_PCIE1_AHB_CLK>,
+ <&gcc GCC_PCIE1_AUX_CLK>,
+ <&gcc GCC_PCIE1_AXI_M_CLK>,
+ <&gcc GCC_PCIE1_AXI_S_CLK>,
+ <&gcc GCC_PCIE1_AXI_S_BRIDGE_CLK>,
+ <&gcc GCC_PCIE1_RCHNG_CLK>;
+ clock-names = "ahb",
+ "aux",
+ "axi_m",
+ "axi_s",
+ "axi_bridge",
+ "rchng";
+
+ resets = <&gcc GCC_PCIE1_PIPE_ARES>,
+ <&gcc GCC_PCIE1_CORE_STICKY_ARES>,
+ <&gcc GCC_PCIE1_AXI_S_STICKY_ARES>,
+ <&gcc GCC_PCIE1_AXI_S_ARES>,
+ <&gcc GCC_PCIE1_AXI_M_STICKY_ARES>,
+ <&gcc GCC_PCIE1_AXI_M_ARES>,
+ <&gcc GCC_PCIE1_AUX_ARES>,
+ <&gcc GCC_PCIE1_AHB_ARES>;
+ reset-names = "pipe",
+ "sticky",
+ "axi_s_sticky",
+ "axi_s",
+ "axi_m_sticky",
+ "axi_m",
+ "aux",
+ "ahb";
+
+ phys = <&pcie1_phy>;
+ phy-names = "pciephy";
+ interconnects = <&gcc MASTER_ANOC_PCIE1 &gcc SLAVE_ANOC_PCIE1>,
+ <&gcc MASTER_SNOC_PCIE1 &gcc SLAVE_SNOC_PCIE1>;
Are these really the interconnect paths between PCIe-DDR and PCIe-CPU? I doubt it...
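If the intent is the usual pair from the qcom,pcie binding (PCIe-to-memory
and CPU-to-PCIe), naming the entries would make that explicit. A sketch
only, since whether the ANOC/SNOC paths actually correspond to those roles
on IPQ9574 is exactly what is being questioned here:

	interconnects = <&gcc MASTER_ANOC_PCIE1 &gcc SLAVE_ANOC_PCIE1>,
			<&gcc MASTER_SNOC_PCIE1 &gcc SLAVE_SNOC_PCIE1>;
	interconnect-names = "pcie-mem", "cpu-pcie";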
- Mani