Re: [PATCH RESEND v10 1/2] dmaengine: dw-edma: Add AMD MDB Endpoint Support

From: Frank Li

Date: Mon Feb 16 2026 - 14:28:31 EST


On Mon, Feb 16, 2026 at 04:25:45PM +0530, Devendra K Verma wrote:
> Add AMD MDB PCIe endpoint support. For AMD-specific support,
> the following were added:
> - AMD supported PCIe Device IDs and Vendor ID (Xilinx).
> - AMD MDB specific driver data
> - AMD MDB specific VSEC capability to retrieve the device DDR
> base address.
>
> Signed-off-by: Devendra K Verma <devendra.verma@xxxxxxx>
> ---
> Changes in v10:
> For the Xilinx VSEC function, kept only the HDMA map format,
> as Xilinx supports only HDMA.
>
> Changes in v9:
> Moved Xilinx specific VSEC capability functions under
> the vendor ID condition.
>
> Changes in v8:
> Changed the constant names to include the product vendor.
> Moved the vendor-specific code to vendor-specific functions.
>
> Changes in v7:
> Introduced vendor-specific functions to retrieve the
> VSEC data.
>
> Changes in v6:
> Included "sizes.h" header and used the appropriate
> definitions instead of constants.
>
> Changes in v5:
> Added the definitions for Xilinx specific VSEC header id,
> revision, and register offsets.
> Corrected the error type when no physical offset found for
> device side memory.
> Corrected the order of variables.
>
> Changes in v4:
> Configured 8 read and 8 write channels for the Xilinx vendor.
> Added checks to validate the vendor ID for the vendor-specific
> VSEC ID.
> Added a Xilinx-specific vendor ID for the VSEC specific to Xilinx.
> Added the LL and data region offsets, size as input params to
> function dw_edma_set_chan_region_offset().
> Moved the LL and data region offsets assignment to function
> for Xilinx specific case.
> Corrected comments.
>
> Changes in v3:
> Corrected a typo when assigning the AMD (Xilinx) VSEC ID macro
> and in the condition check.
>
> Changes in v2:
> Reverted the devmem_phys_off type to u64.
> Renamed the function to reflect that it sets the
> LL and data region offsets.
>
> Changes in v1:
> Removed the pci device id from pci_ids.h file.
> Added the vendor id macro as per the suggested method.
> Changed the type of the newly added devmem_phys_off variable.
> Added logic to assign offsets for the LL and data region blocks
> in case more channels are enabled than given in the
> amd_mdb_data struct.
> ---
> drivers/dma/dw-edma/dw-edma-pcie.c | 190 ++++++++++++++++++++++++++---
> 1 file changed, 176 insertions(+), 14 deletions(-)
>
...
>
> +static void dw_edma_pcie_get_xilinx_dma_data(struct pci_dev *pdev,
> + struct dw_edma_pcie_data *pdata)
> +{
> + u32 val, map;
> + u16 vsec;
> + u64 off;
> +
> + pdata->devmem_phys_off = DW_PCIE_XILINX_MDB_INVALID_ADDR;
> +
> + vsec = pci_find_vsec_capability(pdev, PCI_VENDOR_ID_XILINX,
> + DW_PCIE_XILINX_MDB_VSEC_DMA_ID);
> + if (!vsec)
> + return;
> +
> + pci_read_config_dword(pdev, vsec + PCI_VNDR_HEADER, &val);
> + if (PCI_VNDR_HEADER_REV(val) != 0x00 ||
> + PCI_VNDR_HEADER_LEN(val) != 0x18)
> + return;
> +
> + pci_dbg(pdev, "Detected Xilinx PCIe Vendor-Specific Extended Capability DMA\n");
> + pci_read_config_dword(pdev, vsec + 0x8, &val);
> + map = FIELD_GET(DW_PCIE_XILINX_MDB_VSEC_DMA_MAP, val);
> + if (map != EDMA_MF_HDMA_NATIVE)
> + return;
> +
> + pdata->mf = map;
> + pdata->rg.bar = FIELD_GET(DW_PCIE_XILINX_MDB_VSEC_DMA_BAR, val);
> +
> + pci_read_config_dword(pdev, vsec + 0xc, &val);
> + pdata->wr_ch_cnt = min_t(u16, pdata->wr_ch_cnt,
> + FIELD_GET(DW_PCIE_XILINX_MDB_VSEC_DMA_WR_CH, val));
> + pdata->rd_ch_cnt = min_t(u16, pdata->rd_ch_cnt,
> + FIELD_GET(DW_PCIE_XILINX_MDB_VSEC_DMA_RD_CH, val));

In https://lore.kernel.org/all/20251119224140.8616-1-david.laight.linux@xxxxxxxxx/

David Laight suggests using min() directly instead of min_t() here.

Frank
> +
> + pci_read_config_dword(pdev, vsec + 0x14, &val);
> + off = val;
> + pci_read_config_dword(pdev, vsec + 0x10, &val);
> + off <<= 32;
> + off |= val;
> + pdata->rg.off = off;
> +
> + vsec = pci_find_vsec_capability(pdev, PCI_VENDOR_ID_XILINX,
> + DW_PCIE_XILINX_MDB_VSEC_ID);
> + if (!vsec)
> + return;
> +
> + pci_read_config_dword(pdev,
> + vsec + DW_PCIE_XILINX_MDB_DEVMEM_OFF_REG_HIGH,
> + &val);
> + off = val;
> + pci_read_config_dword(pdev,
> + vsec + DW_PCIE_XILINX_MDB_DEVMEM_OFF_REG_LOW,
> + &val);
> + off <<= 32;
> + off |= val;
> + pdata->devmem_phys_off = off;
> +}
> +
> static int dw_edma_pcie_probe(struct pci_dev *pdev,
> const struct pci_device_id *pid)
> {
> @@ -184,7 +322,29 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
> * Tries to find if exists a PCIe Vendor-Specific Extended Capability
> * for the DMA, if one exists, then reconfigures it.
> */
> - dw_edma_pcie_get_vsec_dma_data(pdev, vsec_data);
> + dw_edma_pcie_get_synopsys_dma_data(pdev, vsec_data);
> +
> + if (pdev->vendor == PCI_VENDOR_ID_XILINX) {
> + dw_edma_pcie_get_xilinx_dma_data(pdev, vsec_data);
> +
> + /*
> + * There is no valid address found for the LL memory
> + * space on the device side.
> + */
> + if (vsec_data->devmem_phys_off == DW_PCIE_XILINX_MDB_INVALID_ADDR)
> + return -ENOMEM;
> +
> + /*
> + * Configure the channel LL and data blocks if number of
> + * channels enabled in VSEC capability are more than the
> + * channels configured in xilinx_mdb_data.
> + */
> + dw_edma_set_chan_region_offset(vsec_data, BAR_2, 0,
> + DW_PCIE_XILINX_MDB_LL_OFF_GAP,
> + DW_PCIE_XILINX_MDB_LL_SIZE,
> + DW_PCIE_XILINX_MDB_DT_OFF_GAP,
> + DW_PCIE_XILINX_MDB_DT_SIZE);
> + }
>
> /* Mapping PCI BAR regions */
> mask = BIT(vsec_data->rg.bar);
> @@ -367,6 +527,8 @@ static void dw_edma_pcie_remove(struct pci_dev *pdev)
>
> static const struct pci_device_id dw_edma_pcie_id_table[] = {
> { PCI_DEVICE_DATA(SYNOPSYS, EDDA, &snps_edda_data) },
> + { PCI_VDEVICE(XILINX, PCI_DEVICE_ID_XILINX_B054),
> + (kernel_ulong_t)&xilinx_mdb_data },
> { }
> };
> MODULE_DEVICE_TABLE(pci, dw_edma_pcie_id_table);
> --
> 2.43.0
>