RE: [PATCH 2/2] fpga: dfl: look for vendor specific capability

From: Wu, Hao
Date: Tue Nov 17 2020 - 04:48:25 EST


> Subject: [PATCH 2/2] fpga: dfl: look for vendor specific capability
>
> From: Matthew Gerlach <matthew.gerlach@xxxxxxxxxxxxxxx>
>
> A DFL may not begin at offset 0 of BAR 0. A PCIe vendor
> specific capability can be used to specify the start of a
> number of DFLs.
>
> Signed-off-by: Matthew Gerlach <matthew.gerlach@xxxxxxxxxxxxxxx>
> ---
> Documentation/fpga/dfl.rst | 10 +++++
> drivers/fpga/dfl-pci.c | 88 +++++++++++++++++++++++++++++++++++++-
> 2 files changed, 97 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/fpga/dfl.rst b/Documentation/fpga/dfl.rst
> index 0404fe6ffc74..c81ceb1e79e2 100644
> --- a/Documentation/fpga/dfl.rst
> +++ b/Documentation/fpga/dfl.rst
> @@ -501,6 +501,16 @@
> Developer only needs to provide a sub feature driver with matched feature id.
> FME Partial Reconfiguration Sub Feature driver (see drivers/fpga/dfl-fme-pr.c)
> could be a reference.
>
> +Location of DFLs on PCI bus

Location of DFLs on PCI Device

> +===========================
> +The start of the DFL is assumed to be offset 0 of bar 0.

the first DFL

> +Alternatively, a vendor specific capability structure can be used to
> +specify the location of one or more DFLs. Intel has reserved the

I believe this capability is used to specify all DFLs on this PCI device.

> +vendor specific id of 0x43 for this purpose. The vendor specific

VSEC ID is 0x43.

One more question here: does this require the vendor ID to be Intel in the first place?
Or could other vendors implement the same capability but with a different ID?
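If the former, find_dfl_in_cfg() probably needs an explicit vendor check before
trusting the VSEC ID, since VSEC IDs are only unique within one vendor's
namespace. A minimal sketch of what I mean (assuming Intel-only for now):

	/* VSEC IDs are vendor-defined, so only trust 0x43 on Intel parts */
	if (pcidev->vendor != PCI_VENDOR_ID_INTEL)
		return -ENODEV;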

> +data begins with a 4 byte count of the number of DFLs followed 4 byte

vendor specific register

> +Offset/BIR fields for each DFL. Bits 2:0 of Offset/BIR field indicates
> +the BAR, and bits 31:3 form the 8 byte aligned offset where bits 2:0 are
> +zero.
>
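To make the layout unambiguous, maybe also spell out the decoding in the doc.
This is how I read the text above (a sketch only; GENMASK() just for
illustration):

	u32 bar, offset;

	bar    = dfl_res & GENMASK(2, 0);	/* bits 2:0  - BAR index */
	offset = dfl_res & GENMASK(31, 3);	/* bits 31:3 - 8-byte aligned offset */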
> Open discussion
> ===============
> diff --git a/drivers/fpga/dfl-pci.c b/drivers/fpga/dfl-pci.c
> index b1b157b41942..5418e8bf2496 100644
> --- a/drivers/fpga/dfl-pci.c
> +++ b/drivers/fpga/dfl-pci.c
> @@ -27,6 +27,13 @@
> #define DRV_VERSION "0.8"
> #define DRV_NAME "dfl-pci"
>
> +#define PCI_VNDR_ID_DFLS 0x43

What about PCI_VSEC_ID_INTEL_DFLS?

Is it possible that a different vendor chooses a different ID?

> +
> +#define PCI_VNDR_DFLS_CNT_OFFSET 8
> +#define PCI_VNDR_DFLS_RES_OFFSET 0x0c

What about VSEC_DFLS_CNT?
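i.e. something like this (the naming is only my suggestion, values taken from
your patch):

#define PCI_VSEC_ID_INTEL_DFLS		0x43

#define PCI_VSEC_DFLS_CNT		0x8
#define PCI_VSEC_DFLS_RES		0xc
#define PCI_VSEC_DFLS_RES_BAR_MASK	GENMASK(2, 0)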

> +
> +#define PCI_VND_DFLS_RES_BAR_MASK 0x7
> +
> struct cci_drvdata {
> 	struct dfl_fpga_cdev *cdev;	/* container device */
> };
> @@ -119,6 +126,82 @@ static int *cci_pci_create_irq_table(struct pci_dev *pcidev, unsigned int nvec)
> 	return table;
> }
>
> +static int find_dfl_in_cfg(struct pci_dev *pcidev,
> +			   struct dfl_fpga_enum_info *info)
> +{
> +	u32 bar, offset, vndr_hdr, dfl_cnt, dfl_res;
> +	int dfl_res_off, i, voff = 0;
> +	resource_size_t start, len;
> +
> +	while ((voff = pci_find_next_ext_capability(pcidev, voff,
> +						    PCI_EXT_CAP_ID_VNDR))) {
> +
> +		pci_read_config_dword(pcidev, voff + PCI_VNDR_HEADER, &vndr_hdr);
> +
> +		dev_dbg(&pcidev->dev,
> +			"vendor-specific capability id 0x%x, rev 0x%x len 0x%x\n",
> +			PCI_VNDR_HEADER_ID(vndr_hdr),
> +			PCI_VNDR_HEADER_REV(vndr_hdr),
> +			PCI_VNDR_HEADER_LEN(vndr_hdr));

Why do you need this log?

> +
> +		if (PCI_VNDR_HEADER_ID(vndr_hdr) == PCI_VNDR_ID_DFLS)
> +			break;
> +	}
> +
> +	if (!voff) {
> +		dev_dbg(&pcidev->dev, "%s no VSEC found\n", __func__);

"no DFL VSEC found"

> +		return -ENODEV;
> +	}
> +
> +	pci_read_config_dword(pcidev, voff + PCI_VNDR_DFLS_CNT_OFFSET, &dfl_cnt);

I believe OFFSET can be removed. : )

> +	dev_info(&pcidev->dev, "dfl_cnt %d\n", dfl_cnt);

dev_dbg

> +	for (i = 0; i < dfl_cnt; i++) {
> +		dfl_res_off = voff + PCI_VNDR_DFLS_RES_OFFSET +
> +			      (i * sizeof(dfl_res));
> +		pci_read_config_dword(pcidev, dfl_res_off, &dfl_res);
> +
> +		dev_dbg(&pcidev->dev, "dfl_res 0x%x\n", dfl_res);
> +
> +		bar = dfl_res & PCI_VND_DFLS_RES_BAR_MASK;
> +
> +		if (bar >= PCI_STD_NUM_BARS) {
> +			dev_err(&pcidev->dev, "%s bad bar number %d\n",
> +				__func__, bar);
> +			return -EINVAL;
> +		}
> +
> +		len = pci_resource_len(pcidev, bar);
> +

Remove this blank line.

> +		if (len == 0) {
> +			dev_err(&pcidev->dev, "%s unmapped bar number %d\n",

Why "unmapped bar"?

> +				__func__, bar);
> +			return -EINVAL;
> +		}
> +
> +		offset = dfl_res & ~PCI_VND_DFLS_RES_BAR_MASK;
> +

Remove this blank line.

> +		if (offset >= len) {
> +			dev_err(&pcidev->dev, "%s bad offset %u >= %llu\n",
> +				__func__, offset, len);
> +			return -EINVAL;
> +		}
> +
> +		dev_info(&pcidev->dev, "%s BAR %d offset 0x%x\n",
> +			 __func__, bar, offset);

dev_dbg

> +
> +		start = pci_resource_start(pcidev, bar) + offset;
> +		len -= offset;
> +
> +		if (!PAGE_ALIGNED(start)) {

Is this a hard requirement? Or should the offset be page aligned per the VSEC definition?
Or is this just a requirement from the driver's point of view? Actually we don't like
adding rules only in the driver, so it's better to put this requirement in the VSEC
definition with proper documentation.

> + dev_err(&pcidev->dev, "%s unaliged start 0x%llx\n",

unaligned

> +				__func__, start);
> +			return -EINVAL;
> +		}
> +
> +		dfl_fpga_enum_info_add_dfl(info, start, len);
> +	}
> +
> +	return 0;
> +}
> +
> static int find_dfl_in_bar0(struct pci_dev *pcidev,
> 			    struct dfl_fpga_enum_info *info)
> {
> @@ -221,7 +304,10 @@ static int cci_enumerate_feature_devs(struct pci_dev *pcidev)
> 		goto irq_free_exit;
> 	}
> 
> -	ret = find_dfl_in_bar0(pcidev, info);

find_dfl or find_dfl_by_default

Actually, the original idea was to hardcode the start of the first DFL per device ID.

> +	ret = find_dfl_in_cfg(pcidev, info);

find_dfl_by_vsec

> +

Remove blank line.

> +	if (ret)
> +		ret = find_dfl_in_bar0(pcidev, info);

I am not sure that we should still try to find the first DFL in BAR 0 when
find_dfl_by_vsec fails. In most cases that won't work, especially for the
multiple-DFLs case. Just return an error directly. If someone implements the
VSEC, they must ensure that it contains correct values; there is no fallback
solution. Otherwise they wouldn't need to implement such a VSEC, right?

Thanks
Hao

>
> 	if (ret)
> 		goto irq_free_exit;
> --
> 2.25.2