Re: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from add_config
From: Jason Wang
Date: Sun Jun 25 2023 - 23:09:27 EST
On Mon, Jun 26, 2023 at 11:02 AM Angus Chen <angus.chen@xxxxxxxxxxxxxxx> wrote:
>
>
>
> > -----Original Message-----
> > From: Jason Wang <jasowang@xxxxxxxxxx>
> > Sent: Monday, June 26, 2023 10:51 AM
> > To: Angus Chen <angus.chen@xxxxxxxxxxxxxxx>
> > Cc: mst@xxxxxxxxxx; virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx;
> > linux-kernel@xxxxxxxxxxxxxxx
> > Subject: Re: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from add_config
> >
> > On Mon, Jun 26, 2023 at 10:42 AM Angus Chen <angus.chen@xxxxxxxxxxxxxxx> wrote:
> > >
> > >
> > > Hi, Jason.
> > > > -----Original Message-----
> > > > From: Jason Wang <jasowang@xxxxxxxxxx>
> > > > Sent: Monday, June 26, 2023 10:30 AM
> > > > To: Angus Chen <angus.chen@xxxxxxxxxxxxxxx>
> > > > Cc: mst@xxxxxxxxxx; virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx;
> > > > linux-kernel@xxxxxxxxxxxxxxx
> > > > Subject: Re: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from add_config
> > > >
> > > > On Thu, Jun 8, 2023 at 5:02 PM Angus Chen <angus.chen@xxxxxxxxxxxxxxx> wrote:
> > > > >
> > > > > When adding a virtio_pci vdpa device, check the number of vqs from
> > > > > the device capability against max_vq_pairs from add_config.
> > > > > Start simply by failing if the provisioned #qp is not equal to the
> > > > > one the hardware has.
> > > > >
> > > > > Signed-off-by: Angus Chen <angus.chen@xxxxxxxxxxxxxxx>
> > > > > ---
> > > > > v1: Use max_vqs from add_config
> > > > > v2: Just fail if max_vqs from add_config is not the same as the
> > > > > device cap. Suggested by Jason.
> > > > >
> > > > > drivers/vdpa/virtio_pci/vp_vdpa.c | 35 ++++++++++++++++++-------------
> > > > > 1 file changed, 21 insertions(+), 14 deletions(-)
> > > > >
> > > > > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > > > index 281287fae89f..c1fb6963da12 100644
> > > > > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > > > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > > > @@ -480,32 +480,39 @@ static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
> > > > > u64 device_features;
> > > > > int ret, i;
> > > > >
> > > > > - vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
> > > > > - dev, &vp_vdpa_ops, 1, 1, name, false);
> > > > > -
> > > > > - if (IS_ERR(vp_vdpa)) {
> > > > > - dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
> > > > > - return PTR_ERR(vp_vdpa);
> > > > > + if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
> > > > > + if (add_config->net.max_vq_pairs != (v_mdev->max_supported_vqs / 2)) {
> > > > > + dev_err(&pdev->dev, "max vqs 0x%x should be equal to 0x%x which device has\n",
> > > > > + add_config->net.max_vq_pairs*2, v_mdev->max_supported_vqs);
> > > > > + return -EINVAL;
> > > > > + }
> > > > > }
> > > > >
> > > > > - vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
> > > > > -
> > > > > - vp_vdpa->vdpa.dma_dev = &pdev->dev;
> > > > > - vp_vdpa->queues = vp_modern_get_num_queues(mdev);
> > > > > - vp_vdpa->mdev = mdev;
> > > > > -
> > > > > device_features = vp_modern_get_features(mdev);
> > > > > if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) {
> > > > > if (add_config->device_features & ~device_features) {
> > > > > - ret = -EINVAL;
> > > > > dev_err(&pdev->dev, "Try to provision features "
> > > > > "that are not supported by the device: "
> > > > > "device_features 0x%llx provisioned 0x%llx\n",
> > > > > device_features, add_config->device_features);
> > > > > - goto err;
> > > > > + return -EINVAL;
> > > > > }
> > > > > device_features = add_config->device_features;
> > > > > }
> > > > > +
> > > > > + vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
> > > > > + dev, &vp_vdpa_ops, 1, 1, name, false);
> > > > > +
> > > > > + if (IS_ERR(vp_vdpa)) {
> > > > > + dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
> > > > > + return PTR_ERR(vp_vdpa);
> > > > > + }
> > > > > +
> > > > > + vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
> > > > > +
> > > > > + vp_vdpa->vdpa.dma_dev = &pdev->dev;
> > > > > + vp_vdpa->queues = v_mdev->max_supported_vqs;
> > > >
> > > > Why bother with those changes?
> > > >
> > > > mgtdev->max_supported_vqs = vp_modern_get_num_queues(mdev);
> > > max_supported_vqs will not change, so we can read it from
> > > mgtdev->max_supported_vqs. If we use vp_modern_get_num_queues(mdev), it
> > > will use TLPs to communicate with the device, so reading the cached
> > > value just saves some TLPs.
> >
> > OK, but
> >
> > 1) I don't think we care about performance here
> > 2) If we did, let's do that in a separate patch as an optimization
> >
> Thank you. As mst did not support this patch some days ago, it will be dropped.
> I plan to push an independent driver for our product rather than reuse vp_vdpa.
That would be fine. But please try your best to reuse the modern virtio-pci library.
> By the way, if I want to add SR-IOV support in our vdpa PCI driver, would it be accepted or not?
I think the answer is yes.
Thanks
> > Thanks
> >
> > > >
> > > > Thanks
> > > >
> > > >
> > > > > + vp_vdpa->mdev = mdev;
> > > > > vp_vdpa->device_features = device_features;
> > > > >
> > > > > ret = devm_add_action_or_reset(dev, vp_vdpa_free_irq_vectors, pdev);
> > > > > --
> > > > > 2.25.1
> > > > >
> > >
>