On Thu, Jun 03, 2021 at 03:06:31PM +0800, Jason Wang wrote:
On 2021/6/3 3:00 PM, Jason Wang wrote:
On 2021/6/2 4:59 PM, Eli Cohen wrote:
After device reset, the virtqueues are not ready so clear the ready
field.
Failing to do so can result in virtio_vdpa failing to load if the device
was previously used by vhost_vdpa and the old values are ready.
virtio_vdpa expects to find VQs in "not ready" state.
Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
Signed-off-by: Eli Cohen <elic@xxxxxxxxxx>
Acked-by: Jason Wang <jasowang@xxxxxxxxxx>
A second thought.
destroy_virtqueue() could be called many places.
One of them is mlx5_vdpa_change_map(); if this is the case, this looks
wrong.
Right, although most likely VQs become ready only after all map changes
occur, because I did not encounter any issues while testing.
It looks to me it's simpler to do this in clear_virtqueues() which can only
be called during reset.
There is no clear_virtqueues() function. You probably mean to insert a
call in mlx5_vdpa_set_status() in case it performs reset. This function
will go over all virtqueues and clear their ready flag.
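Roughly something like this (untested sketch; I am assuming the ndev->vqs[]
array and ndev->mvdev.max_vqs are how mlx5_vnet.c tracks the virtqueues):

static void clear_virtqueues(struct mlx5_vdpa_net *ndev)
{
	int i;

	/* drop the ready indication on every VQ so a later driver
	 * (e.g. virtio_vdpa) finds them in "not ready" state
	 */
	for (i = 0; i < ndev->mvdev.max_vqs; i++)
		ndev->vqs[i].ready = false;
}

It would be called from mlx5_vdpa_set_status() only on the reset path
(new status == 0), after teardown_driver().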
Alternatively, we can add a boolean argument to teardown_driver() that
signifies whether we are in the reset flow, and in that case clear ready.
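For example (again an untested sketch; the current teardown_driver() takes
only ndev, so the extra flag is the proposed change, not existing code):

static void teardown_driver(struct mlx5_vdpa_net *ndev, bool reset)
{
	int i;

	/* ... existing teardown logic ... */

	if (reset) {
		/* reset flow: leave all VQs in "not ready" state */
		for (i = 0; i < ndev->mvdev.max_vqs; i++)
			ndev->vqs[i].ready = false;
	}
}

with the set_status() reset path passing true and the map-change path
passing false.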
Thanks
---
drivers/vdpa/mlx5/net/mlx5_vnet.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 02a05492204c..e8bc0842b44c 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -862,6 +862,7 @@ static void destroy_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtq
 		return;
 	}
 	umems_destroy(ndev, mvq);
+	mvq->ready = false;
 }
 
 static u32 get_rqpn(struct mlx5_vdpa_virtqueue *mvq, bool fw)