Re: [PATCH v2 0/7] vfio/pci: SR-IOV support

From: Jason Wang
Date: Tue Feb 25 2020 - 01:09:29 EST



On 2020/2/25 10:33 AM, Tian, Kevin wrote:
>> From: Alex Williamson
>> Sent: Thursday, February 20, 2020 2:54 AM
>>
>> Changes since v1 are primarily to patch 3/7, where the commit log is
>> rewritten, along with option parsing and failure logging based on
>> upstream discussions. The primary user-visible difference is that
>> option parsing is now much more strict. If a vf_token option is
>> provided that cannot be used, we generate an error. As a result of
>> this, opening a PF with a vf_token option will serve as a mechanism for
>> setting the vf_token. This seems like a more user-friendly API than
>> the alternative of sometimes requiring the option (VFs in use) and
>> sometimes rejecting it, and upholds our desire that the option is
>> always either used or rejected.
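
As a concrete illustration of the option described above, a minimal userspace
sketch might look like the following; the exact "vf_token=<uuid>" spelling,
the example UUID, and the helper name are assumptions for illustration rather
than text taken from the patches:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Hypothetical helper: request a device FD, appending a vf_token option. */
static int open_device_with_token(int group_fd, const char *bdf,
                                  const char *token_uuid)
{
        char name[256];
        int device_fd;

        /* e.g. "0000:03:00.0 vf_token=2ab74924-c335-45f4-9b16-8569e5b08258" */
        snprintf(name, sizeof(name), "%s vf_token=%s", bdf, token_uuid);

        device_fd = ioctl(group_fd, VFIO_GROUP_GET_DEVICE_FD, name);
        if (device_fd < 0)
                perror("VFIO_GROUP_GET_DEVICE_FD");

        return device_fd;
}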

>> This also means that the VFIO_DEVICE_FEATURE ioctl is not the only
>> means of setting the VF token, which might call into question whether
>> we absolutely need this new ioctl. Currently I'm keeping it because I
>> can imagine use cases, for example if a hypervisor were to support
>> SR-IOV, the PF device might be opened without consideration for a VF
>> token and we'd require the hypervisor to close and re-open the PF in
>> order to set a known VF token, which is impractical.
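
For readers following along, a rough sketch of setting the token on an
already-open PF through the new ioctl could look like this; the structure
layout and the flag/feature names are my reading of the proposed interface
and should be treated as assumptions rather than a definitive reference:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Set a 16-byte UUID as the VF token on an open PF device FD. */
static int set_vf_token(int pf_device_fd, const unsigned char uuid[16])
{
        char buf[sizeof(struct vfio_device_feature) + 16];
        struct vfio_device_feature *feature = (void *)buf;

        feature->argsz = sizeof(buf);
        feature->flags = VFIO_DEVICE_FEATURE_SET |
                         VFIO_DEVICE_FEATURE_PCI_VF_TOKEN;
        memcpy(feature->data, uuid, 16);

        return ioctl(pf_device_fd, VFIO_DEVICE_FEATURE, feature);
}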

>> Series overview (same as provided with v1):
>
> Thanks for doing this!

>> The synopsis of this series is that we have an ongoing desire to drive
>> PCIe SR-IOV PFs from userspace with VFIO. There's an immediate need
>> for this with DPDK drivers and potentially interesting future use
>
> Can you provide a link to the DPDK discussion?

>> cases in virtualization. We've been reluctant to add this support
>> previously due to the dependency and trust relationship between the
>> VF device and PF driver. Minimally the PF driver can induce a denial
>> of service to the VF, but depending on the specific implementation,
>> the PF driver might also be responsible for moving data between VFs
>> or have direct access to the state of the VF, including data or state
>> otherwise private to the VF or VF driver.
>
> Just thinking out loud. While the motivation for the VF token sounds
> reasonable to me, I'm curious why the same concern has not been raised in
> other usages. For example, there is no such design in the virtio framework,
> where the virtio device can also be restarted, put in a separate process
> (vhost-user), or even in a separate VM (virtio-vhost-user), etc.


AFAIK, the restart can only be triggered by either the VM or qemu. But yes, the datapath could be offloaded.

But I'm not sure introducing another dedicated mechanism is better than using the existing generic POSIX mechanisms to make sure the connection (AF_UNIX) is secure.
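
As one example of the kind of existing mechanism meant here (an illustration
on my part, not something the series depends on), a vhost-user backend can
check who is on the other end of an accepted AF_UNIX connection with
SO_PEERCRED:

#define _GNU_SOURCE             /* for struct ucred */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Return 1 if the connected peer runs as the expected uid, 0 otherwise. */
static int peer_is_trusted(int conn_fd, uid_t expected_uid)
{
        struct ucred cred;
        socklen_t len = sizeof(cred);

        if (getsockopt(conn_fd, SOL_SOCKET, SO_PEERCRED, &cred, &len) < 0) {
                perror("getsockopt(SO_PEERCRED)");
                return 0;
        }

        return cred.uid == expected_uid;
}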


> Of course the para-virtualized attribute of virtio implies some degree of
> trust, but as you mentioned, many SR-IOV implementations support VF->PF
> communication, which also implies some level of trust. It's perfectly fine
> if VFIO just tries to do better than other sub-systems, but knowing how
> other people tackle similar problems may make the whole picture clearer. :)
>
> +Jason.


I'm not quite sure that, e.g., allowing a userspace PF driver with a kernel VF driver would not break the assumptions of the kernel security model. At the least, we should forbid an unprivileged PF driver running in userspace.

Thanks