Re: Query on VFIO in Virtual machine

From: Peter Xu
Date: Fri Jun 23 2017 - 00:17:54 EST


On Thu, Jun 22, 2017 at 11:27:09AM -0600, Alex Williamson wrote:
> On Thu, 22 Jun 2017 22:42:19 +0530
> Nitin Saxena <nitin.lnx@xxxxxxxxx> wrote:
>
> > Thanks Alex.
> >
> > >> Without an iommu in the VM, you'd be limited to no-iommu support for VM userspace,
> > So are you trying to say VFIO NO-IOMMU should work inside the VM?
> > Does that mean VFIO NO-IOMMU in the VM and VFIO IOMMU in the host for
> > the same device is a legitimate configuration? I did try this
> > configuration, and the application (in the VM) seems to get
> > container_fd, group_fd and device_fd successfully, but after the
> > VFIO_DEVICE_RESET ioctl the PCI link breaks from the VM as well as
> > from the host. This could be specific to the PCI endpoint device,
> > which I can dig into.
> >
> > I will be happy if VFIO NO-IOMMU in the VM and IOMMU in the host for
> > the same device is a legitimate configuration.
>
> Using no-iommu in the guest should work in that configuration; however,
> there's no isolation between the user and the rest of VM memory, so the
> VM kernel will be tainted. Host memory does have iommu isolation. Device
> reset from VM userspace sounds like another bug to investigate. Thanks,
>
> Alex
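
For reference, the sequence described in the quoted report (no-IOMMU
container/group/device setup followed by a device reset) corresponds
roughly to the VFIO calls below. This is only a minimal sketch: the
no-IOMMU group number and device address are placeholders, error
handling is omitted, and it assumes the guest vfio module was loaded
with enable_unsafe_noiommu_mode=1 and the device is bound to vfio-pci
inside the guest.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

int main(void)
{
        int container, group, device;

        container = open("/dev/vfio/vfio", O_RDWR);

        /* no-IOMMU groups show up as /dev/vfio/noiommu-<group> */
        group = open("/dev/vfio/noiommu-0", O_RDWR);

        struct vfio_group_status status = { .argsz = sizeof(status) };
        ioctl(group, VFIO_GROUP_GET_STATUS, &status);

        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
        ioctl(container, VFIO_SET_IOMMU, VFIO_NOIOMMU_IOMMU);

        /* placeholder BDF of the device as seen in the guest */
        device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:00:05.0");

        /* the step after which the PCI link reportedly breaks */
        ioctl(device, VFIO_DEVICE_RESET);

        return 0;
}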

Besides what Alex has mentioned, there is a wiki page on this usage.
The QEMU command line will be slightly different from the case without
a vIOMMU:

http://wiki.qemu.org/Features/VT-d#With_Assigned_Devices
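
For example, the relevant part of the command line could look like the
following (the host PCI address is just a placeholder, and the
intel-iommu device needs to be specified before the assigned device;
caching-mode=on is what lets an assigned device work behind the
emulated VT-d, and intremap=on requires the split irqchip):

qemu-system-x86_64 -machine q35,accel=kvm,kernel-irqchip=split \
    -device intel-iommu,intremap=on,caching-mode=on \
    -device vfio-pci,host=0000:01:00.0 \
    ...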

One more thing to mention: when vfio-pci devices in the guest are used
with the emulated VT-d, a large performance degradation is to be
expected for dynamic DMA mappings, at least for now, while for mostly
static mappings (like DPDK) the performance should be nearly the same
as in no-IOMMU mode. This is just a hint on performance; in your case
it should mostly depend on how the application manages its DMA
map/unmaps.
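
To illustrate the difference (a sketch only; container, buf and the
IOVA/length values are placeholders, and this is the guest-side VFIO
type1 path rather than no-IOMMU): a DPDK-style application maps its
pinned memory pool once at startup and never unmaps it on the data
path, so the cost of shadowing guest mapping changes into the host
container is paid only once, while an application that maps and unmaps
a buffer around every DMA pays that cost on every I/O.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* "Static" pattern (DPDK-like): map a large region once at startup. */
static int map_once(int container, void *buf, uint64_t iova, uint64_t len)
{
        struct vfio_iommu_type1_dma_map map = {
                .argsz = sizeof(map),
                .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
                .vaddr = (uintptr_t)buf,
                .iova  = iova,
                .size  = len,
        };
        return ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
}

/* "Dynamic" pattern: pairing this with a fresh map around every DMA is
 * what gets expensive behind the emulated VT-d. */
static int unmap_range(int container, uint64_t iova, uint64_t len)
{
        struct vfio_iommu_type1_dma_unmap u = {
                .argsz = sizeof(u),
                .iova  = iova,
                .size  = len,
        };
        return ioctl(container, VFIO_IOMMU_UNMAP_DMA, &u);
}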

Thanks,

--
Peter Xu