Re: [PATCH V4 05/18] iommu/ioasid: Redefine IOASID set and allocation APIs

From: Jean-Philippe Brucker
Date: Wed Apr 07 2021 - 14:44:14 EST


On Wed, Apr 07, 2021 at 08:17:50AM +0000, Tian, Kevin wrote:
> btw this discussion was raised when discussing the I/O page fault handling
> process. Currently the IOMMU layer implements a per-device fault reporting
> mechanism, which requires VFIO to register a handler to receive all faults
> on its device and then forward them to ioasid if they're due to the 1st
> level. Possibly it makes more sense to convert it into a per-pgtable
> reporting scheme, where the owner of each pgtable registers its own handler.

Maybe, but you do need device information in there, since that's how the
fault is reported to the guest and how the response is routed back to the
faulting device (only PASID+PRGI would cause aliasing). And we need to
report non-recoverable faults, as well as recoverable ones without PASID,
once we hand control of level-1 page tables to guests.

> It means
> that for 1) VFIO will register a 2nd-level pgtable handler while /dev/ioasid
> registers a 1st-level pgtable handler, whereas for 3) /dev/ioasid will register
> handlers for both the 1st-level and 2nd-level pgtables. Jean, we'd also like
> to know your thoughts...

Moving all IOMMU controls to /dev/ioasid rather than splitting them is
probably better. Hopefully the implementation can reuse most of
vfio_iommu_type1.

I'm trying to sketch what may work for Arm, if we have to reuse
/dev/ioasid to avoid duplication of fault and inval queues:

* Get a container handle out of /dev/ioasid (or /dev/iommu, really).
No operations are available yet, since we don't know what the device and
IOMMU capabilities are.

* Attach the handle to a VF. With VFIO that would be
VFIO_GROUP_SET_CONTAINER. That causes the kernel to associate an IOMMU
with the handle, and decide which operations are available.

* With a map/unmap vIOMMU (or shadow mappings), a single translation level
is supported. With a nesting vIOMMU, we're populating the level-2
translation (some day maybe by binding the KVM page tables, but
currently with map/unmap ioctl).

Single-level translation needs a single VF per container. Two levels would
allow sharing stage-2 between multiple VFs, though that is a pain to define
and implement.

* Without a vIOMMU or if the vIOMMU starts in bypass, populate the
container page tables.

Start the guest.

* With a map/unmap vIOMMU, guest creates mappings, userspace populates the
page tables with map/unmap ioctl.

It would be possible to add a PASID mode there: the guest requests an
address space with a specific PASID, userspace derives an IOASID handle
from the container handle and populates that address space with the
map/unmap ioctl. That would enable PASID on sub-VF assignment, which
requires the host to control which PASID is programmed into the VF (with
DEVICE_ALLOW_IOASID, I guess). And either the host allocates the PASID
in this case (which isn't supported by a vSMMU) or we have to do a
vPASID -> pPASID translation. I don't know if it's worth the effort.

Or
* With a nesting vIOMMU, the guest attaches a PASID table to a VF,
userspace issues a SET_PASID_TABLE ioctl on the container handle. If
we support multiple VFs per container, we first need to derive a child
container from the main one and the device, then attach the PASID table.

The guest programs the PASID table and sends invalidations when removing
mappings, which are relayed to the host on the child container. Page
faults and the response queue would be per container, so with multiple
VFs per container we could have one queue for the parent (level-2 faults)
and one for each child (level-1 faults).

Thanks,
Jean