On Fri, Jun 18, 2021 at 01:21:47PM +0800, Lu Baolu wrote:
> Hi David,
>
> On 6/17/21 1:22 PM, David Gibson wrote:
> > > The iommu_group can guarantee the isolation among different
> > > physical devices (represented by RIDs). But when it comes to
> > > sub-devices (e.g. mdev or vDPA devices represented by RID + SSID),
> > > we have to rely on the device driver for isolation. Devices which
> > > are able to generate sub-devices should either use their own
> > > on-device mechanisms or use platform features like Intel Scalable
> > > IOV to isolate the sub-devices.
> >
> > This seems like a misunderstanding of groups.  Groups are not tied
> > to any PCI meaning.  Groups are the smallest unit of isolation, no
> > matter what is providing that isolation.
> >
> > If mdevs are isolated from each other by clever software, even
> > though they're on the same PCI device they are in different groups
> > from each other *by definition*.  They are also in a different
> > group from their parent device (however, the mdevs only exist when
> > the mdev driver is active, which implies that the parent device's
> > group is owned by the kernel).
>
> You are right. This is also my understanding of an "isolation group".
> But, as I understand it, the iommu_group is only the isolation group
> visible to the IOMMU. When we talk about sub-devices (sw-mdev or mdev
> w/ pasid), only the device and the device driver know the details of
> isolation, hence the iommu_group cannot be extended to cover them.
> The device drivers should define their own isolation groups.

So, "iommu group" isn't a perfect name.  It came about because
originally the main mechanism for isolation was the IOMMU, so it was
typically the IOMMU's capabilities that determined if devices were
isolated.  However, it was always known that there could be other
reasons for failure of isolation.  To simplify the model, we decided
that we'd put things into the same group if they were non-isolated
for any reason.

The kernel has no notion of "isolation group" as distinct from "iommu
group".  What are called iommu groups in the kernel now *are*
"isolation groups", and that was always the intention - it's just not
a great name.
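
The group is also the unit userspace takes ownership of: under VFIO
today, access to a device goes through its group, and the group has
to be viable before it can be used.  A minimal sketch of checking
that from userspace (the group number is made up; the ioctl and flag
are the existing VFIO uAPI):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	int main(void)
	{
		struct vfio_group_status status = {
			.argsz = sizeof(status),
		};
		/* group number invented for the example */
		int fd = open("/dev/vfio/7", O_RDWR);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* viable == every device in the group is bound to
		 * vfio or to no driver at all */
		if (ioctl(fd, VFIO_GROUP_GET_STATUS, &status) < 0) {
			perror("VFIO_GROUP_GET_STATUS");
			return 1;
		}
		printf("group viable: %s\n",
		       (status.flags & VFIO_GROUP_FLAGS_VIABLE)
		       ? "yes" : "no");
		return 0;
	}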

> Otherwise, the device driver has to fake an iommu_group and add
> hacky code to link the related IOMMU elements (iommu device, domain,
> group, etc.) together. Actually, this is part of the problem that
> this proposal tries to solve.

Yeah, that's not ideal.
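
For reference, the workaround you're describing comes out looking
roughly like the sketch below.  iommu_group_alloc(),
iommu_group_add_device() and iommu_group_put() are the existing iommu
core interfaces; the function wrapped around them is a made-up
placeholder:

	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/iommu.h>

	/*
	 * Hand a software-isolated sub-device its own group, since
	 * the iommu core can't see the driver-provided isolation.
	 */
	static int subdev_attach_fake_group(struct device *dev)
	{
		struct iommu_group *group;
		int ret;

		group = iommu_group_alloc();  /* one group per sub-device */
		if (IS_ERR(group))
			return PTR_ERR(group);

		ret = iommu_group_add_device(group, dev);
		iommu_group_put(group);  /* the group holds its own ref */
		return ret;
	}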

> > > Under the above conditions, different sub-devices from the same
> > > RID device could be able to use different IOASIDs. This seems to
> > > mean that we can't support a mixed mode where, for example, two
> > > RIDs share an iommu_group and one (or both) of them have
> > > sub-devices.
> >
> > That doesn't necessarily follow.  mdevs which can be successfully
> > isolated by their mdev driver are in a different group from their
> > parent device, and therefore need not be affected by whether the
> > parent device shares a group with some other physical device.
> > They *might* be, but that's up to the mdev driver to determine
> > based on what it can safely isolate.
>
> If we understand it as multiple levels of isolation, can we classify
> the devices into the following categories?
>
> 1) Legacy devices
>    - devices without device-level isolation
>    - multiple devices could sit in a single iommu_group
>    - only a single I/O address space could be bound to the IOMMU

I'm not really clear on what that last statement means.

> 2) Modern devices
>    - devices capable of device-level isolation

This will *typically* be true of modern devices, but I don't think we
can really make it a hard API distinction.  Legacy or buggy bridges
can force modern devices into the same group as each other.  Modern
devices are not immune from bugs which would force lack of isolation
(e.g. forgotten debug registers on function 0 which affect other
functions).
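
That's why the only reliable way to find the isolation boundary is to
ask the kernel, not to infer it from the device's generation.  Rough
sketch of doing so from userspace (the sysfs iommu_group link is the
existing ABI; the PCI address is invented):

	#include <limits.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[PATH_MAX];
		/* resolves to .../kernel/iommu_groups/<group number> */
		ssize_t n = readlink(
			"/sys/bus/pci/devices/0000:01:00.0/iommu_group",
			buf, sizeof(buf) - 1);

		if (n < 0) {
			perror("readlink");
			return 1;
		}
		buf[n] = '\0';
		printf("iommu group link: %s\n", buf);
		return 0;
	}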