* Avi Kivity (avi@xxxxxxxxxx) wrote:
> On 06/02/2010 12:26 AM, Tom Lyon wrote:
>> I'm not really opposed to multiple devices per domain, but let me point
>> out how I ended up here.  First, the driver has two ways of mapping
>> pages, one based on the iommu api and one based on the dma_map_sg api.
>> With the latter, the system already allocates a domain per device and
>> there's no way to control it.  This was presumably done to help
>> isolation between drivers.  If there are multiple drivers in the user
>> level, do we not want the same isolation to apply to them?
>
> In the case of kvm, we don't want isolation between devices, because
> that doesn't happen on real hardware.

Sure it does.  That's exactly what happens when there's an iommu
involved with bare metal.
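
To make Tom's two paths concrete, something like this (an untested
sketch, not the driver's actual code; iommu_map()'s exact signature has
varied across kernel versions, this uses the size-based form):

#include <linux/iommu.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Path 1: the iommu api.  The caller owns the domain and chooses the
 * iova, so nothing stops several devices from being attached to the
 * same domain.
 */
static int map_via_iommu_api(struct iommu_domain *dom, unsigned long iova,
			     phys_addr_t paddr, size_t size)
{
	return iommu_map(dom, iova, paddr, size, IOMMU_READ | IOMMU_WRITE);
}

/*
 * Path 2: the dma api.  dma_map_sg() picks the dma addresses and uses
 * whatever domain the platform already allocated for this device, so
 * the caller gets a domain per device whether it wants one or not.
 */
static int map_via_dma_api(struct device *dev, struct scatterlist *sg,
			   int nents)
{
	return dma_map_sg(dev, sg, nents, DMA_BIDIRECTIONAL);
}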
> So if the guest programs
> devices to dma to each other, we want that to succeed.

And it will as long as ATS is enabled (this is a basic requirement
for PCIe peer-to-peer traffic to succeed with an iommu involved on
bare metal).
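
(For reference, the kernel side of turning ATS on is pci_enable_ats();
a rough sketch only, the iommu driver normally does this itself, and
the header this lives in has moved around between kernel versions:)

#include <linux/pci.h>
#include <linux/pci-ats.h>

/* Enable ATS on a device that advertises the capability (sketch). */
static int enable_ats_for(struct pci_dev *pdev)
{
	/* page shift of the iommu's minimum page size, typically PAGE_SHIFT */
	return pci_enable_ats(pdev, PAGE_SHIFT);
}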
That's how things currently are, i.e. we put all devices belonging to a
single guest in the same domain. However, it can be useful to put each
device belonging to a guest in a unique domain. Especially as qemu
grows support for iommu emulation, and guest OSes begin to understand
how to use a hw iommu.
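
Per-device domains would look roughly like this (untested sketch;
iommu_domain_alloc() took no argument at the time, it has since grown a
bus parameter):

#include <linux/iommu.h>

/*
 * Give each guest device its own domain, i.e. its own io page tables,
 * instead of attaching every device to one shared per-guest domain.
 */
static struct iommu_domain *attach_in_own_domain(struct device *dev)
{
	struct iommu_domain *dom = iommu_domain_alloc();

	if (!dom)
		return NULL;
	if (iommu_attach_device(dom, dev)) {
		iommu_domain_free(dom);
		return NULL;
	}
	return dom;
}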
>> And then there's the fact that it is possible to have multiple
>> disjoint iommus on a system, so it may not even be possible to bring
>> 2 devices under one domain.
>
> That's indeed a deficiency.

Not sure it's a deficiency.  Typically to share page table mappings
across multiple iommu's you just have to do update/invalidate to each
hw iommu that is sharing the mapping. Alternatively, you can use more
memory and build/maintain identical mappings (as Tom alludes to below).
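
The identical-mappings alternative is just a loop over the domains
(untested sketch; the domains[]/ndomains bookkeeping is made up, and
iommu_unmap()'s signature has also changed over time):

#include <linux/iommu.h>

/*
 * With disjoint iommus there is no shared page table, so repeat each
 * update on every domain, and unwind the ones already done on failure.
 */
static int map_on_all_domains(struct iommu_domain **domains, int ndomains,
			      unsigned long iova, phys_addr_t paddr,
			      size_t size)
{
	int i, ret;

	for (i = 0; i < ndomains; i++) {
		ret = iommu_map(domains[i], iova, paddr, size,
				IOMMU_READ | IOMMU_WRITE);
		if (ret) {
			while (--i >= 0)
				iommu_unmap(domains[i], iova, size);
			return ret;
		}
	}
	return 0;
}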