Re: [PATCH v7 11/11] iommu/vt-d: Add svm/sva invalidate function

From: Jacob Pan
Date: Tue Oct 29 2019 - 15:20:58 EST


On Tue, 29 Oct 2019 18:52:01 +0000
"Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:

> > From: Jacob Pan [mailto:jacob.jun.pan@xxxxxxxxxxxxxxx]
> > Sent: Tuesday, October 29, 2019 12:11 AM
> >
> > On Mon, 28 Oct 2019 06:06:33 +0000
> > "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:
> >
> > > > >>> +    /* PASID based dev TLBs, only support all PASIDs or
> > > > >>> single PASID */
> > > > >>> +    {1, 1, 0},
> > > > >>
> > > > >> I forgot the previous discussion. Is it necessary to pass
> > > > >> down dev TLB invalidation requests? Can they be handled by
> > > > >> the host IOMMU driver automatically?
> > > > >
> > > > > For host SVA, when memory is unmapped, the driver callback
> > > > > will invalidate the dev IOTLB explicitly. So I guess we need
> > > > > to pass it down for the guest case as well. This is also
> > > > > required for guest IOVA over first-level usage, as far as I
> > > > > can see.
> > > > >
> > > >
> > > > Sorry, I confused guest vIOVA and guest vSVA. For guest vIOVA,
> > > > no device TLB invalidation is passed down. But currently for
> > > > guest vSVA, device TLB invalidation is passed down. Perhaps we
> > > > can avoid passing down the dev TLB flush, just as we do for
> > > > guest IOVA.
> > >
> > > I think the dev TLB is fully handled within the IOMMU driver
> > > today; it doesn't require the device driver to explicitly toggle
> > > it. Given that, we can fully virtualize the guest dev TLB
> > > invalidation request to save one syscall, since the host is
> > > supposed to flush the dev TLB when serving the earlier IOTLB
> > > invalidation pass-down.
> >
> > In the previous discussions, we thought about making the IOTLB
> > flush inclusive, where an IOTLB flush would always include a device
> > TLB flush. But we thought such behavior cannot be assumed for all
> > VMMs; some may still do an explicit dev TLB flush. So for
> > completeness, we included the dev TLB here.
>
> Is there such an example, or a link to the previous discussion? Here
> we are talking about host IOMMU driver behavior, not the VMM. I don't
> feel strongly about this, since it's more of an optimization, but one
> area remains unclear. If we do want to support such usage with an
> explicit dev TLB flush, how does the host IOMMU driver avoid doing an
> implicit dev TLB flush when serving an IOTLB invalidation request? Is
> it already designed such that a user-passed-down IOTLB invalidation
> request only invalidates the IOTLB, while a kernel-triggered IOTLB
> invalidation still does the implicit dev TLB flush?
>
The current vIOMMU design in QEMU will prevent an explicit dev TLB
flush: the host will always do an inclusive IOTLB and dev TLB flush
when serving an IOTLB flush request.
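
To make the inclusive behavior concrete, here is a minimal sketch (not
the exact code in this series; it assumes the qi_flush_piotlb() helper
added alongside this series plus the existing qi_flush_dev_iotlb(),
and it collapses the ATS state lookup into a plain parameter):

#include <linux/intel-iommu.h>
#include <linux/log2.h>

/*
 * Serve a guest PASID-based IOTLB invalidation and, in the same pass,
 * flush the matching dev TLB entries when ATS is enabled, so a VMM
 * like QEMU never needs to pass down a separate dev TLB request.
 */
static void flush_piotlb_inclusive(struct intel_iommu *iommu, u16 did,
				   u32 pasid, u16 sid, u16 pfsid,
				   u16 qdep, u64 addr,
				   unsigned long npages, bool ats_enabled)
{
	/* First-level (PASID-based) IOTLB flush for the guest request */
	qi_flush_piotlb(iommu, did, pasid, addr, npages, false);

	/* Implicit dev TLB flush, folded into serving the same request */
	if (ats_enabled)
		qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, addr,
				   order_base_2(npages));
}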

For other VMMs that do not do this optimization, we just leave a path
for the explicit dev TLB flush. It is redundant, but from the IOMMU
driver's perspective it is complete. We don't avoid the redundancy
since there is no damage outside the guest, just as we don't prevent a
guest from doing the same flush twice.
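
For reference, the {1, 1, 0} row quoted at the top of the thread is one
row of the per-cache-type granularity support map in this patch. A
paraphrased sketch (column order follows the v7 UAPI granularity enum;
the exact array name in the posted patch may differ):

/*
 * 1 = granularity supported for that cache type. Columns are
 * IOMMU_INV_GRANU_DOMAIN (all PASIDs), IOMMU_INV_GRANU_PASID
 * (single PASID), IOMMU_INV_GRANU_ADDR (address/page selective).
 */
static const int
inv_type_granu_map[IOMMU_CACHE_INV_TYPE_NR][IOMMU_INV_GRANU_NR] = {
	/* PASID based IOTLB: PASID selective and page selective */
	{0, 1, 1},
	/* PASID based dev TLBs, only support all PASIDs or single PASID */
	{1, 1, 0},
	/* PASID cache: all PASIDs or single PASID */
	{1, 1, 0}
};

The invalidate path can then consult this map and reject a pass-down
request whose (cache type, granularity) pair is unsupported, whether or
not the VMM chose to forward the dev TLB flush explicitly.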