Re: [PATCH 1/1] iommu: Bind process address spaces to devices

From: Jacob Pan
Date: Thu Feb 28 2019 - 13:51:11 EST


On Thu, 28 Feb 2019 01:10:55 +0000
"Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:

> > From: Jacob Pan [mailto:jacob.jun.pan@xxxxxxxxxxxxxxx]
> > Sent: Thursday, February 28, 2019 5:41 AM
> >
> > On Tue, 26 Feb 2019 12:17:43 +0100
> > Joerg Roedel <joro@xxxxxxxxxx> wrote:
> >
> > >
> > > How about a 'struct iommu_sva' with an iommu-private definition
> > > that is returned by this function:
> > >
> > > struct iommu_sva *iommu_sva_bind_device(struct device
> > > *dev, struct mm_struct *mm);
> > >
> > Just trying to understand how to use this API.
> > So if we bind the same mm to two different devices, we should get
> > two different iommu_sva handles, right?
> > I think intel-svm still needs a flag argument for supervisor PASID
> > etc. Other than that, I think both interfaces should work for VT-d.
> >
> > Another question is that for nested SVA, we will need to bind the
> > guest mm. Do you think we should try to reuse this or keep it
> > separate? I am working on a separate API for now.
> >
>
> It has to be different. Host doesn't know guest mm.
>
> Also note that from a virtualization p.o.v. we just focus on 'nested
> translation' on the host side. The 1st level may point to a guest CPU
> page table (SVA) or an IOVA page table. In that sense, the API
> (as currently defined in your series) is purely about setting up
> nested translation on a VFIO-assigned device.
>
Sounds good, will keep them separate.