RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
From: Bharat Bhushan
Date: Thu Dec 05 2013 - 23:13:14 EST
> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Friday, December 06, 2013 5:52 AM
> To: Bhushan Bharat-R65777
> Cc: Alex Williamson; linux-pci@xxxxxxxxxxxxxxx; agraf@xxxxxxx; Yoder Stuart-
> B08248; iommu@xxxxxxxxxxxxxxxxxxxxxxxxxx; bhelgaas@xxxxxxxxxx; linuxppc-
> dev@xxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
>
> On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
> >
> > > -----Original Message-----
> > > From: Bhushan Bharat-R65777
> > > Sent: Wednesday, November 27, 2013 9:39 PM
> > > To: 'Alex Williamson'
> > > Cc: Wood Scott-B07421; linux-pci@xxxxxxxxxxxxxxx; agraf@xxxxxxx;
> > > Yoder Stuart- B08248; iommu@xxxxxxxxxxxxxxxxxxxxxxxxxx;
> > > bhelgaas@xxxxxxxxxx; linuxppc- dev@xxxxxxxxxxxxxxxx;
> > > linux-kernel@xxxxxxxxxxxxxxx
> > > Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > IOMMU (PAMU)
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Alex Williamson [mailto:alex.williamson@xxxxxxxxxx]
> > > > Sent: Monday, November 25, 2013 10:08 PM
> > > > To: Bhushan Bharat-R65777
> > > > Cc: Wood Scott-B07421; linux-pci@xxxxxxxxxxxxxxx; agraf@xxxxxxx;
> > > > Yoder
> > > > Stuart- B08248; iommu@xxxxxxxxxxxxxxxxxxxxxxxxxx;
> > > > bhelgaas@xxxxxxxxxx;
> > > > linuxppc- dev@xxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
> > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > IOMMU
> > > > (PAMU)
> > > >
> > > > On Mon, 2013-11-25 at 05:33 +0000, Bharat Bhushan wrote:
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Alex Williamson [mailto:alex.williamson@xxxxxxxxxx]
> > > > > > Sent: Friday, November 22, 2013 2:31 AM
> > > > > > To: Wood Scott-B07421
> > > > > > Cc: Bhushan Bharat-R65777; linux-pci@xxxxxxxxxxxxxxx;
> > > > > > agraf@xxxxxxx; Yoder Stuart-B08248;
> > > > > > iommu@xxxxxxxxxxxxxxxxxxxxxxxxxx; bhelgaas@xxxxxxxxxx;
> > > > > > linuxppc- dev@xxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
> > > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
> > > > > > Freescale IOMMU (PAMU)
> > > > > >
> > > > > > On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
> > > > > > > On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
> > > > > > > > On Thu, 2013-11-21 at 11:20 +0000, Bharat Bhushan wrote:
> > > > > > > > >
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: Alex Williamson
> > > > > > > > > > [mailto:alex.williamson@xxxxxxxxxx]
> > > > > > > > > > Sent: Thursday, November 21, 2013 12:17 AM
> > > > > > > > > > To: Bhushan Bharat-R65777
> > > > > > > > > > Cc: joro@xxxxxxxxxx; bhelgaas@xxxxxxxxxx;
> > > > > > > > > > agraf@xxxxxxx; Wood Scott-B07421; Yoder Stuart-B08248;
> > > > > > > > > > iommu@xxxxxxxxxxxxxxxxxxxxxxxxxx; linux-
> > > > > > > > > > pci@xxxxxxxxxxxxxxx; linuxppc-dev@xxxxxxxxxxxxxxxx;
> > > > > > > > > > linux- kernel@xxxxxxxxxxxxxxx; Bhushan Bharat-R65777
> > > > > > > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
> > > > > > > > > > Freescale IOMMU (PAMU)
> > > > > > > > > >
> > > > > > > > > > Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie.
> > > > > > > > > > each vfio user has $COUNT regions at their disposal
> exclusively)?
> > > > > > > > >
> > > > > > > > > Number of msi-bank count is system wide and not per
> > > > > > > > > aperture, But will be
> > > > > > setting windows for banks in the device aperture.
> > > > > > > > > So say if we are direct assigning 2 pci device (both
> > > > > > > > > have different iommu
> > > > > > group, so 2 aperture in iommu) to VM.
> > > > > > > > > Now qemu can make only one call to know how many
> > > > > > > > > msi-banks are there but
> > > > > > it must set sub-windows for all banks for both pci device in
> > > > > > its respective aperture.
> > > > > > > >
> > > > > > > > I'm still confused. What I want to make sure of is that
> > > > > > > > the banks are independent per aperture. For instance, if
> > > > > > > > we have two separate userspace processes operating
> > > > > > > > independently and they both chose to use msi bank zero for
> > > > > > > > their device, that's bank zero within each aperture and
> > > > > > > > doesn't interfere. Or another way to ask is can a
> > > > > > > > malicious user interfere with other users by
> > > > using the wrong bank.
> > > > > > > > Thanks,
> > > > > > >
> > > > > > > They can interfere.
> > > > >
> > > > > Want to be sure of how they can interfere?
> > > >
> > > > What happens if more than one user selects the same MSI bank?
> > > > Minimally, wouldn't that result in the IOMMU blocking transactions
> > > > from the previous user once the new user activates their mapping?
> > >
> > > Yes and no; With current implementation yes but with a minor change
> > > no. Later in this response I will explain how.
> > >
> > > >
> > > > > >> With this hardware, the only way to prevent that
> > > > > > > is to make sure that a bank is not shared by multiple
> > > > > > > protection
> > > contexts.
> > > > > > > For some of our users, though, I believe preventing this is
> > > > > > > less important than the performance benefit.
> > > > >
> > > > > So should we let this patch series in without protection?
> > > >
> > > > No.
> > > >
> > > > > >
> > > > > > I think we need some sort of ownership model around the msi banks
> then.
> > > > > > Otherwise there's nothing preventing another userspace from
> > > > > > attempting an MSI based attack on other users, or perhaps even
> > > > > > on the host. VFIO can't allow that. Thanks,
> > > > >
> > > > > We have very few (3 MSI bank on most of chips), so we can not
> > > > > assign one to each userspace. What we can do is host and
> > > > > userspace does not share a MSI bank while userspace will share a MSI
> bank.
> > > >
> > > > Then you probably need VFIO to "own" the MSI bank and program
> > > > devices into it rather than exposing the MSI banks to userspace to
> > > > let them have
> > > direct access.
> > >
> > > Overall idea of exposing the details of msi regions to userspace are
> > > 1) User space can define the aperture size to fit MSI mapping in IOMMU.
> > > 2) setup iova for a MSI banks; which is just after guest memory.
> > >
> > > But currently we expose the "size" and "address" of MSI banks,
> > > passing address is of no use and can be problematic.
> >
> > I am sorry, above information is not correct. Currently neither we expose
> "address" nor "size" to user space. We only expose number of MSI BANK count and
> userspace adds one sub-window for each bank.
> >
> > > If we just provide the size of MSI bank to userspace then userspace
> > > cannot do anything wrong.
> >
> > So userspace does not know address, so it cannot mmap and cause any
> interference by directly reading/writing.
>
> That's security through obscurity... Couldn't the malicious user find out the
> address via other means, such as experimentation on another system over which
> they have full control? What would happen if the user reads from their device's
> PCI config space? Or gets the information via some back door in the PCI device
> they own? Or pokes throughout the address space looking for something that
> generates an interrupt to its own device?
So how to solve this problem, Any suggestion ?
We have to map one window in PAMU for MSIs and a malicious user can ask its device to do DMA to MSI window region with any pair of address and data, which can lead to unexpected MSIs in system?
Thanks
-Bharat
>
> -Scott
>
¢éì®&Þ~º&¶¬+-±éÝ¥w®Ë±Êâmébìdz¹Þ)í
æèw*jg¬±¨¶Ýj/êäz¹Þà2Þ¨èÚ&¢)ß«a¶Úþø®G«éh®æj:+v¨wèÙ>W±êÞiÛaxPjØm¶ÿÃ-»+ùd_