Re: [PATCH V12 3/7] dma: add Qualcomm Technologies HIDMA management driver
From: Mark Rutland
Date: Fri Jan 15 2016 - 10:23:26 EST
On Fri, Jan 15, 2016 at 10:12:00AM -0500, Sinan Kaya wrote:
> Hi Mark,
>
> On 1/15/2016 9:56 AM, Mark Rutland wrote:
> > Hi,
> >
> > [adding KVM people, given this is meant for virtualization]
> >
> > On Mon, Jan 11, 2016 at 09:45:43AM -0500, Sinan Kaya wrote:
> >> The Qualcomm Technologies HIDMA device has been designed to support
> >> virtualization technology. The driver has been divided into two to follow
> >> the hardware design.
> >>
> >> 1. HIDMA Management driver
> >> 2. HIDMA Channel driver
> >>
> >> Each HIDMA HW consists of multiple channels. These channels share some set
> >> of common parameters. These parameters are initialized by the management
> >> driver during power up. Same management driver is used for monitoring the
> >> execution of the channels. Management driver can change the performance
> >> behavior dynamically such as bandwidth allocation and prioritization.
> >>
> >> The management driver is executed in hypervisor context and is the main
> >> management entity for all channels provided by the device.
> >
> > You mention repeatedly that this is designed for virtualization, but
> > looking at the series as it stands today I can't see how this operates
> > from the host side.
> >
> > This doesn't seem to tie into KVM or VFIO, and as far as I can tell
> > there's no mechanism for associating channels with a particular virtual
> > address space (i.e. no configuration of an external or internal IOMMU),
> > nor pinning of guest pages to allow for DMA to occur safely.
>
> I'm using VFIO platform driver for this purpose. VFIO platform driver is
> capable of assigning any platform device to a guest machine with this driver.
Typically VFIO-platform also comes with a corresponding reset driver.
You don't need one?
> You just unbind the HIDMA channel driver from the hypervisor and bind to vfio
> driver using the very same approach you'd use with PCIe.
>
> Of course, this all assumes the presence of an IOMMU driver on the system. VFIO
> driver uses the IOMMU driver to create the mappings.
No IOMMU was described in the DT binding. It sounds like you'd need an
optional (not present in the guest) iommus property per-channel
> The mechanism used here is not different from VFIO PCI from user perspective.
>
> >
> > Given that, I'm at a loss as to how this would be used in a hypervisor
> > context. What am I missing?
> >
> > Are there additional patches, or do you have some userspace that works
> > with this in some limited configuration?
>
> No, these are the only patches. We have one patch for the QEMU but from kernel
> perspective this is it.
Do you have a link to that? Seeing it would help to ease my concerns.
Thanks,
Mark.