Re: [PATCH 9/9] x86/iommu: use dma_ops_list in get_dma_ops

From: Joerg Roedel
Date: Mon Sep 29 2008 - 09:33:36 EST


On Mon, Sep 29, 2008 at 10:16:44PM +0900, FUJITA Tomonori wrote:
> On Mon, 29 Sep 2008 11:36:52 +0200
> Joerg Roedel <joro@xxxxxxxxxx> wrote:
>
> > On Mon, Sep 29, 2008 at 12:30:44PM +0300, Muli Ben-Yehuda wrote:
> > > On Sun, Sep 28, 2008 at 09:13:33PM +0200, Joerg Roedel wrote:
> > >
> > > > I think we should try to build a paravirtualized IOMMU for KVM
> > > > guests. It should work this way: We reserve a configurable amount
> > > > of contiguous guest physical memory and map it DMA-contiguously using
> > > > some kind of hardware IOMMU. This is possible with all hardware
> > > > IOMMUs we have in the field right now, including Calgary and GART. The guest
> > > > does dma_coherent allocations from this memory directly and is done.
> > > > For map_single and map_sg
> > > > the guest can do bounce buffering. We avoid nearly all pvdma hypercalls
> > > > with this approach, keep guest swapping working, and also solve the
> > > > problems with device dma_masks and guest memory that is not contiguous on
> > > > the host side.
> > >
> > > I'm not sure I follow, but if I understand correctly, with this
> > > approach the guest could only DMA into buffers that fall within the
> > > range you allocated for DMA and mapped. Isn't that a pretty nasty
> > > limitation? The guest would need to bounce-buffer every frame that
> > > happened to not fall inside that range, with the resulting loss of
> > > performance.
> >
> > The bounce buffering is needed for map_single/map_sg allocations. For
> > dma_alloc_coherent we can directly allocate from that range. The
> > performance loss from the bounce buffering may be lower than the cost of
> > the hypercalls we need as the alternative (we need hypercalls for map,
> > unmap and sync).
>
> Nobody cares about the performance of dma_alloc_coherent. Only the
> performance of map_single/map_sg matters.
>
> I'm not sure how expensive the hypercalls are, but are they more
> expensive than bounce buffering copying lots of data for every I/O?
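
To make the idea quoted above a bit more concrete, here is a rough
guest-side sketch of the coherent-allocation path. The pvdma_* names are
made up purely for illustration, not an existing interface; assume the
host has mapped the pool DMA-contiguously through the hardware IOMMU
once at guest boot:

#include <linux/kernel.h>
#include <linux/bitmap.h>
#include <linux/spinlock.h>
#include <linux/dma-mapping.h>
#include <asm/page.h>

/*
 * Sketch only: a pool of guest physical memory that the host mapped
 * DMA-contiguously at boot, so coherent allocations need no hypercall.
 */
struct pvdma_pool {
	void		*virt;		/* guest virtual base of the pool  */
	dma_addr_t	 dma_base;	/* bus address set up by the host  */
	unsigned long	*bitmap;	/* allocation bitmap, one bit/page */
	unsigned long	 nr_pages;
	spinlock_t	 lock;
};

static struct pvdma_pool pvdma_pool;

static void *pvdma_alloc_coherent(size_t size, dma_addr_t *dma_handle)
{
	unsigned int pages = DIV_ROUND_UP(size, PAGE_SIZE);
	unsigned long flags, page;

	spin_lock_irqsave(&pvdma_pool.lock, flags);
	page = bitmap_find_next_zero_area(pvdma_pool.bitmap,
					  pvdma_pool.nr_pages, 0, pages, 0);
	if (page >= pvdma_pool.nr_pages) {
		spin_unlock_irqrestore(&pvdma_pool.lock, flags);
		return NULL;
	}
	bitmap_set(pvdma_pool.bitmap, page, pages);
	spin_unlock_irqrestore(&pvdma_pool.lock, flags);

	*dma_handle = pvdma_pool.dma_base + ((dma_addr_t)page << PAGE_SHIFT);
	return pvdma_pool.virt + (page << PAGE_SHIFT);
}

The free path would just clear the bits again; the important point is
that neither direction ever leaves the guest.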

I don't think we can avoid bounce buffering in the guest at all (with or
without my idea of a paravirtualized IOMMU) if we want to handle
dma_masks and requests that cross guest physical pages properly.
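
Under that assumption the map path in the guest could look roughly like
this (again illustrative names only, building on the pvdma_alloc_coherent
sketch above): the data is bounced into the pre-mapped pool, so the
returned bus address respects whatever mask the host honoured when it set
the pool up, and a buffer that crosses guest physical pages is no problem
either.

#include <linux/string.h>

/*
 * Sketch only: bounce-buffer map_single through the pre-mapped pool.
 * No hypercall on this path; the bus address comes from the mapping the
 * host established once at boot.  Tracking the original pointer for the
 * copy-back on unmap is left out here.
 */
static dma_addr_t pvdma_map_single(void *ptr, size_t size,
				   enum dma_data_direction dir)
{
	dma_addr_t dma_handle;
	void *bounce = pvdma_alloc_coherent(size, &dma_handle);

	if (!bounce)
		return 0;	/* treat 0 as the mapping-error cookie here */

	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
		memcpy(bounce, ptr, size);

	return dma_handle;
}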

With mapping/unmapping through hypercalls we add the world-switch
overhead to the copy overhead. We can't avoid that when we have no
hardware support at all. But already with older IOMMUs like Calgary and
GART we can at least avoid the world switch. And since, for example,
every 64-bit-capable AMD processor has a GART, we can make use of it.
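
For contrast, the pure hypercall variant has to leave the guest on every
map (and again on unmap and sync), roughly like this; HC_PVDMA_MAP is a
made-up number and kvm_hypercall3() only stands in for whatever hypercall
mechanism ends up being used:

#include <asm/kvm_para.h>
#include <asm/io.h>		/* virt_to_phys() */

#define HC_PVDMA_MAP	42	/* made-up hypercall number */

/*
 * Illustrative only: with hypercall-based pvdma every map/unmap/sync
 * costs a guest exit and re-entry on top of any copying.
 */
static dma_addr_t pvdma_hc_map_single(void *ptr, size_t size,
				      enum dma_data_direction dir)
{
	/* world switch: guest exit, host updates its IOMMU, guest re-entry */
	return kvm_hypercall3(HC_PVDMA_MAP, virt_to_phys(ptr), size, dir);
}

With Calgary or the GART the pool mapping is set up once when the guest
starts, so none of these exits show up on the I/O fast path.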

Joerg

--
| AMD Saxony Limited Liability Company & Co. KG
Operating | Wilschdorfer Landstr. 101, 01109 Dresden, Germany
System | Register Court Dresden: HRA 4896
Research | General Partner authorized to represent:
Center | AMD Saxony LLC (Wilmington, Delaware, US)
| General Manager of AMD Saxony LLC: Dr. Hans-R. Deppe, Thomas McCoy
