On Wednesday, August 6, 2008 7:32 am Prarit Bhargava wrote:
>> You can't kmalloc pci_dev or setup some trivial values. You need to
>> use a proper value. The pci code does it for us.

> Oops -- I meant struct device, not struct pci_dev.
> Anyway, Jesse -- is this true? I can no longer do something like:
>
> static struct device junk_dev = {
>         .bus_id = "junk device",
>         .coherent_dma_mask = 0xffffffff,
>         .dma_mask = &junk_dev.coherent_dma_mask,
> };
>
> And then use that as the device target for dma_alloc_coherent? AFAIK,
> that has always worked for me.
It gets dangerous since platforms are in control of some pci_dev and dev fields, and if they don't get initialized you can get into trouble.
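For context, the pattern being discussed boils down to something like this (purely illustrative -- the size is made up, junk_alloc_test() is a name I just invented, and error handling is trimmed):

static int junk_alloc_test(void)
{
        dma_addr_t handle;
        void *buf;

        /* hand the static junk_dev above straight to the DMA API */
        buf = dma_alloc_coherent(&junk_dev, 0x3820, &handle, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;

        /* ... use buf / handle ... */

        dma_free_coherent(&junk_dev, 0x3820, buf, handle);
        return 0;
}

Nothing ever fills in dma_parms or the other fields the platform/PCI code normally sets up for real devices, which is where the trouble starts.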
>> Calgary IOMMU has the same code. New AMD IOMMU has the same code too.

> Then they don't handle the above problem and are broken when
> dma_get_seg_boundary() returns 0xffffffff and will require patches.
>
> /me hasn't tried out Calgary or AMD IOMMU.
Would be good to find someone to do some testing on one of those platforms...
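For anyone following along, the 0xffffffff above is just the fallback in dma_get_seg_boundary() when a device has no dma_parms; from memory the helper looks roughly like this (a paraphrase, not a verbatim copy of the header):

static inline unsigned long dma_get_seg_boundary(struct device *dev)
{
        /* hand-rolled devices like junk_dev never set dma_parms,
         * so they always see the 0xffffffff default */
        return dev->dma_parms ?
                dev->dma_parms->segment_boundary_mask : 0xffffffff;
}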
>>> Maybe I'm missing something -- what implies size has to be a power of
>>> two?

>> Yes, see iommu_area_alloc().

> /me looks and still doesn't see where the size passed into
> gart_map_simple() must be a power of two. ... and if that was the case,
> shouldn't we be failing all the time? I mean, I've seen values passed
> into pci_alloc_consistent like 0x3820 -- clearly not a power of 2.
>
> iommu_area_alloc() deals with pages, not actual sizes as
> gart_map_simple() does.
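To make the pages vs. bytes point concrete, the arithmetic as I understand it is just this (illustrative numbers, with the rounding open-coded rather than using the kernel helpers):

        size_t size = 0x3820;           /* bytes; not a power of two */
        unsigned long npages;

        /* iommu_area_alloc() hands out page-sized slots, so the byte
         * count is rounded up to whole pages before it is used */
        npages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;  /* 4 pages with 4K pages */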
> If anything, I would make this simple fix:
>
>         dma_addr_t map = dma_map_area(dev, paddr, size, dir, size - 1);
>
> should be
>
>         dma_addr_t map = dma_map_area(dev, paddr, size, dir, size);
>
> because after my patch we round up the mask argument to get the correct
> alignment to # of pages anyway.
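And to spell out that rounding with the same 0x3820 example (mask_to_pages() is a made-up name just to show the arithmetic, not anything from the patch):

static unsigned long mask_to_pages(unsigned long mask)
{
        /* round a byte-granular mask up to whole pages */
        return (mask + PAGE_SIZE - 1) >> PAGE_SHIFT;
}

With 4K pages, mask_to_pages(0x3820) == mask_to_pages(0x381f) == 4, so for a request like that one the trailing -1 doesn't change the alignment at all.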
Feel like respinning with a full changelog against my for-linus branch? Maybe you can convince Tomonori-san this time. :)
Jesse
--