Re: DMA using data buffer vmapped in kernel space
From: Lin Mac
Date: Tue Mar 09 2010 - 02:46:01 EST
2010/3/7 Russell King - ARM Linux <linux@xxxxxxxxxxxxxxxx>:
> On Sat, Mar 06, 2010 at 02:07:12PM +0100, Thomas Koeller wrote:
>> On Thursday, 4 March 2010 22:36:34, Russell King - ARM Linux wrote:
>> > Cache maintainence is done using virtual addresses for L1, and
>> > physical addresses for L2. There's the need for virtual addresses
>> > to be translatable to physical addresses, which is only true for
>> > the kernel direct mapped region (pages between PAGE_OFFSET and
>> > high_memory).
>>
>> Isn't the mapping created by vmap() sufficient for the virt/phys
>> translation? In which way is this case different from a buffer
>> passed in from user space, where the constituent pages are not
>> in the directly mapped kernel region either?
>
> No different.
>
> The requirement is that dma_map_single() is passed a virtual address
> in the kernel direct-mapped memory region, which is translatable using
> virt_to_phys() and friends.
I ran into a similar problem and simply allocated a new buffer,
copied the data into it, and then did the DMA. It works, but it seems
slow and wasteful.
I'm wondering whether I could translate the vmap'ed virtual address
to a physical address (I don't know how to do that yet), and then use
phys_to_virt() to get the corresponding virtual address in the
direct-mapped memory region?
Are there other possible ways?
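For example, is a per-page route like the following reasonable? This
is only a sketch (dev, vaddr, len are placeholders): vmalloc_to_page()
resolves a vmap()'ed address to its struct page, and dma_map_page()
maps that page, but I realize it does nothing about cache maintenance
on the extra alias the vmap() mapping creates.

	#include <linux/dma-mapping.h>
	#include <linux/mm.h>
	#include <linux/vmalloc.h>

	/*
	 * Sketch only: map one page of a vmap()'ed buffer by resolving it
	 * to its struct page first. Handles a single page-sized chunk.
	 */
	static dma_addr_t map_one_vmapped_page(struct device *dev, void *vaddr,
					       size_t len,
					       enum dma_data_direction dir)
	{
		struct page *page = vmalloc_to_page(vaddr);

		BUG_ON(!page);
		BUG_ON(offset_in_page(vaddr) + len > PAGE_SIZE);

		/* page_to_phys(page) would give the physical address directly */
		return dma_map_page(dev, page, offset_in_page(vaddr), len, dir);
	}

The result would still be checked with dma_mapping_error(), and
page_address() would give the lowmem virtual address for lowmem pages,
but I'm not sure this avoids the cache problem you describe.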
> Anything which requires a page table lookup to obtain the physical
> address is just not acceptable - that requires taking locks and other
> messy things, plus is grossly inefficient.
Best Regards,
Mac Lin