Re: [PATCH 5/9] dma-mapping: support highmem in the generic remap allocator

From: Marek Szyprowski
Date: Tue Dec 04 2018 - 03:38:10 EST


Hi All,

On 2018-11-30 20:05, Robin Murphy wrote:
> On 05/11/2018 12:19, Christoph Hellwig wrote:
>> By using __dma_direct_alloc_pages we can deal entirely with struct page
>> instead of having to derive a kernel virtual address.
>
> Simple enough :)
>
> Reviewed-by: Robin Murphy <robin.murphy@xxxxxxx>

This patch landed in linux-next yesterday and I've noticed that it
breaks the operation of many drivers. The change looked simple, but a
stupid bug managed to slip into the code. After a short investigation I
noticed that __dma_direct_alloc_pages() doesn't set dma_handle or zero
the allocated memory, while dma_direct_alloc_pages() did both. The other
difference is the lack of set_memory_decrypted() handling.
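
For reference, here is roughly what dma_direct_alloc_pages() does on top
of __dma_direct_alloc_pages() (a simplified sketch, not the verbatim
kernel/dma/direct.c code; the highmem and error paths are omitted):

/*
 * Simplified sketch only: __dma_direct_alloc_pages() just hands back
 * pages, while dma_direct_alloc_pages() additionally fills *dma_handle,
 * zeroes the buffer and undoes memory encryption where required.
 */
void *dma_direct_alloc_pages(struct device *dev, size_t size,
		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
{
	struct page *page;
	void *ret;

	page = __dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs);
	if (!page)
		return NULL;

	ret = page_address(page);
	if (force_dma_unencrypted())
		set_memory_decrypted((unsigned long)ret,
				     1 << get_order(size));
	memset(ret, 0, size);
	*dma_handle = phys_to_dma(dev, page_to_phys(page));

	return ret;
}

This is the part that arch_dma_alloc() no longer gets after switching to
__dma_direct_alloc_pages().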

The following patch fixes the issue, but maybe it would be better to fix
it in kernel/dma/direct.c instead (a rough sketch of that alternative
follows the diff):

diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c
index dcc82dd668f8..7765ddc56e4e 100644
--- a/kernel/dma/remap.c
+++ b/kernel/dma/remap.c
@@ -219,8 +219,14 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	ret = dma_common_contiguous_remap(page, size, VM_USERMAP,
 			arch_dma_mmap_pgprot(dev, PAGE_KERNEL, attrs),
 			__builtin_return_address(0));
-	if (!ret)
+	if (!ret) {
 		__dma_direct_free_pages(dev, size, page);
+		return ret;
+	}
+
+	*dma_handle = phys_to_dma(dev, page_to_phys(page));
+	memset(ret, 0, size);
+
 	return ret;
 }
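
To illustrate the kernel/dma/direct.c alternative mentioned above, a
purely hypothetical sketch (not an actual patch; the real CMA-aware and
GFP retry logic of __dma_direct_alloc_pages() is left out) could let the
allocator fill in the handle itself, so every caller gets a valid
dma_handle:

/*
 * Hypothetical sketch only: have __dma_direct_alloc_pages() set
 * *dma_handle, since it already receives the pointer.  Callers would
 * still be responsible for zeroing and for set_memory_decrypted().
 */
struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
{
	struct page *page;

	/* placeholder for the existing allocation logic */
	page = alloc_pages_node(dev_to_node(dev), gfp, get_order(size));
	if (!page)
		return NULL;

	*dma_handle = phys_to_dma(dev, page_to_phys(page));
	return page;
}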

>
>> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
>> ---
>>  kernel/dma/remap.c | 14 +++++++-------
>>  1 file changed, 7 insertions(+), 7 deletions(-)
>>
>> diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c
>> index bc42766f52df..8f1fca34b894 100644
>> --- a/kernel/dma/remap.c
>> +++ b/kernel/dma/remap.c
>> @@ -196,7 +196,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
>>  		gfp_t flags, unsigned long attrs)
>>  {
>>  	struct page *page = NULL;
>> -	void *ret, *kaddr;
>> +	void *ret;
>>  
>>  	size = PAGE_ALIGN(size);
>>  
>> @@ -208,10 +208,9 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
>>  		return ret;
>>  	}
>>  
>> -	kaddr = dma_direct_alloc_pages(dev, size, dma_handle, flags, attrs);
>> -	if (!kaddr)
>> +	page = __dma_direct_alloc_pages(dev, size, dma_handle, flags, attrs);
>> +	if (!page)
>>  		return NULL;
>> -	page = virt_to_page(kaddr);
>>  
>>  	/* remove any dirty cache lines on the kernel alias */
>>  	arch_dma_prep_coherent(page, size);
>> @@ -221,7 +220,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
>>  			arch_dma_mmap_pgprot(dev, PAGE_KERNEL, attrs),
>>  			__builtin_return_address(0));
>>  	if (!ret)
>> -		dma_direct_free_pages(dev, size, kaddr, *dma_handle, attrs);
>> +		__dma_direct_free_pages(dev, size, page);
>>  	return ret;
>>  }
>>  
>> @@ -229,10 +228,11 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
>>  		dma_addr_t dma_handle, unsigned long attrs)
>>  {
>>  	if (!dma_free_from_pool(vaddr, PAGE_ALIGN(size))) {
>> -		void *kaddr = phys_to_virt(dma_to_phys(dev, dma_handle));
>> +		phys_addr_t phys = dma_to_phys(dev, dma_handle);
>> +		struct page *page = pfn_to_page(__phys_to_pfn(phys));
>>  
>>  		vunmap(vaddr);
>> -		dma_direct_free_pages(dev, size, kaddr, dma_handle, attrs);
>> +		__dma_direct_free_pages(dev, size, page);
>>  	}
>>  }
>>  
>
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland