Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses
From: Boris Ostrovsky
Date: Thu Jul 15 2021 - 12:59:19 EST
On 7/15/21 3:39 AM, Roman Skakun wrote:
>> This looks like it wasn't picked up? Should it go in rc1?
> Hi, Konrad!
>
> This looks like an unambiguous bug, and should be in rc1.
Looks like you didn't copy Christoph, which could be part of the problem. Adding him.
-boris
>
> Cheers!
>
> On Wed, Jul 14, 2021 at 03:15, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>:
>> On Tue, Jun 22, 2021 at 04:34:14PM +0300, Roman Skakun wrote:
>>> This commit fixes the incorrect conversion from cpu_addr to a
>>> page address for the case where the virtual address was
>>> allocated in the vmalloc range.
>>> In that case virt_to_page() cannot convert the address
>>> properly and returns a wrong page address.
>>>
>>> Detect such cases and obtain the page address using
>>> vmalloc_to_page() instead.
>>>
>>> Signed-off-by: Roman Skakun <roman_skakun@xxxxxxxx>
>>> Reviewed-by: Andrii Anisov <andrii_anisov@xxxxxxxx>
>>> ---
>>> Hey!
>>> Thanks for the suggestions, Christoph!
>>> I updated the patch according to your advice.
>>> But I'm surprised that nobody has caught this problem in the
>>> common code before. It seems a bit strange to me.
>> This looks like it wasn't picked up? Should it go in rc1?
>>>
>>> kernel/dma/ops_helpers.c | 12 ++++++++++--
>>> 1 file changed, 10 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
>>> index 910ae69cae77..782728d8a393 100644
>>> --- a/kernel/dma/ops_helpers.c
>>> +++ b/kernel/dma/ops_helpers.c
>>> @@ -5,6 +5,14 @@
>>> */
>>> #include <linux/dma-map-ops.h>
>>>
>>> +static struct page *cpu_addr_to_page(void *cpu_addr)
>>> +{
>>> +	if (is_vmalloc_addr(cpu_addr))
>>> +		return vmalloc_to_page(cpu_addr);
>>> +	else
>>> +		return virt_to_page(cpu_addr);
>>> +}
>>> +
>>> /*
>>> * Create scatter-list for the already allocated DMA buffer.
>>> */
>>> @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
>>> 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>>> 		unsigned long attrs)
>>> {
>>> -	struct page *page = virt_to_page(cpu_addr);
>>> +	struct page *page = cpu_addr_to_page(cpu_addr);
>>> 	int ret;
>>>
>>> 	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
>>> @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
>>> 		return -ENXIO;
>>>
>>> 	return remap_pfn_range(vma, vma->vm_start,
>>> -			page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
>>> +			page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff,
>>> 			user_count << PAGE_SHIFT, vma->vm_page_prot);
>>> #else
>>> 	return -ENXIO;
>>> --
>>> 2.25.1
>>>
>
>
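
For context, a minimal sketch (not taken from the patch) of the conversion it performs. The helper name buffer_pfn() below is made up purely for illustration; the kernel interfaces used (is_vmalloc_addr(), vmalloc_to_page(), virt_to_page(), page_to_pfn()) are the ones referenced in the patch.

/*
 * Minimal sketch, assuming a kernel translation unit: buffer_pfn() is a
 * hypothetical helper used only to illustrate why the vmalloc check is
 * needed before converting a CPU address to a PFN.
 */
#include <linux/mm.h>		/* is_vmalloc_addr(), virt_to_page(), page_to_pfn() */
#include <linux/vmalloc.h>	/* vmalloc_to_page() */

static unsigned long buffer_pfn(void *cpu_addr)
{
	/*
	 * virt_to_page() is plain arithmetic over the kernel's linear
	 * mapping, so it yields a bogus struct page for an address in the
	 * vmalloc range; such addresses have to be resolved through the
	 * vmalloc page tables via vmalloc_to_page() instead.
	 */
	struct page *page = is_vmalloc_addr(cpu_addr) ?
		vmalloc_to_page(cpu_addr) : virt_to_page(cpu_addr);

	return page_to_pfn(page);
}

The mmap path in the patch is where a wrong conversion is most visible: dma_common_mmap() feeds page_to_pfn() of the converted page into remap_pfn_range(), so a bogus struct page maps the wrong physical pages into userspace.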