RE: [PATCH V2 4/4] misc: vop: mapping kernel memory to user space as noncached
From: Sherry Sun
Date: Mon Oct 12 2020 - 22:42:09 EST
Hi David, thanks for your information.
Hi Christoph, please see my comments below.
> Subject: RE: [PATCH V2 4/4] misc: vop: mapping kernel memory to user space
> as noncached
>
> From: Christoph Hellwig
> > Sent: 29 September 2020 11:29
> ...
> > You can't call remap_pfn_range on memory returned from
> > dma_alloc_coherent (which btw is not marked uncached on many
> platforms).
> >
> > You need to use the dma_mmap_coherent helper instead.
>
I tried to use the dma_mmap_coherent helper here, but I hit the same problem David describes below.
User space calls mmap() once to map the device page and all of the vrings together:
va = mmap(NULL, MIC_DEVICE_PAGE_END + vr_size * num_vq, PROT_READ, MAP_SHARED, fd, 0);
However, the physical addresses of the device page and the individual vrings are not contiguous, which is
why the driver previously called remap_pfn_range() several times, once per buffer. When I switch to
dma_mmap_coherent(), it returns an error because vma_pages() (the number of pages user space asked to map)
is larger than the size of each individual buffer we map per call.
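Roughly, the existing mapping loop looks like this (a simplified sketch only, not the exact vop code;
buf[], nr_bufs and the pfn/size fields are illustrative names):

	/*
	 * Each buffer is physically contiguous on its own, but the buffers
	 * are not contiguous with each other, so they are mapped one by one
	 * at increasing user addresses.
	 */
	unsigned long uaddr = vma->vm_start;
	int i, err = 0;

	for (i = 0; i < nr_bufs; i++) {
		err = remap_pfn_range(vma, uaddr, buf[i].pfn,
				      buf[i].size, vma->vm_page_prot);
		if (err)
			break;
		uaddr += buf[i].size;
	}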
David suggests below that we could temporarily modify vma->vm_start before each dma_mmap_coherent() call,
to get past the vma_pages() check and still map the multiple discontiguous buffers.
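If I understand the suggestion correctly, it would look something like the following (untested sketch,
assuming the vma_pages() check in the dma_mmap_coherent() path; chunk[], nr_chunks and vdev->dev are
illustrative names):

	/*
	 * Temporarily shrink the VMA so that vma_pages() matches the chunk
	 * handed to dma_mmap_coherent(), then restore vm_start/vm_end and
	 * vm_pgoff afterwards.
	 */
	unsigned long orig_start = vma->vm_start;
	unsigned long orig_end = vma->vm_end;
	unsigned long orig_pgoff = vma->vm_pgoff;
	unsigned long uaddr = vma->vm_start;
	int i, err = 0;

	for (i = 0; i < nr_chunks; i++) {
		vma->vm_start = uaddr;
		vma->vm_end = uaddr + chunk[i].size;
		vma->vm_pgoff = 0;	/* map each buffer from its start */

		err = dma_mmap_coherent(vdev->dev, vma, chunk[i].cpu_addr,
					chunk[i].dma_handle, chunk[i].size);
		if (err)
			break;
		uaddr += chunk[i].size;
	}

	vma->vm_start = orig_start;
	vma->vm_end = orig_end;
	vma->vm_pgoff = orig_pgoff;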
Do you have any suggestions?
Best regards
Sherry
> Hmmmm. I've a driver that does that.
> Fortunately it only has to work on x86 where it doesn't matter.
> However I can't easily convert it.
> The 'problem' is that the mmap() request can cover multiple dma buffers and
> need not start at the beginning of one.
>
> Basically we have a PCIe card that has an inbuilt iommu to convert internal
> 32bit addresses to 64bit PCIe ones.
> This has 512 16kB pages.
> So we do a number of dma_alloc_coherent() for 16k blocks.
> The user process then does an mmap() for part of the buffer.
> This request is 4k aligned so we do multiple remap_pfn_range() calls to map
> the disjoint physical (and kernel virtual) buffers into contiguous user memory.
>
> So both ends see contiguous addresses even though the physical addresses
> are non-contiguous.
>
> I guess I could modify the vm_start address and then restore it at the end.
>
> I found this big discussion:
> https://lore.kernel.org/patchwork/patch/351245/
> about these functions.
>
> The bit about VIPT caches is problematic.
> I don't think you can change the kernel address during mmap.
> What you need to do is defer allocating the user address until you know the
> kernel address.
> Otherwise you get into problems if you try to mmap the same memory into
> two processes.
> This is a general problem even for mmap() of files.
> ISTR SYSV on some sparc systems having to use uncached maps.
>
> If you might want to mmap two kernel buffers (dma or not) into adjacent
> user addresses then you need some way of allocating the second buffer to
> follow the first one in the VIVT cache.
>
> David
>