Hi Christoph,
On Tue, 3 Nov 2020 at 18:50, Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
On Mon, Nov 02, 2020 at 10:06:49PM +0000, Paul Cercueil wrote:
This function can be used by drivers that need to mmap dumb buffers
created with non-coherent backing memory.
Signed-off-by: Paul Cercueil <paul@xxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 39 ++++++++++++++++++++++++++++
 include/drm/drm_gem_cma_helper.h     |  2 ++
 2 files changed, 41 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 3bdd67795e20..4ed63f4896bd 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -387,6 +387,45 @@ int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
}
EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
+/**
+ * drm_gem_cma_mmap_noncoherent - memory-map a CMA GEM object with
+ * non-coherent cache attribute
+ * @filp: file object
+ * @vma: VMA for the area to be mapped
+ *
+ * Just like drm_gem_cma_mmap, but for a GEM object backed by non-coherent
+ * memory.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_cma_mmap_noncoherent(struct file *filp, struct vm_area_struct *vma)
+{
+ struct drm_gem_cma_object *cma_obj;
+ int ret;
+
+ ret = drm_gem_mmap(filp, vma);
+ if (ret)
+ return ret;
+
+ cma_obj = to_drm_gem_cma_obj(vma->vm_private_data);
+
+ /*
+ * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
+ * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
+ * the whole buffer.
+ */
+ vma->vm_flags &= ~VM_PFNMAP;
+ vma->vm_pgoff = 0;
+ vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+
+ return remap_pfn_range(vma, vma->vm_start,
+ cma_obj->paddr >> PAGE_SHIFT,
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot);
Per patch 1, cma_obj->paddr is the DMA address, while remap_pfn_range()
expects a physical address. This does not work.
Ok, what would be the correct way to mmap the buffer non-coherently then?
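For the sake of discussion, here is a rough sketch of one possible
alternative, under the assumption that the buffer was allocated with
dma_alloc_noncoherent() (so cma_obj->vaddr is a linear-map kernel address)
and that a dma_mmap_pages()-style helper is available. The function name
below is made up for illustration; this is not meant as the final answer:

static int drm_gem_cma_mmap_noncoherent_sketch(struct file *filp,
						struct vm_area_struct *vma)
{
	struct drm_gem_cma_object *cma_obj;
	int ret;

	ret = drm_gem_mmap(filp, vma);
	if (ret)
		return ret;

	cma_obj = to_drm_gem_cma_obj(vma->vm_private_data);

	/* Undo the PFNMAP / fake-offset setup done by drm_gem_mmap(). */
	vma->vm_flags &= ~VM_PFNMAP;
	vma->vm_pgoff = 0;
	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);

	/*
	 * Let the DMA layer build the mapping from the struct page
	 * backing the allocation, instead of deriving a PFN from the
	 * DMA address.
	 */
	return dma_mmap_pages(cma_obj->base.dev->dev, vma,
			      vma->vm_end - vma->vm_start,
			      virt_to_page(cma_obj->vaddr));
}

The idea would be to avoid converting a dma_addr_t into a PFN altogether,
and let the DMA mapping code derive the user mapping from the kernel
virtual address / struct page of the non-coherent allocation.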