Re: [PATCH 2/3] drm/ingenic: Update code to mmap GEM buffers cached

From: Christoph Hellwig
Date: Thu Oct 01 2020 - 01:32:47 EST


On Wed, Sep 30, 2020 at 07:16:43PM +0200, Paul Cercueil wrote:
> The DMA API changed at the same time commit 37054fc81443 ("gpu/drm:
> ingenic: Add option to mmap GEM buffers cached") was added. Rework the
> code to work with the new DMA API.
>
> Signed-off-by: Paul Cercueil <paul@xxxxxxxxxxxxxxx>
> ---
> drivers/gpu/drm/ingenic/ingenic-drm-drv.c | 24 +++++++----------------
> 1 file changed, 7 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
> index 0225dc1f5eb8..07a1da7266e4 100644
> --- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
> +++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
> @@ -526,12 +526,10 @@ void ingenic_drm_sync_data(struct device *dev,
> struct drm_plane_state *state)
> {
> const struct drm_format_info *finfo = state->fb->format;
> - struct ingenic_drm *priv = dev_get_drvdata(dev);
> struct drm_atomic_helper_damage_iter iter;
> unsigned int offset, i;
> struct drm_rect clip;
> dma_addr_t paddr;
> - void *addr;
>
> if (!ingenic_drm_cached_gem_buf)
> return;
> @@ -541,12 +539,11 @@ void ingenic_drm_sync_data(struct device *dev,
> drm_atomic_for_each_plane_damage(&iter, &clip) {
> for (i = 0; i < finfo->num_planes; i++) {
> paddr = drm_fb_cma_get_gem_addr(state->fb, state, i);
> - addr = phys_to_virt(paddr);

Note on the old code: drm_fb_cma_get_gem_addr returns a dma_addr_t, so
calling phys_to_virt() on it was already pretty broken.
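
For the sync path the streaming DMA API takes the dma_addr_t directly, so
there is no need for a kernel virtual address here at all.  A rough sketch
of what I'd expect, assuming each damage rectangle maps to a contiguous
byte range per plane (the offset/length math below is illustrative, not
the driver's exact computation):

/* Illustrative only: write back CPU caches for the damaged region so the
 * scanout engine sees the new pixels; the dma_addr_t from
 * drm_fb_cma_get_gem_addr() is used directly, no phys_to_virt().
 */
dma_addr_t daddr = drm_fb_cma_get_gem_addr(state->fb, state, i);
unsigned int start = clip.y1 * state->fb->pitches[i];
size_t len = (clip.y2 - clip.y1) * state->fb->pitches[i];

dma_sync_single_for_device(dev, daddr + start, len, DMA_TO_DEVICE);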

> @@ -766,14 +763,6 @@ static int ingenic_drm_gem_mmap(struct drm_gem_object *obj,
> struct vm_area_struct *vma)
> {
> struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
>
> /*
> * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
> @@ -784,12 +773,13 @@ static int ingenic_drm_gem_mmap(struct drm_gem_object *obj,
> vma->vm_pgoff = 0;
> vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>
> + if (!ingenic_drm_cached_gem_buf)
> + vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
>
> + return remap_pfn_range(vma, vma->vm_start,
> + cma_obj->paddr >> PAGE_SHIFT,
> + vma->vm_end - vma->vm_start,
> + vma->vm_page_prot);

Both ->vaddr and ->paddr come from dma_alloc_wc as far as I can tell,
and despite the confusing name ->paddr is a dma_addr_t, not a physical
address, so feeding it to remap_pfn_range can't work at all.  If you
allocate memory using dma_alloc_wc you need to map it to userspace
using dma_mmap_wc.
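
Something along these lines is what the mmap path should end up doing
instead of the remap_pfn_range() call; a rough sketch only, assuming the
struct device the buffer was allocated from is reachable here (the DRM
CMA helpers use obj->dev->dev for this):

/* Sketch: map the buffer with the same API that allocated it.  Both the
 * kernel virtual address (->vaddr) and the DMA address (->paddr) come
 * from dma_alloc_wc(), so let dma_mmap_wc() set up the userspace
 * mapping instead of remap_pfn_range().
 */
return dma_mmap_wc(obj->dev->dev, vma, cma_obj->vaddr, cma_obj->paddr,
		   vma->vm_end - vma->vm_start);

(For the cached case this series is after, the allocation side would
presumably have to change to match as well, but that's beyond this
sketch.)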