Re: [PATCH 2/2] drm/vgem: use normal cached mmap'ings
From: Daniel Vetter
Date: Fri Jul 19 2019 - 05:09:56 EST
On Tue, Jul 16, 2019 at 09:42:15AM -0700, Rob Clark wrote:
> From: Rob Clark <robdclark@xxxxxxxxxxxx>
>
> Since there is no real device associated with vgem, it is impossible to
> end up with appropriate dev->dma_ops, meaning that we have no way to
> invalidate the shmem pages allocated by vgem. So, at least on platforms
> without drm_clflush_pages(), we end up with corruption when cache lines
> from previous usage of vgem bo pages get evicted to memory.
>
> The only sane option is to use cached mappings.
>
> Signed-off-by: Rob Clark <robdclark@xxxxxxxxxxxx>
> ---
> Possibly we could dma_sync_*_for_{device,cpu}() on dmabuf attach/detach,
> although the ->gem_prime_{pin,unpin}() API isn't quite ideal for that as
> it is. And that doesn't really help for drivers that don't attach/
> detach for each use.
>
> But AFAICT vgem is mainly used for dmabuf testing, so maybe we don't
> need to care too much about use of cached mmap'ings.
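For concreteness, the dma_sync-on-attach idea floated above might look
roughly like the sketch below. This is an illustration only: the hook name,
the drm_vgem_gem_object field layout, and the idea of leaning on the
importer's device for the cache maintenance (since vgem has no dma_ops of
its own) are assumptions, not actual vgem code.

```c
/* Sketch: map (and thereby cache-sync) the pages against the *importer's*
 * device at attach time, since vgem itself has no real device or dma_ops.
 * All names here are illustrative.
 */
static struct sg_table *vgem_map_dma_buf(struct dma_buf_attachment *attach,
					 enum dma_data_direction dir)
{
	struct drm_vgem_gem_object *bo = attach->dmabuf->priv;
	struct sg_table *sgt;

	sgt = drm_prime_pages_to_sg(bo->pages, bo->base.size >> PAGE_SHIFT);
	if (IS_ERR(sgt))
		return sgt;

	/* dma_map_sg() on the importing device performs the CPU cache
	 * clean/invalidate that vgem cannot do for itself.
	 */
	if (!dma_map_sg(attach->dev, sgt->sgl, sgt->nents, dir)) {
		sg_free_table(sgt);
		kfree(sgt);
		return ERR_PTR(-ENOMEM);
	}
	return sgt;
}
```

As Rob notes, this only helps importers that actually attach/detach (or
map/unmap) around each use.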
Isn't this going to horribly break testing buffer sharing with SoC
devices? I'd assume they all expect a writecombine mapping to keep things
coherent?
Also, could we get away with this by simply extending drm_clflush_pages()
for those ARM platforms where we do have cache maintenance instructions?
Yes, I know that'll get people screaming; I'll shrug :-)
If all we need patch 1/2 for is this vgem patch, then the auditing needed
for patch 1 doesn't look appealing ...
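For reference, extending drm_clflush_pages() as suggested might look
something like the sketch below on 32-bit ARM. This is a hedged
illustration only: dmac_flush_range() is an arch-internal helper (and
arm64 has no equivalent exported to drivers), which is exactly why this
direction tends to get people screaming.

```c
/* Sketch only: clean + invalidate each page by virtual address range.
 * Whether drm_cache.c may legitimately call the arch cache maintenance
 * helpers is the controversial part.
 */
void drm_clflush_pages(struct page *pages[], unsigned long num_pages)
{
	unsigned long i;

	for (i = 0; i < num_pages; i++) {
		void *addr = kmap_atomic(pages[i]);

		dmac_flush_range(addr, addr + PAGE_SIZE);
		kunmap_atomic(addr);
	}
}
```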
-Daniel
>
> drivers/gpu/drm/vgem/vgem_drv.c | 7 ++-----
> 1 file changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index 11a8f99ba18c..ccf0c3fbd586 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -259,9 +259,6 @@ static int vgem_mmap(struct file *filp, struct vm_area_struct *vma)
> if (ret)
> return ret;
>
> - /* Keep the WC mmaping set by drm_gem_mmap() but our pages
> - * are ordinary and not special.
> - */
> vma->vm_flags = flags | VM_DONTEXPAND | VM_DONTDUMP;
> return 0;
> }
> @@ -382,7 +379,7 @@ static void *vgem_prime_vmap(struct drm_gem_object *obj)
> if (IS_ERR(pages))
> return NULL;
>
> - return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> + return vmap(pages, n_pages, 0, PAGE_KERNEL);
> }
>
> static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> @@ -411,7 +408,7 @@ static int vgem_prime_mmap(struct drm_gem_object *obj,
> fput(vma->vm_file);
> vma->vm_file = get_file(obj->filp);
> vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> - vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
> + vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>
> return 0;
> }
> --
> 2.21.0
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch