Re: [PATCH] drm/panfrost: DMA map all pages shared with the GPU
From: Steven Price
Date: Mon Oct 14 2019 - 11:55:35 EST
On 14/10/2019 16:46, Robin Murphy wrote:
> On 14/10/2019 16:16, Steven Price wrote:
>> Pages shared with the GPU are (often) not cache coherent with the CPU so
>> cache maintenance is required to flush the CPU's caches. This was
>> already done when mapping pages on fault, but wasn't previously done
>> when mapping a freshly allocated page.
>>
>> Fix this by moving the call to dma_map_sg() into mmu_map_sg(), ensuring
>> that it is always called when pages are mapped onto the GPU. Since
>> mmu_map_sg() can now fail, the code also has to handle an error
>> return.
>>
>> Not performing this cache maintenance can cause errors in the GPU output
>> (CPU caches are later flushed and overwrite the GPU output). In theory
>> it also allows the GPU (and by extension user space) to observe the
>> memory contents prior to sanitization.
>
> For the non-heap case, aren't the pages already supposed to be mapped by
> drm_gem_shmem_get_pages_sgt()?
Hmm, well yes - it looks like it *should* work - but I was definitely
seeing cache artefacts until I did this change... let me do some more
testing. It's possible that this is actually only affecting buffers
imported from another driver. Perhaps it's
drm_gem_shmem_prime_import_sg_table() that needs fixing.
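For reference, here is the kind of debug check I have in mind to narrow down
whether only imported buffers are affected. The struct layout follows the
in-tree driver, but the helper itself is made up and untested - just a
sketch, not part of the patch:

#include <linux/device.h>
#include <drm/drm_gem.h>
#include <drm/drm_gem_shmem_helper.h>

#include "panfrost_gem.h"

/* Hypothetical debug aid (not part of this patch): report whether the BO
 * being mapped was imported via dma-buf, to narrow down which allocation
 * path is missing the cache maintenance.
 */
static void panfrost_debug_log_bo_origin(struct panfrost_gem_object *bo)
{
        struct drm_gem_object *obj = &bo->base.base;

        if (obj->import_attach)
                dev_info(obj->dev->dev, "mapping imported BO %p\n", obj);
        else
                dev_info(obj->dev->dev, "mapping shmem-backed BO %p\n", obj);
}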
> (Hmm, maybe I should try hooking up the GPU SMMU on my Juno to serve as
> a cheeky DMA-API-mishap detector...)
Yes, I was wondering about that - it would certainly give some confidence
that there are no missing cases.
Steve
> Robin.
>
>> Fixes: f3ba91228e8e ("drm/panfrost: Add initial panfrost driver")
>> Signed-off-by: Steven Price <steven.price@xxxxxxx>
>> ---
>>  drivers/gpu/drm/panfrost/panfrost_mmu.c | 20 ++++++++++++--------
>>  1 file changed, 12 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
>> index bdd990568476..0495e48c238d 100644
>> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
>> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
>> @@ -248,6 +248,9 @@ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
>>          struct io_pgtable_ops *ops = mmu->pgtbl_ops;
>>          u64 start_iova = iova;
>>  
>> +        if (!dma_map_sg(pfdev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL))
>> +                return -EINVAL;
>> +
>>          for_each_sg(sgt->sgl, sgl, sgt->nents, count) {
>>                  unsigned long paddr = sg_dma_address(sgl);
>>                  size_t len = sg_dma_len(sgl);
>> @@ -275,6 +278,7 @@ int panfrost_mmu_map(struct panfrost_gem_object *bo)
>>          struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
>>          struct sg_table *sgt;
>>          int prot = IOMMU_READ | IOMMU_WRITE;
>> +        int ret;
>>  
>>          if (WARN_ON(bo->is_mapped))
>>                  return 0;
>> @@ -286,10 +290,12 @@ int panfrost_mmu_map(struct panfrost_gem_object *bo)
>>          if (WARN_ON(IS_ERR(sgt)))
>>                  return PTR_ERR(sgt);
>>  
>> -        mmu_map_sg(pfdev, bo->mmu, bo->node.start << PAGE_SHIFT, prot, sgt);
>> -        bo->is_mapped = true;
>> +        ret = mmu_map_sg(pfdev, bo->mmu, bo->node.start << PAGE_SHIFT,
>> +                         prot, sgt);
>> +        if (ret == 0)
>> +                bo->is_mapped = true;
>>  
>> -        return 0;
>> +        return ret;
>>  }
>>  
>>  void panfrost_mmu_unmap(struct panfrost_gem_object *bo)
>> @@ -503,12 +509,10 @@ int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, u64 addr)
>>          if (ret)
>>                  goto err_pages;
>>  
>> -        if (!dma_map_sg(pfdev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL)) {
>> -                ret = -EINVAL;
>> +        ret = mmu_map_sg(pfdev, bo->mmu, addr,
>> +                         IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
>> +        if (ret)
>>                  goto err_map;
>> -        }
>> -
>> -        mmu_map_sg(pfdev, bo->mmu, addr, IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
>>  
>>          bo->is_mapped = true;
>>  