Re: [PATCH 2/3] drm/etnaviv: fix dma configuration of the virtual device

From: Robin Murphy
Date: Thu Aug 26 2021 - 16:15:23 EST


On 2021-08-26 16:17, Lucas Stach wrote:
> On Thursday, 2021-08-26 at 16:00 +0100, Robin Murphy wrote:
> > On 2021-08-26 13:10, Michael Walle wrote:
> > > The DMA configuration of the virtual device is inherited from the first
> > > actual etnaviv device. Unfortunately, this doesn't work with an IOMMU:
> > >
> > > [ 5.191008] Failed to set up IOMMU for device (null); retaining platform DMA ops
> > >
> > > This is because there is no iommu_group associated with the device. The
> > > group is set in iommu_group_add_device(), which is eventually called by
> > > device_add() via the platform bus:
> > >
> > >   device_add()
> > >     blocking_notifier_call_chain()
> > >       iommu_bus_notifier()
> > >         iommu_probe_device()
> > >           __iommu_probe_device()
> > >             iommu_group_get_for_dev()
> > >               iommu_group_add_device()
> > >
> > > Move of_dma_configure() into the probe function, which is called after
> > > device_add(). Normally, the platform code will already call it itself
> > > if .of_node is set. Unfortunately, this isn't the case here.
> > >
> > > Also move the DMA mask assignments to probe() to keep all DMA-related
> > > settings together.
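
(For reference, the change being discussed boils down to something like
the below - untested sketch only; etnaviv_of_first_node() is a stand-in
for however the driver actually picks the first GPU node, and the mask
value is merely illustrative:)

static int etnaviv_pdev_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct device_node *np = etnaviv_of_first_node();
	int ret;

	/*
	 * device_add() has already run at this point, so the IOMMU bus
	 * notifier has had a chance to attach a group, and
	 * of_dma_configure() can pick up the IOMMU ops.
	 */
	if (np) {
		ret = of_dma_configure(dev, np, true);
		if (ret)
			return ret;
	}

	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
	if (ret)
		return ret;

	/* ...component matching and master registration as before... */
	return 0;
}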

> > I assume the driver must already keep track of the real GPU platform
> > device in order to map registers, request interrupts, etc. correctly -
> > can't it also correctly use that device for DMA API calls and avoid the
> > need for these shenanigans altogether?

> Not without a bigger rework. There are still quite a few midlayer
> issues in DRM, where dma-buf imports are dma-mapped and cached via the
> virtual DRM device instead of the real GPU device. Also etnaviv is able
> to coalesce multiple Vivante GPUs in a single system under one virtual
> DRM device, which is used on i.MX6, where the 2D and 3D GPUs are
> separate peripherals but have the same DMA constraints.
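
(That aggregation is the standard component pattern, i.e. roughly the
following - a simplified sketch from memory, where compare_of is the
usual OF-node comparison helper and error handling is elided:)

	struct component_match *match = NULL;
	struct device_node *np;
	int ret;

	/* One virtual platform device binds every available GPU core. */
	for_each_compatible_node(np, NULL, "vivante,gc") {
		if (of_device_is_available(np))
			drm_of_component_match_add(dev, &match,
						   compare_of, np);
	}
	ret = component_master_add_with_match(dev, &etnaviv_master_ops,
					      match);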

Sure, I wouldn't expect it to be trivial to fix properly, but I wanted to point out that this is essentially a hack, relying on an implicit side-effect of of_dma_configure() which is already slated for removal. As such, I for one am not going to be too sympathetic if it stops working in future.

Furthermore, even today it doesn't work in general - it might be OK for LS1028A with a single GPU block behind an SMMU, but as soon as you have multiple GPU blocks with distinct SMMU StreamIDs, or behind different IOMMU instances, then you're stuffed again.

Although in fact I think it's also broken even for LS1028A, since AFAICS there's no guarantee that the relevant SMMU instance will actually be probed, or the SMMU driver even loaded, when etnaviv_pdev_probe() runs.

> Effectively we would need to handle N devices for the dma-mapping in a
> lot of places instead of only dealing with the one virtual DRM device.
> It would probably be the right thing to do anyway, but it's not
> something that can be changed short-term. I'm also not yet sure about
> the performance implications, as we might run into some cache
> maintenance bottlenecks if we dma-synchronize buffers to multiple real
> devices instead of doing it a single time with the virtual DRM device.
> I know, I know, this has a lot of assumptions baked in that could fall
> apart if someone builds a SoC with multiple Vivante GPUs that have
> differing DMA constraints, but up until now hardware designers have not
> been *that* crazy, fortunately.
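
(For illustration, per-buffer maintenance would then look something
like this, with the cost scaling with the number of cores - purely
hypothetical, the field names are made up:)

	for (i = 0; i < priv->num_gpus; i++)
		dma_sync_sgtable_for_device(priv->gpu[i]->dev, sgt,
					    DMA_TO_DEVICE);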

I'm not too familiar with the component stuff, but would it be viable to just have etnaviv_gpu_platform_probe() set up the first GPU which comes along as the master component and fundamental DRM device, then treat any subsequent ones as subcomponents as before? That would at least stand to be more robust in terms of obviating the of_dma_configure() hack (only actual bus code should ever be calling that), even if it won't do anything for the multiple IOMMU mapping or differing DMA constraints problems.
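
Something like the below, handwaving and entirely untested -
etnaviv_master_dev is made up, gpu_ops/etnaviv_master_ops as in the
current driver:

static struct device *etnaviv_master_dev;

static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
{
	struct component_match *match = NULL;

	/* Any core after the first just registers as a component... */
	if (etnaviv_master_dev)
		return component_add(&pdev->dev, &gpu_ops);

	/* ...while the first becomes the master and DRM device parent. */
	etnaviv_master_dev = &pdev->dev;
	/* (add the *other* GPU nodes to 'match' much as the virtual
	 * device does today) */
	return component_master_add_with_match(&pdev->dev,
					       &etnaviv_master_ops, match);
}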

Thanks,
Robin.