Re: [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field

From: Boris Brezillon
Date: Wed Aug 02 2023 - 05:06:38 EST


On Wed, 2 Aug 2023 04:31:52 +0200
Danilo Krummrich <dakr@xxxxxxxxxx> wrote:

> On 7/31/23 15:35, Boris Brezillon wrote:
> > +Danilo, to confirm my understanding of the gpuva remap operation is
> > correct.
>
> Your understanding is correct.
>
> Unfortunately, re-mapping things has such implications.
>
> I'm currently working on tracking external GEM objects in the GPUVA
> manager, where, ideally, you'd want to add the extobj to the VM when
> the first mapping backed by this GEM is created, and remove it when
> the last mapping backed by this GEM is removed. Hence, extobjs need to
> be ref-counted based on how many mappings they back.

Uh, right. I went for a much simpler (but also less efficient) approach
where I track things at the mapping level instead of the GEM level: my
panthor_vma object, which inherits from drm_gpuva, has a list node so
it can be inserted in a shared_bos list tracked at the VM level. The
downside is that we end up acquiring resv locks multiple times and
reserving multiple fence slots if the same shared GEM is mapped
multiple times. With the DRM_EXEC_IGNORE_DUPLICATES flag passed to
drm_exec, that works, but it might not be ideal if we expect shared BOs
to be mapped multiple times in the same VM.
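
Roughly, something like this (just a sketch to illustrate; the struct,
field and function names are made up, and the drm_exec bits follow my
reading of the API that just landed, so exact signatures might differ;
error unwinding/drm_exec_fini() omitted for brevity):

struct panthor_vm {
	struct list_head shared_bos;	/* all VMAs backed by shared BOs */
	/* ... */
};

struct panthor_vma {
	struct drm_gpuva base;			/* inherits from drm_gpuva */
	struct list_head shared_bos_node;	/* entry in vm->shared_bos */
};

static int panthor_vm_lock_shared_bos(struct panthor_vm *vm,
				      struct drm_exec *exec)
{
	struct panthor_vma *vma;
	int ret;

	drm_exec_init(exec, DRM_EXEC_IGNORE_DUPLICATES);
	drm_exec_until_all_locked(exec) {
		list_for_each_entry(vma, &vm->shared_bos, shared_bos_node) {
			/* The same GEM can show up here several times:
			 * duplicates are tolerated thanks to
			 * DRM_EXEC_IGNORE_DUPLICATES, but each entry
			 * still reserves its own fence slot.
			 */
			ret = drm_exec_prepare_obj(exec, vma->base.gem.obj, 1);
			drm_exec_retry_on_contention(exec);
			if (ret)
				return ret;
		}
	}

	return 0;
}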

>
> However, when re-mapping such a mapping, the reference count might
> temporarily drop to 0, at which point the slot in the data structure
> tracking the extobj is cleaned up and needs to be re-allocated. Of
> course, we could just hold an extra reference while re-mapping, or for
> the whole transaction (job), but this would make the API kinda bulky.

With things happening in the dma-signaling path, we'd need to
pre-allocate this shared-bo container object anyway, because we can't
assume one will be available by the time we get to run the VM
operation. So I think it's safe to assume that, even if the unmap part
of the remap operation drops the last ref on this container object,
when you get to map the same BO again, you'll have another container to
play with. It's just a matter of pre-allocating one more thing when
bo_is_shared==true && op==map, I think.
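
Something along these lines (again, just a sketch; bo_is_shared(),
vm_find_extobj(), panthor_vm_op and friends are placeholders, not real
code):

struct vm_extobj {
	struct drm_gem_object *obj;
	refcount_t refcnt;		/* one ref per mapping backed by obj */
	struct list_head vm_node;	/* entry in the VM-level extobj list */
};

/* Called at submit time, where we're still allowed to allocate. */
static int op_prealloc_extobj(struct panthor_vm_op *op)
{
	if (op->base.op != DRM_GPUVA_OP_MAP || !bo_is_shared(op->obj))
		return 0;

	/* Worst case, the unmap half of a remap drops the last ref on
	 * the container currently tracking this BO; the map half then
	 * consumes this one. If it ends up unused, we free it once the
	 * job completes.
	 */
	op->new_extobj = kzalloc(sizeof(*op->new_extobj), GFP_KERNEL);
	return op->new_extobj ? 0 : -ENOMEM;
}

/* Called from the run_job()/dma-signaling path: never allocates. */
static void vm_track_extobj(struct panthor_vm *vm, struct panthor_vm_op *op)
{
	struct vm_extobj *extobj = vm_find_extobj(vm, op->obj);

	if (extobj) {
		refcount_inc(&extobj->refcnt);
		return;
	}

	/* Guaranteed non-NULL by op_prealloc_extobj(). */
	extobj = op->new_extobj;
	op->new_extobj = NULL;
	extobj->obj = op->obj;
	refcount_set(&extobj->refcnt, 1);
	list_add(&extobj->vm_node, &vm->extobjs);
}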