Re: [RFC] drm/scheduler: Unwrap job dependencies

From: Christian König
Date: Tue Dec 05 2023 - 10:58:27 EST


On 05.12.23 at 16:41, Rob Clark wrote:
On Mon, Dec 4, 2023 at 10:46 PM Christian König
<christian.koenig@xxxxxxx> wrote:
On 04.12.23 at 22:54, Rob Clark wrote:
On Thu, Mar 23, 2023 at 2:30 PM Rob Clark <robdclark@xxxxxxxxx> wrote:
[SNIP]
So, this patch turns out to blow up spectacularly with dma_fence
refcount underflows when I enable DRIVER_SYNCOBJ_TIMELINE... I think
because it starts unwrapping fence chains, possibly in parallel with
fence signaling on the retire path. Is it supposed to be permissible
to unwrap a fence chain concurrently?
The DMA-fence chain object and helper functions were designed so that
concurrent accesses to all elements are always possible.

See dma_fence_chain_walk() and dma_fence_chain_get_prev(), for example:
dma_fence_chain_walk() starts with a reference to the current fence (the
anchor of the walk) and tries to grab an up-to-date reference on the
previous fence in the chain. Only after that reference is successfully
acquired do we drop the reference to the anchor where we started.
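
As a rough sketch of that pattern (purely illustrative, not code from
this patch; the loop body is made up):

#include <linux/dma-fence.h>
#include <linux/dma-fence-chain.h>
#include <linux/printk.h>

/*
 * dma_fence_chain_for_each() takes its own reference on each element,
 * and dma_fence_chain_walk() only drops the previous reference after
 * the next one has been acquired, so concurrent signaling/release is
 * safe.
 */
static void example_walk_chain(struct dma_fence *head)
{
        struct dma_fence *iter;

        dma_fence_chain_for_each(iter, head) {
                /* Contained fence, or iter itself if not a chain node. */
                struct dma_fence *f = dma_fence_chain_contained(iter);

                pr_info("fence %llu:%llu signaled=%d\n",
                        f->context, f->seqno, dma_fence_is_signaled(f));
        }
        /* The walk drops the last reference when iter becomes NULL. */
}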

The same applies to dma_fence_array_first() and dma_fence_array_next():
here we hold a reference to the array, which in turn holds references to
each fence inside it until the array itself is destroyed.
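
A similarly minimal sketch of the array case (again illustrative only;
the explicit get/put just underlines where the references live):

#include <linux/dma-fence.h>
#include <linux/dma-fence-array.h>
#include <linux/printk.h>

static void example_walk_array(struct dma_fence *head)
{
        struct dma_fence *f;
        unsigned int index;

        /*
         * Our reference on @head keeps every contained fence alive;
         * dma_fence_array_first()/_next() do not take references on
         * the individual fences themselves.
         */
        dma_fence_get(head);

        dma_fence_array_for_each(f, index, head)
                pr_info("fence %u: %llu:%llu\n",
                        index, f->context, f->seqno);

        dma_fence_put(head);
}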

When this blows up, it means we have mixed up the references somewhere.
That's what it looked like to me, but I wanted to make sure I wasn't
overlooking something subtle. And in this case, the fence actually
should be the syncobj timeline point fence, not the fence chain.
Virtgpu has essentially the same logic (there we really do want to
unwrap fences so we can pass host fences back to the host rather than
waiting in the guest), so I'm not sure whether it would blow up in the
same way.

Well, do you have a backtrace of what exactly happens?

Maybe we have a _put() before a _get(), or something like this.
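
Purely as a hypothetical illustration of that kind of ordering bug (not
code from the patch):

#include <linux/dma-fence.h>

/*
 * Racy: if @anchor held the last reference keeping @next alive, @next
 * may already be freed (with refcount underflows showing up later) by
 * the time we try to grab it.
 */
static struct dma_fence *advance_racy(struct dma_fence *anchor,
                                      struct dma_fence *next)
{
        dma_fence_put(anchor);          /* _put() ... */
        return dma_fence_get(next);     /* ... before _get() */
}

/* Safe: mirror what dma_fence_chain_walk() does and get before put. */
static struct dma_fence *advance_safe(struct dma_fence *anchor,
                                      struct dma_fence *next)
{
        struct dma_fence *f = dma_fence_get(next);

        dma_fence_put(anchor);
        return f;
}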

Thanks,
Christian.


BR,
-R

Regards,
Christian.

BR,
-R