Re: [PATCH v4 1/2] drm/virtio: Refactor job submission code path

From: Dmitry Osipenko
Date: Mon Apr 03 2023 - 15:59:47 EST


On 3/30/23 18:32, Emil Velikov wrote:
>> +static int virtio_gpu_dma_fence_wait(struct virtio_gpu_submit *submit,
>> +                                     struct dma_fence *fence)
>> +{
>> +        struct dma_fence_unwrap itr;
>> +        struct dma_fence *f;
>> +        int err;
>> +
>> +        dma_fence_unwrap_for_each(f, &itr, fence) {
> The dma_fence_unwrap_for_each() change should be a separate patch,
> highlighting why we want it.

Good point. It's actually a potential optimization for the in-fence
waiting, so it deserves its own patch with a proper explanation.
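
For context, a rough sketch of what that optimization could look
like. The context-matching check is my assumption of where the win
would be (and submit->fence_ctx is assumed naming), it's not part of
this patch:

        static int virtio_gpu_dma_fence_wait(struct virtio_gpu_submit *submit,
                                             struct dma_fence *fence)
        {
                struct dma_fence_unwrap itr;
                struct dma_fence *f;
                int err;

                /* unwrap fence arrays/chains into their backing
                 * fences so each one can be inspected individually */
                dma_fence_unwrap_for_each(f, &itr, fence) {
                        /* fences from our own fence context are
                         * ordered by the ring anyway, no need to
                         * block on them */
                        if (dma_fence_match_context(f, submit->fence_ctx))
                                continue;

                        err = dma_fence_wait(f, true);
                        if (err)
                                return err;
                }

                return 0;
        }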

>> +        ret = virtio_gpu_init_submit(&submit, exbuf, dev, file,
>> +                                     fence_ctx, ring_idx);
>> +        if (ret)
>> +                goto cleanup;
>> +
>> +        ret = virtio_gpu_wait_in_fence(&submit);
>> +        if (ret)
>> +                goto cleanup;
>> +
>> +
> We have reshuffled the order around in_fence waiting, out_fence install,
> handles, cmdbuf, drm events, etc. Can we get that split up a bit, with
> some comments.
>
> If it were me, I would keep the wait_in_fence early and inline
> virtio_gpu_init_submit (the nesting/abstraction seems a bit much). This
> means one can omit the virtio_gpu_submit::exbuf altogether.

I tried inlining it, and that variant makes the code much less readable
to me.

The point of having wait_in_fence after submit_init is that it shortens
the submission code path: if we have to wait for the in-fence, then by
the time the fence signals there is nothing left to initialize and we
can move directly to the next submission step.
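
I.e. the intended flow is the one quoted above, with the single
blocking point annotated:

        ret = virtio_gpu_init_submit(&submit, exbuf, dev, file,
                                     fence_ctx, ring_idx);
        if (ret)
                goto cleanup;

        /* the only point that may block; all setup is already done,
         * so once the fence signals we go straight to queueing */
        ret = virtio_gpu_wait_in_fence(&submit);
        if (ret)
                goto cleanup;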

Perhaps it also won't hurt to factor out the wait_fence from parse_deps
in the second patch and do all the waits right before locking the
buflist, roughly like the sketch below.
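
The helper names here are just a sketch of the idea, not the final
patch:

        ret = virtio_gpu_parse_deps(&submit);   /* collect deps, don't wait */
        if (ret)
                goto cleanup;

        ret = virtio_gpu_wait_fences(&submit);  /* all waits in one place */
        if (ret)
                goto cleanup;

        /* nothing blocks anymore between here and queueing the job */
        ret = virtio_gpu_lock_buflist(&submit);
        if (ret)
                goto cleanup;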

--
Best regards,
Dmitry