On 08/03/2023 17.46, Christian König wrote:
> On 07.03.23 at 15:25, Asahi Lina wrote:
>> Some hardware may require more complex resource utilization accounting
>> than the simple job count supported by drm_sched internally. Add a
>> can_run_job callback to allow drivers to implement more logic before
>> deciding whether to run a GPU job.
>
> Well complete NAK.
>
> This is clearly going against the idea of having jobs only depend on
> fences and nothing else which is mandatory for correct memory management.
>
> If the hw is busy with something you need to return the fence for this
> from the prepare_job callback so that the scheduler can be notified when
> the hw is available again.

I think you misunderstood the intent here... This isn't about job
dependencies, it's about in-flight resource limits.
drm_sched already has a hw_submission_limit that specifies the number of
submissions that can be in flight, but that doesn't work for us because
each job from drm_sched's point of view consists of multiple commands
split among 3 firmware queues. The firmware can only support up to 128
work commands in flight per queue (barriers don't count), otherwise it
overflows a fixed-size buffer.
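
To make the mismatch concrete, here is a rough C sketch of the kind of
per-queue accounting involved (illustrative names only, not the actual
driver code, which is Rust):

    #include <linux/types.h>

    #define FW_QUEUE_SLOTS  128     /* per-queue budget for work commands (barriers excluded) */

    struct fw_queue {
            int pending_cmds;       /* work commands currently in flight on this queue */
    };

    /*
     * A single drm_sched job can expand into several work commands spread
     * over the three firmware queues, so capping the number of jobs
     * (hw_submission_limit) does not bound pending_cmds on any one queue.
     * The check that actually matters is per queue:
     */
    static bool fw_queue_has_room(const struct fw_queue *q, int cmds_for_job)
    {
            return q->pending_cmds + cmds_for_job <= FW_QUEUE_SLOTS;
    }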
So we need more complex accounting of how many underlying commands are
in flight per queue to determine whether it is safe to run a new job,
and that is what this callback accomplishes. This has to happen even
when individual jobs have no buffer/resource dependencies between them
(which is what the fences would express).
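
The hook itself is minimal: it boils down to an optional op in
drm_sched_backend_ops along these lines (paraphrased, kerneldoc omitted):

    struct drm_sched_backend_ops {
            /* ... existing ops: prepare_job, run_job, timedout_job, free_job ... */

            /*
             * Optional: called right before the scheduler would run a job.
             * Returning false means "the hardware cannot take this job right
             * now"; the scheduler leaves it for a later main-loop iteration
             * instead of treating it as an error.
             */
            bool (*can_run_job)(struct drm_sched_job *sched_job);
    };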
You can see the driver implementation of that callback in
drivers/gpu/drm/asahi/queue/mod.rs (QueueJob::can_run()), which calls
into drivers/gpu/drm/asahi/workqueue.rs (Job::can_submit()), which does
the actual available-slot count checks.
The can_run_job logic is written to mirror the hw_submission_limit logic
(it just runs a bit later in the sched main loop, since we need to have
actually picked a job before we can do the check), and, just as in that
case, completion of any job on the same scheduler causes another run of
the main loop and another check (which is exactly what we want here).
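
Roughly, the flow looks like this (a simplified sketch of the main loop,
not the literal patch hunk):

    /*
     * Simplified sketch of the relevant part of drm_sched_main(); "sched"
     * is the scheduler this kthread serves.  hw_submission_limit is already
     * enforced up front: drm_sched_ready() makes drm_sched_select_entity()
     * return NULL while that limit is reached.
     */
    while (!kthread_should_stop()) {
            struct drm_sched_entity *entity;
            struct drm_sched_job *sched_job;

            /* ... wait for work, free finished jobs ... */

            entity = drm_sched_select_entity(sched);
            if (!entity)
                    continue;

            sched_job = drm_sched_entity_pop_job(entity);
            if (!sched_job)
                    continue;

            /*
             * New: ask the driver whether its own in-flight limits allow
             * running this job now.  Declining is not an error: completion
             * of any job on this scheduler wakes the main loop and the
             * check runs again.  (The real code takes care that a declined
             * job is not dropped here.)
             */
            if (sched->ops->can_run_job && !sched->ops->can_run_job(sched_job))
                    continue;

            /* ... hand the job to run_job() and track its hardware fence ... */
    }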
This case (potentially scheduling more commands than the FW queue limit
allows) is rare, but handling it is necessary, since otherwise the entire
job completion/tracking logic gets screwed up on the firmware end and
queues end up stuck (I've managed to trigger this before).
~~ Lina