Re: [PATCH] drm/panfrost: Handle resetting on timeout better

From: Steven Price
Date: Wed Oct 09 2019 - 05:42:33 EST


On 07/10/2019 17:14, Tomeu Vizoso wrote:
> On 10/7/19 6:09 AM, Neil Armstrong wrote:
>> Hi Steven,
>>
>> On 07/10/2019 14:50, Steven Price wrote:
>>> Panfrost uses multiple schedulers (one for each slot, so 2 in reality),
>>> and on a timeout has to stop all the schedulers to safely perform a
>>> reset. However more than one scheduler can trigger a timeout at the same
>>> time. This race condition results in jobs being freed while they are
>>> still in use.
>>>
>>> When stopping other slots use cancel_delayed_work_sync() to ensure that
>>> any timeout started for that slot has completed. Also use
>>> mutex_trylock() to obtain reset_lock. This means that only one thread
>>> attempts the reset, the other threads will simply complete without doing
>>> anything (the first thread will wait for this in the call to
>>> cancel_delayed_work_sync()).
>>>
>>> While we're here and since the function is already dependent on
>>> sched_job not being NULL, let's remove the unnecessary checks, along
>>> with a commented out call to panfrost_core_dump() which has never
>>> existed in mainline.
>>>
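
For reference, the resulting timeout path (condensed from the hunks
below, with the unchanged job_lock section and the final reset calls
elided) ends up looking like:

static void panfrost_job_timedout(struct drm_sched_job *sched_job)
{
        ...
        /* Only the first slot to time out performs the reset; the
         * handlers for any other slots bail out here. */
        if (!mutex_trylock(&pfdev->reset_lock))
                return;

        for (i = 0; i < NUM_JOB_SLOTS; i++) {
                struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;

                drm_sched_stop(sched, sched_job);
                if (js != i)
                        /* Wait for any timeout running on another slot */
                        cancel_delayed_work_sync(&sched->work_tdr);
        }

        drm_sched_increase_karma(sched_job);
        ...
}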
>>
>> A Fixes: tag would be welcome here so it can be backported to v5.3.
>>
>>> Signed-off-by: Steven Price <steven.price@xxxxxxx>
>>> ---
>>> This is a tidied-up version of the patch originally posted here:
>>> http://lkml.kernel.org/r/26ae2a4d-8df1-e8db-3060-41638ed63e2a%40arm.com
>>>
>>>  drivers/gpu/drm/panfrost/panfrost_job.c | 17 +++++++++++------
>>>  1 file changed, 11 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>>> index a58551668d9a..dcc9a7603685 100644
>>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>>> @@ -381,13 +381,19 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
>>>                  job_read(pfdev, JS_TAIL_LO(js)),
>>>                  sched_job);
>>>  
>>> -        mutex_lock(&pfdev->reset_lock);
>>> +        if (!mutex_trylock(&pfdev->reset_lock))
>>> +                return;
>>>  
>>> -        for (i = 0; i < NUM_JOB_SLOTS; i++)
>>> -                drm_sched_stop(&pfdev->js->queue[i].sched, sched_job);
>>> +        for (i = 0; i < NUM_JOB_SLOTS; i++) {
>>> +                struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;
>>> +
>>> +                drm_sched_stop(sched, sched_job);
>>> +                if (js != i)
>>> +                        /* Ensure any timeouts on other slots have finished */
>>> +                        cancel_delayed_work_sync(&sched->work_tdr);
>>> +        }
>>>  
>>> -        if (sched_job)
>>> -                drm_sched_increase_karma(sched_job);
>>> +        drm_sched_increase_karma(sched_job);
>>
>> Indeed looks cleaner.
>>
>>>  
>>>          spin_lock_irqsave(&pfdev->js->job_lock, flags);
>>>          for (i = 0; i < NUM_JOB_SLOTS; i++) {
>>> @@ -398,7 +404,6 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
>>>          }
>>>          spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
>>>  
>>> -        /* panfrost_core_dump(pfdev); */
>>
>> This should be cleaned up in another patch!
>
> Seems to me that this should be some kind of TODO - see
> etnaviv_core_dump() for the kind of things we could be doing.
>
> Maybe we can delete this line and mention this in the TODO file?

Fair enough - I'll split this out into a separate patch and add an entry
to the TODO file. kbase has a "dump on job fault" mechanism [1][2], so
we could do something similar.
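
Very roughly, I'm thinking of something like the sketch below (purely
illustrative - panfrost_core_dump() has never existed in mainline, and
exactly what state is worth capturing still needs thought):

/* Hypothetical sketch: capture some state from a faulting job slot so
 * it can be inspected after the reset, in the spirit of
 * etnaviv_core_dump() and kbase's dump-on-job-fault. */
static void panfrost_core_dump(struct panfrost_device *pfdev, int js)
{
        /* Snapshot the head/tail registers for the faulting slot... */
        dev_info(pfdev->dev, "core dump: js=%d head=0x%08x tail=0x%08x\n",
                 js,
                 job_read(pfdev, JS_HEAD_LO(js)),
                 job_read(pfdev, JS_TAIL_LO(js)));

        /* ...and eventually the job chain and its BOs, exposed via
         * debugfs rather than dmesg, as kbase does. */
}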

Steve

[1]
https://gitlab.freedesktop.org/panfrost/mali_kbase/blob/master/driver/product/kernel/drivers/gpu/arm/midgard/backend/gpu/mali_kbase_debug_job_fault_backend.c

[2]
https://gitlab.freedesktop.org/panfrost/mali_kbase/blob/master/driver/product/kernel/drivers/gpu/arm/midgard/mali_kbase_debug_job_fault.c

> Cheers,
>
> Tomeu
>
>>>  
>>>          panfrost_devfreq_record_transition(pfdev, js);
>>>          panfrost_device_reset(pfdev);
>>>
>>
>> Thanks,
>> Testing it right now with the last change removed (the patch doesn't
>> apply on v5.3 with it included); results in a few hours... or
>> minutes!
>>
>>
>> Neil
>>