Re: [RESEND PATCH v4] drm: Don't free jobs in wait_event_interruptible()
From: Steven Price
Date: Fri Oct 25 2019 - 05:49:38 EST
On 25/10/2019 10:43, Christian Gmeiner wrote:
> On Thu, 24 Oct 2019 at 18:25, Steven Price <steven.price@xxxxxxx> wrote:
>>
>> drm_sched_cleanup_jobs() attempts to free finished jobs; however, because
>> it is called as the condition of wait_event_interruptible() it must not
>> sleep. Unfortunately some free callbacks (notably for Panfrost) do sleep.
>>
>> Instead let's rename drm_sched_cleanup_jobs() to
>> drm_sched_get_cleanup_job() and simply return a job for processing if
>> there is one. The caller can then call the free_job() callback outside
>> the wait_event_interruptible() where sleeping is possible before
>> re-checking and returning to sleep if necessary.
>>
>> Signed-off-by: Steven Price <steven.price@xxxxxxx>
>
> Tested-by: Christian Gmeiner <christian.gmeiner@xxxxxxxxx>
>
> Without this patch I get the following warning:
Thanks! If you've got an (easily) reproducible case, can you check which
commit this fixes? I *think*:
Fixes: 5918045c4ed4 ("drm/scheduler: rework job destruction")
But I haven't got a reliable way of reproducing this (with Panfrost).
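
For reference, the shape of the reworked loop in drm_sched_main() after this
change is roughly the following (a simplified sketch of the pattern rather
than the exact hunk; drm_sched_blocked(), drm_sched_select_entity() and
drm_sched_start_timeout() are the existing scheduler helpers, while
drm_sched_get_cleanup_job() is the renamed function from this patch):

  static int drm_sched_main(void *param)
  {
  	struct drm_gpu_scheduler *sched = param;

  	while (!kthread_should_stop()) {
  		struct drm_sched_entity *entity = NULL;
  		struct drm_sched_job *cleanup_job = NULL;

  		/* The condition only picks work; it must not sleep. */
  		wait_event_interruptible(sched->wake_up_worker,
  			(cleanup_job = drm_sched_get_cleanup_job(sched)) ||
  			(!drm_sched_blocked(sched) &&
  			 (entity = drm_sched_select_entity(sched))) ||
  			kthread_should_stop());

  		if (cleanup_job) {
  			/* Back in TASK_RUNNING, so free_job() may sleep. */
  			sched->ops->free_job(cleanup_job);
  			/* re-arm the timeout for the next pending job */
  			drm_sched_start_timeout(sched);
  		}

  		if (!entity)
  			continue;

  		/* ... select and run the next job as before ... */
  	}
  	return 0;
  }

The important part is that the wait condition only selects a job, and
anything that can sleep runs after wait_event_interruptible() returns.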
Thanks,
Steve
>
> [ 242.935254] ------------[ cut here ]------------
> [ 242.940044] WARNING: CPU: 2 PID: 109 at kernel/sched/core.c:6731 __might_sleep+0x94/0xa8
> [ 242.948242] do not call blocking ops when !TASK_RUNNING; state=1 set at [<38751e36>] prepare_to_wait_event+0xa8/0x180
> [ 242.958923] Modules linked in:
> [ 242.962010] CPU: 2 PID: 109 Comm: 130000.gpu Not tainted 5.4.0-rc4 #10
> [ 242.968551] Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
> [ 242.975112] [<c0113160>] (unwind_backtrace) from [<c010cf34>] (show_stack+0x10/0x14)
> [ 242.982879] [<c010cf34>] (show_stack) from [<c0c065ec>] (dump_stack+0xd8/0x110)
> [ 242.990213] [<c0c065ec>] (dump_stack) from [<c0128adc>] (__warn+0xc0/0x10c)
> [ 242.997194] [<c0128adc>] (__warn) from [<c0128f10>] (warn_slowpath_fmt+0x8c/0xb8)
> [ 243.004697] [<c0128f10>] (warn_slowpath_fmt) from [<c01598bc>] (__might_sleep+0x94/0xa8)
> [ 243.012810] [<c01598bc>] (__might_sleep) from [<c0c246e4>] (__mutex_lock+0x38/0xa1c)
> [ 243.020571] [<c0c246e4>] (__mutex_lock) from [<c0c250e4>] (mutex_lock_nested+0x1c/0x24)
> [ 243.028600] [<c0c250e4>] (mutex_lock_nested) from [<c064f020>] (etnaviv_cmdbuf_free+0x40/0x8c)
> [ 243.037233] [<c064f020>] (etnaviv_cmdbuf_free) from [<c06503a0>] (etnaviv_submit_put+0x38/0x1c8)
> [ 243.046042] [<c06503a0>] (etnaviv_submit_put) from [<c064177c>] (drm_sched_cleanup_jobs+0xc8/0xec)
> [ 243.055021] [<c064177c>] (drm_sched_cleanup_jobs) from [<c06419b4>] (drm_sched_main+0x214/0x298)
> [ 243.063826] [<c06419b4>] (drm_sched_main) from [<c0152890>] (kthread+0x140/0x158)
> [ 243.071329] [<c0152890>] (kthread) from [<c01010b4>] (ret_from_fork+0x14/0x20)
> [ 243.078563] Exception stack(0xec691fb0 to 0xec691ff8)
> [ 243.083630] 1fa0: 00000000 00000000 00000000 00000000
> [ 243.091822] 1fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [ 243.100013] 1fe0: 00000000 00000000 00000000 00000000 00000013 00000000
> [ 243.106795] irq event stamp: 321
> [ 243.110098] hardirqs last enabled at (339): [<c0193854>] console_unlock+0x430/0x620
> [ 243.117864] hardirqs last disabled at (346): [<c01934cc>] console_unlock+0xa8/0x620
> [ 243.125592] softirqs last enabled at (362): [<c01024e0>] __do_softirq+0x2c0/0x590
> [ 243.133232] softirqs last disabled at (373): [<c0130ed0>] irq_exit+0x100/0x18c
> [ 243.140517] ---[ end trace 8afcd79e9e2725b2 ]---
>