On 20/04/21 7:15 am, Adrian Hunter wrote:
Hi Adrian,
On 20/04/21 12:53 am, Asutosh Das (asd) wrote:
On 4/19/2021 11:37 AM, Adrian Hunter wrote:
On 16/04/21 10:49 pm, Asutosh Das wrote:
Hi Adrian
Co-developed-by: Can Guo <cang@xxxxxxxxxxxxxx>
Signed-off-by: Can Guo <cang@xxxxxxxxxxxxxx>
Signed-off-by: Asutosh Das <asutoshd@xxxxxxxxxxxxxx>
---
I came across 3 issues while testing. See comments below.
Thanks for the comments.
<SNIP>
Umm, I didn't understand this deadlock.
@@ -5794,7 +5839,7 @@ static void ufshcd_err_handling_unprepare(struct ufs_hba *hba)
	if (ufshcd_is_clkscaling_supported(hba))
		ufshcd_clk_scaling_suspend(hba, false);
	ufshcd_clear_ua_wluns(hba);
ufshcd_clear_ua_wluns() deadlocks trying to clear UFS_UPIU_RPMB_WLUN
if sdev_rpmb is suspended and sdev_ufs_device is suspending.
e.g. ufshcd_wl_suspend() is waiting on host_sem while ufshcd_err_handler()
is running, at which point sdev_rpmb has already suspended.
When you say sdev_rpmb is suspended, does it mean runtime_suspended?
sdev_ufs_device is suspending - it can't be runtime_suspending while ufshcd_err_handling_unprepare is running.
If you have a call-stack of this deadlock, could you please share it with me? I'll also try to reproduce it.
Yes it is system suspend. sdev_rpmb has suspended, sdev_ufs_device is waiting on host_sem.
ufshcd_err_handler() holds host_sem. ufshcd_clear_ua_wlun(UFS_UPIU_RPMB_WLUN) gets stuck.
I will get some call-stacks.
Here are the call stacks:
[ 34.094321] Workqueue: ufs_eh_wq_0 ufshcd_err_handler
[ 34.094788] Call Trace:
[ 34.095281] __schedule+0x275/0x6c0
[ 34.095743] schedule+0x41/0xa0
[ 34.096240] blk_queue_enter+0x10d/0x230
[ 34.096693] ? wait_woken+0x70/0x70
[ 34.097167] blk_mq_alloc_request+0x53/0xc0
[ 34.097610] blk_get_request+0x1e/0x60
[ 34.098053] __scsi_execute+0x3c/0x260
[ 34.098529] ufshcd_clear_ua_wlun.cold+0xa6/0x14b
[ 34.098977] ufshcd_clear_ua_wluns.part.0+0x4d/0x92
[ 34.099456] ufshcd_err_handler+0x97a/0x9ff
[ 34.099902] process_one_work+0x1cc/0x360
[ 34.100384] worker_thread+0x45/0x3b0
[ 34.100851] ? process_one_work+0x360/0x360
[ 34.101308] kthread+0xf6/0x130
[ 34.101728] ? kthread_park+0x80/0x80
[ 34.102186] ret_from_fork+0x1f/0x30
[ 34.640751] task:kworker/u10:9 state:D stack:14528 pid: 255 ppid: 2 flags:0x00004000
[ 34.641253] Workqueue: events_unbound async_run_entry_fn
[ 34.641722] Call Trace:
[ 34.642217] __schedule+0x275/0x6c0
[ 34.642683] schedule+0x41/0xa0
[ 34.643179] schedule_timeout+0x18b/0x290
[ 34.643645] ? del_timer_sync+0x30/0x30
[ 34.644131] __down_timeout+0x6b/0xc0
[ 34.644568] ? ufshcd_clkscale_enable_show+0x20/0x20
[ 34.645014] ? async_schedule_node_domain+0x17d/0x190
[ 34.645496] down_timeout+0x42/0x50
[ 34.645947] ufshcd_wl_suspend+0x79/0xa0
[ 34.646432] ? scmd_printk+0x100/0x100
[ 34.646917] scsi_bus_suspend_common+0x56/0xc0
[ 34.647405] ? scsi_bus_freeze+0x10/0x10
[ 34.647858] dpm_run_callback+0x45/0x110
[ 34.648347] __device_suspend+0x117/0x460
[ 34.648788] async_suspend+0x16/0x90
[ 34.649251] async_run_entry_fn+0x26/0x110
[ 34.649676] process_one_work+0x1cc/0x360
[ 34.650137] worker_thread+0x45/0x3b0
[ 34.650563] ? process_one_work+0x360/0x360
[ 34.650994] kthread+0xf6/0x130
[ 34.651455] ? kthread_park+0x80/0x80
[ 34.651882] ret_from_fork+0x1f/0x30
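To make the cycle in the two stacks easier to see, here is a minimal userspace model of it (pthreads). This is purely illustrative: only host_sem, err_handler and wl_suspend mirror names from the traces, everything else is made up and it is not the actual ufshcd code. One thread takes the semaphore and then waits for the already-suspended RPMB queue; the other keeps that queue suspended and waits for the semaphore.

/*
 * Minimal userspace model of the wait cycle in the two stacks above.
 * Illustrative only - not the actual ufshcd code.
 */
#include <pthread.h>
#include <semaphore.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static sem_t host_sem;                          /* models hba->host_sem */
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t qwake = PTHREAD_COND_INITIALIZER;
static bool rpmb_queue_suspended = true;        /* sdev_rpmb already suspended */

/* models ufshcd_err_handler() -> ufshcd_clear_ua_wluns() */
static void *err_handler(void *arg)
{
	(void)arg;
	sem_wait(&host_sem);                    /* error handler takes host_sem */
	printf("err_handler: holding host_sem, sending request to RPMB W-LU\n");

	/* models blk_queue_enter() blocking on the suspended queue */
	pthread_mutex_lock(&qlock);
	while (rpmb_queue_suspended)
		pthread_cond_wait(&qwake, &qlock);  /* never woken: see below */
	pthread_mutex_unlock(&qlock);

	sem_post(&host_sem);
	return NULL;
}

/* models ufshcd_wl_suspend() for sdev_ufs_device during system suspend */
static void *wl_suspend(void *arg)
{
	(void)arg;
	sleep(1);                               /* error handler is already running */
	printf("wl_suspend: waiting for host_sem\n");
	sem_wait(&host_sem);                    /* blocks: err_handler holds it */

	/* the RPMB queue would only be woken again on system resume,
	 * which never happens because suspend cannot finish */
	pthread_mutex_lock(&qlock);
	rpmb_queue_suspended = false;
	pthread_cond_broadcast(&qwake);
	pthread_mutex_unlock(&qlock);

	sem_post(&host_sem);
	return NULL;
}

int main(void)
{
	pthread_t eh, susp;

	sem_init(&host_sem, 0, 1);
	pthread_create(&eh, NULL, err_handler, NULL);
	pthread_create(&susp, NULL, wl_suspend, NULL);
	pthread_join(eh, NULL);                 /* hangs: each side waits on the other */
	pthread_join(susp, NULL);
	return 0;
}

(In the real traces the suspend side is in down_timeout(), so ufshcd_wl_suspend() presumably times out and fails the suspend rather than hanging forever; the blk_queue_enter() wait on the error-handler side has no such timeout.)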
I'll address the other comments in the next version.
Thank you!
- pm_runtime_put(hba->dev);
+ ufshcd_rpm_put(hba);
}
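(Aside, not part of the quoted hunk: as I read the series, ufshcd_rpm_put() above is a small wrapper that drops the runtime-PM reference on the UFS device W-LU's scsi_device rather than on the platform device. My assumption of its rough shape, using the sdev_ufs_device name from the discussion above - please check the actual patch:)

/* Assumed shape only - not copied from the patch. */
static inline int ufshcd_rpm_put(struct ufs_hba *hba)
{
	return pm_runtime_put(&hba->sdev_ufs_device->sdev_gendev);
}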
<SNIP>
+void ufshcd_resume_complete(struct device *dev)
+{