On 29. 08. 19, 21:42, Martin Hundebøll wrote:
On 22/08/2019 23.56, Martin Hundebøll wrote:
When tearing down the n_gsm ldisc while one or more of its child ports
are open, a lockdep warning occurs:
[   56.254258] ======================================================
[   56.260447] WARNING: possible circular locking dependency detected
[   56.266641] 5.2.0-00118-g1fd58e20e5b0 #30 Not tainted
[   56.271701] ------------------------------------------------------
[   56.277890] cmux/271 is trying to acquire lock:
[   56.282436] 8215283a (&tty->legacy_mutex){+.+.}, at: __tty_hangup.part.0+0x58/0x27c
[   56.290128]
[   56.290128] but task is already holding lock:
[   56.295970] e9e2b842 (&gsm->mutex){+.+.}, at: gsm_cleanup_mux+0x9c/0x15c
[   56.302699]
[   56.302699] which lock already depends on the new lock.
[   56.302699]
[   56.310884]
[   56.310884] the existing dependency chain (in reverse order) is:
[   56.318372]
[   56.318372] -> #2 (&gsm->mutex){+.+.}:
[   56.323624]        mutex_lock_nested+0x1c/0x24
[   56.328079]        gsm_cleanup_mux+0x9c/0x15c
[   56.332448]        gsmld_ioctl+0x418/0x4e8
[   56.336554]        tty_ioctl+0x96c/0xcb0
[   56.340492]        do_vfs_ioctl+0x41c/0xa5c
[   56.344685]        ksys_ioctl+0x34/0x60
[   56.348535]        ret_fast_syscall+0x0/0x28
[   56.352815]        0xbe97cc04
[   56.355791]
[   56.355791] -> #1 (&tty->ldisc_sem){++++}:
[   56.361388]        tty_ldisc_lock+0x50/0x74
[   56.365581]        tty_init_dev+0x88/0x1c4
[   56.369687]        tty_open+0x1c8/0x430
[   56.373536]        chrdev_open+0xa8/0x19c
[   56.377560]        do_dentry_open+0x118/0x3c4
[   56.381928]        path_openat+0x2fc/0x1190
[   56.386123]        do_filp_open+0x68/0xd4
[   56.390146]        do_sys_open+0x164/0x220
[   56.394257]        kernel_init_freeable+0x328/0x3e4
[   56.399146]        kernel_init+0x8/0x110
[   56.403078]        ret_from_fork+0x14/0x20
[   56.407183]        0x0
[   56.409548]
[   56.409548] -> #0 (&tty->legacy_mutex){+.+.}:
[   56.415402]        __mutex_lock+0x64/0x90c
[   56.419508]        mutex_lock_nested+0x1c/0x24
[   56.423961]        __tty_hangup.part.0+0x58/0x27c
[   56.428676]        gsm_cleanup_mux+0xe8/0x15c
[   56.433043]        gsmld_close+0x48/0x90
[   56.436979]        tty_ldisc_kill+0x2c/0x6c
[   56.441173]        tty_ldisc_release+0x88/0x194
[   56.445715]        tty_release_struct+0x14/0x44
[   56.450254]        tty_release+0x36c/0x43c
[   56.454365]        __fput+0x94/0x1e8
Avoid the warning by doing the port hangup asynchronously.
Any comments?
I did not manage to reply before my vacation, and after having "work =
NULL" in my head, I forgot. Sorry.
At the same time, I am a bit lost in the lockdep chain above. It mixes
close (#0), open (#1) and ioctl (#2), so how is this a "chain" in the
first place?
BTW, do you see an actual deadlock? And what tty driver do you use for
backend devices? I.e. what ttys do you set this ldisc to?
See also the comment below.
Signed-off-by: Martin Hundebøll <martin@xxxxxxxxxx>
---
 drivers/tty/n_gsm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
index d30525946892..36a3eb4ad4c5 100644
--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -1716,7 +1716,7 @@ static void gsm_dlci_release(struct gsm_dlci *dlci)
 		gsm_destroy_network(dlci);
 		mutex_unlock(&dlci->mutex);
-		tty_vhangup(tty);
+		tty_hangup(tty);
 		tty_port_tty_set(&dlci->port, NULL);
I am afraid the semantics change now: the scheduled hangup will
likely run only after the tty in the tty_port has already been set to
NULL, so some of the operations done in tty->ops->hangup might become a
nop. For example, the commonly used tty_port_hangup won't set
TTY_IO_ERROR on the tty and won't lower DTR and RTS on the line either.