Re: [syzbot] [net?] possible deadlock in inet6_getname

From: Fernando Fernandez Mancera

Date: Sat Feb 14 2026 - 13:25:26 EST


On 2/13/26 7:51 PM, Gerd Rausch wrote:
Hi,

On 2026-02-13 09:26, Eric Dumazet wrote:
On Fri, Feb 13, 2026 at 1:15 PM syzbot
<syzbot+5efae91f60932839f0a5@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote:


[...]
============================================
WARNING: possible recursive locking detected
syzkaller #0 Not tainted
--------------------------------------------
kworker/u8:6/2985 is trying to acquire lock:
ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1709 [inline]
ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: inet6_getname+0x15d/0x650 net/ipv6/af_inet6.c:533

but task is already holding lock:
ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1709 [inline]
ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: tcp_sock_set_cork+0x2c/0x2e0 net/ipv4/tcp.c:3694

[...]
  lock_sock_nested+0x48/0x100 net/core/sock.c:3780
  lock_sock include/net/sock.h:1709 [inline]
  inet6_getname+0x15d/0x650 net/ipv6/af_inet6.c:533
  rds_tcp_get_peer_sport net/rds/tcp_listen.c:70 [inline]
  rds_tcp_conn_slots_available+0x288/0x470 net/rds/tcp_listen.c:149
  rds_recv_hs_exthdrs+0x60f/0x7c0 net/rds/recv.c:265
  rds_recv_incoming+0x9f6/0x12d0 net/rds/recv.c:389
  rds_tcp_data_recv+0x7f1/0xa40 net/rds/tcp_recv.c:243
  __tcp_read_sock+0x196/0x970 net/ipv4/tcp.c:1702
  rds_tcp_read_sock net/rds/tcp_recv.c:277 [inline]
  rds_tcp_data_ready+0x369/0x950 net/rds/tcp_recv.c:331
  tcp_rcv_established+0x19e9/0x2670 net/ipv4/tcp_input.c:6675
  tcp_v6_do_rcv+0x8eb/0x1ba0 net/ipv6/tcp_ipv6.c:1609
  sk_backlog_rcv include/net/sock.h:1185 [inline]
  __release_sock+0x1b8/0x3a0 net/core/sock.c:3213

[...]
Gerd, please take a look, thanks.

commit 9d27a0fb122f19b6d01d02f4b4f429ca28811ace
Author: Gerd Rausch <gerd.rausch@xxxxxxxxxx>
Date:   Mon Feb 2 22:57:23 2026 -0700

     net/rds: Trigger rds_send_ping() more than once

Syzbot is right:

inet6_getname() acquires a lock_sock() that was already held,
as __release_sock() is about to give it up, but before
doing so, handles the backlog receives & callbacks.

Just need to figure out a way to obtain the peer's port number,
without ending up in such a recursive lock scenario.


Hi,

Shouldn't this be enough?

diff --git a/net/rds/tcp_listen.c b/net/rds/tcp_listen.c
index 6fb5c928b8fd..a36e5dfd6c66 100644
--- a/net/rds/tcp_listen.c
+++ b/net/rds/tcp_listen.c
@@ -59,30 +59,12 @@ void rds_tcp_keepalive(struct socket *sock)
 static int
 rds_tcp_get_peer_sport(struct socket *sock)
 {
-	union {
-		struct sockaddr_storage storage;
-		struct sockaddr addr;
-		struct sockaddr_in sin;
-		struct sockaddr_in6 sin6;
-	} saddr;
-	int sport;
-
-	if (kernel_getpeername(sock, &saddr.addr) >= 0) {
-		switch (saddr.addr.sa_family) {
-		case AF_INET:
-			sport = ntohs(saddr.sin.sin_port);
-			break;
-		case AF_INET6:
-			sport = ntohs(saddr.sin6.sin6_port);
-			break;
-		default:
-			sport = -1;
-		}
-	} else {
-		sport = -1;
-	}
+	struct sock *sk = sock->sk;
+
+	if (!sk)
+		return -1;
 
-	return sport;
+	return ntohs(inet_sk(sk)->inet_dport);
 }

It would be safe from the rds_tcp_accept_one() path, as the new_sock has a reference count of 1 and no other component should be able to release it.

In the rds_tcp_conn_slots_available() path, fan-out can only be performed from the receive path; AFAIU, if data is being processed from the socket, we should always be holding the socket lock.

If these premises are not correct, we can always make this conditional. But getting rid of the kernel_getpeername() call is a win performance-wise too.

I am testing this against the syzbot report/reproducer.

Thanks,
Fernando.

Thanks for forwarding this,

  Gerd