Re: [PATCH v7 bpf-next 00/11] Socket migration for SO_REUSEPORT.

From: Yuchung Cheng
Date: Tue Jun 08 2021 - 13:50:01 EST


On Tue, May 25, 2021 at 11:42 PM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
>
> On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > The SO_REUSEPORT option allows sockets to listen on the same port and to
> > accept connections evenly. However, there is a defect in the current
> > implementation [1]. When a SYN packet is received, the connection is tied
> > to a listening socket. Accordingly, when the listener is closed, in-flight
> > requests during the three-way handshake and child sockets in the accept
> > queue are dropped even if other listeners on the same port could accept
> > such connections.
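> >
> > For reference, each worker creates its listener roughly as below (a
> > minimal userspace sketch; the port number is arbitrary and error
> > handling is omitted):
> >
> >   #include <netinet/in.h>
> >   #include <sys/socket.h>
> >
> >   /* Any number of workers can call this and listen on the same port,
> >    * as long as every socket sets SO_REUSEPORT before bind().
> >    */
> >   static int make_listener(void)
> >   {
> >           int one = 1;
> >           struct sockaddr_in addr = {
> >                   .sin_family = AF_INET,
> >                   .sin_port   = htons(80),
> >                   .sin_addr   = { .s_addr = htonl(INADDR_ANY) },
> >           };
> >           int fd = socket(AF_INET, SOCK_STREAM, 0);
> >
> >           setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
> >           bind(fd, (struct sockaddr *)&addr, sizeof(addr));
> >           listen(fd, 128);
> >           return fd;
> >   }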
> >
> > This situation can happen when various server management tools restart
> > server processes (such as nginx). For instance, when we change the nginx
> > configuration and restart it, nginx spins up new workers that respect the
> > new configuration and closes all listeners on the old workers, with the
> > result that the in-flight ACKs of the 3WHS are answered with RSTs.
> >
> > To avoid such a situation, users have to understand in depth how the
> > kernel handles SYN packets and implement connection draining with eBPF
> > [2] (a rough sketch follows the two alternatives below):
> >
> > 1. Stop routing SYN packets to the listener by eBPF.
> > 2. Wait for all timers to expire so that in-flight requests complete.
> > 3. Accept connections until EAGAIN, then close the listener.
> >
> > or
> >
> > 1. Start counting SYN packets and accept() syscalls using an eBPF map.
> > 2. Stop routing SYN packets.
> > 3. Accept connections up to the count, then close the listener.
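> >
> > A rough sketch of the "stop routing SYN packets" step with a plain
> > sk_reuseport program (map layout and names are illustrative, not taken
> > from this series):
> >
> >   #include <linux/bpf.h>
> >   #include <bpf/bpf_helpers.h>
> >
> >   /* User space fills this map only with the listeners that are NOT
> >    * being drained, so the draining listener is simply never selected.
> >    */
> >   struct {
> >           __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
> >           __uint(max_entries, 16);
> >           __type(key, __u32);
> >           __type(value, __u64);
> >   } active_listeners SEC(".maps");
> >
> >   SEC("sk_reuseport")
> >   int steer_away_from_draining(struct sk_reuseport_md *reuse_md)
> >   {
> >           __u32 key = 0;  /* a real program would hash or round-robin */
> >
> >           /* Drop only if no surviving listener can be selected. */
> >           if (bpf_sk_select_reuseport(reuse_md, &active_listeners,
> >                                       &key, 0))
> >                   return SK_DROP;
> >           return SK_PASS;
> >   }
> >
> >   char _license[] SEC("license") = "GPL";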
> >
> > Either way, we cannot close a listener immediately. However, ideally, the
> > application should not need to drain the not-yet-accepted sockets, because
> > the 3WHS and tying a connection to a listener are purely kernel behaviour.
> > The root cause lies within the kernel, so the issue should be addressed in
> > kernel space and should not be visible to user space. This patchset fixes
> > it so that users need not care about the kernel implementation or
> > connection draining. With this patchset, the kernel redistributes requests
> > and connections from a listener to the other listeners in the same
> > reuseport group at/after the close() or shutdown() syscall.
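> >
> > With that in place, retiring an old listener from user space reduces to
> > something like the sketch below (hypothetical helper name; the new
> > workers' SO_REUSEPORT listeners are assumed to be up already):
> >
> >   #include <sys/socket.h>
> >   #include <unistd.h>
> >
> >   /* The kernel moves the old listener's in-flight requests and
> >    * accept-queue children to another listener in the same reuseport
> >    * group when the listener is shut down or closed.
> >    */
> >   static void retire_listener(int old_fd)
> >   {
> >           shutdown(old_fd, SHUT_RDWR);
> >           close(old_fd);
> >   }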
> >
> > Although some software does connection draining, migration still has
> > merits. For security reasons, such as replacing TLS certificates, we may
> > want to apply new settings as soon as possible and/or may not be able to
> > wait for connection draining. The sockets in the accept queue have not
> > started application sessions yet, so if we do not drain them, they can be
> > handled by the newer listeners and can have a longer lifetime. It is
> > difficult to drain all connections in every case, but migration reduces
> > the number of aborted connections. In that sense, migration is always
> > better than draining.
> >
> > Moreover, auto-migration simplifies the user space logic and also works
> > well in cases where we cannot modify and rebuild a server program to
> > implement the workaround.
> >
> > Note that the source and destination listeners MUST have the same settings
> > at the socket API level; otherwise, applications may observe inconsistent
> > behaviour and run into errors. In such a case, we have to use an eBPF
> > program to select a specific listener or to cancel migration.
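> >
> > A sketch of such a program, using the migrating_sk context field and the
> > BPF_SK_REUSEPORT_SELECT_OR_MIGRATE attach type proposed in this series
> > (map name and selection policy are illustrative):
> >
> >   #include <linux/bpf.h>
> >   #include <bpf/bpf_helpers.h>
> >
> >   struct {
> >           __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
> >           __uint(max_entries, 16);
> >           __type(key, __u32);
> >           __type(value, __u64);
> >   } new_listeners SEC(".maps");
> >
> >   SEC("sk_reuseport/migrate")
> >   int select_or_migrate(struct sk_reuseport_md *reuse_md)
> >   {
> >           __u32 key = 0;
> >
> >           /* migrating_sk is NULL for a normal SYN lookup and non-NULL
> >            * when the kernel asks where to migrate a request/child.
> >            */
> >           if (!reuse_md->migrating_sk)
> >                   return SK_PASS;
> >
> >           /* Pick an explicit target, or return SK_DROP here to cancel
> >            * the migration instead.
> >            */
> >           if (bpf_sk_select_reuseport(reuse_md, &new_listeners, &key, 0))
> >                   return SK_DROP;
> >           return SK_PASS;
> >   }
> >
> >   char _license[] SEC("license") = "GPL";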
This looks to be a useful feature. What happens when migrating a
passively fast-opened socket that is still in the old listener's queue
but has not yet been accepted (a TFO socket is both a mini-socket and a
full socket)? It gets tricky when the old and new listeners have
different TFO keys.


> >
> > Special thanks to Martin KaFai Lau for bouncing ideas and exchanging code
> > snippets along the way.
> >
> >
> > Link:
> > [1] The SO_REUSEPORT socket option
> > https://lwn.net/Articles/542629/
> >
> > [2] Re: [PATCH 1/1] net: Add SO_REUSEPORT_LISTEN_OFF socket option as drain mode
> > https://lore.kernel.org/netdev/1458828813.10868.65.camel@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
>
> This series needs review/ACKs from TCP maintainers. Eric/Neal/Yuchung,
> please take another look.
>
> Thanks,
> Daniel