Re: [PATCH v6 bpf-next 03/11] tcp: Keep TCP_CLOSE sockets in the reuseport group.
From: Kuniyuki Iwashima
Date: Thu May 20 2021 - 20:27:07 EST
From: Martin KaFai Lau <kafai@xxxxxx>
Date: Thu, 20 May 2021 16:39:06 -0700
> On Fri, May 21, 2021 at 07:54:48AM +0900, Kuniyuki Iwashima wrote:
> > From: Martin KaFai Lau <kafai@xxxxxx>
> > Date: Thu, 20 May 2021 14:22:01 -0700
> > > On Thu, May 20, 2021 at 05:51:17PM +0900, Kuniyuki Iwashima wrote:
> > > > From: Martin KaFai Lau <kafai@xxxxxx>
> > > > Date: Wed, 19 May 2021 23:26:48 -0700
> > > > > On Mon, May 17, 2021 at 09:22:50AM +0900, Kuniyuki Iwashima wrote:
> > > > >
> > > > > > +static int reuseport_resurrect(struct sock *sk, struct sock_reuseport *old_reuse,
> > > > > > +			       struct sock_reuseport *reuse, bool bind_inany)
> > > > > > +{
> > > > > > +	if (old_reuse == reuse) {
> > > > > > +		/* If sk was in the same reuseport group, just pop sk out of
> > > > > > +		 * the closed section and push sk into the listening section.
> > > > > > +		 */
> > > > > > +		__reuseport_detach_closed_sock(sk, old_reuse);
> > > > > > +		__reuseport_add_sock(sk, old_reuse);
> > > > > > +		return 0;
> > > > > > +	}
> > > > > > +
> > > > > > +	if (!reuse) {
> > > > > > +		/* In bind()/listen() path, we cannot carry over the eBPF prog
> > > > > > +		 * for the shutdown()ed socket. In setsockopt() path, we should
> > > > > > +		 * not change the eBPF prog of listening sockets by attaching a
> > > > > > +		 * prog to the shutdown()ed socket. Thus, we will allocate a new
> > > > > > +		 * reuseport group and detach sk from the old group.
> > > > > > +		 */
> > > > > For the reuseport_attach_prog() path, I think it needs to consider
> > > > > the reuse->num_closed_socks != 0 case also and that should belong
> > > > > to the resurrect case. For example, when
> > > > > sk_unhashed(sk) but sk->sk_reuseport == 0.
> > > >
> > > > In the path, reuseport_resurrect() is called from reuseport_alloc() only
> > > > if reuse->num_closed_socks != 0.
> > > >
> > > >
> > > > > @@ -92,6 +117,14 @@ int reuseport_alloc(struct sock *sk, bool bind_inany)
> > > > >  	reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
> > > > >  					  lockdep_is_held(&reuseport_lock));
> > > > >  	if (reuse) {
> > > > > +		if (reuse->num_closed_socks) {
> > > >
> > > > But, should this be
> > > >
> > > > if (sk->sk_state == TCP_CLOSE && reuse->num_closed_socks)
> > > >
> > > > because we need not allocate a new group when we attach a bpf prog to
> > > > listeners?
> > > The reuseport_alloc() is fine as is. No need to change.
> >
> > I missed that sk_unhashed(sk) prevents calling reuseport_alloc()
> > when sk_state == TCP_LISTEN. I'll keep it as is.
> >
> >
> > >
> > > I should have copied reuseport_attach_prog() in the last reply and
> > > commented there instead.
> > >
> > > I meant reuseport_attach_prog() needs a change. In reuseport_attach_prog(),
> > > iiuc, currently passing the "else if (!rcu_access_pointer(sk->sk_reuseport_cb))"
> > > check implies the sk was (and still is) hashed with sk_reuseport enabled,
> > > because the current behavior would have set sk_reuseport_cb to NULL during
> > > unhash. That is no longer true now. For example, this will break:
> > >
> > > 1. shutdown(lsk); /* lsk was bound with sk_reuseport enabled */
> > > 2. setsockopt(lsk, ..., SO_REUSEPORT, &zero, ...); /* disable sk_reuseport */
> > > 3. setsockopt(lsk, ..., SO_ATTACH_REUSEPORT_EBPF, &prog_fd, ...);
> > >    ^---- /* This will work now because sk_reuseport_cb is not NULL.
> > >           * However, it shouldn't be allowed.
> > >           */
> >
> > Thank you for the explanation; I understand the case now.
> >
> > Exactly, I've confirmed that this case succeeds: the setsockopt() call
> > goes through, and I could change the active listeners' prog via a
> > shutdown()ed socket.
> >
> >
> > >
> > > I am thinking something like this (uncompiled code):
> > >
> > > int reuseport_attach_prog(struct sock *sk, struct bpf_prog *prog)
> > > {
> > > 	struct sock_reuseport *reuse;
> > > 	struct bpf_prog *old_prog;
> > >
> > > 	if (sk_unhashed(sk)) {
> > > 		int err;
> > >
> > > 		if (!sk->sk_reuseport)
> > > 			return -EINVAL;
> > >
> > > 		err = reuseport_alloc(sk, false);
> > > 		if (err)
> > > 			return err;
> > > 	} else if (!rcu_access_pointer(sk->sk_reuseport_cb)) {
> > > 		/* The socket wasn't bound with SO_REUSEPORT */
> > > 		return -EINVAL;
> > > 	}
> > >
> > > 	/* ... */
> > > }
> > >
> > > WDYT?
> >
> > I tested this change and it worked fine. I think the same check should be
> > added to reuseport_detach_prog() as well.
> >
> > ---8<---
> > int reuseport_detach_prog(struct sock *sk)
> > {
> > 	struct sock_reuseport *reuse;
> > 	struct bpf_prog *old_prog;
> >
> > 	if (!rcu_access_pointer(sk->sk_reuseport_cb))
> > 		return sk->sk_reuseport ? -ENOENT : -EINVAL;
> > ---8<---
> Right, a quick thought is something like this for detach:
>
> 	spin_lock_bh(&reuseport_lock);
> 	reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
> 					  lockdep_is_held(&reuseport_lock));
Is this necessary because reuseport_grow() can detach sk?
	if (!reuse) {
		spin_unlock_bh(&reuseport_lock);
		return -ENOENT;
	}
Then we can remove the rcu_access_pointer() check and move the sk_reuseport
check here.
> 	if (sk_unhashed(sk) && reuse->num_closed_socks) {
> 		spin_unlock_bh(&reuseport_lock);
> 		return -ENOENT;
> 	}
>
> Although checking with reuseport_sock_index() will also work,
> the above is probably simpler and faster?
Yes. If sk is unhashed and still has sk_reuseport_cb set, it stays in the
closed section of socks[], so num_closed_socks is greater than 0.
>
> >
> >
> > Another option is to add the check in sock_setsockopt():
> > SO_ATTACH_REUSEPORT_[CE]BPF, SO_DETACH_REUSEPORT_BPF.
> >
> > Which do you think is better?
> I think it is better to keep these sock_reuseport-specific bits
> in sock_reuseport.c.
Exactly, I'll keep the change in sock_reuseport.c.