Re: [PATCH bpf-next v7 1/3] bpf, sockmap: Fix incorrect copied_seq calculation

From: John Fastabend

Date: Tue Jan 20 2026 - 12:24:44 EST


On 2026-01-20 16:01:21, Jakub Sitnicki wrote:
> On Tue, Jan 13, 2026 at 10:50 AM +08, Jiayuan Chen wrote:
> > A socket using sockmap has its own independent receive queue: ingress_msg.
> > This queue may contain data from its own protocol stack or from other
> > sockets.
> >
> > The issue is that when reading from ingress_msg, we update tp->copied_seq
> > by default. However, if the data did not come from the socket's own
> > protocol stack, tp->rcv_nxt is not advanced. Later, if we convert this
> > socket back to a native socket, reading from it may fail because
> > copied_seq can be significantly larger than rcv_nxt.
> >
> > This fix also addresses the syzkaller-reported bug referenced in the
> > Closes tag.
> >
> > This patch marks the skmsg objects in ingress_msg. When reading, we update
> > copied_seq only if the data is from its own protocol stack.
> >
> >                                              FD1:read()
> >                                              -- FD1->copied_seq++
> >                                                  |     [read data]
> >                                                  |
> >                      [enqueue data]              v
> > [sockmap]          -> ingress to self -> ingress_msg queue
> > FD1 native stack ------>                         ^
> > -- FD1->rcv_nxt++  -> redirect to other          |  [enqueue data]
> >                            |                     |
> >                            |              ingress to FD1
> >                            v                     ^
> >                           ...                    |  [sockmap]
> >                                      FD2 native stack
> >
> > Closes: https://syzkaller.appspot.com/bug?extid=06dbd397158ec0ea4983
> > Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
> > Reviewed-by: Jakub Sitnicki <jakub@xxxxxxxxxxxxxx>
> > Signed-off-by: Jiayuan Chen <jiayuan.chen@xxxxxxxxx>
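
Side note for anyone following along: here is a tiny userspace-only model of
the accounting problem described above, in case it helps review. This is not
kernel code, and none of the names (toy_sock, toy_msg, from_own_stack) come
from the patch; it only illustrates why copied_seq may advance only for data
that also advanced rcv_nxt.

/* Toy model of the copied_seq vs. rcv_nxt accounting; illustrative only. */
#include <stdbool.h>
#include <stdio.h>

struct toy_sock {
	unsigned int rcv_nxt;		/* bumped when data arrives via own stack */
	unsigned int copied_seq;	/* bumped when the application reads */
};

struct toy_msg {
	unsigned int len;
	bool from_own_stack;		/* queued from this socket's own stack? */
};

/* Enqueue into the (conceptual) ingress_msg queue. */
static void toy_enqueue(struct toy_sock *sk, const struct toy_msg *msg)
{
	if (msg->from_own_stack)
		sk->rcv_nxt += msg->len;	/* own stack accounted these bytes */
	/* redirected data never touches rcv_nxt */
}

/* Read from the queue; with the fix, copied_seq follows own-stack data only. */
static void toy_read(struct toy_sock *sk, const struct toy_msg *msg, bool fixed)
{
	if (!fixed || msg->from_own_stack)
		sk->copied_seq += msg->len;
}

int main(void)
{
	struct toy_sock sk = { 0, 0 };
	struct toy_msg own   = { 100, true };	/* from FD1's own stack */
	struct toy_msg redir = { 400, false };	/* redirected from FD2 */

	toy_enqueue(&sk, &own);
	toy_enqueue(&sk, &redir);

	/* Old behaviour: redirected bytes advance copied_seq too. */
	toy_read(&sk, &own, false);
	toy_read(&sk, &redir, false);
	printf("old: copied_seq=%u rcv_nxt=%u\n", sk.copied_seq, sk.rcv_nxt);

	/* Fixed behaviour: copied_seq cannot run past rcv_nxt. */
	sk.copied_seq = 0;
	toy_read(&sk, &own, true);
	toy_read(&sk, &redir, true);
	printf("new: copied_seq=%u rcv_nxt=%u\n", sk.copied_seq, sk.rcv_nxt);
	return 0;
}

With the old behaviour the model prints copied_seq=500 while rcv_nxt=100,
which is exactly the copied_seq > rcv_nxt condition the commit message
describes.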

[...]

> > @@ -487,6 +494,14 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
> >  out:
> >  	return copied;
> >  }
> > +EXPORT_SYMBOL_GPL(__sk_msg_recvmsg);
>
> Nit: Sorry, I haven't caught that before. tcp_bpf is a built-in. We
> don't need to export this internal helper to modules.

We could probably push this in without the 2/3 patch? If we are still
debating that patch, it would be good to get this one merged on its own.
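
On the export nit: agreed, tcp_bpf is built-in, so the EXPORT_SYMBOL_GPL()
is not needed. A minimal sketch of the alternative (assuming the helper
stays in net/core/skmsg.c and keeps roughly the same parameter list as
sk_msg_recvmsg(), which is a guess on my part) would be to drop the export
and just declare it next to sk_msg_recvmsg() in include/linux/skmsg.h:

/* include/linux/skmsg.h -- sketch only; the exact signature may differ
 * from what the patch ends up with.
 */
int __sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock,
		     struct msghdr *msg, int len, int flags);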

Reviewed-by: John Fastabend <john.fastabend@xxxxxxxxx>