Re: [PATCH net] net: fix raising a softirq on the current cpu with rps enabled

From: Jason Xing
Date: Sun Mar 26 2023 - 10:57:03 EST


On Sun, Mar 26, 2023 at 6:10 PM Jason Xing <kerneljasonxing@xxxxxxxxx> wrote:
>
> On Sun, Mar 26, 2023 at 12:04 PM Jason Xing <kerneljasonxing@xxxxxxxxx> wrote:
> >
> > On Sat, Mar 25, 2023 at 11:57 PM Eric Dumazet <edumazet@xxxxxxxxxx> wrote:
> > >
> > > On Sat, Mar 25, 2023 at 8:26 AM Jason Xing <kerneljasonxing@xxxxxxxxx> wrote:
> > > >
> > > > From: Jason Xing <kernelxing@xxxxxxxxxxx>
> > > >
> > > > Since we have decided to put the skb into the backlog queue of
> > > > another cpu, we should not raise the softirq for the current cpu.
> > > > Whether to raise a softirq depends on whether there is more data
> > > > left to process later on this cpu, and here nothing new has been
> > > > enqueued locally, so the action is unnecessary. After the skb is
> > > > enqueued for another cpu, net_rx_action() will send an IPI and
> > > > that cpu will raise the softirq as expected.
> > > >
> > > > Also, raising extra softirqs (which sets the corresponding pending
> > > > bit) can make the irq-exit code think ksoftirqd needs to be woken
> > > > on the current cpu, which should not happen here.
> > > >
> > > > Fixes: 0a9627f2649a ("rps: Receive Packet Steering")
> > > > Signed-off-by: Jason Xing <kernelxing@xxxxxxxxxxx>
> > > > ---
> > > > net/core/dev.c | 2 --
> > > > 1 file changed, 2 deletions(-)
> > > >
> > > > diff --git a/net/core/dev.c b/net/core/dev.c
> > > > index 1518a366783b..bfaaa652f50c 100644
> > > > --- a/net/core/dev.c
> > > > +++ b/net/core/dev.c
> > > > @@ -4594,8 +4594,6 @@ static int napi_schedule_rps(struct softnet_data *sd)
> > > >          if (sd != mysd) {
> > > >                  sd->rps_ipi_next = mysd->rps_ipi_list;
> > > >                  mysd->rps_ipi_list = sd;
> > > > -
> > > > -                __raise_softirq_irqoff(NET_RX_SOFTIRQ);
> > > >                  return 1;
> > > >          }
> > > >  #endif /* CONFIG_RPS */
> > > > --
> > > > 2.37.3
> > > >
> > >
> > > This is not going to work in some cases. Please take a deeper look.
> > >
> > > I have to run; if you (or others) do not find the reason, I will give
> > > more details when I am done traveling.
> >
> > I'm wondering whether we could use @mysd instead of @sd like this:
> >
> > if (!__test_and_set_bit(NAPI_STATE_SCHED, &mysd->backlog.state))
> >         __raise_softirq_irqoff(NET_RX_SOFTIRQ);
>
> Ah, I have to post more precise code, because the snippet above may mislead people.
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 1518a366783b..9ac9b32e392f 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -4594,8 +4594,9 @@ static int napi_schedule_rps(struct softnet_data *sd)
>          if (sd != mysd) {
>                  sd->rps_ipi_next = mysd->rps_ipi_list;
>                  mysd->rps_ipi_list = sd;
> +                if (!__test_and_set_bit(NAPI_STATE_SCHED, &mysd->backlog.state))

Forgive me, I really need some coffee. I made a mistake: the line
above should be:

+                if (!test_bit(NAPI_STATE_SCHED, &mysd->backlog.state))

But the whole thing doesn't feel right. I need a few days to dig into
this part, until Eric can help me with more details.
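
To make the idea concrete, here is roughly how napi_schedule_rps()
would look with that check applied. This is just a sketch of the
proposal under discussion against the current code, not a tested
change:

static int napi_schedule_rps(struct softnet_data *sd)
{
        struct softnet_data *mysd = this_cpu_ptr(&softnet_data);

#ifdef CONFIG_RPS
        if (sd != mysd) {
                /* Queue the remote cpu's softnet_data for a later IPI. */
                sd->rps_ipi_next = mysd->rps_ipi_list;
                mysd->rps_ipi_list = sd;

                /* If the local backlog NAPI is not already scheduled,
                 * net_rx_action() may not run here soon, so keep raising
                 * the softirq so that net_rps_action_and_irq_enable()
                 * gets a chance to send the IPI to the remote cpu.
                 */
                if (!test_bit(NAPI_STATE_SCHED, &mysd->backlog.state))
                        __raise_softirq_irqoff(NET_RX_SOFTIRQ);
                return 1;
        }
#endif /* CONFIG_RPS */
        __napi_schedule_irqoff(&mysd->backlog);
        return 0;
}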

Thanks,
Jason

> +                        __raise_softirq_irqoff(NET_RX_SOFTIRQ);
>
> -                __raise_softirq_irqoff(NET_RX_SOFTIRQ);
>                  return 1;
>          }
>  #endif /* CONFIG_RPS */
>
> Eric, I realized that some paths do not send the IPI to notify the
> other cpu. If someone has already grabbed the NAPI_STATE_SCHED flag,
> we know that at the end of net_rx_action(), or at the beginning of
> process_backlog(), net_rps_action_and_irq_enable() will take care of
> delivering the IPI. However, if no one holds the flag, on some paths
> we would not get a chance to promptly tell the other cpu to raise the
> softirq and process the pending data. Thus I have to check whether
> someone already owns the napi poll, as shown above.
>
> If I got this wrong, please correct me when you're available. Thanks
> in advance.
>
> >
> > I traced back through some historical changes and saw a connection to
> > this commit ("net: solve a NAPI race"):
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=39e6c8208d7b6fb9d2047850fb3327db567b564b
> >
> > Thanks,
> > Jason
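
For reference, the IPI delivery path I was referring to above looks
roughly like this in net/core/dev.c (written from memory, so please
double-check against the actual tree):

static void net_rps_send_ipi(struct softnet_data *remsd)
{
#ifdef CONFIG_RPS
        while (remsd) {
                struct softnet_data *next = remsd->rps_ipi_next;

                /* remsd->csd runs rps_trigger_softirq() on the remote
                 * cpu, which schedules that cpu's backlog and raises
                 * NET_RX_SOFTIRQ there.
                 */
                if (cpu_online(remsd->cpu))
                        smp_call_function_single_async(remsd->cpu,
                                                       &remsd->csd);
                remsd = next;
        }
#endif
}

/* Called from net_rx_action() and process_backlog() with irqs disabled. */
static void net_rps_action_and_irq_enable(struct softnet_data *sd)
{
#ifdef CONFIG_RPS
        struct softnet_data *remsd = sd->rps_ipi_list;

        if (remsd) {
                sd->rps_ipi_list = NULL;

                local_irq_enable();

                /* Send pending IPIs to trigger RPS processing on
                 * remote cpus.
                 */
                net_rps_send_ipi(remsd);
        } else
#endif
                local_irq_enable();
}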