Re: [PATCH net-next v5 3/4] virtio-net: Map NAPIs to queues
From: Jakub Kicinski
Date: Mon Mar 03 2025 - 19:04:23 EST
On Mon, 3 Mar 2025 13:33:10 -0500 Joe Damato wrote:
> > > @@ -2880,6 +2880,13 @@ static void refill_work(struct work_struct *work)
> > > bool still_empty;
> > > int i;
> > >
> > > + spin_lock(&vi->refill_lock);
> > > + if (!vi->refill_enabled) {
> > > + spin_unlock(&vi->refill_lock);
> > > + return;
> > > + }
> > > + spin_unlock(&vi->refill_lock);
> > > +
> > > for (i = 0; i < vi->curr_queue_pairs; i++) {
> > > struct receive_queue *rq = &vi->rq[i];
> > >
> >
> > Err, I suppose this also doesn't work because:
> >
> > CPU0                               CPU1
> > rtnl_lock()
> > virtnet_close()                    refill_work()
> >                                    (queued before CPU0 called
> >                                     disable_delayed_refill)
> >   cancel_sync() <= deadlock          rtnl_lock()
> >
> > Need to give this a bit more thought.
>
> How about we don't use the API at all from refill_work?
>
> Patch 4 adds persistent NAPI config state, and refill_work isn't a
> queue resize, so maybe we don't need to call netif_queue_set_napi
> there at all: the NAPI IDs are persisted in the NAPI config state,
> and refill_work shouldn't change them.
>
> In which case, we could go back to what refill_work was doing
> before and avoid the problem entirely.
>
> What do you think?
Should work, I think. Though I suspect someone will want to add queue
API support to virtio sooner or later, and they will run into the same
problem with the netdev instance lock, since all of ndo_close() will
then be covered by netdev->lock.
A more thorough and idiomatic way to solve the problem would be to
cancel the work non-sync in ndo_close(), add a cancel with _sync after
the netdev is unregistered (in virtnet_remove()), where the lock is no
longer held, and then wrap the entire work with the relevant lock and
check netif_running() so it returns early in case of a race.
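Roughly that shape would look something like the sketch below. This is
illustrative only, not tested; names follow the current driver, and the
netdev_lock()/netdev_unlock() usage assumes virtio has been converted
to the instance lock:

    static int virtnet_close(struct net_device *dev)
    {
            struct virtnet_info *vi = netdev_priv(dev);

            /* non-sync cancel: doesn't wait for a running work, so no
             * deadlock against a work that takes the instance lock */
            cancel_delayed_work(&vi->refill);
            ...
    }

    static void virtnet_remove(struct virtio_device *vdev)
    {
            struct virtnet_info *vi = vdev->priv;

            unregister_netdev(vi->dev);
            /* instance lock no longer held here, safe to wait */
            cancel_delayed_work_sync(&vi->refill);
            ...
    }

    static void refill_work(struct work_struct *work)
    {
            struct virtnet_info *vi =
                    container_of(work, struct virtnet_info, refill.work);

            netdev_lock(vi->dev);
            if (!netif_running(vi->dev)) {
                    /* raced with close; the non-sync cancel missed us */
                    netdev_unlock(vi->dev);
                    return;
            }
            ...
            netdev_unlock(vi->dev);
    }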
A middle ground would be to do what you suggested above and just leave
a well-worded comment somewhere that will show up in diffs adding queue
API support?
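Something like this, perhaps (wording just a suggestion):

    /* Deliberately no netif_queue_set_napi() here: taking rtnl_lock
     * (or the netdev instance lock, once virtio gains queue API
     * support) from this work deadlocks against close/remove paths
     * that cancel this work synchronously while holding the lock.
     * The NAPI IDs persist in the NAPI config state anyway.
     */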