Re: [PATCH net-next 3/3] virtio_net: Map NAPIs to queues
From: Jakub Kicinski
Date: Mon Jan 13 2025 - 17:04:55 EST
On Mon, 13 Jan 2025 09:30:20 -0800 Joe Damato wrote:
> > > static void virtnet_napi_enable_lock(struct virtqueue *vq,
> > > -                                     struct napi_struct *napi)
> > > +                                     struct napi_struct *napi,
> > > +                                     bool need_rtnl)
> > > {
> > > +        struct virtnet_info *vi = vq->vdev->priv;
> > > +        int q = vq2rxq(vq);
> > > +
> > >         virtnet_napi_do_enable(vq, napi);
> > > +
> > > +        if (q < vi->curr_queue_pairs) {
> > > +                if (need_rtnl)
> > > +                        rtnl_lock();
> >
> > Can we tweak the caller to call rtnl_lock() instead to avoid this trick?
>
> The major problem is that if the caller calls rtnl_lock() before
> calling virtnet_napi_enable_lock(), then virtnet_napi_do_enable()
> (and thus napi_enable()) runs under the lock.
>
> Jakub mentioned in a recent change [1] that napi_enable may soon
> need to sleep.
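> 
> For illustration only (rough sketch, not the actual call sites), the
> caller-takes-rtnl variant would look like:
> 
>         rtnl_lock();
>         /* napi_enable() now runs with rtnl held */
>         virtnet_napi_enable_lock(vq, napi, false);
>         rtnl_unlock();
> 
> whereas with "need_rtnl" napi_enable() itself stays outside rtnl and
> only the queue <-> NAPI mapping is done under it:
> 
>         /* rtnl taken internally, only around netif_queue_set_napi() */
>         virtnet_napi_enable_lock(vq, napi, true);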
>
> Given the above constraints, the only way to avoid the "need_rtnl"
> argument would be to refactor the code much more, placing calls (or
> wrappers) to netif_queue_set_napi() in many locations.
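> 
> Each such site would then need something along these lines (rough,
> hypothetical sketch of an RX enable path, with the locking varying
> depending on whether the path already holds rtnl):
> 
>         virtnet_napi_do_enable(vq, napi);
> 
>         rtnl_lock();
>         netif_queue_set_napi(vi->dev, vq2rxq(vq),
>                              NETDEV_QUEUE_TYPE_RX, napi);
>         rtnl_unlock();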
>
> IMHO: This implementation seemed cleaner than putting calls to
> netif_queue_set_napi throughout the driver.
>
> Please let me know how you'd like to proceed on this.
>
> [1]: https://lore.kernel.org/netdev/20250111024742.3680902-1-kuba@xxxxxxxxxx/
I'm going to make netif_queue_set_napi() take netdev->lock, and remove
the rtnl_lock requirement ~this week. If we need conditional locking,
perhaps we're better off waiting?
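FWIW, once that lands the enable path could shrink to roughly this
(sketch only, assuming netif_queue_set_napi() takes netdev->lock
itself and no longer needs rtnl):

        virtnet_napi_do_enable(vq, napi);

        /* no conditional rtnl needed any more */
        if (q < vi->curr_queue_pairs)
                netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, napi);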