Re: [RFC net-next v4 0/9] Add support for per-NAPI config via netlink
From: Stanislav Fomichev
Date: Thu Oct 03 2024 - 19:29:52 EST
On 10/01, Joe Damato wrote:
> Greetings:
>
> Welcome to RFC v4.
>
> Significant changes have been made since RFC v3 [1]; please see the
> changelog below for details.
>
> A couple of important callouts for reviewers in this revision:
>
> 1. idpf embeds a napi_struct in an internal data structure and
> includes an assertion on the size of napi_struct. The maintainers
> have stated that they think anyone touching napi_struct should update
> the assertion [2], so I've done this in patch 3.
>
> Even though the assertion has been updated, I've given no thought
> to the cacheline placement of napi_struct within idpf's internals.
>
> Would appreciate other opinions on this; I think idpf should be
> fixed. It seems unreasonable to me that anyone changing the size of
> a struct in the core should need to think about cachelines in idpf.
[..]
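For context, the assertion being updated is of this general shape (a
minimal sketch only; the byte count here is made up, and the real
check lives in idpf's headers):

	#include <linux/build_bug.h>
	#include <linux/netdevice.h>

	/* Breaks the build when napi_struct grows past the size the
	 * embedding driver's cacheline layout was tuned for, forcing
	 * whoever grew it to revisit the layout or bump the constant.
	 */
	static_assert(sizeof(struct napi_struct) <= 400 /* assumed */);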
> 2. This revision seems to work (see below for a full walkthrough). Is
> this the behavior we want? Am I missing some use case or behavioral
> requirement that other folks need?
The walkthrough looks good!
> 3. Re a previous point made by Stanislav regarding "taking over a NAPI
> ID" when the channel count changes: mlx5 seems to call napi_disable
> followed by netif_napi_del for the old queues and then calls
> napi_enable for the new ones. In this RFC, NAPI ID generation
> is deferred to napi_enable. This means we won't end up with the
> same NAPI ID added to the hash twice at the same time (I am pretty
> sure).
[..]
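The ordering being described looks roughly like this (driver-side
names are invented; only the napi_*/netif_napi_* calls are the real
core API):

	/* Tear down the old queues first: napi_disable() stops
	 * polling, netif_napi_del() drops the NAPI ID from the hash.
	 */
	for (i = 0; i < old_count; i++) {
		napi_disable(&old[i].napi);
		netif_napi_del(&old[i].napi);
	}

	/* Then bring up the new ones. With this RFC the NAPI ID is
	 * generated and hashed in napi_enable(), so no two live hash
	 * entries should ever share an ID.
	 */
	for (i = 0; i < new_count; i++) {
		netif_napi_add(dev, &new[i].napi, my_poll);
		napi_enable(&new[i].napi);
	}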
> Can we assume all drivers will napi_disable the old queues before
> napi_enable the new ones? If yes, we might not need to worry about
> a NAPI ID takeover function.
With the explicit driver opt-in via netif_napi_add_config, this
shouldn't matter? When somebody gets to converting the drivers that
don't follow this common pattern, they'll have to solve the takeover
part :-)
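FWIW, the opt-in I have in mind is roughly this (a sketch only; the
signature assumes the helper from this series, with a persistent
per-queue index as the last argument, and q/my_poll are made up):

	/* Tie this NAPI instance to a stable index so its per-NAPI
	 * config survives the queue being torn down and re-created.
	 */
	netif_napi_add_config(dev, &q->napi, my_poll, q->idx);
	napi_enable(&q->napi);

Drivers that follow the common disable-then-enable pattern get this
for free; the odd ones out will need the takeover logic above.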