[patch 23/30] net/mlx5: Use effective interrupt affinity

From: Thomas Gleixner
Date: Thu Dec 10 2020 - 14:49:37 EST


Using the interrupt affinity mask to check locality does not work well
on architectures which support effective affinity masks.

The affinity mask is either the system-wide default or set by user
space, but the architecture can, or even must, reduce it to the
effective set, so inspecting the affinity mask itself does not reliably
tell which CPUs the interrupt actually targets.

Use the effective affinity mask instead, which reflects the real
interrupt routing.
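
As a minimal illustration (a sketch only, not part of this patch; the
helper below is hypothetical), the two masks can legitimately differ.
Both accessors exist in the core irq code, with the effective mask
being a subset of the requested one:

    #include <linux/interrupt.h>

    /* Sketch: compare the requested and the effective affinity of an
     * IRQ. On architectures like x86 the effective mask is typically
     * reduced to a single target CPU, while the requested mask can
     * span many CPUs.
     */
    static void show_irq_affinity(unsigned int irq)
    {
            const struct cpumask *req = irq_get_affinity_mask(irq);
            const struct cpumask *eff = irq_get_effective_affinity_mask(irq);

            pr_info("irq %u: requested %*pbl, effective %*pbl\n",
                    irq, cpumask_pr_args(req), cpumask_pr_args(eff));
    }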

Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Saeed Mahameed <saeedm@xxxxxxxxxx>
Cc: Leon Romanovsky <leon@xxxxxxxxxx>
Cc: "David S. Miller" <davem@xxxxxxxxxxxxx>
Cc: Jakub Kicinski <kuba@xxxxxxxxxx>
Cc: netdev@xxxxxxxxxxxxxxx
Cc: linux-rdma@xxxxxxxxxxxxxxx
---
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1998,7 +1998,7 @@ static int mlx5e_open_channel(struct mlx
c->num_tc = params->num_tc;
c->xdp = !!params->xdp_prog;
c->stats = &priv->channel_stats[ix].ch;
- c->aff_mask = irq_get_affinity_mask(irq);
+ c->aff_mask = irq_get_effective_affinity_mask(irq);
c->lag_port = mlx5e_enumerate_lag_port(priv->mdev, ix);

netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64);
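
For context (a hedged sketch of the consumer side, not taken from the
driver; the function name is made up), c->aff_mask is later consulted
from the channel's NAPI poll path, where a locality test only gives a
meaningful answer if the stored mask reflects the CPUs the interrupt
is actually routed to:

    #include <linux/cpumask.h>
    #include <linux/smp.h>

    /* Sketch: does the current CPU belong to the interrupt's effective
     * affinity? With the effective mask stored in aff_mask this checks
     * the real target CPU(s) instead of the possibly much wider
     * requested mask.
     */
    static bool channel_irq_is_local(const struct cpumask *aff_mask)
    {
            return cpumask_test_cpu(smp_processor_id(), aff_mask);
    }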