Re: [PATCH v3 0/5] cpumask: improve on cpumask_local_spread() locality

From: Jacob Keller
Date: Thu Dec 08 2022 - 13:45:36 EST

On 12/8/2022 10:30 AM, Yury Norov wrote:
cpumask_local_spread() currently checks the local node for the presence
of the i'th CPU and, if it finds nothing, falls back to a flat search
among all non-local CPUs. We can do better by checking CPUs per NUMA hop.
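
For intuition, here is a conceptual sketch of that idea. It is not the
code in this series, and the function itself is made up, although
node_distance(), cpumask_of_node() and for_each_cpu_and() are existing
kernel helpers:

#include <linux/cpumask.h>
#include <linux/nodemask.h>
#include <linux/topology.h>

/*
 * Conceptual sketch only, not the patched lib/cpumask.c: return the i'th
 * online CPU for @node, walking nodes in order of increasing NUMA
 * distance. node_distance(), cpumask_of_node() and for_each_cpu_and()
 * are real kernel helpers; the function itself is illustrative.
 */
static unsigned int spread_ith_cpu_by_hops(unsigned int i, int node)
{
        nodemask_t unvisited = node_states[N_ONLINE];
        unsigned int cpu;

        while (!nodes_empty(unvisited)) {
                int n, nearest = NUMA_NO_NODE;

                /* Pick the closest node we haven't visited yet. */
                for_each_node_mask(n, unvisited)
                        if (nearest == NUMA_NO_NODE ||
                            node_distance(node, n) < node_distance(node, nearest))
                                nearest = n;

                node_clear(nearest, unvisited);

                /* Scan that node's online CPUs until the i'th is found. */
                for_each_cpu_and(cpu, cpumask_of_node(nearest), cpu_online_mask)
                        if (i-- == 0)
                                return cpu;
        }

        return nr_cpu_ids;      /* i exceeded the number of online CPUs */
}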

This series is inspired by Tariq Toukan and Valentin Schneider's
"net/mlx5e: Improve remote NUMA preferences used for the IRQ affinity
hints"

https://patchwork.kernel.org/project/netdevbpf/patch/20220728191203.4055-3-tariqt@xxxxxxxxxx/

According to their measurements, for mlx5e:

The bottleneck on the RX side is removed, reaching line rate (~1.8x speedup).
~30% less CPU utilization on TX.

This series makes cpumask_local_spread() traverse CPUs based on NUMA
distance in the same way, and I expect a comparable improvement for its
users, as in the case of mlx5e.
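
To show where this matters, a typical user spreads per-vector IRQ
affinity hints roughly like the snippet below. This is illustrative
only, not mlx5e code, and the irqs[]/nvec names are made up:

#include <linux/cpumask.h>
#include <linux/interrupt.h>

/*
 * Illustrative only, not mlx5e code: a driver spreading per-queue IRQ
 * affinity hints. The irqs[] array and nvec count are made-up names.
 */
static void example_spread_irq_hints(unsigned int *irqs, int nvec, int node)
{
        int i;

        for (i = 0; i < nvec; i++) {
                unsigned int cpu = cpumask_local_spread(i, node);

                irq_set_affinity_hint(irqs[i], cpumask_of(cpu));
        }
}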

I tested the new behavior on my VM with the following NUMA configuration:

root@debian:~# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3
node 0 size: 3869 MB
node 0 free: 3740 MB
node 1 cpus: 4 5
node 1 size: 1969 MB
node 1 free: 1937 MB
node 2 cpus: 6 7
node 2 size: 1967 MB
node 2 free: 1873 MB
node 3 cpus: 8 9 10 11 12 13 14 15
node 3 size: 7842 MB
node 3 free: 7723 MB
node distances:
node   0   1   2   3
  0:  10  50  30  70
  1:  50  10  70  30
  2:  30  70  10  50
  3:  70  30  50  10

And cpumask_local_spread() traversal for each node and offset looks like
this:

node 0: 0 1 2 3 6 7 4 5 8 9 10 11 12 13 14 15
node 1: 4 5 8 9 10 11 12 13 14 15 0 1 2 3 6 7
node 2: 6 7 0 1 2 3 8 9 10 11 12 13 14 15 4 5
node 3: 8 9 10 11 12 13 14 15 4 5 6 7 0 1 2 3
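
For anyone who wants to double-check these rows, the following small
standalone userspace program (not kernel code) derives the same
orderings from the distance table above:

#include <stdio.h>

#define NR_NODES 4
#define NR_CPUS  16

/* Node distances as reported by numactl -H above. */
static const int distance[NR_NODES][NR_NODES] = {
        { 10, 50, 30, 70 },
        { 50, 10, 70, 30 },
        { 30, 70, 10, 50 },
        { 70, 30, 50, 10 },
};

/* Home node of each CPU, per the numactl output above. */
static const int cpu_to_node[NR_CPUS] = {
        0, 0, 0, 0,                     /* node 0: cpus 0-3  */
        1, 1,                           /* node 1: cpus 4-5  */
        2, 2,                           /* node 2: cpus 6-7  */
        3, 3, 3, 3, 3, 3, 3, 3,         /* node 3: cpus 8-15 */
};

/* Sort node IDs by distance from @node (tiny selection sort). */
static void sort_nodes_by_distance(int node, int order[NR_NODES])
{
        for (int n = 0; n < NR_NODES; n++)
                order[n] = n;

        for (int a = 0; a < NR_NODES; a++)
                for (int b = a + 1; b < NR_NODES; b++)
                        if (distance[node][order[b]] < distance[node][order[a]]) {
                                int t = order[a];

                                order[a] = order[b];
                                order[b] = t;
                        }
}

int main(void)
{
        for (int node = 0; node < NR_NODES; node++) {
                int order[NR_NODES];

                sort_nodes_by_distance(node, order);

                /* Print CPUs hop by hop: matches the table above. */
                printf("node %d:", node);
                for (int n = 0; n < NR_NODES; n++)
                        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                                if (cpu_to_node[cpu] == order[n])
                                        printf(" %d", cpu);
                printf("\n");
        }

        return 0;
}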

v1: https://lore.kernel.org/lkml/20221111040027.621646-5-yury.norov@xxxxxxxxx/T/
v2: https://lore.kernel.org/all/20221112190946.728270-3-yury.norov@xxxxxxxxx/T/
v3:
- fix typo in find_nth_and_andnot_bit();
- add 5th patch that simplifies cpumask_local_spread();
- address various coding style nits.


The whole series looks reasonable to me!

Reviewed-by: Jacob Keller <jacob.e.keller@xxxxxxxxx>