[PATCH] sched/topology: Optimize sched_numa_find_nth_cpu() by inlining bsearch()
From: Kuan-Wei Chiu
Date: Thu Dec 05 2024 - 11:23:49 EST
When CONFIG_MITIGATION_RETPOLINE is enabled, indirect function calls
become costly. Replace bsearch() with __inline_bsearch(), so that the
binary search is inlined and the comparison callback (hop_cmp()) is
known at the call site, avoiding the indirect call and improving
efficiency. This also shrinks the object file by 128 bytes overall
(120 bytes of text) on x86-64.

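For reference, the mechanism is roughly as follows. This is a
simplified sketch modelled on __inline_bsearch() in
include/linux/bsearch.h and may differ in detail from the current
tree; because the helper is __always_inline, cmp is a compile-time
constant (hop_cmp) at the call site, so the compiler can emit a
direct call instead of an indirect one:

typedef int (*cmp_func_t)(const void *key, const void *elt);

static __always_inline
void *__inline_bsearch(const void *key, const void *base, size_t num,
		       size_t size, cmp_func_t cmp)
{
	const char *pivot;
	int result;

	while (num > 0) {
		/* Probe the middle element of the remaining range. */
		pivot = (const char *)base + (num >> 1) * size;
		result = cmp(key, pivot);	/* direct call once inlined */

		if (result == 0)
			return (void *)pivot;

		if (result > 0) {
			/* Key is above the pivot: search the upper half. */
			base = pivot + size;
			num--;
		}
		num >>= 1;
	}

	return NULL;
}
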
Before the patch:

$ size ./kernel/sched/build_utility.o
   text    data     bss     dec     hex filename
  40113   12379    2176   54668    d58c ./kernel/sched/build_utility.o

After the patch:

$ size ./kernel/sched/build_utility.o
   text    data     bss     dec     hex filename
  39993   12371    2176   54540    d50c ./kernel/sched/build_utility.o

Signed-off-by: Kuan-Wei Chiu <visitorckw@xxxxxxxxx>
---
 kernel/sched/topology.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 9748a4c8d668..7790060d12ca 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2173,7 +2173,8 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
 	if (!k.masks)
 		goto unlock;
 
-	hop_masks = bsearch(&k, k.masks, sched_domains_numa_levels, sizeof(k.masks[0]), hop_cmp);
+	hop_masks = __inline_bsearch(&k, k.masks, sched_domains_numa_levels, sizeof(k.masks[0]),
+				     hop_cmp);
 	hop = hop_masks - k.masks;
 
 	ret = hop ?
--
2.34.1