[PATCH v2 15/17] sched/core: Handle steal values and mark CPUs as preferred

From: Shrikanth Hegde

Date: Tue Apr 07 2026 - 15:22:58 EST


This is the main periodic work that handles the steal time values:

- Compute the steal time by summing CPUTIME_STEAL across all online CPUs.

- Compute the steal ratio as a percentage. It is scaled by an extra factor
  of 100 so that two fractional digits are kept (a worked example follows
  the list).

- If the steal ratio is higher than the threshold, reduce the number of
  preferred CPUs by one core. The last core in the intersection of the
  online and preferred CPUs is marked as non-preferred.
  At least one core is always left as preferred.

- If the steal ratio is lower than the threshold, increase the number of
  preferred CPUs by one core. The first online core which is not in
  cpu_preferred_mask is marked as preferred.
  If all cores are already preferred, bail out.
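
For illustration, a rough sketch of the arithmetic with made-up numbers
(not measured anywhere): over a 1 second sampling period (delta_ns = 1e9)
on 8 online CPUs, a summed steal delta of 400 ms works out to:

    steal_ratio = (delta_steal * 100 * 100) / (delta_ns * num_online_cpus())
                = (400000000ULL * 100 * 100) / (1000000000ULL * 8)
                = 500    /* i.e. 5.00% steal, with two fractional digits */

So a high_threshold of, say, 500 would correspond to 5% steal.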

Increasing/decreasing the preferred set may need to take the split across
NUMA nodes into account. It is being kept simple for now.
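
In pseudo form, the per-sample decision boils down to the sketch below.
The shrink/grow helpers are hypothetical names standing in for the
cpumask walks in the patch:

    if (steal_ratio > sm->high_threshold)
            shrink_preferred_by_one_core();   /* hypothetical helper */
    else if (steal_ratio < sm->low_threshold)
            grow_preferred_by_one_core();     /* hypothetical helper */
    /* between the two thresholds: leave the preferred mask as is */

Keeping a gap between high_threshold and low_threshold gives the mechanism
some hysteresis, so the preferred set is less likely to flap on every
sample.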

Signed-off-by: Shrikanth Hegde <sshegde@xxxxxxxxxxxxx>
---
kernel/sched/core.c | 52 ++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 51 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1c6fcf1ae4fe..6e2b733adf45 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -11349,15 +11349,65 @@ void sched_init_steal_monitor(void)
steal_mon.sampling_period_ms = 1000; /* once per second */
}

-/* This is only a skeleton. Subsequent patches introduce more of it */
void sched_steal_detection_work(struct work_struct *work)
{
struct steal_monitor_t *sm = container_of(work, struct steal_monitor_t, work);
+ int this_cpu = raw_smp_processor_id();
+ u64 delta_steal, delta_ns, steal = 0;
+ u64 steal_ratio;
ktime_t now;
+ int tmp_cpu;
+
+ for_each_cpu(tmp_cpu, cpu_online_mask)
+ steal += kcpustat_cpu(tmp_cpu).cpustat[CPUTIME_STEAL];

/* Update the prev_time for next iteration*/
now = ktime_get();
+ delta_steal = steal > sm->prev_steal ? steal - sm->prev_steal : 0;
+ delta_ns = max_t(u64, ktime_to_ns(ktime_sub(now, sm->prev_time)), 1);
+
sm->prev_time = now;
+ sm->prev_steal = steal;
+
+#ifdef CONFIG_SCHED_SMT
+ /* Percentage, scaled by an extra 100 to keep two fractional digits */
+ steal_ratio = (delta_steal * 100 * 100) / (delta_ns * num_online_cpus());
+
+ /* If the steal ratio is high, remove one core from the preferred CPUs */
+ if (steal_ratio > sm->high_threshold) {
+ int last_cpu;
+
+ cpumask_and(sm->tmp_mask, cpu_online_mask, cpu_preferred_mask);
+ last_cpu = cpumask_last(sm->tmp_mask);
+
+ /*
+ * If the last preferred core is the one running this work (a
+ * housekeeping core), take no action. This always leaves at least
+ * one core preferred, so some CPUs stay available to run work.
+ */
+ if (cpumask_equal(cpu_smt_mask(last_cpu), cpu_smt_mask(this_cpu)))
+ return;
+
+ for_each_cpu(tmp_cpu, cpu_smt_mask(last_cpu)) {
+ set_cpu_preferred(tmp_cpu, false);
+ if (tick_nohz_full_cpu(tmp_cpu))
+ tick_nohz_dep_set_cpu(tmp_cpu, TICK_DEP_BIT_SCHED);
+ }
+ }
+
+ /* If the steal ratio is low, add one more core to the preferred CPUs */
+ if (steal_ratio < sm->low_threshold) {
+ int first_cpu;
+
+ first_cpu = cpumask_first_andnot(cpu_online_mask, cpu_preferred_mask);
+ /* All CPUs are preferred. Nothing to increase further */
+ if (first_cpu >= nr_cpu_ids)
+ return;
+
+ for_each_cpu(tmp_cpu, cpu_smt_mask(first_cpu))
+ set_cpu_preferred(tmp_cpu, true);
+ }
+#endif
}

void sched_trigger_steal_computation(int cpu)
--
2.47.3