[PATCH] cpufreq/schedutil: Only bind threads if needed

From: Christian Loehle
Date: Thu Sep 12 2024 - 09:53:46 EST


Remove the unconditional binding of sugov kthreads to the affected CPUs
if the cpufreq driver indicates that updates can happen from any CPU.
This allows userspace to set affinities to either save power (waking up
bigger CPUs on HMP can be expensive) or increase performance (by
letting the utilized CPUs run without being preempted by the sugov
kthread).

Without this patch, which CPU of the PD handles all the cpufreq
updates is basically a boot-time dice roll. With the recent reduction
of update filtering, two basic problems become more and more apparent:
1. The wake_cpu might be idle and we wake it up from another CPU just
for the cpufreq update. Apart from wasting power, the exit latency of
its idle state might be longer than the sugov thread's running time,
essentially delaying the cpufreq update unnecessarily.
2. We preempt either the requesting or another busy CPU of the PD,
while the update could be done from a CPU that we deem less important,
at the price of only an IPI and two context switches.

The change essentially amounts to not setting PF_NO_SETAFFINITY when
dvfs_possible_from_any_cpu is set; there is no behavior change if
userspace doesn't touch affinities. Since the sugov kthreads run as
special SCHED_DEADLINE tasks (SCHED_FLAG_SUGOV), they also need to be
exempted from the DL affinity admission check so that userspace can
actually change their affinity.
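
For illustration only (thread name and CPU range below are examples,
not part of the patch): sugov kthreads are named "sugov:<first policy
CPU>", so with this patch applied userspace could do something like:

```shell
# Sketch only: assumes a policy whose first CPU is 0 (hence a "sugov:0"
# kthread) and that CPUs 0-3 are the little cluster; adjust as needed.
pid=$(pgrep -x 'sugov:0' | head -n1)
if [ -n "$pid" ]; then
	# With dvfs_possible_from_any_cpu set this now succeeds instead
	# of failing with EINVAL due to PF_NO_SETAFFINITY.
	taskset -pc 0-3 "$pid"
else
	echo "no sugov:0 kthread found on this system"
fi
```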

Signed-off-by: Christian Loehle <christian.loehle@xxxxxxx>
---
kernel/sched/cpufreq_schedutil.c | 6 +++++-
kernel/sched/syscalls.c          | 3 +++
2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 43111a515a28..466fb79e0b81 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -683,7 +683,11 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
 	}
 
 	sg_policy->thread = thread;
-	kthread_bind_mask(thread, policy->related_cpus);
+	if (policy->dvfs_possible_from_any_cpu)
+		set_cpus_allowed_ptr(thread, policy->related_cpus);
+	else
+		kthread_bind_mask(thread, policy->related_cpus);
+
 	init_irq_work(&sg_policy->irq_work, sugov_irq_work);
 	mutex_init(&sg_policy->work_lock);
diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
index c62acf509b74..7d4a4edfcfb9 100644
--- a/kernel/sched/syscalls.c
+++ b/kernel/sched/syscalls.c
@@ -1159,6 +1159,9 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
 	if (!task_has_dl_policy(p) || !dl_bandwidth_enabled())
 		return 0;
 
+	if (dl_entity_is_special(&p->dl))
+		return 0;
+
 	/*
 	 * Since bandwidth control happens on root_domain basis,
 	 * if admission test is enabled, we only admit -deadline
--
2.34.1