So essentially what you want to do is:
Make EAS aware of the frequency clamping that schedutil can be subject to:
get_next_freq() -> cpufreq_driver_resolve_freq() ->
clamp_val(target_freq, policy->min, policy->max) (1)
by subtracting the CPU's Thermal Pressure (ThPr) signal from the original
CPU capacity `arch_scale_cpu_capacity()` (2).
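Just to spell out the relationship I think the patch relies on (rough
sketch only, not actual kernel code; the exact scaling your patch uses
may differ):

  /*
   * (1) clamps schedutil's request to the policy limits, e.g. a thermal cap:
   *
   *       freq = clamp_val(target_freq, policy->min, policy->max);
   *
   * A cap of policy->max corresponds to a capacity cap of
   *
   *       capped_cap = (2) * policy->max / policy->cpuinfo.max_freq;
   *
   * and ThPr tracks `(2) - capped_cap`, so subtracting it from (2) should
   * give EAS roughly the same view of the CPU as schedutil has after (1).
   */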
---
Isn't there a conceptual flaw in this design? Let's say we have a
big.LITTLE system with two cpufreq cooling devices and a thermal zone
(something like Hikey 960). To create a ThPr scenario we have to run
some load on the CPUs (e.g. hackbench (3)).
Eventually cpufreq_set_cur_state() [drivers/thermal/cpufreq_cooling.c]
will set thermal_pressure to `(2) - (2)*freq/policy->cpuinfo.max_freq`
and PELT will provide the ThPr signal via thermal_load_avg().
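For reference, the relevant part looks roughly like this (paraphrased
from cpufreq_set_cur_state() in drivers/thermal/cpufreq_cooling.c around
v5.7, variable names from memory):

  cpus = cpufreq_cdev->policy->cpus;
  max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));  /* (2) */
  capacity = frequency * max_capacity;
  capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;

  /* i.e. thermal_pressure = (2) - (2) * freq / cpuinfo.max_freq */
  arch_set_thermal_pressure(cpus, max_capacity - capacity);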
But to create this scenario, the system will become overutilized (the
flag is system-wide: if one CPU is overutilized, the whole system is
considered overutilized), so EAS is disabled (i.e.
find_energy_efficient_cpu() and compute_energy() are not executed).
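The gating I mean is the early bail-out in find_energy_efficient_cpu()
(paraphrased, details trimmed):

  static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
  {
          struct root_domain *rd = cpu_rq(smp_processor_id())->rd;
          struct perf_domain *pd;

          rcu_read_lock();
          pd = rcu_dereference(rd->pd);
          if (!pd || READ_ONCE(rd->overutilized))
                  goto fail;

          /* ... energy-aware placement, compute_energy() calls ... */

  fail:
          rcu_read_unlock();
          return -1;
  }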
I can see that there are episodes in which EAS is running and
thermal_load_avg() != 0, but those have to be after (3) has stopped,
when the ThPr signal is just decaying (no new ThPr being accrued). The
cpufreq cooling device can still issue cpufreq_set_cur_state(), but only
with decreasing states.
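(Rough numbers only, assuming the default PELT half-life of 32ms and no
sched_thermal_decay_shift: with nothing new accrued, thermal_load_avg(t)
~= thermal_load_avg(0) * 0.5^(t/32ms), so the signal is practically gone
a few hundred ms after (3) stops.)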
---
IMHO, a precise description of how you envision the system setup,
incorporating all participating subsystems, would be helpful here.
Signed-off-by: Lukasz Luba <lukasz.luba@xxxxxxx>
---
kernel/sched/fair.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 161b92aa1c79..1aeddecabc20 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6527,6 +6527,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
struct cpumask *pd_mask = perf_domain_span(pd);
unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
unsigned long max_util = 0, sum_util = 0;
+ unsigned long _cpu_cap = cpu_cap;
int cpu;
/*
@@ -6558,14 +6559,24 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
cpu_util_next(cpu, p, -1) + task_util_est(p);
}
+ /*
+ * Take the thermal pressure from non-idle CPUs. They have
+ * most up-to-date information. For idle CPUs thermal pressure
+ * signal is not updated so often.
+ */
+ if (!idle_cpu(cpu))
+ _cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu));
+
This one is probably a consequence of the fact that the cpufreq cooling
device sets the ThPr for all CPUs of the policy (Frequency Domain (FD)
or Performance Domain (PD)), whereas PELT updates happen per-CPU, and
only !idle CPUs get the update in scheduler_tick().
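I.e. the PELT side is only folded in from the tick path (paraphrased
from scheduler_tick() in kernel/sched/core.c, ~v5.7):

  thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));
  update_thermal_load_avg(rq_clock_thermal(rq), rq, thermal_pressure);

so an idle CPU keeps a stale thermal_load_avg() until something else
updates its rq (e.g. the blocked-averages path).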
Looks like thermal_pressure [per_cpu(thermal_pressure, cpu),
drivers/base/arch_topology.c], set by cpufreq_set_cur_state(), is always
in sync with policy->max/cpuinfo_max_freq.
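For reference, the cooling device's write lands here for every CPU of
the policy at once (paraphrased from topology_set_thermal_pressure() in
drivers/base/arch_topology.c, ~v5.7):

  DEFINE_PER_CPU(unsigned long, thermal_pressure);

  void topology_set_thermal_pressure(const struct cpumask *cpus,
                                     unsigned long th_pressure)
  {
          int cpu;

          for_each_cpu(cpu, cpus)
                  WRITE_ONCE(per_cpu(thermal_pressure, cpu), th_pressure);
  }

so it cannot go stale on idle CPUs the way the per-rq PELT signal can.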
So for your use case this instantaneous signal is better than the PELT
one. It's precise (no decaying when the frequency clamping is already
gone) and you avoid the per-CPU update issue.
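If you went that way, the hunk above could shrink to something like
(untested sketch of the idea only):

  _cpu_cap = cpu_cap - arch_scale_thermal_pressure(cpu);

without the idle_cpu() special case.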