[PATCH v2 2/2] cpufreq: schedutil: consolidate capacity margin calculation

From: Leo Yan
Date: Sun Oct 01 2017 - 20:30:54 EST


The scheduler CFS class has the variable 'capacity_margin' for calculating
the capacity margin, and the schedutil governor needs to apply the same
margin for the frequency tipping point. Below are the formulas used in the
CFS class and the schedutil governor respectively:

CFS: U' = U * capacity_margin / 1024 = U * 1.25
Schedutil: U' = U + (U >> 2) = U + U * 0.25 = U * 1.25
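
For illustration, a minimal stand-alone sketch (not kernel code; it only
mirrors the two formulas above, with 'capacity_margin' at its current
default of 1280) showing that both forms scale the utilization by the
same factor:

#include <stdio.h>

int main(void)
{
	unsigned long util = 800;
	unsigned long capacity_margin = 1280;	/* current default, ~1.25x */

	unsigned long cfs_util = util * capacity_margin / 1024;	/* 1000 */
	unsigned long sugov_util = util + (util >> 2);		/* 1000 */

	printf("cfs=%lu schedutil=%lu\n", cfs_util, sugov_util);
	return 0;
}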

This patch consolidates the capacity margin calculation by letting
schedutil use the same formula as the CFS class. As a result, this avoids
a mismatch between schedutil and the CFS class if 'capacity_margin' is
later changed to a different value.
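
As an illustration of that mismatch (the value 1138 below is hypothetical,
chosen only to show the effect of retuning 'capacity_margin' to roughly
1.11x; the sketch mirrors only the margin step, not the full next_freq
calculation), a stand-alone sketch comparing the old hard-coded shift with
the consolidated formula:

#include <stdio.h>

int main(void)
{
	unsigned long freq = 1000000;			/* kHz */
	unsigned long capacity_margin = 1138;		/* hypothetical retune */

	/* old schedutil: always +25%, ignores capacity_margin */
	unsigned long old_freq = freq + (freq >> 2);		/* 1250000 */
	/* after this patch: follows capacity_margin, like CFS */
	unsigned long new_freq = freq * capacity_margin >> 10;	/* 1111328 */

	printf("old=%lu new=%lu\n", old_freq, new_freq);
	return 0;
}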

Cc: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
Cc: Morten Rasmussen <morten.rasmussen@xxxxxxx>
Cc: Chris Redpath <Chris.Redpath@xxxxxxx>
Cc: Joel Fernandes <joelaf@xxxxxxxxxx>
Cc: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
Cc: Patrick Bellasi <patrick.bellasi@xxxxxxx>
Cc: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
Signed-off-by: Leo Yan <leo.yan@xxxxxxxxxx>
---
kernel/sched/cpufreq_schedutil.c | 6 ++++--
kernel/sched/sched.h | 1 +
2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 9209d83..13cc243 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -155,7 +155,8 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
*
* next_freq = C * curr_freq * util_raw / max
*
- * Take C = 1.25 for the frequency tipping point at (util / max) = 0.8.
+ * Take C = capacity_margin / 1024 = 1.25, which puts the frequency tipping
+ * point at (util / max) = 0.8.
*
* The lowest driver-supported frequency which is equal or greater than the raw
* next_freq (as calculated above) is returned, subject to policy min/max and
@@ -168,7 +169,8 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
unsigned int freq = arch_scale_freq_invariant() ?
policy->cpuinfo.max_freq : policy->cur;

- freq = (freq + (freq >> 2)) * util / max;
+ freq = freq * capacity_margin >> SCHED_CAPACITY_SHIFT;
+ freq = freq * util / max;

if (freq == sg_policy->cached_raw_freq && sg_policy->next_freq != UINT_MAX)
return sg_policy->next_freq;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 14db76c..cf75bdc 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -52,6 +52,7 @@ struct cpuidle_state;
#define TASK_ON_RQ_MIGRATING 2

extern __read_mostly int scheduler_running;
+extern unsigned int capacity_margin __read_mostly;

extern unsigned long calc_load_update;
extern atomic_long_t calc_load_tasks;
--
2.7.4