On Wed, Feb 04, 2015 at 06:30:49PM +0000, Morten Rasmussen wrote:
From: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
Besides the existing frequency scale-invariance correction factor, apply a
cpu scale-invariance correction factor to usage tracking.
Cpu scale-invariance takes into consideration cpu performance deviations due
to micro-architectural differences (i.e. instructions per second) between
cpus in HMP systems (e.g. big.LITTLE), as well as differences in the
frequency value of the highest OPP between cpus in SMP systems.
Each segment of the sched_avg::running_avg_sum geometric series is now
scaled by the cpu performance factor too, so the
sched_avg::utilization_avg_contrib of each entity will be invariant with
respect to the particular cpu of the HMP/SMP system it is gathered on.
So the usage level that is returned by get_cpu_usage stays relative to
the max cpu performance of the system.
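To make the combined effect concrete, here is a minimal sketch of how one
segment of the series gets scaled (the helper name is made up for
illustration; scale_cpu is the factor added by this patch, scale_freq stands
for the existing frequency correction factor, both assumed to be in the
[0..SCHED_CAPACITY_SCALE] range):

	/*
	 * Illustrative only, not part of the patch: apply both correction
	 * factors to one segment of the geometric series. On the fastest
	 * cpu running at its highest OPP both factors are
	 * SCHED_CAPACITY_SCALE (1024) and the delta passes through
	 * unmodified; on a slower cpu or at a lower OPP the contribution
	 * shrinks proportionally.
	 */
	static inline u64 scale_running_delta(u64 delta, unsigned long scale_freq,
					      unsigned long scale_cpu)
	{
		delta = (delta * scale_freq) >> SCHED_CAPACITY_SHIFT;	/* frequency invariance */
		delta = (delta * scale_cpu) >> SCHED_CAPACITY_SHIFT;	/* cpu invariance */

		return delta;
	}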
@@ -2547,6 +2549,10 @@ static __always_inline int __update_entity_runnable_avg(u64 now, int cpu,
if (runnable)
sa->runnable_avg_sum += scaled_delta_w;
+
+ scaled_delta_w *= scale_cpu;
+ scaled_delta_w >>= SCHED_CAPACITY_SHIFT;
+
if (running)
sa->running_avg_sum += scaled_delta_w;
sa->avg_period += delta_w;
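IOW, after this hunk the per-segment accounting ends up looking roughly like
this (simplified sketch, assuming scaled_delta_w already had the existing
frequency factor applied earlier in the function):

	scaled_delta_w = (delta_w * scale_freq) >> SCHED_CAPACITY_SHIFT;

	if (runnable)
		sa->runnable_avg_sum += scaled_delta_w;	/* freq-scaled only */

	scaled_delta_w = (scaled_delta_w * scale_cpu) >> SCHED_CAPACITY_SHIFT;

	if (running)
		sa->running_avg_sum += scaled_delta_w;	/* freq- and cpu-scaled */

	sa->avg_period += delta_w;			/* unscaled wall time */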
Maybe help remind me why we want this asymmetry between runnable and
running in terms of scaling?
The above talks about why we want running scaled with the cpu metric,
but it forgets to tell me why we do not want to scale runnable.
(even if I were to have a vague recollection, it seems like a good thing
to write down someplace ;-).