Re: [PATCH 2/4] sched/deadline: Improve admission control for asymmetric CPU capacities
From: Dietmar Eggemann
Date: Wed Apr 08 2020 - 08:26:12 EST
On 08.04.20 12:42, Valentin Schneider wrote:
>
> On 08/04/20 10:50, Dietmar Eggemann wrote:
>> +++ b/kernel/sched/sched.h
>> @@ -304,11 +304,14 @@ void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
>> __dl_update(dl_b, -((s32)tsk_bw / cpus));
>> }
>>
>> +static inline unsigned long rd_capacity(int cpu);
>> +
>> static inline
>> -bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw)
>> +bool __dl_overflow(struct dl_bw *dl_b, int cpu, u64 old_bw, u64 new_bw)
>> {
>> return dl_b->bw != -1 &&
>> - dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw;
>> + cap_scale(dl_b->bw, rd_capacity(cpu)) <
>> + dl_b->total_bw - old_bw + new_bw;
>> }
>>
>
> I don't think this is strictly equivalent to what we have now for the SMP
> case. 'cpus' used to come from dl_bw_cpus(), which is an ugly way of
> writing
>
> cpumask_weight(rd->span AND cpu_active_mask);
>
> The rd->cpu_capacity_orig field you added gets set once per domain rebuild,
> so it also happens in sched_cpu_(de)activate() but is separate from
> touching cpu_active_mask. AFAICT this means we can observe a CPU as !active
> but still see its capacity_orig accounted in a root_domain.
I see what you mean.
The
int dl_bw_cpus(int i) {
...
for_each_cpu_and(i, rd->span, cpu_active_mask)
cpus++;
...
}
should be there to handle the 'rd->span ⊄ cpu_active_mask' case.
We could use a similar implementation for s/cpus/capacity:
unsigned long dl_bw_capacity(int i) {
...
for_each_cpu_and(i, rd->span, cpu_active_mask)
cap += arch_scale_cpu_capacity(i);
...
}
[...]