Re: [PATCH 2/4] sched/deadline: Improve admission control for asymmetric CPU capacities

From: Valentin Schneider
Date: Wed Apr 08 2020 - 06:42:32 EST



On 08/04/20 10:50, Dietmar Eggemann wrote:
> +++ b/kernel/sched/sched.h
> @@ -304,11 +304,14 @@ void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
> __dl_update(dl_b, -((s32)tsk_bw / cpus));
> }
>
> +static inline unsigned long rd_capacity(int cpu);
> +
> static inline
> -bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw)
> +bool __dl_overflow(struct dl_bw *dl_b, int cpu, u64 old_bw, u64 new_bw)
> {
> return dl_b->bw != -1 &&
> - dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw;
> + cap_scale(dl_b->bw, rd_capacity(cpu)) <
> + dl_b->total_bw - old_bw + new_bw;
> }
>

I don't think this is strictly equivalent to what we have now for the SMP
case. 'cpus' used to come from dl_bw_cpus(), which is an ugly way of
writing

cpumask_weight(rd->span AND cpu_active_mask);

The rd->cpu_capacity_orig field you added gets set once per domain rebuild,
which does happen in sched_cpu_(de)activate(), but separately from the
update of cpu_active_mask. AFAICT this means we can observe a CPU as
!active while its capacity_orig is still accounted in a root_domain.


> extern void init_dl_bw(struct dl_bw *dl_b);