Re: [PATCH 2/4] sched/deadline: Improve admission control for asymmetric CPU capacities

From: Juri Lelli
Date: Fri Apr 17 2020 - 08:19:56 EST


On 09/04/20 19:29, Dietmar Eggemann wrote:

[...]

>
> Maybe we can do a hybrid. We have rd->span and rd->sum_cpu_capacity and
> with the help of an extra per-cpu cpumask we could just

Hummm, I like the idea, but

> DEFINE_PER_CPU(cpumask_var_t, dl_bw_mask);
>
> dl_bw_cpus(int i) {

This works if calls are always local to the rd we are interested in
(the argument 'i' isn't used). Are we always doing that?

> struct cpumask *cpus = this_cpu_cpumask_var_ptr(dl_bw_mask);
> ...
> cpumask_and(cpus, rd->span, cpu_active_mask);
>
> return cpumask_weight(cpus);
> }
>
> and
>
> dl_bw_capacity(int i) {
>
> struct cpumask *cpus = this_cpu_cpumask_var_ptr(dl_bw_mask);
> ...
> cpumask_and(cpus, rd->span, cpu_active_mask);
> if (cpumask_equal(cpus, rd->span))
> return rd->sum_cpu_capacity;

What if capacities change between invocations (with the same span)?
Can that happen?

>
> for_each_cpu(i, cpus)
> cap += capacity_orig_of(i);
>
> return cap;
> }
>
> So only in cases in which rd->span and cpu_active_mask differ we would
> have to sum up again.

Thanks,

Juri