Re: [RFC PATCH] sched: fix the nonsense shares when load of cfs_rq is too small

From: bsegall
Date: Wed Mar 04 2020 - 13:47:28 EST


Peter Zijlstra <peterz@xxxxxxxxxxxxx> writes:

> On Tue, Mar 03, 2020 at 10:17:03PM +0800, Michael Wang wrote:
>> During our testing, we found a case where shares no longer
>> work correctly. The cgroup topology is:
>>
>> /sys/fs/cgroup/cpu/A (shares=102400)
>> /sys/fs/cgroup/cpu/A/B (shares=2)
>> /sys/fs/cgroup/cpu/A/B/C (shares=1024)
>>
>> /sys/fs/cgroup/cpu/D (shares=1024)
>> /sys/fs/cgroup/cpu/D/E (shares=1024)
>> /sys/fs/cgroup/cpu/D/E/F (shares=1024)
>>
>> The same benchmark is running in groups C & F, no other tasks are
>> running, and the benchmark is capable of consuming all the CPUs.
>>
>> We expected group C to win more CPU resources, since it can
>> enjoy all the shares of group A, but it is F that wins much more.
>>
>> The reason is that group B has its shares set to 2, which makes
>> group A's 'cfs_rq->load.weight' very small.
>>
>> And in calc_group_shares() we calculate shares as:
>>
>> load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
>> shares = (tg_shares * load) / tg_weight;
>>
>> Since 'cfs_rq->load.weight' is too small, the load becomes 0
>> here; although 'tg_shares' is 102400, the shares of the se which
>> stands for group A on the root cfs_rq become 2.
>
> Argh, because A->cfs_rq.load.weight is B->se.load.weight which is
> B->shares/nr_cpus.
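
To put rough numbers on it (a sketch, assuming a 64-bit kernel where
scale_load_down() shifts right by SCHED_FIXEDPOINT_SHIFT = 10, and a
hypothetical 40-CPU machine with B's load spread evenly):

	B's tg->shares        = scale_load(2)  = 2048
	B->se.load.weight    ~= 2048 / 40      = 51
	A->cfs_rq.load.weight = 51
	scale_load_down(51)   = 51 >> 10       = 0

So load = max(0, cfs_rq->avg.load_avg) is also ~0, the multiply
(tg_shares * load) gives 0, and the final clamp in calc_group_shares()
brings the result back up to MIN_SHARES = 2.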
>
>> Meanwhile, the weight of the se of D on the root cfs_rq is far bigger
>> than 2, so it wins the battle.
>>
>> This patch adds a check for zero load and raises it to MIN_SHARES
>> to fix the nonsense shares; with it applied, group C wins as
>> expected.
>>
>> Signed-off-by: Michael Wang <yun.wang@xxxxxxxxxxxxxxxxx>
>> ---
>> kernel/sched/fair.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 84594f8aeaf8..53d705f75fa4 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -3182,6 +3182,8 @@ static long calc_group_shares(struct cfs_rq *cfs_rq)
>>  	tg_shares = READ_ONCE(tg->shares);
>>
>>  	load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
>> +	if (!load && cfs_rq->load.weight)
>> +		load = MIN_SHARES;
>>
>>  	tg_weight = atomic_long_read(&tg->load_avg);
>
> Yeah, I suppose that'll do. Hurmph, wants a comment though.
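
(If we do keep that approach, the comment could just note that a
non-empty cfs_rq must not have its weight rounded down to zero;
something like this, perhaps:)

	load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
	/*
	 * A non-empty cfs_rq can have its weight scaled all the way
	 * down to 0; keep at least MIN_SHARES so the group se does
	 * not lose its entitlement entirely.
	 */
	if (!load && cfs_rq->load.weight)
		load = MIN_SHARES;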
>
> But that has me looking at other users of scale_load_down(), and doesn't
> at least update_tg_cfs_load() suffer the same problem?

I think instead we should probably scale_load_down(tg_shares) and
scale_load(load_avg). tg->shares is always stored as a scaled integer,
so just moving where the scaling happens in the multiply should do the
job without losing precision.

ie

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fcc968669aea..6d7a9d72d742 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3179,9 +3179,9 @@ static long calc_group_shares(struct cfs_rq *cfs_rq)
 	long tg_weight, tg_shares, load, shares;
 	struct task_group *tg = cfs_rq->tg;
 
-	tg_shares = READ_ONCE(tg->shares);
+	tg_shares = scale_load_down(READ_ONCE(tg->shares));
 
-	load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
+	load = max(cfs_rq->load.weight, scale_load(cfs_rq->avg.load_avg));
 
 	tg_weight = atomic_long_read(&tg->load_avg);
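
Using the same hypothetical numbers as above: tg->shares = 102400 is
stored as 102400 << 10, and since the stored value is always a multiple
of 1024, scale_load_down() on it is lossless:

	tg_shares = scale_load_down(102400 << 10)  = 102400
	load      = max(51, scale_load(load_avg)) >= 51

so the numerator of (tg_shares * load) / tg_weight no longer truncates
to zero, and A's group se keeps a sensible weight.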