Re: [PATCH 1/1] sched/fair: Fix unfairness caused by missing load decay
From: Vincent Guittot
Date: Tue Apr 27 2021 - 10:26:25 EST
On Sunday, 25 April 2021 at 10:09:02 (+0200), Odin Ugedal wrote:
> This fixes an issue where old load on a cfs_rq is not properly decayed,
> resulting in strange behavior where fairness can decrease drastically.
> Real workloads with equally weighted control groups have ended up
> getting 99% and 1%(!!) of the cpu time respectively.
>
> When an idle task is attached to a cfs_rq by attaching a pid to a cgroup,
> the old load of the task is attached to the new cfs_rq and sched_entity by
> attach_entity_cfs_rq. If the task is then moved to another cpu (and
> therefore another cfs_rq) before being enqueued/woken up, the load is
> removed from the sched_entity and moved to cfs_rq->removed. Such a move
> happens when enforcing a cpuset on the task (e.g. via a cgroup) that
> forces it to migrate.
It would be good to mention that the problem happens only if the new cfs_rq has
been removed from the leaf_cfs_rq_list because its PELT metrics were already
null. In that case, __update_blocked_fair() never updates the blocked load of
the new cfs_rq and never propagates the removed load up the hierarchy.
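For readers not familiar with that path: update_blocked_averages() ->
__update_blocked_fair() only walks the cfs_rqs that are on
rq->leaf_cfs_rq_list, and it is also the place where fully decayed cfs_rqs
get deleted from that list. Roughly (abridged from kernel/sched/fair.c,
error handling and some details elided, so double-check against the tree):

static bool __update_blocked_fair(struct rq *rq, bool *done)
{
        struct cfs_rq *cfs_rq, *pos;
        bool decayed = false;
        int cpu = cpu_of(rq);

        /* Only cfs_rqs on rq->leaf_cfs_rq_list are visited. */
        for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos) {
                struct sched_entity *se;

                /* Folds cfs_rq->removed into the PELT sums ... */
                if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
                        update_tg_load_avg(cfs_rq);
                        if (cfs_rq == &rq->cfs)
                                decayed = true;
                }

                /* ... and propagates pending changes to the parent. */
                se = cfs_rq->tg->se[cpu];
                if (se && !skip_blocked_update(se))
                        update_load_avg(cfs_rq_of(se), se, UPDATE_TG);

                /* Fully decayed cfs_rqs are dropped from the list here. */
                if (cfs_rq_is_decayed(cfs_rq))
                        list_del_leaf_cfs_rq(cfs_rq);
        }

        return decayed;
}

So once the new cfs_rq is off the list and no task ever runs on it, the stale
contribution sitting in cfs_rq->removed is never folded back into the
task_group.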
>
> The load will, however, not be removed from the task_group itself, making
> it look like there is a constant load on that cfs_rq. This causes the
> vruntime of tasks on other sibling cfs_rqs to increase faster than it
> should, causing severe fairness issues. If no other task is ever started
> on the given cfs_rq (and due to the cpuset none would be), this load
> would never be properly decayed. With this patch the load will be
> properly removed inside update_blocked_averages. The same applies to
> tasks switched to the fair scheduling class and then moved to another
> cpu; this patch fixes that path as well. For fork, the entity is queued
> right away, so this problem does not affect that case.
>
> For a simple cgroup hierarchy (as seen below) with two equally weighted
> groups, that in theory should get 50/50 of cpu time each, it often leads
> to a load of 60/40 or 70/30.
>
> parent/
>   cg-1/
>     cpu.weight: 100
>     cpuset.cpus: 1
>   cg-2/
>     cpu.weight: 100
>     cpuset.cpus: 1
>
> If the hierarchy is deeper (as seen below), while keeping cg-1 and cg-2
> equally weighted, they should still get a 50/50 split of cpu time. In
> practice, however, this sometimes results in a 10/90 or even 1/99(!!)
> split between the task groups.
>
> $ ps u -C stress
> USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> root     18568  1.1  0.0   3684   100 pts/12   R+   13:36   0:00 stress --cpu 1
> root     18580 99.3  0.0   3684   100 pts/12   R+   13:36   0:09 stress --cpu 1
>
> parent/
>   cg-1/
>     cpu.weight: 100
>     sub-group/
>       cpu.weight: 1
>       cpuset.cpus: 1
>   cg-2/
>     cpu.weight: 100
>     sub-group/
>       cpu.weight: 10000
>       cpuset.cpus: 1
>
> This can be reproduced by attaching an idle process to a cgroup and
> moving it to a given cpuset before it wakes up. The issue is evident in
> many (if not most) container runtimes, and has been reproduced
> with both crun and runc (and therefore docker and all its "derivatives"),
> and with both cgroup v1 and v2.
>
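FWIW, the race is easy to script. Below is a minimal repro sketch, assuming
the first cgroup v2 hierarchy above has already been created under
/sys/fs/cgroup with the cpuset controller enabled; the paths are illustrative
and error handling is kept minimal:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static void write_str(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0 || write(fd, val, strlen(val)) < 0)
                perror(path);
        if (fd >= 0)
                close(fd);
}

int main(void)
{
        char buf[16];
        pid_t pid = fork();

        if (pid == 0) {
                sleep(5);       /* stay idle while being attached */
                for (;;)        /* then spin so the imbalance becomes visible */
                        ;
        }

        snprintf(buf, sizeof(buf), "%d", pid);

        /* Attach the still-sleeping task to the group ... */
        write_str("/sys/fs/cgroup/parent/cg-1/cgroup.procs", buf);
        /* ... then enforce the cpuset before the task wakes up, forcing
         * the cross-cpu move described above. */
        write_str("/sys/fs/cgroup/parent/cg-1/cpuset.cpus", "1");

        return 0;
}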
> Fixes: 3d30544f0212 ("sched/fair: Apply more PELT fixes")
The Fixes tag should be:
Fixes: 039ae8bcf7a5 ("sched/fair: Fix O(nr_cgroups) in the load balancing path")
That commit re-introduced the removal of idle cfs_rqs from leaf_cfs_rq_list in
order to skip useless updates of blocked load.
> Signed-off-by: Odin Ugedal <odin@xxxxxxx>
> ---
> kernel/sched/fair.c | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 794c2cb945f8..ad7556f99b4a 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10916,6 +10916,19 @@ static void attach_task_cfs_rq(struct task_struct *p)
> 
>          if (!vruntime_normalized(p))
>                  se->vruntime += cfs_rq->min_vruntime;
> +
> +        /*
> +         * Make sure the attached load will decay properly
> +         * in case the task is moved to another cpu before
> +         * being queued.
> +         */
> +        if (!task_on_rq_queued(p)) {
> +                for_each_sched_entity(se) {
> +                        if (se->on_rq)
> +                                break;
> +                        list_add_leaf_cfs_rq(cfs_rq_of(se));
> +                }
> +        }
propagate_entity_cfs_rq() already walks up the tg tree to propagate the
attach/detach. It would be better to call list_add_leaf_cfs_rq(cfs_rq) inside
this function instead of walking the tg tree twice. Something like:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 33b1ee31ae0f..18441ce7316c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11026,10 +11026,10 @@ static void propagate_entity_cfs_rq(struct sched_entity *se)
         for_each_sched_entity(se) {
                 cfs_rq = cfs_rq_of(se);
 
-                if (cfs_rq_throttled(cfs_rq))
-                        break;
+                if (!cfs_rq_throttled(cfs_rq))
+                        update_load_avg(cfs_rq, se, UPDATE_TG);
 
-                update_load_avg(cfs_rq, se, UPDATE_TG);
+                list_add_leaf_cfs_rq(cfs_rq);
         }
 }
 #else
>  }
> 
>  static void switched_from_fair(struct rq *rq, struct task_struct *p)
> --
> 2.31.1
>