Re: [PATCH] sched: fix infinity loop in update_blocked_averages

From: Linus Torvalds
Date: Thu Dec 27 2018 - 13:15:48 EST


On Thu, Dec 27, 2018 at 9:02 AM Vincent Guittot
<vincent.guittot@xxxxxxxxxx> wrote:
>
> In the original behavior, the cfs_rq was removed from the list only
> when the cgroup was removed.
> Patch a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance
> path") added an optimization which removes the cfs_rq from the list
> when there is no blocked load left to update, but it introduced a
> race condition that creates this infinite loop. This patch fixes the
> problem by removing the optimization.
> I will look at re-adding the optimization once I have a fix for
> the race condition.

Hmm. What's the race? We seem to take the rq lock for all the cases,
but maybe I'm missing something?

That commit a9e7f6544b9c is a year and a half old, so why did this
start being reported only now?

[ goes off and looks ]

Oh. unthrottle_cfs_rq -> enqueue_entity -> list_add_leaf_cfs_rq()
doesn't actually seem to hold the rq lock at all. It's just called
under an RCU read lock.

So it all seems to depend on that "on_list" flag for exclusion, which
seems fundamentally racy, since it's not protected by a lock.
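
From memory, list_add_leaf_cfs_rq() is basically a plain
check-then-set on that flag. Heavily simplified sketch, not the
actual fair.c code (the real thing also plays the parent/child
ordering games for the leaf list):

    static void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
    {
            if (cfs_rq->on_list)
                    return;         /* assumes nobody ever clears it */

            cfs_rq->on_list = 1;    /* no rq lock held on this path */
            list_add_rcu(&cfs_rq->leaf_cfs_rq_list,
                         &rq_of(cfs_rq)->leaf_cfs_rq_list);
    }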

So yeah, the whole logic seems to depend on "on_list is sticky and
stays set until the whole task group is destroyed".

So commit a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance
path") would appear to be entirely wrong, because on_list isn't
actually protected by a lock, and that can confuse things.
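
For reference, the part that commit added to update_blocked_averages()
was (again a rough sketch from memory, not the literal diff) pruning
fully decayed entries while walking the leaf list under the rq lock:

    for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos) {
            /* ... update the blocked load of this cfs_rq ... */

            /* the "optimization": prune fully decayed cfs_rqs */
            if (cfs_rq_is_decayed(cfs_rq))
                    list_del_leaf_cfs_rq(cfs_rq);   /* clears on_list */
    }

So the list_del done under the rq lock there can race against a
list_add_leaf_cfs_rq() that only holds the RCU read lock, and
"on_list" stops being sticky.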

But that still makes me go "how come is this only being noticed 18
months after the fact"?

So I'm probably still missing something.

Tejun? PeterZ? Tell me why I'm being dense.

Linus