Re: [PATCH?] Livelock in pick_next_task_fair() / idle_balance()
From: Yuyang Du
Date: Mon Jul 06 2015 - 00:04:57 EST
Hi Morten,
On Fri, Jul 03, 2015 at 10:34:41AM +0100, Morten Rasmussen wrote:
> > > IOW, since task groups include blocked load in the load_avg_contrib (see
> > > __update_group_entity_contrib() and __update_cfs_rq_tg_load_contrib()) the
> > > imbalance includes blocked load and hence env->imbalance >=
> > > sum(task_h_load(p)) for all tasks p on the rq, which leads to
> > > detach_tasks() emptying the rq completely in the reported scenario
> > > where blocked load > runnable load.
> >
> > Whenever I want to know the load avg concerning task groups, I need to
> > walk through the complete code again, and I prefer not to do it this time.
> > But it is not that simple to say "the 118 comes from the blocked load".
>
> But the whole hierarchy of group entities is updated each time we enqueue
> or dequeue a task. I don't see how the group entity load_avg_contrib
> could be out of date. Why do you need to update it again?
>
> In any case, we have one task in the group hierarchy which has a
> load_avg_contrib of 0, and the great-grandparent group entity has a
> load_avg_contrib of 118 and no additional tasks. That load contribution
> must be from tasks which are no longer around on the rq, no?
load_avg_contrib has WEIGHT inside, so the most I can say is:
SE 8f456e00's load_avg_contrib:
  118 = (its cfs_rq's runnable + blocked) / (tg->load_avg + 1) * tg->shares
tg->shares is probably 1024 (at least 911), so we are just left with:
  cfs_rq load / tg load ~= 11.5%
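To make the ratio concrete with made-up numbers (only the 118 and the
assumed shares of 1024 appear in this thread; the cfs_rq and tg loads
below are purely illustrative):

	/* purely illustrative values, not taken from the report */
	cfs_rq load (runnable + blocked) = 115
	tg->load_avg                     = 999
	contrib = 115 * 1024 / (999 + 1) = 117.76 ~= 118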
I myself did question the sudden jump from 0 to 118 (see a previous reply).
But anyway, this really is irrelevant to the discussion.
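For reference, the emptying scenario above plays out in the detach_tasks()
loop in kernel/sched/fair.c; a simplified sketch of the relevant part (not
the exact upstream code):

	/*
	 * Simplified sketch of detach_tasks() (kernel/sched/fair.c);
	 * not the exact upstream code.
	 */
	while (!list_empty(tasks)) {
		p = list_first_entry(tasks, struct task_struct, se.group_node);
		load = task_h_load(p);
		detach_task(p, env);
		env->imbalance -= load;
		/*
		 * If env->imbalance was computed from group load that
		 * includes blocked load, sum(task_h_load(p)) over the
		 * runnable tasks can stay below it, this break is never
		 * taken, and the rq is drained completely.
		 */
		if (env->imbalance <= 0)
			break;
	}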
> > Anyway, with blocked load, yes, we definitely can't move (or even find)
> > some amount of the imbalance if we only look at the tasks on the queue.
> > But this may or may not be a problem.
> >
> > Firstly, the question comes down to whether we want blocked load
> > anywhere. This is just a "now vs. average" question.
>
> That is what I meant in the paragraph below. It is a scheduling policy
> question.
>
> > Secondly, if we stick to the average, we just need to treat the blocked
> > load consistently: not that a group SE has it but a task SE does not, or
> > that some places have it and others do not.
>
> I agree that inconsistent use of blocked load will lead us into trouble.
> The problem is that none of the load-balance logic was designed for
> blocked load. It was written to deal with load that is currently on the
> rq, slightly biased by the average cpu load, not with the load
> contribution of tasks which we can't migrate at the moment because they
> are blocked. The load-balance code has to be updated to deal with
> blocked load; we will run into all sorts of issues if we roll out use of
> blocked load everywhere without doing so.
>
> However, before we can rework the load-balance code, we have to agree on
> the now vs average balance policy.
>
> Your proposed patch implements a policy somewhere in between. We try to
> balance based on the average, but we don't allow idle_balance() to empty
> a cpu completely. A pure average balance policy would allow a cpu to go
> idle even if we do have tasks waiting on other rqs, if the blocked load
> indicates that other tasks will show up shortly on the cpu. A pure "now"
> balance would balance based on runnable_load_avg for all entities,
> including groups, ignoring all blocked load, but that goes against the
> PELT group balancing design.
>
> I'm not against having a policy that sits somewhere in between, we just
> have to agree it is the right policy and clean up the load-balance code
> such that the implemented policy is clear.
The proposed patch sits in between? I agree, but I would rather see it
from another perspective.
First, I don't think it amounts to a solution/policy. It is just a cheap
"last guard" to protect the "king": no crash.
Second, a "pure average" policy is fine in general, but that does not mean
we should simply allow a CPU to be pulled empty, because we are making a
bet based on a prediction (the average), and a prediction by its nature
sometimes fails. Since such a failure could lead to a disaster, it is
reasonable to add some "damage control" without blaming the prediction
itself.
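Concretely, the "last guard" amounts to something like the following check
in detach_tasks() (a sketch of the idea, not necessarily the exact posted
patch):

	/*
	 * Sketch of the "last guard" idea: during an idle balance,
	 * never detach the last runnable task from the source rq,
	 * even if the (blocked-load-inflated) imbalance says we can.
	 */
	if (env->idle != CPU_NOT_IDLE && env->src_rq->nr_running <= 1)
		break;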
Thanks,
Yuyang