Re: [PATCH 1/2] sched/fair: Fix how load gets propagated from cfs_rq to its sched_entity
From: Tejun Heo
Date: Tue Apr 25 2017 - 14:12:49 EST
Hello,
On Tue, Apr 25, 2017 at 10:35:53AM +0200, Vincent Guittot wrote:
> not sure I follow your example:
> a task TA with load_avg = 1 is the only task in a task group GB, so
> the cfs_rq's load_avg = 1 too, and the group_entity of this cfs_rq
> has a weight of 1024 (I use the 10-bit format for readability), which
> is the total shares of task group GB
The group_entity (the sched_entity corresponding to the cfs_rq) should
behave as if it were a task with a weight of 1024.
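For reference, the PELT arithmetic behind this: the load_avg of an
entity which stays runnable converges to its weight, roughly

  load_avg ~= weight * (runnable time / elapsed time)

so a group entity with an effective weight of 1024 whose cfs_rq is
always busy should converge to load_avg ~= 1024, regardless of the
load_avg of the tasks inside it.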
> Are you saying that the group_entity's load_avg should be around 1024 and not 1?
Yes.
> I would say it depends on TA's weight. I assume that TA's weight is
> the default value (1024) as you don't specify any value in your example
Please consider the following configuration, where GA and GB are
group entities, and TA and TB are tasks.
ROOT - GA (weight 1024) - TA (weight 1)
     \ GB (weight 1   ) - TB (weight 1)
Let's say both TA and TB are running full-tilt. Now let's take out GA
and GB.
ROOT - TA1 (weight 1024)
     \ TB1 (weight 1   )
GA should behave the same as TA1, and GB the same as TB1. GA's load
should match TA1's; what GA's load looks like from ROOT's cfs_rq has
nothing to do with how much total absolute weight it contains
internally.
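Plugging in numbers (using the convergence above):

  TA1: weight 1024, always runnable -> load_avg ~= 1024
  TB1: weight 1,    always runnable -> load_avg ~= 1

GA carries 100% of its group's load, so its effective weight is the
full 1024 shares and its load_avg should converge to ~1024, matching
TA1; likewise GB should converge to ~1, matching TB1.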
ROOT - GA2 (weight 1024) - TA2 (weight 1   )
     \ GB2 (weight 1   ) - TB2 (weight 1024)
If TA2 and TB2 are constantly running, GA2's and GB2's loads in ROOT's
cfs_rq should match GA's and GB's, respectively.
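Again with numbers: GA2's cfs_rq sees a load_avg of 1 (TA2) while
GB2's sees 1024 (TB2), yet each entity carries 100% of its group's
load, so

  GA2: effective weight 1024 -> load_avg ~= 1024 (same as GA)
  GB2: effective weight 1    -> load_avg ~= 1    (same as GB)

The absolute weights inside the groups cancel out entirely.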
> If TA directly runs at the parent level, its sched_entity would have
> a load_avg of 1, so why should the group entity's load_avg be 1024? it
Because then the hierarchical weight configuration doesn't mean
anything.
> will just temporarily show the cfs_rq as more loaded than it really
> is and in the end the group entity's load_avg will go back to 1
It's not temporary. The weight of a group entity is the group's
configured shares scaled by the fraction of the group's load that sits
on its cfs_rq. Assuming UP, if you configure a group to a weight of
1024 and have any task running full-tilt in it, the group will
converge to a load of 1024. The problem is that the propagation logic
is currently doing something completely different and temporarily
pushes down the load whenever it triggers.
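A minimal sketch of that relation, with made-up names (this isn't the
fair.c code, just the arithmetic being argued for):

/*
 * A group entity's effective weight is the group's configured shares
 * scaled by this cfs_rq's fraction of the group's total load.
 */
unsigned long group_entity_weight(unsigned long tg_shares,
                                  unsigned long cfs_rq_load,
                                  unsigned long tg_load)
{
        if (!tg_load)
                return tg_shares;
        return tg_shares * cfs_rq_load / tg_load;
}

On UP, a lone full-tilt task means cfs_rq_load == tg_load, so the
entity gets the full shares (1024 here) whether the task's own weight
is 1 or 1024, and its load_avg then converges to that weight.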
Thanks.
--
tejun