Re: [PATCH v6 1/4] sched/fair: Fix attaching task sched avgs twice when switching to fair or changing task group

From: Peter Zijlstra
Date: Fri Jun 17 2016 - 07:31:17 EST


On Thu, Jun 16, 2016 at 11:21:55PM +0200, Vincent Guittot wrote:
> Your proposal below looks good to me

> > ---
> >  kernel/sched/fair.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index f75930bdd326..5d8fa135bbc5 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -8349,6 +8349,7 @@ static void detach_task_cfs_rq(struct task_struct *p)
> >  {
> >  	struct sched_entity *se = &p->se;
> >  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
> > +	u64 now = cfs_rq_clock_task(cfs_rq);
> >
> >  	if (!vruntime_normalized(p)) {
> >  		/*
> > @@ -8360,6 +8361,7 @@ static void detach_task_cfs_rq(struct task_struct *p)
> >  	}
> >
> >  	/* Catch up with the cfs_rq and remove our load when we leave */
> > +	update_cfs_rq_load_avg(now, cfs_rq, false);
> >  	detach_entity_load_avg(cfs_rq, se);
> >  }
> >
> > @@ -8367,6 +8369,7 @@ static void attach_task_cfs_rq(struct task_struct *p)
> >  {
> >  	struct sched_entity *se = &p->se;
> >  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
> > +	u64 now = cfs_rq_clock_task(cfs_rq);
> >
> >  #ifdef CONFIG_FAIR_GROUP_SCHED
> >  	/*
> > @@ -8377,6 +8380,7 @@ static void attach_task_cfs_rq(struct task_struct *p)
> >  #endif
> >
> >  	/* Synchronize task with its cfs_rq */
> > +	update_cfs_rq_load_avg(now, cfs_rq, false);
> >  	attach_entity_load_avg(cfs_rq, se);
> >
> >  	if (!vruntime_normalized(p))

Should we also call update_tg_load_avg() when update_cfs_rq_load_avg()
returns true? Most other call sites seem to do that.
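
That is, something like the below (a sketch only, completely untested;
it reuses the 'now' and 'cfs_rq' locals from the hunks above, and
update_tg_load_avg(cfs_rq, 0) is the call the other sites make when
update_cfs_rq_load_avg() reports that the averages decayed):

	/* Catch up with the cfs_rq and remove our load when we leave */
	if (update_cfs_rq_load_avg(now, cfs_rq, false))
		update_tg_load_avg(cfs_rq, 0);

	detach_entity_load_avg(cfs_rq, se);

The same pattern would go in attach_task_cfs_rq(), before the
attach_entity_load_avg() call.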

Someone should document these things somewhere....