Re: [PATCH 2/2] sched/fair: Skip detach and attach load avgs for new group task
From: Yuyang Du
Date: Mon May 30 2016 - 05:31:01 EST
Hi Vincent,
On Fri, May 27, 2016 at 03:37:11PM +0200, Vincent Guittot wrote:
> On 26 May 2016 at 21:44, Yuyang Du <yuyang.du@xxxxxxxxx> wrote:
> > Hi Vincent,
> >
> > On Thu, May 26, 2016 at 01:50:56PM +0200, Vincent Guittot wrote:
> >> On 26 May 2016 at 03:14, Yuyang Du <yuyang.du@xxxxxxxxx> wrote:
> >> > Vincent reported that the first task moved to a new task group's cfs_rq
> >> > will be attached in attach_task_cfs_rq() and then once more when it is
> >> > enqueued (see https://lkml.org/lkml/2016/5/25/388).
> >> >
> >> > Actually, it is worse: attach_task_cfs_rq() is called for a new task
> >> > even before init_entity_runnable_average().
> >> >
> >> > Solve this by avoiding attaching as well as detaching the new task's
> >> > sched avgs in task_move_group_fair(). To do that, we need to know
> >> > whether the task is newly forked, so we pass this info all the way
> >> > from sched_move_task() to attach_task_cfs_rq().
> >>
> >> I'm not sure that this is the right way to solve the problem, because
> >> you continue to attach the task twice without detaching it in the
> >> meantime:
> >> - once during the copy of the process in cpu_cgroup_fork() (you skip
> >> the attach of the load average, but the task is still attached to the
> >> local cpu)
> >
> > Sorry, what part of the task is still attached, and how? You mean the
> > vruntime thingy? But the load/util avgs are not.
>
> Yes, that's it. The sequence still looks weird IMHO: the detach is
> called for a newly forked task that is not fully initialized and has
> not been attached yet.
> IIUC the fork sequence, we only need to set the group at this point, so
> you can skip detach/attach_task_cfs_rq() completely, not only
> detach/attach_entity_load_avg().
Ok, I previously didn't touch the vruntime part, because I'm not entirely
familiar with it (and never attempted to be).

Avoiding the attach/detach of a newly forked task entirely indeed makes
sense, but I am not sure. Peter, Byungchul?
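
A rough sketch of what I understand you are suggesting is below. It is
completely untested, and it assumes the fork information this patch
already threads down from sched_move_task() is made available to
task_move_group_fair() as a "fork" argument:

static void task_move_group_fair(struct task_struct *p, bool fork)
{
	/*
	 * A newly forked task has never been attached to its old
	 * cfs_rq, so there is nothing to detach and nothing to
	 * re-attach here: switching the group is enough, and the
	 * task will be attached once when it is enqueued for the
	 * first time.
	 */
	if (fork) {
		set_task_rq(p, task_cpu(p));
		return;
	}

	detach_task_cfs_rq(p);
	set_task_rq(p, task_cpu(p));

#ifdef CONFIG_SMP
	/* Tell se's cfs_rq has been changed -- migrated */
	p->se.avg.last_update_time = 0;
#endif

	attach_task_cfs_rq(p);
}

This would skip the vruntime adjustment in detach/attach_task_cfs_rq()
for the fork case as well, and I cannot tell whether that is safe.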
Thanks,
Yuyang