Re: [PATCH v2] sched/fair: enqueue_task_fair optimization
From: Phil Auld
Date: Wed May 13 2020 - 09:13:52 EST
On Wed, May 13, 2020 at 03:10:28PM +0200 Vincent Guittot wrote:
> On Wed, 13 May 2020 at 14:45, Phil Auld <pauld@xxxxxxxxxx> wrote:
> >
> > Hi Vincent,
> >
> > On Wed, May 13, 2020 at 02:33:35PM +0200 Vincent Guittot wrote:
> > > enqueue_task_fair jumps to enqueue_throttle label when cfs_rq_of(se) is
> > > throttled which means that se can't be NULL and we can skip the test.
> > >
> >
> > s/be NULL/be non-NULL/
> >
> > I think.
>
> This sentence refers to the move of enqueue_throttle and the fact that
> se can't be NULL when we goto enqueue_throttle, so we can jump directly
> past the if statement. That if statement is now removed in v2 because
> se is always NULL when we don't take the goto enqueue_throttle path.
>
> I haven't changed the commit message for the removal of the if statement.
>
Fair enough, it just seems backwards from the intent of the patch now.
There is also an extra } after the update_overutilized_status() call.
Cheers,
Phil
> >
> > It's more that, if it doesn't jump to the label, se must be NULL for
> > the loop to terminate. The final loop is a NOP when se is NULL; the
> > check wasn't protecting against that.
> >
> > Otherwise still
> >
> > > Reviewed-by: Phil Auld <pauld@xxxxxxxxxx>
> >
> > Cheers,
> > Phil
> >
> >
> > > Signed-off-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
> > > ---
> > >
> > > v2 changes:
> > > - Remove useless if statement
> > >
> > > kernel/sched/fair.c | 39 ++++++++++++++++++++-------------------
> > > 1 file changed, 20 insertions(+), 19 deletions(-)
> > >
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index a0c690d57430..b51b12d63c39 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -5513,28 +5513,29 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> > > list_add_leaf_cfs_rq(cfs_rq);
> > > }
> > >
> > > -enqueue_throttle:
> > > - if (!se) {
> > > - add_nr_running(rq, 1);
> > > - /*
> > > - * Since new tasks are assigned an initial util_avg equal to
> > > - * half of the spare capacity of their CPU, tiny tasks have the
> > > - * ability to cross the overutilized threshold, which will
> > > - * result in the load balancer ruining all the task placement
> > > - * done by EAS. As a way to mitigate that effect, do not account
> > > - * for the first enqueue operation of new tasks during the
> > > - * overutilized flag detection.
> > > - *
> > > - * A better way of solving this problem would be to wait for
> > > - * the PELT signals of tasks to converge before taking them
> > > - * into account, but that is not straightforward to implement,
> > > - * and the following generally works well enough in practice.
> > > - */
> > > - if (flags & ENQUEUE_WAKEUP)
> > > - update_overutilized_status(rq);
> > > + /* At this point se is NULL and we are at root level*/
> > > + add_nr_running(rq, 1);
> > > +
> > > + /*
> > > + * Since new tasks are assigned an initial util_avg equal to
> > > + * half of the spare capacity of their CPU, tiny tasks have the
> > > + * ability to cross the overutilized threshold, which will
> > > + * result in the load balancer ruining all the task placement
> > > + * done by EAS. As a way to mitigate that effect, do not account
> > > + * for the first enqueue operation of new tasks during the
> > > + * overutilized flag detection.
> > > + *
> > > + * A better way of solving this problem would be to wait for
> > > + * the PELT signals of tasks to converge before taking them
> > > + * into account, but that is not straightforward to implement,
> > > + * and the following generally works well enough in practice.
> > > + */
> > > + if (flags & ENQUEUE_WAKEUP)
> > > + update_overutilized_status(rq);
> > >
> > > }
> > >
> > > +enqueue_throttle:
> > > if (cfs_bandwidth_used()) {
> > > /*
> > > * When bandwidth control is enabled; the cfs_rq_throttled()
> > > --
> > > 2.17.1
> > >
> >
> > --
> >
>
--