Re: [PATCH v8 -tip 06/26] sched: Add core wide task selection and scheduling.
From: Joel Fernandes
Date: Thu Nov 05 2020 - 13:50:28 EST
On Mon, Oct 26, 2020 at 10:31:31AM +0100, Peter Zijlstra wrote:
> On Fri, Oct 23, 2020 at 05:31:18PM -0400, Joel Fernandes wrote:
> > On Fri, Oct 23, 2020 at 09:26:54PM +0200, Peter Zijlstra wrote:
>
> > > How about this then?
> >
> > This does look better. It makes sense and I think it will work. I will look
> > more into it and also test it.
>
> Hummm... Looking at it again I wonder if I can make something like the
> below work.
>
> (depends on the next patch that pulls core_forceidle into core-wide
> state)
>
> That would retain the CFS-cgroup optimization as well, for as long as
> there's no cookies around.
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4691,8 +4691,6 @@ pick_next_task(struct rq *rq, struct tas
> return next;
> }
>
> - put_prev_task_balance(rq, prev, rf);
> -
> smt_mask = cpu_smt_mask(cpu);
>
> /*
> @@ -4707,14 +4705,25 @@ pick_next_task(struct rq *rq, struct tas
> */
> rq->core->core_task_seq++;
> need_sync = !!rq->core->core_cookie;
> -
> - /* reset state */
> -reset:
> - rq->core->core_cookie = 0UL;
> if (rq->core->core_forceidle) {
> need_sync = true;
> rq->core->core_forceidle = false;
> }
> +
> + if (!need_sync) {
> + next = __pick_next_task(rq, prev, rf);
This could end up triggering pick_next_task_fair()'s newidle balancing;
> + if (!next->core_cookie) {
> + rq->core_pick = NULL;
> + return next;
> + }
.. only to realize here that the task pick_next_task_fair() returned has
a cookie, so we have to put_prev it back, but the effect of the newidle
balancing cannot be reverted.
Would that be a problem, since the newly pulled task might be incompatible
and it would have been better to leave it alone?
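To make the concern concrete, here is a toy userspace sketch (not kernel
code; all the names such as toy_rq, toy_pick and newidle_pull are invented
for illustration). It only models the shape of the flow: the balancing side
effect of the pick survives even after the pick itself is put back.

/*
 * Toy userspace model of the above concern -- NOT kernel code. All
 * names (toy_rq, toy_task, toy_pick, newidle_pull, ...) are invented
 * for illustration; they only mimic the shape of the flow.
 */
#include <stdio.h>

struct toy_task {
	const char *name;
	unsigned long core_cookie;
};

struct toy_rq {
	struct toy_task *queued[4];
	int nr_queued;
	struct toy_task *picked;
};

/* Models newidle balancing: irreversibly moves a task from 'other'. */
static void newidle_pull(struct toy_rq *rq, struct toy_rq *other)
{
	if (rq->nr_queued == 0 && other->nr_queued > 0) {
		rq->queued[rq->nr_queued++] = other->queued[--other->nr_queued];
		printf("pulled %s (cannot be undone)\n",
		       rq->queued[rq->nr_queued - 1]->name);
	}
}

/* Models __pick_next_task()/pick_next_task_fair(): may pull first. */
static struct toy_task *toy_pick(struct toy_rq *rq, struct toy_rq *other)
{
	newidle_pull(rq, other);	/* the side effect happens here */
	rq->picked = rq->nr_queued ? rq->queued[rq->nr_queued - 1] : NULL;
	return rq->picked;
}

/* Models put_prev_task(): undoes the pick, but not the pull. */
static void toy_put_prev(struct toy_rq *rq)
{
	rq->picked = NULL;
}

int main(void)
{
	struct toy_task cookied = { "cookied-task", 0xdeadUL };
	struct toy_rq self = { .nr_queued = 0 };
	struct toy_rq sibling = { .queued = { &cookied }, .nr_queued = 1 };

	struct toy_task *next = toy_pick(&self, &sibling);

	if (next && next->core_cookie) {
		/* Fast path does not apply: put the pick back ... */
		toy_put_prev(&self);
		/* ... but the task pulled by newidle_pull() stays here. */
	}
	printf("self.nr_queued = %d (pulled task stayed)\n", self.nr_queued);
	return 0;
}

Running this prints that 'self' still holds the pulled task after the
put_prev, which is what I mean by the pull not being revertible.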
TBH, this is a drastic change, and we've done a lot of testing with the
current code and it's looking good. I'm a little scared of changing it right
now and introducing a regression. Can we maybe do this after the existing
patches are upstream?
thanks,
- Joel
> + put_prev_task(rq, next);
> + need_sync = true;
> + } else {
> + put_prev_task_balance(rq, prev, rf);
> + }
> +
> + /* reset state */
> + rq->core->core_cookie = 0UL;
> for_each_cpu(i, smt_mask) {
> struct rq *rq_i = cpu_rq(i);
>
> @@ -4744,35 +4752,8 @@ pick_next_task(struct rq *rq, struct tas
> * core.
> */
> p = pick_task(rq_i, class, max);
> - if (!p) {
> - /*
> - * If there weren't no cookies; we don't need to
> - * bother with the other siblings.
> - */
> - if (i == cpu && !need_sync)
> - goto next_class;
> -
> + if (!p)
> continue;
> - }
> -
> - /*
> - * Optimize the 'normal' case where there aren't any
> - * cookies and we don't need to sync up.
> - */
> - if (i == cpu && !need_sync) {
> - if (p->core_cookie) {
> - /*
> - * This optimization is only valid as
> - * long as there are no cookies
> - * involved.
> - */
> - need_sync = true;
> - goto reset;
> - }
> -
> - next = p;
> - goto done;
> - }
>
> rq_i->core_pick = p;
>