Re: [RFC PATCH v3 00/16] Core scheduling v3
From: Aaron Lu
Date: Fri Jul 19 2019 - 01:53:57 EST
On Thu, Jul 18, 2019 at 04:27:19PM -0700, Tim Chen wrote:
>
>
> On 7/18/19 3:07 AM, Aaron Lu wrote:
> > On Wed, Jun 19, 2019 at 02:33:02PM -0400, Julien Desfossez wrote:
>
> >
> > With the below patch on top of v3 that makes use of util_avg to decide
> > which task wins, I can do all 8 steps and the final scores of the 2
> > workloads are: 1796191 and 2199586. The score numbers are not close,
> > suggesting some unfairness, but I can finish the test now...
>
> Aaron,
>
> Do you still see high variance in terms of workload throughput that
> was a problem with the previous version?
Any suggestions on how to measure this?
It's not clear how Aubrey did his test; I'll need to take a look at
sysbench.
> >
> >
> > }
> > +
> > +bool cfs_prio_less(struct task_struct *a, struct task_struct *b)
> > +{
> > + struct sched_entity *sea = &a->se;
> > + struct sched_entity *seb = &b->se;
> > + bool samecore = task_cpu(a) == task_cpu(b);
>
>
> Probably "samecpu" instead of "samecore" will be more accurate.
> I think task_cpu(a) and task_cpu(b)
> can be different, but still belong to the same cpu core.
Right, definitely, my mistake; "samecpu" it is.
>
> > + struct task_struct *p;
> > + s64 delta;
> > +
> > + if (samecore) {
> > + /* vruntime is per cfs_rq */
> > + while (!is_same_group(sea, seb)) {
> > + int sea_depth = sea->depth;
> > + int seb_depth = seb->depth;
> > +
> > + if (sea_depth >= seb_depth)
>
> Should this be strictly ">" instead of ">=" ?
Same depth doesn't necessarily mean same group, while the purpose here
is to make sure they end up on the same cfs_rq. When they are at the
same depth but on different cfs_rqs, we will continue to go up till we
reach rq->cfs (there is a small sketch at the end of this mail
illustrating the walk).
>
> > + sea = parent_entity(sea);
> > + if (sea_depth <= seb_depth)
>
> Should use "<" ?
Ditto here.
When they are at the same depth but not on the same cfs_rq, both se
will move up.
> > + seb = parent_entity(seb);
> > + }
> > +
> > + delta = (s64)(sea->vruntime - seb->vruntime);
> > + }
> > +
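To make the ">="/"<=" choice concrete, here is a minimal userspace
sketch of the walk (toy_se and the toy parent_entity()/is_same_group()
are stand-ins I made up for illustration, not the kernel structures):
when the two entities are at the same depth but in different groups,
both conditions fire and both climb one level, so the loop always
terminates at a common cfs_rq.

#include <stdio.h>
#include <stdbool.h>

/* Toy stand-in for sched_entity: only the fields the walk touches. */
struct toy_se {
	int depth;		/* 0 == directly on rq->cfs */
	struct toy_se *parent;	/* group entity owning our cfs_rq, NULL at root */
	long long vruntime;
	const char *name;
};

static struct toy_se *parent_entity(struct toy_se *se)
{
	return se->parent;
}

static bool is_same_group(struct toy_se *a, struct toy_se *b)
{
	/* Same owning group entity (or both on rq->cfs) == same cfs_rq. */
	return a->parent == b->parent;
}

/* The walk from cfs_prio_less() above, modulo the toy types. */
static void find_matching_se(struct toy_se **sea, struct toy_se **seb)
{
	while (!is_same_group(*sea, *seb)) {
		int sea_depth = (*sea)->depth;
		int seb_depth = (*seb)->depth;

		/*
		 * ">=" and "<=", not ">" and "<": at equal depth but in
		 * different groups, both entities must move up one level,
		 * otherwise neither would and the loop would never exit.
		 */
		if (sea_depth >= seb_depth)
			*sea = parent_entity(*sea);
		if (sea_depth <= seb_depth)
			*seb = parent_entity(*seb);
	}
}

int main(void)
{
	/* Two group entities directly on rq->cfs (depth 0). */
	struct toy_se ga = { .depth = 0, .parent = NULL, .vruntime = 100, .name = "groupA" };
	struct toy_se gb = { .depth = 0, .parent = NULL, .vruntime = 200, .name = "groupB" };
	/* Two tasks at equal depth 1, but on different cfs_rqs. */
	struct toy_se ta = { .depth = 1, .parent = &ga, .vruntime = 50, .name = "taskA" };
	struct toy_se tb = { .depth = 1, .parent = &gb, .vruntime = 10, .name = "taskB" };

	struct toy_se *sea = &ta, *seb = &tb;
	find_matching_se(&sea, &seb);

	/* Both climbed to rq->cfs; vruntimes are now comparable. */
	printf("compare %s(%lld) vs %s(%lld), delta=%lld\n",
	       sea->name, sea->vruntime, seb->name, seb->vruntime,
	       sea->vruntime - seb->vruntime);
	return 0;
}

With taskA/taskB above, one iteration moves both to their depth-0 group
entities and the loop stops at rq->cfs, after which the (s64) vruntime
delta is meaningful.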
Thanks.