Re: [PATCH updated] sched/fair: core wide cfs task priority comparison

From: Aaron Lu
Date: Mon Apr 20 2020 - 22:51:43 EST


On Mon, Apr 20, 2020 at 06:26:34PM -0400, Vineeth Remanan Pillai wrote:
> On Mon, Apr 20, 2020 at 4:08 AM Aaron Lu <aaron.lwe@xxxxxxxxx> wrote:
> >
> > On Fri, Apr 17, 2020 at 05:40:45PM +0800, Aaron Lu wrote:
>
> > The adjustment is only needed when core scheduling is enabled, but I
> > mistakenly called it on both enable and disable. And I've come to
> > think normalize is a better name than adjust.
> >
> I guess we would also need to update the min_vruntime of the sibling
> to match the rq->core->min_vruntime on coresched disable. Otherwise
> a new enqueue on root cfs of the sibling would inherit the very old
> min_vruntime before coresched enable and thus would starve all the
> already queued tasks until the newly enqueued se's vruntime catches up.

Yes, this is a concern, but AFAICS there is no problem. Consider:
- when there is no queued task across the disable boundary, the stale
min_vruntime doesn't matter, as you said;
- when there are queued tasks across the disable boundary, the newly
queued task will normalize its vruntime against the sibling_cfs_rq's
min_vruntime; if that min_vruntime were still stale, a problem would
indeed occur. But my reading of the code is that this min_vruntime
will already have been updated by update_curr() in enqueue_entity()
before the newly enqueued task uses it, and update_curr() brings the
stale min_vruntime up to the smallest vruntime of the queued tasks, so
again, no problem should occur.
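
To make the ordering I'm relying on concrete, here is a small userspace
toy model. It is only a sketch: the toy_* names are made up for
illustration and this is not the actual fair.c code, just a paraphrase
of the order in which enqueue_entity()/update_curr() refresh
min_vruntime before the new entity consumes it:

#include <stdio.h>

struct toy_se {
	unsigned long long vruntime;
};

struct toy_cfs_rq {
	unsigned long long min_vruntime;	/* possibly stale */
	struct toy_se *queued[8];		/* already queued entities */
	int nr_queued;
};

/* Plays the role of update_min_vruntime(): only ever moves forward. */
static void toy_update_min_vruntime(struct toy_cfs_rq *cfs_rq)
{
	unsigned long long v;
	int i;

	if (!cfs_rq->nr_queued)
		return;		/* nothing queued: keep the old value */

	v = cfs_rq->queued[0]->vruntime;
	for (i = 1; i < cfs_rq->nr_queued; i++)
		if (cfs_rq->queued[i]->vruntime < v)
			v = cfs_rq->queued[i]->vruntime;

	/* never let min_vruntime go backwards */
	if (v > cfs_rq->min_vruntime)
		cfs_rq->min_vruntime = v;
}

/* Plays the role of enqueue_entity(): refresh min_vruntime, then use it. */
static void toy_enqueue(struct toy_cfs_rq *cfs_rq, struct toy_se *se)
{
	/* step 1: what update_curr() does at the top of enqueue_entity() */
	toy_update_min_vruntime(cfs_rq);

	/* step 2: only now is min_vruntime consumed for the new entity */
	se->vruntime += cfs_rq->min_vruntime;

	cfs_rq->queued[cfs_rq->nr_queued++] = se;
}

int main(void)
{
	struct toy_se queued = { .vruntime = 5000 };	/* queued across disable */
	struct toy_se newse  = { .vruntime = 0 };	/* newly enqueued, relative vruntime */
	struct toy_cfs_rq rq = {
		.min_vruntime = 100,	/* stale value from before coresched enable */
		.queued = { &queued },
		.nr_queued = 1,
	};

	toy_enqueue(&rq, &newse);

	/* min_vruntime was pulled up to 5000 before newse used it */
	printf("min_vruntime=%llu newse.vruntime=%llu\n",
	       rq.min_vruntime, newse.vruntime);
	return 0;
}

With the stale min_vruntime of 100, the new entity still ends up placed
at the queued task's level rather than far behind it, which is why I
don't expect the starvation you describe.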

I did a simple test locally before sending the patch out and didn't
find any problem, but maybe I failed to hit the race window. Let me
know if I misunderstood something.

> Other than that, I think the patch looks good. We haven't tested it
> yet. Will do a round of testing and let you know soon.

Thanks.