Re: [RFC PATCH v3 00/16] Core scheduling v3

From: Aaron Lu
Date: Tue Aug 06 2019 - 09:50:17 EST


On Tue, Aug 06, 2019 at 08:24:17AM -0400, Vineeth Remanan Pillai wrote:
> > >
> > > I am also thinking of a way to make fairness per-cookie per-core; is
> > > this what you want to propose?
> >
> > Yes, that's what I meant.
>
> I think that would hurt some kinds of workloads badly, especially if one
> tenant has way more tasks than the other. The tenant with more tasks on the
> same core might have more immediate requirements from some threads than the
> other, and we would fail to take that into account. With some hierarchical
> management we could alleviate this, but as Aaron said, it would be a bit
> messy.

I think each tenant would have a per-core weight, similar to a sched
entity's per-CPU weight. The tenant's per-core weight could be derived
from its corresponding task group's per-CPU sched entities' weights
(summed up, perhaps). A tenant with a higher weight would have its
core-wide vruntime advance more slowly than a tenant with a lower
weight. Does this address the issue here?
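
To make this concrete, below is a minimal user-space sketch of the
scaling I have in mind. Everything in it (struct tenant_core, the
helper names, the demo weights) is made up for illustration and is not
from the posted patches; it only mirrors how calc_delta_fair() scales a
task's vruntime by its weight in CFS.

#include <stdint.h>
#include <stdio.h>

#define NICE_0_LOAD 1024UL	/* weight of a nice-0 task, as in CFS */

struct tenant_core {
	unsigned long weight;	/* per-core weight of this tenant */
	uint64_t core_vruntime;	/* core-wide vruntime of this tenant */
};

/* Derive the per-core weight by summing the tenant's per-CPU sched
 * entities' weights across the SMT siblings of the core. */
static void tenant_update_weight(struct tenant_core *tc,
				 const unsigned long *se_weight, int nr_smt)
{
	unsigned long sum = 0;

	for (int i = 0; i < nr_smt; i++)
		sum += se_weight[i];
	/* Fall back to a nice-0 weight to avoid dividing by zero when
	 * the tenant has no runnable tasks on this core. */
	tc->weight = sum ? sum : NICE_0_LOAD;
}

/* Advance the core-wide vruntime inversely to the weight: a tenant
 * with twice the weight sees its vruntime advance half as fast. */
static void tenant_advance(struct tenant_core *tc, uint64_t delta_exec_ns)
{
	tc->core_vruntime += delta_exec_ns * NICE_0_LOAD / tc->weight;
}

int main(void)
{
	struct tenant_core heavy = { 0 }, light = { 0 };
	unsigned long heavy_se[2] = { 2048, 2048 };	/* two busy siblings */
	unsigned long light_se[2] = { 1024, 0 };	/* one nice-0 task */

	tenant_update_weight(&heavy, heavy_se, 2);
	tenant_update_weight(&light, light_se, 2);

	/* Both tenants run for 1ms on the core. */
	tenant_advance(&heavy, 1000000);
	tenant_advance(&light, 1000000);

	printf("heavy: %llu, light: %llu\n",
	       (unsigned long long)heavy.core_vruntime,
	       (unsigned long long)light.core_vruntime);
	return 0;
}

After the same 1ms of execution, the heavy tenant's core-wide vruntime
is 250000 against the light tenant's 1000000, so a core-wide pick based
on vruntime would keep preferring the heavy tenant until the two even
out.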

> Peter's rebalance logic actually takes care of most of the runqueue
> imbalance caused by cookie tagging. What we have found from our testing
> is that the fairness issue is mostly caused by a hyperthread going idle
> and not waking up. Aaron's 3rd patch works around that. As Julien
> mentioned, we are working on a per-thread coresched idle thread
> concept. The problem we found was that the idle thread causes
> accounting and wakeup issues, as it was not designed to be used in this
> context. So if we can have a low-priority thread which looks like any
> other task to the scheduler, things become easier for the scheduler and
> we achieve security as well. Please share your thoughts on this idea.

Care to elaborate on the coresched idle thread concept?
How does it solve the hyperthread-going-idle problem, and what are the
accounting and wakeup issues, etc.?
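
My naive reading of "a low-priority thread which looks like any other
task" is something like the user-space sketch below: a per-sibling
spinner put into SCHED_IDLE, so that it is accounted and woken through
the normal fair-class paths instead of the idle task's special cases.
This is purely my guess at the concept, not code from your tree:

#define _GNU_SOURCE	/* for SCHED_IDLE in glibc */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Body of the hypothetical per-thread "coresched idle" task. */
static void *coresched_idle_fn(void *arg)
{
	struct sched_param sp = { .sched_priority = 0 };

	(void)arg;

	/* SCHED_IDLE keeps this a CFS task, just with minimal weight,
	 * so the scheduler sees it as an ordinary task. */
	if (sched_setscheduler(0, SCHED_IDLE, &sp))
		perror("sched_setscheduler");

	for (;;)
		sched_yield();	/* stand-in for the forced-idle spin */

	return NULL;
}

int main(void)
{
	pthread_t tid;

	if (pthread_create(&tid, NULL, coresched_idle_fn, NULL)) {
		perror("pthread_create");
		return 1;
	}
	pthread_join(tid, NULL);
	return 0;
}

If this is roughly right, then I guess the sched_yield() loop is what
you refer to below as spinning at 100%?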

Thanks,
Aaron

> The results are encouraging, but we have not yet stopped the coresched
> idle thread from spinning at 100%. We will post the patch soon, once it
> is stable enough to run the tests that we have all done so far.
>
> Thanks,
> Vineeth