Re: [RFC 00/60] Coscheduling for Linux

From: Jan H. Schönherr
Date: Fri Oct 19 2018 - 15:08:32 EST


On 19/10/2018 17.45, Rik van Riel wrote:
> On Fri, 2018-10-19 at 17:33 +0200, Frederic Weisbecker wrote:
>> On Fri, Oct 19, 2018 at 11:16:49AM -0400, Rik van Riel wrote:
>>> On Fri, 2018-10-19 at 13:40 +0200, Jan H. Schönherr wrote:
>>>>
>>>> Now, it would be possible to "invent" relocatable cpusets to
>>>> address that issue ("I want affinity restricted to a core, I don't
>>>> care which"), but then, the current way how cpuset affinity is
>>>> enforced doesn't scale for making use of it from within the
>>>> balancer. (The upcoming load balancing portion of the coscheduler
>>>> currently uses a file similar to cpu.scheduled to restrict
>>>> affinity to a load-balancer-controlled subset of the system.)
>>>
>>> Oh boy, so the coscheduler is going to get its own load balancer?

Not "its own". The load balancer already aggregates statistics about
sched-groups. With the coscheduler as posted, there is now a runqueue per
scheduling group. The current "ad-hoc" gathering of data per scheduling
group is then basically replaced with looking up that data at the
corresponding runqueue, where it is kept up-to-date automatically.
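To illustrate (heavily simplified; the real aggregation lives in
update_sg_lb_stats() in kernel/sched/fair.c, and "sg_rq" with its
fields is a made-up name):

/* Today: statistics are re-aggregated on every load-balancing pass. */
static void update_sg_lb_stats(struct sched_group *group,
                               struct sg_lb_stats *sgs)
{
        int cpu;

        for_each_cpu(cpu, sched_group_span(group)) {
                struct rq *rq = cpu_rq(cpu);

                sgs->group_load += weighted_cpuload(rq);
                sgs->sum_nr_running += rq->nr_running;
        }
}

/*
 * With a runqueue per scheduling group, the aggregate is maintained
 * at enqueue/dequeue time and can simply be read off that runqueue.
 */
static void read_sg_stats(struct sg_rq *sg_rq, struct sg_lb_stats *sgs)
{
        sgs->group_load = sg_rq->load;
        sgs->sum_nr_running = sg_rq->nr_running;
}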


>>> At that point, why bother integrating the coscheduler into CFS,
>>> instead of making it its own scheduling class?
>>>
>>> CFS is already complicated enough that it borders on unmaintainable.
>>> I would really prefer to have the coscheduler code separate from
>>> CFS, unless there is a really compelling reason to do otherwise.
>>
>> I guess he wants to reuse as much as possible from the CFS features
>> and code present or to come (nice, fairness, load balancing, power
>> aware, NUMA aware, etc...).

Exactly. I want a user to be able to "switch on" coscheduling for those
parts of the workload that profit from it, without affecting the behavior
we are all used to. That goes both for the scheduling behavior of tasks
that are not coscheduled and for the scheduling behavior of tasks *within*
the group of coscheduled tasks.


> I wonder if things like nice levels, fairness, and balancing could be
> broken out into code that could be reused from both CFS and a new
> co-scheduler scheduling class.
>
> A bunch of the cgroup code is already broken out, but maybe some more
> could be broken out and shared, too?

Maybe.


>> OTOH you're right, the thing has specific enough requirements to
>> consider a new sched policy.

The primary issue I have with a new scheduling class is that scheduling
classes are strictly priority ordered. If there is a runnable task in a
higher class, it is executed, no matter the situation in lower classes.
"Coscheduling" would have to sit higher in the class hierarchy than CFS.
And then all kinds of issues appear, from starvation of CFS tasks and
other unfairness to the necessity of (re-)defining preemption rules, nice
handling, and other things that CFS already provides.
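That ordering is baked into the core pick path, which walks the classes
in priority order and takes the first runnable task it finds (simplified
from pick_next_task() in kernel/sched/core.c):

        for_each_class(class) {
                p = class->pick_next_task(rq, prev, rf);
                if (p)
                        return p;
        }

A coscheduling class sitting above fair_sched_class would always win
against plain CFS tasks, which is where the starvation and fairness
questions come from.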


> Some bits of functionality come to mind:
>
> - track groups of tasks that should be co-scheduled (eg all the VCPUs of
> a virtual machine)

cgroups

> - track the subsets of those groups that are runnable (eg. the currently
> runnable VCPUs of a virtual machine)

runqueues

> - figure out time slots and CPU assignments to efficiently use CPU time
> for the co-scheduled tasks (while leaving some configurable(?) amount of
> CPU time available for other tasks)

CFS runqueues and associated rules for preemption/time slices/etc.

> - configuring some lower-level code on each affected CPU to "run task A
> in slot X", etc

There is no "slot" concept, as it does not fit my idea of interactive
usage. (As in "slot X will execute from time T to T+1.) It is purely
event-driven right now (eg, "group X just became runnable, it is considered
more important than the currently running group Y; all CPUs (in the
affected part of the system) switch to group X", or "group X ran long
enough, next group").
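In pseudocode, the reaction to such an event might look roughly like this
(purely illustrative; all names below are made up):

/* "Group X just became runnable and beats the running group Y." */
static void cosched_group_woken(struct cosched_group *x)
{
        struct cosched_group *y = current_group(x->domain);

        if (group_preempts(x, y)) {
                /*
                 * Make all CPUs in the affected part of the system
                 * switch to group X at their next opportunity.
                 */
                set_next_group(x->domain, x);
                resched_domain(x->domain);
        }
}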

While some planning ahead seems possible (as demonstrated by the Tableau
scheduler that Peter already pointed me at), I currently cannot imagine
such an approach working for general-purpose workloads. My primary concern
is the absence of true preemption.


> This really does not seem like something that could be shoehorned into
> CFS without making it unmaintainable.
>
> Furthermore, it also seems like the thing that you could never really
> get into a highly efficient state as long as it is weighed down by the
> rest of CFS.

I still hold the idealistic notion that there is no "weighing down". I
see it more as profiting from all the hard work that went into CFS,
avoiding the same mistakes, staying backwards compatible, and so on.



If I were to do this "outside of CFS", I'd overhaul the scheduling class
concept as it exists today. Instead, I'd probably attempt to schedule
instantiations of scheduling classes. In its easiest setup, nothing would
change: one CFS instance, one RT instance, one DL instance, strictly
ordered by priority (on each CPU). The coscheduler as posted (and task
groups in general) is effectively a set of CFS instances governed by a
CFS instance.

This approach would allow, for example, multiple CFS instances that are
scheduled with explicit priorities; or some tasks that are scheduled with a
custom scheduling class, while the whole group of tasks competes for time
with other tasks via CFS rules.
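As a rough data-structure sketch of that thought experiment (nothing of
this exists in the posted patches; all names are hypothetical):

struct sched_class_instance {
        const struct sched_class        *class;  /* the rules: CFS, RT, DL, ... */
        struct sched_class_instance     *parent; /* the instance we compete in */
        void                            *rq;     /* per-instance runqueue state */
};

/*
 * Today's setup in this model: one CFS, one RT, one DL instance,
 * strictly priority ordered. Task groups (and the posted coscheduler)
 * correspond to CFS instances whose parent is itself a CFS instance.
 */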

I'd still keep the feature of "coscheduling" orthogonal to everything else,
though. Essentially, I'd just give the user/admin the possibility to choose
the set of rules that shall be applied to entities in a runqueue.



Your idea of further modularization seems to go in a similar direction, or
at least is not incompatible with that. If it helps keeping things
maintainable, I'm all for it. For example, some of the (upcoming) load
balancing changes are just generalizations, so that the functions don't
operate on *the* set of CFS runqueues, but just *a* set of CFS runqueues.
Similarly, in the already posted code, task picking now starts at *some*
top CFS runqueue instead of *the* top CFS runqueue.
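In code terms, that generalization is mostly a change of entry point
(simplified from the pick loop in pick_next_task_fair() in
kernel/sched/fair.c, with "top" being the new, caller-provided
parameter):

        struct cfs_rq *cfs_rq = top;    /* was: &rq->cfs */
        struct sched_entity *se;

        do {
                se = pick_next_entity(cfs_rq, cfs_rq->curr);
                cfs_rq = group_cfs_rq(se);
        } while (cfs_rq);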

Regards
Jan