Re: [PATCH v6 2/2] cpuset: Add cpuset.sched_load_balance to v2

From: Mike Galbraith
Date: Wed Mar 28 2018 - 02:58:02 EST


On Tue, 2018-03-27 at 10:23 -0400, Waiman Long wrote:
> On 03/27/2018 10:02 AM, Tejun Heo wrote:
> > Hello,
> >
> > On Mon, Mar 26, 2018 at 04:28:49PM -0400, Waiman Long wrote:
> >> Maybe we can have a different root level flag, say,
> >> sched_partition_domain that is equivalent to !sched_load_balance.
> >> However, I am still not sure if we should enforce that no task should be
> >> in the root cgroup when the flag is set.
> >>
> >> Tejun and Peter, what are your thoughts on this?
> > I haven't looked into the other issues too much but we for sure cannot
> > empty the root cgroup.
> >
> > Thanks.
> >
> Now, I have a different idea. How about we add a special root-only knob,
> say, "cpuset.cpus.isolated", that contains the list of CPUs that are
> still owned by the root but do not participate in load balancing? All
> the tasks in the root are load-balanced among the remaining CPUs.
>
> A child can then be created that holds some or all of the CPUs in the
> isolated set. It will then have a separate root domain if load balancing
> is on, or be an isolated cpuset if load balancing is off.
>
> Will that idea work?
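
If I read the proposal right, it would be driven something like this
(hypothetical sketch only; neither the knob nor the child setup exists
today, and CPU numbers are illustrative):

	cd /sys/fs/cgroup
	# root keeps CPUs 6-7 but stops balancing root tasks across them
	echo 6-7 > cpuset.cpus.isolated
	echo +cpuset > cgroup.subtree_control
	mkdir vips
	# child takes over some or all of the isolated set
	echo 6-7 > vips/cpuset.cpus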

Hrm. Sounds very much like the typical v1 setup today..

       root
       /  \
   peons  vips

...with the v2 root effectively shrinking to become the v1 "peons" set
*rd/sd/sd_llc-wise only* when you poke /cpuset.cpus.isolated, while
still actually spanning all CPUs. True?
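
For comparison, the v1 idiom today looks roughly like this (assuming a
cgroup-v1 mount of the cpuset controller at /sys/fs/cgroup/cpuset and
CPUs 0-7; all numbers illustrative):

	cd /sys/fs/cgroup/cpuset
	echo 0 > cpuset.sched_load_balance   # split the root domain at the top
	mkdir peons vips
	echo 0-5 > peons/cpuset.cpus         # general population
	echo 0   > peons/cpuset.mems
	echo 6-7 > vips/cpuset.cpus          # isolated turf
	echo 0   > vips/cpuset.mems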

If so, a user would also still have to create a real "peons" subset as
in v1 and migrate everything not nailed to the floor into it for
containment; otherwise any task can be placed, or place itself, anywhere
in the box, or merely wake to find itself sitting on its previous, but
now-vip, turf CPU.
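
I.e. the v1 containment step, roughly (v1 "tasks" interface shown; a v2
equivalent would presumably write cgroup.procs instead):

	# herd everything movable into peons; per-cpu kernel threads
	# will simply refuse the move
	for t in $(cat tasks); do echo $t > peons/tasks 2>/dev/null; done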

-Mike