Re: [PATCH v4 6/8] sched/fair: Add sched group latency support
From: Qais Yousef
Date: Thu Sep 22 2022 - 06:49:19 EST
On 09/22/22 08:40, Vincent Guittot wrote:
> On Wed, 21 Sept 2022 at 19:12, Tejun Heo <tj@xxxxxxxxxx> wrote:
> >
> > On Wed, Sep 21, 2022 at 07:02:57PM +0200, Vincent Guittot wrote:
> > > > One option could be just using the same mapping as cpu.weight so that 100
> > > > maps to neutral, 10000 maps close to -20, 1 maps close to 19. It isn't great
> > > > that the value can't be interpreted in any intuitive way (e.g. a time
> > > > duration based interface would be a lot easier to grok even if it still is
> > > > best effort) but if that's what the per-task interface is gonna be, it'd be
> > > > best to keep cgroup interface in line.
> > >
> > > I would prefer a signed range like [-1000:1000], as the behavior is
> > > different for latency-sensitive and non-sensitive tasks, unlike
> > > cpu.weight, where a bigger value simply means getting more.
> >
> > How about just sticking with .nice?
>
> Looks good to me. I will just implement the cpu.latency.nice interface.
+1
Keeping both interfaces exposing the same thing would make the most sense IMHO.
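
As an aside on Tejun's cpu.weight-style mapping above, a rough userspace
sketch of it could look like the below (the helper name and the ~1.25x
weight ratio per nice step are my assumptions, not anything in the patch
set):

	#include <math.h>

	/*
	 * Illustrative only: map a cpu.weight-style value (1..10000,
	 * 100 == neutral) to a nice-style value (-20..19), assuming
	 * each nice step is a ~1.25x weight ratio as in CFS.
	 */
	static int weight_to_nice(unsigned int weight)
	{
		double nice = -(log((double)weight / 100.0) / log(1.25));

		if (nice < -20.0)
			nice = -20.0;
		if (nice > 19.0)
			nice = 19.0;
		return (int)nice;
	}

With that, 100 comes out neutral, 10000 lands at -20 and 1 clamps to 19,
which matches the intuition in the quote.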
That said, it raises the question: how should the per-task and cgroup
interfaces interact?
For example, one of the proposed use cases was to use this knob to control how
hard we search for the best cpu in the load balancer (IIRC). If a task sets its
latency_nice to -19 but is attached to a cgroup that has cpu.latency.nice set
to 20, how should the new consumer (the load balancer) interpret the effective
value for the task?
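
To make the ambiguity concrete, here is a hypothetical helper contrasting
two plausible aggregation rules. Neither is defined by the patch set and
both names are made up:

	/*
	 * Policy A (uclamp-like restriction): the cgroup value caps how
	 * latency sensitive a task may be. Lower means more sensitive,
	 * so the effective value is the larger of the two.
	 */
	static int effective_latency_nice_restrict(int task_nice, int group_nice)
	{
		return task_nice > group_nice ? task_nice : group_nice;
	}

	/*
	 * Policy B (separate entities, as in the current use case): the
	 * task value always wins and the group value is never folded in.
	 */
	static int effective_latency_nice_task_wins(int task_nice, int group_nice)
	{
		(void)group_nice;	/* intentionally ignored */
		return task_nice;
	}

With the example above, policy A yields 20 (the group restricts the task)
while policy B yields -19 (the task wins); that is exactly what a new
consumer like the load balancer would need defined.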
IIUC, the current use case doesn't care about the effective value as it
considers the group and the task as separate entities. But other paths like
the above wouldn't see this separation and would want to know which of the
two to consider.
We should update Documentation/admin-guide/cgroup-v2.rst with these details for
cpu.latency.nice.
Thanks!
--
Qais Yousef