Re: [PATCH v3 06/14] sched/cpufreq: uclamp: add utilization clamping for RT tasks

From: Vincent Guittot
Date: Mon Aug 13 2018 - 10:06:39 EST


On Mon, 13 Aug 2018 at 14:49, Patrick Bellasi <patrick.bellasi@xxxxxxx> wrote:
>
> On 13-Aug 14:07, Vincent Guittot wrote:
> > On Mon, 13 Aug 2018 at 12:12, Patrick Bellasi <patrick.bellasi@xxxxxxx> wrote:
> > >
> > > Hi Vincent!
> > >
> > > On 09-Aug 18:03, Vincent Guittot wrote:
> > > > > On 07-Aug 15:26, Juri Lelli wrote:
> > >
> > > [...]
> > >
> > > > > > > +	util_cfs = cpu_util_cfs(rq);
> > > > > > > +	util_rt  = cpu_util_rt(rq);
> > > > > > > +	if (sched_feat(UCLAMP_SCHED_CLASS)) {
> > > > > > > +		util = 0;
> > > > > > > +		if (util_cfs)
> > > > > > > +			util += uclamp_util(cpu_of(rq), util_cfs);
> > > > > > > +		if (util_rt)
> > > > > > > +			util += uclamp_util(cpu_of(rq), util_rt);
> > > > > > > +	} else {
> > > > > > > +		util  = cpu_util_cfs(rq);
> > > > > > > +		util += cpu_util_rt(rq);
> > > > > > > +		util  = uclamp_util(cpu_of(rq), util);
> > > > > > > +	}
> > > > >
> > > > > Regarding the two policies, do you have any comment?
> > > >
> > > > Does the policy for (sched_feat(UCLAMP_SCHED_CLASS) == true) really
> > > > make sense as it is?
> > > > I mean, uclamp_util() doesn't make any difference between rt and cfs
> > > > tasks when clamping the utilization, so why should we add the
> > > > returned value twice?
> > > > IMHO, this policy would make sense if there were something like
> > > > uclamp_util_rt() and uclamp_util_cfs().
> > >
> > > The idea for the UCLAMP_SCHED_CLASS policy is to improve fairness on
> > > low-priority classes, especially when we have high RT utilization.
> > >
> > > Let say we have:
> > >
> > > util_rt = 40%, util_min=0%
> > > util_cfs = 10%, util_min=50%
> > >
> > > the two policies will select:
> > >
> > > UCLAMP_SCHED_CLASS: util = uclamp(40) + uclamp(10) = 50 + 50 = 100%
> > > !UCLAMP_SCHED_CLASS: util = uclamp(40 + 10) = uclamp(50) = 50%
> > >
> > > Which means that, although the CPU's util_min will be set to 50% when
> > > CFS tasks are running, those tasks will get almost no boost at all,
> > > since their bandwidth margin is eclipsed by the RT tasks.
> >
> > Hmm ... Conversely, even if there is no running rt task but only
> > some remaining blocked rt utilization, e.g.
> > util_rt = 10%, util_min=0%
> > and util_cfs = 40%, util_min=50%
> > UCLAMP_SCHED_CLASS gives: util = uclamp(10) + uclamp(40) = 50 + 50 = 100%
>
> Yes, that's true... since right now I clamp util_rt whenever it's non-zero.
> Perhaps this can be fixed by clamping util_rt only when:
> if (rt_rq_is_runnable(&rq->rt))
> ?
>
> > So a cfs task can get double-boosted by a small rt task.
>
> Well, in principle we don't know if the 50% clamp was asserted by the
> RT or the CFS task, since in the current implementation we
> max-aggregate clamp values across all RT and CFS tasks.

Yes, it was just the assumption of your example above.
IMHO, having util = 100% in your use case looks more like a bug than a feature.

As you said below: "what utilization clamping aims to do is to define
the minimum capacity to run _all_
the RUNNABLE tasks... not the minimum capacity for _each_ one of them".
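
To make the numbers above concrete, here is a small standalone sketch
(not kernel code): uclamp() below is just a simplified stand-in for
uclamp_util() that applies only the single max-aggregated util_min of
the CPU, with no per-class information.

/*
 * Blocked-rt example: util_rt = 10%, util_cfs = 40%, and a CPU-level
 * util_min of 50% coming from the cfs task.
 */
#include <stdio.h>

static unsigned int uclamp(unsigned int util, unsigned int util_min)
{
	return util < util_min ? util_min : util;
}

int main(void)
{
	unsigned int util_rt = 10, util_cfs = 40, util_min = 50;
	unsigned int per_class = 0, combined;

	/* UCLAMP_SCHED_CLASS: each non-zero class utilization is clamped */
	if (util_cfs)
		per_class += uclamp(util_cfs, util_min);
	if (util_rt)
		per_class += uclamp(util_rt, util_min);

	/* !UCLAMP_SCHED_CLASS: the summed utilization is clamped once */
	combined = uclamp(util_cfs + util_rt, util_min);

	/* prints: per-class policy: 100%  single clamp: 50% */
	printf("per-class policy: %u%%  single clamp: %u%%\n",
	       per_class, combined);
	return 0;
}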


>
> > Furthermore, if there is no rt task but 2 cfs tasks of 40% and 10%,
> > UCLAMP_SCHED_CLASS gives: util = uclamp(0) + uclamp(40 + 10) = 50%
>
> True, but here we are within the same class and what utilization
> clamping aims to do is to define the minimum capacity to run _all_
> the RUNNABLE tasks... not the minimum capacity for _each_ one of them.

I fully agree, and that's exactly what I want to highlight: with the
UCLAMP_SCHED_CLASS policy, you try (but fail, because the clamping is
not done per class) to distinguish rt and cfs as different kinds of
runnable tasks.
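
Just to illustrate the shape of what I mean, a purely hypothetical
sketch (nothing like this exists in the posted series): per-class
clamping would mean each class's utilization is clamped only by the
util_min aggregated from the tasks of that class, e.g.:

/* Hypothetical helpers, not part of the series. */
static unsigned int clamp_min(unsigned int util, unsigned int util_min)
{
	return util < util_min ? util_min : util;
}

/* clamp cfs utilization only with the util_min requested by cfs tasks */
static unsigned int uclamp_util_cfs(unsigned int util_cfs,
				    unsigned int cfs_util_min)
{
	return clamp_min(util_cfs, cfs_util_min);
}

/* clamp rt utilization only with the util_min requested by rt tasks */
static unsigned int uclamp_util_rt(unsigned int util_rt,
				   unsigned int rt_util_min)
{
	return clamp_min(util_rt, rt_util_min);
}

With such helpers, the blocked-rt example above would give
uclamp_util_rt(10, 0) + uclamp_util_cfs(40, 50) = 10 + 50 = 60%
instead of 100%.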

>
> > So in this case the cfs tasks don't get more boost and have to share
> > the bandwidth, and you don't ensure 50% for each, unlike what you try
> > to do for rt.
>
> Above I'm not trying to fix a per-task issue. The UCLAMP_SCHED_CLASS
> policy is just "trying" to fix a cross-class issue... if we agree
> there can be a cross-class issue worth fixing.

But the cross-class issue that you are describing can also exist
between cfs tasks with different uclamp_min values, so I'm not sure
that there is more of a cross-class issue than an in-class issue.

>
> > You create a difference in the behavior depending on the class of the
> > other co-scheduled tasks, which is not sane IMHO.
>
> Yes, I agree that the current behavior is not completely clean... still
> the question is: do you recognize the problem I depicted above, i.e. RT
> workloads eclipsing the util_min required by lower-priority classes?

As said above, I don't think there is a problem specific to cross-class
scheduling that can't also happen within the same class.

Regarding your example:
task TA util=40% with uclamp_min 50%
task TB util=10% with uclamp_min 0%

If TA and TB are both cfs, util=50% and it doesn't seem to be a problem,
even though TB will steal some bandwidth from TA and delay it (and I'm
not even speaking about the impact of TB's nice priority).
If TA is cfs and TB is rt, why is util=50% now a problem for TA?
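
A minimal standalone sketch of that comparison, assuming the
max-aggregation of clamps across runnable tasks described above
(again not kernel code):

#include <stdio.h>

struct task { unsigned int util, util_min; };

int main(void)
{
	struct task ta = { .util = 40, .util_min = 50 };
	struct task tb = { .util = 10, .util_min = 0 };

	/* clamp values are max-aggregated across all runnable tasks,
	 * whatever class TB belongs to */
	unsigned int cpu_util_min = ta.util_min > tb.util_min ?
				    ta.util_min : tb.util_min;
	unsigned int util = ta.util + tb.util;

	/* single clamp applied to the summed utilization */
	if (util < cpu_util_min)
		util = cpu_util_min;

	/* prints: util = 50% (TA+TB cfs, or TA cfs + TB rt) */
	printf("util = %u%% (TA+TB cfs, or TA cfs + TB rt)\n", util);
	return 0;
}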

>
> To a certain extent, I see this problem as similar to the rt/dl/irq
> pressure in defining cpu_capacity, isn't it?
>
> Maybe we can make use of (cpu_capacity_orig - cpu_capacity) to factor
> in a util_min compensation for CFS tasks?
>
> --
> #include <best/regards.h>
>
> Patrick Bellasi