Re: [PATCH v7 2/7] sched: move cfs task on a CPU with higher capacity
From: Vincent Guittot
Date: Fri Oct 10 2014 - 03:47:10 EST
On 9 October 2014 17:30, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> On Thu, Oct 09, 2014 at 04:59:36PM +0200, Vincent Guittot wrote:
>> On 9 October 2014 13:23, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>> > On Tue, Oct 07, 2014 at 02:13:32PM +0200, Vincent Guittot wrote:
>> >> +++ b/kernel/sched/fair.c
>> >> @@ -5896,6 +5896,18 @@ fix_small_capacity(struct sched_domain *sd, struct sched_group *group)
>> >> }
>> >>
>> >> /*
>> >> + * Check whether the capacity of the rq has been noticeably reduced by side
>> >> + * activity. The imbalance_pct is used for the threshold.
>> >> + * Return true if the capacity is reduced
>> >> + */
>> >> +static inline int
>> >> +check_cpu_capacity(struct rq *rq, struct sched_domain *sd)
>> >> +{
>> >> + return ((rq->cpu_capacity * sd->imbalance_pct) <
>> >> + (rq->cpu_capacity_orig * 100));
>> >> +}
>> >> +
>> >> +/*
>> >> * Group imbalance indicates (and tries to solve) the problem where balancing
>> >> * groups is inadequate due to tsk_cpus_allowed() constraints.
>> >> *
>> >> @@ -6567,6 +6579,14 @@ static int need_active_balance(struct lb_env *env)
>> >> */
>> >> if ((sd->flags & SD_ASYM_PACKING) && env->src_cpu > env->dst_cpu)
>> >> return 1;
>> >> +
>> >> + /*
>> >> + * The src_cpu's capacity is reduced because of other
>> >> + * sched_class activity or IRQs; trigger an active balance
>> >> + * to move the task
>> >> + */
>> >> + if (check_cpu_capacity(env->src_rq, sd))
>> >> + return 1;
>> >> }
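For reference, the quoted helper boils down to a plain percentage test. A minimal userspace sketch of the same check (struct rq_mock and the explicit imbalance_pct parameter are stand-ins for the kernel's struct rq and sd->imbalance_pct, used here only so the snippet is self-contained):

```c
#include <assert.h>

/* Mock of the two rq fields the quoted helper reads. */
struct rq_mock {
	unsigned long cpu_capacity;      /* capacity currently left for CFS */
	unsigned long cpu_capacity_orig; /* original full capacity of the CPU */
};

/*
 * Same test as the patch: capacity counts as "noticeably reduced" when
 * capacity * imbalance_pct < capacity_orig * 100. With the common
 * imbalance_pct of 125, that means capacity below 80% of the original.
 */
static int check_cpu_capacity(const struct rq_mock *rq,
			      unsigned int imbalance_pct)
{
	return rq->cpu_capacity * imbalance_pct <
	       rq->cpu_capacity_orig * 100;
}
```

With imbalance_pct = 125, a CPU whose remaining capacity has dropped to 768/1024 (75%) trips the check, while one at 900/1024 (~88%) does not.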
>> >
>> > So does it make sense to first check if there's a better candidate at
>> > all? By this time we've already iterated the current SD while trying
>> > regular load balancing, so we could know this.
>>
>> I'm not sure I completely catch your point.
>> Normally, f_b_g and f_b_q have already looked for the best candidate
>> by the time we call need_active_balance, and src_cpu has been elected.
>> Or have I missed your point?
>
> Yep you did indeed miss my point.
>
> So I've always disliked this patch for its arbitrary nature: why
> unconditionally try an active balance every time there is 'some' RT/IRQ
> usage? It could be that all CPUs are over that arbitrary threshold, and
> we'd end up active balancing for no benefit.
>
> So, since we've already iterated all CPUs in our domain back in
> update_sd_lb_stats() we could have computed the CFS fraction:
>
> 1024 * capacity / capacity_orig
>
> for every CPU and collected the min/max of this. Then we can compute if
> src is significantly (and there I suppose we can indeed use imb)
> affected compared to others.
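The suggestion above can be sketched in userspace C; the names cpu_cap, cfs_fraction and src_significantly_reduced are hypothetical stand-ins for the real update_sd_lb_stats() plumbing, but the arithmetic follows the fraction and threshold described:

```c
#include <assert.h>

/* Hypothetical per-CPU snapshot; the kernel would read these from
 * rq->cpu_capacity and rq->cpu_capacity_orig while iterating the domain. */
struct cpu_cap {
	unsigned long capacity;      /* capacity left for CFS tasks */
	unsigned long capacity_orig; /* original capacity of the CPU */
};

/* CFS fraction as suggested: 1024 * capacity / capacity_orig */
static unsigned long cfs_fraction(const struct cpu_cap *c)
{
	return 1024 * c->capacity / c->capacity_orig;
}

/*
 * Collect the max fraction over the CPUs of the domain, then decide
 * whether src is significantly below the best candidate, reusing the
 * imbalance_pct threshold. If all CPUs are equally reduced, src is not
 * flagged, avoiding the pointless active balance Peter objects to.
 */
static int src_significantly_reduced(const struct cpu_cap *cpus, int nr,
				     int src, unsigned int imbalance_pct)
{
	unsigned long max_frac = 0;
	int i;

	for (i = 0; i < nr; i++) {
		unsigned long f = cfs_fraction(&cpus[i]);
		if (f > max_frac)
			max_frac = f;
	}
	/* src counts as reduced only relative to the best CPU around */
	return cfs_fraction(&cpus[src]) * imbalance_pct < max_frac * 100;
}
```

With two CPUs at 1024/1024 and 512/1024, the second is flagged; with both at 512/1024, neither is, which is exactly the "all CPUs over the threshold" case the unconditional check gets wrong.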
OK, so we should add an additional check in f_b_g to make sure we jump
to force_balance only if there is a real gain in moving the task to the
local group (from the point of view of the capacity available to the
task), and probably in f_b_q too.