Re: [PATCH 2/4] sched/fair: Small cleanup to sched_balance_newidle()
From: Peter Zijlstra
Date: Fri Nov 14 2025 - 04:49:12 EST
On Wed, Nov 12, 2025 at 08:58:23PM +0530, Shrikanth Hegde wrote:
> > @@ -12865,6 +12869,8 @@ static int sched_balance_newidle(struct
> > if (!cpu_active(this_cpu))
> > return 0;
> > + __sched_balance_update_blocked_averages(this_rq);
> > +
>
> Is this done only when sd == NULL?
It's always done.
> > /*
> > * This is OK, because current is on_cpu, which avoids it being picked
> > * for load-balance and preemption/IRQs are still disabled avoiding
> > @@ -12891,7 +12897,6 @@ static int sched_balance_newidle(struct
> > raw_spin_rq_unlock(this_rq);
> > t0 = sched_clock_cpu(this_cpu);
> > - sched_balance_update_blocked_averages(this_cpu);
> > rcu_read_lock();
> > for_each_domain(this_cpu, sd) {
>
> Referring to commit,
> 9d783c8dd112a (sched/fair: Skip update_blocked_averages if we are defering load balance)
> I think Vincent added the max_newidle_lb_cost check because sched_balance_update_blocked_averages() is costly.
That seems to suggest we should only do
sched_balance_update_blocked_averages() when we're actually going to
balance, and so skipping it when !sd is fine.