RE: [PATCH] sched: prevent compiler from optimising sched_avg_update loop

From: Peter Zijlstra
Date: Tue Mar 23 2010 - 14:10:47 EST


On Tue, 2010-03-23 at 18:03 +0000, Will Deacon wrote:
> Hello Eric,
>
> Thanks for looking at the patch.
>
> > > diff --git a/kernel/sched.c b/kernel/sched.c
> > > index 9ab3cd7..6b74f21 100644
> > > --- a/kernel/sched.c
> > > +++ b/kernel/sched.c
> > > @@ -1238,11 +1238,10 @@ static u64 sched_avg_period(void)
> > >  static void sched_avg_update(struct rq *rq)
> > >  {
> > >  	s64 period = sched_avg_period();
> > > +	s64 elapsed_periods = div_s64(rq->clock - rq->age_stamp - 1, period);
> > >
> > > -	while ((s64)(rq->clock - rq->age_stamp) > period) {
> > > -		rq->age_stamp += period;
> > > -		rq->rt_avg /= 2;
> > > -	}
> > > +	rq->age_stamp += (u64)(elapsed_periods * period);
> > > +	rq->rt_avg >>= elapsed_periods;
> > >  }
> > >
> > >  static void sched_rt_avg_update(struct rq *rq, u64 rt_delta)
> >
> > Please take a look at __iter_div_u64_rem(), because we had a similar
> > problem in the past. We want to avoid this div_s64() call.
>
> Yes, I saw the inline assembly fix there. I avoided that fix because
> I was trying not to execute the loop body multiple times. Is the iterative
> approach preferred over a single call to div_s64? I don't have a handle on
> how many iterations are typically executed for this loop.

I expect it to be mostly 0 and occasionally 1 iteration, except when someone
pokes at a sysctl with funny values, at which point it might go round the
loop many more times.
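
FWIW, here is a rough userspace sketch, not the kernel code itself: the
struct rq below, iter_div_u64_rem() and the values in main() are simplified
stand-ins for the real rq fields, __iter_div_u64_rem() and real clock
deltas. It just compares the original halving loop with the
iterative-division approach being suggested:

/*
 * Illustrative sketch only (userspace, simplified types): compares the
 * original per-period halving loop with an __iter_div_u64_rem()-style
 * helper, which avoids a hardware divide by subtracting the divisor in
 * a loop that is expected to run very few times.
 */
#include <stdint.h>
#include <stdio.h>

struct rq {
	uint64_t clock;		/* stand-in for rq->clock */
	uint64_t age_stamp;	/* stand-in for rq->age_stamp */
	uint64_t rt_avg;	/* stand-in for rq->rt_avg */
};

/* Mimics __iter_div_u64_rem(): division by repeated subtraction. */
static uint32_t iter_div_u64_rem(uint64_t dividend, uint32_t divisor,
				 uint64_t *remainder)
{
	uint32_t ret = 0;

	while (dividend >= divisor) {
		dividend -= divisor;
		ret++;
	}
	*remainder = dividend;
	return ret;
}

/* Original behaviour: halve rt_avg once for every full period elapsed. */
static void sched_avg_update_loop(struct rq *rq, int64_t period)
{
	while ((int64_t)(rq->clock - rq->age_stamp) > period) {
		rq->age_stamp += period;
		rq->rt_avg /= 2;
	}
}

/* Same result computed with the iterative division plus a single shift. */
static void sched_avg_update_iter(struct rq *rq, int64_t period)
{
	uint64_t delta = rq->clock - rq->age_stamp;
	uint64_t rem;
	/* The "- 1" mirrors the strict ">" in the loop condition above. */
	uint32_t n = delta ? iter_div_u64_rem(delta - 1, (uint32_t)period, &rem) : 0;

	rq->age_stamp += (uint64_t)n * (uint64_t)period;
	rq->rt_avg >>= (n < 64 ? n : 63);	/* shifting by >= 64 is undefined */
}

int main(void)
{
	struct rq a = { .clock = 10000000, .age_stamp = 0, .rt_avg = 4096 };
	struct rq b = a;
	int64_t period = 3000000;

	sched_avg_update_loop(&a, period);
	sched_avg_update_iter(&b, period);

	printf("loop: age_stamp=%llu rt_avg=%llu\n",
	       (unsigned long long)a.age_stamp, (unsigned long long)a.rt_avg);
	printf("iter: age_stamp=%llu rt_avg=%llu\n",
	       (unsigned long long)b.age_stamp, (unsigned long long)b.rt_avg);
	return 0;
}

The "- 1" before the division mirrors the strict ">" in the original while
condition, so both variants advance age_stamp by the same number of periods;
the shift is capped only because shifting a 64-bit value by 64 or more is
undefined in C.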