Re: [PATCH] sched: fix incorrect PELT values on SMT
From: Steve Muckle
Date: Fri Aug 19 2016 - 16:14:07 EST
On Fri, Aug 19, 2016 at 04:00:57PM +0100, Dietmar Eggemann wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 61d485421bed..95d34b337152 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -2731,7 +2731,7 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> > sa->last_update_time = now;
> >
> > scale_freq = arch_scale_freq_capacity(NULL, cpu);
> > - scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
> > + scale_cpu = arch_scale_cpu_capacity(cpu_rq(cpu)->sd, cpu);
>
> Wouldn't you have to access this RCU-protected pointer rq->sd with
> something like 'rcu_dereference(cpu_rq(cpu)->sd)'?
>
> IMHO, __update_load_avg() is also called outside existing RCU read-side
> critical sections, so a pair of rcu_read_lock()/rcu_read_unlock() calls
> would be required in this case.
Thanks Dietmar for the review.
Yeah, I didn't consider that this is protected by RCU. It looks like
I'm abandoning this approach anyway though and doing something limited
just to schedutil.
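
For reference, the RCU-safe access you describe would look roughly like
the sketch below (untested, and the helper name is only illustrative):

static unsigned long scale_cpu_capacity_rcu(int cpu)
{
	struct sched_domain *sd;
	unsigned long scale_cpu;

	/* Hold the read-side lock while dereferencing rq->sd so the
	 * sched_domain can't be freed underneath us. */
	rcu_read_lock();
	sd = rcu_dereference(cpu_rq(cpu)->sd);
	scale_cpu = arch_scale_cpu_capacity(sd, cpu);
	rcu_read_unlock();

	return scale_cpu;
}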
thanks,
Steve