Re: [RFC PATCH 0/7] Introduce thermal pressure
From: Ingo Molnar
Date: Thu Oct 18 2018 - 02:48:57 EST
* Thara Gopinath <thara.gopinath@xxxxxxxxxx> wrote:
> On 10/16/2018 03:33 AM, Ingo Molnar wrote:
> >
> > * Thara Gopinath <thara.gopinath@xxxxxxxxxx> wrote:
> >
> >>>> Regarding testing, basic build, boot and sanity testing have been
> >>>> performed on a hikey960 mainline kernel with a Debian file system.
> >>>> Further, aobench (an occlusion renderer for benchmarking real-world
> >>>> floating-point performance) showed the following results on hikey960
> >>>> with Debian:
> >>>>
> >>>>                                           Result        Standard  Standard
> >>>>                                           (Time secs)   Error     Deviation
> >>>> Hikey 960 - no thermal pressure applied   138.67        6.52      11.52%
> >>>> Hikey 960 - thermal pressure applied      122.37        5.78      11.57%
> >>>
> >>> Wow, +13% speedup, impressive! We definitely want this outcome.
> >>>
> >>> I'm wondering what happens if we do not track and decay the thermal
> >>> load at all at the PELT level, but instantaneously decrease/increase
> >>> effective CPU capacity in reaction to thermal events we receive from
> >>> the CPU.
> >>
> >> The problem with instantaneous updates is that thermal events can
> >> happen at a much faster pace than cpu_capacity is updated in the
> >> scheduler. This means that by the time the scheduler uses the value,
> >> it might not be correct anymore.
> >
> > Let me offer a different interpretation: if we average throttling events
> > then we create a 'smooth' average of 'true CPU capacity' that doesn't
> > fluctuate much. This allows more stable yet asymmetric task placement if
> > the thermal characteristics of the different cores differ. Compared to
> > instantaneous updates, this would reduce unnecessary task migrations
> > between cores.
> >
> > Is that accurate?
>
> Yes, I think it is accurate. I will also add that if we don't average
> throttling events, we will miss the events that occur in between
> load-balancing (LB) periods.
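Yeah. To make that concrete, the kind of decayed capping signal being
discussed looks roughly like the sketch below - a minimal illustration
only; the names (update_thermal_pressure, DECAY_SHIFT, etc.) are
invented for the example and are not taken from the patch set:

/* Capacity currently lost to thermal capping, as a decaying average. */
static unsigned long thermal_pressure;

#define DECAY_SHIFT	3	/* new sample weight: 1/2^3 = 1/8 */

/* Called from the thermal framework on every capping event. */
void update_thermal_pressure(unsigned long capped_cap, unsigned long max_cap)
{
	unsigned long delta = max_cap - capped_cap;

	/* avg = avg*7/8 + delta/8 */
	thermal_pressure -= thermal_pressure >> DECAY_SHIFT;
	thermal_pressure += delta >> DECAY_SHIFT;
}

/* Read at load-balancing time. */
unsigned long thermal_effective_capacity(unsigned long max_cap)
{
	return max_cap - thermal_pressure;
}

Because every capping event feeds the average (a real implementation
would also decay the signal periodically when no events arrive), events
that fire between two load-balancing points still leave a trace, whereas
an instantaneous snapshot only reflects the most recent one.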
That said, I'd definitely suggest not integrating this averaging into
pelt.c in the fashion presented, because:
- This couples your thermal throttling averaging to the PELT decay
  half-life AFAICS, which would break the other user every time the
  decay is changed/tuned.
- The boolean flag that changes behavior in pelt.c is not particularly
  clean either and complicates the code.
- Instead, maybe factor out a decaying-average library into
  kernel/sched/avg.h (if this truly improves the code), and use those
  methods both in pelt.c and any future thermal.c - and maybe in other
  places where we do decaying averages (see the sketch after this
  list).
- But simple decaying averages are not that complex either, so I think
  your original solution of open coding it is probably fine as well.
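For the avg.h idea, the helper could be as small as the sketch below.
This is purely hypothetical - no such file or API exists in the tree -
but it shows the point: the decay shift becomes a per-user parameter,
so the thermal averaging no longer has to track the PELT half-life:

/* Hypothetical kernel/sched/avg.h - illustration only. */
struct decay_avg {
	unsigned long	val;	/* current average */
	unsigned int	shift;	/* new-sample weight: 1/2^shift */
};

static inline void decay_avg_init(struct decay_avg *a, unsigned int shift)
{
	a->val = 0;
	a->shift = shift;
}

/* avg = avg * (1 - 1/2^shift) + sample / 2^shift */
static inline void decay_avg_update(struct decay_avg *a, unsigned long sample)
{
	a->val -= a->val >> a->shift;
	a->val += sample >> a->shift;
}

pelt.c and a future thermal.c would then each keep their own instance
with independently tuned shifts, so changing the PELT decay would no
longer perturb the thermal average.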
Furthermore, any logic introduced by thermal.c and the resulting change
to load-balancing behavior would have to be in perfect sync with cpufreq
governor actions - one mechanism should not work against the other.
The only long-term maintainable solution is to move all high-level
cpufreq logic and policy handling code into kernel/sched/cpufreq*.c,
which has already been done to a fair degree over the past ~2 years -
but it's unclear to me to what extent this is true for thermal
throttling policy currently: there might be more governor surgery and
code reshuffling required?
The short-term goal would be to at least have all the bugs lined up
neatly in kernel/sched/*, so that we have a chance to see and fix them
in a single place. ;-)
Thanks,
Ingo