Re: [RFC/PATCH] powerpc/smp: Add SD_SHARE_PKG_RESOURCES flag to MC sched-domain
From: Mel Gorman
Date: Mon Apr 12 2021 - 06:48:26 EST
On Mon, Apr 12, 2021 at 11:06:19AM +0100, Valentin Schneider wrote:
> On 12/04/21 10:37, Mel Gorman wrote:
> > On Mon, Apr 12, 2021 at 11:54:36AM +0530, Srikar Dronamraju wrote:
> >> * Gautham R. Shenoy <ego@xxxxxxxxxxxxxxxxxx> [2021-04-02 11:07:54]:
> >>
> >> >
> >> > To remedy this, this patch proposes that the LLC be moved to the MC
> >> > level which is a group of cores in one half of the chip.
> >> >
> >> > SMT (SMT4) --> MC (Hemisphere)[LLC] --> DIE
> >> >
> >>
> >> I think marking Hemisphere as an LLC in a P10 scenario is a good idea.
> >>
> >> > While there is no cache being shared at this level, this is still the
> >> > level where some amount of cache-snooping takes place and it is
> >> > relatively faster to access the data from the caches of the cores
> >> > within this domain. With this change, we no longer see regressions on
> >> > P10 for applications which require single threaded performance.
> >>
> >> Peter, Valentin, Vincent, Mel, etal
> >>
> >> On architectures where we have multiple levels of cache access latencies
> >> within a DIE (for example, one within the current LLC or SMT core, another
> >> at the MC or Hemisphere level, and finally across hemispheres), do you have
> >> any suggestions on how we could handle this in the core scheduler?
> >>
> >
> > Minimally I think it would be worth detecting when there are multiple
> > LLCs per node and tracking that in generic code as a static branch. In
> > select_idle_cpu, consider taking two passes -- first on the LLC domain
> > and, if no idle CPU is found and the search depth allows, a second pass
> > within the node with the LLC CPUs masked out.
>
> I think that's actually a decent approach. Tying SD_SHARE_PKG_RESOURCES to
> something other than pure cache topology in a generic manner is tough (as
> it relies on murky, ill-defined hardware fabric properties).
>
Agreed. The LLC->node scan idea has been on my TODO list to try for
a while.
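
Something like the below is roughly what I had in mind for the second
pass. Completely untested, and the static key and helper names
(sched_llc_node_scan, select_idle_node) are made up for illustration --
they are not existing kernel symbols:

/*
 * Hypothetical static key, enabled only when a NUMA node contains
 * more than one LLC (see the detection sketch further down).
 */
DEFINE_STATIC_KEY_FALSE(sched_llc_node_scan);

/*
 * Second pass after select_idle_cpu() failed on the LLC domain.
 * Bounding the scan by the allowed search depth is omitted here but
 * would follow the same pattern as the existing LLC scan.
 */
static int select_idle_node(struct task_struct *p, struct sched_domain *sd_llc, int target)
{
	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
	int cpu;

	if (!static_branch_likely(&sched_llc_node_scan))
		return -1;

	/* Scan the rest of the node with the LLC CPUs masked out */
	cpumask_and(cpus, cpumask_of_node(cpu_to_node(target)), p->cpus_ptr);
	cpumask_andnot(cpus, cpus, sched_domain_span(sd_llc));

	for_each_cpu_wrap(cpu, cpus, target) {
		if (available_idle_cpu(cpu))
			return cpu;
	}

	return -1;
}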
> Last I tried thinking about that, I stopped at having a core-to-core
> latency matrix, building domains off of that, and having some knob
> specifying the highest distance value below which we'd set
> SD_SHARE_PKG_RESOURCES. There's a few things I 'hate' about that; for one
> it makes cpus_share_cache() somewhat questionable.
>
And I thought about something like this too, but worried it might get
complex, particularly on chiplets where, depending on how they are wired
up, we do not necessarily have hardware latency information. It also
might lead to excessive cpumask manipulation in a fast path if we have
to traverse multiple distances, with the search cost exceeding the gains
from latency reduction. Hence the preference for keeping it simple with
two levels only -- LLC, then node within the allowed search depth -- and
seeing what that gets us. That might be "good enough" in most cases and
would be a basis for comparison against more complex approaches.
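
For the detection side, something like the below run after the sched
domains are (re)built should be enough to flip the key only when an LLC
spans less than its node. Again untested, and the hook would need to be
wired up properly:

/*
 * Hypothetical hook, called after the sched domains have been rebuilt.
 * If any CPU's LLC spans fewer CPUs than its node, there are multiple
 * LLCs per node and the second scan pass is worth enabling.
 */
static void update_llc_node_scan(void)
{
	bool multi_llc = false;
	int cpu;

	rcu_read_lock();
	for_each_online_cpu(cpu) {
		struct sched_domain *sd = rcu_dereference(per_cpu(sd_llc, cpu));

		/* An LLC spanning less than its node implies multiple LLCs */
		if (sd && !cpumask_equal(sched_domain_span(sd),
					 cpumask_of_node(cpu_to_node(cpu)))) {
			multi_llc = true;
			break;
		}
	}
	rcu_read_unlock();

	if (multi_llc)
		static_branch_enable(&sched_llc_node_scan);
	else
		static_branch_disable(&sched_llc_node_scan);
}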
At minimum, I expect IBM can evaluate the POWER10 aspect and I can run
an evaluation on Zen generations.
--
Mel Gorman
SUSE Labs