Re: [PATCH] sched/numa: advanced per-cgroup numa statistic
From: Michael Wang
Date: Mon Oct 28 2019 - 22:02:42 EST
On 2019/10/28 9:02 PM, Peter Zijlstra wrote:
[snip]
>> +	tg = task_group(p);
>> +	while (tg) {
>> +		/* skip account when there are no faults records */
>> +		if (idx != -1)
>> +			this_cpu_inc(tg->numa_stat->locality[idx]);
>> +
>> +		this_cpu_inc(tg->numa_stat->jiffies);
>> +
>> +		tg = tg->parent;
>> +	}
>> +
>> +	rcu_read_unlock();
>> +}
>
> Thing is, we already have a cgroup hierarchy walk in the tick; see
> task_tick_fair().
>
> On top of that, you're walking an entirely different set of pointers,
> instead of cfs_rq, you're walking tg->parent, which pretty much
> guarantees you're adding even more cache misses.
>
> How about you stick those numa_stats in cfs_rq (with cacheline
> alignment) and see if you can frob your update loop into the cgroup walk
> we already do.
Thanks for the reply :-)
The hierarchy walk you mean here is the for_each_sched_entity() loop in
task_tick_fair() that calls entity_tick(), correct?
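That is, if I'm reading it right, roughly this part (abbreviated):

static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
{
	struct cfs_rq *cfs_rq;
	struct sched_entity *se = &curr->se;

	/* walks curr's se hierarchy, touching each level's cfs_rq */
	for_each_sched_entity(se) {
		cfs_rq = cfs_rq_of(se);
		entity_tick(cfs_rq, se, queued);
	}
	...
}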
That should work if we introduce per-cfs_rq numa_stat accounting and do
the update there; I'll rework the implementation along those lines in the
next version, roughly as sketched below.
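Just a rough sketch of the idea -- the struct/field/helper names
(numa_stat, update_numa_stat(), NR_NL_INTERVAL) are placeholders here,
not the final code:

/* rough sketch only -- names are placeholders */
struct numa_stat {
	u64 locality[NR_NL_INTERVAL];	/* NR_NL_INTERVAL: locality bucket count */
	u64 jiffies;
};

struct cfs_rq {
	/* ... existing members ... */
#ifdef CONFIG_NUMA_BALANCING
	/*
	 * cfs_rq is already per-cpu per-group, so plain counters
	 * should do; cacheline aligned as you suggested.
	 */
	struct numa_stat numa_stat ____cacheline_aligned;
#endif
};

/* to be called from the for_each_sched_entity() walk above */
static void update_numa_stat(struct cfs_rq *cfs_rq, int idx)
{
	/* skip the locality accounting when there is no fault record */
	if (idx != -1)
		cfs_rq->numa_stat.locality[idx]++;

	cfs_rq->numa_stat.jiffies++;
}

Reading the stats would then mean summing over the group's per-cpu
cfs_rq counters instead of the per-cpu tg counters.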
Regards,
Michael Wang