Re: [PATCH v2 1/3] sched/numa: advanced per-cgroup numa statistic

From: 王贇 (Michael Wang)
Date: Thu Nov 28 2019 - 20:52:26 EST




On 2019/11/28 11:58 PM, Michal Koutný wrote:
> On Thu, Nov 28, 2019 at 09:41:37PM +0800, 王贇 <yun.wang@xxxxxxxxxxxxxxxxx> wrote:
>> There used to be a discussion on this; Peter mentioned we no longer
>> expose raw ticks to userspace, and microseconds could be fine.
> I don't mean the unit presented but the precision.
>
>> Basically we use this to calculate percentages, for which jiffies could be
>> accurate enough :-)
> You also report the raw times.
>
> Ad percentages (or raw times precision), on average, it should be fine
> but can't there be any "aliasing" artifacts when only an unimportant
> task is regularly sampled, hence not capturing the real pattern on the
> CPU? (Again, I'm not confident I'm not missing anything that prevents
> that behavior.)

Hmm... I think I get your point now, so the concern is about what we miss
between ticks, correct?

It could happen like this: one tick hits while task A is running, then A
switches to B, and B switches back to A before the next tick; we then miss
B's exectime at the next tick, since it hits A again.
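
Just to illustrate the pattern with a toy userspace simulation (purely
hypothetical timing, not kernel code): below, B consumes 30% of the CPU
but only ever runs between ticks, so sampling charges every jiffy to A.

#include <stdio.h>

/*
 * Toy simulation: a "tick" fires every 10 time units; B runs only
 * during units 3-5 of each period, i.e. strictly between ticks.
 */
int main(void)
{
	int jiffies_a = 0, jiffies_b = 0;
	int t;

	for (t = 0; t < 100; t++) {
		int b_running = (t % 10) >= 3 && (t % 10) < 6;

		if (t % 10 == 0) {	/* the tick always hits A */
			if (b_running)
				jiffies_b++;
			else
				jiffies_a++;
		}
	}
	/* prints "A=10 B=0", although B consumed 30% of the CPU */
	printf("A=%d B=%d\n", jiffies_a, jiffies_b);
	return 0;
}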

Actually we have the same issue for the data in /proc/stat too, don't we?
The user, sys, and iowait times are sampled in a similar way.

So if we have to pick a precision, I would still pick jiffies, since the
exectime is something similar to user/sys time IMHO.

>
>> But still, what if folks don't use v2... any good suggestions?
> (Note this applies to exectimes not locality.) On v1, they can add up
> per CPU values from cpuacct. (So it's v2 that's missing the records.)

What about moving the whole thing into the cpuacct cgroup?

I'm not sure, but maybe we could reuse some data there to store the jiffies
samples; v1 users who need these statistics should have cpuacct enabled
anyway.
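
Something like the very rough, untested sketch below is what I have in
mind; the numa_exectime per-cpu counter is made up for illustration, only
the cpuacct_charge() hook itself exists today.

/*
 * Rough sketch, not a real patch: extend the existing cpuacct_charge()
 * path (kernel/sched/cpuacct.c) so v1 users with cpuacct enabled also
 * get per-NUMA-node exectime accounting.
 */
void cpuacct_charge(struct task_struct *tsk, u64 cputime)
{
	struct cpuacct *ca;

	rcu_read_lock();
	for (ca = task_ca(tsk); ca; ca = parent_ca(ca)) {
		/* ... existing usage accounting ... */
		this_cpu_add(ca->numa_exectime[numa_node_id()], cputime);
	}
	rcu_read_unlock();
}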

>
>
>> Yes, since they don't have NUMA balancing doing the optimization, and
>> generally they are not that many.
> Aha, I didn't realize that.
>
>> Sorry but I don't get it... at first it was 10 regions, as Peter suggested
>> we picked 8, but now to insert the member 'jiffies' it became 7,
> See, there are various arguments for different values :-)
>
> I meant that the currently chosen one is imprinted into the API file.
> That is IMO fixable by documenting (e.g. the number of bands may change,
> assume uniform division) or making all this just a debug API. Or, see
> below.
>
>> Yes, what I try to highlight here is the similar usage, but not the way of
>> monitoring ;-) as the docs say, we monitor increments.
> I see, the docs give me an idea what's the supposed use case.
>
> What about exposing only the counters for local, remote and let the user
> do their monitoring based on Δlocal/(Δlocal + Δremote)?
>
> That would avoid the partitioning question completely, exposed values
> would be simple numbers and provided information should be equal. A
> drawback is that such a sampling would be slower (but sufficient for the
> illustrating example).

You mean the cgroup numa stat would just give the accumulated local/remote
access counters?

As long as the counters won't overflow, maybe... it sounds easier to explain too.

So users tracing locality will then get just one percentage (calculated on
their own) per cgroup, but one should be enough to represent the situation.
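
For instance, a monitoring tool could do something like the sketch below
(the file name and format are my assumptions here, not a final ABI):

#include <stdio.h>
#include <unistd.h>

/*
 * Hypothetical interface: a cgroup file exporting two accumulated
 * counters; locality is computed from the deltas over a period as
 * delta_local / (delta_local + delta_remote).
 */
static int read_counters(unsigned long long *local,
			 unsigned long long *remote)
{
	FILE *f = fopen("/sys/fs/cgroup/mygroup/cpu.numa_stat", "r");

	if (!f)
		return -1;
	if (fscanf(f, "local %llu remote %llu", local, remote) != 2) {
		fclose(f);
		return -1;
	}
	fclose(f);
	return 0;
}

int main(void)
{
	unsigned long long l0, r0, l1, r1, dl, dr;

	if (read_counters(&l0, &r0))
		return 1;
	sleep(10);			/* sampling period */
	if (read_counters(&l1, &r1))
		return 1;

	dl = l1 - l0;
	dr = r1 - r0;
	if (dl + dr)
		printf("locality: %llu%%\n", dl * 100 / (dl + dr));
	return 0;
}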

Sounds like a good idea to me :-) I will try to do that in the next version.

Regards,
Michael Wang

>
> Michal
>