Re: Re: [PATCH 3/3] mm: mmap_lock: add ip to mmap_lock tracepoints

From: Gang Li
Date: Fri Jul 30 2021 - 01:32:21 EST


Thanks! I have tried your suggestions. They are great, especially synthetic events.

If we don't print the ip per event, we can only guess which call site caused the contention from the "hitcount".

> (https://www.kernel.org/doc/html/latest/trace/histogram.html#synthetic-events)

But it seems that they only support histograms. Can I print the
synthetic event's args per event in /sys/kernel/debug/tracing/trace
like other events? I haven't found that in the kernel docs.
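
For context, roughly the recipe I mean (a rough, untested sketch adapted
from the wakeup_latency example in Documentation/trace/histogram.rst and
applied to the existing mmap_lock events; the synthetic event name and the
keys are just illustrative):

  # Define a synthetic event carrying the lock acquisition latency.
  echo 'mmap_lock_lat u64 lat' >> /sys/kernel/debug/tracing/synthetic_events

  # Save a per-pid timestamp when locking starts...
  echo 'hist:keys=common_pid:ts0=common_timestamp.usecs' > \
    /sys/kernel/debug/tracing/events/mmap_lock/mmap_lock_start_locking/trigger

  # ...and generate the synthetic event when the lock is acquired.
  echo 'hist:keys=common_pid:lat=common_timestamp.usecs-$ts0:onmatch(mmap_lock.mmap_lock_start_locking).trace(mmap_lock_lat,$lat)' > \
    /sys/kernel/debug/tracing/events/mmap_lock/mmap_lock_acquire_returned/trigger

  # So far I have only looked at the result as an aggregated histogram:
  echo 'hist:keys=lat' > \
    /sys/kernel/debug/tracing/events/synthetic/mmap_lock_lat/trigger
  cat /sys/kernel/debug/tracing/events/synthetic/mmap_lock_lat/hist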

On 7/30/21 1:33 AM, Axel Rasmussen wrote:
Not a strong objection, but I think this can be achieved already using either:

- The "stacktrace" feature which histogram triggers support
(https://www.kernel.org/doc/html/latest/trace/histogram.html)
- bpftrace's kstack/ustack feature
(https://github.com/iovisor/bpftrace/blob/master/docs/tutorial_one_liners.md#lesson-9-profile-on-cpu-kernel-stacks)
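
Something like this, for the first option (an untested sketch against the
existing mmap_lock_acquire_returned event; any of the mmap_lock events
would do):

  # Key the histogram on the kernel stack trace, so the contended call
  # paths show up without needing a per-event ip field.
  echo 'hist:keys=stacktrace' > \
    /sys/kernel/debug/tracing/events/mmap_lock/mmap_lock_acquire_returned/trigger
  cat /sys/kernel/debug/tracing/events/mmap_lock/mmap_lock_acquire_returned/hist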

I haven't tried it out myself, but I suspect you could construct a
synthetic event
(https://www.kernel.org/doc/html/latest/trace/histogram.html#synthetic-events)
which adds in the stack trace, then it ought to function a lot like it
would with this patch.

Then again, it's not like this change is huge by any means. So, if you
find this more convenient than those alternatives, you can take:

Reviewed-by: Axel Rasmussen <axelrasmussen@xxxxxxxxxx>

It's possible Steven or Tom have a stronger opinion on this though. ;)

On Thu, Jul 29, 2021 at 2:29 AM Gang Li <ligang.bdlg@xxxxxxxxxxxxx> wrote:

The mmap_lock is acquired on most (all?) mmap / munmap / page fault
operations, so a multi-threaded process which does a lot of these
can experience significant contention. Sometimes we want to know
where the lock is held, and that is hard to locate without collecting the ip.

Here's an example: TP_printk("ip=%pS",ip)
Log looks like this: "ip=do_user_addr_fault+0x274/0x640"
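
For instance, one could then read the contended call sites straight from
the per-event output (illustrative; the exact format depends on the final
tracepoint definitions):

  # Enable the mmap_lock events and read the per-event ip field added by
  # this patch.
  echo 1 > /sys/kernel/debug/tracing/events/mmap_lock/enable
  grep 'ip=' /sys/kernel/debug/tracing/trace
  #   ... mmap_lock_start_locking: ... ip=do_user_addr_fault+0x274/0x640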

We can then find out which call site causes the contention and make
improvements accordingly.

Signed-off-by: Gang Li <ligang.bdlg@xxxxxxxxxxxxx>