Re: [PATCH 6/8] trace_uprobe/sdt: Fix multiple update of same reference counter

From: Ravi Bangoria
Date: Fri Mar 16 2018 - 08:10:40 EST




On 03/15/2018 08:19 PM, Oleg Nesterov wrote:
> On 03/13, Ravi Bangoria wrote:
>> For tiny binaries/libraries, different mmap regions points to the
>> same file portion. In such cases, we may increment reference counter
>> multiple times.
> Yes,
>
>> But while de-registration, reference counter will get
>> decremented only by once
> could you explain why this happens? sdt_increment_ref_ctr() and
> sdt_decrement_ref_ctr() look symmetrical, _decrement_ should see
> the same mappings?

Sorry, I thought this happens only for tiny binaries, but that is not the case.
It happens for binaries / libraries of any size.

Also, it's not a problem with sdt_increment_ref_ctr() / sdt_decrement_ref_ctr().
The problem happens with trace_uprobe_mmap_callback().

To illustrate this in detail, I added a pr_info() in trace_uprobe_mmap_callback():

                vaddr = vma_offset_to_vaddr(vma, tu->ref_ctr_offset);
+               pr_info("0x%lx-0x%lx : 0x%lx\n", vma->vm_start, vma->vm_end, vaddr);
                sdt_update_ref_ctr(vma->vm_mm, vaddr, 1);


Ok now, libpython has SDT markers with reference counters:

  # readelf -n /usr/lib64/libpython2.7.so.1.0 | grep -A2 Provider
    Provider: python
    Name: function__entry
    ... Semaphore: 0x00000000002899d8

Probing that marker:

    # cd /sys/kernel/debug/tracing/
    # echo "p:sdt_python/function__entry /usr/lib64/libpython2.7.so.1.0:0x16a4d4(0x2799d8)" > uprobe_events
    # echo 1 > events/sdt_python/function__entry/enable

When I run python:

    # strace -o out python
      mmap(NULL, 2738968, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fff92460000
      mmap(0x7fff926a0000, 327680, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x230000) = 0x7fff926a0000
      mprotect(0x7fff926a0000, 65536, PROT_READ) = 0

The first mmap() maps the whole library into one region. The second mmap() and
the third mprotect() then split that region into smaller vmas and set the
appropriate protection flags.

Now, in this case, trace_uprobe_mmap_callback() updates the reference counter
twice -- once for the second mmap() call and once for the third mprotect() call --
because both resulting regions contain the reference counter offset. I can
verify this in dmesg:

    # dmesg | tail
      trace_kprobe: 0x7fff926a0000-0x7fff926f0000 : 0x7fff926e99d8
      trace_kprobe: 0x7fff926b0000-0x7fff926f0000 : 0x7fff926e99d8

Final vmas of libpython:

    # cat /proc/`pgrep python`/maps | grep libpython
      7fff92460000-7fff926a0000 r-xp 00000000 08:05 403934  /usr/lib64/libpython2.7.so.1.0
      7fff926a0000-7fff926b0000 r--p 00230000 08:05 403934  /usr/lib64/libpython2.7.so.1.0
      7fff926b0000-7fff926f0000 rw-p 00240000 08:05 403934  /usr/lib64/libpython2.7.so.1.0
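
Both vmas in those two dmesg lines cover the semaphore's file offset, which is
why the callback fires for each of them. Conceptually, the containment test is
something like this (a simplified sketch for illustration, not the exact
callback code from the patch):

    /* Does this vma map the file range containing the reference counter? */
    static bool vma_covers_ref_ctr(struct vm_area_struct *vma,
                                   loff_t ref_ctr_offset)
    {
            loff_t off_start = (loff_t)vma->vm_pgoff << PAGE_SHIFT;
            loff_t off_end = off_start + (vma->vm_end - vma->vm_start);

            return ref_ctr_offset >= off_start && ref_ctr_offset < off_end;
    }

With ref_ctr_offset = 0x2799d8, both the vma created by the second mmap()
(file offsets 0x230000 - 0x280000) and the rw vma left behind by the
mprotect() split (0x240000 - 0x280000) contain that offset, so the counter
gets incremented once for each.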


I see a similar problem with a normal binary as well. I'm using Brendan Gregg's
example[1]:

    # readelf -n /tmp/tick | grep -A2 Provider
      Provider: tick
      Name: loop2
      ... Semaphore: 0x000000001005003c

Probing that marker:

    # echo "p:sdt_tick/loop2 /tmp/tick:0x6e4(0x10036)" > uprobe_events
    # echo 1 > events/sdt_tick/loop2/enable

Now, when I run the binary:

    # /tmp/tick

load_elf_binary() internally calls mmap(), and I see trace_uprobe_mmap_callback()
updating the reference counter twice:

    # dmesg | tail
      trace_kprobe: 0x10010000-0x10030000 : 0x10020036
      trace_kprobe: 0x10020000-0x10030000 : 0x10020036

/proc/<pid>/maps of tick:

    # cat /proc/`pgrep tick`/maps
      10000000-10010000 r-xp 00000000 08:05 1335712  /tmp/tick
      10010000-10020000 r--p 00000000 08:05 1335712  /tmp/tick
      10020000-10030000 rw-p 00010000 08:05 1335712  /tmp/tick

[1] https://github.com/iovisor/bcc/issues/327#issuecomment-200576506


> Ether way, this patch doesn't look right at first glance... Just
> for example,
>
>> +static bool sdt_check_mm_list(struct trace_uprobe *tu, struct mm_struct *mm)
>> +{
>> + struct sdt_mm_list *tmp = tu->sml;
>> +
>> + if (!tu->sml || !mm)
>> + return false;
>> +
>> + while (tmp) {
>> + if (tmp->mm == mm)
>> + return true;
>> + tmp = tmp->next;
>> + }
>> +
>> + return false;
> ...
>
>> +}
>> +
>> +static void sdt_add_mm_list(struct trace_uprobe *tu, struct mm_struct *mm)
>> +{
>> + struct sdt_mm_list *tmp;
>> +
>> + tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);
>> + if (!tmp)
>> + return;
>> +
>> + tmp->mm = mm;
>> + tmp->next = tu->sml;
>> + tu->sml = tmp;
>> +}
>> +
> ...
>
>> @@ -1020,8 +1104,16 @@ void trace_uprobe_mmap_callback(struct vm_area_struct *vma)
>> !trace_probe_is_enabled(&tu->tp))
>> continue;
>>
>> + down_write(&tu->sml_rw_sem);
>> + if (sdt_check_mm_list(tu, vma->vm_mm))
>> + goto cont;
>> +
>> vaddr = vma_offset_to_vaddr(vma, tu->ref_ctr_offset);
>> - sdt_update_ref_ctr(vma->vm_mm, vaddr, 1);
>> + if (!sdt_update_ref_ctr(vma->vm_mm, vaddr, 1))
>> + sdt_add_mm_list(tu, vma->vm_mm);
>> +
>> +cont:
>> + up_write(&tu->sml_rw_sem);
> To simplify, suppose that tu->sml is empty.
>
> Some process calls this function, increments the counter and adds its ->mm into
> the list.
>
> Then it exits, ->mm is freed.
>
> The next fork/exec allocates the same memory for the new ->mm, the new process
> calls trace_uprobe_mmap_callback() and sdt_check_mm_list() returns T?

Yes, this can happen. Maybe we can use an mmu_notifier for this?

We register a release() callback from trace_uprobe while adding the mm to
tu->sml. When the mm gets freed, trace_uprobe will get notified.
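
Roughly, what I have in mind is something like the sketch below (untested;
sdt_del_mm_list() is a hypothetical helper that removes the entry from
tu->sml, and error handling for mmu_notifier_register() is omitted):

    #include <linux/mmu_notifier.h>

    struct sdt_mm_list {
            struct mmu_notifier mn;
            struct mm_struct *mm;
            struct trace_uprobe *tu;
            struct sdt_mm_list *next;
    };

    /* Called when the address space is being torn down. */
    static void sdt_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
    {
            struct sdt_mm_list *sml = container_of(mn, struct sdt_mm_list, mn);

            /*
             * Drop the dying mm from tu->sml so that a later fork/exec
             * which happens to reuse the same mm pointer can't match a
             * stale entry.
             */
            sdt_del_mm_list(sml->tu, mm);
    }

    static const struct mmu_notifier_ops sdt_mmu_notifier_ops = {
            .release = sdt_mm_release,
    };

    /* ... and in sdt_add_mm_list(), after allocating the entry: */
            sml->mn.ops = &sdt_mmu_notifier_ops;
            mmu_notifier_register(&sml->mn, mm);

That would tie list membership to the lifetime of the mm rather than to the
raw pointer value.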

Though, I don't know much about mmu_notifiers; I need to think more about this.
Let me know if you have better ideas.

Thanks for the review :)
Ravi