Re: [PATCH 1/1] sched/numa: Fix memory leak due to the overwritten vma->numab_state

From: Raghavendra K T
Date: Fri Nov 08 2024 - 23:04:20 EST


On 11/8/2024 7:01 PM, Adrian Huang wrote:
> From: Adrian Huang <ahuang12@xxxxxxxxxx>
>
> [Problem Description]
> When running the hackbench program of LTP, the following memory leak is
> reported by kmemleak.
>
> # /opt/ltp/testcases/bin/hackbench 20 thread 1000
> Running with 20*40 (== 800) tasks.
>
> # dmesg | grep kmemleak
> ...
> kmemleak: 480 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
> kmemleak: 665 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
>
> # cat /sys/kernel/debug/kmemleak
> unreferenced object 0xffff888cd8ca2c40 (size 64):
>   comm "hackbench", pid 17142, jiffies 4299780315
>   hex dump (first 32 bytes):
>     ac 74 49 00 01 00 00 00 4c 84 49 00 01 00 00 00  .tI.....L.I.....
>     00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>   backtrace (crc bff18fd4):
>     [<ffffffff81419a89>] __kmalloc_cache_noprof+0x2f9/0x3f0
>     [<ffffffff8113f715>] task_numa_work+0x725/0xa00
>     [<ffffffff8110f878>] task_work_run+0x58/0x90
>     [<ffffffff81ddd9f8>] syscall_exit_to_user_mode+0x1c8/0x1e0
>     [<ffffffff81dd78d5>] do_syscall_64+0x85/0x150
>     [<ffffffff81e0012b>] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> ...
>
> This issue can be consistently reproduced on three different servers:
> * a 448-core server
> * a 256-core server
> * a 192-core server
>
> [Root Cause]
> Since multiple threads are created by the hackbench program (along with
> the command argument 'thread'), a shared vma might be accessed by two or
> more cores simultaneously. When two or more cores observe that
> vma->numab_state is NULL at the same time, vma->numab_state will be
> overwritten.


Thanks for reporting.

IIRC, this is not the entire scenario. The chunk above the vma->numab_state
code ideally ensures that only one thread descends down to scan the VMAs
in a single 'numa_scan_period':

	/* Only one thread advances mm->numa_next_scan per scan period. */
	migrate = mm->numa_next_scan;
	if (time_before(now, migrate))
		return;

	next_scan = now + msecs_to_jiffies(p->numa_scan_period);
	if (!try_cmpxchg(&mm->numa_next_scan, &migrate, next_scan))
		return;

However, since there are 800 threads, I see there might be an opportunity
for another thread to enter in the next 'numa_scan_period' while we have
not yet reached the numab_state allocation.
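
To illustrate, here is a simplified sketch of the allocation site in
task_numa_work() (kernel/sched/fair.c); this is not the literal upstream
code, and the surrounding scan logic is omitted. Two threads, each having
won the numa_next_scan cmpxchg in a different scan period, can both
observe vma->numab_state == NULL:

	if (!vma->numab_state) {
		/* Threads A and B can both observe NULL here... */
		vma->numab_state = kzalloc(sizeof(struct vma_numab_state),
					   GFP_KERNEL);
		if (!vma->numab_state)
			continue;

		/*
		 * ...and whichever assignment lands second silently
		 * overwrites the first allocation, which is then never
		 * freed -- matching the kmemleak backtrace above.
		 */
	}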

There should be simpler ways to overcome this, as Vlastimil already
pointed out in the other thread; taking a lock is overkill.

For example:

	numab_state = kzalloc(...);

If we see that some other thread was able to successfully assign
vma->numab_state with its allocation (detected via cmpxchg), simply free
our own allocation, roughly as sketched below.
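
Something along these lines (a rough, untested sketch against the
task_numa_work() allocation site; error paths abbreviated):

	if (!vma->numab_state) {
		struct vma_numab_state *ptr;

		ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);
		if (!ptr)
			continue;

		/*
		 * Publish atomically: only one thread's pointer is
		 * installed; a loser frees its own copy instead of
		 * overwriting (and thus leaking) the winner's.
		 */
		if (cmpxchg(&vma->numab_state, NULL, ptr)) {
			kfree(ptr);
			continue;
		}
	}

This keeps the common path lock-free and only costs a kfree() in the
rare losing case.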

Can you please check if my understanding is correct?

Thanks
- Raghu

[...]