Re: [PATCH] bpf-next: Prevent out of bound buffer write in __bpf_get_stack
From: Yonghong Song
Date: Mon Jan 05 2026 - 00:50:48 EST
On 1/4/26 12:52 PM, Arnaud Lecomte wrote:
Syzkaller reported a KASAN slab-out-of-bounds write in __bpf_get_stack()
during stack trace copying.
The issue occurs when the callchain entry (stored as a per-cpu variable)
grows between collection and the buffer copy, causing it to exceed the
buffer size initially calculated from max_depth.
The callchain collection intentionally avoids locking for performance
reasons, but this creates a window where concurrent modifications can
occur during the copy operation.
To prevent this, clamp the trace length to the max depth that was
initially calculated from the buffer size and the size of a trace
entry.
Reported-by: syzbot+d1b7fa1092def3628bd7@xxxxxxxxxxxxxxxxxxxxxxxxx
Closes: https://lore.kernel.org/all/691231dc.a70a0220.22f260.0101.GAE@xxxxxxxxxx/T/
Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
Tested-by: syzbot+d1b7fa1092def3628bd7@xxxxxxxxxxxxxxxxxxxxxxxxx
Cc: Brahmajit Das <listout@xxxxxxxxxxx>
Signed-off-by: Arnaud Lecomte <contact@xxxxxxxxxxxxxx>
LGTM.
Acked-by: Yonghong Song <yonghong.song@xxxxxxxxx>