Re: [PATCH bpf-next v3] bpf: Clamp trace length in __bpf_get_stack to fix OOB write

From: Lecomte, Arnaud

Date: Wed Nov 12 2025 - 03:50:10 EST


I am not sure this is the right solution, and I am concerned that by
forcing this clamping we are hiding something else.
If we have a look at the code below:
```
if (trace_in) {
	trace = trace_in;
	trace->nr = min_t(u32, trace->nr, max_depth);
} else if (kernel && task) {
	trace = get_callchain_entry_for_task(task, max_depth);
} else {
	trace = get_perf_callchain(regs, kernel, user, max_depth,
				   crosstask, false, 0);
}
```
trace should (if I remember correctly) already be clamped there. If it
is not, the clamp in this patch might be hiding something else. I would
like to look at the return value of each branch through gdb.
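For illustration, the kind of clamp I have in mind would simply mirror
the trace_in branch after the other two calls; a sketch only, not a
tested change, assuming trace can still be NULL at that point:
```
/* Sketch: clamp the other two branches the same way as trace_in. */
if (trace)
	trace->nr = min_t(u32, trace->nr, max_depth);
```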

On 11/11/2025 08:12, Brahmajit Das wrote:
syzbot reported a stack-out-of-bounds write in __bpf_get_stack()
triggered via bpf_get_stack() when capturing a kernel stack trace.

After the recent refactor that introduced stack_map_calculate_max_depth(),
the code in stack_map_get_build_id_offset() (and related helpers) stopped
clamping the number of trace entries (`trace_nr`) to the number of elements
that fit into the stack map value (`num_elem`).

As a result, if the captured stack contains more frames than the map value
can hold, the subsequent memcpy() writes past the end of the buffer,
triggering a KASAN report like:

BUG: KASAN: stack-out-of-bounds in __bpf_get_stack+0x...
Write of size N at addr ... by task syz-executor...

Restore the missing clamp by limiting `trace_nr` to `num_elem` before
computing the copy length. This mirrors the pre-refactor logic and ensures
we never copy more bytes than the destination buffer can hold.

No functional change intended beyond reintroducing the missing bound check.

Reported-by: syzbot+d1b7fa1092def3628bd7@xxxxxxxxxxxxxxxxxxxxxxxxx
Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
Signed-off-by: Brahmajit Das <listout@xxxxxxxxxxx>
---
Changes in v3:
Revert to the num_elem-based logic for setting trace_nr, as suggested
by the bpf-ci bot, which pointed out a possible underflow when
max_depth < skip.

Quoting the bot's reply:
The stack_map_calculate_max_depth() function can return a value less than
skip when sysctl_perf_event_max_stack is lowered below the skip value:

max_depth = size / elem_size;
max_depth += skip;
if (max_depth > curr_sysctl_max_stack)
	return curr_sysctl_max_stack;

If sysctl_perf_event_max_stack = 10 and skip = 20, this returns 10.

Then max_depth - skip = 10 - 20 underflows to 4294967286 (u32 wraps),
causing min_t() to not limit trace_nr at all. This means the original OOB
write is not fixed in cases where skip > max_depth.

With the default sysctl_perf_event_max_stack = 127 and skip values up to
255, this scenario is reachable even without an admin changing any sysctl.
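The wrap is easy to see in isolation; a minimal sketch using the
hypothetical values from the bot's example:
```
u32 max_depth = 10, skip = 20, trace_nr = 50;

/* 10 - 20 wraps to 4294967286 in u32 arithmetic, so min_t()
 * picks trace_nr (50) and no limiting happens at all.
 */
trace_nr = min_t(u32, trace_nr, max_depth - skip);
```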

Changes in v2:
- Use max_depth instead of the num_elem logic; this is similar to what
we already do in __bpf_get_stackid()
Link: https://lore.kernel.org/all/20251111003721.7629-1-listout@xxxxxxxxxxx/

Changes in v1:
- RFC patch that clamps the number of trace entries by setting
trace_nr to the smaller of trace_nr and num_elem.
Link: https://lore.kernel.org/all/20251110211640.963-1-listout@xxxxxxxxxxx/
---
kernel/bpf/stackmap.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 2365541c81dd..cef79d9517ab 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -426,7 +426,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 			   struct perf_callchain_entry *trace_in,
 			   void *buf, u32 size, u64 flags, bool may_fault)
 {
-	u32 trace_nr, copy_len, elem_size, max_depth;
+	u32 trace_nr, copy_len, elem_size, num_elem, max_depth;
 	bool user_build_id = flags & BPF_F_USER_BUILD_ID;
 	bool crosstask = task && task != current;
 	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
@@ -480,6 +480,8 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	}
 	trace_nr = trace->nr - skip;
+	num_elem = size / elem_size;
+	trace_nr = min_t(u32, trace_nr, num_elem);
 	copy_len = trace_nr * elem_size;
 	ips = trace->ip + skip;
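For concreteness, a worked example with hypothetical numbers showing how
the restored clamp bounds the copy:
```
u32 size = 64, elem_size = 8;              /* hypothetical map value size */
u32 num_elem = size / elem_size;           /* 8 entries fit in the value  */
u32 trace_nr = 30 - 2;                     /* trace->nr = 30, skip = 2    */

trace_nr = min_t(u32, trace_nr, num_elem); /* 28 clamped to 8             */
/* copy_len = trace_nr * elem_size = 64 bytes, exactly the buffer size.   */
```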

Thanks,
Arnaud