Re: [PATCH bpf-next v2] bpf: Clamp trace length in __bpf_get_stack to fix OOB write

From: bot+bpf-ci
Date: Mon Nov 10 2025 - 20:06:03 EST


> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index 2365541c8..f9081de43 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
> @@ -480,6 +480,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
> }
>
> trace_nr = trace->nr - skip;
> + trace_nr = min_t(u32, trace_nr, max_depth - skip);
> copy_len = trace_nr * elem_size;
>
> ips = trace->ip + skip;

Can max_depth - skip underflow when max_depth < skip?

The stack_map_calculate_max_depth() function can return a value less than
skip when sysctl_perf_event_max_stack is lowered below the skip value:

max_depth = size / elem_size;
max_depth += skip;
if (max_depth > curr_sysctl_max_stack)
return curr_sysctl_max_stack;

If sysctl_perf_event_max_stack = 10 and skip = 20, this returns 10.

Then max_depth - skip = 10 - 20 wraps around to 4294967286 (u32
underflow), so min_t() places no effective bound on trace_nr. The original
OOB write therefore remains unfixed whenever skip > max_depth.

With the default sysctl_perf_event_max_stack of 127 and skip values up to
255, this scenario is reachable even without an admin lowering the sysctl.

The pre-refactor code used:

num_elem = size / elem_size;
trace_nr = (trace_nr <= num_elem) ? trace_nr : num_elem;

Perhaps the fix should directly use num_elem instead of max_depth - skip:

u32 num_elem = size / elem_size;
trace_nr = min_t(u32, trace_nr, num_elem);

Or check for underflow:

if (max_depth > skip)
trace_nr = min_t(u32, trace_nr, max_depth - skip);
else
trace_nr = 0;


---
AI reviewed your patch. Please fix the bug or reply to this email
explaining why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/19251115736