Re: [PATCH bpf-next v2 2/2] bpf: Add preempt disable for bpf_get_stackid

From: Tao Chen

Date: Fri Feb 06 2026 - 05:00:29 EST


On 2026/2/6 17:34, bot+bpf-ci@xxxxxxxxxx wrote:
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 1b100a03ef2e..d263f851f084 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c

[ ... ]

@@ -280,12 +297,22 @@ static long __bpf_get_stackid(struct bpf_map *map,
/* for build_id+offset, pop a bucket before slow cmp */
new_bucket = (struct stack_map_bucket *)
pcpu_freelist_pop(&smap->freelist);
- if (unlikely(!new_bucket))
- return -ENOMEM;
+ if (unlikely(!new_bucket)) {
+ ret = -ENOMEM;
+ goto go_out;
+ }
new_bucket->nr = trace_nr;
id_offs = (struct bpf_stack_build_id *)new_bucket->data;
for (i = 0; i < trace_nr; i++)
id_offs[i].ip = ips[i];
+
+ /*
+ * It is safe after perf callchain ips copied to bucket buffer
+ * to reduce the length of preempt section, we can enable preempt here.
+ */
+ if (!trace_in)
+ preempt_enable();
+

When the stack_map_use_build_id() path falls through to insert a new entry
(neither the hash_matches early return nor the -EEXIST early return is
taken), this preempt_enable() runs, and then the code continues to go_out,
where preempt_enable() is called again.

The sequence when trace_in == NULL:

preempt_disable() // entering __bpf_get_stackid
...
if (!trace_in)
preempt_enable(); // here, after copying IPs
...
// fall through to insert new entry
new_bucket->hash = hash;
...
xchg(&smap->buckets[id], new_bucket);
...
go_out:
if (!trace_in)
preempt_enable(); // second enable - imbalanced

The two direct returns (return id and return -EEXIST) after the early
preempt_enable() correctly bypass go_out. However, when inserting a new
stack trace (the normal case when no exact match exists and the bucket
can be reused), the code falls through to go_out and calls preempt_enable()
a second time.


Will fix it in v3, and wait for other comments from the maintainers.

Could this cause preempt_count underflow when using BPF_F_STACK_BUILD_ID
maps with bpf_get_stackid()?

@@ -317,31 +349,21 @@ static long __bpf_get_stackid(struct bpf_map *map,
old_bucket = xchg(&smap->buckets[id], new_bucket);
if (old_bucket)
pcpu_freelist_push(&smap->freelist, &old_bucket->fnode);
- return id;
+
+go_out:
+ if (!trace_in)
+ preempt_enable();
+ return ret;
}


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21745231308

AI-authorship-score: low
AI-authorship-explanation: The commit shows typical human patterns including a control-flow bug from manual refactoring that AI would likely catch, combined with natural but slightly awkward phrasing in comments.
issues-found: 1
issue-severity-score: high
issue-severity-explanation: Preempt count underflow in the build_id path causes kernel warnings and potential system instability when inserting new stack traces.


--
Best Regards
Tao Chen