Re: [PATCH bpf v4 1/2] bpf: Fix use-after-free of arena VMA on fork
From: Alexei Starovoitov
Date: Mon Apr 13 2026 - 14:54:01 EST
On Mon, Apr 13, 2026 at 3:12 AM Weiming Shi <bestswngs@xxxxxxxxx> wrote:
>
> On 26-04-12 14:30, Alexei Starovoitov wrote:
> > On Sun, Apr 12, 2026 at 10:50 AM Emil Tsalapatis <emil@xxxxxxxxxxxxxxx> wrote:
> > >
> > > On Sat Apr 11, 2026 at 10:27 PM EDT, Weiming Shi wrote:
> > > > arena_vm_open() only increments a refcount on the shared vma_list entry
> > > > but never registers the new VMA or updates the stored vma pointer. When
> > > > the original VMA is unmapped while a forked copy still exists,
> > > > arena_vm_close() drops the refcount without freeing the vma_list entry.
> > > > The entry's vma pointer now refers to a freed vm_area_struct. A
> > > > subsequent bpf_arena_free_pages() call iterates vma_list and passes
> > > > the dangling pointer to zap_page_range_single(), causing a
> > > > use-after-free.
> > > >
> > > > The bug is reachable by any process with CAP_BPF and CAP_PERFMON that
> > > > can create a BPF_MAP_TYPE_ARENA, mmap it, and fork. It triggers
> > > > deterministically -- no race condition is involved.
> > > >
> > > > BUG: KASAN: slab-use-after-free in zap_page_range_single (mm/memory.c:2234)
> > > > Call Trace:
> > > > <TASK>
> > > > zap_page_range_single+0x101/0x110 mm/memory.c:2234
> > > > zap_pages+0x80/0xf0 kernel/bpf/arena.c:658
> > > > arena_free_pages+0x67a/0x860 kernel/bpf/arena.c:712
> > > > bpf_prog_test_run_syscall+0x3da net/bpf/test_run.c:1640
> > > > __sys_bpf+0x1662/0x50b0 kernel/bpf/syscall.c:6267
> > > > __x64_sys_bpf+0x73/0xb0 kernel/bpf/syscall.c:6360
> > > > do_syscall_64+0xf1/0x530 arch/x86/entry/syscall_64.c:63
> > > > entry_SYSCALL_64_after_hwframe+0x77 arch/x86/entry/entry_64.S:130
> > > > </TASK>
> > > >
> > > > Fix this by tracking each child VMA separately. arena_vm_open() now
> > > > clears the inherited vm_private_data and calls remember_vma() to
> > > > register a fresh vma_list entry for the new VMA. If remember_vma()
> > > > fails due to OOM, vm_private_data stays NULL and arena_vm_close()
> > > > skips the cleanup for that VMA. The shared refcount is no longer
> > > > needed and is removed.
> > > >
> > > > Also add arena_vm_may_split() returning -EINVAL to prevent VMA
> > > > splitting, so that arena_vm_open() only needs to handle fork and the
> > > > vma_list tracking stays simple.
> > > >
> > > > Fixes: b90d77e5fd78 ("bpf: Fix remap of arena.")
> > > > Reported-by: Xiang Mei <xmei5@xxxxxxx>
> > > > Signed-off-by: Weiming Shi <bestswngs@xxxxxxxxx>
> > > > ---
> > > > kernel/bpf/arena.c | 23 +++++++++++++++++------
> > > > 1 file changed, 17 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
> > > > index f355cf1c1a16..3462c4463617 100644
> > > > --- a/kernel/bpf/arena.c
> > > > +++ b/kernel/bpf/arena.c
> > > > @@ -317,7 +317,6 @@ static u64 arena_map_mem_usage(const struct bpf_map *map)
> > > > struct vma_list {
> > > > struct vm_area_struct *vma;
> > > > struct list_head head;
> > > > - refcount_t mmap_count;
> > > > };
> > > >
> > > > static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
> > > > @@ -327,7 +326,6 @@ static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
> > > > vml = kmalloc_obj(*vml);
> > > > if (!vml)
> > > > return -ENOMEM;
> > > > - refcount_set(&vml->mmap_count, 1);
> > > > vma->vm_private_data = vml;
> > > > vml->vma = vma;
> > > > list_add(&vml->head, &arena->vma_list);
> > > > @@ -336,9 +334,17 @@ static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
> > > >
> > > > static void arena_vm_open(struct vm_area_struct *vma)
> > > > {
> > > > - struct vma_list *vml = vma->vm_private_data;
> > > > + struct bpf_map *map = vma->vm_file->private_data;
> > > > + struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
> > > >
> > > > - refcount_inc(&vml->mmap_count);
> > > > + /*
> > > > + * vm_private_data points to the parent's vma_list entry after fork.
> > > > + * Clear it and register this VMA separately.
> > > > + */
> > > > + vma->vm_private_data = NULL;
> > > > + guard(mutex)(&arena->lock);
> > > > + /* OOM is silently ignored; arena_vm_close() handles NULL. */
> > >
> > > I don't see how this approach can work, and frankly it makes no sense
> > > to me. This patch doesn't take into account how the vma_list is
> > > actually used. Please think it through: if we could silently just not
> > > allocate the vml, why do we need it in the first place?
> >
> > +1
> >
> > Weiming,
> >
> > you should stop trusting AI so blindly.
> > First, analyze the root cause (the first paragraph of the commit log).
> > Is this really the case?
> >
> > Second, I copy-pasted it into claude and got the same "fix" back,
> > but implemented without your bug:
> > + vml = kmalloc_obj(*vml);
> > + if (!vml) {
> > + vma->vm_private_data = NULL;
> > + return;
> > + }
> > + vml->vma = vma;
> > + vma->vm_private_data = vml;
> > + guard(mutex)(&arena->lock);
> > + list_add(&vml->head, &arena->vma_list);
> >
> > at least this part kinda makes sense...
> >
> > and, of course, this part too:
> >
> > - if (!refcount_dec_and_test(&vml->mmap_count))
> > + if (!vml)
> > return;
> >
> > when you look at it you MUST ask the AI back:
> > "Is this buggy?"
> >
> > and it will reply:
> > "
> > Right — silently dropping the VMA from the list means zap_pages()
> > won't unmap pages from it, which is a correctness problem, not just
> > degraded behavior. Since vm_open can't fail, the allocation should use
> > __GFP_NOFAIL. The struct is tiny so that's fine.
> > "
> >
> > and it proceeded adding __GFP_NOFAIL.
> >
> > which is wrong too.
> >
> > So please don't just throw broken patches at maintainers.
> > Do your homework. Fixing one maybe-bug and introducing
> > more real bugs is not a step forward.
> >
> > pw-bot: cr
>
> Thanks for the detailed review, really appreciate it.
>
> I traced through it with GDB + KASAN in QEMU. Here's what happens:
>
> 1. mmap → remember_vma()
> vml->vma = 0xffff88800abfe700, mmap_count = 1
> parent VMA is now 0xffff88800abfe700
> 2. fork → arena_vm_open(child_vma)
> vml->vma = 0xffff88800abfe700 (unchanged), mmap_count = 2
>
> 3. parent munmap → arena_vm_close(parent_vma)
> mmap_count = 1
> vml->vma is now dangling
>
> 4. child bpf_arena_free_pages → zap_pages()
> reads vml->vma = 0xffff88800abfe700 → UAF
>
> The core issue is that arena_vm_open() never registers the child
> VMA -- it only bumps mmap_count. So vml->vma always points at
> the parent, and dangles once the parent unmaps.
>
> What approach would you suggest for fixing this?
I'm thinking of just doing this:
diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
index f355cf1c1a16..a4f1df1bf0f4 100644
--- a/kernel/bpf/arena.c
+++ b/kernel/bpf/arena.c
@@ -489,7 +489,7 @@ static int arena_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
* clears VM_MAYEXEC. Set VM_DONTEXPAND as well to avoid
* potential change of user_vm_start.
*/
- vm_flags_set(vma, VM_DONTEXPAND);
+ vm_flags_set(vma, VM_DONTEXPAND | VM_DONTCOPY);
vma->vm_ops = &arena_vm_ops;
return 0;
}