Re: [PATCH v2] perf/core: Fix refcount bug and potential UAF in perf_mmap
From: Ian Rogers
Date: Fri Mar 06 2026 - 14:04:35 EST
On Fri, Mar 6, 2026 at 1:37 AM Haocheng Yu <yuhaocheng035@xxxxxxxxx> wrote:
>
> That makes a lot of sense; a self-deadlock is indeed possible.
>
> I tried updating my patch by splitting out a `perf_mmap_close_locked`
> variant of `perf_mmap_close` that handles the case where
> event->mmap_mutex is already held on entry. But this approach isn't
> very concise, and I'm not sure whether it changes the original logic
> in some unexpected way. On the other hand, releasing the mutex before
> perf_mmap_close finishes executing might reintroduce the original
> race condition, which puts me in a dilemma.
>
> Do you have any suggestions?
With the:
```
+		if (ret)
+			perf_mmap_close_locked(vma);
```
wouldn't moving it outside the "scoped_guard(mutex,
&event->mmap_mutex)" block be a fix?
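Something like this (untested sketch, just to illustrate the shape I
have in mind: the map_range() error is recorded inside the guard, and
the cleanup runs after the guard has dropped event->mmap_mutex, so
perf_mmap_close() can retake the mutex without self-deadlocking):

```
	scoped_guard (mutex, &event->mmap_mutex) {
		[...]
		/* On failure, fall out of the guard before cleaning up. */
		ret = map_range(event->rb, vma);
	}

	if (ret)
		perf_mmap_close(vma);

	return ret;
```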
Thanks,
Ian
> Thanks,
> Haocheng
>
>
>
> > On Mon, Feb 2, 2026 at 8:30 AM <yuhaocheng035@xxxxxxxxx> wrote:
> > >
> > > From: Haocheng Yu <yuhaocheng035@xxxxxxxxx>
> > >
> > > Syzkaller reported a refcount_t: addition on 0; use-after-free warning
> > > in perf_mmap.
> > >
> > > The issue is caused by a race condition between a failing mmap() setup
> > > and a concurrent mmap() on a dependent event (e.g., using output
> > > redirection).
> > >
> > > In perf_mmap(), the ring_buffer (rb) is allocated and assigned to
> > > event->rb with the mmap_mutex held. The mutex is then released to
> > > perform map_range().
> > >
> > > If map_range() fails, perf_mmap_close() is called to clean up.
> > > However, since the mutex was dropped, another thread attaching to
> > > this event (via inherited events or output redirection) can acquire
> > > the mutex, observe the valid event->rb pointer, and attempt to
> > > increment its reference count. If the cleanup path has already
> > > dropped the reference count to zero, this results in a
> > > use-after-free or refcount saturation warning.
> > >
> > > Fix this by extending the scope of mmap_mutex to cover the
> > > map_range() call. This ensures that the ring buffer initialization
> > > and mapping (or cleanup on failure) happen atomically, effectively
> > > preventing other threads from accessing a half-initialized or
> > > dying ring buffer.
> >
> > As perf_mmap_close is now called inside the guarded region, is there
> > potential for self deadlock?
> >
> > In perf_mmap it is now calling perf_mmap_close holding the event->mmap_mutex:
> > ```
> > 	scoped_guard (mutex, &event->mmap_mutex) {
> > 		[...]
> > 		ret = map_range(event->rb, vma);
> > 		if (ret)
> > 			perf_mmap_close(vma);
> > 	}
> > ```
> > and in perf_mmap_close the mutex will be taken again:
> > ```
> > static void perf_mmap_close(struct vm_area_struct *vma)
> > {
> > 	struct perf_event *event = vma->vm_file->private_data;
> > 	[...]
> > 	if (!refcount_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex))
> > 		goto out_put;
> > ```
> >
> > Thanks,
> > Ian
> >
> > > Reported-by: kernel test robot <lkp@xxxxxxxxx>
> > > Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@xxxxxxxxx/
> > > Signed-off-by: Haocheng Yu <yuhaocheng035@xxxxxxxxx>
> > > ---
> > > kernel/events/core.c | 38 +++++++++++++++++++-------------------
> > > 1 file changed, 19 insertions(+), 19 deletions(-)
> > >
> > > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > > index 2c35acc2722b..abefd1213582 100644
> > > --- a/kernel/events/core.c
> > > +++ b/kernel/events/core.c
> > > @@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
> > >  		ret = perf_mmap_aux(vma, event, nr_pages);
> > >  		if (ret)
> > >  			return ret;
> > > -	}
> > >
> > > -	/*
> > > -	 * Since pinned accounting is per vm we cannot allow fork() to copy our
> > > -	 * vma.
> > > -	 */
> > > -	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
> > > -	vma->vm_ops = &perf_mmap_vmops;
> > > +		/*
> > > +		 * Since pinned accounting is per vm we cannot allow fork() to copy our
> > > +		 * vma.
> > > +		 */
> > > +		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
> > > +		vma->vm_ops = &perf_mmap_vmops;
> > >
> > > -	mapped = get_mapped(event, event_mapped);
> > > -	if (mapped)
> > > -		mapped(event, vma->vm_mm);
> > > +		mapped = get_mapped(event, event_mapped);
> > > +		if (mapped)
> > > +			mapped(event, vma->vm_mm);
> > >
> > > -	/*
> > > -	 * Try to map it into the page table. On fail, invoke
> > > -	 * perf_mmap_close() to undo the above, as the callsite expects
> > > -	 * full cleanup in this case and therefore does not invoke
> > > -	 * vmops::close().
> > > -	 */
> > > -	ret = map_range(event->rb, vma);
> > > -	if (ret)
> > > -		perf_mmap_close(vma);
> > > +		/*
> > > +		 * Try to map it into the page table. On fail, invoke
> > > +		 * perf_mmap_close() to undo the above, as the callsite expects
> > > +		 * full cleanup in this case and therefore does not invoke
> > > +		 * vmops::close().
> > > +		 */
> > > +		ret = map_range(event->rb, vma);
> > > +		if (ret)
> > > +			perf_mmap_close(vma);
> > > +	}
> > >
> > >  	return ret;
> > >  }
> > >
> > > base-commit: 7d0a66e4bb9081d75c82ec4957c50034cb0ea449
> > > --
> > > 2.51.0
> > >
> > >