Re: [syzbot] [trace?] WARNING in tracing_buffers_mmap_close (3)
From: Steven Rostedt
Date: Thu Feb 26 2026 - 13:14:55 EST
On Thu, 26 Feb 2026 17:16:57 +0800
Qing Wang <wangqing7171@xxxxxxxxx> wrote:
> #syz test
>
> diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
> index 876358cfe1b1..07f5127c8255 100644
> --- a/include/linux/ring_buffer.h
> +++ b/include/linux/ring_buffer.h
> @@ -248,6 +248,7 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node);
>
> int ring_buffer_map(struct trace_buffer *buffer, int cpu,
> struct vm_area_struct *vma);
> +void ring_buffer_map_user_mapped_inc(struct trace_buffer *buffer, int cpu);
> int ring_buffer_unmap(struct trace_buffer *buffer, int cpu);
> int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu);
> #endif /* _LINUX_RING_BUFFER_H */
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index f16f053ef77d..59516b89e612 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -7310,6 +7310,30 @@ int ring_buffer_map(struct trace_buffer *buffer, int cpu,
> return err;
> }
>
> +/**
> + * ring_buffer_map_user_mapped_inc - Increment user_mapped counter for VMA duplication
> + * @buffer: The ring buffer
> + * @cpu: The CPU of the ring buffer to increment
> + *
> + * This is called when a VMA is duplicated (e.g., on fork()) to increment
> + * the user_mapped counter without remapping pages.
OK, so the issue is that the ring buffer was mapped, then the process that
mapped it forked, duplicating the mapping. On exit (or munmap), the first
task to unmap the buffer makes the ring buffer think it was fully unmapped,
so the second task's unmap triggers the warning.
> + */
> +void ring_buffer_map_user_mapped_inc(struct trace_buffer *buffer, int cpu)
Let's call this ring_buffer_map_dup() to be consistent with ring_buffer_map().
An "inc" would suggest a matching "dec", but dup() better describes what this
is actually doing.
> +{
> + struct ring_buffer_per_cpu *cpu_buffer;
> +
> + if (!cpumask_test_cpu(cpu, buffer->cpumask))
> + return;
I wonder if we should warn when this check fails, as this function should
never be called unless the buffer was successfully mapped.
> +
> + cpu_buffer = buffer->buffers[cpu];
> +
> + guard(mutex)(&cpu_buffer->mapping_lock);
> +
> + if (cpu_buffer->user_mapped)
> + __rb_inc_dec_mapped(cpu_buffer, true);
We should probably also warn if user_mapped is not set. Again, the buffer
should always be mapped if we get here.
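Putting the rename and both warnings together, something like this
(completely untested, just to illustrate):

```c
void ring_buffer_map_dup(struct trace_buffer *buffer, int cpu)
{
	struct ring_buffer_per_cpu *cpu_buffer;

	/* Should only be called on a buffer that was successfully mapped */
	if (WARN_ON_ONCE(!cpumask_test_cpu(cpu, buffer->cpumask)))
		return;

	cpu_buffer = buffer->buffers[cpu];

	guard(mutex)(&cpu_buffer->mapping_lock);

	/* Likewise, user_mapped must already be set here */
	if (!WARN_ON_ONCE(!cpu_buffer->user_mapped))
		__rb_inc_dec_mapped(cpu_buffer, true);
}
```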
-- Steve
> +}
> +EXPORT_SYMBOL_GPL(ring_buffer_map_user_mapped_inc);
> +
> int ring_buffer_unmap(struct trace_buffer *buffer, int cpu)
> {
> struct ring_buffer_per_cpu *cpu_buffer;
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 23de3719f495..b2ab95ed8d41 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -8213,6 +8213,14 @@ static inline int get_snapshot_map(struct trace_array *tr) { return 0; }
> static inline void put_snapshot_map(struct trace_array *tr) { }
> #endif
>
> +static void tracing_buffers_mmap_open(struct vm_area_struct *vma)
> +{
> + struct ftrace_buffer_info *info = vma->vm_file->private_data;
> + struct trace_iterator *iter = &info->iter;
> +
> + ring_buffer_map_user_mapped_inc(iter->array_buffer->buffer, iter->cpu_file);
> +}
> +
> static void tracing_buffers_mmap_close(struct vm_area_struct *vma)
> {
> struct ftrace_buffer_info *info = vma->vm_file->private_data;
> @@ -8232,6 +8240,7 @@ static int tracing_buffers_may_split(struct vm_area_struct *vma, unsigned long a
> }
>
> static const struct vm_operations_struct tracing_buffers_vmops = {
> + .open = tracing_buffers_mmap_open,
> .close = tracing_buffers_mmap_close,
> .may_split = tracing_buffers_may_split,
> };