Re: [PATCH v2] tracing: Do not allow mmap() of persistent ring buffer

From: David Laight
Date: Fri Feb 14 2025 - 12:19:07 EST


On Fri, 14 Feb 2025 11:55:47 -0500
Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:

> From: Steven Rostedt <rostedt@xxxxxxxxxxx>
....
> The reason was that the code that maps the ring buffer pages to user space
> has:
>
> page = virt_to_page((void *)cpu_buffer->subbuf_ids[s]);
...
> But virt_to_page() does not work with vmap()'d memory which is what the
> persistent ring buffer has. It is rather trivial to allow this, but for
> now just disable mmap() of instances that have their ring buffer from the
> reserve_mem option.

I've recently fallen foul of the same issue elsewhere [1].
Perhaps there ought to be a variant of virt_to_page() that returns an
error for addresses outside the kernel's linear map.
Or even a fast unchecked version, for the places where the cost of the
additional conditional would matter.
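
Something along these lines, say (untested sketch; the name
virt_to_page_checked() is made up here, but virt_addr_valid() already
exists for testing whether an address is in the linear map):

	#include <linux/mm.h>

	/*
	 * Checking variant of virt_to_page(): return NULL instead of a
	 * bogus struct page pointer for addresses outside the linear
	 * map (e.g. vmalloc()/vmap() addresses).
	 */
	static inline struct page *virt_to_page_checked(const void *addr)
	{
		if (!virt_addr_valid(addr))
			return NULL;

		return virt_to_page(addr);
	}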

Even a kernel panic from dereferencing an unchecked NULL pointer would
be easier to diagnose than the current situation.
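
FWIW I assume the "rather trivial to allow this" fix for the ring buffer
amounts to the usual pattern of dispatching on the address type, roughly
(again untested, just the shape of it, with 'addr' standing for the
sub-buffer address in question):

	#include <linux/mm.h>
	#include <linux/vmalloc.h>

	struct page *page;

	/*
	 * vmalloc()/vmap()'d addresses need vmalloc_to_page();
	 * only linear-map addresses may go through virt_to_page().
	 */
	if (is_vmalloc_addr(addr))
		page = vmalloc_to_page(addr);
	else
		page = virt_to_page(addr);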

[1] In my case it was dma_alloc_coherent() returning vmalloc()'d memory
when an iommu is enabled, and then the wrong thing happening when I tried
to mmap() the buffer into userspace (at an offset into both the dma buffer
and the user file).
I do still need to check that the iommu is honouring the buffer alignment.

David