Re: [PATCH v5 1/2] ring-buffer: Introducing ring-buffer mapping functions

From: Vincent Donnefort
Date: Thu Aug 03 2023 - 06:34:09 EST


[...]

> > And on the kernel side, just a function to update the "writer fields" of the
> > meta-page:
> >
> > static void rb_wake_up_waiters(struct irq_work *work)
> > {
> > 	struct rb_irq_work *rbwork = container_of(work, struct rb_irq_work, work);
> > +	struct ring_buffer_per_cpu *cpu_buffer =
> > +		container_of(rbwork, struct ring_buffer_per_cpu, irq_work);
> > +
> > +	rb_update_meta_page(cpu_buffer);
> >
> > 	wake_up_all(&rbwork->waiters);
> >
> > That would rate-limit the number of updates to the meta-page without any irq storm?
> >
>
> Is poll an issue? It requires user space to do a system call to see if
> there's more data. But I guess that's not too much of an issue, as it
> already needs to do the ioctl to get the reader page.

I don't think there's any problem with this approach, besides the extra system
call...

>
> We could also add an option to the ioctl to block, or have the ioctl honor
> the O_NONBLOCK flag of the fd?

... but indeed, we could block there. The userspace interface would be even
simpler. How about something like this?

+++ b/kernel/trace/trace.c
@@ -8499,12 +8499,22 @@ static long tracing_buffers_ioctl(struct file *file, unsigned int cmd, unsigned
 {
 	struct ftrace_buffer_info *info = file->private_data;
 	struct trace_iterator *iter = &info->iter;
+	int err;
+
+	if (cmd == TRACE_MMAP_IOCTL_GET_READER_PAGE) {
+		if (!(file->f_flags & O_NONBLOCK)) {
+			err = ring_buffer_wait(iter->array_buffer->buffer,
+					       iter->cpu_file,
+					       iter->tr->buffer_percent);
+			if (err)
+				return err;
+		}
 
-	if (cmd == TRACE_MMAP_IOCTL_GET_READER_PAGE)
 		return ring_buffer_map_get_reader_page(iter->array_buffer->buffer,
 						       iter->cpu_file);
+	}

>
> -- Steve
>