Re: [PATCH v5 1/2] ring-buffer: Introducing ring-buffer mapping functions

From: Vincent Donnefort
Date: Wed Aug 02 2023 - 08:31:12 EST


On Wed, Aug 02, 2023 at 07:45:26AM -0400, Steven Rostedt wrote:
> On Tue, 1 Aug 2023 13:26:03 -0400
> Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
>
> > > +
> > > + if (READ_ONCE(cpu_buffer->mapped)) {
> > > + /* Ensure the meta_page is ready */
> > > + smp_rmb();
> > > + WRITE_ONCE(cpu_buffer->meta_page->pages_touched,
> > > + local_read(&cpu_buffer->pages_touched));
> > > + }
> >
> > I was thinking instead of doing this in the semi fast path, put this logic
> > into the rb_wakeup_waiters() code. That is, if a task is mapped, we call
> > the irq_work() to do this for us. It could even do more, like handle
> > blocked mapped waiters.
>
> I was thinking how to implement this, and I worry that it may cause an irq
> storm. Let's keep this (and the other locations) as is, where we do the
> updates in place. Then we can look at seeing if it is possible to do it in
> a delayed fashion another time.

I was actually looking at this. How about:

On the userspace side, a simple poll:

static void wait_entries(int fd)
{
	struct pollfd pollfd = {
		.fd = fd,
		.events = POLLIN,
	};

	if (poll(&pollfd, 1, -1) == -1)
		pdie("poll");
}

And on the kernel side, just a function to update the "writer fields" of the
meta-page:

static void rb_wake_up_waiters(struct irq_work *work)
{
	struct rb_irq_work *rbwork = container_of(work, struct rb_irq_work, work);
+	struct ring_buffer_per_cpu *cpu_buffer =
+		container_of(rbwork, struct ring_buffer_per_cpu, irq_work);
+
+	rb_update_meta_page(cpu_buffer);

	wake_up_all(&rbwork->waiters);
}

That would rate-limit the number of updates to the meta-page without causing any irq storm, wouldn't it?

>
> -- Steve