Re: [PATCH] ring-buffer: Have the buffer update counter be atomic

From: Petr Pavlu
Date: Fri Oct 11 2024 - 11:20:27 EST


On 10/11/24 16:48, Steven Rostedt wrote:
> On Fri, 11 Oct 2024 10:01:32 -0400
> Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
>
>>> Sorry for not replying to your last comment on my patch, I was ill.
>>>
>>> The member ring_buffer_per_cpu.cnt is intended to be accessed under the
>>> reader_lock, the same as the pages pointer it is tied to, so this
>>> change shouldn't be strictly needed.
>>>
>>
>> Right, but there was one location where the cnt was updated outside the
>> lock, the one I commented on. But instead of adding a lock around it, I
>> decided to just make it an atomic; then there would be no need for the
>> lock. Hmm, it still needs a memory barrier though, at least
>> an smp_mb__after_atomic().
>
> Hmm, I don't like how the update in ring_buffer_subbuf_order_set() is
> unprotected. I think this is the proper patch:
>
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 696d422d5b35..0672df07b599 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -6774,6 +6774,7 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
>          }
>
>          for_each_buffer_cpu(buffer, cpu) {
> +                unsigned long flags;
>
>                  if (!cpumask_test_cpu(cpu, buffer->cpumask))
>                          continue;
> @@ -6800,11 +6801,15 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
>                                          struct buffer_page, list);
>                  list_del_init(&cpu_buffer->reader_page->list);
>
> +                /* Synchronize with rb_check_pages() */
> +                raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
> +
>                  /* The cpu_buffer pages are a link list with no head */
>                  cpu_buffer->pages = cpu_buffer->new_pages.next;
>                  cpu_buffer->new_pages.next->prev = cpu_buffer->new_pages.prev;
>                  cpu_buffer->new_pages.prev->next = cpu_buffer->new_pages.next;
>                  cpu_buffer->cnt++;
> +                raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
>
>                  /* Clear the new_pages list */
>                  INIT_LIST_HEAD(&cpu_buffer->new_pages);
>
> Even though it's also protected by the buffer->mutex, I feel more
> comfortable with this.

Sorry, I missed the context of your comment and that it was about
ring_buffer_subbuf_order_set().

I agree; I also noticed the missing locking in this function, and it
looked to me like something that should be fixed. I happen to have
a somewhat more complex patch for it from a few months ago (pasted
below). I think I didn't send it to the list because I then noticed
other potential locking problems in the subbuf code, which I wanted to
examine more closely first.
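
For comparison, the atomic variant you describe above might look roughly
like the sketch below. This is only an illustration of that direction,
not the patch I mention; the conversion of cnt to atomic_t and the exact
barrier placement are my assumptions:

        /* In struct ring_buffer_per_cpu: */
        atomic_t        cnt;    /* instead of: unsigned int cnt; */

        /* In ring_buffer_subbuf_order_set(), after splicing in new_pages: */
        cpu_buffer->pages = cpu_buffer->new_pages.next;
        cpu_buffer->new_pages.next->prev = cpu_buffer->new_pages.prev;
        cpu_buffer->new_pages.prev->next = cpu_buffer->new_pages.next;

        /* Bump the counter. As you note above, at least an
         * smp_mb__after_atomic() is needed; whether ordering is also
         * required before the increment would need a closer look. */
        atomic_inc(&cpu_buffer->cnt);
        smp_mb__after_atomic();

That would avoid taking the reader_lock at this site, but every place
that reads cnt would then need an atomic_read() with matching barriers,
so the straightforward locking in your patch looks simpler to me.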

--
Petr