Re: [PATCH] tracing: Fix race when concurrently splice_read trace_pipe
From: Masami Hiramatsu (Google)
Date: Sat Aug 12 2023 - 16:47:11 EST
On Sat, 12 Aug 2023 09:45:52 +0800
Zheng Yejian <zhengyejian1@xxxxxxxxxx> wrote:
> On 2023/8/12 03:25, Steven Rostedt wrote:
> > On Thu, 10 Aug 2023 20:39:05 +0800
> > Zheng Yejian <zhengyejian1@xxxxxxxxxx> wrote:
> >
> >> When splice_read is done concurrently on the trace_pipe file and on
> >> per_cpu/cpu*/trace_pipe, more data is read out than expected.
>
> Sorry, I didn't make that clear. It does not just read out more data,
> it also loses some. My case is, for example:
> 1) Inject 3 events into the ring_buffer: event1, event2, event3;
> 2) Concurrently splice_read through the trace_pipes;
> 3) Then what is actually read out is: event1, event3, event3. No
> event2, but event3 twice.
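
(Just to confirm I follow the scenario, step 1) would be something like
the rough sketch below. It is not from the patch; the tracefs path and
the marker strings are only illustrative, and it injects the events
through trace_marker.)

/* Rough sketch of step 1): inject three placeholder events through
 * trace_marker.  Assumes tracefs is mounted at /sys/kernel/tracing
 * and tracing is enabled.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *events[] = { "event1", "event2", "event3" };
	int fd = open("/sys/kernel/tracing/trace_marker", O_WRONLY);
	int i;

	if (fd < 0) {
		perror("trace_marker");
		return 1;
	}
	/* Each write() becomes one event in the ring_buffer. */
	for (i = 0; i < 3; i++)
		if (write(fd, events[i], strlen(events[i])) < 0)
			perror("write");
	close(fd);
	return 0;
}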
>
> >
> > Honestly the real fix is to prevent that use case. We should probably have
> > access to trace_pipe lock all the per_cpu trace_pipes too.
>
> Yes, we could do that, but wouldn't that be rather ineffective?
> A per_cpu trace_pipe only reads its own ring_buffer and does not
> race with the ring_buffers of the other CPUs.
I think Steve said that only one of the below is usable at a time:
- Read trace_pipe
or
- Read per_cpu/cpu*/trace_pipe concurrently
And I think this makes sense, especially if you use splice (which
*moves* pages from the ring_buffer to another pipe).
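
For illustration, a rough user-space sketch of such a splice reader is
below. It is not from the patch; the tracefs path, buffer size and error
handling are only for demonstration. Running two of these concurrently,
e.g. one on trace_pipe and one on per_cpu/cpu0/trace_pipe, is exactly
the usage that can produce the duplicated/lost events described above,
since each spliced page goes to whichever reader takes it first.

/* Rough sketch: splice trace_pipe into an anonymous pipe, then dump it.
 * Assumes tracefs is mounted at /sys/kernel/tracing.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1]
				    : "/sys/kernel/tracing/trace_pipe";
	char buf[4096];
	int pfd[2];
	int fd = open(path, O_RDONLY);

	if (fd < 0 || pipe(pfd) < 0) {
		perror("setup");
		return 1;
	}

	for (;;) {
		/* splice() moves whole ring-buffer pages into the pipe;
		 * once a page has been spliced here, other trace_pipe
		 * readers will never see those events.
		 */
		ssize_t n = splice(fd, NULL, pfd[1], NULL, sizeof(buf), 0);

		if (n <= 0)
			break;

		/* Drain the pipe so the next splice() does not block. */
		while (n > 0) {
			ssize_t r = read(pfd[0], buf, sizeof(buf));

			if (r <= 0)
				break;
			if (write(STDOUT_FILENO, buf, r) < 0)
				break;
			n -= r;
		}
	}
	return 0;
}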
Thank you,
>
> >
> > -- Steve
> >
>
--
Masami Hiramatsu (Google) <mhiramat@xxxxxxxxxx>