Re: linux-next: manual merge of the rcu tree with the ftrace tree
From: Steven Rostedt
Date: Fri Nov 14 2025 - 12:11:32 EST
On Fri, 14 Nov 2025 18:02:32 +0100
Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx> wrote:
> > I don't know. Is there more overhead with disabling migration than
> > disabling preemption?
>
> On the first and last invocation, yes. But if disabling migration is
> not required for SRCU, then why do it?
I'll yield to the BPF experts here.
> >
> > We also would need to audit all tracepoint callbacks, as there may be some
> > assumptions about staying on the same CPU.
>
> Sure. Okay. What would I need to grep for in order to audit it?
Probably anything that uses per-cpu or smp_processor_id().
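Something of this shape, say (made-up example, just to show the kind of
pattern the grep needs to catch):

static DEFINE_PER_CPU(unsigned long, example_hits);

/* Hypothetical callback: caches the CPU number and bumps that CPU's
 * counter.  That's fine while the tracepoint disables preemption; if we
 * only disable migration, another task can preempt us on this CPU and
 * the non-atomic update can lose counts.
 */
static void example_trace_callback(void *data)
{
	int cpu = smp_processor_id();

	per_cpu(example_hits, cpu)++;
}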
> > void *trace_event_buffer_reserve(struct trace_event_buffer *fbuffer,
> > 				 struct trace_event_file *trace_file,
> > 				 unsigned long len)
> > {
> > 	return event_buffer_reserve(fbuffer, trace_file, len, true);
> > }
> >
> > void *trace_syscall_event_buffer_reserve(struct trace_event_buffer *fbuffer,
> > 					 struct trace_event_file *trace_file,
> > 					 unsigned long len)
> > {
> > 	return event_buffer_reserve(fbuffer, trace_file, len, false);
> > }
> >
> > Hmm
>
> Yeah. I *think* in the preempt case we always use one or the other.
OK, we can do this instead. Probably cleaner anyway.
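Roughly like this (untested sketch, and I'm guessing at what the flag
actually gates -- here it only selects how the trace context is
generated, with the rest of the current trace_event_buffer_reserve()
body factored out into a hypothetical event_buffer_reserve_common()):

static void *event_buffer_reserve(struct trace_event_buffer *fbuffer,
				  struct trace_event_file *trace_file,
				  unsigned long len, bool preempt_disabled)
{
	/*
	 * Regular tracepoints disable preemption around the callback, so
	 * subtract one to record the preempt count of the call site.  The
	 * syscall variant is for tracepoints that run preemptible (under
	 * SRCU) and need the unadjusted count.
	 */
	fbuffer->trace_ctx = preempt_disabled ? tracing_gen_ctx_dec()
					      : tracing_gen_ctx();

	return event_buffer_reserve_common(fbuffer, trace_file, len);
}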
>
> So I would prefer this over explicitly disabling migration so that a
> function further down the stack can decrement the counter again.
> Ideally, we don't disable migration to begin with.
>
> _If_ the BPF program disables migrations before invocation of its
> program then any trace recording that happens within this program
> _should_ record the migration counter at that time. Which would be 1 at
> the minimum.
Again, I yield to the BPF folks.
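For reference, if I follow the concern, the invocation pattern being
described is roughly this:

/* Simplified; not the actual BPF dispatch code.  Any event recorded
 * while the program runs sees a migration-disable count of at least 1.
 */
migrate_disable();
ret = bpf_prog_run(prog, ctx);
migrate_enable();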
Frederic, it may be good to zap this patch from your repo. It looks like it
still needs more work.
Thanks,
-- Steve