Re: [PATCH v5] tracing: Guard __DECLARE_TRACE() use of __DO_TRACE_CALL() with SRCU-fast

From: Peter Zijlstra

Date: Mon Jan 12 2026 - 10:31:36 EST


On Fri, Jan 09, 2026 at 04:02:02PM -0500, Steven Rostedt wrote:
> On Fri, 9 Jan 2026 15:21:19 -0500
> Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx> wrote:
>
> > * preempt disable/enable pair: 1.1 ns
> > * srcu-fast lock/unlock: 1.5 ns
> >
> > CONFIG_RCU_REF_SCALE_TEST=y
> > * migrate disable/enable pair: 3.0 ns
> > * calls to migrate disable/enable pair within noinline functions: 17.0 ns
> >
> > CONFIG_RCU_REF_SCALE_TEST=m
> > * migrate disable/enable pair: 22.0 ns
>
> OUCH! So migrate disable/enable has a much larger overhead when executed in
> a module than in the kernel? This means all spin_locks() in modules
> converted to mutexes in PREEMPT_RT are taking this hit!

Not so, the migrate_disable() for PREEMPT_RT is still in core code --
kernel/locking/spinlock_rt.c is very much not built as a module.
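
To make the point concrete: on PREEMPT_RT a module's spin_lock() resolves
to rt_spin_lock(), which lives in the core kernel, so the migrate_disable()
call happens there, not in the module. A simplified sketch of that path
(from memory, not an exact copy of kernel/locking/spinlock_rt.c -- details
like lockdep annotations vary by kernel version):

```
/* kernel/locking/spinlock_rt.c (simplified sketch, built into the core kernel) */
static __always_inline void __rt_spin_lock(spinlock_t *lock)
{
	rtlock_lock(&lock->lock);	/* acquire the underlying rtmutex-based lock */
	rcu_read_lock();
	migrate_disable();		/* inlined here, in core code */
}

void __sched rt_spin_lock(spinlock_t *lock)
{
	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
	__rt_spin_lock(lock);
}
EXPORT_SYMBOL(rt_spin_lock);
```

A module pays one exported-symbol call into rt_spin_lock(); the
migrate_disable()/migrate_enable() pair itself runs in core text, so the
22 ns "called from a module" figure above does not apply to RT spinlocks.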