[GIT PULL] tracing updates for v7.0
From: Steven Rostedt
Date: Fri Feb 13 2026 - 12:00:01 EST
Linus,
[
Note, there are two merge conflicts:
1. With BPF code. This PR made tracepoint callbacks preemptible. BPF
needs to run on the same CPU until completion. Alexei recommended
using rcu_read_lock_dont_migrate() in the BPF callback. This
conflicted with some other changes in the BPF tree.
2. With mm tree. This PR moved trace_printk() code out of trace.c and
into trace_printk.c. A change in Andrew Morton's mm tree removed the
string size parameter from __trace_puts(). That function was one of
the functions moved to trace_printk.c. That update now needs to
happen in that file instead.
The updates in the linux-next master branch have the proper merge resolutions.
]
tracing updates for 7.0:
User visible changes:
- Add entries to the MAINTAINERS file for RUST versions of code
There's now RUST code for tracing and static branches. To differentiate
that code from the C code, add entries for the RUST versions (marked with
"[RUST]") so that the right maintainers get notified of changes.
- New bitmask-list option added to tracefs
When this is set, bitmasks in trace events are displayed not as hex
numbers, but instead as lists: e.g. 0-4,6,8 instead of 0000015f
- New show_event_filters file in tracefs
Instead of having to search all events/*/*/filter for any active filters
enabled in the trace instance, the file show_event_filters will list them
so that there's only one file that needs to be examined to see if any
filters are active.
- New show_event_triggers file in tracefs
Instead of having to search all events/*/*/trigger for any active triggers
enabled in the trace instance, the file show_event_triggers will list them
so that there's only one file that needs to be examined to see if any
triggers are active.
- Have traceoff_on_warning disable the trace_printk buffer too
Recently, trace_printk() recording could be directed to a trace instance
other than the top level instance. But if traceoff_on_warning triggers, it
doesn't stop the buffer that trace_printk() writes to, and that data can
easily be lost by being overwritten. Have traceoff_on_warning also disable
the instance that trace_printk() is being written to.
- Update the hist_debug file to show what function the field uses
When CONFIG_HIST_TRIGGERS_DEBUG is enabled, a hist_debug file exists for
every event. This displays the internal data of any histogram enabled for
that event. But it was lacking the function that is called to process each
of its fields, which is very useful information when debugging histograms.
- Up the histogram stack size from 16 to 31
Stack traces can be used as keys for event histograms. Currently the size
of the stack that is stored is limited to just 16 entries. But the storage
space in the histogram is 256 bytes, meaning that it can store up to 31
entries (plus one for the count of entries). Instead of letting that space
go to waste, up the limit from 16 to 31. This makes the keys much more
useful.
- Fix permissions of per CPU file buffer_size_kb
The per CPU file of buffer_size_kb was incorrectly set to read only in a
previous cleanup. It should be writable.
- Reset "last_boot_info" if the persistent buffer is cleared
The last_boot_info shows address information of a persistent ring buffer
if it contains data from a previous boot. It is cleared when recording
starts again, but it is not cleared when the buffer is reset. The data is
useless after a reset so clear it on reset too.
Internal changes:
- A change was made to allow tracepoint callbacks to have preemption
enabled, and instead be protected by SRCU. This required some updates to
the callbacks for perf and BPF.
perf needed to disable preemption directly in its callback because it
expects preemption disabled in the later code.
BPF needed to disable migration, as its code expects to run completely on
the same CPU.
- Have irq_work wake up other CPU if current CPU is "isolated"
When there's a waiter waiting on ring buffer data and a new event happens,
an irq work is triggered to wake up that waiter. This is noisy on isolated
CPUs (running NO_HZ_FULL). Trigger an IPI to a housekeeping CPU instead.
- Use the proper free function for trigger_data instead of open coding it.
- Remove redundant call of event_trigger_reset_filter()
It was also called inside a function that runs immediately after it.
- Workqueue cleanups
- Report errors if tracing_update_buffers() fails.
- Make the enum update workqueue generic for other parts of tracing
On boot up, a work queue is created to convert enum names into their
numbers in the trace event format files. This work queue can also be used
for other aspects of tracing that take some time and shouldn't be called
by the init call code.
The blk_trace initialization takes a bit of time. Move its initialization
code to the new generic tracing work queue.
- Skip the kprobe boot event creation call if there are no kprobes on the cmdline
The kprobe initialization to set up kprobes if they are defined on the
cmdline requires taking the event_mutex lock. This can be held by other
tracing code doing initialization for a long time. Since kprobes added to
the kernel command line need to be setup immediately, as they may be
tracing early initialization code, they cannot be postponed in a work
queue and must be setup in the initcall code.
If there are no kprobes on the kernel cmdline, there's no reason to take
the mutex and slow down the boot waiting for the lock only to find out
there's nothing to do. Simply exit early if there are no kprobes on the
kernel cmdline.
If there are kprobes on the cmdline, then someone cares more about tracing
than about boot speed.
- Clean up the trigger code a bit
- Move code out of trace.c and into its own files
trace.c is now over 11,000 lines of code and has become more difficult to
maintain. Start splitting it up so that related code lives in its own
files.
Move all the trace_printk() related code into trace_printk.c.
Move the __always_inline stack functions into trace.h.
Move the pid filtering code into a new trace_pid.c file.
- Better define the max latency and snapshot code
The latency tracers have a "max latency" buffer that is a copy of the main
buffer and gets swapped with it when a new high latency is detected. This
keeps the trace of the highest latency around; the max_latency buffer
itself is never written to, it is only used to save the last max latency
trace.
A while ago a snapshot feature was added to tracefs to allow user space to
perform the same logic. It could also enable events to trigger a
"snapshot" if one of their fields hit a new high. This was built on top of
the latency max_latency buffer logic.
Because snapshots came later, they were dependent on the latency tracers
to be enabled. In reality, the latency tracers depend on the snapshot code
and not the other way around. It was just that they came first.
Restructure the code and the kconfigs to have the latency tracers depend
on the snapshot code instead. This actually simplifies the logic a bit and
allows more code to be disabled when the latency tracers are not defined
but the snapshot code is.
- Fix a "false sharing" in the hwlat tracer code
The loop that searches for hardware latency was using a variable that can
be changed by user space for each sample. If the user changes this
variable, it can cause bus contention, and reading that variable can show
up as a large latency in the trace, causing a false positive. Read this
variable at the start of the sample with a READ_ONCE() into a local
variable and keep the code from sharing cache lines with readers.
- Fix function graph tracer static branch optimization code
When only one tracer is defined for function graph tracing, it uses a
static branch to call that tracer directly. When another tracer is added,
it goes into loop logic to call all the registered callbacks.
The code was incorrect when going back to one tracer: it never re-enabled
the static branch to restore the optimization.
- And other small fixes and cleanups.
Please pull the latest trace-v7.0 tree, which can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
trace-v7.0
Tag SHA1: 550103013647f1293433ce34d38faa20b0aeb8c9
Head SHA1: 53b2fae90ff01fede6520ca744ed5e8e366497ba
Aaron Tomlin (3):
tracing: Add bitmask-list option for human-readable bitmask display
tracing: Add show_event_filters to expose active event filters
tracing: Add show_event_triggers to expose active event triggers
Alice Ryhl (1):
MAINTAINERS: add Rust files to STATIC BRANCH/CALL and TRACING
Colin Lord (1):
tracing: Fix false sharing in hwlat get_sample()
Guenter Roeck (1):
ftrace: Introduce and use ENTRIES_PER_PAGE_GROUP macro
Haoyang LIU (1):
tracing: Fix indentation of return statement in print_trace_fmt()
Marco Crivellari (1):
tracing: Replace use of system_wq with system_dfl_wq
Masami Hiramatsu (Google) (2):
tracing: Fix to set write permission to per-cpu buffer_size_kb
tracing: Reset last_boot_info if ring buffer is reset
Miaoqian Lin (1):
tracing: Properly process error handling in event_hist_trigger_parse()
Paul E. McKenney (1):
srcu: Fix warning to permit SRCU-fast readers in NMI handlers
Petr Tesarik (1):
ring-buffer: Use a housekeeping CPU to wake up waiters
Shengming Hu (1):
function_graph: Restore direct mode when callbacks drop to one
Steven Rostedt (29):
tracing: Remove redundant call to event_trigger_reset_filter() in event_hist_trigger_parse()
tracing: Check the return value of tracing_update_buffers()
tracing: Have show_event_trigger/filter format a bit more in columns
tracing: Disable trace_printk buffer on warning too
tracing: Have hist_debug show what function a field uses
tracing: Remove notrace from trace_event_raw_event_synth()
tracing: Up the hist stacktrace size from 16 to 31
tracing: Remove duplicate ENABLE_EVENT_STR and DISABLE_EVENT_STR macros
tracing: perf: Have perf tracepoint callbacks always disable preemption
bpf: Have __bpf_trace_run() use rcu_read_lock_dont_migrate()
tracing: Guard __DECLARE_TRACE() use of __DO_TRACE_CALL() with SRCU-fast
tracing: Add kerneldoc to trace_event_buffer_reserve()
tracing: Have all triggers expect a file parameter
tracing: Move tracing_set_filter_buffering() into trace_events_hist.c
tracing: Clean up use of trace_create_maxlat_file()
tracing: Make tracing_disabled global for tracing system
tracing: Make tracing_selftest_running global to the tracing subsystem
tracing: Move __trace_buffer_{un}lock_*() functions to trace.h
tracing: Move ftrace_trace_stack() out of trace.c and into trace.h
tracing: Make printk_trace global for tracing system
tracing: Make tracing_update_buffers() take NULL for global_trace
tracing: Have trace_printk functions use flags instead of using global_trace
tracing: Use system_state in trace_printk_init_buffers()
tracing: Move trace_printk functions out of trace.c and into trace_printk.c
tracing: Move pid filtering into trace_pid.c
tracing: Rename trace_array field max_buffer to snapshot_buffer
tracing: Add tracer_uses_snapshot() helper to remove #ifdefs
tracing: Better separate SNAPSHOT and MAX_TRACE options
tracing: Move d_max_latency out of CONFIG_FSNOTIFY protection
Yaxiong Tian (3):
tracing: Rename `eval_map_wq` and allow other parts of tracing use it
blktrace: Make init_blk_tracer() asynchronous
tracing/kprobes: Skip setup_boot_kprobe_events() when no cmdline event
----
Documentation/trace/ftrace.rst | 25 +
MAINTAINERS | 15 +
include/linux/trace_events.h | 8 +-
include/linux/trace_seq.h | 12 +-
include/linux/tracepoint.h | 9 +-
include/trace/perf.h | 4 +-
include/trace/stages/stage3_trace_output.h | 4 +-
include/trace/trace_events.h | 4 +-
kernel/rcu/srcutree.c | 3 +-
kernel/trace/Kconfig | 8 +-
kernel/trace/Makefile | 1 +
kernel/trace/blktrace.c | 23 +-
kernel/trace/bpf_trace.c | 5 +-
kernel/trace/fgraph.c | 2 +-
kernel/trace/ftrace.c | 7 +-
kernel/trace/ring_buffer.c | 24 +-
kernel/trace/trace.c | 1059 ++++------------------------
kernel/trace/trace.h | 131 +++-
kernel/trace/trace_events.c | 163 ++++-
kernel/trace/trace_events_filter.c | 2 +-
kernel/trace/trace_events_hist.c | 101 ++-
kernel/trace/trace_events_synth.c | 6 +-
kernel/trace/trace_events_trigger.c | 62 +-
kernel/trace/trace_hwlat.c | 15 +-
kernel/trace/trace_kprobe.c | 6 +-
kernel/trace/trace_output.c | 30 +-
kernel/trace/trace_pid.c | 246 +++++++
kernel/trace/trace_printk.c | 431 +++++++++++
kernel/trace/trace_selftest.c | 10 +-
kernel/trace/trace_seq.c | 29 +-
kernel/tracepoint.c | 18 +-
31 files changed, 1389 insertions(+), 1074 deletions(-)
create mode 100644 kernel/trace/trace_pid.c
---------------------------