[PATCH RT 3/5] trace: correct off by one while recording the trace-event
From: Steven Rostedt
Date: Tue Jul 12 2016 - 10:24:07 EST
4.4.12-rt20-rc1 stable review patch.
If anyone has any objections, please let me know.
From: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Trace events like raw_syscalls always show a preempt count of one. The
reason is that on PREEMPT kernels rcu_read_lock_sched_notrace()
increases the preemption counter, and the function recording the counter
is called within that RCU section.
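
To illustrate (a simplified sketch, not part of the patch; probe() here
stands in for the generated event handler that ends up in
trace_event_buffer_reserve()):

	rcu_read_lock_sched_notrace();   /* on PREEMPT: preempt_count goes from N to N + 1 */
	probe();                         /* records preempt_count() == N + 1 */
	rcu_read_unlock_sched_notrace(); /* preempt_count drops back to N */
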
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
[ Changed this to upstream version. See commit e947841c0dce ]
Signed-off-by: Steven Rostedt <rostedt@xxxxxxxxxxx>
kernel/trace/trace_events.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 4a48f97a2256..5bd79b347398 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -246,6 +246,14 @@ void *trace_event_buffer_reserve(struct trace_event_buffer *fbuffer,
 	fbuffer->pc = preempt_count();
+	/*
+	 * If CONFIG_PREEMPT is enabled, then the tracepoint itself disables
+	 * preemption (adding one to the preempt_count). Since we are
+	 * interested in the preempt_count at the time the tracepoint was
+	 * hit, we need to subtract one to offset the increment.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT))
+		fbuffer->pc--;
 	fbuffer->trace_file = trace_file;
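
Note that the subtraction is guarded by IS_ENABLED(CONFIG_PREEMPT), which is a
compile-time constant, so on !PREEMPT kernels the compiler drops the branch
entirely and the recorded count is unchanged; only PREEMPT builds pay for (and
need) the correction:

	if (IS_ENABLED(CONFIG_PREEMPT))	/* constant 0 on !PREEMPT builds */
		fbuffer->pc--;		/* offset the tracepoint's own preempt disable */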