[PATCH V3] time/sched_clock: mark sched_clock_read_begin/retry as notrace
From: quanyang.wang
Date: Tue Sep 29 2020 - 04:20:57 EST
From: Quanyang Wang <quanyang.wang@xxxxxxxxxxxxx>
Since sched_clock_read_begin() and sched_clock_read_retry() are called
by the notrace function sched_clock(), they must not be traceable either;
otherwise ftrace_graph_caller will recurse endlessly on the path
below (arm, for instance):
  ftrace_graph_caller()
    prepare_ftrace_return()
      function_graph_enter()
        ftrace_push_return_trace()
          trace_clock_local()
            sched_clock()
              sched_clock_read_begin/retry()
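For reference, the caller in question is sched_clock() itself (lightly
paraphrased here from kernel/time/sched_clock.c as of the Fixes commit
below); both helpers sit inside its read loop, so tracing either of
them re-enters the tracer, which calls sched_clock() again:

    unsigned long long notrace sched_clock(void)
    {
        u64 cyc, res;
        unsigned int seq;
        struct clock_read_data *rd;

        do {
            /* Pick the active copy of the latched clock data. */
            rd = sched_clock_read_begin(&seq);

            cyc = (rd->read_sched_clock() - rd->epoch_cyc) &
                  rd->sched_clock_mask;
            res = rd->epoch_ns + cyc_to_ns(cyc, rd->mult, rd->shift);

            /* Retry if an update raced with this read. */
        } while (sched_clock_read_retry(seq));

        return res;
    }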
Fixes: 1b86abc1c645 ("sched_clock: Expose struct clock_read_data")
Signed-off-by: Quanyang Wang <quanyang.wang@xxxxxxxxxxxxx>
Acked-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
---
Changes:
V2: Add notrace to sched_clock_read_retry according to Peter's suggestion.
V3: Adjust the placement of notrace according to Peter's suggestion.
kernel/time/sched_clock.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index 1c03eec6ca9b..f629e3f5afbe 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -68,13 +68,13 @@ static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
 	return (cyc * mult) >> shift;
 }
 
-struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
+notrace struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
 {
 	*seq = raw_read_seqcount_latch(&cd.seq);
 	return cd.read_data + (*seq & 1);
 }
 
-int sched_clock_read_retry(unsigned int seq)
+notrace int sched_clock_read_retry(unsigned int seq)
 {
 	return read_seqcount_retry(&cd.seq, seq);
 }
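As a note for reviewers unfamiliar with the annotation: notrace keeps
the compiler from emitting the mcount/fentry profiling call that ftrace
hooks, so an annotated function simply cannot be traced. In the common
configuration it is defined roughly as below (paraphrased from
include/linux/compiler_types.h; the exact attribute varies with the
architecture and compiler options):

    #define notrace __attribute__((__no_instrument_function__))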
--
2.17.1