[tip:sched/core] sched/debug: Add new tracepoint to track PELT at se level

From: tip-bot for Qais Yousef
Date: Tue Jun 25 2019 - 04:27:45 EST


Commit-ID: 8de6242cca17d9299e654e29c966d8612d397272
Gitweb: https://git.kernel.org/tip/8de6242cca17d9299e654e29c966d8612d397272
Author: Qais Yousef <qais.yousef@xxxxxxx>
AuthorDate: Tue, 4 Jun 2019 12:14:57 +0100
Committer: Ingo Molnar <mingo@xxxxxxxxxx>
CommitDate: Mon, 24 Jun 2019 19:23:42 +0200

sched/debug: Add new tracepoint to track PELT at se level

The new tracepoint allows tracking PELT signals at the sched_entity level,
which is supported for CFS tasks and task groups only.

Signed-off-by: Qais Yousef <qais.yousef@xxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Cc: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Pavankumar Kondeti <pkondeti@xxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Quentin Perret <quentin.perret@xxxxxxx>
Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Uwe Kleine-König <u.kleine-koenig@xxxxxxxxxxxxxx>
Link: https://lkml.kernel.org/r/20190604111459.2862-5-qais.yousef@xxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
include/trace/events/sched.h | 4 ++++
kernel/sched/fair.c | 1 +
kernel/sched/pelt.c | 2 ++
3 files changed, 7 insertions(+)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 520b89d384ec..c7dd9bc7f001 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -617,6 +617,10 @@ DECLARE_TRACE(pelt_irq_tp,
TP_PROTO(struct rq *rq),
TP_ARGS(rq));

+DECLARE_TRACE(pelt_se_tp,
+ TP_PROTO(struct sched_entity *se),
+ TP_ARGS(se));
+
#endif /* _TRACE_SCHED_H */

/* This part must be outside protection */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e883d7e17e36..75218ab1fa07 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3348,6 +3348,7 @@ static inline int propagate_entity_load_avg(struct sched_entity *se)
update_tg_cfs_runnable(cfs_rq, se, gcfs_rq);

trace_pelt_cfs_tp(cfs_rq);
+ trace_pelt_se_tp(se);

return 1;
}
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 4e961b55b5ea..a96db50d40e0 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -267,6 +267,7 @@ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se)
{
if (___update_load_sum(now, &se->avg, 0, 0, 0)) {
___update_load_avg(&se->avg, se_weight(se), se_runnable(se));
+ trace_pelt_se_tp(se);
return 1;
}

@@ -280,6 +281,7 @@ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se

___update_load_avg(&se->avg, se_weight(se), se_runnable(se));
cfs_se_util_change(&se->avg);
+ trace_pelt_se_tp(se);
return 1;
}
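
For readers who want to consume the new tracepoint: since it is declared with
DECLARE_TRACE() rather than TRACE_EVENT(), it has no ftrace event and must be
consumed by attaching a probe, e.g. from a module or a BPF program. Below is a
minimal, illustrative sketch of such a module probe; it assumes the tracepoint
symbol is exported for module use (handled separately in this series), and the
module and function names here are made up for the example.

/*
 * Illustrative only -- not part of this patch. A minimal module that
 * attaches a probe to the new bare pelt_se_tp tracepoint, assuming the
 * tracepoint is exported for module use.
 */
#include <linux/module.h>
#include <linux/sched.h>
#include <trace/events/sched.h>

/* Probe signature: a void *data cookie followed by the TP_PROTO args. */
static void probe_pelt_se(void *data, struct sched_entity *se)
{
	/* The PELT signals of this sched_entity have just been updated. */
	trace_printk("pelt_se: load_avg=%lu util_avg=%lu\n",
		     se->avg.load_avg, se->avg.util_avg);
}

static int __init pelt_se_probe_init(void)
{
	return register_trace_pelt_se_tp(probe_pelt_se, NULL);
}

static void __exit pelt_se_probe_exit(void)
{
	unregister_trace_pelt_se_tp(probe_pelt_se, NULL);
	tracepoint_synchronize_unregister();
}

module_init(pelt_se_probe_init);
module_exit(pelt_se_probe_exit);
MODULE_LICENSE("GPL");

Note that such a probe fires from __update_load_avg_se(),
__update_load_avg_blocked_se() and propagate_entity_load_avg(), i.e. every
time an entity's PELT averages are recomputed, so it should do as little work
as possible.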