[tip: timers/urgent] timers/migration: Improve tracing

From: tip-bot2 for Anna-Maria Behnsen
Date: Mon Jul 22 2024 - 15:36:27 EST


The following commit has been merged into the timers/urgent branch of tip:

Commit-ID: 92506741521fd09dfaa9d6ef3c3620a9dd6bbafd
Gitweb: https://git.kernel.org/tip/92506741521fd09dfaa9d6ef3c3620a9dd6bbafd
Author: Anna-Maria Behnsen <anna-maria@xxxxxxxxxxxxx>
AuthorDate: Tue, 16 Jul 2024 16:19:21 +02:00
Committer: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
CommitterDate: Mon, 22 Jul 2024 18:03:34 +02:00

timers/migration: Improve tracing

The trace points for inactive and active propagation are located at the end of
the related functions. The interesting information in those trace points is the
updated group state. When a trace point is not located directly at the place
where the group state changes, the order of trace points in a trace can be
confusing.

Move the inactive and active propagation trace points directly after the update
of the group state values.

Signed-off-by: Anna-Maria Behnsen <anna-maria@xxxxxxxxxxxxx>
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Reviewed-by: Frederic Weisbecker <frederic@xxxxxxxxxx>
Link: https://lore.kernel.org/r/20240716-tmigr-fixes-v4-3-757baa7803fe@xxxxxxxxxxxxx

---
kernel/time/timer_migration.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 867f0ec..4fbd930 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -656,6 +656,8 @@ static bool tmigr_active_up(struct tmigr_group *group,

} while (!atomic_try_cmpxchg(&group->migr_state, &curstate.state, newstate.state));

+ trace_tmigr_group_set_cpu_active(group, newstate, childmask);
+
if (walk_done == false)
data->childmask = group->childmask;

@@ -673,8 +675,6 @@ static bool tmigr_active_up(struct tmigr_group *group,
*/
group->groupevt.ignore = true;

- trace_tmigr_group_set_cpu_active(group, newstate, childmask);
-
return walk_done;
}

@@ -1306,9 +1306,10 @@ static bool tmigr_inactive_up(struct tmigr_group *group,

WARN_ON_ONCE((newstate.migrator != TMIGR_NONE) && !(newstate.active));

- if (atomic_try_cmpxchg(&group->migr_state, &curstate.state,
- newstate.state))
+ if (atomic_try_cmpxchg(&group->migr_state, &curstate.state, newstate.state)) {
+ trace_tmigr_group_set_cpu_inactive(group, newstate, childmask);
break;
+ }

/*
* The memory barrier is paired with the cmpxchg() in
@@ -1327,8 +1328,6 @@ static bool tmigr_inactive_up(struct tmigr_group *group,
if (walk_done == false)
data->childmask = group->childmask;

- trace_tmigr_group_set_cpu_inactive(group, newstate, childmask);
-
return walk_done;
}