[PATCH rcu/next 3/3] rcu: Call trace_rcu_callback() also for bypass queuing (v2)
From: Joel Fernandes (Google)
Date: Sat Sep 17 2022 - 12:42:51 EST
If a callback is queued onto the bypass list, trace_rcu_callback()
is never called for it. This makes it unclear when the callback was
actually queued, as the resulting trace contains only an
rcu_invoke_callback event. Fix it by calling the tracing function even
when queuing into the bypass list. This is needed for the future rcutop
tool, which monitors enqueuing of callbacks.
Note that, in the case of bypass queuing, the new tracing happens
without the nocb_lock held. This should be OK since, on
CONFIG_RCU_NOCB_CPU systems, the total number of callbacks is
maintained in an atomic counter. Other paths, such as rcu_barrier(),
also sample the total number of callbacks without holding the
nocb_lock.
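For reference, the counter sampled here is rcu_segcblist_n_cbs(). At
the time of this patch it looks roughly like the following (paraphrased
from kernel/rcu/rcu_segcblist.h, quoted only to show why the lockless
sample is safe):

static inline long rcu_segcblist_n_cbs(struct rcu_segcblist *rsclp)
{
#ifdef CONFIG_RCU_NOCB_CPU
	/* Atomic read, so sampling without ->nocb_lock is safe. */
	return atomic_long_read(&rsclp->len);
#else
	return READ_ONCE(rsclp->len);
#endif
}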
Also, while at it, optimize the tracing so that rcu_state is not
accessed when tracing is disabled, because that access is useless if we
are not tracing. A quick inspection of the generated assembly shows
that rcu_state is accessed even when the jump label for the tracepoint
is disabled.
Here is the gcc -S output of the bad asm (note that I un-inlined the
function just for testing and illustration; the final
__trace_rcu_callback in the patch is marked static inline):
__trace_rcu_callback:
movq 8(%rdi), %rcx
movq rcu_state+3640(%rip), %rax
movq %rdi, %rdx
cmpq $4095, %rcx
ja .L3100
movq 192(%rsi), %r8
1:jmp .L3101 # objtool NOPs this
.pushsection __jump_table, "aw"
.balign 8
.long 1b - .
.long .L3101 - .
.quad __tracepoint_rcu_kvfree_callback+8 + 2 - .
.popsection
With this change, the jump-label check, which is NOPed out when the
tracepoint is disabled, is moved to the beginning of the function, so
the rcu_state access is skipped entirely when tracing is off.
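The trace_<event>_enabled() helpers generated by the tracepoint
machinery wrap exactly this jump label (via static_key_false()), which
is why checking them first hides the rcu_state access behind the static
branch. A rough sketch of the resulting pattern (simplified, not the
exact macro expansion from include/linux/tracepoint.h):

/*
 * The _enabled() check compiles down to a NOPed jump when the
 * tracepoint is off, so rcu_state is only read once the branch
 * has been patched in by enabling tracing.
 */
if (trace_rcu_callback_enabled())
	trace_rcu_callback(rcu_state.name, head,
			   rcu_segcblist_n_cbs(&rdp->cblist));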
Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
---
kernel/rcu/tree.c | 30 ++++++++++++++++++++++--------
1 file changed, 22 insertions(+), 8 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 5ec97e3f7468..18f07e167d5e 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2728,6 +2728,22 @@ static void check_cb_ovld(struct rcu_data *rdp)
raw_spin_unlock_rcu_node(rnp);
}
+/*
+ * Trace RCU callback helper; call it after enqueuing a callback.
+ */
+static inline void __trace_rcu_callback(struct rcu_head *head,
+ struct rcu_data *rdp)
+{
+ if (trace_rcu_kvfree_callback_enabled() &&
+ __is_kvfree_rcu_offset((unsigned long)head->func))
+ trace_rcu_kvfree_callback(rcu_state.name, head,
+ (unsigned long)head->func,
+ rcu_segcblist_n_cbs(&rdp->cblist));
+ else if (trace_rcu_callback_enabled())
+ trace_rcu_callback(rcu_state.name, head,
+ rcu_segcblist_n_cbs(&rdp->cblist));
+}
+
/**
* call_rcu() - Queue an RCU callback for invocation after a grace period.
* @head: structure to be used for queueing the RCU updates.
@@ -2809,17 +2825,15 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
}
check_cb_ovld(rdp);
- if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags))
+
+ if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags)) {
+ __trace_rcu_callback(head, rdp);
return; // Enqueued onto ->nocb_bypass, so just leave.
+ }
+
// If no-CBs CPU gets here, rcu_nocb_try_bypass() acquired ->nocb_lock.
rcu_segcblist_enqueue(&rdp->cblist, head);
- if (__is_kvfree_rcu_offset((unsigned long)func))
- trace_rcu_kvfree_callback(rcu_state.name, head,
- (unsigned long)func,
- rcu_segcblist_n_cbs(&rdp->cblist));
- else
- trace_rcu_callback(rcu_state.name, head,
- rcu_segcblist_n_cbs(&rdp->cblist));
+ __trace_rcu_callback(head, rdp);
trace_rcu_segcb_stats(&rdp->cblist, TPS("SegCBQueued"));
--
2.37.3.968.ga6b4b080e4-goog