[PATCH 1/3] perf: Optimize perf_install_in_event()
From: Peter Zijlstra
Date: Tue Oct 22 2019 - 05:25:17 EST
Andi reported that when creating a lot of events, a lot of time is
spent in IPIs, and he asked if it would be possible to elide some of them.
Now when events are created disabled, as for example the perf tool
always does, these events will not need to be scheduled in when added
to the context (they're still disabled) and therefore the IPI is not
required -- except for the very first event, which needs to set
ctx->is_active.
( it might be possible to set ctx->is_active remotely for the cpu_ctx,
  but we really need the IPI for the task_ctx, so let's not make that
  distinction )
Also use __perf_effective_state(), since group events depend on the
state of their leader: if the leader is OFF, the whole group is OFF.
So when sibling events are created enabled (XXX check tool), we only
need a single IPI to create and enable the whole group (plus that
initial IPI to initialize the context).
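For completeness, __perf_effective_state() as it reads in
kernel/events/core.c is roughly the following (quoted for context,
unchanged by this patch): a sibling's effective state collapses to the
leader's state whenever the leader is OFF or below.

static __always_inline enum perf_event_state
__perf_effective_state(struct perf_event *event)
{
	struct perf_event *leader = event->group_leader;

	if (leader->state <= PERF_EVENT_STATE_OFF)
		return leader->state;

	return event->state;
}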
Reported-by: Andi Kleen <andi@xxxxxxxxxxxxxx>
Suggested-by: Andi Kleen <andi@xxxxxxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Cc: kan.liang@xxxxxxxxxxxxxxx
Cc: jolsa@xxxxxxxxxx
Cc: acme@xxxxxxxxxx
---
kernel/events/core.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2666,6 +2666,25 @@ perf_install_in_context(struct perf_even
*/
smp_store_release(&event->ctx, ctx);
+ /*
+ * perf_event_attr::disabled events will not run and can be initialized
+ * without IPI. Except when this is the first event for the context, in
+ * that case we need the magic of the IPI to set ctx->is_active.
+ *
+ * The IOC_ENABLE that is sure to follow the creation of a disabled
+ * event will issue the IPI and reprogram the hardware.
+ */
+ if (__perf_effective_state(event) == PERF_EVENT_STATE_OFF && ctx->nr_events) {
+ raw_spin_lock_irq(&ctx->lock);
+ if (task && ctx->task == TASK_TOMBSTONE) {
+ raw_spin_unlock_irq(&ctx->lock);
+ return;
+ }
+ add_event_to_ctx(event, ctx);
+ raw_spin_unlock_irq(&ctx->lock);
+ return;
+ }
+
if (!task) {
cpu_function_call(cpu, __perf_install_in_context, event);
return;