Instead of providing asynchronous checks for the nohz subsystem to verify
perf event tick dependency, migrate perf to the new tick dependency mask.
Perf needs the tick in two situations:
1) Freq events. We could set the tick dependency when those are
installed on a CPU context. But setting a global dependency on top of
the global freq events accounting is much easier. If people want that
to be optimized, we can still refine it later at the per-CPU tick
dependency level. This patch doesn't change the current behaviour anyway.
2) Throttled events: this is a per-CPU dependency (see the sketch below).
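
For the throttled case, a minimal sketch of what the per-CPU toggling could
look like, assuming a per-CPU variant of the dependency API exists; the
helper and the tick_nohz_set_dep_cpu()/tick_nohz_clear_dep_cpu() names are
illustrative only and not part of this patch:

	/*
	 * Illustrative only: the helper and the per-CPU dependency API
	 * names are assumed, not taken from this patch.
	 */
	static void perf_throttle_tick_dep(int cpu, bool throttled)
	{
		if (throttled) {
			/* Keep the tick alive on this CPU until unthrottle. */
			tick_nohz_set_dep_cpu(cpu, TICK_PERF_EVENTS_BIT);
		} else {
			/* Last throttled event gone: the CPU may stop its tick. */
			tick_nohz_clear_dep_cpu(cpu, TICK_PERF_EVENTS_BIT);
		}
	}

The global freq-event side of the same model is visible in the hunks below:
the dependency bit is set when the first freq event is accounted and cleared
when the last one goes away.
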
@@ -3540,8 +3530,10 @@ static void unaccount_event(struct perf_event *event)
atomic_dec(&nr_comm_events);
if (event->attr.task)
atomic_dec(&nr_task_events);
- if (event->attr.freq)
- atomic_dec(&nr_freq_events);
+ if (event->attr.freq) {
+ if (atomic_dec_and_test(&nr_freq_events))
+ tick_nohz_clear_dep(TICK_PERF_EVENTS_BIT);
+ }
if (event->attr.context_switch) {
static_key_slow_dec_deferred(&perf_sched_events);
atomic_dec(&nr_switch_events);
@@ -7695,7 +7687,7 @@ static void account_event(struct perf_event *event)
atomic_inc(&nr_task_events);
if (event->attr.freq) {
if (atomic_inc_return(&nr_freq_events) == 1)
- tick_nohz_full_kick_all();
+ tick_nohz_set_dep(TICK_PERF_EVENTS_BIT);
}
if (event->attr.context_switch) {
atomic_inc(&nr_switch_events);