Re: [PATCH v2 2/2] perf: Don't throttle based on NMI watchdog events

From: Calvin Owens

Date: Sat May 02 2026 - 05:53:06 EST


On Friday 05/01 at 22:54 +0200, Peter Zijlstra wrote:
> On Wed, Apr 29, 2026 at 10:36:11AM -0700, Calvin Owens wrote:
> > The throttling logic in perf_sample_event_took() assumes the NMI is
> > running at the maximum allowed sample rate. While this makes sense most
> > of the time, it wildly overestimates the runtime of the NMI for the perf
> > hardware watchdog:
> >
> > # bpftrace -e 'kprobe:perf_sample_event_took { \
> > printf("%s: cpu=%02d time_taken=%dns\n", \
> > strftime("%H:%M:%S.%f", nsecs), cpu(), arg0); }'
> > 03:12:13.087003: cpu=00 time_taken=3190ns
> > 03:12:13.486789: cpu=01 time_taken=2918ns
> > 03:12:18.075288: cpu=03 time_taken=3308ns
> > 03:12:19.797207: cpu=02 time_taken=2581ns
> > 03:12:23.110317: cpu=00 time_taken=2823ns
> > 03:12:23.510308: cpu=01 time_taken=2943ns
> > 03:12:29.229348: cpu=03 time_taken=3669ns
> > 03:12:31.656306: cpu=02 time_taken=3262ns
> >
> > The NMI for the watchdog runs for 2-4us every ten seconds, but the
> > math done in perf_sample_event_took() concludes it is running for
> > 200-400ms every second!
>
> For argument's sake, let's say this is an even 3us; this means we can run:
>
> 250ms / 3us = 83333
>
> such NMIs every second to consume 25% of CPU time. Which is in line with
> the numbers it then reports, no?

The watchdog NMI latency is not remotely predictive of the "real" NMI
latency in the way I think you're assuming.

These are watchdog NMIs on a znver4 machine:

17:50:15.322551: cpu=11 time_taken=3878ns
17:50:15.624184: cpu=02 time_taken=3547ns
17:50:15.756226: cpu=15 time_taken=3817ns
17:50:15.826175: cpu=19 time_taken=3386ns

...vs the "real thing" with perf running on the same machine:

02:21:02.801929: cpu=13 time_taken=321ns
02:21:02.801937: cpu=24 time_taken=270ns
02:21:02.801966: cpu=23 time_taken=461ns
02:21:02.801971: cpu=12 time_taken=310ns

This machine ends up with a lower perf_event_max_sample_rate when the
hardware watchdog is enabled, because of this effect (which obviously
varies a lot with what options you pass to perf).
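
To put rough numbers on it (assuming the default 100kHz
kernel.perf_event_max_sample_rate): a ~300ns sample extrapolates to about
3% of a CPU at the full rate, and a ~3.5us sample to about 35%. But the
watchdog NMI only actually fires once every ten seconds per CPU, so its
real cost is on the order of

    3.5us / 10s  ~=  0.00004% of a CPU

i.e. completely negligible.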

But the point I was trying to make is that perf_event_max_sample_rate is
completely orthogonal to the 0.1Hz watchdog NMI.

The current logic updates a sysctl that can have no possible effect on
the watchdog, based on a worst case extrapolated from the watchdog that
cannot actually occur with the watchdog. That seems fundamentally silly
to me.

I only actually care because it is user visible in the form of the
random, confusing throttling messages. I don't care that
perf_event_max_sample_rate ends up artificially lower, and I didn't try
to fix that.
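
For reference, the 2500ns those messages start from (quoted below) is
just the default budget, if I'm reading the defaults right:

    perf_sample_allowed_ns = (NSEC_PER_SEC / 100000) * 25 / 100 = 2500ns

i.e. 100kHz kernel.perf_event_max_sample_rate and 25%
kernel.perf_cpu_time_max_percent. A single ~3.5us watchdog NMI every ten
seconds blows through that budget as soon as the average catches up,
even though it consumes nowhere near 25% of anything.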

> > When it is the only PMU event running, it can take minutes to hours of
> > samples from the watchdog for the moving average to accumulate to
> > something near the real mean, which causes the same little "litany" of
> > sample rate throttles to happen every time Linux boots with the perf
> > hardware watchdog enabled:
> >
> > perf: interrupt took too long (2526 > 2500), lowering kernel.perf_event_max_sample_rate to 79000
> > perf: interrupt took too long (3177 > 3157), lowering kernel.perf_event_max_sample_rate to 62000
> > perf: interrupt took too long (3979 > 3971), lowering kernel.perf_event_max_sample_rate to 50000
> > perf: interrupt took too long (4983 > 4973), lowering kernel.perf_event_max_sample_rate to 40000
> >
> > This serves no purpose: it doesn't actually affect the runtime of the
> > watchdog NMI at all. It confuses users, because it suggests their
> > machine is spinning its wheels in interrupts when it isn't.
> >
> > Because the watchdog NMI is so infrequent, we can avoid throttling it by
> > making the throttling a two-step process: load and update a timestamp
> > whenever we think we need to throttle, and only actually proceed to
> > throttle if the last time that happened was less than one second ago.
> >
> > This is inelegant, but it avoids touching the hot path and preserves
> > current throttling behavior for real PMU use, at the cost of delaying
> > the throttling by a single NMI.
>
> This makes no sense, and it is quite broken. There is no throttling and you
> still need to update the numbers.

The EWMA is updated above the patch context; that behavior doesn't
change at all.
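
For reference, this is the part I mean, sitting just above the hunk
(paraphrasing from memory; the patch doesn't touch any of it):

	/* Decay the counter by 1 average sample, then fold this one in. */
	running_len = __this_cpu_read(running_sample_length);
	running_len -= running_len/NR_ACCUMULATED_SAMPLES;
	running_len += sample_len_ns;
	__this_cpu_write(running_sample_length, running_len);

	avg_len = running_len/NR_ACCUMULATED_SAMPLES;
	if (avg_len <= max_len)
		return;

Every sample still lands in running_sample_length exactly as before; the
patch only changes what happens after avg_len has already exceeded
max_len.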

Are you seeing __report_avg below it? That's for the deferred printk().

I don't understand what "there is no throttling" means here, sorry.

In practice this all works exactly the way I'm describing: the
throttling happens immediately the first time perf is actually used on
the system:

10:24:55 mahler kernel: perf: interrupt took too long (2503 > 2500), lowering kernel.perf_event_max_sample_rate to 79000
10:24:55 mahler kernel: perf: interrupt took too long (3178 > 3128), lowering kernel.perf_event_max_sample_rate to 62000
10:24:55 mahler kernel: perf: interrupt took too long (3974 > 3972), lowering kernel.perf_event_max_sample_rate to 50000

...instead of randomly over the first hour of uptime like it does today:

15:55:44 mahler kernel: perf: interrupt took too long (2518 > 2500), lowering kernel.perf_event_max_sample_rate to 79000
16:00:23 mahler kernel: perf: interrupt took too long (3163 > 3147), lowering kernel.perf_event_max_sample_rate to 63000
16:10:18 mahler kernel: perf: interrupt took too long (3978 > 3953), lowering kernel.perf_event_max_sample_rate to 50000
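
(Back of the envelope, assuming a steady ~3.5us per watchdog NMI and one
sample every ten seconds: the per-cpu average ramps as roughly

    avg(n) ~= 3500ns * (1 - (127/128)^n)

so just crossing the default 2500ns threshold takes about 160 samples,
i.e. ~25-30 minutes of uptime per CPU; that's the "minutes to hours"
from the changelog.)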

This random throttling after boot isn't unique to my machines: most
bare-metal servers I've interacted with over the past 10+ years do this.
If I had a nickel for every time somebody asked me why it happens when
perf isn't running, I could almost afford to pay what it cost Google to
give us that worthless LLM review :)

> I'm thinking less AI and more real human should be involved here. If you
> cannot make sense of neither the code nor the AI babbling, step away.

The only LLM involved here at all is this one auto-review bot from
Google, which didn't ask for my permission to be involved.

I was simply trying to be generous by engaging with it. Generally, I've
been impressed with it, but in this particular case I feel strongly it's
been actively worse than nothing.

I will ignore it completely in the future when sending you patches.

> > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > index 6d1f8bad7e1c..c2a33cb194ce 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -623,6 +623,7 @@ core_initcall(init_events_core_sysctls);
> >   */
> >  #define NR_ACCUMULATED_SAMPLES 128
> >  static DEFINE_PER_CPU(u64, running_sample_length);
> > +static DEFINE_PER_CPU(u64, last_throttle_clock);
> >
> >  static u64 __report_avg;
> >  static u64 __report_allowed;
> > @@ -643,6 +644,8 @@ void perf_sample_event_took(u64 sample_len_ns)
> >  	u64 max_len = READ_ONCE(perf_sample_allowed_ns);
> >  	u64 running_len;
> >  	u64 avg_len;
> > +	u64 last;
> > +	u64 now;
> >  	u32 max;
> >
> >  	if (max_len == 0)
> > @@ -663,6 +666,19 @@ void perf_sample_event_took(u64 sample_len_ns)
> >  	if (avg_len <= max_len)
> >  		return;
> >
> > +	/*
> > +	 * Very infrequent events like the perf counter hard watchdog
> > +	 * can trigger spurious throttling: skip throttling if the prior
> > +	 * NMI got here more than one second before this NMI began. But
> > +	 * never skip throttling if NMIs are nesting, or if any NMI runs
> > +	 * for longer than one second.
> > +	 */
> > +	now = local_clock();
> > +	last = __this_cpu_read(last_throttle_clock);
> > +	if (__this_cpu_cmpxchg(last_throttle_clock, last, now) == last &&
> > +	    now - last > NSEC_PER_SEC && sample_len_ns < NSEC_PER_SEC)
> > +		return;
> > +
> >  	__report_avg = avg_len;
> >  	__report_allowed = max_len;
> >
> > --
> > 2.47.3
> >