Re: clocksource: Reduce watchdog readout delay limit to prevent false positives

From: Paul E. McKenney

Date: Wed Dec 17 2025 - 19:49:07 EST


On Wed, Dec 17, 2025 at 06:21:05PM +0100, Thomas Gleixner wrote:
> The "valid" readout delay between the two reads of the watchdog is larger
> than the valid delta between the resulting watchdog and clocksource
> intervals, which leads to false positive watchdog results.
>
> Assume TSC is the clocksource and HPET is the watchdog and both have an
> uncertainty margin of 250us (default). The watchdog readout does:
>
> 1) wdnow = read(HPET);
> 2) csnow = read(TSC);
> 3) wdend = read(HPET);
> 4) wdend2 = read(HPET);
>
> The valid window for the delta between #1 and #3 is calculated by the
> uncertainty margins of the watchdog and the clocksource:
>
> m = 2 * watchdog.uncertainty_margin + cs.uncertainty_margin;
>
> which results in 750us for the TSC/HPET case.

Yes, because this interval includes two watchdog reads (#1 and #3 above)
and one clocksource read (#2 above). We therefore need to allow two
watchdog uncertainties and one clocksource uncertainty.

> The actual interval comparison uses a smaller margin:
>
> m = watchdog.uncertainty_margin + cs.uncertainty_margin;
>
> which results in 500us for the TSC/HPET case.

This is the (wd_seq_delay > md) comparison, right? If so, the reason
is that it measures only a pair of watchdog reads (#3 and #4). There
is no clocksource read in the latency recheck, so we do not include
the cs->uncertainty_margin value, only the pair of watchdog
uncertainty values.

If this check fails, that indicates that the watchdog clocksource is much
slower than expected (for example, due to memory-system overload affecting
HPET on multicore systems), so we skip this measurement interval.

> That means the following scenario will trigger the watchdog:
>
> Watchdog cycle N:
>
> 1) wdnow[N] = read(HPET);
> 2) csnow[N] = read(TSC);
> 3) wdend[N] = read(HPET);
>
> Assume the delay between #1 and #2 is 100us and the delay between #1 and
> #3 is within the 750us margin, i.e. the readout is considered valid.

Yes. We expect at most 250us for #1, another 250us for #2, and yet
another 250us for #3.

> Watchdog cycle N + 1:
>
> 4) wdnow[N + 1] = read(HPET);
> 5) csnow[N + 1] = read(TSC);
> 6) wdend[N + 1] = read(HPET);
>
> If the delay between #4 and #6 is within the 750us margin then any delay
> between #4 and #5 which is larger than 600us will fail the interval check
> and mark the TSC unstable because the intervals are calculated against the
> previous value:
>
> wd_int = wdnow[N + 1] - wdnow[N];
> cs_int = csnow[N + 1] - csnow[N];

Except that getting 600us latency between #4 and #5 is not consistent
with a 250us uncertainty. If that is happening, the uncertainty should
instead be at least 300us.

> Putting the above delays in place this results in:
>
> cs_int = (wdnow[N + 1] + 610us) - (wdnow[N] + 100us);
> -> cs_int = wd_int + 510us;
>
> which is obviously larger than the allowed 500us margin and results in
> marking TSC unstable.

Agreed, but due to the ->uncertainty_margin values being too small.

> Fix this by using the same margin as the interval comparison. If the delay
> between two watchdog reads is larger than that, then the readout was either
> disturbed by interconnect congestion, NMIs or SMIs.
>
> Fixes: 4ac1dd3245b9 ("clocksource: Set cs_watchdog_read() checks based on .uncertainty_margin")
> Reported-by: Daniel J Blueman <daniel@xxxxxxxxx>

If this is happening in real life, we have a couple of choices:

1. Increase the ->uncertainty_margin values to match the objective
universe.

2. In clocksource_watchdog(), replace "(abs(cs_nsec - wd_nsec) > md)"
with "(abs(cs_nsec - wd_nsec) > 2 * md)".

The rationale here is that the ->uncertainty_margin values are
two-tailed, as in the clocksource might report a value that is
->uncertainty_margin too early or ->uncertainty_margin too late.
When I was coding this, I instead assumed that ->uncertainty_margin
covered the full range, centered on the correct time value.

You would know better than would I.

My concern is that the patch below would force needless cs_watchdog_read()
retries.

Thoughts?

Thanx, Paul

> Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Link: https://lore.kernel.org/lkml/20250602223251.496591-1-daniel@xxxxxxxxx/
> ---
> kernel/time/clocksource.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> --- a/kernel/time/clocksource.c
> +++ b/kernel/time/clocksource.c
> @@ -252,7 +252,7 @@ enum wd_read_status {
>
> static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
> {
> - int64_t md = 2 * watchdog->uncertainty_margin;
> + int64_t md = watchdog->uncertainty_margin;
> unsigned int nretries, max_retries;
> int64_t wd_delay, wd_seq_delay;
> u64 wd_end, wd_end2;
> @@ -285,7 +285,7 @@ static enum wd_read_status cs_watchdog_r
> * watchdog test.
> */
> wd_seq_delay = cycles_to_nsec_safe(watchdog, wd_end, wd_end2);
> - if (wd_seq_delay > md)
> + if (wd_seq_delay > 2 * md)
> goto skip_test;
> }
>