[PATCH] clocksource: Add heuristics to avoid switching away from TSC due to timer delay

From: Roland Dreier
Date: Fri Nov 30 2018 - 16:17:55 EST

On a modern x86 system, the TSC is used as a clocksource, with HPET
used in the clocksource watchdog to make sure that the TSC is stable.

If the clocksource watchdog_timer is delayed for an extremely long
time (for example if softirqs are being serviced in ksoftirqd, and
realtime threads are starving ksoftirqd), then the 32-bit HPET counter
may wrap around. For example, with an HPET running at 24 MHz, 2^32
cycles is about 179 seconds - a long time for timers to be starved,
but possible with a poorly behaved realtime thread.

If this happens, since the TSC is a 64-bit counter and won't wrap, the
watchdog will detect skew - the TSC interval will be 179 seconds
longer than the HPET interval - and will mark the TSC as unstable.
This causes the system to switch to the HPET as a clocksource, which
has a huge negative performance impact.

In this case, switching to the HPET turns a bad situation that the
system might recover from (starved timers) into a permanently worse
one (more expensive clock_gettime() calls), due to a false-positive
detection of TSC instability.

To improve this, add some heuristics to detect cases where the
watchdog is delayed long enough for the instability detection to be
likely to be wrong:

- If the clocksource being tested (eg TSC) has counted so many cycles
that converting to nsecs would overflow the multiplication, *AND* the
watchdog clocksource (eg HPET) shows that the watchdog timer has
missed its interval by at least a factor of 3, skip marking the
clocksource as unstable for this timer iteration. This is not
perfect - for example it is possible for the watchdog clocksource
to wrap around and show a small interval - but at least in the
specific x86 case it is unlikely, since the watchdog interval is a
small fraction of the wraparound interval.

- If there is a skew between the clocksource being tested and the
watchdog clocksource that is at least as big as the wraparound
interval for the watchdog clocksource, then don't mark the
clocksource as unstable. Again, this might fail to mark a
clocksource as unstable for one iteration, but it is unlikely that
the instability is bad enough that we will see a larger skew than
the wraparound interval for many iterations.

These heuristics are imperfect but are chosen to make false detection
of instability much less likely, while leaving detection of true
instability very likely within a few clocksource watchdog iterations.

Signed-off-by: Roland Dreier <roland@xxxxxxxxxxxxxxx>
---
 kernel/time/clocksource.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index ffe081623aec..f1b3d8ff2437 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -243,12 +243,47 @@ static void clocksource_watchdog(struct timer_list *unused)
 
 		delta = clocksource_delta(csnow, cs->cs_last, cs->mask);
+		/* If the cycle delta is beyond what we can safely
+		 * convert to nsecs, and the watchdog clocksource
+		 * suggests that we've overslept, skip checking this
+		 * iteration to avoid marking a clocksource as
+		 * unstable because of a severely delayed timer. */
+		if (delta > cs->max_cycles &&
+		    wd_nsec > 3 * jiffies_to_nsecs(WATCHDOG_INTERVAL)) {
+			pr_warn("timekeeping watchdog: Clocksource '%s' not checked due to apparent long timer delay:\n",
+				cs->name);
+			pr_warn("  Delta %llx > max_cycles %llx, wd_nsec %lld\n",
+				delta, cs->max_cycles, wd_nsec);
+			continue;
+		}
 		cs_nsec = clocksource_cyc2ns(delta, cs->mult, cs->shift);
 		wdlast = cs->wd_last; /* save these in case we print them */
 		cslast = cs->cs_last;
 		cs->cs_last = csnow;
 		cs->wd_last = wdnow;
 
+		/* If the clocksource interval is far off from the
+		 * watchdog clocksource interval but the interval is
+		 * big enough that the watchdog may have wrapped
+		 * around (again due to a severely delayed timer),
+		 * skip this iteration. For example, this saves us
+		 * from marking the TSC as unstable just because the
+		 * 32-bit HPET wrapped around on x86. */
+		if (abs(cs_nsec - wd_nsec) >
+		    clocksource_cyc2ns(watchdog->max_cycles, watchdog->mult,
+				       watchdog->shift) - WATCHDOG_THRESHOLD) {
+			pr_warn("timekeeping watchdog: Clocksource '%s' not checked due to apparent timer delay:\n",
+				cs->name);
+			pr_warn("  Skew %lld watchdog wrap %lld\n",
+				abs(cs_nsec - wd_nsec),
+				clocksource_cyc2ns(watchdog->max_cycles,
+						   watchdog->mult,
+						   watchdog->shift));
+			continue;
+		}
 		if (atomic_read(&watchdog_reset_pending))