[RFC PATCH v2] tick: Make tick_periodic() check for missing ticks
From: Waiman Long
Date: Fri Feb 07 2020 - 14:39:52 EST
The tick_periodic() function is used for timekeeping during the early
part of the boot process, while the other clock sources are still being
initialized.
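For reference, the pre-patch logic (visible in the diff below) looks
like this in simplified form; each periodic tick on the timekeeping CPU
unconditionally accounts for exactly one jiffy:

	static void tick_periodic(int cpu)
	{
		if (tick_do_timer_cpu == cpu) {
			write_seqlock(&jiffies_lock);

			/* Keep track of the next tick event */
			tick_next_period = ktime_add(tick_next_period, tick_period);

			do_timer(1);	/* always advances jiffies by one */
			write_sequnlock(&jiffies_lock);
			update_wall_time();
		}
		/* ... per-CPU tick work follows ... */
	}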
The current code assumes that all timer interrupts are handled in a
timely manner and that no ticks are missed. That is not actually true:
some ticks are missed, causing discrepancies between the tick time
(jiffies) and the timestamps reported in the kernel log. Some systems,
however, are more prone to missed ticks than others. In the extreme
case, the discrepancy can cause the watchdog kthread to print a soft
lockup message. For example, on a Cavium ThunderX2 Sabre arm64 system:
[ 25.496379] watchdog: BUG: soft lockup - CPU#14 stuck for 22s!
On that system, missed ticks are especially prevalent during the
smp_init() phase of the boot process. With an instrumented kernel, it
was found that about 24s of timestamp time elapsed while the tick
accumulated only 4s.
Investigation and bisection done by others pointed to commit
73f381660959 ("arm64: Advertise mitigation of Spectre-v2, or lack
thereof") as the likely culprit. It could also be a firmware issue, as
new firmware that would fix the problem has been promised.
To address this problem properly, tick_periodic() must not assume that
no ticks have been missed. Following the example of
tick_do_update_jiffies64(), the function is modified to use another
reference clock to check for missed ticks. Since the watchdog timer
uses running_clock(), it is used here as the reference. With this patch
applied, the soft lockup problem on the arm64 system is gone and the
tick time tracks the timestamp time much more closely.
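To illustrate the catch-up arithmetic with concrete numbers (a
user-space sketch; the HZ=250 tick period and the plain int64_t
nanosecond values standing in for ktime_t are assumptions for
illustration only):

	#include <stdio.h>
	#include <stdint.h>

	#define TICK_PERIOD_NS	4000000LL	/* assumed HZ=250 -> 4 ms tick */

	int main(void)
	{
		int64_t last_update = 100000000;	/* last accounted time */
		int64_t now = 122000000;	/* reference clock: 22 ms later */
		int64_t delta = now - last_update;
		int64_t ticks = 1;

		/* Same test as the patch: divide only when at least one
		 * full tick has clearly been missed. */
		if (delta >= 2 * TICK_PERIOD_NS)
			ticks = delta / TICK_PERIOD_NS;	/* 22 ms / 4 ms = 5 */

		/* Advance by whole ticks only; the 2 ms remainder stays
		 * behind and is picked up by a later tick. */
		last_update += ticks * TICK_PERIOD_NS;

		printf("do_timer(%lld)\n", (long long)ticks);	/* do_timer(5) */
		return 0;
	}

Advancing last_update by ticks * tick_period rather than setting it to
now is what the "persistent delta" comment in the patch refers to: the
sub-tick remainder is carried forward instead of being discarded.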
Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
---
kernel/time/tick-common.c | 36 +++++++++++++++++++++++++++++++++---
1 file changed, 33 insertions(+), 3 deletions(-)
v2: Avoid direct u64 division and improve the ns-to-ktime conversion.
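(Aside on the v2 note, not part of the patch: the kernel cannot use a
direct 64-bit division on 32-bit architectures, since gcc emits a call
to a libgcc helper that the kernel does not link; ktime_divns() does
the division portably. A sketch of the difference, where the v1 form
shown is my assumption of what was avoided:

	s64 period = ktime_to_ns(tick_period);

	/* v1 (assumed): direct 64-bit division; fails to link on 32-bit
	 * kernels because gcc emits __udivdi3/__divdi3 for it. */
	ticks = delta / period;

	/* v2: use the ktime helper, which performs the 64-bit division
	 * safely on all architectures. */
	ticks = ktime_divns(delta, period);
)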
diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
index 7e5d3524e924..55dbbe0f5573 100644
--- a/kernel/time/tick-common.c
+++ b/kernel/time/tick-common.c
@@ -16,6 +16,7 @@
 #include <linux/profile.h>
 #include <linux/sched.h>
 #include <linux/module.h>
+#include <linux/sched/clock.h>
 #include <trace/events/power.h>
 
 #include <asm/irq_regs.h>
@@ -84,12 +85,41 @@ int tick_is_oneshot_available(void)
 static void tick_periodic(int cpu)
 {
 	if (tick_do_timer_cpu == cpu) {
+		/*
+		 * Use running_clock() as reference to check for missing ticks.
+		 */
+		static ktime_t last_update;
+		ktime_t now;
+		int ticks = 1;
+
+		now = ns_to_ktime(running_clock());
 		write_seqlock(&jiffies_lock);
 
-		/* Keep track of the next tick event */
-		tick_next_period = ktime_add(tick_next_period, tick_period);
 
-		do_timer(1);
+		if (last_update) {
+			u64 delta = ktime_sub(now, last_update);
+			/*
+			 * Compute missed ticks
+			 *
+			 * There is likely a persistent delta between
+			 * last_update and tick_next_period. So they are
+			 * updated separately.
+			 */
+			if (delta >= 2 * tick_period) {
+				s64 period = ktime_to_ns(tick_period);
+
+				ticks = ktime_divns(delta, period);
+			}
+			last_update = ktime_add(last_update,
+						ticks * tick_period);
+		} else {
+			last_update = now;
+		}
+
+		/* Keep track of the next tick event */
+		tick_next_period = ktime_add(tick_next_period,
+					     ticks * tick_period);
+		do_timer(ticks);
 		write_sequnlock(&jiffies_lock);
 		update_wall_time();
 	}
--
2.18.1