RE: [PATCH V2] kernel/watchdog: fix spurious hard lockups

From: Liang, Kan
Date: Thu Jun 22 2017 - 11:48:15 EST




> Subject: Re: [PATCH V2] kernel/watchdog: fix spurious hard lockups
>
> On Wed, Jun 21, 2017 at 11:53:57PM +0200, Thomas Gleixner wrote:
> > On Wed, 21 Jun 2017, kan.liang@xxxxxxxxx wrote:
> > > We now have more and more systems where the Turbo range is wide
> > > enough that the NMI watchdog expires faster than the soft watchdog
> > > timer that updates the interrupt tick the NMI watchdog relies on.
> > >
> > > This problem was originally introduced by commit 58687acba592
> > > ("lockup_detector: Combine nmi_watchdog and softlockup detector").
> > > Previously the NMI watchdog would always check jiffies, which were
> > > ticking fast enough. But now it is backed by the much slower hrtimer
> > > tick, so the expiry check has become far more sensitive.
> >
> > And slapping a factor 3 on the NMI period is the wrong answer to the
> > problem. The simple solution would be to increase the hrtimer
> > frequency, but that's not really desired either.
> >
> > Find an untested patch below, which should cure the issue.
>
> A simple low pass filter. It compiles. :-) I don't think I have the
> knowledge to test it. Kan?
>

Yes, we are running the test now.
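
For reference while we run it, here is roughly how the numbers work out
with the default watchdog_thresh of 10s (my own back-of-the-envelope
sketch of the explanation in the changelog above, not measured data):
the hrtimer fires every (watchdog_thresh * 2) / 5 = 4s, while the PMU is
armed with 10s worth of cycles counted at the nominal frequency. Once
the effective Turbo factor exceeds about 2.5x, two NMIs can arrive
before the hrtimer increments hrtimer_interrupts, and the overflow
callback then reports a hard lockup.

/*
 * Hypothetical stand-alone helper (not part of the patch) that just
 * prints the effective NMI period for a few assumed Turbo factors and
 * flags the ones that undercut the 4s hrtimer sample period.
 */
#include <stdio.h>

int main(void)
{
	double watchdog_thresh = 10.0;			 /* seconds, kernel default */
	double hrtimer_period = watchdog_thresh * 2 / 5; /* 4s sample period */

	for (double factor = 1.0; factor <= 3.0; factor += 0.5) {
		/* The PMU counts unhalted cycles, so Turbo shortens the
		 * wall-clock time until the programmed period expires. */
		double nmi_period = watchdog_thresh / factor;

		printf("turbo factor %.1fx: NMI every %4.1fs, hrtimer every %.1fs%s\n",
		       factor, nmi_period, hrtimer_period,
		       nmi_period < hrtimer_period ? "  <-- false positive possible" : "");
	}
	return 0;
}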

Thanks,
Kan

> Cheers,
> Don
>
> >
> > Thanks,
> >
> > tglx
> >
> > 8<---------------
> > --- a/arch/x86/Kconfig
> > +++ b/arch/x86/Kconfig
> > @@ -96,6 +96,7 @@ config X86
> > select GENERIC_STRNCPY_FROM_USER
> > select GENERIC_STRNLEN_USER
> > select GENERIC_TIME_VSYSCALL
> > + select HARDLOCKUP_CHECK_TIMESTAMP if X86_64
> > select HAVE_ACPI_APEI if ACPI
> > select HAVE_ACPI_APEI_NMI if ACPI
> > select HAVE_ALIGNED_STRUCT_PAGE if SLUB
> > --- a/include/linux/nmi.h
> > +++ b/include/linux/nmi.h
> > @@ -155,6 +155,14 @@ extern int sysctl_hardlockup_all_cpu_bac
> > #define sysctl_softlockup_all_cpu_backtrace 0
> > #define sysctl_hardlockup_all_cpu_backtrace 0
> > #endif
> > +
> > +#if defined(CONFIG_HARDLOCKUP_CHECK_TIMESTAMP) && \
> > + defined(CONFIG_HARDLOCKUP_DETECTOR)
> > +void watchdog_update_hrtimer_threshold(u64 period);
> > +#else
> > +static inline void watchdog_update_hrtimer_threshold(u64 period) { }
> > +#endif
> > +
> > extern bool is_hardlockup(void);
> > struct ctl_table;
> > extern int proc_watchdog(struct ctl_table *, int ,
> > --- a/kernel/watchdog.c
> > +++ b/kernel/watchdog.c
> > @@ -161,6 +161,7 @@ static void set_sample_period(void)
> > * hardlockup detector generates a warning
> > */
> > sample_period = get_softlockup_thresh() * ((u64)NSEC_PER_SEC / 5);
> > + watchdog_update_hrtimer_threshold(sample_period);
> > }
> >
> > /* Commands for resetting the watchdog */
> > --- a/kernel/watchdog_hld.c
> > +++ b/kernel/watchdog_hld.c
> > @@ -70,6 +70,54 @@ void touch_nmi_watchdog(void)
> > }
> > EXPORT_SYMBOL(touch_nmi_watchdog);
> >
> > +#ifdef CONFIG_HARDLOCKUP_CHECK_TIMESTAMP
> > +static DEFINE_PER_CPU(ktime_t, last_timestamp);
> > +static ktime_t watchdog_hrtimer_sample_threshold __read_mostly;
> > +
> > +void watchdog_update_hrtimer_threshold(u64 period)
> > +{
> > + /*
> > + * The hrtimer runs with a period of (watchdog_threshold * 2) / 5
> > + *
> > + * So it runs effectively with 2.5 times the rate of the NMI
> > + * watchdog. That means the hrtimer should fire 2-3 times before
> > + * the NMI watchdog expires. The NMI watchdog on x86 is based on
> > + * unhalted CPU cycles, so if Turbo-Mode is enabled the CPU cycles
> > + * might run way faster than expected and the NMI fires in a
> > + * smaller period than the one deduced from the nominal CPU
> > + * frequency. Depending on the Turbo-Mode factor this might be fast
> > + * enough to get the NMI period smaller than the hrtimer watchdog
> > + * period and trigger false positives.
> > + *
> > + * The sample threshold is used to check in the NMI handler whether
> > + * the minimum time between two NMI samples has elapsed. That
> > + * prevents false positives.
> > + *
> > + * Set this to 4/5 of the actual watchdog threshold period so the
> > + * hrtimer is guaranteed to fire at least once within the real
> > + * watchdog threshold.
> > + */
> > + watchdog_hrtimer_sample_threshold = period * 2;
> > +}
> > +
> > +static bool watchdog_check_timestamp(void)
> > +{
> > + ktime_t delta, now = ktime_get_mono_fast_ns();
> > +
> > + delta = now - __this_cpu_read(last_timestamp);
> > + if (delta < watchdog_hrtimer_sample_threshold)
> > + return false;
> > + __this_cpu_write(last_timestamp, now);
> > + return true;
> > +}
> > +#else
> > +static inline bool watchdog_check_timestamp(void)
> > +{
> > + return true;
> > +}
> > +#endif
> > +
> > +
> > static struct perf_event_attr wd_hw_attr = {
> > .type = PERF_TYPE_HARDWARE,
> > .config = PERF_COUNT_HW_CPU_CYCLES,
> > @@ -94,6 +142,9 @@ static void watchdog_overflow_callback(s
> > return;
> > }
> >
> > + if (!watchdog_check_timestamp())
> > + return;
> > +
> > /* check for a hardlockup
> > * This is done by making sure our timer interrupt
> > * is incrementing. The timer interrupt should have
> > --- a/lib/Kconfig.debug
> > +++ b/lib/Kconfig.debug
> > @@ -806,6 +806,9 @@ config HARDLOCKUP_DETECTOR
> > depends on LOCKUP_DETECTOR && !HAVE_NMI_WATCHDOG
> > depends on PERF_EVENTS && HAVE_PERF_EVENTS_NMI
> >
> > +config HARDLOCKUP_CHECK_TIMESTAMP
> > + bool
> > +
> > config BOOTPARAM_HARDLOCKUP_PANIC
> > bool "Panic (Reboot) On Hard Lockups"
> > depends on HARDLOCKUP_DETECTOR
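
For completeness, this is how I read the filter while we test it: with
the default watchdog_thresh of 10s the hrtimer sample period is 4s, so
watchdog_hrtimer_sample_threshold becomes 8s, and any NMI that arrives
less than 8s after the last accepted one is skipped instead of being
checked for a hard lockup. A stand-alone user-space sketch of that logic
(my own simplified rewrite with hand-fed timestamps and a hard-coded
threshold, not the kernel code itself):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

static uint64_t last_timestamp;
/* 2 * sample_period = 4/5 of the 10s watchdog threshold */
static uint64_t sample_threshold = 8ULL * NSEC_PER_SEC;

/* Mirrors watchdog_check_timestamp() above: drop samples that arrive
 * sooner than the threshold after the previously accepted one. */
static bool check_timestamp(uint64_t now)
{
	if (now - last_timestamp < sample_threshold)
		return false;		/* NMI fired too early: ignore it */
	last_timestamp = now;
	return true;			/* enough time has passed: do the check */
}

int main(void)
{
	/* NMI arrival times in seconds; the 13s sample comes only 3s after
	 * the previous one, as a Turbo-accelerated counter might produce. */
	double samples[] = { 10.0, 13.0, 20.0, 30.0 };

	for (unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		uint64_t now = (uint64_t)(samples[i] * NSEC_PER_SEC);

		printf("NMI at %5.1fs -> %s\n", samples[i],
		       check_timestamp(now) ? "check for hardlockup" : "skipped");
	}
	return 0;
}

With those numbers the Turbo-shortened 3s gap is simply ignored, so
hrtimer_interrupts gets a chance to advance before the next real check
and the spurious hard lockup goes away.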