[patch V2 09/29] lockup_detector/perf: Remove broken self disable on failure

From: Thomas Gleixner
Date: Tue Sep 12 2017 - 15:56:36 EST


The self disabling feature is broken vs. CPU hotplug locking:

CPU 0                           CPU 1
cpus_write_lock();
cpu_up(1)
  wait_for_completion()
                                ....
                                unpark_watchdog()
                                ->unpark()
                                perf_event_create() <- fails
                                  watchdog_enable &= ~NMI_WATCHDOG;
....
cpus_write_unlock();

                                CPU 2
cpus_write_lock()
cpu_down(2)
  wait_for_completion()
                                wakeup(watchdog);
                                watchdog()
                                if (!(watchdog_enable & NMI_WATCHDOG))
                                  watchdog_nmi_disable()
                                    perf_event_disable()
                                    ....
                                    cpus_read_lock();

stop_smpboot_threads()
  park_watchdog();
    wait_for_completion(watchdog->parked);

Result: End of hotplug and instantaneous full lockup of the machine.
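
For illustration only, here is a minimal user space analogue of that lock
ordering - a sketch with made-up names, using a pthread rwlock in place of
the percpu hotplug lock and a join in place of the completion. The
"hotplug" thread holds the writer side and waits for the "watchdog"
thread, which blocks on the reader side, so neither side can make
progress:

/*
 * Hypothetical user space analogue of the deadlock above - NOT kernel
 * code. Build with: gcc -pthread deadlock.c
 *
 * "hotplug" (main) takes the writer side of an rwlock and then waits
 * for "watchdog" to finish, while "watchdog" blocks on the reader side
 * of the same lock - the same shape as cpus_write_lock() vs.
 * cpus_read_lock() plus wait_for_completion() in the trace above.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_barrier_t started;

static void *watchdog_thread(void *arg)
{
	(void)arg;
	pthread_barrier_wait(&started);		/* "wakeup(watchdog)" */
	/* corresponds to cpus_read_lock() in watchdog_nmi_disable() */
	pthread_rwlock_rdlock(&hotplug_lock);	/* blocks forever */
	pthread_rwlock_unlock(&hotplug_lock);
	return NULL;
}

int main(void)
{
	pthread_t watchdog;

	pthread_barrier_init(&started, NULL, 2);
	pthread_create(&watchdog, NULL, watchdog_thread, NULL);

	/* corresponds to cpus_write_lock() in the hotplug path */
	pthread_rwlock_wrlock(&hotplug_lock);
	pthread_barrier_wait(&started);

	/* corresponds to wait_for_completion(watchdog->parked) */
	fprintf(stderr, "hotplug: waiting for watchdog to park...\n");
	pthread_join(watchdog, NULL);		/* never returns */

	pthread_rwlock_unlock(&hotplug_lock);
	return 0;
}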

There is a similar problem with disabling the watchdog via the user space
interface, as the sysctl function fiddles with watchdog_enabled directly.

It's very debatable whether this is required at all. The scenario it
handles is a watchdog that works nicely on N CPUs and then fails to enable
on CPU N + 1, either during hotplug or because the user space interface
disabled it via the sysctl cpumask, some perf user grabbed the counter in
the meantime and the counter is therefore unavailable for the watchdog
when the sysctl cpumask gets changed back.

There is no real justification for this.

One of the reasons WHY this is done is the utter stupidity of the init code
of the perf NMI watchdog. Instead of checking upfront at boot whether PERF
is available and functional at all, it just does this check at run time
over and over whenever user space fiddles with the sysctl. That's broken
beyond repair, along with the idiotic error-code-dependent warn level
printks and the even sillier printk rate limiting.

If the init code checks whether perf works at boot time, then this mess can
be avoided more or less completely. Perf does not magically come to life
at runtime. Brain usage while coding is overrated.
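
A rough sketch of that idea follows. The names hardlockup_perf_available
and hardlockup_detector_perf_probe() are made up for illustration and this
is not the interface introduced later in the series; it merely reuses
wd_hw_attr, watchdog_overflow_callback() and hw_nmi_get_sample_period(),
which already exist in kernel/watchdog_hld.c: probe perf once at boot,
remember the outcome and never repeat the check from the sysctl path.

/*
 * Sketch only: probe perf once at boot instead of re-checking from the
 * sysctl path. hardlockup_perf_available and
 * hardlockup_detector_perf_probe() are made-up names; wd_hw_attr,
 * watchdog_overflow_callback() and hw_nmi_get_sample_period() are the
 * existing bits in kernel/watchdog_hld.c.
 */
static bool hardlockup_perf_available __read_mostly;

static int __init hardlockup_detector_perf_probe(void)
{
	struct perf_event_attr *wd_attr = &wd_hw_attr;
	struct perf_event *evt;

	wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);

	/* Create a probe event on the boot CPU and tear it down again */
	evt = perf_event_create_kernel_counter(wd_attr, 0, NULL,
					       watchdog_overflow_callback,
					       NULL);
	if (IS_ERR(evt)) {
		pr_info("Perf NMI watchdog: perf unusable, disabling the hard lockup detector\n");
		return PTR_ERR(evt);
	}

	perf_event_release_kernel(evt);
	hardlockup_perf_available = true;
	return 0;
}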

Remove the cruft and add a temporary safeguard which gets removed later.

Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Don Zickus <dzickus@xxxxxxxxxx>
Cc: Chris Metcalf <cmetcalf@xxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Sebastian Siewior <bigeasy@xxxxxxxxxxxxx>
Cc: Nicholas Piggin <npiggin@xxxxxxxxx>
Cc: Ulrich Obergfell <uobergfe@xxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Link: http://lkml.kernel.org/r/20170831073053.680086886@xxxxxxxxxxxxx

---
 kernel/watchdog.c     | 15 ---------------
 kernel/watchdog_hld.c | 20 +++++++-------------
 2 files changed, 7 insertions(+), 28 deletions(-)

--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -485,21 +485,6 @@ static void watchdog(unsigned int cpu)
 	__this_cpu_write(soft_lockup_hrtimer_cnt,
 			 __this_cpu_read(hrtimer_interrupts));
 	__touch_watchdog();
-
-	/*
-	 * watchdog_nmi_enable() clears the NMI_WATCHDOG_ENABLED bit in the
-	 * failure path. Check for failures that can occur asynchronously -
-	 * for example, when CPUs are on-lined - and shut down the hardware
-	 * perf event on each CPU accordingly.
-	 *
-	 * The only non-obvious place this bit can be cleared is through
-	 * watchdog_nmi_enable(), so a pr_info() is placed there. Placing a
-	 * pr_info here would be too noisy as it would result in a message
-	 * every few seconds if the hardlockup was disabled but the softlockup
-	 * enabled.
-	 */
-	if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
-		watchdog_nmi_disable(cpu);
 }
 
 static struct smp_hotplug_thread watchdog_threads = {
--- a/kernel/watchdog_hld.c
+++ b/kernel/watchdog_hld.c
@@ -23,6 +23,7 @@ static DEFINE_PER_CPU(bool, watchdog_nmi
 static DEFINE_PER_CPU(struct perf_event *, watchdog_ev);
 
 static unsigned long hardlockup_allcpu_dumped;
+static bool hardlockup_detector_disabled;
 
 void arch_touch_nmi_watchdog(void)
 {
@@ -178,6 +179,10 @@ int watchdog_nmi_enable(unsigned int cpu
 	if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
 		goto out;
 
+	/* A failure disabled the hardlockup detector permanently */
+	if (hardlockup_detector_disabled)
+		return -ENODEV;
+
 	/* is it already setup and enabled? */
 	if (event && event->state > PERF_EVENT_STATE_OFF)
 		goto out;
@@ -206,18 +211,6 @@ int watchdog_nmi_enable(unsigned int cpu
 		goto out_save;
 	}
 
-	/*
-	 * Disable the hard lockup detector if _any_ CPU fails to set up
-	 * set up the hardware perf event. The watchdog() function checks
-	 * the NMI_WATCHDOG_ENABLED bit periodically.
-	 *
-	 * The barriers are for syncing up watchdog_enabled across all the
-	 * cpus, as clear_bit() does not use barriers.
-	 */
-	smp_mb__before_atomic();
-	clear_bit(NMI_WATCHDOG_ENABLED_BIT, &watchdog_enabled);
-	smp_mb__after_atomic();
-
 	/* skip displaying the same error again */
 	if (!firstcpu && (PTR_ERR(event) == firstcpu_err))
 		return PTR_ERR(event);
@@ -232,7 +225,8 @@ int watchdog_nmi_enable(unsigned int cpu
pr_err("disabled (cpu%i): unable to create perf event: %ld\n",
cpu, PTR_ERR(event));

- pr_info("Shutting down hard lockup detector on all cpus\n");
+ pr_info("Disabling hard lockup detector permanently\n");
+ hardlockup_detector_disabled = true;

return PTR_ERR(event);