[PATCH] locking/lock_events: Use this_cpu_add() when necessary

From: Waiman Long
Date: Wed May 22 2019 - 11:43:06 EST


The kernel test robot has reported that the use of __this_cpu_add()
causes bug messages like:

BUG: using __this_cpu_add() in preemptible [00000000] code: ...

This is only an issue on preempt kernels, where preemption can happen
in the middle of a multi-instruction percpu operation. It is not an
issue on x86, where the percpu operation is a single instruction. Update
the lock events code to use the slower this_cpu_add() on non-x86
preempt kernels, or whenever CONFIG_DEBUG_PREEMPT is defined.
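
For reference, on architectures without a single-instruction percpu
op, the generic fallbacks in include/asm-generic/percpu.h expand
roughly as follows (a simplified sketch, not the exact code):

	/* __this_cpu_add(pcp, val): unprotected read-modify-write */
	__this_cpu_preempt_check("add");  /* source of the BUG splat above */
	*raw_cpu_ptr(&(pcp)) += (val);    /* load, add, store */

	/* this_cpu_add(pcp, val): the same RMW with the window closed */
	raw_local_irq_save(flags);
	*raw_cpu_ptr(&(pcp)) += (val);
	raw_local_irq_restore(flags);

If the task is preempted and migrated between the load and the store
of the raw variant, the store still targets the old CPU's counter and
can clobber a concurrent update there, so counts may be lost.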

Fixes: a8654596f0371 ("locking/rwsem: Enable lock event counting")
Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
---
kernel/locking/lock_events.h | 27 +++++++++++++++++++++++++--
1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lock_events.h b/kernel/locking/lock_events.h
index feb1acc54611..2b6c8b7588dc 100644
--- a/kernel/locking/lock_events.h
+++ b/kernel/locking/lock_events.h
@@ -30,13 +30,36 @@ enum lock_events {
*/
DECLARE_PER_CPU(unsigned long, lockevents[lockevent_num]);

+/*
+ * The purpose of the lock event counting subsystem is to provide a low
+ * overhead way to record the number of specific locking events by using
+ * percpu counters. It is the percpu sum that matters, not specifically
+ * how many of them happen on each CPU.
+ *
+ * On a !preempt kernel, we can just use __this_cpu_{inc|add}() as preemption
+ * won't happen in the middle of the percpu operation. On a preempt kernel,
+ * it depends on whether the percpu operation is atomic (1 instruction)
+ * or not. We know x86 generates a single instruction for a percpu op, but
+ * we can't guarantee that for other architectures. We also need to use
+ * the slower this_cpu_{inc|add}() when CONFIG_DEBUG_PREEMPT is defined
+ * to make the checking code happy.
+ */
+#if defined(CONFIG_PREEMPT) && \
+ (defined(CONFIG_DEBUG_PREEMPT) || !defined(CONFIG_X86))
+#define lockevent_percpu_inc(x) this_cpu_inc(x)
+#define lockevent_percpu_add(x, v) this_cpu_add(x, v)
+#else
+#define lockevent_percpu_inc(x) __this_cpu_inc(x)
+#define lockevent_percpu_add(x, v) __this_cpu_add(x, v)
+#endif
+
/*
* Increment the PV qspinlock statistical counters
*/
static inline void __lockevent_inc(enum lock_events event, bool cond)
{
if (cond)
- __this_cpu_inc(lockevents[event]);
+ lockevent_percpu_inc(lockevents[event]);
}

#define lockevent_inc(ev) __lockevent_inc(LOCKEVENT_ ##ev, true)
@@ -44,7 +67,7 @@ static inline void __lockevent_inc(enum lock_events event, bool cond)

static inline void __lockevent_add(enum lock_events event, int inc)
{
- __this_cpu_add(lockevents[event], inc);
+ lockevent_percpu_add(lockevents[event], inc);
}

#define lockevent_add(ev, c) __lockevent_add(LOCKEVENT_ ##ev, c)
--
2.18.1