[tip: locking/core] locking/percpu-rwsem: Move __this_cpu_inc() into the slowpath

From: tip-bot2 for Peter Zijlstra
Date: Tue Feb 11 2020 - 07:49:12 EST


The following commit has been merged into the locking/core branch of tip:

Commit-ID: 71365d40232110f7b029befc9033ea311d680611
Gitweb: https://git.kernel.org/tip/71365d40232110f7b029befc9033ea311d680611
Author: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
AuthorDate: Wed, 30 Oct 2019 20:17:51 +01:00
Committer: Ingo Molnar <mingo@xxxxxxxxxx>
CommitterDate: Tue, 11 Feb 2020 13:10:54 +01:00

locking/percpu-rwsem: Move __this_cpu_inc() into the slowpath

In preparation for reworking __percpu_down_read(), move the
__this_cpu_inc() into it: the fast path then increments the per-CPU
read_count only when rcu_sync_is_idle(), while the slowpath performs
its own increment.
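
For context, a minimal user-space model of the pattern this patch
establishes (an illustrative sketch, not kernel code: model_rwsem,
writer_active and the plain atomic counter are hypothetical stand-ins
for the per-CPU read_count, the rcu_sync state and the real barriers):

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct percpu_rw_semaphore. */
struct model_rwsem {
	atomic_int  read_count;     /* models the per-CPU *sem->read_count */
	atomic_bool writer_active;  /* models !rcu_sync_is_idle(&sem->rss) */
};

/* Models __percpu_down_read(): after this patch it increments itself. */
static void model_down_read_slow(struct model_rwsem *sem)
{
	atomic_fetch_add(&sem->read_count, 1); /* moved here from the caller */
	/* ... full barrier and writer handshake would follow here ... */
}

/* Models percpu_down_read(): the fast path keeps only the increment. */
static void model_down_read(struct model_rwsem *sem)
{
	if (!atomic_load(&sem->writer_active))     /* the likely() fast path */
		atomic_fetch_add(&sem->read_count, 1);
	else
		model_down_read_slow(sem);         /* slowpath owns its inc */
}

Before the patch the increment sat unconditionally in front of the
rcu_sync_is_idle() check, so the slowpath inherited an already-bumped
counter; after it, each path performs exactly one increment of its own.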

Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
Reviewed-by: Davidlohr Bueso <dbueso@xxxxxxx>
Acked-by: Will Deacon <will@xxxxxxxxxx>
Acked-by: Waiman Long <longman@xxxxxxxxxx>
Tested-by: Juri Lelli <juri.lelli@xxxxxxxxxx>
Link: https://lkml.kernel.org/r/20200131151540.041600199@xxxxxxxxxxxxx
---
include/linux/percpu-rwsem.h | 10 ++++++----
kernel/locking/percpu-rwsem.c | 2 ++
2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
index 4ceaa19..bb5b71c 100644
--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ -59,8 +59,9 @@ static inline void percpu_down_read(struct percpu_rw_semaphore *sem)
 	 * and that once the synchronize_rcu() is done, the writer will see
 	 * anything we did within this RCU-sched read-side critical section.
 	 */
-	__this_cpu_inc(*sem->read_count);
-	if (unlikely(!rcu_sync_is_idle(&sem->rss)))
+	if (likely(rcu_sync_is_idle(&sem->rss)))
+		__this_cpu_inc(*sem->read_count);
+	else
 		__percpu_down_read(sem, false); /* Unconditional memory barrier */
 	/*
 	 * The preempt_enable() prevents the compiler from
@@ -77,8 +78,9 @@ static inline bool percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
 	/*
 	 * Same as in percpu_down_read().
 	 */
-	__this_cpu_inc(*sem->read_count);
-	if (unlikely(!rcu_sync_is_idle(&sem->rss)))
+	if (likely(rcu_sync_is_idle(&sem->rss)))
+		__this_cpu_inc(*sem->read_count);
+	else
 		ret = __percpu_down_read(sem, true); /* Unconditional memory barrier */
 	preempt_enable();
 	/*
diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
index 969389d..becf925 100644
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -47,6 +47,8 @@ EXPORT_SYMBOL_GPL(percpu_free_rwsem);
 
 bool __percpu_down_read(struct percpu_rw_semaphore *sem, bool try)
 {
+	__this_cpu_inc(*sem->read_count);
+
 	/*
 	 * Due to having preemption disabled the decrement happens on
 	 * the same CPU as the increment, avoiding the
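
Note the resulting calling convention: __percpu_down_read() now bumps
read_count itself, so the fast paths in percpu_down_read() and
percpu_down_read_trylock() touch the per-CPU counter only when
rcu_sync_is_idle() reports no writer, and never increment before
entering the slowpath. Preemption stays disabled across both paths, so
the matching decrement in percpu_up_read() still runs on the same CPU
as the increment, which is the property the comment in
__percpu_down_read() relies on.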