Re: [PATCH 3/5] locking/rwsem: Enable reader optimistic lock stealing
From: Peter Zijlstra
Date: Fri Nov 20 2020 - 09:36:38 EST
On Tue, Nov 17, 2020 at 10:04:27PM -0500, Waiman Long wrote:
> diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
> index ee374ae061c3..930dd4af3639 100644
> --- a/kernel/locking/rwsem.c
> +++ b/kernel/locking/rwsem.c
> @@ -957,6 +957,12 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
> }
> return false;
> }
> +
> +static inline bool osq_is_empty(struct rw_semaphore *sem)
> +{
> + return !osq_is_locked(&sem->osq);
> +}
> +
> #else
> static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem,
> unsigned long nonspinnable)
> @@ -977,6 +983,10 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
> return false;
> }
>
> +static inline bool osq_is_empty(struct rw_semaphore *sem)
> +{
> + return false;
> +}
Hurph, the naming suggests this ought to live in osq_lock.h, but it
really is part of rwsem: it captures the lack of an osq member in this
configuration.

How about rwsem_no_spinners() instead?
> static inline int
> rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
> {
> @@ -1007,6 +1017,22 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
> !(count & RWSEM_WRITER_LOCKED))
> goto queue;
>
> + /*
> + * Reader optimistic lock stealing
> + *
> + * We can take the read lock directly without doing
> + * rwsem_optimistic_spin() if the conditions are right.
> + * Also wake up other readers if it is the first reader.
> + */
> + if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF)) &&
> + osq_is_empty(sem)) {
> + rwsem_set_reader_owned(sem);
> + lockevent_inc(rwsem_rlock_steal);
> + if (rcnt == 1)
> + goto wake_readers;
> + return sem;
> + }
AFAICT this saves at least 3 atomic ops; how common is this case?
(You did add a counter, but forgot to mention any numbers.)