Re: [PATCH v2 2/2] locking/rwsem: Make reader optimistic spinning optional

From: Peter Zijlstra
Date: Mon Sep 04 2023 - 11:11:13 EST


On Fri, Sep 01, 2023 at 10:07:04AM +0900, Bongkyu Kim wrote:

> diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
> index 9c0462d515c1..47c467880af5 100644
> --- a/kernel/locking/rwsem.c
> +++ b/kernel/locking/rwsem.c
> @@ -117,6 +117,17 @@
> # define DEBUG_RWSEMS_WARN_ON(c, sem)
> #endif
>
> +static bool __ro_after_init rwsem_opt_rspin;
> +
> +static int __init opt_rspin(char *str)
> +{
> +	rwsem_opt_rspin = true;
> +
> +	return 0;
> +}
> +
> +early_param("rwsem.opt_rspin", opt_rspin);
> +
> /*
> * On 64-bit architectures, the bit definitions of the count are:
> *
> @@ -1083,7 +1094,7 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
> 	return false;
> }
>
> -static inline bool rwsem_no_spinners(sem)
> +static inline bool rwsem_no_spinners(struct rw_semaphore *sem)
> {
> 	return false;
> }
> @@ -1157,6 +1168,9 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
> 		return sem;
> 	}
>
> +	if (!IS_ENABLED(CONFIG_RWSEM_SPIN_ON_OWNER) || !rwsem_opt_rspin)
> +		goto queue;
> +

At the very least this should be a static_branch(), but I still very
much want an answer on how all this interacts with the handoff stuff.
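For illustration, an (untested) sketch of what the static_branch()
variant could look like, reusing the rwsem.opt_rspin name from the
patch; since the knob is set once at boot and never changes, a static
key turns the per-slowpath-entry load-and-test into a patched NOP:

/* needs <linux/jump_label.h>; early_param() comes from <linux/init.h> */
static DEFINE_STATIC_KEY_FALSE(rwsem_opt_rspin);

static int __init opt_rspin(char *str)
{
	/*
	 * Record the key state here; the branch sites themselves are
	 * patched once jump labels are initialized during boot.
	 */
	static_branch_enable(&rwsem_opt_rspin);
	return 0;
}
early_param("rwsem.opt_rspin", opt_rspin);

and the check in rwsem_down_read_slowpath() becomes:

	if (!IS_ENABLED(CONFIG_RWSEM_SPIN_ON_OWNER) ||
	    !static_branch_unlikely(&rwsem_opt_rspin))
		goto queue;

Booting with rwsem.opt_rspin on the command line would then enable
reader optimistic spinning. This mirrors other boot-time static keys
(e.g. init_on_alloc), where early_param() flips the key before the
branch sites go live.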