Re: [RFC PATCH 1/1] remove redundant compare, cmpxchg already does it
From: Peter Zijlstra
Date: Thu Jun 05 2014 - 03:22:56 EST
On Wed, Jun 04, 2014 at 04:56:50PM -0400, Andev wrote:
> On Wed, Jun 4, 2014 at 4:38 PM, Pranith Kumar <pranith@xxxxxxxxxx> wrote:
> > remove a redundant comparison
> >
> > Signed-off-by: Pranith Kumar <bobby.prani@xxxxxxxxx>
> > ---
> > kernel/locking/rwsem-xadd.c | 3 +--
> > 1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> > index 1f99664b..6f8bd3c 100644
> > --- a/kernel/locking/rwsem-xadd.c
> > +++ b/kernel/locking/rwsem-xadd.c
> > @@ -249,8 +249,7 @@ static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
> > {
> > if (!(count & RWSEM_ACTIVE_MASK)) {
> > /* try acquiring the write lock */
> > - if (sem->count == RWSEM_WAITING_BIAS &&
> > - cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
> > + if (cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
> > RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
>
> This was mainly done to avoid the cost of a cmpxchg in case where they
> are not equal. Not sure if it really makes a difference though.
It does, a cache hot cmpxchg instruction is 24 cycles (as is pretty much
any other LOCKed instruction, as measured on my WSM-EP); not to mention
that cmpxchg is a RMW, so it needs to grab the cacheline in exclusive
mode. A plain read, which allows the cacheline to remain in shared
state, and other non-LOCKed ops are way faster.