Re: [RFC] Cancellable MCS spinlock rework
From: Jason Low
Date: Thu Jul 03 2014 - 00:39:27 EST
On Wed, 2014-07-02 at 10:30 -0700, Jason Low wrote:
> On Wed, 2014-07-02 at 19:23 +0200, Peter Zijlstra wrote:
> > On Wed, Jul 02, 2014 at 09:59:16AM -0700, Jason Low wrote:
> > >
> > > Why I converted pointers to atomic_t?
> > >
> > > This would avoid the potentially racy ACCESS_ONCE stores + cmpxchg while
> > > also incurring less overhead, since atomic_t is typically only 32 bits
> > > while pointers can be 64 bits.
> >
> > So no real good reason.. The ACCESS_ONCE stores + cmpxchg stuff is
> > likely broken all over the place, and 'fixing' this one place doesn't
> > cure the problem.
>
> Right, fixing the ACCESS_ONCE + cmpxchg and avoiding the architecture
> workarounds for optimistic spinning was just a nice side effect.
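
For readers outside the thread, a minimal user-space sketch of the idea
being discussed: instead of queueing spinners via a node *pointer* (which
forces pointer-sized cmpxchg and the per-architecture workarounds), the
tail can hold a 32-bit "CPU number + 1" that indexes a per-CPU node array,
with 0 meaning "no spinner". The names below (encode_cpu, decode_cpu,
per_cpu_nodes) are illustrative stand-ins, not the kernel's actual API,
and C11 atomics stand in for the kernel's atomic_t:

#include <assert.h>
#include <stdatomic.h>

#define NR_CPUS 64
struct optimistic_spin_node { int locked; };
static struct optimistic_spin_node per_cpu_nodes[NR_CPUS];

/* Encode "CPU number + 1" so that 0 can mean "queue empty". */
static inline int encode_cpu(int cpu) { return cpu + 1; }

static inline struct optimistic_spin_node *decode_cpu(int val)
{
        return val ? &per_cpu_nodes[val - 1] : (struct optimistic_spin_node *)0;
}

int main(void)
{
        _Atomic int tail = 0;   /* 32-bit tail, 0 == no spinner queued */

        /* Queue CPU 3's node with a 32-bit exchange, no pointer ops. */
        int old = atomic_exchange(&tail, encode_cpu(3));
        assert(old == 0);                       /* queue was empty */
        assert(decode_cpu(atomic_load(&tail)) == &per_cpu_nodes[3]);
        return 0;
}

All queueing traffic then goes through 32-bit atomic operations, which
every architecture supports natively.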
>
> Would potentially reducing the size of the rw semaphore structure by 32
> bits (for all architectures using optimistic spinning) be a nice
> benefit?
And due to padding, the additional modification below reduces the
size of struct rw_semaphore by 64 bits on my machine :)
struct rw_semaphore {
long count;
- raw_spinlock_t wait_lock;
struct list_head wait_list;
+ raw_spinlock_t wait_lock;
#ifdef CONFIG_SMP
+ struct optimistic_spin_tail osq; /* spinner MCS lock */
/*
* Write owner. Used as a speculative check to see
* if the owner is running on the cpu.
*/
struct task_struct *owner;
- struct optimistic_spin_tail osq; /* spinner MCS lock */
#endif
#ifdef CONFIG_DEBUG_LOCK_ALLOC
struct lockdep_map dep_map;
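
The size saving from the reordering above can be demonstrated with a small
user-space sketch. The types here are hypothetical stand-ins sized like
their kernel counterparts on a typical LP64 build (4-byte spinlock, 4-byte
osq tail, 8-byte pointers), not the real kernel definitions:

#include <stdio.h>

typedef struct { int lock; } fake_raw_spinlock_t;       /* 4 bytes */
typedef struct { int tail; } fake_optimistic_spin_tail; /* 4 bytes */
typedef struct { void *next, *prev; } fake_list_head;   /* 16 bytes */

/* Old layout: each 4-byte field sits between 8-byte-aligned members,
 * so each one drags along 4 bytes of padding. */
struct rwsem_before {
        long count;                     /* 8 */
        fake_raw_spinlock_t wait_lock;  /* 4 + 4 padding */
        fake_list_head wait_list;       /* 16 */
        void *owner;                    /* 8 */
        fake_optimistic_spin_tail osq;  /* 4 + 4 tail padding */
};

/* New layout: the two 4-byte fields share a single 8-byte slot. */
struct rwsem_after {
        long count;                     /* 8 */
        fake_list_head wait_list;       /* 16 */
        fake_raw_spinlock_t wait_lock;  /* 4 */
        fake_optimistic_spin_tail osq;  /* 4 */
        void *owner;                    /* 8 */
};

int main(void)
{
        printf("before: %zu bytes\n", sizeof(struct rwsem_before));
        printf("after:  %zu bytes\n", sizeof(struct rwsem_after));
        return 0;
}

On an LP64 machine the reordered struct comes out 8 bytes (64 bits)
smaller, matching the saving reported above.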
--