Re: [LKP] [mm] c8c06efa8b5: -7.6% unixbench.score

From: Peter Zijlstra
Date: Thu Jan 08 2015 - 05:37:30 EST


On Thu, Jan 08, 2015 at 12:59:59AM -0800, Davidlohr Bueso wrote:
> > > > 721721 ± 1% +303.6% 2913110 ± 3% unixbench.time.voluntary_context_switches
> > > > 11767 ± 0% -7.6% 10867 ± 1% unixbench.score

> heh I was actually looking at the writer code. We really do:
>
> /* wait until we successfully acquire the lock */
> set_current_state(TASK_UNINTERRUPTIBLE);
> while (true) {
>         if (rwsem_try_write_lock(count, sem))
>                 break;
>         raw_spin_unlock_irq(&sem->wait_lock);
>
>         /* Block until there are no active lockers. */
>         do {
>                 schedule();
>                 set_current_state(TASK_UNINTERRUPTIBLE);
>         } while ((count = sem->count) & RWSEM_ACTIVE_MASK);
>
>         raw_spin_lock_irq(&sem->wait_lock);
> }
>
>
> Which still has similar issues even with two barriers, I guess for both
> the rwsem_try_write_lock() call (less severe) and the sem->count checks.
> Anyway...
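
(If the barrier worry above is partly about that naked sem->count reload,
a minimal sketch of making the re-read explicit would use ACCESS_ONCE(),
as other lockless loops of this era do; whether that is the exact load
Davidlohr means is an assumption here:)

	/*
	 * Sketch only: force a fresh load of sem->count on every pass;
	 * the loop otherwise mirrors the quoted code above.
	 */
	do {
		schedule();
		set_current_state(TASK_UNINTERRUPTIBLE);
	} while ((count = ACCESS_ONCE(sem->count)) & RWSEM_ACTIVE_MASK);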

So it's actually scheduling a lot more; this could also mean the
optimistic spinning isn't working as well (I've no real idea what the
workload is).
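
(For context, a condensed sketch of the writer opt-spin loop, loosely
following rwsem_optimistic_spin() in kernel/locking/rwsem-xadd.c of about
this vintage; the can-spin check and the OSQ lock handling are trimmed:)

	static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
	{
		struct task_struct *owner;
		bool taken = false;

		preempt_disable();
		/* rwsem_can_spin_on_owner() and osq_lock() elided */

		while (true) {
			owner = ACCESS_ONCE(sem->owner);
			/* keep spinning while the owner is on a CPU */
			if (owner && !rwsem_spin_on_owner(sem, owner))
				break;

			/* try to take the lock without queueing */
			if (rwsem_try_write_lock_unqueued(sem)) {
				taken = true;
				break;
			}

			/*
			 * No owner published: either reader-held, or a
			 * writer that has not set sem->owner yet, so we
			 * cannot tell whether spinning is worthwhile.
			 */
			if (!owner && (need_resched() || rt_task(current)))
				break;

			cpu_relax_lowlatency();
		}
		preempt_enable();
		return taken;
	}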

One thing I noticed is that we set sem->owner very late compared with
the mutex code; this could cause us to break out of the spin loop
prematurely.
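
For reference, the owner is only published from the top-level wrapper;
roughly, from kernel/locking/rwsem.c at this point in time:

	void __sched down_write(struct rw_semaphore *sem)
	{
		might_sleep();
		rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);

		/*
		 * The lock is actually taken inside __down_write(),
		 * possibly after sleeping in the slowpath; only once
		 * LOCK_CONTENDED() returns do we publish the owner.
		 */
		LOCK_CONTENDED(sem, __down_write_trylock, __down_write);
		rwsem_set_owner(sem);
	}

So there is a window where the lock is write-held but sem->owner is still
NULL; a spinner sampling sem->owner in that window sees NULL and gives up,
queueing itself to sleep instead, which would line up with the ~4x jump in
voluntary context switches above.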