Pardon me for barging in, but I found this whole interchange extremely
confusing...
On Sat, 8 Jul 2017, Ingo Molnar wrote:

> * Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx> wrote:
>
> > On Sat, Jul 08, 2017 at 10:35:43AM +0200, Ingo Molnar wrote:
> > >
> > > * Manfred Spraul <manfred@xxxxxxxxxxxxxxxx> wrote:
> > >
> > > > Hi Ingo,
> > > >
> > > > On 07/07/2017 10:31 AM, Ingo Molnar wrote:
> > > > >
> > > > > There's another, probably just as significant advantage: queued_spin_unlock_wait()
> > > > > is 'read-only', while spin_lock()+spin_unlock() dirties the lock cache line. On
> > > > > any bigger system this should make a very measurable difference - if
> > > > > spin_unlock_wait() is ever used in a performance critical code path.
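
To make the contrast concrete, here is a minimal sketch of the two
waiting idioms being compared (illustrative only; the helper names are
mine, spin_unlock_wait() still existed at this point in the thread, and
'lock' stands for any per-object spinlock):

#include <linux/spinlock.h>

/*
 * Read-only wait: spins on loads of the lock word, so the
 * cacheline can stay in shared state among the waiting CPUs.
 */
static void wait_readonly(spinlock_t *lock)
{
	spin_unlock_wait(lock);
}

/*
 * Lock+unlock wait: the acquisition writes the lock word,
 * pulling the cacheline exclusive into the waiting CPU.
 */
static void wait_dirtying(spinlock_t *lock)
{
	spin_lock(lock);
	spin_unlock(lock);
}
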
> > > > At least for ipc/sem:
> > > > Dirtying the cacheline (in the slow path) allows to remove a smp_mb() in the
> > > > hot path.
> > > > So for sem_lock(), I either need a primitive that dirties the cacheline or
> > > > sem_lock() must continue to use spin_lock()/spin_unlock().
"smp_mb()" instead of "spin_lock()/spin_unlock()"?
> > > Technically you could use spin_trylock()+spin_unlock() and avoid the lock acquire
> > > spinning on spin_unlock() and get very close to the slow path performance of a
> > > pure cacheline-dirtying behavior.

> I agree :-)

This is even more confusing.  Did Ingo mean to suggest that using
"spin_trylock()+spin_unlock()" in place of "spin_lock()+spin_unlock()"
could provide the desired ordering guarantee without delaying other
CPUs that may try to acquire the lock? That seems highly questionable.
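
To make the question concrete, here is the trylock-based idiom under
one possible reading of Ingo's suggestion (a sketch of my own, not
anyone's proposed patch):

#include <linux/spinlock.h>

static void trylock_based_wait(spinlock_t *lock)
{
	if (spin_trylock(lock)) {
		/* The lock was free; we dirtied the line, move on. */
		spin_unlock(lock);
		return;
	}
	/*
	 * But if spin_trylock() fails, another CPU still holds the
	 * lock, and we return without having waited for its critical
	 * section to finish - which is precisely the guarantee that
	 * spin_unlock_wait() or spin_lock()+spin_unlock() exists to
	 * provide.
	 */
}

Unless the failed-trylock case loops until the lock is observed free,
the waiter gets no ordering against the current lock holder, which
seems to be the heart of the objection.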