No, that's not what they claim. These are the claims:
"Queued spinlocks have another more significant advantage. As a
processor spins waiting for its spinlock field to have the low bit
cleared, it is spinning on memory private to its own processor."
We already do this in Linux. As mingo has explained: while spinning, we
don't use locked operations, so the cache line stays in the shared state
and there is *no* memory bus traffic.
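To make that concrete, here is a rough test-and-test-and-set sketch in
portable C11, not our actual spin_lock() assembly (the myspinlock names
are made up for this example). The waiter issues one locked exchange to
try for the lock and otherwise spins with plain loads, so the line just
sits in everyone's cache in the shared state:

#include <stdatomic.h>

typedef struct {
	atomic_int locked;	/* 0 = free, 1 = held */
} myspinlock_t;

static void myspin_lock(myspinlock_t *lock)
{
	while (atomic_exchange_explicit(&lock->locked, 1,
					memory_order_acquire)) {
		/*
		 * Lock is held: spin with plain loads only, so the line
		 * stays in the shared state and generates no bus traffic
		 * until the owner writes to it on release.
		 */
		while (atomic_load_explicit(&lock->locked,
					    memory_order_relaxed))
			;	/* a cpu_relax()/pause hint would go here */
	}
}

static void myspin_unlock(myspinlock_t *lock)
{
	atomic_store_explicit(&lock->locked, 0, memory_order_release);
}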
"When a processor busy waits for a standard spinlock it spins on the
global spinlock itself, which is shared by all the processors. Thus,
queued spinlocks have better multiprocessor bus characteristics
because there is no shared cache-line access during the busy wait."
Shared cache-line access does not by itself mean SMP memory bus traffic.
There is no traffic at all while everyone agrees the line is in the
shared state.
"In addition, because of the queuing nature of queued spinlocks, there
are typically fewer bus lock operations than for standard spinlocks
when a lock is under contention from several processors."
Here is the only real step forward. But it is only a net gain when
there are multiple processors spinning, i.e. you need at least 3
processors: 1 holding the lock and at least 2 waiting for it. Even then
it is only a gain for a few cycles, while the first processor releases
the lock and the others contend to acquire it. Once ownership is
decided, there is again no bus traffic.
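For reference, this is roughly what a queued (MCS-style) lock looks
like; a userspace C11 sketch of the general technique, not NT's code or
anything in our tree. Each waiter spins on its own qnode, joining the
queue costs a single locked exchange, and release hands the lock
directly to the next waiter:

#include <stdatomic.h>
#include <stddef.h>

struct qnode {
	_Atomic(struct qnode *) next;
	atomic_int locked;		/* 1 = still waiting */
};

typedef struct {
	_Atomic(struct qnode *) tail;	/* NULL = lock free */
} mcs_lock_t;

static void mcs_lock(mcs_lock_t *lock, struct qnode *me)
{
	struct qnode *prev;

	atomic_store_explicit(&me->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&me->locked, 1, memory_order_relaxed);

	/* One locked operation to join the queue. */
	prev = atomic_exchange_explicit(&lock->tail, me,
					memory_order_acq_rel);
	if (!prev)
		return;			/* lock was free, we own it */

	atomic_store_explicit(&prev->next, me, memory_order_release);

	/* Spin on our own node only: memory private to this waiter. */
	while (atomic_load_explicit(&me->locked, memory_order_acquire))
		;
}

static void mcs_unlock(mcs_lock_t *lock, struct qnode *me)
{
	struct qnode *next = atomic_load_explicit(&me->next,
						  memory_order_acquire);

	if (!next) {
		struct qnode *expected = me;

		/* No visible successor: try to mark the lock free. */
		if (atomic_compare_exchange_strong_explicit(&lock->tail,
				&expected, NULL,
				memory_order_acq_rel, memory_order_acquire))
			return;

		/* A successor is joining; wait for it to link itself. */
		while (!(next = atomic_load_explicit(&me->next,
						     memory_order_acquire)))
			;
	}

	/* Hand the lock directly to the next waiter. */
	atomic_store_explicit(&next->locked, 0, memory_order_release);
}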
As a Linux design goal, if there is so much contention that 3
processors want to lock a resource at the same time, something is
already amiss. Contention is supposed to be rare. If more than "1 and
a bit" processors want to lock a resource, we try to write the code in
a more scalable way that doesn't require the lock.
In cases where a lot of contention is unavoidable, semaphores are used
instead. Note that our semaphores _do_ use a queued implementation.
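For example, something like this (a rough usage sketch only; the exact
semaphore macros and headers vary between kernel versions), where each
contending task sleeps on the semaphore's wait queue instead of
spinning:

#include <linux/errno.h>
#include <asm/semaphore.h>

static DECLARE_MUTEX(resource_sem);	/* counting semaphore, count = 1 */

static int use_resource(void)
{
	/* Sleep in the semaphore's queue if the resource is busy. */
	if (down_interruptible(&resource_sem))
		return -ERESTARTSYS;

	/* ... long-running work on the contended resource ... */

	up(&resource_sem);
	return 0;
}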
-- Jamie