On 02/12, Raghavendra K T wrote:
@@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	 * check again make sure it didn't become free while
 	 * we weren't looking.
 	 */
-	if (ACCESS_ONCE(lock->tickets.head) == want) {
+	head = ACCESS_ONCE(lock->tickets.head);
+	if (__tickets_equal(head, want)) {
 		add_stats(TAKEN_SLOW_PICKUP, 1);
While at it, perhaps it makes sense to s/ACCESS_ONCE/READ_ONCE/, but that
change is purely cosmetic.
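Something like this, just a sketch with the rename folded in (READ_ONCE()
is the preferred spelling nowadays and has the same single-load semantics
here):

	head = READ_ONCE(lock->tickets.head);
	if (__tickets_equal(head, want)) {
		add_stats(TAKEN_SLOW_PICKUP, 1);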
We also need to change another user of enter_slow_path, xen_lock_spinning()
in arch/x86/xen/spinlock.c.
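Untested, but I'd expect the analogous fix there to look roughly like this,
assuming a local "__ticket_t head" is added on the xen side as well, the
same way the kvm patch does:

-	if (ACCESS_ONCE(lock->tickets.head) == want) {
+	head = READ_ONCE(lock->tickets.head);
+	if (__tickets_equal(head, want)) {
 		add_stats(TAKEN_SLOW_PICKUP, 1);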
Other than that, it looks correct at first glance... but this is up to the
maintainers.