diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index ad0ad07..cdaefdd 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -7,12 +7,18 @@
 #include <linux/types.h>
-#if (CONFIG_NR_CPUS < 256)
+#if (CONFIG_NR_CPUS < 128)
 typedef u8 __ticket_t;
 typedef u16 __ticketpair_t;
-#else
+#define TICKET_T_CMP_GE(a, b) (UCHAR_MAX / 2 >= (unsigned char)((a) - (b)))
+#elif (CONFIG_NR_CPUS < 32768)
 typedef u16 __ticket_t;
 typedef u32 __ticketpair_t;
+#define TICKET_T_CMP_GE(a, b) (USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
+#else
+typedef u32 __ticket_t;
+typedef u64 __ticketpair_t;
+#define TICKET_T_CMP_GE(a, b) (UINT_MAX / 2 >= (unsigned int)((a) - (b)))
 #endif
It is theoretically possible that a large number of CPUs (say 64 or
more with CONFIG_NR_CPUS < 128) can acquire a ticket from the lock
before the check for TICKET_T_CMP_GE() in tkt_spin_pass(). So the check
will fail even when there is a large number of CPUs contending for the
lock. The chance of this happening is, of course, extremely rare. This
is not an error, as the lock still works as it should without your
change.
Can you explain this more? How can you acquire the ticket and update at
the same time?

If a queue has been set, then you can't acquire the ticket, as the head
has a 1 appended to it.
+/*
+ * Return a pointer to the queue header associated with the specified lock,
+ * Return a pointer to the queue header associated with the specified lock,
+ * or return NULL if there is no queue for the lock or if the lock's queue
+ * is in transition.
+ */
+static struct tkt_q_head *tkt_q_find_head(arch_spinlock_t *asp)
+{
+	int i;
+	int start;
+
+	start = i = tkt_q_hash(asp);
+	do
+		if (tkt_q_heads[i].ref == asp)
+			return &tkt_q_heads[i];
+	while ((i = tkt_q_next_slot(i)) != start);
+	return NULL;
+}
With a table size of 256, you have to scan the whole table to find the
right head queue. This can be a significant overhead. I would suggest
setting a limit on how many entries it scans before it aborts, rather
than checking the whole table.
We could add a limit, but in practice I'm not sure that would be an
issue. I thought the same thing when I first saw this, but hitting most
of the list would require a large collision in the hash algorithm,
which could probably be fixed with a better hash.