Re: [PATCH v2 1/2] spinlock: New spinlock_refcount.h for lockless update of refcount
From: Waiman Long
Date: Wed Jun 26 2013 - 17:07:27 EST
On 06/26/2013 04:17 PM, Andi Kleen wrote:
>> + * The combined data structure is 8-byte aligned. So proper placement of this
>> + * structure in the larger embedding data structure is needed to ensure that
>> + * there is no hole in it.
> On i386, u64 is only 4-byte aligned. So you need to explicitly align
> it to 8 bytes. Otherwise you risk the two members crossing a cache line,
> which would be really expensive with atomics.
Do you mean the original i386, or the i586 and later processors that most
distributions now target? If it is the former, I recall that i386 is no
longer supported.

I also looked around at some existing code that uses cmpxchg64. It doesn't
seem to use explicit alignment. I will need to investigate further to see
if this is a real problem.
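
If it turns out to be needed, an explicit alignment attribute on the
combined type should be enough. An untested sketch; the union name here is
made up, and __aligned() is the kernel's alignment attribute macro:

	union lock_refcount {
		u64	__lock_count;		/* whole pair, for cmpxchg64() */
		struct {
			arch_spinlock_t	lock;	/* at most 4 bytes, per the BUG_ON */
			unsigned int	count;	/* the reference count */
		};
	} __aligned(8);		/* force 8-byte alignment even on i386 */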
>> +	/*
>> +	 * Code doesn't work if raw spinlock is larger than 4 bytes
>> +	 * or is empty.
>> +	 */
>> +	BUG_ON((sizeof(arch_spinlock_t) > 4) || (sizeof(arch_spinlock_t) == 0));
> BUILD_BUG_ON
Thanks for the suggestion; I will make the change.
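
That is, the size check becomes a compile-time failure instead of a
runtime panic:

	/* Both operands are constants, so this costs nothing at runtime */
	BUILD_BUG_ON((sizeof(arch_spinlock_t) > 4) ||
		     (sizeof(arch_spinlock_t) == 0));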
>> +
>> +	spin_unlock_wait(plock);	/* Wait until lock is released */
>> +	old.__lock_count = ACCESS_ONCE(*plockcnt);
>> +	get_lock = ((threshold >= 0) && (old.count == threshold));
>> +	if (likely(!get_lock && spin_can_lock(&old.lock))) {
> What is that for? Why can't you do the CMPXCHG unconditionally?
An unconditional CMPXCHG can be as bad as acquiring the spinlock, so we
need to check that the conditions are right before doing the actual CMPXCHG.
> If it's really needed, it is most likely a race?
If there is a race going on between threads, the code will fall back to
the old way of acquiring the spinlock.
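
To make the intent concrete, here is a rough, untested sketch of the fast
path, reusing the names from the quoted patch (the plain increment is just
for illustration):

	old.__lock_count = ACCESS_ONCE(*plockcnt);
	get_lock = ((threshold >= 0) && (old.count == threshold));
	if (likely(!get_lock && spin_can_lock(&old.lock))) {
		new.__lock_count = old.__lock_count;
		new.count++;		/* illustrative refcount update */
		if (cmpxchg64(plockcnt, old.__lock_count,
			      new.__lock_count) == old.__lock_count)
			return 1;	/* lockless update succeeded */
	}
	return 0;			/* caller takes the spinlock instead */

Even a failed cmpxchg64() pulls the cache line in exclusive state, so
issuing it blindly under contention would cost nearly as much as just
taking the lock; the cheap reads beforehand filter out attempts that are
bound to fail.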
> The duplicated code should likely be an inline.
The duplicated code is only used once in the function. I don't think an
additional inline is really needed, but I can do it if other people also
think it is a good idea.
>> +/*
>> + * The presence of either of the CONFIG_DEBUG_SPINLOCK or
>> + * CONFIG_DEBUG_LOCK_ALLOC configuration options will force the
>> + * spinlock_t structure to be 8-byte aligned.
>> + *
>> + * To support the spinlock/reference count combo data type for 64-bit SMP
>> + * environments with spinlock debugging turned on, the reference count has
>> + * to be integrated into the spinlock_t data structure in this special case.
>> + * The spinlock_t data type will be 8 bytes larger if CONFIG_GENERIC_LOCKBREAK
>> + * is also defined.
> I would rather just disable the optimization when these CONFIGs are set.
Looking at it from the other perspective, we may want the locking code to
behave the same whether spinlock debugging is enabled or not. Disabling
the optimization would make the code paths differ, which may not be what
we want. Of course, I can change it if other people also think that is
the right way to do it.
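
For reference, my understanding of that suggestion is roughly the
following, with a hypothetical helper name and signature (the cmpxchg64()
fast path would sit in the corresponding #else branch):

	#if defined(CONFIG_DEBUG_SPINLOCK) || defined(CONFIG_DEBUG_LOCK_ALLOC)
	/*
	 * Spinlock debugging enlarges spinlock_t beyond 4 bytes, so
	 * compile out the lockless fast path; returning 0 sends every
	 * caller straight to the spin_lock() slow path.
	 */
	static inline int lockcnt_try_update(u64 *plockcnt, int threshold)
	{
		return 0;
	}
	#endif

That would keep the header simple, but a debug kernel would then never
exercise the cmpxchg64() path at all, which is the behavioral difference
I am concerned about.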
Regards,
Longman