[PATCH] mm: slub: Ensure that slab_unlock() is atomic
From: Vineet Gupta
Date: Tue Mar 08 2016 - 09:31:48 EST
We observed livelocks on an ARC SMP setup when running hackbench with SLUB.
This hardware configuration lacks atomic instructions (LLOCK/SCOND), so the
kernel resorts to a central @smp_bitops_lock to protect any R-M-W ops
such as test_and_set_bit().
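For reference, a sketch of what such a lock-backed test_and_set_bit() looks
like (modeled on the !LLSC bitops; bitops_lock()/bitops_unlock() are
hypothetical stand-ins for acquiring/releasing @smp_bitops_lock):

	static int test_and_set_bit(unsigned long nr, volatile unsigned long *m)
	{
		unsigned long old, mask = 1UL << (nr % BITS_PER_LONG);
		unsigned long flags;

		m += nr / BITS_PER_LONG;

		bitops_lock(flags);	/* take central @smp_bitops_lock */
		old = *m;
		*m = old | mask;	/* R-M-W is atomic only w.r.t. this lock */
		bitops_unlock(flags);

		return (old & mask) != 0;
	}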
The spinlock itself is implemented using the atomic [EX]change instruction,
which is always available.
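Conceptually it is a test-and-set loop of the following shape (a sketch
only; the real code is ARC inline asm around EX, approximated here with a
compiler builtin):

	static void arch_spin_lock(volatile unsigned int *lock)
	{
		/* swap in "locked"; retry until the old value was "unlocked" */
		while (__atomic_exchange_n(lock, 1, __ATOMIC_ACQUIRE) != 0)
			cpu_relax();
	}

	static void arch_spin_unlock(volatile unsigned int *lock)
	{
		__atomic_store_n(lock, 0, __ATOMIC_RELEASE);
	}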
The race happened when both cores tried to slab_lock() the same page.
	    c1				    c0
	-----------			-----------
	slab_lock
					slab_lock
	slab_unlock
					Not observing the unlock
This in turn happened because slab_unlock() doesn't serialize properly
(it doesn't use an atomic clear) with a concurrently running
slab_lock()->test_and_set_bit().
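The difference between the two unlock flavours is easiest to see side by
side (sketch only; bitops_lock() as above stands in for @smp_bitops_lock):

	/* __bit_spin_unlock() -> __clear_bit(): plain, unlocked R-M-W */
	static void __slab_unlock_sketch(unsigned long *word)
	{
		/*
		 * The other core's test_and_set_bit() may already have
		 * loaded the old word (PG_locked set) under the bitops
		 * lock; when it writes that word back, it re-sets
		 * PG_locked and this clear is lost -> livelock.
		 */
		*word &= ~(1UL << PG_locked);
	}

	/* bit_spin_unlock() -> clear_bit_unlock(): takes the same lock */
	static void slab_unlock_sketch(unsigned long *word)
	{
		unsigned long irqflags;

		bitops_lock(irqflags);	/* serializes with test_and_set_bit() */
		*word &= ~(1UL << PG_locked);
		bitops_unlock(irqflags);
	}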
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Noam Camus <noamc@xxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Cc: <linux-mm@xxxxxxxxx>
Cc: <linux-kernel@xxxxxxxxxxxxxxx>
Cc: <linux-snps-arc@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Vineet Gupta <vgupta@xxxxxxxxxxxx>
---
mm/slub.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index d8fbd4a6ed59..b7d345a508dc 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -345,7 +345,7 @@ static __always_inline void slab_lock(struct page *page)
static __always_inline void slab_unlock(struct page *page)
{
VM_BUG_ON_PAGE(PageTail(page), page);
- __bit_spin_unlock(PG_locked, &page->flags);
+ bit_spin_unlock(PG_locked, &page->flags);
}
static inline void set_page_slub_counters(struct page *page, unsigned long counters_new)
--
2.5.0