[PATCH RFC] slub: do not drop slab_mutex for sysfs_slab_{add,remove}
From: Vladimir Davydov
Date: Thu Feb 06 2014 - 10:58:26 EST
When creating or destroying a kmem cache, we do a lot of work holding the
slab_mutex, yet we drop it around the sysfs_slab_{add,remove} calls for
some reason. Since __kmem_cache_create and __kmem_cache_shutdown are
called extremely rarely, I propose to simplify the locking by calling
sysfs_slab_{add,remove} without dropping the slab_mutex.
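
For context, the create path looks roughly like this today (a heavily
simplified sketch of mm/slab_common.c, not the actual code; allocation
details and error handling are elided):

	mutex_lock(&slab_mutex);
	s = kmem_cache_zalloc(kmem_cache, GFP_KERNEL);
	/* ... set name, size, align, ctor ... */
	err = __kmem_cache_create(s, flags);
	/* slab_mutex is currently dropped and retaken in there,
	 * around sysfs_slab_add() */
	mutex_unlock(&slab_mutex);

The lock drop in the middle is what this patch removes.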
I'm interested in this because, when creating a memcg cache, I need the
slab_mutex held until the cache is fully initialized and registered with
the memcg subsystem (i.e. until memcg_cache_register() has been called).
Otherwise I get races when several threads try to create a cache for the
same memcg. An alternative fix for my problem would be to move
sysfs_slab_{add,remove} to after the point where the slab_mutex is
dropped, but I'd like to try the shortest path first.
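
To illustrate, here is roughly the interleaving I hit (schematic only;
memcg_cache_register() is the registration step I'm adding on the memcg
side):

	Thread A                           Thread B
	--------                           --------
	mutex_lock(&slab_mutex);
	__kmem_cache_create(s, flags);
	  mutex_unlock(&slab_mutex);       mutex_lock(&slab_mutex);
	  sysfs_slab_add(s);               /* sees no cache registered for
	  mutex_lock(&slab_mutex);            the memcg, starts creating
	memcg_cache_register(s);              a duplicate one */
	mutex_unlock(&slab_mutex);

With sysfs_slab_add() called under the slab_mutex, thread B cannot sneak
in between creation and registration.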
Any objections to this?
Thanks.
---
mm/slub.c | 15 +--------------
1 file changed, 1 insertion(+), 14 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 3d3a8a7a0f8c..6f4393892d2d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3229,19 +3229,8 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
 {
 	int rc = kmem_cache_close(s);
 
-	if (!rc) {
-		/*
-		 * We do the same lock strategy around sysfs_slab_add, see
-		 * __kmem_cache_create. Because this is pretty much the last
-		 * operation we do and the lock will be released shortly after
-		 * that in slab_common.c, we could just move sysfs_slab_remove
-		 * to a later point in common code. We should do that when we
-		 * have a common sysfs framework for all allocators.
-		 */
-		mutex_unlock(&slab_mutex);
+	if (!rc)
 		sysfs_slab_remove(s);
-		mutex_lock(&slab_mutex);
-	}
 
 	return rc;
 }
@@ -3772,9 +3761,7 @@ int __kmem_cache_create(struct kmem_cache *s, unsigned long flags)
 		return 0;
 
 	memcg_propagate_slab_attrs(s);
-	mutex_unlock(&slab_mutex);
 	err = sysfs_slab_add(s);
-	mutex_lock(&slab_mutex);
 
 	if (err)
 		kmem_cache_close(s);
--
1.7.10.4