[RFC] slub: Per object NUMA support

From: Christoph Lameter
Date: Fri Apr 15 2011 - 15:56:22 EST

I am not sure if such a feature is needed/wanted/desired. It would make
the object allocation method similar to SLAB instead of relying on page
based policy application (which IMHO was the intent of the memory policy
system before Paul Jackson got that changed in SLAB).

Anyway, the implementation is rather simple.

Currently slub applies NUMA policies per allocated slab page. Change
that to apply memory policies for each individual object allocated.

For example, before this patch MPOL_INTERLEAVE would return objects from the
same slab page until a new slab page was allocated. Now an object
from a different page is taken for each allocation.

This increases the overhead of the fastpath under NUMA.

Signed-off-by: Christoph Lameter <cl@xxxxxxxxx>

mm/slub.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)

Index: linux-2.6/mm/slub.c
--- linux-2.6.orig/mm/slub.c 2011-04-15 12:54:42.000000000 -0500
+++ linux-2.6/mm/slub.c 2011-04-15 13:11:25.000000000 -0500
@@ -1887,6 +1887,21 @@ debug:
goto unlock_out;

+static __always_inline int alternate_slab_node(struct kmem_cache *s,
+				gfp_t flags, int node)
+{
+	if (unlikely(node == NUMA_NO_NODE &&
+			!(flags & __GFP_THISNODE) &&
+			!in_interrupt())) {
+		if ((s->flags & SLAB_MEM_SPREAD) && cpuset_do_slab_mem_spread())
+			node = cpuset_slab_spread_node();
+		else if (current->mempolicy)
+			node = slab_node(current->mempolicy);
+	}
+	return node;
+}
+
 /*
  * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
  * have the fastpath folded into their functions. So no function call
@@ -1911,6 +1926,7 @@ static __always_inline void *slab_alloc(
if (slab_pre_alloc_hook(s, gfpflags))
return NULL;

+ node = alternate_slab_node(s, gfpflags, node);