[slubllv4 01/16] slub: Per object NUMA support

From: Christoph Lameter
Date: Fri May 06 2011 - 14:10:12 EST


Currently slub applies NUMA policies per allocated slab page. Change
that so that memory policies are applied to each individually
allocated object.

For example, before this patch MPOL_INTERLEAVE would return objects from the
same slab page until a new slab page was allocated. Now an object
from a different page is taken for each allocation.

This increases the overhead of the fastpath under NUMA.
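For illustration only (this user-space snippet is not part of the patch):
a task typically picks up MPOL_INTERLEAVE via set_mempolicy(2), and slab
allocations done in kernel context on behalf of such a task are what now
get spread per object. A minimal sketch, assuming a two-node machine and
libnuma for the syscall wrapper (link with -lnuma):

/* Illustrative only: give the current task an interleave mempolicy. */
#include <numaif.h>
#include <stdio.h>

int main(void)
{
	/* nodemask bit i set => node i participates in interleaving;
	 * assumes nodes 0 and 1 exist on this machine. */
	unsigned long nodemask = (1UL << 0) | (1UL << 1);

	if (set_mempolicy(MPOL_INTERLEAVE, &nodemask, sizeof(nodemask) * 8)) {
		perror("set_mempolicy");
		return 1;
	}

	/* From here on, kernel allocations made on this task's behalf
	 * (including slab objects, with this patch) follow the interleave
	 * policy per object rather than per slab page. */
	return 0;
}

With such a policy in place, the per-object behaviour described above means
consecutive slab allocations for this task can land on different nodes even
when they are served from the same cache.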

Signed-off-by: Christoph Lameter <cl@xxxxxxxxx>

---
mm/slub.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2011-05-05 15:21:51.000000000 -0500
+++ linux-2.6/mm/slub.c 2011-05-05 15:28:33.000000000 -0500
@@ -1873,6 +1873,21 @@ debug:
 	goto unlock_out;
 }
 
+static __always_inline int alternate_slab_node(struct kmem_cache *s,
+				gfp_t flags, int node)
+{
+#ifdef CONFIG_NUMA
+	if (unlikely(node == NUMA_NO_NODE &&
+			!(flags & __GFP_THISNODE) &&
+			!in_interrupt())) {
+		if ((s->flags & SLAB_MEM_SPREAD) && cpuset_do_slab_mem_spread())
+			node = cpuset_slab_spread_node();
+		else if (current->mempolicy)
+			node = slab_node(current->mempolicy);
+	}
+#endif
+	return node;
+}
 /*
  * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
  * have the fastpath folded into their functions. So no function call
@@ -1893,6 +1908,8 @@ static __always_inline void *slab_alloc(
 	if (slab_pre_alloc_hook(s, gfpflags))
 		return NULL;
 
+	node = alternate_slab_node(s, gfpflags, node);
+
 redo:
 
 	/*
