Re: [RFC][PATCH v2] slub: Keep page and object in sync in slab_alloc_node()

From: Steven Rostedt
Date: Fri Jan 18 2013 - 10:29:38 EST


On Fri, 2013-01-18 at 13:42 +0900, Joonsoo Kim wrote:

> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index 9db4825..b54dffa 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -46,6 +46,9 @@ enum stat_item {
> struct kmem_cache_cpu {
> void **freelist; /* Pointer to next available object */
> unsigned long tid; /* Globally unique transaction id */
> +#ifdef CONFIG_NUMA
> + int node;

Note, you put an int between an unsigned long and a pointer, which
wastes 4 bytes of padding on 64-bit machines.
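
For illustration, here's a minimal userspace sketch (it mirrors the
field types of the patched layout on an LP64 target; it is not the
kernel struct itself) that prints the hole the compiler inserts:

#include <stdio.h>
#include <stddef.h>

/* Mimics the patched kmem_cache_cpu layout on a 64-bit machine. */
struct cpu_slab {
	void **freelist;	/* offset  0, 8 bytes */
	unsigned long tid;	/* offset  8, 8 bytes */
	int node;		/* offset 16, 4 bytes + 4 bytes of padding */
	void *page;		/* offset 24, pointers need 8-byte alignment */
	void *partial;		/* offset 32 */
};

int main(void)
{
	printf("hole after node: %zu bytes\n",
	       offsetof(struct cpu_slab, page) -
	       (offsetof(struct cpu_slab, node) + sizeof(int)));
	printf("total size: %zu\n", sizeof(struct cpu_slab));
	return 0;
}

Pairing the int with another 4-byte member (the stat counters under
CONFIG_SLUB_STATS, for instance) avoids the hole, although tail
padding can still round the struct size up.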

> +#endif
> struct page *page; /* The slab from which we are allocating */
> struct page *partial; /* Partially allocated frozen slabs */
> #ifdef CONFIG_SLUB_STATS



> @@ -2038,10 +2049,10 @@ static void flush_all(struct kmem_cache *s)
> * Check if the objects in a per cpu structure fit numa
> * locality expectations.
> */
> -static inline int node_match(struct page *page, int node)
> +static inline int node_match(struct kmem_cache_cpu *c, int node)
> {
> #ifdef CONFIG_NUMA
> - if (node != NUMA_NO_NODE && page_to_nid(page) != node)
> + if (node != NUMA_NO_NODE && c->node != node)

We still have the issue of the CPU fetching c->node before c->tid and
c->freelist: an interrupt or migration between those reads means the
node we checked may not correspond to the freelist we end up
allocating from.

I still believe the only solution is to prevent the task from
migrating, via a preempt_disable() across the fast path.
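
Something along these lines (an untested sketch of the fast path with
this patch's node field; stats, debug hooks and gfp checks elided; the
helpers get_freepointer_safe(), next_tid() and note_cmpxchg_failure()
are the ones already in mm/slub.c):

static __always_inline void *slab_alloc_node(struct kmem_cache *s,
		gfp_t gfpflags, int node, unsigned long addr)
{
	void **object;
	struct kmem_cache_cpu *c;
	unsigned long tid;

	preempt_disable();
redo:
	c = __this_cpu_ptr(s->cpu_slab);

	/*
	 * Preemption is off, so we cannot migrate between reading
	 * c->node (inside node_match()) and c->tid/c->freelist:
	 * all three are sampled on the same CPU and stay consistent
	 * with one another.
	 */
	tid = c->tid;
	barrier();
	object = c->freelist;

	if (unlikely(!object || !node_match(c, node))) {
		object = __slab_alloc(s, gfpflags, node, addr, c);
	} else if (unlikely(!this_cpu_cmpxchg_double(
			s->cpu_slab->freelist, s->cpu_slab->tid,
			object, tid,
			get_freepointer_safe(s, object), next_tid(tid)))) {
		note_cmpxchg_failure("slab_alloc", s, tid);
		goto redo;
	}
	preempt_enable();

	return object;
}

__slab_alloc() disables interrupts anyway, so the slow path does not
change; the cost is one preempt count increment/decrement on the fast
path.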

-- Steve

