Re: [patch 08/10] SLUB: Optional fast path using cmpxchg_local

From: Andrew Morton
Date: Tue Oct 30 2007 - 14:30:30 EST


On Sun, 28 Oct 2007 15:05:50 +0200 (EET)
Pekka J Enberg <penberg@xxxxxxxxxxxxxx> wrote:

> On Sat, 27 Oct 2007, Christoph Lameter wrote:
> > The alternate path is realized using #ifdef's. Several attempts to do the
> > same with macros and inline functions resulted in a mess (in particular due
> > to the strange way that local_irq_save() handles its argument and due to
> > the need to define macros/functions that sometimes disable interrupts and
> > sometimes do something else. The macro-based approaches also made it
> > difficult to preserve the optimizations for the non-cmpxchg paths).
>
> I think at least slab_alloc() and slab_free() can be made simpler. See the
> included patch below.

Both versions look pretty crappy to me. The code duplication in the two
versions of do_slab_alloc() could be tidied up considerably (one possible
folding is sketched below, after the quoted code).

> +#ifdef CONFIG_FAST_CMPXCHG_LOCAL
> +static __always_inline void *do_slab_alloc(struct kmem_cache *s,
> + struct kmem_cache_cpu *c, gfp_t gfpflags, int node, void *addr)
> +{
> + void **object;
> +
> + do {
> + object = c->freelist;
> + if (unlikely(is_end(object) || !node_match(c, node))) {
> + object = __slab_alloc(s, gfpflags, node, addr, c);
> + break;
> + }
> + } while (cmpxchg_local(&c->freelist, object, object[c->offset])
> + != object);
> + put_cpu();
> +
> + return object;
> +}

Unmatched put_cpu()
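
There is no get_cpu() anywhere in this path, so that put_cpu() underflows
the preempt count. A minimal sketch with the stray call dropped (untested;
the caller then has to keep us pinned to one cpu, see further down):

static __always_inline void *do_slab_alloc(struct kmem_cache *s,
                struct kmem_cache_cpu *c, gfp_t gfpflags, int node, void *addr)
{
        void **object;

        /* caller must prevent migration so c stays this cpu's slab */
        do {
                object = c->freelist;
                if (unlikely(is_end(object) || !node_match(c, node)))
                        return __slab_alloc(s, gfpflags, node, addr, c);
        } while (cmpxchg_local(&c->freelist, object, object[c->offset])
                                                        != object);
        return object;
}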

> +#else
> +static __always_inline void *do_slab_alloc(struct kmem_cache *s,
> + struct kmem_cache_cpu *c, gfp_t gfpflags, int node, void *addr)
> +{
> + unsigned long flags;
> + void **object;
> +
> + local_irq_save(flags);
> + if (unlikely(is_end(c->freelist) || !node_match(c, node))) {
> + object = __slab_alloc(s, gfpflags, node, addr, c);
> + } else {
> + object = c->freelist;
> + c->freelist = object[c->offset];
> + }
> + local_irq_restore(flags);
> + return object;
> +}
> +#endif
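
As to the duplication: one way to fold the two versions into a single
do_slab_alloc() is to confine the #ifdef to the freelist handling. A rough
sketch only (untested; it assumes __slab_alloc() does its own irq disabling
in the cmpxchg_local configuration, which is exactly the awkward part
Christoph mentions above):

static __always_inline void *do_slab_alloc(struct kmem_cache *s,
                struct kmem_cache_cpu *c, gfp_t gfpflags, int node, void *addr)
{
        void **object;
#ifdef CONFIG_FAST_CMPXCHG_LOCAL
        /* lockless fast path: retry if this cpu's freelist changed under us */
        do {
                object = c->freelist;
                if (unlikely(is_end(object) || !node_match(c, node)))
                        return __slab_alloc(s, gfpflags, node, addr, c);
        } while (cmpxchg_local(&c->freelist, object,
                                object[c->offset]) != object);
#else
        unsigned long flags;

        /* classic fast path: exclude interrupts while touching c->freelist */
        local_irq_save(flags);
        object = c->freelist;
        if (unlikely(is_end(object) || !node_match(c, node)))
                object = __slab_alloc(s, gfpflags, node, addr, c);
        else
                c->freelist = object[c->offset];
        local_irq_restore(flags);
#endif
        return object;
}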
> +
> /*
> * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
> * have the fastpath folded into their functions. So no function call
> @@ -1591,24 +1639,13 @@ debug:
> static void __always_inline *slab_alloc(struct kmem_cache *s,
> gfp_t gfpflags, int node, void *addr)
> {
> - void **object;
> - unsigned long flags;
> struct kmem_cache_cpu *c;
> + void **object;
>
> - local_irq_save(flags);
> c = get_cpu_slab(s, smp_processor_id());

smp_processor_id() in preemptible code.
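
This can race with cpu migration now that the local_irq_save() above is
gone. The lookup needs to pin the cpu first, which would also supply the
get_cpu() that the earlier put_cpu() was missing. Something like (a sketch
again, matching the quoted signature):

static void __always_inline *slab_alloc(struct kmem_cache *s,
                gfp_t gfpflags, int node, void *addr)
{
        struct kmem_cache_cpu *c;
        void **object;

        c = get_cpu_slab(s, get_cpu());  /* get_cpu() disables preemption */
        object = do_slab_alloc(s, c, gfpflags, node, addr);
        put_cpu();                       /* matching re-enable */

        return object;
}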