Re: [thiscpuops upgrade 10/10] Lockless (and preemptless) fastpaths for slub

From: Pekka Enberg
Date: Wed Nov 24 2010 - 02:16:23 EST


On Wed, Nov 24, 2010 at 1:51 AM, Christoph Lameter <cl@xxxxxxxxx> wrote:
> @@ -1737,23 +1770,53 @@ static __always_inline void *slab_alloc(
>  {
>        void **object;
>        struct kmem_cache_cpu *c;
> -       unsigned long flags;
> +       unsigned long tid;
>
>        if (slab_pre_alloc_hook(s, gfpflags))
>                return NULL;
>
> -       local_irq_save(flags);
> +redo:
> +       /*
> +        * Must read kmem_cache cpu data via this cpu ptr. Preemption is
> +        * enabled. We may switch back and forth between cpus while
> +        * reading from one cpu area. That does not matter as long
> +        * as we end up on the original cpu again when doing the cmpxchg.
> +        */
>        c = __this_cpu_ptr(s->cpu_slab);
> +
> +       /*
> +        * The transaction ids are globally unique per cpu and per operation on
> +        * a per cpu queue. Thus they can guarantee that the cmpxchg_double
> +        * occurs on the right processor and that there was no operation on the
> +        * linked list in between.
> +        */
> +       tid = c->tid;
> +       barrier();

You're using a compiler barrier after every load from c->tid. Why?
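
For readers following along, here is a rough user-space sketch of the tid + cmpxchg pattern this fastpath relies on. It is an illustration only: struct cpu_slab, fastpath_alloc() and the use of GCC's generic __atomic_compare_exchange built-in in place of this_cpu_cmpxchg_double() are simplifications invented for the example, not the actual slub code. It assumes a 64-bit build, and the 16-byte CAS may need -mcx16 or libatomic.

#include <stdio.h>

/* Compiler barrier, like the kernel's barrier() macro. */
#define barrier() __asm__ __volatile__("" ::: "memory")

/*
 * Toy stand-in for struct kmem_cache_cpu: the first word of each free
 * object points to the next free object, and tid is bumped on every
 * freelist transaction.
 */
struct cpu_slab {
        void *freelist;
        unsigned long tid;
} __attribute__((aligned(16)));         /* alignment needed for the wide CAS */

static struct cpu_slab slab;

static void *fastpath_alloc(void)
{
        struct cpu_slab old, new;

        do {
                old.tid = slab.tid;
                /*
                 * Keep the compiler from hoisting the freelist load above
                 * the tid load; the tid must be sampled first so that a
                 * concurrent transaction is caught by the CAS below.
                 */
                barrier();
                old.freelist = slab.freelist;
                if (!old.freelist)
                        return NULL;    /* empty: a real allocator takes the slowpath */

                new.freelist = *(void **)old.freelist;  /* pop the first object */
                new.tid = old.tid + 1;
        } while (!__atomic_compare_exchange(&slab, &old, &new, 0,
                                            __ATOMIC_RELAXED, __ATOMIC_RELAXED));

        return old.freelist;
}

int main(void)
{
        static void *objects[4];

        /* Chain the objects into a freelist: objects[i] -> objects[i + 1]. */
        for (int i = 0; i < 3; i++)
                objects[i] = &objects[i + 1];
        objects[3] = NULL;
        slab.freelist = &objects[0];

        for (void *p; (p = fastpath_alloc()); )
                printf("allocated %p, tid now %lu\n", p, slab.tid);
        return 0;
}

In this sketch, barrier() is purely a compiler-ordering constraint: it ensures the tid is sampled no later than the freelist, so the tid in the snapshot handed to the compare-and-exchange is never newer than the freelist value it is meant to guard.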
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/