Re: [PATCH] Implement slub fastpath in terms of freebase and freeoffset

From: Mathieu Desnoyers
Date: Thu Feb 28 2008 - 20:56:34 EST


* Christoph Lameter (clameter@xxxxxxx) wrote:
>
> > Index: linux-2.6-lttng/mm/slub.c
> > ===================================================================
> > --- linux-2.6-lttng.orig/mm/slub.c 2008-02-27 20:41:14.000000000 -0500
> > +++ linux-2.6-lttng/mm/slub.c 2008-02-28 18:22:14.000000000 -0500
> > @@ -138,6 +138,22 @@ static inline void ClearSlabDebug(struct
> > page->flags &= ~SLABDEBUG;
> > }
> >
> > +static inline void **get_freelist_ptr(struct kmem_cache_cpu *c)
> > +{
> > + return (void **)((unsigned long)page_address(c->page)
> > + | (c->freeoffset & c->off_mask));
> > +}
>
> Hmmmm... page_address() is an expensive operation.
>

Hrm, this is why I kept a separate "freebase": it acts as a cache of
page_address(c->page), so the fastpath does not have to recompute it.
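
Something along these lines (a sketch only; it assumes an unsigned long
freebase field in struct kmem_cache_cpu that the slow paths refresh
whenever the cpu slab changes):

/*
 * c->freebase caches page_address(c->page).  The slow paths update it
 * together with c->freeoffset when the cpu slab changes, so the
 * fastpath can rebuild the freelist pointer without the (expensive)
 * page_address() call.
 */
static inline void **get_freelist_ptr(struct kmem_cache_cpu *c)
{
	return (void **)(c->freebase | (c->freeoffset & c->off_mask));
}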

> > @@ -1779,21 +1815,21 @@ static __always_inline void slab_free(st
> > struct kmem_cache_cpu *c;
> >
> > #ifdef SLUB_FASTPATH
> > - void **freelist;
> > + unsigned long freeoffset, newoffset;
> >
> > c = get_cpu_slab(s, raw_smp_processor_id());
> > debug_check_no_locks_freed(object, s->objsize);
> > do {
> > - freelist = c->freelist;
> > + freeoffset = c->freeoffset;
> > barrier();
> > /*
> > * If the compiler would reorder the retrieval of c->page to
> > - * come before c->freelist then an interrupt could
> > - * change the cpu slab before we retrieve c->freelist. We
> > - * could be matching on a page no longer active and put the
> > - * object onto the freelist of the wrong slab.
> > + * come before c->freeoffset then an interrupt could change the
> > + * cpu slab before we retrieve c->freeoffset. We could be matching
> > + * on a page no longer active and put the object onto the
> > + * freelist of the wrong slab.
> > *
> > - * On the other hand: If we already have the freelist pointer
> > + * On the other hand: If we already have the freeoffset
> > * then any change of cpu_slab will cause the cmpxchg to fail
> > * since the freelist pointers are unique per slab.
> > */
> > @@ -1801,9 +1837,15 @@ static __always_inline void slab_free(st
> > __slab_free(s, page, x, addr, c->offset);
> > break;
> > }
> > - object[c->offset] = freelist;
> > + object[c->offset] =
> > + (void **)((unsigned long)page_address(c->page) |
> > + (freeoffset & c->off_mask));
> > stat(c, FREE_FASTPATH);
> > - } while (cmpxchg_local(&c->freelist, freelist, object) != freelist);
> > + newoffset = freeoffset + PAGE_SIZE;
> > + newoffset &= ~c->off_mask;
> > + newoffset |= (unsigned long)object & c->off_mask;
> > + } while (cmpxchg_local(&c->freeoffset, freeoffset, newoffset)
> > + != freeoffset);
>
> Hmmm.. The cmpxchg_local was intended to check for a change of slab too
> (addresses from different slabs are disjunct).
> That is no longer possible since the upper bits are used by the
> versioning? (Same issue in slab_alloc).
>

Here is the trick: if, on top of the cmpxchg_local, an interrupt comes
in and calls slab_alloc or slab_free in a way that changes the cpu
slab, that interrupt has to go through the slow path.

These slow paths use set_freelist_ptr() to update c->freeoffset, which
also increments the sequence number kept in the upper bits. When we
return from the interrupt, the cmpxchg_local loop fails because the
sequence number has changed.
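
set_freelist_ptr() itself is not in the hunks quoted above, but it
boils down to the same offset-plus-sequence encoding the free fastpath
uses, something like:

static inline void set_freelist_ptr(struct kmem_cache_cpu *c,
						void **freelist)
{
	unsigned long newoffset;

	/* Advance the sequence number kept in the bits above off_mask. */
	newoffset = (c->freeoffset + PAGE_SIZE) & ~c->off_mask;
	/* Encode the offset of the new freelist head within the slab. */
	newoffset |= (unsigned long)freelist & c->off_mask;
	c->freeoffset = newoffset;
}

So a slab change in the slow path makes any freeoffset value sampled
before the interrupt stale.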

In short, we also use the versioning to check for a change of slab.
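
For example (hypothetical values: 4K pages, order-0 slab, so off_mask =
0xfff): the fastpath samples freeoffset = 0x3040 (sequence 3, offset
0x40). An interrupt changes the cpu slab and set_freelist_ptr() stores,
say, 0x4080 (sequence 4, offset 0x80). Back in the fastpath,
cmpxchg_local() compares against the stale 0x3040 and fails, even if
the new slab's free head happened to sit at offset 0x40 too.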

Mathieu

--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68