Re: [patch] SLQB slab allocator
From: Nick Piggin
Date: Mon Feb 02 2009 - 20:49:20 EST
On Tuesday 27 January 2009 04:34:21 Christoph Lameter wrote:
> On Fri, 23 Jan 2009, Nick Piggin wrote:
> > > SLUB can directly free an object to any slab page. "Queuing" on free
> > > via the per cpu slab is only possible if the object came from that per
> > > cpu slab. This is typically only the case for objects that were
> > > recently allocated.
> >
> > Ah yes ok that's right. But then you don't get LIFO allocation
> > behaviour for those cases.
>
> But you get more TLB local allocations.
Not necessarily at all, because when the "active" page runs out, you've
lost all the LIFO information about which objects still have warm cache
lines and TLB entries.
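
To make that concrete, here is a deliberately simplified, userspace-style
sketch of a per-cpu-page allocation path (the names - page_slab, cpu_cache,
get_partial_page - are invented for illustration; this is not the kernel's
SLUB code). The point is only that the freelist consulted is per page, so
once the active page is empty the allocator moves on to another page, and
the recently freed, likely still hot objects sitting on other pages'
freelists are not what gets handed out next:

#include <stddef.h>

struct page_slab {
	void *freelist;			/* objects freed back to THIS page */
	struct page_slab *next;
};

struct cpu_cache {
	struct page_slab *cpu_page;	/* the "active" per-cpu slab page */
};

/* Hypothetical stand-in for pulling a page off a partial list. */
static struct page_slab *partial_list;

static struct page_slab *get_partial_page(void)
{
	struct page_slab *p = partial_list;

	if (p)
		partial_list = p->next;
	return p;
}

static void *simplified_alloc(struct cpu_cache *c)
{
	struct page_slab *page = c->cpu_page;
	void *object;

	if (!page || !page->freelist) {
		/*
		 * The active page is exhausted: switch to some other
		 * partial page.  Objects recently freed back to *other*
		 * pages - the ones most likely to still be cache- and
		 * TLB-hot - are not preferred here, because their recency
		 * ordering across pages is not recorded anywhere.
		 */
		page = c->cpu_page = get_partial_page();
		if (!page || !page->freelist)
			return NULL;	/* assume partial pages hold objects */
	}

	object = page->freelist;
	page->freelist = *(void **)object;	/* LIFO, but only within one page */
	return object;
}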
> > > Yes, you can lose track of cache-hot objects. That is one of the
> > > concerns with the SLUB approach. On the other hand: Caching
> > > architectures get more and more complex these days (especially in a
> > > NUMA system). The
> >
> > Because it is more important to get good cache behaviour.
>
> It's going to be quite difficult to realize an algorithm that guesstimates
> what information the processor keeps in its caches. The situation is quite
> complex in NUMA systems.
LIFO is fine.
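
The queueing alternative being argued for can be sketched as nothing more
than a per-cpu LIFO list that ignores which page an object belongs to
(again an illustrative sketch with made-up names, not the actual SLQB
code). No attempt is made to model the cache hierarchy; recency of the
free is the whole heuristic, since the most recently freed object is the
one most likely to still be in cache:

struct cpu_queue {
	void *freelist;		/* per-cpu LIFO list of free objects */
};

static void queue_free(struct cpu_queue *q, void *object)
{
	*(void **)object = q->freelist;	/* link through the object itself */
	q->freelist = object;
}

static void *queue_alloc(struct cpu_queue *q)
{
	void *object = q->freelist;

	if (object)
		q->freelist = *(void **)object;	/* most recently freed first */
	return object;
}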
> > So I think it is wrong to say it requires more metadata handling. SLUB
> > will have to switch pages more often or free objects to pages other than
> > the "fast" page (what do you call it?), so quite often I think you'll
> > find SLUB has just as much if not more metadata handling.
>
> It's the per cpu slab. SLUB does not switch pages often, but it frees
> objects that are not from the per cpu slab directly, with minimal overhead
> compared to a per cpu slab free. The overhead is much less than the SLAB
> slowpath, which has to be taken for alien caches etc.
But the slab allocator isn't just about allocating. It is also about
freeing. And you can be switching pages frequently in the freeing path.
And depending on allocation patterns, it can still be quite frequent
in the allocation path too (and even if you have gigantic pages, they
can still get mostly filled up, which reduces your queue size and
increases the rate of switching between them).
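
The free-side cost can be sketched like this, reusing the hypothetical
page_slab/cpu_cache types and partial_list from the earlier snippet (still
an illustration, not the kernel's actual free path). The branch is the
interesting part: a free that hits the active per-cpu page is a plain list
push, while a free to any other page has to update that page's own
metadata, and in the real allocator that is where per-page locking/atomics
and partial-list manipulation come in:

/* Hypothetical stub: put a page (back) on the partial list. */
static void move_page_to_partial_list(struct page_slab *page)
{
	page->next = partial_list;
	partial_list = page;
}

static void simplified_free(struct cpu_cache *c, struct page_slab *page,
			    void *object)
{
	/* Link the object back onto the freelist of the page it came from. */
	*(void **)object = page->freelist;
	page->freelist = object;

	if (page == c->cpu_page)
		return;		/* cheap case: it was the active per-cpu page */

	/*
	 * Remote case: another page's metadata had to be touched, and a
	 * previously full page may now need to rejoin a partial list.  A
	 * free pattern that keeps landing here keeps paying this per-page
	 * bookkeeping - the page switching described above.
	 */
	move_page_to_partial_list(page);
}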