Re: [patch] SLQB slab allocator

From: Christoph Lameter
Date: Tue Feb 03 2009 - 12:38:20 EST


On Tue, 3 Feb 2009, Nick Piggin wrote:

> Quite obviously it should. Behaviour of a slab allocation on behalf of
> some task constrained within a given node should not depend on the task
> which has previously run on this CPU and made some allocations. Surely
> you can see this behaviour is not nice.

If you want cache-hot objects then it's better to use what a prior task
has used. This opportunistic reuse is only done if the task is not asking
for memory from a specific node. There is another tradeoff here.

SLAB's method there is to ignore all caching advantages even if the task
did not ask for memory from a specific node. So it gets cache-cold objects,
and if the node to allocate from is remote then it must always use the slow
path.
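
To make the tradeoff concrete, here is a rough sketch of the fast-path
decision being described. The helper names (alloc_from_cpu_slab(),
cpu_slab_node(), slow_path_alloc()) are made up for illustration and are
not the actual SLUB functions:

	static void *slab_alloc_sketch(struct kmem_cache *s, gfp_t flags, int node)
	{
		/* No node constraint: hand back a cache-hot object from the cpu slab. */
		if (node == NUMA_NO_NODE)
			return alloc_from_cpu_slab(s);

		/* The cpu slab already holds memory from the requested node. */
		if (node == cpu_slab_node(s))
			return alloc_from_cpu_slab(s);

		/* Constrained to some other node: take the slow path. */
		return slow_path_alloc(s, flags, node);
	}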

> > Which have similar issues since memory policy application is depending on
> > a task policy and on memory migration that has been applied to an address
> > range.
>
> What similar issues? If a task asks to have slab allocations constrained
> to node 0 and SLUB then hands out objects from other nodes, that's bad.

Of course. A task can ask to have allocations from node 0 and it will get
objects from node 0. But if the task does not care to ask for memory
from a specific node, then it can be satisfied from the cpu slab, which
contains cache-hot objects.
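
For illustration only (plain slab API from the caller's side, nothing
SLUB-specific), the two cases look like this:

	/* Constrained: the object must come from node 0. */
	buf = kmalloc_node(size, GFP_KERNEL, 0);

	/*
	 * Unconstrained: the allocator is free to return a cache-hot
	 * object from the current cpu slab, whatever node it is on.
	 */
	buf = kmalloc(size, GFP_KERNEL);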

> > > But that is wrong. The lists obviously have high water marks that
> > > get trimmed down. Periodic trimming as I keep saying basically is
> > > already so infrequent that it is irrelevant (millions of objects
> > > per cpu can be allocated anyway between existing trimming interval)
> >
> > Trimming through water marks and allocating memory from the page allocator
> > is going to be very frequent if you continually allocate on one processor
> > and free on another.
>
> Um yes, that's the point. But you previously claimed that it would just
> grow unconstrained. Which is obviously wrong. So I don't understand what
> your point is.

It will grow unconstrained if you elect to defer queue processing. That
was what we discussed.
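
Roughly what I mean by watermark trimming on an alloc-on-one-cpu,
free-on-another pattern; the struct layout and the helpers
(push_object(), flush_to_page_allocator()) are invented for the sketch
and are not SLQB's actual code:

	struct cpu_queue {
		void		*freelist;	/* free objects queued on this cpu */
		unsigned int	nr_free;	/* current queue length */
		unsigned int	hiwater;	/* high watermark */
		unsigned int	batch;		/* objects flushed per trim */
	};

	static void queue_free(struct cpu_queue *q, void *object)
	{
		push_object(q, object);		/* hypothetical: link into freelist */
		q->nr_free++;

		/*
		 * When one cpu only frees and another only allocates, the
		 * queue refills far faster than periodic trimming runs, so
		 * it has to be trimmed as soon as the watermark is crossed
		 * and the memory handed back to the page allocator.
		 */
		if (q->nr_free > q->hiwater)
			flush_to_page_allocator(q, q->batch);	/* hypothetical */
	}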