Re: Avoid allocating during interleave from almost full nodes

From: Christoph Lameter
Date: Mon Nov 06 2006 - 15:32:04 EST


On Mon, 6 Nov 2006, Andrew Morton wrote:

> I'm referring to the metadata rather than to the pages themselves: the zone
> structure at least. I bet there are a couple of cache misses in there.

Yes, in particular on large systems.

> > The number
> > of pages to take will vary depending on the size of the shared data. For
> > shared data areas that are just a couple of pages this won't work.
>
> What is "shared data"?

Interleave is used for data that is accessed from many nodes; otherwise one
would prefer to allocate from the local node. Shared data may be accessed
very frequently from multiple nodes, and one would like the pages spread
out so that different NUMA nodes can respond to these requests.
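
For reference, the interleave policy today simply steps a per-task cursor
through the allowed node mask, one node per allocation. A minimal userspace
sketch of that behavior, loosely modeled on interleave_nodes() in
mm/mempolicy.c (allowed_nodes, il_next and the bitmask handling here are
simplified stand-ins, not the kernel's actual data structures):

#include <stdio.h>

#define MAX_NODES 8

/* Bitmask of nodes the policy allows; nodes 0-3 in this example. */
static unsigned long allowed_nodes = 0x0fUL;

/* Per-task cursor, analogous in spirit to the kernel's il_next. */
static int il_next;

/* Return the current interleave node and advance the cursor to the
 * next allowed node, wrapping around the mask. */
static int interleave_node(void)
{
	int node = il_next;
	int next = node;

	do {
		next = (next + 1) % MAX_NODES;
	} while (!(allowed_nodes & (1UL << next)));

	il_next = next;
	return node;
}

int main(void)
{
	for (int i = 0; i < 8; i++)
		printf("page %d -> node %d\n", i, interleave_node());
	return 0;
}

Each allocation lands on the next node in the mask (0, 1, 2, 3, 0, ...),
which is exactly the even spreading Andrew wants preserved.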

> > > Umm, but that's exactly what the patch we're discussing will do.
> > Not if we have a set of remaining nodes.
>
> Yes it is. You're proposing taking an arbitrarily large number of
> successive pages from the same node rather than interleaving the allocations.
> That will create "hotspots or imbalances" (whatever they are).

No, I proposed going round robin over the remaining nodes. The special
case of only one node remaining could be dealt with separately.
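
To make the shape of that concrete, a hypothetical userspace sketch: keep
interleaving round robin, but only over the subset of allowed nodes that
are not almost full. free_pages[], ALMOST_FULL_PAGES and the fallback when
fewer than two nodes remain are all invented for illustration; the real
kernel would consult zone watermarks, and none of this is the actual patch
under discussion.

#include <stdio.h>

#define MAX_NODES 8

/* Invented per-node free page counts and threshold. */
static unsigned long free_pages[MAX_NODES] = { 900, 40, 800, 30 };
#define ALMOST_FULL_PAGES 64UL

static unsigned long allowed_nodes = 0x0fUL;	/* policy allows nodes 0-3 */
static int il_next;				/* round-robin cursor */

/* Mask of allowed nodes that are not almost full. */
static unsigned long remaining_nodes(void)
{
	unsigned long mask = 0;

	for (int n = 0; n < MAX_NODES; n++)
		if ((allowed_nodes & (1UL << n)) &&
		    free_pages[n] > ALMOST_FULL_PAGES)
			mask |= 1UL << n;
	return mask;
}

static int count_nodes(unsigned long mask)
{
	int count = 0;

	for (; mask; mask &= mask - 1)
		count++;
	return count;
}

/* Round robin over the remaining nodes; if fewer than two are left,
 * fall back to the full allowed mask (one possible way to handle the
 * single-node special case). */
static int interleave_node(void)
{
	unsigned long mask = remaining_nodes();
	int node;

	if (count_nodes(mask) < 2)
		mask = allowed_nodes;

	node = il_next;
	while (!(mask & (1UL << node)))
		node = (node + 1) % MAX_NODES;

	il_next = (node + 1) % MAX_NODES;
	return node;
}

int main(void)
{
	for (int i = 0; i < 8; i++) {
		int node = interleave_node();

		free_pages[node]--;	/* model the allocation */
		printf("page %d -> node %d\n", i, node);
	}
	return 0;
}

With nodes 1 and 3 almost full, allocations alternate between nodes 0 and
2 rather than piling successive pages onto one node. Falling back to the
full mask when fewer than two nodes remain is only one option for the
special case; falling back to local allocation would be another.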