Re: [Lse-tech] [PATCH 1/2] node affine NUMA scheduler

From: Martin J. Bligh
Date: Mon Sep 23 2002 - 13:47:21 EST

> OK, sounds encouraging. So here is my first attempt (attached). You'll
> have to apply it on top of the two NUMA scheduler patches and hack
> The patch adds a node_mem[NR_NODES] array to each task. When allocating
> memory (in rmqueue) and freeing it (in free_pages_ok) the number of
> pages is added/subtracted from that array and the homenode is set to
> the node having the largest entry. Is there a better place where to put
> this in (other than rmqueue/free_pages_ok)?
> I have two problems with this approach:
> 1: Freeing memory is quite expensive, as it currently involves finding the
> maximum of the array node_mem[].
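The accounting the quoted patch describes can be modeled in plain userspace C. This is only an illustrative sketch, not the patch itself: the names `NR_NODES`, `node_mem`, and `homenode` come from the mail, while `account_alloc`/`account_free` are hypothetical stand-ins for the hooks in rmqueue and free_pages_ok. Note how every free pays for a full scan of `node_mem[]` (problem 1 above).

```c
#include <assert.h>

/* Illustrative model only -- not the real kernel code. */
#define NR_NODES 4

struct task {
	long node_mem[NR_NODES]; /* pages currently allocated per node */
	int  homenode;           /* node with the largest node_mem entry */
};

/* Recompute homenode: an O(NR_NODES) scan. Doing this eagerly on
 * every free is what makes the free path expensive. */
static void update_homenode(struct task *t)
{
	int i, best = 0;

	for (i = 1; i < NR_NODES; i++)
		if (t->node_mem[i] > t->node_mem[best])
			best = i;
	t->homenode = best;
}

/* Stand-in for the accounting added to rmqueue. */
static void account_alloc(struct task *t, int node, long pages)
{
	t->node_mem[node] += pages;
	update_homenode(t);
}

/* Stand-in for the accounting added to free_pages_ok. */
static void account_free(struct task *t, int node, long pages)
{
	t->node_mem[node] -= pages;
	update_homenode(t);
}
```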

Bleh ... why? This needs to be calculated much more lazily than this,
or you're going to kick the hell out of any cache affinity. Can you
recalc this in the rebalance code or something instead?
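The lazy variant suggested here could look like the following sketch (names hypothetical): the hot alloc/free paths only bump a counter, and the homenode is refreshed from the periodic rebalance path instead of from every free, so the scan is paid at rebalance frequency rather than free frequency.

```c
#include <assert.h>

/* Lazy-recalculation sketch; illustrative names, not kernel code. */
#define NR_NODES 4

struct task {
	long node_mem[NR_NODES];
	int  homenode;
};

/* Hot paths: plain counter updates, no scan. */
static void account_alloc(struct task *t, int node, long pages)
{
	t->node_mem[node] += pages;
}

static void account_free(struct task *t, int node, long pages)
{
	t->node_mem[node] -= pages;
}

/* Cold path: called from the rebalance code, not from every
 * free_pages_ok, so the homenode may briefly be stale. */
static void rebalance_tick(struct task *t)
{
	int i, best = 0;

	for (i = 1; i < NR_NODES; i++)
		if (t->node_mem[i] > t->node_mem[best])
			best = i;
	t->homenode = best;
}
```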

> 2: I have no idea how tasks sharing the mm structure will behave. I'd
> like them to run on different nodes (that's why node_mem is not in mm),
> but they could (legally) free pages which they did not allocate and
> have wrong values in node_mem[].

Yes, that really ought to be per-process, not per task. Which means
locking or atomics ... and overhead. Ick.
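One way the per-process version could be modeled, with C11 atomics standing in for the kernel's atomic ops (all names here are hypothetical): the counters live in the shared mm, so a thread that frees pages another thread allocated still decrements the same totals, at the cost of an atomic RMW on every accounting update.

```c
#include <assert.h>
#include <stdatomic.h>

/* Per-process (shared-mm) accounting sketch; illustrative only. */
#define NR_NODES 4

struct mm {
	/* Shared by all threads of the process, hence atomic. */
	atomic_long node_mem[NR_NODES];
};

static void mm_account_alloc(struct mm *mm, int node, long pages)
{
	atomic_fetch_add(&mm->node_mem[node], pages);
}

static void mm_account_free(struct mm *mm, int node, long pages)
{
	/* Correct even if this thread never allocated on that node. */
	atomic_fetch_sub(&mm->node_mem[node], pages);
}

/* Any thread can compute the process-wide preferred node. */
static int mm_homenode(struct mm *mm)
{
	long best_val = atomic_load(&mm->node_mem[0]);
	int i, best = 0;

	for (i = 1; i < NR_NODES; i++) {
		long v = atomic_load(&mm->node_mem[i]);
		if (v > best_val) {
			best_val = v;
			best = i;
		}
	}
	return best;
}
```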

For the first cut of the NUMA sched, maybe you could just leave page
allocation alone, and do that separately? Or is that what the second
patch was meant to be?



This archive was generated by hypermail 2b29 : Mon Sep 23 2002 - 22:00:39 EST