Here is an update of the NUMA scheduler for the 2.5.37 kernel. It
contains some bugfixes and the coupling to discontigmem memory
allocation (memory is allocated from each process's homenode).
The node affine NUMA scheduler is targeted for multi-node platforms
and built on top of the O(1) scheduler. Its main objective is to
keep the memory access latency for each task as low as possible by
scheduling it on or near the node on which its memory is allocated.
This should achieve the hard-affinity benefits automatically.
The patch comes in two parts. The first part is the core NUMA scheduler;
it is functional without the second part and provides the following features:
- Node-aware scheduler (implemented via CPU pools).
- The scheduler behaves like the O(1) scheduler within a node.
- Equal load among nodes is targeted; stealing tasks from remote nodes
  is delayed more if the current node is averagely loaded, less if it
  is underloaded.
- Multi-level node hierarchies are supported, with stealing delays
  adjusted by the relative "node-distance" (memory access latency ratio).
The second part of the patch extends the pooling NUMA scheduler to
have node affine tasks:
- Each process has a homenode assigned to it at creation time
(initial load balancing). Memory will be allocated from this node.
- Each process is preferentially scheduled on its homenode and
  attracted back to it if scheduled away for some reason. For
  multi-level node hierarchies the task is attracted to the nodes
  nearest its homenode.
The patch was tested on IA64 platforms but should work on NUMAQ i386,
too. Similar code for 2.4.18 (cf. http://home.arcor.de/efocht/sched)
has been running in production environments for months.
Comments, tests, ports to other platforms/architectures are very welcome!
This archive was generated by hypermail 2b29 : Mon Sep 23 2002 - 22:00:33 EST