Re: [RFC PATCH 00/10] Improve numa scheduling by consolidating tasks
From: Srikar Dronamraju
Date: Tue Jul 30 2013 - 05:04:12 EST
* Peter Zijlstra <peterz@xxxxxxxxxxxxx> [2013-07-30 10:20:01]:
> On Tue, Jul 30, 2013 at 10:17:55AM +0200, Peter Zijlstra wrote:
> > On Tue, Jul 30, 2013 at 01:18:15PM +0530, Srikar Dronamraju wrote:
> > > Here is an approach that looks to consolidate workloads across nodes.
> > > This results in much improved performance. Again I would assume this work
> > > is complementary to Mel's work with numa faulting.
> > I highly dislike the use of task weights here. It seems completely
> > unrelated to the problem at hand.
> I also don't particularly like the fact that it's purely process based.
> The faults information we have gives much richer task relations.
With a purely fault-information-based approach, I am not seeing any
major improvement in task/memory consolidation. I still see memory
spread across different nodes and tasks getting ping-ponged between
nodes. And if there are multiple unrelated processes, then we see a mix
of tasks from different processes on each node.
In my observation, this spreading of load isn't helping performance.
This is especially true on bigger boxes, and I would take it as a hint
that we need to consolidate tasks for better performance.
Now, I could just use the number of tasks rather than the task weights
used in the current patchset. But I don't think that would be ideal
either; in particular, it wouldn't work with fair share scheduling.
For example: let's say there are 2 VMs running similar loads on a 2-node
machine. We would get the best performance if we could cleanly segregate
the load. I know all problems cannot be generalized into just this set;
my thinking is to get at least this set of problems solved.
Do you see any alternatives, other than NUMA faults or task weights,
that we could use to better consolidate tasks?
Thanks and Regards