Re: [patch v2 1/2] sched: check for prev_cpu == this_cpu before calling wake_affine()
From: Suresh Siddha
Date: Fri Apr 02 2010 - 13:06:48 EST
On Thu, 2010-04-01 at 23:20 -0700, Mike Galbraith wrote:
> Yes, if task A and task B are more or less unrelated, you'd want them to
> stay in separate domains, you'd not want some random event to pull. The
> other side of the coin is tasks which fork off partners that they will
> talk to at high frequency. They land just as far away, and desperately
> need to move into a shared cache domain. There's currently no
> discriminator, so while always asking wake_affine() may reduce the risk
> of moving a task with a large footprint, it also increases the risk of
> leaving buddies jabbering cross cache.
Mike, apart from this small tweak that you added in the wake_up() path, there
is no extra logic that keeps buddies together for long. As I was saying,
fork/exec balancing starts them apart, and in the partially loaded case (i.e.,
when the number of running tasks <= the number of sockets or total cores) the
default load-balancer policy also tries to distribute the load equally among
sockets/cores (for peak cache/memory-controller bandwidth, etc.). So while the
wakeup may keep the buddies on SMT siblings, the next load-balancing event
will move them far apart again. If we need to keep buddies together, we need
more changes than this small tweak.
> Do you have a compute load bouncing painfully which this patch cures?
> I have no strong objections, and the result is certainly easier on the
> eye. If I were making the decision, I'd want to see some numbers.
All I saw in the changelog when you added this new tweak was:
Author: Mike Galbraith <efault@xxxxxx>
Date: Thu Mar 11 17:17:16 2010 +0100
sched: Fix select_idle_sibling()
Don't bother with selection when the current cpu is idle. ....
Is it me or you who needs to provide the data justifying your
new tweak that changes the current behavior ;)
I will run some workloads as well!