Re: [PATCH 0/8] Announcement: Enhanced NUMA scheduling with adaptive affinity

From: Christoph Lameter
Date: Fri Nov 16 2012 - 15:57:21 EST


On Fri, 16 Nov 2012, Ingo Molnar wrote:

> > The interleaving of memory areas that have an equal amount of
> > shared accesses from multiple nodes is essential to limit the
> > traffic on the interconnect and get top performance.
>
> That is true only if the load is symmetric.

Which is usually true of an HPC workload.

> > I guess, though, that in a non-HPC environment, where you are
> > not interested in one specific load running at top speed,
> > varying contention on the interconnect and memory buses is
> > acceptable. But this means that HPC loads cannot be
> > auto-tuned.
>
> I'm not against improving these workloads (at all) - I just
> pointed out that interleaving isn't necessarily the best
> placement strategy for 'large' workloads.

Depends on what you mean by "large" workloads. If it is a typical large
HPC workload with its data structures distributed over nodes, then
spreading those data structures across all nodes is the best placement
strategy.
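As a toy illustration (not kernel code, and not part of this patch set):
the interleave policy discussed above amounts to a round-robin page-to-node
assignment, which is what MPOL_INTERLEAVE does for a shared region. A
hypothetical sketch in Python showing why each node then holds an equal
share of the pages:

```python
def interleave_pages(num_pages, nodes):
    """Round-robin page -> node placement, modelling MPOL_INTERLEAVE.

    Successive pages of a shared region land on successive nodes, so
    accesses from any node are spread evenly over the interconnect
    instead of all hitting one node's memory controller.
    """
    return {page: nodes[page % len(nodes)] for page in range(num_pages)}

# 8 pages interleaved across a hypothetical 4-node machine.
placement = interleave_pages(8, nodes=[0, 1, 2, 3])
per_node = {n: sum(1 for v in placement.values() if v == n)
            for n in [0, 1, 2, 3]}
print(per_node)  # every node carries an equal share of the pages
```

In practice this placement is requested from user space with
numactl --interleave=all (or the set_mempolicy/mbind system calls);
the balance only helps when the access pattern is itself symmetric,
which is Ingo's point above.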
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/