Re: [PATCH 00/35] AutoNUMA alpha14

From: Rik van Riel
Date: Sat May 26 2012 - 13:29:44 EST


On 05/25/2012 01:02 PM, Andrea Arcangeli wrote:

> I believe (realistically speaking) nobody is going to change
> applications to specify which thread is using which memory (for
> threaded apps) with the only exception of QEMU and a few others.

This is the point of contention. I believe that for some
programs these kinds of modifications might happen, but
that for other programs - managed runtimes like JVMs -
proper NUMA hinting is fundamentally impossible, because
the programming languages that run on top of those runtimes
have no concept of pointers or memory ranges. There is
simply nothing for the application to hand to a hinting
interface, short of fundamentally changing the programming
languages in question.
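
To make the contrast concrete, below is a minimal sketch (mine, not
from the patch set) of the kind of explicit, per-range hint a native
application can give through libnuma; the node number and buffer size
are arbitrary. A program written in Java has no way to make an
equivalent call, because the language never exposes the addresses or
extents of its objects.

/* Illustration only: explicit per-range NUMA placement via libnuma.
 * Build with:  gcc hint.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	size_t len = 64UL << 20;	/* arbitrary 64MB working set */
	char *buf;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support on this system\n");
		return 1;
	}

	/* Place this range on node 0, where the threads that will
	 * touch it are expected to run.  Giving this hint requires
	 * knowing the address and length of the range - exactly the
	 * information a JVM-hosted language never exposes. */
	buf = numa_alloc_onnode(len, 0);
	if (!buf)
		return 1;
	memset(buf, 0, len);	/* fault the pages in on node 0 */

	numa_free(buf, len);
	return 0;
}

mbind() and numa_tonode_memory() give the same kind of hint for
already-allocated, page-aligned ranges; either way the application
has to name a range of bytes, which is the part a managed runtime
cannot do on behalf of the code it runs.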

It would be good to get everybody's ideas out there on this
topic, because this is the fundamental factor in deciding
between Peter's approach to NUMA and Andrea's approach.

Ingo? Andrew? Linus? Paul?

> For non-threaded apps that fit in a NUMA node, there's no way a blind
> home node can perform nearly as well as AutoNUMA:

The small tasks are easy. I suspect that either implementation
can be tuned to produce good results there.

It is the large programs (that do not fit in a NUMA node, either
due to too much memory, or due to too many threads) that will
force our hand in deciding whether to go with Peter's approach
or your approach.

I believe we need to think carefully about this issue and
decide on a NUMA approach based on the fundamental technical
properties of each approach.

After we figure out what we want to do, we can nit-pick the
codebase in question and make sure it gets completely fixed.
I am sure neither codebase is perfect right now, but both are
entirely fixable.

--
All rights reversed