On Mon, Mar 25, 2019 at 12:28 PM Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx> wrote:
On 3/23/19 10:21 AM, Dan Williams wrote:
On Fri, Mar 22, 2019 at 9:45 PM Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx> wrote:
When running applications on a machine with NVDIMM exposed as a NUMA node,
memory allocations may end up on the NVDIMM node. This can result in silent
performance degradation and regressions due to the different hardware
properties of the media.
A DRAM-first policy should be obeyed in order to prevent surprising
regressions. Any non-DRAM node should be excluded from default allocation,
using a nodemask to control memory placement. Introduce def_alloc_nodemask,
which has only DRAM nodes set. Any non-DRAM allocation has to be requested
explicitly via NUMA policy.
In the future we may be able to extract the memory characteristics from the
HMAT or another source to build up the default allocation nodemask. For the
time being, just distinguish DRAM and PMEM (non-DRAM) nodes by the SRAT
non-volatile flag.
Signed-off-by: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
---
arch/x86/mm/numa.c | 1 +
drivers/acpi/numa.c | 8 ++++++++
include/linux/mmzone.h | 3 +++
mm/page_alloc.c | 18 ++++++++++++++++--
4 files changed, 28 insertions(+), 2 deletions(-)
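
The mm/page_alloc.c changes listed in the diffstat above are not quoted in
this excerpt. Purely as a rough sketch of the idea described in the
changelog (only def_alloc_nodemask comes from the patch; the helper name and
its placement are assumptions), consulting the mask on the allocation path
might look something like:

#include <linux/nodemask.h>

nodemask_t def_alloc_nodemask;

/*
 * Sketch: fall back to the DRAM-only mask when the caller did not pass
 * an explicit nodemask, i.e. no mempolicy opted in to other nodes.
 */
static inline nodemask_t *default_alloc_nodemask(nodemask_t *nodemask)
{
        if (nodemask)
                return nodemask;
        if (nodes_empty(def_alloc_nodemask))
                return NULL;
        return &def_alloc_nodemask;
}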
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index dfb6c4d..d9e0ca4 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -626,6 +626,7 @@ static int __init numa_init(int (*init_func)(void))
nodes_clear(numa_nodes_parsed);
nodes_clear(node_possible_map);
nodes_clear(node_online_map);
+ nodes_clear(def_alloc_nodemask);
memset(&numa_meminfo, 0, sizeof(numa_meminfo));
WARN_ON(memblock_set_node(0, ULLONG_MAX, &memblock.memory,
MAX_NUMNODES));
diff --git a/drivers/acpi/numa.c b/drivers/acpi/numa.c
index 867f6e3..79dfedf 100644
--- a/drivers/acpi/numa.c
+++ b/drivers/acpi/numa.c
@@ -296,6 +296,14 @@ acpi_numa_memory_affinity_init(struct acpi_srat_mem_affinity *ma)
goto out_err_bad_srat;
}
+       /*
+        * Non volatile memory is excluded from zonelist by default.
+        * Only regular DRAM nodes are set in default allocation node
+        * mask.
+        */
+       if (!(ma->flags & ACPI_SRAT_MEM_NON_VOLATILE))
+               node_set(node, def_alloc_nodemask);

Hmm, no, I don't think we should do this. Especially considering
current generation NVDIMMs are energy-backed DRAM, there is no
performance difference that should be assumed from the non-volatile
flag.

Actually, here I would like to initialize a node mask for default
allocation. Memory allocation should not end up on any node excluded by
this node mask unless it is specified explicitly by mempolicy.

We may have a few different ways or criteria to initialize the node
mask; for example, we could read it from the HMAT (when HMAT support is
ready in the future), and we could certainly have non-DRAM nodes set if
they show no performance difference (I suppose you mean NVDIMM-F or HBM).

As long as main memory has different tiers, distinguished by performance,
IMHO there should be a defined default allocation node mask to control
memory placement, no matter where the information comes from.
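
As an illustration of the explicit opt-in being discussed here (not
something from the patch or the thread), an application binding an
allocation to a PMEM node with libnuma might look roughly like the
following; the node number 2 and the strict-bind choice are assumptions:

/* Build with: gcc bind_pmem.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        const int pmem_node = 2;        /* assumed node id, for illustration */
        size_t sz = 1UL << 30;          /* 1 GiB */
        void *buf;

        if (numa_available() < 0) {
                fprintf(stderr, "NUMA not supported\n");
                return EXIT_FAILURE;
        }

        /* Use a strict bind rather than the default preferred policy. */
        numa_set_bind_policy(1);

        buf = numa_alloc_onnode(sz, pmem_node);
        if (!buf) {
                fprintf(stderr, "allocation on node %d failed\n", pmem_node);
                return EXIT_FAILURE;
        }

        memset(buf, 0, sz);             /* fault the pages in */
        numa_free(buf, sz);
        return EXIT_SUCCESS;
}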

I understand the intent, but I don't think the kernel should have such
a hardline policy by default. However, it would be a worthwhile
mechanism and policy to consider for the dax-hotplug userspace
tooling, i.e. arrange for a given device-dax instance to be onlined,
but set the policy to require explicit opt-in by NUMA binding for it
to be an allocation / migration option.

I added Vishal to the cc, who is looking into such policy tooling.

But for now we don't have such information ready for that use yet, so
the SRAT flag might be a choice.

Why isn't the default SLIT distance sufficient for ensuring a DRAM-first
default policy?

"DRAM-first" may sound ambiguous; what I actually mean is "DRAM only by
default". SLIT can only tell us which node is local and which is remote,
but it can't tell us the performance difference.
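
To make that limitation concrete (an editor's example, not from the
thread), the per-node-pair numbers exposed by the SLIT can be dumped with
libnuma; they are single relative distances with no notion of media
performance:

/* Build with: gcc slit_dump.c -lnuma */
#include <numa.h>
#include <stdio.h>

int main(void)
{
        int i, j, max;

        if (numa_available() < 0)
                return 1;

        /*
         * A PMEM node typically just reports a larger "remote" distance;
         * nothing here distinguishes slower media from ordinary remote DRAM.
         */
        max = numa_max_node();
        for (i = 0; i <= max; i++)
                for (j = 0; j <= max; j++)
                        printf("node %d -> node %d: distance %d\n",
                               i, j, numa_distance(i, j));
        return 0;
}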

I think it's a useful semantic, but let's leave the selection of that
policy to an explicit userspace decision.