Re: [PATCH 00/16] Memory Hierarchy: Enable target node lookups for reserved memory

From: Aneesh Kumar K.V
Date: Tue Nov 12 2019 - 06:46:00 EST


Dan Williams <dan.j.williams@xxxxxxxxx> writes:

> Yes, this patch series looks like a pile of boring libnvdimm cleanups,
> but buried at the end are some small gems that testing with libnvdimm
> uncovered. These gems will prove more valuable over time for Memory
> Hierarchy management as more platforms, via the ACPI HMAT and EFI
> Specific Purpose Memory, publish reserved or "soft-reserved" ranges to
> Linux. Linux system administrators will expect to be able to interact
> with those ranges with a unique numa node number when/if that memory is
> onlined via the dax_kmem driver [1].
>
> One configuration that currently fails to properly convey the target
> node for the resulting memory hotplug operation is persistent memory
> defined by the memmap=nn!ss parameter. For example, today if node1 is a
> memory only node, and all the memory from node1 is specified to
> memmap=nn!ss and subsequently onlined, it will end up being onlined as
> node0 memory. As it stands, memory_add_physaddr_to_nid() can only
> identify online nodes, and since node1 in this example has no online
> cpus / memory, the target node is initialized to node0.
>
> The fix is to preserve rather than discard the numa_meminfo entries that
> are relevant for reserved memory ranges, and to uplevel the node
> distance helper for determining the "local" (closest) node relative to
> an initiator node.
>
> The first 12 patches are cleanups to make sure that all nvdimm devices
> and their children properly export a numa_node attribute. The switch to
> a device-type is less code and less error prone as a result.
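
For context, memmap=nn!ss (e.g. memmap=4G!12G) marks nn bytes of RAM at
offset ss as simulated persistent memory, so the resulting pmem range can
sit on a node with no other online memory. The lookup described above
might look roughly like the sketch below; the names (phys_to_target_node(),
numa_reserved_meminfo, meminfo_to_nid()) are my guesses for illustration,
not necessarily what the series ends up using:

/*
 * Walk a set of numa_meminfo entries (including ones preserved for
 * reserved ranges) and return the node that spans @start.
 */
static int meminfo_to_nid(struct numa_meminfo *mi, u64 start)
{
        int i;

        for (i = 0; i < mi->nr_blks; i++) {
                struct numa_memblk *mb = &mi->blk[i];

                if (mb->start <= start && start < mb->end)
                        return mb->nid;
        }

        return NUMA_NO_NODE;
}

int phys_to_target_node(phys_addr_t start)
{
        int nid = meminfo_to_nid(&numa_meminfo, start);

        /* fall back to the entries kept around for reserved ranges */
        if (nid == NUMA_NO_NODE)
                nid = meminfo_to_nid(&numa_reserved_meminfo, start);

        return nid;
}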


Will this still allow a leaf driver to have platform-specific attributes
exposed via sysfs? Or do we want to keep them in the nvdimm core and
control their visibility via the is_visible() callback?
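
For illustration, the kind of is_visible() arrangement I have in mind is
roughly the following; the attribute and helper names are made up:

static umode_t nd_region_attr_visible(struct kobject *kobj,
                                      struct attribute *a, int n)
{
        struct device *dev = kobj_to_dev(kobj);

        /*
         * Hypothetical check: only expose the platform-specific
         * attribute when the backing platform supports it.
         */
        if (a == &dev_attr_perf_stats.attr && !platform_has_perf_stats(dev))
                return 0;

        return a->mode;
}

static struct attribute *nd_region_attrs[] = {
        &dev_attr_numa_node.attr,
        &dev_attr_perf_stats.attr,      /* platform specific */
        NULL,
};

static const struct attribute_group nd_region_attribute_group = {
        .attrs = nd_region_attrs,
        .is_visible = nd_region_attr_visible,
};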

-aneesh