Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

From: Aneesh Kumar K V
Date: Tue Apr 26 2022 - 22:57:48 EST


On 4/27/22 6:59 AM, ying.huang@xxxxxxxxx wrote:
On Mon, 2022-04-25 at 20:14 +0530, Aneesh Kumar K V wrote:
On 4/25/22 7:27 PM, Jonathan Cameron wrote:
On Mon, 25 Apr 2022 16:45:38 +0530
Jagdish Gediya <jvgediya@xxxxxxxxxxxxx> wrote:

On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@xxxxxxxxx wrote:
On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
Some systems (e.g. PowerVM) can have both DRAM (fast memory) only
NUMA nodes, which are N_MEMORY, and slow memory (persistent memory)
only NUMA nodes, which are also N_MEMORY. As the current demotion
target finding algorithm works based on N_MEMORY and best distance,
it will choose a DRAM-only NUMA node as the demotion target instead
of a persistent memory node on such systems. If the DRAM-only NUMA
node is filled with demoted pages, then at some point new allocations
start falling back to persistent memory, so cold pages end up in fast
memory (due to demotion) and new pages in slow memory. This is why
persistent memory nodes should be utilized for demotion and DRAM
nodes should be avoided for demotion, so that they remain available
for new allocations.

The current implementation can work fine on systems where memory-only
NUMA nodes are possible only for persistent/slow memory, but it is not
suitable for systems like the ones mentioned above.
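
For illustration, here is a minimal sketch of the selection behaviour
described above: pick the nearest other node that has memory, purely by
distance. This is a simplified illustration, not the actual mm/migrate.c
code; the helpers used are the kernel's generic NUMA/nodemask API.

#include <linux/nodemask.h>
#include <linux/numa.h>
#include <linux/topology.h>
#include <linux/limits.h>

/* Sketch only: the nearest-by-distance N_MEMORY node wins. */
static int pick_demotion_target(int src)
{
	int nid, best = NUMA_NO_NODE;
	int best_dist = INT_MAX;

	for_each_node_state(nid, N_MEMORY) {
		if (nid == src)
			continue;
		if (node_distance(src, nid) < best_dist) {
			best_dist = node_distance(src, nid);
			best = nid;
		}
	}
	/* Nothing here tells a CPU-less DRAM node apart from PMEM. */
	return best;
}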

Can you share the NUMA topology information of your machine? And the
demotion order before and after your change?

Also, is it good to use the PMEM nodes as the demotion targets of the
DRAM-only node too?

$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 14272 MB
node 0 free: 13392 MB
node 1 cpus:
node 1 size: 2028 MB
node 1 free: 1971 MB
node distances:
node   0   1
  0:  10  40
  1:  40  10

1) Without the N_DEMOTION_TARGETS patch series, node 1 is the demotion
    target for node 0, even though node 1 is a DRAM node, and there is
    no demotion target for node 1.
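
With the distance table above, best-distance selection from node 0 can
only land on node 1, which is exactly the DRAM node we want to keep free
for new allocations. A rough sketch of how the extra filter proposed by
this series could look (illustrative only; N_DEMOTION_TARGETS is the new
node state from the cover letter, the rest mirrors the earlier sketch):

static int pick_demotion_target_filtered(int src)
{
	int nid, best = NUMA_NO_NODE;
	int best_dist = INT_MAX;

	for_each_node_state(nid, N_MEMORY) {
		if (nid == src)
			continue;
		/*
		 * Only nodes tagged as demotion targets qualify.  The
		 * CPU-less DRAM node 1 above would not carry this state,
		 * so node 0 would get no demotion target at all rather
		 * than demoting into DRAM.
		 */
		if (!node_state(nid, N_DEMOTION_TARGETS))
			continue;
		if (node_distance(src, nid) < best_dist) {
			best_dist = node_distance(src, nid);
			best = nid;
		}
	}
	return best;
}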

I'm not convinced the distinction between DRAM and persistent memory is
valid. There will definitely be systems with a large pool
of remote DRAM (and potentially no NV memory) where the right choice
is to demote to that DRAM pool.

Basing the decision on whether the memory comes via kmem or is normal
DRAM doesn't provide sufficient information to make the right choice.


Hence the suggestion for the ability to override this from userspace.
Now, for example, we could build a system with memory from a remote
machine (memory inception in the case of Power, which will mostly be
plugged in as regular hotpluggable memory) and slow CXL memory or
OpenCAPI memory.

In the former case, we won't consider that memory for demotion with this
series because it is not instantiated via dax/kmem. So yes, we would
definitely need the ability to override this from userspace so that we
can add these remote-memory NUMA nodes as demotion targets if we want.
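
As an illustration of what such a userspace override could look like,
here is a small sketch. The sysfs path and the node numbers are
assumptions made up for this example, not an interface the kernel
provides today, so treat it as the shape of the idea rather than a real
API:

#include <stdio.h>

int main(void)
{
	/*
	 * Hypothetical: tell the kernel that node 0 may demote to node 2,
	 * e.g. a remote-memory node that was not instantiated via dax/kmem.
	 */
	FILE *f = fopen("/sys/devices/system/node/node0/demotion_targets", "w");

	if (!f) {
		perror("demotion_targets");
		return 1;
	}
	fprintf(f, "2\n");
	return fclose(f) == 0 ? 0 : 1;
}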


Is there a driver for the device (memory from the remote machine)? If
so, we can adjust the demotion order for it in the driver.


At this point, it is managed by the hypervisor and is hotplugged into the LPAR with additional properties specified via the device tree. So there is no inception-specific device driver.

In general, I think that we can adjust the demotion order inside the
kernel from various information sources. In addition to the ACPI SLIT,
we also have the HMAT, the kmem driver, other drivers, etc.


Managing inception memory will in any case require a userspace component to track the owning machine for the remote memory. So we should be OK to have userspace manage the demotion order.

-aneesh