On 6/1/2022 7:19 PM, Aneesh Kumar K V wrote:
> On 6/1/22 11:59 AM, Bharata B Rao wrote:
>> I was experimenting with this patchset and found this behaviour.
>> Here's what I did:
>>
>> Boot a KVM guest with a vNVDIMM device, which ends up with the
>> device_dax driver by default.
>>
>> Use it as RAM by binding it to the dax kmem driver. It now appears as
>> RAM with a new NUMA node that is put into memtier1 (the existing tier
>> where DRAM already exists). That should have placed it in memtier2.
>>
>> I can move it to memtier2 (MEMORY_RANK_PMEM) manually, but isn't
>> that expected to happen automatically when a node with a dax kmem
>> device comes up?
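The steps above can be sketched as a shell session. The device name dax0.0 is hypothetical, and the memtier sysfs layout under /sys/devices/system/memtier/ is an assumption based on this patchset's proposed interface, so paths may need adjusting:

```shell
#!/bin/sh
# Sketch of the reproduction steps above. The daxctl device name (dax0.0)
# is hypothetical and the memtier sysfs path is assumed from the patchset.

# Bind the device-dax instance to the dax kmem driver so it is onlined
# as system RAM (this creates the new NUMA node):
if command -v daxctl >/dev/null 2>&1; then
    daxctl reconfigure-device --mode=system-ram dax0.0
fi

# Show which memory tier each NUMA node ended up in; tiers stays empty
# on kernels without the memtier sysfs:
tiers=$(
    for f in /sys/devices/system/memtier/memtier*/nodelist; do
        [ -e "$f" ] && printf '%s: %s\n' "$f" "$(cat "$f")"
    done
)
printf '%s\n' "$tiers"
```

With the patchset applied, the expectation described above is that the pmem-backed node appears in memtier2's nodelist rather than memtier1's.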
> This can happen if we added the same NUMA node to memtier1 before the
> dax kmem driver initialized the pmem memory. Can you check, before the
> above node_set_memory_tier_rank() call, whether that NUMA node is
> already part of any memory tier?
When we reach node_set_memory_tier_rank(), node1 (the one with the pmem
device) is already part of memtier1, whose nodelist shows 0-1.
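The check being discussed can be done from userspace as well. Below is a small helper that, given a directory laid out like the patchset's proposed /sys/devices/system/memtier/ (memtierN/nodelist files) and a node id, prints the tiers whose nodelist already contains that node. The function name and the mocked tree are illustrative, not part of the patchset:

```shell
#!/bin/sh
# node_in_tiers ROOT NODE: print each memtierN under ROOT whose nodelist
# (e.g. "0-1,3") contains NODE. Purely a userspace sketch of the check.
node_in_tiers() {
    root=$1 node=$2
    for f in "$root"/memtier*/nodelist; do
        [ -e "$f" ] || continue
        list=$(cat "$f")
        # Expand comma-separated ranges like "0-1,3" one entry at a time.
        for range in $(printf '%s\n' "$list" | tr ',' ' '); do
            case $range in
                *-*) lo=${range%-*}; hi=${range#*-} ;;
                *)   lo=$range; hi=$range ;;
            esac
            if [ "$node" -ge "$lo" ] && [ "$node" -le "$hi" ]; then
                basename "$(dirname "$f")"
                break
            fi
        done
    done
}

# Demo against a mocked tree mirroring the situation described above:
# node1 already sits in memtier1 (nodelist 0-1), memtier2 is empty.
mkdir -p /tmp/memtier-demo/memtier1 /tmp/memtier-demo/memtier2
echo "0-1" > /tmp/memtier-demo/memtier1/nodelist
echo ""    > /tmp/memtier-demo/memtier2/nodelist
node_in_tiers /tmp/memtier-demo 1   # prints: memtier1
```

Run against the real sysfs root, this reports the same "node1 is already part of memtier1" condition before any manual move.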