Re: [PATCH v5 0/9] mm/demotion: Memory tiers and demotion

From: Aneesh Kumar K V
Date: Wed Jun 08 2022 - 10:20:53 EST


On 6/8/22 7:27 PM, Johannes Weiner wrote:
> Hi Aneesh,
> 
> On Fri, Jun 03, 2022 at 07:12:28PM +0530, Aneesh Kumar K.V wrote:
>> * The current tier initialization code always initializes
>>   each memory-only NUMA node into a lower tier. But a memory-only
>>   NUMA node may have a high performance memory device (e.g. a DRAM
>>   device attached via CXL.mem or a DRAM-backed memory-only node on
>>   a virtual machine) and should be put into a higher tier.
> 
> I have to disagree with this premise. The CXL.mem bus has different
> latency and bandwidth characteristics. It's also conceivable that
> cheaper and slower DRAM is connected to the CXL bus (think recycling
> DDR4 DIMMs after switching to DDR5). DRAM != DRAM.
> 
> Our experiments with production workloads show regressions between
> 15-30% in serviced requests when you don't distinguish toptier DRAM
> from lower-tier DRAM. While it's fixable with manual tuning, your
> patches would reintroduce this regression, it seems.
> 
> Making tiers explicit is a good idea, but can we keep the current
> default that CPU-less nodes are of a lower tier than ones with CPUs?
> I'm having a hard time imagining where this wouldn't be true... Or why
> it shouldn't be those esoteric cases that need the manual tuning.

This was mostly driven by virtual machine configs, where we can end up with memory-only NUMA nodes depending on resource availability in the hypervisor.

Will these CXL devices be initialized by a driver? For example, if they are going to be initialized via dax kmem, we already place them in a lower memory tier with this patch series.
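
For illustration only, here is a minimal sketch (not code from this series) of the kind of default policy being discussed, where CPU-less nodes start in a lower tier and nodes with CPUs keep the top tier. MEMTIER_DEFAULT, MEMTIER_LOWER and default_tier_for_node() are made-up names for this example; node_state() and N_CPU are the existing NUMA node-state helpers:

#include <linux/nodemask.h>

#define MEMTIER_DEFAULT	1	/* hypothetical: tier for nodes with CPUs */
#define MEMTIER_LOWER	2	/* hypothetical: tier for memory-only nodes */

static int default_tier_for_node(int nid)
{
	/* Nodes with CPUs stay in the default (top) DRAM tier. */
	if (node_state(nid, N_CPU))
		return MEMTIER_DEFAULT;

	/*
	 * Memory-only nodes (CXL.mem, dax/kmem hotplugged memory, or a
	 * CPU-less node handed out by the hypervisor) default to a lower
	 * tier; a driver that knows the device is fast could move the
	 * node to a higher tier after onlining it.
	 */
	return MEMTIER_LOWER;
}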

-aneesh