[PATCH] memory tiering: Do not allow promotion if NUMA_BALANCING_MEMORY_TIERING is disabled
From: Donet Tom
Date: Fri Mar 20 2026 - 05:24:04 EST
In the current implementation, if NUMA_BALANCING_MEMORY_TIERING is
disabled and a folio is on a lower memory tier, the folio may still be
promoted to a higher tier.
This happens because task_numa_work() repurposes the last_cpupid field
to record the last access time only when NUMA_BALANCING_MEMORY_TIERING
is enabled and the folio is on a lower tier. When
NUMA_BALANCING_MEMORY_TIERING is disabled, the last_cpupid field
retains a valid last CPU id.
In should_numa_migrate_memory(), promotion is blocked only when
NUMA_BALANCING_MEMORY_TIERING is disabled, the folio is on a lower
tier, and last_cpupid is invalid. However, since last_cpupid remains
valid when NUMA_BALANCING_MEMORY_TIERING is disabled, this condition
evaluates to false and the migration is allowed.
This patch prevents promotion when NUMA_BALANCING_MEMORY_TIERING is
disabled and the folio is on the lower tier.
Also, when NUMA_BALANCING_MEMORY_TIERING is enabled, last_cpupid is
always invalid, so the !cpupid_valid(last_cpupid) check in
task_numa_fault() is redundant. Remove the unnecessary check and
simplify the condition.
Behavior before this change:
============================
- If NUMA_BALANCING_NORMAL is enabled, migration occurs between
nodes within the same memory tier, and promotion from lower
tier to higher tier may also happen.
- If NUMA_BALANCING_MEMORY_TIERING is enabled, promotion from
lower tier to higher tier nodes is allowed.
Behavior after this change:
===========================
- If NUMA_BALANCING_NORMAL is enabled, migration will occur only
between nodes within the same memory tier.
- If NUMA_BALANCING_MEMORY_TIERING is enabled, promotion from lower
tier to higher tier nodes will be allowed.
- If both NUMA_BALANCING_MEMORY_TIERING and NUMA_BALANCING_NORMAL are
enabled, both migration (same tier) and promotion (cross tier) are
allowed.
Fixes: 33024536bafd ("memory tiering: hot page selection with hint page fault latency")
Signed-off-by: Donet Tom <donettom@xxxxxxxxxxxxx>
---
kernel/sched/fair.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bf948db905ed..39e860fce85a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1990,6 +1990,13 @@ bool should_numa_migrate_memory(struct task_struct *p, struct folio *folio,
*/
if (!node_state(dst_nid, N_MEMORY))
return false;
+ /*
+ * Do not allow promotion if NUMA_BALANCING_MEMORY_TIERING is disabled
+ * and the pages are on the lower tier.
+ */
+ if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+ !node_is_toptier(src_nid))
+ return false;
/*
* The pages in slow memory node should be migrated according
@@ -2024,10 +2031,6 @@ bool should_numa_migrate_memory(struct task_struct *p, struct folio *folio,
this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
last_cpupid = folio_xchg_last_cpupid(folio, this_cpupid);
- if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
- !node_is_toptier(src_nid) && !cpupid_valid(last_cpupid))
- return false;
-
/*
* Allow first faults or private faults to migrate immediately early in
* the lifetime of a task. The magic number 4 is based on waiting for
@@ -3242,8 +3245,7 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
* node for memory tiering mode.
*/
if (!node_is_toptier(mem_node) &&
- (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING ||
- !cpupid_valid(last_cpupid)))
+ (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
return;
/* Allocate buffer to track faults on a per-node basis */
--
2.52.0