Re: [PATCH v2] cgroup/cpuset: call fmeter_init() when cpuset.memory_pressure disabled

From: Waiman Long
Date: Wed Jan 15 2025 - 10:38:15 EST


On 1/15/25 12:05 AM, Jin Guojie wrote:
When running LTP's cpuset_memory_pressure program, the following error occurs:

(1) Create a cgroup, enable the cpuset subsystem, set a memory limit, and
then set cpuset.memory_pressure_enabled to 1
(2) In this cgroup, create a process that allocates a large amount of
memory and generates pressure counts
(3) Set cpuset.memory_pressure_enabled to 0
(4) Check cpuset.memory_pressure: LTP expects it to be 0, but the
current kernel returns 1, so LTP reports FAIL
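
A rough standalone reproducer sketch (not the actual LTP test; it assumes a
cgroup v1 cpuset hierarchy mounted at /sys/fs/cgroup/cpuset with an already
configured child cpuset named "test"):

/* repro.c - illustration only; mount point and "test" cpuset are assumptions */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define CPUSET_ROOT "/sys/fs/cgroup/cpuset"

static void write_str(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0 || write(fd, val, strlen(val)) < 0)
                perror(path);
        if (fd >= 0)
                close(fd);
}

int main(void)
{
        char buf[64] = "";
        int fd;

        /* (1) enable pressure accounting via the root-only knob */
        write_str(CPUSET_ROOT "/cpuset.memory_pressure_enabled", "1");

        /* (2) ... run a memory hog inside the "test" cpuset here ... */

        /* (3) disable pressure accounting again */
        write_str(CPUSET_ROOT "/cpuset.memory_pressure_enabled", "0");

        /* (4) LTP expects this to read back as 0 */
        fd = open(CPUSET_ROOT "/test/cpuset.memory_pressure", O_RDONLY);
        if (fd >= 0 && read(fd, buf, sizeof(buf) - 1) > 0)
                printf("cpuset.memory_pressure: %s", buf);
        if (fd >= 0)
                close(fd);
        return 0;
}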

V2:
* call fmeter_init() when writing 0 to memory_pressure_enabled

Compared with patch v1 [1], this version implements clearer logic.

[1] https://lore.kernel.org/cgroups/CA+B+MYRNsdKcYxC8kbyzVrdH9fT8c2if5UxGguKep36ZHe6HMQ@xxxxxxxxxxxxxx/T/#u

Signed-off-by: Jin Guojie <guojie.jin@xxxxxxxxx>
Suggested-by: Michal Koutný <mkoutny@xxxxxxxx>
Suggested-by: Waiman Long <longman@xxxxxxxxxx>
---
kernel/cgroup/cpuset-v1.c | 4 +++-
kernel/cgroup/cpuset.c | 2 ++
2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/cgroup/cpuset-v1.c b/kernel/cgroup/cpuset-v1.c
index 25c1d7b77e2f..7520eb31598a 100644
--- a/kernel/cgroup/cpuset-v1.c
+++ b/kernel/cgroup/cpuset-v1.c
@@ -66,7 +66,6 @@ void fmeter_init(struct fmeter *fmp)
fmp->cnt = 0;
fmp->val = 0;
fmp->time = 0;
- spin_lock_init(&fmp->lock);
}

/* Internal meter update - process cnt events and update value */
@@ -437,6 +436,9 @@ static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
break;
case FILE_MEMORY_PRESSURE_ENABLED:
cpuset_memory_pressure_enabled = !!val;
+ if (cpuset_memory_pressure_enabled == 0) {
+ fmeter_init(&cs->fmeter);
+ }
Nit: you don't need braces when there is only one statement underneath "if".
break;
case FILE_SPREAD_PAGE:
retval = cpuset_update_flag(CS_SPREAD_PAGE, cs, val);
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 0f910c828973..3583c898ff77 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -3378,6 +3378,7 @@ cpuset_css_alloc(struct cgroup_subsys_state *parent_css)

__set_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
fmeter_init(&cs->fmeter);
+ spin_lock_init(&cs->fmeter.lock);
cs->relax_domain_level = -1;
INIT_LIST_HEAD(&cs->remote_sibling);

@@ -3650,6 +3651,7 @@ int __init cpuset_init(void)
nodes_setall(top_cpuset.effective_mems);

fmeter_init(&top_cpuset.fmeter);
+ spin_lock_init(&top_cpuset.fmeter.lock);
INIT_LIST_HEAD(&remote_children);

BUG_ON(!alloc_cpumask_var(&cpus_attach, GFP_KERNEL));
--
2.34.1

I just realized that cpuset.memory_pressure_enabled exists on the root cgroup only and sets a global flag that affects the behavior of all existing cpusets. Your current patch will clear the memory pressure data in the root cgroup only. The child cpusets will not be affected and will still show their existing data. This inconsistency isn't good.
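
To illustrate why the flag is global: the enable check sits in the
event-recording helper, which charges whichever cpuset the current task
belongs to. Roughly (paraphrased from the cpuset header, not quoted
verbatim):

extern int cpuset_memory_pressure_enabled;
extern void __cpuset_memory_pressure_bump(void);

static inline void cpuset_memory_pressure_bump(void)
{
        if (cpuset_memory_pressure_enabled)
                __cpuset_memory_pressure_bump();  /* fmeter_markevent() on current's cpuset */
}

So every cpuset keeps accumulating fmeter data while the flag is set, not
just the one whose file is later written.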

OTOH, I also don't think iterating the whole cpuset hierarchy and clearing all the fmeter data is worth the effort, given that cgroup v1 is in maintenance mode. Perhaps just add a simple check to return 0 when cpuset.memory_pressure_enabled isn't set, like in the v1 patch. I also don't think we need to clear the fmeter data in that case, since clearing on read would only reset the cpusets whose cpuset.memory_pressure happens to be read while cpuset.memory_pressure_enabled is cleared.
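
Something along these lines in the cpuset.memory_pressure read handler would
be enough (untested sketch; names follow cpuset_read_u64() in cpuset-v1.c):

        case FILE_MEMORY_PRESSURE:
                if (!cpuset_memory_pressure_enabled)
                        return 0;       /* accounting off: report no pressure, keep old data */
                return fmeter_getrate(&cs->fmeter);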

Cheers,
Longman