Re: [PATCH mm-new v3] mm/memcontrol: Add memory.stat_refresh for on-demand stats flushing
From: Chen Ridong
Date: Mon Nov 10 2025 - 20:03:47 EST
On 2025/11/10 21:50, Michal Koutný wrote:
> Hello Leon.
>
> On Mon, Nov 10, 2025 at 06:19:48PM +0800, Leon Huang Fu <leon.huangfu@xxxxxxxxxx> wrote:
>> Memory cgroup statistics are updated asynchronously with periodic
>> flushing to reduce overhead. The current implementation uses a flush
>> threshold calculated as MEMCG_CHARGE_BATCH * num_online_cpus() for
>> determining when to aggregate per-CPU memory cgroup statistics. On
>> systems with high core counts, this threshold can become very large
>> (e.g., 64 * 256 = 16,384 on a 256-core system), leading to stale
>> statistics when userspace reads memory.stat files.
>>
We have encountered this problem multiple times when running LTP tests; it is easy to hit with a
64K page size. A typical failure looks like:
memcg_stat_rss 10 TFAIL: rss is 0, 266240 expected
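For illustration, the threshold behaviour described above can be sketched in plain C
(a simplified model only, not the actual mm/memcontrol.c code; the function and
variable names here are mine):

  #include <stdbool.h>
  #include <stdio.h>

  #define MEMCG_CHARGE_BATCH 64  /* per-CPU batching constant used by memcg */

  /*
   * Simplified model of the flush gate: a read only triggers a flush once
   * the accumulated per-CPU updates exceed MEMCG_CHARGE_BATCH * online CPUs.
   */
  static bool stats_need_flush(long pending_updates, int online_cpus)
  {
          return pending_updates > (long)MEMCG_CHARGE_BATCH * online_cpus;
  }

  int main(void)
  {
          /* On 256 CPUs the threshold is 64 * 256 = 16384 updates, so a
           * read with 10000 pending updates still sees stale statistics. */
          printf("flush needed: %d\n", stats_need_flush(10000, 256));
          return 0;
  }

This is why the LTP case above can read rss as 0 even though pages were just charged.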
>> This is particularly problematic for monitoring and management tools
>> that rely on reasonably fresh statistics, as they may observe data
>> that is thousands of updates out of date.
>>
>> Introduce a new write-only file, memory.stat_refresh, that allows
>> userspace to explicitly trigger an immediate flush of memory statistics.
>
> I think it's worth thinking twice when introducing a new file like
> this...
>
>> Writing any value to this file forces a synchronous flush via
>> __mem_cgroup_flush_stats(memcg, true) for the cgroup and all its
>> descendants, ensuring that subsequent reads of memory.stat and
>> memory.numa_stat reflect current data.
>>
>> This approach follows the pattern established by /proc/sys/vm/stat_refresh
>> and memory.peak, where the written value is ignored, keeping the
>> interface simple and consistent with existing kernel APIs.
>>
>> Usage example:
>> echo 1 > /sys/fs/cgroup/mygroup/memory.stat_refresh
>> cat /sys/fs/cgroup/mygroup/memory.stat
>>
>> The feature is available in both cgroup v1 and v2 for consistency.
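For readers following the thread, here is a minimal sketch of how such a handler is
typically wired into cgroup v2, based only on the description above; apart from
__mem_cgroup_flush_stats(), the names are illustrative and not taken from the patch:

  static ssize_t memory_stat_refresh_write(struct kernfs_open_file *of,
                                           char *buf, size_t nbytes,
                                           loff_t off)
  {
          struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));

          /* The written value is ignored, as with memory.peak. */
          __mem_cgroup_flush_stats(memcg, true);
          return nbytes;
  }

  /* Registered via a new cftype entry, e.g. in memory_files[]:
   *   { .name = "stat_refresh", .write = memory_stat_refresh_write },
   */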
>
> First, I find the motivation by the testcase (not real world) weak when
> considering such an API change (e.g. real world would be confined to
> fewer CPUs or there'd be other "traffic" causing flushes making this a
> non-issue, we don't know here).
>
> Second, this is open to everyone (non-root) who mkdir's their cgroups.
> Then why not make it the default memory.stat behavior? (Tongue-in-cheek,
> but [*].)
>
> With this change, we admit the implementation (async flushing) and leak
> it to the users which is hard to take back. Why should we continue doing
> any implicit in-kernel flushing afterwards?
>
> Next, v1 and v2 haven't been consistent since introduction of v2 (unlike
> some other controllers that share code or even cftypes between v1 and
> v2). So I'd avoid introducing a new file to V1 API.
>
We encountered this problem in v1; I think it is a common problem that should be fixed there as well.
> When looking for analogies, I admittedly like memory.reclaim's
> O_NONBLOCK better (than /proc/sys/vm/stat_refresh). That would be an
> argument for flushing by default mentioned above [*].
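For illustration, the O_NONBLOCK idea would look roughly like this from userspace
(purely hypothetical semantics; memory.stat has no such behaviour today):

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          char buf[4096];
          ssize_t n;
          /* Hypothetical: a plain open() would imply flush-on-read, while
           * O_NONBLOCK would opt out and accept possibly stale data. */
          int fd = open("/sys/fs/cgroup/mygroup/memory.stat",
                        O_RDONLY | O_NONBLOCK);

          if (fd < 0)
                  return 1;
          n = read(fd, buf, sizeof(buf) - 1);
          if (n > 0) {
                  buf[n] = '\0';
                  fputs(buf, stdout);
          }
          close(fd);
          return 0;
  }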
>
> Also, this undercuts the hooking of rstat flushing into BPF. I think the
> attempts were given up too early (I read about the verifier vs
> seq_file). Have you tried bypassing bailout from
> __mem_cgroup_flush_stats via trace_memcg_flush_stats?
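As a data point on the observability side, something like the following libbpf-style
program could count how often flushes actually happen, assuming the memcg_flush_stats
tracepoint referenced above is available (a sketch only; program and variable names
are mine, and the tracepoint arguments are deliberately not relied upon):

  #include <linux/types.h>
  #include <bpf/bpf_helpers.h>

  __u64 flush_count = 0;

  /* Attach to the raw tracepoint and count invocations; a userspace
   * loader would read flush_count back via the BPF skeleton. */
  SEC("raw_tracepoint/memcg_flush_stats")
  int count_memcg_flushes(void *ctx)
  {
          __sync_fetch_and_add(&flush_count, 1);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";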
>
>
> All in all, I'd like to have more backing data on insufficiency of (all
> the) rstat optimizations before opening explicit flushes like this
> (especially when it's meant to be exposed by BPF already).
>
> Thanks,
> Michal
>
>
--
Best regards,
Ridong