Re: [v2 PATCH 1/1] fs/proc: Expose mm_cpumask in /proc/[pid]/status
From: David Hildenbrand (Red Hat)
Date: Tue Dec 30 2025 - 16:16:35 EST
On 12/26/25 22:14, Aaron Tomlin wrote:
This patch introduces two new fields to /proc/[pid]/status to display the
set of CPUs, representing the CPU affinity of the process's active
memory context, in both mask and list format: "Cpus_active_mm" and
"Cpus_active_mm_list". The mm_cpumask is primarily used for TLB and
cache synchronisation.
Exposing this information allows userspace to easily describe the
relationship between CPUs where a memory descriptor is "active" and the
CPUs where the thread is allowed to execute. The primary intent is to
provide visibility into the "memory footprint" across CPUs, which is
invaluable for debugging performance issues related to IPI storms and
TLB shootdowns on large-scale NUMA systems. The CPU affinity sets the
boundary; the mm_cpumask records where the mm has actually been active;
the two complement each other.
Frequent mm_cpumask changes may indicate instability in placement
policies or excessive task migration overhead.
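For illustration, the proposed fields presumably follow the same encoding as
the existing Cpus_allowed/Cpus_allowed_list pair in /proc/[pid]/status: a
comma-separated hex mask and a range list. A minimal userspace parsing sketch,
assuming that format and the field names from this patch:

```python
# Hypothetical parser for the proposed Cpus_active_mm fields.
# Assumes the mask uses comma-separated hex groups (as Cpus_allowed does)
# and the list uses comma-separated ranges (as Cpus_allowed_list does).

def parse_cpumask(mask: str) -> set[int]:
    """Parse a hex cpumask like 'f' or '00000001,00000000' into CPU numbers."""
    value = int(mask.replace(",", ""), 16)
    return {cpu for cpu in range(value.bit_length()) if (value >> cpu) & 1}

def parse_cpulist(cpulist: str) -> set[int]:
    """Parse a CPU list like '0-3,8' into CPU numbers."""
    cpus: set[int] = set()
    for part in cpulist.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return cpus

# Example status fragment with the two new (hypothetical) fields.
status = "Cpus_active_mm:\tf\nCpus_active_mm_list:\t0-3\n"
fields = dict(line.split(":\t") for line in status.splitlines())

# Both representations should describe the same CPU set.
assert parse_cpumask(fields["Cpus_active_mm"]) == {0, 1, 2, 3}
assert parse_cpulist(fields["Cpus_active_mm_list"]) == {0, 1, 2, 3}
```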
Just a note: I have the faint recollection that there are some arch-specific oddities around mm_cpumask().
In particular, that some architectures never clear CPUs from the mask, while others (e.g., x86) clear them once the TLB for them is clean.
I'd assume that all architectures at least set the CPUs once they ever ran an MM. But are we sure about that?
$ git grep mm_cpumask | grep m68k
gives me no results and I don't see common code to ever set a cpu in
the mm_cpumask.
--
Cheers
David