Re: [PATCH v2 2/2] sched/numa: Add tracepoint that tracks the skipping of numa balancing due to cpuset memory pinning

From: Libo Chen
Date: Wed Mar 26 2025 - 20:40:57 EST


forgot to add Steven Rostedt.

On 3/26/25 17:23, Libo Chen wrote:
> Unlike the sched_skip_vma_numa tracepoint, which tracks skipped VMAs, this
> one tracks tasks whose NUMA balancing is skipped due to cpuset.mems
> pinning, and prints out the task's allowed memory node mask.
> ---
> include/trace/events/sched.h | 31 +++++++++++++++++++++++++++++++
> kernel/sched/fair.c          |  4 +++-
> 2 files changed, 34 insertions(+), 1 deletion(-)
>
> diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
> index bfd97cce40a1a..133d9a671734a 100644
> --- a/include/trace/events/sched.h
> +++ b/include/trace/events/sched.h
> @@ -745,6 +745,37 @@ TRACE_EVENT(sched_skip_vma_numa,
> __entry->vm_end,
> __print_symbolic(__entry->reason, NUMAB_SKIP_REASON))
> );
> +
> +TRACE_EVENT(sched_skip_cpuset_numa,
> +
> + TP_PROTO(struct task_struct *tsk, nodemask_t *mem_allowed_ptr),
> +
> + TP_ARGS(tsk, mem_allowed_ptr),
> +
> + TP_STRUCT__entry(
> + __array( char, comm, TASK_COMM_LEN )
> + __field( pid_t, pid )
> + __field( pid_t, tgid )
> + __field( pid_t, ngid )
> + __array( unsigned long, mem_allowed, BITS_TO_LONGS(MAX_NUMNODES))
> + ),
> +
> + TP_fast_assign(
> + memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN);
> + __entry->pid = task_pid_nr(tsk);
> + __entry->tgid = task_tgid_nr(tsk);
> + __entry->ngid = task_numa_group_id(tsk);
> + memcpy(__entry->mem_allowed, mem_allowed_ptr->bits,
> + sizeof(__entry->mem_allowed));
> + ),
> +
> + TP_printk("comm=%s pid=%d tgid=%d ngid=%d mem_node_allowed_mask=%lx",


I cannot find a way to print a nodemask_t nicely here with %*pbl, so I
fell back to the raw hex value. I would be grateful if someone knows a
better way to print a nodemask in a tracepoint.
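One possibility worth trying (a sketch only, not tested here): the generic trace-event helpers include __bitmask(), __assign_bitmask() and __get_bitmask(), which record a variable-size bitmap into the ring buffer and render it in the comma/range list format that %*pbl would produce. Assuming they are acceptable for a nodemask_t's bits, the event could look like:

```c
/*
 * Sketch using the generic __bitmask trace-event helpers instead of a
 * raw unsigned long array. Whether this is the preferred way to dump a
 * nodemask_t in a tracepoint is an open question; the field names match
 * the patch above.
 */
TRACE_EVENT(sched_skip_cpuset_numa,

	TP_PROTO(struct task_struct *tsk, nodemask_t *mem_allowed_ptr),

	TP_ARGS(tsk, mem_allowed_ptr),

	TP_STRUCT__entry(
		__array(   char,  comm, TASK_COMM_LEN )
		__field(   pid_t, pid               )
		__field(   pid_t, tgid              )
		__field(   pid_t, ngid              )
		__bitmask( mem_allowed, MAX_NUMNODES )
	),

	TP_fast_assign(
		memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN);
		__entry->pid  = task_pid_nr(tsk);
		__entry->tgid = task_tgid_nr(tsk);
		__entry->ngid = task_numa_group_id(tsk);
		__assign_bitmask(mem_allowed, mem_allowed_ptr->bits,
				 MAX_NUMNODES);
	),

	/* __get_bitmask() formats the recorded bitmap as a list string */
	TP_printk("comm=%s pid=%d tgid=%d ngid=%d mem_nodes_allowed=%s",
		__entry->comm,
		__entry->pid,
		__entry->tgid,
		__entry->ngid,
		__get_bitmask(mem_allowed))
);
```

This would also avoid the truncation of printing only mem_allowed[0] on configs where MAX_NUMNODES exceeds BITS_PER_LONG.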


> + __entry->comm,
> + __entry->pid,
> + __entry->tgid,
> + __entry->ngid,
> + __entry->mem_allowed[0])
> +);
> #endif /* CONFIG_NUMA_BALANCING */
>
> /*
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6f405e00c9c7e..a98842a96eda0 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3333,8 +3333,10 @@ static void task_numa_work(struct callback_head *work)
> * Memory is pinned to only one NUMA node via cpuset.mems, naturally
> * no page can be migrated.
> */
> - if (nodes_weight(cpuset_current_mems_allowed) == 1)
> + if (nodes_weight(cpuset_current_mems_allowed) == 1) {
> + trace_sched_skip_cpuset_numa(current, &cpuset_current_mems_allowed);
> return;
> + }
>
> if (!mm->numa_next_scan) {
> mm->numa_next_scan = now +