Re: [PATCH] mm/vmstat: spread vmstat_update requeue across the stat interval
From: Breno Leitao
Date: Wed Apr 08 2026 - 13:01:16 EST
On Wed, Apr 08, 2026 at 08:13:43AM -0700, Breno Leitao wrote:
> On Wed, Apr 08, 2026 at 12:13:04PM +0200, Vlastimil Babka (SUSE) wrote:
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 2370c6fb1fcd6..8d53242e7aa66 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -2139,8 +2139,12 @@ static void vmstat_shepherd(struct work_struct *w)
>  		if (cpu_is_isolated(cpu))
>  			continue;
> 
> -		if (!delayed_work_pending(dw) && need_update(cpu))
> +		if (!delayed_work_pending(dw) && need_update(cpu)) {
> +			WARN_ONCE(work_busy(&dw->work) & WORK_BUSY_RUNNING,
> +				  "cpu%d: vmstat_update already running, scheduling again\n",
> +				  cpu);
>  			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
> +		}
>  	}
> 
>  	cond_resched();
>
> The fix is a one-line change: !delayed_work_pending(dw) → !work_busy(&dw->work)

In my testing, this race condition occurs more frequently than expected,
likely due to the timer configurations we've been discussing throughout
this thread.

I developed a diagnostic patch to monitor how often the vmstat_update
worker gets scheduled on each CPU, and the measured gaps between
consecutive invocations are consistently low. Avoiding rescheduling a
worker that is already running also reduces the contention seen in the
stress-ng test case.

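The instrumentation was along these lines (a sketch, not the exact
patch; the per-CPU variable name is illustrative): record the jiffies
delta between consecutive vmstat_update() invocations on each CPU and
print it.

/* Hypothetical diagnostic sketch, not the exact patch used. */
static DEFINE_PER_CPU(unsigned long, vmstat_last_run);

static void vmstat_update(struct work_struct *w)
{
	unsigned long *last = this_cpu_ptr(&vmstat_last_run);

	if (*last)
		pr_info("cpu%d: vmstat_update gap: %lu jiffies\n",
			smp_processor_id(), jiffies - *last);
	*last = jiffies;

	/* ... original vmstat_update() body follows ... */
}
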
commit d725f0664b70aa5c677215b0fc1abc0117aaf114
Author: Breno Leitao <leitao@xxxxxxxxxx>
Date: Wed Apr 8 09:01:02 2026 -0700

mm/vmstat: fix vmstat_shepherd double-scheduling vmstat_update

vmstat_shepherd uses delayed_work_pending() to check whether
vmstat_update is already scheduled for a given CPU before queuing it.
However, delayed_work_pending() only tests WORK_STRUCT_PENDING_BIT,
which is cleared the moment a worker thread picks up the work to
execute it.

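That follows directly from the helper's definition in
include/linux/workqueue.h (explanatory comment mine):

/*
 * Only the PENDING bit is tested; the workqueue code clears that bit
 * when a worker starts executing the callback, so a running work item
 * reads as "not pending".
 */
#define work_pending(work) \
	test_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))

#define delayed_work_pending(w) \
	work_pending(&(w)->work)
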
This means that while vmstat_update is actively running on a CPU,
delayed_work_pending() returns false. If need_update() also returns
true at that point (per-cpu counters not yet zeroed mid-flush), the
shepherd queues a second invocation with delay=0, causing vmstat_update
to run again immediately after finishing.

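Schematically (comment-only sketch of the interleaving):

/*
 * vmstat_shepherd                     vmstat_update worker on CPU n
 * ---------------                     -----------------------------
 *                                     dequeued; PENDING bit cleared
 *                                     refresh_cpu_vm_stats() running,
 *                                     counters only partially flushed
 * delayed_work_pending(dw) -> false
 * need_update(cpu)         -> true
 * queue_delayed_work_on(cpu, .., 0)
 *                                     finishes, then immediately reruns
 */
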
On a 72-CPU system this race is readily observable: before the fix,
many CPUs show invocation gaps well below 500 jiffies (the minimum
round_jiffies_relative() can produce), with the most extreme cases
reaching 0 jiffies, i.e. vmstat_update called twice within the same
jiffy.

Fix this by replacing delayed_work_pending() with work_busy(), which
returns non-zero for both WORK_BUSY_PENDING (timer armed or work
queued) and WORK_BUSY_RUNNING (work currently executing). The shepherd
now correctly skips a CPU in all busy states.

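For reference, the return bits are defined in include/linux/workqueue.h
(comments mine):

	WORK_BUSY_PENDING = 1 << 0,	/* timer armed or work queued */
	WORK_BUSY_RUNNING = 1 << 1,	/* callback currently executing */
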
After the fix, all sub-jiffy and most sub-100-jiffy gaps disappear.
The remaining early invocations have gaps in the 700-999 jiffy range,
attributable to round_jiffies_relative() aligning to a nearby
whole-second boundary rather than to this race.

Each spurious vmstat_update invocation has a measurable side effect:
refresh_cpu_vm_stats() calls decay_pcp_high() for every zone, which
drains idle per-CPU pages back to the buddy allocator via
free_pcppages_bulk(), taking the zone spinlock each time. Eliminating
the double-scheduling therefore reduces zone lock contention directly.

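The call chain, roughly (simplified from mm/vmstat.c and
mm/page_alloc.c):

/*
 * vmstat_update()
 *   refresh_cpu_vm_stats(true)
 *     for each populated zone with pages on this CPU's pcplists:
 *       decay_pcp_high(zone, pcp)     // lowers pcp->high, frees excess
 *         free_pcppages_bulk(...)     // takes zone->lock
 */
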
On a 72-CPU stress-ng workload measured with perf lock contention:

  free_pcppages_bulk contention count: ~55% reduction
  free_pcppages_bulk total wait time:  ~57% reduction
  free_pcppages_bulk max wait time:    ~47% reduction

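The numbers came from perf's lock contention tool, invoked roughly as
below (flags vary by perf version; the stress-ng arguments are elided):

  perf lock contention -ab -- stress-ng ...
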
Note: work_busy() is inherently racy. Between the check and the
subsequent queue_delayed_work_on() call, vmstat_update can finish
execution, leaving the work neither pending nor running. In that
narrow window the shepherd can still queue a second invocation.
After the fix, this residual race is rare and produces only occasional
small gaps, a significant improvement over the systematic
double-scheduling seen with delayed_work_pending().

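Annotated, the residual window sits between the two calls (comment
mine, mirroring the caveat above):

	/*
	 * Racy by design: vmstat_update can finish between this check
	 * and the queueing below, so an occasional back-to-back run is
	 * still possible, just no longer systematic.
	 */
	if (!work_busy(&dw->work) && need_update(cpu))
		queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
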
Signed-off-by: Breno Leitao <leitao@xxxxxxxxxx>

diff --git a/mm/vmstat.c b/mm/vmstat.c
index d59eff1582547..5489549241b51 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -2156,7 +2156,7 @@ static void vmstat_shepherd(struct work_struct *w)
 		if (cpu_is_isolated(cpu))
 			continue;
 
-		if (!delayed_work_pending(dw) && need_update(cpu))
+		if (!work_busy(&dw->work) && need_update(cpu))
 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
 	}