Re: [PATCH v2 0/5] workqueue: Detect stalled in-flight workers
From: Petr Mladek
Date: Thu Mar 12 2026 - 12:45:11 EST
On Thu 2026-03-05 08:15:36, Breno Leitao wrote:
> A blind spot exists in the workqueue stall detector (aka
> show_cpu_pool_hog()). It only prints workers whose task_is_running() is
> true, so a busy worker that is sleeping (e.g. in wait_event_idle())
> produces an empty backtrace section even though it is the cause of the
> stall.
>
> Additionally, when the watchdog does report stalled pools, the output
> doesn't show how long each in-flight work item has been running, making
> it harder to identify which specific worker is stuck.
>
> Example output from the sample code:
>
> BUG: workqueue lockup - pool cpus=4 node=0 flags=0x0 nice=0 stuck for 132s!
> Showing busy workqueues and worker pools:
> workqueue events: flags=0x100
> pwq 18: cpus=4 node=0 flags=0x0 nice=0 active=4 refcnt=5
> in-flight: 178:stall_work1_fn [wq_stall]
> pending: stall_work2_fn [wq_stall], free_obj_work, psi_avgs_work
> ...
> Showing backtraces of running workers in stalled
> CPU-bound worker pools:
> <nothing here>
>
> I see this happening on real machines, causing stalls that don't
> have any backtrace. This is one of the code paths:
>
> 1) kfence executes toggle_allocation_gate() as a delayed workqueue
> item (kfence_timer) on the system WQ.
>
> 2) toggle_allocation_gate() enables a static key, which IPIs every
> CPU to patch code:
> static_branch_enable(&kfence_allocation_key);
>
> 3) toggle_allocation_gate() then sleeps in TASK_IDLE waiting for a
> kfence allocation to occur:
> wait_event_idle(allocation_wait,
> atomic_read(&kfence_allocation_gate) > 0 || ...);
>
> This can last indefinitely if no allocation goes through the
> kfence path (or if IPIing all the CPUs takes longer, which is common
> on platforms that do not have NMI).
>
> The worker remains in the pool's busy_hash
> (in-flight) but is no longer task_is_running().
>
> 4) The workqueue watchdog detects the stall and calls
> show_cpu_pool_hog(), which only prints backtraces for workers
> that are actively running on CPU:
>
> static void show_cpu_pool_hog(struct worker_pool *pool) {
> ...
> if (task_is_running(worker->task))
> sched_show_task(worker->task);
> }
>
> 5) Nothing is printed because the offending worker is in TASK_IDLE
> state. The output shows "Showing backtraces of running workers in
> stalled CPU-bound worker pools:" followed by nothing, effectively
> hiding the actual culprit.
I am trying to better understand the situation. There was a reason
why only the workers in the running state were shown.
Normally, a sleeping worker should not cause a stall. The scheduler calls
wq_worker_sleeping() which should wake up another idle worker. There is
always at least one idle worker in the pool. It should start processing
the next pending work. Or it should fork another worker when it was
the last idle one.
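For reference, that wake-up path looks roughly like this (a simplified
sketch, not verbatim kernel/workqueue.c; helper names such as
kick_pool() vs. wake_up_worker() differ between kernel versions):

```c
/*
 * Simplified sketch of the scheduler-side hook (not verbatim kernel
 * code). It runs when a worker task is about to block.
 */
void wq_worker_sleeping(struct task_struct *task)
{
	struct worker *worker = kthread_data(task);
	struct worker_pool *pool = worker->pool;

	/* Workers marked NOT_RUNNING are not accounted as running. */
	if (worker->flags & WORKER_NOT_RUNNING)
		return;

	raw_spin_lock_irq(&pool->lock);
	/*
	 * If this was the last running worker and work is still
	 * pending, wake an idle worker (which may in turn create a new
	 * one) so the pool keeps making progress.
	 */
	if (need_more_worker(pool))
		kick_pool(pool);
	raw_spin_unlock_irq(&pool->lock);
}
```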
I wonder what blocked the idle worker from waking or forking
a new worker. Was it caused by the IPIs?
Did printing the sleeping workers help to analyze the problem?
I wonder if we could do better in this case. For example, warn
that the scheduler failed to wake up another idle worker when
no worker is in the running state. And maybe print the backtrace
of the currently running process on the given CPU because it
likely blocks waking/scheduling the idle worker.
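Something along these lines might work (an untested sketch of the idea;
dump_cpu_task() and the busy_hash iteration exist in mainline, but the
warning itself is hypothetical):

```c
/*
 * Untested sketch on top of show_cpu_pool_hog(): if no busy worker is
 * actually on a CPU, warn and dump whatever currently runs on the
 * pool's CPU, since it likely prevents the idle worker from being
 * woken or scheduled.
 */
static void show_cpu_pool_hog(struct worker_pool *pool)
{
	struct worker *worker;
	bool on_cpu = false;
	int bkt;

	hash_for_each(pool->busy_hash, bkt, worker, hentry) {
		if (task_is_running(worker->task)) {
			on_cpu = true;
			sched_show_task(worker->task);
		}
	}

	if (!on_cpu && pool->cpu >= 0) {
		pr_warn("pool %d: no busy worker is on a CPU; dumping the task running on CPU %d\n",
			pool->id, pool->cpu);
		dump_cpu_task(pool->cpu);
	}
}
```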
Otherwise, I like the other improvements.
Best Regards,
Petr