Re: [PATCH] xfs: convert alloc_workqueue users to WQ_UNBOUND
From: Dave Chinner
Date: Thu Feb 19 2026 - 18:05:33 EST
On Thu, Feb 19, 2026 at 08:25:56AM +0100, Sebastian Andrzej Siewior wrote:
> On 2026-02-19 12:24:38 [+1100], Dave Chinner wrote:
> > > The changes from per-cpu to unbound will help to improve situations where
> > > CPU isolation is used, because unbound work can be moved away from
> > > isolated CPUs.
> >
> > If you are running operations through the XFS filesystem on isolated
> > CPUs, then you absolutely need some of these per-cpu workqueues
> > running on those isolated CPUs too.
> >
> > Also, these workqueues are typically implemented these ways to meet
> > performance targets, concurrency constraints, or algorithm
> > requirements. Changes like this need a bunch of XFS metadata
> > scalability benchmarks on high end server systems under a variety of
> > conditions to at least show there aren't any obvious behavioural
> > or performance regressions that result from the change.
>
> So all of those (below) where you say "performance critical", those
> work items are only enqueued from an interrupt?
No.
> Never origin from a user task?
Inode GC is most definitely driven from user tasks with unbound
concurrency (e.g. unlink(), close() and other syscalls that can drop
a file reference). It can also be driven by the kernel through
direct reclaim (again, from user task context with unbound
concurrency), and from pure kernel context via reclaim from kswapd
(strictly bound concurrency in this case).
The lockless per-cpu queuing and processing algorithm was added
because the inode eviction path from user context is performance
critical. The original version using unbound workqueues had major
performance regressions. There's discussion of the reasons for
those performance regressions and numbers around those in the
original discussions and prototypes:
https://lore.kernel.org/linux-xfs/20210802103559.GH2757197@xxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-xfs/20210804032030.GT3601443@magnolia/
-Dave.
--
Dave Chinner
dgc@xxxxxxxxxx