Re: [PATCH v2 0/5] workqueue: Introduce a sharded cache affinity scope
From: Breno Leitao
Date: Mon Mar 23 2026 - 11:35:05 EST
Hello Chuck,
On Mon, Mar 23, 2026 at 10:11:07AM -0400, Chuck Lever wrote:
> On Fri, Mar 20, 2026, at 1:56 PM, Breno Leitao wrote:
> > TL;DR: Some modern processors have many CPUs per LLC (L3 cache), and
> > unbound workqueues using the default affinity (WQ_AFFN_CACHE) collapse
> > to a single worker pool, causing heavy spinlock (pool->lock) contention.
> > Create a new affinity (WQ_AFFN_CACHE_SHARD) that caps each pool at
> > wq_cache_shard_size CPUs (default 8).
> >
> > Changes from RFC:
> >
> > * wq_cache_shard_size is expressed in cores (not vCPUs). So
> > wq_cache_shard_size=8 means the pool will have 8 cores plus their SMT
> > siblings, e.g. 16 threads/CPUs on a 2-way SMT system
>
> My concern about the "cores per shard" approach is that it does
> little or nothing to improve the default situation on
> moderately-sized machines.
>
> A machine with one L3 and 10 cores will go from 1 UNBOUND
> pool to only 2. For virtual machines commonly deployed as
> cloud instances, which are 2-, 4-, or 8-core systems (up to
> 16 threads), there will still be significant contention for
> UNBOUND workers.
Could you clarify your concern? Are you suggesting the default value of
wq_cache_shard_size=8 is too high, or that the cores-per-shard approach
fundamentally doesn't scale well for moderately-sized systems?
Any approach, whether sharding by cores or by LLC, ultimately relies on
heuristics that may need tuning for specific workloads. The key difference
is where we draw the line. The current default of 8 cores prevents the
worst-case scenario: severe pool->lock contention on large systems where
16+ CPUs all hammer a single worker pool.
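To make the sizing concrete, here is a minimal sketch of the intended
arithmetic (helper name hypothetical, not taken from the patch):

#include <linux/math.h>

/*
 * Hypothetical helper, for illustration only: with the default
 * wq_cache_shard_size=8, an LLC is split into ceil(nr_cores / 8)
 * pools, e.g. 64 cores -> 8 pools, 10 cores -> 2, 6 cores -> 1.
 */
static unsigned int wq_cache_nr_shards(unsigned int nr_cores,
				       unsigned int shard_size)
{
	return DIV_ROUND_UP(nr_cores, shard_size);
}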
For smaller systems (2-4 CPUs), contention is usually negligible
regardless of the approach. My perf lock contention measurements
consistently show minimal contention in that range.
> IOW, if you want good scaling, human intervention (via a
> boot command-line option) is still needed.
I am not convinced. The wq_cache_shard_size approach creates multiple
pools on large systems while leaving small systems (<8 cores) unchanged.
This eliminates the pathological lock contention we're observing on
high-core-count machines without impacting smaller deployments.
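For illustration, the core of the sharding logic can be sketched roughly
like this (identifiers hypothetical; the actual patch may differ):

#include <linux/cpumask.h>
#include <linux/topology.h>

/*
 * Illustrative sketch only, not the posted patch: carve one shard of
 * at most @shard_size cores out of @remaining (the CPUs of an LLC not
 * yet assigned to a shard). Each core's SMT siblings are pulled in
 * together, which is why wq_cache_shard_size counts cores rather than
 * threads/vCPUs. Offline-CPU corner cases are ignored for brevity.
 */
static void wq_build_cache_shard(struct cpumask *shard,
				 struct cpumask *remaining,
				 unsigned int shard_size)
{
	unsigned int cpu, cores = 0;

	cpumask_clear(shard);
	for_each_cpu(cpu, remaining) {
		/* Skip threads already added as another CPU's sibling. */
		if (cpumask_test_cpu(cpu, shard))
			continue;
		/* Add the whole core: this CPU plus its SMT siblings. */
		cpumask_or(shard, shard, topology_sibling_cpumask(cpu));
		if (++cores == shard_size)
			break;
	}
	cpumask_andnot(remaining, remaining, shard);
}

Applied repeatedly until @remaining is empty, this yields exactly one
pool for any LLC with fewer than 8 cores and multiple pools only where
the core count actually warrants them.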
In contrast, splitting pools per LLC would force fragmentation even on
systems that aren't experiencing contention, increasing the need for
manual tuning across a wider range of configurations.
Thanks for the review,
--breno