Re: [PATCH] workqueue: Reduce expensive locks for unbound workqueue
From: Lai Jiangshan
Date: Fri Nov 15 2024 - 01:39:01 EST
On Fri, Nov 15, 2024 at 2:00 PM Wangyang Guo <wangyang.guo@xxxxxxxxx> wrote:
>
> For unbound workqueues, pwqs usually map to just a few pools. Most of
> the time, pwqs are linked sequentially to the wq->pwqs list by CPU
> index, and consecutive CPUs usually share the same workqueue attributes
> (e.g. they belong to the same NUMA node). This makes pwqs with the same
> pool cluster together in the pwq list.
>
> In flush_workqueue_prep_pwqs(), only do the lock/unlock if the pool
> has changed from the previous pwq. This reduces the number of
> expensive lock operations.
>
> The performance data shows this change boosts FIO performance by up to
> 65x in some cases when multiple concurrent threads write to xfs mount
> points with fsync.
>
> FIO Benchmark Details
> - FIO version: v3.35
> - FIO Options: ioengine=libaio,iodepth=64,norandommap=1,rw=write,
> size=128M,bs=4k,fsync=1
> - FIO Job Configs: 64 jobs in total writing to 4 mount points (ramdisks
> formatted as xfs file system).
> - Kernel Codebase: v6.12-rc5
> - Test Platform: Xeon 8380 (2 sockets)
>
> Reviewed-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> Signed-off-by: Wangyang Guo <wangyang.guo@xxxxxxxxx>
> ---
> kernel/workqueue.c | 22 ++++++++++++++++++----
> 1 file changed, 18 insertions(+), 4 deletions(-)
Reviewed-by: Lai Jiangshan <jiangshanlai@xxxxxxxxx>
This is a problem caused by commit 636b927eba5b ("workqueue: Make
unbound workqueues to use per-cpu pool_workqueues").
Before that commit, it was much less likely that two or more PWQs in
the same WQ shared the same pool. After the commit, it became a common case.
I had planned to make the PWQs shared across different CPUs where
possible, but the patch[1] has a problem which is easy to fix.
I will update it if it is needed.
Thanks
Lai
[1] https://lore.kernel.org/lkml/20231227145143.2399-3-jiangshanlai@xxxxxxxxx/