Re: [PATCH] nvme/tcp: Add support to set the tcp worker cpu affinity
From: Li Feng
Date: Mon Apr 17 2023 - 23:38:57 EST
Hi Sagi,
> On Mon, Apr 17, 2023 at 9:45 PM, Sagi Grimberg <sagi@xxxxxxxxxxx> wrote:
>
> Hey Li,
>
>> The default worker affinity policy uses all online CPUs, i.e. from 0
>> to N-1. However, some CPUs may be busy with other jobs, which degrades
>> nvme-tcp performance.
>> This patch adds a module parameter to set the cpu affinity for the nvme-tcp
>> socket worker threads. The parameter is a comma separated list of CPU
>> numbers. The list is parsed and the resulting cpumask is used to set the
>> affinity of the socket worker threads. If the list is empty or the
>> parsing fails, the default affinity is used.
>
> I can see how this may benefit a specific set of workloads, but I have a
> few issues with this.
>
> - This is exposing a user interface for something that is really
> internal to the driver.
>
> - This is something that can be misleading and could be tricky to get
> right; my concern is that this would only benefit a very niche case.
Our storage products need this feature.
If a user doesn't know what this is, they can keep the default, so I think
this is acceptable.
>
> - If the setting should exist, it should not be global.
This is fixed in V2.
>
> - I prefer not to introduce new modparams.
>
> - I'd prefer to find a way to support your use-case without introducing
> a config knob for it.
>
I’m looking forward to it.
> - It is not backed by performance improvements, but more importantly
> does not cover any potential regressions in key metrics (bw/iops/lat)
> or lack thereof.
I can do more tests if needed.
Thanks,
Feng Li