Re: [PATCH RESEND 1/1] lib/Kconfig: remove DEBUG_PER_CPU_MAPS dependency for CPUMASK_OFFSTACK

From: Libo Chen
Date: Thu Apr 14 2022 - 14:01:46 EST

On 4/14/22 04:41, Arnd Bergmann wrote:
> On Wed, Apr 13, 2022 at 11:50 PM Libo Chen <libo.chen@xxxxxxxxxx> wrote:
>> On 4/13/22 13:52, Arnd Bergmann wrote:
>>> Yes, it is. I don't know what the problem is...
>> Masahiro explained that CPUMASK_OFFSTACK can only be selected by other
>> config options, not by users, if DEBUG_PER_CPU_MAPS is not enabled. This
>> doesn't seem to be what we want.
>>> I think the correct way to do it is to follow x86 and powerpc, tying
>>> CPUMASK_OFFSTACK to "large" values of CONFIG_NR_CPUS. For smaller
>>> values of NR_CPUS, the onstack masks are obviously cheaper; we just
>>> need to decide where the cut-off point is.
>> I agree. It appears enabling CPUMASK_OFFSTACK breaks kernel builds on
>> some architectures such as parisc and nios2, as reported by the kernel
>> test robot. Maybe it makes sense to use DEBUG_PER_CPU_MAPS as some kind
>> of guard on CPUMASK_OFFSTACK.
> NIOS2 does not support SMP builds at all, so it should never be possible
> to select CPUMASK_OFFSTACK there. We may want to guard DEBUG_PER_CPU_MAPS
> by adding a 'depends on SMP' in order to prevent it from getting selected.
>
> For PARISC, the largest configuration is 32-way SMP, so CPUMASK_OFFSTACK
> is clearly pointless there as well, even though it should technically be
> possible to support. What is the build error on parisc?
Similar to NIOS2: a bunch of undefined references to *_cpumask_var()
calls. It seems that PARISC doesn't support the cpumask offstack API at
all.
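
For illustration, the pattern that fails to link is roughly the following
(a minimal sketch, not taken from the failing build; example_use() is made
up, but the *_cpumask_var() calls are the real API):

	#include <linux/cpumask.h>
	#include <linux/errno.h>
	#include <linux/gfp.h>

	static int example_use(void)
	{
		cpumask_var_t mask;

		/* kmalloc's a struct cpumask if CPUMASK_OFFSTACK=y, no-op otherwise */
		if (!alloc_cpumask_var(&mask, GFP_KERNEL))
			return -ENOMEM;

		cpumask_copy(mask, cpu_online_mask);
		/* ... use mask ... */
		free_cpumask_var(mask);
		return 0;
	}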

>>> On x86, the onstack masks can be selected for normal SMP builds with
>>> up to 512 CPUs, while CONFIG_MAXSMP=y raises the limit to 8192 CPUs
>>> and selects CPUMASK_OFFSTACK. PowerPC does it the other way round,
>>> selecting CPUMASK_OFFSTACK implicitly whenever NR_CPUS is set to 8192
>>> or more.

I think we can easily do the same as powerpc on arm64. With the
I am leaning more towards x86's way because even NR_CPUS=160 is too
expensive for 4-core arm64 VMs according to apachebench. I highly doubt
that there is a good cut-off point to make everybody happy (or not unhappy).
> It seems surprising that you would see any improvement from offstack
> masks with NR_CPUS=160: that is just three 64-bit words worth of data,
> but it requires allocating the mask dynamically, which takes far more
> work than initializing three words on the stack.
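
For reference, the whole difference is in how cpumask_var_t is typed: the
onstack case is a plain array with zero setup cost, while the offstack
case has to go through the allocator (paraphrasing include/linux/cpumask.h):

	#ifdef CONFIG_CPUMASK_OFFSTACK
	typedef struct cpumask *cpumask_var_t;     /* points to kmalloc'ed storage */
	#else
	typedef struct cpumask cpumask_var_t[1];   /* lives in the caller's frame */
	#endif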

>>> With the ApacheBench test you cite in the patch description, what is
>>> the value of NR_CPUS at which you start seeing a noticeable benefit
>>> from offstack masks? Can you do the same test for NR_CPUS=1024 or
>>> 2048?
>> As mentioned above, a good cut-off point moves depending on the actual
>> number of CPUs. But yeah, I can do the same test for 1024 or even
>> smaller NR_CPUS values on the same 64-core arm64 VM setup.
> If you see an improvement for small NR_CPUS values using offstack masks,
> it's possible that the actual cause is something completely different and
> we can just make the on-stack case faster; perhaps it is something about
> cacheline alignment or inlining decisions in your specific kernel config.
>
> Are you able to compare the 'perf report' output between runs with either
> size to see where the extra time gets spent?
Okay, yeah, I will take some time to gather more data. It does appear
that something else may also be contributing to the performance
difference.
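
For what it's worth, if the powerpc-style approach does win out, I assume
the arm64 side would be a one-liner along these lines (a sketch only; the
1024 cut-off is a placeholder until we have the numbers):

	# hypothetical hunk in arch/arm64/Kconfig, mirroring powerpc;
	# the actual cut-off value would be chosen from the benchmark data
	config ARM64
		...
		select CPUMASK_OFFSTACK if NR_CPUS >= 1024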

Thanks
Libo
> Arnd