On Wed, Apr 13, 2022 at 11:50 PM Libo Chen <libo.chen@xxxxxxxxxx> wrote:
> On 4/13/22 13:52, Arnd Bergmann wrote:
>>> I agree. It appears enabling CPUMASK_OFFSTACK breaks kernel builds on
>>> some architectures such as parisc and nios2 as reported by kernel test
>>> robot. Maybe it makes sense to use DEBUG_PER_CPU_MAPS as some kind of
>>> guard on CPUMASK_OFFSTACK.
>>
>> NIOS2 does not support SMP builds at all, so it should never be
>> possible to select CPUMASK_OFFSTACK there. We may want to guard
>> DEBUG_PER_CPU_MAPS by adding a 'depends on SMP' in order to
>> prevent it from getting selected.
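i.e. something like this in lib/Kconfig.debug (an untested sketch):

config DEBUG_PER_CPU_MAPS
	bool "Debug access to per_cpu maps"
	depends on DEBUG_KERNEL
	depends on SMP

Since CPUMASK_OFFSTACK is only user-selectable when DEBUG_PER_CPU_MAPS
is set, hiding that option on !SMP configurations would keep it from
being turned on by hand there.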
>> For PARISC, the largest configuration is 32-way SMP, so CPUMASK_OFFSTACK
>> is clearly pointless there as well, even though it should technically
>> be possible to support. What is the build error on parisc?
>
> Similar to NIOS2, a bunch of undefined references to *_cpumask_var()
> calls. It seems that PARISC doesn't support the cpumask offstack API
> at all.
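For reference, this is roughly the shape of the API those references
come from (a simplified sketch of include/linux/cpumask.h, not the
verbatim kernel code):

#ifdef CONFIG_CPUMASK_OFFSTACK
/* mask lives on the heap, helpers are defined out of line in lib/cpumask.c */
typedef struct cpumask *cpumask_var_t;
bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags);
void free_cpumask_var(cpumask_var_t mask);
#else
/* mask is a fixed-size array, so it can live on the stack for free */
typedef struct cpumask cpumask_var_t[1];
static inline bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
{
	return true;	/* nothing to allocate */
}
static inline void free_cpumask_var(cpumask_var_t mask)
{
}
#endif

Callers are written the same way for both variants, so undefined
references to the *_cpumask_var() helpers suggest the out-of-line
definitions are not getting built in that configuration.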
>> I think the correct way to do it is to follow x86 and powerpc, and tie
>> CPUMASK_OFFSTACK to "large" values of CONFIG_NR_CPUS.
>> For smaller values of NR_CPUS, the onstack masks are obviously
>> cheaper, we just need to decide what the cut-off point is.
>
> Yes, it is. I don't know what the problem is... Masahiro explained that
> CPUMASK_OFFSTACK can only be configured by options, not by users, if
> DEBUG_PER_CPU_MAPS is not enabled. This doesn't seem to be what we want.
>
>> In x86, the onstack masks can be selected for normal SMP builds with
>> up to 512 CPUs, while CONFIG_MAXSMP=y raises the limit to 8192
>> CPUs while selecting CPUMASK_OFFSTACK.
>> PowerPC does it the other way round, selecting CPUMASK_OFFSTACK
>> implicitly whenever NR_CPUS is set to 8192 or more.
>> I think we can easily do the same as powerpc on arm64.
>
> I am leaning more towards x86's way because even NR_CPUS=160 is too
> expensive for 4-core arm64 VMs according to apachebench. I highly doubt
> that there is a good cut-off point to make everybody happy (or not
> unhappy).
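For reference, trimmed down to the relevant lines, the two existing
patterns look like this:

# arch/x86/Kconfig: a special "huge" config selects the offstack masks
config MAXSMP
	bool "Enable Maximum number of SMP Processors and NUMA Nodes"
	depends on X86_64 && SMP && DEBUG_KERNEL
	select CPUMASK_OFFSTACK

config NR_CPUS_RANGE_END
	int
	depends on X86_64
	default 8192 if  SMP && CPUMASK_OFFSTACK
	default  512 if  SMP && !CPUMASK_OFFSTACK
	default    1 if !SMP

# arch/powerpc/Kconfig: the select is tied directly to the NR_CPUS value
config PPC
	bool
	select CPUMASK_OFFSTACK if NR_CPUS >= 8192

An arm64 version of the powerpc approach would just be the one 'select'
line with whatever cut-off we agree on.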
>> It seems surprising that you would see any improvement for offstack
>> masks when using NR_CPUS=160: that is just three 64-bit words worth of
>> data, but it requires allocating the mask dynamically, which takes way
>> more memory to initialize.
>
> Okay yeah, I will take some time to gather more data. It does appear
> that something else may also contribute to the performance difference.
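To put numbers on the stack-size side of the trade-off, here is a quick
userspace toy (my own sketch, not kernel code) that mirrors the kernel's
BITS_TO_LONGS() rounding for 64-bit longs:

#include <stdio.h>

/* one 64-bit word per 64 possible CPUs, rounded up */
#define BITS_TO_LONGS(n)	(((n) + 63) / 64)

int main(void)
{
	const int nr_cpus[] = { 160, 512, 1024, 2048, 8192 };

	for (int i = 0; i < (int)(sizeof(nr_cpus) / sizeof(nr_cpus[0])); i++)
		printf("NR_CPUS=%4d -> %3d longs (%4d bytes) per cpumask\n",
		       nr_cpus[i], BITS_TO_LONGS(nr_cpus[i]),
		       8 * BITS_TO_LONGS(nr_cpus[i]));
	return 0;
}

At NR_CPUS=160 an onstack mask is 24 bytes; even at 2048 it is only 256
bytes, and it takes NR_CPUS=8192 to reach a full kilobyte per mask,
which is roughly where x86 and powerpc draw the line.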
>> With the ApacheBench test you cite in the patch description, what is
>> the value of NR_CPUS at which you start seeing a noticeable
>> benefit for offstack masks? Can you do the same test for
>> NR_CPUS=1024 or 2048?
>
> As mentioned above, a good cut-off point depends on the actual
> number of CPUs. But yeah, I can do the same test for 1024 or even
> smaller NR_CPUS values on the same 64-core arm64 VM setup.

If you see an improvement for small NR_CPUS values using offstack masks,
it's possible that the actual difference is something completely
different and we can just make the on-stack case faster; possibly the
cause is something about cacheline alignment or inlining decisions with
your specific kernel config. Are you able to compare the 'perf report'
output between runs with either size to see where the extra time gets
spent?
Arnd