Re: [PATCH v5 02/27] mm, cma: support multiple contiguous ranges, if requested
From: Frank van der Linden
Date: Wed Mar 05 2025 - 13:02:35 EST
On Tue, Mar 4, 2025 at 10:29 PM kernel test robot <oliver.sang@xxxxxxxxx> wrote:
>
>
>
> Hello,
>
> kernel test robot noticed a 15.1% improvement of netperf.Throughput_tps on:
>
>
> commit: a957f140831b0d42e4fdbe83cf93997ef1b51bda ("[PATCH v5 02/27] mm, cma: support multiple contiguous ranges, if requested")
> url: https://github.com/intel-lab-lkp/linux/commits/Frank-van-der-Linden/mm-cma-export-total-and-free-number-of-pages-for-CMA-areas/20250301-023339
> base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git 276f98efb64a2c31c099465ace78d3054c662a0f
> patch link: https://lore.kernel.org/all/20250228182928.2645936-3-fvdl@xxxxxxxxxx/
> patch subject: [PATCH v5 02/27] mm, cma: support multiple contiguous ranges, if requested
>
> testcase: netperf
> config: x86_64-rhel-9.4
> compiler: gcc-12
> test machine: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
> parameters:
>
> ip: ipv4
> runtime: 300s
> nr_threads: 200%
> cluster: cs-localhost
> test: TCP_CRR
> cpufreq_governor: performance
>
>
>
>
>
>
> Details are as below:
> -------------------------------------------------------------------------------------------------->
>
>
> The kernel config and materials to reproduce are available at:
> https://download.01.org/0day-ci/archive/20250305/202503051327.e87dce82-lkp@xxxxxxxxx
>
> =========================================================================================
> cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/tbox_group/test/testcase:
> cs-localhost/gcc-12/performance/ipv4/x86_64-rhel-9.4/200%/debian-12-x86_64-20240206.cgz/300s/lkp-icl-2sp2/TCP_CRR/netperf
>
> commit:
> cdc31e6532 ("mm/cma: export total and free number of pages for CMA areas")
> a957f14083 ("mm, cma: support multiple contiguous ranges, if requested")
>
> cdc31e65328522c6 a957f140831b0d42e4fdbe83cf9
> ---------------- ---------------------------
> %stddev %change %stddev
> \ | \
> 2.43 +0.5 2.90 ± 4% mpstat.cpu.all.usr%
> 4718850 +15.4% 5446771 vmstat.system.cs
> 62006 ± 43% -59.6% 25067 ±137% numa-meminfo.node0.Mapped
> 2884295 ± 41% -59.4% 1171696 ±135% numa-meminfo.node0.Unevictable
> 28159 ± 2% -17.7% 23164 ± 2% perf-c2c.HITM.local
> 5426 ± 3% +28.5% 6973 ± 8% perf-c2c.HITM.remote
> 33586 ± 2% -10.3% 30137 ± 3% perf-c2c.HITM.total
> 5642375 ± 2% +15.5% 6519596 sched_debug.cpu.nr_switches.avg
> 7473763 ± 4% +18.0% 8815709 ± 2% sched_debug.cpu.nr_switches.max
> 4352931 ± 3% +12.7% 4906391 ± 2% sched_debug.cpu.nr_switches.min
> 2485115 ± 6% +31.9% 3277456 ± 11% numa-numastat.node0.local_node
> 2526446 ± 6% +32.8% 3356120 ± 11% numa-numastat.node0.numa_hit
> 3522582 ± 10% +28.7% 4535065 ± 23% numa-numastat.node1.local_node
> 3613797 ± 10% +27.0% 4588978 ± 22% numa-numastat.node1.numa_hit
> 40617 +5.4% 42811 ± 5% proc-vmstat.nr_slab_reclaimable
> 6144430 ± 4% +29.4% 7948120 ± 16% proc-vmstat.numa_hit
> 6011884 ± 4% +30.0% 7815542 ± 16% proc-vmstat.numa_local
> 26402145 ± 2% +40.6% 37129548 ± 14% proc-vmstat.pgalloc_normal
> 25226079 +42.1% 35834032 ± 13% proc-vmstat.pgfree
> 15712 ± 43% -59.6% 6348 ±137% numa-vmstat.node0.nr_mapped
> 721073 ± 41% -59.4% 292924 ±135% numa-vmstat.node0.nr_unevictable
> 721073 ± 41% -59.4% 292924 ±135% numa-vmstat.node0.nr_zone_unevictable
> 2526848 ± 6% +32.8% 3355902 ± 11% numa-vmstat.node0.numa_hit
> 2485517 ± 6% +31.9% 3277238 ± 11% numa-vmstat.node0.numa_local
> 3614259 ± 10% +27.0% 4589442 ± 22% numa-vmstat.node1.numa_hit
> 3523043 ± 10% +28.7% 4535533 ± 23% numa-vmstat.node1.numa_local
> 1711802 +15.1% 1969470 netperf.ThroughputBoth_total_tps
> 6686 +15.1% 7693 netperf.ThroughputBoth_tps
> 1711802 +15.1% 1969470 netperf.Throughput_total_tps
> 6686 +15.1% 7693 netperf.Throughput_tps
> 4.052e+08 ± 5% +16.7% 4.728e+08 ± 4% netperf.time.involuntary_context_switches
> 535.88 +18.1% 633.12 netperf.time.user_time
> 3.175e+08 ± 3% +13.9% 3.615e+08 ± 3% netperf.time.voluntary_context_switches
> 5.135e+08 +15.1% 5.908e+08 netperf.workload
> 0.07 ± 8% -31.3% 0.05 ± 23% perf-sched.sch_delay.avg.ms.__cond_resched.kmem_cache_alloc_node_noprof.kmalloc_reserve.__alloc_skb.tcp_stream_alloc_skb
> 0.46 ±114% -71.4% 0.13 ± 34% perf-sched.sch_delay.max.ms.__cond_resched.lock_sock_nested.__inet_stream_connect.inet_stream_connect.__sys_connect
> 5.70 ± 90% +2752.3% 162.72 ±202% perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
> 33.94 ± 19% +50.3% 50.99 ± 18% perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
> 30764 ± 22% -32.1% 20881 ± 22% perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
> 7.03 ± 60% +11736.2% 832.16 ±150% perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
> 0.14 ± 8% -33.5% 0.09 ± 26% perf-sched.wait_time.avg.ms.__cond_resched.kmem_cache_alloc_node_noprof.kmalloc_reserve.__alloc_skb.tcp_stream_alloc_skb
> 0.11 ± 8% -14.3% 0.10 ± 11% perf-sched.wait_time.avg.ms.__cond_resched.lock_sock_nested.inet_stream_connect.__sys_connect.__x64_sys_connect
> 33.61 ± 19% +50.4% 50.57 ± 18% perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
> 0.69 ±109% -59.0% 0.28 ± 27% perf-sched.wait_time.max.ms.__cond_resched.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg
> 0.76 -39.9% 0.46 ą 12% perf-stat.i.MPKI
> 3.959e+10 +14.9% 4.55e+10 perf-stat.i.branch-instructions
> 0.92 -0.0 0.90 perf-stat.i.branch-miss-rate%
> 3.564e+08 +12.7% 4.017e+08 perf-stat.i.branch-misses
> 1.561e+08 -32.2% 1.058e+08 ± 12% perf-stat.i.cache-misses
> 6.91e+08 -33.8% 4.574e+08 ± 6% perf-stat.i.cache-references
> 4760614 +15.5% 5496803 perf-stat.i.context-switches
> 1.54 -13.5% 1.33 perf-stat.i.cpi
> 2048 +49.1% 3054 ± 9% perf-stat.i.cycles-between-cache-misses
> 2.084e+11 +14.9% 2.394e+11 perf-stat.i.instructions
> 0.65 +15.3% 0.75 perf-stat.i.ipc
> 37.20 +15.5% 42.97 perf-stat.i.metric.K/sec
> 0.75 -41.0% 0.44 ± 12% perf-stat.overall.MPKI
> 0.90 -0.0 0.88 perf-stat.overall.branch-miss-rate%
> 1.54 -13.6% 1.33 perf-stat.overall.cpi
> 2060 +48.5% 3060 ± 10% perf-stat.overall.cycles-between-cache-misses
> 0.65 +15.7% 0.75 perf-stat.overall.ipc
> 3.947e+10 +14.9% 4.536e+10 perf-stat.ps.branch-instructions
> 3.553e+08 +12.7% 4.005e+08 perf-stat.ps.branch-misses
> 1.557e+08 -32.2% 1.055e+08 ± 12% perf-stat.ps.cache-misses
> 6.889e+08 -33.8% 4.56e+08 ± 6% perf-stat.ps.cache-references
> 4746041 +15.5% 5479885 perf-stat.ps.context-switches
> 2.078e+11 +14.9% 2.387e+11 perf-stat.ps.instructions
> 6.363e+13 +14.9% 7.312e+13 perf-stat.total.instructions
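
(A quick note on the derived metrics above, for reference: MPKI is cache
misses per thousand retired instructions, so the base commit's ~1.557e+08
misses/s over ~2.078e+11 instructions/s works out to 1.557e8 / 2.078e8 ≈ 0.75,
matching perf-stat.overall.MPKI; likewise ipc is simply 1/cpi, i.e.
1/1.54 ≈ 0.65.)
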
>
>
>
>
> Disclaimer:
> Results have been estimated based on internal Intel analysis and are provided
> for informational purposes only. Any difference in system hardware or software
> design or configuration may affect actual performance.
>
>
> --
> 0-DAY CI Kernel Test Service
> https://github.com/intel/lkp-tests/wiki
>
Since there should be no functional change for existing callers of the CMA
interfaces, I'm flattered by this report, but it's definitely not these
commits that caused any change in performance :-)
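
To make that concrete, here's a minimal sketch (my illustration, not code from
the series) of what an existing in-kernel CMA caller looks like. cma_alloc()
and cma_release() keep the same signatures, so a caller like this doesn't care
whether the area is backed by one contiguous range or several internally (the
helper names below are made up for the example):

#include <linux/cma.h>
#include <linux/mm.h>

/* Hypothetical caller, for illustration only. */
static struct page *grab_contig_buffer(struct cma *cma, unsigned long nr_pages)
{
	/* Same cma_alloc() signature before and after the multi-range change. */
	return cma_alloc(cma, nr_pages, 0 /* alignment order */, false /* no_warn */);
}

static void drop_contig_buffer(struct cma *cma, struct page *page,
			       unsigned long nr_pages)
{
	if (page)
		cma_release(cma, page, nr_pages);
}
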
- Frank