Thanks for your quick response. If you need any more test information about the regression, please let me know.
On 4/13/2020 6:56 PM, Ritesh Harjani wrote:
On 4/13/20 2:07 PM, Xing Zhengjun wrote:
Hi Harjani,
Do you have time to take a look at this? Thanks.
Hello Xing,
I do want to look into this, but as of now I am stuck with another
mballoc failure issue. I will get back to this once I have a handle
on that one.
BTW, are you planning to take a look at this?
-ritesh
On 4/7/2020 4:00 PM, kernel test robot wrote:
Greetings,
FYI, we noticed a -60.5% regression of stress-ng.fiemap.ops_per_sec due to commit:
commit: d3b6f23f71670007817a5d59f3fbafab2b794e8c ("ext4: move ext4_fiemap to use iomap framework")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: stress-ng
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
with the following parameters:
        nr_threads: 10%
        disk: 1HDD
        testtime: 1s
        class: os
        cpufreq_governor: performance
        ucode: 0x500002c
        fs: ext4
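For context, the fiemap stressor essentially sits in a tight loop issuing the FIEMAP ioctl, which the bisected commit reroutes through the iomap framework inside ext4. Below is a minimal sketch of the call being exercised (illustrative only, not the stress-ng source; the file name ./testfile and the 32-extent buffer are arbitrary choices for this example):

    /* fiemap_min.c - issue one FS_IOC_FIEMAP call; the stressor does this in a loop */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fiemap.h>
    #include <linux/fs.h>

    int main(void)
    {
        int fd = open("./testfile", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* allocate the request header plus room for 32 extent records */
        size_t sz = sizeof(struct fiemap) + 32 * sizeof(struct fiemap_extent);
        struct fiemap *fm = calloc(1, sz);
        if (!fm) { close(fd); return 1; }

        fm->fm_start = 0;
        fm->fm_length = FIEMAP_MAX_OFFSET;  /* map the whole file */
        fm->fm_extent_count = 32;           /* capacity of fm_extents[] */

        if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) { perror("FS_IOC_FIEMAP"); return 1; }
        printf("mapped extents: %u\n", fm->fm_mapped_extents);

        free(fm);
        close(fd);
        return 0;
    }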
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml   # job file is attached in this email
        bin/lkp run job.yaml
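To hit only the regressing stressor without the full lkp harness, invoking stress-ng directly should be a close approximation (assuming a reasonably recent stress-ng build; 9 workers roughly matches the job's nr_threads of 10% on 96 CPU threads):

        stress-ng --fiemap 9 --timeout 1s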
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
os/gcc-7/performance/1HDD/ext4/x86_64-rhel-7.6/10%/debian-x86_64-20191114.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002c
commit:
  b2c5764262 ("ext4: make ext4_ind_map_blocks work with fiemap")
  d3b6f23f71 ("ext4: move ext4_fiemap to use iomap framework")
b2c5764262edded1 d3b6f23f71670007817a5d59f3f
---------------- ---------------------------
       fail:runs  %reproduction  fail:runs
           |            |            |
           :4          25%          1:4  dmesg.WARNING:at#for_ip_interrupt_entry/0x
          2:4           5%          2:4  perf-profile.calltrace.cycles-pp.sync_regs.error_entry
          2:4           6%          3:4  perf-profile.calltrace.cycles-pp.error_entry
          3:4           9%          3:4  perf-profile.children.cycles-pp.error_entry
          0:4           1%          0:4  perf-profile.self.cycles-pp.error_entry
         %stddev      %change        %stddev
             \            |              \
     28623            +28.2%      36703 ± 12%  stress-ng.daemon.ops
     28632            +28.2%      36704 ± 12%  stress-ng.daemon.ops_per_sec
    566.00 ± 22%      -53.2%     265.00 ± 53%  stress-ng.dev.ops
    278.81 ± 22%      -53.0%     131.00 ± 54%  stress-ng.dev.ops_per_sec
     73160            -60.6%      28849 ±  3%  stress-ng.fiemap.ops
     72471            -60.5%      28612 ±  3%  stress-ng.fiemap.ops_per_sec
     23421 ± 12%      +21.2%      28388 ±  6%  stress-ng.filename.ops
     22638 ± 12%      +20.3%      27241 ±  6%  stress-ng.filename.ops_per_sec
     21.25 ±  7%      -10.6%      19.00 ±  3%  stress-ng.iomix.ops
     38.75 ± 49%      -47.7%      20.25 ± 96%  stress-ng.memhotplug.ops
     34.45 ± 52%      -51.8%      16.62 ±106%  stress-ng.memhotplug.ops_per_sec
      1734 ± 10%      +31.4%       2278 ± 10%  stress-ng.resources.ops
    807.56 ±  5%      +35.2%       1091 ±  8%  stress-ng.resources.ops_per_sec
   1007356 ±  3%      -16.5%     840642 ±  9%  stress-ng.revio.ops
   1007692 ±  3%      -16.6%     840711 ±  9%  stress-ng.revio.ops_per_sec
     21812 ±  3%      +16.0%      25294 ±  5%  stress-ng.sysbadaddr.ops
     21821 ±  3%      +15.9%      25294 ±  5%  stress-ng.sysbadaddr.ops_per_sec
    440.75 ±  4%      +21.9%     537.25 ±  9%  stress-ng.sysfs.ops
    440.53 ±  4%      +21.9%     536.86 ±  9%  stress-ng.sysfs.ops_per_sec
  13286582            -11.1%   11805520 ±  6%  stress-ng.time.file_system_outputs
  68253896             +2.4%   69860122        stress-ng.time.minor_page_faults
    197.00 ±  4%      -15.9%     165.75 ± 12%  stress-ng.xattr.ops
    192.45 ±  5%      -16.1%     161.46 ± 11%  stress-ng.xattr.ops_per_sec
     15310            +62.5%      24875 ± 22%  stress-ng.zombie.ops
     15310            +62.5%      24874 ± 22%  stress-ng.zombie.ops_per_sec
    203.50 ± 12%      -47.3%     107.25 ± 49%  vmstat.io.bi
    861318 ± 18%      -29.7%     605884 ±  5%  meminfo.AnonHugePages
   1062742 ± 14%      -20.2%     847853 ±  3%  meminfo.AnonPages
     31093 ±  6%       +9.6%      34090 ±  3%  meminfo.KernelStack
      7151 ± 34%      +55.8%      11145 ±  9%  meminfo.Mlocked
 1.082e+08 ±  5%      -40.2%   64705429 ± 31%  numa-numastat.node0.local_node
 1.082e+08 ±  5%      -40.2%   64739883 ± 31%  numa-numastat.node0.numa_hit
  46032662 ± 21%     +104.3%   94042918 ± 20%  numa-numastat.node1.local_node
  46074205 ± 21%     +104.2%   94072810 ± 20%  numa-numastat.node1.numa_hit
      3942 ±  3%      +14.2%       4501 ±  4%  slabinfo.pool_workqueue.active_objs
      4098 ±  3%      +14.3%       4683 ±  4%  slabinfo.pool_workqueue.num_objs
      4817 ±  7%      +13.3%       5456 ±  8%  slabinfo.proc_dir_entry.active_objs
      5153 ±  6%      +12.5%       5797 ±  8%  slabinfo.proc_dir_entry.num_objs
     18598 ± 13%      -33.1%      12437 ± 20%  sched_debug.cfs_rq:/.load.avg
    452595 ± 56%      -71.4%     129637 ± 76%  sched_debug.cfs_rq:/.load.max
     67675 ± 35%      -55.1%      30377 ± 42%  sched_debug.cfs_rq:/.load.stddev
     18114 ± 12%      -33.7%      12011 ± 20%  sched_debug.cfs_rq:/.runnable_weight.avg
    448215 ± 58%      -72.8%     121789 ± 82%  sched_debug.cfs_rq:/.runnable_weight.max
     67083 ± 37%      -56.3%      29305 ± 43%  sched_debug.cfs_rq:/.runnable_weight.stddev
    -38032           +434.3%    -203212        sched_debug.cfs_rq:/.spread0.avg
   -204466            +95.8%    -400301        sched_debug.cfs_rq:/.spread0.min
     90.02 ± 25%      -58.1%      37.69 ± 52%  sched_debug.cfs_rq:/.util_est_enqueued.avg
    677.54 ±  6%      -39.3%     411.50 ± 22%  sched_debug.cfs_rq:/.util_est_enqueued.max
    196.57 ±  8%      -47.6%     103.05 ± 36%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
      3.34 ± 23%      +34.1%       4.48 ±  4%  sched_debug.cpu.clock.stddev
      3.34 ± 23%      +34.1%       4.48 ±  4%  sched_debug.cpu.clock_task.stddev
    402872 ±  7%      -11.9%     354819 ±  2%  proc-vmstat.nr_active_anon
   1730331             -9.5%    1566418 ±  5%  proc-vmstat.nr_dirtied
     31042 ±  6%       +9.3%      33915 ±  3%  proc-vmstat.nr_kernel_stack
    229047             -2.4%     223615        proc-vmstat.nr_mapped
     74008 ±  7%      +20.5%      89163 ±  8%  proc-vmstat.nr_written
    402872 ±  7%      -11.9%     354819 ±  2%  proc-vmstat.nr_zone_active_anon
     50587 ± 11%      -25.2%      37829 ± 14%  proc-vmstat.numa_pages_migrated
    457500            -23.1%     351918 ± 31%  proc-vmstat.numa_pte_updates
  81382485             +1.9%   82907822        proc-vmstat.pgfault
 2.885e+08 ±  5%      -13.3%  2.502e+08 ±  6%  proc-vmstat.pgfree
     42206 ± 12%      -46.9%      22399 ± 49%  proc-vmstat.pgpgin
    431233 ± 13%      -64.8%     151736 ±109%  proc-vmstat.pgrotated
    176754 ±  7%      -40.2%     105637 ± 31%  proc-vmstat.thp_fault_alloc
    314.50 ± 82%     +341.5%       1388 ± 44%  proc-vmstat.unevictable_pgs_stranded
   1075269 ± 14%      -41.3%     631388 ± 17%  numa-meminfo.node0.Active
    976056 ± 12%      -39.7%     588727 ± 19%  numa-meminfo.node0.Active(anon)
    426857 ± 22%      -36.4%     271375 ± 13%  numa-meminfo.node0.AnonHugePages
    558590 ± 19%      -36.4%     355402 ± 14%  numa-meminfo.node0.AnonPages
   1794824 ±  9%      -28.8%    1277157 ± 20%  numa-meminfo.node0.FilePages
      8517 ± 92%      -82.7%       1473 ± 89%  numa-meminfo.node0.Inactive(file)
    633118 ±  2%      -41.7%     368920 ± 36%  numa-meminfo.node0.Mapped
   2958038 ± 12%      -27.7%    2139271 ± 12%  numa-meminfo.node0.MemUsed
    181401 ±  5%      -13.7%     156561 ±  4%  numa-meminfo.node0.SUnreclaim
    258124 ±  6%      -13.0%     224535 ±  5%  numa-meminfo.node0.Slab
    702083 ± 16%      +31.0%     919406 ± 11%  numa-meminfo.node1.Active
     38663 ±107%     +137.8%      91951 ± 31%  numa-meminfo.node1.Active(file)
   1154975 ±  7%      +41.6%    1635593 ± 12%  numa-meminfo.node1.FilePages
    395813 ± 25%      +62.8%     644533 ± 16%  numa-meminfo.node1.Inactive
    394313 ± 25%      +62.5%     640686 ± 16%  numa-meminfo.node1.Inactive(anon)
    273317            +88.8%     515976 ± 25%  numa-meminfo.node1.Mapped
   2279237 ±  6%      +25.7%    2865582 ±  7%  numa-meminfo.node1.MemUsed
     10830 ± 18%      +29.6%      14033 ±  9%  numa-meminfo.node1.PageTables
    149390 ±  3%      +23.2%     184085 ±  3%  numa-meminfo.node1.SUnreclaim
    569542 ± 16%      +74.8%     995336 ± 21%  numa-meminfo.node1.Shmem
    220774 ±  5%      +20.3%     265656 ±  3%  numa-meminfo.node1.Slab
  35623587 ±  5%      -11.7%   31444514 ±  3%  perf-stat.i.cache-misses
 2.576e+08 ±  5%       -6.8%    2.4e+08 ±  2%  perf-stat.i.cache-references
      3585             -7.3%       3323 ±  5%  perf-stat.i.cpu-migrations
    180139 ±  2%       +4.2%     187668        perf-stat.i.minor-faults
     69.13             +2.6       71.75        perf-stat.i.node-load-miss-rate%
   4313695 ±  2%       -7.4%    3994957 ±  2%  perf-stat.i.node-load-misses
   5466253 ± 11%      -17.3%    4521173 ±  6%  perf-stat.i.node-loads
   2818674 ±  6%      -15.8%    2372542 ±  5%  perf-stat.i.node-stores
    227810             +4.6%     238290        perf-stat.i.page-faults
     12.67 ±  4%       -7.2%      11.76 ±  2%  perf-stat.overall.MPKI
      1.01 ±  4%       -0.0        0.97 ±  3%  perf-stat.overall.branch-miss-rate%
      1044            +13.1%       1181 ±  4%  perf-stat.overall.cycles-between-cache-misses
     40.37 ±  4%       +3.6       44.00 ±  2%  perf-stat.overall.node-store-miss-rate%
  36139526 ±  5%      -12.5%   31625519 ±  3%  perf-stat.ps.cache-misses
 2.566e+08 ±  5%       -6.9%  2.389e+08 ±  2%  perf-stat.ps.cache-references
      3562             -7.2%       3306 ±  5%  perf-stat.ps.cpu-migrations
    179088             +4.2%     186579        perf-stat.ps.minor-faults
   4323383 ±  2%       -7.5%    3999214        perf-stat.ps.node-load-misses
   5607721 ± 10%      -18.5%    4568664 ±  6%  perf-stat.ps.node-loads
   2855134 ±  7%      -16.4%    2387345 ±  5%  perf-stat.ps.node-stores
    226270             +4.6%     236709        perf-stat.ps.page-faults
    242305 ± 10%      -42.4%     139551 ± 18%  numa-vmstat.node0.nr_active_anon
    135983 ± 17%      -37.4%      85189 ± 10%  numa-vmstat.node0.nr_anon_pages
    209.25 ± 16%      -38.1%     129.50 ± 10%  numa-vmstat.node0.nr_anon_transparent_hugepages
    449367 ±  9%      -29.7%     315804 ± 20%  numa-vmstat.node0.nr_file_pages
      2167 ± 90%      -80.6%     419.75 ± 98%  numa-vmstat.node0.nr_inactive_file
    157405 ±  3%      -41.4%      92206 ± 35%  numa-vmstat.node0.nr_mapped
      2022 ± 30%      -73.3%     539.25 ± 91%  numa-vmstat.node0.nr_mlock
      3336 ± 10%      -24.3%       2524 ± 25%  numa-vmstat.node0.nr_page_table_pages
    286158 ± 10%      -41.2%     168337 ± 37%  numa-vmstat.node0.nr_shmem
     45493 ±  5%      -14.1%      39094 ±  4%  numa-vmstat.node0.nr_slab_unreclaimable
    242294 ± 10%      -42.4%     139547 ± 18%  numa-vmstat.node0.nr_zone_active_anon
      2167 ± 90%      -80.6%     419.75 ± 98%  numa-vmstat.node0.nr_zone_inactive_file
  54053924 ±  8%      -39.3%   32786242 ± 34%  numa-vmstat.node0.numa_hit
  53929628 ±  8%      -39.5%   32619715 ± 34%  numa-vmstat.node0.numa_local
      9701 ±107%     +136.9%      22985 ± 31%  numa-vmstat.node1.nr_active_file
    202.50 ± 16%      -25.1%     151.75 ± 23%  numa-vmstat.node1.nr_anon_transparent_hugepages
    284922 ±  7%      +43.3%     408195 ± 13%  numa-vmstat.node1.nr_file_pages
     96002 ± 26%      +67.5%     160850 ± 17%  numa-vmstat.node1.nr_inactive_anon
     68077 ±  2%      +90.3%     129533 ± 25%  numa-vmstat.node1.nr_mapped
    138482 ± 15%      +79.2%     248100 ± 22%  numa-vmstat.node1.nr_shmem
     37396 ±  3%      +23.3%      46094 ±  3%  numa-vmstat.node1.nr_slab_unreclaimable
      9701 ±107%     +136.9%      22985 ± 31%  numa-vmstat.node1.nr_zone_active_file
     96005 ± 26%      +67.5%     160846 ± 17%  numa-vmstat.node1.nr_zone_inactive_anon
  23343661 ± 17%      +99.9%   46664267 ± 23%  numa-vmstat.node1.numa_hit
  23248487 ± 17%     +100.5%   46610447 ± 23%  numa-vmstat.node1.numa_local
    105745 ± 23%     +112.6%     224805 ± 24%  softirqs.CPU0.NET_RX
    133310 ± 36%      -45.3%      72987 ± 52%  softirqs.CPU1.NET_RX
    170110 ± 55%      -66.8%      56407 ±147%  softirqs.CPU11.NET_RX
     91465 ± 36%      -65.2%      31858 ±112%  softirqs.CPU13.NET_RX
    164491 ± 57%      -77.7%      36641 ±121%  softirqs.CPU15.NET_RX
    121069 ± 55%      -99.3%     816.75 ± 96%  softirqs.CPU17.NET_RX
     81019 ±  4%       -8.7%      73967 ±  4%  softirqs.CPU20.RCU
     72143 ± 63%      -89.8%       7360 ±172%  softirqs.CPU22.NET_RX
    270663 ± 17%      -57.9%     113915 ± 45%  softirqs.CPU24.NET_RX
     20149 ± 76%     +474.1%     115680 ± 62%  softirqs.CPU26.NET_RX
     14033 ± 70%     +977.5%     151211 ± 75%  softirqs.CPU27.NET_RX
     27834 ± 94%     +476.1%     160357 ± 28%  softirqs.CPU28.NET_RX
     35346 ± 68%     +212.0%     110290 ± 30%  softirqs.CPU29.NET_RX
     34347 ±103%     +336.5%     149941 ± 32%  softirqs.CPU32.NET_RX
     70077 ±  3%      +10.8%      77624 ±  3%  softirqs.CPU34.RCU
     36453 ± 84%     +339.6%     160253 ± 42%  softirqs.CPU36.NET_RX
     72367 ±  2%      +10.6%      80043        softirqs.CPU37.RCU
     25239 ±118%     +267.7%      92799 ± 45%  softirqs.CPU38.NET_RX
      4995 ±170%    +1155.8%      62734 ± 62%  softirqs.CPU39.NET_RX
      4641 ±145%    +1611.3%      79432 ± 90%  softirqs.CPU42.NET_RX
      7192 ± 65%     +918.0%      73225 ± 66%  softirqs.CPU45.NET_RX
      1772 ±166%    +1837.4%      34344 ± 63%  softirqs.CPU46.NET_RX
     13149 ± 81%     +874.7%     128170 ± 58%  softirqs.CPU47.NET_RX
     86484 ± 94%      -92.6%       6357 ±172%  softirqs.CPU48.NET_RX
    129128 ± 27%      -95.8%       5434 ±172%  softirqs.CPU55.NET_RX
     82772 ± 59%      -91.7%       6891 ±164%  softirqs.CPU56.NET_RX
    145313 ± 57%      -87.8%      17796 ± 88%  softirqs.CPU57.NET_RX
    118160 ± 33%      -86.3%      16226 ±109%  softirqs.CPU58.NET_RX
     94576 ± 56%      -94.1%       5557 ±173%  softirqs.CPU6.NET_RX
     82900 ± 77%      -66.8%      27508 ±171%  softirqs.CPU62.NET_RX
    157291 ± 30%      -81.1%      29656 ±111%  softirqs.CPU64.NET_RX
    135101 ± 28%      -80.2%      26748 ± 90%  softirqs.CPU67.NET_RX
    146574 ± 56%     -100.0%      69.75 ± 98%  softirqs.CPU68.NET_RX
     81347 ±  2%       -9.0%      74024 ±  2%  softirqs.CPU68.RCU
    201729 ± 37%      -99.6%     887.50 ±107%  softirqs.CPU69.NET_RX
    108454 ± 78%      -97.9%       2254 ±169%  softirqs.CPU70.NET_RX
     55289 ±104%      -89.3%       5942 ±172%  softirqs.CPU71.NET_RX
     10112 ±172%     +964.6%     107651 ± 89%  softirqs.CPU72.NET_RX
      3136 ±171%    +1522.2%      50879 ± 66%  softirqs.CPU73.NET_RX
     13353 ± 79%     +809.2%     121407 ±101%  softirqs.CPU74.NET_RX
     75194 ±  3%      +10.3%      82957 ±  5%  softirqs.CPU75.RCU
     11002 ±173%    +1040.8%     125512 ± 61%  softirqs.CPU76.NET_RX
      2463 ±173%    +2567.3%      65708 ± 77%  softirqs.CPU78.NET_RX
     25956 ±  3%       -7.8%      23932 ±  3%  softirqs.CPU78.SCHED
     16366 ±150%     +340.7%      72125 ± 91%  softirqs.CPU82.NET_RX
     14553 ±130%    +1513.4%     234809 ± 27%  softirqs.CPU93.NET_RX
     26314             -9.2%      23884 ±  3%  softirqs.CPU93.SCHED
      4582 ± 88%    +4903.4%     229268 ± 23%  softirqs.CPU94.NET_RX
     11214 ±111%    +1762.5%     208867 ± 18%  softirqs.CPU95.NET_RX
      1.53 ± 27%       -0.5        0.99 ± 17%  perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1.52 ± 27%       -0.5        0.99 ± 17%  perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1.39 ± 29%       -0.5        0.88 ± 21%  perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64
      1.39 ± 29%       -0.5        0.88 ± 21%  perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.50 ± 59%       +0.3        0.81 ± 13%  perf-profile.calltrace.cycles-pp.filemap_map_pages.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_page_fault
      5.70 ±  9%       +0.8        6.47 ±  7%  perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
      5.48 ±  9%       +0.8        6.27 ±  7%  perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.get_signal
      5.49 ±  9%       +0.8        6.28 ±  7%  perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.get_signal.do_signal
      4.30 ±  4%       +1.3        5.60 ±  7%  perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.prepare_exit_to_usermode
      4.40 ±  4%       +1.3        5.69 ±  7%  perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
      4.37 ±  4%       +1.3        5.66 ±  7%  perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
      4.36 ±  4%       +1.3        5.66 ±  7%  perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
      4.33 ±  4%       +1.3        5.62 ±  7%  perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
      4.44 ±  4%       +1.3        5.74 ±  7%  perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
      3.20 ± 10%       -2.4        0.78 ±156%  perf-profile.children.cycles-pp.copy_page
      0.16 ±  9%       -0.1        0.08 ± 64%  perf-profile.children.cycles-pp.irq_work_interrupt
      0.16 ±  9%       -0.1        0.08 ± 64%  perf-profile.children.cycles-pp.smp_irq_work_interrupt
      0.24 ±  5%       -0.1        0.17 ± 18%  perf-profile.children.cycles-pp.irq_work_run_list
      0.16 ±  9%       -0.1        0.10 ± 24%  perf-profile.children.cycles-pp.irq_work_run
      0.16 ±  9%       -0.1        0.10 ± 24%  perf-profile.children.cycles-pp.printk
      0.23 ±  6%       -0.1        0.17 ±  9%  perf-profile.children.cycles-pp.__do_execve_file
      0.08 ± 14%       -0.1        0.03 ±100%  perf-profile.children.cycles-pp.delay_tsc
      0.16 ±  6%       -0.1        0.11 ±  9%  perf-profile.children.cycles-pp.load_elf_binary
      0.16 ±  7%       -0.0        0.12 ± 13%  perf-profile.children.cycles-pp.search_binary_handler
      0.20 ±  7%       -0.0        0.15 ± 10%  perf-profile.children.cycles-pp.call_usermodehelper_exec_async
      0.19 ±  6%       -0.0        0.15 ± 11%  perf-profile.children.cycles-pp.do_execve
      0.08 ± 10%       -0.0        0.04 ± 59%  perf-profile.children.cycles-pp.__vunmap
      0.15 ±  3%       -0.0        0.11 ±  7%  perf-profile.children.cycles-pp.rcu_idle_exit
      0.12 ± 10%       -0.0        0.09 ± 14%  perf-profile.children.cycles-pp.__switch_to_asm
      0.09 ± 13%       -0.0        0.07 ±  5%  perf-profile.children.cycles-pp.des3_ede_encrypt
      0.06 ± 11%       +0.0        0.09 ± 13%  perf-profile.children.cycles-pp.mark_page_accessed
      0.15 ±  5%       +0.0        0.19 ± 12%  perf-profile.children.cycles-pp.apparmor_cred_prepare
      0.22 ±  8%       +0.0        0.27 ± 11%  perf-profile.children.cycles-pp.mem_cgroup_throttle_swaprate
      0.17 ±  2%       +0.0        0.22 ± 12%  perf-profile.children.cycles-pp.security_prepare_creds
      0.95 ± 17%       +0.3        1.22 ± 14%  perf-profile.children.cycles-pp.filemap_map_pages
      5.92 ±  8%       +0.7        6.65 ±  7%  perf-profile.children.cycles-pp.get_signal
      5.66 ±  9%       +0.8        6.44 ±  7%  perf-profile.children.cycles-pp.mmput
      5.65 ±  9%       +0.8        6.43 ±  7%  perf-profile.children.cycles-pp.exit_mmap
      4.40 ±  4%       +1.3        5.70 ±  7%  perf-profile.children.cycles-pp.prepare_exit_to_usermode
      4.45 ±  4%       +1.3        5.75 ±  7%  perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
      3.16 ± 10%       -2.4        0.77 ±155%  perf-profile.self.cycles-pp.copy_page
      0.08 ± 14%       -0.1        0.03 ±100%  perf-profile.self.cycles-pp.delay_tsc
      0.12 ± 10%       -0.0        0.09 ± 14%  perf-profile.self.cycles-pp.__switch_to_asm
      0.08 ± 12%       -0.0        0.06 ± 17%  perf-profile.self.cycles-pp.enqueue_task_fair
      0.09 ± 13%       -0.0        0.07 ±  5%  perf-profile.self.cycles-pp.des3_ede_encrypt
      0.07 ± 13%       +0.0        0.08 ± 19%  perf-profile.self.cycles-pp.__lru_cache_add
      0.19 ±  9%       +0.0        0.22 ± 10%  perf-profile.self.cycles-pp.mem_cgroup_throttle_swaprate
      0.15 ±  5%       +0.0        0.19 ± 11%  perf-profile.self.cycles-pp.apparmor_cred_prepare
      0.05 ± 58%       +0.0        0.09 ± 13%  perf-profile.self.cycles-pp.mark_page_accessed
      0.58 ± 10%       +0.2        0.80 ± 20%  perf-profile.self.cycles-pp.release_pages
      0.75 ±173%   +1.3e+05%       1005 ±100%  interrupts.127:PCI-MSI.31981660-edge.i40e-eth0-TxRx-91
    820.75 ±111%      -99.9%       0.50 ±173%  interrupts.47:PCI-MSI.31981580-edge.i40e-eth0-TxRx-11
    449.25 ± 86%     -100.0%       0.00        interrupts.53:PCI-MSI.31981586-edge.i40e-eth0-TxRx-17
     33.25 ±157%     -100.0%       0.00        interrupts.57:PCI-MSI.31981590-edge.i40e-eth0-TxRx-21
      0.75 ±110%   +63533.3%     477.25 ±162%  interrupts.61:PCI-MSI.31981594-edge.i40e-eth0-TxRx-25
    561.50 ±160%     -100.0%       0.00        interrupts.65:PCI-MSI.31981598-edge.i40e-eth0-TxRx-29
     82921 ±  8%      -11.1%      73748 ±  6%  interrupts.CPU11.CAL:Function_call_interrupts
     66509 ± 30%      -32.6%      44828 ±  8%  interrupts.CPU14.TLB:TLB_shootdowns
     43105 ± 98%      -90.3%       4183 ± 21%  interrupts.CPU17.RES:Rescheduling_interrupts
    148719 ± 70%      -69.4%      45471 ± 16%  interrupts.CPU17.TLB:TLB_shootdowns
     85589 ± 42%      -52.2%      40884 ±  5%  interrupts.CPU20.TLB:TLB_shootdowns
    222472 ± 41%      -98.0%       4360 ± 45%  interrupts.CPU22.RES:Rescheduling_interrupts
      0.50 ±173%   +95350.0%     477.25 ±162%  interrupts.CPU25.61:PCI-MSI.31981594-edge.i40e-eth0-TxRx-25
     76029 ± 10%      +14.9%      87389 ±  5%  interrupts.CPU25.CAL:Function_call_interrupts
    399042 ±  6%      +13.4%     452479 ±  8%  interrupts.CPU27.LOC:Local_timer_interrupts
    561.00 ±161%     -100.0%       0.00        interrupts.CPU29.65:PCI-MSI.31981598-edge.i40e-eth0-TxRx-29
      7034 ± 46%    +1083.8%      83279 ±138%  interrupts.CPU29.RES:Rescheduling_interrupts
     17829 ± 99%      -71.0%       5172 ± 16%  interrupts.CPU30.RES:Rescheduling_interrupts
      5569 ± 15%    +2414.7%     140059 ± 94%  interrupts.CPU31.RES:Rescheduling_interrupts
     37674 ± 16%      +36.6%      51473 ± 25%  interrupts.CPU31.TLB:TLB_shootdowns
     47905 ± 39%      +76.6%      84583 ± 38%  interrupts.CPU34.TLB:TLB_shootdowns
    568.75 ±140%     +224.8%       1847 ± 90%  interrupts.CPU36.NMI:Non-maskable_interrupts
    568.75 ±140%     +224.8%       1847 ± 90%  interrupts.CPU36.PMI:Performance_monitoring_interrupts
      4236 ± 25%    +2168.5%      96092 ± 90%  interrupts.CPU36.RES:Rescheduling_interrupts
     52717 ± 27%      +43.3%      75565 ± 28%  interrupts.CPU37.TLB:TLB_shootdowns
     41418 ±  9%     +136.6%      98010 ± 50%  interrupts.CPU39.TLB:TLB_shootdowns
      5551 ±  8%     +847.8%      52615 ± 66%  interrupts.CPU40.RES:Rescheduling_interrupts
      4746 ± 25%     +865.9%      45841 ± 91%  interrupts.CPU42.RES:Rescheduling_interrupts
     37556 ± 11%      +24.6%      46808 ±  6%  interrupts.CPU42.TLB:TLB_shootdowns
     21846 ±124%      -84.4%       3415 ± 46%  interrupts.CPU48.RES:Rescheduling_interrupts
    891.50 ± 22%      -35.2%     577.25 ± 40%  interrupts.CPU49.NMI:Non-maskable_interrupts
    891.50 ± 22%      -35.2%     577.25 ± 40%  interrupts.CPU49.PMI:Performance_monitoring_interrupts
     20459 ±120%      -79.2%       4263 ± 14%  interrupts.CPU49.RES:Rescheduling_interrupts
     59840 ± 21%      -23.1%      46042 ± 16%  interrupts.CPU5.TLB:TLB_shootdowns
     65200 ± 19%      -34.5%      42678 ±  9%  interrupts.CPU51.TLB:TLB_shootdowns
     70923 ±153%      -94.0%       4270 ± 29%  interrupts.CPU53.RES:Rescheduling_interrupts
     65312 ± 22%      -28.7%      46578 ± 14%  interrupts.CPU56.TLB:TLB_shootdowns
     65828 ± 24%      -33.4%      43846 ±  4%  interrupts.CPU59.TLB:TLB_shootdowns
     72558 ±156%      -93.2%       4906 ±  9%  interrupts.CPU6.RES:Rescheduling_interrupts
     68698 ± 34%      -32.6%      46327 ± 18%  interrupts.CPU61.TLB:TLB_shootdowns
    109745 ± 44%      -57.4%      46711 ± 16%  interrupts.CPU62.TLB:TLB_shootdowns
     89714 ± 44%      -48.5%      46198 ±  7%  interrupts.CPU63.TLB:TLB_shootdowns
     59380 ±136%      -91.5%       5066 ± 13%  interrupts.CPU69.RES:Rescheduling_interrupts
     40094 ± 18%     +133.9%      93798 ± 44%  interrupts.CPU78.TLB:TLB_shootdowns
    129884 ± 72%      -55.3%      58034 ±157%  interrupts.CPU8.RES:Rescheduling_interrupts
     69984 ± 11%      +51.4%     105957 ± 20%  interrupts.CPU80.CAL:Function_call_interrupts
     32857 ± 10%     +128.7%      75131 ± 36%  interrupts.CPU80.TLB:TLB_shootdowns
     35726 ± 16%      +34.6%      48081 ± 12%  interrupts.CPU82.TLB:TLB_shootdowns
     73820 ± 17%      +28.2%      94643 ±  8%  interrupts.CPU84.CAL:Function_call_interrupts
     38829 ± 28%     +190.3%     112736 ± 42%  interrupts.CPU84.TLB:TLB_shootdowns
     36129 ±  4%      +47.6%      53329 ± 13%  interrupts.CPU85.TLB:TLB_shootdowns
      4693 ±  7%    +1323.0%      66793 ±145%  interrupts.CPU86.RES:Rescheduling_interrupts
     38003 ± 11%      +94.8%      74031 ± 43%  interrupts.CPU86.TLB:TLB_shootdowns
     78022 ±  3%       +7.9%      84210 ±  3%  interrupts.CPU87.CAL:Function_call_interrupts
     36359 ±  6%      +54.9%      56304 ± 48%  interrupts.CPU88.TLB:TLB_shootdowns
     89031 ±105%      -95.0%       4475 ± 40%  interrupts.CPU9.RES:Rescheduling_interrupts
     40085 ± 11%      +60.6%      64368 ± 27%  interrupts.CPU91.TLB:TLB_shootdowns
     42244 ± 10%      +44.8%      61162 ± 35%  interrupts.CPU94.TLB:TLB_shootdowns
     40959 ± 15%     +109.4%      85780 ± 41%  interrupts.CPU95.TLB:TLB_shootdowns
                              stress-ng.fiemap.ops

  [ASCII trend plot: bisect-good samples hold roughly flat around
   70000-75000 ops; bisect-bad samples sit around 29000-36000 ops]

                           stress-ng.fiemap.ops_per_sec

  [ASCII trend plot: same shape, roughly 70000-75000 ops/sec for
   bisect-good samples versus about 29000-36000 ops/sec for bisect-bad]

[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
_______________________________________________
LKP mailing list -- lkp@xxxxxxxxxxxx
To unsubscribe send an email to lkp-leave@xxxxxxxxxxxx