Re: [LKP] Re: [sched/fair] 6c8116c914: stress-ng.mmapfork.ops_per_sec -38.0% regression

From: Vincent Guittot
Date: Fri Apr 24 2020 - 11:17:12 EST


Hi Xing,

On Fri, 24 Apr 2020 at 10:15, Xing Zhengjun
<zhengjun.xing@xxxxxxxxxxxxxxx> wrote:
>
> Hi Tao,
>
> Do you have time to take a look at this? Thanks.

I have tried to reproduce the regression on my systems, which are Arm
based, but I can't see such a regression on either a small 8-core
machine or a large 224-core / 2-NUMA-node one.

I have only run the mmapfork test, to shorten the duration of my tests.

Regards,
Vincent
>
> On 4/21/2020 8:47 AM, kernel test robot wrote:
> > Greetings,
> >
> > FYI, we noticed a 56.4% improvement of stress-ng.fifo.ops_per_sec due to commit:
> >
> >
> > commit: 6c8116c914b65be5e4d6f66d69c8142eb0648c22 ("sched/fair: Fix condition of avg_load calculation")
> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
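> >
> > [ For reference, the bisected commit changes when avg_load is computed in
> >   update_sg_wakeup_stats() in kernel/sched/fair.c, which feeds idlest-group
> >   selection at fork/exec/wakeup time. A sketch of the condition after the
> >   fix, reconstructed from memory of the upstream patch rather than copied
> >   verbatim, so the exact context may differ: ]
> >
> >     /*
> >      * Computing avg_load makes sense only when group is fully busy or
> >      * overloaded.
> >      */
> >     if (sgs->group_type == group_fully_busy ||
> >         sgs->group_type == group_overloaded)
> >         sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
> >                         sgs->group_capacity;
> >
> > [ Before the fix the condition was the broader
> >   sgs->group_type < group_misfit_task. ]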
> >
> > in testcase: stress-ng
> > on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
> > with the following parameters:
> >
> > nr_threads: 100%
> > disk: 1HDD
> > testtime: 1s
> > class: scheduler
> > cpufreq_governor: performance
> > ucode: 0xb000038
> > sc_pid_max: 4194304
> >
> >
> > In addition to that, the commit also has a significant impact on the following tests:
> >
> > +------------------+----------------------------------------------------------------------+
> > | testcase: change | stress-ng: stress-ng.mmapfork.ops_per_sec -19.2% regression |
> > | test machine | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory |
> > | test parameters | class=vm |
> > | | cpufreq_governor=performance |
> > | | disk=1HDD |
> > | | fs=ext4 |
> > | | nr_threads=10% |
> > | | testtime=1s |
> > | | ucode=0x500002c |
> > +------------------+----------------------------------------------------------------------+
> >
> >
> >
> >
> > Details are as below:
> > -------------------------------------------------------------------------------------------------->
> >
> >
> > To reproduce:
> >
> > git clone https://github.com/intel/lkp-tests.git
> > cd lkp-tests
> > bin/lkp install job.yaml # job file is attached in this email
> > bin/lkp run job.yaml
> >
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/sc_pid_max/tbox_group/testcase/testtime/ucode:
> > scheduler/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-20191114.cgz/4194304/lkp-bdw-ep6/stress-ng/1s/0xb000038
> >
> > commit:
> > e94f80f6c4 ("sched/rt: cpupri_find: Trigger a full search as fallback")
> > 6c8116c914 ("sched/fair: Fix condition of avg_load calculation")
> >
> > e94f80f6c4902000 6c8116c914b65be5e4d6f66d69c
> > ---------------- ---------------------------
> > fail:runs %reproduction fail:runs
> > | | |
> > :4 25% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
> > :4 25% 1:4 dmesg.WARNING:at_ip___perf_sw_event/0x
> > :4 25% 1:4 dmesg.WARNING:at_ip__slab_free/0x
> > %stddev %change %stddev
> > \ | \
> > 3986602 ± 12% +56.5% 6237290 ± 11% stress-ng.fifo.ops
> > 3982065 ± 12% +56.4% 6228889 ± 11% stress-ng.fifo.ops_per_sec
> > 20066 ± 5% -9.0% 18250 ± 5% stress-ng.fork.ops
> > 20043 ± 5% -9.0% 18239 ± 6% stress-ng.fork.ops_per_sec
> > 29.08 ± 2% -38.0% 18.01 stress-ng.mmapfork.ops_per_sec
> > 2200 +3.8% 2284 stress-ng.time.system_time
> > 19775571 ± 12% +28.2% 25357609 ± 6% numa-numastat.node1.local_node
> > 19843883 ± 12% +28.1% 25427059 ± 6% numa-numastat.node1.numa_hit
> > 4489 ± 15% -21.5% 3523 ± 3% sched_debug.cfs_rq:/.runnable_avg.max
> > 929.25 -12.0% 818.19 ± 7% sched_debug.cfs_rq:/.runnable_avg.stddev
> > 1449 ± 26% -25.8% 1075 ± 5% sched_debug.cfs_rq:/.util_avg.max
> > 28692 +9.2% 31327 Â 5% softirqs.CPU44.TIMER
> > 22999 Â 3% +13.7% 26141 Â 6% softirqs.CPU56.RCU
> > 28464 Â 4% +9.9% 31279 Â 6% softirqs.CPU56.TIMER
> > 30.25 Â 2% -6.6% 28.25 vmstat.cpu.id
> > 60.00 +4.6% 62.75 vmstat.cpu.sy
> > 2526959 Â 3% +69.1% 4273296 Â 2% vmstat.memory.cache
> > 371.25 Â 9% +27.1% 472.00 Â 5% vmstat.procs.r
> > 30.16 Â 3% -6.0% 28.35 Â 2% iostat.cpu.idle
> > 60.99 +3.6% 63.22 iostat.cpu.system
> > 8.39 Â 2% -4.9% 7.98 iostat.cpu.user
> > 3.10 Â173% -100.0% 0.00 iostat.sdc.await.max
> > 3.10 Â173% -100.0% 0.00 iostat.sdc.r_await.max
> > 1082 Â 9% +11.4% 1206 Â 4% slabinfo.kmalloc-128.active_slabs
> > 34667 Â 9% +11.3% 38602 Â 4% slabinfo.kmalloc-128.num_objs
> > 1082 Â 9% +11.4% 1206 Â 4% slabinfo.kmalloc-128.num_slabs
> > 454.50 Â 11% +21.0% 549.75 Â 22% slabinfo.kmalloc-192.active_slabs
> > 19110 Â 10% +20.9% 23108 Â 22% slabinfo.kmalloc-192.num_objs
> > 454.50 Â 11% +21.0% 549.75 Â 22% slabinfo.kmalloc-192.num_slabs
> > 106621 -7.8% 98257 Â 5% slabinfo.kmalloc-32.active_objs
> > 37329 Â 3% +20.6% 45034 slabinfo.radix_tree_node.active_objs
> > 706.25 Â 3% +24.5% 879.25 slabinfo.radix_tree_node.active_slabs
> > 39573 Â 3% +24.5% 49252 slabinfo.radix_tree_node.num_objs
> > 706.25 Â 3% +24.5% 879.25 slabinfo.radix_tree_node.num_slabs
> > 1318829 Â 8% +49.7% 1974635 Â 10% meminfo.Active
> > 1318549 Â 8% +49.7% 1974352 Â 10% meminfo.Active(anon)
> > 723296 Â 17% +41.1% 1020590 Â 15% meminfo.AnonHugePages
> > 905050 Â 11% +40.9% 1274793 Â 16% meminfo.AnonPages
> > 2271966 Â 3% +83.7% 4173221 meminfo.Cached
> > 62076918 Â 3% +12.6% 69917857 Â 5% meminfo.Committed_AS
> > 815680 Â 7% +198.4% 2434139 Â 2% meminfo.Inactive
> > 815324 Â 7% +198.5% 2433786 Â 2% meminfo.Inactive(anon)
> > 840619 Â 7% +192.4% 2457795 Â 2% meminfo.Mapped
> > 4535703 Â 4% +51.0% 6849231 Â 3% meminfo.Memused
> > 66386 Â 5% +20.2% 79763 Â 4% meminfo.PageTables
> > 1210719 Â 5% +157.1% 3112278 Â 2% meminfo.Shmem
> > 509312 Â 6% +75.4% 893498 meminfo.max_used_kB
> > 323591 Â 5% +48.3% 479732 Â 10% proc-vmstat.nr_active_anon
> > 225088 Â 9% +40.2% 315590 Â 14% proc-vmstat.nr_anon_pages
> > 362.50 Â 16% +35.7% 492.00 Â 14% proc-vmstat.nr_anon_transparent_hugepages
> > 3164981 -1.8% 3108432 proc-vmstat.nr_dirty_background_threshold
> > 6337701 -1.8% 6224466 proc-vmstat.nr_dirty_threshold
> > 562051 Â 3% +82.2% 1023916 Â 2% proc-vmstat.nr_file_pages
> > 31851080 -1.8% 31284538 proc-vmstat.nr_free_pages
> > 201936 Â 8% +196.7% 599169 Â 3% proc-vmstat.nr_inactive_anon
> > 208366 Â 8% +190.5% 605371 Â 3% proc-vmstat.nr_mapped
> > 16350 Â 3% +24.7% 20391 Â 7% proc-vmstat.nr_page_table_pages
> > 296735 Â 6% +155.7% 758662 Â 3% proc-vmstat.nr_shmem
> > 323592 Â 5% +48.3% 479731 Â 10% proc-vmstat.nr_zone_active_anon
> > 201936 Â 8% +196.7% 599168 Â 3% proc-vmstat.nr_zone_inactive_anon
> > 14509555 Â 2% +6.5% 15449984 proc-vmstat.pgactivate
> > 565861 Â 54% -93.7% 35832 Â 28% numa-meminfo.node0.Inactive
> > 565602 Â 54% -93.7% 35573 Â 28% numa-meminfo.node0.Inactive(anon)
> > 583972 Â 52% -91.4% 50225 Â 20% numa-meminfo.node0.Mapped
> > 627138 Â 20% +131.7% 1453311 Â 9% numa-meminfo.node1.Active
> > 627070 Â 20% +131.8% 1453285 Â 9% numa-meminfo.node1.Active(anon)
> > 327555 Â 17% +109.1% 684899 Â 9% numa-meminfo.node1.AnonHugePages
> > 460420 Â 11% +76.5% 812769 Â 16% numa-meminfo.node1.AnonPages
> > 968393 Â 45% +265.8% 3542629 Â 3% numa-meminfo.node1.FilePages
> > 270293 Â115% +784.7% 2391349 Â 3% numa-meminfo.node1.Inactive
> > 270196 Â116% +785.0% 2391255 Â 3% numa-meminfo.node1.Inactive(anon)
> > 94282 Â 6% +14.1% 107588 Â 4% numa-meminfo.node1.KReclaimable
> > 277310 Â113% +765.8% 2401026 Â 2% numa-meminfo.node1.Mapped
> > 2101472 Â 20% +143.0% 5106156 Â 3% numa-meminfo.node1.MemUsed
> > 30839 Â 14% +75.9% 54240 Â 5% numa-meminfo.node1.PageTables
> > 94282 Â 6% +14.1% 107588 Â 4% numa-meminfo.node1.SReclaimable
> > 428801 Â102% +603.2% 3015356 Â 3% numa-meminfo.node1.Shmem
> > 319011 Â 32% -53.5% 148357 Â 3% numa-vmstat.node0.nr_file_pages
> > 136650 Â 54% -90.3% 13199 Â 47% numa-vmstat.node0.nr_inactive_anon
> > 141251 Â 52% -88.0% 16942 Â 37% numa-vmstat.node0.nr_mapped
> > 9345 Â 15% -29.0% 6638 Â 9% numa-vmstat.node0.nr_page_table_pages
> > 188582 Â 54% -92.1% 14926 Â 42% numa-vmstat.node0.nr_shmem
> > 136645 Â 54% -90.3% 13189 Â 47% numa-vmstat.node0.nr_zone_inactive_anon
> > 11178939 Â 9% -17.1% 9271727 Â 7% numa-vmstat.node0.numa_hit
> > 10982245 Â 9% -17.1% 9101421 Â 7% numa-vmstat.node0.numa_local
> > 158792 Â 23% +130.0% 365220 Â 12% numa-vmstat.node1.nr_active_anon
> > 114790 Â 11% +76.4% 202539 Â 17% numa-vmstat.node1.nr_anon_pages
> > 160.50 Â 19% +107.3% 332.75 Â 9% numa-vmstat.node1.nr_anon_transparent_hugepages
> > 244507 Â 46% +262.7% 886783 Â 2% numa-vmstat.node1.nr_file_pages
> > 68248 Â122% +773.8% 596360 numa-vmstat.node1.nr_inactive_anon
> > 70043 Â119% +755.1% 598917 numa-vmstat.node1.nr_mapped
> > 8117 Â 22% +66.1% 13481 Â 3% numa-vmstat.node1.nr_page_table_pages
> > 109596 Â104% +588.9% 754961 Â 3% numa-vmstat.node1.nr_shmem
> > 23655 Â 7% +13.4% 26828 Â 4% numa-vmstat.node1.nr_slab_reclaimable
> > 158794 Â 23% +130.0% 365223 Â 12% numa-vmstat.node1.nr_zone_active_anon
> > 68248 Â122% +773.8% 596359 numa-vmstat.node1.nr_zone_inactive_anon
> > 10597757 Â 6% +31.0% 13877833 Â 2% numa-vmstat.node1.numa_hit
> > 10518704 Â 6% +30.8% 13763501 Â 3% numa-vmstat.node1.numa_local
> > 130.75 Â 26% -78.4% 28.25 Â 11% interrupts.36:IR-PCI-MSI.1572867-edge.eth0-TxRx-2
> > 454956 Â 13% -45.1% 249594 Â 38% interrupts.CPU14.LOC:Local_timer_interrupts
> > 17625 Â 28% +80.2% 31751 Â 37% interrupts.CPU14.RES:Rescheduling_interrupts
> > 130.75 Â 26% -78.4% 28.25 Â 11% interrupts.CPU15.36:IR-PCI-MSI.1572867-edge.eth0-TxRx-2
> > 263948 Â 23% -25.7% 196017 Â 2% interrupts.CPU15.LOC:Local_timer_interrupts
> > 425840 Â 19% -53.7% 197051 Â 4% interrupts.CPU17.LOC:Local_timer_interrupts
> > 44187 Â 27% -53.8% 20406 Â 39% interrupts.CPU18.RES:Rescheduling_interrupts
> > 2400 Â149% -91.2% 211.50 Â143% interrupts.CPU2.IWI:IRQ_work_interrupts
> > 432176 Â 16% -51.2% 211015 Â 9% interrupts.CPU2.LOC:Local_timer_interrupts
> > 444388 Â 20% -44.4% 246924 Â 34% interrupts.CPU20.LOC:Local_timer_interrupts
> > 1763 Â 11% +31.8% 2324 Â 13% interrupts.CPU25.TLB:TLB_shootdowns
> > 428063 Â 7% -33.9% 282779 Â 33% interrupts.CPU27.LOC:Local_timer_interrupts
> > 19879 Â 66% +94.7% 38706 Â 47% interrupts.CPU29.RES:Rescheduling_interrupts
> > 1459 Â 17% +62.3% 2369 Â 22% interrupts.CPU32.TLB:TLB_shootdowns
> > 1593 Â 17% +35.2% 2154 Â 15% interrupts.CPU33.TLB:TLB_shootdowns
> > 1388 Â 21% +57.5% 2185 Â 19% interrupts.CPU34.TLB:TLB_shootdowns
> > 44877 Â 50% -52.3% 21390 Â 19% interrupts.CPU36.RES:Rescheduling_interrupts
> > 6002 Â 35% -34.7% 3920 Â 8% interrupts.CPU37.CAL:Function_call_interrupts
> > 519.00 Â159% -95.2% 24.75 Â 51% interrupts.CPU4.IWI:IRQ_work_interrupts
> > 371753 Â 25% -46.6% 198407 Â 6% interrupts.CPU4.LOC:Local_timer_interrupts
> > 1490 Â 19% +42.9% 2130 Â 7% interrupts.CPU41.TLB:TLB_shootdowns
> > 6738 Â 41% -34.0% 4449 Â 19% interrupts.CPU45.CAL:Function_call_interrupts
> > 1145 Â124% -82.6% 199.25 Â153% interrupts.CPU46.IWI:IRQ_work_interrupts
> > 275.75 Â141% -94.2% 16.00 Â 84% interrupts.CPU48.IWI:IRQ_work_interrupts
> > 310.00 Â134% -93.1% 21.25 Â103% interrupts.CPU49.IWI:IRQ_work_interrupts
> > 463385 Â 3% -57.3% 197853 Â 6% interrupts.CPU49.LOC:Local_timer_interrupts
> > 7206 Â 33% -36.7% 4558 Â 10% interrupts.CPU5.CAL:Function_call_interrupts
> > 264579 Â 13% -24.5% 199834 Â 2% interrupts.CPU5.LOC:Local_timer_interrupts
> > 5463 Â 14% -26.3% 4025 Â 11% interrupts.CPU50.CAL:Function_call_interrupts
> > 7063 Â 31% -41.3% 4147 Â 13% interrupts.CPU54.CAL:Function_call_interrupts
> > 287711 Â 25% -31.7% 196499 Â 6% interrupts.CPU55.LOC:Local_timer_interrupts
> > 415854 Â 8% -41.4% 243719 Â 33% interrupts.CPU57.LOC:Local_timer_interrupts
> > 324710 Â 32% -38.3% 200427 Â 11% interrupts.CPU6.LOC:Local_timer_interrupts
> > 343106 Â 32% -27.0% 250512 Â 35% interrupts.CPU61.LOC:Local_timer_interrupts
> > 395834 Â 19% -49.0% 201844 Â 17% interrupts.CPU63.LOC:Local_timer_interrupts
> > 483611 Â 24% -48.4% 249593 Â 31% interrupts.CPU64.LOC:Local_timer_interrupts
> > 1885 Â 13% +37.5% 2591 Â 23% interrupts.CPU69.TLB:TLB_shootdowns
> > 382720 Â 33% -38.5% 235483 Â 34% interrupts.CPU7.LOC:Local_timer_interrupts
> > 1437 Â 11% +37.1% 1970 Â 16% interrupts.CPU70.TLB:TLB_shootdowns
> > 3844 Â 34% +64.2% 6312 Â 2% interrupts.CPU71.NMI:Non-maskable_interrupts
> > 3844 Â 34% +64.2% 6312 Â 2% interrupts.CPU71.PMI:Performance_monitoring_interrupts
> > 54451 Â 37% -65.6% 18725 Â 51% interrupts.CPU71.RES:Rescheduling_interrupts
> > 1710 Â 6% +27.2% 2176 Â 10% interrupts.CPU72.TLB:TLB_shootdowns
> > 350141 Â 20% -32.6% 236073 Â 29% interrupts.CPU74.LOC:Local_timer_interrupts
> > 1172 Â 18% +80.1% 2112 Â 10% interrupts.CPU76.TLB:TLB_shootdowns
> > 59169 Â 36% -56.0% 26026 Â 56% interrupts.CPU84.RES:Rescheduling_interrupts
> > 409027 Â 39% -41.4% 239528 Â 25% interrupts.CPU86.LOC:Local_timer_interrupts
> > 1543 Â 14% +24.6% 1922 Â 9% interrupts.CPU86.TLB:TLB_shootdowns
> > 8.71 Â 7% -3.8 4.92 Â 23% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write.ksys_write
> > 8.66 Â 7% -3.8 4.89 Â 23% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
> > 8.61 Â 7% -3.8 4.84 Â 23% perf-profile.calltrace.cycles-pp.pollwake.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write
> > 8.60 Â 7% -3.8 4.83 Â 23% perf-profile.calltrace.cycles-pp.try_to_wake_up.pollwake.__wake_up_common.__wake_up_common_lock.pipe_write
> > 10.58 Â 4% -3.7 6.87 Â 17% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task
> > 10.46 Â 5% -3.7 6.77 Â 17% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
> > 11.46 Â 4% -3.6 7.81 Â 14% perf-profile.calltrace.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
> > 11.71 Â 4% -3.6 8.07 Â 13% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
> > 8.92 Â 7% -3.6 5.31 Â 22% perf-profile.calltrace.cycles-pp.pipe_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
> > 9.02 Â 7% -3.6 5.42 Â 22% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 7.95 Â 6% -3.6 4.36 Â 23% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.pollwake.__wake_up_common
> > 7.95 Â 6% -3.6 4.36 Â 23% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.pollwake.__wake_up_common.__wake_up_common_lock
> > 7.94 Â 6% -3.6 4.36 Â 23% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.pollwake
> > 14.89 Â 4% -3.6 11.31 Â 10% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
> > 14.94 Â 4% -3.6 11.37 Â 10% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
> > 15.40 Â 3% -3.4 12.01 Â 10% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
> > 15.42 Â 3% -3.4 12.04 Â 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_write
> > 15.70 Â 3% -3.3 12.39 Â 10% perf-profile.calltrace.cycles-pp.__GI___libc_write
> > 4.89 Â 13% -2.0 2.94 Â 25% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.fsnotify_add_event.fanotify_handle_event.fsnotify
> > 1.67 Â 23% -0.6 1.11 Â 10% perf-profile.calltrace.cycles-pp._raw_spin_lock.fsnotify_add_event.fanotify_handle_event.fsnotify.do_sys_openat2
> > 1.98 Â 8% -0.5 1.50 Â 10% perf-profile.calltrace.cycles-pp._raw_spin_lock.fsnotify_add_event.fanotify_handle_event.fsnotify.__fput
> > 2.54 Â 3% -0.4 2.14 Â 7% perf-profile.calltrace.cycles-pp.fsnotify_add_event.fanotify_handle_event.fsnotify.__fput.task_work_run
> > 0.66 Â 9% -0.4 0.27 Â100% perf-profile.calltrace.cycles-pp._raw_spin_lock.fsnotify_add_event.fanotify_handle_event.fsnotify.vfs_read
> > 2.84 -0.3 2.55 Â 6% perf-profile.calltrace.cycles-pp.fsnotify_add_event.fanotify_handle_event.fsnotify.do_sys_openat2.do_sys_open
> > 2.83 Â 2% -0.3 2.56 Â 4% perf-profile.calltrace.cycles-pp.fanotify_handle_event.fsnotify.__fput.task_work_run.exit_to_usermode_loop
> > 0.69 Â 5% -0.3 0.43 Â 58% perf-profile.calltrace.cycles-pp._raw_spin_lock.fsnotify_add_event.fanotify_handle_event.fsnotify.vfs_write
> > 2.91 Â 2% -0.3 2.65 Â 4% perf-profile.calltrace.cycles-pp.fsnotify.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
> > 7.23 -0.2 6.99 perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 3.10 -0.2 2.90 Â 4% perf-profile.calltrace.cycles-pp.fanotify_handle_event.fsnotify.do_sys_openat2.do_sys_open.do_syscall_64
> > 0.94 Â 8% -0.1 0.80 Â 3% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap
> > 0.95 Â 8% -0.1 0.82 Â 3% perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap.__vm_munmap
> > 0.77 Â 10% -0.1 0.64 Â 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
> > 0.79 Â 10% -0.1 0.66 Â 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
> > 0.95 Â 8% -0.1 0.82 Â 3% perf-profile.calltrace.cycles-pp.lru_add_drain.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
> > 0.87 Â 3% -0.1 0.75 Â 6% perf-profile.calltrace.cycles-pp.fsnotify_add_event.fanotify_handle_event.fsnotify.vfs_read.ksys_read
> > 0.85 Â 3% -0.1 0.74 Â 8% perf-profile.calltrace.cycles-pp.fsnotify_add_event.fanotify_handle_event.fsnotify.vfs_write.ksys_write
> > 1.25 Â 6% -0.1 1.14 Â 2% perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
> > 4.21 -0.1 4.10 perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_close
> > 4.27 -0.1 4.16 perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_close
> > 1.35 Â 5% -0.1 1.25 Â 2% perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
> > 1.33 Â 5% -0.1 1.23 Â 2% perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 1.34 Â 5% -0.1 1.24 Â 2% perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
> > 1.85 -0.1 1.75 Â 4% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
> > 0.97 Â 2% -0.1 0.89 Â 4% perf-profile.calltrace.cycles-pp.fanotify_handle_event.fsnotify.vfs_read.ksys_read.do_syscall_64
> > 1.02 Â 2% -0.1 0.96 Â 4% perf-profile.calltrace.cycles-pp.fsnotify.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 0.86 Â 8% +0.1 0.96 Â 5% perf-profile.calltrace.cycles-pp.copy_page_range.dup_mm.copy_process._do_fork.__x64_sys_clone
> > 0.85 Â 7% +0.1 0.95 Â 4% perf-profile.calltrace.cycles-pp.copy_p4d_range.copy_page_range.dup_mm.copy_process._do_fork
> > 0.66 Â 5% +0.1 0.79 Â 12% perf-profile.calltrace.cycles-pp.apparmor_file_alloc_security.security_file_alloc.__alloc_file.alloc_empty_file.dentry_open
> > 0.69 Â 4% +0.1 0.82 Â 11% perf-profile.calltrace.cycles-pp.security_file_alloc.__alloc_file.alloc_empty_file.dentry_open.fanotify_read
> > 0.67 Â 4% +0.1 0.81 Â 13% perf-profile.calltrace.cycles-pp.apparmor_file_free_security.security_file_free.__fput.task_work_run.exit_to_usermode_loop
> > 0.68 Â 5% +0.1 0.82 Â 12% perf-profile.calltrace.cycles-pp.security_file_free.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
> > 0.30 Â100% +0.4 0.67 Â 5% perf-profile.calltrace.cycles-pp.fanotify_merge.fsnotify_add_event.fanotify_handle_event.fsnotify.do_sys_openat2
> > 5.06 Â 3% +0.5 5.55 Â 2% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
> > 5.11 Â 3% +0.5 5.60 Â 2% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
> > 5.53 Â 3% +0.7 6.22 Â 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
> > 5.55 Â 3% +0.7 6.24 Â 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_read
> > 5.87 Â 3% +0.8 6.66 Â 3% perf-profile.calltrace.cycles-pp.__GI___libc_read
> > 2.06 +2.0 4.05 Â 71% perf-profile.calltrace.cycles-pp.page_fault
> > 2.00 +2.0 3.99 Â 72% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
> > 1.72 +2.0 3.73 Â 78% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_page_fault.page_fault
> > 1.77 +2.0 3.78 Â 77% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_page_fault.page_fault
> > 30.85 Â 2% -5.8 25.06 Â 4% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
> > 16.38 Â 4% -4.0 12.35 Â 10% perf-profile.children.cycles-pp.try_to_wake_up
> > 14.81 Â 3% -3.9 10.96 Â 11% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
> > 9.39 Â 6% -3.8 5.57 Â 20% perf-profile.children.cycles-pp.__wake_up_common_lock
> > 8.72 Â 7% -3.8 4.90 Â 23% perf-profile.children.cycles-pp.pollwake
> > 13.03 Â 4% -3.8 9.24 Â 12% perf-profile.children.cycles-pp.enqueue_task_fair
> > 13.03 Â 4% -3.8 9.26 Â 12% perf-profile.children.cycles-pp.ttwu_do_activate
> > 13.05 Â 4% -3.8 9.28 Â 12% perf-profile.children.cycles-pp.activate_task
> > 12.34 Â 4% -3.8 8.58 Â 13% perf-profile.children.cycles-pp.__account_scheduler_latency
> > 12.75 Â 4% -3.7 9.00 Â 12% perf-profile.children.cycles-pp.enqueue_entity
> > 13.76 Â 4% -3.7 10.04 Â 11% perf-profile.children.cycles-pp.__wake_up_common
> > 8.93 Â 7% -3.6 5.31 Â 22% perf-profile.children.cycles-pp.pipe_write
> > 9.30 Â 6% -3.6 5.72 Â 20% perf-profile.children.cycles-pp.new_sync_write
> > 15.09 Â 3% -3.5 11.57 Â 10% perf-profile.children.cycles-pp.ksys_write
> > 15.26 Â 3% -3.5 11.75 Â 9% perf-profile.children.cycles-pp.vfs_write
> > 15.80 Â 3% -3.3 12.55 Â 10% perf-profile.children.cycles-pp.__GI___libc_write
> > 77.73 -3.0 74.69 Â 4% perf-profile.children.cycles-pp.do_syscall_64
> > 77.85 -3.0 74.83 Â 4% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
> > 17.37 Â 4% -1.7 15.68 Â 2% perf-profile.children.cycles-pp._raw_spin_lock
> > 6.95 Â 5% -1.2 5.79 Â 26% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
> > 6.87 Â 5% -1.2 5.71 Â 26% perf-profile.children.cycles-pp.rwsem_optimistic_spin
> > 7.29 Â 2% -1.0 6.33 Â 6% perf-profile.children.cycles-pp.fsnotify_add_event
> > 8.05 -0.6 7.42 Â 4% perf-profile.children.cycles-pp.fanotify_handle_event
> > 8.41 -0.6 7.84 Â 4% perf-profile.children.cycles-pp.fsnotify
> > 7.33 -0.3 7.08 perf-profile.children.cycles-pp.__fput
> > 7.76 -0.2 7.51 perf-profile.children.cycles-pp.exit_to_usermode_loop
> > 1.01 Â 7% -0.1 0.88 Â 3% perf-profile.children.cycles-pp.lru_add_drain
> > 0.96 Â 7% -0.1 0.83 Â 5% perf-profile.children.cycles-pp.__pagevec_release
> > 1.03 Â 7% -0.1 0.91 Â 3% perf-profile.children.cycles-pp.lru_add_drain_cpu
> > 1.02 Â 7% -0.1 0.90 Â 2% perf-profile.children.cycles-pp.pagevec_lru_move_fn
> > 1.46 Â 6% -0.1 1.35 Â 2% perf-profile.children.cycles-pp.unmap_region
> > 1.87 -0.1 1.78 Â 4% perf-profile.children.cycles-pp.schedule_idle
> > 1.49 Â 5% -0.1 1.40 perf-profile.children.cycles-pp.__x64_sys_munmap
> > 0.58 Â 11% -0.1 0.52 Â 12% perf-profile.children.cycles-pp.load_balance
> > 0.60 Â 5% -0.1 0.54 Â 3% perf-profile.children.cycles-pp.truncate_inode_pages_range
> > 0.11 Â 3% -0.0 0.10 Â 4% perf-profile.children.cycles-pp.seq_show
> > 0.11 Â 3% -0.0 0.10 Â 4% perf-profile.children.cycles-pp.seq_printf
> > 0.11 -0.0 0.10 Â 4% perf-profile.children.cycles-pp.seq_vprintf
> > 0.09 Â 4% +0.0 0.11 Â 4% perf-profile.children.cycles-pp.__check_object_size
> > 0.09 Â 4% +0.0 0.11 Â 11% perf-profile.children.cycles-pp.current_time
> > 0.25 Â 2% +0.0 0.27 Â 2% perf-profile.children.cycles-pp.update_process_times
> > 0.29 Â 4% +0.0 0.31 Â 2% perf-profile.children.cycles-pp.generic_file_write_iter
> > 0.31 Â 3% +0.0 0.33 perf-profile.children.cycles-pp.new_inode_pseudo
> > 0.18 Â 5% +0.0 0.21 Â 6% perf-profile.children.cycles-pp.__might_sleep
> > 0.34 Â 4% +0.0 0.38 Â 6% perf-profile.children.cycles-pp.do_wp_page
> > 0.01 Â173% +0.0 0.06 Â 16% perf-profile.children.cycles-pp.icmp_sk_exit
> > 0.30 Â 4% +0.0 0.35 Â 9% perf-profile.children.cycles-pp.wp_page_copy
> > 0.22 Â 3% +0.0 0.27 Â 2% perf-profile.children.cycles-pp.fput_many
> > 0.60 Â 5% +0.1 0.66 Â 2% perf-profile.children.cycles-pp.rcu_core
> > 0.23 Â 4% +0.1 0.28 Â 9% perf-profile.children.cycles-pp.__pte_alloc
> > 0.45 Â 6% +0.1 0.51 Â 3% perf-profile.children.cycles-pp.pte_alloc_one
> > 0.23 Â 12% +0.1 0.29 Â 16% perf-profile.children.cycles-pp.cleanup_net
> > 0.42 Â 6% +0.1 0.48 Â 8% perf-profile.children.cycles-pp.prep_new_page
> > 0.39 Â 6% +0.1 0.45 Â 2% perf-profile.children.cycles-pp.memset_erms
> > 0.76 Â 2% +0.1 0.83 Â 2% perf-profile.children.cycles-pp.kmem_cache_alloc
> > 0.23 Â 19% +0.1 0.31 perf-profile.children.cycles-pp.path_put
> > 0.73 Â 3% +0.1 0.81 Â 3% perf-profile.children.cycles-pp.__softirqentry_text_start
> > 0.22 Â 13% +0.1 0.32 Â 14% perf-profile.children.cycles-pp.put_pid
> > 0.47 Â 8% +0.1 0.57 Â 3% perf-profile.children.cycles-pp.___might_sleep
> > 0.68 Â 4% +0.1 0.79 Â 6% perf-profile.children.cycles-pp.get_page_from_freelist
> > 0.76 Â 4% +0.1 0.87 Â 5% perf-profile.children.cycles-pp.__alloc_pages_nodemask
> > 0.35 Â 4% +0.1 0.48 Â 11% perf-profile.children.cycles-pp.fanotify_alloc_event
> > 0.79 Â 4% +0.1 0.92 Â 9% perf-profile.children.cycles-pp.apparmor_file_alloc_security
> > 0.86 Â 3% +0.1 0.99 Â 9% perf-profile.children.cycles-pp.security_file_alloc
> > 1.30 Â 3% +0.1 1.45 Â 7% perf-profile.children.cycles-pp.__alloc_file
> > 0.85 Â 6% +0.1 1.00 Â 6% perf-profile.children.cycles-pp.syscall_return_via_sysret
> > 0.73 Â 4% +0.1 0.88 Â 12% perf-profile.children.cycles-pp.security_file_free
> > 0.72 Â 4% +0.2 0.87 Â 12% perf-profile.children.cycles-pp.apparmor_file_free_security
> > 1.32 Â 3% +0.2 1.47 Â 7% perf-profile.children.cycles-pp.alloc_empty_file
> > 1.29 Â 5% +0.2 1.45 Â 3% perf-profile.children.cycles-pp.copy_page_range
> > 1.26 Â 4% +0.2 1.41 Â 3% perf-profile.children.cycles-pp.copy_p4d_range
> > 0.46 Â 17% +0.2 0.64 Â 7% perf-profile.children.cycles-pp.fanotify_free_event
> > 1.45 Â 24% +0.4 1.90 Â 2% perf-profile.children.cycles-pp.fanotify_merge
> > 5.23 Â 2% +0.5 5.70 Â 2% perf-profile.children.cycles-pp.vfs_read
> > 5.24 Â 2% +0.5 5.72 Â 2% perf-profile.children.cycles-pp.ksys_read
> > 5.90 Â 3% +0.8 6.69 Â 3% perf-profile.children.cycles-pp.__GI___libc_read
> > 2.83 Â 2% +2.0 4.79 Â 59% perf-profile.children.cycles-pp.page_fault
> > 2.69 Â 2% +2.0 4.66 Â 61% perf-profile.children.cycles-pp.do_page_fault
> > 30.71 -5.8 24.94 Â 4% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
> > 0.09 +0.0 0.11 Â 12% perf-profile.self.cycles-pp.vma_interval_tree_remove
> > 0.10 +0.0 0.12 Â 13% perf-profile.self.cycles-pp.__rb_insert_augmented
> > 0.13 Â 5% +0.0 0.15 Â 7% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
> > 0.15 Â 4% +0.0 0.18 Â 2% perf-profile.self.cycles-pp.fput_many
> > 0.28 +0.0 0.32 Â 3% perf-profile.self.cycles-pp.kmem_cache_alloc
> > 0.14 Â 9% +0.0 0.19 Â 5% perf-profile.self.cycles-pp.fanotify_alloc_event
> > 0.19 Â 3% +0.0 0.24 Â 21% perf-profile.self.cycles-pp.anon_vma_clone
> > 0.31 Â 10% +0.1 0.37 Â 4% perf-profile.self.cycles-pp.fsnotify
> > 0.38 Â 7% +0.1 0.43 perf-profile.self.cycles-pp.memset_erms
> > 0.21 Â 13% +0.1 0.30 Â 14% perf-profile.self.cycles-pp.put_pid
> > 0.68 Â 5% +0.1 0.78 Â 5% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
> > 0.45 Â 8% +0.1 0.55 Â 3% perf-profile.self.cycles-pp.___might_sleep
> > 0.77 Â 4% +0.1 0.89 Â 10% perf-profile.self.cycles-pp.apparmor_file_alloc_security
> > 0.99 Â 4% +0.1 1.12 Â 2% perf-profile.self.cycles-pp._raw_spin_lock
> > 0.85 Â 6% +0.1 1.00 Â 6% perf-profile.self.cycles-pp.syscall_return_via_sysret
> > 0.71 Â 4% +0.1 0.86 Â 11% perf-profile.self.cycles-pp.apparmor_file_free_security
> > 1.44 Â 23% +0.4 1.88 Â 2% perf-profile.self.cycles-pp.fanotify_merge
> >
> >
> >
> > stress-ng.mmapfork.ops_per_sec
> >
> > 36 +----------------------------------------------------------------------+
> > 34 |-+ + |
> > | + : +.. + + |
> > 32 |..+.. + : + : : : : +..+.. |
> > 30 |-+ + : + +.. .+.. : : : : + .+.. |
> > | : .+.. .+ +. + : : : + +..+..+. +..|
> > 28 |-+ +. +. + + |
> > 26 |-+ |
> > 24 |-+ |
> > | |
> > 22 |-+ |
> > 20 |-+ O |
> > | O O O O O O O O O O O O |
> > 18 |-+ O O O O O O O |
> > 16 +----------------------------------------------------------------------+
> >
> >
> > [*] bisect-good sample
> > [O] bisect-bad sample
> >
> > ***************************************************************************************************
> > lkp-csl-2sp5: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
> > vm/gcc-7/performance/1HDD/ext4/x86_64-rhel-7.6/10%/debian-x86_64-20191114.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002c
> >
> > commit:
> > e94f80f6c4 ("sched/rt: cpupri_find: Trigger a full search as fallback")
> > 6c8116c914 ("sched/fair: Fix condition of avg_load calculation")
> >
> > e94f80f6c4902000 6c8116c914b65be5e4d6f66d69c
> > ---------------- ---------------------------
> > fail:runs %reproduction fail:runs
> > | | |
> > :4 50% 2:4 dmesg.WARNING:at_ip_native_sched_clock/0x
> > 1:4 -25% :4 kmsg.Memory_failure:#:recovery_action_for_clean_LRU_page:Recovered
> > :4 25% 1:4 kmsg.Memory_failure:#:recovery_action_for_high-order_kernel_page:Ignored
> > 1:4 -25% :4 kmsg.Memory_failure:#:recovery_action_for_reserved_kernel_page:Failed
> > 1:4 -25% :4 kmsg.Memory_failure:#:reserved_kernel_page_still_referenced_by#users
> > 0:4 15% 1:4 perf-profile.calltrace.cycles-pp.error_entry
> > 2:4 -9% 1:4 perf-profile.children.cycles-pp.error_entry
> > 0:4 -2% 0:4 perf-profile.self.cycles-pp.error_entry
> > %stddev %change %stddev
> > \ | \
> > 1.45 ± 4% -19.2% 1.17 stress-ng.mmapfork.ops_per_sec
> > 34.69 +3.8% 36.02 stress-ng.time.elapsed_time
> > 34.69 +3.8% 36.02 stress-ng.time.elapsed_time.max
> > 25456 ± 3% +61.0% 40992 stress-ng.time.involuntary_context_switches
> > 48979390 -1.7% 48167776 stress-ng.time.minor_page_faults
> > 2216 +8.6% 2407 stress-ng.time.percent_of_cpu_this_job_got
> > 678.84 +13.8% 772.64 stress-ng.time.system_time
> > 90.09 ± 2% +5.1% 94.70 stress-ng.time.user_time
> > 3736135 -8.1% 3432912 ± 4% stress-ng.vm-splice.ops
> > 3736645 -8.1% 3433013 ± 4% stress-ng.vm-splice.ops_per_sec
> > 22.94 +2.9 25.82 mpstat.cpu.all.sys%
> > 64068 +20.9% 77445 slabinfo.radix_tree_node.active_objs
> > 1191 +24.5% 1483 slabinfo.radix_tree_node.active_slabs
> > 66763 +24.5% 83089 slabinfo.radix_tree_node.num_objs
> > 1191 +24.5% 1483 slabinfo.radix_tree_node.num_slabs
> > 13465 Â 5% -7.5% 12458 Â 4% softirqs.CPU54.RCU
> > 21991 Â 9% -12.2% 19314 Â 2% softirqs.CPU67.TIMER
> > 18381 Â 3% +15.7% 21272 Â 11% softirqs.CPU78.TIMER
> > 19718 Â 5% -6.2% 18501 Â 6% softirqs.CPU85.TIMER
> > 75.25 -4.3% 72.00 vmstat.cpu.id
> > 7158306 +55.5% 11133290 vmstat.memory.cache
> > 41.00 +21.1% 49.67 vmstat.procs.r
> > 164992 -2.7% 160484 vmstat.system.cs
> > 5119 Â 27% +26.0% 6449 sched_debug.cfs_rq:/.min_vruntime.min
> > 40.68 Â 43% -44.2% 22.70 Â 56% sched_debug.cfs_rq:/.removed.load_avg.avg
> > 193.17 Â 22% -25.4% 144.08 Â 27% sched_debug.cfs_rq:/.removed.load_avg.stddev
> > 40.68 Â 43% -44.2% 22.70 Â 56% sched_debug.cfs_rq:/.removed.runnable_avg.avg
> > 193.17 Â 22% -25.4% 144.08 Â 27% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
> > 75.65 -3.8% 72.79 iostat.cpu.idle
> > 21.71 +12.9% 24.51 iostat.cpu.system
> > 3.38 Â173% -100.0% 0.00 iostat.sdb.avgqu-sz.max
> > 55.24 Â173% -100.0% 0.00 iostat.sdb.await.max
> > 7.86 Â173% -100.0% 0.00 iostat.sdb.r_await.max
> > 0.50 Â173% -100.0% 0.00 iostat.sdb.svctm.max
> > 73.58 Â173% -100.0% 0.00 iostat.sdb.w_await.max
> > 3441402 +37.0% 4713857 Â 2% meminfo.Active
> > 3435631 +37.0% 4708095 Â 2% meminfo.Active(anon)
> > 7080694 Â 2% +57.1% 11123426 meminfo.Cached
> > 15441325 +16.2% 17935592 meminfo.Committed_AS
> > 4095008 Â 2% +64.3% 6730005 meminfo.Inactive
> > 4091903 Â 2% +64.4% 6726722 meminfo.Inactive(anon)
> > 3878376 Â 2% +68.4% 6532231 meminfo.Mapped
> > 9729489 +40.4% 13657740 meminfo.Memused
> > 24152 Â 2% +27.3% 30748 meminfo.PageTables
> > 5975926 Â 2% +67.6% 10018605 meminfo.Shmem
> > 1449016 +34.8% 1953751 meminfo.max_used_kB
> > 858313 +36.8% 1173864 Â 2% proc-vmstat.nr_active_anon
> > 406781 -8.2% 373619 Â 2% proc-vmstat.nr_anon_pages
> > 4645359 -2.1% 4548445 proc-vmstat.nr_dirty_background_threshold
> > 9302077 -2.1% 9108013 proc-vmstat.nr_dirty_threshold
> > 1771956 +56.2% 2768047 proc-vmstat.nr_file_pages
> > 46738540 -2.1% 45771374 proc-vmstat.nr_free_pages
> > 1030382 Â 2% +62.8% 1677583 proc-vmstat.nr_inactive_anon
> > 975596 Â 2% +67.0% 1628805 proc-vmstat.nr_mapped
> > 15306 Â 2% -4.8% 14573 Â 4% proc-vmstat.nr_mlock
> > 6053 +26.9% 7682 proc-vmstat.nr_page_table_pages
> > 1495457 Â 2% +66.6% 2491556 proc-vmstat.nr_shmem
> > 28335 +7.4% 30444 proc-vmstat.nr_slab_reclaimable
> > 858313 +36.8% 1173863 Â 2% proc-vmstat.nr_zone_active_anon
> > 1030382 Â 2% +62.8% 1677583 proc-vmstat.nr_zone_inactive_anon
> > 44230 Â 7% +38.1% 61075 Â 6% proc-vmstat.numa_pages_migrated
> > 44230 Â 7% +38.1% 61075 Â 6% proc-vmstat.pgmigrate_success
> > 21392 Â 7% +17.8% 25205 Â 3% interrupts.CPU1.CAL:Function_call_interrupts
> > 29824 Â 8% +60.3% 47813 Â 16% interrupts.CPU1.TLB:TLB_shootdowns
> > 93.75 Â 22% +298.6% 373.67 Â 34% interrupts.CPU10.RES:Rescheduling_interrupts
> > 28425 Â 9% +35.6% 38542 Â 10% interrupts.CPU10.TLB:TLB_shootdowns
> > 28648 Â 9% +21.9% 34913 Â 9% interrupts.CPU11.TLB:TLB_shootdowns
> > 20812 Â 10% +15.8% 24090 Â 3% interrupts.CPU12.CAL:Function_call_interrupts
> > 28668 Â 11% +35.8% 38941 Â 13% interrupts.CPU12.TLB:TLB_shootdowns
> > 97.50 Â 18% +205.3% 297.67 Â 43% interrupts.CPU14.RES:Rescheduling_interrupts
> > 152.00 Â 77% +627.0% 1105 Â 81% interrupts.CPU15.RES:Rescheduling_interrupts
> > 30393 Â 12% +43.5% 43611 Â 19% interrupts.CPU15.TLB:TLB_shootdowns
> > 20253 Â 5% +18.7% 24046 Â 4% interrupts.CPU18.CAL:Function_call_interrupts
> > 19382 Â 14% +26.8% 24576 Â 8% interrupts.CPU19.CAL:Function_call_interrupts
> > 26649 Â 12% +57.4% 41941 Â 18% interrupts.CPU19.TLB:TLB_shootdowns
> > 30299 Â 9% +38.1% 41829 Â 20% interrupts.CPU22.TLB:TLB_shootdowns
> > 43754 Â 18% -37.5% 27337 Â 2% interrupts.CPU25.TLB:TLB_shootdowns
> > 40282 Â 31% -27.2% 29321 Â 18% interrupts.CPU26.TLB:TLB_shootdowns
> > 524.75 Â 52% -62.3% 198.00 Â 57% interrupts.CPU27.RES:Rescheduling_interrupts
> > 381.25 Â 53% -45.1% 209.33 Â102% interrupts.CPU30.RES:Rescheduling_interrupts
> > 224.00 Â 84% -54.9% 101.00 Â 54% interrupts.CPU35.RES:Rescheduling_interrupts
> > 39260 Â 29% -33.2% 26214 Â 25% interrupts.CPU36.TLB:TLB_shootdowns
> > 20901 Â 11% +13.9% 23801 Â 4% interrupts.CPU4.CAL:Function_call_interrupts
> > 29418 Â 11% -13.1% 25571 Â 6% interrupts.CPU40.TLB:TLB_shootdowns
> > 22467 Â 6% -21.6% 17610 Â 10% interrupts.CPU43.CAL:Function_call_interrupts
> > 20028 Â 9% +20.9% 24219 Â 7% interrupts.CPU47.CAL:Function_call_interrupts
> > 28186 Â 10% +33.3% 37576 Â 21% interrupts.CPU47.TLB:TLB_shootdowns
> > 20503 Â 4% +15.4% 23664 Â 3% interrupts.CPU49.CAL:Function_call_interrupts
> > 29724 Â 6% +44.8% 43046 Â 34% interrupts.CPU52.TLB:TLB_shootdowns
> > 20812 Â 6% +23.5% 25710 Â 2% interrupts.CPU53.CAL:Function_call_interrupts
> > 28228 Â 7% +25.4% 35402 Â 3% interrupts.CPU53.TLB:TLB_shootdowns
> > 30617 Â 5% +13.0% 34602 Â 5% interrupts.CPU56.TLB:TLB_shootdowns
> > 28393 Â 9% +14.2% 32419 Â 4% interrupts.CPU59.TLB:TLB_shootdowns
> > 26886 Â 14% +33.6% 35911 Â 17% interrupts.CPU6.TLB:TLB_shootdowns
> > 3607 Â 30% -71.4% 1031 Â 40% interrupts.CPU60.NMI:Non-maskable_interrupts
> > 3607 Â 30% -71.4% 1031 Â 40% interrupts.CPU60.PMI:Performance_monitoring_interrupts
> > 20497 Â 7% +17.8% 24149 Â 6% interrupts.CPU61.CAL:Function_call_interrupts
> > 28713 Â 11% +29.1% 37066 Â 14% interrupts.CPU61.TLB:TLB_shootdowns
> > 20400 Â 2% +17.9% 24051 Â 3% interrupts.CPU63.CAL:Function_call_interrupts
> > 28404 Â 2% +36.6% 38793 Â 21% interrupts.CPU63.TLB:TLB_shootdowns
> > 332.50 Â 74% -84.0% 53.33 Â 39% interrupts.CPU88.RES:Rescheduling_interrupts
> > 55727 Â 23% -47.1% 29476 Â 9% interrupts.CPU91.TLB:TLB_shootdowns
> > 41957 Â 29% -42.7% 24035 Â 6% interrupts.CPU92.TLB:TLB_shootdowns
> > 516.25 Â 57% -83.0% 88.00 Â 65% interrupts.CPU93.RES:Rescheduling_interrupts
> > 21481 Â 6% -17.5% 17720 Â 9% interrupts.CPU95.CAL:Function_call_interrupts
> > 43882 Â 33% -45.1% 24079 Â 10% interrupts.CPU95.TLB:TLB_shootdowns
> > 34.47 Â 18% -7.4 27.02 Â 3% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
> > 36.75 Â 16% -7.2 29.55 Â 3% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
> > 36.91 Â 15% -7.1 29.79 Â 2% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
> > 37.86 Â 14% -6.7 31.20 Â 2% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
> > 37.86 Â 14% -6.6 31.21 Â 2% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
> > 37.86 Â 14% -6.6 31.21 Â 2% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
> > 38.16 Â 15% -6.5 31.62 Â 2% perf-profile.calltrace.cycles-pp.secondary_startup_64
> > 0.61 Â 9% +0.1 0.69 Â 4% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.zap_page_range.do_madvise.__x64_sys_madvise.do_syscall_64
> > 0.84 Â 5% +0.1 0.93 Â 4% perf-profile.calltrace.cycles-pp.clear_page_erms.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
> > 0.93 Â 4% +0.1 1.03 Â 4% perf-profile.calltrace.cycles-pp.mem_cgroup_try_charge.mem_cgroup_try_charge_delay.handle_pte_fault.__handle_mm_fault.handle_mm_fault
> > 1.13 Â 5% +0.1 1.26 Â 3% perf-profile.calltrace.cycles-pp.mem_cgroup_try_charge_delay.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_page_fault
> > 1.17 Â 6% +0.1 1.30 Â 3% perf-profile.calltrace.cycles-pp.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault
> > 1.55 Â 2% +0.1 1.70 Â 2% perf-profile.calltrace.cycles-pp._raw_spin_lock.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_page_fault
> > 0.92 Â 13% +0.2 1.09 Â 5% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
> > 1.63 Â 6% +0.2 1.83 Â 3% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault
> > 1.75 Â 6% +0.2 1.96 Â 2% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault
> > 1.86 Â 6% +0.2 2.08 Â 2% perf-profile.calltrace.cycles-pp.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_page_fault
> > 2.01 Â 9% +0.3 2.28 Â 2% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
> > 0.28 Â100% +0.3 0.58 Â 2% perf-profile.calltrace.cycles-pp.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
> > 0.28 Â100% +0.3 0.59 Â 5% perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.zap_page_range.do_madvise
> > 0.81 Â 8% +0.4 1.17 Â 16% perf-profile.calltrace.cycles-pp.release_pages.__pagevec_release.shmem_undo_range.shmem_truncate_range.shmem_evict_inode
> > 0.82 Â 8% +0.4 1.17 Â 17% perf-profile.calltrace.cycles-pp.__pagevec_release.shmem_undo_range.shmem_truncate_range.shmem_evict_inode.evict
> > 2.18 Â 4% +0.4 2.57 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.unmap_page_range.unmap_vmas.exit_mmap
> > 2.22 Â 4% +0.4 2.62 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.unmap_page_range.unmap_vmas.exit_mmap.mmput
> > 2.35 Â 8% +0.4 2.77 Â 6% perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
> > 2.34 Â 8% +0.4 2.76 Â 6% perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.prepare_exit_to_usermode
> > 2.36 Â 8% +0.4 2.78 Â 7% perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
> > 2.36 Â 8% +0.4 2.78 Â 6% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
> > 2.40 Â 8% +0.4 2.83 Â 6% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
> > 2.37 Â 8% +0.4 2.80 Â 6% perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
> > 2.76 Â 7% +0.5 3.27 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
> > 2.77 Â 7% +0.5 3.28 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
> > 5.51 Â 3% +0.6 6.14 Â 2% perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_page_fault.page_fault
> > 0.35 Â100% +0.7 1.01 Â 8% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
> > 6.29 Â 11% +1.0 7.33 Â 2% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_read_slowpath.__x64_sys_msync.do_syscall_64
> > 7.17 Â 11% +1.2 8.34 Â 2% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_read_slowpath.__x64_sys_msync.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 7.57 Â 11% +1.2 8.81 Â 2% perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.__x64_sys_msync.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 9.61 Â 11% +1.3 10.93 perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.do_madvise
> > 9.17 Â 11% +1.4 10.56 Â 3% perf-profile.calltrace.cycles-pp.__x64_sys_msync.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 11.68 Â 11% +1.6 13.29 Â 2% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.do_madvise.__x64_sys_madvise
> > 12.11 Â 12% +1.6 13.76 Â 2% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write_killable.do_madvise.__x64_sys_madvise.do_syscall_64
> > 34.59 Â 18% -7.4 27.14 Â 2% perf-profile.children.cycles-pp.intel_idle
> > 37.20 Â 16% -7.0 30.20 Â 2% perf-profile.children.cycles-pp.cpuidle_enter_state
> > 37.20 Â 16% -7.0 30.20 Â 2% perf-profile.children.cycles-pp.cpuidle_enter
> > 37.86 Â 14% -6.6 31.21 Â 2% perf-profile.children.cycles-pp.start_secondary
> > 38.16 Â 15% -6.5 31.62 Â 2% perf-profile.children.cycles-pp.secondary_startup_64
> > 38.16 Â 15% -6.5 31.62 Â 2% perf-profile.children.cycles-pp.cpu_startup_entry
> > 38.17 Â 15% -6.5 31.63 Â 2% perf-profile.children.cycles-pp.do_idle
> > 0.23 Â 25% -0.1 0.17 Â 14% perf-profile.children.cycles-pp.irq_enter
> > 0.08 Â 5% +0.0 0.09 Â 5% perf-profile.children.cycles-pp.select_task_rq_fair
> > 0.07 Â 13% +0.0 0.08 Â 5% perf-profile.children.cycles-pp.security_file_alloc
> > 0.06 Â 11% +0.0 0.08 Â 10% perf-profile.children.cycles-pp.__pthread_enable_asynccancel
> > 0.18 Â 2% +0.0 0.21 Â 2% perf-profile.children.cycles-pp.__perf_sw_event
> > 0.04 Â 57% +0.0 0.06 Â 7% perf-profile.children.cycles-pp.apparmor_file_alloc_security
> > 0.19 Â 7% +0.0 0.22 Â 3% perf-profile.children.cycles-pp.page_remove_rmap
> > 0.08 Â 13% +0.0 0.11 Â 12% perf-profile.children.cycles-pp.uncharge_batch
> > 0.15 Â 14% +0.0 0.18 Â 2% perf-profile.children.cycles-pp.__alloc_file
> > 0.15 Â 10% +0.0 0.18 Â 2% perf-profile.children.cycles-pp.alloc_empty_file
> > 0.20 Â 10% +0.0 0.24 Â 5% perf-profile.children.cycles-pp.___slab_alloc
> > 0.14 Â 11% +0.0 0.18 Â 9% perf-profile.children.cycles-pp.alloc_set_pte
> > 0.15 Â 14% +0.0 0.18 Â 2% perf-profile.children.cycles-pp.alloc_file
> > 0.39 Â 6% +0.0 0.44 Â 2% perf-profile.children.cycles-pp.___might_sleep
> > 0.01 Â173% +0.0 0.06 Â 13% perf-profile.children.cycles-pp.free_pcp_prepare
> > 0.28 Â 10% +0.0 0.33 perf-profile.children.cycles-pp.syscall_return_via_sysret
> > 0.10 Â 23% +0.1 0.15 Â 14% perf-profile.children.cycles-pp.irq_work_run_list
> > 0.00 +0.1 0.05 perf-profile.children.cycles-pp.call_rcu
> > 0.52 Â 6% +0.1 0.58 Â 2% perf-profile.children.cycles-pp.do_brk_flags
> > 0.21 Â 14% +0.1 0.27 perf-profile.children.cycles-pp.alloc_file_pseudo
> > 0.39 Â 9% +0.1 0.45 Â 2% perf-profile.children.cycles-pp.up_write
> > 0.46 Â 4% +0.1 0.52 Â 8% perf-profile.children.cycles-pp.sync_regs
> > 0.36 Â 10% +0.1 0.43 Â 4% perf-profile.children.cycles-pp.on_each_cpu_cond_mask
> > 0.16 Â 33% +0.1 0.24 Â 5% perf-profile.children.cycles-pp.tick_nohz_irq_exit
> > 0.23 Â 19% +0.1 0.32 Â 6% perf-profile.children.cycles-pp.filemap_map_pages
> > 0.42 Â 15% +0.1 0.52 Â 4% perf-profile.children.cycles-pp.osq_unlock
> > 0.61 Â 12% +0.1 0.72 Â 4% perf-profile.children.cycles-pp.smp_call_function_many_cond
> > 1.28 Â 5% +0.1 1.39 Â 3% perf-profile.children.cycles-pp.mem_cgroup_try_charge_delay
> > 1.31 Â 5% +0.1 1.43 Â 3% perf-profile.children.cycles-pp.prep_new_page
> > 0.86 Â 13% +0.1 1.01 perf-profile.children.cycles-pp.mmap_region
> > 2.23 Â 6% +0.2 2.43 Â 3% perf-profile.children.cycles-pp.get_page_from_freelist
> > 2.44 Â 6% +0.2 2.67 Â 3% perf-profile.children.cycles-pp.__alloc_pages_nodemask
> > 2.48 Â 7% +0.2 2.73 Â 3% perf-profile.children.cycles-pp.alloc_pages_vma
> > 2.40 Â 8% +0.4 2.83 Â 6% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
> > 2.38 Â 8% +0.4 2.80 Â 6% perf-profile.children.cycles-pp.prepare_exit_to_usermode
> > 3.12 Â 2% +0.4 3.55 perf-profile.children.cycles-pp.unmap_page_range
> > 3.07 Â 2% +0.4 3.50 perf-profile.children.cycles-pp.unmap_vmas
> > 0.57 Â 25% +0.5 1.02 Â 8% perf-profile.children.cycles-pp.menu_select
> > 4.06 Â 4% +0.5 4.58 perf-profile.children.cycles-pp.tlb_flush_mmu
> > 4.83 Â 4% +0.5 5.36 perf-profile.children.cycles-pp.release_pages
> > 6.64 Â 5% +0.8 7.41 Â 2% perf-profile.children.cycles-pp.handle_pte_fault
> > 9.18 Â 11% +1.4 10.57 Â 3% perf-profile.children.cycles-pp.__x64_sys_msync
> > 9.69 Â 11% +1.5 11.19 perf-profile.children.cycles-pp.rwsem_down_read_slowpath
> > 12.15 Â 12% +1.7 13.87 Â 2% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
> > 17.80 Â 11% +2.6 20.44 Â 2% perf-profile.children.cycles-pp.osq_lock
> > 20.98 Â 11% +3.1 24.05 Â 2% perf-profile.children.cycles-pp.rwsem_optimistic_spin
> > 34.59 Â 18% -7.4 27.14 Â 2% perf-profile.self.cycles-pp.intel_idle
> > 0.05 Â 8% +0.0 0.07 Â 7% perf-profile.self.cycles-pp.do_brk_flags
> > 0.07 Â 11% +0.0 0.09 perf-profile.self.cycles-pp.anon_vma_interval_tree_remove
> > 0.06 Â 11% +0.0 0.08 Â 10% perf-profile.self.cycles-pp.__pthread_enable_asynccancel
> > 0.15 Â 7% +0.0 0.18 Â 2% perf-profile.self.cycles-pp.page_remove_rmap
> > 0.17 Â 6% +0.0 0.19 Â 4% perf-profile.self.cycles-pp.find_get_entries
> > 0.17 Â 6% +0.0 0.19 Â 4% perf-profile.self.cycles-pp.handle_mm_fault
> > 0.23 Â 6% +0.0 0.26 Â 4% perf-profile.self.cycles-pp._raw_spin_lock_irq
> > 0.37 Â 6% +0.0 0.42 Â 2% perf-profile.self.cycles-pp.___might_sleep
> > 0.21 Â 8% +0.0 0.26 Â 4% perf-profile.self.cycles-pp.do_madvise
> > 0.28 Â 10% +0.0 0.33 perf-profile.self.cycles-pp.syscall_return_via_sysret
> > 0.39 Â 8% +0.1 0.44 Â 2% perf-profile.self.cycles-pp.up_write
> > 0.13 Â 22% +0.1 0.19 Â 6% perf-profile.self.cycles-pp.filemap_map_pages
> > 0.45 Â 3% +0.1 0.52 Â 8% perf-profile.self.cycles-pp.sync_regs
> > 0.67 Â 6% +0.1 0.75 perf-profile.self.cycles-pp.get_page_from_freelist
> > 0.42 Â 16% +0.1 0.51 Â 3% perf-profile.self.cycles-pp.osq_unlock
> > 0.20 Â 18% +0.2 0.40 Â 7% perf-profile.self.cycles-pp.cpuidle_enter_state
> > 1.94 Â 5% +0.2 2.15 perf-profile.self.cycles-pp._raw_spin_lock
> > 0.20 Â 39% +0.4 0.56 Â 5% perf-profile.self.cycles-pp.menu_select
> > 17.31 Â 11% +2.6 19.86 Â 2% perf-profile.self.cycles-pp.osq_lock
> >
> >
> >
> > ***************************************************************************************************
> > lkp-hsw-d01: 8 threads Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz with 8G memory
> >
> >
> >
> >
> > Disclaimer:
> > Results have been estimated based on internal Intel analysis and are provided
> > for informational purposes only. Any difference in system hardware or software
> > design or configuration may affect actual performance.
> >
> >
> > Thanks,
> > Rong Chen
> >
> >
> > _______________________________________________
> > LKP mailing list -- lkp@xxxxxxxxxxxx
> > To unsubscribe send an email to lkp-leave@xxxxxxxxxxxx
> >
>
> --
> Zhengjun Xing
> _______________________________________________
> LKP mailing list -- lkp@xxxxxxxxxxxx
> To unsubscribe send an email to lkp-leave@xxxxxxxxxxxx