Re: [btrfs] 4c468fd7485: +7.8% blogbench.write_score, -5.1% turbostat.Pkg_W

From: Fengguang Wu
Date: Sat Aug 16 2014 - 09:10:52 EST


Hi Abhay,

On Sat, Aug 16, 2014 at 05:30:35PM +0530, Abhay Sachan wrote:
> Hi Fengguang,
> Sorry for the out of topic question, but what benchmark is this?
> I have heard about blogbench, but it doesn't give output in this format AFAIK.

It is blogbench run in the lkp-tests framework.

https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/

It collects various system stats while blogbench runs, then presents the
blogbench results together with slabinfo, meminfo, proc-vmstat, turbostat,
softirqs and other stats.

The basic steps to reproduce this report are:

$ split-job jobs/blogbench.yaml
jobs/blogbench.yaml => ./blogbench-1HDD-ext4.yaml
jobs/blogbench.yaml => ./blogbench-1HDD-xfs.yaml
jobs/blogbench.yaml => ./blogbench-1HDD-btrfs.yaml

# requires Debian/Ubuntu for now
$ bin/setup-local --hdd /dev/sdaX ./blogbench-1HDD-btrfs.yaml

$ bin/run-local ./blogbench-1HDD-btrfs.yaml

The report is generated by the "sbin/compare" script.
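Each line of the compare output quoted below shows the base commit's mean
value with its stddev percentage, the percent change, and the patched
commit's mean with its stddev percentage. A minimal shell sketch of how
the percent-change column can be derived from per-run numbers (the run
values here are made up for illustration; real values come from repeated
bin/run-local runs):

```shell
# Hypothetical per-run results (made-up numbers, 3 runs per commit).
base="990 1000 1010"    # e.g. blogbench.write_score on the parent commit
head="1068 1078 1088"   # the same metric on the patched commit

# Average a whitespace-separated list of numbers.
avg() { echo "$1" | tr ' ' '\n' | awk '{s += $1; n++} END {printf "%.0f", s/n}'; }

base_avg=$(avg "$base")
head_avg=$(avg "$head")

# Percent change between the two averages, as shown in the TOTAL lines.
awk -v b="$base_avg" -v h="$head_avg" \
    'BEGIN {printf "%+.1f%% (%s -> %s)\n", (h - b) / b * 100, b, h}'
# prints: +7.8% (1000 -> 1078)
```

The stddev percentages in the report are likewise computed from the spread
of the repeated runs, which is why unstable metrics show large ± values.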

Thanks,
Fengguang

> On Sat, Aug 16, 2014 at 1:22 PM, Fengguang Wu <fengguang.wu@xxxxxxxxx> wrote:
> > Hi Chris,
> >
> > FYI, we noticed increased performance and reduced power consumption on
> >
> > commit 4c468fd74859d901c0b78b42bef189295e00d74f ("btrfs: disable strict file flushes for renames and truncates")
> >
> > test case: lkp-sb02/blogbench/1HDD-btrfs
> >
> > 0954d74f8f37a47 4c468fd74859d901c0b78b42b
> > --------------- -------------------------
> > 1094 ± 1% +7.8% 1180 ± 2% TOTAL blogbench.write_score
> > 1396 ±19% -100.0% 0 ± 0% TOTAL slabinfo.btrfs_delalloc_work.active_objs
> > 1497 ±17% -100.0% 0 ± 0% TOTAL slabinfo.btrfs_delalloc_work.num_objs
> > 426 ±45% -100.0% 0 ± 0% TOTAL proc-vmstat.nr_vmscan_write
> > 1.02 ±38% +193.1% 2.99 ±37% TOTAL turbostat.%pc6
> > 0.12 ±48% +113.8% 0.25 ±29% TOTAL turbostat.%pc3
> > 0.38 ±18% +117.7% 0.84 ±34% TOTAL turbostat.%pc2
> > 19377 ±14% -50.9% 9520 ±20% TOTAL proc-vmstat.workingset_refault
> > 44 ±41% +68.8% 75 ±28% TOTAL cpuidle.POLL.usage
> > 31549 ± 1% +95.7% 61732 ± 1% TOTAL softirqs.BLOCK
> > 4547 ±10% -38.3% 2804 ± 9% TOTAL slabinfo.btrfs_ordered_extent.active_objs
> > 4628 ±10% -37.1% 2913 ± 9% TOTAL slabinfo.btrfs_ordered_extent.num_objs
> > 17597 ± 8% -30.2% 12291 ±14% TOTAL proc-vmstat.nr_writeback
> > 70335 ± 8% -30.1% 49174 ±14% TOTAL meminfo.Writeback
> > 3606 ± 6% -29.1% 2556 ±10% TOTAL slabinfo.mnt_cache.active_objs
> > 14763 ±12% -29.9% 10350 ± 8% TOTAL proc-vmstat.nr_dirty
> > 3766 ± 5% -27.8% 2720 ±10% TOTAL slabinfo.mnt_cache.num_objs
> > 3509 ± 6% -28.5% 2510 ±11% TOTAL slabinfo.kmalloc-4096.active_objs
> > 59201 ±11% -30.1% 41396 ± 8% TOTAL meminfo.Dirty
> > 479 ±13% -30.5% 333 ±10% TOTAL slabinfo.kmalloc-4096.num_slabs
> > 479 ±13% -30.5% 333 ±10% TOTAL slabinfo.kmalloc-4096.active_slabs
> > 3636 ± 6% -26.6% 2669 ±10% TOTAL slabinfo.kmalloc-4096.num_objs
> > 6040 ± 8% -28.6% 4314 ± 6% TOTAL slabinfo.kmalloc-96.num_objs
> > 5358 ± 5% -25.1% 4011 ± 7% TOTAL slabinfo.kmalloc-96.active_objs
> > 757208 ± 4% -22.1% 589874 ± 4% TOTAL meminfo.MemFree
> > 189508 ± 4% -22.2% 147518 ± 4% TOTAL proc-vmstat.nr_free_pages
> > 762781 ± 4% -21.1% 601525 ± 4% TOTAL vmstat.memory.free
> > 10491 ± 2% -16.8% 8725 ± 2% TOTAL slabinfo.kmalloc-64.num_objs
> > 2513 ± 4% +16.3% 2923 ± 4% TOTAL slabinfo.kmalloc-128.active_objs
> > 9768 ± 3% -15.1% 8298 ± 1% TOTAL slabinfo.kmalloc-64.active_objs
> > 2627 ± 4% +14.0% 2995 ± 4% TOTAL slabinfo.kmalloc-128.num_objs
> > 96242 ± 2% +15.5% 111120 ± 2% TOTAL slabinfo.btrfs_path.active_objs
> > 3448 ± 2% +15.1% 3968 ± 2% TOTAL slabinfo.btrfs_path.num_slabs
> > 3448 ± 2% +15.1% 3968 ± 2% TOTAL slabinfo.btrfs_path.active_slabs
> > 96580 ± 2% +15.1% 111132 ± 2% TOTAL slabinfo.btrfs_path.num_objs
> > 2526 ± 2% +13.5% 2867 ± 1% TOTAL slabinfo.btrfs_extent_state.num_slabs
> > 2526 ± 2% +13.5% 2867 ± 1% TOTAL slabinfo.btrfs_extent_state.active_slabs
> > 106133 ± 2% +13.5% 120434 ± 1% TOTAL slabinfo.btrfs_extent_state.num_objs
> > 104326 ± 2% +12.3% 117189 ± 1% TOTAL slabinfo.btrfs_extent_state.active_objs
> > 110759 ± 2% +13.4% 125640 ± 2% TOTAL slabinfo.btrfs_inode.active_objs
> > 110759 ± 2% +13.4% 125642 ± 2% TOTAL slabinfo.btrfs_delayed_node.active_objs
> > 4261 ± 2% +13.4% 4832 ± 2% TOTAL slabinfo.btrfs_delayed_node.num_slabs
> > 4261 ± 2% +13.4% 4832 ± 2% TOTAL slabinfo.btrfs_delayed_node.active_slabs
> > 110797 ± 2% +13.4% 125663 ± 2% TOTAL slabinfo.btrfs_delayed_node.num_objs
> > 110815 ± 2% +13.4% 125669 ± 2% TOTAL slabinfo.btrfs_inode.num_objs
> > 6926 ± 2% +13.4% 7853 ± 2% TOTAL slabinfo.btrfs_inode.num_slabs
> > 6926 ± 2% +13.4% 7853 ± 2% TOTAL slabinfo.btrfs_inode.active_slabs
> > 5607 ± 3% -11.0% 4991 ± 3% TOTAL slabinfo.kmalloc-256.active_objs
> > 6077 ± 2% -9.9% 5476 ± 3% TOTAL slabinfo.kmalloc-256.num_objs
> > 11153 ± 1% -7.7% 10295 ± 2% TOTAL proc-vmstat.nr_slab_unreclaimable
> > 547824 ± 3% +16.5% 638368 ± 8% TOTAL meminfo.Inactive(file)
> > 112124 ± 2% +11.6% 125105 ± 2% TOTAL slabinfo.radix_tree_node.active_objs
> > 112169 ± 2% +11.6% 125134 ± 2% TOTAL slabinfo.radix_tree_node.num_objs
> > 4005 ± 2% +11.6% 4468 ± 2% TOTAL slabinfo.radix_tree_node.num_slabs
> > 4005 ± 2% +11.6% 4468 ± 2% TOTAL slabinfo.radix_tree_node.active_slabs
> > 551119 ± 3% +16.4% 641663 ± 8% TOTAL meminfo.Inactive
> > 285596 ± 2% +11.4% 318160 ± 2% TOTAL meminfo.SReclaimable
> > 156 ± 3% +118.0% 340 ± 2% TOTAL iostat.sda.w/s
> > 282 ± 3% -43.2% 160 ± 3% TOTAL iostat.sda.avgrq-sz
> > 1.45 ±12% -28.9% 1.03 ±18% TOTAL iostat.sda.rrqm/s
> > 633 ± 2% -26.5% 465 ± 2% TOTAL iostat.sda.wrqm/s
> > 154423 ± 5% +17.4% 181309 ± 3% TOTAL time.voluntary_context_switches
> > 536 ± 5% -11.5% 474 ± 9% TOTAL iostat.sda.await
> > 102.71 ± 5% +10.4% 113.36 ± 6% TOTAL iostat.sda.avgqu-sz
> > 20842 ± 2% -6.5% 19493 ± 2% TOTAL iostat.sda.wkB/s
> > 20856 ± 2% -6.4% 19525 ± 2% TOTAL vmstat.io.bo
> > 75.48 ± 4% -6.9% 70.27 ± 5% TOTAL turbostat.%c0
> > 285 ± 4% -6.6% 266 ± 5% TOTAL time.percent_of_cpu_this_job_got
> > 34.58 ± 2% -5.5% 32.68 ± 3% TOTAL turbostat.Cor_W
> > 39.86 ± 2% -5.1% 37.82 ± 3% TOTAL turbostat.Pkg_W
> > 5805 ± 1% -4.3% 5558 ± 3% TOTAL vmstat.system.in
> > 10069454 ± 1% +6.3% 10699830 ± 1% TOTAL time.file_system_outputs
> >
> >
> > Disclaimer:
> > Results have been estimated based on internal Intel analysis and are provided
> > for informational purposes only. Any difference in system hardware or software
> > design or configuration may affect actual performance.
> >
> > Thanks,
> > Fengguang
>
>
>
> --
> Abhay