[btrfs] 4c468fd7485: +7.8% blogbench.write_score, -5.1% turbostat.Pkg_W

From: Fengguang Wu
Date: Sat Aug 16 2014 - 03:59:01 EST


Hi Chris,

FYI, we noticed increased blogbench write performance and reduced package power consumption with

commit 4c468fd74859d901c0b78b42bef189295e00d74f ("btrfs: disable strict file flushes for renames and truncates")

test case: lkp-sb02/blogbench/1HDD-btrfs
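
If you want to rebuild the two kernels being compared below yourself, the two sides of the table are simply the comparison base and the commit under test. A minimal sketch, assuming a linux.git checkout and your usual kernel build/boot flow (the LKP harness automates this differently):

# comparison base (left column of the table below)
git checkout 0954d74f8f37a47
# commit under test: "btrfs: disable strict file flushes for renames and truncates"
git checkout 4c468fd74859d901c0b78b42bef189295e00d74f

Build and boot each in turn, then run the reproduce steps at the end of this mail on both kernels.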

0954d74f8f37a47 (base)    4c468fd74859d901c0b78b42b (this commit)
---------------           -------------------------
   value ±stddev%    %change    value ±stddev%    metric
1094 ± 1% +7.8% 1180 ± 2% TOTAL blogbench.write_score
1396 ±19% -100.0% 0 ± 0% TOTAL slabinfo.btrfs_delalloc_work.active_objs
1497 ±17% -100.0% 0 ± 0% TOTAL slabinfo.btrfs_delalloc_work.num_objs
426 ±45% -100.0% 0 ± 0% TOTAL proc-vmstat.nr_vmscan_write
1.02 ±38% +193.1% 2.99 ±37% TOTAL turbostat.%pc6
0.12 ±48% +113.8% 0.25 ±29% TOTAL turbostat.%pc3
0.38 ±18% +117.7% 0.84 ±34% TOTAL turbostat.%pc2
19377 ±14% -50.9% 9520 ±20% TOTAL proc-vmstat.workingset_refault
44 ±41% +68.8% 75 ±28% TOTAL cpuidle.POLL.usage
31549 ± 1% +95.7% 61732 ± 1% TOTAL softirqs.BLOCK
4547 ±10% -38.3% 2804 ± 9% TOTAL slabinfo.btrfs_ordered_extent.active_objs
4628 ±10% -37.1% 2913 ± 9% TOTAL slabinfo.btrfs_ordered_extent.num_objs
17597 ± 8% -30.2% 12291 ±14% TOTAL proc-vmstat.nr_writeback
70335 ± 8% -30.1% 49174 ±14% TOTAL meminfo.Writeback
3606 ± 6% -29.1% 2556 ±10% TOTAL slabinfo.mnt_cache.active_objs
14763 ±12% -29.9% 10350 ± 8% TOTAL proc-vmstat.nr_dirty
3766 ± 5% -27.8% 2720 ±10% TOTAL slabinfo.mnt_cache.num_objs
3509 ± 6% -28.5% 2510 ±11% TOTAL slabinfo.kmalloc-4096.active_objs
59201 ±11% -30.1% 41396 ± 8% TOTAL meminfo.Dirty
479 ±13% -30.5% 333 ±10% TOTAL slabinfo.kmalloc-4096.num_slabs
479 ±13% -30.5% 333 ±10% TOTAL slabinfo.kmalloc-4096.active_slabs
3636 ± 6% -26.6% 2669 ±10% TOTAL slabinfo.kmalloc-4096.num_objs
6040 ± 8% -28.6% 4314 ± 6% TOTAL slabinfo.kmalloc-96.num_objs
5358 ± 5% -25.1% 4011 ± 7% TOTAL slabinfo.kmalloc-96.active_objs
757208 ± 4% -22.1% 589874 ± 4% TOTAL meminfo.MemFree
189508 ± 4% -22.2% 147518 ± 4% TOTAL proc-vmstat.nr_free_pages
762781 ± 4% -21.1% 601525 ± 4% TOTAL vmstat.memory.free
10491 ± 2% -16.8% 8725 ± 2% TOTAL slabinfo.kmalloc-64.num_objs
2513 ± 4% +16.3% 2923 ± 4% TOTAL slabinfo.kmalloc-128.active_objs
9768 ± 3% -15.1% 8298 ± 1% TOTAL slabinfo.kmalloc-64.active_objs
2627 ± 4% +14.0% 2995 ± 4% TOTAL slabinfo.kmalloc-128.num_objs
96242 ± 2% +15.5% 111120 ± 2% TOTAL slabinfo.btrfs_path.active_objs
3448 ± 2% +15.1% 3968 ± 2% TOTAL slabinfo.btrfs_path.num_slabs
3448 ± 2% +15.1% 3968 ± 2% TOTAL slabinfo.btrfs_path.active_slabs
96580 ± 2% +15.1% 111132 ± 2% TOTAL slabinfo.btrfs_path.num_objs
2526 ± 2% +13.5% 2867 ± 1% TOTAL slabinfo.btrfs_extent_state.num_slabs
2526 ± 2% +13.5% 2867 ± 1% TOTAL slabinfo.btrfs_extent_state.active_slabs
106133 ± 2% +13.5% 120434 ± 1% TOTAL slabinfo.btrfs_extent_state.num_objs
104326 ± 2% +12.3% 117189 ± 1% TOTAL slabinfo.btrfs_extent_state.active_objs
110759 ± 2% +13.4% 125640 ± 2% TOTAL slabinfo.btrfs_inode.active_objs
110759 ± 2% +13.4% 125642 ± 2% TOTAL slabinfo.btrfs_delayed_node.active_objs
4261 ± 2% +13.4% 4832 ± 2% TOTAL slabinfo.btrfs_delayed_node.num_slabs
4261 ± 2% +13.4% 4832 ± 2% TOTAL slabinfo.btrfs_delayed_node.active_slabs
110797 ± 2% +13.4% 125663 ± 2% TOTAL slabinfo.btrfs_delayed_node.num_objs
110815 ± 2% +13.4% 125669 ± 2% TOTAL slabinfo.btrfs_inode.num_objs
6926 ± 2% +13.4% 7853 ± 2% TOTAL slabinfo.btrfs_inode.num_slabs
6926 ± 2% +13.4% 7853 ± 2% TOTAL slabinfo.btrfs_inode.active_slabs
5607 ± 3% -11.0% 4991 ± 3% TOTAL slabinfo.kmalloc-256.active_objs
6077 ± 2% -9.9% 5476 ± 3% TOTAL slabinfo.kmalloc-256.num_objs
11153 ± 1% -7.7% 10295 ± 2% TOTAL proc-vmstat.nr_slab_unreclaimable
547824 ± 3% +16.5% 638368 ± 8% TOTAL meminfo.Inactive(file)
112124 ± 2% +11.6% 125105 ± 2% TOTAL slabinfo.radix_tree_node.active_objs
112169 ± 2% +11.6% 125134 ± 2% TOTAL slabinfo.radix_tree_node.num_objs
4005 ± 2% +11.6% 4468 ± 2% TOTAL slabinfo.radix_tree_node.num_slabs
4005 ± 2% +11.6% 4468 ± 2% TOTAL slabinfo.radix_tree_node.active_slabs
551119 ± 3% +16.4% 641663 ± 8% TOTAL meminfo.Inactive
285596 ± 2% +11.4% 318160 ± 2% TOTAL meminfo.SReclaimable
156 ± 3% +118.0% 340 ± 2% TOTAL iostat.sda.w/s
282 ± 3% -43.2% 160 ± 3% TOTAL iostat.sda.avgrq-sz
1.45 ±12% -28.9% 1.03 ±18% TOTAL iostat.sda.rrqm/s
633 ± 2% -26.5% 465 ± 2% TOTAL iostat.sda.wrqm/s
154423 ± 5% +17.4% 181309 ± 3% TOTAL time.voluntary_context_switches
536 ± 5% -11.5% 474 ± 9% TOTAL iostat.sda.await
102.71 ± 5% +10.4% 113.36 ± 6% TOTAL iostat.sda.avgqu-sz
20842 ± 2% -6.5% 19493 ± 2% TOTAL iostat.sda.wkB/s
20856 ± 2% -6.4% 19525 ± 2% TOTAL vmstat.io.bo
75.48 ± 4% -6.9% 70.27 ± 5% TOTAL turbostat.%c0
285 ± 4% -6.6% 266 ± 5% TOTAL time.percent_of_cpu_this_job_got
34.58 ± 2% -5.5% 32.68 ± 3% TOTAL turbostat.Cor_W
39.86 ± 2% -5.1% 37.82 ± 3% TOTAL turbostat.Pkg_W
5805 ± 1% -4.3% 5558 ± 3% TOTAL vmstat.system.in
10069454 ± 1% +6.3% 10699830 ± 1% TOTAL time.file_system_outputs
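
For anyone double-checking the numbers: the %change column is just the relative difference between the two sides' values. A quick sanity check of the headline row (note the table prints already-rounded averages, so recomputing from them lands one rounding step away from the reported +7.8%):

# relative change of blogbench.write_score between the base (1094) and the
# tested commit (1180); prints +7.9%, i.e. the reported +7.8% up to rounding
# of the table's rounded averages
awk 'BEGIN { base = 1094; tested = 1180; printf "%+.1f%%\n", (tested - base) / base * 100 }'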


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Thanks,
Fengguang

To reproduce:

# pin every CPU to the performance cpufreq governor so frequency scaling
# does not skew the results
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor

# create a fresh btrfs filesystem on the test partition and mount it
mkfs -t btrfs /dev/sda2
mount -t btrfs /dev/sda2 /fs/sda2

# run blogbench against the btrfs mount
./blogbench -d /fs/sda2
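
The turbostat and iostat rows in the table come from the LKP monitors that run alongside the benchmark and are not part of the reproduce script above. As a rough stand-in (not the exact harness), turbostat can wrap the benchmark command directly and report package power (Pkg_W) and C-state residency for the run:

# rough stand-in for the LKP power monitor: run the benchmark under turbostat,
# which prints per-package power and C-state residency when the command exits
# (needs root for MSR access)
turbostat ./blogbench -d /fs/sda2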