[LKP] [mutex] 6aa15f5a2fe: -9.2% will-it-scale.per_process_ops

From: Huang Ying
Date: Fri Feb 13 2015 - 00:37:43 EST


FYI, we noticed the below changes on

git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core
commit 6aa15f5a2febe058056180786bb39513ad5ae70d ("mutex: In mutex_spin_on_owner(), return true when owner changes")
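As a rough userspace model of what the commit subject describes (this is a simplified sketch, not the kernel code: `struct task`, `struct mutex_model`, and the bare `on_cpu` flag are invented stand-ins for the real task/mutex state), the spinner's decision now treats an owner *change* as progress worth more spinning, where before it looked like a reason to sleep:

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's task_struct / struct mutex. */
struct task { bool on_cpu; };
struct mutex_model { struct task *owner; };

/*
 * Model of mutex_spin_on_owner() after the commit: return true when the
 * caller should keep optimistic-spinning, false when it should sleep.
 */
static bool mutex_spin_on_owner(struct mutex_model *lock, struct task *owner)
{
	while (lock->owner == owner) {
		if (!owner->on_cpu)
			return false;	/* owner blocked: give up and sleep */
		/* cpu_relax() in the kernel; busy-wait here */
	}
	/*
	 * Reaching here means the owner released the lock or a new owner
	 * took it. Post-commit both count as progress, so return true and
	 * keep spinning. Pre-commit this was "return lock->owner == NULL",
	 * so an owner hand-off sent the spinner to sleep instead.
	 */
	return true;
}
```

If that reading of the change is right, it is consistent with the numbers below: spinners that used to sleep on owner hand-off now stay in `mutex_optimistic_spin` (its cycles up ~4.5x) and voluntary context switches collapse by ~99%.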


testbox/testcase/testparams: wsm/will-it-scale/performance-writeseek3

afffc6c1805d98e0 6aa15f5a2febe058056180786b
---------------- --------------------------
%stddev %change %stddev
\ | \
27329774 ± 5% -98.7% 350559 ± 4% will-it-scale.time.voluntary_context_switches
1401 ± 4% +340.4% 6172 ± 9% will-it-scale.time.involuntary_context_switches
402 ± 7% +157.9% 1036 ± 0% will-it-scale.time.system_time
141 ± 6% +146.3% 347 ± 0% will-it-scale.time.percent_of_cpu_this_job_got
28.29 ± 4% -25.7% 21.01 ± 1% will-it-scale.time.user_time
777773 ± 0% -9.2% 706114 ± 7% will-it-scale.per_process_ops
4995546 ± 11% -99.4% 31990 ± 29% sched_debug.cpu#11.sched_count
332497 ± 9% -87.9% 40257 ± 4% softirqs.SCHED
0.96 ± 20% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.hrtimer_try_to_cancel.hrtimer_cancel.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry
0.99 ± 14% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.hrtimer_cancel.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry.start_secondary
1.40 ± 3% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.dequeue_entity.dequeue_task_fair.dequeue_task.deactivate_task.__schedule
1.36 ± 15% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.get_nohz_timer_target.__hrtimer_start_range_ns.hrtimer_start.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter
1.62 ± 4% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.pick_next_task_fair.__schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
1.69 ± 2% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.dequeue_task_fair.dequeue_task.deactivate_task.__schedule.schedule_preempt_disabled
15.54 ± 32% +351.5% 70.18 ± 2% perf-profile.cpu-cycles.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter.new_sync_write
1.94 ± 3% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.dequeue_task.deactivate_task.__schedule.schedule_preempt_disabled.__mutex_lock_slowpath
1.96 ± 4% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.deactivate_task.__schedule.schedule_preempt_disabled.__mutex_lock_slowpath.mutex_lock
2.08 ± 11% -100.0% 0.00 ± 0% perf-profile.cpu-cycles._raw_spin_lock.try_to_wake_up.wake_up_process.__mutex_unlock_slowpath.mutex_unlock
20.14 ± 10% -82.3% 3.56 ± 34% perf-profile.cpu-cycles.start_secondary
20.04 ± 10% -82.2% 3.56 ± 34% perf-profile.cpu-cycles.cpu_startup_entry.start_secondary
2.67 ± 8% -100.0% 0.00 ± 0% perf-profile.cpu-cycles._raw_spin_unlock_irqrestore.__hrtimer_start_range_ns.hrtimer_start_range_ns.tick_nohz_restart.tick_nohz_idle_exit
4495470 ± 7% -99.4% 25868 ± 12% sched_debug.cpu#9.nr_switches
4496190 ± 7% -99.4% 26052 ± 12% sched_debug.cpu#9.sched_count
40599.18 ± 41% -100.0% 0.00 ± 0% sched_debug.cfs_rq[6]:/.max_vruntime
3.30 ± 8% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.__hrtimer_start_range_ns.hrtimer_start_range_ns.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry
3.33 ± 6% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.hrtimer_start_range_ns.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry.start_secondary
2247155 ± 7% -99.4% 12447 ± 14% sched_debug.cpu#9.sched_goidle
40599.18 ± 41% -100.0% 0.00 ± 0% sched_debug.cfs_rq[6]:/.MIN_vruntime
3.99 ± 6% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.__schedule.schedule_preempt_disabled.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter
2448071 ± 2% -99.4% 13482 ± 9% sched_debug.cpu#6.ttwu_count
2554386 ± 2% -99.5% 13020 ± 8% sched_debug.cpu#6.sched_goidle
5111673 ± 2% -99.5% 26787 ± 8% sched_debug.cpu#6.sched_count
2527550 ± 6% -99.5% 12538 ± 14% sched_debug.cpu#9.ttwu_count
5109913 ± 2% -99.5% 26531 ± 8% sched_debug.cpu#6.nr_switches
54085 ± 37% +845.7% 511463 ± 40% sched_debug.cfs_rq[5]:/.min_vruntime
8424 ± 16% +560.9% 55678 ± 42% sched_debug.cfs_rq[5]:/.exec_clock
1201871 ± 19% -83.0% 204673 ± 49% sched_debug.cpu#5.sched_count
10 ± 28% +490.5% 62 ± 31% sched_debug.cpu#5.cpu_load[4]
93687275 ± 19% -94.0% 5611171 ± 33% cpuidle.C1-NHM.time
558262 ± 18% -96.2% 21169 ± 6% cpuidle.C1-NHM.usage
9.831e+08 ± 15% -98.8% 11473350 ± 25% cpuidle.C3-NHM.time
3976387 ± 13% -99.3% 26803 ± 6% cpuidle.C3-NHM.usage
1390481 ± 7% -95.5% 63030 ± 4% cpuidle.C6-NHM.usage
0.92 ± 8% -100.0% 0.00 ± 0% perf-profile.cpu-cycles._raw_spin_lock_irqsave.try_to_wake_up.wake_up_process.__mutex_unlock_slowpath.mutex_unlock
5406764 ± 9% -99.3% 38661 ± 36% sched_debug.cpu#8.nr_switches
27329774 ± 5% -98.7% 350559 ± 4% time.voluntary_context_switches
2702648 ± 9% -99.3% 18971 ± 37% sched_debug.cpu#8.sched_goidle
5408351 ± 9% -99.3% 38886 ± 36% sched_debug.cpu#8.sched_count
23.91 ± 12% -99.1% 0.22 ± 40% turbostat.CPU%c3
1588237 ± 8% -87.6% 197379 ± 28% sched_debug.cpu#1.sched_goidle
12 ± 21% +400.0% 61 ± 31% sched_debug.cpu#5.cpu_load[3]
14 ± 15% +324.6% 60 ± 30% sched_debug.cpu#5.cpu_load[2]
2804909 ± 4% -99.5% 15331 ± 33% sched_debug.cpu#11.ttwu_count
2496582 ± 11% -99.4% 14895 ± 31% sched_debug.cpu#11.sched_goidle
2449189 ± 2% -99.2% 19646 ± 34% sched_debug.cpu#8.ttwu_count
2672337 ± 9% -98.3% 44488 ± 29% sched_debug.cpu#0.nr_switches
4995012 ± 11% -99.4% 31857 ± 29% sched_debug.cpu#11.nr_switches
1201660 ± 19% -83.0% 204494 ± 49% sched_debug.cpu#5.nr_switches
955232 ± 15% -85.7% 136563 ± 20% sched_debug.cpu#4.ttwu_count
995565 ± 13% -86.3% 136302 ± 19% sched_debug.cpu#4.sched_goidle
1992443 ± 13% -86.2% 274424 ± 19% sched_debug.cpu#4.sched_count
2673257 ± 9% -98.3% 44671 ± 29% sched_debug.cpu#0.sched_count
1334237 ± 9% -98.5% 20205 ± 32% sched_debug.cpu#0.sched_goidle
1354162 ± 5% -98.3% 22809 ± 28% sched_debug.cpu#0.ttwu_count
26659.70 ± 14% -100.0% 0.00 ± 0% sched_debug.cfs_rq[9]:/.MIN_vruntime
1992050 ± 13% -86.2% 274292 ± 19% sched_debug.cpu#4.nr_switches
88563 ± 27% +365.2% 411997 ± 25% sched_debug.cfs_rq[3]:/.min_vruntime
26659.70 ± 14% -100.0% 0.00 ± 0% sched_debug.cfs_rq[9]:/.max_vruntime
1531974 ± 19% -89.7% 157815 ± 23% sched_debug.cpu#2.ttwu_count
1650173 ± 16% -90.5% 157023 ± 24% sched_debug.cpu#2.sched_goidle
2264230 ± 4% -99.5% 12323 ± 7% sched_debug.cpu#10.ttwu_count
3177630 ± 8% -87.5% 395822 ± 28% sched_debug.cpu#1.nr_switches
3302396 ± 16% -90.4% 315490 ± 24% sched_debug.cpu#2.sched_count
4365273 ± 5% -99.4% 24251 ± 6% sched_debug.cpu#10.nr_switches
3301465 ± 16% -90.4% 315323 ± 24% sched_debug.cpu#2.nr_switches
4366326 ± 5% -99.4% 24433 ± 6% sched_debug.cpu#10.sched_count
2182166 ± 5% -99.5% 11526 ± 7% sched_debug.cpu#10.sched_goidle
1462067 ± 3% -86.5% 197145 ± 29% sched_debug.cpu#1.ttwu_count
3178415 ± 8% -87.5% 396013 ± 28% sched_debug.cpu#1.sched_count
145590 ± 45% +280.1% 553319 ± 14% sched_debug.cfs_rq[1]:/.min_vruntime
1.14 ± 19% -78.9% 0.24 ± 34% perf-profile.cpu-cycles.shmem_file_llseek.sys_lseek.system_call_fastpath
14.24 ± 4% -82.6% 2.48 ± 43% perf-profile.cpu-cycles.__mutex_unlock_slowpath.mutex_unlock.generic_file_write_iter.new_sync_write.vfs_write
14.38 ± 4% -82.2% 2.56 ± 41% perf-profile.cpu-cycles.mutex_unlock.generic_file_write_iter.new_sync_write.vfs_write.sys_write
98519 ± 38% +276.0% 370400 ± 40% sched_debug.cfs_rq[4]:/.min_vruntime
12 ± 23% +324.5% 52 ± 23% sched_debug.cpu#3.cpu_load[4]
148 ± 13% +291.3% 582 ± 28% sched_debug.cfs_rq[5]:/.tg_runnable_contrib
6860 ± 12% +288.6% 26657 ± 28% sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
1401 ± 4% +340.4% 6172 ± 9% time.involuntary_context_switches
962541 ± 20% -74.3% 247827 ± 22% sched_debug.cpu#3.ttwu_count
27121946 ± 39% +207.6% 83418729 ± 41% cpuidle.POLL.time
13141 ± 10% +248.4% 45786 ± 28% sched_debug.cfs_rq[3]:/.exec_clock
871810 ± 19% -71.6% 247691 ± 22% sched_debug.cpu#3.sched_goidle
1744545 ± 19% -71.5% 497212 ± 22% sched_debug.cpu#3.nr_switches
1744810 ± 19% -71.5% 497350 ± 22% sched_debug.cpu#3.sched_count
8013 ± 48% +177.9% 22273 ± 26% sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum
175 ± 48% +177.1% 485 ± 26% sched_debug.cfs_rq[4]:/.tg_runnable_contrib
19 ± 21% +236.4% 64 ± 30% sched_debug.cfs_rq[5]:/.runnable_load_avg
18 ± 35% +236.0% 63 ± 14% sched_debug.cpu#1.cpu_load[4]
18 ± 14% +226.0% 59 ± 28% sched_debug.cpu#5.cpu_load[1]
20.76 ± 23% +239.0% 70.38 ± 2% perf-profile.cpu-cycles.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter.new_sync_write.vfs_write
152212 ± 36% +234.5% 509104 ± 9% sched_debug.cfs_rq[2]:/.min_vruntime
7857 ± 13% +193.3% 23042 ± 16% sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
171 ± 13% +192.1% 501 ± 16% sched_debug.cfs_rq[3]:/.tg_runnable_contrib
14 ± 26% +267.2% 53 ± 21% sched_debug.cpu#3.cpu_load[3]
18 ± 33% +213.5% 58 ± 7% sched_debug.cpu#2.cpu_load[4]
23.57 ± 20% +205.2% 71.91 ± 2% perf-profile.cpu-cycles.mutex_lock.generic_file_write_iter.new_sync_write.vfs_write.sys_write
20 ± 30% +214.8% 63 ± 14% sched_debug.cpu#1.cpu_load[3]
2.08 ± 12% -66.5% 0.70 ± 11% perf-profile.cpu-cycles.sys_lseek.system_call_fastpath
1.42 ± 15% -73.2% 0.38 ± 22% perf-profile.cpu-cycles.file_update_time.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write
18 ± 37% +194.7% 55 ± 18% sched_debug.cpu#3.cpu_load[2]
740 ± 11% +182.5% 2091 ± 17% sched_debug.cpu#5.curr->pid
21881 ± 23% +163.3% 57608 ± 16% sched_debug.cfs_rq[1]:/.exec_clock
246670 ± 26% +164.8% 653087 ± 5% sched_debug.cfs_rq[8]:/.min_vruntime
114092 ± 5% -67.0% 37644 ± 5% softirqs.RCU
30 ± 30% +125.6% 68 ± 15% sched_debug.cpu#1.cpu_load[0]
36 ± 44% +132.7% 85 ± 19% sched_debug.cfs_rq[0]:/.runnable_load_avg
28 ± 16% +163.7% 74 ± 5% sched_debug.cpu#8.cpu_load[4]
258 ± 22% +139.6% 620 ± 9% sched_debug.cfs_rq[1]:/.tg_runnable_contrib
11830 ± 21% +139.7% 28352 ± 9% sched_debug.cfs_rq[1]:/.avg->runnable_avg_sum
22 ± 27% +185.6% 64 ± 13% sched_debug.cpu#1.cpu_load[2]
430107 ± 20% +153.1% 1088393 ± 0% softirqs.TIMER
21 ± 21% +178.6% 58 ± 7% sched_debug.cpu#2.cpu_load[3]
24 ± 19% +144.3% 59 ± 26% sched_debug.cpu#5.cpu_load[0]
818 ± 29% +131.7% 1895 ± 6% sched_debug.cpu#3.curr->pid
11124 ± 29% +135.6% 26209 ± 5% sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
31 ± 13% +141.9% 75 ± 5% sched_debug.cpu#8.cpu_load[3]
243 ± 29% +134.7% 571 ± 5% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
28 ± 20% +142.6% 69 ± 13% sched_debug.cpu#7.cpu_load[4]
402 ± 7% +157.9% 1036 ± 0% time.system_time
834 ± 18% +152.6% 2108 ± 14% sched_debug.cpu#1.curr->pid
1.13 ± 8% -59.6% 0.46 ± 3% perf-profile.cpu-cycles.__sb_end_write.vfs_write.sys_write.system_call_fastpath
30 ± 17% +130.1% 70 ± 14% sched_debug.cpu#7.cpu_load[3]
253168 ± 22% +141.0% 610051 ± 10% sched_debug.cfs_rq[7]:/.min_vruntime
26 ± 9% +147.7% 66 ± 2% sched_debug.cfs_rq[1]:/.runnable_load_avg
141 ± 6% +146.3% 347 ± 0% time.percent_of_cpu_this_job_got
25 ± 24% +154.4% 65 ± 14% sched_debug.cpu#1.cpu_load[1]
31986 ± 39% +129.0% 73256 ± 16% sched_debug.cfs_rq[0]:/.exec_clock
1.19 ± 11% -53.6% 0.55 ± 3% perf-profile.cpu-cycles.__sb_start_write.vfs_write.sys_write.system_call_fastpath
215936 ± 21% +135.4% 508378 ± 30% sched_debug.cfs_rq[10]:/.min_vruntime
233932 ± 23% +100.7% 469477 ± 20% sched_debug.cfs_rq[9]:/.min_vruntime
0.93 ± 13% -55.3% 0.41 ± 8% perf-profile.cpu-cycles.__srcu_read_unlock.fsnotify.vfs_write.sys_write.system_call_fastpath
1.23 ± 16% -57.2% 0.53 ± 6% perf-profile.cpu-cycles.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_write_begin.generic_perform_write
32 ± 10% +123.7% 73 ± 16% sched_debug.cpu#7.cpu_load[2]
187996 ± 37% +213.3% 588915 ± 10% sched_debug.cfs_rq[0]:/.min_vruntime
1684 ± 8% -59.5% 681 ± 19% cpuidle.POLL.usage
1.50 ± 14% -56.7% 0.65 ± 7% perf-profile.cpu-cycles.unlock_page.shmem_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
35 ± 9% +115.5% 76 ± 6% sched_debug.cpu#8.cpu_load[2]
30 ± 13% +94.2% 58 ± 24% sched_debug.cpu#10.cpu_load[4]
3.56 ± 8% -51.2% 1.74 ± 19% perf-profile.cpu-cycles.shmem_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
7.42 ± 6% -52.0% 3.56 ± 2% perf-profile.cpu-cycles.copy_user_generic_string.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
3508 ± 17% +104.5% 7174 ± 0% sched_debug.cfs_rq[2]:/.tg->runnable_avg
3508 ± 17% +104.5% 7175 ± 0% sched_debug.cfs_rq[3]:/.tg->runnable_avg
3513 ± 17% +104.3% 7177 ± 0% sched_debug.cfs_rq[4]:/.tg->runnable_avg
3516 ± 17% +104.2% 7179 ± 0% sched_debug.cfs_rq[5]:/.tg->runnable_avg
3517 ± 17% +104.2% 7181 ± 0% sched_debug.cfs_rq[6]:/.tg->runnable_avg
3520 ± 17% +104.1% 7183 ± 0% sched_debug.cfs_rq[7]:/.tg->runnable_avg
3522 ± 17% +104.0% 7184 ± 0% sched_debug.cfs_rq[8]:/.tg->runnable_avg
3524 ± 17% +103.9% 7185 ± 0% sched_debug.cfs_rq[9]:/.tg->runnable_avg
2.21 ± 11% -55.5% 0.98 ± 7% perf-profile.cpu-cycles.find_lock_entry.shmem_getpage_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter
3528 ± 17% +103.7% 7186 ± 0% sched_debug.cfs_rq[10]:/.tg->runnable_avg
3532 ± 17% +103.5% 7188 ± 0% sched_debug.cfs_rq[11]:/.tg->runnable_avg
3509 ± 17% +104.5% 7177 ± 0% sched_debug.cfs_rq[0]:/.tg->runnable_avg
3513 ± 17% +104.2% 7173 ± 0% sched_debug.cfs_rq[1]:/.tg->runnable_avg
3.50 ± 6% -52.9% 1.65 ± 8% perf-profile.cpu-cycles.shmem_write_begin.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
2.96 ± 9% -54.3% 1.35 ± 5% perf-profile.cpu-cycles.shmem_getpage_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
1.33 ± 10% -52.2% 0.64 ± 1% perf-profile.cpu-cycles.system_call
2.91 ± 6% -49.0% 1.48 ± 13% perf-profile.cpu-cycles.fsnotify.vfs_write.sys_write.system_call_fastpath
968 ± 32% +111.7% 2049 ± 11% sched_debug.cpu#2.curr->pid
22881 ± 17% +125.4% 51569 ± 9% sched_debug.cfs_rq[2]:/.exec_clock
20.55 ± 4% -50.4% 10.20 ± 3% perf-profile.cpu-cycles.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.sys_write
25 ± 16% +131.7% 58 ± 7% sched_debug.cpu#2.cpu_load[2]
12228 ± 24% +149.6% 30527 ± 9% sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
267 ± 24% +149.2% 666 ± 8% sched_debug.cfs_rq[0]:/.tg_runnable_contrib
36965 ± 14% +90.9% 70555 ± 6% sched_debug.cfs_rq[8]:/.exec_clock
974 ± 17% +118.5% 2129 ± 7% sched_debug.cpu#0.curr->pid
33 ± 15% +79.3% 60 ± 22% sched_debug.cpu#10.cpu_load[3]
1018 ± 10% +86.4% 1898 ± 26% sched_debug.cpu#10.curr->pid
344 ± 20% +92.7% 664 ± 9% sched_debug.cfs_rq[7]:/.tg_runnable_contrib
15780 ± 20% +93.2% 30482 ± 9% sched_debug.cfs_rq[7]:/.avg->runnable_avg_sum
1.18 ± 6% -50.1% 0.59 ± 6% perf-profile.cpu-cycles.__srcu_read_lock.fsnotify.vfs_write.sys_write.system_call_fastpath
1.36 ± 8% -50.6% 0.67 ± 3% perf-profile.cpu-cycles.system_call_after_swapgs
17.36 ± 6% -48.1% 9.00 ± 2% perf-profile.cpu-cycles.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write
348 ± 12% +98.7% 691 ± 4% sched_debug.cfs_rq[8]:/.tg_runnable_contrib
15908 ± 12% +99.2% 31685 ± 4% sched_debug.cfs_rq[8]:/.avg->runnable_avg_sum
2 ± 15% +109.1% 5 ± 7% vmstat.procs.r
57 ± 18% +76.6% 102 ± 26% sched_debug.cfs_rq[0]:/.load
39 ± 19% +61.1% 63 ± 20% sched_debug.cpu#10.cpu_load[2]
979 ± 12% +93.5% 1894 ± 19% sched_debug.cpu#4.curr->pid
36 ± 12% +113.0% 77 ± 22% sched_debug.cpu#7.cpu_load[1]
2.58 ± 4% -56.6% 1.12 ± 41% perf-profile.cpu-cycles._raw_spin_lock.__mutex_unlock_slowpath.mutex_unlock.generic_file_write_iter.new_sync_write
29 ± 20% +89.9% 56 ± 19% sched_debug.cpu#9.cpu_load[4]
328 ± 16% +66.5% 547 ± 15% sched_debug.cfs_rq[9]:/.tg_runnable_contrib
28 ± 21% +67.5% 47 ± 43% sched_debug.cpu#11.cpu_load[3]
15011 ± 16% +67.0% 25062 ± 15% sched_debug.cfs_rq[9]:/.avg->runnable_avg_sum
6.06 ± 27% +64.2% 9.95 ± 3% perf-profile.cpu-cycles.mutex_spin_on_owner.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter
1328 ± 8% +72.3% 2288 ± 11% sched_debug.cpu#6.curr->pid
3035 ± 6% -43.1% 1727 ± 0% uptime.idle
62 ± 19% +64.1% 103 ± 26% sched_debug.cpu#0.load
27 ± 25% +65.8% 46 ± 46% sched_debug.cpu#11.cpu_load[4]
30 ± 23% +91.0% 58 ± 8% sched_debug.cpu#2.cpu_load[1]
33 ± 20% +73.5% 57 ± 18% sched_debug.cpu#9.cpu_load[3]
335 ± 6% +70.3% 570 ± 23% sched_debug.cfs_rq[10]:/.tg_runnable_contrib
15314 ± 6% +70.7% 26147 ± 23% sched_debug.cfs_rq[10]:/.avg->runnable_avg_sum
43 ± 15% +83.2% 79 ± 8% sched_debug.cpu#8.cpu_load[1]
1002 ± 29% +96.0% 1964 ± 16% sched_debug.cpu#9.curr->pid
37 ± 26% +56.3% 59 ± 16% sched_debug.cpu#9.cpu_load[2]
38074 ± 11% +70.5% 64907 ± 14% sched_debug.cfs_rq[7]:/.exec_clock
35574 ± 12% +37.3% 48848 ± 26% sched_debug.cfs_rq[9]:/.exec_clock
1446 ± 22% +53.7% 2223 ± 10% sched_debug.cpu#7.curr->pid
303929 ± 8% +49.3% 453870 ± 17% sched_debug.cpu#9.avg_idle
39159 ± 16% +64.5% 64413 ± 14% sched_debug.cpu#3.nr_load_updates
487643 ± 21% +71.0% 834025 ± 5% sched_debug.cfs_rq[6]:/.min_vruntime
45 ± 30% +90.2% 87 ± 30% sched_debug.cpu#7.cpu_load[0]
55 ± 18% +65.3% 91 ± 26% sched_debug.cpu#1.load
59.50 ± 5% +43.1% 85.14 ± 2% perf-profile.cpu-cycles.generic_file_write_iter.new_sync_write.vfs_write.sys_write.system_call_fastpath
24882 ± 19% +51.9% 37790 ± 7% sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
543 ± 19% +51.8% 824 ± 7% sched_debug.cfs_rq[6]:/.tg_runnable_contrib
63.20 ± 5% +38.4% 87.45 ± 3% perf-profile.cpu-cycles.new_sync_write.vfs_write.sys_write.system_call_fastpath
2.83 ± 3% -31.4% 1.94 ± 14% perf-profile.cpu-cycles.mutex_unlock.new_sync_write.vfs_write.sys_write.system_call_fastpath
2391 ± 3% +43.0% 3419 ± 1% proc-vmstat.pgactivate
1330 ± 33% +71.0% 2275 ± 4% sched_debug.cpu#8.curr->pid
28.29 ± 4% -25.7% 21.01 ± 1% time.user_time
31.42 ± 7% -24.9% 23.60 ± 3% turbostat.CPU%c1
70.99 ± 4% +28.4% 91.13 ± 2% perf-profile.cpu-cycles.vfs_write.sys_write.system_call_fastpath
72.19 ± 3% +27.1% 91.74 ± 2% perf-profile.cpu-cycles.sys_write.system_call_fastpath
74.71 ± 3% +24.0% 92.63 ± 2% perf-profile.cpu-cycles.system_call_fastpath
2766 ± 1% +26.2% 3491 ± 0% proc-vmstat.nr_shmem
11068 ± 1% +26.2% 13969 ± 0% meminfo.Shmem
54 ± 17% +40.1% 76 ± 13% sched_debug.cfs_rq[1]:/.load
55 ± 32% +43.0% 79 ± 13% sched_debug.cfs_rq[8]:/.runnable_load_avg
2690 ± 5% -10.1% 2417 ± 2% slabinfo.kmalloc-256.active_objs
60 ± 1% +10.4% 66 ± 2% turbostat.CoreTmp
6214 ± 2% -7.0% 5777 ± 5% slabinfo.vm_area_struct.num_objs
24089 ± 4% +9.4% 26356 ± 0% meminfo.Active(anon)
6024 ± 4% +9.4% 6589 ± 0% proc-vmstat.nr_active_anon
17.44 ± 1% -9.4% 15.80 ± 2% turbostat.CPU%c6
1064863 ± 12% -13.3% 923694 ± 0% cpuidle.C1E-NHM.usage
378254 ± 4% -97.0% 11409 ± 2% vmstat.system.cs
27.23 ± 16% +121.7% 60.38 ± 1% turbostat.%Busy
959 ± 16% +121.5% 2124 ± 1% turbostat.Avg_MHz
7791 ± 4% +33.8% 10421 ± 0% vmstat.system.in

wsm: Westmere
Memory: 6G




time.system_time

1100 ++-------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O O O O O O
1000 ++ |
900 ++ |
| |
800 ++ |
| |
700 ++ * |
| + : |
600 ++ + : |
500 ++ * : |
| .*.. + : .*.. .* |
400 *+.*.*. * *.*..*..* *..*.*..*.*.. .*. |
| *..* |
300 ++-------------------------------------------------------------------+


time.percent_of_cpu_this_job_got

350 O+-O-O--O--O-O--O--O--O-O--O--O-O--O--O-O--O--O-O--O--O--O-O--O--O-O--O
| |
| |
300 ++ |
| |
| |
250 ++ * |
| + : |
200 ++ + : |
| * : |
| + : |
150 ++ .*..*..* : .*.. .*. .*.. .* |
*..* *..* *. *. *.*..*..*.*..*. |
| |
100 ++--------------------------------------------------------------------+


time.voluntary_context_switches

3e+07 ++-----------------------------------------*----------------------+
*.. *.. + *.*.. |
2.5e+07 ++ .*.*..*. .*.*.. .*. .. *.*..* * |
| *.*. *. *.*. * |
| |
2e+07 ++ |
| |
1.5e+07 ++ |
| |
1e+07 ++ |
| |
| |
5e+06 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


softirqs.SCHED

450000 ++-----------------------------------------------------------------+
| * |
400000 ++ + + .* |
350000 ++ + *.. .*. + * |
| .*..*.* .*..*.*..*.. *..* *.. .. |
300000 *+.* *..* + * |
250000 ++ * |
| |
200000 ++ |
150000 ++ |
| |
100000 ++ |
50000 ++ |
O O O O O O O O O O O O O O O O O O O O O O O O O O O
0 ++-----------------------------------------------------------------+


softirqs.RCU

130000 ++-----------------------------------------------------------------+
120000 ++ *.. * |
| .*. .*.*.. .*.. + *.. * .*.. .. |
110000 *+ *. *..* * * + + .* * |
100000 ++ *. .. + + *. |
| * * |
90000 ++ |
80000 ++ |
70000 ++ |
| |
60000 ++ |
50000 ++ |
| O O |
40000 O+ O O O O O O O O O O O O O O O O O O O O O O
30000 ++-------------------------------------O---------O-----------------+


cpuidle.C1-NHM.usage

900000 ++-----------------------------------------------------------------+
| .* *.* |
800000 ++ *. *. : : : |
700000 ++ : *..*.. : : : : * |
| : : : : : + |
600000 *+.*.*.. : *.*..* : .* * + |
500000 ++ *.* * + .* |
| *. |
400000 ++ |
300000 ++ |
| |
200000 ++ |
100000 ++ |
| O O O O |
0 O+-O-O--O-O--O-O--O--O------------O-O--O-O--O-O--O--O-O--O-O--O-O--O


cpuidle.C6-NHM.usage

1.8e+06 ++----------------------------------------------------------------+
| * *.*.. |
1.6e+06 *+. .*.. + + .*.. + *.. *..* .* |
1.4e+06 ++ * * + * *. + + + .*. |
| *. .. * * *..* |
1.2e+06 ++ * |
1e+06 ++ |
| |
800000 ++ |
600000 ++ |
| |
400000 ++ |
200000 ++ |
| O O O O O O |
0 O+-O----O----O-O--O-O--O----O-O--O--O----O-O--O----O-O--O-O--O----O


will-it-scale.time.system_time

1100 ++-------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O O O O O O
1000 ++ |
900 ++ |
| |
800 ++ |
| |
700 ++ * |
| + : |
600 ++ + : |
500 ++ * : |
| .*.. + : .*.. .* |
400 *+.*.*. * *.*..*..* *..*.*..*.*.. .*. |
| *..* |
300 ++-------------------------------------------------------------------+


will-it-scale.time.percent_of_cpu_this_job_got

350 O+-O-O--O--O-O--O--O--O-O--O--O-O--O--O-O--O--O-O--O--O--O-O--O--O-O--O
| |
| |
300 ++ |
| |
| |
250 ++ * |
| + : |
200 ++ + : |
| * : |
| + : |
150 ++ .*..*..* : .*.. .*. .*.. .* |
*..* *..* *. *. *.*..*..*.*..*. |
| |
100 ++--------------------------------------------------------------------+


will-it-scale.time.voluntary_context_switches

3e+07 ++-----------------------------------------*----------------------+
*.. *.. + *.*.. |
2.5e+07 ++ .*.*..*. .*.*.. .*. .. *.*..* * |
| *.*. *. *.*. * |
| |
2e+07 ++ |
| |
1.5e+07 ++ |
| |
1e+07 ++ |
| |
| |
5e+06 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


vmstat.system.cs

450000 ++-----------------------------------------------------------------+
*.. |
400000 ++ * *.*.. *.*.. .*.. *. .*..*..* |
350000 ++ + .. .. * *.. .. *..* |
| * * * *.* |
300000 ++ + .. |
250000 ++ * |
| |
200000 ++ |
150000 ++ |
| |
100000 ++ |
50000 ++ |
| |
0 O+-O-O--O-O--O-O--O--O-O--O-O--O--O-O--O-O--O-O--O--O-O--O-O--O-O--O


sched_debug.cpu#0.nr_switches

3.5e+06 ++-----------------------*----------------------------------------+
| * + |
3e+06 *+. .*.. *. .. + *.. .*.. |
| * * + * *. .. .*.. * |
2.5e+06 ++ + .. *. + * * *. .. * |
| * * * |
2e+06 ++ |
| |
1.5e+06 ++ |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


sched_debug.cpu#0.sched_count

3.5e+06 ++-----------------------*----------------------------------------+
| * + |
3e+06 *+. .*.. *. .. + *.. .*.. |
| * * + * *. .. .*.. .* |
2.5e+06 ++ + .. *. + * * *.*. * |
| * * |
2e+06 ++ |
| |
1.5e+06 ++ |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


sched_debug.cpu#0.sched_goidle

1.8e+06 ++----------------------------------------------------------------+
| *.*.. |
1.6e+06 *+. + *.. .*.. |
1.4e+06 ++ * *.*.. *. + *. .. .*.. * |
| + .. * + * * * .. * |
1.2e+06 ++ * + + *.* |
1e+06 ++ * |
| |
800000 ++ |
600000 ++ |
| |
400000 ++ |
200000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


sched_debug.cpu#0.ttwu_count

1.8e+06 ++-----------------------*----------------------------------------+
| : + |
1.6e+06 ++ : + |
1.4e+06 *+. *. *.*..* * *.. *.. .*.*..* |
| * + *..* + + .. + .*. |
1.2e+06 ++ + + + + * * * |
1e+06 ++ * * |
| |
800000 ++ |
600000 ++ |
| |
400000 ++ |
200000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


sched_debug.cpu#1.nr_switches

4e+06 *+----------------------------------------------------------------+
|: |
3.5e+06 ++: .*.. .*. |
3e+06 ++: .*.*.. *..* * .*.*. *.. |
| *.*. + + .*.*. * |
2.5e+06 ++ *. .* *..*. |
| *. |
2e+06 ++ |
| |
1.5e+06 ++ |
1e+06 ++ |
| |
500000 ++ O O O O O O O |
O O O O O O O O O O O O O O O O O O O O
0 ++----------------------------------------------------------------+


sched_debug.cpu#1.sched_count

4e+06 *+----------------------------------------------------------------+
|: |
3.5e+06 ++: .*.. .*. |
3e+06 ++: .*.*.. *..* * .*.*. *.. |
| *.*. + + .*.*. * |
2.5e+06 ++ *. .* *..*. |
| *. |
2e+06 ++ |
| |
1.5e+06 ++ |
1e+06 ++ |
| |
500000 ++ O O O O O O O |
O O O O O O O O O O O O O O O O O O O O
0 ++----------------------------------------------------------------+


sched_debug.cpu#1.sched_goidle

2e+06 *+----------------------------------------------------------------+
1.8e+06 ++ |
| : .*.. .*. |
1.6e+06 ++: .*.*.. *..* * .*.*. *.. |
1.4e+06 ++ *.*. + + .*.*. * |
| *. .* *..*. |
1.2e+06 ++ *. |
1e+06 ++ |
800000 ++ |
| |
600000 ++ |
400000 ++ |
| O O O O O O O |
200000 O+ O O O O O O O O O O O O O O O O O O O
0 ++----------------------------------------------------------------+


sched_debug.cpu#1.ttwu_count

1.8e+06 ++----------------------------------------------------------------+
*.. |
1.6e+06 ++ *.*.. *..*.*..* .*. |
1.4e+06 ++ *. .. + + .*.. .*..*.*. *..* |
| * *. .* *. * |
1.2e+06 ++ *. |
1e+06 ++ |
| |
800000 ++ |
600000 ++ |
| |
400000 ++ O |
200000 ++ O O O O O O O O O O O
O O O O O O O O O O O O O O O |
0 ++----------------------------------------------------------------+


sched_debug.cpu#2.nr_switches

4.5e+06 ++----------------------------------------------------------------+
| * |
4e+06 ++ .. |
3.5e+06 ++ * |
| .* .*..* * .*.. .*..*.. .*.. + |
3e+06 *+ : * : + + .* * * *. .* |
2.5e+06 ++ : .. : + *. *. |
| * * |
2e+06 ++ |
1.5e+06 ++ |
| |
1e+06 ++ |
500000 ++ O O O O O O O |
O O O O O O O O O O O O O O O O O O
0 ++------O---------------------------O-----------------------------+


sched_debug.cpu#2.sched_count

4.5e+06 ++----------------------------------------------------------------+
| * |
4e+06 ++ .. |
3.5e+06 ++ * |
| .* .*..* * .*.. .*..*.. .*.. + |
3e+06 *+ : * : + + .* * * *. .* |
2.5e+06 ++ : .. : + *. *. |
| * * |
2e+06 ++ |
1.5e+06 ++ |
| |
1e+06 ++ |
500000 ++ O O O O O O O |
O O O O O O O O O O O O O O O O O O
0 ++------O---------------------------O-----------------------------+


sched_debug.cpu#2.sched_goidle

2.2e+06 ++----------------------------------------------------------------+
2e+06 ++ * |
| .. |
1.8e+06 ++ *.. .* |
1.6e+06 ++.* + * * .*.. .*..*.. .*.. * |
1.4e+06 *+ + .* : + + .* * * *. .. |
1.2e+06 ++ *. : + *. * |
| * |
1e+06 ++ |
800000 ++ |
600000 ++ |
400000 ++ |
| O O O O O O O |
200000 O+ O O O O O O O O O O O O O O O O O O
0 ++------O---------------------------------------------------------+


sched_debug.cpu#2.ttwu_count

2e+06 ++-------------------------------------------------*--------------+
1.8e+06 ++ + |
| *.. + |
1.6e+06 *+. : * *. .*.. *.. .* |
1.4e+06 ++ * : : + *..* *.*..*.. + * * |
| + .* : + * + .. |
1.2e+06 ++ *. * * |
1e+06 ++ |
800000 ++ |
| |
600000 ++ |
400000 ++ |
| O O O O O O O |
200000 O+ O O O O O O O O O O O O O O O O O O
0 ++------O---------------------------------------------------------+


sched_debug.cpu#6.nr_switches

6e+06 ++--------*---------------------------------------------------------+
| .* : : *.. |
5e+06 *+ : : : .* + .*..*.* |
| : .* : *. : * * *..*.*. |
| *. : + : .. : : |
4e+06 ++ : .*. + * : : |
| *. * : : |
3e+06 ++ * |
| |
2e+06 ++ |
| |
| |
1e+06 ++ |
| |
0 O+-O-O--O-O--O--O-O--O--O-O--O-O--O--O-O--O-O--O--O-O--O--O-O--O-O--O


sched_debug.cpu#6.sched_count

6e+06 ++--------*---------------------------------------------------------+
| .* : : *.. |
5e+06 *+ : : : .* + .*..*.* |
| : .* : *. : * * *..*.*. |
| *. : + : .. : : |
4e+06 ++ : .*. + * : : |
| *. * : : |
3e+06 ++ * |
| |
2e+06 ++ |
| |
| |
1e+06 ++ |
| |
0 O+-O-O--O-O--O--O-O--O--O-O--O-O--O--O-O--O-O--O--O-O--O--O-O--O-O--O


sched_debug.cpu#6.sched_goidle

3e+06 ++--------*-------------------------------------------------------+
| .* : : *.. |
2.5e+06 *+ : : : .* + .*.*..* |
| : .* : *. : * * *..*.*. |
| *. : : : .. : : |
2e+06 ++ :.*.. : * : : |
| * * : : |
1.5e+06 ++ * |
| |
1e+06 ++ |
| |
| |
500000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


sched_debug.cpu#6.ttwu_count

3e+06 ++----------------------------------------------------------------+
| .* *.. |
2.5e+06 *+. * : : .*.. .*..* |
| * + : : * * *..*.*..* |
| + + : : + .. * : |
2e+06 ++ * *.*.. : * + : |
| : + : |
1.5e+06 ++ * * |
| |
1e+06 ++ |
| |
| |
500000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


sched_debug.cpu#7.nr_switches

6e+06 ++------------------------------------------------------------------+
| *.* *.. .*. .* |
5e+06 *+.* + : : .*. *.. *.. .*..*..* |
| + + : : * * : * |
| * : : + : |
4e+06 ++ *..*. : + : |
| * * |
3e+06 ++ |
| |
2e+06 ++ |
| |
| |
1e+06 ++ |
| O |
0 O+-O-O--O-O--O--O-O--O--O-O--O-O--O--O-O--O-O--O--O-O--O--O-O----O--O


sched_debug.cpu#7.sched_count

6e+06 ++------------------------------------------------------------------+
| *.* *.. .*. .* |
5e+06 *+.* + : : .*. *.. *.. .*..*..* |
| + + : : * * : * |
| * : : + : |
4e+06 ++ *..*. : + : |
| * * |
3e+06 ++ |
| |
2e+06 ++ |
| |
| |
1e+06 ++ |
| O |
0 O+-O-O--O-O--O--O-O--O--O-O--O-O--O--O-O--O-O--O--O-O--O--O-O----O--O


sched_debug.cpu#7.sched_goidle

3e+06 ++----------------------------------------------------------------+
| *.* *.. .*. .* |
2.5e+06 *+.* + : : .*. *.. *.. .*..*.*. |
| + + : : * * : * |
| * : : + : |
2e+06 ++ *.*..: + : |
| * * |
1.5e+06 ++ |
| |
1e+06 ++ |
| |
| |
500000 ++ |
| O |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O----O--O


sched_debug.cpu#7.ttwu_count

3.5e+06 ++----------------------------------------------------------------+
| |
3e+06 ++.* |
*. : *.* .*. .*.*.. .*.. |
2.5e+06 ++ : + + *. *. * *..*.*..* * |
| :+ + : : : |
2e+06 ++ * *.*.. : : : |
| * : : |
1.5e+06 ++ * |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


sched_debug.cpu#8.nr_switches

7e+06 ++------------------------------------------------------------------+
| * * |
6e+06 ++ : : : + .*.. |
*.. : : : + .*.. *.. *. |
5e+06 ++ * : *.. : *. .* : + *. |
| + : .*. : *. *.. : * * |
4e+06 ++ * *. * : |
| * |
3e+06 ++ |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O-O--O--O-O--O--O-O--O-O--O--O-O--O-O--O--O-O--O--O-O--O-O--O


sched_debug.cpu#8.sched_count

7e+06 ++------------------------------------------------------------------+
| * * |
6e+06 ++ : : : + .*.. |
*.. : : : + .*.. *.. *. |
5e+06 ++ * : *.. : *. .* : + *. |
| + : .*. : *. *.. : * * |
4e+06 ++ * *. * : |
| * |
3e+06 ++ |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O-O--O--O-O--O--O-O--O-O--O--O-O--O-O--O--O-O--O--O-O--O-O--O


sched_debug.cpu#8.sched_goidle

3.5e+06 ++----------------------------------------------------------------+
| * * |
3e+06 ++ : : :+ .* |
*.. : : : + .*.. *.. *. + |
2.5e+06 ++ * : *.. : *. .* : + *.. |
| + : .*..: *. *.. : * * |
2e+06 ++ * * * : |
| * |
1.5e+06 ++ |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


sched_debug.cpu#8.ttwu_count

3e+06 ++----------------------------------------------------------------+
*.. |
2.5e+06 ++ *. *..* *. .*. |
| * + *.. : + .. *..* *..*.*. *..* |
| + + .*.. : * : : |
2e+06 ++ * * : : : |
| * : : |
1.5e+06 ++ * |
| |
1e+06 ++ |
| |
| |
500000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


sched_debug.cpu#9.ttwu_count

3e+06 ++----------------------------------------------------------------+
*.. * .*.. |
2.5e+06 ++ * + : *.. .*.. * *..* |
| : * : : .* :+ : * |
| : .. : *.. : *.*. * : + : |
2e+06 ++ * : : : : : * |
| :: * : : |
1.5e+06 ++ * :: |
| * |
1e+06 ++ |
| |
| |
500000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


sched_debug.cpu#10.nr_switches

6e+06 ++------------------------------------------------------------------+
* |
5e+06 ++ .*. .*.*.. *.. |
| + .* *. *. * : .*.. .* |
| *.*..* : : : : *.*. * |
4e+06 ++ : * : : : |
| : .. + : : : |
3e+06 ++ * * :: |
| * |
2e+06 ++ |
| |
| |
1e+06 ++ |
| |
0 O+-O-O--O-O--O--O-O--O--O-O--O-O--O--O-O--O-O--O--O-O--O--O-O--O-O--O


sched_debug.cpu#10.sched_count

6e+06 ++------------------------------------------------------------------+
* |
5e+06 ++ .*. .*.*.. *.. |
| + .* *. *. * : .*.. .* |
| *.*..* : : : : *.*. * |
4e+06 ++ : * : : : |
| : .. + : : : |
3e+06 ++ * * :: |
| * |
2e+06 ++ |
| |
| |
1e+06 ++ |
| |
0 O+-O-O--O-O--O--O-O--O--O-O--O-O--O--O-O--O-O--O--O-O--O--O-O--O-O--O


sched_debug.cpu#10.sched_goidle

3e+06 ++----------------------------------------------------------------+
* |
2.5e+06 ++ .*. .*.*.. *.. |
| + .* *. *. * : .*. .* |
| *.*..* : : : : *.*. *. |
2e+06 ++ : *.. : : : |
| : + : : : |
1.5e+06 ++ * * :: |
| * |
1e+06 ++ |
| |
| |
500000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


sched_debug.cpu#10.ttwu_count

3e+06 ++-*--------------------------------------------------------------+
*. : |
2.5e+06 ++ : .*. * |
| : *. *..*. .. : *..*. .*.*..* |
| : .*.* * : * : : *. |
2e+06 ++ *. + : + : : : |
| + : + : : : |
1.5e+06 ++ * * * |
| |
1e+06 ++ |
| |
| |
500000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


sched_debug.cpu#11.nr_switches

7e+06 ++------------------------------------------------------------------+
| |
6e+06 *+.* * |
| : *..*. *.. + + |
5e+06 ++ : .* : *..*.*..* : *. + * |
| : * + * : : : *..* |
4e+06 ++ : + + .. + : : : |
| :+ * * :: |
3e+06 ++ * * |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O-O--O--O-O--O--O-O--O-O--O--O-O--O-O--O--O-O--O--O-O--O-O--O


sched_debug.cpu#11.sched_count

7e+06 ++------------------------------------------------------------------+
| |
6e+06 *+.* * |
| : *..*. *.. + + |
5e+06 ++ : .* : *..*.*..* : *. + * |
| : * + * : : : *..* |
4e+06 ++ : + + .. + : : : |
| :+ * * :: |
3e+06 ++ * * |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O-O--O--O-O--O--O-O--O-O--O--O-O--O-O--O--O-O--O--O-O--O-O--O


sched_debug.cpu#11.sched_goidle

3.5e+06 ++----------------------------------------------------------------+
| |
3e+06 *+.* *.. |
| : *..*. *.. : |
2.5e+06 ++ : .* : *..*.*..* : *. : * |
| : * + *.. : : : *..* |
2e+06 ++ : + + + : : : |
| :+ * * :: |
1.5e+06 ++ * * |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O


sched_debug.cpu#11.ttwu_count

3.5e+06 ++----------------------------------------------------------------+
| .* |
3e+06 *+. * : .*.. |
| : : *..*. *.. *.. .*..* * |
2.5e+06 ++ * : : : *.. + * : * |
| + : : *.. : * + : |
2e+06 ++ * : : * + : |
| :: * |
1.5e+06 ++ * |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O-O--O-O--O

[*] bisect-good sample
[O] bisect-bad sample

To reproduce:

apt-get install ruby
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Fengguang



---
testcase: will-it-scale
default-monitors:
wait: pre-test
uptime:
iostat:
vmstat:
numa-numastat:
numa-vmstat:
numa-meminfo:
proc-vmstat:
proc-stat:
meminfo:
slabinfo:
interrupts:
lock_stat:
latency_stats:
softirqs:
bdi_dev_mapping:
diskstats:
nfsstat:
cpuidle:
cpufreq-stats:
turbostat:
pmeter:
sched_debug:
interval: 10
default_watchdogs:
watch-oom:
watchdog:
cpufreq_governor: performance
commit: 85d88acdca34ebb3c0fe35205aa9dbd4e5ba4445
model: Westmere
memory: 6G
nr_hdd_partitions: 1
hdd_partitions:
swap_partitions:
rootfs_partition:
netconsole_port: 6667
perf-profile:
freq: 800
will-it-scale:
test: writeseek3
testbox: wsm
tbox_group: wsm
kconfig: x86_64-rhel
enqueue_time: 2015-02-12 07:25:08.935926504 +08:00
head_commit: 85d88acdca34ebb3c0fe35205aa9dbd4e5ba4445
base_commit: bfa76d49576599a4b9f9b7a71f23d73d6dcff735
branch: linux-devel/devel-hourly-2015021204
kernel: "/kernel/x86_64-rhel/85d88acdca34ebb3c0fe35205aa9dbd4e5ba4445/vmlinuz-3.19.0-wl-ath-g85d88ac"
user: lkp
queue: cyclic
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/wsm/will-it-scale/performance-writeseek3/debian-x86_64-2015-02-07.cgz/x86_64-rhel/85d88acdca34ebb3c0fe35205aa9dbd4e5ba4445/0"
job_file: "/lkp/scheduled/wsm/cyclic_will-it-scale-performance-writeseek3-x86_64-rhel-HEAD-85d88acdca34ebb3c0fe35205aa9dbd4e5ba4445-0-20150212-31440-14vodto.yaml"
dequeue_time: 2015-02-12 19:56:22.772882095 +08:00
nr_cpu: "$(nproc)"
job_state: finished
loadavg: 8.41 4.93 2.03 1/161 5649
start_time: '1423742206'
end_time: '1423742510'
version: "/lkp/lkp/.src-20150212-184024"
./runtest.py writeseek3 32 both 1 6 9 12
_______________________________________________
LKP mailing list
LKP@xxxxxxxxxxxxxxx