[LKP] [mm] 3484b2de949: -46.2% aim7.jobs-per-min
From: Huang Ying
Date: Fri Feb 27 2015 - 02:21:50 EST
FYI, we noticed the changes below on
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 3484b2de9499df23c4604a513b36f96326ae81ad ("mm: rearrange zone fields into read-only, page alloc, statistics and page reclaim lines")
The perf cpu-cycles spent in the spinlock (zone->lock) increased a lot. I suspect there is some cache ping-pong or false sharing.
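To illustrate the suspected mechanism, below is a minimal userspace C sketch (not kernel code, and not the actual struct zone layout) of cache-line ping-pong: two threads keep writing counters that fall into the same cache line, so the line bounces between cores on every update, which is the effect zone->lock would suffer if the rearrangement placed it next to fields that other CPUs write at a high rate. The 64-byte line size, the struct, and all names below are illustrative assumptions.

/* false-sharing-demo.c - userspace illustration only, not kernel code.
 * Build: gcc -O2 -pthread false-sharing-demo.c
 * Two threads increment counters that share one cache line; switching
 * the second thread to the padded slot puts each counter on its own
 * line and removes the ping-pong, which shows up directly in run time
 * (time ./a.out or perf stat ./a.out).
 */
#include <pthread.h>
#include <stdio.h>

#define ITERS 100000000UL
#define LINE  64	/* assumed cache-line size */

static struct counters {
	volatile unsigned long a;	/* written by thread 1 */
	volatile unsigned long b;	/* written by thread 2; shares a's line */
	char pad[LINE];
	volatile unsigned long b_alone;	/* alternative slot on its own line */
} c __attribute__((aligned(LINE)));

static void *bump_a(void *arg)
{
	(void)arg;
	for (unsigned long i = 0; i < ITERS; i++)
		c.a++;
	return NULL;
}

static void *bump_b(void *arg)
{
	(void)arg;
	for (unsigned long i = 0; i < ITERS; i++)
		c.b++;		/* change to c.b_alone to stop the bouncing */
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, bump_a, NULL);
	pthread_create(&t2, NULL, bump_b, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("a=%lu b=%lu b_alone=%lu\n", c.a, c.b, c.b_alone);
	return 0;
}

In the kernel the equivalent cure is to keep zone->lock (and the free lists it protects) on cache lines separate from read-mostly and statistics fields, e.g. with the existing ____cacheline_internodealigned_in_smp annotations; whether that separation was preserved here could be checked by comparing pahole output for struct zone before and after the commit.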
testbox/testcase/testparams: brickland1/aim7/performance-6000-page_test
24b7e5819ad5cbef          3484b2de9499df23c4604a513b
----------------          --------------------------
         %stddev     %change         %stddev
             \          |                \
152288 ± 0% -46.2% 81911 ± 0% aim7.jobs-per-min
237 ± 0% +85.6% 440 ± 0% aim7.time.elapsed_time
237 ± 0% +85.6% 440 ± 0% aim7.time.elapsed_time.max
25026 ± 0% +70.7% 42712 ± 0% aim7.time.system_time
2186645 ± 5% +32.0% 2885949 ± 4% aim7.time.voluntary_context_switches
4576561 ± 1% +24.9% 5715773 ± 0% aim7.time.involuntary_context_switches
695 ± 0% -3.7% 669 ± 1% aim7.time.user_time
5.37 ± 8% +303.5% 21.67 ± 2% perf-profile.cpu-cycles._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault
13.53 ± 18% +244.5% 46.61 ± 2% perf-profile.cpu-cycles._raw_spin_lock.free_pcppages_bulk.free_hot_cold_page.free_hot_cold_page_list.release_pages
7.88 ± 24% +495.3% 46.89 ± 3% perf-profile.cpu-cycles.free_pcppages_bulk.free_hot_cold_page.free_hot_cold_page_list.release_pages.free_pages_and_swap_cache
362 ± 14% +621.7% 2617 ± 8% numa-vmstat.node2.nr_inactive_anon
0.22 ± 38% +8387.7% 18.39 ± 4% perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault
7.99 ± 5% -98.5% 0.12 ± 27% perf-profile.cpu-cycles.do_brk.sys_brk.system_call_fastpath
12 Â 29% +352.8% 54 Â 38% sched_debug.cfs_rq[86]:/.tg_load_contrib
2 Â 40% +2133.3% 44 Â 27% sched_debug.cfs_rq[52]:/.nr_spread_over
2 Â 46% +1337.5% 38 Â 25% sched_debug.cfs_rq[87]:/.nr_spread_over
2 Â 40% +2533.3% 52 Â 14% sched_debug.cfs_rq[53]:/.nr_spread_over
3 Â 0% +1377.8% 44 Â 29% sched_debug.cfs_rq[48]:/.nr_spread_over
1 Â 28% +2620.0% 45 Â 30% sched_debug.cfs_rq[47]:/.nr_spread_over
4 Â 28% +1276.9% 59 Â 12% sched_debug.cfs_rq[117]:/.load
4 Â 28% +1276.9% 59 Â 12% sched_debug.cpu#117.load
2685 Â 19% +348.6% 12046 Â 48% sched_debug.cpu#116.ttwu_count
1 Â 28% +2240.0% 39 Â 10% sched_debug.cfs_rq[63]:/.nr_spread_over
5 Â 8% +912.5% 54 Â 24% sched_debug.cpu#65.load
2 Â 40% +1950.0% 41 Â 13% sched_debug.cfs_rq[113]:/.nr_spread_over
1508 Â 11% +581.1% 10271 Â 6% numa-meminfo.node2.Inactive(anon)
2 Â 35% +1512.5% 43 Â 21% sched_debug.cfs_rq[112]:/.nr_spread_over
5 Â 8% +912.5% 54 Â 24% sched_debug.cfs_rq[65]:/.load
6 Â 41% +2036.8% 135 Â 20% sched_debug.cpu#0.load
1 Â 28% +3220.0% 55 Â 22% sched_debug.cfs_rq[66]:/.nr_spread_over
5 Â 48% +1093.3% 59 Â 23% sched_debug.cfs_rq[0]:/.nr_spread_over
0 Â 0% +Inf% 1 Â 0% sched_debug.cfs_rq[0]:/.nr_running
6 Â 41% +2036.8% 135 Â 20% sched_debug.cfs_rq[0]:/.load
14 Â 37% +563.6% 97 Â 43% sched_debug.cfs_rq[0]:/.blocked_load_avg
20 Â 20% +437.7% 109 Â 40% sched_debug.cfs_rq[0]:/.tg_load_contrib
5 Â 29% +1752.9% 105 Â 48% sched_debug.cpu#1.load
7 Â 34% +672.7% 56 Â 7% sched_debug.cfs_rq[110]:/.blocked_load_avg
5 Â 29% +1752.9% 105 Â 48% sched_debug.cfs_rq[1]:/.load
5 Â 28% +1286.7% 69 Â 32% sched_debug.cpu#2.load
1 Â 35% +2975.0% 41 Â 8% sched_debug.cfs_rq[68]:/.nr_spread_over
2 Â 20% +1685.7% 41 Â 15% sched_debug.cfs_rq[109]:/.nr_spread_over
16 Â 49% +446.0% 91 Â 32% sched_debug.cfs_rq[68]:/.tg_load_contrib
5 Â 28% +1280.0% 69 Â 33% sched_debug.cfs_rq[2]:/.load
5 Â 16% +860.0% 48 Â 35% sched_debug.cpu#3.load
8 Â 34% +428.0% 44 Â 25% sched_debug.cfs_rq[22]:/.blocked_load_avg
5 Â 31% +1912.5% 107 Â 27% sched_debug.cfs_rq[39]:/.load
5 Â 31% +1912.5% 107 Â 27% sched_debug.cpu#39.load
5 Â 16% +680.0% 39 Â 7% sched_debug.cfs_rq[3]:/.load
4 Â 20% +1933.3% 81 Â 14% sched_debug.cfs_rq[38]:/.load
0.00 Â 0% +7.2e+14% 7171133.21 Â 29% sched_debug.cfs_rq[38]:/.max_vruntime
0.00 Â 0% +7.2e+14% 7171133.21 Â 29% sched_debug.cfs_rq[38]:/.MIN_vruntime
4 Â 20% +1933.3% 81 Â 14% sched_debug.cpu#38.load
6 Â 37% +560.0% 44 Â 9% sched_debug.cfs_rq[6]:/.blocked_load_avg
0 Â 0% +Inf% 1 Â 0% sched_debug.cfs_rq[33]:/.nr_running
2 Â 46% +2087.5% 58 Â 10% sched_debug.cfs_rq[77]:/.nr_spread_over
4 Â 10% +1514.3% 75 Â 49% sched_debug.cfs_rq[101]:/.load
0.00 Â 0% +1e+15% 10289272.67 Â 11% sched_debug.cfs_rq[101]:/.max_vruntime
0.00 Â 0% +1e+15% 10289272.67 Â 11% sched_debug.cfs_rq[101]:/.MIN_vruntime
1 Â 35% +3600.0% 49 Â 22% sched_debug.cfs_rq[8]:/.nr_spread_over
5 Â 44% +1064.7% 66 Â 37% sched_debug.cpu#9.load
2133469 Â 15% +393.5% 10527887 Â 28% sched_debug.cfs_rq[100]:/.max_vruntime
2133469 Â 15% +393.5% 10527887 Â 28% sched_debug.cfs_rq[100]:/.MIN_vruntime
2 Â 40% +1733.3% 36 Â 8% sched_debug.cfs_rq[78]:/.nr_spread_over
5 Â 44% +1064.7% 66 Â 37% sched_debug.cfs_rq[9]:/.load
1 Â 28% +3480.0% 59 Â 38% sched_debug.cfs_rq[79]:/.nr_spread_over
9 Â 22% +442.9% 50 Â 35% sched_debug.cfs_rq[30]:/.tg_load_contrib
1 Â 0% +3000.0% 31 Â 28% sched_debug.cfs_rq[30]:/.nr_spread_over
5 Â 16% +826.7% 46 Â 33% sched_debug.cfs_rq[29]:/.load
3 Â 46% +972.7% 39 Â 34% sched_debug.cfs_rq[29]:/.nr_spread_over
4 Â 35% +3491.7% 143 Â 42% sched_debug.cfs_rq[97]:/.load
18 Â 26% +710.9% 148 Â 26% sched_debug.cpu#82.load
5 Â 8% +768.8% 46 Â 33% sched_debug.cpu#29.load
4 Â 35% +3491.7% 143 Â 42% sched_debug.cpu#97.load
7 Â 39% +840.9% 69 Â 32% sched_debug.cfs_rq[96]:/.load
7 Â 11% +681.0% 54 Â 25% sched_debug.cfs_rq[28]:/.load
2 Â 20% +1557.1% 38 Â 6% sched_debug.cfs_rq[13]:/.nr_spread_over
7 Â 11% +681.0% 54 Â 25% sched_debug.cpu#28.load
7 Â 39% +840.9% 69 Â 32% sched_debug.cpu#96.load
0.00 Â 0% +5.3e+14% 5307632.34 Â 17% sched_debug.cfs_rq[14]:/.MIN_vruntime
0.00 Â 0% +5.3e+14% 5307632.57 Â 17% sched_debug.cfs_rq[14]:/.max_vruntime
18 Â 26% +765.5% 158 Â 31% sched_debug.cfs_rq[82]:/.load
2 Â 20% +1257.1% 31 Â 19% sched_debug.cfs_rq[15]:/.nr_spread_over
1 Â 0% +5866.7% 59 Â 41% sched_debug.cfs_rq[26]:/.nr_spread_over
6 Â 26% +1400.0% 95 Â 24% sched_debug.cpu#84.load
6 Â 18% +1500.0% 106 Â 35% sched_debug.cfs_rq[25]:/.load
6 Â 18% +1500.0% 106 Â 35% sched_debug.cpu#25.load
6 Â 26% +1400.0% 95 Â 24% sched_debug.cfs_rq[84]:/.load
7 Â 30% +847.6% 66 Â 36% sched_debug.cfs_rq[24]:/.load
6 Â 40% +583.3% 41 Â 22% sched_debug.cfs_rq[24]:/.nr_spread_over
6 Â 29% +647.4% 47 Â 24% sched_debug.cfs_rq[17]:/.nr_spread_over
1 Â 28% +2460.0% 42 Â 20% sched_debug.cfs_rq[90]:/.nr_spread_over
7 Â 30% +847.6% 66 Â 36% sched_debug.cpu#24.load
2 Â 20% +1857.1% 45 Â 28% sched_debug.cfs_rq[23]:/.nr_spread_over
2673866 Â 23% +273.9% 9996605 Â 30% sched_debug.cfs_rq[83]:/.max_vruntime
2673866 Â 23% +273.9% 9996605 Â 30% sched_debug.cfs_rq[83]:/.MIN_vruntime
6 Â 13% +372.2% 28 Â 14% sched_debug.cfs_rq[1]:/.nr_spread_over
11 Â 40% +408.8% 57 Â 5% sched_debug.cfs_rq[52]:/.blocked_load_avg
8.75 Â 23% +440.4% 47.26 Â 3% perf-profile.cpu-cycles.free_hot_cold_page.free_hot_cold_page_list.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free
13 Â 22% +387.2% 63 Â 5% sched_debug.cfs_rq[110]:/.tg_load_contrib
12 Â 23% +310.8% 50 Â 6% sched_debug.cfs_rq[6]:/.tg_load_contrib
10.03 Â 21% +375.4% 47.70 Â 3% perf-profile.cpu-cycles.free_hot_cold_page_list.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu
1949810 Â 43% +367.8% 9120805 Â 21% sched_debug.cfs_rq[25]:/.MIN_vruntime
1949810 Â 43% +367.8% 9120805 Â 21% sched_debug.cfs_rq[25]:/.max_vruntime
2575 Â 47% +233.4% 8586 Â 18% sched_debug.cpu#42.ttwu_local
0.20 Â 7% +345.0% 0.89 Â 8% perf-profile.cpu-cycles._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__pmd_alloc
8 Â 36% +162.5% 21 Â 40% sched_debug.cfs_rq[96]:/.runnable_load_avg
8 Â 27% +146.2% 21 Â 37% sched_debug.cpu#96.cpu_load[0]
23 Â 46% +147.9% 58 Â 36% sched_debug.cfs_rq[90]:/.tg_load_contrib
2042 Â 16% +282.6% 7813 Â 48% sched_debug.cpu#116.ttwu_local
4017 Â 10% +247.4% 13959 Â 27% sched_debug.cpu#87.ttwu_count
17 Â 30% +258.5% 63 Â 4% sched_debug.cfs_rq[52]:/.tg_load_contrib
14 Â 18% +274.4% 53 Â 25% sched_debug.cfs_rq[22]:/.tg_load_contrib
1.96 Â 29% +169.8% 5.30 Â 13% perf-profile.cpu-cycles.cpu_stopper_thread.smpboot_thread_fn.kthread.ret_from_fork
1.98 Â 29% +167.7% 5.30 Â 13% perf-profile.cpu-cycles.smpboot_thread_fn.kthread.ret_from_fork
1.99 Â 29% +166.9% 5.30 Â 13% perf-profile.cpu-cycles.kthread.ret_from_fork
1.99 Â 29% +166.9% 5.30 Â 13% perf-profile.cpu-cycles.ret_from_fork
7 Â 17% +195.5% 21 Â 18% sched_debug.cfs_rq[84]:/.runnable_load_avg
2608707 Â 46% +217.3% 8276853 Â 42% sched_debug.cfs_rq[73]:/.max_vruntime
2608707 Â 46% +217.3% 8276853 Â 42% sched_debug.cfs_rq[73]:/.MIN_vruntime
1.74 Â 30% +183.7% 4.95 Â 14% perf-profile.cpu-cycles.multi_cpu_stop.cpu_stopper_thread.smpboot_thread_fn.kthread.ret_from_fork
11 Â 31% +262.9% 42 Â 39% sched_debug.cfs_rq[88]:/.tg_load_contrib
18 Â 46% +201.9% 54 Â 34% sched_debug.cfs_rq[118]:/.tg_load_contrib
4472 Â 43% +95.8% 8756 Â 25% sched_debug.cpu#98.ttwu_count
5 Â 16% +213.3% 15 Â 18% sched_debug.cpu#0.cpu_load[1]
24 Â 16% +243.8% 83 Â 17% sched_debug.cfs_rq[76]:/.nr_spread_over
2267 Â 25% +215.1% 7145 Â 23% sched_debug.cpu#91.ttwu_local
2000856 Â 40% +218.4% 6369750 Â 27% sched_debug.cfs_rq[52]:/.max_vruntime
2000856 Â 40% +218.4% 6369750 Â 27% sched_debug.cfs_rq[52]:/.MIN_vruntime
2288564 Â 29% +169.4% 6164335 Â 27% sched_debug.cfs_rq[59]:/.MIN_vruntime
2288564 Â 29% +169.4% 6164335 Â 27% sched_debug.cfs_rq[59]:/.max_vruntime
13 Â 43% +269.2% 48 Â 13% sched_debug.cfs_rq[10]:/.blocked_load_avg
1796249 Â 30% -56.7% 777468 Â 11% sched_debug.cpu#30.max_idle_balance_cost
4093 Â 28% +228.7% 13457 Â 34% sched_debug.cpu#91.ttwu_count
8.21 Â 25% +179.4% 22.94 Â 2% perf-profile.cpu-cycles.alloc_pages_vma.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
2250 Â 36% +238.9% 7625 Â 32% sched_debug.cpu#113.ttwu_local
19 Â 37% +159.3% 51 Â 37% sched_debug.cfs_rq[117]:/.blocked_load_avg
8540 Â 22% +132.3% 19837 Â 22% sched_debug.cpu#16.ttwu_count
4596 Â 20% +156.6% 11797 Â 13% sched_debug.cpu#99.ttwu_count
22 Â 36% +109.1% 46 Â 29% sched_debug.cfs_rq[81]:/.tg_load_contrib
24 Â 34% +140.3% 57 Â 32% sched_debug.cfs_rq[117]:/.tg_load_contrib
8954 Â 30% -49.9% 4486 Â 26% numa-meminfo.node3.Mapped
3637 Â 17% +187.0% 10440 Â 27% sched_debug.cpu#73.ttwu_count
5 Â 8% +225.0% 17 Â 31% sched_debug.cpu#0.cpu_load[2]
14 Â 46% -64.3% 5 Â 16% sched_debug.cfs_rq[46]:/.runnable_load_avg
15 Â 24% +160.0% 39 Â 20% sched_debug.cfs_rq[58]:/.tg_load_contrib
2162 Â 31% -46.7% 1152 Â 28% numa-vmstat.node3.nr_mapped
4443 Â 34% +146.0% 10933 Â 6% sched_debug.cpu#30.ttwu_local
6492 Â 22% +112.6% 13800 Â 35% sched_debug.cpu#37.ttwu_local
8100 Â 35% +110.5% 17049 Â 2% sched_debug.cpu#30.ttwu_count
7 Â 30% +195.2% 20 Â 46% sched_debug.cfs_rq[82]:/.runnable_load_avg
18 Â 29% +212.5% 58 Â 18% sched_debug.cfs_rq[10]:/.tg_load_contrib
5889 Â 49% +85.7% 10934 Â 18% sched_debug.cpu#112.ttwu_count
31004066 Â 12% +133.0% 72248414 Â 8% cpuidle.C1E-IVT-4S.time
32 Â 49% +118.6% 70 Â 22% sched_debug.cfs_rq[116]:/.tg_load_contrib
1326640 Â 27% +139.7% 3180074 Â 41% sched_debug.cfs_rq[71]:/.spread0
3474 Â 29% +107.2% 7198 Â 25% sched_debug.cpu#22.ttwu_local
10.20 Â 22% +137.8% 24.27 Â 3% perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault.do_page_fault
9557 Â 21% +109.7% 20040 Â 31% sched_debug.cpu#3.ttwu_count
10 Â 32% -53.3% 4 Â 20% sched_debug.cpu#46.cpu_load[2]
3289 Â 26% +114.8% 7064 Â 16% sched_debug.cpu#105.ttwu_local
5542 Â 34% +107.9% 11524 Â 31% sched_debug.cpu#16.ttwu_local
470 Â 7% +119.1% 1030 Â 39% sched_debug.cpu#42.sched_goidle
7359 Â 20% +115.1% 15831 Â 11% sched_debug.cpu#45.ttwu_count
1215 Â 11% +99.3% 2421 Â 28% sched_debug.cpu#119.nr_uninterruptible
8 Â 36% +145.8% 19 Â 46% sched_debug.cpu#74.cpu_load[1]
7 Â 32% +156.5% 19 Â 46% sched_debug.cpu#74.cpu_load[0]
6066 Â 47% +142.4% 14706 Â 22% sched_debug.cpu#42.ttwu_count
2633 Â 19% +219.9% 8426 Â 46% sched_debug.cpu#87.ttwu_local
10 Â 31% -50.0% 5 Â 8% sched_debug.cpu#46.cpu_load[4]
10 Â 31% -53.1% 5 Â 16% sched_debug.cpu#46.cpu_load[3]
5 Â 16% +133.3% 11 Â 31% sched_debug.cpu#0.cpu_load[0]
5 Â 16% +133.3% 11 Â 31% sched_debug.cfs_rq[0]:/.runnable_load_avg
24 Â 27% +113.7% 52 Â 40% sched_debug.cfs_rq[109]:/.tg_load_contrib
8 Â 24% +120.0% 18 Â 35% sched_debug.cpu#96.cpu_load[1]
1807256 Â 40% +167.0% 4824533 Â 45% sched_debug.cfs_rq[107]:/.MIN_vruntime
1807256 Â 40% +167.0% 4824533 Â 45% sched_debug.cfs_rq[107]:/.max_vruntime
1556425 Â 5% -58.0% 653884 Â 2% sched_debug.cpu#9.max_idle_balance_cost
2624527 Â 26% -48.7% 1347263 Â 10% sched_debug.cpu#30.avg_idle
5750 Â 41% +75.6% 10099 Â 20% sched_debug.cpu#41.ttwu_local
3 Â 25% +100.0% 7 Â 17% sched_debug.cpu#90.cpu_load[4]
188 Â 9% +138.4% 449 Â 10% sched_debug.cpu#67.sched_goidle
1123782 Â 43% +169.5% 3028424 Â 3% sched_debug.cfs_rq[13]:/.spread0
3297 Â 11% +118.8% 7213 Â 17% sched_debug.cpu#99.ttwu_local
4304 Â 25% +125.4% 9703 Â 23% sched_debug.cpu#38.ttwu_local
7 Â 17% +122.7% 16 Â 42% sched_debug.cpu#84.cpu_load[0]
8 Â 31% +112.0% 17 Â 35% sched_debug.cpu#98.nr_running
7 Â 17% +122.7% 16 Â 42% sched_debug.cpu#84.cpu_load[1]
6489 Â 26% +164.1% 17137 Â 10% sched_debug.cpu#12.ttwu_count
4901 Â 43% +80.5% 8847 Â 30% sched_debug.cpu#56.ttwu_count
1581915 Â 26% -55.6% 702853 Â 11% sched_debug.cpu#42.max_idle_balance_cost
8 Â 14% +108.0% 17 Â 9% sched_debug.cpu#102.nr_running
1209 Â 41% +170.8% 3276 Â 18% sched_debug.cpu#89.nr_uninterruptible
8743 Â 36% +96.9% 17218 Â 14% sched_debug.cpu#68.ttwu_count
263566 Â 43% -49.9% 132010 Â 24% numa-meminfo.node1.FilePages
65848 Â 43% -49.9% 32982 Â 24% numa-vmstat.node1.nr_file_pages
4262 Â 32% +111.3% 9005 Â 17% sched_debug.cpu#36.ttwu_local
3728 Â 22% +146.3% 9185 Â 22% sched_debug.cpu#12.ttwu_local
548 Â 28% +74.1% 954 Â 28% sched_debug.cpu#12.sched_goidle
4998 Â 30% +114.9% 10738 Â 27% sched_debug.cpu#118.ttwu_count
4858 Â 44% +82.5% 8867 Â 20% sched_debug.cpu#77.ttwu_local
7965 Â 35% -51.7% 3850 Â 5% numa-meminfo.node0.Mapped
7 Â 30% +123.8% 15 Â 48% sched_debug.cpu#82.cpu_load[1]
6 Â 25% +155.0% 17 Â 37% sched_debug.cpu#0.cpu_load[3]
7 Â 30% +100.0% 14 Â 44% sched_debug.cpu#82.cpu_load[2]
7 Â 40% +117.4% 16 Â 31% sched_debug.cpu#0.cpu_load[4]
6 Â 25% +110.0% 14 Â 44% sched_debug.cpu#82.cpu_load[4]
7724 Â 35% +96.7% 15196 Â 24% sched_debug.cpu#93.ttwu_count
222 Â 14% +86.7% 415 Â 26% sched_debug.cpu#118.sched_goidle
21.77 Â 10% +122.1% 48.34 Â 3% perf-profile.cpu-cycles.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu.unmap_region
8 Â 34% +136.0% 19 Â 46% sched_debug.cpu#74.cpu_load[2]
4035 Â 31% +83.7% 7412 Â 23% sched_debug.cpu#26.ttwu_local
22.13 Â 10% +118.8% 48.43 Â 3% perf-profile.cpu-cycles.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu.unmap_region.do_munmap
22.20 Â 10% +118.2% 48.43 Â 3% perf-profile.cpu-cycles.tlb_flush_mmu_free.tlb_finish_mmu.unmap_region.do_munmap.sys_brk
22.30 Â 10% +117.6% 48.52 Â 3% perf-profile.cpu-cycles.tlb_finish_mmu.unmap_region.do_munmap.sys_brk.system_call_fastpath
2562 Â 27% +112.5% 5444 Â 25% sched_debug.cpu#73.ttwu_local
1505413 Â 27% -44.5% 835772 Â 17% sched_debug.cpu#33.max_idle_balance_cost
9000 Â 36% +106.2% 18563 Â 32% sched_debug.cpu#31.ttwu_count
232 Â 41% +83.8% 427 Â 5% sched_debug.cpu#89.sched_goidle
0.55 Â 24% +97.0% 1.08 Â 6% perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_pages_current.__pmd_alloc.handle_mm_fault.__do_page_fault
31 Â 33% +56.4% 49 Â 30% sched_debug.cfs_rq[56]:/.blocked_load_avg
7036 Â 28% -42.0% 4078 Â 15% sched_debug.cpu#65.ttwu_local
57267 Â 9% +110.9% 120775 Â 5% cpuidle.C6-IVT-4S.usage
8107 Â 43% +101.0% 16298 Â 15% sched_debug.cpu#9.ttwu_count
477 Â 14% +95.7% 935 Â 30% sched_debug.cpu#40.sched_goidle
2534332 Â 21% -49.0% 1291932 Â 11% sched_debug.cpu#42.avg_idle
9820 Â 39% +75.4% 17227 Â 21% sched_debug.cpu#41.ttwu_count
6719 Â 21% +149.5% 16768 Â 37% sched_debug.cpu#38.ttwu_count
5 Â 23% +112.5% 11 Â 29% sched_debug.cpu#22.cpu_load[4]
9 Â 36% -50.0% 4 Â 20% sched_debug.cpu#46.cpu_load[1]
4 Â 47% +61.5% 7 Â 20% sched_debug.cpu#44.cpu_load[4]
4 Â 28% +138.5% 10 Â 38% sched_debug.cfs_rq[83]:/.runnable_load_avg
7995 Â 30% -51.6% 3867 Â 7% numa-meminfo.node1.Mapped
2745 Â 36% +113.2% 5852 Â 24% sched_debug.cpu#56.ttwu_local
7737 Â 31% +108.3% 16115 Â 28% sched_debug.cpu#36.ttwu_count
3128964 Â 47% +153.5% 7932574 Â 15% sched_debug.cfs_rq[69]:/.MIN_vruntime
3128964 Â 47% +153.5% 7932574 Â 15% sched_debug.cfs_rq[69]:/.max_vruntime
1344853 Â 14% -50.5% 665504 Â 4% sched_debug.cpu#16.max_idle_balance_cost
8106 Â 28% +67.7% 13597 Â 17% sched_debug.cpu#77.ttwu_count
6142 Â 40% +105.8% 12643 Â 21% sched_debug.cpu#105.ttwu_count
7961 Â 34% -51.9% 3827 Â 8% sched_debug.cpu#75.ttwu_local
7398 Â 19% +95.3% 14449 Â 32% sched_debug.cpu#26.ttwu_count
1457355 Â 10% -49.2% 740916 Â 15% sched_debug.cpu#76.max_idle_balance_cost
14560 Â 31% +90.6% 27746 Â 7% proc-vmstat.pgactivate
3687 Â 37% +114.8% 7919 Â 27% sched_debug.cpu#48.ttwu_local
1928296 Â 43% +161.5% 5042966 Â 33% sched_debug.cfs_rq[37]:/.MIN_vruntime
1928296 Â 43% +161.5% 5042966 Â 33% sched_debug.cfs_rq[37]:/.max_vruntime
2.57 Â 0% -48.4% 1.33 Â 3% turbostat.Pkg%pc6
1353942 Â 26% -45.8% 734298 Â 6% sched_debug.cpu#46.max_idle_balance_cost
233 Â 23% +83.1% 426 Â 6% sched_debug.cpu#66.sched_goidle
36 Â 29% +52.8% 55 Â 28% sched_debug.cfs_rq[56]:/.tg_load_contrib
2243220 Â 23% -39.4% 1359766 Â 10% sched_debug.cpu#33.avg_idle
1397744 Â 19% -45.1% 766710 Â 9% sched_debug.cpu#51.max_idle_balance_cost
9 Â 22% +82.1% 17 Â 24% sched_debug.cpu#74.nr_running
10 Â 25% +48.4% 15 Â 25% sched_debug.cpu#99.nr_running
8 Â 24% +88.0% 15 Â 31% sched_debug.cpu#96.cpu_load[2]
9 Â 15% +70.4% 15 Â 13% sched_debug.cpu#42.nr_running
101062 Â 3% +83.6% 185547 Â 1% sched_debug.cfs_rq[80]:/.exec_clock
102313 Â 2% +81.4% 185589 Â 1% sched_debug.cfs_rq[61]:/.exec_clock
101160 Â 2% +83.1% 185178 Â 0% sched_debug.cfs_rq[67]:/.exec_clock
1335294 Â 21% -47.8% 697529 Â 15% sched_debug.cpu#85.max_idle_balance_cost
101420 Â 2% +82.6% 185195 Â 1% sched_debug.cfs_rq[38]:/.exec_clock
205 Â 13% +86.2% 383 Â 15% sched_debug.cpu#99.sched_goidle
101218 Â 2% +82.5% 184755 Â 2% sched_debug.cfs_rq[71]:/.exec_clock
101454 Â 2% +83.9% 186548 Â 0% sched_debug.cfs_rq[104]:/.exec_clock
101437 Â 2% +81.7% 184331 Â 1% sched_debug.cfs_rq[82]:/.exec_clock
37386253 Â 13% +110.4% 78643981 Â 10% cpuidle.C1-IVT-4S.time
101153 Â 2% +81.3% 183428 Â 1% sched_debug.cfs_rq[60]:/.exec_clock
237 Â 0% +85.6% 440 Â 0% time.elapsed_time.max
237 Â 0% +85.6% 440 Â 0% time.elapsed_time
102196 Â 2% +82.1% 186102 Â 1% sched_debug.cfs_rq[94]:/.exec_clock
27203013 Â 5% +81.5% 49381527 Â 1% sched_debug.cfs_rq[78]:/.min_vruntime
101863 Â 2% +81.5% 184893 Â 2% sched_debug.cfs_rq[72]:/.exec_clock
101737 Â 2% +82.4% 185546 Â 0% sched_debug.cfs_rq[98]:/.exec_clock
12 Â 18% -33.3% 8 Â 27% sched_debug.cpu#61.cpu_load[4]
7 Â 32% +113.0% 16 Â 33% sched_debug.cpu#100.nr_running
12 Â 11% -33.3% 8 Â 27% sched_debug.cpu#61.cpu_load[3]
102912 Â 2% +82.2% 187552 Â 0% sched_debug.cfs_rq[91]:/.exec_clock
101227 Â 1% +83.4% 185699 Â 1% sched_debug.cfs_rq[90]:/.exec_clock
101261 Â 2% +82.0% 184266 Â 1% sched_debug.cfs_rq[73]:/.exec_clock
101233 Â 2% +82.0% 184281 Â 1% sched_debug.cfs_rq[14]:/.exec_clock
100721 Â 2% +84.1% 185432 Â 2% sched_debug.cfs_rq[65]:/.exec_clock
100943 Â 2% +82.7% 184431 Â 2% sched_debug.cfs_rq[12]:/.exec_clock
101944 Â 2% +82.5% 186020 Â 0% sched_debug.cfs_rq[100]:/.exec_clock
1892 Â 32% -48.4% 976 Â 3% numa-vmstat.node0.nr_mapped
101427 Â 2% +81.9% 184495 Â 1% sched_debug.cfs_rq[66]:/.exec_clock
101615 Â 2% +81.7% 184592 Â 1% sched_debug.cfs_rq[70]:/.exec_clock
101534 Â 2% +82.5% 185266 Â 0% sched_debug.cfs_rq[44]:/.exec_clock
101957 Â 2% +81.3% 184884 Â 0% sched_debug.cfs_rq[55]:/.exec_clock
26799107 Â 4% +80.0% 48245402 Â 5% sched_debug.cfs_rq[57]:/.min_vruntime
101400 Â 2% +82.7% 185228 Â 0% sched_debug.cfs_rq[81]:/.exec_clock
101306 Â 2% +81.2% 183566 Â 1% sched_debug.cfs_rq[11]:/.exec_clock
101824 Â 2% +80.8% 184114 Â 1% sched_debug.cfs_rq[63]:/.exec_clock
101564 Â 2% +81.0% 183830 Â 1% sched_debug.cfs_rq[77]:/.exec_clock
26553407 Â 4% +82.3% 48417772 Â 4% sched_debug.cfs_rq[21]:/.min_vruntime
101847 Â 2% +82.4% 185725 Â 0% sched_debug.cfs_rq[103]:/.exec_clock
101060 Â 2% +82.1% 184007 Â 1% sched_debug.cfs_rq[25]:/.exec_clock
101793 Â 2% +81.2% 184462 Â 1% sched_debug.cfs_rq[57]:/.exec_clock
100971 Â 2% +81.3% 183037 Â 0% sched_debug.cfs_rq[39]:/.exec_clock
32 Â 35% +125.0% 72 Â 15% sched_debug.cfs_rq[63]:/.tg_load_contrib
101684 Â 2% +81.8% 184905 Â 1% sched_debug.cfs_rq[85]:/.exec_clock
1157 Â 26% +58.7% 1836 Â 28% sched_debug.cpu#15.sched_goidle
101440 Â 2% +80.4% 182973 Â 1% sched_debug.cfs_rq[13]:/.exec_clock
100286 Â 2% +81.4% 181960 Â 2% sched_debug.cfs_rq[3]:/.exec_clock
100725 Â 1% +81.8% 183083 Â 1% sched_debug.cfs_rq[20]:/.exec_clock
1256666 Â 17% -43.8% 706338 Â 12% sched_debug.cpu#10.max_idle_balance_cost
9153 Â 31% +89.8% 17374 Â 15% sched_debug.cpu#69.ttwu_count
101750 Â 2% +81.6% 184801 Â 1% sched_debug.cfs_rq[29]:/.exec_clock
101860 Â 2% +80.0% 183309 Â 1% sched_debug.cfs_rq[62]:/.exec_clock
102080 Â 2% +80.7% 184434 Â 1% sched_debug.cfs_rq[56]:/.exec_clock
101290 Â 2% +81.5% 183795 Â 1% sched_debug.cfs_rq[75]:/.exec_clock
101514 Â 2% +82.0% 184723 Â 0% sched_debug.cfs_rq[99]:/.exec_clock
101599 Â 2% +80.9% 183823 Â 1% sched_debug.cfs_rq[7]:/.exec_clock
100950 Â 2% +81.2% 182964 Â 0% sched_debug.cfs_rq[21]:/.exec_clock
100964 Â 3% +81.9% 183605 Â 0% sched_debug.cfs_rq[54]:/.exec_clock
101815 Â 2% +81.6% 184860 Â 0% sched_debug.cfs_rq[43]:/.exec_clock
101743 Â 2% +82.4% 185570 Â 0% sched_debug.cfs_rq[59]:/.exec_clock
100726 Â 2% +82.1% 183459 Â 1% sched_debug.cfs_rq[24]:/.exec_clock
25946174 Â 4% +80.6% 46856738 Â 3% sched_debug.cfs_rq[15]:/.min_vruntime
101074 Â 2% +81.4% 183383 Â 0% sched_debug.cfs_rq[37]:/.exec_clock
101965 Â 2% +81.5% 185029 Â 0% sched_debug.cfs_rq[96]:/.exec_clock
101774 Â 2% +80.6% 183808 Â 1% sched_debug.cfs_rq[28]:/.exec_clock
101457 Â 2% +81.4% 184061 Â 1% sched_debug.cfs_rq[26]:/.exec_clock
101304 Â 1% +81.2% 183554 Â 1% sched_debug.cfs_rq[10]:/.exec_clock
101665 Â 2% +80.4% 183417 Â 1% sched_debug.cfs_rq[23]:/.exec_clock
101851 Â 2% +80.3% 183636 Â 1% sched_debug.cfs_rq[69]:/.exec_clock
101135 Â 2% +80.4% 182419 Â 0% sched_debug.cfs_rq[35]:/.exec_clock
101428 Â 2% +82.0% 184635 Â 1% sched_debug.cfs_rq[102]:/.exec_clock
5 Â 22% +76.5% 10 Â 21% sched_debug.cpu#13.cpu_load[4]
101580 Â 2% +79.4% 182280 Â 2% sched_debug.cfs_rq[97]:/.exec_clock
102153 Â 2% +81.8% 185698 Â 0% sched_debug.cfs_rq[95]:/.exec_clock
100891 Â 2% +78.8% 180412 Â 1% sched_debug.cfs_rq[47]:/.exec_clock
101592 Â 2% +80.5% 183385 Â 0% sched_debug.cfs_rq[41]:/.exec_clock
27458570 Â 5% +79.8% 49375221 Â 4% sched_debug.cfs_rq[117]:/.min_vruntime
101926 Â 2% +81.5% 185027 Â 1% sched_debug.cfs_rq[117]:/.exec_clock
102134 Â 2% +79.5% 183374 Â 2% sched_debug.cfs_rq[68]:/.exec_clock
101045 Â 2% +79.1% 181006 Â 2% sched_debug.cfs_rq[4]:/.exec_clock
27390900 Â 2% +78.1% 48788289 Â 3% sched_debug.cfs_rq[73]:/.min_vruntime
101288 Â 2% +81.5% 183825 Â 1% sched_debug.cfs_rq[22]:/.exec_clock
101820 Â 2% +80.3% 183600 Â 1% sched_debug.cfs_rq[51]:/.exec_clock
101850 Â 2% +79.9% 183196 Â 1% sched_debug.cfs_rq[58]:/.exec_clock
101010 Â 2% +80.1% 181959 Â 1% sched_debug.cfs_rq[5]:/.exec_clock
101383 Â 2% +80.8% 183257 Â 2% sched_debug.cfs_rq[89]:/.exec_clock
102604 Â 2% +79.8% 184489 Â 1% sched_debug.cfs_rq[93]:/.exec_clock
102435 Â 2% +79.8% 184173 Â 1% sched_debug.cfs_rq[64]:/.exec_clock
101445 Â 2% +79.9% 182458 Â 0% sched_debug.cfs_rq[83]:/.exec_clock
101568 Â 1% +81.6% 184421 Â 0% sched_debug.cfs_rq[115]:/.exec_clock
26692197 Â 1% +83.6% 49006366 Â 2% sched_debug.cfs_rq[88]:/.min_vruntime
101323 Â 2% +80.9% 183292 Â 1% sched_debug.cfs_rq[78]:/.exec_clock
101514 Â 2% +80.7% 183420 Â 1% sched_debug.cfs_rq[86]:/.exec_clock
3507 Â 8% +135.7% 8269 Â 38% sched_debug.cpu#47.ttwu_local
101437 Â 2% +79.0% 181547 Â 1% sched_debug.cfs_rq[50]:/.exec_clock
102073 Â 2% +79.7% 183393 Â 0% sched_debug.cfs_rq[53]:/.exec_clock
101703 Â 2% +81.2% 184322 Â 1% sched_debug.cfs_rq[84]:/.exec_clock
101396 Â 2% +78.8% 181320 Â 0% sched_debug.cfs_rq[49]:/.exec_clock
102099 Â 2% +79.7% 183447 Â 0% sched_debug.cfs_rq[9]:/.exec_clock
102052 Â 2% +80.3% 183978 Â 0% sched_debug.cfs_rq[118]:/.exec_clock
1298126 Â 17% -44.2% 724667 Â 8% sched_debug.cpu#100.max_idle_balance_cost
101645 Â 2% +79.4% 182355 Â 0% sched_debug.cfs_rq[52]:/.exec_clock
100805 Â 3% +83.0% 184480 Â 0% sched_debug.cfs_rq[119]:/.exec_clock
102045 Â 2% +79.8% 183508 Â 0% sched_debug.cfs_rq[101]:/.exec_clock
101501 Â 2% +80.1% 182840 Â 0% sched_debug.cfs_rq[105]:/.exec_clock
101696 Â 2% +80.6% 183656 Â 1% sched_debug.cfs_rq[87]:/.exec_clock
101080 Â 2% +82.1% 184117 Â 0% sched_debug.cfs_rq[40]:/.exec_clock
102262 Â 1% +80.8% 184856 Â 0% sched_debug.cfs_rq[108]:/.exec_clock
101505 Â 2% +80.6% 183340 Â 0% sched_debug.cfs_rq[36]:/.exec_clock
101534 Â 2% +80.4% 183130 Â 1% sched_debug.cfs_rq[42]:/.exec_clock
101034 Â 2% +79.3% 181122 Â 0% sched_debug.cfs_rq[34]:/.exec_clock
102179 Â 2% +78.8% 182709 Â 2% sched_debug.cfs_rq[92]:/.exec_clock
101151 Â 2% +79.2% 181268 Â 1% sched_debug.cfs_rq[6]:/.exec_clock
102542 Â 1% +80.0% 184613 Â 1% sched_debug.cfs_rq[109]:/.exec_clock
101684 Â 1% +80.5% 183545 Â 1% sched_debug.cfs_rq[74]:/.exec_clock
30648 Â 12% +50.7% 46198 Â 15% sched_debug.cpu#27.sched_count
100820 Â 2% +78.1% 179548 Â 1% sched_debug.cfs_rq[31]:/.exec_clock
25 Â 43% +86.8% 47 Â 16% sched_debug.cfs_rq[37]:/.tg_load_contrib
102489 Â 2% +80.5% 185024 Â 0% sched_debug.cfs_rq[112]:/.exec_clock
101949 Â 2% +80.3% 183856 Â 1% sched_debug.cfs_rq[79]:/.exec_clock
99315 Â 2% +77.4% 176144 Â 1% sched_debug.cfs_rq[15]:/.exec_clock
101607 Â 2% +77.9% 180741 Â 1% sched_debug.cfs_rq[48]:/.exec_clock
100838 Â 2% +78.0% 179484 Â 0% sched_debug.cfs_rq[32]:/.exec_clock
26621023 Â 4% +84.3% 49059445 Â 4% sched_debug.cfs_rq[84]:/.min_vruntime
100148 Â 0% +83.5% 183754 Â 1% sched_debug.cfs_rq[88]:/.exec_clock
101809 Â 1% +80.3% 183548 Â 0% sched_debug.cfs_rq[113]:/.exec_clock
27438729 Â 3% +81.2% 49726216 Â 4% sched_debug.cfs_rq[81]:/.min_vruntime
26871931 Â 4% +78.3% 47914921 Â 1% sched_debug.cfs_rq[27]:/.min_vruntime
101697 Â 2% +79.7% 182756 Â 1% sched_debug.cfs_rq[27]:/.exec_clock
4680 Â 25% +101.2% 9418 Â 8% sched_debug.cpu#68.ttwu_local
103202 Â 1% +78.2% 183913 Â 1% sched_debug.cfs_rq[76]:/.exec_clock
4353 Â 1% +92.9% 8399 Â 26% sched_debug.cpu#31.ttwu_local
102347 Â 2% +79.5% 183678 Â 0% sched_debug.cfs_rq[114]:/.exec_clock
101101 Â 2% +80.1% 182078 Â 1% sched_debug.cfs_rq[8]:/.exec_clock
100958 Â 2% +77.9% 179582 Â 0% sched_debug.cfs_rq[46]:/.exec_clock
102141 Â 2% +80.8% 184710 Â 1% sched_debug.cfs_rq[107]:/.exec_clock
99795 Â 2% +78.3% 177916 Â 0% sched_debug.cfs_rq[16]:/.exec_clock
1309094 Â 21% -36.2% 835062 Â 17% sched_debug.cpu#92.max_idle_balance_cost
101805 Â 1% +79.1% 182380 Â 0% sched_debug.cfs_rq[116]:/.exec_clock
5 Â 8% +62.5% 8 Â 14% sched_debug.cpu#41.cpu_load[3]
5 Â 8% +62.5% 8 Â 23% sched_debug.cpu#41.cpu_load[4]
5 Â 8% +56.3% 8 Â 11% sched_debug.cpu#41.cpu_load[2]
4 Â 26% +135.7% 11 Â 46% sched_debug.cpu#83.cpu_load[3]
5 Â 29% +88.2% 10 Â 28% sched_debug.cpu#22.cpu_load[3]
4 Â 21% +84.6% 8 Â 17% sched_debug.cfs_rq[3]:/.runnable_load_avg
11 Â 49% -58.8% 4 Â 26% sched_debug.cfs_rq[112]:/.runnable_load_avg
100434 Â 2% +77.1% 177850 Â 1% sched_debug.cfs_rq[1]:/.exec_clock
101587 Â 2% +77.5% 180296 Â 1% sched_debug.cfs_rq[18]:/.exec_clock
26838464 Â 5% +78.9% 48004471 Â 3% sched_debug.cfs_rq[75]:/.min_vruntime
101283 Â 2% +78.8% 181047 Â 0% sched_debug.cfs_rq[33]:/.exec_clock
102852 Â 1% +78.6% 183721 Â 0% sched_debug.cfs_rq[110]:/.exec_clock
26455283 Â 3% +79.4% 47466884 Â 2% sched_debug.cfs_rq[16]:/.min_vruntime
26621833 Â 1% +79.9% 47897020 Â 4% sched_debug.cfs_rq[28]:/.min_vruntime
103107 Â 1% +77.4% 182957 Â 1% sched_debug.cfs_rq[17]:/.exec_clock
100396 Â 2% +74.7% 175411 Â 1% sched_debug.cfs_rq[45]:/.exec_clock
1179305 Â 18% -41.7% 686968 Â 12% sched_debug.cpu#87.max_idle_balance_cost
626 Â 29% +47.7% 925 Â 21% sched_debug.cpu#29.sched_goidle
100064 Â 2% +76.7% 176787 Â 0% sched_debug.cfs_rq[30]:/.exec_clock
102840 Â 2% +79.0% 184103 Â 0% sched_debug.cfs_rq[111]:/.exec_clock
12677315 Â 18% +71.1% 21692326 Â 10% cpuidle.C3-IVT-4S.time
2001371 Â 15% -35.6% 1289682 Â 10% sched_debug.cpu#10.avg_idle
27333794 Â 2% +81.6% 49643263 Â 3% sched_debug.cfs_rq[80]:/.min_vruntime
8922 Â 29% +60.9% 14351 Â 9% sched_debug.cpu#71.ttwu_count
27279862 Â 3% +75.5% 47868960 Â 3% sched_debug.cfs_rq[13]:/.min_vruntime
27143621 Â 3% +81.7% 49330780 Â 3% sched_debug.cfs_rq[22]:/.min_vruntime
27591306 Â 3% +74.2% 48073762 Â 1% sched_debug.cfs_rq[87]:/.min_vruntime
27028818 Â 2% +78.5% 48256392 Â 4% sched_debug.cfs_rq[23]:/.min_vruntime
1376263 Â 16% -45.1% 755698 Â 10% sched_debug.cpu#78.max_idle_balance_cost
103923 Â 2% +76.9% 183852 Â 0% sched_debug.cfs_rq[106]:/.exec_clock
26466266 Â 3% +83.7% 48605429 Â 3% sched_debug.cfs_rq[20]:/.min_vruntime
26952223 Â 4% +77.9% 47936242 Â 3% sched_debug.cfs_rq[65]:/.min_vruntime
26734375 Â 3% +73.9% 46482790 Â 0% sched_debug.cfs_rq[18]:/.min_vruntime
27615182 Â 2% +79.1% 49455144 Â 3% sched_debug.cfs_rq[82]:/.min_vruntime
1378865 Â 4% -30.7% 955679 Â 28% sched_debug.cpu#15.max_idle_balance_cost
1303702 Â 13% -39.5% 788630 Â 8% sched_debug.cpu#107.max_idle_balance_cost
26538631 Â 7% +79.2% 47554705 Â 3% sched_debug.cfs_rq[54]:/.min_vruntime
27481450 Â 5% +74.9% 48051960 Â 3% sched_debug.cfs_rq[115]:/.min_vruntime
1231477 Â 15% -37.2% 773854 Â 9% sched_debug.cpu#56.max_idle_balance_cost
26970806 Â 1% +79.7% 48474886 Â 4% sched_debug.cfs_rq[19]:/.min_vruntime
28200525 Â 5% +73.8% 49008304 Â 3% sched_debug.cfs_rq[107]:/.min_vruntime
1371903 Â 13% -37.4% 859087 Â 29% sched_debug.cpu#75.max_idle_balance_cost
1359440 Â 19% -40.6% 807907 Â 28% sched_debug.cpu#4.max_idle_balance_cost
135178 Â 49% +114.6% 290104 Â 24% numa-meminfo.node2.Shmem
3 Â 34% +100.0% 7 Â 17% sched_debug.cpu#90.cpu_load[0]
8 Â 30% +129.2% 18 Â 49% sched_debug.cpu#74.cpu_load[3]
4 Â 28% +76.9% 7 Â 22% sched_debug.cpu#72.cpu_load[4]
8 Â 34% +68.0% 14 Â 11% sched_debug.cpu#70.nr_running
1.87 Â 1% -43.0% 1.07 Â 0% turbostat.Pkg%pc2
104837 Â 4% +70.2% 178457 Â 2% sched_debug.cfs_rq[0]:/.exec_clock
2973469 Â 33% +194.0% 8741737 Â 41% sched_debug.cfs_rq[22]:/.max_vruntime
2973469 Â 33% +194.0% 8741737 Â 41% sched_debug.cfs_rq[22]:/.MIN_vruntime
103733 Â 3% +75.1% 181687 Â 0% sched_debug.cfs_rq[19]:/.exec_clock
2033976 Â 6% -41.0% 1200745 Â 1% sched_debug.cpu#16.avg_idle
26819480 Â 5% +79.6% 48161949 Â 6% sched_debug.cfs_rq[60]:/.min_vruntime
1.18 Â 6% +74.9% 2.06 Â 3% turbostat.CPU%c1
3062 Â 8% -44.3% 1705 Â 0% numa-vmstat.node2.nr_alloc_batch
2556733 Â 10% -44.8% 1411192 Â 15% sched_debug.cpu#76.avg_idle
27675230 Â 3% +75.3% 48523627 Â 4% sched_debug.cfs_rq[83]:/.min_vruntime
27174325 Â 5% +74.3% 47372226 Â 5% sched_debug.cfs_rq[59]:/.min_vruntime
26796563 Â 5% +78.7% 47880566 Â 5% sched_debug.cfs_rq[24]:/.min_vruntime
27612035 Â 1% +75.8% 48551086 Â 2% sched_debug.cfs_rq[79]:/.min_vruntime
218 Â 9% +55.0% 338 Â 20% sched_debug.cpu#69.sched_goidle
27543770 Â 1% +73.8% 47880110 Â 3% sched_debug.cfs_rq[25]:/.min_vruntime
28177961 Â 7% +73.3% 48830676 Â 3% sched_debug.cfs_rq[111]:/.min_vruntime
33876 Â 48% +114.2% 72551 Â 26% numa-vmstat.node2.nr_shmem
27039328 Â 4% +71.6% 46391238 Â 6% sched_debug.cfs_rq[4]:/.min_vruntime
28197115 Â 6% +74.2% 49114662 Â 5% sched_debug.cfs_rq[112]:/.min_vruntime
105888 Â 1% +72.6% 182731 Â 1% sched_debug.cfs_rq[2]:/.exec_clock
27890316 Â 3% +74.4% 48638474 Â 3% sched_debug.cfs_rq[85]:/.min_vruntime
27230926 Â 2% +76.7% 48122630 Â 3% sched_debug.cfs_rq[26]:/.min_vruntime
28025228 Â 6% +71.4% 48035574 Â 4% sched_debug.cfs_rq[47]:/.min_vruntime
233 Â 22% +65.9% 387 Â 24% sched_debug.cpu#119.sched_goidle
27249686 Â 1% +75.6% 47863170 Â 4% sched_debug.cfs_rq[67]:/.min_vruntime
27446598 Â 6% +74.1% 47784676 Â 5% sched_debug.cfs_rq[52]:/.min_vruntime
3601 Â 37% +61.6% 5821 Â 26% sched_debug.cpu#118.ttwu_local
27513769 Â 5% +72.0% 47315703 Â 3% sched_debug.cfs_rq[17]:/.min_vruntime
27330261 Â 7% +75.8% 48036272 Â 3% sched_debug.cfs_rq[51]:/.min_vruntime
26818906 Â 3% +74.0% 46656069 Â 4% sched_debug.cfs_rq[5]:/.min_vruntime
27150746 Â 3% +70.9% 46410420 Â 5% sched_debug.cfs_rq[76]:/.min_vruntime
28876314 Â 7% +61.6% 46656199 Â 1% sched_debug.cfs_rq[102]:/.min_vruntime
27466106 Â 2% +72.3% 47316775 Â 5% sched_debug.cfs_rq[7]:/.min_vruntime
1238904 Â 17% -42.1% 717411 Â 8% sched_debug.cpu#28.max_idle_balance_cost
25026 Â 0% +70.7% 42712 Â 0% time.system_time
28253526 Â 7% +70.2% 48078369 Â 3% sched_debug.cfs_rq[118]:/.min_vruntime
5772 Â 26% +86.9% 10786 Â 4% sched_debug.cpu#69.ttwu_local
1274510 Â 11% -38.1% 788927 Â 12% sched_debug.cpu#88.max_idle_balance_cost
1837 Â 30% -35.8% 1179 Â 11% sched_debug.cpu#4.sched_goidle
27378468 Â 3% +72.3% 47176518 Â 5% sched_debug.cfs_rq[29]:/.min_vruntime
27494234 Â 8% +71.9% 47265116 Â 3% sched_debug.cfs_rq[58]:/.min_vruntime
174 Â 23% -33.0% 117 Â 0% numa-vmstat.node2.nr_mlock
174 Â 23% -33.0% 117 Â 0% numa-vmstat.node2.nr_unevictable
27206728 Â 6% +74.2% 47384019 Â 4% sched_debug.cfs_rq[114]:/.min_vruntime
28142247 Â 3% +71.5% 48277760 Â 4% sched_debug.cfs_rq[53]:/.min_vruntime
26094619 Â 4% +71.4% 44738063 Â 4% sched_debug.cfs_rq[0]:/.min_vruntime
28848632 Â 4% +64.2% 47377792 Â 1% sched_debug.cfs_rq[93]:/.min_vruntime
284 Â 1% +72.0% 488 Â 0% uptime.boot
1392003 Â 17% -36.3% 887367 Â 4% sched_debug.cpu#65.max_idle_balance_cost
27139321 Â 5% +67.0% 45329173 Â 5% sched_debug.cfs_rq[2]:/.min_vruntime
27674956 Â 3% +74.2% 48210953 Â 5% sched_debug.cfs_rq[71]:/.min_vruntime
2176632 Â 12% -37.7% 1355560 Â 11% sched_debug.cpu#87.avg_idle
2964602 Â 40% +56.8% 4648110 Â 24% sched_debug.cfs_rq[21]:/.MIN_vruntime
2964602 Â 40% +56.8% 4648110 Â 24% sched_debug.cfs_rq[21]:/.max_vruntime
28026809 Â 3% +69.7% 47563138 Â 6% sched_debug.cfs_rq[11]:/.min_vruntime
27490315 Â 2% +70.2% 46778295 Â 6% sched_debug.cfs_rq[68]:/.min_vruntime
27144389 Â 7% +72.8% 46915163 Â 3% sched_debug.cfs_rq[46]:/.min_vruntime
26837788 Â 5% +72.4% 46269925 Â 5% sched_debug.cfs_rq[3]:/.min_vruntime
1688 Â 14% +69.2% 2857 Â 3% numa-vmstat.node2.nr_mapped
27103059 Â 5% +72.3% 46696486 Â 0% sched_debug.cfs_rq[49]:/.min_vruntime
28180142 Â 6% +70.7% 48111115 Â 2% sched_debug.cfs_rq[103]:/.min_vruntime
27936247 Â 4% +70.4% 47607682 Â 1% sched_debug.cfs_rq[109]:/.min_vruntime
28315435 Â 8% +69.2% 47896596 Â 3% sched_debug.cfs_rq[106]:/.min_vruntime
27721329 Â 3% +73.4% 48057933 Â 3% sched_debug.cfs_rq[86]:/.min_vruntime
27886820 Â 8% +70.2% 47466269 Â 4% sched_debug.cfs_rq[116]:/.min_vruntime
27396905 Â 2% +69.5% 46446865 Â 4% sched_debug.cfs_rq[119]:/.min_vruntime
28166905 Â 3% +71.3% 48244521 Â 6% sched_debug.cfs_rq[77]:/.min_vruntime
27735077 Â 4% +68.3% 46681964 Â 4% sched_debug.cfs_rq[63]:/.min_vruntime
2093163 Â 4% -43.6% 1180095 Â 7% sched_debug.cpu#9.avg_idle
5625 Â 11% +53.0% 8607 Â 8% sched_debug.cpu#18.ttwu_local
28192307 Â 2% +64.0% 46229983 Â 0% sched_debug.cfs_rq[33]:/.min_vruntime
5076 Â 19% +56.2% 7930 Â 16% sched_debug.cpu#45.ttwu_local
2212969 Â 15% -40.2% 1324292 Â 12% sched_debug.cpu#85.avg_idle
28431122 Â 2% +68.5% 47919119 Â 4% sched_debug.cfs_rq[72]:/.min_vruntime
12891 Â 14% +75.2% 22582 Â 13% sched_debug.cpu#2.ttwu_count
67 Â 39% -49.3% 34 Â 43% sched_debug.cfs_rq[114]:/.tg_load_contrib
6 Â 25% +60.0% 10 Â 15% sched_debug.cpu#53.cpu_load[4]
6 Â 32% +89.5% 12 Â 36% sched_debug.cpu#29.cpu_load[4]
5 Â 22% +82.4% 10 Â 27% sched_debug.cpu#13.cpu_load[3]
6 Â 40% +77.8% 10 Â 30% sched_debug.cfs_rq[37]:/.runnable_load_avg
6 Â 32% +68.4% 10 Â 30% sched_debug.cpu#37.cpu_load[0]
6 Â 32% +68.4% 10 Â 30% sched_debug.cpu#37.cpu_load[1]
6 Â 32% +63.2% 10 Â 27% sched_debug.cpu#37.cpu_load[2]
27680297 Â 2% +68.2% 46566672 Â 6% sched_debug.cfs_rq[64]:/.min_vruntime
27184673 Â 1% +69.0% 45941045 Â 4% sched_debug.cfs_rq[8]:/.min_vruntime
186 Â 12% +71.3% 319 Â 20% sched_debug.cpu#104.sched_goidle
1395005 Â 18% -42.8% 798056 Â 14% sched_debug.cpu#25.max_idle_balance_cost
28334086 Â 6% +68.2% 47646717 Â 1% sched_debug.cfs_rq[108]:/.min_vruntime
29231619 Â 1% +65.3% 48333955 Â 2% sched_debug.cfs_rq[95]:/.min_vruntime
28631447 Â 3% +64.7% 47170091 Â 4% sched_debug.cfs_rq[96]:/.min_vruntime
1136411 Â 28% -35.6% 731487 Â 17% sched_debug.cpu#81.max_idle_balance_cost
27931979 Â 8% +70.5% 47636804 Â 4% sched_debug.cfs_rq[105]:/.min_vruntime
27587618 Â 8% +72.4% 47559163 Â 3% sched_debug.cfs_rq[55]:/.min_vruntime
27135866 Â 3% +72.0% 46685004 Â 5% sched_debug.cfs_rq[62]:/.min_vruntime
28430783 Â 3% +71.2% 48685881 Â 4% sched_debug.cfs_rq[113]:/.min_vruntime
5219 Â 14% +55.6% 8119 Â 4% sched_debug.cfs_rq[97]:/.tg_load_avg
29151752 Â 6% +58.0% 46050590 Â 2% sched_debug.cfs_rq[35]:/.min_vruntime
1738343 Â 23% -49.7% 874932 Â 8% sched_debug.cpu#26.max_idle_balance_cost
28321876 Â 3% +65.0% 46728175 Â 4% sched_debug.cfs_rq[39]:/.min_vruntime
28223816 Â 6% +69.0% 47692852 Â 3% sched_debug.cfs_rq[56]:/.min_vruntime
9834 Â 13% +51.9% 14936 Â 7% cpuidle.C1E-IVT-4S.usage
28242267 Â 2% +69.1% 47765761 Â 4% sched_debug.cfs_rq[12]:/.min_vruntime
27312596 Â 1% +73.0% 47245567 Â 6% sched_debug.cfs_rq[66]:/.min_vruntime
2317047 Â 15% -41.1% 1365345 Â 10% sched_debug.cpu#100.avg_idle
27500028 Â 8% +68.7% 46401166 Â 5% sched_debug.cfs_rq[45]:/.min_vruntime
5013 Â 14% +54.4% 7742 Â 10% sched_debug.cfs_rq[106]:/.tg_load_avg
27873808 Â 1% +72.7% 48137417 Â 7% sched_debug.cfs_rq[61]:/.min_vruntime
1932895 Â 19% -32.9% 1296385 Â 2% sched_debug.cpu#46.avg_idle
27897442 Â 5% +68.3% 46942450 Â 3% sched_debug.cfs_rq[48]:/.min_vruntime
234 Â 23% +47.6% 346 Â 16% sched_debug.cpu#86.sched_goidle
2362217 Â 19% -35.4% 1526318 Â 11% sched_debug.cpu#92.avg_idle
28072066 Â 2% +63.5% 45911460 Â 2% sched_debug.cfs_rq[92]:/.min_vruntime
28983640 Â 2% +62.8% 47192611 Â 5% sched_debug.cfs_rq[99]:/.min_vruntime
27287880 Â 1% +70.1% 46419381 Â 6% sched_debug.cfs_rq[6]:/.min_vruntime
1411332 Â 28% -47.6% 739234 Â 9% sched_debug.cpu#12.max_idle_balance_cost
164302 Â 0% +61.8% 265806 Â 0% sched_debug.sched_clk
28549966 Â 5% +65.5% 47239054 Â 5% sched_debug.cfs_rq[50]:/.min_vruntime
233 Â 17% +71.4% 400 Â 7% sched_debug.cpu#75.sched_goidle
7 Â 16% +69.6% 13 Â 25% sched_debug.cpu#101.nr_running
8 Â 19% +57.7% 13 Â 24% sched_debug.cpu#110.nr_running
5075 Â 16% +53.2% 7774 Â 10% sched_debug.cfs_rq[107]:/.tg_load_avg
5035 Â 14% +56.6% 7883 Â 6% sched_debug.cfs_rq[103]:/.tg_load_avg
4925 Â 11% +58.3% 7797 Â 9% sched_debug.cfs_rq[105]:/.tg_load_avg
485 Â 4% +47.3% 714 Â 15% sched_debug.cpu#37.sched_goidle
160491 Â 1% +62.6% 261038 Â 1% sched_debug.ktime
160491 Â 1% +62.6% 261038 Â 1% sched_debug.cpu_clk
2386826 Â 4% -31.4% 1637817 Â 27% sched_debug.cpu#75.avg_idle
160788 Â 1% +62.6% 261480 Â 1% sched_debug.cpu#6.clock
160802 Â 1% +62.6% 261518 Â 1% sched_debug.cpu#7.clock
160923 Â 1% +62.6% 261714 Â 1% sched_debug.cpu#14.clock
160914 Â 1% +62.6% 261693 Â 1% sched_debug.cpu#13.clock
161267 Â 1% +62.6% 262154 Â 1% sched_debug.cpu#41.clock
161182 Â 1% +62.6% 262091 Â 1% sched_debug.cpu#37.clock
160950 Â 1% +62.6% 261747 Â 1% sched_debug.cpu#16.clock
161199 Â 1% +62.6% 262103 Â 1% sched_debug.cpu#38.clock
160941 Â 1% +62.6% 261729 Â 1% sched_debug.cpu#15.clock
160911 Â 1% +62.6% 261654 Â 1% sched_debug.cpu#12.clock
160856 Â 1% +62.6% 261593 Â 1% sched_debug.cpu#9.clock
160774 Â 1% +62.6% 261448 Â 1% sched_debug.cpu#5.clock
160995 Â 1% +62.6% 261791 Â 1% sched_debug.cpu#19.clock
160825 Â 1% +62.6% 261559 Â 1% sched_debug.cpu#8.clock
161170 Â 1% +62.6% 262059 Â 1% sched_debug.cpu#36.clock
160960 Â 1% +62.6% 261758 Â 1% sched_debug.cpu#17.clock
160862 Â 1% +62.6% 261596 Â 1% sched_debug.cpu#10.clock
161164 Â 1% +62.6% 262042 Â 1% sched_debug.cpu#35.clock
161253 Â 1% +62.6% 262129 Â 1% sched_debug.cpu#40.clock
160875 Â 1% +62.6% 261628 Â 1% sched_debug.cpu#11.clock
161005 Â 1% +62.6% 261806 Â 1% sched_debug.cpu#20.clock
160740 Â 1% +62.6% 261381 Â 1% sched_debug.cpu#4.clock
161014 Â 1% +62.6% 261815 Â 1% sched_debug.cpu#21.clock
160978 Â 1% +62.6% 261773 Â 1% sched_debug.cpu#18.clock
161157 Â 1% +62.6% 262026 Â 1% sched_debug.cpu#34.clock
161062 Â 1% +62.6% 261894 Â 1% sched_debug.cpu#25.clock
161283 Â 1% +62.5% 262159 Â 1% sched_debug.cpu#42.clock
161035 Â 1% +62.6% 261853 Â 1% sched_debug.cpu#23.clock
161041 Â 1% +62.6% 261865 Â 1% sched_debug.cpu#24.clock
160715 Â 1% +62.6% 261349 Â 1% sched_debug.cpu#3.clock
161145 Â 1% +62.6% 262007 Â 1% sched_debug.cpu#33.clock
160695 Â 1% +62.6% 261297 Â 1% sched_debug.cpu#1.clock
161025 Â 1% +62.6% 261823 Â 1% sched_debug.cpu#22.clock
161138 Â 1% +62.6% 262000 Â 1% sched_debug.cpu#32.clock
161124 Â 1% +62.6% 261975 Â 1% sched_debug.cpu#31.clock
161116 Â 1% +62.6% 261958 Â 1% sched_debug.cpu#30.clock
161218 Â 1% +62.6% 262106 Â 1% sched_debug.cpu#39.clock
161097 Â 1% +62.6% 261940 Â 1% sched_debug.cpu#29.clock
160654 Â 1% +62.6% 261225 Â 1% sched_debug.cpu#0.clock
161075 Â 1% +62.6% 261904 Â 1% sched_debug.cpu#26.clock
161083 Â 1% +62.6% 261918 Â 1% sched_debug.cpu#27.clock
161379 Â 1% +62.5% 262269 Â 1% sched_debug.cpu#48.clock
160709 Â 1% +62.6% 261317 Â 1% sched_debug.cpu#2.clock
161092 Â 1% +62.6% 261919 Â 1% sched_debug.cpu#28.clock
161332 Â 1% +62.5% 262204 Â 1% sched_debug.cpu#44.clock
161368 Â 1% +62.5% 262240 Â 1% sched_debug.cpu#47.clock
161353 Â 1% +62.5% 262229 Â 1% sched_debug.cpu#46.clock
161338 Â 1% +62.5% 262218 Â 1% sched_debug.cpu#45.clock
161320 Â 1% +62.5% 262191 Â 1% sched_debug.cpu#43.clock
161401 Â 1% +62.5% 262289 Â 1% sched_debug.cpu#50.clock
161396 Â 1% +62.5% 262281 Â 1% sched_debug.cpu#49.clock
161407 Â 1% +62.5% 262299 Â 1% sched_debug.cpu#51.clock
161515 Â 1% +62.5% 262444 Â 1% sched_debug.cpu#59.clock
161466 Â 1% +62.5% 262377 Â 1% sched_debug.cpu#54.clock
161512 Â 1% +62.5% 262435 Â 1% sched_debug.cpu#58.clock
161469 Â 1% +62.5% 262386 Â 1% sched_debug.cpu#55.clock
161519 Â 1% +62.5% 262456 Â 1% sched_debug.cpu#60.clock
161443 Â 1% +62.5% 262341 Â 1% sched_debug.cpu#53.clock
161485 Â 1% +62.5% 262388 Â 1% sched_debug.cpu#56.clock
161428 Â 1% +62.5% 262319 Â 1% sched_debug.cpu#52.clock
4960 Â 14% +57.6% 7817 Â 8% sched_debug.cfs_rq[104]:/.tg_load_avg
161495 Â 1% +62.5% 262412 Â 1% sched_debug.cpu#57.clock
27927 Â 5% +48.5% 41476 Â 14% sched_debug.cpu#61.nr_switches
161558 Â 1% +62.5% 262519 Â 1% sched_debug.cpu#65.clock
161543 Â 1% +62.5% 262493 Â 1% sched_debug.cpu#63.clock
161539 Â 1% +62.5% 262481 Â 1% sched_debug.cpu#62.clock
161535 Â 1% +62.5% 262465 Â 1% sched_debug.cpu#61.clock
161569 Â 1% +62.5% 262529 Â 1% sched_debug.cpu#66.clock
161556 Â 1% +62.5% 262501 Â 1% sched_debug.cpu#64.clock
28199750 Â 6% +69.6% 47818383 Â 3% sched_debug.cfs_rq[43]:/.min_vruntime
161583 Â 1% +62.5% 262538 Â 1% sched_debug.cpu#67.clock
27815099 Â 2% +67.3% 46543638 Â 5% sched_debug.cfs_rq[9]:/.min_vruntime
161654 Â 1% +62.5% 262634 Â 1% sched_debug.cpu#72.clock
161628 Â 1% +62.5% 262622 Â 1% sched_debug.cpu#70.clock
161603 Â 1% +62.5% 262550 Â 1% sched_debug.cpu#68.clock
161612 Â 1% +62.5% 262563 Â 1% sched_debug.cpu#69.clock
161644 Â 1% +62.5% 262626 Â 1% sched_debug.cpu#71.clock
161732 Â 1% +62.4% 262728 Â 1% sched_debug.cpu#79.clock
161720 Â 1% +62.4% 262704 Â 1% sched_debug.cpu#77.clock
161714 Â 1% +62.4% 262672 Â 1% sched_debug.cpu#76.clock
161699 Â 1% +62.4% 262642 Â 1% sched_debug.cpu#74.clock
161727 Â 1% +62.4% 262711 Â 1% sched_debug.cpu#78.clock
161690 Â 1% +62.4% 262635 Â 1% sched_debug.cpu#73.clock
161711 Â 1% +62.4% 262667 Â 1% sched_debug.cpu#75.clock
161759 Â 1% +62.4% 262754 Â 1% sched_debug.cpu#80.clock
161804 Â 1% +62.4% 262784 Â 1% sched_debug.cpu#83.clock
161780 Â 1% +62.4% 262770 Â 1% sched_debug.cpu#81.clock
161801 Â 1% +62.4% 262775 Â 1% sched_debug.cpu#82.clock
27463683 Â 4% +65.2% 45357873 Â 7% sched_debug.cfs_rq[89]:/.min_vruntime
161838 Â 1% +62.4% 262807 Â 1% sched_debug.cpu#85.clock
161846 Â 1% +62.4% 262820 Â 1% sched_debug.cpu#86.clock
161881 Â 1% +62.4% 262836 Â 1% sched_debug.cpu#88.clock
161830 Â 1% +62.4% 262797 Â 1% sched_debug.cpu#84.clock
161872 Â 1% +62.4% 262825 Â 1% sched_debug.cpu#87.clock
161911 Â 1% +62.3% 262848 Â 1% sched_debug.cpu#89.clock
161944 Â 1% +62.3% 262873 Â 1% sched_debug.cpu#92.clock
161927 Â 1% +62.3% 262864 Â 1% sched_debug.cpu#91.clock
161918 Â 1% +62.3% 262858 Â 1% sched_debug.cpu#90.clock
5629 Â 24% +74.2% 9807 Â 40% sched_debug.cpu#32.ttwu_local
28953465 Â 4% +64.1% 47502972 Â 0% sched_debug.cfs_rq[101]:/.min_vruntime
162000 Â 1% +62.3% 262896 Â 1% sched_debug.cpu#94.clock
162002 Â 1% +62.3% 262882 Â 1% sched_debug.cpu#93.clock
162012 Â 1% +62.3% 262902 Â 1% sched_debug.cpu#95.clock
162018 Â 1% +62.3% 262919 Â 1% sched_debug.cpu#96.clock
162023 Â 1% +62.3% 262918 Â 1% sched_debug.cpu#97.clock
162036 Â 1% +62.3% 262928 Â 1% sched_debug.cpu#98.clock
162122 Â 1% +62.2% 262987 Â 1% sched_debug.cpu#104.clock
162090 Â 1% +62.2% 262956 Â 1% sched_debug.cpu#101.clock
162130 Â 1% +62.2% 262998 Â 1% sched_debug.cpu#105.clock
162103 Â 1% +62.2% 262982 Â 1% sched_debug.cpu#103.clock
162048 Â 1% +62.3% 262930 Â 1% sched_debug.cpu#99.clock
162098 Â 1% +62.2% 262966 Â 1% sched_debug.cpu#102.clock
162089 Â 1% +62.2% 262948 Â 1% sched_debug.cpu#100.clock
162148 Â 1% +62.2% 262999 Â 1% sched_debug.cpu#106.clock
162155 Â 1% +62.2% 263003 Â 1% sched_debug.cpu#107.clock
1137981 Â 5% -41.4% 667034 Â 3% sched_debug.cpu#22.max_idle_balance_cost
162236 Â 1% +62.2% 263094 Â 1% sched_debug.cpu#113.clock
162185 Â 1% +62.2% 263017 Â 1% sched_debug.cpu#108.clock
162250 Â 1% +62.2% 263100 Â 1% sched_debug.cpu#114.clock
162230 Â 1% +62.2% 263075 Â 1% sched_debug.cpu#112.clock
162208 Â 1% +62.2% 263040 Â 1% sched_debug.cpu#110.clock
162217 Â 1% +62.2% 263050 Â 1% sched_debug.cpu#111.clock
162272 Â 1% +62.1% 263116 Â 1% sched_debug.cpu#117.clock
162269 Â 1% +62.1% 263110 Â 1% sched_debug.cpu#116.clock
162273 Â 1% +62.1% 263122 Â 1% sched_debug.cpu#118.clock
162262 Â 1% +62.1% 263106 Â 1% sched_debug.cpu#115.clock
162197 Â 1% +62.2% 263021 Â 1% sched_debug.cpu#109.clock
162291 Â 1% +62.1% 263120 Â 1% sched_debug.cpu#119.clock
5314 Â 12% +54.9% 8234 Â 3% sched_debug.cfs_rq[96]:/.tg_load_avg
28710836 Â 1% +62.0% 46517027 Â 5% sched_debug.cfs_rq[70]:/.min_vruntime
29248702 Â 2% +60.6% 46981187 Â 3% sched_debug.cfs_rq[90]:/.min_vruntime
27932754 Â 3% +65.2% 46138545 Â 4% sched_debug.cfs_rq[40]:/.min_vruntime
28588795 Â 1% +64.3% 46974714 Â 5% sched_debug.cfs_rq[69]:/.min_vruntime
7433 Â 10% +72.1% 12789 Â 25% sched_debug.cpu#2.ttwu_local
28987283 Â 4% +57.9% 45772264 Â 2% sched_debug.cfs_rq[42]:/.min_vruntime
5 Â 8% +87.5% 10 Â 35% sched_debug.cpu#101.cpu_load[3]
4 Â 10% +57.1% 7 Â 12% sched_debug.cpu#94.cpu_load[2]
4 Â 10% +57.1% 7 Â 12% sched_debug.cpu#94.cpu_load[3]
4 Â 10% +64.3% 7 Â 6% sched_debug.cpu#94.cpu_load[4]
5 Â 8% +62.5% 8 Â 28% sched_debug.cpu#101.cpu_load[4]
4 Â 10% +57.1% 7 Â 12% sched_debug.cpu#94.cpu_load[1]
4 Â 26% +121.4% 10 Â 38% sched_debug.cpu#83.cpu_load[2]
4 Â 26% +121.4% 10 Â 38% sched_debug.cpu#83.cpu_load[1]
9 Â 36% -46.4% 5 Â 16% sched_debug.cpu#46.cpu_load[0]
28001170 Â 3% +66.3% 46559477 Â 6% sched_debug.cfs_rq[10]:/.min_vruntime
27914392 Â 3% +65.0% 46047332 Â 3% sched_debug.cfs_rq[36]:/.min_vruntime
1184985 Â 9% -40.7% 702992 Â 2% sched_debug.cpu#83.max_idle_balance_cost
29294276 Â 3% +63.1% 47768220 Â 0% sched_debug.cfs_rq[98]:/.min_vruntime
1246116 Â 8% -36.9% 785893 Â 2% sched_debug.cpu#82.max_idle_balance_cost
214 Â 12% +58.7% 340 Â 16% sched_debug.cpu#80.sched_goidle
25 Â 40% +135.1% 60 Â 21% sched_debug.cfs_rq[63]:/.blocked_load_avg
28130394 Â 2% +59.4% 44831870 Â 0% sched_debug.cfs_rq[32]:/.min_vruntime
1084955 Â 7% -34.4% 711689 Â 5% sched_debug.cpu#66.max_idle_balance_cost
5070 Â 14% +52.5% 7732 Â 4% sched_debug.cfs_rq[102]:/.tg_load_avg
5.14 Â 0% -36.3% 3.28 Â 1% turbostat.CPU%c6
28782262 Â 3% +62.5% 46781968 Â 4% sched_debug.cfs_rq[34]:/.min_vruntime
7329 Â 13% +74.8% 12810 Â 20% sched_debug.cpu#51.ttwu_count
28626263 Â 2% +62.0% 46361731 Â 3% sched_debug.cfs_rq[37]:/.min_vruntime
27427430 Â 2% +68.7% 46269426 Â 7% sched_debug.cfs_rq[1]:/.min_vruntime
5200 Â 15% +52.4% 7927 Â 5% sched_debug.cfs_rq[99]:/.tg_load_avg
4905 Â 10% +55.6% 7632 Â 4% sched_debug.cfs_rq[60]:/.tg_load_avg
208 Â 6% +55.3% 323 Â 5% sched_debug.cpu#115.sched_goidle
5239 Â 21% +49.1% 7811 Â 17% sched_debug.cfs_rq[119]:/.tg_load_avg
29416507 Â 3% +63.2% 47994239 Â 3% sched_debug.cfs_rq[94]:/.min_vruntime
29656 Â 5% +49.0% 44173 Â 14% sched_debug.cpu#61.sched_count
11220 Â 1% -35.7% 7216 Â 2% proc-vmstat.nr_alloc_batch
29127989 Â 2% +62.1% 47216255 Â 4% sched_debug.cfs_rq[100]:/.min_vruntime
28779496 Â 4% +61.8% 46571097 Â 2% sched_debug.cfs_rq[41]:/.min_vruntime
29226291 Â 3% +55.6% 45477016 Â 1% sched_debug.cfs_rq[31]:/.min_vruntime
27066866 Â 4% +61.5% 43707502 Â 7% sched_debug.cfs_rq[74]:/.min_vruntime
28036423 Â 4% +61.0% 45129846 Â 2% sched_debug.cfs_rq[44]:/.min_vruntime
107909 Â 1% +53.4% 165540 Â 3% sched_debug.cpu#92.nr_load_updates
28754125 Â 6% +67.1% 48047831 Â 6% sched_debug.cfs_rq[110]:/.min_vruntime
4918 Â 10% +56.7% 7706 Â 4% sched_debug.cfs_rq[61]:/.tg_load_avg
29026526 Â 4% +62.6% 47205685 Â 1% sched_debug.cfs_rq[38]:/.min_vruntime
5060 Â 10% +51.8% 7679 Â 2% sched_debug.cfs_rq[58]:/.tg_load_avg
26942107 Â 0% +64.9% 44438451 Â 7% sched_debug.cfs_rq[14]:/.min_vruntime
5173 Â 8% +47.6% 7634 Â 0% sched_debug.cfs_rq[89]:/.tg_load_avg
1222098 Â 13% -35.2% 792193 Â 16% sched_debug.cpu#61.max_idle_balance_cost
107362 Â 1% +52.7% 163932 Â 2% sched_debug.cpu#103.nr_load_updates
464 Â 8% +85.8% 862 Â 40% sched_debug.cpu#41.sched_goidle
4613837 Â 17% +76.8% 8156323 Â 16% sched_debug.cfs_rq[27]:/.MIN_vruntime
4613837 Â 17% +76.8% 8156323 Â 16% sched_debug.cfs_rq[27]:/.max_vruntime
192 Â 19% +63.1% 313 Â 6% sched_debug.cpu#88.sched_goidle
2020781 Â 7% -35.2% 1309642 Â 3% sched_debug.cpu#66.avg_idle
28456363 Â 3% +57.3% 44775761 Â 3% sched_debug.cfs_rq[30]:/.min_vruntime
107337 Â 1% +53.8% 165069 Â 1% sched_debug.cpu#97.nr_load_updates
29224089 Â 0% +58.0% 46175403 Â 4% sched_debug.cfs_rq[91]:/.min_vruntime
109347 Â 0% +53.4% 167708 Â 3% sched_debug.cpu#15.nr_load_updates
9 Â 9% +44.4% 13 Â 16% sched_debug.cpu#95.nr_running
108991 Â 1% +52.5% 166234 Â 2% sched_debug.cpu#35.nr_load_updates
109437 Â 0% +52.4% 166818 Â 2% sched_debug.cpu#32.nr_load_updates
5127 Â 14% +51.2% 7753 Â 6% sched_debug.cfs_rq[101]:/.tg_load_avg
107410 Â 0% +52.8% 164098 Â 2% sched_debug.cpu#90.nr_load_updates
107654 Â 1% +52.6% 164271 Â 3% sched_debug.cpu#82.nr_load_updates
107451 Â 1% +52.6% 163970 Â 1% sched_debug.cpu#98.nr_load_updates
108702 Â 0% +53.2% 166482 Â 1% sched_debug.cpu#37.nr_load_updates
108765 Â 0% +51.6% 164908 Â 3% sched_debug.cpu#44.nr_load_updates
110443 Â 0% +52.5% 168464 Â 3% sched_debug.cpu#18.nr_load_updates
106978 Â 0% +51.9% 162474 Â 2% sched_debug.cpu#104.nr_load_updates
108440 Â 0% +52.1% 164975 Â 2% sched_debug.cpu#93.nr_load_updates
108935 Â 0% +53.4% 167078 Â 3% sched_debug.cpu#45.nr_load_updates
108668 Â 1% +53.2% 166513 Â 2% sched_debug.cpu#39.nr_load_updates
107883 Â 0% +51.9% 163899 Â 2% sched_debug.cpu#101.nr_load_updates
2905 Â 6% -36.5% 1845 Â 2% numa-vmstat.node1.nr_alloc_batch
5003 Â 9% +54.5% 7727 Â 4% sched_debug.cfs_rq[59]:/.tg_load_avg
5200 Â 14% +49.8% 7789 Â 5% sched_debug.cfs_rq[100]:/.tg_load_avg
108012 Â 1% +52.1% 164245 Â 3% sched_debug.cpu#77.nr_load_updates
107888 Â 1% +51.3% 163277 Â 3% sched_debug.cpu#80.nr_load_updates
107273 Â 1% +52.6% 163688 Â 2% sched_debug.cpu#99.nr_load_updates
107262 Â 0% +52.7% 163754 Â 3% sched_debug.cpu#88.nr_load_updates
107872 Â 0% +52.9% 164886 Â 1% sched_debug.cpu#95.nr_load_updates
107106 Â 0% +52.2% 163026 Â 2% sched_debug.cpu#102.nr_load_updates
109736 Â 0% +51.3% 166061 Â 3% sched_debug.cpu#27.nr_load_updates
109125 Â 0% +52.4% 166280 Â 2% sched_debug.cpu#31.nr_load_updates
107700 Â 1% +51.8% 163526 Â 3% sched_debug.cpu#78.nr_load_updates
5254 Â 21% +44.8% 7609 Â 16% sched_debug.cfs_rq[117]:/.tg_load_avg
107716 Â 0% +51.6% 163321 Â 2% sched_debug.cpu#100.nr_load_updates
107720 Â 1% +51.5% 163183 Â 3% sched_debug.cpu#89.nr_load_updates
109165 Â 0% +52.1% 165997 Â 2% sched_debug.cpu#30.nr_load_updates
108596 Â 1% +53.2% 166368 Â 4% sched_debug.cpu#58.nr_load_updates
107682 Â 1% +51.9% 163537 Â 3% sched_debug.cpu#86.nr_load_updates
109158 Â 0% +51.7% 165558 Â 3% sched_debug.cpu#24.nr_load_updates
109865 Â 0% +51.7% 166646 Â 3% sched_debug.cpu#28.nr_load_updates
109336 Â 0% +51.5% 165615 Â 3% sched_debug.cpu#25.nr_load_updates
107840 Â 1% +51.1% 162988 Â 3% sched_debug.cpu#85.nr_load_updates
108834 Â 0% +52.7% 166202 Â 2% sched_debug.cpu#42.nr_load_updates
109234 Â 0% +52.2% 166269 Â 1% sched_debug.cpu#34.nr_load_updates
4925 Â 11% +55.3% 7649 Â 5% sched_debug.cfs_rq[62]:/.tg_load_avg
108152 Â 0% +52.7% 165167 Â 3% sched_debug.cpu#59.nr_load_updates
109021 Â 0% +51.9% 165589 Â 2% sched_debug.cpu#43.nr_load_updates
108849 Â 1% +52.3% 165824 Â 3% sched_debug.cpu#51.nr_load_updates
107509 Â 0% +52.1% 163470 Â 3% sched_debug.cpu#81.nr_load_updates
107923 Â 0% +52.2% 164280 Â 2% sched_debug.cpu#94.nr_load_updates
109152 Â 0% +51.9% 165841 Â 2% sched_debug.cpu#36.nr_load_updates
109233 Â 0% +51.7% 165669 Â 3% sched_debug.cpu#16.nr_load_updates
794269 Â 3% +56.5% 1243280 Â 1% proc-vmstat.numa_hint_faults_local
35 Â 12% +70.8% 60 Â 32% sched_debug.cfs_rq[91]:/.nr_spread_over
109376 Â 0% +52.4% 166701 Â 4% sched_debug.cpu#48.nr_load_updates
109884 Â 0% +51.6% 166583 Â 3% sched_debug.cpu#76.nr_load_updates
108768 Â 0% +52.2% 165524 Â 2% sched_debug.cpu#40.nr_load_updates
205 Â 8% +68.3% 345 Â 31% sched_debug.cpu#102.sched_goidle
107755 Â 1% +52.6% 164409 Â 2% sched_debug.cpu#75.nr_load_updates
108911 Â 0% +51.8% 165300 Â 2% sched_debug.cpu#41.nr_load_updates
107700 Â 1% +51.2% 162847 Â 4% sched_debug.cpu#87.nr_load_updates
109089 Â 0% +52.7% 166556 Â 3% sched_debug.cpu#50.nr_load_updates
108755 Â 0% +52.3% 165588 Â 4% sched_debug.cpu#57.nr_load_updates
109446 Â 0% +51.1% 165342 Â 3% sched_debug.cpu#29.nr_load_updates
109107 Â 0% +51.8% 165639 Â 2% sched_debug.cpu#38.nr_load_updates
108848 Â 0% +53.0% 166487 Â 3% sched_debug.cpu#52.nr_load_updates
106969 Â 0% +51.7% 162260 Â 3% sched_debug.cpu#119.nr_load_updates
106941 Â 0% +52.5% 163075 Â 3% sched_debug.cpu#115.nr_load_updates
109659 Â 0% +51.2% 165776 Â 3% sched_debug.cpu#26.nr_load_updates
107508 Â 0% +51.7% 163079 Â 4% sched_debug.cpu#83.nr_load_updates
2522110 Â 16% -33.8% 1668472 Â 6% sched_debug.cpu#65.avg_idle
108310 Â 1% +51.8% 164390 Â 3% sched_debug.cpu#79.nr_load_updates
2775775 Â 2% +70.6% 4734115 Â 20% sched_debug.cfs_rq[29]:/.MIN_vruntime
2775775 Â 2% +70.6% 4734115 Â 20% sched_debug.cfs_rq[29]:/.max_vruntime
109087 Â 0% +52.3% 166102 Â 4% sched_debug.cpu#47.nr_load_updates
235 Â 7% +51.8% 357 Â 19% sched_debug.cpu#110.sched_goidle
109679 Â 0% +52.2% 166887 Â 2% sched_debug.cpu#33.nr_load_updates
109126 Â 0% +51.6% 165478 Â 3% sched_debug.cpu#54.nr_load_updates
107690 Â 0% +52.0% 163667 Â 2% sched_debug.cpu#96.nr_load_updates
5292 Â 21% +45.0% 7672 Â 16% sched_debug.cfs_rq[118]:/.tg_load_avg
110492 Â 0% +50.8% 166619 Â 3% sched_debug.cpu#17.nr_load_updates
108673 Â 0% +52.1% 165280 Â 3% sched_debug.cpu#56.nr_load_updates
109029 Â 0% +52.8% 166601 Â 4% sched_debug.cpu#49.nr_load_updates
107054 Â 0% +52.6% 163405 Â 4% sched_debug.cpu#118.nr_load_updates
107650 Â 0% +52.3% 163918 Â 3% sched_debug.cpu#107.nr_load_updates
4967 Â 10% +52.6% 7577 Â 6% sched_debug.cfs_rq[63]:/.tg_load_avg
107748 Â 0% +52.2% 164028 Â 3% sched_debug.cpu#108.nr_load_updates
109190 Â 0% +52.1% 166110 Â 4% sched_debug.cpu#46.nr_load_updates
109337 Â 0% +52.1% 166310 Â 3% sched_debug.cpu#53.nr_load_updates
109296 Â 0% +50.9% 164960 Â 2% sched_debug.cpu#91.nr_load_updates
107273 Â 0% +52.4% 163514 Â 3% sched_debug.cpu#105.nr_load_updates
107283 Â 0% +51.8% 162892 Â 3% sched_debug.cpu#113.nr_load_updates
110338 Â 0% +50.6% 166118 Â 3% sched_debug.cpu#20.nr_load_updates
28262689 Â 6% +57.7% 44576831 Â 1% sched_debug.cfs_rq[104]:/.min_vruntime
107669 Â 0% +51.4% 162981 Â 3% sched_debug.cpu#114.nr_load_updates
109677 Â 0% +51.3% 165912 Â 3% sched_debug.cpu#23.nr_load_updates
2590 Â 1% -33.6% 1720 Â 4% numa-vmstat.node3.nr_alloc_batch
108080 Â 0% +52.2% 164457 Â 3% sched_debug.cpu#110.nr_load_updates
109062 Â 0% +51.5% 165250 Â 3% sched_debug.cpu#55.nr_load_updates
107194 Â 1% +51.8% 162747 Â 3% sched_debug.cpu#117.nr_load_updates
110953 Â 0% +50.0% 166477 Â 2% sched_debug.cpu#2.nr_load_updates
27852 Â 8% +41.5% 39400 Â 10% sched_debug.cpu#72.sched_count
107830 Â 1% +51.2% 163048 Â 3% sched_debug.cpu#84.nr_load_updates
107606 Â 0% +51.7% 163257 Â 3% sched_debug.cpu#112.nr_load_updates
107935 Â 0% +51.6% 163607 Â 4% sched_debug.cpu#109.nr_load_updates
29741 Â 4% +49.0% 44305 Â 8% sched_debug.cpu#31.sched_count
1216064 Â 14% -30.5% 845359 Â 1% sched_debug.cpu#60.max_idle_balance_cost
107059 Â 0% +52.0% 162689 Â 3% sched_debug.cpu#116.nr_load_updates
29310290 Â 4% +55.8% 45652985 Â 2% sched_debug.cfs_rq[97]:/.min_vruntime
29578 Â 4% +57.3% 46529 Â 10% sched_debug.cpu#51.sched_count
8042 Â 38% +65.2% 13290 Â 22% sched_debug.cpu#48.ttwu_count
109288 Â 0% +51.2% 165223 Â 3% sched_debug.cpu#106.nr_load_updates
1311713 Â 14% -41.4% 769187 Â 19% sched_debug.cpu#84.max_idle_balance_cost
109701 Â 0% +48.3% 162643 Â 3% sched_debug.cpu#13.nr_load_updates
109703 Â 0% +51.1% 165809 Â 3% sched_debug.cpu#22.nr_load_updates
107554 Â 1% +48.5% 159686 Â 3% sched_debug.cpu#60.nr_load_updates
109305 Â 1% +48.4% 162181 Â 3% sched_debug.cpu#61.nr_load_updates
5319 Â 15% +52.7% 8124 Â 6% sched_debug.cfs_rq[98]:/.tg_load_avg
108496 Â 0% +48.5% 161166 Â 3% sched_debug.cpu#63.nr_load_updates
1151250 Â 4% -28.2% 826707 Â 21% sched_debug.cpu#59.max_idle_balance_cost
107688 Â 0% +49.0% 160435 Â 3% sched_debug.cpu#67.nr_load_updates
108013 Â 0% +51.3% 163432 Â 3% sched_debug.cpu#111.nr_load_updates
34161 Â 15% +31.9% 45069 Â 7% sched_debug.cpu#76.sched_count
109384 Â 0% +48.7% 162600 Â 3% sched_debug.cpu#12.nr_load_updates
110197 Â 0% +48.2% 163294 Â 3% sched_debug.cpu#9.nr_load_updates
5436 Â 14% +43.7% 7814 Â 1% sched_debug.cfs_rq[92]:/.tg_load_avg
109524 Â 0% +48.5% 162625 Â 2% sched_debug.cpu#11.nr_load_updates
111639 Â 0% +49.4% 166777 Â 2% sched_debug.cpu#19.nr_load_updates
107985 Â 1% +48.5% 160360 Â 3% sched_debug.cpu#66.nr_load_updates
110171 Â 0% +48.2% 163257 Â 3% sched_debug.cpu#1.nr_load_updates
109927 Â 0% +47.6% 162248 Â 3% sched_debug.cpu#7.nr_load_updates
109771 Â 0% +48.7% 163258 Â 2% sched_debug.cpu#3.nr_load_updates
110205 Â 1% +47.4% 162416 Â 3% sched_debug.cpu#5.nr_load_updates
109489 Â 0% +48.1% 162168 Â 3% sched_debug.cpu#14.nr_load_updates
108164 Â 0% +48.2% 160297 Â 3% sched_debug.cpu#65.nr_load_updates
107902 Â 0% +48.8% 160592 Â 3% sched_debug.cpu#71.nr_load_updates
110212 Â 0% +47.6% 162632 Â 3% sched_debug.cpu#6.nr_load_updates
30677 Â 1% +47.1% 45133 Â 4% sched_debug.cpu#30.sched_count
108799 Â 0% +47.8% 160814 Â 3% sched_debug.cpu#64.nr_load_updates
108024 Â 1% +47.8% 159638 Â 3% sched_debug.cpu#70.nr_load_updates
108213 Â 0% +49.1% 161366 Â 3% sched_debug.cpu#69.nr_load_updates
111428 Â 2% +49.2% 166255 Â 3% sched_debug.cpu#21.nr_load_updates
109726 Â 1% +48.0% 162385 Â 3% sched_debug.cpu#8.nr_load_updates
27587 Â 2% +44.7% 39928 Â 4% sched_debug.cpu#114.sched_count
110432 Â 1% +47.9% 163348 Â 3% sched_debug.cpu#4.nr_load_updates
108611 Â 1% +48.2% 160954 Â 3% sched_debug.cpu#62.nr_load_updates
107911 Â 1% +48.6% 160379 Â 3% sched_debug.cpu#72.nr_load_updates
5220 Â 13% +32.5% 6914 Â 9% sched_debug.cfs_rq[55]:/.tg_load_avg
109545 Â 0% +48.1% 162278 Â 3% sched_debug.cpu#10.nr_load_updates
108397 Â 0% +48.1% 160543 Â 3% sched_debug.cpu#68.nr_load_updates
107763 Â 0% +48.6% 160128 Â 3% sched_debug.cpu#73.nr_load_updates
5179 Â 6% +44.9% 7502 Â 0% sched_debug.cfs_rq[83]:/.tg_load_avg
111024 Â 1% +46.7% 162924 Â 2% sched_debug.cpu#0.nr_load_updates
108178 Â 0% +48.2% 160338 Â 3% sched_debug.cpu#74.nr_load_updates
12 Â 13% -39.5% 7 Â 26% sched_debug.cpu#61.cpu_load[0]
3 Â 34% +90.9% 7 Â 20% sched_debug.cfs_rq[90]:/.runnable_load_avg
5 Â 17% +68.8% 9 Â 9% sched_debug.cfs_rq[2]:/.runnable_load_avg
12 Â 13% -36.8% 8 Â 30% sched_debug.cfs_rq[61]:/.runnable_load_avg
6 Â 13% +105.6% 12 Â 44% sched_debug.cfs_rq[119]:/.runnable_load_avg
5 Â 8% +58.8% 9 Â 18% sched_debug.cpu#6.cpu_load[3]
6 Â 19% +94.7% 12 Â 44% sched_debug.cpu#119.cpu_load[3]
5 Â 8% +64.7% 9 Â 22% sched_debug.cpu#6.cpu_load[4]
9 Â 15% +37.0% 12 Â 3% sched_debug.cpu#60.nr_running
6 Â 13% +111.1% 12 Â 46% sched_debug.cpu#119.cpu_load[0]
5466 Â 12% +48.5% 8119 Â 1% sched_debug.cfs_rq[95]:/.tg_load_avg
29718 Â 17% +36.6% 40592 Â 9% sched_debug.cpu#110.sched_count
13922434 Â 1% +49.5% 20808272 Â 1% softirqs.TIMER
5162 Â 4% +46.1% 7542 Â 0% sched_debug.cfs_rq[82]:/.tg_load_avg
26938 Â 3% +48.6% 40036 Â 3% sched_debug.cpu#42.sched_count
5149 Â 7% +48.3% 7638 Â 1% sched_debug.cfs_rq[87]:/.tg_load_avg
2332563 Â 13% -33.9% 1541851 Â 13% sched_debug.cpu#88.avg_idle
27547 Â 3% +41.7% 39040 Â 8% sched_debug.cpu#64.sched_count
60 Â 4% +40.7% 85 Â 14% sched_debug.cfs_rq[112]:/.load
60 Â 4% +40.7% 85 Â 14% sched_debug.cpu#112.load
24782 Â 7% +42.3% 35271 Â 1% sched_debug.cpu#97.nr_switches
25929 Â 3% +46.5% 37986 Â 2% sched_debug.cpu#42.nr_switches
5221 Â 12% +31.3% 6853 Â 11% sched_debug.cfs_rq[54]:/.tg_load_avg
1304697 Â 22% -32.6% 879516 Â 20% sched_debug.cpu#39.max_idle_balance_cost
1203390 Â 1% +50.9% 1816143 Â 1% proc-vmstat.numa_hint_faults
5079 Â 11% +37.3% 6974 Â 9% sched_debug.cfs_rq[56]:/.tg_load_avg
4906 Â 15% +53.6% 7537 Â 17% sched_debug.cfs_rq[115]:/.tg_load_avg
5404 Â 13% +43.1% 7732 Â 1% sched_debug.cfs_rq[91]:/.tg_load_avg
28540 Â 9% +40.2% 40012 Â 9% sched_debug.cpu#60.sched_count
30308 Â 2% +39.4% 42242 Â 12% sched_debug.cpu#14.sched_count
25740 Â 6% +44.3% 37150 Â 3% sched_debug.cpu#97.sched_count
5090 Â 8% +49.5% 7609 Â 5% sched_debug.cfs_rq[64]:/.tg_load_avg
27137 Â 3% +46.5% 39749 Â 2% sched_debug.cpu#119.sched_count
5325 Â 5% +41.3% 7524 Â 1% sched_debug.cfs_rq[88]:/.tg_load_avg
1.736e+09 Â 0% +48.9% 2.585e+09 Â 1% cpuidle.C6-IVT-4S.time
25353 Â 1% +47.2% 37327 Â 5% sched_debug.cpu#110.nr_switches
32636 Â 8% +40.8% 45953 Â 7% sched_debug.cpu#45.sched_count
28275 Â 9% +36.6% 38610 Â 4% sched_debug.cpu#69.sched_count
26561 Â 3% +43.7% 38169 Â 3% sched_debug.cpu#105.sched_count
1015481 Â 10% -33.9% 670966 Â 2% sched_debug.cpu#23.max_idle_balance_cost
28283 Â 4% +42.3% 40258 Â 8% sched_debug.cpu#59.sched_count
8413 Â 41% +55.6% 13090 Â 7% sched_debug.cpu#86.ttwu_count
27509 Â 5% +44.1% 39633 Â 6% sched_debug.cpu#12.sched_count
5500 Â 13% +45.0% 7977 Â 1% sched_debug.cfs_rq[94]:/.tg_load_avg
1045 Â 20% +27.4% 1331 Â 9% sched_debug.cpu#33.sched_goidle
27462 Â 1% +43.9% 39523 Â 6% sched_debug.cpu#68.sched_count
27467 Â 6% +41.8% 38947 Â 2% sched_debug.cpu#9.sched_count
5162 Â 3% +43.1% 7385 Â 2% sched_debug.cfs_rq[84]:/.tg_load_avg
26027 Â 4% +47.7% 38436 Â 0% sched_debug.cpu#91.nr_switches
577 Â 12% +36.5% 788 Â 1% sched_debug.cpu#9.sched_goidle
5628 Â 11% +41.2% 7944 Â 1% sched_debug.cfs_rq[93]:/.tg_load_avg
32096 Â 3% +43.5% 46053 Â 1% sched_debug.cpu#18.sched_count
5274 Â 21% +44.2% 7606 Â 19% sched_debug.cfs_rq[116]:/.tg_load_avg
969427 Â 12% -21.0% 765453 Â 14% sched_debug.cpu#114.max_idle_balance_cost
26844 Â 1% +47.6% 39614 Â 1% sched_debug.cpu#77.sched_count
235576 Â 28% +65.8% 390538 Â 18% numa-meminfo.node2.FilePages
7482 Â 20% +43.8% 10758 Â 7% numa-meminfo.node2.Mapped
26506 Â 4% +39.7% 37034 Â 5% sched_debug.cpu#69.nr_switches
29776 Â 12% +28.3% 38204 Â 7% sched_debug.cpu#11.sched_count
5367 Â 9% +43.9% 7724 Â 0% sched_debug.cfs_rq[90]:/.tg_load_avg
5153 Â 4% +44.5% 7444 Â 2% sched_debug.cfs_rq[85]:/.tg_load_avg
27197 Â 5% +48.1% 40288 Â 7% sched_debug.cpu#118.sched_count
26039 Â 4% +42.9% 37202 Â 2% sched_debug.cpu#9.nr_switches
473249 Â 8% -31.5% 324204 Â 9% numa-vmstat.node1.nr_active_anon
5143 Â 6% +46.3% 7526 Â 1% sched_debug.cfs_rq[86]:/.tg_load_avg
30636 Â 4% +37.8% 42219 Â 6% sched_debug.cpu#48.sched_count
8036 Â 6% -29.7% 5650 Â 0% proc-vmstat.nr_mapped
2714 Â 10% -33.0% 1818 Â 2% numa-vmstat.node0.nr_alloc_batch
25864 Â 5% +49.3% 38613 Â 7% sched_debug.cpu#96.sched_count
5146 Â 13% +32.6% 6822 Â 5% sched_debug.cfs_rq[53]:/.tg_load_avg
28062 Â 0% +38.4% 38825 Â 6% sched_debug.cpu#48.nr_switches
520 Â 21% +62.0% 843 Â 19% sched_debug.cpu#11.sched_goidle
27022 Â 3% +34.9% 36441 Â 7% sched_debug.cpu#11.nr_switches
2196363 Â 21% -36.3% 1400119 Â 10% sched_debug.cpu#78.avg_idle
8 Â 20% +44.0% 12 Â 11% sched_debug.cpu#56.nr_running
28927 Â 5% +38.5% 40055 Â 11% sched_debug.cpu#14.nr_switches
33663 Â 7% -29.1% 23851 Â 7% meminfo.Mapped
6319 Â 3% +70.8% 10792 Â 41% sched_debug.cpu#106.ttwu_count
5139 Â 7% +49.2% 7668 Â 6% sched_debug.cfs_rq[65]:/.tg_load_avg
30255 Â 4% +34.6% 40733 Â 5% sched_debug.cpu#18.nr_switches
58978 Â 28% +65.6% 97662 Â 19% numa-vmstat.node2.nr_file_pages
236 Â 12% +61.8% 382 Â 18% sched_debug.cpu#73.sched_goidle
1702406 Â 14% -15.1% 1445409 Â 19% sched_debug.cpu#81.avg_idle
231 Â 6% +78.2% 411 Â 34% sched_debug.cpu#72.sched_goidle
25132 Â 5% +47.9% 37175 Â 7% sched_debug.cpu#96.nr_switches
25897 Â 2% +48.0% 38340 Â 5% sched_debug.cpu#99.sched_count
2140357 Â 16% -36.5% 1359195 Â 8% sched_debug.cpu#12.avg_idle
245 Â 3% +39.0% 340 Â 10% sched_debug.cpu#108.sched_goidle
5183 Â 14% +31.3% 6805 Â 4% sched_debug.cfs_rq[52]:/.tg_load_avg
26151 Â 6% +38.4% 36193 Â 1% sched_debug.cpu#73.sched_count
5267 Â 15% +45.8% 7676 Â 10% sched_debug.cfs_rq[108]:/.tg_load_avg
1130665 Â 8% -29.9% 793127 Â 11% sched_debug.cpu#105.max_idle_balance_cost
24839 Â 3% +48.4% 36863 Â 6% sched_debug.cpu#99.nr_switches
26510 Â 5% +41.6% 37527 Â 2% sched_debug.cpu#38.nr_switches
5218 Â 11% +31.2% 6846 Â 1% sched_debug.cfs_rq[51]:/.tg_load_avg
6 Â 18% +55.0% 10 Â 19% sched_debug.cpu#116.nr_running
12 Â 29% -36.1% 7 Â 12% sched_debug.cfs_rq[111]:/.runnable_load_avg
10 Â 0% -26.7% 7 Â 17% sched_debug.cpu#111.cpu_load[4]
7279 Â 41% +65.2% 12024 Â 6% sched_debug.cpu#19.ttwu_local
5025 Â 11% +40.3% 7049 Â 10% sched_debug.cfs_rq[57]:/.tg_load_avg
28309 Â 3% +41.4% 40023 Â 12% sched_debug.cpu#66.sched_count
26799 Â 1% +41.5% 37929 Â 6% sched_debug.cpu#68.nr_switches
25268 Â 3% +42.1% 35917 Â 3% sched_debug.cpu#116.nr_switches
2215641 Â 13% -31.2% 1523682 Â 12% sched_debug.cpu#61.avg_idle
2190137 Â 12% -30.6% 1518876 Â 9% sched_debug.cpu#107.avg_idle
217 Â 6% +41.5% 307 Â 11% sched_debug.cpu#97.sched_goidle
31974 Â 17% +57.5% 50358 Â 29% sched_debug.cpu#50.sched_count
28978 Â 6% +42.1% 41178 Â 7% sched_debug.cpu#10.sched_count
26909 Â 2% +39.8% 37624 Â 3% sched_debug.cpu#116.sched_count
993513 Â 12% -29.7% 698714 Â 13% sched_debug.cpu#63.max_idle_balance_cost
611 Â 26% +47.1% 899 Â 11% sched_debug.cpu#24.sched_goidle
27865 Â 2% +52.2% 42425 Â 17% sched_debug.cpu#26.sched_count
26493 Â 4% +39.1% 36863 Â 1% sched_debug.cpu#35.nr_switches
8950 Â 19% +27.4% 11401 Â 5% sched_debug.cpu#115.ttwu_count
29865 Â 4% +45.1% 43332 Â 12% sched_debug.cpu#53.sched_count
25579 Â 3% +44.6% 36986 Â 2% sched_debug.cpu#113.sched_count
27503 Â 4% +47.6% 40584 Â 1% sched_debug.cpu#91.sched_count
26257 Â 6% +34.7% 35374 Â 1% sched_debug.cpu#118.nr_switches
25532 Â 6% +35.4% 34573 Â 3% sched_debug.cpu#73.nr_switches
26796 Â 3% +38.8% 37185 Â 2% sched_debug.cpu#51.nr_switches
27113 Â 2% +42.1% 38532 Â 7% sched_debug.cpu#36.nr_switches
27281 Â 2% +46.3% 39909 Â 7% sched_debug.cpu#31.nr_switches
29467 Â 3% +43.1% 42179 Â 6% sched_debug.cpu#28.sched_count
1175129 Â 10% -36.4% 747032 Â 10% sched_debug.cpu#53.max_idle_balance_cost
1767310 Â 3% -31.2% 1215759 Â 5% sched_debug.cpu#22.avg_idle
1595 Â 26% -36.6% 1011 Â 4% numa-vmstat.node1.nr_mapped
1033389 Â 5% -28.0% 744367 Â 1% sched_debug.cpu#55.max_idle_balance_cost
4694027 Â 2% +42.8% 6701218 Â 1% proc-vmstat.numa_pte_updates
1972874 Â 11% -24.3% 1493774 Â 5% sched_debug.cpu#8.avg_idle
2146277 Â 17% -34.6% 1404698 Â 24% sched_debug.cpu#4.avg_idle
24990 Â 3% +36.2% 34039 Â 1% sched_debug.cpu#82.nr_switches
27712 Â 0% +37.6% 38140 Â 6% sched_debug.cpu#71.sched_count
1238255 Â 19% -36.4% 787212 Â 9% sched_debug.cpu#52.max_idle_balance_cost
1927086 Â 16% -17.0% 1599072 Â 16% sched_debug.cpu#57.avg_idle
10737 Â 26% +64.3% 17643 Â 26% sched_debug.cpu#20.ttwu_count
28223 Â 9% +31.9% 37225 Â 5% sched_debug.cpu#63.sched_count
1111919 Â 19% -27.2% 809669 Â 14% sched_debug.cpu#40.max_idle_balance_cost
1269294 Â 19% -38.0% 787044 Â 10% sched_debug.cpu#17.max_idle_balance_cost
220 Â 6% +48.9% 327 Â 15% sched_debug.cpu#95.sched_goidle
37.31 Â 7% +38.8% 51.80 Â 2% perf-profile.cpu-cycles.unmap_region.do_munmap.sys_brk.system_call_fastpath
4 Â 10% +42.9% 6 Â 7% sched_debug.cpu#42.cpu_load[4]
4 Â 10% +42.9% 6 Â 7% sched_debug.cpu#60.cpu_load[3]
4 Â 21% +61.5% 7 Â 11% sched_debug.cpu#34.cpu_load[0]
4 Â 21% +61.5% 7 Â 11% sched_debug.cpu#72.cpu_load[0]
4 Â 26% +50.0% 7 Â 11% sched_debug.cpu#72.cpu_load[1]
5 Â 0% +33.3% 6 Â 7% sched_debug.cpu#42.cpu_load[3]
4 Â 26% +57.1% 7 Â 17% sched_debug.cpu#72.cpu_load[2]
5 Â 16% +80.0% 9 Â 39% sched_debug.cpu#108.cpu_load[4]
4 Â 21% +61.5% 7 Â 11% sched_debug.cfs_rq[34]:/.runnable_load_avg
5 Â 8% +43.8% 7 Â 22% sched_debug.cpu#79.cpu_load[1]
4 Â 10% +57.1% 7 Â 17% sched_debug.cpu#94.cpu_load[0]
7 Â 11% -23.8% 5 Â 8% sched_debug.cpu#26.cpu_load[0]
4 Â 26% +57.1% 7 Â 17% sched_debug.cpu#72.cpu_load[3]
4 Â 10% +57.1% 7 Â 17% sched_debug.cfs_rq[94]:/.runnable_load_avg
27918 Â 3% +42.4% 39757 Â 1% sched_debug.cpu#30.nr_switches
4912 Â 15% +53.3% 7533 Â 16% sched_debug.cfs_rq[112]:/.tg_load_avg
37.35 Â 7% +38.7% 51.80 Â 2% perf-profile.cpu-cycles.do_munmap.sys_brk.system_call_fastpath
26816 Â 2% +37.6% 36887 Â 6% sched_debug.cpu#71.nr_switches
28775 Â 10% +34.4% 38664 Â 3% sched_debug.cpu#86.sched_count
29274 Â 12% +26.7% 37100 Â 6% sched_debug.cpu#27.nr_switches
1764342 Â 11% -24.0% 1341672 Â 11% sched_debug.cpu#63.avg_idle
26782 Â 2% +34.7% 36073 Â 5% sched_debug.cpu#26.nr_switches
26817 Â 2% +36.7% 36651 Â 2% sched_debug.cpu#98.sched_count
25729 Â 3% +37.6% 35410 Â 1% sched_debug.cpu#105.nr_switches
1081473 Â 12% -15.4% 914772 Â 17% sched_debug.cpu#116.max_idle_balance_cost
26388 Â 2% +38.7% 36600 Â 2% sched_debug.cpu#119.nr_switches
1357022 Â 22% -36.9% 856356 Â 12% sched_debug.cpu#18.max_idle_balance_cost
31442 Â 6% +39.6% 43888 Â 2% sched_debug.cpu#39.sched_count
28981 Â 5% +39.2% 40331 Â 5% sched_debug.cpu#29.sched_count
5433 Â 3% +38.7% 7537 Â 6% sched_debug.cfs_rq[66]:/.tg_load_avg
4921 Â 15% +51.3% 7448 Â 16% sched_debug.cfs_rq[114]:/.tg_load_avg
25228 Â 2% +38.5% 34952 Â 2% sched_debug.cpu#98.nr_switches
4872 Â 15% +54.1% 7508 Â 17% sched_debug.cfs_rq[113]:/.tg_load_avg
5456 Â 3% +39.7% 7621 Â 5% sched_debug.cfs_rq[67]:/.tg_load_avg
26245 Â 3% +41.0% 37009 Â 4% sched_debug.cpu#12.nr_switches
1806508 Â 12% -15.0% 1536103 Â 17% sched_debug.cpu#59.avg_idle
5189 Â 7% +38.4% 7185 Â 3% sched_debug.cfs_rq[35]:/.tg_load_avg
27298 Â 5% +36.0% 37117 Â 1% sched_debug.cpu#58.nr_switches
2123992 Â 14% -30.4% 1477497 Â 13% sched_debug.cpu#25.avg_idle
5381 Â 4% +41.6% 7618 Â 7% sched_debug.cfs_rq[68]:/.tg_load_avg
5206 Â 4% +42.4% 7415 Â 4% sched_debug.cfs_rq[78]:/.tg_load_avg
1214040 Â 10% -31.4% 832296 Â 3% sched_debug.cpu#8.max_idle_balance_cost
26098 Â 2% +32.9% 34687 Â 5% sched_debug.cpu#100.nr_switches
27056 Â 4% +35.4% 36638 Â 8% sched_debug.cpu#70.sched_count
5445 Â 5% +35.0% 7353 Â 7% sched_debug.cfs_rq[16]:/.tg_load_avg
1348748 Â 33% -39.2% 820178 Â 17% sched_debug.cpu#115.max_idle_balance_cost
237 Â 3% +47.3% 349 Â 17% sched_debug.cpu#65.sched_goidle
29166 Â 4% +33.8% 39015 Â 2% sched_debug.cpu#92.sched_count
5237 Â 4% +43.4% 7511 Â 6% sched_debug.cfs_rq[77]:/.tg_load_avg
29502 Â 2% +35.8% 40055 Â 5% sched_debug.cpu#55.sched_count
5131 Â 5% +43.2% 7347 Â 3% sched_debug.cfs_rq[79]:/.tg_load_avg
6216 Â 17% +19.5% 7426 Â 16% sched_debug.cpu#28.ttwu_local
1856221 Â 7% -29.5% 1309528 Â 7% numa-meminfo.node1.Active(anon)
1028819 Â 15% -31.3% 706940 Â 19% sched_debug.cpu#89.max_idle_balance_cost
27275 Â 3% +39.7% 38115 Â 3% sched_debug.cpu#93.sched_count
29855 Â 4% +35.7% 40525 Â 2% sched_debug.cpu#38.sched_count
29007 Â 5% +42.6% 41357 Â 5% sched_debug.cpu#37.sched_count
5329 Â 4% +35.9% 7241 Â 2% sched_debug.cfs_rq[42]:/.tg_load_avg
27832 Â 9% +32.0% 36745 Â 6% sched_debug.cpu#60.nr_switches
222 Â 20% +95.5% 435 Â 37% sched_debug.cpu#70.sched_goidle
5091 Â 37% +63.8% 8338 Â 15% sched_debug.cpu#71.ttwu_local
1879246 Â 7% -29.1% 1331476 Â 7% numa-meminfo.node1.Active
26986 Â 5% +43.5% 38734 Â 4% sched_debug.cpu#89.sched_count
5150 Â 7% +38.5% 7133 Â 4% sched_debug.cfs_rq[34]:/.tg_load_avg
28382 Â 5% +35.5% 38461 Â 3% sched_debug.cpu#41.nr_switches
5352 Â 1% +36.3% 7293 Â 1% sched_debug.cfs_rq[32]:/.tg_load_avg
1061971 Â 12% -30.6% 737057 Â 2% sched_debug.cpu#118.max_idle_balance_cost
27541 Â 9% +28.8% 35465 Â 5% sched_debug.cpu#63.nr_switches
27538 Â 7% +36.5% 37600 Â 2% sched_debug.cpu#92.nr_switches
26657 Â 2% +32.9% 35418 Â 3% sched_debug.cpu#95.sched_count
29418 Â 1% +34.6% 39588 Â 3% sched_debug.cpu#33.nr_switches
27453 Â 3% +34.1% 36823 Â 3% sched_debug.cpu#55.nr_switches
29552 Â 6% +35.7% 40112 Â 4% sched_debug.cpu#41.sched_count
30468 Â 6% +38.8% 42275 Â 9% sched_debug.cpu#1.nr_switches
26283 Â 5% +41.4% 37154 Â 3% sched_debug.cpu#89.nr_switches
367 Â 10% +25.2% 459 Â 11% sched_debug.cpu#62.sched_goidle
32787 Â 4% +33.3% 43700 Â 3% sched_debug.cpu#17.sched_count
24810 Â 2% +41.4% 35073 Â 2% sched_debug.cpu#113.nr_switches
28737 Â 2% +34.1% 38551 Â 6% sched_debug.cpu#83.sched_count
5188 Â 7% +39.2% 7219 Â 4% sched_debug.cfs_rq[37]:/.tg_load_avg
31975 Â 6% +43.7% 45939 Â 13% sched_debug.cpu#4.sched_count
27794 Â 6% +36.7% 38003 Â 4% sched_debug.cpu#50.nr_switches
5053 Â 15% +47.9% 7472 Â 15% sched_debug.cfs_rq[111]:/.tg_load_avg
2234917 Â 20% -31.2% 1536891 Â 14% sched_debug.cpu#115.avg_idle
26385 Â 4% +33.6% 35241 Â 9% sched_debug.cpu#70.nr_switches
27192 Â 8% +29.5% 35222 Â 6% sched_debug.cpu#72.nr_switches
30013 Â 9% +33.4% 40036 Â 4% sched_debug.cpu#74.sched_count
191 Â 9% +36.7% 262 Â 9% sched_debug.cpu#103.sched_goidle
32365 Â 6% +37.1% 44370 Â 9% sched_debug.cpu#1.sched_count
25870 Â 0% +41.8% 36675 Â 11% sched_debug.cpu#87.sched_count
26515 Â 5% +35.9% 36034 Â 0% sched_debug.cpu#101.sched_count
30573 Â 9% +31.4% 40166 Â 5% sched_debug.cpu#52.sched_count
25974 Â 1% +37.0% 35588 Â 4% sched_debug.cpu#77.nr_switches
26327 Â 2% +37.2% 36116 Â 2% sched_debug.cpu#93.nr_switches
28679 Â 7% +34.7% 38643 Â 4% sched_debug.cpu#44.nr_switches
5233 Â 4% +40.3% 7341 Â 4% sched_debug.cfs_rq[80]:/.tg_load_avg
1029635 Â 17% -21.5% 808634 Â 14% sched_debug.cpu#77.max_idle_balance_cost
5366 Â 10% +26.9% 6810 Â 1% sched_debug.cfs_rq[50]:/.tg_load_avg
449 Â 6% +49.9% 673 Â 17% sched_debug.cpu#58.sched_goidle
5240 Â 4% +42.2% 7451 Â 3% sched_debug.cfs_rq[81]:/.tg_load_avg
5211 Â 5% +39.2% 7252 Â 5% sched_debug.cfs_rq[75]:/.tg_load_avg
235 Â 19% +31.0% 308 Â 13% sched_debug.cpu#83.sched_goidle
1702841 Â 10% -30.6% 1181187 Â 3% sched_debug.cpu#23.avg_idle
5540 Â 6% +32.4% 7337 Â 7% sched_debug.cfs_rq[17]:/.tg_load_avg
27061 Â 6% +41.4% 38276 Â 3% sched_debug.cpu#90.sched_count
32534 Â 11% +35.5% 44092 Â 12% sched_debug.cpu#8.sched_count
2186645 Â 5% +32.0% 2885949 Â 4% time.voluntary_context_switches
29185 Â 3% +38.0% 40280 Â 3% sched_debug.cpu#58.sched_count
27586 Â 9% +29.5% 35727 Â 4% sched_debug.cpu#43.nr_switches
1093890 Â 14% -26.4% 804711 Â 14% sched_debug.cpu#71.max_idle_balance_cost
28103 Â 1% +45.8% 40961 Â 13% sched_debug.cpu#23.sched_count
5396 Â 5% +34.5% 7260 Â 2% sched_debug.cfs_rq[41]:/.tg_load_avg
5298 Â 14% +41.8% 7511 Â 12% sched_debug.cfs_rq[109]:/.tg_load_avg
5195 Â 9% +36.5% 7093 Â 3% sched_debug.cfs_rq[36]:/.tg_load_avg
2152230 Â 9% -29.5% 1517828 Â 3% sched_debug.cpu#82.avg_idle
26931 Â 2% +34.3% 36161 Â 2% sched_debug.cpu#103.sched_count
25123 Â 0% +38.7% 34839 Â 10% sched_debug.cpu#87.nr_switches
30 Â 24% +68.1% 51 Â 21% sched_debug.cfs_rq[98]:/.blocked_load_avg
1273151 Â 13% -30.8% 880690 Â 11% sched_debug.cpu#45.max_idle_balance_cost
25763 Â 1% +31.7% 33928 Â 3% sched_debug.cpu#95.nr_switches
28493 Â 4% +33.6% 38060 Â 7% sched_debug.cpu#13.sched_count
2245631 Â 10% -32.9% 1507645 Â 18% sched_debug.cpu#84.avg_idle
25777 Â 3% +30.1% 33536 Â 1% sched_debug.cpu#78.nr_switches
1822496 Â 19% -27.0% 1330059 Â 4% sched_debug.cpu#28.avg_idle
28020 Â 4% +32.1% 37026 Â 3% sched_debug.cpu#52.nr_switches
31099 Â 1% +29.2% 40180 Â 4% sched_debug.cpu#44.sched_count
5478 Â 6% +37.1% 7512 Â 9% sched_debug.cfs_rq[69]:/.tg_load_avg
5212 Â 15% +44.9% 7552 Â 15% sched_debug.cfs_rq[110]:/.tg_load_avg
240 Â 27% +52.1% 366 Â 9% sched_debug.cpu#114.sched_goidle
1138290 Â 15% -20.6% 903719 Â 12% sched_debug.cpu#20.max_idle_balance_cost
27438 Â 4% +34.0% 36780 Â 2% sched_debug.cpu#22.nr_switches
25415 Â 3% +36.0% 34559 Â 0% sched_debug.cpu#101.nr_switches
26822 Â 4% +31.8% 35339 Â 6% sched_debug.cpu#59.nr_switches
1947631 Â 18% -23.2% 1495390 Â 15% sched_debug.cpu#77.avg_idle
8153 Â 32% +47.4% 12013 Â 18% sched_debug.cpu#72.ttwu_count
26753 Â 2% +32.6% 35462 Â 0% sched_debug.cpu#78.sched_count
27644 Â 4% +34.1% 37071 Â 11% sched_debug.cpu#66.nr_switches
30324 Â 13% +18.9% 36049 Â 4% sched_debug.cpu#100.sched_count
29298 Â 9% +30.9% 38343 Â 4% sched_debug.cpu#74.nr_switches
5 Â 22% +64.7% 9 Â 28% sched_debug.cpu#13.cpu_load[2]
5 Â 28% +106.7% 10 Â 38% sched_debug.cpu#83.cpu_load[0]
5 Â 8% +41.2% 8 Â 10% sched_debug.cpu#104.cpu_load[4]
12 Â 10% -29.7% 8 Â 14% sched_debug.cpu#61.cpu_load[2]
5 Â 22% +70.6% 9 Â 31% sched_debug.cpu#4.cpu_load[4]
11 Â 17% +37.1% 16 Â 5% sched_debug.cpu#36.nr_running
5 Â 8% +41.2% 8 Â 10% sched_debug.cpu#2.cpu_load[0]
5 Â 8% +41.2% 8 Â 10% sched_debug.cpu#2.cpu_load[1]
16 Â 10% -25.0% 12 Â 13% sched_debug.cpu#28.nr_running
5 Â 8% +41.2% 8 Â 10% sched_debug.cpu#6.cpu_load[2]
696 Â 11% +52.1% 1059 Â 23% sched_debug.cpu#25.sched_goidle
5397 Â 1% +32.2% 7137 Â 2% sched_debug.cfs_rq[33]:/.tg_load_avg
27409 Â 5% +33.6% 36612 Â 7% sched_debug.cpu#67.sched_count
1312754 Â 29% -40.7% 777920 Â 17% sched_debug.cpu#13.max_idle_balance_cost
5248 Â 5% +37.0% 7188 Â 6% sched_debug.cfs_rq[76]:/.tg_load_avg
5515 Â 4% +32.2% 7292 Â 1% sched_debug.cfs_rq[31]:/.tg_load_avg
556 Â 9% +62.4% 902 Â 36% sched_debug.cpu#31.sched_goidle
27490 Â 4% +32.6% 36446 Â 8% sched_debug.cpu#13.nr_switches
25880 Â 4% +33.0% 34412 Â 3% sched_debug.cpu#103.nr_switches
29002 Â 2% +33.6% 38756 Â 6% sched_debug.cpu#16.nr_switches
5394 Â 7% +27.3% 6865 Â 1% sched_debug.cfs_rq[49]:/.tg_load_avg
5561 Â 5% +30.5% 7257 Â 7% sched_debug.cfs_rq[18]:/.tg_load_avg
5676 Â 4% +32.8% 7538 Â 3% sched_debug.cfs_rq[12]:/.tg_load_avg
224 Â 10% +38.6% 311 Â 9% sched_debug.cpu#116.sched_goidle
27295 Â 2% +34.4% 36679 Â 5% sched_debug.cpu#112.sched_count
26358 Â 7% +37.4% 36226 Â 4% sched_debug.cpu#90.nr_switches
1974844 Â 22% -32.0% 1342988 Â 3% sched_debug.cpu#83.avg_idle
27495 Â 3% +36.1% 37423 Â 3% sched_debug.cpu#56.sched_count
5233 Â 6% +37.3% 7185 Â 5% sched_debug.cfs_rq[38]:/.tg_load_avg
1246808 Â 15% -24.0% 947735 Â 9% sched_debug.cpu#19.max_idle_balance_cost
28012 Â 8% +25.8% 35251 Â 1% sched_debug.cpu#57.nr_switches
28066 Â 11% +29.4% 36323 Â 2% sched_debug.cpu#86.nr_switches
29329 Â 6% +27.5% 37400 Â 3% sched_debug.cpu#43.sched_count
26265 Â 3% +32.1% 34685 Â 4% sched_debug.cpu#25.nr_switches
27913 Â 4% +28.6% 35888 Â 6% sched_debug.cpu#65.sched_count
26682 Â 4% +33.1% 35500 Â 1% sched_debug.cpu#82.sched_count
951444 Â 14% -32.2% 644834 Â 6% sched_debug.cpu#64.max_idle_balance_cost
27988 Â 2% +35.1% 37814 Â 4% sched_debug.cpu#28.nr_switches
27940 Â 4% +37.9% 38524 Â 7% sched_debug.cpu#115.sched_count
30202 Â 6% +32.5% 40022 Â 4% sched_debug.cpu#39.nr_switches
26356 Â 4% +34.4% 35432 Â 3% sched_debug.cpu#23.nr_switches
28482 Â 3% +31.7% 37521 Â 4% sched_debug.cpu#111.sched_count
2542743 Â 15% -36.2% 1621241 Â 14% sched_debug.cpu#26.avg_idle
28395 Â 1% +30.9% 37160 Â 0% sched_debug.cpu#46.nr_switches
27609 Â 2% +32.2% 36485 Â 3% sched_debug.cpu#34.nr_switches
10833 Â 18% +40.7% 15242 Â 15% sched_debug.cpu#18.ttwu_count
5181 Â 7% +38.8% 7191 Â 4% sched_debug.cfs_rq[39]:/.tg_load_avg
29051 Â 4% +33.2% 38708 Â 4% sched_debug.cpu#24.sched_count
5597 Â 4% +31.7% 7369 Â 3% sched_debug.cfs_rq[19]:/.tg_load_avg
1369381 Â 29% -31.8% 934348 Â 14% sched_debug.cpu#49.max_idle_balance_cost
5351 Â 5% +34.1% 7178 Â 4% sched_debug.cfs_rq[40]:/.tg_load_avg
30437 Â 4% +31.4% 40006 Â 3% sched_debug.cpu#47.sched_count
1801226 Â 13% -27.9% 1298032 Â 5% sched_debug.cpu#3.avg_idle
29625 Â 4% +39.5% 41313 Â 12% sched_debug.cpu#8.nr_switches
1817115 Â 10% -21.0% 1435255 Â 13% sched_debug.cpu#56.avg_idle
28674 Â 4% +25.9% 36107 Â 4% sched_debug.cpu#53.nr_switches
5507 Â 6% +32.9% 7317 Â 5% sched_debug.cfs_rq[15]:/.tg_load_avg
26512 Â 1% +28.4% 34043 Â 3% sched_debug.cpu#112.nr_switches
25060 Â 1% +28.9% 32293 Â 5% sched_debug.cpu#85.nr_switches
215 Â 10% +39.7% 300 Â 8% sched_debug.cpu#90.sched_goidle
5721 Â 9% +36.3% 7796 Â 2% sched_debug.cfs_rq[5]:/.tg_load_avg
5203 Â 6% +41.6% 7367 Â 8% sched_debug.cfs_rq[74]:/.tg_load_avg
5414 Â 6% +27.3% 6891 Â 1% sched_debug.cfs_rq[48]:/.tg_load_avg
27623 Â 3% +33.9% 36978 Â 5% sched_debug.cpu#102.sched_count
29325 Â 7% +28.6% 37717 Â 4% sched_debug.cpu#104.nr_switches
30775 Â 4% +29.9% 39987 Â 3% sched_debug.cpu#17.nr_switches
26588 Â 3% +31.7% 35007 Â 1% sched_debug.cpu#81.sched_count
5499 Â 5% +33.2% 7323 Â 3% sched_debug.cfs_rq[14]:/.tg_load_avg
28617 Â 3% +29.5% 37068 Â 4% sched_debug.cpu#47.nr_switches
31246 Â 3% +29.7% 40511 Â 1% sched_debug.cpu#46.sched_count
2027799 Â 23% -31.5% 1388630 Â 11% sched_debug.cpu#17.avg_idle
5495 Â 4% +34.0% 7362 Â 1% sched_debug.cfs_rq[30]:/.tg_load_avg
94026 Â 1% +29.6% 121872 Â 2% proc-vmstat.numa_pages_migrated
94026 Â 1% +29.6% 121872 Â 2% proc-vmstat.pgmigrate_success
26345 Â 2% +28.3% 33796 Â 5% sched_debug.cpu#85.sched_count
1244633 Â 20% -27.7% 899308 Â 7% sched_debug.cpu#54.max_idle_balance_cost
494 Â 9% +24.7% 616 Â 6% sched_debug.cpu#57.sched_goidle
5645 Â 2% +29.3% 7299 Â 3% sched_debug.cfs_rq[20]:/.tg_load_avg
30549 Â 4% +37.6% 42027 Â 12% sched_debug.cpu#3.nr_switches
26592 Â 5% +31.6% 34988 Â 7% sched_debug.cpu#67.nr_switches
27112 Â 4% +31.6% 35672 Â 3% sched_debug.cpu#115.nr_switches
31504 Â 8% +36.9% 43130 Â 15% sched_debug.cpu#20.sched_count
32860 Â 2% +32.2% 43448 Â 2% sched_debug.cpu#2.nr_switches
5247 Â 8% +39.0% 7291 Â 7% sched_debug.cfs_rq[73]:/.tg_load_avg
26610 Â 6% +23.6% 32887 Â 2% sched_debug.cpu#80.nr_switches
5359 Â 7% +34.9% 7227 Â 8% sched_debug.cfs_rq[71]:/.tg_load_avg
28056 Â 5% +34.9% 37854 Â 5% sched_debug.cpu#79.sched_count
26386 Â 2% +28.8% 33995 Â 5% sched_debug.cpu#94.nr_switches
5315 Â 7% +35.4% 7199 Â 9% sched_debug.cfs_rq[72]:/.tg_load_avg
27169 Â 4% +25.7% 34149 Â 6% sched_debug.cpu#65.nr_switches
28023 Â 5% +30.2% 36494 Â 5% sched_debug.cpu#29.nr_switches
27834 Â 5% +35.9% 37819 Â 3% sched_debug.cpu#37.nr_switches
30353 Â 5% +24.5% 37799 Â 1% sched_debug.cpu#109.sched_count
1226327 Â 20% -22.1% 955059 Â 12% sched_debug.cpu#0.max_idle_balance_cost
7 Â 11% +23.8% 8 Â 5% sched_debug.cpu#14.cpu_load[2]
7 Â 11% +38.1% 9 Â 17% sched_debug.cpu#14.cpu_load[4]
6 Â 14% +100.0% 12 Â 46% sched_debug.cpu#119.cpu_load[1]
7 Â 11% +38.1% 9 Â 17% sched_debug.cpu#14.cpu_load[3]
6 Â 14% +94.7% 12 Â 44% sched_debug.cpu#119.cpu_load[2]
1087052 Â 20% -34.3% 714255 Â 4% sched_debug.cpu#3.max_idle_balance_cost
1778166 Â 3% -24.3% 1345995 Â 7% sched_debug.cpu#53.avg_idle
5494 Â 7% +28.9% 7081 Â 4% sched_debug.cfs_rq[46]:/.tg_load_avg
5468 Â 8% +28.1% 7006 Â 3% sched_debug.cfs_rq[47]:/.tg_load_avg
5679 Â 4% +30.7% 7421 Â 3% sched_debug.cfs_rq[21]:/.tg_load_avg
5860 Â 12% +33.4% 7819 Â 1% sched_debug.cfs_rq[7]:/.tg_load_avg
32477 Â 5% +28.2% 41627 Â 7% sched_debug.cpu#16.sched_count
26721 Â 4% +31.9% 35235 Â 4% sched_debug.cpu#102.nr_switches
31233 Â 6% +27.1% 39710 Â 3% sched_debug.cpu#49.sched_count
33200 Â 9% +32.2% 43880 Â 6% sched_debug.cpu#19.sched_count
26316 Â 3% +34.2% 35317 Â 4% sched_debug.cpu#56.nr_switches
2646863 Â 11% -10.7% 2363160 Â 11% numa-meminfo.node2.MemUsed
32279 Â 4% +37.5% 44369 Â 11% sched_debug.cpu#3.sched_count
1825301 Â 6% -23.5% 1395510 Â 11% sched_debug.cpu#104.avg_idle
3380 Â 5% +23.3% 4169 Â 1% sched_debug.cpu#89.curr->pid
5395 Â 4% +33.3% 7192 Â 3% sched_debug.cfs_rq[43]:/.tg_load_avg
29662 Â 6% +24.5% 36923 Â 3% sched_debug.cpu#49.nr_switches
1817343 Â 14% -26.6% 1334015 Â 18% sched_debug.cpu#89.avg_idle
1482860 Â 14% -19.9% 1187970 Â 4% numa-meminfo.node3.AnonPages
34289 Â 9% +22.9% 42158 Â 10% sched_debug.cpu#5.sched_count
1131215 Â 22% -28.2% 811999 Â 8% sched_debug.cpu#24.max_idle_balance_cost
239 Â 16% +25.3% 300 Â 8% sched_debug.cpu#82.sched_goidle
30120 Â 8% +28.9% 38816 Â 10% sched_debug.cpu#20.nr_switches
35446 Â 4% +34.1% 47517 Â 7% sched_debug.cpu#2.sched_count
1967297 Â 8% -24.6% 1484173 Â 11% sched_debug.cpu#105.avg_idle
5723 Â 4% +29.6% 7419 Â 0% sched_debug.cfs_rq[27]:/.tg_load_avg
389145 Â 2% -24.8% 292509 Â 8% numa-vmstat.node1.nr_anon_pages
29197 Â 10% +18.8% 34699 Â 3% sched_debug.cpu#117.sched_count
1653002 Â 9% -17.3% 1366525 Â 3% sched_debug.cpu#55.avg_idle
5633 Â 3% +29.8% 7310 Â 0% sched_debug.cfs_rq[28]:/.tg_load_avg
27839 Â 6% +32.3% 36845 Â 3% sched_debug.cpu#10.nr_switches
30726 Â 11% +27.2% 39069 Â 4% sched_debug.cpu#40.sched_count
226 Â 23% +43.0% 323 Â 12% sched_debug.cpu#113.sched_goidle
2645436 Â 5% -22.5% 2049208 Â 6% numa-meminfo.node1.MemUsed
1847392 Â 11% -23.7% 1410061 Â 8% sched_debug.cpu#24.avg_idle
28233 Â 7% +32.0% 37281 Â 5% sched_debug.cpu#40.nr_switches
27956 Â 3% +31.9% 36868 Â 7% sched_debug.cpu#108.sched_count
5752 Â 11% +33.7% 7690 Â 3% sched_debug.cfs_rq[6]:/.tg_load_avg
5439 Â 9% +34.6% 7322 Â 7% sched_debug.cfs_rq[70]:/.tg_load_avg
5657 Â 5% +30.3% 7369 Â 5% sched_debug.cfs_rq[22]:/.tg_load_avg
28613 Â 7% +26.2% 36117 Â 3% sched_debug.cpu#94.sched_count
266 Â 9% +20.3% 320 Â 8% sched_debug.cpu#68.sched_goidle
28845 Â 6% +34.4% 38775 Â 12% sched_debug.cpu#7.sched_count
1525284 Â 2% -22.4% 1183745 Â 7% numa-meminfo.node1.AnonPages
5440 Â 6% +31.4% 7150 Â 2% sched_debug.cfs_rq[44]:/.tg_load_avg
28527 Â 4% +31.0% 37379 Â 4% sched_debug.cpu#88.sched_count
960040 Â 9% -16.9% 798124 Â 18% sched_debug.cpu#104.max_idle_balance_cost
26784 Â 3% +33.0% 35620 Â 9% sched_debug.cpu#64.nr_switches
1097006 Â 24% -28.3% 786424 Â 11% sched_debug.cpu#69.max_idle_balance_cost
906449 Â 3% -17.9% 744374 Â 6% sched_debug.cpu#109.max_idle_balance_cost
41058 Â 11% +15.8% 47561 Â 2% sched_debug.cpu#0.nr_switches
5939 Â 7% +28.5% 7629 Â 2% sched_debug.cfs_rq[11]:/.tg_load_avg
29504 Â 4% +31.8% 38876 Â 3% sched_debug.cpu#45.nr_switches
5636 Â 5% +30.6% 7361 Â 3% sched_debug.cfs_rq[26]:/.tg_load_avg
27534 Â 6% +33.7% 36827 Â 12% sched_debug.cpu#7.nr_switches
31358 Â 8% +27.4% 39935 Â 4% sched_debug.cpu#19.nr_switches
29002 Â 4% +22.2% 35442 Â 3% sched_debug.cpu#54.nr_switches
7017225 Â 5% -17.1% 5820718 Â 1% meminfo.Active(anon)
524 Â 8% +24.1% 650 Â 4% sched_debug.cpu#46.sched_goidle
8 Â 10% +29.2% 10 Â 12% sched_debug.cpu#51.nr_running
27219 Â 2% +27.0% 34579 Â 3% sched_debug.cpu#111.nr_switches
2011680 Â 9% -20.7% 1595763 Â 0% sched_debug.cpu#60.avg_idle
340 Â 9% +32.0% 448 Â 11% sched_debug.cpu#107.sched_goidle
27060 Â 3% +28.5% 34766 Â 5% sched_debug.cpu#108.nr_switches
7105638 Â 5% -16.8% 5909786 Â 1% meminfo.Active
1671225 Â 13% -25.5% 1245685 Â 7% sched_debug.cpu#64.avg_idle
4576561 Â 1% +24.9% 5715773 Â 0% time.involuntary_context_switches
922503 Â 17% -21.6% 723007 Â 9% sched_debug.cpu#97.max_idle_balance_cost
5734 Â 4% +27.1% 7289 Â 4% sched_debug.cfs_rq[13]:/.tg_load_avg
29345 Â 5% +21.9% 35761 Â 3% sched_debug.cpu#75.sched_count
29638 Â 4% +32.3% 39221 Â 4% sched_debug.cpu#32.nr_switches
5756 Â 12% +31.4% 7561 Â 3% sched_debug.cfs_rq[4]:/.tg_load_avg
32219 Â 3% +22.2% 39367 Â 5% sched_debug.cpu#15.nr_switches
1984007 Â 20% -31.6% 1357898 Â 10% sched_debug.cpu#51.avg_idle
1697256 Â 13% -18.5% 1384031 Â 9% sched_debug.cpu#97.avg_idle
30898 Â 7% +28.9% 39835 Â 11% sched_debug.cpu#6.sched_count
1484902 Â 5% -16.2% 1244348 Â 3% proc-vmstat.nr_anon_pages
28617 Â 5% +27.4% 36466 Â 3% sched_debug.cpu#107.sched_count
1748635 Â 5% -17.0% 1450658 Â 2% proc-vmstat.nr_active_anon
5788 Â 7% +25.9% 7288 Â 7% sched_debug.cfs_rq[23]:/.tg_load_avg
4379 Â 8% -19.7% 3516 Â 14% sched_debug.cpu#102.curr->pid
1150238 Â 16% -25.0% 862204 Â 7% sched_debug.cpu#47.max_idle_balance_cost
27806 Â 4% +26.9% 35276 Â 1% sched_debug.cpu#24.nr_switches
34173 Â 3% +23.9% 42352 Â 5% sched_debug.cpu#15.sched_count
4205 Â 3% -17.4% 3472 Â 6% sched_debug.cpu#27.curr->pid
28487 Â 3% +21.2% 34517 Â 2% sched_debug.cpu#80.sched_count
25689 Â 4% +27.7% 32794 Â 2% sched_debug.cpu#81.nr_switches
5740 Â 5% +28.5% 7374 Â 4% sched_debug.cfs_rq[25]:/.tg_load_avg
5963200 Â 6% -16.2% 4997164 Â 2% meminfo.AnonPages
5561 Â 7% +27.5% 7093 Â 4% sched_debug.cfs_rq[45]:/.tg_load_avg
31633 Â 5% +23.5% 39068 Â 0% sched_debug.cpu#54.sched_count
294 Â 8% +21.5% 357 Â 3% sched_debug.cpu#63.sched_goidle
29726 Â 7% +25.5% 37296 Â 9% sched_debug.cpu#6.nr_switches
29596 Â 6% +17.3% 34726 Â 4% sched_debug.cpu#109.nr_switches
2054994 Â 17% -29.8% 1442181 Â 8% sched_debug.cpu#52.avg_idle
26878 Â 2% +24.0% 33339 Â 3% sched_debug.cpu#114.nr_switches
5744 Â 5% +27.0% 7297 Â 5% sched_debug.cfs_rq[24]:/.tg_load_avg
49 Â 9% -30.4% 34 Â 34% sched_debug.cfs_rq[34]:/.blocked_load_avg
1187754 Â 14% -24.8% 893289 Â 18% sched_debug.cpu#58.max_idle_balance_cost
28510 Â 6% +34.4% 38332 Â 13% sched_debug.cpu#62.sched_count
27321 Â 5% +26.7% 34618 Â 2% sched_debug.cpu#79.nr_switches
26879 Â 3% +21.8% 32748 Â 3% sched_debug.cpu#117.nr_switches
1164295 Â 13% -21.5% 913984 Â 3% sched_debug.cpu#80.max_idle_balance_cost
40.77 Â 2% -18.1% 33.39 Â 3% perf-profile.cpu-cycles.page_fault
5708 Â 5% +26.0% 7192 Â 2% sched_debug.cfs_rq[29]:/.tg_load_avg
1141082 Â 5% -17.9% 936291 Â 4% sched_debug.cpu#79.max_idle_balance_cost
27855 Â 5% +24.0% 34552 Â 2% sched_debug.cpu#107.nr_switches
1975985 Â 10% -16.7% 1646645 Â 4% sched_debug.cpu#54.avg_idle
32906 Â 5% +20.0% 39487 Â 2% sched_debug.cpu#35.sched_count
1237982 Â 39% -36.6% 785127 Â 2% sched_debug.cpu#35.max_idle_balance_cost
9897 Â 9% -28.2% 7109 Â 14% meminfo.AnonHugePages
1037987 Â 8% -20.1% 829268 Â 6% sched_debug.cpu#34.max_idle_balance_cost
27597 Â 6% +30.6% 36047 Â 10% sched_debug.cpu#62.nr_switches
5616 Â 14% +27.3% 7151 Â 5% sched_debug.cfs_rq[0]:/.tg_load_avg
27484 Â 3% +28.2% 35245 Â 6% sched_debug.cpu#88.nr_switches
233 Â 4% +26.7% 295 Â 10% sched_debug.cpu#98.sched_goidle
31608 Â 4% +18.1% 37335 Â 4% sched_debug.cpu#5.nr_switches
34639 Â 7% +25.8% 43586 Â 7% sched_debug.cpu#33.sched_count
152174 Â 4% -14.6% 129947 Â 3% proc-vmstat.nr_page_table_pages
3903 Â 5% +14.3% 4462 Â 4% sched_debug.cpu#96.curr->pid
5842 Â 14% +25.3% 7322 Â 5% sched_debug.cfs_rq[1]:/.tg_load_avg
2063762 Â 8% -15.0% 1754321 Â 3% sched_debug.cpu#80.avg_idle
11014 Â 6% +57.7% 17371 Â 38% sched_debug.cpu#46.ttwu_count
28538 Â 5% +18.2% 33729 Â 2% sched_debug.cpu#75.nr_switches
1673095 Â 12% -18.6% 1361234 Â 7% numa-meminfo.node3.Active(anon)
4 Â 10% +35.7% 6 Â 7% sched_debug.cpu#60.cpu_load[2]
5 Â 0% +33.3% 6 Â 14% sched_debug.cfs_rq[60]:/.runnable_load_avg
5 Â 0% +26.7% 6 Â 7% sched_debug.cpu#42.cpu_load[1]
4 Â 10% +42.9% 6 Â 14% sched_debug.cpu#60.cpu_load[1]
5 Â 0% +33.3% 6 Â 14% sched_debug.cpu#60.cpu_load[0]
5 Â 0% +26.7% 6 Â 7% sched_debug.cpu#42.cpu_load[2]
5 Â 0% +26.7% 6 Â 7% sched_debug.cfs_rq[42]:/.runnable_load_avg
12 Â 13% -28.9% 9 Â 15% sched_debug.cpu#61.cpu_load[1]
5 Â 0% +26.7% 6 Â 7% sched_debug.cpu#42.cpu_load[0]
3709346 Â 45% +59.4% 5911105 Â 6% sched_debug.cfs_rq[45]:/.MIN_vruntime
3709346 Â 45% +59.4% 5911105 Â 6% sched_debug.cfs_rq[45]:/.max_vruntime
5866 Â 12% +30.1% 7632 Â 3% sched_debug.cfs_rq[9]:/.tg_load_avg
31023 Â 6% +26.6% 39279 Â 4% sched_debug.cpu#104.sched_count
1694696 Â 12% -18.3% 1383800 Â 7% numa-meminfo.node3.Active
1890513 Â 18% -26.5% 1389035 Â 3% sched_debug.cpu#118.avg_idle
1662229 Â 6% -17.2% 1375913 Â 8% numa-meminfo.node0.Active(anon)
1712 Â 11% -20.5% 1362 Â 11% sched_debug.cpu#17.sched_goidle
1683702 Â 6% -17.0% 1397865 Â 8% numa-meminfo.node0.Active
5919 Â 13% +31.5% 7783 Â 2% sched_debug.cfs_rq[8]:/.tg_load_avg
263 Â 6% +35.1% 355 Â 22% sched_debug.cpu#94.sched_goidle
43073 Â 8% +6.7% 45939 Â 7% numa-vmstat.node1.numa_other
28033 Â 2% +24.1% 34785 Â 6% sched_debug.cpu#83.nr_switches
30484 Â 6% +28.7% 39244 Â 6% sched_debug.cpu#4.nr_switches
5864 Â 13% +23.7% 7254 Â 4% sched_debug.cfs_rq[2]:/.tg_load_avg
562 Â 18% +60.8% 904 Â 32% sched_debug.cpu#35.sched_goidle
37321 Â 12% +40.5% 52452 Â 21% cpuidle.C1-IVT-4S.usage
609130 Â 4% -14.4% 521666 Â 3% meminfo.PageTables
420531 Â 4% -17.5% 346979 Â 6% numa-vmstat.node0.nr_active_anon
38.09 Â 4% -15.4% 32.21 Â 3% perf-profile.cpu-cycles.do_page_fault.page_fault
10851 Â 30% +33.1% 14438 Â 3% sched_debug.cpu#10.ttwu_count
3593 Â 1% -12.5% 3145 Â 2% vmstat.procs.r
538 Â 15% +40.8% 758 Â 28% sched_debug.cpu#59.sched_goidle
8 Â 17% -25.0% 6 Â 0% sched_debug.cfs_rq[55]:/.runnable_load_avg
5 Â 17% +93.8% 10 Â 45% sched_debug.cpu#63.cpu_load[3]
4324 Â 0% -10.5% 3871 Â 6% sched_debug.cpu#117.curr->pid
1393435 Â 3% -12.6% 1218408 Â 6% numa-meminfo.node2.AnonPages
28446 Â 1% +22.2% 34748 Â 6% sched_debug.cpu#84.sched_count
2296766 Â 21% -28.3% 1646601 Â 7% sched_debug.cpu#86.avg_idle
5.487e+11 Â 6% -10.0% 4.939e+11 Â 2% meminfo.Committed_AS
2767 Â 4% +12.0% 3100 Â 1% vmstat.procs.b
5951 Â 14% +22.5% 7288 Â 3% sched_debug.cfs_rq[3]:/.tg_load_avg
3738 Â 13% -21.0% 2953 Â 10% cpuidle.C3-IVT-4S.usage
33409 Â 8% +15.7% 38662 Â 7% sched_debug.cpu#21.sched_count
1300 Â 6% +18.2% 1536 Â 5% slabinfo.RAW.active_objs
1300 Â 6% +18.2% 1536 Â 5% slabinfo.RAW.num_objs
243 Â 7% +23.9% 301 Â 16% sched_debug.cpu#112.sched_goidle
367782 Â 7% -14.5% 314602 Â 8% numa-vmstat.node0.nr_anon_pages
996148 Â 3% -13.5% 861912 Â 0% softirqs.RCU
5888 Â 12% +27.8% 7527 Â 3% sched_debug.cfs_rq[10]:/.tg_load_avg
37.34 Â 4% -14.2% 32.02 Â 3% perf-profile.cpu-cycles.__do_page_fault.do_page_fault.page_fault
30518 Â 8% +21.5% 37079 Â 4% sched_debug.cpu#25.sched_count
41 Â 0% +15.4% 47 Â 2% turbostat.PkgTmp
41 Â 1% +15.3% 47 Â 1% turbostat.CoreTmp
5548 Â 4% -11.9% 4888 Â 9% sched_debug.cpu#82.ttwu_local
1251765 Â 22% -30.6% 868819 Â 9% sched_debug.cpu#86.max_idle_balance_cost
819484 Â 3% -16.4% 684803 Â 7% sched_debug.cpu#73.max_idle_balance_cost
3450 Â 3% +14.8% 3961 Â 1% proc-vmstat.nr_inactive_anon
1935049 Â 21% -21.5% 1519199 Â 7% sched_debug.cpu#47.avg_idle
3297 Â 11% +15.9% 3823 Â 4% numa-vmstat.node1.nr_slab_reclaimable
13193 Â 11% +15.9% 15291 Â 4% numa-meminfo.node1.SReclaimable
27183 Â 4% +22.7% 33360 Â 6% sched_debug.cpu#84.nr_switches
13791 Â 3% +15.2% 15884 Â 2% meminfo.Inactive(anon)
2041793 Â 34% -31.0% 1409086 Â 6% sched_debug.cpu#35.avg_idle
2539329 Â 4% -12.1% 2232032 Â 5% numa-meminfo.node0.MemUsed
852 Â 3% +9.5% 934 Â 1% sched_debug.cfs_rq[95]:/.tg_runnable_contrib
39475 Â 4% +9.4% 43179 Â 1% sched_debug.cfs_rq[95]:/.avg->runnable_avg_sum
3925 Â 4% +11.1% 4360 Â 2% sched_debug.cpu#23.curr->pid
4010 Â 3% +13.6% 4556 Â 2% sched_debug.cpu#104.curr->pid
1967501 Â 22% -24.0% 1495374 Â 12% sched_debug.cpu#69.avg_idle
4197 Â 7% +12.0% 4699 Â 2% sched_debug.cpu#3.curr->pid
33469 Â 12% +15.9% 38807 Â 2% sched_debug.cpu#34.sched_count
1570094 Â 0% -14.8% 1337145 Â 8% sched_debug.cpu#73.avg_idle
21481 Â 2% +10.0% 23631 Â 6% proc-vmstat.numa_other
1641150 Â 5% -10.5% 1468708 Â 5% sched_debug.cpu#109.avg_idle
673 Â 27% +44.9% 975 Â 21% sched_debug.cpu#50.sched_goidle
272 Â 0% +11.7% 304 Â 0% turbostat.PKG_%
4888 Â 4% -11.9% 4306 Â 1% sched_debug.cpu#6.curr->pid
1161012 Â 9% -19.0% 940822 Â 12% sched_debug.cpu#48.max_idle_balance_cost
38580 Â 3% +11.1% 42867 Â 0% sched_debug.cfs_rq[99]:/.avg->runnable_avg_sum
834 Â 4% +10.8% 924 Â 0% sched_debug.cfs_rq[99]:/.tg_runnable_contrib
37872 Â 0% +12.2% 42512 Â 1% sched_debug.cfs_rq[97]:/.avg->runnable_avg_sum
35710 Â 6% +20.3% 42961 Â 8% sched_debug.cpu#32.sched_count
35.36 Â 5% -11.2% 31.41 Â 3% perf-profile.cpu-cycles.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
820 Â 1% +12.0% 918 Â 1% sched_debug.cfs_rq[97]:/.tg_runnable_contrib
4444 Â 5% -13.4% 3851 Â 4% sched_debug.cpu#63.curr->pid
10844 Â 0% -9.1% 9852 Â 0% time.percent_of_cpu_this_job_got
79342 Â 1% +10.6% 87737 Â 0% numa-meminfo.node2.Inactive
446 Â 1% +108.9% 932 Â 0% turbostat.CorWatt
461 Â 1% +105.4% 947 Â 0% turbostat.PkgWatt
180657 Â 0% -27.7% 130525 Â 1% vmstat.system.in
31683 Â 0% -25.5% 23619 Â 2% vmstat.system.cs
1126 Â 0% +8.4% 1221 Â 0% turbostat.Avg_MHz
93.66 Â 0% +1.0% 94.64 Â 0% turbostat.%Busy
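Each row above reads: base-kernel mean ± stddev%, %change, patched-kernel mean ± stddev%, metric name. The %change column appears to be the plain relative difference of the two means, so any row can be re-checked by hand; a minimal shell sketch (awk is assumed here only for the floating-point arithmetic):

old=13922434   # softirqs.TIMER, base kernel mean, from the table above
new=20808272   # softirqs.TIMER, patched kernel mean
awk -v o="$old" -v n="$new" 'BEGIN { printf "%+.1f%%\n", (n - o) / o * 100 }'
# prints +49.5%, matching the softirqs.TIMER row above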
brickland1: Brickland Ivy Bridge-EX
Memory: 128G
To reproduce:
apt-get install ruby
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
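For an unattended run, the same steps can be wrapped in a small script. A minimal sketch only, assuming a Debian-like host with sudo, that git is installed alongside ruby, and that job.yaml (the job file attached in this email) has been saved into the lkp-tests directory:

#!/bin/sh
# Sketch of the reproduction steps above; the sudo/-y usage, the git package
# and the job.yaml location are assumptions, not part of the original steps.
set -e
sudo apt-get install -y ruby git
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml    # job.yaml saved here beforehand
bin/run-local job.yaml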
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
---
testcase: aim7
default-monitors:
wait: pre-test
uptime:
iostat:
vmstat:
numa-numastat:
numa-vmstat:
numa-meminfo:
proc-vmstat:
proc-stat:
meminfo:
slabinfo:
interrupts:
lock_stat:
latency_stats:
softirqs:
bdi_dev_mapping:
diskstats:
nfsstat:
cpuidle:
cpufreq-stats:
turbostat:
pmeter:
sched_debug:
interval: 10
default_watchdogs:
watch-oom:
watchdog:
cpufreq_governor: performance
commit: 4134f249a14dcd1dc05dbac8649cadb1d3a6e65a
model: Brickland Ivy Bridge-EX
nr_cpu: 120
memory: 128G
hdd_partitions: "/dev/sda2"
swap_partitions:
aim7:
load: 6000
test: page_test
testbox: brickland1
tbox_group: brickland1
kconfig: x86_64-rhel
enqueue_time: 2015-02-13 06:08:24.413020858 +08:00
head_commit: 4134f249a14dcd1dc05dbac8649cadb1d3a6e65a
base_commit: bfa76d49576599a4b9f9b7a71f23d73d6dcff735
branch: linux-devel/devel-hourly-2015021614
kernel: "/kernel/x86_64-rhel/4134f249a14dcd1dc05dbac8649cadb1d3a6e65a/vmlinuz-3.19.0-wl-ath-06199-g4134f24"
user: lkp
queue: cyclic
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/brickland1/aim7/performance-6000-page_test/debian-x86_64-2015-02-07.cgz/x86_64-rhel/4134f249a14dcd1dc05dbac8649cadb1d3a6e65a/0"
job_file: "/lkp/scheduled/brickland1/cyclic_aim7-performance-6000-page_test-x86_64-rhel-HEAD-4134f249a14dcd1dc05dbac8649cadb1d3a6e65a-0-20150213-101686-1wizjhe.yaml"
dequeue_time: 2015-02-16 16:20:25.949491850 +08:00
job_state: finished
loadavg: 3175.53 3736.22 1916.25 1/936 17661
start_time: '1424074905'
end_time: '1424075325'
version: "/lkp/lkp/.src-20150216-123607"